The Center for Artificial Intelligence (AI) has accused OpenAI, the creator of ChatGPT, a popular chatbot application, of violating federal trade law by releasing a biased and deceptive product. The group contends that GPT-4, the artificial intelligence model that powers ChatGPT, violates Section 5 of the Federal Trade Commission Act.
Section 5 prohibits “unfair or deceptive acts or practices in or affecting commerce.” The Center for AI alleges that GPT-4 is both unfair and deceptive because it is biased against certain groups and lacks transparency about how it reaches its outputs.
According to the Center for AI’s report, GPT-4 displays bias against marginalized communities, such as people with disabilities and individuals from non-Western cultures. The report also accuses the creators of ChatGPT of failing to disclose how GPT-4’s training data was collected and processed.
When contacted by reporters, OpenAI denied any wrongdoing, saying it had taken steps to ensure that its product was not biased and that it had been transparent about its methods.
The allegations against ChatGPT come at a time when concerns about bias in artificial intelligence systems are growing. Many experts believe that without proper oversight, these systems could perpetuate existing inequalities in society.
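To make the notion of "bias" in an AI system concrete, one common approach among auditors is to compare a model's favorable-outcome rates across demographic groups; a large gap is a red flag. The sketch below illustrates this with the demographic parity difference. It is a generic illustration only, not the Center for AI's methodology, and all data in it is invented.

```python
# Hypothetical illustration of a simple fairness audit metric:
# the demographic parity difference, i.e. the gap between two
# groups' rates of receiving a favorable model decision.
# All outcome data below is invented for illustration.

def positive_rate(outcomes):
    """Fraction of outcomes that are favorable (1)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in favorable-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Invented example: 1 = favorable model decision, 0 = unfavorable.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6/8 favorable = 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3/8 favorable = 0.375

gap = demographic_parity_difference(group_a, group_b)
print(f"demographic parity difference: {gap:.3f}")  # 0.375
```

A gap of zero means both groups receive favorable decisions at the same rate; real audits use this alongside other metrics, since no single number captures every kind of bias.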
The Center for AI has called on regulators to investigate the allegations against ChatGPT thoroughly. They argue that companies developing artificial intelligence systems must be held accountable for any biases or other ethical violations present in their products.
As the story develops, the key questions are how regulators respond to the accusations and whether, if they find supporting evidence, they take action against those responsible.