The Federal Trade Commission (FTC) of the United States is investigating OpenAI, the company behind the popular chatbot ChatGPT. The agency wants to know if the company's artificial intelligence products generate false and harmful content, violating consumer protection laws and endangering user rights.
This week, the FTC sent OpenAI a 20-page investigative demand, asking the company dozens of questions about how it trains its large language models and how ChatGPT can make false, misleading, or derogatory statements about real people. The Washington Post first reported the document on Thursday, and a source familiar with the matter confirmed it to CNN. Neither OpenAI nor the FTC has commented so far.
The FTC's demand cites several incidents involving ChatGPT. One of them, disclosed by OpenAI in March 2023, was a bug that exposed some users' chat titles and payment information to other users. The agency also asks OpenAI to detail all complaints about its products making "false, misleading, derogatory, or harmful" statements.
One of the most striking examples came when ChatGPT claimed that a lawyer had sexually harassed a student on a school trip, citing an article the chatbot said had appeared in The Washington Post. But no such article existed, the trip never happened, and the lawyer denied ever harassing a student.
The US agency has also asked OpenAI, led by CEO Sam Altman, to explain the data the company uses to train its products and how it guards against what the tech industry calls "hallucinations", which occur when a chatbot's responses are well-formed but completely incorrect.
OpenAI has not officially commented on the FTC investigation, but Sam Altman expressed his disappointment and confidence on Twitter: “It is very disappointing to see the FTC’s request start with a leak and does not help build trust. That said, it’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law. Of course we will work with the FTC.”
In defense of the company’s data, privacy, and safety practices, Altman tweeted: “We built GPT-4 on top of years of safety research and spent 6+ months after we finished initial training making it safer and more aligned before releasing it. We protect user privacy and design our systems to learn about the world, not private individuals.”
And he also pointed out: “We’re transparent about the limitations of our technology, especially when we fall short. And our capped-profits structure means we aren’t incentivised to make unlimited returns.”
His tweets sparked a debate on AI regulation and ethics. Some praised OpenAI's innovation and transparency, while others criticized its lack of accountability and oversight. Some also questioned the FTC’s authority and approach to regulating AI under existing consumer protection laws.
The investigation is not surprising, given that FTC Chairwoman Lina Khan has previously expressed concerns about AI and its potential impacts on society. In June, she said that authorities "need to be on guard from the outset" with transformative tools like AI. She also said that the FTC was looking into how AI could affect competition, consumer protection, and privacy.
Khan is not alone in her worries. Elon Musk joined a group of entrepreneurs and scientists a few months ago, warning about the possible consequences of AI and calling for a six-month suspension of “systems more powerful than GPT-4”.
During his recent trip to Europe, after earlier comments suggesting OpenAI might cease operating there, Altman tweeted: "very productive week of conversations in Europe about how to best regulate AI," adding that the OpenAI team was excited to continue operating in the region and had no plans to leave. In May, he testified before the U.S. Congress, saying that lawmakers should create strong safety rules for advanced AI systems. "If AI goes wrong, it can go quite wrong," Altman said.
The FTC investigation is not OpenAI's first regulatory setback. In March, the Italian Data Protection Authority ordered an immediate block on ChatGPT for violating data protection laws. Spain and France followed with their own privacy investigations and complaints.
European AI rules will go into effect in 2024. According to Brussels, the main goal is to deal with the risks caused by the different ways AI can be used by creating "a set of complementary, proportionate, and flexible rules."
The FTC's investigation into OpenAI comes after the agency previously warned about exaggerated AI claims and discriminatory use of the technology. In blog posts and public comments, the FTC has stated that businesses that use AI will be held accountable for any unfair or deceptive practices. As the primary consumer protection watchdog, the FTC can prosecute privacy abuses, deceptive marketing, and other harmful conduct.
Lina Khan said Congress gave the agency enough power to stop AI abuse. She also stated in a New York Times article that while these tools are new, they must still adhere to existing rules, and the FTC will rely heavily on those rules in this new market.
For its part, OpenAI has acknowledged that its models can produce nonsensical or untruthful content and can discriminate against vulnerable groups, and says it takes proactive measures to mitigate these risks.
OpenAI is now under scrutiny from the FTC over its AI products, and the tension between rapid innovation and consumer protection is heating up. How will OpenAI substantiate its claims and prevent discriminatory outputs? How will the FTC ensure AI is used fairly and safely? The answers will shape the future of AI governance and set the standards for a more honest and accountable industry.