The Federal Trade Commission (FTC) is exploring the possibility of penalizing OpenAI for potential deceptive or unfair business practices, according to a document obtained by The Washington Post. The document, a “Civil Investigative Demand” (CID) sent to OpenAI sometime this week, signals the start of a wide-ranging federal probe into the inner workings of the AI business, and could force an info-dump from the industry leader that brought you ChatGPT, if the FTC does indeed uncover harmful practices.
The FTC, which exists to prevent unfair and anticompetitive business practices, is exploring OpenAI’s potential violations “relating to risks of harm to consumers, including reputational harm,” and seeks to determine “whether Commission action to obtain monetary relief would be in the public interest.” In other words, the FTC suspects OpenAI may have deceived customers or violated people’s privacy, and might slap it with a fine.
The information demanded in the CID is, well, extensive, including 49 requests for written information, and 17 requests for documents. These range from general information requests like “State Your full legal name,” to requests that get down into the nitty-gritty, such as, “Describe in detail each and every data store where Personal Information is stored or used,” with subsections about every aspect of data storage in each location.
It’s unlikely that all this information will be made public. For starters, the document is written like an opener in an expected, drawn-out volley of legal counterarguments and attempts to stall. It requires OpenAI to hold a phone meeting two weeks after receipt of the CID to discuss things like cost burden, “protected status” of information, and modifications to the FTC’s request.
Still, the FTC is likely to receive at least some long-sought information about OpenAI’s notorious black boxes, like the inner workings of its flagship model, GPT-4. At the moment, for instance, users are complaining that GPT-4 has gotten “dumber” recently, and have speculated that OpenAI may have partitioned GPT-4 into multiple cooperating models in order to cut down on the computing power needed to run it.
The FTC may soon find out if this speculation is correct. The CID requests that OpenAI, “Describe in detail the Company’s process for refining a Large Language Model in order to modify an existing version of the Model.” OpenAI is being asked to include the circumstances under which a model has to be refined, who at the company does the refining, what steps they took, and how they evaluate whether or not it worked. That could certainly end speculation, assuming OpenAI doesn’t find some legal justification for not answering that question.
But if OpenAI is cleared in the investigation, the FTC will keep its secrets, because the proceedings of FTC investigations are private. However, the commission’s complaints often disclose information, such as when the FTC revealed previously unknown privacy violations by the staff of Amazon’s Ring products.
Sam Altman, for his part, has signaled that regulators like those at the FTC have a role to play in AI’s development — specifically in keeping people safe from its potential harms. “I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” Altman said at a May congressional hearing in Washington D.C., adding, “We want to work with the government to prevent that from happening.”
In practice, however, Altman and his company have been a behind-the-scenes presence amid the drafting of regulations like the AI Act, a law that was approved by the European Parliament last month. As Time documented in its reporting, OpenAI’s lobbying appears to have softened the AI Act, with the approved version echoing suggestions laid out in an OpenAI white paper provided to European lawmakers.
The U.S. has not yet passed a similar law regulating the activities of AI companies. This FTC investigation is perhaps the closest the federal government has come to applying regulatory pressure, though a new bipartisan legislative proposal from Senators Josh Hawley and Richard Blumenthal could soon make it easier to sue AI companies by limiting their immunity under Section 230, the law that shields tech companies from liability for third-party content posted to their platforms.
Still, despite their calls for regulation, don’t expect Altman and OpenAI to just roll over for the FTC. OpenAI has shown that it prefers to shape proceedings like these, not be shaped by them.