OpenAI has Little Legal Recourse against DeepSeek, Tech Law Experts Say
OpenAI and the White House have accused DeepSeek of using ChatGPT to cheaply train its new chatbot.
- Experts in tech law say OpenAI has little recourse under intellectual property and contract law.
- OpenAI's terms of use may apply but are largely unenforceable, they say.
This week, OpenAI and the White House accused DeepSeek of something akin to theft.
In a flurry of press statements, they said the Chinese upstart had bombarded OpenAI's chatbots with queries and hoovered up the resulting data trove to quickly and cheaply train a model that's now nearly as good.
The Trump administration's top AI czar said this training process, called "distilling," amounted to copyright theft. OpenAI, for its part, told Business Insider and other outlets that it's investigating whether DeepSeek "may have inappropriately distilled our models."
OpenAI isn't saying whether the company plans to pursue legal action, instead promising what a spokesperson called "aggressive, proactive countermeasures to protect our technology."
But could it? Could it sue DeepSeek on "you stole our content" grounds, much like the grounds on which OpenAI was itself sued in an ongoing copyright lawsuit filed in 2023 by The New York Times and other news outlets?
BI put this question to experts in technology law, who said challenging DeepSeek in the courts would be an uphill battle for OpenAI now that the content-appropriation shoe is on the other foot.
OpenAI would have a hard time proving an intellectual property or copyright claim, these lawyers said.
"The concern is whether ChatGPT outputs" - suggesting the responses it generates in reaction to questions - "are copyrightable at all," Mason Kortz of Harvard Law School said.
That's because it's unclear whether the answers ChatGPT spits out qualify as "creativity," he said.
"There's a teaching that states innovative expression is copyrightable, but realities and ideas are not," Kortz, who teaches at Harvard's Cyberlaw Clinic, stated.
"There's a huge concern in copyright law right now about whether the outputs of a generative AI can ever constitute imaginative expression or if they are necessarily unprotected facts," he added.
Could OpenAI roll those dice anyway and claim that its outputs are protected?
That's unlikely, the attorneys said.
OpenAI is already on the record in The New York Times' copyright case arguing that training AI is a permissible "fair use" exception to copyright protection.
If they do a 180 and tell DeepSeek that training is not a fair use, "that may come back to kind of bite them," Kortz said. "DeepSeek could say, 'Hey, weren't you just saying that training is fair use?'"
There might be a difference between the Times and DeepSeek cases, Kortz added.
"Maybe it's more transformative to turn news short articles into a design" - as the Times accuses OpenAI of doing - "than it is to turn outputs of a design into another design," as DeepSeek is said to have actually done, Kortz said.
"But this still puts OpenAI in a pretty challenging scenario with regard to the line it's been toeing regarding reasonable use," he added.
A breach-of-contract lawsuit is more likely
A breach-of-contract lawsuit is much likelier than an IP-based claim, though it comes with its own set of problems, said Anupam Chander, who teaches technology law at Georgetown University.
The terms of service for Big Tech chatbots like those made by OpenAI and Anthropic forbid using their content as training fodder for a competing AI model.
"So perhaps that's the claim you might possibly bring - a contract-based claim, not an IP-based claim," Chander said.
"Not, 'You copied something from me,' but that you gained from my design to do something that you were not permitted to do under our agreement."
There might be a hitch, Chander and Kortz said. OpenAI's terms of service require that most claims be resolved through arbitration, not lawsuits. There's an exception for claims "to stop unauthorized use or abuse of the Services or copyright infringement or misappropriation."
There's a bigger hitch, though, experts said.
"You must understand that the dazzling scholar Mark Lemley and a coauthor argue that AI terms of usage are likely unenforceable," Chander said. He was describing a January 10 paper, "The Mirage of Artificial Intelligence Terms of Use Restrictions," by Stanford Law's Mark A. Lemley and Peter Henderson of Princeton University's Center for Information Technology Policy.
To date, "no design developer has in fact tried to impose these terms with financial penalties or injunctive relief," the paper says.
"This is most likely for good reason: we believe that the legal enforceability of these licenses is questionable," it adds. That's in part because design outputs "are largely not copyrightable" and because laws like the Digital Millennium Copyright Act and the Computer Fraud and Abuse Act "deal restricted option," it states.
"I think they are likely unenforceable," Lemley told BI of OpenAI's regards to service, "due to the fact that DeepSeek didn't take anything copyrighted by OpenAI and since courts typically won't impose agreements not to contend in the absence of an IP right that would avoid that competitors."
Lawsuits between parties in different countries, each with its own legal and enforcement systems, are always tricky, Kortz said.
Even if OpenAI cleared all the above hurdles and won a judgment from a United States court or arbitrator, "in order to get DeepSeek to turn over money or stop doing what it's doing, the enforcement would come down to the Chinese legal system," he said.
Here, OpenAI would be at the mercy of another extremely complex area of law - the enforcement of foreign judgments and the balancing of individual and corporate rights and national sovereignty - that stretches back to before the founding of the US.
"So this is, a long, made complex, filled process," Kortz included.
Could OpenAI have protected itself better against a distillation attack?
"They might have used technical steps to obstruct repetitive access to their website," Lemley said. "But doing so would also interfere with normal consumers."
He added: "I do not think they could, or should, have a valid legal claim against the browsing of uncopyrightable info from a public website."
Representatives for DeepSeek did not immediately respond to a request for comment.
"We understand that groups in the PRC are actively working to utilize methods, including what's referred to as distillation, to try to replicate advanced U.S. AI models," Rhianna Donaldson, wiki.dulovic.tech an OpenAI spokesperson, told BI in an emailed declaration.