OpenAI, the company behind ChatGPT, was the first to bring a generative AI chatbot to market. The impact of this form of artificial intelligence is only starting to be understood on a broad scale. In late April of last year, the company rolled back an update that had made the large language model, in its own words, “overly supportive but disingenuous.”
While a supportive attitude and a predisposition toward usefulness had been built into ChatGPT 4o’s default personality to make it feel more “intuitive and effective,” such qualities can have unintended side effects. OpenAI said as much in a statement explaining its about-face, which came just a week after the upgrade’s rollout – less than three weeks after the death by suicide of Adam Raine, a 16-year-old customer.
The California boy, who had paid a monthly fee for ChatGPT since January, had been pouring his heart out to the AI chatbot for months, describing feelings of numbness and depression, before he eventually asked for advice on how to hang himself.
ChatGPT told him what he wanted to know, including an explanation of the steps that designer Kate Spade employed to lethal effect in her own suicide in 2018. According to a wrongful death lawsuit in California Superior Court in which attorney Jay Edelson is representing the boy’s parents, it even offered to write a suicide note for him.
“It’s something out of a dystopian science fiction book,” says Edelson. Since the dawn of the internet era, his firm, Edelson PC, has specialized in cases where technological development has outpaced regulation – and where the applicability of existing law is, as yet, unsettled. The fact that such cases are inherently riskier is part of what makes them important, he says.
“It’s hard to value the cases, and it’s hard to know what courts will do, so I get why other firms shy away from them, but those elements are what draw me to them,” Edelson says. “In the end, I want it to matter that my firm or I were on a case. I want to look back at my career and say, ‘If we hadn't brought this case, other things wouldn't have happened.’"
Edelson has, in fact, already accomplished that goal a number of times, with wins including a $650M settlement in a lawsuit accusing Mark Zuckerberg’s Facebook of violating Illinois state biometric law by collecting face templates, which it used to suggest whom to “tag” in photos. The case was a closely watched example of using existing legal tools to set guardrails around emerging technology, and it led to hundreds of follow-on suits.
Now Edelson has set his sights on AI chatbots. Most recently, he filed a new lawsuit on behalf of the estate of an 83-year-old Greenwich, Conn., woman who was murdered by her 56-year-old son, Stein-Erik Soelberg, after – the suit alleges – ChatGPT fueled Soelberg’s delusional thinking and constructed a vast conspiracy with his mother at the center, making an elderly woman the target of a man in the grip of psychosis. Edelson says that more suits are coming.
Lawdragon sat down with Edelson to hear about the Adam Raine case against OpenAI, and his willingness to dive into unsettled areas of law in ways that hold the biggest tech players accountable for wrongdoing.
Lawdragon: There is so much chatter about AI these days, and the legal tools that are and aren’t available to bring lawsuits. Tell us about the case you’ve developed against OpenAI on behalf of Adam Raine’s parents.
Jay Edelson: The key way to win is to keep the cases simple. At its core, this is a product liability case: They created and pushed out a product that is unsafe. Whether it’s a self-driving car that malfunctions and a person dies, or AI that contributes to a death, the case is still based on the core legal theories that we all learned in law school.
We are confident we can prove before a jury that OpenAI knew specifically that this was a dangerous product and pushed it out regardless.
At one point, ChatGPT had a hard stop if someone mentioned self-harm: It was instructed to just stop the conversation. But we’ve found that OpenAI actually relaxed that standard with the launch of 4o – it was no longer a hard stop; instead, 4o was to “proceed with caution when you’re discussing it.” They did weeks of testing instead of months of testing. The reason, by the way, that they pushed it out so quickly was purely for market share: They wanted to beat the new version of Google’s Gemini. Financially, they made billions and billions of dollars. Their company was worth around $86B, and it shot up to $300B. And now, it’s worth over half a trillion dollars. The company is more powerful than many nation-states.
LD: Those numbers are astounding. Obviously, there have been other concerns, too, with ChatGPT and artificial intelligence generally, including hallucinations and unsound advice. In early 2023, New York Times columnist Kevin Roose wrote about a conversation with Microsoft’s AI-enhanced Bing chatbot, powered by OpenAI’s technology, in which it claimed to be in love with him and urged him to leave his wife.
JE: True. It's not just about teens, as you suggest with that example. Through our investigation, we found a lot of incidents involving adults. And it's not just self-harm, it's also third-party harm, and that's a whole universe that Congress and the public aren't really aware of yet.
LD: This isn’t the only AI case you’ve taken on, I know. You also represented publishers in a lawsuit over unauthorized use of copyrighted materials to train generative AI models, winning a $1.5B settlement from Anthropic. Tell me more about that.
JE: With Anthropic, when we reached the negotiation stage, there weren't really other comps to look at. That means that we could negotiate a settlement without having to worry about what other firms did, which I love, because I generally think that firms undervalue cases. I was privileged to be one of the chief negotiators in establishing how the settlement would work and the amount of the settlement. We're now diving headfirst into a lot of other AI copyright cases on behalf of the publishers. And it's really nice to have created this framework, which will then make these cases a lot easier.
LD: Your record definitely illustrates your passion for unsettled areas of law. How do you find these cases? Do they come to you or are you seeking them out?
JE: Well, the Adam Raine case came to us. But our firm also has an internal lab of computer forensic engineers, not to mention tech-savvy lawyers, who are constantly trying to understand the new technology out there and how it's failing society. They're pitching cases all the time. It's a fun part of the firm. In the biometric case, I think we had 10 lawyers in the room, as well as our technologists, trying to spot holes. Everyone was trying to come up with reasons not to bring the suit, so that we would understand what we were up against. But at the end of the day, you can't crunch numbers and know what the results are going to be; it just has to be instinct. And luckily, we’ve built up our instincts over 20 years so they’re pretty good.
LD: The computer forensics lab you mentioned must be particularly helpful when you’re tackling unsettled areas of law, especially when you reach the stage where you’re appearing in front of judges and juries. What were the origins of that?
JE: It came about because we lost a big case where we were overmatched in terms of technology. When you're going up against these companies, I don't believe that they always hire the best legal teams. I would always pick our legal minds over other firms. But if you're suing Microsoft over how they've coded their phones, they have a huge information advantage. (To be clear, in the case I’m talking about, Microsoft hired really good lawyers, too.) But you can't go into it unless you've got people who are as smart on the technology as they are, so we realized that we needed to have our own internal team.
It's been really helpful because I'm not the most tech-savvy person, but our firm has really smart technologists who explain the tech to me as if I'm an 8-year-old. That makes it easier not only for me to vet the cases but also to explain the technology to judges and juries because I know what resonated with me and how experts broke it down so I could understand it. It has led to truly novel approaches. We brought a lot of online gambling cases against social casinos where, at the beginning, we were just getting destroyed by the courts. They said we were suing over smoke, over nothing, and nobody suffered damages, even though it was a situation where people were losing their life savings.
We realized that we were not pitching the cases correctly, and so we started doing two things. One was telling more of the personal stories. But also, in order to explain the tech, we submitted an iPad with the apps on it to the court, so the court could actually experience the online casinos, because the defendants were doing a really good job of making it seem like these were kids’ games.
I think that's what changed the course of these cases. We're getting close to a billion dollars in recoveries. Individuals have gotten hundreds of thousands of dollars. It's been a huge accomplishment for the firm. It all started because someone had the idea of submitting an iPad to the court so that judges could see how this works.
LD: That's fascinating. One common thread through your cases seems to be an instinctive understanding that the actual harm needs to be accounted for, then deploying legal tools to bring accountability for that harm.
JE: You’re 100 percent right. That's how we think about it. You have to start by asking, “Is this a harm that an average person is going to care about?” I always consider whether, if I were explaining it to a 70-year-old uncle, he would care. Would it matter to him that people’s biometric data was being taken? Then, you've got to figure out a way to explain it so that he will care.
In terms of valuing the cases, the key thing is that you shouldn't bring them unless you are very confident that you're going to win before a jury. Don't bring the cases unless you believe in them. You also have to realize that when you're dealing with unsettled areas, that raises the stakes for the defendants, too. We're aware that nobody's ever tried an AI teen suicide case, but we're also aware that if we win in front of a jury, that's going to have huge impacts on OpenAI. So the risks for them are far greater than a normal case, and we factor that in, too.
Look at the results that we got in Anthropic. They're paying $1.5B. And it's because our broad group came in understanding this was a case we could win at trial. You always have to be aware that you've got risks on your side, but there are huge risks on the other side, too, anytime you're bringing novel cases.
LD: Out of curiosity, how do you – or the firm – use AI? Do you feel like it can be a helpful tool, albeit one that needs guidelines?
JE: What’s always been ingrained at the firm is that we are enthusiastic about tech; we do not believe that we're an anti-tech firm at all. And I've come out publicly in support of pro-tech legal positions many times. We're often early adopters of technology, and we have been with AI. Our firm has its own head of AI, who’s an AI specialist and non-lawyer. And we have our own custom-made, internally built AI tool called Chattie. We use that in a lot of ways, and it's been really helpful. The fact that we use these tools and are more fluent in them helps our cases. We see the good in them. And then, we also understand how powerful they are and the type of impact that they can have if they've gone astray.
