We are thrilled to introduce The 2026 Lawdragon 100 Leading AI & Legal Tech Advisors.
This was a fascinating year for this guide. When we first published it in 2024, ChatGPT had burst onto the market a short year before. As with any breakneck technology, regulations tend to lag – and private litigators step in to shape the companies, protect consumers, and guide the bucking horse of the revolution into the new era.
(Prose too flowery? We promise, despite the temptation for AI, this was written by a human.)
We received an astounding number of nominations for the guide this year, reflective of the rapidly growing number of litigators who are shaping the laws of this revolutionary technology, and the dealmakers who are applying the technology in fascinating ways. The guide also honors the legal technologists who are building systems – sometimes anchored in AI, sometimes not – that are making the practice and business of law ever quicker, smarter and more finely attuned.
Jessica Lee, Chair of Privacy, Security & Data Innovations at Loeb & Loeb, was an early thought leader on the impact of AI on data security and privacy. She works regularly with some of the world's largest companies to build and update their AI governance structures, identify and address risks, and leverage their digital products in the market.
Jay Edelson, CEO and founder of the eponymous firm, Edelson, secured a landmark $1.5B settlement against Anthropic over the use of copyrighted materials to train generative AI models, and is currently handling a case against OpenAI over a teen user's suicide. In our recent profile of him, he espouses the simple approach of applying existing laws – say, copyright or product liability – to this breakthrough technology: "Whether it’s a self-driving car that malfunctions and a person dies or if it’s AI that contributes to a death, the case is still based on the core legal theories that we all learned in law school."
Ted Boutrous, Lawdragon Legend and partner at Gibson Dunn, was part of the trial team that secured a precedent-setting win for OpenAI in a closely watched defamation suit brought by radio host Mark Walters. The case arose from a "hallucination" by the large language model of a non-existent lawsuit against Walters in the course of a reporter's research. It tested a key early theory of whether generative AI software could be exposed to defamation claims, which typically require proof of negligence or malice. The court decided it could not.
We chose these 100(ish) names through our time-honed methodology of journalistic research, robust submissions and vetting with peers, clients and others. Watch this space – next year, we may grow this guide to a standard 500 to honor all the incredible, frontline work being done in this rapidly expanding area.
