Yesterday, I attended the Harvard Law AI Summit organized by the Library Innovation Lab at Harvard Law School. It was a fairly intimate, invitation-only gathering of roughly 65 people, held under the Chatham House Rule, meaning that we were free to use the information we received, but agreed not to disclose the identity or affiliation of the speakers or other participants.

The idea, of course, is to allow participants to speak frankly about an issue that is undeniably challenging and complex — the rise of generative AI in legal. And speak frankly they did. Even though the themes generally tracked those I’ve already seen raised in other forums and articles, the insights that came out of the summit were enlightening and thought-provoking, especially given the bona fides of those who were there.

As I reflect on the conference this morning, I thought I’d share some of the takeaways floating through my head. These are my impressions and not necessarily reflective of anything any of the speakers explicitly said.

1. Armed with AI, pro se litigants could overwhelm the courts, so the courts need to be prepared to respond in kind.

Generative AI could lower the hurdles and the costs for pro se litigants to bring their grievances to court. While that could potentially be a good thing for access to justice, it could also have the unintended consequence of overwhelming the courts — courts that are already overwhelmed by pro se litigants — and reducing their ability to process this flood of AI-fueled cases. What that means is that courts need to be prepared to respond in kind, likewise incorporating generative AI to enhance their efficiency and their ability to process cases. Exactly what those tools will look like remains to be seen, but the bottom line is that courts should be starting to think about this today so that they can be prepared for what is to come tomorrow.

2. If AI is to enhance access to justice, it will not be only by increasing lawyer productivity, but also by directly empowering consumers.

The legal profession faces no greater crisis than that of addressing the justice gap. Yet, while study after study over the past decade has documented the severity of this gap, we have seen no progress in narrowing it. If anything, the gap seems only to be widening. Generative AI offers the promise of finally helping us to narrow this gap by enhancing the ability to create legal documents and deliver legal information.

However, any number of times recently, when I have heard lawyers or even legal tech vendors talk about how AI can help close the justice gap, they focus on the potential for AI to increase lawyer productivity. If lawyers are more productive, goes their reasoning, they will be able to serve more clients and therefore narrow the justice gap.

The problem with this reasoning is that lawyers, alone, will never be enough to close the justice gap, because it is simply too vast. In addition, many of the legal problems individuals face are not of a type a lawyer would handle in the first place. The fact is that, if generative AI is going to help close the gap, it will be by also directly empowering consumers to help themselves with their legal problems.

Given this, at yesterday’s AI Summit, I was heartened to hear many participants express views that recognized the need to harness AI in ways that can directly empower pro se individuals who face legal problems. Some of those at yesterday’s summit came from the judiciary, and they were among those who seemed to understand and embrace this. AI’s potential is huge, but not if we look at it through the limited lens of helping lawyers be more efficient.

3. Even the AI experts don’t understand AI.

One of the phrases most commonly uttered yesterday was “black box.” Given that attendees and speakers included computer scientists, AI researchers, and product developers, this was notable. Even those who are immersed in generative AI will be the first to admit that they do not fully understand how it works or of what it is capable. That said, there seemed to be general agreement that the power of this technology is not simply its ability to “generate,” but also to interpret and synthesize. At one point yesterday, I wrote down this note to myself: “A repeating theme today has been, ‘We don’t know how it works, we don’t have good answers to all the questions about it, but we know it is important and will change everything.'”

4. Experts are already striving to make the black box of AI more transparent. 

Given the black box nature of AI, some are working to make it more transparent. One way to do this is to become attuned to the signals we can draw out of generative AI tools and then incorporate them into some sort of a dashboard that lets us see those signals in a more transparent way. For example, generative AI sometimes seems capable of inferring the gender of a user and tailoring its response accordingly. Could we create interfaces that let us see when that is happening? Or when AI delivers a response that uses certain data but omits other potentially relevant data, could we create ways to inform the user about what was left out?
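To make that last idea concrete, here is a minimal sketch of one such signal, under my own assumptions rather than anything specific described at the summit: given the documents retrieved for a legal question, flag which sources the model’s answer actually drew on and which it silently omitted. The `Source` type and `omitted_sources_report` helper are hypothetical illustrations, not any vendor’s API.

```python
# A minimal, hypothetical sketch of a "transparency dashboard" signal:
# given the documents retrieved for a query and the model's answer,
# report which sources the answer drew on and which it omitted.
# All names here are illustrative; no specific product's API is assumed.

from dataclasses import dataclass


@dataclass
class Source:
    doc_id: str
    title: str
    text: str


def omitted_sources_report(answer: str, retrieved: list[Source]) -> dict:
    """Flag retrieved sources that never appear in the generated answer."""
    cited = [s for s in retrieved if s.doc_id in answer or s.title in answer]
    omitted = [s for s in retrieved if s not in cited]
    return {
        "cited": [s.title for s in cited],
        "omitted": [s.title for s in omitted],
        "coverage": len(cited) / len(retrieved) if retrieved else 1.0,
    }


if __name__ == "__main__":
    docs = [
        Source("A1", "Smith v. Jones", "..."),
        Source("B2", "Doe v. Roe", "..."),
    ]
    answer = "Under Smith v. Jones, the motion likely fails."
    print(omitted_sources_report(answer, docs))
    # {'cited': ['Smith v. Jones'], 'omitted': ['Doe v. Roe'], 'coverage': 0.5}
```

A real interface would of course need far subtler signals than string matching, but even a crude coverage number like this could let a user see at a glance that something relevant was left out.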

5. Even as law firms adopt AI, they are finding implementation to be a challenge.

Even at law firms that have been early adopters of generative AI tools, getting buy-in from their attorneys and legal professionals is a challenge. Even at leading-edge firms, many lawyers remain skeptical and even fearful of this technology. A related issue is training for lawyers and legal professionals. Some firms are already developing in-house training programs on understanding and using AI, and some vendors are developing training of their own.

6. Founded or unfounded, fears continue of AI-driven job losses.

Will AI replace jobs now performed by lawyers, paralegals and law librarians? I’d say that among yesterday’s attendees, the jury is still very much out on that question. One perspective is that we’ve heard that old saw before, about other advances in technology that actually ended up creating new opportunities. The other perspective is that we still do not understand the limits of this technology and what it could someday do.

7. AI could be a catalyst for inequality in law.

Current generative AI tools are expensive to use. That raises the concern that only those with deep pockets — big firms and big corporations — will have access to them, while pro se individuals, smaller firms, and legal aid organizations will be shut out. Given the potential power of generative AI, this could further exacerbate inequality in the delivery of justice. One possible answer: public AI models not owned or controlled by any single corporation.

8. Methods are needed to benchmark the quality of AI products.

As more legal vendors develop products based on generative AI, how do we assess and monitor the quality of these tools? We need to come up with ways of benchmarking generative AI products.
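No particular benchmarking method was settled on yesterday, but one plausible starting point is a fixed test set of legal questions with reference answers, scored against each tool. The sketch below is my own illustration, not a proposal from the summit; `ask_tool` is a stand-in for whatever vendor API is being evaluated, and the token-overlap score is deliberately crude.

```python
# A minimal sketch of one way to benchmark a generative AI legal tool:
# run a fixed set of questions with known reference answers through the
# tool and score the overlap. The `ask_tool` callable is a stand-in for
# whatever vendor API is being evaluated; the scoring is purely
# illustrative and a real benchmark would need expert-graded rubrics.

from collections.abc import Callable


def token_f1(predicted: str, reference: str) -> float:
    """Crude F1 over lowercase tokens shared by prediction and reference."""
    pred, ref = predicted.lower().split(), reference.lower().split()
    common = len(set(pred) & set(ref))
    if not pred or not ref or not common:
        return 0.0
    precision, recall = common / len(set(pred)), common / len(set(ref))
    return 2 * precision * recall / (precision + recall)


def run_benchmark(ask_tool: Callable[[str], str],
                  test_set: list[tuple[str, str]]) -> float:
    """Average score of a tool over (question, reference_answer) pairs."""
    scores = [token_f1(ask_tool(q), ref) for q, ref in test_set]
    return sum(scores) / len(scores)
```

The hard part, of course, is not the harness but the test set: building question-and-answer pairs that legal experts agree represent correct results.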

9. Law firms are questioning how best to harness AI to leverage their own legal knowledge.

While nowhere near the scale of the data collections used to train large language models such as ChatGPT, law firms — and particularly larger firms — have their own “large language” collections of cumulative work product and know-how that reflect what makes the firm unique. In the quest to make legal AI more precise and less prone to hallucination, firms are wrestling with how to leverage this internal knowledge. Some are already developing their own proprietary AI tools, while others are turning to legal tech vendors to help them achieve this goal.
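Nothing at the summit prescribed a particular architecture for this, but a common pattern for grounding a model in a private document collection is retrieval-augmented generation (RAG): embed the firm’s work product, retrieve the passages most relevant to a query, and have the model answer from only those passages. A rough sketch, with `embed` and `generate` as hypothetical stand-ins for whatever embedding model and LLM a firm actually uses:

```python
# A rough sketch of retrieval-augmented generation (RAG) over a firm's
# internal work product -- a common pattern for grounding generative AI
# in a private document collection. `embed` and `generate` are stand-ins
# for whatever embedding model and LLM a firm actually uses.

import math


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors (assumed nonzero)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def retrieve(query_vec, index, k=3):
    """Return the k passages whose embeddings are closest to the query."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item["vec"]),
                    reverse=True)
    return [item["text"] for item in ranked[:k]]


def answer_with_firm_knowledge(question, index, embed, generate):
    """Ground the model's answer in the firm's own documents."""
    passages = retrieve(embed(question), index)
    prompt = (
        "Answer using only the firm documents below.\n\n"
        + "\n---\n".join(passages)
        + f"\n\nQuestion: {question}"
    )
    return generate(prompt)
```

Whether a firm builds something like this itself or buys it from a vendor, the appeal is the same: the model’s answers are tethered to the firm’s own documents rather than to whatever is latent in a general-purpose model.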

10. The need for legal training data could exacerbate questions of who owns the law.

As we seek to better train AI on the law, we must inevitably confront the question of who owns the law and who has access to that data. Already, some organizations are working to create open-access collections of legal data to be used in support of creating openly accessible generative AI tools in law.

11. AI will force courts and lawyers to grapple with new issues over authentication of evidence.

A recurring theme yesterday was the danger AI poses of creating evidence such as images and videos that are fake beyond detection or authentication. What impact could this have on how courts consider and accept evidence?

12. AI’s decisions need to be not only explainable, but justifiable.

Gillian Hadfield, the legal scholar who is director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto, has put forth the notion that AI needs to be not only explainable, but justifiable, meaning AI that can show how its decisions are justified according to the rules and norms of society. That concept was cited yesterday in support of the idea that we need to find ways to establish and maintain trust and accountability in AI, not just as it is used in law, but across all sectors and geographies.

Thanks for a great event.

Before ending this post, allow me to thank Jonathan Zittrain, faculty director, Jack Cushman, director, Clare Stanton, product and research manager, and everyone else at the Library Innovation Lab for organizing this summit and allowing me to be part of it. Thanks also to the folks at Casetext who provided financial and other support for the conference.

Bob Ambrogi

Bob is a lawyer, veteran legal journalist, and award-winning blogger and podcaster. In 2011, he was named to the inaugural Fastcase 50, honoring “the law’s smartest, most courageous innovators, techies, visionaries and leaders.” Earlier in his career, he was editor-in-chief of several legal publications, including The National Law Journal, and editorial director of ALM’s Litigation Services Division.