Talk Justice, an LSC Podcast: Putting AI to the Test for Legal Services
Contact
Carl Rauscher
Director of Communications and Media Relations
rauscherc@lsc.gov
202-295-1615
WASHINGTON - Legal tech experts discuss their research and experiments with generative artificial intelligence (AI) for legal services on the latest episode of LSC's "Talk Justice" podcast, released today. Host Cat Moon is joined by Margaret Hagan, Executive Director of the Legal Design Lab at Stanford University, and Sateesh Nori, a New York University law professor, author and longtime legal aid lawyer.
Hagan and her team have just finished phase one of their research into generative AI and access to justice.
"Our lab at Stanford, the Legal Design Lab, has been really focused on understanding, both from community members' perspective and from those who provide frontline legal help at legal aid and court help centers: is generative AI worth the hype?" she says. "Do people really have an appetite to use it for legal problem-solving scenarios, and do frontline providers think that AI can be helpful either behind the scenes or as they deliver services?"
To get a picture of this, the Design Lab conducted many interviews and placed people in simulated scenarios with AI tools for legal help. Hagan wants to identify the most promising use cases for AI from the perspective of legal professionals, and also think through the best ways to analyze quality and risks. In the second phase, they will launch research and development projects based upon the initial findings, as well as create national networks to build shared AI infrastructure.
As a legal services lawyer representing low-income people in New York City for 22 years, Nori says he has "tried everything that is out there," including building a dozen GPT tools to help tenants with their legal problems and to test out use cases for generative AI. He also worked with the company Josef to create a tool for a nonprofit service called Housing Court Answers, which fields about 50,000 calls from New York City tenants each year.
"We built a copilot model for these operators so that the gen AI can be in the background like a really experienced supervisor and can feed answers to the person who can then relay them to the caller as a first step," Nori says. "Eventually, maybe the gen AI can stand on its own and it can just be a chatbot."
"And I think if this works, this could revolutionize the triage part of legal services, which is such a huge expenditure of resources and time in probably 100% of the legal services nonprofits across the country," he continues.
Nori worries about lawyers stalling the progress of AI tools by holding them up against a "perfect" system, rather than against the actual level of service currently provided to low-income Americans. Hagan has put a lot of thought into how to lab-test AI tools effectively and efficiently so that progress continues responsibly. She explains that the focus should be on two measures: can the AI system match or beat the quality of the best humans doing the work today, and can it match or beat the efficiency of current practices?
"And once we see the AI system in the lab setting matching or beating the best available human quality at that task…then it's time to go to the copilot phase where we're rolling it out with a human strongly in the loop using this AI, but having a lot of ability to spot problems [and] record those, correct the AI, but putting it in the field because we are not going to know performance issues or risk issues until we start doing controlled pilots in the field," Hagan continues.
Then, Hagan explains, with a human working closely with the AI, we can truly measure how common errors, bias and "hallucinations" are, rather than just speculating.
Talk Justice episodes are available online and on Spotify, Stitcher, Apple and other popular podcast apps. The podcast is sponsored by LSC's Leaders Council.
The next episode of LSC's podcast will explore Legal Services of Eastern Missouri's Neighborhood Advocacy Program.