Study finds AI language models unreliable for legal advice, risking harm to less powerful groups
- Popular large language models like GPT-3.5 and PaLM-2 frequently generate inaccurate legal information, producing fabricated answers in up to 88% of responses to legal queries.
- The models struggle with complex legal reasoning, such as comparing cases or constructing arguments, though they can handle simple factual tasks.
- Overreliance on AI models could create a "legal monoculture," narrowing perspectives and overlooking relevant precedents.
- The risks are greatest for less powerful lawyers and self-represented litigants, whose queries may rest on false assumptions that the models fail to correct.
- Researchers conclude that LLMs cannot effectively assist lawyers and should not be relied on for litigation purposes.