Artificial Intelligence capabilities have grown fast and furious. Only a year ago, the founder of DoNotPay, an AI chatbot "lawyer" service, offered to pay someone $100,000 to take its services to court and parrot what the AI told them to say through a Bluetooth earpiece.
And while that offer was quickly rescinded as unlawful, the prospect of robot lawyers has remained ever-present.
Only a few months later, lawyers across the U.S. were being accused of citing fake cases, invented by AI, to support their legal arguments. This included the high-profile incident of Michael Cohen sending fake, AI-generated cases to his lawyer. Oops.
These incidents prompted Canadian law societies to issue directions on the use of AI for legal research and legal work. The directions were preemptive, warning lawyers that AI, while certainly able to assist in generating legal work, can get both the facts and the law wrong. It remains the responsibility of every lawyer to verify the information AI provides, both the cases it cites and its summary of how the law works.