Weird and Wacky Wednesdays: Volume 365

AI Pioneers Meet Old Rules

This week on Weird and Wacky Wednesdays, we look at some legal considerations of being a pioneer in new AI technology. New tools create new roles. Old rules still apply. These three stories show the fit isn’t always easy.

When an AI avatar tries to argue in court

In March 2025, a self-represented litigant at New York’s Appellate Division, First Department, queued up a prerecorded oral argument that featured an AI-generated talking head instead of his real face. The judges stopped the video within seconds upon realizing the speaker was not human. The litigant apologized and continued without the avatar. The case is still pending.

Of course, courts expect real people to appear. Predictions that AI would be the demise of the legal profession have been laughable, particularly for those of us lawyers who actually appear in court to conduct trials and appeals. Technology can help with presentation, but it can’t appear as a lawyer or witness, at least so long as we have human judges. All bets are off if judges are ever replaced by robots, however.

When retail “computer vision” meets privacy law

On August 1, 2025, a Chicago customer filed a proposed class action alleging that The Home Depot’s self-checkout kiosks quietly scanned faces with AI “computer vision” to deter theft, without the written consent that Illinois’ Biometric Information Privacy Act requires. The complaint points to a green box tracking the shopper’s face on the screen and the absence of posted notices.

The suit seeks statutory damages, presumably under the Act, and class certification. BIPA is strict about advance notice, written consent, and retention policies. If a retailer wants the benefits of real-time analytics, it must follow the statute’s disclosure and consent steps.

Although The Home Depot has run into legal trouble in the past over the collection and retention of personal information, one gets the sense that the company is more interested in using technology to spy on customers than in following the law.

When your chatbot gives the wrong legal answer

In February 2024, the B.C. Civil Resolution Tribunal held Air Canada liable for negligent misrepresentation after a website chatbot told a traveler he could request a bereavement refund after flying. Another page said the opposite. The tribunal ruled that a company is responsible for information on its own site, whether it comes from a static page or a chatbot, and awarded modest damages.

I’ve seen TikToks in which people reportedly engaged a chatbot on a car dealer’s website, trying to get it to form a contract for the sale of a car at far less than the listed price. These people clearly knew they were negotiating with a chatbot and were trying to take advantage of that.

The whole point of a chatbot is to suggest there is someone at the other end responding as a real person speaking for the company. Since chatbots are set up to act as company representatives, a customer who legitimately engages with the company’s website should reasonably be able to rely on the responses. In the coming years, companies will not be able to hide behind the technology when it goes wrong.

If you put an AI tool in front of customers, you should expect to own what it tells them.

The pattern

Early adopters often find the edges. These cautionary tales suggest some level of arrogance in the deployment of new technology. One can imagine how painful it was for the parties to deal with their errors in these cases. For the rest of us, it’s entertaining, as most of us take some joy in watching arrogance or stupidity end in consequences. That said, I keep in mind that any of us could run afoul of the law as we deploy and use new technology.
