Navigating Deepfake Allegations in Court

The rapid rise of generative AI has introduced a new challenge for the Canadian legal system: the deepfake. In the recent decision of R. v. Medow, the court conducted a seminal analysis of how to address allegations of manipulated digital evidence. The decision is a blueprint for judges navigating the murky waters of AI-generated content in criminal trials.

R. v. Medow: Setting a Legal Precedent

Mr. Medow faced multiple charges, including assaulting a peace officer and obstructing justice. Central to the prosecution’s case was Body-Worn Camera (BWC) and In-Car Camera (ICC) footage.

Mr. Medow asserted the videos were “deepfakes” digitally altered to frame him. He believed he was the victim of a long-standing harassment campaign by the Toronto Police. This claim required the court to determine the appropriate standard for challenging the authenticity of digital recordings.

Justice Brock Jones handled the deepfake allegation with a balanced approach that acknowledged technological realities without succumbing to baseless skepticism. The judge took judicial notice of the existence and widespread proliferation of AI technology capable of creating realistic deepfakes.

Judicial Notice and the Threat of AI Manipulation

Taking judicial notice of this technology is a significant development for AI and deepfake allegations in court. By doing so, the court accepted as fact that deepfake technology is notorious, generally accepted, and poses a “present and growing danger” to the justice system.

However, the judge clarified that simply acknowledging the existence of deepfakes does not mean every video is presumed fraudulent. Instead, the Canada Evidence Act continues to govern the authentication of electronic documents.

When Speculation Isn’t Enough: Burden of Proof

Under this legislation, the party introducing the evidence bears the burden of showing the document is what it purports to be. This is not an onerous standard. Once admitted, the trier of fact must determine the evidence’s ultimate reliability based on the totality of the evidence at trial.

This case serves as a practical example for other judges of how to address these allegations in the courtroom. The judge refused to place blind faith in digital evidence but also rejected what he characterized as rank speculation on Mr. Medow’s part that the video had been altered. He ruled that an accused person cannot merely speculate that a video is falsified in order to diminish its weight or preclude its admissibility. This sets a clear standard: to discharge the burden of showing a deepfake, the individual making the allegation must provide an evidentiary basis rather than mere theory.

In Mr. Medow’s case, there was some evidence of alteration to the video: the face of a possible witness had been blurred for privacy reasons. There was also his own testimony, which differed substantially from that of the police, and there were elements of his version of events that the police did not recall or could not comment on.

The Role of Expert and Forensic Evidence

To successfully challenge digital evidence as a deepfake, a person must point to specific indicators of tampering or provide expert forensic evidence. Although Mr. Medow suggested that the metadata might indicate manipulation, he failed to produce any expert testimony or concrete proof to support his theory. The judge noted that in cases where witnesses might have a motive to fabricate evidence, expert evidence might indeed be necessary for the Crown to authenticate the video. However, in Medow, the police officers were found to be honest and impartial witnesses with no technical control over the BWC footage after it was uploaded to the docking station.
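
To illustrate what a metadata-based challenge might involve, the sketch below uses the open-source ffprobe tool (part of FFmpeg) to dump a video file’s container and stream metadata, where anomalies such as a creation time that post-dates the incident or an unexpected re-encoding tag could give an expert a concrete starting point. The file name is hypothetical, and this is a minimal illustration of the concept, not a forensic methodology.

```python
import json
import subprocess

# Dump container- and stream-level metadata as JSON using ffprobe
# (part of the open-source FFmpeg suite). "bwc_footage.mp4" is a
# hypothetical file name used only for illustration.
result = subprocess.run(
    [
        "ffprobe",
        "-v", "quiet",            # suppress log chatter
        "-print_format", "json",  # machine-readable output
        "-show_format",           # container metadata (duration, tags)
        "-show_streams",          # per-stream metadata (codecs, tags)
        "bwc_footage.mp4",
    ],
    capture_output=True,
    text=True,
    check=True,
)

info = json.loads(result.stdout)

# Container tags often include fields such as creation_time and encoder;
# inconsistencies here are the kind of specific indicator a court might
# expect an expert to interpret and explain.
print("Container tags:", info.get("format", {}).get("tags", {}))
for stream in info.get("streams", []):
    print(stream.get("codec_type"), stream.get("codec_name"),
          stream.get("tags", {}))
```

Even with output like this in hand, Medow suggests a court will still expect expert interpretation rather than a lay assertion that the numbers look wrong.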

The court found that the “blurring” of another party’s face for privacy did not constitute the kind of malicious manipulation that would trigger a deepfake concern. For a deepfake allegation to succeed, the party making it would likely need to demonstrate a clear “motive for fabrication” or point to forensic anomalies that contradict the testimony of the authenticating witnesses.

The judge also emphasized that, as AI technology continues to advance, the authentication process must not be rendered meaningless.

This case therefore sets out what is effectively a three-step process: first, taking judicial notice of AI capabilities; second, requiring the Crown to show a prima facie case for authenticity; and third, performing a rigorous analysis of reliability based on witness credibility and forensic integrity.

Access to Justice in the Age of AI

The case also raises a serious access-to-justice concern for self-represented litigants. Accused individuals often lack the resources to retain experts who can examine and meaningfully challenge digital evidence. The judge, recognizing this limitation, suggested that courts must find ways to assist such litigants and ensure a fair trial without compromising judicial neutrality. This might involve more comprehensive disclosure or court-assisted questioning.

The Future of Digital Evidence in Court

Ultimately, the Medow case teaches that the mere “possibility” of a deepfake is not a legal defence.

Digital evidence must be authenticated, and a witness must confirm it accurately depicts the events. If those foundational elements are present, the burden shifts to the challenger to show, with more than mere theories, that the recording has been forged. This decision is a reminder that while the tools of deception are evolving, the fundamental principles of evidence remain in place and must be applied rigorously.

As we enter the era of AI, judges must remain vigilant. We must treat digital evidence not as an absolute truth, but as one piece of a much larger puzzle.
