By Michele Hall and Michael Abrams
The law always trails behind technological advancements. As law enforcement surveillance technologies have evolved from taps on phone booths to thermal imaging to GPS tracking to ShotSpotter, courts have remained a step behind in responding to each development. The Supreme Court of Maryland’s recent opinion in Christopher Mooney v. State unfortunately replicates this shortcoming. As a result, the Court fails to prepare trial courts and litigants for how best to respond to the quickly evolving landscape of video authentication in the age of artificial intelligence.
Breaking Down the Case
In Mooney, the complaining witness, Mr. Zimmerman, was the victim of a non-fatal contact shooting. Mr. Zimmerman suspected that Mr. Mooney was the shooter because, immediately before the shooting, Mr. Mooney walked up to Mr. Zimmerman’s car, the two had a brief verbal exchange, and after Mr. Mooney passed the vehicle, Mr. Zimmerman was shot from behind. Critically, Mr. Zimmerman did not see the shooter at the time of the shooting. Police later canvassed the scene and saw a video camera outside a residence. The officers apparently obtained the video from the homeowner, but the homeowner did not testify to how the video system was maintained, and the officer did not testify to how he obtained the video.
To be admitted into evidence, videos and pictures must first be properly authenticated. Following the Supreme Court of Maryland’s opinion in Washington v. State, 406 Md. 642, 961 A.2d 1110 (2008), it seemed that pictures and videos could be authenticated in only three ways: through testimony from a witness with firsthand knowledge of the events depicted; under a “silent witness” theory, in which a witness speaks to the reliability of the system used to obtain the video, and the video is therefore self-authenticating; or under a business record theory, with testimony or authenticating documentation showing that the video was kept in the regular course of business.
In Mooney, the portion of the video that showed the shooting could not be authenticated through any of these methods. The question, then, was whether the trial court nonetheless properly authenticated and admitted it.
The Mooney Court concluded that the video was properly authenticated by circumstantial evidence. In its view, there was sufficient evidence for a “reasonable juror” to find by a preponderance of the evidence that the video was authentic, even though Mr. Zimmerman did not see the shooting occur, the video system was not shown to be reliable, and no business record was produced.
Video Authentication and the Rise of Artificial Intelligence
The majority opinion in Mooney pays lip service to the risks posed by what it describes as “the advent of image-generating artificial intelligence,” concluding only that “at this time, video footage can be authenticated through vigilant application of existing methods for authentication of evidence.”
But what are those “existing methods for authentication”? The majority points to Mr. Zimmerman’s testimony that the video did not appear to him to be altered or edited. But many “deep fakes” and video alterations are undetectable to the ordinary viewer, and they are becoming more sophisticated by the day.
The majority points to the close temporal proximity between the portions of the video Mr. Zimmerman was able to authenticate and the portion he was unable to authenticate, the shooting, as circumstantial evidence that the footage of the shooting was also authentic. But this logic suggests that a video must be authentic in its entirety or not at all, a premise at odds with the precision of modern video alteration techniques, which can modify a few seconds of footage while leaving the rest untouched.
The majority also points to the fact that the detective obtained the video from a purportedly unbiased source: the homeowner’s private camera. But, as the dissent points out, we know nothing of the circumstances under which the officer obtained the video: who the homeowner was, how the officer got the video from the homeowner, or whether the video system itself was reliable.
The majority’s approach comes close to suggesting that we should presume videos are authentic unless the objecting party can demonstrate otherwise. Chief Justice Fader’s concurrence, which cautions that trial courts “should be alert to claims that evidence has been altered by the use of artificial intelligence,” suggests the same. But as the dissent points out, the burden of establishing the authenticity of the video lies solely with the proffering party.
Missed Opportunities in Mooney
Instead of ignoring the realities of artificial intelligence until a case brings them to the Court’s attention, the inquiry into the authenticity of videos should bake in the risks that artificial intelligence poses today. In this regard, the Mooney opinion squarely misses the mark and follows in the tradition of courts remaining three steps behind when responding to technological advancement.
Post-Mooney, it is up to parties objecting to the admissibility of video and picture evidence to rigorously challenge authenticity. They should cross-examine witnesses on how the video was obtained, and they should press the trial court on the unknowns surrounding the recording system and the person who maintained it, the absence of a business record authentication, and the lack of expertise of any witness testifying to whether the video has been altered.
Parties should make direct arguments to trial courts about the risks of altered evidence and force the proffering party to demonstrate that such videos and pictures are authentic. Finally, litigation may be too slow to address the risks posed by artificial intelligence. Advocates should therefore take up the Chief Justice’s suggestion in the concurrence and propose to the Judiciary and the Legislature new rules and procedures for authenticating electronic evidence.