
US judicial panel to develop rules to address AI-produced evidence

Words reading “Artificial intelligence AI”, a miniature robot and a toy hand are pictured in this illustration taken December 14, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

(Reuters) – A federal judicial panel on Friday agreed to develop a rule governing the introduction of artificial intelligence-generated evidence and to begin work on a second rule that could help judges deal with claims that a piece of audio or video evidence is a “deep fake.”

The U.S. Judicial Conference’s Advisory Committee on Evidence Rules, during a meeting in New York, said it would press ahead with developing the two potential rules, even as some members questioned whether existing rules that predate the rise of AI technology are adequate to guard against “deep fakes.”


U.S. District Judge Jesse Furman of Manhattan, who chairs the panel, acknowledged the need for “caution” in crafting rules for the evolving technology. But he noted that developing such rules can take years, so doing nothing risks leaving the judiciary unprepared if and when new technologies create problems.

“I think there’s an argument for moving forward to avoid getting caught completely flat-footed,” Furman said.

The meeting came amid broader efforts by federal and state courts nationally to address the rise of generative AI, including programs like OpenAI’s ChatGPT that are capable of learning patterns from large datasets and then generating text, images and videos.


U.S. Supreme Court Chief Justice John Roberts, in his annual report on Dec. 31, cited the potential benefits of AI for litigants and judges, while saying the judiciary would need to consider its proper uses in litigation.

During Friday’s meeting at New York University Law School’s campus, committee members unanimously agreed to move forward with developing a rule to address evidence that is the product of machine learning. A draft will be prepared, and at its May meeting the committee will vote on whether to put it out for public comment.

The rule is intended to address concerns about the reliability of the processes computer technologies use to make predictions or draw inferences from existing data, akin to the questions courts have long weighed about the reliability of expert witness testimony.


A potential rule under discussion would require such computer-generated evidence to be subjected to the same reliability standards as expert witnesses, who are governed by Rule 702 of the Federal Rules of Evidence.

While the panel was unanimous in wanting to develop such a rule, members were less certain about whether to create a similar one addressing worries that courts could someday be inundated with claims by litigants that video or audio evidence is an AI-generated fake.

“I still haven’t seen that this feared tsunami is really coming,” U.S. Circuit Judge Richard Sullivan of the 2nd U.S. Circuit Court of Appeals said.


Aware that the technological landscape is evolving rapidly, though, Sullivan and other panel members agreed they should at least draft a potential rule, even if they opt not to adopt it now, so they can get a jump on the lengthy rulemaking process and be ready to act if and when deep fake evidence becomes a major issue.

“It seems like a good idea to have something in the bullpen, as it were, rather than nothing,” said Daniel Capra, a professor at Fordham School of Law who serves as a reporter to the committee and will help draft the potential rule.

REUTERS