Michael D. Cohen, the former fixer for former President Donald J. Trump, mistakenly provided his attorney with false legal citations produced by the artificial intelligence program Google Bard, he said in court papers unsealed on Friday.
The fabricated citations were used by the attorney in a motion submitted to a federal judge, Jesse M. Furman. Mr. Cohen, who pleaded guilty in 2018 to campaign finance violations and served time in prison, asked the judge for an early termination of the court's supervision of his case now that he is out of prison and has complied with the conditions of his release.
The ensuing series of misunderstandings and mistakes ended with Mr. Cohen asking the judge to use “discretion and mercy.”
In a sworn declaration made public on Friday, Mr. Cohen explained that he had not kept up with "emerging trends (and associated risks) in legal technology and did not realize that Google Bard was a generative text service that, like ChatGPT, may display citations and descriptions that appear to be true but actually are not."
He also said he did not realize that the attorney who filed the motion on his behalf, David M. Schwartz, "would include the cases in his submission wholesale without even confirming that they existed."
The episode could have implications for the Manhattan criminal case against Mr. Trump, in which Mr. Cohen is expected to be the star witness. The former president's lawyers have long attacked Mr. Cohen as a serial fabulist; now, they say, they have a new example.
The ill-starred filing is at least the second this year in Manhattan federal court in which attorneys cited false decisions generated by artificial intelligence. The legal profession, like many others, is struggling to come to grips with a novel technology meant to mimic the human brain.
Artificial intelligence programs like Bard and ChatGPT generate realistic responses by predicting which fragments of text should follow other sequences. Such programs are trained on billions of text samples drawn from all over the internet. Although they can synthesize a great deal of information and present it persuasively, there are still bugs to work out.
The three citations in Mr. Cohen's case appear to be hallucinations created by the Bard chatbot, which took bits and pieces of actual cases and combined them with robotic imagination. Mr. Schwartz then wove them into the motion he submitted to Judge Furman.
Mr. Cohen, in his declaration, said he had understood Bard to be "a supercharged search engine," which he had previously used to find accurate information online.
Mr. Schwartz, in his own declaration, acknowledged including the citations and said he had not independently reviewed the cases because Mr. Cohen had indicated that another lawyer, E. Danya Perry, was providing input on the motion.
“I sincerely apologize to the court for not reviewing these cases personally before submitting them to the court,” Mr. Schwartz wrote.
Barry Kamins, a lawyer for Mr. Schwartz, declined to comment on Friday.
Ms. Perry said that she began representing Mr. Cohen just after Mr. Schwartz filed the motion. She wrote to Judge Furman on Dec. 8 that, after reading the filing, she could not verify the case law it cited.
In a statement at the time, she said that "pursuant to my ethical obligation of candor to the court, I advised Judge Furman of this issue."
She said in a letter made public on Friday that Mr. Cohen, a former lawyer who was disbarred, "did not know that the cases he identified were not real and, unlike his lawyer, was under no obligation to confirm that they existed."
"It should be emphasized that Mr. Cohen did not engage in any misconduct," Ms. Perry said. She said on Friday that Mr. Cohen had no comment, and that he had agreed to the unsealing of the court papers after the judge asked whether they contained information protected by attorney-client privilege.
The imbroglio emerged when Judge Furman said in a Dec. 12 order that he could not find any of the three rulings. He ordered Mr. Schwartz to provide copies or “a thorough explanation of how the motion came to cite cases that did not exist and what role, if any, Mr. Cohen played.”
The matter could have significant implications, given Mr. Cohen’s key role in a case brought by the Manhattan district attorney scheduled for trial on March 25.
The district attorney, Alvin L. Bragg, has charged Mr. Trump with orchestrating a hush-money scheme centered on a payment Mr. Cohen made during the 2016 election to a pornographic film star, Stormy Daniels. Mr. Trump has pleaded not guilty to 34 felony charges.
In seeking to rebut the arguments of Mr. Trump's lawyers that Mr. Cohen cannot be trusted, his defenders say that Mr. Cohen lied on behalf of Mr. Trump but has told the truth since splitting with the former president in 2018 and pleading guilty to federal charges.
On Friday, Mr. Trump’s lawyers immediately seized on the Google Bard revelation. Susan R. Necheles, a lawyer representing Mr. Trump in the upcoming trial in Manhattan, said it was “typical Michael Cohen.”
"He is an admitted perjurer and has pled guilty to multiple felonies, and this is a further indication of his lack of character and continued criminality," Ms. Necheles said.
Ms. Perry, the lawyer now representing Mr. Cohen in the motion, said that Mr. Cohen's consent to unsealing the filings showed he had nothing to hide.
"He relied on his lawyer, as he had every right to do," she said. "Unfortunately, it appears that his lawyer made the mistake of not verifying the citations in the brief he drafted and filed."
A spokesman for Mr. Bragg declined to comment on Friday.
Prosecutors may argue that Mr. Cohen’s actions were not intended to deceive the court, but rather, by his own admission, were a product of an unfortunate misunderstanding of the new technology.
The issue of lawyers relying on chatbots exploded into public view earlier this year after another federal judge in Manhattan, P. Kevin Castel, fined two lawyers $5,000 after they admitted to filing a legal brief full of nonexistent cases and citations, all generated by ChatGPT.
Such cases appear to be cropping up in courts across the nation, said Eugene Volokh, a UCLA law professor who has written about artificial intelligence and the law.
Professor Volokh said he counted a dozen cases in which lawyers or self-represented litigants were believed to have used chatbots for legal research that ended up in court filings. “I strongly suspect that this is only the tip of the iceberg,” he said.
Stephen Gillers, a legal ethics professor at New York University School of Law, said: “People have to understand that generative AI is not the bad guy here. It holds great promise.”
“But lawyers cannot treat AI as their co-counsel and simply ignore what it says,” he added.
The nonexistent cases cited in Mr. Schwartz's motion — United States v. Figueroa-Flores, United States v. Ortiz and United States v. Amato — included corresponding summaries and notations that they had been affirmed by the U.S. Court of Appeals for the Second Circuit.
Judge Furman noted in his Dec. 12 order that the Figueroa-Flores passage actually referenced a page from a decision issued by another federal appeals court and “has nothing to do with supervised release.”
The Amato case named in the motion, the judge said, actually concerns a decision by the Board of Veterans’ Appeals, an administrative tribunal.
And the citation in the Ortiz case, Judge Furman wrote, appeared to be "without merit."
William K. Rashbaum contributed reporting.