Lawyers Warn AI Chats May Be Used in Court After Privilege Ruling in New York



  • More than a dozen major US law firms have warned clients that AI chatbot conversations could be discovered in court.
  • The warning followed a New York ruling that a fraud defendant’s communications with Anthropic’s Claude were not protected by attorney-client privilege or the work-product doctrine.

US law firms are moving quickly to warn clients that conversations with AI chatbots may not remain confidential if a case ends up in court.

The warnings follow a February ruling by Judge Jed Rakoff in New York, who ordered Bradley Heppner, former chairman of the bankrupt company GWG Holdings, to turn over 31 documents generated with Anthropic’s Claude to prosecutors pursuing fraud charges against him.

Rakoff found that no attorney-client relationship exists between a user and Claude, and that any privilege is waived by sharing information with the platform.

Law firms have begun writing caveats into client agreements

Reuters reports that more than a dozen major US law firms have issued advisories telling clients to be wary of discussing legal matters with chatbots such as Claude and ChatGPT. Some firms have gone so far as to put those warnings directly into their engagement agreements.

For example, New York-based firm Sher Tremonte said in a recent client agreement that disclosing privileged communications to third-party AI tools could undermine the protections of the attorney-client relationship.

That is a meaningful change. What was, a few months ago, an internal caution among lawyers is now being written directly into client engagement terms.

One ruling, but a broader legal signal

The Rakoff decision is not the only court ruling on the issue. On the same day, a federal judge in Michigan ruled that a plaintiff’s ChatGPT conversations could be considered personal activity and did not have to be produced. Even so, legal advisers appear to see the New York case as the more important warning sign for now.

The deeper issue isn’t just AI; it’s confidentiality. As Reuters reported, both Anthropic and OpenAI state in their terms of service that user data may be shared with third parties, including government authorities in some cases. For lawyers, this gives an old rule new urgency: do not discuss your case with anyone other than your attorney, and that includes a chatbot.




