
David Kluft asks: “If my litigation opponent uses a hallucinated case, can I use it too instead of requesting sanctions?” —
- “Plaintiff’s counsel in a CA corporate debt dispute used some #AI-generated false citations in its opposition to a motion to dismiss.”
- “Defense counsel, instead of realizing the citations were false, copied one of them into its reply brief. ‘In other words,’ as the court later explained, ‘[Defense counsel] merely followed Plaintiff’s counsel over the cliff.’”
- “The Court noted that ‘perhaps due to the asynchronous Thelma and Louise, neither side request[ed] sanctions.’ While the Court ‘certainly [did] not insist on candlelight in this age of electricity,’ it also reminded the parties that AI misuse can lead to Rule 11 sanctions.”
- Minutes: here.
Peter J. Winders, general counsel at Carlton Fields, writes: “Why Our Firm Still Prohibits Generative AI for Legal Research and Written Advocacy” —
- “Two years ago, at the request of Carlton Fields’ CEO, I published a short article titled ‘Why Our Law Firm Bans Generative AI for Research and Writing.’ As always, a lot has happened since then.”
- “More hype, more pressure. Can a hundred big law firms be wrong? Of course they can… The enthusiasm for generative AI may prove to be another [example], especially for applications with little tolerance for errors.”
- “A lot of claims about AI use in law firms are just not true. ‘Clients are demanding that firms use our product.’ I review every set of outside counsel guidelines we get from the sophisticated clients that issue them. We have hundreds of them. The overwhelming majority require that if we use a generative AI product, we (a) identify the app, (b) get written permission, (c) justify the use case, (d) explain how we guarantee against the inherent problems of generative AI, (e) describe human supervision, (f) estimate any cost savings to the client, (g) bear responsibility if anything goes wrong, and various other things. Many clients appear to distrust it as much as I do.”
- “‘Your competitors are “all in” on AI.’ Many say they are, but to the extent we can find their actual use of generative AI, it consists of back-office uses, not legal research and advocacy.”
- “There are daily reports in the legal press of fake cases, false quotations, and generated facts in legal briefs, expert reports, and judicial opinions involving generative AI tools.”
- “Courts now regularly announce local court rules requiring a certificate that the signer has not used generative AI or a certificate that he or she has removed the fabricated portions. But most courts do not seem to get the point that removing the fake parts of a legal brief does not necessarily leave a competent brief—just a recitation assembled by the tool with no easily detectable falsehoods.”
- “Generative AI has its uses. But judgment as to risk tolerance is necessary, and it depends on the application. For example: Assume a customer-facing chatbot is the application. If the typical 50% customer frustration level enables a cost savings of $X in employee cost and a 10% drop in sales, but results in a 5% increase in profit, a business may think generative AI is a godsend. But for legal research, reasoning, writing, and advocacy, an entirely different set of tolerances applies.”
- “As important, it does not appear that the advocates for generative AI actually understand what lawyers are sworn to do. Lawyers deal with specific problems for a specific client in a specific set of disputed facts. It is not a matter of finding a list of statutes, rules, or cases. In litigation, the facts are of primary importance, and they are often nuanced and in dispute, sometimes by honest witnesses who saw, heard, remember, or understood things differently. And what a case holds depends on the facts of the case, rather than a rule. The entire common law is built on what a judge concludes is the right thing to do in each fact situation, and then whether a similar situation in a following case is similar enough or different enough to cause a judge to rule the same way or differently. Researching the law is largely a matter of understanding the context of each prior case.”
- “If you do as the rules and legal opinions require and verify that your AI assistant is telling the truth, you must do traditional research and spend more time, not less, to ensure accuracy. In fact, the time AI ‘saves’ misleads, or has the potential to mislead, and may put you off track rather than giving you helpful background as a starting point.”
- “If lawyers allow themselves to use a generative product to substitute for background research to gain context, how many will have the discipline to double-check the output? After all, they are ‘only using it for background.’ They will not be citing fake cases from the background. But they risk the understanding and inspiration that they would have received from true research. A person tends to believe what they are first told. I once heard a five-year-old explain to his buddy that ‘dogs pee through their toes,’ having learned that nugget by asking, ‘Hey, Dad! What is that dog doing on that fire hydrant?’ I consider that an important parable. If it is not yet in the Bible, I expect generative AI will eventually put it there.”
- “Businessmen do not hire lawyers because lawyers are smarter. Clients and their very smart engineers, financial folks, and marketing people have a problem that they have not been able to answer. They do not want ‘an answer,’ and neither the judge nor the client wants ‘a brief.’ Instead, they hope the lawyer will use his training, knowledge, investigation, research, and the expertise of others to find a cogent resolution of the problem or an authoritative, honest, candid and persuasive argument, if there is no clear answer. The deception of AI is that it can provide ‘an answer’ or ‘a brief.’ It cannot provide what is needed to help a client or a court reach a fully informed decision, and it cannot educate a lawyer to the extent immersive research can. The risk is that lawyers may be tempted by the hype, or will trust the plausible answer the machine creates, and will not do the hard and creative work they were trained for and have sworn to do. Clients can ask generative AI questions themselves. They need more from their lawyers.”
- “AI tools may be useful as a final review of a completed or semi-completed brief. Ask the tool for a redline suggesting changes in grammar, highlighting passive voice, the word ‘blatant,’ and all modifiers. Because it is a redline, the lawyer can remove all but the modifiers that make a difference and decide whether a passive phrase adds emphasis. But that is useful because the lawyer actually knows if any suggestion is good or bad, right or wrong. I really do not know whether that is generative AI. But whether it ‘generated’ the suggestion or not is immaterial because the choices are completely up to the lawyer who knows what he or she needs to say. The tool cannot mislead the lawyer.” (A sketch of this redline workflow appears after this list.)
- “Some—many—wanting to compromise between the hype and the commonsense argument that one should not trust a fabricating tool as a source of truth, argue that maybe generative AI is better suited for transactional work—contract drafting, due diligence, coding. But that reasoning is flawed. Nothing makes it less likely that a tool whose nature is to fabricate is fabricating in transactional tasks. It might be harder to find than a fake case; it might or might not be a more acceptable risk; but it is no more worthy of trust by a novice practitioner who does not immediately know if a suggestion is beneficial or not. I am not competent to talk about the scary things that might happen with hallucinated code. So, I am sticking with generative AI’s dilution of the quality of legal research and advocacy.”
- “Moreover, case law in a common law system itself is fluid. There are no cut and dried answers to legal problems. An advocate might need to dig deeper into the circumstances of her own case to identify key facts buried there that seemed insignificant until the research is done. Good legal research is an iterative process. And as lawyers, we make the law with each new complex case. We help the law evolve. We do not simply take a case off the shelf that fits our case.”
- “Lawyers gain their strength and effectiveness from hard work. We have not been able to persuade ourselves that the so-called shortcut of prompting a machine to produce a more believable answer is a good thing, if it robs the product, the client, the court, and the lawyer’s reputation of the benefit of immersive research into the facts, policies, goals, and history of the problem and the present state of the law.”
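Winders’s one endorsed use, the mechanical redline pass, is concrete enough to sketch. Below is a minimal illustration assuming the OpenAI Python client; the model name, prompt wording, and file path are placeholders of my choosing, not the firm’s actual tooling or a vetted legal workflow.

```python
# Minimal sketch of the redline pass described above. Assumes the OpenAI
# Python client (pip install openai) and OPENAI_API_KEY in the environment;
# the model name, prompt, and file path are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

with open("draft_brief.txt") as f:  # placeholder path to the lawyer's draft
    draft = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Return a redline of the following text. Flag grammatical "
                "errors, every passive-voice sentence, every use of the "
                "word 'blatant', and every modifier. Suggest edits only; "
                "do not add arguments, facts, or citations."
            ),
        },
        {"role": "user", "content": draft},
    ],
)

# The output is only a list of suggestions; accepting or rejecting each
# one remains entirely the lawyer's call, which is Winders's condition
# for the tool being unable to mislead.
print(response.choices[0].message.content)
```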
“Colorado policy could shield AI from complaints regarding unauthorized practice of law” —
- “Does using artificial intelligence for legal help constitute the unauthorized practice of law? Lawyers in Colorado are saying no and taking novel approaches to protect the developers of AI tools that aim to expand access to legal services.”
- “In September, the Colorado Office of Attorney Regulation Counsel adopted a first-of-its-kind ‘nonprosecution policy’ that deprioritizes the UPL prosecution of developers of legal-help technology tools.”
- “According to Jessica Yates, the attorney regulation counsel for the Colorado Supreme Court, the policy recognizes that developers are often nonlawyers who still can deliver vital legal assistance to low- and moderate-income members of the public.”
- “‘Many of us have become aware over time of technological developments that would potentially provide some access that had not been there previously or hadn’t been there as easily,’ says Yates, a member of the Colorado Supreme Court’s Advisory Committee on the Practice of Law subcommittee that developed the policy.”
- “‘And we started talking about, well, can we change the definition of what it means to practice law and what is considered the unauthorized practice of law, so that companies that are interested in developing technology-based tools could do so without fear of regulatory action or ultimately being enjoined from developing those tools?’ Yates adds.”
- “The subcommittee’s focus then turned to the nonprosecution policy, which initially will be in place for three years, so that the state can evaluate whether it creates more space for innovation and benefits consumers who want to use AI for legal guidance, Yates says. It includes specific safeguards, including that developers must be supervised by lawyers and clearly disclose that they are not lawyers.”
- “Lucian Pera, a past chair of the ABA Center for Professional Responsibility and a member of the New York City Bar Association’s Presidential Task Force on Artificial Intelligence and Digital Technologies, has written about how states can follow Texas’ approach to remove uncertainty around AI for legal use.”
- “That state’s law came about after UPL charges were brought against a software company in the late 1990s. Parsons Technology, the company at the center of the case, offered legal forms and instructions on how to use them, which a district court said violated the state’s UPL statute. But while the appeal was pending, the Texas legislature amended the statute to exclude computer software from the definition of UPL, as long as products clearly said they were not a substitute for an attorney.”
- “According to Pera, it’s simple—AI isn’t UPL. In an article published in the ABA Law Practice Division’s magazine in September, he pointed out that ‘legal advice’ must be provided by lawyers, but ‘legal information’ can be dispensed by anyone. Or, in this case, anything.”
- “‘But there’s a lot of uncertainty on the part of entrepreneurs, businesspeople, etc., about it, and you could remove that very, very serious chill on their work by providing almost a safe harbor,’ says Pera, a partner at Adams and Reese in Memphis, Tennessee.”
- “Pera suggests that states could pass a law that clearly exempts software or apps that offer legal help from UPL prosecution. He notes that it could include some guardrails, such as requiring developers to inform clients that their services are not confidential and do not constitute an attorney-client relationship.”
California State Bar Committee on Professional Responsibility and Conduct “COPRAC Advisory Regarding Artificial Intelligence (AI) Hallucinations” —
- “Due to the increased usage of artificial intelligence (AI) in the legal profession, the Committee on Professional Responsibility and Conduct (COPRAC) continues to provide guidance on relevant ethical and practical considerations that arise from the use of these technologies.”
- “Generative AI tools (such as ChatGPT and Perplexity) are computer applications that can create text, images, or other content in response to user prompts. In the legal context, they may be used for tasks such as brainstorming, research, drafting, or summarizing information.”
- “While these tools can be helpful in streamlining some aspects of legal work, attorneys must use them in a manner consistent with their duty of competence (rule 1.1), diligence (rule 1.3), and responsibilities as managerial and supervisory lawyers (rule 5.1). Competent use of such technology requires understanding its limitations, including the risk of fake or ‘hallucinated’ content, outdated or incomplete legal authorities, and the inadvertent disclosure of confidential client information through prompts.”
- “Courts have sanctioned attorneys for submitting AI-generated filings containing false or fabricated authorities, and an attorney’s lack of awareness of the risk of ‘hallucinated’ content does not relieve the attorney of responsibility for ensuring the accuracy and integrity of any work product submitted. Attorneys must independently verify any AI-assisted work product before relying on it in any context. Diligent representation requires that attorneys not delegate their professional judgment to AI, but instead review, edit, and take responsibility for the substance and timing of all filings, communications, and advice.”
- “Attorneys with managerial or supervisory authority must also implement reasonable policies, training, and oversight to ensure that any use of generative AI by attorneys does not compromise client confidentiality or replace appropriate legal analysis, supervision, or quality control. Ultimately, licensees should evaluate these tools thoughtfully, balancing their potential benefits while understanding the potential pitfalls.”
- “COPRAC is actively working on revisions and updates to its practical guidance regarding AI. The updated practical guidance will be presented at the May 14–15, 2026, Board of Trustees meeting for approval. In addition, proposed amendments to the Rules of Professional Conduct are currently out for public comment. Licensees and members of the public are encouraged to submit written comments on the proposed amendments.”
“Aligning Microsoft Tools With NYC Bar AI Recording Guidance” —
- “On Dec. 22, 2025, the New York City Bar Association’s Professional Ethics Committee issued Formal Opinion 2025-6, addressing the ethical obligations arising when attorneys or clients use artificial intelligence tools to record, transcribe or create summaries of their conversations.”
- “Among other things, the opinion: ”
- “States that an attorney should obtain client consent before the attorney engages AI to record a call and consider whether recording, transcribing and summarizing is well advised in the specific circumstances, including issues of confidentiality and privilege;”
- “Explains that if an attorney knows that a client is recording a call with an AI tool, the lawyer should advise the client of the disadvantages of doing so; and”
- “Recommends that attorneys review any AI-generated transcripts, summaries or other meeting artifacts for accuracy with respect to any meetings with counsel to effectuate ethical duties.”
- “While the opinion focuses on attorney-client communications between outside counsel and their clients, it is not limited to, and its guidance applies equally to, in-house counsel communicating with internal business partners. For organizations using Microsoft 365 and Copilot, the opinion raises immediate questions about platform configuration, data governance and e-discovery readiness.”
- “Recording and transcription are no longer just discrete, intentional acts by meeting participants. They can also be triggered by default settings, enabled automatically when Copilot features are used or initiated by any meeting participant. The resulting data artifacts can proliferate across mailboxes, OneDrive accounts and SharePoint sites in ways that create novel challenges for preservation and collection.”
- “This article explores those technical realities and offers guidance for aligning the Microsoft 365 environment with the opinion’s requirements.”
- “1. The opinion emphasizes obtaining client consent before recording. How does Microsoft Teams handle consent notification, and what configuration options are available?”
- “2. The opinion distinguishes between recordings that persist — i.e., a kept record of a conversation that may be relied upon years later — and those that don’t, i.e., those that exist only long enough to support other Copilot features. What are the Copilot configuration options in Teams meetings, and how do they affect data retention?”
- “3. Where are these AI-generated artifacts actually stored? How does this affect data mapping and legal holds?” (A sketch bearing on this question appears after this list.)
- “4. The opinion emphasizes the duty of competence and the need to review AI-generated content for accuracy. What accuracy concerns may arise with Teams AI features?”
- “5. Are there other AI features that create artifacts without direct user action that organizations should be aware of?”
- “6. What practical steps should organizations take to configure their Microsoft 365 environment in alignment with the opinion’s requirements of consent, confidentiality and accuracy?”
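Question 3 above asks where these artifacts actually live. As a minimal sketch of how an e-discovery or data-mapping team might begin enumerating them, the snippet below calls Microsoft Graph v1.0’s documented list-transcripts and list-recordings endpoints for a single Teams meeting. The token, user ID, and meeting ID are placeholders; real use requires an app registration granted the OnlineMeetingTranscript.Read.All and OnlineMeetingRecording.Read.All application permissions.

```python
# Minimal sketch: enumerate recording/transcript artifacts for one Teams
# meeting via Microsoft Graph v1.0. ACCESS_TOKEN, USER_ID, and MEETING_ID
# are placeholders; token acquisition (e.g., an MSAL client-credentials
# flow) is out of scope for this sketch.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<app-only bearer token>"   # placeholder
USER_ID = "<meeting-organizer-guid>"       # placeholder
MEETING_ID = "<online-meeting-id>"         # placeholder

headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

for kind in ("transcripts", "recordings"):
    url = f"{GRAPH}/users/{USER_ID}/onlineMeetings/{MEETING_ID}/{kind}"
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    for item in resp.json().get("value", []):
        # Each hit is a discoverable record; log its ID and creation time
        # so it can be folded into a data map or legal hold.
        print(kind, item.get("id"), item.get("createdDateTime"))
```

Note that transcription generated only transiently to power Copilot features (question 2) may not surface in these listings at all; whether it does is worth verifying in your own tenant before relying on any data map.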
