
AI in Legal: Balancing Innovation with Accountability
With tools capable of summarizing voluminous documents, conducting legal research, identifying potentially privileged materials, and drafting litigation content, AI’s promise to transform the legal profession, delivering efficiency and scalability, is undeniable. But with great promise comes real peril. As courts sanction attorneys for uncritical reliance on AI and regulators scramble to establish governance frameworks, legal professionals are left to reconcile the benefits of AI with their enduring ethical and professional obligations.
As generative AI (GAI) tools such as ChatGPT and Claude become more prevalent in legal workflows, the challenge is clear: How can attorneys harness these technologies responsibly while maintaining accountability, confidentiality, and trust?
The Promise: Efficiency, Insight, and Innovation
AI offers significant advantages to litigation teams. In document review, AI-powered tools can identify relevant documents, flag potentially privileged content, and generate draft privilege logs. During discovery, these tools can help summarize lengthy transcripts and assist in drafting interrogatories and deposition outlines. At the trial preparation stage, attorneys can use GAI tools to assist with writing examination scripts and cross-examination questions.
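To make the document-review use case concrete, here is a minimal sketch of how a review team might wire a generative model into a first-pass privilege triage. Everything in it is an assumption for illustration: it uses the OpenAI Python client with a placeholder prompt and model, presumes a vendor-vetted endpoint that does not retain or train on client data (a point revisited below), and treats every model answer as a draft pending attorney review.

```python
# A first-pass privilege triage built on a generative model. Illustrative
# only: it assumes a vendor-vetted endpoint that does not retain or train
# on client data, and every draft call still goes to a human reviewer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are assisting a litigation document review. Reply with exactly one "
    "word, PRIVILEGED or NOT_PRIVILEGED, indicating whether the document "
    "below may be subject to attorney-client privilege.\n\n{doc}"
)

def triage_privilege(doc_text: str) -> str:
    """Return the model's draft privilege call for a single document."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        temperature=0,   # deterministic output for repeatable review passes
        messages=[{"role": "user", "content": PROMPT.format(doc=doc_text)}],
    )
    return response.choices[0].message.content.strip()

# Hypothetical documents; real reviews run over far larger populations.
documents = {
    "DOC-001": "Email from outside counsel re: settlement strategy...",
    "DOC-002": "Quarterly sales figures circulated to the marketing team...",
}

# Each draft call is queued for attorney review; the model never makes
# the final privilege determination.
for doc_id, text in documents.items():
    print(doc_id, triage_privilege(text), "(pending attorney review)")
```

The design choice worth noting is the review queue at the end: the model accelerates triage, but the privilege call itself stays with a human.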
These advanced capabilities are no longer hypothetical, and the adoption of AI in legal practice is climbing rapidly. According to the ABA’s 2024 Legal Technology Survey, nearly 30% of U.S. attorneys reported using AI tools in their practice, almost triple the 11% reported in 2023. In large firms (100+ attorneys), adoption reached 46%. Legal departments are keeping pace, with almost half of surveyed general counsel reporting active use of GAI in 2025.
This surge in usage is driven by real gains in speed, cost reduction, and consistency. Tasks that once required hours of junior associate time can now be completed in minutes. AI’s ability to digest and synthesize large datasets opens the door for deeper litigation analytics and predictive modeling, which means faster answers, lower bills, and potentially better client outcomes.
The Perils: Hallucinations, Bias, and Ethical Breaches
Despite its promise, GAI presents unique risks, with the generation of fictitious information (“hallucinations”) topping the list. This peril was painfully illustrated in Mata v. Avianca, Inc., where attorneys submitted a brief containing six fake case citations generated by ChatGPT. They were sanctioned and ordered to notify the real judges whose names were falsely invoked in the hallucinated opinions.*
Similar incidents have occurred in courts across the U.S., leading to sanctions, fines, and referrals to grievance panels.** These cases illustrate the core principle that no matter how the work is produced, the attorney remains the accountable party.
Other potential perils to navigate include data privacy, bias in training data, and overreliance on tools not designed for legal judgment. AI systems trained on historical data may perpetuate existing disparities or fail to recognize novel legal arguments. Moreover, using public or self-learning models to process confidential client data can expose firms to confidentiality breaches.
Ethical Obligations: What the Rules Require
In response to these challenges, the American Bar Association and various state bar associations have issued guidance emphasizing that traditional ethical rules apply with full force to AI use. ABA Formal Opinion 512,*** issued in July 2024, outlines six key areas of concern:
- Competence (Rule 1.1): Attorneys must understand the capabilities and limitations of any GAI tool they use and must verify its output before relying on it (a minimal verification gate is sketched after this list).
- Confidentiality (Rule 1.6): Attorneys must not disclose client information through GAI systems unless confidentiality is assured. Informed client consent may be required when using self-learning tools.
- Communication (Rule 1.4): Attorneys must inform clients if GAI tools are used in ways that impact representation, fees, or confidentiality.
- Candor Toward the Tribunal (Rules 3.1, 3.3, 8.4): Submissions to courts must be accurate and truthful, and AI-generated errors do not excuse an attorney’s ethical obligations.
- Supervisory Duties (Rules 5.1, 5.3): Supervisory attorneys must implement policies and training to ensure responsible AI use across teams.
- Fees (Rule 1.5): Attorneys must bill reasonably, reflecting efficiencies gained through AI, and must avoid charging clients for learning time unless previously agreed.
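One way to operationalize the verification duty in Rule 1.1 is a hard gate between an AI-drafted document and the courthouse: nothing is filed until every cited authority has been confirmed by a human. The sketch below is a simplified illustration; the citation regex and the verified-citation store are hypothetical stand-ins for checking each authority in a recognized reporter or citator.

```python
# A verification gate for AI-drafted filings. The regex and the
# "verified" store are simplified stand-ins: in practice, each authority
# must be confirmed by a human in a recognized reporter or citator.
import re

# Stand-in for citations an attorney has personally confirmed exist.
VERIFIED_CITATIONS = {
    "598 F. Supp. 3d 1",  # hypothetical, already human-checked
}

# Deliberately narrow pattern covering F., F.2d/3d/4th, and F. Supp. cites.
CITE_PATTERN = re.compile(r"\d+\s+F\.(?:\s*Supp\.)?\s*\d*(?:d|th)?\s+\d+")

def unverified_cites(draft: str) -> list[str]:
    """Return every citation in the draft not yet confirmed by a human."""
    return [c for c in CITE_PATTERN.findall(draft) if c not in VERIFIED_CITATIONS]

draft = (
    "As held in Smith v. Jones, 598 F. Supp. 3d 1, and reaffirmed in "
    "Doe v. Roe, 999 F.4th 123, the motion should be denied."
)

flags = unverified_cites(draft)
if flags:
    # Hold the filing until an attorney verifies each flagged authority.
    print("Hold filing; verify before signing:", flags)
```

A real gate would also resolve party names and pin cites, but even this trivial version flags anything an attorney has not personally confirmed, which is the discipline the Mata sanctions enforce.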
These guidelines underscore a central truth: AI cannot replace an attorney’s judgment, diligence, or ethical duties. The attorney remains responsible for oversight, verification, and communication whenever AI is used.
Regulatory and Judicial Responses
Courts have begun issuing standing orders and local rules addressing AI use in litigation. By late 2024, 39 federal judges had implemented such orders. The most common theme: attorneys must disclose AI use in filings and certify that all content has been reviewed for accuracy.
For example, in the Northern District of Texas, Judge Brantley Starr requires every filing to include a certification stating whether GAI was used and affirming that a human verified all output. The Western District of North Carolina goes further, forbidding the use of GAI in brief writing unless accompanied by attorney verification of every citation.
Other jurisdictions, including parts of California and Pennsylvania, mandate disclosure without prohibiting use. The Illinois Supreme Court, by contrast, has discouraged mandatory AI-use disclosures, favoring reliance on existing ethical rules. This patchwork reflects a legal system in transition, where some courts seek transparency, others impose limits, and a few advocate a more hands-off approach.
Best Practices for Attorneys and Legal Departments
In this evolving environment, legal professionals must take proactive steps to use AI tools responsibly. The following best practices can help strike the right balance:
- Develop Internal Policies: Law firms and legal departments should implement policies for AI usage that cover approved tools, verification requirements, and data protection protocols.
- Vet Vendors Carefully: Ensure AI providers follow robust cybersecurity and confidentiality standards. Understand whether data entered into the system is stored or reused.
- Train and Educate: All users, from junior associates to partners, should understand how AI tools work, what they do well, where they can fail, and what responsibilities attorneys have when using them.
- Maintain Transparency: Communicate with clients when AI is used in ways that affect fees, confidentiality, or key decisions—documenting consent when necessary.
- Monitor Regulatory Developments: As the legal framework around AI evolves, attorneys must stay up to date on local court rules, bar opinions, and new legislation.
Conclusion: Accountability Is the Anchor
AI is reshaping the legal profession, but it does not relieve attorneys of their core responsibilities and ethical obligations. As litigation teams embrace GAI for its efficiency and insight, they must remain anchored in the principles of ethics, particularly competence and candor. Technology may change how legal work is done, but it cannot substitute for the professional judgment of a well-informed attorney.
In the coming years, firms that embrace innovation while reinforcing accountability will be best positioned to lead the way in integrating AI into their legal practice. The key is not to fear AI but to manage it wisely by balancing promise with caution, innovation with ethics, and always keeping an attorney in the loop to oversee and verify the output.
* Mata v. Avianca, Inc., No. 22-cv-1461 (PKC), 2023 WL 4114965 (S.D.N.Y. June 22, 2023)
** Gauthier v. Goodyear Tire & Rubber Co., No. 1:23-cv-00281 (E.D. Tex. Nov. 2024); Park v. Kim, No. 22-2057 (2d Cir. 2024)
*** Formal Opinion 512, ABA Standing Committee on Ethics and Professional Responsibility
By Danielle Noonan
QuisLex

Danielle Noonan is an Associate Vice President, Legal Services at QuisLex. She has over a decade of experience executing all aspects of e-discovery reviews for global clients, including hundreds of financial services cases, and consulting with in-house legal teams to streamline and standardize their protocols, processes and guidelines for all stages of the EDRM cycle.