As AI tools rapidly enter legal practice, bar associations are weighing in on the ethical implications. This article breaks down what the ABA and state bars are saying about confidentiality, competence, and responsible use—so lawyers can stay compliant while leveraging AI.
Artificial intelligence is here—and it’s not waiting for legal ethics to catch up.
Lawyers across every practice area are now experimenting with AI tools to streamline research, draft documents, and even generate courtroom arguments. But as the technology accelerates, one question keeps coming up:
Is using AI in legal practice ethical?
Bar associations across the country—and the American Bar Association (ABA) itself—have started weighing in. If you’re a lawyer thinking about using AI tools like ChatGPT, Harvey, Casetext CoCounsel, or others, it’s time to understand the ethical landscape.
This article breaks down what’s happening, what bar associations are saying, and how you can stay compliant while leveraging AI in your practice.
At first glance, AI may feel like just another legal tech tool. But under the hood, it works very differently—and that’s exactly what concerns bar associations.
Unlike traditional software, generative AI systems:

- generate new text rather than retrieve vetted content, so their output can be confidently wrong
- may retain or learn from the data you enter into them
- operate as statistical black boxes whose reasoning can't be fully audited

These traits touch multiple core obligations in the ABA Model Rules of Professional Conduct, including:

- Rule 1.1 (Competence)
- Rule 1.4 (Communication)
- Rule 1.6 (Confidentiality of Information)
- Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance)
While the ABA hasn’t issued a formal opinion specific to generative AI, it passed Resolution 604 in August 2023 urging lawyers to:
“develop an understanding of the benefits and risks associated with relevant technology, including Artificial Intelligence.”
The ABA has also reminded lawyers that competence includes technological literacy, particularly when the technology could affect how you serve or protect your clients.
Although not binding, the ABA’s guidance influences many state bars—and signals that AI literacy is becoming part of legal ethics.
Several state bars have started addressing the ethics of AI more directly. Here are a few key updates from across the U.S.:
The State Bar of California has formed a Working Group on AI and the Legal Profession and issued draft guidance highlighting the duty to protect client confidences when using AI tools, the duty of competence to review AI output before relying on it, and lawyers' supervisory responsibility for AI-assisted work.
The New York State Bar Association released a comprehensive AI Task Force Report in 2024, urging lawyers to approach AI with the same core duties in mind: competence, confidentiality, and supervision of the technology they use.
Other state bars also have active ethics committees reviewing AI. Common themes include protecting client confidentiality, verifying AI output before it reaches a client or court, and supervising how AI is used in client work.
Here are the key risk zones lawyers need to navigate when integrating AI into their practice:
1. Confidentiality. Inputting sensitive client data into tools like ChatGPT or Gemini could violate Rule 1.6, especially if the AI retains or learns from that data.

2. Competence. AI can suggest answers, but it can't think like a lawyer. Delegating too much, especially without review, can compromise your duty of competence.

3. Disclosure. In some situations (especially litigation), lawyers may need to disclose that AI was used in preparing materials or making decisions.

4. Bias. AI tools trained on biased data can amplify discrimination, especially in areas like criminal sentencing or employment law.
AI isn't off-limits, but you must use it wisely and transparently. Vet each tool's data-handling and retention practices before you adopt it, keep confidential client information out of consumer AI tools, review and verify every AI-generated work product before it leaves your office, and follow your state bar's evolving guidance.
Lawyers don't get a pass on ethics just because technology is changing fast. If anything, the duty to understand and supervise technology has never been more important.
AI offers powerful new capabilities—but it also introduces real risks that can compromise your license and your clients’ trust.
Your clients depend on your judgment. AI can assist—but ethics must lead.