AI and Attorney Ethics: What Bar Associations Are Saying in 2025

As AI tools rapidly enter legal practice, bar associations are weighing in on the ethical implications. This article breaks down what the ABA and state bars are saying about confidentiality, competence, and responsible use—so lawyers can stay compliant while leveraging AI.

Artificial intelligence is here—and it’s not waiting for legal ethics to catch up.
Lawyers across every practice area are now experimenting with AI tools to streamline research, draft documents, and even generate courtroom arguments. But as the technology accelerates, one question keeps coming up:

Is using AI in legal practice ethical?

Bar associations across the country—and the American Bar Association (ABA) itself—have started weighing in. If you’re a lawyer thinking about using AI tools like ChatGPT, Harvey, Casetext CoCounsel, or others, it’s time to understand the ethical landscape.

This article breaks down what’s happening, what bar associations are saying, and how you can stay compliant while leveraging AI in your practice.

Why AI Raises New Ethical Questions for Lawyers

At first glance, AI may feel like just another legal tech tool. But under the hood, it works very differently—and that’s exactly what concerns bar associations.

Unlike traditional software, generative AI systems:

  • Don’t explain their reasoning

  • Can hallucinate facts or case law

  • Aren’t transparent about their data sources

  • Might store or reuse your input, raising client confidentiality issues

These traits touch multiple core obligations in the ABA Model Rules of Professional Conduct, including:

  • Rule 1.1 – Competence: Lawyers must understand the risks and benefits of the technology they use.

  • Rule 1.6 – Confidentiality: Client information must be protected unless the client gives informed consent or another recognized exception applies.

  • Rule 5.3 – Responsibilities Regarding Nonlawyer Assistance: If AI helps with work you delegate, you’re responsible for it.

What the American Bar Association (ABA) Is Saying

The ABA has now addressed generative AI directly: Formal Opinion 512, issued in July 2024, explains how the Model Rules apply to lawyers’ use of generative AI tools. Before that, Resolution 604 (2023) urged lawyers to:

“develop an understanding of the benefits and risks associated with relevant technology, including Artificial Intelligence.”

The ABA has also long reminded lawyers—through Comment 8 to Model Rule 1.1—that competence includes technological literacy, particularly when the technology could affect how you serve or protect your clients.

Although not binding on state regulators, the ABA’s guidance influences many state bars—and signals that AI literacy is becoming part of legal ethics.

What State Bar Associations Are Recommending

Several state bars have started addressing the ethics of AI more directly. Here are a few key updates from across the U.S.:

California

The State Bar of California has approved practical guidance on the use of generative AI in the practice of law, highlighting:

  • Risks related to client confidentiality when using public AI tools

  • The danger of “unauthorized practice of law” if clients rely on AI-generated legal advice

  • The need for “human supervision” of any AI-generated output

New York

The New York State Bar Association released a comprehensive AI Task Force Report in 2024, urging lawyers to:

  • Use AI only in areas they understand well enough to supervise

  • Disclose AI use when relevant to courts or clients

  • Stay alert to AI-generated bias in legal analysis

Florida, Illinois, and Texas

Each of these states has active ethics committees reviewing AI. Common themes include:

  • Ensuring AI-generated legal arguments don’t violate rules of candor

  • Maintaining professional judgment when delegating tasks to AI

  • Educating lawyers on the limitations of AI before allowing its use in case work

Top Ethical Risks Lawyers Face When Using AI

Here are the key risk zones lawyers need to navigate when integrating AI into their practice:

1. Confidentiality and Privacy

Inputting sensitive client data into tools like ChatGPT or Gemini could violate Rule 1.6—especially if the AI retains or learns from that data.

2. Over-Reliance on AI

AI can suggest answers, but it can’t think like a lawyer. Delegating too much—especially without review—can compromise your duty of competence.

3. Disclosure and Transparency

In some situations (especially litigation), lawyers may need to disclose that AI was used in preparing materials or making decisions.

4. Bias and Fairness

AI tools trained on biased data can amplify discrimination, especially in areas like criminal sentencing or employment law.

How to Use AI Ethically in Your Legal Practice

AI isn’t off-limits—but you must use it wisely and transparently. Here’s how:

  • Vet Your Tools Carefully
    Choose AI providers built for legal use, with clear data policies and attorney-focused safeguards.

  • Avoid Sharing Confidential Information with Public AI Models
    If it’s client data, don’t paste it into a chatbot—unless you're using an enterprise version with explicit protections.

  • Keep the Human in the Loop
    Always review and take full responsibility for anything AI generates.

  • Track Your Usage
    Keep a log of AI tools used in casework, especially if you're drafting court documents or providing legal opinions.

  • Stay Informed and Trained
    Consider taking an AI ethics CLE or joining bar association task forces to stay ahead.

Conclusion: Ethics Is Not Optional—Even in the Age of AI

Lawyers don’t get a pass on ethics just because technology is changing fast. If anything, your duty to understand and supervise technology has never been more important.

AI offers powerful new capabilities—but it also introduces real risks that can compromise your license and your clients’ trust.

Your clients depend on your judgment. AI can assist—but ethics must lead.