Highlights from Jeff Howell’s CLE Presentation for the National Academy of Continuing Legal Education
By Jeff Howell, Founder of Lex Wire Media & AI Law Analyst
Artificial intelligence isn’t coming to the legal profession; it’s already here. And it’s not just changing how lawyers work; it’s raising new ethical questions that most of us never imagined having to answer.
From document review and research to client intake and predictive analytics, AI is reshaping legal workflows across the country. But how do we use these tools without running afoul of confidentiality rules, malpractice exposure, or professional responsibility requirements?
This article draws from Jeff Howell’s in-depth CLE presentation for the National Academy of Continuing Legal Education, titled “Ethics in the Age of AI.” It distills the most important insights into a practical guide you can use to assess your own AI readiness and compliance.
Whether you’re experimenting with tools like ChatGPT or evaluating enterprise legal platforms, this summary will help you approach AI in your practice with clarity, caution, and confidence.
Download the presentation slides “Ethics in the Age of AI”
How AI Is Transforming Legal Work
According to the ABA’s 2024 Technology Survey, over 68% of law firms are now using AI in some capacity, including tools like:
- ChatGPT and Claude for legal drafting
- Harvey AI and Casetext for research
- Relativity and Logikcull for e-discovery
- Kira Systems and Luminance for contract review
What used to take days can now be done in hours, sometimes minutes. But speed without safeguards can create serious risk. That’s why ethical and professional responsibility frameworks must evolve alongside these tools.
Five Core Risks to Know (and Mitigate)
Jeff outlined five key risk categories attorneys must address when integrating AI into legal work:
1. Bias in Machine Learning Models
AI often learns from biased historical data—leading to:
- Discriminatory sentencing predictions
- Unequal treatment based on demographics
- Inaccurate assessments in criminal or civil risk models
What to Do: Test outputs across diverse hypotheticals. Document your review procedures. Train staff to recognize biased patterns.
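One lightweight way to put “test outputs across diverse hypotheticals” into practice is to run the same fact pattern with only a single demographic detail varied and compare the tool’s answers side by side. The Python sketch below is illustrative only: it builds the matched prompts, and the loop simply prints them, because the actual submission step depends on whichever AI platform your firm has approved.

```python
from itertools import product

# Base fact pattern with slots we vary to probe for biased outputs.
BASE_PROMPT = (
    "Assess the likelihood of pretrial release for a {age}-year-old "
    "{descriptor}, a first-time offender with stable employment, "
    "charged with a non-violent property offense."
)

AGES = [22, 45]
DESCRIPTORS = [
    "defendant from an affluent suburb",
    "defendant from a low-income neighborhood",
]

def build_matched_prompts() -> list[str]:
    """Return prompts that differ only in the attributes under test."""
    return [
        BASE_PROMPT.format(age=age, descriptor=descriptor)
        for age, descriptor in product(AGES, DESCRIPTORS)
    ]

if __name__ == "__main__":
    for prompt in build_matched_prompts():
        # Submit each prompt to the firm's approved AI tool, record the
        # answers, and have a reviewer compare them side by side; divergence
        # driven solely by the varied attribute is a red flag to document.
        print(prompt)
```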
2. Hallucinations and Inaccurate Citations
Generative AI tools can confidently produce legal citations that don’t exist. In a notable New York case, attorneys were sanctioned for submitting a brief filled with fake ChatGPT-generated case law.
What to Do: Always verify every citation using trusted legal databases. Assume AI will get some things wrong until proven otherwise.
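As a rough illustration of what “verify every citation” can look like in a workflow, the sketch below uses a deliberately naive regular expression to pull reporter-style citations out of a draft and print a manual verification checklist. It is an assumption-laden example, not a verification tool: it will miss statutes and parallel citations, it over-flags anything that looks like a citation, and nothing here checks a database. A lawyer still confirms each entry in Westlaw, Lexis, or another trusted source.

```python
import re

# Naive pattern for reporter-style citations such as "123 F.3d 456" or "598 U.S. 471".
CITATION_PATTERN = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z0-9.\s]{0,12}?\s+\d{1,4}\b")

def citation_checklist(draft_text: str) -> list[str]:
    """Return a de-duplicated list of citation-looking strings for manual review."""
    found: list[str] = []
    for match in CITATION_PATTERN.finditer(draft_text):
        cite = " ".join(match.group().split())
        if cite not in found:
            found.append(cite)
    return found

if __name__ == "__main__":
    draft = "Plaintiff relies on Smith v. Jones, 123 F.3d 456, and 598 U.S. 471."
    for cite in citation_checklist(draft):
        print(f"[ ] confirm in a trusted legal database: {cite}")
```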
3. Confidentiality Breaches
Public AI platforms often store or use submitted inputs to train their models. If you’re uploading sensitive case files or client data, you could be violating Rule 1.6 of the ABA Model Rules, even unintentionally.
What to Do: Use legal-specific platforms with end-to-end encryption and clear data retention policies. Never share identifiable client data with general-purpose tools like free versions of ChatGPT.
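As a minimal sketch of the kind of pre-submission check this implies, the Python below flags obvious client identifiers before a draft prompt leaves the firm. The client names and the Social Security number pattern are placeholders; a real control would draw on the firm’s matter database and a proper data-loss-prevention review, not a hard-coded list.

```python
import re

# Placeholder list; a real check would pull names from the firm's matter database.
KNOWN_CLIENT_NAMES = ["Acme Holdings", "Jane Q. Client"]

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def flag_identifiers(prompt: str) -> list[str]:
    """Return reasons this prompt should not be sent to a public AI tool."""
    flags = [
        f"client name present: {name}"
        for name in KNOWN_CLIENT_NAMES
        if name in prompt
    ]
    if SSN_PATTERN.search(prompt):
        flags.append("possible Social Security number present")
    return flags

if __name__ == "__main__":
    draft_prompt = "Summarize Acme Holdings' exposure; claimant SSN 123-45-6789."
    for reason in flag_identifiers(draft_prompt):
        print(f"BLOCK: {reason}")
```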
4. Overreliance on AI Recommendations
AI can generate convincing content, but it lacks judgment, context, and professional nuance. Relying on AI alone can result in:
- Poor legal strategy
- Incomplete analysis
- Unethical shortcuts
What to Do: Treat AI outputs like a research assistant’s rough draft, not final work product. Keep humans in the loop, always.
5. Transparency and Disclosure Challenges
As courts and state bars begin requiring disclosure of AI-generated content, questions arise: Do you need to tell your client or the court when you use AI?
What to Do: Err on the side of transparency. Include disclosures in engagement letters, and create internal guidelines for court-facing AI declarations.
Ethical Rules That Still Apply in the AI Era
Jeff’s CLE presentation emphasized that while technology is evolving, your ethical obligations as a lawyer have not changed; they’ve simply expanded in scope.
Rule 1.1 – Competence
You must understand the tools you use. That includes:
- How the AI system was trained
- Its limitations and strengths
- When human verification is required
“Don’t hit ‘generate’ and walk away,” said Jeff Howell, Esq. “You’re still the lawyer. AI is just a very persuasive intern, with no law degree.”
Rule 1.6 – Confidentiality
Anything you input into a cloud-based system must be secured. Review privacy policies and ensure your client’s data won’t be stored, shared, or used for training.
Rule 5.1 and 5.3 – Supervision
If junior attorneys or staff use AI tools, you are still ethically responsible for their conduct and for the tool’s output.
Best Practices:
- Provide role-specific training
- Document verification steps
- Monitor for ethical compliance
Rule 1.4 – Communication
Clients deserve to understand how their legal matters are being handled. If AI plays a major role in strategy, document review, or predictions, they should be informed.
Rule 1.5 – Reasonable Fees
If AI significantly reduces the time it takes to perform legal work, your billing model should reflect that. Consider flat fees or value-based billing models.
Building Ethical AI Systems in Your Firm
Jeff shared several real-world recommendations for attorneys who want to integrate AI while staying compliant and competitive.
Build an AI Use Policy
Include clear guidelines on the following (a simple sketch of what such a policy can look like follows the list):
- Which tools are approved
- What verification is required
- Who oversees compliance
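To make those three guidelines concrete, here is a minimal sketch of how a written policy might be captured in machine-readable form. The tool names, fields, and approvals are illustrative assumptions, not recommendations.

```python
from dataclasses import dataclass

@dataclass
class ApprovedTool:
    name: str
    permitted_uses: list[str]    # empty list means not approved for client work
    verification_required: str   # what a human must check before output is used
    compliance_owner: str        # who audits usage and signs off on exceptions

# Illustrative entries only; actual approvals are a decision for firm leadership.
AI_USE_POLICY = [
    ApprovedTool(
        name="Firm-licensed research platform",
        permitted_uses=["legal research", "first-pass drafting"],
        verification_required="attorney confirms every citation and quotation",
        compliance_owner="General Counsel",
    ),
    ApprovedTool(
        name="Public chatbot (free tier)",
        permitted_uses=[],
        verification_required="n/a",
        compliance_owner="General Counsel",
    ),
]

def is_permitted(tool_name: str, use: str) -> bool:
    """Check whether a given use of a tool is covered by the written policy."""
    return any(t.name == tool_name and use in t.permitted_uses for t in AI_USE_POLICY)
```

A check like `is_permitted` could then back an intake form or intranet page, so approval decisions live in one place rather than in individual attorneys’ heads.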
Train Your Team
From paralegals to partners, everyone should understand:
- The risks of hallucinated content
- How to verify AI output
- When to escalate to a human lawyer
Track and Audit AI Usage
Create internal logs that record the following (a minimal logging example follows the list):
- What AI tools were used
- For which task
- By whom
- How the output was verified
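A minimal sketch of such a log, assuming a simple JSON-lines file (a document management system or database would work equally well), might look like this:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_usage_log.jsonl")  # illustrative location only

def record_ai_use(tool: str, task: str, user: str, verification: str) -> None:
    """Append one audit entry capturing tool, task, user, and how output was verified."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "task": task,
        "user": user,
        "verification": verification,
    }
    with LOG_FILE.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    record_ai_use(
        tool="Firm-licensed research platform",
        task="first-pass issue outline for a summary judgment motion",
        user="associate_jdoe",
        verification="supervising partner checked all cited authority",
    )
```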
Classify Client Data
Segment matters into tiers based on sensitivity, for example (a short example of enforcing these tiers in software follows the list):
- Public: safe for general AI tools
- Confidential: legal-grade AI only
- Privileged: no AI interaction allowed
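Here is a small sketch of how those tiers could gate tool use in software. The tier names come from the list above, while the tool categories and the mapping are illustrative assumptions.

```python
from enum import Enum

class DataTier(Enum):
    PUBLIC = 0         # safe for general AI tools
    CONFIDENTIAL = 1   # legal-grade AI only
    PRIVILEGED = 2     # no AI interaction allowed

# Illustrative mapping: the most sensitive tier each tool category may touch.
TOOL_CEILING = {
    "general_ai": DataTier.PUBLIC,
    "legal_grade_ai": DataTier.CONFIDENTIAL,
}

def may_use_tool(matter_tier: DataTier, tool_category: str) -> bool:
    """Allow a tool only if the matter's tier is at or below the tool's ceiling."""
    ceiling = TOOL_CEILING.get(tool_category)
    if ceiling is None:
        return False  # unknown tools are not approved by default
    return matter_tier.value <= ceiling.value

# Privileged material never reaches any AI tool; public material may.
assert not may_use_tool(DataTier.PRIVILEGED, "legal_grade_ai")
assert may_use_tool(DataTier.PUBLIC, "general_ai")
```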
Case Studies: AI in Action (Both Good and Bad)
Jeff’s presentation also included real-world examples to show how this plays out in practice:
Hallucinated Brief
A cautionary tale:
Two attorneys submitted a ChatGPT-generated brief citing six fake cases. They were sanctioned, forced to explain themselves in court, and required to notify the real judges whose names appeared on the fabricated opinions. Lesson? Always verify.
E-Discovery Done Right
A success story:
A firm used AI to sort 2 million discovery documents. With strong human oversight, they reduced costs by 60%, met court deadlines, and avoided any disclosure of privileged materials.
Contract Review with Human Oversight
A time-saving win:
One firm used AI to analyze over 200 acquisition-related contracts. They completed the project in one week instead of four, cut costs by half, and kept the client fully informed throughout.
Regulatory Trends to Watch
The regulatory patchwork is evolving quickly. Key developments include:
- ABA Formal Opinion 498 – Recommends due diligence for cloud and AI tools
- California & Florida Bars – Offering explicit AI guidance
- Federal Judges – Some now require AI use to be disclosed in filings
- EU AI Act – Could impact firms handling international matters
“The trend is moving toward more disclosure and more accountability, not less,” said Jeff Howell, Esq., during his CLE presentation for the National Academy of Continuing Legal Education.
Transparency = Trust
According to Jeff, one of the best things attorneys can do is frame AI usage as a value-add, not a secret. Be proactive. Explain how AI helps clients:
- Save money
- Get faster turnaround
- Receive more comprehensive analysis
And above all, make it clear that your human legal judgment is always at the center of the process.
Final Takeaways for Lawyers
As Jeff emphasized in the CLE, your ethical obligations are not going away. If anything, they’re becoming more critical. Here’s how to move forward with confidence:
Three Rules for AI in Your Practice
- Know your tools. Understand how they work and when they don’t.
- Verify everything. AI is fast, but not flawless.
- Be transparent. Clients, courts, and regulators are watching.
A 60-Day AI Ethics Challenge
Jeff closed his presentation with a challenge for all attendees. Ask yourself:
- Have you audited your AI tools for compliance and security?
- Do you have clear internal policies around AI usage?
- Are your clients informed?
- Is your team trained?
If not, now is the time.
Want Help Structuring AI Visibility and Compliance?
Lex Wire helps law firms stay visible, credible, and compliant in an AI-driven world. We support legal professionals with structured content, AI visibility strategy, CLE partnerships, and more.
👉 Explore our AI Ethics and Visibility Services
Jeff Howell is a licensed attorney in Texas (State Bar #24104790) and California (State Bar #239410) and founder of Lex Wire Journal. He advises law firms on AI implementation, Answer Engine Optimization, and legal technology integration, with a focus on AI ethical compliance and internal AI governance. Jeff specializes in helping legal professionals navigate practical AI adoption while maintaining compliance and professional standards.