The AI revolution is here, and it feels like it’s being used for just about everything. People are going to AI for legal advice, writing assistance, image creation, and even therapy.
Some are excited about this revolution, while many others are deeply (and understandably) concerned. However you feel, there is no denying that AI is having a massive impact on daily life. Seemingly overnight, it went from something your nerdy tech friend experimented with to something just about everyone has at least tried.
ChatGPT (then powered by GPT-3.5) launched on November 30, 2022. It reached 1 million users within five days and 100 million within two months. As of this writing, it has roughly 700-800 million weekly users.
And of course, ChatGPT is just one of the many platforms out there.
While it’s easy to see AI’s usefulness, there are also very valid concerns regarding ethics, environmental sustainability, and mental wellbeing. Even in instances where it may seem helpful, it can actually be causing more harm than good.
Today, we’d like to discuss why you should be careful using AI for legal advice. But first, let’s touch on the broader concerns with AI in general.
The AI Problem
When discussing any topic related to AI, it’s important to touch on some of the primary concerns. While this is hardly an exhaustive list, here are a few of the main issues:
Job Displacement and Economic Impact
One of the most pressing concerns with the rise of AI is how it could reshape the workforce. Automation powered by AI is already replacing jobs that involve repetitive tasks, such as data entry, customer service, and even aspects of legal research or medical diagnostics. While this can theoretically lead to increased efficiency and lower operational costs, it can also leave thousands (if not millions) of people without a clear path to new employment.
It’s not just entry-level jobs that are disappearing. White-collar jobs that were once considered safe are being outsourced to artificial intelligence, including:
- Paralegals and legal assistants
- Financial analysts
- Accountants and bookkeepers
- Insurance underwriters
- Market research analysts
- Technical support specialists
- Report writers and editors
- Human resources coordinators
- Medical coders and billers
- Loan officers
- And more
This technological shift may require a significant reevaluation of how we train and educate people for sustainable careers. It may also require certain legal protections to be put in place.
Without proper planning and support, entire job sectors could face disruption faster than society is able to respond.
It’s also worth noting that these replacements have often resulted in worse performance, lower-quality products, increased errors, and lost time and revenue, ultimately defeating the purpose of using them in the first place.
Privacy and Data Security
AI depends heavily on vast amounts of data, much of which comes from users’ personal behavior, preferences, and communications. This creates major privacy concerns, especially when AI tools are embedded in everything from smartphones to home assistants. People may not fully realize how much of their data is being collected or how it’s being used.
And yet, people are sharing some of their most personal and vulnerable information.
This data isn’t just susceptible to leaks. Unless laws are enacted to protect it, it could also be disclosed as part of a court investigation.
Environmental Impact of AI
Behind every AI tool is a massive infrastructure of servers, data centers, and energy-intensive computing systems. Training advanced AI models, especially large language models like ChatGPT, requires immense computational power. These training processes can consume hundreds of megawatt-hours of electricity, resulting in a significant carbon footprint. And even after training, running AI services at scale—serving millions of users around the clock—demands continuous energy input and efficient cooling systems.
AI data centers are already depleting water sources in multiple areas across the US.
They’re also driving up energy bills for everyday citizens.
And they’re just kind of unsightly, taking up massive amounts of space.
As major companies race toward artificial general intelligence (AGI), the next level of AI capabilities, these problems are only compounding.
Misinformation and Trust in Reality
AI was supposed to improve access to accurate, reliable information, making us smarter and more capable as a society. Instead, it seems to be having the opposite effect.
AI is great at generating believable content. Unfortunately, that content is often not true. Sometimes it’s pulled from inaccurate (or satirical) sources. Sometimes the AI simply makes up information to satisfy your specific query. This is known as AI hallucination.

All of this contributes to a world where people increasingly don’t know what’s real as misinformation spreads at an uncontrollable rate.
It is for this reason especially that you should be hesitant to use AI for legal advice. It seems so simple and convenient to ask ChatGPT questions like “what do I do after a car accident” or “how can I scare insurance adjusters.” However, you can quickly end up with incorrect and even damaging information.
Let’s take a look.
The Problems with Using AI for Legal Advice
Some of the potential problems with using AI for legal advice are more obvious than others. We’ve already touched on inaccurate or made-up information. However, there are more specific issues that may not become apparent until it’s too late.
Different States, Different Laws
When you ask AI for legal advice, it may pull information that is technically true in one region but not in yours. In the US specifically, laws can vary quite a bit from state to state. The very scope of law can shift drastically depending on where you live. For example, New York has 300,000 restrictions in its administrative code, while a state like Arizona has only 65,000.
A platform like ChatGPT may provide you with info from California. Meanwhile, you’re dealing with a car accident in Ohio. Even if you specifically ask for laws from your state, it can sometimes get confused and mix up critical information.
Outdated Information
Our legal system is known to change. While this can often be a good thing, it can leave inaccurate information behind. AI isn’t always good at checking dates and relevance, so it may feed you outdated information that no longer applies to your situation.
Confirmation Bias
Platforms like ChatGPT love to say yes and reinforce our preconceived notions. While that may sound nice, it’s not so great when you’re wrong. When it comes to legal matters, it’s ultimately about what the law says, not how you feel.
Strategy and Reasoning
Despite potential inaccuracies and outdated info, AI is generally good at finding and summarizing information. The problem is, it’s only providing you with info that already exists somewhere else. It can’t actually think. It’s not being strategic about your specific situation, and it can’t reason through your position.
While AI may provide the appearance of conversation, it’s ultimately a limited, pre-trained system. That illusion is part of what produces confirmation bias and hallucination. If you rely on AI for legal advice, it can leave you with a bad strategy that hurts your case.
Not Admissible in Court
Legal information and advice provided by AI is only useful if it’s based on actual existing law in your region. When standing before a judge, you cannot say, “Well, ChatGPT said…”
It won’t hold up in a court of law.
The Illusion of Progress
In many legal situations, such as an injury or accident case, it’s important that you act quickly and take the appropriate steps. Talking back and forth with a chatbot can create the sense that you’re making progress and building your case. However, that sense is mostly an illusion.
The action you need to take is in the real world, and if you wait too long, you could suffer the consequences.

What Can You Use AI For?
We know we’re being a little hard on AI, so let’s turn things around for a second. Are there things you could safely use AI for? Sure. Let’s say you’ve been in a car accident. The first thing you should always do is take care of your physical needs. Once you are safe and stable, however, you may be uncertain of what to do next.
It can be difficult to think clearly after a car accident, after all.
While it’s best to speak to an experienced professional, a platform like ChatGPT can help get the gears turning in your head, especially if you’re sitting in a hospital waiting room or at a repair shop.
Ask some basic questions. Maybe request a list of action items you should immediately take. From there, however, we highly recommend contacting an experienced attorney, and we’re not the only ones who think so.
Why You Should Speak to an Attorney Instead of ChatGPT
As an injury and accident attorney office in Hamilton, Ohio, it’s understandable that we would recommend speaking to a lawyer. Rather than take our word for it, we decided to see what ChatGPT had to say about it.
Here’s our conversation:

The question asked at the end is actually a great example of what you can use AI for. If you’ve scheduled a consultation, ChatGPT can help put together some questions to go over. From there, your attorney should be able to handle the rest.
If you’ve suffered a personal injury or lost a loved one to wrongful death in the Greater Cincinnati area, The Richards Firm is here for you. With The Richards Firm, you will always be treated as an individual with unique goals and needs. Your initial consultation is free, so you truly have nothing to lose. We’re here to fill the gaps AI leaves when it comes to legal advice.
Give us a call at 513-461-0084 or visit our contact page here for a free consultation.