If AI Goes Wrong, Who’s Liable?
A 2025 Guide for UK Small Businesses
Artificial intelligence (AI) is everywhere now. Many small businesses are using tools like chatbots, pricing engines, and generative AI writers.
“Reported use of AI increased in 2024. In the latest survey, 78 percent of respondents say their organizations use AI in at least one business function, up from 72 percent in early 2024…”
“Figures show that small firms are adopting AI at a rapid pace, with one in five (20%) already using it, and 11 in 20 (55%) recognising its potential benefits. Similarly, three in five (60%) aiming for rapid growth plan to use it – but these figures are likely to grow rapidly as the technology gets smarter.”
These can save time and boost productivity, but what happens if something goes wrong?
Whether it’s an AI tool that gives false information, breaks copyright rules, or leaks sensitive data, one key question remains:
If AI gives bad advice, who’s responsible?
You are.
Even if you didn’t build the tool, if you use AI in your business you’re still responsible for what it does. Whether it publishes inaccurate content, infringes copyright, or leaks personal data, the risk falls on you.
This is also how many UK insurers now see it.
If you’re using AI as part of your service, they expect you to be in control of it, just like any other business process.
What can go wrong with AI?
Here are just a few examples:
- A chatbot gives financial advice that turns out to be wrong
- AI-generated marketing content includes false claims or copied material
- A customer service bot shares private data
- Automated pricing tools make unfair or biased decisions
If any of this happens, and a customer suffers a loss, you could face complaints, legal action, or regulatory scrutiny. Even if the error came from software, your business is on the hook.
According to Markel, professionals using AI tools to generate content or advice are increasingly exposed to risk, especially if those tools create misleading or inaccurate information.
For instance, consultants or advisors using AI to generate advice could face legal action if the tool’s output turns out to be biased, misleading, or in conflict with client interests.
“Professionals who rely on or implement these systems could face liability.”
Why it matters: If AI is part of your service, insurers will expect oversight. That means reviewing outputs, catching errors, and disclosing how you use the tools. If you don’t, you risk being caught short when something goes wrong.
What insurers expect from SMEs in 2025
Insurers are now looking more closely at how small businesses use AI. If you use it to create content, automate decisions, or handle customer data, they may ask:
- What AI tools are you using and why?
- Are you checking the output?
- Do humans review decisions before they’re sent to clients?
- Are you keeping a record of what’s being generated?
“As companies rely more and more on GenAI tools for process automation, risk managers need to evaluate the risks that emerge for their organisation by relying on AI.”
If you can’t answer these questions, you might struggle to get cover. Or worse, you might find you’re not protected when you need it.
How AI could affect your professional indemnity (PI) insurance
In simple terms (because all policies are different), professional indemnity (PI) insurance covers you when a client suffers financial harm as a result of your wrongful advice and you can be held legally liable for that loss.
But many policies now include new questions or exclusions if AI is involved.
If you use AI to:
- Generate reports or analysis
- Automate advice or decisions
- Screen data or customer information
…then your insurer may ask for more detail. They want to know you’re not blindly trusting the tool: that you’re checking it, recording it, and staying in control.
If you don’t disclose your AI use when arranging PI cover, you may find yourself without protection when it counts.
What you should do now
Here are six simple steps to stay protected:
- Make a list of AI tools you use and what you use them for
- Check outputs before anything goes to a client or customer
- Keep a log of prompts, responses, and changes made (a simple sketch of what this could look like follows this list)
- Train your team to use AI responsibly
- Review your insurance policies for any AI-related exclusions
- Talk to your broker about what your insurer needs to know
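To make the logging step concrete, here is a minimal sketch in Python of what a prompt-and-response audit log could look like. The file name, field names, and the log_ai_interaction helper are illustrative assumptions, not a prescribed format or any insurer’s requirement; adapt them to the tools you actually use.

```python
# A minimal, illustrative audit log for AI use in a small business.
# Assumptions: records are appended as JSON Lines to "ai_audit_log.jsonl";
# all field names here are hypothetical examples, not a required schema.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_audit_log.jsonl")  # hypothetical location; choose your own


def log_ai_interaction(tool: str, prompt: str, response: str,
                       reviewed_by: str, approved: bool,
                       notes: str = "") -> None:
    """Append one prompt/response pair, plus its human review, to the log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                # which AI tool produced the output
        "prompt": prompt,            # what you asked it
        "response": response,        # what it returned
        "reviewed_by": reviewed_by,  # the person who checked the output
        "approved": approved,        # whether it was cleared to go to a client
        "notes": notes,              # edits made, or reasons for rejection
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


# Example: record a chatbot draft that a colleague checked and amended.
log_ai_interaction(
    tool="marketing-copy assistant",
    prompt="Draft a product description for our new service",
    response="(AI-generated draft text)",
    reviewed_by="J. Smith",
    approved=True,
    notes="Removed an unverifiable claim before sending",
)
```

You don’t need code for this at all: a spreadsheet with the same columns would do. The point is that every AI output that reaches a customer has a recorded reviewer and an approval decision you can show your insurer.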
The bottom line
AI is a powerful tool, but it doesn’t remove your responsibility. If it’s part of your business, then insurers will treat it just like any other part of your service.
You don’t need to be an expert in AI. But you do need to:
- Understand how you use it
- Supervise it
- Disclose it
If you do that, insurance can still protect you.
Need help checking your cover?
📞 Call FSB Insurance Service on 020 3883 7976
Need to skill up on AI?
You don’t need to be a tech expert to use AI safely, but you do need to understand what the tools are doing and where the risks lie. If you want to build your confidence, Google offers a free, beginner-friendly course:
Learn how AI works, what it can and can’t do, and how to use it responsibly in your business, with no coding required.
You can complete it online, in your own time, and earn a certificate at the end. It’s a great way to skill up, especially if you want to stay compliant and avoid liability risks.