By Dr. James L. Norrie | September 24, 2025
Agentic AI is no longer a futuristic abstraction. These systems do not simply respond to human commands; they can negotiate, schedule, trade, and purchase — acting as your authorized agent with real-world consequences. Once you cross that threshold, one urgent question arises: when your AI acts, who is liable?
From Human Intent to Machine Action
Common law assumes a human actor at the center of accountability. Intent, consent, and knowledge are its cornerstones. But what happens when AI takes an action you did not foresee — and perhaps cannot fully explain? Can you still claim ignorance? Or does broad instruction equal consent?
Europe
The EU’s AI Act (2024) begins to tackle this dilemma. Updates to the Product Liability Directive extend strict liability to software, including AI, and the proposed AI Liability Directive lowers barriers for victims. Financial regulators also insist that executives remain accountable when AI drives decisions.
United States
The U.S. response remains fragmented. Proposals range from safety testing to regulatory freezes. Without clear rules, accountability risks dissolving into so-called “moral crumple zones,” where blame diffuses until no one is responsible.
A Warning We Cannot Ignore
Big Tech thrives in ambiguity, where liability evaporates in algorithmic fog. Will we let them design systems of power without matching systems of responsibility?
Beyond Courtrooms and Corporations
The stakes go beyond boardrooms. If harms caused by agentic AI cannot be traced to a responsible party, public trust will wither and victims will lack remedy. Repeating the “neutral platform” mistake of the early internet would be catastrophic — because AI does not just mediate; it acts.
What You Can Do
- Demand disclosure of how AI makes decisions.
- Support policymakers who enforce transparency and liability.
- Question whether delegating choices to machines erodes your own agency.
- Push employers to adopt standards that prioritize human oversight (a minimal sketch of such a guardrail follows this list).
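To make "human oversight" concrete, here is a minimal sketch of what an approval gate around an agentic action could look like. Everything in it is a hypothetical illustration, not a real product or standard: the `ProposedAction` class, the approval threshold, and the approver email are assumptions chosen only to show the idea that a named person, not the algorithm, signs off on consequential actions.

```python
# A minimal sketch of a human-in-the-loop guardrail for agentic AI actions.
# All names here (ProposedAction, APPROVAL_THRESHOLD, the approver address)
# are illustrative assumptions, not part of any specific product or law.

from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ProposedAction:
    """An action the agent wants to take on the user's behalf."""
    description: str       # e.g. "Purchase 10 software licenses"
    category: str          # e.g. "purchase", "schedule", "trade"
    estimated_cost: float  # monetary exposure in the user's currency


# Hypothetical policy: any trade or contract, or any spend above a threshold,
# must be explicitly approved by a named human before execution.
APPROVAL_THRESHOLD = 100.00
ALWAYS_REVIEW = {"trade", "contract"}


def requires_approval(action: ProposedAction) -> bool:
    """Decide whether a human must sign off before the agent proceeds."""
    return action.category in ALWAYS_REVIEW or action.estimated_cost > APPROVAL_THRESHOLD


def execute_with_oversight(action: ProposedAction, approver: str) -> dict:
    """Run the action only after any required human approval, logging who
    approved it so accountability stays with a person."""
    record = {
        "action": action.description,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "approved_by": None,
    }
    if requires_approval(action):
        answer = input(f"Approve '{action.description}' "
                       f"(${action.estimated_cost:.2f})? [y/N] ")
        if answer.strip().lower() != "y":
            record["status"] = "rejected_by_human"
            return record
        record["approved_by"] = approver
    # Placeholder for the real side effect (purchase, booking, trade, ...).
    record["status"] = "executed"
    return record


if __name__ == "__main__":
    action = ProposedAction("Purchase 10 software licenses", "purchase", 499.00)
    print(execute_with_oversight(action, approver="jane.doe@example.com"))
```

The design point is simply that the approval record names a human: when the agent acts, there is a traceable party rather than an accountability gap.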
Concerned about AI liability? Put guardrails around agentic AI and keep humans accountable.
Talk to us about SAFER AI