
By Dr. James L. Norrie | September 24, 2025

Agentic AI is no longer a futuristic abstraction. These systems do not simply respond to human commands; they can negotiate, schedule, trade, and purchase — acting as your authorized agent with real-world consequences. Once you cross that threshold, one urgent question arises: when your AI acts, who is liable?

From Human Intent to Machine Action

Common law assumes a human actor at the center of accountability. Intent, consent, and knowledge are its cornerstones. But what happens when AI takes an action you did not foresee — and perhaps cannot fully explain? Can you still claim ignorance? Or does broad instruction equal consent?

Europe

The EU’s AI Act (2024) begins to tackle this dilemma. Updates to the Product Liability Directive extend strict liability to software, including AI, and the proposed AI Liability Directive lowers barriers for victims. Financial regulators also insist that executives remain accountable when AI drives decisions.

United States

The U.S. response remains fragmented. Proposals range from safety testing to regulatory freezes. Without clear rules, accountability risks dissolving into so-called “moral crumple zones,” where blame diffuses until no one is responsible.

A Warning We Cannot Ignore

Big Tech thrives in ambiguity, where liability evaporates in algorithmic fog. Will we let them design systems of power without matching systems of responsibility?

Beyond Courtrooms and Corporations

The stakes go beyond boardrooms. If harms caused by agentic AI cannot be traced to a responsible party, public trust will wither and victims will lack remedy. Repeating the “neutral platform” mistake of the early internet would be catastrophic — because AI does not just mediate; it acts.

What You Can Do

  1. Demand disclosure of how AI makes decisions.
  2. Support policymakers who enforce transparency and liability.
  3. Question whether delegating choices to machines erodes your own agency.
  4. Push employers to adopt standards that prioritize human oversight.

The promise of agentic AI is immense — but so is its peril. Regulators will argue, corporations will lobby, and courts will hesitate. Meanwhile, major platforms will press forward, content to operate in moral crumple zones. The real question is: will you let them?

Concerned about AI liability? Put guardrails around agentic AI and keep humans accountable.
