Farrer & Co LLP has published an overview of liability for AI in the supply chain, the first in a series examining what happens when AI systems or tools do not work as intended or as promised. Released on October 6, 2025, the piece focuses on organisations that deploy AI to deliver services or products to their customers. It sets out broad concepts and steps to mitigate risk, with later articles to address specific use cases and sectors. For founders, developers, and operators embedding third-party AI into products and processes, the analysis charts where liabilities are likely to sit, and why your contractual and operational choices will determine how much of that exposure lands with you.

The supply chain for modern AI is layered. At one end are large language model providers responsible for obtaining and organising the underlying data. AI system designers build sector-specific or use-specific models from that data. Consultants often sit between builders and buyers. Deploying organisations integrate these tools into services, and clients and customers are ultimately affected by the outcomes. The tools now span customer chatbots and AI agents for bespoke products and services, as well as systems used for financial services eligibility, wealth management investment assessment, marketing content and campaigns, contract awards in tendering, legal and accounting advice, investigations and audits, employee performance monitoring and hiring decisions, medical diagnoses, and government resource allocation. That breadth matters because it multiplies the points where errors can occur and complicates who is responsible when they do.

The central axis for liability is whether the AI is off the shelf or bespoke. Generic, publicly available tools are often provided on standard terms with extensive exclusions and limitations. Providers of such tools are less likely to be liable, both because they lack knowledge of specific use cases and because their terms are designed to protect them. There are exceptions for fraudulent misrepresentation and serious over-promising, but the default posture is that risk shifts downstream. By contrast, liability for providers can increase when a system is adapted for a specified use. Bespoke deployments call for bespoke contractual provisions that set performance levels, define liability, and calibrate exclusions and caps. If you are adapting a model for a precise function, you should expect the provider to stand behind defined outcomes and to memorialise that commitment in negotiated terms.

For deploying organisations, how you use AI is as important as which AI you choose. Allocating business-critical decisions to an off-the-shelf model that was not designed for that purpose will weigh against the deployer. In that scenario, you are the party making the consequential choice to rely on a tool supplied under protective standard terms. The analysis underscores the practical need to assess a provider’s ability to meet its liabilities, including whether warranties and indemnities are backed by insurance. That assessment should not be theoretical: know who in the chain is carrying which risk and whether they have the financial capacity to honour it. Consultants also play a role in structuring these relationships and should be included in the allocation of responsibilities where they influence design or integration.

Operational controls are the other half of the liability equation. The piece recommends pre-deployment testing, post-deployment critical assessment of outputs, staff training on appropriate use, and periodic reviews and renewals as base datasets age. These measures reduce the likelihood of harm and also demonstrate that the deployer acted responsibly. They are particularly important because attribution is hard in AI systems: unexpected or incorrect results may be difficult to explain, and when multiple models or processes interact, identifying the responsible component can be challenging. That complexity argues for disciplined testing gates, clear usage policies, and ongoing monitoring so that issues are detected and contained before they propagate to customers.

The message from this first look is pragmatic. Managing AI liability starts with understanding the technology, defining responsibilities across the supply chain, allocating risk in contracts, and ensuring compliance in operations. If you are a deploying organisation, treat the decision to adopt AI the way you would any other critical dependency. Map the actors, decide who is accountable for what, negotiate terms that match the specificity of your use, and backstop those obligations with real insurance where appropriate. Then run the system with controls commensurate with the impact of its outputs. As AI tools take a larger role in decisions that affect customers and the public, the unknowns shrink when responsibilities are explicit and processes are mature.

Conclusion: Liability for AI will not be solved by a single clause or a single vendor. It will be managed by aligning the nature of the tool with its use, by allocating risk to the party best placed to control it, and by operating with rigour as models and datasets evolve.