December 14, 2025
8 min

Shared Responsibility in the Age of AI

Introduction

Cloud computing made data accessible.
Artificial intelligence made it powerful.
Together, they made accountability more complex than ever.

In many organizations today, large‑scale transformation is driven not by CIOs, but by CFOs and COOs who own the investment case, the risk profile, and the regulatory exposure of AI programs. Yet the models powering those programs are designed, trained, deployed, and integrated almost entirely in the cloud. Treating security and compliance as an “IT matter” in a multi‑model, API‑driven environment is one of the fastest ways to lose control over data, risk, and financial liability.

AI does not solve governance challenges – it amplifies them.

From shared infrastructure to shared risk

Cloud services have always operated under a shared responsibility model. Providers secure the physical and virtual foundations: data centers, hardware, networks, and core platform services. The customer is responsible for everything built on top: configurations, data, applications, models, and access.

In AI‑enabled environments, this responsibility expands. You are no longer protecting only static data; you are protecting learning systems that adapt over time, drive decisions at scale, and often operate close to core financial processes. A single misconfigured access policy or poorly governed integration can expose entire datasets used by credit scoring engines, voice‑AI agents, forecasting models, or dispute classifiers.

For high‑risk use cases under regulations such as the EU AI Act, ignorance of these responsibilities is not a defence. Finance leaders must be able to show where training data came from, how it was processed, and how models behave in production, including how risks are identified, mitigated, and monitored over time.

A composite example: the bank’s credit scoring incident

Consider a large European bank that deploys an AI‑based credit scoring model in the cloud to automate lending decisions for small and medium‑sized enterprises. The cloud provider secures the underlying infrastructure and network, exactly as the shared responsibility model promises. The weaknesses sit entirely in how the bank configures and governs its own environment.

Internally, the bank:

  • Assigns overly broad access roles that give junior analysts direct connectivity to sensitive training data and live model endpoints.
  • Fails to enforce strong encryption for data in transit between the scoring service and the core banking platform.
  • Does not maintain complete data lineage for the model: data sources, transformations, and enrichment logic are poorly documented and inconsistently governed.

When an internal audit reviews the setup, it discovers that sensitive customer information – including income data and detailed transaction histories – is accessible via an unprotected API and can be queried well beyond agreed business purposes. The model is classified as high‑risk under the EU AI Act, yet the bank cannot demonstrate effective risk management, human oversight, or auditability across the model lifecycle.
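The unprotected API in this scenario is exactly the kind of weakness that basic, automated checks can surface long before an audit does. As a minimal sketch, assuming hypothetical internal endpoints, a recurring job could call the bank's own scoring APIs without credentials and treat any successful response as a control exception:

```python
import requests

# Hypothetical internal endpoints that expose model scoring or training data.
ENDPOINTS = [
    "https://scoring.internal.example.com/v1/score",
    "https://scoring.internal.example.com/v1/training-data",
]

def unauthenticated_endpoints(urls, timeout=5):
    """Flag endpoints that answer successfully without any credentials."""
    findings = []
    for url in urls:
        try:
            response = requests.get(url, timeout=timeout)  # deliberately no auth header
        except requests.RequestException:
            continue  # unreachable endpoints are a separate finding
        if response.status_code == 200:
            findings.append(url)  # should have returned 401/403
    return findings

for url in unauthenticated_endpoints(ENDPOINTS):
    print(f"Control exception: {url} responds without authentication")
```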

The outcome is predictable and painful:

  • Significant regulatory penalties for breaches of data protection and AI‑specific obligations.
  • A mandatory and expensive re‑architecture of access controls, pipelines, and monitoring.
  • Erosion of trust from the board, supervisors, and customers in the bank’s AI governance capability.

In this kind of scenario, the CFO’s role is not limited to funding the model. The CFO is accountable for ensuring that the environment around the model – its access, data flows, controls, and evidence – stands up to regulatory scrutiny.

The new attack surface: AI systems in the cloud

Traditional IT security focused on relatively static workloads: applications, databases, and networks that changed slowly and predictably. AI introduces dynamic, data‑hungry systems that train, retrain, and adapt in near real time. That creates a new attack surface with three critical layers:

  • Model traceability: clear records of who built the model, on what data, in which environment, and with which versions and parameters.
  • Data integrity: protection against tampering, poisoning, or misuse of both training and inference data, including robust controls around third‑party data sources.
  • APIs and pipelines: secure integration points for data and decision logic, where an unprotected endpoint can leak both sensitive information and model behaviour.

In this context, the weakest link is rarely the traditional firewall. The weakest link is the unsecured AI endpoint that can be queried, probed, extracted, or manipulated – often outside the view of legacy perimeter‑based controls.
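What the traceability layer described above can look like in practice is a minimal lineage record attached to every deployed model version. The structure and field names below are illustrative assumptions, not tied to any particular MLOps platform:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

# Hypothetical minimal lineage record for a deployed model version.
@dataclass
class ModelLineageRecord:
    model_name: str                 # e.g. "sme-credit-scoring"
    model_version: str              # immutable version identifier
    built_by: str                   # team or service account that trained it
    training_data_sources: list     # datasets (and versions) used for training
    training_environment: str       # cloud project or workspace identifier
    hyperparameters: dict           # parameters needed to reproduce training
    approved_by: str                # who signed off on deployment
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_audit_json(self) -> str:
        """Serialize the record so it can be stored alongside audit evidence."""
        return json.dumps(asdict(self), indent=2, default=str)


record = ModelLineageRecord(
    model_name="sme-credit-scoring",
    model_version="2.4.1",
    built_by="credit-risk-ml-team",
    training_data_sources=["core_banking.transactions_v7", "bureau.scores_2024Q4"],
    training_environment="eu-west-1/credit-risk-prod",
    hyperparameters={"algorithm": "gradient_boosting", "max_depth": 6},
    approved_by="model-risk-committee",
)
print(record.to_audit_json())
```

The format matters less than the discipline: every deployed version carries the evidence that auditors and regulators will eventually ask for.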

Encryption and identity as financial controls

Every serious AI implementation processes large volumes of structured and unstructured data: invoices, contracts, emails, voice logs, transactions, and customer interactions. Encryption in all states – at rest, in transit, and, where feasible, in use – becomes more than a technical best practice. It becomes a financial control that directly influences regulatory risk, incident losses, and remediation cost.
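As one concrete illustration of encryption as a control rather than a checkbox, a recurring check could verify that every storage location feeding AI pipelines enforces encryption at rest. The sketch below assumes an AWS S3 environment and uses hypothetical bucket names:

```python
import boto3
from botocore.exceptions import ClientError

# Hypothetical buckets feeding AI training and inference pipelines.
AI_DATA_BUCKETS = ["credit-scoring-training-data", "voice-ai-call-logs"]

def unencrypted_buckets(bucket_names):
    """Return the buckets that have no server-side encryption configured."""
    s3 = boto3.client("s3")
    findings = []
    for name in bucket_names:
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code == "ServerSideEncryptionConfigurationNotFoundError":
                findings.append(name)   # encryption at rest is not enforced
            else:
                raise                   # surface access or permission errors
    return findings

if __name__ == "__main__":
    for bucket in unencrypted_buckets(AI_DATA_BUCKETS):
        print(f"Control exception: bucket '{bucket}' has no default encryption")
```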

Identity and Access Management (IAM) must also evolve. Access is no longer granted only to employees and human users. It now extends to bots, pipelines, microservices, and autonomous agents, each acting in ways that can impact customers and balance sheets.

An effective IAM regime for AI should ensure that:

  • Permissions are clearly defined and mapped to business roles and responsibilities.
  • Every human and non‑human identity is governed by strict least‑privilege principles, aligned to the organization’s risk appetite.
  • Audit trails for access and changes are as robust as those for financial reporting systems and core ledgers.

If an intern would never receive unrestricted access to the general ledger, an autonomous AI process should not receive unrestricted access to production data or high‑impact decisions without equally strong constraints and oversight.
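A minimal sketch of what such an entitlement review could look like, assuming a simple in-house registry of approved permissions (all identities and permission names below are hypothetical):

```python
# Hypothetical entitlement review: compare granted permissions against
# an approved, role-mapped baseline and flag anything broader.

APPROVED_PERMISSIONS = {
    # identity            -> permissions approved for its business role
    "analyst.junior":       {"read:scoring_reports"},
    "svc.scoring-pipeline": {"read:training_data", "write:scoring_results"},
    "agent.dispute-triage": {"read:dispute_queue"},
}

GRANTED_PERMISSIONS = {
    # what the cloud platform actually shows as granted today
    "analyst.junior":       {"read:scoring_reports", "read:training_data"},
    "svc.scoring-pipeline": {"read:training_data", "write:scoring_results"},
    "agent.dispute-triage": {"read:dispute_queue", "write:customer_master"},
}

def excess_permissions(approved, granted):
    """Return, per identity, any permission granted beyond the approved baseline."""
    return {
        identity: perms - approved.get(identity, set())
        for identity, perms in granted.items()
        if perms - approved.get(identity, set())
    }

for identity, extra in excess_permissions(APPROVED_PERMISSIONS, GRANTED_PERMISSIONS).items():
    print(f"Least-privilege exception for {identity}: {sorted(extra)}")
```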

Beyond checkboxes: compliance in the AI era

Historically, cloud compliance has often been treated as a certification checklist: confirm that providers support GDPR, ISO 27001, SOC 2, PCI DSS, and similar standards, then move on. In the era of AI, this mindset is dangerously incomplete.

Regulators and auditors now look for tangible, continuous evidence of control, including:

  • Data lineage: demonstrable insight into data origin, transformations, enrichment steps, and retention.
  • Model transparency: clear logging and explanation of how decisions are generated, including support for explainability requirements in emerging AI regulation.
  • Operational controls: defined policies for when and how models are retrained, who approves changes, and how performance, bias, and drift are monitored.
  • Continuous evidence: real‑time or near‑real‑time metrics, documentation, and governance signals, rather than static reports produced once or twice a year.

Under regulations such as the EU AI Act, high‑risk systems must meet strict requirements around documentation, risk classification, human oversight, and auditability. Penalties are linked to global revenue, not only to the size of a single incident, which moves AI governance squarely into the realm of financial and regulatory control.
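What continuous evidence can look like in practice is a structured, timestamped record for every automated decision, so explanations and oversight signals accumulate as a by-product of normal operation. The sketch below uses illustrative field names, not a prescribed regulatory schema:

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Structured decision log: one auditable record per automated decision.
audit_log = logging.getLogger("ai.decision.audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.StreamHandler())

def log_decision(model_name, model_version, input_features, decision, top_drivers, reviewer=None):
    """Emit a structured, timestamped record of a single automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "version": model_version,
        # Hash the inputs so the trail is verifiable without logging raw personal data.
        "inputs_hash": hashlib.sha256(
            json.dumps(input_features, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
        "top_drivers": top_drivers,   # e.g. produced by an explainability method
        "human_reviewer": reviewer,   # populated when human oversight is triggered
    }
    audit_log.info(json.dumps(record))
    return record

log_decision(
    model_name="sme-credit-scoring",
    model_version="2.4.1",
    input_features={"revenue_band": "1-5M", "sector": "retail"},
    decision="refer_to_underwriter",
    top_drivers=["payment_history", "sector_risk"],
)
```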

A practical framework for CFOs

To ground AI governance in financial and operational reality, CFOs can use the following five questions whenever evaluating or scaling cloud‑based AI initiatives:

  1. Do we clearly understand the shared responsibility model?
    What exactly does the provider secure, and what remains the organization’s responsibility across data, models, and integrations?
  2. Do we have complete data lineage for critical AI models?
    Can the team show where data comes from, how it is transformed and enriched, and how those choices affect model outputs and risks?
  3. Is every access to models and APIs governed by least‑privilege roles and explicit permissions?
    Are all human and non‑human identities tightly scoped, regularly reviewed, and traceable in a way auditors would accept?
  4. Do we have an AI‑specific incident response plan?
    Is there a defined process for detecting and responding to model drift, data leaks, and harmful or incorrect decisions, aligned with broader risk and resilience frameworks?
  5. Can we demonstrate compliance with AI‑related regulation and sector standards for high‑risk systems?
    Can the organization evidence its governance maturity to regulators and supervisors, rather than scrambling to produce documentation under pressure?

These questions complement established cloud responsibility models and align with emerging AI governance frameworks that formalize how organizations should manage AI‑related risk over time.

The CFO’s security playbook for AI transformation

For CFOs, an AI program in the cloud is not only about funding models and calculating return on investment. It requires a security and governance playbook that treats the entire AI ecosystem as a controlled asset, not just a promising technology.

Key elements of that playbook include:

  • Redefine the perimeter: think in terms of models, APIs, agents, and workflows, not just networks and physical locations.
  • Embed governance into architecture: design governance into cloud and data architectures from the outset; retrofitting controls into live learning systems is slow, costly, and often incomplete.
  • Enforce role‑based access and least privilege: ensure every identity, human or algorithmic, operates within tightly scoped permissions aligned with business roles and risk.
  • Automate monitoring: leverage cloud‑native monitoring, logging, and anomaly detection, including dedicated model and data‑drift monitoring, as the only scalable way to detect manipulation or leakage (see the drift sketch after this list).
  • Maintain encryption discipline: treat data in all states as sensitive, and apply strong cryptographic controls as a default, not an exception.
  • Prepare an AI‑specific incident response plan: define who detects, who investigates, who decides, and who communicates with regulators and stakeholders when AI misbehaves.
  • Treat models as financial assets: manage models through structured lifecycles, with clear owners, performance criteria, maintenance budgets, and risk assessments.
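On the monitoring point, drift in input data can be tracked with a statistic as simple as the population stability index. The sketch below uses synthetic data, and the 0.2 threshold is a common rule of thumb rather than a standard:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample and current production inputs."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Convert to proportions and floor them to avoid division by zero.
    exp_pct = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    act_pct = np.clip(act_counts / act_counts.sum(), 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Synthetic illustration: baseline vs. a shifted production distribution.
rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.0, scale=1.0, size=10_000)    # training-time feature values
production = rng.normal(loc=0.4, scale=1.1, size=10_000)  # current inference traffic

psi = population_stability_index(baseline, production)
if psi > 0.2:   # common rule-of-thumb threshold for material drift
    print(f"Drift alert: PSI={psi:.2f} exceeds threshold; trigger model review")
```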

AI governance starts with cloud discipline

Trust in AI is not a property of the model. It is a property of the environment around the model: its data, access controls, pipelines, monitoring, auditability, and documentation.

CFOs do not need to become engineers, but they do need to understand that disciplined use of the cloud is the foundation of credible AI governance. In the age of intelligent automation, trust is not inherited from providers. Trust is engineered by the organization, through its architecture, its controls, and its willingness to treat AI as both a strategic asset and a regulated risk.
