The AI Security Shared Responsibility Model
Learn about the AI Security Shared Responsibility Model, a framework for managing risks and ensuring compliance in AI deployments.

AI is transforming industries at an unprecedented pace, but with great power comes great responsibility.
Shared responsibility, that is.
Enter the shared responsibility model for AI security: a framework designed to clarify the division of security responsibilities between AI service providers and the businesses that use them. It's meant to help security practitioners navigate the complex and fast-moving landscape of AI deployments as it exists today.
The shared responsibility model is not a new concept; it's been a cornerstone in cloud security for years. However, its application to AI security is relatively novel.
The idea is simple: both the service provider and the user have roles to play in securing AI systems. This division of labor ensures that all aspects of security are covered, reducing the risk of vulnerabilities and breaches.
As businesses increasingly adopt AI technologies, understanding and managing the associated risks is table stakes.
The Framework
The purpose of this resource is to give the general public, and security leaders specifically, a way to think about the various risks and shared responsibilities of using AI systems and services. I made this model to help people talk about these problems in a clear and consistent way.
The goal is for someone to read this page and the diagram and come away with a better understanding of the risk they are introducing (and possibly accepting) into a business when using AI systems.
I want this to generate better-informed risk discussions at companies and help practitioners understand where additional friction or security controls can be applied.
Remember that this is just version one. I will continue refining the image and framing as I receive feedback and learn more.
The AI Deployment Models
The deployment model a company chooses for using AI sets the inherent risk posture. Yes, even using ChatGPT on the public internet vs. Microsoft's version of ChatGPT carries a different inherent risk posture.
The AI Security Shared Responsibility Model outlines the division of security responsibilities between AI service providers and users across various deployment models.
Let's break these down.
SaaS AI Models
Software as a Service (SaaS) AI models are fully managed by the service provider and accessed by users through web interfaces or APIs. These are the most common deployments and the ones the general public and technical practitioners alike are most familiar with.
Within the SaaS AI model category, we have:
Public SaaS AI Models - AI models that are publicly accessible and managed entirely by the service provider.
Example: ChatGPT, Claude, Perplexity
Risk Profile: Higher risk due to limited control over the model's behavior and data handling.
Public API-based AI Models - AI models accessed through public APIs provided by the service provider.
Example: OpenAI's GPT-4o API, Anthropic's Claude 3.5 Sonnet
Risk Profile: Moderate to high risk, depending on API security measures and usage policies (see the sketch at the end of this section).
Private SaaS AI Models - AI models hosted by a provider but restricted to a specific organization or group of users.
Example: Microsoft's private ChatGPT deployments for enterprises
Risk Profile: Moderate risk, with improved governance and enterprise controls.
Private API-based AI Models - AI models accessed through APIs restricted to specific organizations or users.
Example: Enterprise-specific APIs for language models
Risk Profile: Moderate risk, with improved control over access and usage.
Code Copilot Platforms - AI-assisted code generation platforms with direct access to development environments and code bases. These platforms analyze code patterns, comments, and projects and either suggest code snippets or auto-generate blocks of code.
Example: GitHub Copilot, Amazon CodeWhisperer
Risk Profile: Moderate to high risk, as exposure of sensitive code and intellectual property and the potential for insecure code suggestions are higher than in other deployment models.
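To make the customer side of the API-based model concrete, here is a minimal sketch of basic usage hygiene: the key lives in an environment variable rather than source code, obviously sensitive strings are screened before the prompt ever leaves your environment, and each call is logged for later review. It assumes the openai Python package (v1+); the redact() helper and its checks are purely illustrative stand-ins for whatever your data handling policy actually requires.

```python
import logging
import os
import re

from openai import OpenAI  # assumes the openai Python package, v1+

logging.basicConfig(level=logging.INFO)
usage_log = logging.getLogger("ai-usage")

# The API key comes from the environment, never from source control.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text: str) -> str:
    """Illustrative only: strip obvious PII (here, email addresses) before sending."""
    return EMAIL_RE.sub("[REDACTED]", text)

def ask(prompt: str, user_id: str) -> str:
    clean = redact(prompt)
    # Log metadata (who, how much), not the full prompt, to support later review.
    usage_log.info("ai_call user=%s prompt_chars=%d", user_id, len(clean))
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": clean}],
        timeout=30,  # don't let a hung call stall the application
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Summarize the attached incident report for leadership.", user_id="alice"))
```

None of this changes the provider's side of the model; it just shows that even in the most hands-off deployment, the customer still owns what goes in, who sends it, and whether anyone can reconstruct that later.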
PaaS AI Models
Platform as a Service (PaaS) AI models provide a platform for deploying, running, and managing AI models without the complexity of maintaining the underlying infrastructure.
These are managed platforms that allow organizations to deploy and run AI models with some level of customization.
Examples: Azure OpenAI Service, Google AI Platform
Risk Profile: Moderate risk, with shared responsibility between the platform provider and the organization using the service.
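As one hedged example of what the customer half of that shared responsibility can look like on a PaaS offering, the sketch below calls Azure OpenAI Service with Microsoft Entra ID credentials instead of a static API key. It assumes the openai and azure-identity Python packages, and the endpoint, API version, and deployment name are placeholders you would replace with your own; network restrictions, content filtering policies, and role assignments on the resource remain separate customer decisions.

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI  # assumes the openai Python package, v1+

# Keyless auth: tokens come from the identity assigned to this app (managed identity,
# workload identity, or a developer login), so there is no static secret to leak or rotate.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    azure_ad_token_provider=token_provider,
    api_version="2024-02-01",  # example version; use whatever your tenant supports
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # in Azure OpenAI, this is your deployment, not the raw model name
    messages=[{"role": "user", "content": "Classify this support ticket by severity."}],
)
print(response.choices[0].message.content)
```

The platform provider patches the hosts and serves the model; which identities can call it, from which networks, and with what data are still on you.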
IaaS AI Models
Infrastructure as a Service (IaaS) AI models involve deploying AI models on cloud infrastructure managed by the organization.
Examples: Deploying open-source models on AWS EC2 instances, running custom models on Google Cloud VMs
Risk Profile: Higher responsibility for the organization, with increased control but also increased security management requirements.
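For contrast, here is a minimal sketch of the IaaS end of the spectrum: an open-weight model loaded on a VM the organization itself patches, network-restricts, and monitors. It assumes the transformers, torch, and accelerate packages and an instance large enough for the chosen model; the model name is only an example, so substitute whatever passes your own license and security review.

```python
from transformers import pipeline  # assumes transformers + torch installed on the instance

# Everything below the application layer (OS hardening, disk encryption, security groups,
# patching, egress controls) is now the organization's responsibility, not a provider's.
generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # example open-weight model, not a recommendation
    device_map="auto",  # requires the accelerate package; places weights on available GPUs/CPU
)

result = generator(
    "Draft a short acceptable-use policy for internal AI tools.",
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```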
On-Premises AI Models
On-premises AI models are deployed and run entirely within an organization's own hardware and infrastructure, whether for internal use or external services.
Examples: Hedge funds running proprietary trading models on in-house servers, healthcare providers using on-premises models for patient data analysis
Risk Profile: The highest level of control and responsibility for the organization, requiring comprehensive security measures across all layers of the stack.
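Compared with the IaaS sketch above, the main additional expectation on-premises is usually that nothing leaves the building at inference time. One hedged way to express that with the same tooling is to pin the runtime to local, pre-vetted weights and force offline mode; the /models/llm path is hypothetical and would be populated through your own change-managed process.

```python
import os

# Force offline mode so the runtime never reaches out to a model hub at load or inference time.
os.environ["HF_HUB_OFFLINE"] = "1"

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "/models/llm"  # hypothetical local path holding weights you have already vetted

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

inputs = tokenizer("Classify this patient note by urgency:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```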
It's important to note that this is a general guide, and the exact distribution of responsibilities may vary depending on specific service agreements, regulations, and organizational policies.
Building a Robust AI Security Stack
Here’s a breakdown of some essential security domains to consider for various AI deployment models, from the application layer to the infrastructure and everything in between:
Application Security - This domain focuses on securing the user-facing application that utilizes the AI system. This area can include vulnerability remediation, code reviews, red teaming, and more.
AI Ethics and Safety - While not strictly a technical layer, AI Ethics and Safety overarches the entire stack. It informs how AI should be designed by model providers and how it should be used by consumers in an ethical, transparent, and safe manner.
User Access Control - This is about managing who can access the AI system and at what level, as well as what the AI system itself can access and at what level.
Model Security - This includes protecting the AI model itself against adversarial attacks where malicious inputs are designed to deceive the model. Techniques such as adversarial training, defending against model poisoning, AI output monitoring and validation, and model hardening are included here.
Data Privacy - This area is about ensuring personal or sensitive data used by the AI is protected. This involves implementing data anonymization techniques, ensuring compliance with data protection regulations like GDPR, and regularly auditing data usage to prevent unauthorized access or leaks.
Data Security - Protecting the integrity and confidentiality of all data used by the AI system. Encryption at rest and in transit are included here.
Monitoring and Logging - Tracking who is using the AI system and how, in order to detect anomalies, understand user behavior, and respond to potential security incidents (a minimal sketch appears at the end of this section).
Compliance and Governance - This area focuses on ensuring that the AI system meets regulatory requirements and internal policies. This is an emerging and continuously evolving area, especially as new regulations around AI ethics and data protection, like the EU AI Act, come into play. I've written more about the collision course of AI and compliance here.
Supply Chain Security - This area focuses on addressing security concerns in the AI development and deployment pipeline. This includes ensuring that all components, from third-party libraries to hardware, are secure and free from vulnerabilities and understanding upstream and downstream impacts.
Network Security - Securing the communication channels used by the AI system and its users. This includes firewalls, anti-DDoS measures, intrusion detection systems, load balancing, and more.
Infrastructure Security - This area covers protecting the underlying hardware and software that hosts the AI system. This includes securing cloud environments, physical servers, and virtual machines with regular patching, vulnerability assessments, and more.
Incident Response - This area spans each of the other areas above and covers the ability to respond when incidents occur.
I know some of the responsibilities are broad and can contain multiple sub-components. I'm also positive that, given enough time and pontification, you could expand (or easily overcomplicate) these responsibilities much further.
Securing an AI system is a multi-faceted challenge that requires attention to various domains and usage states. As the deployment models evolve, so too will these focus areas. This is just meant to be the starting point.
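In practice, several of these domains, User Access Control, Data Privacy, and Monitoring and Logging in particular, tend to converge on a single choke point: an internal gateway that every AI call flows through. The sketch below shows one illustrative shape for that idea; the role list, the call_model stub, and the logging choices are all hypothetical placeholders for your own identity provider and deployment model.

```python
import logging
from dataclasses import dataclass
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai-audit")

# Hypothetical role allow-list; in practice this would come from your IdP / RBAC system.
ALLOWED_ROLES = {"engineering", "support"}

@dataclass
class AIRequest:
    user: str
    role: str
    prompt: str

def call_model(prompt: str) -> str:
    """Placeholder for whichever deployment model you actually use (SaaS, PaaS, self-hosted)."""
    return f"[model response to {len(prompt)} characters of input]"

def gateway(request: AIRequest) -> str:
    # User Access Control: only allow-listed roles may reach the model at all.
    if request.role not in ALLOWED_ROLES:
        audit_log.warning("denied user=%s role=%s", request.user, request.role)
        raise PermissionError(f"role {request.role!r} is not approved for AI usage")

    # Monitoring and Logging: record who asked, when, and how much, but not the raw prompt,
    # so the audit trail does not become its own data privacy problem.
    audit_log.info(
        "allowed user=%s role=%s at=%s prompt_chars=%d",
        request.user,
        request.role,
        datetime.now(timezone.utc).isoformat(),
        len(request.prompt),
    )
    return call_model(request.prompt)

if __name__ == "__main__":
    print(gateway(AIRequest(user="alice", role="engineering", prompt="Summarize this ticket for the on-call engineer.")))
```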
Things Not Covered
I want to acknowledge some of the things this framework intentionally does not cover but that can still be valuable parts of a safe and secure AI ecosystem overall.
Data Management - This can include training data provenance, data lineage, data quality assurance, and data governance specific to AI training and inference data. AI still suffers from garbage in, garbage out (GIGO).
AI Governance - This is not meant to be an all-inclusive governance framework for AI systems, as that can fit into a company’s broader AI strategy and has much less to do with the security of AI deployment specifically. Consider AI Security a sub-component of your AI Strategy. Check out the NIST AI Risk Management Framework if you want to go deeper on that topic.
AI Development - This framework is not meant to cover the secure and safe development of AI software. This is for consumers and users of AI models, not developers of the models.
AI-Enabled Products - This framework does not cover products that are “AI-Enabled” but do not specifically sell access to an AI model. These products clearly use AI under the covers, but the AI models used in the product are not directly exposed to customers. These products should still be evaluated from a security standpoint and from a legal terms-and-conditions standpoint with regard to using your company data for training the service’s models. Examples include AI meeting assistants like Otter or AI-enhanced email platforms like Superhuman.
Applying This Model
Now, we come to how you can apply this.
One of the shared responsibility model's primary goals is to foster informed risk discussions between AI service providers and users. By clearly defining who is responsible for what, both parties can better understand their roles and work together to improve security practices.
This model can help organizations understand their security obligations when deploying and using AI systems. It can also help decide which deployment model to choose based on the organization's capacity, budget, and risk appetite.
A few key observations when using this framework:
Customers still share some responsibilities even in fully managed services like public SaaS AI. You can't blame it all on ChatGPT. At this stage, we see a massive amount of interest from AI Security startups.
As you move from SaaS models to IaaS and on-premises deployments, your control, risks, and responsibilities increase.
The larger and more mature the company and the more it invests in technologies to govern and protect AI usage, the more some of these responsibilities will shift into Shared or Customer modes. This is not meant to be a one-size-fits-all model.
I consider "AI Ethics and Safety" to be a shared responsibility across all deployment models because providers have an inherent obligation to create ethical and safe models, and consumers must also use AI systems in a safe and ethical manner.
These deployment models are subject to change and will become more complex as the industry adopts a more agentic approach to using AI systems.
It's worth noting that many businesses are using multiple AI deployment models today at both the individual and business unit/product levels.
It's also worth noting that AI usage in enterprises today is similar to the early days of the cloud. There are legitimate and sanctioned use cases and experimentation, but there is an unknown amount of Shadow AI usage across all parts of the business.
Currently, many security and IT teams can’t fully grasp who uses AI and for what purposes, but they know it's happening. This is another area of innovation we see in the AI Security market.
None of these determinations are set in stone, so please adjust this as needed. I would love to hear your feedback on how to improve this and how you use it.
Updates
August 2, 2024 - Initial posting
August 2, 2024 - Added the “Things Not Covered” section
August 3, 2024 - Added software copilots to the Private SaaS AI Model, added AI Development to Things Not Covered
August 6, 2024 - Added “AI-Enabled Products” to the Things Not Covered section
September 25, 2024 - Removed software copilots from Private SaaS AI Model and created “Code Copilot Platforms” as a new section