
Enterprise security questionnaires are adding more AI sections. If you’re processing people’s data with LLMs and selling to financial services, healthcare, or large tech companies, you’ve probably seen this already. Questions about training data, customer data handling, prompt injection, and incident response for AI-specific failures.
And most growing SaaS companies are handling it the way they handle all security questionnaires: imperfectly, iteratively, on demand, and with varying degrees of frustrated typing. But if you’re getting these questions repeatedly, having documented answers ready certainly beats improvising or hoping they don’t notice your AI-generated answers.
Based on what we’re seeing in enterprise security reviews:
If your AI features call OpenAI, Anthropic, Google, or use managed services like AWS Bedrock, your data flow story is probably more complicated. “We use Bedrock” is only the start of a longer conversation.
When customer data hits a third-party model API, that provider becomes a sub-processor under GDPR and similar regimes. Buyers will want to know:
The standard answer to “Do you train on customer data?” is “No.” With retrieval-augmented generation, that answer is technically accurate, but as with everything, it’s a little complicated.
In a RAG system, you’re fundamentally augmenting the foundational model’s context and response based on data you’re providing - not training the model. More specifically, you’re chunking data, embedding it into vectors, storing the vectors, and at query time embedding the query, retrieving semantically similar chunks, and injecting them into the model's context to inform its generation.
But there a several things to keep in mind. It does:
Buyers who understand this will ask the following:
If you have EU customers or process EU personal data, LLMs and RAG architectures create some complications. None of this should be construed as legal advice.
Legal basis for processing
You need a lawful basis for both your primary service and the AI-specific processing. If customers consented to “analytics,” that doesn’t cover “we embed your data and use it to generate AI responses.” Make it transparent. Get consent. Review your legal basis with your counsel.
Right to erasure
Requests for the right to erasure or right to be forgotten (Article 17) get complicated with vector embeddings. If someone requests deletion, can you actually remove their data from your vector database? Will you need full reindexing? Is it simple as deleting a database? The Bavarian State Office for Data Protection Supervision says post-training options may be necessary.
Data transfers
If you’re using US-based model providers, you need valid transfer mechanisms for EU personal data. This means standard contractual clauses and a transfer impact assessment to describe the specifics around circumstances, destination laws, and any safeguards in place. AWS has SCCs, and you should also understand the supplementary measures question.
Sub-processor disclosure
Your customers need to know about your sub-processors who process their data. Your privacy policy probably lists AWS if you’re using Bedrock. Otherwise, list your vendors who see any customer data.
Several frameworks cover AI governance. Here’s a current picture
NIST AI RMF — US government-adjacent. Relevant if you’re selling to federal or if your enterprise buyers reference NIST frameworks generally.
EU AI Act — Mandatory for the EU market, and quite complex. The risk-based system means that if your product falls into high-risk categories (healthcare AI, emotion recognition, employment decisions, creditworthiness, certain infrastructure), you’re looking at entire bans, conformity assessments, technical documentation requirements, and ongoing monitoring obligations, far beyond documentation exercises. If you’re selling AI products in the EU, get legal counsel who deeply understands this. Of course, nothing here is meant to be legal advice.
Colorado AI Act — Relevant if you have Colorado customers. Set to come into effect in February 2026, but under some federal scrutiny at the time of publishing. It creates disclosure and risk management requirements for “high-risk AI systems” in consequential decisions (employment, financial services, healthcare, housing, insurance, legal services, education, government services). Likely that similar state-level legislation will be coming elsewhere.
AIUC-1 — A newer framework with detailed, practical requirements. Less market traction than ISO 42001 currently, but the implementation guidance is more specific. Launched in July 2025 by the Artificial Intelligence Underwriting Company. Built around six pillars: data & privacy, security, safety, reliability, accountability, and societal risks and builds on NIST AI RMF, EU AI Act, ISO 42001, and MITRE ATLAS. Early commercial framework - worth understanding, but early to bet heavily on.
ISO 42001 — Likely to stick around for enterprise adoption as it’s from ISO, focused specifically on AI management systems.
These frameworks all aim to answer the questions sophisticated buyers are asking. If you document your AI practices clearly and can speak to your operational processes, you can answer questionnaires regardless of which framework the buyer cares about. As with SOC2, you don’t necessarily need it if you can confidently answer questions and don’t mind completing every organization’s snowflake unique security questionnaire. Don’t overbuild for any single standard until you know which ones your specific buyers are most likely to ask for.
It’s worth noting here that both academic and industry research are still working to address agentic security gaps, and no current framework fully addresses them all.
There are no standards for agent identities, there are no established solutions for agentic authorization, there are no foolproof ways to differentiate between instructions and data, and this is a fast-moving and complex space across multiple spheres.
So having thoughtful answers, even if it’s “yes, that’s a gap we’ve thought about and here’s how we’re trying to mitigate it”, will differentiate you from those who haven’t considered these issues.
The following is good operational hygiene that will get you started, regardless of the frameworks you use.
There are policy templates you can use, but to truly reflect your organization, application, and product, this will take a week or more of focused work. If you’ve already gone down the road of GDPR or other data privacy frameworks, you may already have a head start on this. Reconcile what you write (or tell AI to write) with what engineering is really doing, which means pulling engineers into the conversation. If your documentation says one thing and your implementation does another, that will surface in a serious security review.
Document the following
Where sophisticated buyers separate vendors who’ve thought about AI operations from those who are winging it - asking about model maturity and AI-specific incident response.
Model lifecycle management
Incident response (AI-specific)
If you have decent incident response documentation, updating it for AI scenarios should be relatively straight forward.
AWS Bedrock Guardrails, Azure AI Content Safety, and Google Vertex AI safety filters are worth evaluating, but it’s not necessarily as simple as enabling them. They’re good for a few things:
But what you need to do
This takes evaluation and testing to deploy, plus ongoing monitoring as models and capabilities change.
AWS Bedrock gets positioned as the “enterprise-ready” option, and there’s truth to that, but this is a starting point.
Bedrock gives you a legal and foundational start
But - you still have to
“Show me your Bedrock configuration and explain your design decisions” isn’t an unreasonable request. Be ready to walk through your model access policies, guardrail configuration, logging setup, and VPC architecture.
Unless a specific buyer requires these, wait and see where the market goes:
Now, if you’re selling AI products to highly regulated industries or anything that might fall under the EU AI Act high-risk categories or the Colorado AI Act scope, accelerate this timeline.
Documentation and questionnaire responses are where conversations start. Contracts are where they finish. A few additional things to think about regarding AI usage:
MSA and DPA language
Your Master Service Agreement and Data Processing Addendum probably weren’t written with AI features in mind. Enterprise buyers may want to negotiate AI-specific terms such as restrictions on training, data handling commitments, and liability allocation for AI-generated outputs. Get legal or your attorney involved before these negotiations.
Liability for AI outputs
If your AI hallucinates in a regulated context (think financial advice, health information, legal guidance) that’s not only a PR problem but a legal and customer retention problem. Contracts should address this and your product should have appropriate guardrails and disclaimers.
We need to talk to your AI/Security team requests
In deals with sophisticated buyers or mature security teams, they’ll want to talk to someone technical who can walk through architecture, explain design decisions, and answer follow-up questions. Documentation doesn’t solve this, especially since so much of it can be AI-generated, killing confidence in the documentation. You either need internal expertise or a partner who can credibly represent your technical approach.
You’ll know when more time and investment is needed. Obvious things like deals delayed and too much time is put into AI governance questions and explicit requests around AI governance and compliance.
Keep an eye on on state level regulation such as Colorado’s AI Act, and other state level requirements. Until then, keep building operational maturity, extend your existing security program, and answer questions as they come. But don’t yet build an entire compliance program around a framework that is still proving market traction.
The AIUC-1 Navigator provides searchable implementation guidance across AI governance requirements. Useful for understanding what’s being asked even if you’re not specifically pursuing AIUC-1 compliance since the standard pulls requirements from NIST AI RMF, EU AI Act, ISO 42001, and MITRE ATLAS into specific tasks, which can help you structure your own documentation.
NIST AI RMF is another one to review, though that’s a bit heavier to parse.
Document what you're actually doing, extend your existing security practices to cover AI-specific scenarios, and answer buyer questions as they come. The framework adoption conversations will shake out over the next year or two, hopefully. Until then, operational maturity beats anything else.