Enterprise security questionnaires — SIG Core, CAIQ, VSA, and the custom 300-question spreadsheets that seem to multiply every quarter — all ask the same hard questions in different wording. AI tools handle the repeatable 70%: encryption standards, password policies, compliance certifications. The remaining 30% is where the buyer's analyst is actually paying attention.
This answer bank covers that 30%. Each answer is written at two maturity levels:
- Early-stage - you've completed SOC 2, you're building program depth, you have some tens of employees, and your CTO or a security lead is wearing multiple hats
- Growth-stage - you have an established security program, some dedicated security team or function, and you're scaling enterprise sales with many dozens or more employees
How to use this: Adapt these answers to your specific environment, tools, and architecture. Don't copy them verbatim. The buyer's analyst has read enough questionnaire responses to recognize template language, and template language triggers follow-up questions designed to expose whether the answer is real. For the full framework on handling the toughest 30% of questionnaire questions, read The 30% AI Can't Answer For You.
Framework note: Categories map to SIG Core domains, but the underlying questions overlap 60-70% with CAIQ, VSA, and custom questionnaires. If you can answer these well, you can handle any of them.
1. Security Governance & Leadership (SIG Domain 3: Security Policy)
What the buyer is evaluating: Whether someone is accountable for security decisions, or whether security happens reactively when deals demand it. The buyer's analyst isn't looking for a Fortune 500 org chart — they're looking for evidence that security has an owner, a reporting line, and a seat at the table when trade-offs are made. A fractional CISO with real authority beats a CTO who "also handles security" with no defined responsibility.
Q: Does your organization have a dedicated Chief Information Security Officer (CISO) or equivalent role?
Early-stage answer: Our security program is led by <named individual>, VP of Engineering, who dedicates approximately 40% of their time to security operations, architecture review, and buyer evaluations. We supplement internal leadership with a fractional CISO engagement through <firm name> for strategic guidance, policy development, and enterprise buyer interactions. Security decisions escalate to the CEO, and our fractional CISO presents a quarterly security update to the executive team.
Growth-stage answer: Our security organization is led by <named individual>, Director of Security, reporting to the CTO with a dotted-line reporting relationship to the board's audit committee. The security team of N includes dedicated functions for application security, infrastructure security, and GRC. <Named individual> presents to the board quarterly on security posture, risk register updates, and program investment. The security team has budget authority for tooling and vendor relationships independent of the engineering budget.
Q: How is your information security policy managed, reviewed, and communicated to employees?
Early-stage answer: Our information security policy set, covering access control, data handling, incident response, and acceptable use, is maintained in <Confluence/Notion/Google Docs> with version control and annual review cycles. Our last full policy review was completed in MMYY as part of our SOC 2 Type II audit preparation. New employees review and acknowledge policies during onboarding through <Vanta/Drata/manual process>, and policy changes are communicated via <Slack channel/email> with acknowledgment tracking.
Growth-stage answer: Our policy framework includes 12 core policies aligned to ISO 27001 control domains, managed in <Vanta/Drata/GRC platform> with automated version control and review workflows. Each policy has a designated owner and undergoes annual review with interim updates triggered by material changes to our environment or regulatory requirements. Employee acknowledgment is tracked automatically, with completion rates reported to management. Policies are tested against actual controls quarterly: we verify that what's documented matches what's enforced.
Q: Does your organization maintain a security committee or equivalent governance body?
Early-stage answer: We hold a monthly security review meeting attended by the CTO, VP of Engineering, and our fractional CISO. This meeting covers open security items from pen test findings and questionnaire gaps, reviews upcoming buyer security evaluations, and tracks progress on our security roadmap. Meeting notes and action items are documented in <some tool>. We handle governance through this regular cadence rather than a formal security committee; the monthly reviews ensure security decisions aren't made ad hoc.
Growth-stage answer: Our Security Steering Committee meets monthly and includes the Director of Security, CTO, VP of Engineering, Head of Product, and General Counsel. The committee reviews the risk register, approves security investment decisions, evaluates policy changes, and tracks security program metrics. Meeting minutes and decisions are documented and retained. The committee reports quarterly to the board's audit committee through a formal security posture update that covers risk trends, incident metrics, and program maturity benchmarks.
2. Risk Management (SIG Domain 4)
What the buyer is evaluating: Whether you've identified your own risks before they had to point them out. A company that can articulate its top risks, explain what's mitigated and what's accepted, and show a process for ongoing assessment signals a mature program. A company that answers every risk question with "we're SOC 2 compliant" signals the opposite. Buyers also focus heavily on third-party risk because your vendors become their fourth-party risk.
Q: Describe your organization's risk assessment process, including frequency and methodology.
Early-stage answer: We conduct a formal risk assessment annually as part of our SOC 2 audit cycle, using a qualitative risk matrix that evaluates likelihood and impact across our infrastructure, application, and operational domains. Risks are identified through pen test findings, buyer security evaluation patterns, and internal architecture reviews. Our risk register is maintained in <Google Sheets/Jira/Vanta> and reviewed quarterly by the CTO and fractional CISO. We're evaluating more structured methodologies, including NIST 800-30, to improve how we score and track risks as our program matures.
Growth-stage answer: We perform risk assessments annually using a methodology aligned to NIST 800-30, with interim assessments triggered by material changes to our architecture, vendor relationships, or threat landscape. The assessment covers technical, operational, and third-party risk domains and produces a scored risk register maintained in <ServiceNow/OneTrust/equivalent>. Risk owners are assigned for each identified risk. The risk register is reviewed quarterly by the Security Steering Committee, and risk treatment decisions (mitigate, accept, transfer) are documented with executive sign-off. Our last full assessment identified N risks, of which Y are actively being mitigated and Z have been formally accepted with documented rationale.
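The qualitative likelihood-times-impact approach described in both answers can be sketched as a small scoring function. This is a minimal illustration: the three-level scale, tier thresholds, and example risks are assumptions, not a prescribed methodology.

```python
# Illustrative qualitative risk matrix: likelihood x impact on a 1-3 scale.
# Thresholds and example risks are assumptions for the sketch.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def score_risk(likelihood: str, impact: str) -> int:
    """Return a 1-9 score from qualitative likelihood and impact ratings."""
    return LEVELS[likelihood] * LEVELS[impact]

def risk_tier(score: int) -> str:
    """Map a numeric score to a treatment tier for the risk register."""
    if score >= 6:
        return "high"    # escalate: mitigation plan with a named owner
    if score >= 3:
        return "medium"  # track quarterly in the register
    return "low"         # candidate for acceptance with documented rationale

register = [
    {"risk": "no JIT production access", "likelihood": "medium", "impact": "high"},
    {"risk": "manual SaaS offboarding", "likelihood": "high", "impact": "medium"},
]
for entry in register:
    entry["tier"] = risk_tier(score_risk(entry["likelihood"], entry["impact"]))
```

The value of even a toy model like this is that every register entry gets a comparable score and an explicit treatment decision, which is exactly what the analyst probes for.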
Q: How does your organization manage third-party/vendor risk?
Early-stage answer: We maintain an inventory of third-party vendors with access to customer data or critical infrastructure in <spreadsheet/Notion>. Before onboarding a new vendor in these categories, we review their SOC 2 report, security documentation, and data handling practices. We don't yet have a formal tiered TPRM program, but our critical vendors (<AWS/Datadog/Stripe>, etc.) are reviewed annually for continued compliance and security posture changes. We're building a more structured vendor risk assessment process for our next audit cycle.
Growth-stage answer: Our third-party risk management program categorizes vendors into three tiers based on data access, system integration, and business criticality. Tier 1 vendors (access to customer data or production systems) undergo full security assessment before onboarding: SOC 2 review, security questionnaire, and architecture evaluation. Tier 2 vendors receive a reduced-scope assessment: SOC 2 review and security questionnaire, without the architecture evaluation. All tiered vendors are reassessed annually. Vendor assessments are tracked in <OneTrust/Vanta/Prevalent> with risk scores and remediation tracking. We monitor critical vendors for security incidents through <SecurityScorecard/BitSight> and have contractual requirements for breach notification. Our current vendor inventory includes N Tier 1 vendors, all with current assessments.
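The three-tier model in the growth-stage answer can be expressed as a simple classification rule. The function signature and scope lists below are a hedged sketch of the logic described above, not a standard taxonomy.

```python
# Illustrative vendor tiering rule for the three-tier TPRM model described
# above; parameter names and scope lists are assumptions for the sketch.
def vendor_tier(customer_data: bool, production_access: bool,
                integrated: bool, critical: bool) -> int:
    """Tier 1: touches customer data or production systems.
    Tier 2: system-integrated or business-critical. Tier 3: everything else."""
    if customer_data or production_access:
        return 1
    if integrated or critical:
        return 2
    return 3

ASSESSMENT_SCOPE = {
    1: ["SOC 2 review", "security questionnaire", "architecture evaluation"],
    2: ["SOC 2 review", "security questionnaire"],
    3: [],  # no formal pre-onboarding assessment below Tier 2 in this sketch
}
```

Encoding the rule once means every new vendor lands in a tier deterministically, rather than by ad hoc judgment during onboarding.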
Q: Describe your risk acceptance process. Who has authority to accept security risks?
Early-stage answer: Risk acceptance decisions are made by the CTO in consultation with our fractional CISO and documented in our risk register. Each accepted risk includes a written rationale explaining why acceptance is appropriate given our current stage, what compensating controls are in place, and under what conditions the decision should be revisited. We've formally accepted N risks, primarily related to capabilities that are on our roadmap but not yet implemented. Accepted risks are reviewed quarterly.
Growth-stage answer: Risk acceptance follows a defined process based on risk severity. Risks rated medium or below can be accepted by the Director of Security with documented rationale. High risks require CTO approval. Critical risks require Security Steering Committee approval with documented compensating controls. All accepted risks include an expiration date for re-evaluation, a description of compensating controls, and the conditions under which the risk must be re-assessed. The full register of accepted risks is reviewed quarterly by the Security Steering Committee and included in board reporting.
3. Asset Management (SIG Domain 5)
What the buyer is evaluating: Whether you know what you have. A company that can produce a current inventory of systems, data stores, and endpoints — and explain how data is classified — can protect what matters. A company that discovers assets during the questionnaire process is telling the buyer they'll discover more assets after the contract is signed. Shadow IT and untracked data stores are what keep TPRM analysts up at night.
Q: Do you maintain a current inventory of all information assets, including hardware, software, and data stores?
Early-stage answer: We maintain our asset inventory in <Notion/Confluence/spreadsheet>, covering cloud infrastructure (AWS/GCP accounts, services, and regions), SaaS applications used by the team, and endpoint devices. Cloud infrastructure is managed through Terraform, which serves as our source of truth for production assets. Endpoint devices are tracked through <Kandji/Iru/Jamf/manual inventory>. We perform quarterly reconciliation between our infrastructure-as-code definitions and running resources to identify drift or untracked assets. Our IaC-driven approach gives us strong coverage of production infrastructure without a formal CMDB. A dedicated CMDB is scoped for when our asset complexity outgrows this model.
Growth-stage answer: Our asset inventory is maintained in <Snipe-IT/ServiceNow/Oomnitza> and covers cloud infrastructure, SaaS applications, endpoint devices, and data stores. Cloud assets are managed through Terraform with drift detection via <Spacelift/Atlantis/CloudQuery>, ensuring our inventory matches actual deployed resources. SaaS applications are discovered and tracked through <Nudge Security/Productiv/CASB tool> to identify shadow IT. Endpoint devices are managed through Jamf (macOS) with MDM enrollment required for all corporate devices. The inventory is updated automatically through integrations and reconciled quarterly. Each asset has an assigned owner and classification level.
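The quarterly reconciliation both answers describe is, at its core, a set difference between declared and observed assets. A minimal sketch, with fabricated resource IDs; real inputs would come from Terraform state and a cloud inventory API.

```python
# Sketch of IaC-vs-reality reconciliation. "Untracked" resources are running
# but absent from IaC (possible shadow infrastructure); "missing" resources
# are declared but not running (possible manual deletion or drift).
def reconcile(declared: set, running: set) -> dict:
    return {
        "untracked": running - declared,
        "missing": declared - running,
    }

# Fabricated example inputs.
declared = {"aws_s3.logs", "aws_rds.primary", "aws_ec2.worker-1"}
running = {"aws_s3.logs", "aws_rds.primary", "aws_ec2.worker-2"}
drift = reconcile(declared, running)
```

Either non-empty set is a finding: untracked resources bypass review and tagging, and missing resources suggest changes made outside CI/CD.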
Q: How does your organization classify data, and what controls are applied at each classification level?
Early-stage answer: We use a three-tier data classification scheme: Public, Internal, and Confidential. Customer data is classified as Confidential by default. Classification determines encryption requirements (Confidential data encrypted at rest and in transit), access controls (Confidential data restricted to roles with business need), and retention policies. Our classification scheme is documented in our data handling policy and applied during architecture reviews for new features. We're working on implementing automated data classification tagging in our primary data stores.
Growth-stage answer: Our data classification framework defines four tiers: Public, Internal, Confidential, and Restricted. Each tier has defined controls for encryption, access, retention, logging, and disposal. Customer PII and authentication credentials are classified as Restricted, with field-level encryption in addition to storage-level encryption, access limited to specific service accounts with audit logging, and 90-day access reviews. Classification is applied at the schema level in our primary data stores and enforced through IAM policies. Data classification is part of our design review process for new features, and our DLP tooling <Nightfall/BigID> monitors for classification policy violations in production.
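The tier-to-controls mapping in the growth-stage answer can be made concrete as a lookup table. The field names and specific values below are illustrative assumptions that follow the prose (e.g., 90-day reviews and field-level encryption for Restricted), not a complete control set.

```python
# Illustrative classification-to-controls mapping. Field names and values
# are assumptions sketched from the answer above.
CONTROLS = {
    "public":       {"encrypt_at_rest": False, "audit_logging": False},
    "internal":     {"encrypt_at_rest": True,  "audit_logging": False},
    "confidential": {"encrypt_at_rest": True,  "audit_logging": True,
                     "access_review_days": 180},
    "restricted":   {"encrypt_at_rest": True,  "audit_logging": True,
                     "access_review_days": 90, "field_level_encryption": True},
}

def required_controls(tier: str) -> dict:
    """Look up the control baseline for a classification tier."""
    return CONTROLS[tier.lower()]
```

A table like this is what turns classification from a labeling exercise into an enforceable policy: design reviews check new features against the row, not against memory.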
Q: How do you identify and manage shadow IT within your organization?
Early-stage answer: We use Google Workspace admin reporting to identify SaaS applications authenticated through Google SSO, which covers the majority of tools adopted by our team. New tool adoption requires approval through our IT channel in Slack. We acknowledge that our shadow IT detection isn't fully automated — our primary control is SSO enforcement, which means tools that don't support SSO are either approved exceptions with documented justification or flagged for removal during quarterly access reviews.
Growth-stage answer: Shadow IT discovery is managed through <Nudge Security/Productiv>, which monitors SaaS application usage by analyzing authentication events and email-based signups across the organization. New applications are automatically flagged for security review before data is shared. We enforce SSO for all applications that support it, and our CASB integration provides visibility into unsanctioned cloud service usage. Quarterly SaaS audits reconcile discovered applications against our approved vendor list, and unapproved tools with access to company data are either onboarded through our TPRM process or decommissioned.
4. Access Control (SIG Domain 6)
What the buyer is evaluating: Whether access to their data is controlled structurally or depends on someone remembering to revoke a permission. Buyers probe the full lifecycle: how access is granted, how it's scoped, how it's reviewed, and how quickly it's removed when someone leaves or changes roles. MFA and SSO are baseline expectations — nobody gets credit for those anymore. The real test is whether access reviews actually result in access being removed, or whether they're a quarterly exercise that produces a spreadsheet and no revocations.
Q: Describe your identity and access management (IAM) program, including how access is provisioned and deprovisioned.
Early-stage answer: Access provisioning follows a documented process: new employees receive role-based access through our identity provider (<Okta/Google Workspace>) based on their department and function. Access requests beyond the baseline role require manager approval via <Linear ticket/Slack workflow>. Deprovisioning is triggered by our HR system — when an employee is offboarded, IT disables their identity provider account within 24 hours, which cascades SSO-connected application access. For applications not connected to SSO, we maintain a deprovisioning checklist that's executed manually. We're implementing SCIM provisioning for our top 10 SaaS applications to automate this further.
Growth-stage answer: Our IAM program is built on <Okta/Azure AD> as the central identity provider, with SCIM provisioning and deprovisioning for N integrated applications. Access is provisioned based on role definitions mapped to job functions, with access beyond the standard role requiring manager approval and security team review through our <ServiceNow/Jira Service Management> workflow. Deprovisioning is automated: when HR processes a termination in <Workday/BambooHR>, our identity provider disables the account within 1 hour, and SCIM propagates deprovisioning to all connected applications. Accounts are verified disabled within 24 hours through automated checks. Access provisioning and deprovisioning events are logged and auditable.
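The "verified disabled within 24 hours" check in the growth-stage answer can be sketched as a small audit function. Timestamps and account records are fabricated; in practice the inputs would come from the HRIS and the identity provider's API.

```python
# Sketch of the automated deprovisioning-SLA check: flag terminated
# accounts still enabled past the 24-hour window. Example data is fabricated.
from datetime import datetime, timedelta

DEPROVISION_SLA = timedelta(hours=24)

def overdue_accounts(terminations: list, now: datetime) -> list:
    """Return accounts not yet disabled past the deprovisioning SLA."""
    return [
        t["account"]
        for t in terminations
        if not t["disabled"] and now - t["terminated_at"] > DEPROVISION_SLA
    ]

now = datetime(2025, 1, 10, 12, 0)
terminations = [
    {"account": "alice", "terminated_at": datetime(2025, 1, 8, 9, 0), "disabled": False},
    {"account": "bob",   "terminated_at": datetime(2025, 1, 10, 9, 0), "disabled": False},
    {"account": "carol", "terminated_at": datetime(2025, 1, 7, 9, 0), "disabled": True},
]
flagged = overdue_accounts(terminations, now)
```

Running a check like this on a schedule is what makes "deprovisioning is automated" auditable rather than asserted.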
Q: How is multi-factor authentication (MFA) implemented and enforced across your organization?
Early-stage answer: MFA is required for all employees on all systems that support it. We enforce MFA at the identity provider level (<Okta/Google Workspace>), which covers SSO-connected applications. For our cloud infrastructure (AWS/GCP), MFA is required for console access and enforced through IAM policies. We require phishing-resistant MFA methods (hardware keys or authenticator apps). SMS-based MFA is disabled. For customer-facing authentication, we offer MFA as an option and support enforcement at the organization level for enterprise customers.
Growth-stage answer: MFA is enforced organization-wide through our identity provider with conditional access policies that require phishing-resistant authentication (WebAuthn/FIDO2 hardware keys) for access to production systems and sensitive applications. All employees are issued YubiKeys during onboarding. Cloud infrastructure access (AWS) requires MFA for all console and CLI operations, enforced through IAM policies that deny actions without MFA context. Our customer-facing product supports MFA with TOTP and WebAuthn, with organization-level enforcement available for enterprise customers. MFA adoption is tracked and reported — current enforcement coverage is 100% of employees and N% of enterprise customer organizations.
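The "IAM policies that deny actions without MFA context" referenced above follow a well-known AWS pattern built on the documented `aws:MultiFactorAuthPresent` condition key. The sketch below builds that policy as a Python dict; the `NotAction` allow-list is a trimmed illustration and should be scoped to your environment before real use.

```python
# Minimal sketch of the AWS "deny without MFA" policy pattern, using the
# documented aws:MultiFactorAuthPresent condition key. The NotAction list
# here is an illustrative subset, not a production policy.
import json

DENY_WITHOUT_MFA = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllExceptListedIfNoMFA",
        "Effect": "Deny",
        "NotAction": [
            "iam:ListVirtualMFADevices",
            "iam:EnableMFADevice",
            "sts:GetSessionToken",
        ],
        "Resource": "*",
        # BoolIfExists treats requests with no MFA context at all (e.g.
        # long-lived access keys) as "false", so those are denied too.
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
    }],
}
policy_json = json.dumps(DENY_WITHOUT_MFA, indent=2)
```

The `BoolIfExists` operator is the detail analysts look for: a plain `Bool` check silently passes requests that carry no MFA context key at all.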
Q: Describe your privileged access management (PAM) approach for production systems.
Early-stage answer: Production system access is restricted to senior engineers and operations staff — N people currently have standing access. We use <AWS IAM roles/GCP service accounts> with scoped permissions rather than shared credentials. SSH access to production instances requires key-based authentication and is logged through <CloudTrail or equivalent>. Our next step is replacing standing access with just-in-time access through an approval workflow — we're scoping this for QYY, which will limit standing access to on-call engineers only.
Growth-stage answer: Privileged access to production follows a just-in-time model managed through <Teleport/StrongDM/AWS SSM>. Engineers request production access through a workflow that requires approval and grants time-limited credentials (maximum 4-hour sessions). No engineer has standing SSH or database access to production systems. Emergency access ("break glass") follows a separate process with mandatory post-access review within 24 hours. All privileged sessions are logged with full command auditing. Service accounts use short-lived credentials through <AWS STS/workload identity>. Privileged access patterns are reviewed monthly, and any standing access is flagged for conversion to JIT.
Q: How frequently are user access reviews conducted, and what is the process for removing unnecessary access?
Early-stage answer: We conduct access reviews quarterly for production systems and critical SaaS applications. Reviews are performed by team leads who verify that each team member's access is appropriate for their current role. Access that's no longer needed is revoked within 5 business days of review completion. We track review completion and revocation in <spreadsheet/Notion>. Our most recent review in MYY resulted in N access revocations, primarily from role changes and employees who had accumulated access beyond their current needs.
Growth-stage answer: Access reviews are conducted quarterly for all systems and applications, with monthly reviews for privileged access and production systems. Reviews are managed through <Vanta/ConductorOne/Zluri>, which automates the review workflow: managers receive a list of their reports' access, certify or revoke each entitlement, and revocations are executed automatically through SCIM. Reviews must be completed within 10 business days, and completion is tracked at the management level. Our last quarterly review cycle covered N entitlements across Y applications, resulting in Z revocations. Access review metrics are reported to the Security Steering Committee.
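As the section intro notes, the test of a review is whether it produces revocations. A minimal sketch of the metrics a review cycle should emit, with fabricated decision data:

```python
# Sketch of review-cycle metrics: a review that certifies 100% of
# entitlements with zero revocations is itself a red flag.
from collections import Counter

def review_metrics(decisions: list) -> dict:
    """decisions: one of 'certify' or 'revoke' per reviewed entitlement."""
    counts = Counter(decisions)
    total = len(decisions)
    return {
        "entitlements_reviewed": total,
        "revoked": counts["revoke"],
        "revocation_rate": round(counts["revoke"] / total, 3) if total else 0.0,
    }

# Fabricated example: 200 entitlements reviewed, 20 revoked.
decisions = ["certify"] * 180 + ["revoke"] * 20
metrics = review_metrics(decisions)
```

Reporting the revocation rate per cycle, as the growth-stage answer does to the Security Steering Committee, is what distinguishes a working review from a spreadsheet exercise.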
Q: How do you enforce the principle of least privilege across your environment?
Early-stage answer: We implement least privilege through role-based access control in our identity provider, scoped IAM roles in AWS, and application-level RBAC with N defined roles. New employees receive the minimum access needed for their role, and additional access requires a documented request. In our cloud infrastructure, we use Terraform-managed IAM policies scoped to specific services and resources rather than broad administrative permissions. We acknowledge that least-privilege enforcement is easier in our cloud environment than in our SaaS application stack, where some tools have limited role granularity.
Growth-stage answer: Least privilege is enforced at multiple layers. Identity provider groups map to role-based access in each integrated application, with role definitions reviewed semi-annually. AWS IAM follows a deny-by-default model: policies are scoped to specific resources and actions, managed through Terraform, and validated with <IAM Access Analyzer/CloudSploit> for overly permissive configurations. Database access is restricted to application service accounts with query-level permissions; engineers access production data through audited tooling, not direct database connections. API tokens issued to third-party integrations are scoped to named permissions and expire after 12 months. We run quarterly access pattern analysis to identify permissions that are provisioned but unused.
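The quarterly provisioned-vs-used analysis mentioned above reduces to a per-principal set difference between granted permissions and permissions observed in access logs. Permission names below are fabricated examples.

```python
# Sketch of the unused-permission analysis: diff what each principal is
# granted against what access logs show it actually used in the window.
def unused_permissions(granted: dict, used: dict) -> dict:
    """Per principal, permissions granted but never exercised."""
    return {
        principal: perms - used.get(principal, set())
        for principal, perms in granted.items()
        if perms - used.get(principal, set())
    }

# Fabricated example inputs.
granted = {"svc-billing": {"s3:GetObject", "s3:PutObject", "kms:Decrypt"}}
used = {"svc-billing": {"s3:GetObject", "kms:Decrypt"}}
candidates = unused_permissions(granted, used)
```

Each result is a revocation candidate, not an automatic revocation; some permissions are legitimately used rarely (e.g., disaster recovery paths), which is why the output feeds a review rather than an automated strip.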
5. Cryptography (SIG Domain 7)
What the buyer is evaluating: Whether encryption is a real control or a checkbox. Saying "AES-256 at rest, TLS in transit" is the minimum — the buyer's analyst has read that sentence a thousand times. What separates credible answers is specificity about key management, key rotation practices, and what happens when a key is compromised. Certificate management failures cause outages that make the news, so buyers increasingly ask about lifecycle management too.
Q: Describe your encryption standards for data at rest and data in transit.
Early-stage answer: Data in transit is encrypted using TLS 1.2 or higher for all external communications, enforced at the load balancer level with managed certificates through AWS Certificate Manager. We've disabled TLS 1.0 and 1.1. Data at rest is encrypted using AES-256 through AWS-managed encryption: RDS instances use AWS-managed keys, S3 buckets enforce server-side encryption with SSE-S3 or SSE-KMS depending on the data classification, and EBS volumes are encrypted by default. For sensitive fields (API keys, credentials stored on behalf of customers), we apply application-layer encryption using AWS KMS customer-managed keys in addition to storage-level encryption.
Growth-stage answer: All data in transit is encrypted with TLS 1.2+ enforced across external and internal service-to-service communications. Our TLS configuration follows Mozilla's "Modern" compatibility profile, and we perform quarterly scans with SSL Labs to verify configuration. Data at rest is encrypted using AES-256 across all storage services. We use AWS KMS customer-managed keys (CMKs) for all production data stores, with separate keys per data classification tier. Restricted data (customer PII, credentials) receives application-layer encryption using envelope encryption through KMS, ensuring that storage-level access alone cannot expose plaintext. Encryption standards are documented in our cryptography policy, reviewed annually, and validated during pen test engagements.
Q: Describe your encryption key management practices, including key rotation.
Early-stage answer: Encryption keys are managed through AWS KMS. We use customer-managed keys (CMKs) for production databases and sensitive data stores, with key policies restricting administrative access to N authorized personnel. KMS key usage is logged through CloudTrail. We have automatic annual rotation enabled for our KMS CMKs. Application-level secrets encryption keys follow the same rotation schedule. We don't yet have a documented emergency key rotation procedure, but key compromise is covered in our incident response plan as a scenario requiring immediate rotation and re-encryption.
Growth-stage answer: Key management is centralized through AWS KMS with customer-managed keys organized by data classification and service boundary. Key policies enforce separation of duties — key administrators cannot use keys for cryptographic operations, and key users cannot modify key policies. Automatic rotation is enabled on an annual cycle for all CMKs. Our emergency key rotation procedure is documented and has been tested: it covers immediate key rotation, re-encryption of affected data, revocation of grants, and communication to affected customers. Key usage is monitored through CloudTrail with alerts for anomalous access patterns (access from unexpected roles, high-volume decryption requests). Certificate lifecycle is managed through AWS Certificate Manager with automated renewal and expiration alerting at 30, 14, and 7 days.
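Even with automatic rotation enabled, a periodic audit that every key has actually rotated within the annual cycle is cheap and closes the gap between configuration and verification. A sketch with fabricated key records; real input would come from the KMS API's rotation status.

```python
# Illustrative key-age audit: flag customer-managed keys whose last
# rotation exceeds the annual cycle. Key records are fabricated examples.
from datetime import datetime, timedelta

MAX_KEY_AGE = timedelta(days=365)

def overdue_keys(keys: list, now: datetime) -> list:
    """Return key IDs whose last rotation is older than the allowed age."""
    return [k["key_id"] for k in keys if now - k["last_rotated"] > MAX_KEY_AGE]

now = datetime(2025, 6, 1)
keys = [
    {"key_id": "cmk-prod-db", "last_rotated": datetime(2024, 7, 1)},
    {"key_id": "cmk-restricted", "last_rotated": datetime(2023, 5, 1)},
]
stale = overdue_keys(keys, now)
```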
Q: How do you manage TLS certificate lifecycle, including renewal and expiration monitoring?
Early-stage answer: We use AWS Certificate Manager (ACM) for TLS certificates on our public-facing endpoints, which handles automatic renewal. For certificates that can't be managed through ACM (third-party integrations, internal services), we track expiration dates in <spreadsheet/PagerDuty> and have calendar reminders set for 30 days before expiration. We haven't had a certificate-related outage, but we recognize that our manual tracking for non-ACM certificates is a gap. We're evaluating cert-manager for our Kubernetes workloads to automate internal certificate lifecycle.
Growth-stage answer: TLS certificate lifecycle is automated across our infrastructure. Public-facing certificates are managed through AWS Certificate Manager with automatic renewal. Internal service-to-service certificates are managed through <cert-manager on Kubernetes/HashiCorp Vault PKI>, with automated issuance and short-lived certificates (72-hour TTL) that rotate automatically. Certificate expiration monitoring is implemented in Datadog with alerts at 30, 14, and 7 days for any certificate in our inventory. We maintain a certificate inventory that's reconciled monthly against discovered certificates through network scanning. Our last certificate audit identified N certificates under management, all with automated renewal or tracked manual renewal processes.
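The 30/14/7-day alerting described above is a simple threshold check over a certificate inventory. A sketch with fabricated certificate dates:

```python
# Sketch of tiered expiration alerting: return the tightest threshold a
# certificate has crossed, or None if no alert is due. Dates are fabricated.
from datetime import date

THRESHOLDS = (30, 14, 7)  # days before expiry at which alerts fire

def alert_level(expires: date, today: date):
    """Return the tightest crossed threshold in days, or None."""
    days_left = (expires - today).days
    crossed = [t for t in THRESHOLDS if days_left <= t]
    return min(crossed) if crossed else None

today = date(2025, 3, 1)
level_api = alert_level(date(2025, 3, 11), today)  # 10 days left
level_web = alert_level(date(2025, 5, 1), today)   # comfortably far out
```

Escalating severity as the tighter thresholds are crossed (e.g., Slack at 30 days, page at 7) is a common way to make the final alert impossible to ignore.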
6. Physical & Environmental Security (SIG Domain 8)
What the buyer is evaluating: For cloud-native SaaS companies, this section is primarily about whether you've delegated physical security appropriately and can articulate that delegation clearly. Buyers know you're not running your own data center — but they want to confirm you understand the shared responsibility model and can point to your cloud provider's physical security controls. Office security matters less than it used to, but if your team handles customer data from laptops in coffee shops with no endpoint management, that's worth knowing.
Q: Describe the physical security controls for the facilities where customer data is stored and processed.
Early-stage answer: Customer data is stored and processed entirely in AWS <region(s)>. We do not operate any physical data centers or co-location facilities. Physical security for our infrastructure is provided by AWS, whose data center controls are documented in their SOC 2 Type II report and include biometric access controls, 24/7 security staffing, video surveillance, and environmental controls. Our AWS SOC 2 report is available for review. Our team operates <remotely/from an office at location>, and no customer data is stored on local devices or on-premises servers.
Growth-stage answer: All customer data is stored and processed in AWS <regions>, operating under the AWS shared responsibility model. AWS's physical security controls — biometric access, mantrap entry, 24/7 monitoring, environmental controls, and media destruction — are documented in their SOC 2 Type II and ISO 27001 certifications, which we review annually as part of our vendor risk management program. We do not maintain any on-premises infrastructure that stores or processes customer data. Our corporate office <location> has badge-controlled access, visitor logging, and clean-desk policies, but is not in scope for customer data processing. All production access is performed from managed endpoints with full-disk encryption and MDM enrollment.
Q: How are employee endpoints secured, particularly for remote workers?
Early-stage answer: All company-issued devices run macOS with <Kandji/Jamf> for device management. Policies enforced through MDM include full-disk encryption (FileVault), automatic OS updates, screen lock after 5 minutes, and firewall enabled. Remote employees connect to internal tools through SSO with MFA — we don't use a traditional VPN for application access. Endpoint detection is provided through <SentinelOne/CrowdStrike>. We require company-issued devices for accessing production systems and customer data; personal devices are limited to communication tools (Slack, email).
Growth-stage answer: All employees use company-managed devices enrolled in Jamf (macOS) with enforced security policies: FileVault encryption, automatic patching within 72 hours of release, screen lock, firewall, and Gatekeeper enabled. EDR is deployed through CrowdStrike Falcon on all endpoints with 24/7 monitoring and alerting. Remote access to internal applications is managed through <Cloudflare Access/Zscaler> with device posture checks — access is denied if the device fails compliance checks (unencrypted disk, missing EDR, outdated OS). Production infrastructure access requires additional device attestation through our PAM solution. BYOD is not permitted for systems that handle customer data. Device compliance status is reported weekly, and non-compliant devices are quarantined within 24 hours.
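The device posture gate described above (deny access if the disk is unencrypted, EDR is missing, or the OS is outdated) can be sketched as a compliance check. Field names and the minimum OS version are assumptions for illustration.

```python
# Illustrative device posture check mirroring the compliance gates above.
# Field names and the minimum OS version are assumptions.
REQUIRED = {"disk_encrypted": True, "edr_running": True}
MIN_OS = (14, 0)  # assumed minimum (major, minor) version

def posture_failures(device: dict) -> list:
    """Return the list of failed checks; an empty list means compliant."""
    failures = [k for k, v in REQUIRED.items() if device.get(k) is not v]
    if tuple(device.get("os_version", (0, 0))) < MIN_OS:
        failures.append("outdated_os")
    return failures

# Fabricated example: EDR agent stopped and OS behind the baseline.
device = {"disk_encrypted": True, "edr_running": False, "os_version": (13, 6)}
failures = posture_failures(device)
```

In the access flow described above, a non-empty failure list would deny the connection and queue the device for quarantine.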
7. Operations Security (SIG Domain 10)
What the buyer is evaluating: Whether you'd know if something went wrong, and how fast you'd respond. This category reveals the gap between "we have security" and "we operate security." The buyer's analyst looks for evidence of active monitoring (not just log storage), defined vulnerability remediation timelines (not just scanning), and change management that prevents a single bad deploy from reaching production. Companies that describe tools without describing processes fail this section consistently.
Q: Describe your security monitoring and SIEM implementation.
Early-stage answer: We aggregate logs from our cloud infrastructure (AWS CloudTrail, VPC Flow Logs), application layer, and identity provider into <Datadog/CloudWatch Logs>. We've configured security-specific alerts for critical events: failed authentication spikes, IAM policy changes, production database access outside of normal patterns, and infrastructure changes outside of CI/CD. Alerts route to a dedicated Slack channel monitored by the engineering on-call rotation. We don't have a traditional SIEM deployment — our current approach prioritizes high-signal alerting over broad log analysis. We're evaluating whether to implement a SIEM or continue building on our observability platform as we scale.
Growth-stage answer: Security monitoring is centralized in <Panther/Datadog Security Monitoring/Elastic Security>, ingesting logs from AWS CloudTrail, VPC Flow Logs, application audit logs, identity provider events, endpoint detection alerts, and CI/CD pipeline activity. We maintain N active detection rules tuned to our environment, covering authentication anomalies, privilege escalation attempts, data exfiltration indicators, infrastructure drift, and suspicious API access patterns. Detection rules are reviewed and updated quarterly based on threat intelligence and pen test findings. Alerts are triaged by severity with defined response SLAs: critical alerts trigger the on-call security engineer within 15 minutes, high-severity alerts within 1 hour. We track mean time to detect (MTTD) and mean time to respond (MTTR) as operational metrics reported monthly.
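For teams implementing the alert-SLA tracking described above, the core logic is small. A minimal Python sketch with assumed severity names and response windows — the `Alert` shape is hypothetical, not any particular SIEM's API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Assumed severity-to-response-SLA mapping, mirroring the targets above.
RESPONSE_SLA = {
    "critical": timedelta(minutes=15),
    "high": timedelta(hours=1),
    "medium": timedelta(hours=8),
}

@dataclass
class Alert:
    rule: str
    severity: str
    fired_at: datetime
    acked_at: Optional[datetime] = None  # set when on-call acknowledges

def response_deadline(alert: Alert) -> datetime:
    """Deadline by which the on-call engineer must acknowledge the alert."""
    return alert.fired_at + RESPONSE_SLA[alert.severity]

def within_sla(alert: Alert, now: datetime) -> bool:
    """True if the alert was (or can still be) acknowledged inside its SLA."""
    checked_at = alert.acked_at or now
    return checked_at <= response_deadline(alert)
```

Tracking MTTD/MTTR then reduces to averaging `acked_at - fired_at` over closed alerts per reporting period.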
Q: Describe your vulnerability management process, including scanning frequency and remediation timelines.
Early-stage answer: We scan our infrastructure weekly using <AWS Inspector/Qualys> and run dependency vulnerability scanning on every pull request through <Snyk/Dependabot/GitHub Advanced Security>. Container images are scanned at build time. Our remediation SLAs are: critical vulnerabilities within 72 hours, high within 2 weeks, medium within 30 days. Findings are tracked in <Jira/Linear> and included in sprint planning. We supplement automated scanning with annual penetration testing that covers application-layer vulnerabilities scanners miss. Our current critical/high vulnerability backlog is N items, all within SLA.
Growth-stage answer: Vulnerability management covers infrastructure (AWS Inspector, weekly scans), application dependencies (Snyk in CI/CD on every PR), container images (Trivy at build and runtime), cloud configuration (<Prowler/CloudSploit>, daily), and web applications (quarterly DAST scans with <Burp Suite Enterprise/StackHawk>). Findings are automatically triaged by exploitability and business context using <Snyk risk scoring/EPSS data>, not CVSS alone. Remediation SLAs: critical within 48 hours, high within 7 days, medium within 30 days, low within 90 days. SLA compliance is tracked and reported monthly — our current 12-month compliance rate is N%. Findings that exceed SLA require risk acceptance from the Director of Security. Vulnerability metrics are included in our quarterly security review and board reporting.
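The remediation SLAs above are easy to enforce mechanically. A minimal sketch, assuming the growth-stage windows and a simple finding record (field names are illustrative):

```python
from datetime import date, timedelta

# Remediation windows mirroring the growth-stage SLAs above (assumed values).
REMEDIATION_SLA_DAYS = {"critical": 2, "high": 7, "medium": 30, "low": 90}

def remediation_deadline(severity: str, found: date) -> date:
    """Date by which a finding of this severity must be closed."""
    return found + timedelta(days=REMEDIATION_SLA_DAYS[severity])

def sla_compliance_rate(findings: list) -> float:
    """Share of closed findings remediated on or before their deadline.

    Each finding: {"severity": ..., "found": date, "closed": date}.
    """
    if not findings:
        return 1.0
    on_time = sum(
        f["closed"] <= remediation_deadline(f["severity"], f["found"])
        for f in findings
    )
    return on_time / len(findings)
```

The same deadline function drives both the "within SLA" backlog claim in the early-stage answer and the monthly compliance percentage in the growth-stage one.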
Q: Describe your change management process for production systems.
Early-stage answer: All production changes go through our CI/CD pipeline (GitHub Actions), which requires a pull request with at least one reviewer approval before merging. Branch protection prevents direct pushes to main. Infrastructure changes are managed through Terraform with the same PR and review process. The CI pipeline runs automated tests, SAST scanning (Semgrep/CodeQL), and dependency checks before merge is permitted. Deployments to production are automated on merge to main — there are no manual deployment paths. Emergency changes follow the same process but with an expedited review from the on-call engineer, and a post-deploy review within 24 hours.
Growth-stage answer: Production changes follow a defined change management process with three categories: standard (pre-approved, low-risk), normal (requires review and approval), and emergency (expedited with mandatory post-implementation review). All code changes require pull request review from at least two engineers, with changes touching authentication, authorization, or data access requiring security team review. Infrastructure changes are managed through Terraform with plan output reviewed before apply. Deployment is automated through <GitHub Actions/CircleCI/GitLab CI> with progressive rollout (canary deployments for critical services). Change logs are maintained automatically, and changes are correlated with monitoring data to enable rapid rollback. Emergency changes require on-call lead approval and a documented review within 24 hours. Change management metrics (frequency, failure rate, time to recovery) are tracked as operational indicators.
Q: What is your patching cadence for operating systems, applications, and third-party dependencies?
Early-stage answer: Our infrastructure runs on managed AWS services (RDS, ECS/EKS, Lambda) where OS patching is handled by AWS or through automated container image rebuilds. Container base images are rebuilt weekly with the latest security patches. Third-party dependency updates are monitored through Dependabot and Snyk, with security-relevant updates prioritized in our remediation SLAs (critical within 72 hours). Application-level patches are deployed through our standard CI/CD pipeline. Endpoint devices receive OS patches automatically through MDM within 7 days of release, with critical patches pushed within 48 hours.
Growth-stage answer: Patching follows a tiered cadence based on risk. Container base images are rebuilt weekly from updated upstream images, with critical CVE patches triggering immediate rebuilds. Managed services (RDS, EKS) follow AWS maintenance windows with testing in staging first. Third-party dependencies are updated automatically for patch versions through Renovate Bot, with minor and major updates reviewed by engineering within their sprint cycle. Security-critical dependency patches follow our vulnerability SLAs (48 hours for critical). Endpoint OS patches are deployed through Jamf within 72 hours of release, with zero-day patches pushed immediately. Patch compliance is measured weekly — our current compliance rate is N% for critical patches within SLA. Exceptions require documented risk acceptance.
8. Application Security (SIG Domain 12)
What the buyer is evaluating: Whether a single developer can push a vulnerability to production, and how many layers exist to prevent that. Buyers look for evidence that security is built into the development process, not bolted on at the end. They're particularly interested in automated scanning (and what happens when it finds something), code review practices, dependency management, and whether security testing goes beyond annual pen tests. The absence of specific tooling and process details signals a program that exists on paper.
Q: Describe your Secure Software Development Lifecycle (SSDLC), including how security is integrated into each phase.
Early-stage answer: Security is integrated into our development process at four checkpoints. During design, new features that touch authentication, authorization, or data handling go through a security review with the CTO or a senior engineer. During development, engineers follow our secure coding guidelines, which cover OWASP ASVS L1 prevention patterns for our stack (<language/framework>). During CI, automated checks run SAST (Semgrep/CodeQL), dependency scanning (Snyk/Dependabot), and secrets detection (GitLeaks/TruffleHog) on every pull request. During deployment, branch protection ensures all CI checks pass before merge. We supplement these automated controls with annual penetration testing for findings that tooling can't catch.
Growth-stage answer: Our SSDLC integrates security across five phases. Threat modeling is performed during design for features involving new data flows, trust boundary changes, or authentication/authorization modifications — we use a lightweight threat model template based on STRIDE. Development follows secure coding standards documented per language, with security-focused training conducted annually for all engineers. CI pipelines enforce SAST (Semgrep with custom rules for our codebase), SCA (Snyk), secrets detection (GitLeaks), and IaC scanning (Checkov for Terraform). Pre-production, DAST runs against staging for changes touching API endpoints. Security findings from any phase block deployment until triaged — critical and high findings must be resolved, medium findings can be risk-accepted with a documented timeline. Our AppSec engineer reviews all findings weekly and tunes tooling to reduce false positives.
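The deploy-blocking rule described above — critical and high findings must be resolved, medium can be risk-accepted with a timeline — can be sketched in a few lines. This is an illustration, not any particular CI system's API; the finding shape is assumed:

```python
# Hypothetical finding shape: {"severity": "...", "risk_accepted": bool}
BLOCKING = {"critical", "high"}

def gate(findings: list) -> bool:
    """Return True if the pipeline may proceed to deploy."""
    for f in findings:
        if f["severity"] in BLOCKING:
            return False  # must be resolved, never waived
        if f["severity"] == "medium" and not f.get("risk_accepted"):
            return False  # medium needs documented risk acceptance
    return True  # low/informational findings never block
```

Encoding the policy as code (rather than reviewer judgment) is what makes the "findings block deployment until triaged" claim auditable.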
Q: How do you perform code review, and are there specific security review requirements?
Early-stage answer: All code changes require at least one reviewer approval before merging. Pull requests include automated checks (SAST, dependency scanning, tests) that must pass. We don't have a separate security review requirement for all PRs, but changes touching authentication, authorization, data access patterns, or API endpoints are flagged for review by a senior engineer with security awareness. Our code review checklist includes security-relevant items: input validation, authorization checks, data exposure in API responses, and logging of sensitive actions. We're working on formalizing which change types require mandatory security-focused review.
Growth-stage answer: Code review is mandatory for all changes with a minimum of two reviewers, one of whom must be a senior engineer. Changes are automatically tagged by risk level using <Danger/custom GitHub Actions> based on files modified: changes to authentication, authorization, cryptography, data access, or API surface area are flagged as security-relevant and require review from a security-trained engineer or the AppSec team. Security review focuses on authorization enforcement, input validation, data exposure, and adherence to our secure coding standards. The AppSec engineer performs weekly reviews of recently merged security-relevant PRs to catch patterns that individual reviews might miss. Code review completion and security review coverage are tracked metrics.
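Auto-tagging changes by the files they touch needs little more than path matching. A sketch with assumed path globs — the real patterns depend entirely on your repository layout:

```python
import fnmatch

# Assumed sensitive-path globs for an illustrative repo layout.
SECURITY_PATHS = [
    "src/auth/*", "src/authz/*", "src/crypto/*",
    "src/api/*", "migrations/*",
]

def needs_security_review(changed_files: list) -> bool:
    """True if any changed file falls under a security-relevant path."""
    return any(
        fnmatch.fnmatch(path, pattern)
        for path in changed_files
        for pattern in SECURITY_PATHS
    )
```

In practice this runs in CI against the PR's diff and applies a label that branch protection rules can require an AppSec approval for.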
Q: Describe your approach to application security testing, including SAST, DAST, and penetration testing.
Early-stage answer: We run SAST (<Semgrep/CodeQL>) on every pull request, configured with rulesets relevant to our language and framework. Dependency scanning through <Snyk/GitHub Advanced Security> checks for known vulnerabilities in third-party libraries on every PR. We conduct annual penetration testing through <named firm>, scoped to our product API, authentication flows, and multi-tenant boundaries. Our most recent pen test was completed in MMYY, with all critical and high findings remediated and retested. We don't yet run regular DAST scans but are evaluating <OWASP ZAP/Burp Suite Enterprise> for integration into our CI pipeline.
Growth-stage answer: Application security testing operates at three layers. SAST (Semgrep with custom rules and CodeQL) runs on every PR, with findings triaged by the AppSec engineer weekly. SCA (Snyk) monitors dependencies continuously with automated PR generation for security updates. DAST (<Burp Suite Enterprise/StackHawk>) runs against staging on a weekly schedule and on-demand before major releases, covering OWASP ASVS L2 categories and our custom test cases for business logic. Penetration testing is conducted annually by <named firm>, with scope covering API endpoints, authentication and authorization flows, multi-tenant isolation, and cloud configuration. We also run a private bug bounty program through <HackerOne/Bugcrowd> that provides continuous external testing coverage. Findings from all sources are centralized in <Jira/DefectDojo> with unified remediation tracking.
Q: How do you manage third-party dependencies and open-source software risks?
Early-stage answer: Dependencies are managed through our language-specific package managers with lockfiles committed to version control. <Snyk/Dependabot/GitHub Advanced Security> scans for known vulnerabilities on every pull request and alerts on new vulnerabilities in existing dependencies. Critical dependency vulnerabilities follow our standard remediation SLAs (72 hours). We pin dependency versions and review major version upgrades before adopting them. We don't yet generate a formal Software Bill of Materials (SBOM), but our lockfiles and scanning tooling provide equivalent visibility into our dependency tree.
Growth-stage answer: Third-party dependency management includes automated vulnerability scanning (Snyk) on every PR and continuous monitoring of deployed dependencies. Dependencies are version-pinned with lockfiles, and updates are managed through Renovate Bot with automated PRs for patch versions and human review for minor/major versions. License compliance is checked through <e.g. FOSSA/Snyk License Compliance> to ensure we don't introduce dependencies with incompatible licenses. We generate SBOMs in CycloneDX format on every production build, available to customers on request. Our dependency review process evaluates new dependencies for maintenance health (last update, maintainer count, known vulnerabilities) before adoption. We maintain an approved-dependencies list for security-critical areas (cryptography, authentication libraries) to prevent developers from introducing unvetted alternatives.
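A dependency health check like the one described can be a simple gate run before adoption. The thresholds below are assumptions for illustration, not recommendations:

```python
from datetime import date

# Assumed thresholds; tune to your risk appetite.
MAX_DAYS_SINCE_RELEASE = 365
MIN_MAINTAINERS = 2

def dependency_health(last_release: date, maintainers: int,
                      open_critical_cves: int, today: date) -> list:
    """Return the list of reasons a candidate dependency fails review.

    An empty list means the dependency passes the automated checks
    (human review still applies for security-critical areas).
    """
    reasons = []
    if (today - last_release).days > MAX_DAYS_SINCE_RELEASE:
        reasons.append("stale: no release in the last year")
    if maintainers < MIN_MAINTAINERS:
        reasons.append("bus factor: fewer than two maintainers")
    if open_critical_cves:
        reasons.append("open critical CVEs")
    return reasons
```

Feeding this from registry metadata at PR time turns "we evaluate maintenance health" from a claim into an enforced control.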
Q: How do you ensure secure API design and prevent common API vulnerabilities?
Early-stage answer: Our API design follows REST conventions with authentication required on all endpoints (enforced at the API gateway level). Authorization is enforced through <middleware/framework-level checks> with tenant scoping applied to all data queries. API input validation is implemented using schema validation (<Joi/Zod/Pydantic>) on all endpoints. Rate limiting is configured at the API gateway to prevent abuse. We conduct annual pen testing that specifically targets OWASP ASVS L1 categories, including BOLA/IDOR testing across tenant boundaries. API changes go through code review with attention to authorization and data exposure.
Growth-stage answer: API security is addressed through design standards, automated enforcement, and testing. All APIs follow our API security guidelines, which require authentication, authorization at the middleware layer (not route handler), schema-based input validation, and explicit response filtering to prevent data leakage. New API endpoints go through a security design review before implementation. Automated enforcement includes rate limiting (per-tenant and per-endpoint), request size limits, and schema validation. Our CI pipeline includes API-specific security testing with <custom BOLA test suite/Burp Suite API Scanner>. The AppSec team maintains a catalog of API security test cases that expand after each pen test engagement. API authentication and authorization patterns are documented in our developer wiki with approved patterns and anti-patterns specific to our stack.
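Per-tenant rate limiting of the kind mentioned above is typically a token bucket held per tenant — in practice at the gateway or backed by Redis; the in-memory sketch below is purely illustrative:

```python
import time
from typing import Optional

class TokenBucket:
    """Minimal per-tenant token bucket (illustrative, not production code)."""

    def __init__(self, rate_per_sec: float, burst: int,
                 now: Optional[float] = None):
        self.rate = rate_per_sec          # refill rate in tokens/second
        self.capacity = burst             # maximum burst size
        self.tokens = float(burst)        # start full
        self.updated = time.monotonic() if now is None else now

    def allow(self, now: Optional[float] = None) -> bool:
        """Consume one token if available; refill based on elapsed time."""
        now = time.monotonic() if now is None else now
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

The gateway keeps one bucket per (tenant, endpoint) pair, which is what makes the "per-tenant and per-endpoint" limit claim concrete.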
9. Incident Management (SIG Domain 16)
What the buyer is evaluating: Whether your incident response capability is a real, tested process or a document someone wrote for the SOC 2 audit. Buyers focus on three things: would you detect an incident affecting their data, would you respond competently, and would you tell them in time? A company that has conducted tabletop exercises and can describe specific outcomes signals a different level of maturity than one that points to a policy document. Breach notification timelines and customer communication processes are what the buyer's legal and security teams care about most.
Q: Do you maintain a documented incident response plan? When was it last reviewed?
Early-stage answer: Yes. Our incident response plan covers detection, escalation, containment, eradication, recovery, and post-incident review. The plan defines severity levels (Sev 1-4) with escalation paths for each. It was last reviewed and updated in <month/year> during our SOC 2 audit cycle. The plan is stored in <Confluence/Notion> and accessible to all engineering team members. We've designated an incident commander role that rotates with our on-call schedule. We've used the plan once in a real scenario — <brief description, e.g., a service disruption> — which led us to update our escalation paths and communication templates.
Growth-stage answer: Our incident response plan is maintained in <Confluence/PagerDuty incident management> and reviewed semi-annually by the Director of Security, with ad hoc updates following any incident or exercise. The plan covers our full incident lifecycle: detection and triage, escalation, containment, eradication, recovery, customer communication, regulatory notification, and post-incident review. Severity definitions are tied to specific criteria (customer data impact, service availability, financial impact) rather than subjective assessment. The plan includes role assignments (incident commander, technical lead, communications lead, executive sponsor), communication templates for internal and external stakeholders, and runbooks for our most likely incident scenarios. Our last plan review was <month/year>, incorporating lessons learned from our most recent tabletop exercise.
Q: When did you last conduct an incident response tabletop exercise? Describe the scenario and outcomes.
Early-stage answer: We conducted our first tabletop exercise in MMYY with a scenario involving unauthorized access to customer data through a compromised employee credential. The exercise included the CTO, lead engineers, and our fractional CISO. Key findings: our detection would have relied on log review rather than automated alerting (we've since added alerts for anomalous data access patterns), our customer notification process was undefined (we've since drafted notification templates and timelines), and our escalation path had a single point of failure (we've added backup contacts). We plan to conduct tabletop exercises semi-annually going forward.
Growth-stage answer: We conduct tabletop exercises semi-annually with different scenarios. Our most recent exercise (MMYY) simulated a supply chain attack through a compromised third-party dependency that introduced a data exfiltration mechanism in our CI/CD pipeline. Participants included the security team, engineering leads, legal, and customer success. The exercise revealed three areas for improvement: our dependency monitoring didn't cover post-build artifacts (remediated by adding runtime SCA scanning), our communication plan lacked specific language for supply-chain scenarios (updated with new templates), and our forensics capability for pipeline analysis needed documented procedures (runbook created). Exercise results are documented and tracked to completion. Previous exercises have covered scenarios including ransomware, insider threat, and customer data breach through application vulnerability.
Q: What are your breach notification timelines and procedures for affected customers?
Early-stage answer: Our breach notification policy commits to notifying affected customers within 72 hours of confirming a breach involving their data. Notification is delivered via email to the designated security contact and through our status page. The notification includes: what happened, what data was affected, what we've done to contain it, and what the customer should do. For customers with specific contractual notification requirements, we honor the shorter timeline specified in their agreement. We acknowledge that our notification process is still partially manual — we have templates drafted but distribution to affected customers at scale would require coordination with our customer success team.
Growth-stage answer: Breach notification follows a defined process in our incident response plan. Upon confirming a breach involving customer data, we initiate parallel workstreams: legal assessment (regulatory notification obligations based on affected jurisdictions), customer notification (within 72 hours or shorter per contractual commitments), and executive communication. Customer notifications are delivered via email to designated security contacts, through our status page, and through in-product notification for active sessions. Notification content follows a standard template that includes: incident summary, affected data categories, containment actions taken, recommended customer actions, and ongoing investigation status. For enterprise customers with specific breach notification requirements in their DPA or MSA, we maintain a registry of contractual obligations and honor the most restrictive applicable timeline. Our notification process was tested end-to-end in our last tabletop exercise.
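The "most restrictive applicable timeline" rule can be computed directly from the contractual registry. A sketch assuming notification windows are recorded in hours against a 72-hour policy default:

```python
from datetime import datetime, timedelta

# Policy default from the answers above; contractual windows may be shorter.
DEFAULT_NOTIFICATION = timedelta(hours=72)

def notification_deadline(confirmed_at: datetime,
                          contractual_hours: list) -> datetime:
    """Earliest applicable deadline: policy default vs contractual commitments."""
    windows = [DEFAULT_NOTIFICATION]
    windows += [timedelta(hours=h) for h in contractual_hours]
    return confirmed_at + min(windows)
```

For an incident affecting several customers, run this per customer against each one's DPA/MSA terms; the registry exists precisely so this lookup doesn't happen for the first time mid-incident.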
Q: Describe your forensic investigation capabilities for security incidents.
Early-stage answer: Our forensic capabilities are built on the logging and monitoring infrastructure we already operate: CloudTrail for AWS API activity, application audit logs, VPC Flow Logs, and identity provider logs, retained for <12 months/duration> in immutable storage. For cloud infrastructure actions, we can reconstruct a complete timeline through CloudTrail. For application-layer actions, our structured logging captures request details, user context, and data access events across the primary user-facing workflows, though coverage is not yet complete across all internal services. For investigations beyond our in-house capability, we maintain a retainer with <incident response firm>, which can provide forensic analysis within 2 days of engagement.
Growth-stage answer: Forensic investigation capability spans cloud infrastructure, application, endpoint, and network layers. CloudTrail logs, application audit logs, VPC Flow Logs, DNS logs, and endpoint telemetry are retained in immutable storage for <18 months/duration> with tamper-evident controls. Our security team is trained on forensic investigation procedures, including evidence preservation (no modification of affected systems before imaging), chain-of-custody documentation, and timeline reconstruction. For cloud-native forensics, we use <AWS Detective/custom tooling> to correlate events across services. We maintain a retainer with <incident response firm> for incidents requiring external forensic expertise, including legal hold procedures and expert witness capability. Forensic procedures are documented in our IR playbooks and tested during tabletop exercises.
Q: Describe your post-incident review process and how lessons learned are integrated into your security program.
Early-stage answer: Post-incident review is mandatory for any Sev 1 or Sev 2 incident. We conduct a blameless retrospective within 5 business days of incident resolution, documented in a post-incident report that covers timeline, root cause, impact assessment, what went well, what didn't, and action items. Action items are tracked in <Jira/Linear> with owners and deadlines. Findings from post-incident reviews feed into our security roadmap — for example, <some specific improvement that resulted from a past incident or exercise>. Post-incident reports are retained and reviewed during our quarterly security review.
Growth-stage answer: Every incident at Sev 2 or above triggers a mandatory post-incident review within 5 business days, facilitated by someone not involved in the incident response. Reviews follow a blameless retrospective format and produce a documented report covering: timeline with decision points, root cause analysis (using "5 whys" or equivalent), impact assessment, detection effectiveness, response effectiveness, customer communication review, and remediation actions. Actions are categorized as immediate (patch/fix), short-term (process improvement), and systemic (architecture or tooling change). All actions are tracked in <Jira/Asana> with SLAs: immediate actions within 48 hours, short-term within 30 days, systemic within the current quarter. A summary of post-incident findings and resulting improvements is included in our quarterly Security Steering Committee report. Our incident database is reviewed annually to identify recurring patterns that indicate systemic issues.
10. Business Continuity & Disaster Recovery (SIG Domain 17)
What the buyer is evaluating: Whether their operations would survive a failure in yours. Buyers care about two numbers: RTO (how fast you recover) and RPO (how much data they'd lose). But they also evaluate whether those numbers are tested or theoretical. A BCP/DR plan that's never been exercised is a document, not a capability. Companies that can name their last DR test date and describe what they learned earn significantly more trust than those that cite untested recovery targets.
Q: Do you have documented Business Continuity and Disaster Recovery plans? When were they last tested?
Early-stage answer: We have a documented DR plan covering our primary failure scenarios: AWS region outage, database failure, and critical service disruption. Our architecture runs in AWS <region> with automated daily backups replicated to <second region> and point-in-time recovery enabled for our primary database (RDS). Our last DR test was in MMYY, when we validated database restoration from backup and measured recovery time. The test revealed that our documented RTO of N hours was achievable for database recovery but that application-layer recovery required additional manual steps we've since automated. We don't yet have a formal BCP covering non-technical business continuity scenarios, but our remote-first structure provides inherent resilience to physical facility disruption.
Growth-stage answer: Our BCP and DR plans are documented, maintained by the Director of Security, and tested semi-annually. The BCP covers operational continuity for critical business functions (customer support, security monitoring, engineering operations) during disruption scenarios. The DR plan addresses technical recovery across tiers: Tier 1 services (customer-facing API, authentication, core data processing) have an RTO of N hours and RPO of N minutes. Tier 2 services (internal tools, analytics, non-critical integrations) have an RTO of N hours. Plans are tested semi-annually: our most recent test (MMYY) included a simulated primary region failover for Tier 1 services, which completed within <some actual recovery time>. Test results are documented with deviations from expected recovery times, and findings are tracked to remediation. Both plans are reviewed and updated quarterly.
Q: What are your defined Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO)?
Early-stage answer: Our primary database (RDS) is configured with automated backups and point-in-time recovery, giving us an RPO of approximately 5 minutes for our core data. Daily full backups are replicated to a second AWS region. Our target RTO is <8 hours> for full service restoration, which includes database restoration, application redeployment, and DNS cutover. We've validated database recovery time through testing, but we acknowledge that full end-to-end recovery has not been tested as a complete failover scenario. Our infrastructure-as-code (Terraform) approach means we can rebuild our full environment from code, which reduces recovery complexity.
Growth-stage answer: RTO and RPO are defined per service tier based on business impact analysis. Tier 1 services (customer-facing API, authentication, data processing): RTO of 1 hour, RPO of 5 minutes, achieved through multi-AZ deployment, database read replicas with automated failover, and point-in-time recovery. Tier 2 services (admin dashboard, reporting, integrations): RTO of 4 hours, RPO of 1 hour. Tier 3 services (internal tooling, development environments): RTO of 24 hours, RPO of 24 hours. These targets were established through a business impact analysis and validated through semi-annual DR testing. Our most recent test achieved actual recovery times of <specific times> for Tier 1 services. RTO/RPO targets are included in our enterprise customer SLAs and DPAs. Recovery targets are reviewed annually as our architecture evolves.
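Tiered RTO/RPO targets lend themselves to an automated pass/fail check after each DR test. A sketch using the tier values from the answer above (assumed numbers, expressed in minutes):

```python
# Tier targets mirroring the growth-stage answer above (minutes; assumed).
TIER_TARGETS = {
    1: {"rto_min": 60, "rpo_min": 5},
    2: {"rto_min": 240, "rpo_min": 60},
    3: {"rto_min": 1440, "rpo_min": 1440},
}

def dr_test_passed(tier: int, measured_rto_min: float,
                   measured_rpo_min: float) -> bool:
    """True if measured recovery met both the RTO and RPO targets for the tier."""
    t = TIER_TARGETS[tier]
    return (measured_rto_min <= t["rto_min"]
            and measured_rpo_min <= t["rpo_min"])
```

Recording measured times against targets per test is what turns "validated through semi-annual DR testing" into evidence a buyer's analyst will accept.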
Q: Describe your backup strategy, including backup frequency, retention, and testing.
Early-stage answer: Our primary database (RDS) has automated daily snapshots retained for 30 days and continuous transaction log backups enabling point-in-time recovery to any second within the retention window. Snapshots are replicated to <second AWS region>. Application data stored in S3 uses cross-region replication with versioning enabled and a 90-day retention policy for deleted objects. Configuration and infrastructure are stored as code in Git, which serves as our backup for application configuration and infrastructure state. We test backup restoration quarterly by restoring a database snapshot to a test environment and validating data integrity. Our last restoration test was in MMYY and completed in N minutes.
Growth-stage answer: Backup strategy follows a tiered approach aligned with our data classification and service tiers. Databases (RDS) maintain continuous transaction log backups with point-in-time recovery (5-minute RPO) plus daily snapshots retained for 90 days. All database backups are encrypted with customer-managed KMS keys and replicated to our DR region ([region]). S3 data uses cross-region replication with versioning and lifecycle policies (90-day soft delete retention, 365-day archive retention for compliance-relevant data). Application configuration and infrastructure are managed through Terraform in Git — our full environment can be rebuilt from code. Backup integrity is validated monthly through automated restoration tests that verify data checksums against production. Full DR restoration tests are performed semi-annually. Backup retention policies are documented and aligned with our data retention schedule and customer contractual requirements.
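Checksum-based restoration validation, as described above, can be as simple as comparing order-independent digests of exported rows — a sketch, not production tooling:

```python
import hashlib

def checksum(rows: list) -> str:
    """Order-independent digest over exported rows (illustrative)."""
    digests = sorted(
        hashlib.sha256(repr(r).encode()).hexdigest() for r in rows
    )
    return hashlib.sha256("".join(digests).encode()).hexdigest()

def restore_verified(production_rows: list, restored_rows: list) -> bool:
    """True if the restored export matches production byte-for-byte."""
    return checksum(production_rows) == checksum(restored_rows)
```

In practice you'd checksum per-table exports taken at the backup's point-in-time marker; sorting the row digests makes the comparison robust to export ordering.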
Q: Do you have geographic redundancy for your production infrastructure?
Early-stage answer: Our production infrastructure runs in AWS <region> with multi-AZ deployment for critical services (RDS Multi-AZ, ECS/EKS across multiple availability zones). This protects against single-facility failures within the region. Database backups are replicated to <second region> for disaster recovery, and our infrastructure-as-code approach enables environment reconstruction in a secondary region. We don't currently run active-active across multiple regions — our DR strategy is a warm standby approach where we can fail over to the secondary region within our defined RTO. Geographic redundancy to active-active is on our roadmap as customer SLA requirements justify the investment.
Growth-stage answer: Production infrastructure is deployed across multiple availability zones within our primary region (<region>) with automated failover for all critical services. Database layer uses RDS Multi-AZ with synchronous replication and automated failover (typically under 60 seconds). Application layer runs across three AZs with load balancing and auto-scaling. For regional disaster recovery, we maintain a warm standby in <DR region> with replicated data stores and infrastructure defined in Terraform. Regional failover is tested semi-annually as part of our DR exercises. Our architecture supports regional failover within our Tier 1 RTO of 1 hour. For customers with data residency requirements, we can discuss region-specific deployment options. Active-active multi-region is available for customers on our <some Enterprise/specific tier> plan.
For the full framework on handling the toughest 30% of questionnaire questions, read The 30% AI Can't Answer For You.
Need help building the program behind the answers? Talk to us.