January 2, 2026

AI Compliance Checklist

AI regulation is evolving rapidly. Here's a practical checklist for AI compliance covering the EU AI Act, emerging US requirements, and framework-agnostic best practices.

AI regulation is a moving target. The EU AI Act is now law. US federal agencies are issuing guidance. States are passing their own requirements. Industry frameworks proliferate.

This checklist synthesizes current requirements and emerging best practices into a practical framework. It’s organized by activity—developing AI, deploying AI, using AI—rather than by specific regulation, since most organizations do all three.

Regulations will continue to evolve. Treat this as a current snapshot and foundation for ongoing governance.

AI Governance Foundation

Before diving into specific activities, establish governance basics.

Governance Structure

  • [ ] AI use policy established and communicated
  • [ ] Roles and responsibilities for AI governance defined
  • [ ] Accountability for AI systems assigned (clear ownership)
  • [ ] Cross-functional AI governance body established (if organization scale warrants)
  • [ ] AI risk tolerance defined by leadership
  • [ ] Budget allocated for AI compliance activities

Inventory and Classification

  • [ ] All AI systems in use inventoried
  • [ ] AI systems classified by risk level (high/limited/minimal per EU AI Act categories, or equivalent)
  • [ ] Prohibited use cases identified and blocked
  • [ ] Each AI system’s purpose, inputs, outputs, and decisions documented
  • [ ] Data sources for AI systems documented
  • [ ] Third-party AI components and services inventoried
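The inventory items above lend themselves to one structured record per system. A minimal sketch in Python (field names and risk tiers are illustrative, loosely following the EU AI Act categories, and not mandated by any regulation):

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "prohibited"   # blocked outright
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    owner: str                  # accountable individual or team
    purpose: str
    risk_level: RiskLevel
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    data_sources: list = field(default_factory=list)
    third_party_components: list = field(default_factory=list)

def blocked_systems(inventory):
    """Names of systems whose use case falls in a prohibited category."""
    return [s.name for s in inventory if s.risk_level is RiskLevel.PROHIBITED]

inventory = [
    AISystemRecord("resume-screener", "HR Ops", "rank job applicants",
                   RiskLevel.HIGH, inputs=["resumes"], outputs=["rankings"],
                   data_sources=["ATS exports"]),
    AISystemRecord("support-chatbot", "Customer Experience",
                   "answer product questions", RiskLevel.LIMITED,
                   third_party_components=["hosted LLM API"]),
]
```

A spreadsheet works just as well at small scale; the point is one record per system with purpose, classification, data sources, and ownership captured in a consistent shape.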

Documentation Requirements

  • [ ] Documentation standards for AI systems established
  • [ ] Technical documentation maintained for each AI system
  • [ ] Training data documentation maintained
  • [ ] Version control for AI models and their documentation
  • [ ] Records retention policies for AI documentation defined

Developing AI Systems

For organizations building AI capabilities (including fine-tuning and customization).

Risk Assessment

  • [ ] Pre-development risk assessment performed
  • [ ] Use case evaluated against prohibited categories
  • [ ] Risk classification determined (including high-risk determination under the EU AI Act, if applicable)
  • [ ] Fundamental rights impact assessed (for high-risk EU deployments)
  • [ ] Risk mitigation measures identified

Data Governance

  • [ ] Training data sources documented
  • [ ] Legal basis for training data use established (consent, legitimate interest, etc.)
  • [ ] Training data quality assessed
  • [ ] Bias evaluation of training data performed
  • [ ] PII and sensitive data handling compliant with privacy regulations
  • [ ] Data provenance tracked
  • [ ] Copyright and intellectual property implications considered

Development Practices

  • [ ] Security considerations integrated into AI development lifecycle
  • [ ] Model testing including security/adversarial testing
  • [ ] Bias testing and fairness evaluation performed
  • [ ] Accuracy and performance metrics defined and measured
  • [ ] Robustness testing conducted
  • [ ] Human oversight mechanisms designed
  • [ ] Explainability requirements considered in design

High-Risk AI Systems (EU AI Act)

If developing high-risk AI systems as defined by the EU AI Act:

  • [ ] Risk management system implemented throughout lifecycle
  • [ ] Data governance measures for training, validation, and testing data
  • [ ] Technical documentation per Annex IV requirements
  • [ ] Automatic logging of events during operation
  • [ ] Transparency and user information provisions
  • [ ] Human oversight mechanisms enabled
  • [ ] Accuracy, robustness, and cybersecurity measures
  • [ ] Quality management system implemented
  • [ ] Conformity assessment completed (where required)
  • [ ] CE marking affixed (where required)
  • [ ] EU database registration completed (where required)
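The automatic-logging item above can start as structured, append-only records of each inference event. A minimal sketch (field choices are illustrative, not drawn from Annex IV):

```python
import io
import json
import time
import uuid

def log_inference_event(log_file, system_id, model_version,
                        input_ref, output_ref, operator=None):
    """Append one JSON line per inference so operation can be reconstructed later."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "system_id": system_id,
        "model_version": model_version,
        "input_ref": input_ref,    # a reference, not raw data, to keep PII out of logs
        "output_ref": output_ref,
        "operator": operator,      # human overseer, if any
    }
    log_file.write(json.dumps(event) + "\n")
    return event

# Demo against an in-memory "log file"; in production this would be an
# append-only file or logging service governed by the retention policy.
buf = io.StringIO()
log_inference_event(buf, "resume-screener", "v1.2",
                    input_ref="doc-123", output_ref="score-123")
record = json.loads(buf.getvalue())
```

Logging references rather than raw inputs keeps the audit trail reconstructable without turning the log itself into a PII store.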

Deploying AI Systems

For organizations deploying AI systems (whether developed internally or by third parties).

Pre-Deployment Assessment

  • [ ] Purpose and scope of deployment defined
  • [ ] Risk classification verified
  • [ ] Deployment environment security assessed
  • [ ] Integration points and data flows documented
  • [ ] Human oversight mechanisms configured
  • [ ] Monitoring and logging enabled
  • [ ] Rollback procedures defined

Transparency Requirements

  • [ ] AI system use disclosed to affected individuals where required
  • [ ] Information about AI decision-making provided where required
  • [ ] Opt-out mechanisms provided where required
  • [ ] Explanations available for AI-informed decisions where required

For Deployers of High-Risk AI (EU AI Act)

  • [ ] Provider’s instructions for use followed
  • [ ] Human oversight performed by competent persons
  • [ ] Input data relevant and representative
  • [ ] Monitoring of AI operation as specified
  • [ ] Logs maintained as required
  • [ ] Incidents and malfunctions reported to provider and authorities
  • [ ] Fundamental rights impact assessment conducted (for certain public and high-risk contexts)

Employment and HR AI

If AI is used in employment decisions:

  • [ ] Compliance with employment discrimination laws
  • [ ] Illinois AI Video Interview Act compliance (if applicable)
  • [ ] NYC Local Law 144 compliance (if applicable—bias audit, notice)
  • [ ] Colorado AI Act compliance (if applicable—effective 2026)
  • [ ] Notice to candidates/employees about AI use
  • [ ] Human review of AI-informed employment decisions
  • [ ] Adverse impact testing performed
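Adverse impact testing, and the impact ratios at the center of NYC Local Law 144 bias audits, come down to comparing selection rates across groups. A minimal sketch, with illustrative group labels and the classic four-fifths rule as the trigger threshold:

```python
def impact_ratios(outcomes):
    """outcomes: {group: (selected, total)}.
    Returns each group's selection rate divided by the highest group's rate."""
    rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

outcomes = {"group_a": (40, 100), "group_b": (24, 100)}
ratios = impact_ratios(outcomes)                       # group_b: 0.24 / 0.40 = 0.6
flagged = sorted(g for g, r in ratios.items() if r < 0.8)
```

A ratio below 0.8 does not establish discrimination by itself, but it is the conventional signal that a closer legal and statistical review is warranted.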

Consumer-Facing AI

If AI affects consumer decisions or interactions:

  • [ ] Disclosure of AI use where required
  • [ ] Option for human interaction where required
  • [ ] Unfair or deceptive practice analysis under Section 5 of the FTC Act
  • [ ] Industry-specific requirements addressed (finance, healthcare, etc.)

Using Third-Party AI Services

For organizations using AI services provided by others (including enterprise AI assistants, AI features in SaaS, etc.).

Vendor Assessment

  • [ ] Vendor’s AI governance practices evaluated
  • [ ] Data handling practices understood (training, retention, sharing)
  • [ ] Appropriate contractual provisions in place
  • [ ] Service’s risk classification understood
  • [ ] Vendor’s security practices assessed
  • [ ] Compliance attestations obtained where applicable

Data Protection

  • [ ] Data sent to AI services classified
  • [ ] Sensitive data handling approved by appropriate stakeholders
  • [ ] PII processing compliant with privacy requirements
  • [ ] Data processing agreements in place
  • [ ] Cross-border transfer requirements addressed

Acceptable Use

  • [ ] Acceptable use policy for AI services established
  • [ ] Prohibited uses defined (sensitive decisions, regulated areas)
  • [ ] Training provided to users
  • [ ] Monitoring for compliance with acceptable use
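Monitoring for acceptable use can begin with screening prompts against the prohibited-use list before they reach the AI service. A keyword-based sketch (categories and phrases are illustrative; real deployments would pair this with richer classification and human review):

```python
# Map each prohibited-use category to trigger phrases (illustrative only).
PROHIBITED_PATTERNS = {
    "employment_decision": ["rank these candidates", "terminate employee"],
    "medical_advice": ["diagnose", "prescribe"],
}

def screen_prompt(prompt):
    """Return the prohibited-use categories a prompt appears to touch."""
    text = prompt.lower()
    return sorted(cat for cat, phrases in PROHIBITED_PATTERNS.items()
                  if any(p in text for p in phrases))

hits = screen_prompt("Please rank these candidates for the open role")
```

Even a crude filter like this makes the acceptable use policy operational rather than aspirational, and its hit log doubles as evidence of monitoring.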

Output Handling

  • [ ] AI outputs reviewed before use in critical decisions
  • [ ] Human oversight for high-impact decisions
  • [ ] AI outputs not relied upon for regulated determinations without appropriate oversight
  • [ ] Liability for AI outputs understood and managed

Sector-Specific Requirements

Additional requirements may apply based on industry.

Financial Services

  • [ ] Fair lending and anti-discrimination compliance
  • [ ] Model risk management per SR 11-7 (if applicable)
  • [ ] Explainability for credit decisions
  • [ ] Consumer protection requirements
  • [ ] Bank regulatory expectations addressed

Healthcare

  • [ ] AI as medical device requirements (FDA)
  • [ ] Clinical decision support considerations
  • [ ] HIPAA compliance for AI processing PHI
  • [ ] State AI in healthcare regulations

Insurance

  • [ ] Unfair discrimination analysis
  • [ ] State insurance AI regulations
  • [ ] Rate-making AI requirements

Government

  • [ ] Executive Order 14110 requirements (federal agencies)
  • [ ] OMB guidance compliance
  • [ ] Procurement requirements for AI

Ongoing Compliance

AI compliance isn’t one-time—it requires ongoing attention.

Monitoring and Testing

  • [ ] AI system performance monitored
  • [ ] Bias monitoring ongoing
  • [ ] Accuracy degradation tracked
  • [ ] Incident reporting procedures established
  • [ ] Post-deployment testing conducted
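Tracking accuracy degradation can start with a rolling window of live outcomes compared against a validation-time baseline, flagging when the gap exceeds a tolerance (all thresholds here are illustrative):

```python
from collections import deque

class AccuracyMonitor:
    """Flags degradation when rolling accuracy falls below baseline minus tolerance."""

    def __init__(self, baseline, tolerance=0.05, window=100):
        self.baseline = baseline           # accuracy measured at validation time
        self.tolerance = tolerance
        self.results = deque(maxlen=window)

    def record(self, correct):
        self.results.append(1 if correct else 0)

    def rolling_accuracy(self):
        return sum(self.results) / len(self.results) if self.results else None

    def degraded(self):
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.baseline - self.tolerance

monitor = AccuracyMonitor(baseline=0.90)
for correct in [True] * 80 + [False] * 20:   # live accuracy has slipped to 80%
    monitor.record(correct)
```

Degradation signals should feed the incident procedures in this checklist: a silent accuracy drop in a high-risk system is itself a compliance event, not just an engineering one.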

Change Management

  • [ ] Model updates assessed for compliance impact
  • [ ] Significant changes trigger re-assessment
  • [ ] Regulatory changes monitored and incorporated
  • [ ] Documentation updated with changes

Incident Management

  • [ ] AI incidents defined and categorized
  • [ ] Incident response procedures for AI failures
  • [ ] Incident reporting to regulators where required
  • [ ] Post-incident review conducted

Training and Awareness

  • [ ] Developers trained on AI compliance requirements
  • [ ] Users trained on appropriate AI use
  • [ ] Leadership briefed on AI risks and compliance
  • [ ] Training updated as regulations evolve

EU AI Act Timeline Reference

Key compliance dates:

  • February 2025: Prohibited AI practices take effect
  • August 2025: Requirements for general-purpose AI models take effect; governance structures required
  • August 2026: Most requirements (including high-risk AI) take effect
  • August 2027: Requirements for high-risk AI in certain Annex I products take effect

Plan compliance activities against these deadlines.

US Federal AI Guidance Reference

Key guidance documents to monitor:

  • Executive Order 14110 on Safe, Secure, and Trustworthy AI
  • OMB memoranda on AI governance
  • NIST AI Risk Management Framework
  • Agency-specific guidance (HHS, DOJ, FTC, SEC, etc.)
  • State laws (Colorado AI Act, Illinois, NYC, etc.)

The US landscape is fragmented. Monitor federal and state developments relevant to your industry and operations.

Implementation Approach

For organizations starting AI compliance:

Phase 1 (Immediate):

  • Inventory AI systems
  • Establish governance ownership
  • Identify prohibited uses
  • Address highest-risk systems

Phase 2 (Near-term):

  • Risk classification for all AI systems
  • Documentation requirements implemented
  • Vendor assessment for third-party AI
  • Training programs launched

Phase 3 (Ongoing):

  • Monitoring and testing programs
  • Continuous regulatory monitoring
  • Maturation of governance processes
  • Regular compliance assessments

AI compliance is evolving rapidly. This checklist provides a foundation—not a permanent answer. Build the capability to adapt as requirements change.
