What the EU AI Act Means for US Companies


Understanding the EU AI Act's extraterritorial reach and what American companies need to do to prepare for compliance.

January 1, 2026 · 12 min read · By Adil Karam

The European Union's Artificial Intelligence Act is the world's first binding AI regulation, and its extraterritorial scope means US companies cannot afford to ignore it. If your AI-powered products or services reach EU customers, even indirectly, you are almost certainly in scope. The penalties for non-compliance can reach 35 million euros or 7% of global annual revenue, whichever is higher.

This is not a distant regulatory concern. The first enforcement deadlines have already passed, and the high-risk AI rules take full effect in August 2026. For US companies with European market exposure, the time to prepare is now.

Key Dates and Compliance Timeline

Milestone | Date | What Happens
Unacceptable AI Ban | Feb 2025 | Prohibited AI practices banned
GPAI Rules | Aug 2025 | General-purpose AI (like GPT) compliance
High-Risk AI Rules | Aug 2026 | Full compliance for high-risk systems
All Requirements | Aug 2027 | Complete enforcement

The phased rollout is intentional. The EU structured the timeline to give organizations time to classify their systems, build compliance programs, and implement technical controls. But the window is closing rapidly for high-risk systems.

Does It Apply to You?

The EU AI Act's extraterritorial reach is broader than most US companies realize. You are in scope if:

  • You are established in the EU (straightforward)
  • You place AI systems on the EU market (even from outside the EU)
  • Your AI system's output is used within the EU (this can capture many global SaaS companies)

    Example: A US SaaS company with a recommendation engine serving EU users? In scope. A US company using AI to screen resumes of EU job candidates? In scope. A US platform whose third-party integrations process EU citizen data through AI? Also in scope.

    The scope is intentionally broad. If your AI system affects people in the EU, the Act likely applies regardless of where your servers sit or where your company is incorporated.

    The Risk Categories Explained

    The AI Act classifies AI systems into four risk tiers, each with escalating obligations. Understanding where your systems fall is the first step toward compliance.

    Prohibited AI (Banned Outright)

  • Social scoring by governments
  • Emotion recognition in workplaces and schools
  • Biometric categorization by race, religion, or sexual orientation
  • Real-time facial recognition in public spaces (with narrow law enforcement exceptions)
    US companies should audit their product portfolios immediately. Any system that falls into this category has been banned for EU users since February 2025, with no grace period.

    High-Risk AI (Heavy Regulation)

  • AI in hiring and HR decisions
  • Credit scoring and loan decisions
  • AI in healthcare diagnostics
  • AI in education (grading, admissions)
  • AI used as safety components in critical infrastructure
    Requirements for high-risk systems: a formal risk management system, data governance protocols, automated logging, human oversight mechanisms, accuracy and robustness testing, and CE marking. These obligations overlap substantially with the NIST AI Risk Management Framework, which makes dual EU/US compliance achievable.

    Limited Risk (Transparency Required)

  • Chatbots and conversational AI
  • Deepfake generators
  • Emotion recognition in non-prohibited contexts
    Requirements: users must be clearly informed they are interacting with AI. This applies to every customer-facing chatbot, virtual assistant, and AI-generated content feature.

    Minimal Risk (No Specific Requirements)

  • Spam filters
  • AI in video games
  • Most internal business analytics tools
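The four tiers above can be expressed as a simple triage function. This is an illustrative sketch only: the use-case buckets below are simplified placeholders I have chosen for the example, not the Act's legal definitions, and real classification requires legal analysis of the Act's annexes.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # banned outright
    HIGH_RISK = "high_risk"     # heavy regulation
    LIMITED = "limited"         # transparency required
    MINIMAL = "minimal"         # no specific requirements

# Illustrative buckets distilled from the tiers described above.
PROHIBITED_USES = {"social_scoring", "workplace_emotion_recognition"}
HIGH_RISK_USES = {"hiring", "credit_scoring", "medical_diagnostics", "education_grading"}
TRANSPARENCY_USES = {"chatbot", "deepfake_generation"}

def classify(use_case: str) -> RiskTier:
    """Map a declared use case to its (simplified) AI Act risk tier."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH_RISK
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Even a crude triage like this is useful in Phase 1: it forces every system in the inventory to land in exactly one tier, which is where the compliance obligations attach.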
    What US Companies Should Do Now

    Phase 1: Assessment (Start Immediately)

  • Inventory your AI systems: Catalog every AI model, algorithm, and automated decision-making tool across your organization
  • Map EU exposure: Identify which systems are used by EU customers, process EU citizen data, or produce outputs consumed in EU jurisdictions
  • Classify risk: Determine which tier each system falls into under the Act's framework
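A Phase 1 inventory can start as a structured record per system. The fields below are illustrative assumptions, not a mandated schema; adapt them to your own asset register.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in an AI system inventory (fields are illustrative)."""
    name: str          # e.g. "resume screener"
    vendor: str        # e.g. "internal", "OpenAI", "Anthropic"
    use_case: str      # the declared purpose of the system
    eu_exposure: bool  # used by EU customers, or processes EU-citizen data?
    risk_tier: str = "unclassified"  # filled in during classification

def eu_scope(inventory: list) -> list:
    """Filter the inventory down to the systems with EU exposure."""
    return [s for s in inventory if s.eu_exposure]
```

Mapping EU exposure then becomes a filter over the inventory, and the systems that survive the filter are the ones that proceed to risk classification.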
    Phase 2: Gap Analysis (Q1 2026)

  • Compare current state to requirements: Evaluate existing documentation, risk assessments, and human oversight mechanisms against the Act's mandates
  • Identify high-risk gaps: Prioritize systems classified as high-risk, as these carry the most onerous obligations
  • Evaluate GPAI obligations: If you use foundation models from providers like OpenAI, Anthropic, or Meta, assess your inherited compliance responsibilities
    Phase 3: Remediation (Q2-Q3 2026)

  • Implement missing controls: Build risk management systems, logging infrastructure, and human review processes
  • Establish conformity assessment processes: Prepare for third-party audits where required
  • Prepare documentation packages: The Act requires extensive technical documentation that most US companies do not currently maintain
    The GPAI Complication

    If you use General Purpose AI (GPT, Claude, Llama, or similar foundation models), you inherit compliance obligations from the AI Act's GPAI provisions. These include:

  • Transparency: Disclose AI use to end users
  • Documentation: Maintain technical documentation about how the GPAI is integrated
  • Risk Assessment: Evaluate for systemic risks when deploying GPAI in high-risk contexts
  • Copyright Compliance: Ensure training data usage respects EU copyright law
    If your GPAI provider is non-compliant, liability may shift to you as the deployer. This makes vendor management critical: establish clear contractual provisions with AI providers that allocate compliance responsibilities, a practice that guidance such as IEEE's AI ethics standards also recommends.

    The EU AI Act represents the most significant shift in technology regulation since GDPR. Companies that treat this as a GDPR-scale compliance initiative, rather than a checkbox exercise, will maintain market access while competitors scramble to catch up or exit the European market entirely.

    Penalties

    Violation Type | Maximum Fine
    Prohibited AI | 35M euros or 7% of global revenue
    High-Risk Non-Compliance | 15M euros or 3% of global revenue
    False Information to Regulators | 7.5M euros or 1% of global revenue

    For comparison: GDPR's maximum is 20M euros or 4% of revenue. The AI Act penalties are intentionally higher, reflecting the EU's assessment that AI risks can be more systemic than data protection failures.
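The "whichever is higher" rule behind every fine in the table can be sketched in a few lines; the revenue figure below is a hypothetical example, not a real company's.

```python
def max_fine(fixed_cap_eur: float, revenue_pct: float, global_revenue_eur: float) -> float:
    """AI Act fines apply the HIGHER of a fixed cap or a share of global revenue."""
    return max(fixed_cap_eur, revenue_pct * global_revenue_eur)

# Prohibited-AI violation, hypothetical company with EUR 1bn global revenue:
# max(35e6, 0.07 * 1e9) -> EUR 70m, so the revenue-based figure dominates.
exposure = max_fine(35e6, 0.07, 1e9)
```

Note that for any company with global revenue above 500 million euros, the 7% figure exceeds the 35M fixed cap, which is why large enterprises should model exposure on the percentage, not the headline number.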

    The Board Brief

    What to tell the board:

    "The EU AI Act has extraterritorial application and will impact our products and services. We have identified our AI systems in scope and classified those that qualify as high-risk. We are conducting a gap analysis and will present a compliance roadmap by Q2 2026. Non-compliance penalties can reach 7% of global revenue, making this a material business risk that requires board-level oversight."

    How I Help

    With 20+ years in security and governance, I help US companies build AI governance programs that satisfy both the EU AI Act and emerging US frameworks simultaneously. My approach covers the full lifecycle: AI inventory and risk classification, gap analysis against the Act's requirements, implementation of technical controls and documentation, and ongoing compliance management.

    If your organization needs board-level guidance on AI risk exposure, or if you need a practical roadmap from assessment to conformity, I can help you move from uncertainty to compliance readiness.

    Schedule a consultation to assess your EU AI Act exposure and build a pragmatic compliance plan.

    #EU AI Act #Regulation #Compliance #AI Governance #Global

    Adil Karam

    Security & AI Governance Advisor

    Helping organizations navigate security leadership and AI governance challenges.

    Ready to Put These Insights Into Action?

    Whether you need AI governance, security leadership, or compliance guidance—let's discuss how to apply these strategies to your organization.