
What the EU AI Act Means for US Companies
Understanding the EU AI Act's extraterritorial reach and what American companies need to do to prepare for compliance.
The European Union's Artificial Intelligence Act is the world's first binding AI regulation, and its extraterritorial scope means US companies cannot afford to ignore it. If your AI-powered products or services reach EU customers, even indirectly, you are almost certainly in scope. The penalties for non-compliance can reach 35 million euros or 7% of global annual revenue, whichever is higher.
This is not a distant regulatory concern. The first enforcement deadlines have already passed, and the high-risk AI rules take full effect in August 2026. For US companies with European market exposure, the time to prepare is now.
Key Dates and Compliance Timeline
| Milestone | Date | What Happens |
|---|---|---|
| Unacceptable AI Ban | Feb 2025 | Prohibited AI systems banned |
| GPAI Rules | Aug 2025 | General-purpose AI (like GPT) compliance |
| High-Risk AI Rules | Aug 2026 | Full compliance for high-risk systems |
| All Requirements | Aug 2027 | Complete enforcement |
The phased rollout is intentional. The EU structured the timeline to give organizations time to classify their systems, build compliance programs, and implement technical controls. But the window is closing rapidly for high-risk systems.
Does It Apply to You?
The EU AI Act's extraterritorial reach is broader than most US companies realize. You are in scope if:
- You place an AI system on the EU market or put it into service there, regardless of where your company is established
- You deploy an AI system and are located in the EU
- You are located outside the EU, but the output of your AI system is used in the EU
Example: A US SaaS company with a recommendation engine serving EU users? In scope. A US company using AI to screen resumes of EU job candidates? In scope. A US platform whose third-party integrations process data about people in the EU through AI? Also in scope.
The scope is intentionally broad. If your AI system affects people in the EU, the Act likely applies regardless of where your servers sit or where your company is incorporated.
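Those scope triggers reduce to a short screening checklist. Here is a minimal sketch in Python; the record fields and function names are illustrative, not official Act terminology, and a real determination needs legal review:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Illustrative screening record for one AI system (field names are assumptions)."""
    name: str
    placed_on_eu_market: bool   # sold or offered to EU customers
    output_used_in_eu: bool     # results affect people located in the EU
    deployed_by_eu_entity: bool # an EU-based deployer operates the system

def in_scope(system: AISystem) -> bool:
    # Extraterritorial reach: any one trigger is enough to put a system in scope.
    return (system.placed_on_eu_market
            or system.output_used_in_eu
            or system.deployed_by_eu_entity)

# A US SaaS recommendation engine serving EU users is in scope:
engine = AISystem("rec-engine", placed_on_eu_market=True,
                  output_used_in_eu=True, deployed_by_eu_entity=False)
```

Running `in_scope` over a full AI inventory gives a first-pass list of systems that need classification.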
The Risk Categories Explained
The AI Act classifies AI systems into four risk tiers, each with escalating obligations. Understanding where your systems fall is the first step toward compliance.
Prohibited AI (Banned Outright)
This tier covers practices the EU considers incompatible with fundamental rights: social scoring by public authorities, manipulative or subliminal techniques, exploitation of vulnerabilities, emotion recognition in workplaces and schools, untargeted scraping of facial images, and real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions).
US companies should audit their product portfolios immediately. Any system in this category had to be discontinued for EU users by February 2025.
High-Risk AI (Heavy Regulation)
This tier covers AI used in the sensitive domains the Act enumerates: employment and worker management (including resume screening), credit scoring and access to essential services, education, critical infrastructure, biometric identification, law enforcement, and migration control.
Requirements for high-risk systems: a formal risk management system, data governance protocols, automated logging, human oversight mechanisms, accuracy and robustness testing, and CE marking. These requirements overlap substantially with the NIST AI Risk Management Framework, so work toward one largely advances the other, making dual compliance achievable.
Limited Risk (Transparency Required)
Requirements: Users must be clearly informed they are interacting with AI. This applies to every customer-facing chatbot, virtual assistant, or AI-generated content feature.
Minimal Risk (No Specific Requirements)
Everything else: spam filters, AI in video games, inventory forecasting, and similar low-stakes uses. No mandatory obligations apply, though the Act encourages voluntary codes of conduct.
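A first-pass triage of the four tiers above can be expressed as a simple lookup. The use-case labels and mapping below are hypothetical examples, not the Act's taxonomy; ambiguous systems need legal review:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # banned outright
    HIGH = "high"              # heavy regulation
    LIMITED = "limited"        # transparency required
    MINIMAL = "minimal"        # no specific requirements

# Hypothetical first-pass mapping for triage; not an official classification.
TIER_BY_USE_CASE = {
    "social_scoring": RiskTier.PROHIBITED,
    "resume_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    # Conservative default: treat anything unmapped as high-risk until reviewed.
    return TIER_BY_USE_CASE.get(use_case, RiskTier.HIGH)
```

The conservative default matters: misclassifying a high-risk system as minimal-risk is the expensive mistake, so unknowns should escalate rather than pass.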
What US Companies Should Do Now
Phase 1: Assessment (Start Immediately)
Build a complete inventory of the AI systems you build, buy, or embed, and classify each against the four risk tiers.
Phase 2: Gap Analysis (Q1 2026)
Compare your current controls, documentation, and oversight processes against the Act's requirements for each in-scope system, and prioritize the gaps.
Phase 3: Remediation (Q2-Q3 2026)
Implement the missing technical controls, documentation, and human oversight mechanisms ahead of the August 2026 high-risk deadline.
The GPAI Complication
If you use general-purpose AI (GPT, Claude, Llama, or similar foundation models), you inherit compliance obligations from the AI Act's GPAI provisions. These include:
- Technical documentation for the model and information for downstream providers
- A policy for complying with EU copyright law and a public summary of training content
- For models posing systemic risk: model evaluations, adversarial testing, and serious-incident reporting
If your GPAI provider is non-compliant, the liability may shift to you as the deployer. This is a critical vendor management concern: establish clear contractual provisions with AI providers that allocate compliance responsibilities, documentation duties, and audit rights.
The EU AI Act represents the most significant shift in technology regulation since GDPR. Companies that treat this as a GDPR-scale compliance initiative, rather than a checkbox exercise, will maintain market access while competitors scramble to catch up or exit the European market entirely.
Penalties
| Violation Type | Maximum Fine |
|---|---|
| Prohibited AI | 35M euros or 7% of global revenue |
| High-Risk Non-Compliance | 15M euros or 3% of global revenue |
| False Information to Regulators | 7.5M euros or 1% of global revenue |
For comparison: GDPR's maximum is 20M euros or 4% of revenue. The AI Act penalties are intentionally higher, reflecting the EU's assessment that AI risks can be more systemic than data protection failures.
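The "whichever is higher" rule means the percentage cap dominates for large companies, while the fixed cap dominates for smaller ones. A quick sketch of the arithmetic (the fine structure is from the table above; the revenue figures are made up for illustration):

```python
def max_fine(fixed_cap_eur: float, revenue_pct: float, global_revenue_eur: float) -> float:
    """Maximum administrative fine: the greater of the fixed cap or the revenue share."""
    return max(fixed_cap_eur, revenue_pct * global_revenue_eur)

# Prohibited-AI violation, hypothetical company with 2B euros in global revenue:
big_co = max_fine(35_000_000, 0.07, 2_000_000_000)   # 7% of 2B = 140M, exceeds 35M cap

# Same violation, hypothetical company with 100M euros in revenue:
small_co = max_fine(35_000_000, 0.07, 100_000_000)   # 7% of 100M = 7M, so 35M cap applies
```

Note that even the smaller company faces the full 35M euro fixed cap, which is why the Act is a material risk for mid-size firms, not only hyperscalers.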
The Board Brief
What to tell the board:
"The EU AI Act has extraterritorial application and will impact our products and services. We have identified our AI systems in scope and classified those that qualify as high-risk. We are conducting a gap analysis and will present a compliance roadmap by Q2 2026. Non-compliance penalties can reach 7% of global revenue, making this a material business risk that requires board-level oversight."
How I Help
With 20+ years in security and governance, I help US companies build AI governance programs that satisfy both the EU AI Act and emerging US frameworks simultaneously. My approach covers the full lifecycle: AI inventory and risk classification, gap analysis against the Act's requirements, implementation of technical controls and documentation, and ongoing compliance management.
If your organization needs board-level guidance on AI risk exposure, or if you need a practical roadmap from assessment to conformity, I can help you move from uncertainty to compliance readiness.
Schedule a consultation to assess your EU AI Act exposure and build a pragmatic compliance plan.
Adil Karam
Security & AI Governance Advisor
Helping organizations navigate security leadership and AI governance challenges.