BLOG
May 13, 2026
Travis Good

EU AI Act Compliance: What US SaaS Companies Need to Know

A practical guide to EU AI Act compliance for US SaaS companies.

The EU AI Act has been around for a couple of years now, having entered into force in August 2024. It’s the first comprehensive law regulating artificial intelligence, and like GDPR, it reaches beyond the EU’s borders.

If your product uses AI and you sell to EU-based customers, the Act applies to you, even if you don’t have an EU office or EU subsidiary.

The good news is that for most US-based companies, the obligations are fairly light. That said, you’ll likely see the EU AI Act popping up in procurement if you’re selling in Europe. So here’s what you need to know.

Does the EU AI Act Apply to Your US Company?

If your AI product is used by EU customers, or its output reaches users in the EU, you're in scope. Article 2 of the Act sets the same kind of extraterritorial reach as GDPR. It catches providers placing AI on the EU market and providers or deployers whose AI output is used in the EU, wherever they're located. You don't need a European entity, a European customer, or a European user base of any specific size.

(Note: More on providers and deployers below.) 

In practice, the most likely way most US-based organizations will come into contact with the EU AI Act is through a European buyer’s procurement team. In the same way EU-based customers started asking about GDPR in 2018 (and, more recently, DORA), they’re now adding AI sections to their security questionnaires and procurement policies.

The Four Risk Tiers and Where Most US SaaS Lands

The Act sorts AI systems into four tiers, and most US B2B SaaS lands in the bottom two.

  1. Unacceptable risk (prohibited). Article 5 bans eight categories of AI outright: harmful manipulation, exploitation of vulnerabilities, social scoring, untargeted facial-image scraping, emotion recognition in workplaces and schools, biometric categorization by protected characteristics, and two law-enforcement uses (predictive policing based on profiling and real-time remote biometric identification in public spaces). 
  2. High risk. Covers two categories. The first is AI used as a safety component of a regulated product (medical devices, machinery, toys, vehicles). The second is AI used in the standalone domains listed in Annex III, such as biometric identification, critical infrastructure, education and proctoring, employment and HR decisioning, credit scoring, insurance pricing, law enforcement, migration, and the administration of justice. If you're building a CRM, a productivity tool, or a developer platform, you're not here. If you're building an AI recruiting screener, a credit-underwriting model, or an edtech proctoring system, you almost certainly are.
  3. Limited risk. This covers areas like chatbots, synthetic media, and AI-generated text on matters of public interest. The obligation here is transparency, set out in Article 50: EU-based users must know they're interacting with AI or seeing AI-generated content. 
  4. Minimal risk. Spam filters, recommender systems, video-game AI, most embedded AI features. 

The Two Obligations That Impact Almost Every Organization

Two parts of the Act cut across every tier, and both apply regardless of whether your AI is minimal-risk or high-risk.

AI Literacy (Article 4)

Article 4 became applicable in February 2025 and requires every provider and deployer of AI to ensure that its staff, and anyone operating or using AI systems on its behalf, have a "sufficient level of AI literacy." 

Generally, this requirement can be satisfied with a documented AI use policy and internal training for the people building and operating AI systems across your business. It will affect most businesses selling into the EU, since almost every company now uses AI internally and many also use AI in their products. 

Transparency (Article 50)

Article 50 applies from August 2, 2026, and covers four cases:

  • Chatbots and conversational AI have to disclose to users that they're interacting with an AI system.
  • AI-generated or manipulated content (deepfakes, synthetic media) has to be labeled.
  • AI-generated text published to inform the public on matters of public interest has to be disclosed.
  • Emotion recognition and biometric categorization systems have to inform the people exposed to them.

Most of this is a UI and copy change rather than a deep security and compliance project. But the deadline is concrete, and any AI features covered by the above cases will need to be labeled. 
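To make "a UI and copy change" concrete, here's a minimal sketch in React/TypeScript of what chatbot and generated-content disclosures might look like. The component names, prop names, and wording are illustrative assumptions, not anything prescribed by the Act; what matters is that EU users are clearly told when they're talking to an AI or viewing AI-generated content.

```tsx
import React from "react";

// Hypothetical disclosure banner shown above or inside an AI chat window.
// Article 50 expects users to know they are interacting with an AI system,
// unless that is already obvious from the context.
export function AiChatDisclosure() {
  return (
    <p role="note" className="ai-disclosure">
      You are chatting with an AI assistant. Responses are generated
      automatically and may contain mistakes.
    </p>
  );
}

// Hypothetical label for AI-generated or AI-manipulated content
// (summaries, images, synthetic media) surfaced in your product UI.
export function AiGeneratedLabel({ contentType }: { contentType: string }) {
  return <span className="ai-generated-label">AI-generated {contentType}</span>;
}
```

One caveat: for synthetic media, the Act also expects machine-readable marking (for example, embedded metadata or watermarking) from the provider of the generating system, so a visible label alone may not be enough for that case.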

Is Your Business a Provider, Deployer, or Both?

The Act assigns obligations by role, not by company type, and it’s quite common for SaaS companies to fall under both categories: 

  • A provider is the entity that places an AI system on the market under its own name. 
  • A deployer is the entity using an AI system under its authority. 

A typical SaaS company will be a deployer of AI technology from providers like OpenAI, Anthropic, and Google, and also a provider if it ships features built on top of that AI to its customers under its own name. 

The EU AI Act Timeline

Since entering into force in August 2024, the Act has been rolling out in phases:

  • 1 August 2024: The Act entered into force.
  • 2 February 2025: Prohibitions (Article 5) and AI literacy (Article 4) became applicable.
  • 2 August 2025: General-purpose AI model rules, governance provisions, and the penalty regime became applicable.
  • 2 August 2026: The Act applies in full, including high-risk system requirements and Article 50 transparency.
  • 2 August 2027: Extended transition for AI embedded in regulated products under Annex I, and for GPAI models that were already on the market before 2 August 2025.

For many US-based SaaS companies, August 2026 is the key deadline when you'll need to meet the transparency requirements from Article 50.

How the EU AI Act Generally Shows Up for US SaaS

For most US SaaS companies, the EU AI Act will show up first in the security questionnaires and procurement processes of the EU companies you’re selling into.

Procurement teams may want to know which models you use, what controls you have in place to protect data, and which policies you have across your team to ensure the safe use of AI. It’s not too dissimilar from how DORA has started to show up in more procurement processes recently. 

The key things are to understand exactly how the EU AI Act impacts your business, to meet any requirements that apply (such as labeling where consumers are interacting with AI), and to have clear answers and policies ready for any security questionnaires that reference the EU AI Act.

If your business falls into the higher-risk categories, you may need to go deeper than policies and labeling. Not sure where you fall? Get in touch with our team here.

Why More Businesses Are Pursuing ISO 42001 

There’s no formal "EU AI Act certification." But we’re starting to see more organizations explore ISO 42001 as a way to demonstrate their AI compliance practices. ISO 42001 covers AI governance, risk management, lifecycle controls, transparency, and accountability, many of the same areas the EU AI Act itself addresses. 

ISO 42001 is the framework most companies I speak to are defaulting to for a couple of reasons. Firstly, ISO is a trusted, globally recognized brand, so your customers in Boston and Berlin will know what an ISO certification means. And secondly, auditors are already set up to assess it.

The ISO 42001 standard is designed to bring structure to conversations about AI usage and shows that your company has thought about how to build, deploy, use, and monitor AI with clear governance.

For most startups, I see ISO 42001 as the fastest path to demonstrating AI maturity right now. And in the same way SOC 2 became a de facto industry requirement, ISO 42001 looks set to follow over the next 12 months.

Are You Ready for the EU AI Act?

The most important thing for any US-based company is to understand how the EU AI Act applies to its business and what needs to be in place to meet the relevant requirements. 

If you get the right policies and AI disclosure labeling in place, the EU AI Act is unlikely to cause delays to your sales process. But if you get caught out by a questionnaire asking how you meet Article 50 and you don’t have a good answer ready to go, you may start to see deals stall. 

Want help ensuring you’re all set? Workstreet helps US SaaS companies get ISO 42001-ready alongside their existing SOC 2 or ISO 27001 program. If you're seeing EU AI Act questions show up in your security reviews and want to understand what the best next steps are for your business, talk to our team.

Travis Good

Architect of security and privacy programs for 1,000+ hypergrowth companies. Author of "Complete Cloud Compliance," HITRUST 3rd Party Council member, and recognized speaker on startup security.