Governance in the age of AI is often treated like that aspirational pair of jeans you keep in the closet—bought for when you finally “get in shape” or start going to the gym. A worthy intention, always deferred. Many teams plan to tackle it once they have more headcount, a bigger budget, or fewer competing fires. Meanwhile, Responsible AI champions—or the people assigned that title—are often overextended and undertrained. It’s usually a secondary duty, tacked onto someone’s primary role without the authority or resources to do it well. But this patchwork approach exposes your enterprise to unmitigated risk, because under emerging frameworks like the EU AI Act, the burden of governance falls squarely on anyone who builds, sells, or operationalizes GenAI systems.
What Article 13 Actually Says (And Why It Applies to You)
I’m not sure how many AI experts fully understand the regulatory burden they’re taking on when building or operationalizing generative AI systems. Under Article 13 of the EU AI Act, accountability doesn’t stop at the organization that uses AI—it reaches upstream to anyone who builds, trains, packages, or sells it.
That’s you—the PM owning just one feature in a larger orchestration stack. That’s you—the data annotator fine-tuning prompts or validating responses. That’s you—the startup founder reselling an LLM via an API wrapper. Article 13 makes it clear: if your work contributes to how an AI system functions in the world, you are a provider under the law.
Let’s unpack how Article 13 broadens the scope of responsibility by defining multiple roles:
- Provider: Any entity that develops or commissions an AI system with the intention of placing it on the market. This includes vendors, commercial open-source developers, and institutions—whether public or private—who offer foundational models for others to fine-tune or deploy.
- Deployer: The organization or individual using the system in a real-world context, such as a platform team, enterprise client, or public agency.
- Authorized representative, importer, and distributor: These roles apply to those involved in the logistics of introducing an AI system to the EU market—including partners, resellers, and those embedding AI into broader systems.
Here’s what most people working on AI systems need to understand: Providers are responsible for compliance before the system is ever used. That means:
- Delivering clear instructions for safe and appropriate deployment,
- Embedding technical safeguards that support transparency and human oversight,
- Ensuring traceability throughout the system’s lifecycle,
- And offering documentation that is understandable and auditable—not hidden behind trade secrets or rendered unreadable through jargon.
Obscure or intentionally obstructive documentation fails to meet Article 13’s standards.
Most significantly, this isn’t just about compliance checklists. The requirement for human-in-the-loop design—where each decision node can be reviewed and overridden—reshapes how we think about system architecture and autonomy. It demands that governance be designed in, not added on.
So if you’re involved in building GenAI infrastructure, training custom models, or designing orchestration frameworks—even if you’re not the one who deploys the system—you are a provider under the law. That makes you ethically and legally accountable for what your system enables in the world.
Critical Systems: A Moving Target
The burden of governance is significantly higher if your system falls under the Critical designation. You may be wondering: what constitutes a critical system? I’m glad you asked—though the answer is both long and subject to change.
“Critical” may refer to AI systems that interface with public infrastructure, biometric identification tools, systems influencing medical decision-making, or those with the potential to negatively impact employment, education, or housing outcomes.
The lawmakers behind Article 13 appear to have anticipated the rigidity of international legislative change. To avoid being boxed in by a static legal definition, they intentionally crafted the definition of Critical Systems to be broad and discretionary. This strategic vagueness acts as a regulatory safeguard, allowing governing bodies to update what counts as “critical” without having to rewrite the law.
Under Article 13 and its associated chapters, providers of these systems are held to a higher operational standard. They must:
- Implement robust risk management frameworks across the entire AI lifecycle,
- Ensure continuous monitoring for model drift, emergent behavior, and misuse,
- Enable real-time human override and fail-safe mechanisms,
- Submit to post-market surveillance and audits, not just pre-deployment compliance.
These aren’t check-the-box technical controls. They require deep operational maturity—dedicated ownership, thorough documentation, persistent oversight, and a clearly defined escalation path when things go off-script.
If your product—or even its downstream use cases—might eventually fall under the “critical” umbrella, then you must govern it like it already does. Because the cost of being unprepared isn’t just regulatory. Cutting corners for expediency or short-term investor optics will damage far more than your release timeline. You’re risking your reputation, financial stability, and ultimately, your license to operate within the EU.
Licensing, Delays, and the Cost of Inaction
The governance debt you accrue by adopting a “we’ll get to it later” attitude toward compliance carries hidden costs—not just for your org, but for the entire industry. Regulatory momentum is shifting. Experts worldwide are urging legislators to treat AI governance as an a priori necessity, not a patchwork of retroactive fixes.
Professor Woodrow Hartzog, a fellow at the Cordell Institute for Policy in Medicine and Law at Washington University in St. Louis, has advocated for enforceable frameworks that include licensing and liability for organizations involved in building and deploying AI systems. In testimony before the Senate Judiciary Subcommittee on AI Oversight, Professor Gary Marcus, professor emeritus of psychology and neural science at NYU, echoed that sentiment. He warned Congress that aspirational values are not enough. AI systems, he argues, carry too much societal risk to be governed by good intentions alone.
Marcus introduced the concept of “datocracy”—an emerging threat in which opaque, unaccountable systems replace democratic processes—and proposed the creation of an independent international institution akin to CERN to coordinate oversight globally. His message was clear: if industry fails to govern itself responsibly, governments will be forced to do it for us—and they will move slowly, with heavy restrictions.
If your organization ships GenAI features today without clear governance protocols, you are accruing a kind of regulatory debt. And like technical debt, it compounds. The longer you delay building structured oversight—such as iteration-level documentation, model behavior tracing, and intervention pathways—the more likely you are to face expensive bottlenecks when external licensing regimes eventually arrive. Governance debt has creditors. And they always come calling.
Operationalizing Governance Before You’re Forced To
Integrating good governance isn’t a bureaucratic delay—it’s your ethical and strategic foundation. You can start now, and doing so will not only prepare you for regulatory scrutiny but also align your development with what’s right for the broader public good.
To keep your governance “balance sheet” healthy, start with the following foundational practices:
- Maintain clear, concise documentation from day one. If your team doesn’t have the bandwidth to document and update your system’s development and behavior, then you may not have the operational readiness to build or deploy AI responsibly. Documentation is not optional—it’s your audit trail.
- Develop flexible, application-aware model cards. These cards should clearly state what the system does, its limitations, its intended uses, and known failure modes. Also, invest in tooling that enables human override of any decision made by the AI. Article 13 mandates it—and more importantly, lives may depend on it.
- Embed misuse and drift testing into your dev cycle. These tests should reflect plausible threats specific to your use case, not just generic red-teaming scenarios. Treat the results as part of your model’s living documentation, and make them accessible and actionable.
- Assign clear, accountable ownership for AI governance. Governance cannot be a side hustle for an overburdened contributor. It needs to be someone’s job—explicitly. Think of this person as your in-house expert, your internal regulator, the one who could be called to testify when it matters most.
Incorporating these practices early isn’t just proactive—it’s protective. It positions your team to innovate responsibly, adapt to evolving regulations, and maintain the trust of your users, stakeholders, and society.
Governance Is Not a Tax on Innovation—It’s a Catalyst for It
Adopting governance early in your development lifecycle doesn’t slow you down—it protects you from risk, reduces downstream bottlenecks, and signals to the world that your team is building AI systems with integrity. Strong governance practices foster trust among users, regulators, and the public. They also craft a transparent narrative about your enterprise—one that will define how you are remembered during this critical inflection point in technological history.
So, how did you ride this wave of AI momentum? Did you choose what was easy—or what was right?
Balancing agile production with responsible oversight is not simple. But remember: you are helping to shape the future of human-machine interaction, of labor, of language, and of life itself. If you are part of this extraordinary turning point, then you are also responsible for how it unfolds—and for mitigating the harms we have yet to imagine.
References
- European Union AI Act, Article 13 and associated chapters.
Available at: https://artificialintelligenceact.eu/ - EU AI Act: Title III – High-Risk AI Systems
Full text and updated classifications.
Available at: https://artificialintelligenceact.eu/the-act/title-iii-high-risk-ai-systems/ - Senate Judiciary Subcommittee on Privacy, Technology, and the Law, “Oversight of AI: Principles for Regulation” – Testimony of Woodrow Hartzog, Professor of Law at Boston University School of Law and Cordell Institute Fellow.
Hearing date: September 2023.
Transcript available via: https://www.judiciary.senate.gov - Senate Judiciary Subcommittee Hearing, “Oversight of A.I.: Legislating on Artificial Intelligence” – Testimony of Gary Marcus, Professor Emeritus at NYU.
Hearing date: May 2023.
Available at: https://www.judiciary.senate.gov - Marcus, Gary. “Why We Need a CERN for AI.”
Published on: garymarcus.substack.com - Model Cards for Model Reporting – Mitchell et al., 2019.
Available at: https://arxiv.org/abs/1810.03993