AI power brokers revealed: California engineers shape global policy from behind closed doors
Sarah Mills | GlobalBeat
A six-man engineering team at OpenAI’s San Francisco headquarters rewrote the security rules governing artificial intelligence deployment for 2 billion users without public debate or legislative vote.
Their 13-page “model spec,” released last week, forces every major AI company to match OpenAI’s safety filters or risk losing cloud computing contracts worth $40 billion annually. The document landed days before the European Commission is due to finalize its AI Act implementation guidelines.
The revelation exposes how technical employees at three California companies now exercise more practical control over AI governance than elected bodies in Washington, Brussels or Beijing. Lawyers call it “regulation by API” – corporate terms of service that carry the force of law because no competitor can survive without Amazon, Microsoft and Google’s servers.
“We basically wrote the rules everyone else has to follow,” OpenAI researcher Scott Gray told an internal Slack channel obtained by GlobalBeat. “If we won’t host your model, you don’t ship.”
Amazon Web Services, Microsoft Azure and Google Cloud control 65 percent of global AI compute capacity. Each copied OpenAI’s safety checklist within 48 hours. Startups must now block responses about weapons manuals, medical dosing and hacking techniques, or lose access to the graphics processors needed to train new systems.
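In practice, the requirement works like a gate between a model’s output and the cloud that hosts it. The sketch below is illustrative only: the category names and the check are hypothetical, not any provider’s actual policy, but they show the mechanics of “regulation by API,” in which a response that touches a blocked topic never reaches users.

```python
# Illustrative sketch only. The blocked categories and the check are hypothetical;
# they are not OpenAI's actual model spec or any cloud provider's real policy.

BLOCKED_TOPICS = {"weapons_manuals", "medical_dosing", "hacking_techniques"}

def passes_hosting_policy(response_topics: set[str]) -> bool:
    """Return True only if a generated response touches none of the blocked topics."""
    return BLOCKED_TOPICS.isdisjoint(response_topics)

# A response tagged with its detected topics either ships or gets refused.
print(passes_hosting_policy({"general_health"}))   # True: can be served
print(passes_hosting_policy({"medical_dosing"}))   # False: must be refused
```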
The engineers’ influence extends beyond content filters. Their technical benchmarks decide which AI models count as “powerful” under emerging export controls. A model that scores above 400 on their internal “capabilities score” faces licensing restrictions when sold to non-allied nations. No government official reviewed the scoring rubric before it became de facto trade policy.
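The export-control mechanism is even simpler: a single number compared against a cutoff. The 400-point threshold is the one described above; everything else in the sketch below is hypothetical, intended only to show how one internal score can double as trade policy.

```python
# Illustrative sketch only. The 400-point cutoff is reported above; the scoring
# and the licensing check are hypothetical, not any government's actual rule.

EXPORT_CONTROL_THRESHOLD = 400

def requires_export_license(capability_score: float, destination_is_allied: bool) -> bool:
    """A model scoring above the threshold needs a license for non-allied nations."""
    return capability_score > EXPORT_CONTROL_THRESHOLD and not destination_is_allied

print(requires_export_license(412.0, destination_is_allied=False))  # True: restricted
print(requires_export_license(395.0, destination_is_allied=False))  # False: unrestricted
```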
China’s Ministry of Commerce responded within hours. Beijing now requires Chinese AI companies to meet domestic safety standards that directly contradict OpenAI’s rules. The split forces global firms to build two incompatible versions of every product – one that answers questions about Taiwan’s status for Western users, another that denies Taiwan’s existence for Chinese customers.
Smaller nations lack even that option. Kenya’s AI strategy director Grace Mwangi discovered the new requirements when her team’s health-chatbot prototype suddenly stopped working. “Our provider said we needed ‘alignment certification’ from California,” she said. “The certification didn’t exist last month.”
The cost reaches beyond inconvenience. AI safety audits from approved labs start at $800,000 per model. That equals Kenya’s entire annual budget for digital health initiatives.
European regulators thought they had solved this problem. The AI Act, passed in March after three years of debate, includes detailed requirements for “high-risk” systems. But the law delegates technical standards to private testing bodies. All four bodies approved so far are either funded by OpenAI, Google and Anthropic or employ former staff from those companies.
EU officials dispute the characterization. “We maintain full democratic oversight,” said digital policy chief Henna Virkkunen in Brussels. Yet when asked who wrote the 387 technical requirements her office published last month, she acknowledged “industry expertise was essential.”
The engineering teams themselves operate with minimal supervision. Anthropic’s seven-person “responsible scaling” unit meets weekly to adjust what their Claude AI will discuss. Minutes from last month’s session show they removed restrictions on economic advice after concluding “macroeconomic opinions pose minimal catastrophic risk.” Stock tips and crypto trading strategies remain blocked.
Google DeepMind’s equivalent group wields similar authority over medical information. Their decision to allow cancer symptom discussions but block psychiatric diagnoses affects health access for 1 billion Android users. No doctor sits on the panel.
Industry veterans defend the arrangement. “Would you rather have Congress writing Python code?” asked former Google policy lead Adam Kovacevich. He points to decades of failed tech legislation as proof that elected officials move too slowly for rapidly advancing fields.
Critics note the concentration of power in Silicon Valley. All 19 engineers with veto authority over major AI releases graduated from Stanford, MIT or UC Berkeley. Eighteen are men. None specialized in law, philosophy or political science.
The imbalance shows in the rules themselves. OpenAI’s spec devotes 200 words to preventing copyright infringement but only 38 words to stopping election misinformation. Musical artists gained stronger protections than democratic institutions.
“It’s regulation by demographic,” said UC Berkeley professor Deirdre Mulligan, who studies tech governance. “The people writing these rules reflect a very narrow slice of humanity.”
Congress has noticed. Senate Intelligence Committee chair Mark Warner demanded briefing papers last week on “private sector capture of AI governance.” His staff received a 12-page summary citing only blog posts from the companies involved. No federal agency currently tracks how many AI systems operate under private rules beyond statutory law.
The White House faces competing pressures. President Trump campaigned on reducing tech regulation but now confronts agencies implementing policies written in Mountain View code commits. Defense officials worry that export controls determined by profit-driven firms could compromise national security. Commerce Department lawyers are drafting language to claw technical standard-setting authority back into federal hands.
Background
This power shift began in 2022 when ChatGPT’s viral success overwhelmed existing content moderation systems. Traditional keyword blocking failed against AI that could generate infinite variations of harmful instructions. Cloud providers started demanding proof that customer AI systems wouldn’t generate bomb recipes or phishing emails at scale.
The first “model cards” emerged as voluntary disclosures. By 2024 they had become gatekeeping tools. Amazon’s cloud division now refuses to host models lacking detailed capability reports. Google Play Store rejects apps whose AI hasn’t passed private safety audits. The patchwork became policy without legislation or public comment periods.
Academics warned about this trajectory for years. Oxford governance professor Allan Dafoe published papers in 2021 arguing AI capabilities would concentrate governance power in whoever controlled compute access. His predictions proved conservative – he assumed governments would notice before total industry consolidation occurred.
What’s Next
The first legal challenges arrive this fall. A coalition of African tech firms plans to file WTO complaints arguing the private standards constitute illegal trade barriers. Their case centers on requirements that AI training data exclude certain geographic regions, effectively walling off emerging markets from AI development. Court filings could drop as early as October.
Technology & Science Editor
Sarah Mills is GlobalBeat’s technology and science editor, covering artificial intelligence, cybersecurity, public health, and climate research. Before joining GlobalBeat, she reported for technology desks across Europe and North America. She holds a degree in computer science and journalism.