Tech Nonprofits to Feds: Don’t Weaponize Procurement to Undermine AI Trust and Safety
EFF-led tech nonprofits warn U.S. agencies against leveraging procurement rules to erode AI oversight safeguards.
Sarah Mills | GlobalBeat
The Electronic Frontier Foundation and 21 partner nonprofits told federal agencies that proposed procurement rules would let contractors hide AI system flaws from public scrutiny.
The coalition’s 27-page filing attacked a January draft from the Office of Federal Procurement Policy that drops mandatory disclosure requirements for training data and bias audits.
Federal agencies spend $2 billion annually on AI systems, according to the Government Accountability Office, making procurement rules the de facto standard for private vendors nationwide.
Current draft language lets vendors mark safety documentation as "confidential commercial information," the groups wrote, preventing watchdogs from testing claims about accuracy or discrimination.
“Taxpayers deserve to know if the facial recognition system at their airport misidentifies Black women at triple the rate of white men,” EFF staff attorney Saira Hussain told reporters Tuesday.
The revised rules eliminate a Biden-era requirement that contractors reveal training data sources, model architectures, and independent audit results before winning contracts worth more than $10 million.
Office of Management and Budget officials defended the changes as necessary to protect trade secrets and maintain U.S. competitiveness against Chinese tech firms.
Industry lobbyists argued that public disclosure would let competitors reverse-engineer proprietary systems, though the original rules allowed redaction of legitimate secrets.
Consumer Reports tech policy analyst Sumit Sharma called that reasoning “a red herring” since audit results can be shared without revealing source code or training datasets.
Federal contractors must already disclose cybersecurity practices and financial conflicts under existing transparency rules, the coalition noted.
The timing matters. Agencies award more AI contracts in the fourth quarter than any other period, according to federal procurement data analyzed by Bloomberg.
Hussain’s group flagged that vendors could hide discrimination behind confidentiality claims just as agencies ramp up purchases of predictive policing and benefits-screening algorithms.
Taxpayers foot the bill when flawed systems fail. The IRS paid identity verification vendor ID.me $14 million last year after the company admitted its facial recognition falsely rejected millions of legitimate taxpayers.
Background
Federal procurement policy sets purchasing rules for every agency from the Pentagon to the Agriculture Department, making the office’s AI guidelines default standards for any company wanting federal business.
The Biden administration required AI transparency in a September 2023 executive order, mandating that contractors prove their systems don’t discriminate before deployment. Trump revoked that order on February 7 but left procurement rules unchanged, creating the current review process.
Large tech firms have fought disclosure requirements since 2022, when the first federal AI procurement standards required bias testing for systems affecting housing, employment, or credit decisions.
What’s Next
The Office of Federal Procurement Policy must respond to public comments by May 15 before finalizing rules that take effect October 1, the start of fiscal 2027 procurement cycles.
Technology & Science Editor
Sarah Mills is GlobalBeat’s technology and science editor, covering artificial intelligence, cybersecurity, public health, and climate research. Before joining GlobalBeat, she reported for technology desks across Europe and North America. She holds a degree in Computer Science and Journalism.