Tech: Tracking AI timelines
Punchbowl News launches tracker mapping U.S. lawmakers’ AI timelines, citing surge in legislative proposals.
Image: GlobalBeat / 2026
AI timeline predictions: House panel demands White House release standard forecasting metrics
Sarah Mills | GlobalBeat
The House Science Committee has demanded that officials from both the Trump and former Biden administrations hand over internal forecasts predicting when artificial-intelligence systems could match or exceed human performance.
Committee Chair Rep. Zoe Lofgren said the documents would show whether federal agencies are using consistent yardsticks to judge AI risk.
Lawmakers fear agencies are flying blind as labs race toward systems that could automate scientific discovery, military command software, and critical infrastructure controls. Without shared benchmarks, Congress cannot decide where to set safety guardrails or how much money to budget for oversight.
The letters, dated Thursday and seen by GlobalBeat, give the White House Office of Science and Technology Policy (OSTP) and the National Institute of Standards and Technology (NIST) until 14 May to produce charts, slide decks, and spreadsheets that spell out “estimated arrival dates” for artificial general intelligence (AGI). Democrats and Republicans on the panel signed off on identical requests to both the current Trump OSTP director and former Biden-era officials who left office in January.
Staffers told reporters the committee wants “hard numbers” rather than vague adjectives such as “medium-term” or “within the decade.” One aide said lawmakers were stunned last month when OpenAI CEO Sam Altman testified that his own forecasting team places a “fifty-percent probability” on AGI arriving before 2030, yet offers no written methodology to outsiders.
Lofgren’s letter cites a 2024 Government Accountability Office report that found 28 federal AI projects lacked agreed timelines for capability jumps. “The absence of standardized timelines inhibits risk assessment, budget justification, and international coordination,” the letter states. It demands transcripts of any conversation since 2022 in which OSTP or NIST officials discussed “transformative AI,” “AGI,” or “frontier model milestones.”
Ranking member Rep. Jay Obernolte, a California Republican who holds a computer-science PhD, said private labs already circulate spreadsheets among themselves that plot parameter counts, training costs, and projected capability “take-off” dates. “We simply want to know if the federal government keeps similar tables, and if not, why not,” Obernolte told reporters outside the Capitol.
The request arrives weeks after DeepMind published a technical paper claiming its newest Gemini iteration showed “incipient evidence of chain-of-thought planning across scientific disciplines.” The paper included an internal forecasting graphic that projected a 50% chance of “expert-level problem solving in physics, chemistry, and biology” by 2027, assuming current hardware scaling curves hold. Committee staff say they want that kind of granular forecast from every lab receiving federal compute credits over $10 million.
OSTP spokesperson Kalisha Dessources Figures declined to say whether the office maintains a master timeline. “We are reviewing the letter and will respond appropriately,” she wrote in an email. NIST did not reply to questions.
Budget analysts say the committee’s move could expose a split inside the executive branch. The Pentagon’s Defense Advanced Research Projects Agency (DARPA) has quietly funded an “AI Timeline Tracking” program since 2021, contracting RAND Corporation to produce classified quarterly briefings. Two people who have seen the briefings said RAND’s fourth-quarter 2024 update assigned “high probability” to AI systems capable of autonomous cyber weapon design emerging “between 2026 and 2029,” depending on chip supply chains.
Yet the Department of Energy, which runs three national labs that train large scientific models, told GAO auditors last year it had “no official forecast” for AGI and therefore embeds no date assumptions into cyber-physical security protocols for power grids. Committee aides say that disconnect illustrates why Congress needs a unified baseline.
Industry lobbyists are watching nervously. “If the government standardizes timeline metrics, every SEC filing and insurance policy will have to reference them,” said one tech-funded policy advisor who requested anonymity. Markets punish companies that miss self-imposed AI product deadlines; formal federal estimates could turn guesswork into liability.
Civil-society groups welcomed the inquiry. “Forecasting should not happen in secret,” said Sneha Revanur, founder of Encode Justice, a youth-led AI safety nonprofit. Revanur pointed to stakeholder letters last year in which major labs refused to publish probability distributions for economically relevant milestones. “Public timelines force accountability,” she added.
Background
Congress has wrestled with AI forecasting opacity since 2019, when the National Security Commission on Artificial Intelligence warned that “absent a coordinated federal view of AI progress, the United States will be unprepared for adversary breakthroughs.” The commission’s final report recommended an inter-agency “capability clock” updated every six months, but the proposal stalled amid partisan fights over regulation.
The issue reignited after OpenAI released ChatGPT in late 2022. Lawmakers discovered the company’s internal governance charter predicted “AGI within 10 years” yet shared no technical criteria. Subsequent hearings revealed that neither NIST nor OSTP kept comparable projections, leaving agencies to rely on media reports and venture-capital slide decks.
What’s Next
Committee staff say they will hold a public hearing in June if agencies miss the 14 May deadline. Witness lists already include RAND analysts, Anthropic policy executives, and former Biden OSTP director Arati Prabhakar. Lofgren warned she is willing to subpoena records if negotiations stall, a step that could force labs to disclose forecasting formulas investors have never seen.
Technology & Science Editor
Sarah Mills is GlobalBeat’s technology and science editor, covering artificial intelligence, cybersecurity, public health, and climate research. Before joining GlobalBeat, she reported for technology desks across Europe and North America. She holds degrees in computer science and journalism.