
Smarter, Faster, Lighter: A roadmap for agile evidence building

COMMENTARY | With DOGE initiatives fundamentally changing the state of federal policy evaluation, it could be time to embrace faster, leaner approaches to building evidence about agency actions.

Federal evidence building is at a crossroads. The Department of Government Efficiency (DOGE) abruptly ended several longstanding research and survey initiatives and laid off employees from evaluation and statistical offices. These actions underscore a larger truth: if evidence is to remain relevant in policymaking, it must evolve from a boutique function into a core operating capability that is embedded in how government designs, delivers and improves services in real time.

Traditional approaches to federal evaluation are slow, costly and often disconnected from policymaking. Many evaluations take years to complete and cost millions, only to yield findings that are too modest or too late to be useful. Even when findings are clear, they rarely influence decisions. Programs like Social Security’s Ticket to Work or the 21st Century Community Learning Centers have undergone rigorous evaluations that found little to no positive impact—yet they persist, reauthorized, funded and reevaluated year after year. In these cases, there is no meaningful feedback loop between evidence and action.

Formula and competitive grant programs, which make up a large portion of federal spending, often lack meaningful evaluation requirements altogether. While reporting under the Government Performance and Results Act (GPRA) provides some insight into how funds are spent, it tells us little about whether programs work, why they work or what outcomes they produce. Evaluations that do occur often assess the overall portfolio of grants—too broad and inconsistent to provide actionable insights at the program or grantee level.

Federal surveys, too, are overdue for modernization. Many have been on autopilot for decades, consuming millions in taxpayer dollars without a clear link to policy. For instance, the Health and Retirement Study (HRS), funded by the National Institute on Aging, has generated thousands of academic papers but little discernible impact on federal policy. With limited practical application and minimal reporting to policymakers, it’s worth questioning whether such surveys serve government priorities or merely the interests of researchers.

In short, the current model of federal evidence building is outdated. It’s too slow for today’s policy environment, too expensive to justify in a time of fiscal restraint and too removed from the real-time decisions that drive public impact. But the answer is not to abandon evaluation. Rather, we must embrace a new model—one that is faster, cheaper and purpose-built for action.

This new model is agile evidence building. It replaces retrospective, one-off studies with real-time, embedded performance tracking. It shifts the focus from research for its own sake to evaluation that directly supports better operations and service delivery. Under this model, agencies prioritize:

1. Setting Minimum Viable Evaluation (MVE) Standards. Agencies should set clear standards for a minimum viable evaluation that both formula and competitive grant recipients must follow. These standards should require recipients to define and measure outcomes, identify fraud mitigation strategies and assess return on investment. MVEs should rely on administrative or secondary data where possible and follow streamlined reporting templates to reduce burden. The federal role is not to conduct these evaluations, but to define the floor for what constitutes credible, actionable evidence—and to use that information to guide funding decisions. (A minimal sketch of what such a reporting template could look like appears after this list.)

2. Reporting on Return on Investment. Once grant recipients complete MVEs, federal agencies should focus on aggregating and reporting those results to show the broader return on investment of federal spending. Rather than launching new federal evaluations, agencies should consolidate MVE findings and publish them on a centralized dashboard (e.g., a reimagined Evaluation.gov). The dashboard should highlight indicators such as jobs created, cost per outcome or overall program value—enabling cross-program comparisons and guiding resource allocation. (A sketch of this kind of roll-up also follows the list.)

3. Using Evidence to Drive Citizen Services. Evaluation should be embedded in the design and delivery of citizen-facing services. Every major digital platform—whether used for patents, passports or benefits—should include measurable service goals and real-time performance monitoring. Product and evaluation teams should collaborate to ensure that systems support continuous improvement and public accountability. By tracking what services deliver—and how well they serve the public—agencies build trust and responsiveness. (A simple example of such a goal check closes out the sketches below.)

4. Maintaining Core National Indicators. The government must continue collecting and reporting data on key national conditions across sectors like health, education and public safety. But these data collections should be updated to use modern, cost-effective tools like survey panels, passive data collection and administrative records. Agencies should focus on making data more usable and relevant to policymakers and the public, reducing duplication and improving quality.
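
To make the first recommendation concrete, here is a minimal sketch, in Python, of what a streamlined MVE reporting template could look like if expressed as a structured record. Every field name, program and figure below is an illustrative assumption, not an existing federal standard.

```python
from dataclasses import dataclass

@dataclass
class MVEReport:
    """One grantee's minimum viable evaluation record (illustrative fields only)."""
    grantee: str
    program: str
    outcome_metric: str   # e.g., "participants placed in jobs"
    outcome_value: float  # measured result for the reporting period
    federal_cost: float   # federal dollars spent in the same period
    data_source: str      # administrative or secondary data preferred
    fraud_mitigation: str # brief description of controls in place

    def cost_per_outcome(self) -> float:
        """Federal cost per unit of outcome: the core ROI-style indicator."""
        if self.outcome_value <= 0:
            return float("inf")  # no measured outcomes yet
        return self.federal_cost / self.outcome_value

# Example: a hypothetical workforce grantee's annual submission
report = MVEReport(
    grantee="Example Workforce Board",
    program="Job Training Grant",
    outcome_metric="participants placed in jobs",
    outcome_value=420,
    federal_cost=1_260_000,
    data_source="state unemployment insurance wage records",
    fraud_mitigation="cross-checks against payroll records",
)
print(f"Cost per outcome: ${report.cost_per_outcome():,.0f}")
```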
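
Building on that, the dashboard described in the second recommendation would largely be an exercise in rolling grantee-level MVE results up to the program level. A minimal sketch, again with made-up programs and numbers:

```python
from collections import defaultdict

# Hypothetical MVE results submitted by grantees; all values are illustrative.
submissions = [
    {"program": "Job Training Grant", "outcomes": 420, "cost": 1_260_000},
    {"program": "Job Training Grant", "outcomes": 310, "cost": 1_085_000},
    {"program": "Literacy Grant",     "outcomes": 150, "cost": 300_000},
]

# Roll grantee-level results up to the program level, as a dashboard might.
totals = defaultdict(lambda: {"outcomes": 0, "cost": 0})
for s in submissions:
    totals[s["program"]]["outcomes"] += s["outcomes"]
    totals[s["program"]]["cost"] += s["cost"]

# Cost per outcome enables the cross-program comparison described above.
for program, t in sorted(totals.items()):
    cpo = t["cost"] / t["outcomes"]
    print(f"{program}: {t['outcomes']} outcomes at ${cpo:,.0f} each")
```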
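
Finally, the real-time monitoring described in the third recommendation can start as simply as comparing an operational metric against a published service goal. A sketch, assuming a hypothetical 21-day passport processing target:

```python
from statistics import median

# Hypothetical service goal: median processing time under 21 days.
TARGET_DAYS = 21

def check_service_goal(processing_days: list[float]) -> str:
    """Compare recent processing times against the published service goal."""
    current = median(processing_days)
    status = "meeting" if current <= TARGET_DAYS else "missing"
    return f"Median {current:.0f} days vs. {TARGET_DAYS}-day goal: {status} goal"

# Recent completions pulled from an operational system (made-up numbers).
recent = [18, 22, 19, 25, 17, 20, 23]
print(check_service_goal(recent))
```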

To succeed in a government increasingly focused on speed, efficiency and return on investment, evidence must become lighter, faster and smarter. The tools of evaluation still matter—but they must serve a new purpose: to drive better decisions in real time, with fewer resources and greater agility.

By adopting the principles of agile evidence building, federal agencies can move beyond outdated systems and into a new era of responsive governance. It’s time to shift from evidence as an academic exercise to evidence as an operational advantage—one that ensures public services deliver meaningful results for the American people.

Justin Baer is vice president for program evaluation and policy analysis at Fors Marsh.