
Democratic Caucus letterhead banner

Views & Estimates :: March 5, 2003

Additional Democratic Views and Estimates on the Budget for Civilian Science and Technology Programs, Fiscal Year 2004

We generally agree with the policy guidance offered by the Majority in their Views and Estimates to the Budget Committee on the FY04 budget for civilian R&D. Those Views start with a global observation about the importance of adequate funding for science and technology, but the document is actually silent on what level of funding the Majority believes would be adequate. Instead, we are left with a collection of program-level recommendations done up department-by-department. That leaves us wondering what use the Budget Committee can put this document to as it looks for guidance on, for example, funding levels for Function 250 over the next five years. There is a fundamental disconnect between the purpose of composing Views and Estimates and the content of the Majority's report.

But this is nothing new. Each year for the past decade we have seen the Views and Estimates move further from their intended purpose of providing a solid, analytical, five-year recommendation to the Budget Committee. Many of our Members will sign on to the Majority's Views because the report does no harm, but the report also does no good by evading its central responsibility. Content is sacrificed in pursuit of unanimity.

We might make the same calculation were we charged with writing Views and Estimates because the budget process itself has become largely irrelevant. If the process is irrelevant, why make enemies and stir dissent by asking Members to sign up to big budget increases in S&T for the next five years (or cuts, or minimal increases - whatever poison you choose will simply divide Members)? The logic of the situation leads one irresistibly away from offering a clear-eyed vision of the S&T budget for the next five years and towards a detailed discussion of specific programs and initiatives. It is a kind of bad conjurer's trick to use lots of hand-waving about specific programs in hopes that no one will notice that the rabbit - a five-year projection - didn't disappear because it was never there in the first place. The whole exercise reminds us of the Committee's much-ballyhooed 1998 National Science Policy Study, which meekly called for "stable and substantial" funding for Federal R&D without actually committing to any specific funding recommendations. As pointed out by critics at the time, the "stable and substantial" criterion would be met by a budget that was slowly, steadily, inexorably declining over time.

In these additional views, we want to suggest an overall level of funding for FY2004 for R&D and offer some observations on the use of metrics in the President's budget request and on earmarks in the budget process.

A Reluctant Recommendation

The Administration's overall request for R&D amounts to a 4.8% increase over the FY2003 appropriated levels and yet that appears inadequate. Under the President's request, many programs would receive less funding in FY2004 than in FY2003. The Department of Energy's civilian research programs, the National Institute of Standards and Technology, the National Oceanic and Atmospheric Administration, the Environmental Protection Agency and the Departments of Agriculture, Interior, Veterans Affairs and Education would all face R&D cuts from the 2003 appropriated level if the President's request were enacted. Perhaps most tellingly, non-defense, non-NIH research in the President's budget grows by just 1.6% from the 2003 enacted level - below the level of inflation. It seems a mistake then to stay wedded to the President's numbers.

More than a mistake, it might be irresponsible. The reality is that the appropriators have been pushing for strong growth in R&D accounts; R&D increased by 13.8% from 2002 to 2003. On top of this, there is near-unanimous agreement that the need for national security-related research continues to grow, and there is a consensus that we should be investing more in the physical sciences and in such areas as energy and environmental technologies. Further, while we can't say what impact the Columbia tragedy will have on NASA's budget, we can guess that more money rather than less will be needed at the agency. In light of these factors, it would seem reasonable to recommend an increase in the overall R&D funding in the 8% to 10% range compared to the FY2003 enacted levels. It seems impossible to do the things we know we need to do in R&D with anything less than that, unless we are now willing to start sacrificing biomedical research. As to outyears, we would like to believe that increases for security and physical sciences could decline slightly, say to the 5% to 7% range in the four subsequent years.

Metrics in the President's Budget

The President's budget makes much of the effort to develop metrics for R&D programs. We fully support the effort to identify reasonable measures of performance for programs, both to give program managers useful tools for evaluating progress and to provide policy-makers in Congress and elsewhere with insight into the Administration's budgetary decisions. However, we remain skeptical that this Administration has demonstrated the utility of metrics in producing sound budgeting decisions. We also have limited confidence in the ability of OMB to know the difference between a good management criterion and a bad one - and the difference matters. Some have said that a bad number is better than no number at all. From our perspective a bad number, if used to guide budgetary decisions, can lead to terrible outcomes.

Judgment is Required. For example, OMB's evaluation of the Space Shuttle program in the FY04 budget submission notes that the "Shuttle operational costs are rising" and that one of the goals for the program is "to help mitigate cost growth in Shuttle operations." But is the criterion of "mitigating cost growth" wise? Perhaps the wisest course would be to increase Shuttle costs, and quickly, in light of an overworked, depleted workforce. Absent in the program summary is any direct engagement with the central issue surrounding the Shuttle program even before the Columbia accident: is the program doing everything it should to ensure flight safety? That seems like an important metric and, given the costs of losing a Shuttle, an essential one, but it isn't represented in the OMB analysis. We are not suggesting that OMB is somehow to blame for the Columbia accident, just that what OMB counts matters to agencies and what OMB counts may not always be what an agency most needs to focus on.

Objectivity vs. Political Philosophy. We find programs that receive solid ratings in the OMB metric effort but are cancelled for other reasons. Thus the Manufacturing Extension Partnership at the Department of Commerce is proposed for phase-out despite having good scores on planning (86 out of 100), management (91) and results (80). Why? Because OMB doesn't believe the purpose of the program has been demonstrated - OMB holds that the services provided to small manufacturers through MEP centers should instead be handled by the private sector. Perhaps OMB is right. Perhaps it is wrong. But the number given MEP for "purpose" (40) is based on faith and political ideology rather than objective measurement. The same is true for many other programs (Fossil Energy R&D and the Advanced Technology Program both come to mind). Canceling a program because you don't believe the government should do it is certainly defensible, but making this the most important criterion will always relegate managerial objectivity to a diminished role, if not irrelevancy.

Some Tactical Retreats. And then, for all the talk of metrics and management initiatives, one finds some retreat from the previous Administration in the use of objective numerical criteria. NASA's new strategic plan, which was released with the FY 2004 budget request, eliminates a number of quantitative performance objectives set by NASA in previous years. For example, in the late 1990s, NASA set an explicit aviation safety objective to guide its R&D efforts, namely "Reduce the accident rate by a factor of 5 within 10 years and by a factor of 10 within 20 years." In contrast, the new NASA strategic plan has changed the objective to "Decrease the accident rate and mitigate the consequences of accidents." In the area of air traffic management R&D, the previous objective was "Double the capacity of the aviation system within 10 years and triple it within 25 years." The revised objective is now "Enable more people and goods to travel faster and farther, anywhere, anytime with fewer delays." Perhaps the original numbers were too ambitious, but these sorts of applied R&D programs should be the easiest areas to develop reasonable measures of performance. So why have the numbers been dropped?

Metrics and Policy. Finally, an emphasis on program-level metrics without some broader awareness of how R&D policies fit together with and support other policies is a recipe for failure. In promoting the development and adoption of applied energy or environmental technologies, for example, supportive policies are needed to move innovations into broader use. Spending billions of dollars to enhance our understanding and encourage innovation in areas that will benefit the public is simply wasted if the knowledge stays bottled up or if innovations find no outlet through complementary policies. We see no evidence that the Administration's efforts at R&D metrics provide for integrated analysis of how to achieve broader societal goals for which applied R&D is but one component. For example, what regulatory and fiscal stimuli might be necessary to complement the President's hydrogen initiative in order to accelerate the transition to a hydrogen economy? How do these stimuli relate to the R&D program?

Summary of Metrics. In the end, the effort to utilize metrics will rise or fall on how it addresses the issues raised in this section. In the short run, the use of metrics must at least result in clearer program goals and execution. The evidence is not entirely encouraging in this regard. The one area where the Administration seems to have worked the hardest to craft a coherent planning process has been in climate change R&D; however, according to a just-released National Academy of Sciences evaluation, that draft plan "lacks most of the basic elements of a strategic plan: a guiding vision, executable goals, clear timetables and criteria for measuring progress."

Earmarks in the Budget Process

The President's budget also makes much of earmarks in R&D accounts, arguing that one cannot measure the effectiveness of such expenditures, that higher priority work is crowded out through political favoritism, and that earmarks are distorting some programs (for example, NIST's construction account was heavily earmarked for non-NIST projects in FY2002 and FY2003 appropriations). We have some sympathy for OMB's objections and worry about the ability of some programs to carry out their missions. This Committee has a long history of supporting NIST construction accounts - and of wondering why the Department of Energy should help build hospitals. However, we say to our friends at the other end of Pennsylvania Avenue that, if you don't like earmarks, don't fund them. Most earmarks do not exist in law. They are contained, by and large, in the detailed report language that accompanies appropriations bills. Report language is not binding on an agency. The ultimate responsibility for earmarks lies with the Administration that cuts the check. From a political perspective, we understand why no one in the Old Executive Office Building wants to start telling Appropriators they won't get their earmarks, but if you really believe them to be such a problem, perhaps you should swallow hard and start drawing lines in the sand. It is the kind of brave decision someone might make just before leaving town to run for Governor.

Approved by
  • Ralph M. Hall, MC
  • Bart Gordon, MC
  • Jerry F. Costello, MC
  • Eddie Bernice Johnson, MC
  • Lynn C. Woolsey, MC
  • Nick Lampson, MC
  • John B. Larson, MC
  • Mark Udall, MC
  • David Wu, MC
  • Michael M. Honda, MC
  • Chris Bell, MC
  • Brad Miller, MC
  • Lincoln Davis, MC
  • Sheila Jackson Lee, MC
  • Zoe Lofgren, MC
  • Brad Sherman, MC
  • Dennis Moore, MC
  • Anthony D. Weiner, MC
  • Jim Matheson, MC
  • Dennis Cardoza, MC
