Last week, on the same day that the 2010 intelligence budget totals were revealed, the Office of the Director of National Intelligence also released another previously undisclosed intelligence budget figure — the 2006 budget appropriation for the National Intelligence Program.
“The aggregate amount appropriated to the NIP for fiscal year 2006 was $40.9 Billion,” wrote John F. Hackett (pdf), director of the ODNI Information Management Office.
This disclosure provides one more benchmark in the steady, sharp escalation of intelligence spending in the last decade. (The NIP budgets in the subsequent years from 2007-2010 were: $43.5 billion, $47.5 billion, $49.8 billion, and $53.1 billion.)
But what makes the new disclosure profoundly interesting and even inspiring is something else: In 2008, Mr. Hackett determined (pdf) that disclosure of this exact same information could not be permitted because to do so would damage national security. And just last year, ODNI emphatically affirmed that view on appeal.
“The size of the National Intelligence Program for Fiscal Year 2006 remains currently and properly classified,” wrote Gen. Ronald L. Burgess in a January 14, 2009 letter (pdf). “In addition, the release of this information would reveal sensitive intelligence sources and methods.”
Yet upon reconsideration a year later, those ominous claims have evaporated. In other words, ODNI has found it possible — when prompted by a suitable stimulus, in this case a Freedom of Information Act request — to rethink its classification policy and to reach a new and opposite judgment.
This capacity for identifying, admitting (at least implicitly) and correcting classification errors is of the utmost importance. Without it, there would be no hope for secrecy reform and no real place for public advocacy. But as long as errors can be acknowledged and corrected, then all kinds of positive changes are possible.
The Obama Administration’s pending Fundamental Classification Guidance Review requires classifying agencies to seek out and eliminate obsolete classification requirements based on “the broadest possible range of perspectives” over the next two years. If it fulfills its original conception, the Review will bring this latent, often dormant error correction capacity to bear on the classification system in a focused and consequential way. There are always going to be classification errors, so there needs to be a robust, effective way to find and fix them.