How Intelligence Has Evolved Since Able Danger
By Erik Kleinsmith, American Military University
Ten years ago, on September 21, 2005, the Senate Judiciary Committee convened a hearing to learn what a small group of military intelligence analysts had discovered about the worldwide reach of Al Qaeda and its affiliates prior to 9/11.
This first hearing, on what is now known as the Able Danger program, was followed by a second hearing before the House Armed Services Committee in February 2006. It is also a major subject of the 2010 book, The Watchers, by Shane Harris.
Without going into the fairly well-documented details of this program, including the controversy surrounding its closure and the subsequent revelation that the analytical team had information about the 9/11 hijackers, my involvement as chief of the intelligence branch of the organization that provided analysis for Able Danger yielded some unique lessons that remain pertinent for intelligence analysts today.
What Was Able Danger?
Able Danger was a program within the U.S. Special Operations Command (SOCOM) that ran from late 1999 to early 2001; its operational aspects remain classified. During its run, SOCOM turned for intelligence support to a new organization, the Army’s Land Information Warfare Activity (LIWA). Able Danger remains one of the most controversial intelligence efforts in U.S. history, not because of what the program did, but because of what it failed to do: prevent the terrorist attacks of September 11, 2001, and warn of the impending attack on the USS Cole.
I was the senior military member of the Able Danger analytical team. I was also the one responsible for deleting all the collected data of the program in order to comply with applicable intelligence regulations.
Lessons from Able Danger
The intelligence analysis in support of Able Danger was a watershed event. It was the first significant data mining operation to successfully harvest and visualize massive amounts of data and use that data to conduct preliminary analysis of Al Qaeda’s dispositions.
In comparison with today’s intelligence community, the 2.5 terabytes of data the team harvested is relatively tiny. Further, as I discussed in a previous article, the mining of massive amounts of data about our threats is no longer confined to the intelligence community. Technology—combined with adapting threats and customers’ needs for information about those threats—has been driving this explosion of intelligence.
Able Danger also provided the intelligence community with a template for what successful intelligence analysis needs to look like. Our analytical support was the near-perfect combination of the four critical elements needed for successful intelligence operations: data, tools, people, and processes.
Too many of the intelligence centers that have sprouted up since 9/11 focus on putting tools and data together without considering how to integrate the right people or implement the right analytical and operational processes to use data mining tools effectively.
In terms of people, the small team that conducted the analysis benefited greatly from the diversity of our backgrounds. Our team was a combination of two field-grade intelligence officers, two DoD civilians (one of whom held a Ph.D.), an experienced SIGINT warrant officer, and part-time support from two capable non-commissioned officers.
Able Danger also turned the argument of national security versus personal privacy on its head. Army and DoD regulations place strong restrictions on what information government officials can collect about U.S. persons, as well as hard limits on the retention of such data. But what if a computer does it? Is it a direct-collection effort or incidental collection when a data-mining program happens to harvest information about U.S. persons? And what about retention of that data? For our operations, we were required to delete our data 90 days after we ran our first harvest. Does the retention clock start when the data is collected, or when it is first seen by human eyes?
In deleting our data, our team ended up sacrificing national security to privacy concerns, something that will stay with each of us personally. In light of 9/11 and the revelations concerning the NSA and other intelligence community collection operations, it is now apparent that there are wholly different interpretations of data mining, with no end in sight to this argument.
Ongoing Challenges of Data Mining Operations
One of the biggest arguments against data mining is captured in the oft-repeated adage: “You don’t find needles in a haystack by adding more hay.” Our team was able to work through the data, but the standard complaint that intelligence analysts are overwhelmed by data is truer today than ever before.
Much of our investment on the technical side of intelligence in the last decade has gone toward doing just that: collecting massive amounts of data, with relatively little priority given to our ability to handle it. As a result, analysts today are overwhelmed with data, with an acknowledged risk that the most pertinent message, link, or other piece of information will slip by them.
As the intelligence community continues to evolve, it must constantly adjust so that the collection of data remains in harmony with the capabilities of its systems and analysts.
About the Author: Erik Kleinsmith is the Associate Vice President for Strategic Relationships in Intelligence, National & Homeland Security, and Cyber for American Military University. He is a former Army intelligence officer and the former portfolio manager for Intelligence & Security Training at Lockheed Martin. Erik is one of the subjects of Shane Harris’s book The Watchers, which covered his work tracking Al Qaeda prior to 9/11 as part of the Able Danger program. He currently resides in Virginia with his wife and two children.