The Real Risk of Cyber Warfare


This week, the New York Times published allegations tying the Chinese military to hacking against the United States. The coverage is the latest in a series of articles exposing governmental hacking. Last year, the New York Times reported that the U.S. and Israeli governments developed and launched Stuxnet – the most sophisticated cyber weapon known to date – which crippled Iran’s Natanz nuclear facility. And in 2007 and 2008, infrastructure in Estonia and Georgia suffered hacking activity allegedly tied to Russia.

But now, the reality of powerful cyber weaponry has exposed a different kind of danger: the threat to civil liberties and human rights. Take President Obama’s national security policy, for instance, which recently came under fire mainly from his political allies. But this controversy is not limited to legal wrangling over executive power or the role of new technology. It is about the fundamental challenge of a globalized world in which the distinction between domestic and foreign threats – and players – is increasingly vanishing, and the balance between security and freedom is in flux. So far the emphasis has been on the former; the latter needs more attention.

When meteorologist Edward Lorenz coined the term “butterfly effect” in 1972, he was referring to how small changes – the flap of a butterfly’s wings – can have large effects. He was talking about tornadoes. But four decades later, we can see how the idea applies to international affairs. The 9/11 attacks, carried out by just 19 people, still ripple through geopolitics today. Mohamed Bouazizi set himself on fire in 2010 and set in motion revolutions across the Middle East that continue to this day. The actions of one or a few individuals can have large-scale effects far away. That means national security is increasingly equivalent to international security. This is not new; the development of intercontinental ballistic missiles marked the point when national security became a global challenge. But in those days, only states had the capacity to produce such effects.

Today, cyber-attacks give certain non-state actors global reach, too, with the potential to affect millions of people and large infrastructure systems within milliseconds. Such actors can also serve as proxies for governments that want to hide their involvement. So how can states adapt to this new world without jeopardizing individual human rights, and how can they preserve the balance between security and freedom?

One of the questions we must address: When do we decide to use force? The administration, according to the latest reporting, believes the President has the power to order a preemptive strike against a cyber-attack. This position sounds a lot like the debate over the 2003 Iraq War, a war he opposed as a Senator. One critical component of that argument, and of this debate, is the question of imminence, which must inform a country’s decision to use force under international law. But the concept of imminence has been changing. The New York Times writes that a preemptive strike would be limited to a “looming attack.” What does “looming” mean in the context of cyber-security, where experts speak of Advanced Persistent Threats – attacks that last months or years? Where does defense end and offense start? And is cyber exploitation – collecting data on a system – akin to preparing the battleground, or comparable to satellites monitoring other countries’ activities from space? If established principles of warfare must be redefined to address new security threats, should this not be known to all parties involved?

Another set of questions concerns implementation once a decision has been taken. There used to be a fairly clear-cut distinction between military operations, covert action, and intelligence gathering. The institutional response to cyber threats has blurred these lines: the commander of U.S. Cyber Command, General Keith Alexander, is also the director of the National Security Agency. These changes have taken place partly because the principle of sovereignty on which the international system relies is becoming increasingly anachronistic. Yet the categorizations have important implications for the government’s oversight mechanisms that review the decisions made.

But like the threat itself, these important changes to the nation’s security policies have been largely hidden from the public: the government has made very little information publicly available. The Senate Intelligence Committee, one of the key Congressional oversight committees, for example, had not held a public hearing in more than a year before the controversial February 7 hearing. And when two of its members asked the National Security Agency last year how many people inside the United States have had their communications collected under the Foreign Intelligence Surveillance Act, the Inspector General of the Intelligence Community replied that “obtaining such an estimate was beyond the capacity” of the office of the NSA’s Inspector General.

In other words, the public barely has access to the Committee’s dealings and is therefore limited in its oversight of its representatives, while the Committee is limited in its oversight of the Executive. That is why both the public and Congress seem to rely increasingly on the press to exercise oversight, rather than on the institutional mechanisms set up for that purpose in the first place.

So, it almost feels quaint when a Washington Post editorial calls for more transparency: “What concerns us is not the growth of forces but the way it is happening behind the scenes... So far, operations and deployments are being handled almost entirely in secret.” More transparency is clearly needed as warfare is transformed, affecting the very foundations of our democracies. But in order to preserve a functioning system of checks and balances, oversight mechanisms must also be reformed and strengthened. This might be a new world we are stepping into, but some of the old principles are worth keeping.