Tuesday, April 4, 2017
Oceanic enemy: A brief philosophical history of the NSA by Gregoire Chamayou
6 July 1962, NAVFAC base, Barbados.
A grey building stands at the foot of a stone lighthouse overlooking the Caribbean Sea. Inside, a military serviceman is examining the lines being recorded on an enormous roll of paper by the stylus of a sort of gigantic electrocardiogram. We are in one of the secret bases of the Sound Surveillance System (SOSUS) network, established by the US Navy in the 1950s. Among the clutter of zigzags, from which he has learnt to read the sound of the oceans, the man is looking for a ‘signature’. Today, for the first time, he spots the signal of a Soviet nuclear submarine.
The problem with submarine warfare was that enemy vessels were hidden from view. But what one could not see, one could nonetheless hear: the water in which the submarines hid carried the sound of their engines far into the distance. This is how the sea was put under surveillance. The sound waves could be captured by hydrophones and transmitted by cables to coastal stations where machines transcribed them into graphs. The ‘ocean technicians’ who deciphered them were able to ‘discern subtle nuances in sound signals via intensity, colour, shape, and shade that often made the difference between seeing a school of fish or a submarine on a Lofargram’. They listened with their eyes. Typical patterns corresponding to known entities were called ‘signatures’. The metaphor spoke for itself: there, like elsewhere, an identity would be certified by some lines inscribed on a piece of paper.
Yet all this met with a very unexpected fate. A model that combined a global listening system, the mass collection of signals, and remote sensing via signature recognition would, a few decades later, provide the conceptual basis for an altogether different kind of surveillance apparatus.
At the end of the 1990s, the National Security Agency (NSA) understood that something new was in the offing, one that promised – provided certain obstacles could be overcome – an unheard-of extension of its empire. Historically, the agency had been charged with intercepting electromagnetic signals for foreign intelligence – diplomatic cables, military communications, satellite beams, and so on. But with the close of the millennium, civil populations were themselves becoming signal transmitters. The whole world was becoming connected, and each of us would soon produce more data than any Soviet embassy of the past.
Recalling the zeitgeist of the era, the former director of the NSA Michael Hayden today admits:
prior to 9/11, when we were looking at modern telecommunications, … we said we had the problem of what we would call … V cubed – volume, variety and velocity – that the modern telecommunications were just exploding in variety and in size. … But also, we knew that our species was putting more of its knowledge out there in ones and zeroes than it ever had at any time in its existence. In other words, we were putting human knowledge out there in a form that was susceptible to signals intelligence. So to be very candid, I mean, our view even before 9/11 was if we could be even half good at mastering this global telecommunications revolution, this would be the golden age of signals intelligence. And candidly, that’s what NSA set out to do.
The utopia of anti-terrorist data mining
The logo depicts a pyramid topped by an all-seeing eye, Illuminati-style, floating in space and bombarding Earth with luminous rays. This was the emblem of a research programme launched by the Defense Advanced Research Projects Agency (DARPA), an electronic surveillance project entitled Total Information Awareness. This idiotic design, which seemed deliberately created to stoke crazy conspiracy theories, was emblazoned with a Latin motto which, in a sense, rescued the entire design: scientia est potentia, knowledge is power. That was, effectively, what it was all about.
In August 2002, the programme director John Poindexter presented it with great pomp at the DARPATech conference held in Anaheim, California. What is at stake, he began, is ‘somewhat analogous to the anti-submarine warfare problem of finding submarines in an ocean of noise – we must find the terrorists in a world of noise.’ The oceanic analogy was no accident. The admiral had begun his career in the Navy at the end of the 1950s in a unit tasked with tracking Soviet submarines. He added, referring to the ‘terrorists’, that ‘they will leave signatures in this information space.’
The parallel was clear: what had once been done in the ocean would now be done in an ‘ocean of information’. Instead of the old lofargrams, computers would sift through an immense mass of heterogeneous data – telecommunications, bank details, administrative files, and so on – in search of ‘signatures of terrorist behaviour’. A ‘red team’ would be charged with enumerating potential terrorist attack scenarios and the transactions their preparation would require: ‘These transactions would form a pattern that may be discernible in certain databases.’