The Terminator movies got it wrong (though T3 was close). In them, SkyNet was a single supercomputer stored in some DoD bunker, built to control a national nuclear arsenal and powered by a nuclear reactor. In reality, SpyNet will be an intelligence agency algorithm built to monitor the entire world, distributed across tiny processors embedded in everything, and powered by whatever local sources are available, including waste heat and the sun.
The Shape of SpyNet
If you want to monitor and control a civilization, computers are good. But they’re not great—they consume a lot of power and struggle with tasks that humans accomplish easily. Plus, they need humans to run them and check their results. So you invent a better computer that can more easily emulate human cognition, and you build an infrastructure of pervasive information that these computers, in conjunction with their more traditional brethren, can sample, process, and control.
Here are the prototypical technologies that will build the SpyNet.
Here is a part of its brain, a neural network running on hardware instead of software emulation:
When hosting a neural network, the chip is remarkably power efficient. And the researchers say their architecture can scale arbitrarily large, raising the prospect of a neural network supercomputer…
They found that TrueNorth [neural net chip] cut energy use by 176,000-fold compared to a traditional processor and by a factor of over 700 compared to [other] specialized hardware designed to host neural networks…
“We have begun building neurosynaptic super-computers,” the authors state, “by tiling multiple TrueNorth chips, creating systems with hundreds of thousands of cores, hundreds of millions of neurons, and hundreds of billions of synapses.”
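Chips like TrueNorth get their efficiency by implementing spiking neurons directly in silicon rather than simulating them in software. TrueNorth’s actual programming model isn’t shown here; this is a minimal sketch of the leaky integrate-and-fire computation such hardware performs, with illustrative constants (leak, threshold, weights) that are not the chip’s real parameters:

```python
# Minimal leaky integrate-and-fire neuron: the kind of unit neuromorphic
# chips implement in hardware. All constants here are illustrative.

def lif_neuron(spike_trains, weights, threshold=1.0, leak=0.9):
    """Integrate weighted input spikes over time; emit a spike (1) when
    the membrane potential crosses threshold, then reset."""
    potential = 0.0
    output = []
    for inputs in spike_trains:  # one time step per entry
        potential = leak * potential + sum(w * s for w, s in zip(weights, inputs))
        if potential >= threshold:
            output.append(1)
            potential = 0.0  # reset after firing
        else:
            output.append(0)
    return output

# Three input lines firing over five time steps.
spikes = [(1, 0, 1), (0, 0, 0), (1, 1, 0), (0, 0, 0), (1, 0, 1)]
print(lif_neuron(spikes, weights=(0.4, 0.3, 0.5)))  # → [0, 0, 1, 0, 0]
```

Because a neuron only does work when a spike arrives, an idle network draws almost nothing, which is where the energy numbers quoted above come from.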
Here is its memory, with no distinction between volatile/temporary and permanent storage:
“The simplest way to think about it is this—take a DRAM DIMM out, and put a memristor DIMM in,” said Sontag. “You now have another pool of memory that’s denser and nonvolatile. It’s a new class of memory—the consequence for operating systems is that moving stuff around from I/O devices [to and from disk] becomes unnecessary.” …
Memristor memory is “between 64 and 128 times denser than DRAM,” Sontag said, “which makes it even denser than disk drives.” And because of that, memristors are a natural fit for systems-on-a-chip or other embedded storage. “We might just bury that memory within a processor socket and have something that sometimes looks like a memory controller and sometimes does processing,” Sontag said.
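The closest thing today’s systems have to byte-addressable nonvolatile memory is a memory-mapped file: the OS maps storage into the address space so you update it with loads and stores instead of explicit read/write I/O, which is roughly the programming model Sontag describes. A small sketch (the filename is arbitrary):

```python
# Emulating "storage you address like RAM" with a memory-mapped file.
import mmap
import os

path = "nvram.bin"
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)  # back the mapping with one page of storage

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)  # now addressable like memory
    mem[0:5] = b"hello"                # a store, not a write() call
    mem.flush()                        # persist, like a cache flush
    mem.close()

with open(path, "rb") as f:
    data = f.read(5)
os.remove(path)
print(data)  # b'hello' survived without any explicit write I/O
```

With true memristor memory the page cache and the flush step disappear too: the store itself is durable.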
Remember, there are SDXC flash cards out there carrying 256 GB that fit on the tip of your finger. This density will continue to increase with future storage technologies such as the memristors above. The people I know in the data storage industry talk to me about crazy densities and capabilities on the 10-year horizon. Beyond that, we will see the gradual integration of computation with memory—no more separate CPU, RAM, and disk, but a “compute block” instead.
Here is its communication device, a high-bitrate radio as small as an ant and powered by incoming signals:
The radios are fitted onto tiny silicon chips, and cost only pennies to make thanks to their diminutive size. They are designed to compute, execute, and relay demands, and they are very energy efficient to the point of being self-sufficient. This is due to the fact that they can harvest power from the incoming electromagnetic signal so they do not require batteries, meaning there is no particular lifetime associated with the devices.
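A back-of-the-envelope calculation shows why battery-free operation is plausible. Using the Friis free-space equation, a nearby transmitter delivers power on the order of microwatts—enough for an ultra-low-power radio. The transmitter power, frequency, and distance below are illustrative assumptions, not the actual radios’ numbers:

```python
# Friis free-space link budget: how much RF power an energy-harvesting
# radio can capture. All scenario numbers are illustrative.
import math

def received_power_w(p_tx_w, freq_hz, distance_m, g_tx=1.0, g_rx=1.0):
    """Friis transmission: P_rx = P_tx * G_tx * G_rx * (lambda / (4*pi*d))^2."""
    wavelength = 3e8 / freq_hz
    return p_tx_w * g_tx * g_rx * (wavelength / (4 * math.pi * distance_m)) ** 2

# A 1 W, 2.4 GHz source seen from 10 m away with unity-gain antennas:
p = received_power_w(1.0, 2.4e9, 10.0)
print(f"{p * 1e6:.2f} microwatts")  # on the order of 1 uW
```

A microwatt is a tiny budget, but duty-cycled radios that wake only to compute and relay can live within it indefinitely.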
Here is a sample of its data harvesting capabilities. In this case, researchers were able to monitor environmental vibrations on a potato chip bag to reconstruct human speech:
But perhaps the biggest surprise came when they showed that they didn’t actually need a specialized, high-speed camera. It turns out that most consumer-grade equipment doesn’t expose its entire sensor at once and instead scans an image across the sensor grid in a line-by-line fashion. Using a consumer video camera, the researchers were able to determine that there’s a 16 microsecond delay between each line, with a five millisecond delay between frames. Using this information, they treated each line as a separate exposure and were able to reproduce sound that way.
Overall, it’s an impressive bit of computer science, but the authors are up-front about its potential use in surveillance. The biggest limitation right now is that the camera has to be quite nearby; the team didn’t test anything beyond four meters. But they also suggest that a powerful zoom lens might allow the system to work at much greater distances.
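The arithmetic behind the rolling-shutter trick is worth spelling out: each scanline is its own exposure, so the camera samples vibration at the line rate, with dead time between frames. The 16 µs line delay and 5 ms frame gap come from the article; the 720-line frame height below is an assumed example:

```python
# Effective audio sampling rate of a rolling-shutter camera.
# Line delay and frame gap are from the article; frame height is assumed.
line_delay = 16e-6      # seconds between scanlines
frame_gap = 5e-3        # dead time between frames
lines_per_frame = 720   # assumed sensor height

readout = lines_per_frame * line_delay           # sampling time per frame
frame_period = readout + frame_gap               # total time per frame
line_rate = 1 / line_delay                       # samples/sec during readout
effective_rate = lines_per_frame / frame_period  # average samples/sec
print(f"line rate: {line_rate:.0f} Hz")          # 62500 Hz during readout
print(f"effective rate: {effective_rate:.0f} Hz")
print(f"duty cycle: {readout / frame_period:.0%}")
```

A 62.5 kHz burst rate comfortably covers the range of human speech; the per-frame gaps are what the researchers had to work around when reconstructing a continuous signal.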
Everything you ever say will soon be recorded. Windows vibrate, lips can be read, even bags of potato chips vibrate when someone is speaking nearby.
And you can bet that intelligence organizations are salivating over the notion of creating a “sentient enterprise,” something capable of sorting through and monitoring the internet in near real time. They predict it will be possible by 2025:
The Sentient Enterprise will track and manage thousands of exabytes of data every day… enabling iterative assessments in real time, not days or weeks.
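For scale, a quick sanity check on what “thousands of exabytes every day” implies as sustained throughput, taking 1,000 EB/day as a round illustrative number:

```python
# Sustained throughput implied by processing 1,000 EB in 24 hours.
exabytes_per_day = 1000
bytes_per_day = exabytes_per_day * 1e18
bytes_per_second = bytes_per_day / 86400
print(f"{bytes_per_second / 1e15:.1f} PB/s sustained")  # ≈ 11.6 PB/s
```

That is petabytes per second, continuously—which is why this kind of system only becomes thinkable alongside the processing and efficiency gains described below.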
I doubt it will actually be sentient, but it may be intelligent and adaptive enough to perform complex tasks beyond human capabilities. My nightmare is that it may even be like something out of a Peter Watts novel (Blindsight).
For the last few decades, our global computational infrastructure has been like the early universe—undergoing a massive inflationary period where expansion outpaces the speed of light. But the universe eventually slowed to below light speed, allowing both matter and energy to catch up. The Inflationary Internet is ending, and the matter and energy of controlling entities, be they governments or corporations, are catching up to the ragged edge.
Ironically, the slowdown period is due to a coming massive increase in computational performance and energy efficiency. There has historically been a disconnect between processing power and network bandwidth on the one hand, and the total volume of data storage on the other. Storage has always outpaced processing and comms, and pulling together enough processing power to act on petabytes of data has required energy measured in coal plants. This has afforded us anonymity or at least some amount of delay between online actions and exposure. Unless you have been specifically targeted by a government or corporate power, you can generally do what you want on the internet. No one in authority is watching you, and your data trail is simply lost in an expansion that moves faster than the ability to control and monitor.
This will end, this is ending, this has been ending. Soon the frontier of expansion will be fully monitored and policed in real time. Forensic tools will be (are being) developed to observe the Internet Microwave Background and they will dredge up and analyze the entire past history of the internet. There will be tools to focus in on Deep Web Dark Matter, a mass of data not visible on search engines and containing a wealth of information that the Baryonic Internet floats within. The anonymous data trails will become fully legible and parsable by advanced algorithms.
We need to start adapting to this now. We are rapidly approaching a point where everything is recorded and nothing is forgotten, where there is the potential for no anonymity and no privacy in real time. Where SpyNet sees everything you’re doing, and only laws, ethics, and social norms can protect you from its handlers—or, worst case, from SpyNet itself.