Written By: Ran Levi
When it was discovered in 2010, Stuxnet was the most complicated and sophisticated malware ever known: an Advanced Persistent Threat (APT). In this article, we’ll explore the story behind the worm, its target and its creators – as well as the innovative technology it implemented.
This article is a transcript of a podcast. Listen to the podcast:
Part I – Advanced Persistent Threat (Download MP3)
Part II – Uranium Enrichment (Download MP3)
Part III – Flame & Duqu (Download MP3)
Part I: Stuxnet’s Discovery
Several years ago I decided to write a book. By that point, I had already written two other books and knew from experience that it’s easy to get distracted by fascinating research that isn’t necessarily relevant to a book’s theme or thesis. This particular book was on the history of computer viruses— also known as Malware — but because many new viruses appear around the world every year, choosing the viruses that would make the final list wasn’t an easy task.
Yet from the very beginning it was clear to me that one of the chapters would be dedicated to a malware called Stuxnet – the subject of this article.
Stuxnet was discovered in 2010 after an unprecedented attack on an Iranian uranium enrichment facility. At that time, it was the most complicated and sophisticated malware ever known. Yet it wasn’t its technological sophistication that made it so prominent in the relatively short history of computer viruses. The Stuxnet attack was a terrifying display. It illustrated how weak and exposed to cyber attacks the industrial infrastructures we all depend on are – including nuclear reactors and petrochemical factories. But it was more than a wake-up call – it was more like the bellowing horn of an approaching train. After discovering Stuxnet, we were forced to ask ourselves – how many more potentially devastating viruses are out there?
Industrial Control Systems
Not that there weren’t any sophisticated computer viruses before 2010. There were quite a few. But Stuxnet targeted a very specific niche of the computerized world, a field that most of us aren’t familiar with and aren’t exposed to on a daily basis—computerized industrial control systems. For that reason I have invited Andrew Ginter, an expert in IT security, to be our guest.
“My name is Andrew Ginter, I’m vice president of Industrial Security at Waterfall Security Solutions. I’ve got a long history of writing code, mostly for industrial control systems […] In a sense, I was part of the problem in the early days of connecting these systems together. And then I got religious, and now I’m trying to secure these connections that we haphazardly connected together over the last 20 years.”
A New Type of Malware
Like other computer users, Andrew and his programming colleagues were already familiar with computer viruses and the various defenses against them, like anti-virus software. But in the mid-2000s, a new and a more menacing threat emerged.
“In about ‘06 or ‘07 the American Department of Homeland Security started issuing warnings, saying there’s a new class of attack out there, this ‘Advanced Persistent Threat,’ where there are significant differences between that and threats people were used to.”
The new threat was an Advanced Persistent Threat, known as APT. In order to understand what APT is, and why it’s so special, we’ll have to take a step back and talk about malicious software in general. By the way, this episode will be divided into three parts, each of which will tackle a different aspect of this unique malware: its mechanisms, its creators, and its target.
Advanced Persistent Threat
Basically, malicious software is software that deliberately causes some sort of digital damage. For example, it might erase or steal important information. There are many types of malicious software, and a “virus” is just one of them. But for the sake of simplicity, we’ll use both terms—malware and computer virus—interchangeably.
The Internet is full of viruses stealing bank account information, installing unwanted software on computers and causing all sorts of trouble. But all those viruses share a common characteristic — they are not directed at you personally. In other words, rather than attack one specific user or computer, a virus will usually try to attack as many computers as possible at the same time.
An APT, on the other hand, is a targeted threat. It is malware created with the intent of penetrating a specific system or organization. If we compare malware to bombs, then a “regular virus” is similar to the carpet bombings often seen in old World War II movies, where thousands of bombs are released over a large area in the hope that at least a few will hit their targets. An APT, then, is like a guided missile, so well tuned that it’s akin to a missile hitting a specific window of a specific building. The word “persistent” in the acronym APT refers to the human component in the operation of the malicious software: a human operator remotely monitors and guides the actions of the APT.
War Games in US
It seems that the United States government was thinking about APTs long before ’06-’07. Blake Sobczak is a journalist covering IT and security matters.
“My name is Blake Sobczak. I’m a reporter with Energy Wire, based in Washington DC. I write about efforts to secure critical infrastructure in the U.S. and abroad from cyber attacks.
There’s evidence to believe that the U.S. was thinking about the possibility of attacks on critical infrastructure as far back as 1997, likely before. That was the date of a very famous wargame called ‘Operation Eligible Receiver,’ where the Department of Defense and the NSA sort of ran this no-notice exercise. They surprised a bunch of military contractors with a huge cyber attack. The results of that particular exercise were so shocking that a lot of it, even though it was classified, leaked out into the public domain. So at that point, the industry was aware of the threat.”
But as Andrew Ginter says, at the time, around 2005 or 2006, though a few APTs had already been discovered, all of them were focused on the business world, for the purpose of industrial espionage, stealing credit card information or hacking bank accounts. The industrial control systems world of factories and reactors, on the other hand, hadn’t yet faced the same threat.
“A lot of people followed this with some interest. Everybody likes to see things break, and here was a new way that people were describing how things can break. But nobody really believed they would really be the target of an Advanced Threat. All these Advanced Threats were happening on IT networks, business networks. You know, stealing credit cards, account information. There had not been a worm or a virus or anything before specifically targeting control systems components in the wild. At least not that was reported, none that I was aware of. The kind of security incidents you saw before was stuff like insiders using their own passwords to access systems and cause malfunctions. The classic one that everyone was talking about was Maroochy, in Australia.”
Oh, yeah… the Maroochy incident. That was a rather smelly and unpleasant business, but we might as well look into it. It will help us better understand what industrial control is, and why it’s so important.
The Maroochy Incident
The Shire of Maroochy is one of Australia’s delightful treasures—a beautiful and serene rural area that attracts many nature-loving tourists. Maroochy has a local sewage system that handles more than 9 million gallons of sewage every day, using 142 sewage pumps scattered around the shire.
At the heart of the continuous operation of these pumps is, of course, a computer; to be more precise – a computer system called SCADA, which stands for Supervisory Control And Data Acquisition. The name is a mouthful, but the principle is rather simple. A computer gathers information from different sensors – like those that measure sewage levels – and turns the pumps on or off accordingly. A domestic air conditioning system, for example, is a sort of SCADA: a tiny sensor located in the remote control reports the temperature inside the house to the AC’s main computer, which then tells the compressor to turn on or off.
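To make the SCADA principle concrete, here is a toy sketch of a supervisory loop for a single sewage pump: read a sensor, decide, switch the actuator. All names and thresholds here are hypothetical illustrations, not taken from any real SCADA product.

```python
# Toy sketch of one SCADA polling cycle: a supervisory computer reads a
# sensor value and decides the actuator (pump) state. Thresholds and names
# are invented for illustration only.

def supervisory_step(sewage_level_cm: float, pump_on: bool,
                     high_mark: float = 80.0, low_mark: float = 20.0) -> bool:
    """Return the new pump state for one polling cycle."""
    if sewage_level_cm >= high_mark:
        return True           # tank filling up: start the pump
    if sewage_level_cm <= low_mark:
        return False          # tank nearly empty: stop the pump
    return pump_on            # in between: keep the current state (hysteresis)

# One simulated polling sequence
state = False
for level in [10, 50, 85, 60, 15]:
    state = supervisory_step(level, state)
    print(level, state)
```

The in-between band keeps the pump from rapidly toggling on and off around a single threshold – the same basic hysteresis a household thermostat uses.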
In 1999, a man named Vitek Boden was supervising the sewage pumps in Maroochy, working for the company that installed the control system. Boden, then in his forties, had been working for the company for two years until he resigned as a result of a dispute with his bosses. After quitting his job, he approached the district council and offered his services as an inspector. The council declined.
Shortly after, the Maroochy sewage system started having mysterious and seemingly random problems: pumps stopped working; alarms failed to go off; and worst of all, about 200,000 gallons of sewage flooded vast areas. Rivers turned black, nature reserves were destroyed, countless fish and wildlife died, and of course, the local population suffered through the terrible stench for weeks.
Maroochy’s water authority hired experts to examine the problems. At first, the experts suspected that disturbances from other control systems in the area were causing the problems, or that there was an error in the hardware. After all the immediate suspects were investigated, the experts were helpless; time after time they examined failing pumps, only to discover new and intact equipment that would simply stop operating, seemingly for no reason.
Some time later, an engineer working on the sewage system at around 11 o’clock at night changed one of the configurations in the control system. To his surprise, the change was reset and erased half an hour later. The engineer became suspicious. He decided to thoroughly investigate the data traffic between the different pumps and discovered that sewage pump number 14 was the one that had sent the order to reset his original configuration change. He drove to pump 14, examined it and its computer, and found them to be in perfect working order.
Setting The Trap
He was now certain that a human hand was behind the chaos in the system. He decided to set a trap for the hacker. He changed the pump identification code from 14 to 3, meaning all legitimate orders coming from pump station 14 would now be received under identification code 3. He waited until the next error occurred, and then analyzed the data traffic. As predicted, the malicious orders still indicated they were coming from pump 14. In other words, someone had hacked the communication network of the pumps and was pretending to be pump number 14.
Vitek Boden became the immediate suspect. Investigators assumed that he was penetrating the network remotely, via wireless communication. It was likely then that during an attack, Boden would be within a few dozen miles from the pump stations.
The water authority promptly hired private investigators, who began tracking Boden’s movements. On April 23rd, at 7:30 in the morning, another series of errors occurred in the pump stations – but this time the trap set around Boden snapped. A private investigator spotted Boden’s car on a highway not far from one of the pumping stations and called the police. Boden was chased and arrested. A laptop with a pirated copy of the control system software and a two-way radio transmitter were found in his car.
At his trial, Boden claimed that all evidence against him was circumstantial since no one saw him actually hacking the control system. The Australian court wasn’t convinced. The circumstantial evidence was pretty strong; especially considering the radio equipment found in his car was designated for operating the control computers of the sewage system. The judge theorized that Boden wanted revenge after having to leave his job, or that perhaps he thought he could win his position back once he was called in to fix the “errors.”
Only The Tip of An Iceberg
Vitek Boden was sentenced to two years in prison, and the crime he committed became a point of interest to IT security experts around the world. They thoroughly analyzed each and every one of his steps, and what they found wasn’t very reassuring. Maroochy’s control system wasn’t designed with cyber security in mind. As is often the case in the programming world, finding engineering solutions to solve each immediate problem took priority over the less urgent need to secure data. One can guess that security wasn’t a top priority for the people who designed the sewage control system, since after all, they had enough s*** to deal with as it was…
Worse yet, the Maroochy incident was only the tip of an iceberg. Industrial control systems, such as the computer system that controlled the sewage pumps in Maroochy, are the foundation on which almost all of our industries and infrastructures are built. Millions of control systems are used to control a vast variety of industrial processes all over the world, from assembly lines to power plants to nuclear reactors. The ease with which Boden was able to penetrate Maroochy’s control system reflected, said the experts, how easily someone could disrupt the gas or electricity supplies to entire cities.
Still, in 2006, when the authorities in America warned against the possibility of a sophisticated malware attack on industrial control systems, the warning was still based solely on theory.
“[Andrew:] And in 2010, when Stuxnet hit, it was in a sense the first very concrete example of an advanced attack, targeting industrial control systems. It was very big news.”
Stuxnet: First Report
The first report of Stuxnet was received by a relatively unknown anti-virus vendor from Belarus named VirusBlokAda, a company that supplied tech support and counseling to local and international customers. Sergey Ulasen, a programmer and IT security expert, was a team leader at VirusBlokAda. On a Saturday in 2010, Sergey got a call from one of his clients abroad, an Iranian company. Despite it being the weekend, and despite being at a friend’s wedding, Sergey answered the call, since the company’s rep was also a personal acquaintance of his. While everyone else was drinking and dancing, Sergey stood in a corner and discussed the problem with his client.
The Iranian representative told Sergey that several of the company’s computers were crashing, presenting the infamous “blue screen of death,” familiar to anyone who has ever run into a critical problem with the Windows operating system. At first, Sergey assumed that it was a technical problem having nothing to do with malware—maybe a collision of sorts between two installed programs. But when he learned that the same problem occurred on computers with a new and clean installation of Windows—he began to suspect a virus.
A Digital Certificate
When he came into the office on the following Monday, Sergey began looking into the matter. He remotely took over the Iranian computers and rummaged through the guts of the operating system. Finally, he located the software that was causing the computers to crash. The way the unknown software was hiding among other files in the computer reminded Sergey of typical malware behavior—but the suspicious software also had one odd and unique characteristic, one that is never found in malware. It had a valid “digital certificate.” What is a digital certificate? Here’s Andrew Ginter.
“If I give you a new driver and say ‘here, install that on your machine’ and you try, the machine will say ‘That’s funny. The driver is signed by this certificate that I’ve never seen before. The certificate claims to be from abc.com hardware manufacturer. Do you trust this certificate?’ If you say yes, there’s also a checkbox that says ‘In the future, don’t ask me this question – just go.’ If you click that, then the next time you see a piece of software signed by this vendor, you won’t be asked anymore – it would just install.”
In other words, a digital certificate is kind of like a long-term, multiple-entry visitor’s visa. The first time you enter a country, you might need a valid visa – but once you have it in your passport, you are free to come and go. Similarly, software with a signed digital certificate can install itself on a computer without warning.
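The trust decision Andrew describes can be sketched in a few lines. This is a deliberately simplified model – real driver signing uses X.509 certificate chains and cryptographic signature checks – but it captures why a valid certificate from a known publisher means a silent install, and therefore why a stolen certificate is so valuable to an attacker. The publisher names and function are hypothetical.

```python
# Simplified sketch of the "do you trust this publisher?" decision described
# above. Real systems verify cryptographic signatures and certificate chains;
# here the trust store is just a set of publisher names.

trusted_publishers = set()   # publishers the user chose to "always trust"

def install_driver(publisher: str, signature_valid: bool,
                   user_says_trust: bool, remember: bool) -> bool:
    """Return True if the driver gets installed."""
    if not signature_valid:
        return False                      # broken signature: always refuse
    if publisher in trusted_publishers:
        return True                       # known publisher: silent install
    if user_says_trust:                   # otherwise, prompt the user
        if remember:                      # the "don't ask me again" checkbox
            trusted_publishers.add(publisher)
        return True
    return False

# First install prompts the user; with "remember" checked,
# the second install from the same publisher is silent.
print(install_driver("AcmeCorp", True, user_says_trust=True, remember=True))
print(install_driver("AcmeCorp", True, user_says_trust=False, remember=False))
```

Once a publisher is in the trust store, the user is never asked again – which is exactly the door a malware author with a stolen certificate walks through.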
Sergey knew that only distinguished and well-known companies could sign their software with a certificate like that. The software he was looking at didn’t seem like it was legitimate software made by a legitimate company. The fact that the malware was trying to hide on the computer was also suspicious, just like seeing someone at a mall wearing a big coat on a hot summer day.
A Crucial Discovery
But the question remained—if it was malware, how could it have a signed digital certificate? Sergey and his colleagues at VirusBlokAda argued among themselves for hours. But then Sergey made a crucial discovery that ended the discussion.
Sergey discovered that in order to install itself on a computer, the suspected malware was using a bug – a software error – in the operating system. The error allowed the malware, located on an infected USB drive, to install itself on a computer without the user’s knowledge or approval. The fact that the suspected software was exploiting a bug in the operating system was the smoking gun Sergey was looking for. No legitimate software vendor would ever exploit a bug in such a way. Sergey was spot on – it was later revealed that the digital certificates Stuxnet was using were, in fact, certificates stolen from the safes of two Taiwanese companies: Realtek and JMicron. The identities of the thieves were never found.
Sergey realized that he was in over his head. VirusBlokAda was a small company focused on customer service and IT counseling; it didn’t have the funds and resources to thoroughly analyze the new malware. He contacted Microsoft to warn them about the newly discovered bug, and on July 10th, 2010, VirusBlokAda posted a press release on its website describing the virus as ‘very dangerous’ and warning that it had the potential to spread uncontrollably.
Stuxnet Becomes Big News
We should note, though, that despite the threatening tone, these sorts of alerts were and still are somewhat routine in the anti-virus industry, since new viruses are discovered daily. What made this case special, however, was the discovery of a previously unknown operating system bug, the one that allowed the malware to install itself secretly every time someone attached an infected USB drive to a computer. Since the bug was unknown, there was no protection against it; Microsoft hadn’t yet released a patch fixing it. In other words, any malicious software designed to exploit this vulnerability could penetrate any computer that ran Windows, even if it had updated anti-virus software installed.
Since such a new and unknown bug had the potential of infecting billions of computers around the world, it became big news. Several well-known IT security bloggers shared VirusBlokAda’s report, and Microsoft hurriedly announced that its engineers were working around the clock to patch the bug.
The Dreaded APT
Meanwhile, several anti-virus vendors began to analyze the actual malware that Sergey discovered. As they dug deeper and deeper into its code, the analysts realized that this wasn’t your ordinary, run-of-the-mill computer virus. Instead, it was one of the most sophisticated viruses known to date—and it was targeting industrial control systems. In fact, this was the dreaded APT malware the U.S. government had been warning about since 2005. The name given to the new malware was Stuxnet, a combination of two random words in its code.
AG: “People reacted almost immediately. There were entire sites that glued all of their USB ports shut on their control system network.”
BS: “First and foremost, Stuxnet was a jarring proof of concept.”
AG: “Beforehand, it was ‘no one would ever come after us with a targeted attack. That’s for IT.’ Afterward, it was ‘Oh no! Now, what?”
BS: “This was something that many cyber security researchers had been warning about for years, the possibility that a digital weapon could attack physical infrastructure. And Stuxnet was really the first big public evidence of that.”
AG: “Certainly in the day, there was some panic.”
BS: “And I think what made it so impactful and so shocking for so many people, was the fact that it was just so complex. In most cases, in the cybersecurity realm, you almost expect the initial stages of a new type of attack to come very slowly and gradually. Maybe start with something, say, DDOS on critical infrastructure. Just trying to shut things down, just trying to break things in a crude and basic way, not even by directly affecting the physical mechanism of the control system, but just by going after the computer, the software code that’s helping to run those control systems, and just trying to take them down and seeing what happens. The fact that Stuxnet was so intricate, highly targeted and carefully assembled—I think that was what really made it a wake-up call.”
AG: “The Stuxnet worm spread on control systems networks when it was discovered. There was no anti-virus for it, there were no security updates to close the hole that it was using. There was no way to stop it. In the early days, nobody knew what it was, what its objective was, what its consequences would be. All they knew was that here is something that is spreading on control system networks, and there was no way to stop it.
One of the ways it spread was on USB sticks, so people took what measures they could. You know, it’s very bad news when something hits the industry, and you’re told ‘take precautions, you’re in trouble.’ What precautions should I take? I don’t know (laughing) that’s a very distressing thing to hear.”
The Spreading Mechanism
Teams at several anti-virus vendors analyzed Stuxnet and published detailed reports. It was a tremendous effort. Stuxnet was a gigantic malware in terms of the sheer size of its code: it had 150,000 lines of code – roughly 10 times more than the average computer virus. Stuxnet’s size is a testimony to its complexity and was the reason it took several months to reverse engineer it. The majority of Stuxnet’s code was dedicated to its spreading and stealth mechanisms. But how do these mechanisms work?
Let’s say it’s Monday morning, and an employee returns to his office at some industrial company. He could be the company’s accountant or the inventory manager of the cafeteria. He sits at his desk and takes out a USB thumb drive. The drive might carry documents he was working on from home, and maybe some family photos. He plugs the USB drive into the computer and copies the files he needs to his desktop.
But what he doesn’t know is that the USB drive contains one more thing—a copy of Stuxnet. How did that concealed copy find its way to the USB drive? We can’t tell. Perhaps the employee’s home computer was already infected, or maybe some “secret hand” made sure the malicious software was installed on the drive. What we do know is that while the employee copies the documents from the USB drive, Stuxnet installs itself on his desktop in complete silence, without any warnings or alerts.
What Is Stuxnet Looking For?
You may have also noticed that Andrew Ginter described Stuxnet as a “worm.” This term refers to a type of malware that is able to spread inside a network of computers without any human intervention—much like a biological virus spreads through the body using its blood vessels. From the moment it has installed itself on the first computer, Stuxnet is independent and is able to spread to other computers in the company’s network—the network that allows the accountant or the inventory manager to send emails and share documents with their colleagues.
What is Stuxnet looking for? Why is it skipping from one computer to another? Well, it has a clear target. Remember that we are talking about an industrial facility of some sort, perhaps a snack factory or maybe a power station. Such a facility usually has two distinct computer networks: a ‘business network’ and an ‘industrial control network.’ The computers of the accountant and the inventory manager both belong to the business network, where bank account details or secret patents can be found… but Stuxnet is not interested in this sort of information; instead, it is looking for the computers belonging to the industrial control system – the system that controls the heat in ovens or the spinning of turbines. These computers will almost always be segregated and protected from the other computers in the organizational network by several IT security measures, like anti-virus software or firewalls. Yet Stuxnet is sophisticated enough to breach these security measures as if they don’t exist at all.
How does Stuxnet know that it has infected a computer belonging to the industrial control network? The answer is that it tries to locate software called Step 7. Step 7 was created by the German company Siemens, and it is used solely in industrial control systems. Why this software in particular? All we can say right now is that Step 7 was not chosen randomly. Stuxnet’s creators even went as far as obtaining a secret password, which allows them to hack the software and use it to infect other computers. How did they get their hands on the secret password? Again, we have no clue. No one, except the programmers who developed Step 7, was supposed to know about the existence of such a secret password.
Targeting The Control System
After hacking Step 7, Stuxnet begins looking for control equipment attached to the PC, called a PLC, which stands for Programmable Logic Controller. The PLC is a type of simple computer that “translates” and transports orders back and forth from a PC to the industrial machine it controls, like a generator, a conveyor belt or an oven.
The PLC’s role is to manage the flow of information between the computer and the controlled machine. If Stuxnet finds such PLC equipment, and if several other vital conditions are fulfilled (which we will talk about soon), then the malware knows it’s found its target.
And then what happens? Well, imagine Stuxnet as a guided missile with two parts: the first is a thruster that leads the missile to its target, and the second is the payload that causes damage to the target. The entire mechanism we have described so far – the infection via a USB drive, the spreading through an organizational network, and the hacking of Step 7 – was the thruster. Now that Stuxnet has found its target, it is time to activate the payload, the part of Stuxnet that makes it so unique, unprecedented and frightening. It is this part of the virus that will be the focus of the next part of this episode, where we will also learn about the damage Stuxnet caused to the Iranian uranium enrichment facility.
Part II – Stuxnet’s Payload
As I told you at the beginning of the previous part, my third book was called “Battle of Minds: A History of Computer Viruses.” Writing a book is never easy, but this time, I ran into a new problem, one that I never encountered when writing before—paranoia.
You see, while doing research for the book, I visited countless websites that can only be described as “dubious”: hacker forums, virus-spreading websites and so on. I’m a cautious person, so I took all measures I could to protect my computer from getting infected with any scary viruses. But despite my efforts, I was always worried that I missed something and would get infected by some malware. Every time my computer acted slightly weird, I immediately wondered whether it had to do with a virus. Slow Internet? It might be a virus! Some software stopped working? A trojan! The hard-disk is over active? It’s malware for sure! You get the picture. I was a digital hypochondriac.
But why am I telling you this? Because I believe that right now, a few thousand miles away, perhaps inside an underground bunker somewhere, there are programmers and technicians who are sitting in front of their computers and experiencing the same paranoia.
But first, a mini-recap.
In the first part of this episode, I introduced you to Stuxnet, a malicious software that flipped the IT security world upside down when it was exposed in 2010. Stuxnet wasn’t your typical, run-of-the-mill computer virus, but an entirely new threat called an Advanced Persistent Threat, or APT for short. Unlike typical viruses, an APT is a targeted threat, sort of like a guided missile aimed at a specific target. We learned how Stuxnet was accidentally exposed by a small Belarusian IT company, and of the industrial control world’s terrified reaction when its executives realized the true potential of APTs. We also learned that Stuxnet exploited an unknown vulnerability in the Windows operating system and a secret password in Siemens’ software in order to penetrate control systems in industrial facilities.
One question, however, was not addressed in the previous episode, and it is the topic of this episode. It’s a question that kept many CEOs awake at night: What, or Who, was Stuxnet’s target?
The Right Place At The Right Time
The technological analysis published after Stuxnet was exposed made it clear that Stuxnet was a sophisticated and malicious software—but experts couldn’t figure out who it was targeting. There were literally thousands of industrial facilities using Siemens’ industrial control equipment around the world. Were they all targets, or just a few of them?
Ralph Langner is an IT security expert and the owner of a tiny, three-man security firm. He specializes in industrial control systems, so when Stuxnet became known to the public, it piqued Langner’s interest. He immediately obtained a copy of the malware and started analyzing it.
It turned out that Ralph Langner was the right person in the right place at the right time. He had years of knowledge and experience working with Siemens’ control systems. In fact, Siemens sometimes sent its employees to him for professional seminars on using their equipment. Langner and his two colleagues spent weeks decoding the secrets of Stuxnet and were able to paint a clear picture of how it operated.
How Stuxnet Works
Now’s a good time for a quick reminder of how Stuxnet works, since we’re going to go into more detail.
Stuxnet’s infiltration process began when an infected USB drive was connected to a PC by one of the industrial facility’s employees. It then took over the computer and traveled through the network, skipping from computer to computer, until it found one that belonged to the facility’s industrial control system – for example, a computer that controls an assembly line. Stuxnet checked whether software called Step 7, made by Siemens, was installed on the PC. If Step 7 was present, Stuxnet hacked it using a secret password that was built into the software. Next, Stuxnet looked for a specific type of control equipment called a PLC, a smaller computer used to “translate” commands between the PC and the actual machinery. If Stuxnet found a PLC, it took that over as well.
Now, up until this point in the process, Stuxnet hasn’t caused any damage. It hasn’t erased any files, nor has it stolen any data. Going back to the guided missile analogy, now that the thruster has brought the missile to its target, it’s time for the payload to cause the real damage.
Once Stuxnet took over the PLC, it checked to see which electrical components were attached to it. Specifically, it was looking for two microchips that control the rotation speed of engines, made by two specific companies: Vacon from Finland and Fararo Paya from Iran. If Stuxnet didn’t find these two microchips – or if the microchips were made by some other company – then it terminated its operation and erased itself from the computer. If Stuxnet did find the two microchips, it continued to look for an even more specific condition: the rotation speed of the engines connected to the microchips, specifically between 807 and 1210 Hertz. Once again, if it didn’t find the engines, or if the rotation speed was not within the right range, Stuxnet terminated its operation and erased itself from the computer.
Let’s review the long chain of specific conditions vital for Stuxnet’s operation: a computer with a specific piece of Siemens software installed, connected to a Siemens PLC, itself connected to two microchips made by two specific companies, controlling engines rotating within a very specific speed range. The convergence of such a precise set of conditions is comparable to searching the phonebook for a Mr. Butterworth who lives in Michigan, in the city of Livonia, at 19323 Shadyside Street. If you find him, he is most likely the person you were looking for, since the chances of there being two people with the same name at the same address are slim. In other words, Stuxnet was looking for a very specific target.
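To make that chain of checks concrete, here is a short Python sketch of the fingerprinting logic. It is purely illustrative: the real checks ran as native code on infected Windows machines and PLCs, and every name below (`Drive`, `is_intended_target`, and so on) is invented for this sketch. Only the two vendor names and the 807–1210 Hertz window come from the published analyses.

```python
# Hypothetical sketch of Stuxnet's target fingerprinting. All names are
# invented for illustration; only the vendors and the frequency window
# come from the published analyses of the worm.
from dataclasses import dataclass

TARGET_VENDORS = {"Vacon", "Fararo Paya"}  # makers of the targeted drives
MIN_HZ, MAX_HZ = 807, 1210                 # rotation-speed window

@dataclass
class Drive:
    vendor: str
    frequency_hz: float

def is_intended_target(has_step7: bool, drives: list) -> bool:
    """True only if every condition in the chain matches."""
    if not has_step7:                      # no Siemens Step7: wrong PC
        return False
    if {d.vendor for d in drives} != TARGET_VENDORS:
        return False                       # wrong hardware: erase and leave
    # Finally, every drive must be running inside the specific speed window.
    return all(MIN_HZ <= d.frequency_hz <= MAX_HZ for d in drives)

# A near-miss (right vendors, wrong speed) is rejected like any other PC.
natanz_like = [Drive("Vacon", 1064), Drive("Fararo Paya", 1064)]
idle_plant = [Drive("Vacon", 300), Drive("Fararo Paya", 300)]
print(is_intended_target(True, natanz_like))  # True
print(is_intended_target(True, idle_plant))   # False
```

The point of the sketch is the shape of the logic: any single mismatch makes the worm erase itself, which is exactly why it could wander harmlessly through a hundred thousand unrelated computers.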
An Obvious Target
This level of precision, atypical for computer viruses, astonished Ralph Langner. Who would invest that much effort in order to harm one specific facility? The answer was almost obvious: it could only be a country that wanted to attack a military facility of some other country. Langner contacted several of his colleagues and clients and asked whether they knew of an industrial facility whose characteristics matched the specific pattern Stuxnet was after. Soon after, he found a match: a gas-centrifuge uranium enrichment facility in Natanz, Iran.
For those of you who aren’t familiar with the ins and outs of a uranium enrichment process, here’s a quick explanation.
Uranium is the vital material for nuclear fission, the process that takes place in nuclear power plants and nuclear bombs. There are several naturally occurring isotopes of uranium—think of them as different “flavors” of the same element. Only one particular flavor, known as uranium 235, can support the process of nuclear fission. Luckily, uranium 235 is rare and makes up less than one percent of the uranium found in nature, which is why uranium deposits are not spontaneously exploding on their own. In order to use uranium in a nuclear power plant or to make a nuclear bomb, it must be enriched. That is, the percentage of uranium 235 in the original material needs to be increased. It’s kind of like assembling a basketball team—the more Stephen Currys you have playing on your team, the better the chances you’ll make it to the playoffs.
The most common method for enriching uranium is by using a gas centrifuge. An enrichment centrifuge is a rapidly rotating tube, into which hot uranium gas is injected. The centrifugal forces acting on the gas separate the different isotopes or “flavors” of uranium so that the gas leaving the centrifuge is enriched with uranium 235. In other words, the percentage of uranium 235 is higher than it was before the gas was injected into the machine. If five to 10 percent of the uranium atoms are uranium 235, the material can be used as fuel in a nuclear power plant. Past 20 percent it is considered highly enriched, and at around 90 percent it can be used in a nuclear bomb. That’s the whole secret. Now only you, I and Wikipedia know it.
Messing With The Centrifuges’ Rotation Speed
These centrifuges, or more precisely—the engines that run them—were Stuxnet’s targets. If all the conditions we talked about were fulfilled (the specific microchips attached to engines rotating at a specific speed) then Stuxnet would interfere and force the engines to change their speed. Initially, it would speed them up, then slow them down so that they almost stopped, before returning them to their original rotation speed. Stuxnet initiated these weird speed changes in the centrifuges once every 27 days.
Why, then, is Stuxnet messing with the centrifuges’ rotation speed? Well, to understand that, we have to appreciate how sensitive these machines are. A typical centrifuge rotates at tens of thousands of revolutions per minute—the edge of the tube travels almost half a mile each second. The tremendous speed at which they rotate, combined with the high temperature of the radioactive gas inside them, makes the centrifuges extremely sensitive to any sort of rattling and turbulence. Think of a centrifuge as a motorcycle on a highway running into a rock. If the motorcycle is fast enough, even a small stone could cause great damage.
If a spinning centrifuge experiences uncontrolled rattling, the shaking and extra friction can be disastrous. The best-case scenario is that the centrifuge will be worn out sooner than expected and need to be replaced at huge cost. In the worst-case scenario, the centrifuge will disintegrate and break into pieces. By increasing and decreasing the centrifuges’ rotation speed, Stuxnet induced turbulences that did exactly that.
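Public analyses of the payload describe attack routines that first pushed the drive frequency far above normal and then dropped it to a near standstill before restoring it. A minimal sketch of one such cycle, assuming the frequency values reported in those analyses (1410 Hz, 2 Hz, and a normal speed of 1064 Hz); the function and variable names are invented:

```python
# Simplified illustration of the periodic speed attack. The frequencies
# (1410 Hz, 2 Hz, 1064 Hz) follow published analyses of the payload; the
# function and variable names are invented for this sketch.

NORMAL_HZ = 1064           # reported normal operating frequency

def attack_sequence(set_frequency):
    """One attack cycle, fired roughly every 27 days."""
    set_frequency(1410)    # over-speed the rotors far past their rating
    set_frequency(2)       # then let them grind almost to a standstill
    set_frequency(NORMAL_HZ)  # restore normal speed and go quiet again

commands = []              # stand-in for the real drive interface
attack_sequence(commands.append)
print(commands)            # [1410, 2, 1064]
```

Each brief excursion outside the safe speed range stresses the rotors a little more, which fits the “continuous periodical choking” strategy Langner describes below.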
A Sabotage Attempt
According to past media reports, the United States and Israel had previously tried to sabotage the Iranian nuclear program by replacing sensitive components with faulty ones. Ralph Langner suspected that Stuxnet was another such sabotage attempt. But he needed to learn more about the Iranian nuclear program. Ironically enough, one of his main resources in doing so was the Iranian Public Relations Office. Over the years the Iranians have published images and videos of the former Iranian President Ahmadinejad visiting nuclear facilities—Ahmadinejad walking near the gas centrifuges, Ahmadinejad leaning over a technician’s shoulder and looking at a control screen…you get the picture. Langner analyzed every image and frame he could get hold of in order to figure out what equipment the Iranians were using, and became convinced that Iran was the target. His conclusion was bolstered by the fact that about 60 percent of the infected computers were located in Iran.
Langner published his theory in a series of posts on his company’s blog. He doubted that anyone would take his ideas seriously, but he was wrong. The IT security world was enthusiastic about the news—and it was only then that the mainstream media realized what was happening. Blake Sobczak is a reporter who covers IT & Security matters for the magazine EnergyWire.
“You know, the initial stories about Stuxnet were kind of glossed over. The impact of it wasn’t immediately clear, and part of that had to do with just the technical aspects of it. Those who were dissecting it—this took time. It’s an enormous piece of code, multiple versions of it. They were trying to find where it came from, what it was trying to do, what the malicious payload was. And then for a while, it didn’t have that added element of Iran’s nuclear program, which of course pushed it over the edge to be international news, all of a sudden, when you realize what this was going after and how significant it really was.”
Nowadays, the consensus among experts is that Stuxnet was indeed developed in order to damage the uranium enrichment centrifuges in the nuclear facility in Natanz. What damage did it cause? It’s hard to tell. According to media reports, the Iranian nuclear program was often delayed in 2010, and more than 1,000 centrifuges were replaced due to severe defects. Leaked documents referred to a possible nuclear accident that took place in the first half of 2010. The Iranian President, Mahmoud Ahmadinejad, admitted that malware had caused “limited damage” to the centrifuges, although it’s safe to assume that the damage was worse than the Iranians wanted the world to know.
Hiding in Plain View
Now you may be asking: How is it possible that no one in the Iranian facility noticed the changes in the rotation speed caused by Stuxnet? Such dramatic changes of speed should have caused the alarms in the facility’s control room to go off and alert the technicians monitoring the machines.
This brings us to what is probably the most brilliant – or perhaps nefariously brilliant – aspect of Stuxnet. When Stuxnet takes over the control computer and its PLC, it also replaces the sensor data sent to the technician’s monitor with a pre-recording. That way, when Stuxnet drives the centrifuges crazy, the monitors still show the appropriate speed, temperature and more. Downstairs, among thousands of centrifuges, a single centrifuge starts doing the tango and making noises like your grandfather’s broken Chevy. But upstairs in the control room, All Systems Are Go.
Eventually, of course, someone will notice that something’s wrong. If a hundred centrifuges usually need replacing in a typical month, and all of a sudden 500 go out of order almost simultaneously—someone will start asking questions. The executives will inquire and the technicians will report that no alarm went off. It’s safe to assume at this point that Iranian nuclear program executives will start demanding explanations…
Here is where the brilliance of Stuxnet comes into play. When Stuxnet replaces the existing software in the PLC, it does not erase the original software; instead, it “saves” it somewhere in memory. When a programmer tries to examine the control software currently running in the PLC, Stuxnet pulls the original software out of storage and presents it to the programmer as if nothing’s wrong.
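Both deceptions, the replayed sensor data shown in the control room and the pristine original program handed back to auditors, amount to a man-in-the-middle on the PLC interface. Here is a hypothetical Python sketch; every name is invented, and the real implant did this in PLC and driver code, not in anything this simple:

```python
# Hypothetical sketch of Stuxnet's two deceptions, with invented names:
# 1) sensor readings shown upstairs are a replay of recorded normal data;
# 2) reading the PLC program back returns the saved original, not the implant.
from itertools import cycle

class CompromisedPLC:
    def __init__(self, original_program, recorded_readings):
        self.original_program = original_program  # stashed, never erased
        self.running_program = "<implant>"        # what actually executes
        self._replay = cycle(recorded_readings)   # loop the recording forever

    def read_sensors(self):
        # The control room never sees the real, wildly swinging values.
        return next(self._replay)

    def read_program(self):
        # An auditing programmer gets back the code they themselves wrote.
        return self.original_program

plc = CompromisedPLC("<operator's Step7 code>", [1064, 1063, 1064])
print(plc.read_sensors())  # 1064: looks normal, whatever the rotors do
print(plc.read_program())  # the original, apparently untouched program
```

Every read path the operators trusted was intercepted, which is why neither the monitoring screens nor a code audit could reveal anything amiss.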
Now, put yourselves in the shoes of that Iranian programmer.
Expensive centrifuges are being destroyed daily, and upper management is going nuts. The pressure on you, the programmer, is tremendous. Your boss is constantly looking over your shoulder. You retrieve the software from the PLC and it’s perfect. Nothing seems to be wrong; after all, it’s the same software you wrote a week ago and tested countless times. So how is it possible that centrifuges can disintegrate without any warning? As an engineer, I can empathize with those poor programmers, and feel their hair-pulling & monitor-kicking frustration…
Which brings us back to the notion of paranoia. Once Stuxnet was exposed in the media we can assume that the Iranians understood what was really going on in their facility. Given the sophisticated level of programming they were dealing with—they would have to become extremely suspicious and paranoid. Every computer failure, every small malfunction, could potentially indicate the existence of a new and advanced malware. According to one report, the Iranians were so suspicious of their equipment that at a certain point they sat people in front of the centrifuges in order to manually monitor the rotation speed of the machines.
Ralph Langner thinks this paranoia—this digital psychological torment—might have been the real intention of whoever was behind Stuxnet. It’s obvious that Stuxnet’s creators could have caused catastrophic damage to all the centrifuges simultaneously, which might have caused the entire facility to shut down; yet they chose a kind of gradual “strangling” in order to sabotage not only the machines but also the confidence the engineers had in them.
“If catastrophic damage was caused by Stuxnet, that would have been by accident rather than by purpose. The attackers were in a position where they could have broken the victim’s neck, but they chose continuous periodical choking instead. Stuxnet is a low-yield weapon with the overall intention to reduce the lifetime of Iran’s centrifuges and make their fancy control systems appear beyond their understanding.”
Millions Of Dollars Spent
So Stuxnet was a unique piece of malware. Not only could it bypass anti-virus software from multiple vendors without batting an eye, it also knew how to disguise its activity so that neither the equipment operator nor the system’s programmer could find it. Here’s Andrew Ginter, the industrial system security expert whom we met in the previous episode.
“This class of attack had been described as possible years earlier in security conferences – but had never been seen in the wild until Stuxnet. Stuxnet in a sense did not invent any kind of new attack. Stuxnet took the most powerful, the most effective and the most advanced of any technique that anyone had ever described—and put them all together, and made them work really well in one package, and this had never been seen before. This was the big thing.”
According to Ginter, Stuxnet must have been the result of a huge investment in time, money and expert IT programmers.
“I developed software for 25 years. I wrote software, I managed teams that wrote software—I know how much it costs to produce software. This worm installed cleanly on every machine I tried it on. Everything from Windows NT up to the Windows OS of the day, all of the different variants—it installed clean and it ran clean on all of them. I know how hard it is to produce a legitimate product that works on that wide a variety of equipment. My estimation is that there were at least millions of dollars spent on the worm. It might have been tens of millions.”
Stuxnet’s Fatal Flaw
This last point raises another interesting question. If Stuxnet was so sophisticated and polished, why was it ever exposed? That’s not a trivial question. According to the media, earlier versions of Stuxnet successfully meddled with the Iranian facility for five whole years before it was ever discovered. Something must have gone wrong…
It turns out that Stuxnet might have successfully remained “underground” for a long time, had it remained within the confines of the Iranian facility. The makers of Stuxnet had reliable intelligence regarding the computers at the facility, so Stuxnet operated without any problems. But at a certain point, Stuxnet began spreading to other places and other computers…a lot of other computers.
“You know, you might ask—if the worm was that sophisticated, how did it ever get discovered? The people analyzing the worm have concluded that they discovered what they think is a bug. And this bug caused the worm to propagate. Instead of there being tens or hundreds of copies of the worm in the world, there grew to be, at one point, over 100,000 copies.”
A small bug—an error in a single line of code—caused the mechanism controlling the malware’s propagation to malfunction. As a result, instead of infecting only three new computers at a time, it ended up infecting exponentially more. It’s possible that among the infected computers were some that exhibited the “blue screen of death” that VirusBlokAda, the small Belarusian IT security firm, was called in to investigate, which initiated the chain reaction that led to the discovery of the worm.
“This very sophisticated artifact was discovered because of a bug. To produce that much code that is that reliable is extremely costly. And to have that investment vanish because of one line of code—that’s really annoying.”
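The exact one-line flaw has never been published, so the following Python model is speculative. It only illustrates the mechanism in question: how a propagation counter is supposed to cap spread at a handful of machines, and how resetting it on every hop turns a trickle into exponential growth.

```python
# Speculative model of a propagation limiter and a bug that defeats it.
# Stuxnet reportedly capped each copy at three new infections; the real
# one-line flaw was never published, so this is purely illustrative.

def total_infections(depth, counter_reset_bug):
    """Each copy infects 3 machines and hands a hop counter to its
    children. Decrementing the counter caps total spread; handing every
    child a fresh counter makes growth exponential (bounded here only
    by the simulation depth)."""
    def grow(hops_left, depth):
        if hops_left == 0 or depth == 0:
            return 0
        child_hops = 3 if counter_reset_bug else hops_left - 1
        return 3 + 3 * grow(child_hops, depth - 1)
    return grow(3, depth)

print(total_infections(5, counter_reset_bug=False))  # 39: capped
print(total_infections(5, counter_reset_bug=True))   # 363: runaway growth
```

With the limiter working, deepening the simulation changes nothing: the total stays at 39. With the bug, every extra hop roughly triples the count, which is how tens of intended copies could become more than 100,000.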
You know, I have a reason to believe that some of the listeners of this podcast—both in the US and in Israel—might have had something to do with the creation of Stuxnet. If I am correct, and you are listening—then hold on tight for the next—and final—installment of this episode, since we will be talking about… you. Who were the anonymous creators of Stuxnet, and what clues did they leave for us within the software’s code? Could it be that exposing Stuxnet was part of a bigger plan, and not so coincidental? And what are Duqu and Flame, Stuxnet’s sisters? All that and more in Part III.
Part III – Stuxnet’s Creators
The last two parts of the episode focused on the technological characteristics of Stuxnet, a computer virus that attacked the uranium enrichment facility in Iran, and was exposed in 2010, almost accidentally, by a small IT company from Belarus. We described the way the malicious software penetrated the facility’s network, located the control system’s computers and finally started messing with the gas centrifuges by increasing and decreasing their rotation speed—all while presenting pre-recorded false data to the technicians and programmers. This level of sophistication made Stuxnet a groundbreaking malware—the first “real” cyber weapon.
Now it is time to talk about the people who created it. In the computer security business, this question is usually considered to be secondary. In most cases even if we do catch the creators of malicious software and punish them, the software itself still continues to spread. It is like a man releasing a lion from its cage; we might be able to punish the man, but the priority is catching the lion before it gets downtown.
In Stuxnet’s case, however, identifying the creators of the software isn’t “secondary” at all. On the contrary: if it is possible to prove that it was created by a government agency, then Stuxnet becomes much more than a computer virus—it becomes a cyber-weapon. Reliable information about the cyber warfare capabilities of nations is scarce since it’s usually considered top-secret military information. If Stuxnet is indeed a weapon developed by a country, then it might reveal something about this secret world.
When IT experts analyzed Stuxnet, they found the software included a communication channel with its operators. When Stuxnet infected a computer, it searched for Internet access. Once online, it sent its current status and other information to servers located at these addresses:
The physical computers storing the servers for these websites were located in Denmark and Indonesia respectively. But before we suspect the Danish or Indonesian governments, it is important to remember that anyone can launch a Web server from anywhere around the world, regardless of his or her physical location. For instance, the website for this podcast is hosted on servers located in the United States, but it could have just as easily been hosted in Europe. Stuxnet’s operators created these servers anonymously, and without any identifying information. So both Web servers led to a dead end when it came to identifying the people behind Stuxnet.
Hidden Clues In The Code
Next, experts started “rummaging” through the code of the malware, hoping to find hidden clues. For example, one of the files referenced in the code was named “guava.pdb” and was stored in a folder called “Myrtle.” In Hebrew, a myrtle is called “Hadas,” and is also the middle name of the biblical Queen Esther, wife of Ahasuerus, king of Persia, which is, of course, modern day Iran. In other words, it could be interpreted as a possible connection between Stuxnet and Israel.
But is this clue, and other similar clues discovered in the code, a “smoking gun”? Definitely not. This free association is the tech equivalent of playing a Led Zeppelin album backward, hoping to find satanic messages. Personally, I find it hard to believe that any programmer would bother scattering such vague clues, except maybe as a joke. There is no smoking gun—not even a toy gun! And even if such clues were to be found in the code, we still can’t take them seriously. In fact, even if Stuxnet had played the song “Hava Nagila” while destroying the centrifuges, we still wouldn’t be able to reliably point a finger at Israel, since clues could easily have been planted in the code as a distraction, disguising the malware’s real creators.
Digging into the code, however, did prove to be helpful in other ways. For example, in order to infiltrate and take over a computer, Stuxnet took advantage of an unknown bug in the Windows operating system. This sort of exploitable vulnerability is usually called a ‘Zero-Day’ bug. Zero-Days aren’t commonly known, since it takes a tremendous amount of knowledge and time to locate them in software—and their scarcity makes them very valuable. It’s like knowing the combination to the vault at a bank: it’s safe to assume that criminal organizations would be willing to pay quite a lot for information that could aid in a robbery. Similarly, whoever knows about the existence of a Zero-Day vulnerability in Windows can sell that knowledge to computerized crime organizations for hundreds of thousands of dollars. But secrets are only valuable as long as they remain secret. Once the vault is breached, the bank will immediately change the combination—and knowledge of the former one becomes worthless.
Similarly, knowledge about a vulnerability in software is valuable as long as no one has used it to break into a computer. Once a Zero-Day bug is used in malicious software, it’s only a matter of time before antivirus vendors expose it and the bug is fixed. When that happens, the Zero-Day becomes worthless, which is why a normal virus never uses more than one Zero-Day: revealing more than one is a pure waste of money.
But Stuxnet used not one, not two, but four Zero-Day bugs simultaneously! Such an extravagant waste of resources is akin to tossing four heavy bombs on a building—when one would suffice to destroy the target.
As you might have guessed, I did not choose the analogy randomly. Such behavior only makes sense in the military world, where making sure a building is destroyed outweighs any financial considerations. In other words, someone wanted to make sure that Stuxnet would succeed in penetrating the uranium enrichment facility in Natanz at all costs—perhaps because he or she knew there would only be one opportunity to do so. After they discovered Stuxnet, the Iranians would learn their lesson and be more careful in the future.
Two Immediate Suspects
Another clue in identifying Stuxnet’s mysterious creators lies in the fact that they had very detailed information regarding the control system of the Iranian facility. They could tell how many centrifuges were installed in the facility and their specifications; they knew which microchips controlled the centrifuges, and what anti-virus software was installed on each computer. Ralph Langner, the IT security expert who analyzed Stuxnet, said this of the intelligence:
“The detailed pin-point manipulations of these sub-controllers indicate a deep physical and functional knowledge of the target environment; whoever provided the required intelligence may as well know the favorite pizza toppings of the local head of engineering.”
Not surprisingly, the enormous resources and top intelligence that Stuxnet’s developers enjoyed indicate two immediate suspects: the United States and Israel. An investigative report in The New York Times claimed that Stuxnet was the result of a joint American-Israeli operation code-named “Olympic Games,” which began during the Bush administration and continued under Barack Obama. According to some reports, the malware was developed by an Israeli intelligence unit called Unit 8200, and was tested on real centrifuges in the Dimona Nuclear Research Center in the Israeli desert and on centrifuges that the Americans obtained from former Libyan dictator Muammar Gaddafi when he gave up his nuclear ambitions.
I assume that some listeners will celebrate the success of Stuxnet since Iran is not a friendly nation, to say the least—and what’s bad for the enemy is surely good for the U.S. and Israel, right?
Well, not so fast. We previously compared Stuxnet to a guided missile aimed at a specific target. While valid, the analogy is not a perfect one. Remember that the way the malware “moves” within the virtual space of the Internet is nothing like the direct path a missile takes to its target. Instead, Stuxnet spreads much like a viral epidemic—skipping from one computer to another and infecting them. According to some assessments, Stuxnet infected about a hundred thousand computers. Most of these computers were not related to the Iranian nuclear program, and many were not even physically located in Iran. Some of those computers crashed—which is how Stuxnet was discovered in the first place. But even if Stuxnet didn’t affect a computer’s operation, it still caused damage. When dealing with sensitive control systems, no one can afford to let “foreign” and unfamiliar software—let alone known malicious software—wander around a system unchecked. As Ginter explains:
“The worm infected hundreds of thousands of other machines and caused almost no damage. Now, it did cost a lot of money! Because if I have an infected machine staring me in the face, am I going to believe that that’s OK and leave the worm on there? No, I’m gonna clean it out. The process of shutting it down and cleaning it up costs enormously. If I have to shut down a refinery because some of the equipment has been infected, I’m losing millions of dollars a day.”
In other words, the Iranians were not the only ones potentially harmed by Stuxnet. Even an American petrochemical facility—completely unrelated to the Iranian nuclear program—could have suffered great financial loss as a result of Stuxnet taking an afternoon stroll through its network. Interestingly, the potential for secondary damage never stopped Stuxnet’s anonymous developers from carrying on with their plans.
In early September of 2011, a year after Stuxnet was exposed, a new, unfamiliar, malicious software was discovered in Hungary. The researchers who discovered it named it Duqu, since the letters D and Q appear in some of its files. IT experts from the Budapest University of Technology and Economics analyzed Duqu and identified many similarities to Stuxnet. In fact, there were so many similarities that it later turned out Duqu had actually been discovered even prior to September 2011 by a Finnish antivirus company—but its automated identification system mistook the malware for a copy of Stuxnet. The two types of malicious software were like two sisters sharing identical DNA segments. They shared so many common pieces of code that experts believe that the people who developed Duqu are the same people who developed Stuxnet, or at least share common ties.
The two types of malware, however, have different objectives in the cyber offensive. While Stuxnet was used to carry out attacks on industrial systems, Duqu was an espionage tool. It did not attack systems directly. Instead, it tried to steal information such as sensitive documents and contact lists. It could also record the activity taking place on a computer. It recorded keyboard keystrokes, took screenshots and even recorded conversations using the computer’s microphone. The captured information was sent through the Internet to a few dozen servers located in different countries around the world—from India and Vietnam to Germany and England.
Taking Down Duqu’s Servers
As with Stuxnet, these servers were paid for anonymously. But researchers were still eager to get their hands on them, since any information they held might reveal new clues. As is often the case with modern websites, Duqu’s Web servers were hosted on hardware rented from dedicated hosting companies. Security researchers from the antivirus company Kaspersky Lab approached the hosting companies that owned Duqu’s servers and asked for their cooperation with the investigation. It was a race against time: the researchers knew that as soon as Duqu’s operators realized the malware had been discovered, they would try to destroy as much incriminating evidence as they could. Unfortunately for Kaspersky Lab, the bureaucratic process for obtaining the approvals was agonizingly slow, and on Oct. 20, 2011, all the data on the servers was remotely destroyed before Kaspersky could get its hands on it. Duqu’s operators made sure to wipe out every last byte of information on the servers. In one case, a server in India was wiped clean mere hours before Kaspersky received permission to take it over.
Six months later, in May 2012, yet another new espionage malware was discovered and dubbed Flame. Unlike Duqu, Flame had very little resemblance to Stuxnet. At first, experts thought they were unrelated. But then a thorough analysis of Flame unearthed several small similarities between the two, just enough to convince the investigators that both viruses were likely made by the same people, or perhaps different teams working from the same organization.
As espionage software, Flame is much more advanced and sophisticated than Duqu. For example, it had the ability to activate Bluetooth communication on a computer or on a phone, identify other smart devices in the area, and pull contact lists and other relevant information from them. Just to give you an idea of Flame’s complexity: Stuxnet had 150,000 lines of code, which made it a HUGE virus—10 times larger than typical malware. Flame, however, was 20 times bigger than Stuxnet, and some consider it the most complex malware ever known. Andrew Ginter, the industrial system security expert we met in the previous episodes, does not believe that we will ever get a full analysis of Flame.
“I have never seen a complete analysis of Flame. I’m not sure anyone had ever done that. It’s just a lot of code to reverse-engineer. It took the Symantec team of, I think, four engineers, four or five months to reverse-engineer Stuxnet. Imagine how many people you need to do something 10 times as big.”
The Age of CyberWarfare
All over the world, the number of cyber attacks against commercial, government and industrial targets is on the rise. Russia, China, North and South Korea, Germany and India are all known to have governmental cyber warfare units, and together with the discovery of Duqu and Flame, it is rapidly becoming clear that cyber warfare is going to be an inseparable part of future wars and conflicts. Blake Sobczak, the IT and Security reporter for EnergyWire we met in the previous two episodes, puts it this way.
“Critical infrastructure operators in the U.S. have fallen victim to much simpler attacks than Stuxnet and have been infected by potential malware that is lying latent. Several officials in the U.S. have said there’s a strong chance that there’s malware already existing on control networks that might be able to do some damage. The extent of that is not always clear. Much of it is classified, a lot of people have signed nondisclosure agreements, and talking about it might be considered security sensitive for all the obvious reasons.”
The discovery of Stuxnet and its sisters encouraged experts in the industrial sector to seek better protection against future threats. In the first few days following the exposure of Stuxnet, Ginter thought he could use it to compare the effectiveness of different types of antivirus software. He obtained a copy of Stuxnet and tested it against a number of different types of antivirus software and firewalls, trying to learn which would be the best defense against the threat.
AG: “I tried Stuxnet against a number of defense systems, and I drew conclusions, saying ‘This defense would work, that one would not. That means this defense is stronger than that one.’ In hindsight, that was nonsense.”
Why is Ginter calling these early attempts “nonsense?” Because he realized that what he was actually looking at was a weapon designed for a very specific target.
“The Stuxnet worm was written to attack one specific site in the world, and it was designed to evade the security measures in place at its target. And so, there was nothing in the worm to evade whitelisting, but there was stuff in the worm to evade AV. If the site had used whitelisting, the worm would have included measures to evade whitelisting. There were, apparently, in hindsight, AV systems deployed at the site, and so the worm was designed to get around them. So the fact that some mechanisms worked against the worm and others didn’t was more a reflection of what was deployed at the target site.”
In other words, Ginter claims that since Stuxnet was specifically created to attack the defense systems of the Iranian facility, we cannot assume that it accurately represents other threats. Stuxnet is like a bullet designed to penetrate a bulletproof vest made by a very specific manufacturer. The fact that it might actually penetrate the vest without any difficulty does not necessarily mean that the vest isn’t effective against other types of bullets—or that other kinds of bulletproof vests are somehow better.
Outlook On The Future
If we continue down this train of thought, we arrive at a very alarming conclusion. If malware can be designed to successfully penetrate a specific defense system—as was done by Stuxnet, for example—what can we do to defend ourselves against these kinds of threats? You might be surprised to hear this, but Andrew Ginter says that basically, there’s nothing we can do. If someone really wants to penetrate your bulletproof vest and is willing to invest all sorts of resources in doing it—you’re in a world of trouble.
“If we have an adversary coming after us, and that adversary has detailed inside knowledge of how our systems are secured, in a sense—there’s nothing we can do. You are never completely secure. Which means no matter what we have deployed, we can always be hacked by an adversary that has enough time, enough money and enough Ph.D.s to throw at the problem. If you have an adversary that advanced, there’s nothing you can do. This was the nonsense. If we have that class of adversary coming after us, we don’t have a cybersecurity problem—we have an espionage problem, and we need to escalate [the problem] to our national security agencies.”
This is a very worrisome conclusion indeed. If true, then we might be doomed to live in constant fear of an enemy shutting down our electricity, water or gas supply. Luckily, Ginter has a more reassuring message as well.
“This is one of the big pieces of confusion that I try to clarify. In theory, there’s nothing you can do, from the point of view of any one facility. In practice, if we’re talking about protecting an entire industry—there is absolutely stuff you could do. No adversary in the world has enough money to buy every citizen of the United States and compromise all of its industries. That’s just ludicrous. And so what we can do to protect an industry is put enough security in place so that in order to compromise an industrial site, it has to be escalated to a level of espionage attack. And if you do that, nobody has the means to carry out that sophisticated [an attack] against an entire industry. What we want to do is elevate the problem from something as easy as hacking into a power plant from the comfort of my basement on the other side of the planet to something where I have to compromise feet on the street, in my target, and have them do my attack for me. That’s a much harder problem. It is absolutely possible to elevate our security posture to the point [where] it takes a physical assault to breach the cyber security posture.”
In other words, while it’s true that one can develop very sophisticated malicious software and shut down a facility or a power station—actually doing it is extraordinarily expensive. Remember that in Stuxnet’s case, someone had to find out which control systems were installed in the uranium enrichment facility, which microchips controlled the engines, what the electronic defense systems against malicious software were, etc. If someone wanted to attack all the industrial facilities in Iran—he or she would need to invest the same amount of time and effort for each facility. Creating a single Stuxnet is doable—making a thousand such weapons is a totally different matter. It’s a notion shared by Blake Sobczak.
“It’s very tempting to say that anything is possible. When you look at something as incredibly written and almost frighteningly executed as Stuxnet, it’s tempting to say, ‘well, who’s to say that the lights aren’t going to shut off tomorrow in the U.S.?’ Some hacker in the Ukraine or whatever is behind it. Oh my God! There’s almost the temptation to rush to panic. But if you take the time to reach out and talk to sources both in government and the technical community, they’ll say, ‘well, yes, there are vulnerabilities, there are countries and criminal elements willing to target these vulnerabilities, but is it so easy as to just flip a switch or hit a button and take down a critical infrastructure? Certainly not.’ This is an important aspect of the Stuxnet case. This was not easy to pull off.”
A New Type Of Weapon
Let’s run through all that we learned in the last three parts of this series. Stuxnet was a sophisticated piece of malicious software, perhaps the most sophisticated malware of its time. But it’s safe to assume that it will be remembered not just for its technological sophistication, but because it was a new type of weapon, the first of its kind, in the emerging virtual battlefield of the Internet. Stuxnet served as a frightening demonstration of what can be achieved when a technological superpower is willing to invest major financial and intelligence resources in cyber warfare.
By the way, we should note here that this demonstration might have actually been a part of Stuxnet’s purpose. Ralph Langner doesn’t believe that Stuxnet’s exposure in 2010, after several years of uninhibited activity, was a coincidence. Or at least, its creators didn’t lose any sleep over it.
“Somebody among the attackers may also have recognized that blowing cover would come with benefits. Uncovering Stuxnet was the end of the operation, but not necessarily the end of its utility. It would show the world what cyber weapons can do in the hands of a superpower. Unlike military hardware, one cannot display USB sticks at a military parade. The attackers may also have become concerned that another nation, worst case an adversary, would be first in demonstrating proficiency in the digital domain—a scenario nothing short of another Sputnik moment in American history…
Only the future can tell how cyber weapons will impact international conflict, and maybe even crime and terrorism. That future is burdened by an irony. Stuxnet started as nuclear counter-proliferation and ended up opening the door to a kind of proliferation that is much more difficult to control—the proliferation of cyber weapon technology.”