
Deep Learning & Artificial Intelligence, Pt. I | Curious Minds Podcast

AlphaGo’s victory over South Korea’s champion Lee Se-Dol came as a shock to many in the computer world – but it was a natural development in the story of Artificial Intelligence as it has unfolded over the last few years. What is Deep Learning, and how can computers learn ‘skills’ and ‘intuition’?

Download mp3
Subscribe: iTunes | Android App | RSS link | Facebook | Twitter

Deep Learning & Artificial Intelligence, Part I

(Full Transcript)

Written By: Ran Levi & Nate Nelson

It’s Game Two. We’re now watching the best player in the world, Lee Se-Dol, facing off against a machine, named AlphaGo, in one of humanity’s most storied and complex strategy games: ‘Go’.

Game one saw AlphaGo take an easy victory–one that shocked onlookers around the world. Se-Dol, the favorite, carries the weight of the world on his back as he fights to regain footing in the best of five series. An hour in, his prospects are looking good.

Lee Se-Dol

Then, AlphaGo plays the game’s 37th move. You can see the announcer’s hand shake as he copies the move to the internet broadcast’s big board. He adjusts the piece, unsure of whether he’s mistaken in its placement. Murmurs are rising from the crowd.

And just like that, it’s over.

Move 37

In the time since AlphaGo proved itself the world’s best Go player, move 37 has taken on a sort of cult status in pop culture. Major news outlets around the globe covered the story with articles like “Move 37, or How AI Can Change the World”, a deep learning startup was founded and named itself Move 37, and internet forums were swept with curiosity and speculation.

AlphaGo’s Logo

For those not experienced in Go, move 37 might otherwise appear totally nondescript–vertically centered, in the rightmost third of the grid, placed next to a white piece in an otherwise empty section of the board, it hardly looks like anything out of the ordinary. Yet, to those who know the game well, it was just about revolutionary.

Fan Hui, the European Go champion whom AlphaGo had cleanly swept in a five-game series prior to its matches against Lee Se-Dol, eagerly observed game two. Of move 37, he told reporters: “It’s not a human move. I’ve never seen a human play this move.” Fan, knowing the game so intimately, was as shocked as anyone. “So beautiful,” he said, and repeated that word over and over again: beautiful. But how can a machine attain beauty?

To understand why move 37 was so amazing, you have to understand how the game of Go works. Invented by the ancient Chinese as a playful representation of war, two players with black and white stones face each other in a battle to take as much territory as possible on a nineteen-by-nineteen board. Players are free to place stones on any of the board’s 361 points, and fully surrounding your opponent’s stones captures them, removing them from the board and winning territory. Because the game is so nonlinear, with such an incomprehensible number of possible moves, sequences, and board arrangements, there’s really no catch-all strategy you can stick to. After all, the number of potential game positions in Go outnumbers the total atoms in our universe – a number something along the lines of 208168199381979… you get the picture.
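To get a feel for the scale, here is an illustrative back-of-the-envelope check. It uses the commonly quoted estimate of roughly 10^80 atoms in the observable universe, and the fact that each of Go’s 361 points can be empty, black, or white, making 3^361 an upper bound on board configurations:

```python
# Illustrative scale check: 3**361 is an upper bound on Go board
# configurations (each point is empty, black, or white); the often-quoted
# estimate for atoms in the observable universe is around 10**80.
configurations_upper_bound = 3 ** 361
atoms_in_universe = 10 ** 80

print(configurations_upper_bound > atoms_in_universe)  # True
print(len(str(configurations_upper_bound)))            # 173 digits
```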

Anyway, this number is so big there’s little point in even trying to comprehend it. And because any game of Go can go in any number of directions, experienced players will tell you it’s all about feel. As the subconscious brain does its best to analyze such a multifaceted and chaotic board, it’s up to a player’s intuition to determine the best course of action.

This is why Go seemed such an unattainable task for machines to master: computers are supposed to follow orders, so how can that translate into something like intuition? Move 37 appeared entirely unintuitive – DeepMind’s programmers later calculated that a human player would have a 1 in 10,000 chance of choosing it at that stage of the game – and yet it proved so effective that Lee Se-Dol sensed its colossal genius right away, standing up and leaving the room almost immediately after realizing what had been done to him.

AlphaGo marked the first time humanity was shown that machines can beat us at abilities previously considered uniquely human, like feel and intuition. Soon, perhaps, this could extend to emotion, at which point we’d all have a real existential crisis on our hands.

The Brain as A Machine

Before we fall too deep into hypotheticals, though, let’s begin with an even more fundamental question: is a computer at all capable of imitating what we might see as ‘human thought’? For example, the ability to draw conclusions, to form ideas, to intuit, and so on?

Well, the seventeenth-century French philosopher Rene Descartes was one of the first thinkers to try to discern the fundamental difference between humans and machines. Descartes argued that, in principle, every aspect of the human body can be explained in mechanical terms: the heart as a pump, the lungs as bellows, and so on. That is, the body is merely a kind of sophisticated machine.

Rene Descartes

The brain, however, cannot be explained in such mechanical terms, he said. Our thinking, speech, and ability to draw conclusions are so different from what machines are capable of performing that they cannot be explained in engineering terms, and we must use words like ‘soul’, ‘intellect’, or similarly abstract concepts to describe them.

The invention of the computer at the end of the first half of the twentieth century cast a heavy shadow on this hypothesis. The computer, remember, is basically a machine that performs a long series of mathematical operations. The programmer tells the computer what sequence it must follow to perform any task – for example, solving a mathematical equation or drawing an image on a screen – and the processor executes the commands quickly. Clearly, the computer does not “think” in the human sense of the word: it doesn’t really solve the equation or draw the picture – it is merely a machine executing a sequence of commands. But outwardly, it certainly looks like the computer solved the equation and drew the picture. If you were to show a computer to someone who had never seen or heard of one before, they’d almost certainly assume it to be either a magical or divine object – or perhaps something controlled by a little person inside!

And why shouldn’t they? The nature of computers has caused many scientists and non-scientists alike to question the idea of a separate ‘mind’ or ‘soul’ in humans. If a relatively simple machine like a computer can seem to solve a problem, could our brain itself not be a kind of machine that performs calculations? In other words, is it possible that our brains are nothing more than very complex machines, and that all our thoughts and ideas are merely the result of calculations? If this hypothesis turns out to be correct, then the answer to the question I raised above may well be: yes, it is possible to construct a computer that will perform the required calculations and thus imitate the operation of the human brain.

Neurons and Perceptrons

To try to build such a machine, researchers’ first instinct was to turn to biology and neuroscience, where advances in neurology and brain physiology provided abundant inspiration. Through the twentieth century, neuroscientists uncovered many biological mechanisms that underpin the brain’s activity, foremost among them the workings of neurons – the nerve cells from which the brain is made.

How do neurons work? Neurons are tiny cells with long arms that connect to each other and transmit information through electrical currents. Each neuron has multiple inputs, and it receives electrical pulses from other neurons. In response to these pulses, the neuron produces its own electrical pulse at the output.

Neurons

In 1949, a researcher named Donald Hebb described one of the brain’s most important fundamental principles: the way learning takes place. His research suggested that if two neurons – A and B – are interconnected, and neuron A repeatedly fires electrical pulses, over time neuron B will begin to respond to those pulses more efficiently. The constant firing of neuron A causes neuron B to ‘learn’ that the information coming from neuron A is important, and should be answered by firing a pulse of its own.

Donald Hebb

This insight inspired Frank Rosenblatt, an American psychologist at Cornell University. In 1958, Rosenblatt conceived a new type of electrical component called the “Perceptron” (from the word ‘perception’). Rosenblatt’s perceptron was a kind of artificial neuron: an abstract model of the biological one. It had a number of inputs that received binary values – 0 or 1 – and one output that could also produce 0 or 1, a sort of positive or negative result.

Frank Rosenblatt & The Perceptron

The interesting detail is that Rosenblatt found a way to simulate the brain’s learning process as described by neurologists. Imagine the perceptron as an opaque black box, with several inputs and one output. Each input has a dial that can be rotated, giving that input a higher or lower weight, or importance, relative to the other inputs. Turn a dial all the way to the right, and every little signal at that input makes the perceptron set its output to ‘1’: this teaches the perceptron that the input is very important and should not be ignored. Turn the dial all the way to the left, and the perceptron will “learn” that this input is not important at all; no matter what happens, the perceptron will not respond to its signals. The weights at the perceptron’s inputs simulate the strength of the connections between biological neurons in a human brain.

Rosenblatt set up a device in his lab that contained several such perceptrons connected to each other in a kind of artificial neural network, and connected their inputs to four hundred light receptors. Rosenblatt placed letters, numbers, and geometric shapes in front of the light receptors, and by fine-tuning the weights at the inputs, he managed to ‘teach’ the network to identify the shapes and in response to produce signals that meant things like: ‘the letter A’ or ‘a square’.

How did he do that? By strengthening or weakening the connections between the perceptrons: each time the perceptrons did not correctly identify a form, he modified the weights slightly – if you will, played a little with the dials – until each perceptron learned that a certain combination of inputs yields one result, and another combination another result. The game with the weights is a way to fix the system’s errors, to tell it: “What you did just now was wrong. Here’s the right way to do it.”
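In modern terms, this procedure can be sketched in a few lines of code. The following is a minimal illustration of the perceptron learning rule, not a model of Rosenblatt’s actual Mark I hardware; the tiny task – fire only when both ‘light receptors’ are lit – is invented for the example:

```python
# A minimal perceptron sketch (an illustration of the learning rule, not
# Rosenblatt's actual Mark I hardware). The tiny task -- fire only when
# both 'light receptors' are lit -- is invented for the example.
def perceptron(inputs, weights, bias):
    # Fire (output 1) only if the weighted sum of inputs crosses the threshold.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total + bias > 0 else 0

def train(samples, epochs=20, lr=0.1):
    # The learning rule: after each mistake, nudge every 'dial' slightly
    # toward the correct answer.
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - perceptron(inputs, weights, bias)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(samples)
print([perceptron(x, weights, bias) for x, _ in samples])  # [0, 0, 0, 1]
```

Notice that no rule for the pattern is ever written down: the dials simply get nudged after each mistake until the outputs come out right.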

Perceptrons and Programs

In order to understand the importance of Rosenblatt’s experiment, we need to sharpen the fundamental distinction between how Rosenblatt’s perceptron machine operates and how a “normal” computer program operates.

A program is a sequence of commands defined by a human programmer: if the programmer wants the computer to recognize a square, for example, he must formulate rules that explicitly define the shape’s properties.

The perceptrons’ learning, on the other hand, did not happen by defining rules, but by presenting a variety of sample squares and strengthening or weakening the connections between the perceptrons until the right combination of weight values was found at each perceptron’s inputs.

These are two completely different approaches to problem solving: In the first, we dictate to the computer rules for solving the problem, such as “If a shape has four sides, all of equal length, and its angles are all 90 degrees, then it is a square.”

In the second, we provide it with examples of squares and a series of simple steps that, if executed, will gradually improve the network’s ability to identify squares.
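The contrast is easiest to see in code. A hand-written check for the first, rule-based approach might look like this (a hypothetical toy, with sides and angles given as lists of numbers); the second approach replaces all of these explicit conditions with learned weights:

```python
# Rule-based approach: a human spells out the properties of a square.
def is_square(sides, angles):
    return (len(sides) == 4                    # four sides...
            and len(set(sides)) == 1           # ...all the same length...
            and all(a == 90 for a in angles))  # ...and all right angles

print(is_square([2, 2, 2, 2], [90, 90, 90, 90]))  # True
print(is_square([2, 3, 2, 3], [90, 90, 90, 90]))  # False: a rectangle
```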

Notice that at no point did Rosenblatt tell the machine what a square is. He only “taught” the machine by playing with its dials – a little to the left, a little to the right – until it developed a system for recognizing squares.

This approach, of learning from examples, is very close to the way people acquire many skills: for example, parents teach their children to identify a square by pointing to a square and saying “square” again and again, or pointing to a shape and asking ‘what is it?’. If the child is wrong, the parent corrects them, and if the child is right, they get rewarded.

The new approach to computer programming offered by Frank Rosenblatt does, in many ways, mirror what would happen half a century later when AlphaGo offered a paradigm shift in the field of AI.

AlphaGo vs. Deep Blue

One way to make sense of why AlphaGo, a machine that got good at a board game, means so much to the future of technology, is to compare it with its spiritual predecessor: IBM’s Deep Blue machine.

Deep Blue - Curious Minds Podcast
Deep Blue

Deep Blue was sort of like the original AlphaGo: in 1997, it beat the world’s greatest chess player, Garry Kasparov, in no less dramatic fashion than Move 37. Kasparov, knowing his fate, stood up and walked off the television set with his arms out, frustrated, as if to say: “What do you want me to do?”

IBM’s Deep Blue was a thoroughly programmed artificial intelligence, given a rulebook on how best to play chess. Every move Deep Blue executed was weighed against the other possible moves it could have played at that stage of the match, keeping in mind the value of each piece and referencing pre-programmed chess knowledge as it checked 200 million positions per second. The algorithm sifted through a huge pool of possibilities and chose the move that would position it best moving forward. Because there are only a limited number of pieces, ways each piece can move, and squares to move to, Deep Blue’s processors were powerful enough to make such calculations.
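The heart of that approach is game-tree search. The sketch below is a generic minimax search over a made-up toy tree – an illustration of the idea, not IBM’s actual code (Deep Blue’s real search also used alpha-beta pruning and custom hardware):

```python
# Generic minimax over a tiny invented game tree (node names and values
# are made up for the example).
def minimax(position, depth, maximizing, children, evaluate):
    moves = children(position)
    # At the depth limit or at a leaf, fall back to a static evaluation
    # of the position -- in chess terms, something like material balance.
    if depth == 0 or not moves:
        return evaluate(position)
    scores = [minimax(m, depth - 1, not maximizing, children, evaluate)
              for m in moves]
    # The maximizing player picks the best score; the opponent, the worst.
    return max(scores) if maximizing else min(scores)

tree = {'root': ['a', 'b'], 'a': ['a1', 'a2'], 'b': ['b1', 'b2']}
leaf_values = {'a1': 3, 'a2': 5, 'b1': -2, 'b2': 9}

score = minimax('root', 2, True,
                children=lambda p: tree.get(p, []),
                evaluate=lambda p: leaf_values.get(p, 0))
print(score)  # 3: the maximizer picks branch 'a', whose worst-case reply is 3
```

In chess this exhaustive look-ahead is feasible; in Go, as described below, the tree is simply too wide.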

AlphaGo, however, was created with a different approach. Because the game of Go has such an incomprehensible number of possible combinations of moves available to players throughout the game – it doesn’t even have chess’s simplicity of direction – there’s no way even our most powerful computers can sift through every potential outcome at every stage. Instead, the programmers at DeepMind fed their algorithm thousands of real games played by professionals – amounting to some 30 million moves – and the program itself learned how the pros do it. It dutifully sifted through all of those positions and moves, recognizing patterns in how humans play the game and the strategies they employ, in order to play like the best. Once AlphaGo could play like a pro, it was given the task of playing itself. Over many iterations, the program improved incrementally as it tried to find ways to beat previous versions of itself, adopting strategies that worked to its advantage and scrapping those that resulted in losses.

In this way, whereas Deep Blue calculated all possible scenarios and opted for the most effective outcome, AlphaGo had to learn from the ground up through trial and error. Where Deep Blue draws on a database of information to mathematically chart a decisive path to victory, AlphaGo is made strong by a wealth of experience built up over time – so much so that it doesn’t have to look back at every single data point it has been fed, because a process has been honed: something akin to “skill”.

Early Skepticism

But scientists in the mid-20th century couldn’t see so far ahead, and there were researchers who were not swept away by the general enthusiasm for neural networks. Here is the place to take a step back and look at the broader picture of artificial intelligence research.

Over the years, several different approaches have been developed to solve the question of how to make a computer behave intelligently. Some researchers prefer logic-based methods, others prefer methods based on statistical calculations, and several other approaches have also been floated.

We’re not going to review all of these approaches in depth in this chapter, but I will note that there was no consensus among researchers as to which approach is absolutely superior to the others. Specifically for our purposes, in the 1960s and 1970s, many researchers believed that the attempt to imitate the biological mechanism of the brain would not lead us to artificial intelligence computers.

Why? For the same reason that aircraft engineers do not try to imitate the flight of birds. If there is more than one way to solve a particular problem, the biological path is not necessarily the simplest and easiest way, from an engineering point of view.

One of those skeptical scholars was Marvin Minsky. His objection stemmed from the fact that Rosenblatt’s perceptron had no solid theoretical basis – in other words, there was no mathematical theory that explained how the weights should be adjusted at the inputs to the perceptron to induce it to perform any task. The success of Rosenblatt’s system in identifying shapes did not impress him; he claimed that the forms and letters that Rosenblatt used to demonstrate the abilities of the perceptron were too simplistic and not a real challenge.

Marvin Minsky - Curious Minds Podcast
Marvin Minsky

In 1969, ten years after Rosenblatt first demonstrated the prototype of the perceptron, Minsky and another talented mathematician named Seymour Papert published a book called “Perceptrons”. The book analyzed the component invented by Rosenblatt in depth – and concluded that it would be impossible to develop artificial intelligence systems with it.

Why? The perceptron, they wrote, is an elegant component with impressive capabilities for its simplicity – but to build systems that can perform complex tasks, a lot of perceptrons must be connected together.

The Problem with Many Layers

Imagine a cake made of many layers: each layer has a number of perceptrons whose inputs are connected to the perceptrons in the layer above – and their outputs to the perceptrons in the layer below.

Layers of Artificial Neurons

The problem identified by Minsky and Papert lies in the inner layers of the cake. Rosenblatt’s system consisted of two layers: inputs, and outputs. When you have two layers of perceptrons it’s relatively easy to find the correct adjustment of the weights in order to reach the desired result. But if you have an ‘internal’ layer – that is, the one between the input layer and the output layer – it’s much harder to determine the proper weights.

To understand this, let’s imagine a group of children playing the game Telephone: You tell the first child a sentence, who whispers it to the next child in line, who whispers it to the next, and so on, until you reach the last child, who says aloud the sentence he heard. The first child in the row is, in our analogy, a perceptron in the input layer. The last child – the perceptron in the output layer. All the children in between are perceptrons in inner layers.

Now, suppose we have only two children in the game: you tell the first child the sentence, he tells the other child, who says it out loud. If something went wrong and the sentence did not come out right, it’s easy to find out where the problem was: either the first child passed it on wrong, or the other child misheard it. You find the problem child, explain what needs to be done – maybe hint at a possible candy waiting for a well-behaved child – and that’s it, basically. However, if we have ten children in the game and the sentence comes out wrong, how will we know where the disruption occurred? That’s much harder, and without this knowledge we cannot fix the system.

The same applies, in principle, to the perceptron machine. If we have only two layers – inputs and outputs – it is easy to find the desired weight adjustment. If we have internal layers, it is very difficult to know how to play with the dials. And without internal layers, Minsky and Papert determined, artificial neural networks are limited to simple tasks such as identifying basic shapes, and can never be used for more demanding tasks like facial recognition.

Backpropagation

But here and there were still some stubborn scientists who refused to abandon the artificial neurons. One of them was a doctoral student named Paul Werbos, who in 1974 found a solution to the problem of internal layers that bothered Minsky and Papert so much.

His solution was a method called ‘backpropagation’. It means, essentially, going backward through the neural network and modifying the weights of the connections between the artificial neurons: you start from the final output layer and go back one layer at a time, changing weights as you go so that the next time the inputs arrive, the final result will be correct. With each iteration of trial and error, backpropagation allows the system to trace back its errors, gradually correcting where it went wrong until it reaches optimal results. The most significant thing to understand about backpropagation is that it rests on a sound and proven mathematical theory: that is, it has the solid theoretical basis that Marvin Minsky was looking for.
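Here is a minimal sketch of the idea (an illustrative toy, not Werbos’s original formulation): a tiny two-layer network learns XOR – a task a single perceptron famously cannot solve – by propagating the output error back through its hidden layer. The network size, learning rate, and epoch count are arbitrary choices for the example:

```python
import math
import random

# A toy two-layer network trained with backpropagation (an illustrative
# sketch). It learns XOR, which a single perceptron cannot solve.
random.seed(0)

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Hidden layer: 2 neurons, each with 2 input weights + a bias.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
# Output neuron: 2 hidden weights + a bias.
w_o = [random.uniform(-1, 1) for _ in range(3)]

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

def total_error():
    return sum((t - forward(x)[1]) ** 2 for x, t in data)

before = total_error()
lr = 0.5
for _ in range(5000):
    for x, t in data:
        h, o = forward(x)
        # Backward pass: start from the error at the output layer...
        d_o = (o - t) * o * (1 - o)
        # ...then propagate it one layer back, to each hidden neuron.
        d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
        # Nudge every weight in the direction that shrinks the error.
        for i in range(2):
            w_o[i] -= lr * d_o * h[i]
            w_h[i][0] -= lr * d_h[i] * x[0]
            w_h[i][1] -= lr * d_h[i] * x[1]
            w_h[i][2] -= lr * d_h[i]
        w_o[2] -= lr * d_o

print(f"total error before: {before:.3f}, after: {total_error():.3f}")
```

The crucial lines are the two deltas: the output error is translated, layer by layer, into a correction for each internal dial – exactly the credit-assignment step Minsky and Papert said was missing.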

But Werbos was only a doctoral student, and his ideas received little attention from other researchers. It would take more than ten years for the idea of backpropagation to be independently rediscovered by several researchers at around the same time, among them Geoffrey Hinton, David Rumelhart, and James McClelland.

Learning a Language

Rumelhart and McClelland were psychologists by training and came to artificial neural networks not from computer science, but from the study of human language. They took part in the academic debate about how children learn to speak. Language is a distinctly human ability that sets us apart from the rest of the animal kingdom, so we can conclude that something in the structure of the human brain is unique in the natural world in this respect. The question is – what?

James McClelland

Most researchers speculated that language and speech rules are “encoded” within the brain in some way, like hidden software somewhere in your head. David Rumelhart and James McClelland advocated a different view. They believed there is no ‘rulebook’ in the brain for learning language; rather, our ability to learn a language is based on how neurons interact. In other words – there are no rules, there are only connections.

To prove their claim, the two psychologists turned to artificial neural networks. In 1986, they built a computerized model of a multi-layer neural network and, using backpropagation, taught it to produce the past tense of English verbs – for example, work-worked, begin-began, and so on. The network received a present-tense verb and had to guess its past-tense form. As anyone who has learned English knows, this isn’t trivial: some verbs take the ‘ed’ suffix – work-worked, carry-carried – while others have unique past forms – sing-sang, begin-began, and so on.

When small children learn to speak English, there is a phenomenon that repeats itself in almost all cases. At first, the child memorizes the past form of several verbs and says them properly. But then the child discovers the ‘rule’ of adding ‘ed’ at the end, and out of enthusiasm begins to add ‘ed’ even to verbs for which doing so is incorrect: for example, singed, begined and similar errors. Only after constant correction does the child understand their mistake and learn when to add ‘ed’ and when not to.

Amazingly, Rumelhart and McClelland’s artificial neural network made the exact same mistake children do. At the beginning of the learning process, the system correctly predicted the past forms of verbs – but then, as more and more examples were fed into it, the network identified the rule of adding ‘ed’ – and, just like human children, it began to err and add ‘ed’ where it did not apply. Only when the researchers introduced more and more examples of verb conjugation did the system learn when to add ‘ed’ and when not to – and its predictive ability improved accordingly.

In other words, Rumelhart and McClelland demonstrated how a network of neurons can quite literally learn a characteristic of human language. Not only that: the artificial neural network, without any outside reason to do so, took the exact same path a human brain would to get there.

The real question now becomes: if this is also how babies learn language, do our brains perform their own version of backpropagation? Now that we’ve created machines that act like brains, maybe we need to ask less “Can computers think like humans?” and more “Do humans think like computers?”

To probe further, we’ll pick up in the next episode with the one advance that shot artificial intelligence research into the stratosphere, return to our tense series between AlphaGo and Lee Se-Dol, and interview the CEO of a deep learning company.

Part II Coming Soon!

Music Credits

Echoes of Time by Kevin MacLeod

Readers! Do You Read by Chris Zabriskie

Candlepower by Chris Zabriskie

Dopplerette by Kevin MacLeod

Sources And Bibliography

http://plato.stanford.edu/entries/computational-mind/
https://ecommons.cornell.edu/bitstream/handle/1813/18965/Rosenblatt_Frank_1971.pdf;jsessionid=4896F3469B3DA338CC28C46405E93DD7?sequence=2
http://csis.pace.edu/~ctappert/srd2011/rosenblatt-congress.pdf
https://web.csulb.edu/~cwallis/artificialn/History.htm
http://web.media.mit.edu/~minsky/papers/SymbolicVs.Connectionist.html
https://www.technologyreview.com/s/546116/what-marvin-minsky-still-means-for-ai/
http://www.kdnuggets.com/2016/09/9-key-deep-learning-papers-explained.html/3
http://www.iep.utm.edu/connect/
http://plato.stanford.edu/entries/connectionism/
http://nautil.us/issue/40/learning/is-artificial-intelligence-permanently-inscrutable
https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/
http://playground.tensorflow.org/
http://blog.kaggle.com/2014/12/22/convolutional-nets-and-cifar-10-an-interview-with-yan-lecun/
https://plus.google.com/+YannLeCunPhD/posts/Qwj9EEkUJXY
https://www.reddit.com/r/MachineLearning/comments/25lnbt/ama_yann_lecun
http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/facebook-ai-director-yann-lecun-on-deep-learning
https://www.technologyreview.com/s/540001/teaching-machines-to-understand-us/
http://www.macleans.ca/society/science/the-meaning-of-alphago-the-ai-program-that-beat-a-go-champ/
http://www.andreykurenkov.com/writing/a-brief-history-of-neural-nets-and-deep-learning/
http://science.sciencemag.org/content/287/5450/47.full
http://www.blutner.de/NeuralNets/NeuralNets_PastTense.pdf
http://thesciencenetwork.org/programs/cogsci-2010/david-rumelhart-1


The Awful and Wonderful History of the Mir Space Station | Curious Minds Podcast

The Mir Space Station was a true Soviet engineering wonder, an achievement comparable with the US landing on the Moon. Yet in its later years, Mir survived some horrific & hair-raising accidents…

Download mp3
Subscribe: iTunes | Android App | RSS link | Facebook | Twitter

The Awful and Wonderful History of the Mir Space Station

(Full Transcript)

Written By: Ran Levi

Since the 1950s, the Soviet Union and the United States have competed in what was called the Space Race: each of the superpowers tried to prove its technological, moral, and economic superiority through its achievements in space exploration.

The successful landing of the Apollo 11 on the moon on July 20, 1969, was an important achievement for the United States in the space race. The Soviet Union was looking for a challenge that would restore some of its lost prestige.

The challenge the Russians chose was the establishment of a space station. A space station is a kind of ‘semi-permanent structure’, which allows for a longer space stay than a spacecraft like Apollo or even the Space Shuttle. If a spaceship is like a ship that goes on short voyages, then the space station is like a barge that one can live on for months or even years.

The First Space Station

The first space station the Soviet Union launched was Salyut 1, in April 1971 – ‘salyut’ means ‘fireworks’ in Russian. Salyut 1 was launched unmanned, and three days later the Soyuz 10 spacecraft took off with three cosmonauts who were to be the station’s first occupants.

Salyut 1

After a 24-hour journey, Soyuz 10 approached Salyut 1 and prepared to dock. It is easy to imagine the excitement the cosmonauts felt in those moments, as they prepared to open a new chapter in the history of human progress – but a malfunction in the automatic docking system left a gap of 90 mm – about three and a half inches – between the two craft, preventing a perfect link. That tiny space was enough to keep the cosmonauts from opening the connecting door, and after five hours of repeated attempts to solve the problem, they had no choice but to give up.

Another crew that later took off eventually managed to open the stubborn door – but Salyut‘s story illustrates the enormous technical challenge of launching and operating a space station. Outer space, with its extreme vacuum and temperature, is so hostile to humans and human technology that even a simple operation like opening a door is not trivial at all.

Salyut’s story gives us a point of reference as we tell the story of the Mir space station, home to dozens of astronauts for fifteen consecutive years. Mir survived not only harsh environmental conditions and hair-raising accidents, but also extreme political and social changes on Earth.

A Third Generation Space Station

Salyut 1 was a so-called ‘first-generation space station’: it was launched with all the supplies and equipment it needed, and when the equipment ran out, the station’s life was over. Salyut 6 and 7 were second-generation stations. They had two docking ports: one for the spacecraft that carried the cosmonauts, the other for unmanned supply ships that brought a steady flow of equipment, water, and food. This resupply extended a station’s useful life significantly.

The Americans also had a space station of their own: Skylab. It was launched in 1973 and was, on paper, technologically equivalent to Salyut 6 and 7. Unfortunately, from its very early days, Skylab suffered many technical faults in its power supply and docking points, and the Americans – the winners of the race to the Moon – found themselves lagging behind the Soviet Union when it came to long stays in space.

Skylab

Mir was a technological leap: it was planned from the start to be modular – that is, it could be expanded gradually through modules launched separately from one another. This approach has many advantages: each module can be optimally designed for a specific task, such as scientific experiments, communication, or energy supply. Another advantage is that each module can be launched separately instead of the entire station at once, so the launch rocket can be relatively small and cheap. Toward the end of its life, Mir was made up of seven different modules connected at odd angles to each other, looking a bit like a toy built by a very incompetent child – or a creative, thought-provoking child, if it was one of my children, of course.

Mir Space Station - Curious Minds Podcast

Mir’s first module, called the Base Block, left Earth in 1986. Two additional modules joined it in 1989 and 1990. Launching a space station into orbit is a costly business: Mir cost the Russians over $4 billion. Why did the Soviet Union invest so much money in a space station instead of, say, launching a series of regular spacecraft, like those that stay in space for a few days and return to Earth?

Part of the answer is, as you might imagine, national patriotism. The Soviet Union was very proud of its advanced space station and invited many countries – especially its allies, of course – to send their own cosmonauts to Mir. For example, Syria and Afghanistan sent their first cosmonauts to Mir in 1987 and 1988, respectively.

From a more practical angle, a space station allows scientists to perform experiments that cannot be done on Earth – or even on a short spaceflight.

Bone Mass Loss in Free Fall

‘Free fall’ is the sensation of weightlessness created when we fall with an acceleration equal to the acceleration of gravity. Almost all of us have experienced it on some occasion: jumping down from a high place, for example, or riding a roller coaster. Many phenomena – from the behavior of animals and plants to chemical processes – unfold quite differently in free fall than under normal gravity, and studies in this field may have interesting and unpredictable consequences.

Most of the physiological changes that occur in our body due to a long stay in free fall are reversible, and after returning to Earth, the body will return to its original state after a few days of good rest. One exception, in this respect, is the loss of bone mass that can cause long-term damage and can even be life-threatening.

We are used to thinking of our bones as permanent and unchanging, but that is not so. There are two types of bone cells: the first type, osteoblasts, constantly builds bone, while the other type, osteoclasts, breaks it down. The two work in balance, so a stable equilibrium forms and bone mass remains constant. In free fall, however, this balance is disturbed: for some unknown reason, the bone-building cells slow down while the bone-degrading cells continue at the same rate. The result is a gradual loss of bone mass, similar to the osteoporosis common in older people. But while an elderly adult loses about one and a half percent of his bone mass per year, an astronaut in free fall loses a percent and a half… a month.
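To put those two rates side by side, here is a quick back-of-the-envelope sketch. The 1.5% figures are the ones quoted above; the simple compounding model is just an illustration, not a medical formula:

```python
# Compare bone mass loss: an elderly person on Earth vs. an astronaut
# in free fall. Both lose ~1.5% per period, but the periods differ:
# a year on Earth versus a month in orbit.
def remaining_mass(loss_per_period: float, periods: int) -> float:
    """Fraction of bone mass left after compounding loss each period."""
    return (1 - loss_per_period) ** periods

# One year on Earth for an elderly person: one period of 1.5% loss.
elderly_after_year = remaining_mass(0.015, 1)

# One year in free fall: twelve monthly periods of 1.5% loss each.
astronaut_after_year = remaining_mass(0.015, 12)

print(f"Elderly, after a year:   {elderly_after_year:.1%} of bone mass left")
print(f"Astronaut, after a year: {astronaut_after_year:.1%} of bone mass left")
```

Compounded over twelve months, a year in orbit leaves only about 83% of the original bone mass, against roughly 98.5% for the elderly patient on Earth.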

In short space flights of a few weeks, loss of bone mass is not a serious problem, and the body recovers after returning to Earth. But flights to other planets, as mentioned, may take months or even years. By the time the astronauts reach their destination, their bones may be so brittle that any impact could cause cracks and dangerous fractures, preventing them from performing the task they were sent for. In other words, finding a way to prevent or slow the loss of bone mass is a necessary step on the road to conquering space.

In some cases, the cosmonauts of Mir spent long months in space. Cosmonaut Valery Polyakov, for example, holds the current world record for a single space stay: 437 consecutive days aboard Mir. These long stays in free fall allowed scientists to test different techniques for coping with bone mass loss and muscle weakness: the cosmonauts wore special elastic suits, took supplements and experimental drugs, and exercised two to four hours each day on bicycle and running equipment strapped down with bungee cords. Exercise, in particular, had a positive effect on muscle strength, but not on bone mass. Research on the subject continues today aboard the International Space Station, but no solution to the problem has yet been found. The researchers hope that a breakthrough, if there is one, will also help osteoporosis patients on Earth.

The Last Citizens of the Soviet Union

More than a hundred cosmonauts visited Mir during its fifteen years, but the most famous is probably Sergei Krikalev. Krikalev began his career as a mechanical engineer in the Soviet space industry, but in 1985 he was chosen to be a cosmonaut. He visited Mir for the first time in 1988 and spent nearly six months there. By 1991, Mir was approaching the end of its planned life span of five years, but in reality its technical condition was good enough to continue operations. Krikalev volunteered for another mission, and in May 1991 he took off again for the space station.

Sergei Krikalev - Curious Minds Podcast
Sergei Krikalev

When Krikalev left Soviet soil, the mighty Soviet empire was already in the throes of severe internal shocks. Mikhail Gorbachev’s political power was diminishing, Boris Yeltsin was sweeping public opinion, and in the Baltic republics more and more voices were calling for rebellion against the central government. Krikalev was due to return to Earth five months later, in October 1991, but pressure exerted by Kazakhstan on the Russian space agency led to a change of plans. The Kazakhs demanded that a cosmonaut of Kazakh origin be sent to the space station, and were unwilling to compromise. Since all takeoffs and landings took place in Kazakhstan, the Russians had no choice: they replaced one of the cosmonauts of Soyuz TM-13, the spacecraft about to take off for Mir, with a Kazakh cosmonaut. Unfortunately, the Kazakh cosmonaut was not trained for a long stay on a space station, so Krikalev was asked to remain on Mir for another stretch, accompanied by another cosmonaut named Alexander Volkov. Interestingly, someone at the space agency forgot to update the military bureaucracy on this development: Krikalev was supposed to report for military reserve duty, and a warrant for desertion was almost issued – before the army realized that their reserve soldier was not even on Earth.

Two months later, on December 26, the Soviet Union disintegrated. Many in the West feared that in the chaos of the crumbling empire, Mir would be forgotten and abandoned. The cosmonauts were totally dependent on a steady supply of water, air, and food brought by other spacecraft, and it was hard to believe that anyone in Russia could arrange the launch of a supply ship under the circumstances. The very fact that Krikalev remained on Mir instead of returning to Earth intensified the feeling that he and his colleague had been abandoned in orbit around the Earth.

In practice, however, Krikalev and Volkov were not abandoned and were never in danger. The Russian space agency continued to function admirably, and the plans for a replacement crew were never halted. Besides, Mir always had a ‘lifeboat’: a Soyuz spacecraft anchored to the station, with which the cosmonauts could return to Earth whenever they chose. Krikalev later said that he followed the events on the ground very closely, but was never afraid for his life.

In March 1992, the replacement crew was launched, and Krikalev and Volkov returned to Earth. The world they came back to was very different from the one they had left: the Soviet Union had disappeared, Kazakhstan had become an independent state, and even Krikalev’s hometown, Leningrad, had changed its name back to St. Petersburg. Krikalev and Volkov still wore the Soviet Union’s insignia on their uniforms and carried their Communist Party membership cards in the pockets of their suits. They entered history as “the last citizens of the Soviet Union.”

Sergei Krikalev flew to space four more times and long held the record for the most cumulative time in space: 803 days over six flights. Mir, meanwhile, continued to be manned continuously and received supplies on an ongoing basis. But even though it hovered 300 km (186 mi) above the ground, Mir could not escape the consequences of the political earthquake occurring below. The upheaval on the ground had very practical implications for the operation of the station – as well as for its level of maintenance.

A New Kind of Peace

The breakup of the Soviet Union opened a window of opportunity for an interesting cooperation between NASA and the Russian space agency. The Americans wanted experience with long stays in space, in preparation for the International Space Station, which was due to be assembled at the end of the 1990s. The Russians, on the other hand, desperately needed funding. Mir met the needs of both sides.

In 1992 President George H.W. Bush and Boris Yeltsin signed a cooperation agreement: American astronauts would join cosmonauts on Mir, in exchange for several hundred million dollars to be transferred to the Russian space agency.

The new cooperation was a great success. Space shuttles were launched to Mir, and American funding helped the Russians launch four new modules into space, significantly improving the space station. For those who grew up in the Cold War era, the images of an American space shuttle hovering over the Earth alongside a Russian space station were amazing and full of hope, demonstrating better than any diplomatic treaty the change in relations between the two powers.

The Shuttle Atlantis Docked To Mir - Curious Minds Podcast
The Shuttle Atlantis Docked To Mir

Mold And Fungus

But time cannot be stopped, and the passing years began to leave their mark on the aging space station: the solar panels that supplied its electrical energy gradually lost efficiency, and random malfunctions cropped up in its computers and other devices. Moreover, Mir was like a small, crowded ship that hadn’t entered port or been thoroughly cleaned in over a decade. I’ve been on ships in my time: trust me, you don’t want to live in such a place.

The Cramped Interior of Mir - Curious Minds Podcast
The Cramped Interior of Mir

For example, Jerry Linenger arrived at Mir in January 1997. Linenger was the fourth American astronaut on the space station, joining two cosmonauts who had been in space for several months. In interviews and in his book, Linenger described the smell of the air aboard Mir: the stink of mold and fungus. No one was surprised. When we talk about the dangers and challenges of life on a space station, we usually think of the harmful effects of cosmic radiation or the loss of bone mass we discussed in the previous episode – but few are aware of the dangers of fungal infestation. The mold and fungi the cosmonauts brought with them from Earth found homes in hidden corners, behind tubes and electronic panels, and grew unchecked for years. The fungi penetrated tiny cracks and widened them, feeding on plastic and weakening metals. Since the station could not be ventilated, nor its equipment dismantled and cleaned thoroughly, the accumulated damage could be significant. Linenger, a doctor by training, examined the health of the crew and found no evidence of medical harm from fungal infections, but admitted there were dark corners of the space station that no one was willing to put their hands into…

Jerry Linenger - Curious Minds Podcast
Jerry Linenger

The Fire on Mir

In February 1997, a few weeks after Linenger’s arrival, the spacecraft Soyuz TM-25 docked at the space station. Its three cosmonauts, two of them Alexander Lazutkin and Vasily Tsibliyev, were to replace the current crew and remain with Jerry Linenger for several months. February 24 was a Russian holiday, and the six cosmonauts celebrated with a rich dinner that included dishes rarely seen on the station, such as caviar, hot dogs, and cheese. They sang, played guitar, told jokes, and spent a particularly enjoyable evening.

Mir was designed to accommodate three cosmonauts on a regular basis, and its oxygen-recycling facilities could supply only three people. Six laughing, singing cosmonauts put an unusual strain on that supply, and at some point the crew felt the oxygen level in the air beginning to fall. This was not an unusual occurrence on Mir during crew handovers, and for exactly this purpose a backup generator was installed on the station to produce extra oxygen. Oxygen is produced by inserting a chemical ‘cassette’ into a special furnace: heating the compound to about 700 degrees Celsius (about 1,300°F) decomposes it and releases the oxygen bound within.
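The episode doesn’t name the compound in the cassettes; Mir’s oxygen ‘candles’ are widely reported to have been based on lithium perchlorate, so treat the exact chemistry below as background rather than as part of the story. On heating, it decomposes into lithium chloride while releasing oxygen:

```latex
% Reported (assumed) chemistry of the oxygen cassettes: each molecule of
% lithium perchlorate yields two molecules of oxygen gas when heated.
\mathrm{LiClO_4} \;\xrightarrow{\;\approx 700\,^{\circ}\mathrm{C}\;}\; \mathrm{LiCl} + 2\,\mathrm{O_2}
```

The appeal of such ‘candles’ is that they store a lot of oxygen in a compact, solid, shelf-stable form that needs no power to release it – only heat.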

Cosmonaut Alexander Lazutkin left the dinner table and floated into the corridor where the backup generator was installed. He took a new cassette from the cupboard, inserted it into the furnace and turned the handle. It was a perfectly routine operation, performed at least two thousand times over the years, if not more. But as he turned to return to the dinner table, he heard an unfamiliar hiss behind him. He spun around and froze. A huge jet of flame – a small volcano, as he later described it – was bursting out of the generator, and molten metal was spattering onto the opposite wall of the corridor.

Alexander Lazutkin - Curious Minds Podcast
Alexander Lazutkin

A fire inside a sealed vessel like a space station is far more dangerous than a similar fire in a closed building on Earth: the thick smoke and hot air have nowhere to go, and the flames can consume all the oxygen in the air within minutes. Given this dangerous potential, you’d expect the crew to be well trained in fighting fires, right? They weren’t. Part of the reason was the poor design of the fire extinguishers mounted on the station’s walls, which made practicing with them impossible: the extinguishers were designed so that the moment one was removed from its mount, a chemical process began that rendered it useless three months later.

Predictably, the lack of readiness worked against the cosmonauts in those fateful moments. Lazutkin tried to smother the fire with a wet towel, but within seconds the entire corridor was enveloped in thick smoke. The rest of the cosmonauts struggled to don their smoke masks, some of which were faulty. Some of the fire extinguishers secured to the walls could not be released. The flames, about half a meter (a foot and a half) tall, were lapping at the station’s walls; if the metal melted and a hole opened in the hull, the fire would become the least of the cosmonauts’ concerns…

And, unfortunately, the fire broke out in the corridor leading to one of the two escape spacecraft, blocking access to it. That meant that, at best, only three cosmonauts could escape back to Earth, while three would remain on the station.

Fortunately, the cosmonauts managed to open three fire extinguishers and were able to take control of the flames before they spread to the rest of the station. Thick smoke filled the station for hours until the air circulation system was able to filter it completely.

The Russian space agency received the reports of what had happened on Mir, but outwardly projected an atmosphere of “business as usual”: in the reports transmitted to NASA, the incident was described as a small fire lasting a few dozen seconds. Only when Jerry Linenger returned to Earth and told his story did his managers at NASA understand how close the station had come to a terrifying disaster. Under American pressure, a comprehensive investigation was conducted, and the cause of the oxygen generator’s failure was found. Workers at the manufacturing plant had used flexible latex gloves to protect their hands from the cassettes’ chemical compound; a piece of latex found its way into one of the cassettes, and during heating it disrupted the chemical reaction and made it unstable. New safety and quality-assurance procedures were introduced at the plant, and NASA hoped that Mir’s troubles were over. They were wrong.

Mir’s Collision with Progress

A few months passed. Jerry Linenger left Mir and was replaced by astronaut Michael Foale. With Foale were Lazutkin and Vasily Tsibliyev, the cosmonauts who were there during the fire incident. In the 1990s, Russia’s economic situation was extremely difficult, and the money the Americans poured in was not enough to solve the budget problems of the Russian space agency.

Vasiliy Tsibiliyev - Curious Minds Podcast
Vasiliy Tsibiliyev

The Progress cargo ship was an unmanned spacecraft that docked at Mir every few months, bringing supplies from Earth. Progress was equipped with an automatic system that let it dock with the station autonomously – but that system was manufactured in Ukraine, and the Ukrainians demanded a great deal of money for it. The Russian space agency therefore switched to a cheaper manual docking system called TORU. TORU let cosmonauts on the station steer the approaching spacecraft by remote control: a video camera on the ship broadcast images to a screen in the station, and the operator navigated the craft using joysticks, like a video game.

In early June 1997, a Progress spacecraft docked at the station. As always, the cosmonauts unloaded the equipment it had brought and filled it with the garbage accumulated over recent months. This time, the control center decided to use the opportunity to test the TORU manual docking system before Progress was sent on its way. They ordered Tsibliyev, the station’s commander, to undock Progress from Mir, fly it by remote control several kilometers away, turn it around and re-dock it at the station.

Tsibliyev was not at all enthusiastic about the idea. A few months earlier he had attempted the same exercise and run into serious difficulties: a technical malfunction hit the video camera on the spacecraft, and combined with his inexperience with the manual system, the failure caused the ship to miss the station and abort the docking. Tsibliyev believed the dangerous exercise should not be carried out without comprehensive preparation, but – perhaps because of the rigidly hierarchical character of the Soviet military system – he did not speak up to his superiors.

The exercise began on June 25. Tsibliyev grabbed the TORU steering handles and flew Progress away from the station, then turned it on its axis and began flying it back. This time the Progress video camera worked properly, but the images it relayed to the screen in front of Tsibliyev were almost useless: Progress was photographing Mir from above, against the backdrop of the Earth, and Tsibliyev could not make out the station against the white clouds below it. He asked Lazutkin to look out the windows and report the position of the approaching supply craft.

With no usable video image, Tsibliyev misjudged Progress’s approach speed. Lazutkin shouted that the ship was closing in too fast. Tsibliyev fired Progress’s braking rockets, but they had little effect. Lazutkin told Michael Foale, the American astronaut, to prepare the lifeboat, and looked back out the window at the approaching spacecraft.

“I watched the black, speckled body of the spaceship pass under me, bent down to look closer, and at that moment there was a tremendous thump, and the whole station shook.”

Progress hit one of Mir’s solar panels, then rebounded and struck the station itself. The impact point was on a module called Spektr, which contained electrical systems providing about half of the station’s power supply. A few seconds after the strike, the crew heard the sound no cosmonaut ever wants to hear: the low-air-pressure alarm. But even without the alarm, the cosmonauts immediately knew they were in trouble; they could feel the air pressure falling, and the reason was clear: a hole had opened in the side of the station.

Mir's Collision Damage - Curious Minds Podcast
Mir’s Collision Damage

Now the cosmonauts were racing against the clock: they had to find the source of the leak and seal it in a matter of minutes. If they failed, they would have to rush to the lifeboat and hope to escape the station in time.

Lazutkin and Foale identified the impact point on the Spektr module and decided to seal the hatch leading to it, disconnecting the module from the rest of the station. Unfortunately, eighteen power cables ran through the hatchway and prevented them from closing it. Having no choice, they decided to cut the cables on the spot, without even shutting off the electricity first. Under a shower of dangerous sparks, the two cut the cables and managed to close the hatch to Spektr before the air ran out. The initial emergency was dealt with, but it was only the beginning. The impact had sent the station into an uncontrollable spin, which meant that even the solar panels undamaged in the collision were no longer pointed at the sun and supplied no electricity. The backup batteries lasted only half an hour before they were depleted.

Michael Foale - Curious Minds Podcast
Michael Foale

A space station is, in principle, a noisy place: at any given moment, the station has numerous devices, air conditioners, and installations buzzing and beeping. But now, without electricity, Mir was quiet … quiet as a cemetery. Michael Foale described it this way, quote:

“Within twenty to thirty minutes of the collision, every piece of equipment at the station stopped working, the station was completely quiet. It was a very strange feeling, to be in a space station with no sound but your own breathing. And then it started to be cold, very cold.”

Unable to communicate with the control room on Earth, the cosmonauts had to find a way to stop Mir’s spin and point the solar panels back at the sun. At their disposal was the Soyuz spacecraft anchored to the station, whose thrusters could be used to alter the spin – but its fuel was very limited, and if they used it all up, they would be unable to abandon Mir and return to Earth if necessary. And without computers or navigation instruments, the cosmonauts did not know when to fire the rockets, or for how long.

Having no other choice, Michael Foale and his colleagues went back to basics. Like the sailors who crossed the seas thousands of years ago, the cosmonauts peered through the station’s windows at the stars, trying to calculate the spin rate with basic high-school math. After several erroneous rocket firings, they finally managed to slow the spin enough for the panels to absorb a few minutes of sunlight once every hour and a half.
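As an illustration of just how basic that math can be, estimating a spin rate from a star drifting past a window boils down to a single division. The numbers below are hypothetical, not taken from the actual incident:

```python
# Illustrative sketch (hypothetical numbers): a station's spin rate can be
# estimated by timing how far a star appears to drift past a window.
drift_degrees = 9.0      # hypothetical apparent drift of a star
elapsed_seconds = 60.0   # hypothetical time over which it was observed

spin_rate = drift_degrees / elapsed_seconds    # degrees per second
full_turn_minutes = 360.0 / spin_rate / 60.0   # time for one full rotation

print(f"Spin rate: {spin_rate:.2f} deg/s")
print(f"One full rotation every {full_turn_minutes:.0f} minutes")
```

With these made-up numbers, the station would complete a full rotation every forty minutes; timing several stars and averaging the results would reduce the error.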

With partial power restored, the cosmonauts brought Mir back to health piece by piece. The next step, after re-establishing communications with the control room on the ground, was to restore electricity to a module called Kvant-2, because that was where the toilets were… After thirty exhausting and taxing hours, Mir returned to life. The Spektr module, however, remained sealed off from the rest of the station for good, and the cosmonauts later made several internal spacewalks into the punctured module to retrieve equipment and survey the damage.

The flight controllers at the Russian space center blamed Vasily Tsibliyev for the accident, and even fined him by docking his wages. They claimed that Tsibliyev had performed the Progress docking exercise carelessly, deviated from procedure, and skipped vital checks. The cosmonauts, for their part, were convinced that the negligence lay with the ground crews: the order to perform the exercise had been given without letting Tsibliyev properly practice the manual steering system, and without anyone working out the best maneuver for the docking.

A New Threat for Mir

No matter whose fault the collision was, the dangerous accident marked the beginning of the end for Mir, a station by then more than ten years old. Beyond the ever-increasing danger the aging station posed to its cosmonauts, a new threat appeared in the form of the International Space Station – a joint venture of space agencies from several countries, including the United States and Russia, each of which was to contribute its own budget to the project. Russia, already in a difficult financial situation, simply did not have enough money to finance two space stations.

In 1998, the Russian space agency announced that Mir would be brought back to Earth in June 1999. Many Russian citizens still regarded Mir as a source of national pride: they tried to fight the decision and even raised public funds to continue its operation. The result was a $15,000 donation – barely enough to fund the station for two or three days.

Another hope for Mir’s salvation came in the form of a 51-year-old British businessman named Peter Llewellyn, who promised to pay $100 million for a week-long visit to the station. The Russian space agency had almost begun counting the money when investigative reports in the Western press poured cold water on the whole thing: Llewellyn, the papers claimed, was a professional con artist who regularly made grandiose promises and deceived his business associates. These claims were never officially confirmed, as far as we know, but the fact is that Llewellyn failed to raise even $10 million in advance payments, and the deal was never carried out. In August 1999 the last crew left Mir and returned to Earth. For the first time in over ten years, the station was unmanned.

But just when everything was almost over, and Mir had one solar panel in the grave, as the cliché goes, an unexpected hero appeared to save it. Walter Anderson was an American businessman who made his fortune in telecommunications and, in the 1980s, decided to devote his time and energy to developing private space initiatives. At that time, we should remember, space exploration was the exclusive domain of government agencies, and for good reason: launching a manned spacecraft was so expensive that it was hard to imagine a commercial company making the necessary investment, let alone turning a profit. Nevertheless, Anderson worked tirelessly to advance his vision, and at the end of the 1980s helped finance the establishment of the International Space University, which is still active today in Strasbourg, France, with branches in many countries around the world.

MirCorp Logo - Curious Minds Podcast
MirCorp Logo

In 1999, Anderson and a partner set up a company called MirCorp to do something never done before: operate Mir as a private, commercial space station, independent of governments. MirCorp was jointly owned by Anderson and his partners and by RSC Energia, a Russian spacecraft manufacturer. Energia was responsible for station maintenance and operation, while Anderson and his staff managed the business and commercial side.

Anderson did a great job, and within a few months MirCorp closed its first successful deal: Dennis Tito, an American businessman, wanted to become the first tourist in space and was willing to pay several million dollars for the privilege. Another agreement was signed with the NBC television network for a reality show called ‘Destination: Mir’, in which participants would go through trials and be eliminated if they failed. A group of volunteers was to undergo strenuous astronaut training, at the end of which one of them would fly to Mir.

When Walter Anderson paid the Russian space agency an advance of seven million dollars, the Russians understood that he really meant business – and went into action. Mir was already in a low orbit, being prepared for its return to Earth, so an unmanned spacecraft was sent to push it into a higher orbit. In April 2000 a historic milestone was reached: two cosmonauts hired by MirCorp flew to Mir – the first time a private company had sent a crew into space. The two stayed aboard for seventy-three days, fixed problems, and prepared the station for its future as a television studio and a hotel for wealthy tourists.

But not everyone was happy with this success. Walter Anderson’s supporters claimed that NASA had begun exerting enormous political pressure on the Russian space agency to sever its ties with MirCorp and stop commercial space cooperation, though we have found no official support for these claims. According to various press reports, however, NASA worried that the Russians were so eager to make money from private space initiatives that they would abandon their commitment to the International Space Station.

Conspiracy theory or not, MirCorp failed. None of its business ventures took off – though Dennis Tito did take off a year later, to the International Space Station, fulfilling his dream of becoming the first tourist in space. A few years later Walter Anderson was arrested and charged in what was described as the largest tax-evasion case in United States history, and was sentenced to eight years in prison. MirCorp closed its doors in 2003. Without funding and without a future, Mir was a dead spaceship walking, and its fate was sealed.

How To Kill a Space Station?

So how do you ‘kill’ a space station? In principle, it’s simple: let it fall into the atmosphere and let friction with the air burn it up and break it into tiny pieces. But when it comes to a big, heavy object like a space station, nothing is that simple.

Skylab, the US space station, is a case in point. Skylab was as big as two buses and weighed more than seventy tons, and NASA engineers feared that friction with the atmosphere would not be enough to break it up – it might reach the ground largely intact. It was also impossible to predict with certainty where on Earth it would fall. This combination of factors led many to fear that Skylab would crash in a densely populated urban area. The chances were very slim – less than one in a thousand – but as always in such cases, the media fanned the flames and covered the story in detail. In the end, when Skylab returned to Earth on July 11, 1979, it broke up only partially: large parts of it, including a huge oxygen tank, survived reentry and crashed in Australia, not far from a small town called Esperance. No one was hurt, but the Australians fined NASA $400 for littering in a public area. NASA ignored the fine, but in 2009 a California DJ organized a fundraising campaign among his listeners, sent a check to the Australian town, and finally closed the thirty-year ordeal.

The Russian engineers responsible for Mir’s destruction directed the station to crash into the Pacific, far from any human habitation. Nevertheless, the American fast-food chain Taco Bell tried to ride the public’s concerns with an original marketing campaign: it floated a huge plastic sheet bearing a bullseye about ten miles off the Australian coast. If a piece of the crashing station hit the target, Taco Bell promised, every American citizen would get a free taco.

Mir's Reentry - Curious Minds Podcast
Mir’s Reentry

The managers of Taco Bell were no fools; they knew the chances of that actually happening were negligible – but just to be safe, they bought insurance. In the end, it wasn’t needed. In March 2001, Mir burned up in the atmosphere almost completely, and the fragments that survived fell into the remote South Pacific. The story of the Mir space station was finally over.

An Engineering Marvel

When reading about Mir’s accidents and mishaps over the years, it is easy to mistake it for a failure. Nothing could be further from the truth. The mere fact that Mir stayed in space for fifteen consecutive years, and was continuously manned for almost that entire period, was a tremendous technological achievement – a success comparable to the American landing on the moon. And it was achieved by the Soviet Union alone, using dated technology such as analog computers and spacecraft designs from the seventies. The Russians have good reason to be proud of Mir.

Mir’s history is fascinating because it reflects the enormous social and technological transformations that took place on Earth at the end of the twentieth century. Mir began its life as part of the Cold War between the Soviet Union and the United States, built and maintained by a state in the ‘classical’ model of space exploration. Fifteen years later, it was the common home of Russian cosmonauts and American astronauts who ate, drank, and played the guitar together – that is, when they were not busy putting out fires and generally trying to survive. The last team to visit the station was a civilian one, marking the beginning of the era we are in today: an era in which private companies send spacecraft to the International Space Station, and teams from around the world plan to land robots on the moon.

Two space stations currently orbit the Earth – the International Space Station and China's Tiangong-1 – both designed, to one degree or another, with the lessons learned during Mir’s long years in space. Whether the first person to reach Mars is an American astronaut, a Russian cosmonaut or a Chinese taikonaut, he or she will probably have good reason to thank the old space station and the brave cosmonauts who lived in it.

 

Music Credits

https://soundcloud.com/stevenobrien/foreboding-ambience
https://soundcloud.com/jordan-craige/space
https://soundcloud.com/tonspender/unbalanced-trip
https://soundcloud.com/mideion/insomnia
https://soundcloud.com/nihili-christi/tumore-spirituale
https://soundcloud.com/rhetoric-1/film-noir-restored

Sources And Bibliography

http://news.bbc.co.uk/2/hi/americas/1231447.stm
http://history.nasa.gov/SP-4225/multimedia/progress-collision.htm
http://plus.maths.org/content/right-spin-how-fly-broken-space-craft
http://old.post-gazette.com/regionstate/19990430scam3.asp
http://youtu.be/uwuzX38x_Ro
http://web.archive.org/web/20090406003700/http://www.astronautix.com/craft/mirmplex.htm
http://www.space.com/21122-skylab-space-station-remains-museum.html
http://www.space.com/19607-skylab.html
http://www.scientificamerican.com/article.cfm?id=how-does-spending-prolong
http://www.nsbri.org/humanphysspace/introduction/intro-bodychanges.html
http://science1.nasa.gov/science-news/science-at-nasa/2001/ast01oct_1/
http://suzymchale.com/krikalyov/mir09.html
http://www.nytimes.com/1992/03/26/world/after-313-days-in-space-it-s-a-trip-to-a-new-world.html?pagewanted=2&src=pm
http://articles.latimes.com/1992-03-21/news/mn-4000_1_soviet-union
http://www.mathematica-journal.com/issue/v7i3/special/transcript/html/
http://www.mircorp.org/corporate.html
http://books.google.co.il/books?id=DrgvjPsfwhsC&lpg=PA66&ots=sreC_8gQug&dq=mircorp%20walt%20anderson&pg=PA79#v=onepage&q&f=false
http://rense.com/general8/mir.htm
http://history.nasa.gov/SP-4225/nasa4/nasa4.htm
http://nsc.nasa.gov/SFCS/SystemFailureCaseStudy/Details/81
http://www.universetoday.com/100229/fire-how-the-mir-incident-changed-space-station-safety/
http://www.astronautix.com/details/soy51799.htm#more
http://history.nasa.gov/SP-4225/science/science.htm
http://home.comcast.net/~rusaerog/mir/Mir_exp.html#BIOLOGY

Malicious Life - Curious Minds

The Dark Avenger [From: Malicious.Life] | Curious Minds Podcast

We’re back from our short break, with a fantastically interesting episode:
In 1989, a message was found in a virus: “Eddie Lives…Somewhere in Time!”. ‘Eddie’ was a particularly nasty virus, and its discovery led a young Bulgarian security researcher down a rabbit hole, on a hunt for the prolific creator of the Eddie virus: The Dark Avenger.

Guests: Vesselin Bontchev, Graham Cluley

Link to more Malicious.Life episodes

Download mp3 | Read Transcript
Subscribe: iTunes | Android App | RSS link | Facebook | Twitter

 

FORTRAN - Curious Minds Podcast

Are Software Bugs Inevitable? Part 2: The Most Expensive Failed Software Project Ever | Curious Minds Podcast

After describing the Software Crisis in the previous episode, we discuss the various methodologies and practices implemented over the years to combat the complexities of software development. We’ll tell the sad story of the FBI’s VCF project – perhaps the most expensive failed software project ever – and hear about Dr. Fred Brooks’ classic book, ‘The Mythical Man-Month’.

Link to Part I

Big thanks to Aviran Mordo from Wix.com for appearing in the episode.  


Download mp3
Subscribe: iTunes | Android App | RSS link | Facebook | Twitter

Are Software Bugs Inevitable?

Written By: Ran Levi

Part II:  The Most Expensive Failed Software Project Ever

(Read Part I)

In this episode, we’re continuing our discussion of the Software Crisis, which we introduced last week. If you missed that episode, you might want to go back and give it a listen before this one.

The question we asked was: why do so many large software projects fail so often? THAT is the Software Crisis, a term coined back in 1968.

“Failure”, in the context of software, has many aspects: software projects tend to deviate from schedules and budgets, and to produce software that does not meet the customer’s specs or expectations – and often contains a significant number of errors and bugs. It is a question that has troubled many engineers and computer scientists over the years: what makes software so complicated to engineer?

The solutions to this problem changed over the years, along with changes in the business culture of the High Tech world.

The Waterfall Methodology

In the 1960s, the dominant approach to software development – especially when it came to complicated projects – was known as the “Waterfall” methodology. This approach divides a software project into well-defined stages: first, the customer defines the requirements for the product. A software architect – usually an experienced programmer – creates the outline of a system that fits these requirements, and then a team of programmers writes the actual code that fits this outline. In essence, the Waterfall approach is the same approach a carpenter would use when creating a new piece of furniture: learn what the customer wants, draw a schematic outline of the product, measure twice – and cut once.

The name “Waterfall” hints at the progression between stages: a new stage in the project will begin only when the last one is complete, just like water flowing down a waterfall. You can’t start writing code until the client has defined their needs and wants. Seems like a sensible approach, and indeed the Waterfall methodology served engineers outside of the software industry for hundreds of years. Why shouldn’t it be used here as well?

But in the last twenty years, the “Waterfall” method has been under constant and profound criticism, coming from software developers and business leaders. The main argument against Waterfall is that even though it served other engineering disciplines, from architecture to electronics – it is not well-suited to the software field.

And why is that? Let’s examine this question through an example which is, most likely, one of the most expensive failures in the history of software engineering.

VCF – Virtual Case File

In the year 2000, the FBI decided to replace its entire computer system and network. Many of the agency’s computers were old and outdated and no longer suited the needs of agents and investigators. Some of the computers still used 1980s green-screen terminals and didn’t even support using a mouse… After September 11th, FBI agents had to fax photos of suspects because the software they used couldn’t attach a document to an e-mail! It can’t get much worse than that…

Seal of The FBI - Curious Minds Podcast
Seal of The FBI

Finally, at the end of that year, Congress approved a budget of four hundred million dollars for upgrading the FBI computer system and network. The project was supposed to take three years and replace all the workstations with modern, capable machines connected via a fast fiber-optic network.

The crowning glory of the new system was a piece of software called VCF, for “Virtual Case File”. VCF was supposed to allow agents at a crime scene to upload documents, images, audio files, and any other investigation material to a central database, in order to cross-reference information on suspects that they could later present in court. The company hired to write the advanced software was called SAIC.

The FBI Goes Waterfall

It’s important to note that the FBI employs tens of thousands of people, and like most large organizations – it tends to be very bureaucratic and conservative. Naturally, the preferred methodology for the VCF project was Waterfall, and so the project managers began by writing an eight-hundred-page document that specified all the requirements of the new software. This document was extremely detailed, with sentences like: “In such and such screen, there will be a button on the top left corner. The button’s label will read – ‘E-Mail’.” They didn’t leave the developers a lot of room for questions…

But in an organization as huge and varied as the FBI, it’s doubtful that there is one person, or even one group, who understands the practical daily needs of all the departments and groups for which the program was written. As the project progressed, it became clear that the original requirements didn’t meet the day-to-day needs of agents.

So, special groups were assigned to study the needs of the agents in the field, and they constantly updated the project’s requirements. As you might imagine, the constant changes made the requirements document almost irrelevant to the developers in SAIC who actually wrote the code. The events of September 11th gave the project a new sense of urgency, and the tight schedule soon created conflicts between the programmers and FBI management. The software developers were frustrated over the ever-changing requirements, while FBI agents felt their needs were being ignored.

A Dramatic Failure

Things got worse and worse, and by the beginning of 2003, it was clear that the new software wouldn’t be ready on time. Since the VCF project was deemed a matter of national security, Congress approved an additional budget of two hundred million dollars. But that didn’t help either.

In December of 2003, a year after it was supposed to be ready, SAIC finally released the VCF’s first version. The FBI rejected it almost immediately! Not only was the software buggy and error-prone, it also lacked basic functions like “bookmarking” and search history. It was totally inadequate for field or office work.

In an effort to save the failing project, the FBI invited a committee of outside experts to consult the agency. One of the committee members, Professor Matt Blaze, later said that he and his colleagues were shocked once they analyzed the software. He jokingly told a reporter later, quote, “That was a little bit horrifying. A bunch of us were planning on committing a crime spree the day they switched over. If the new system didn’t work, it would have just put the FBI out of business.”

In January 2005, the head of the FBI decided to abandon the project. This wasn’t an easy decision since it meant that all the agency’s personnel would have to continue using the ancient computers from the 1980s and 90s for at least five more years. This had a sizeable impact on national security, not to mention all that money that was spent for nothing.

Why Did VCF Fail?

The VCF project failed even though the FBI used the age-old approach of “measure twice, cut once”. It defined all the software requirements up front and left nothing to chance – or so it seemed. Critics of the Waterfall methodology claim that the problem was that the FBI was never able to define its needs perfectly, or even adequately. In such a big and complex organization, defining all the software requirements up front is an almost hopeless task, since no single person both knows everything that’s going on in the organization and has a good grasp of what the technology can and can’t do.

The FBI’s VCF fiasco is typical, it seems, of large-scale software projects. I recently had a chance to speak with a very experienced programmer who has worked on a different large-scale project.

“My name is Aviran Mordo, I’m the Head of Engineering for Wix.com. I’ve been working in the software industry for over twenty years, from startup companies here in Israel to developing software for the US National Archives.

Aviran Mordo - Curious Minds Podcast
Aviran Mordo

“So working on a government project is everything that you hear that is wrong with Software Development, like Waterfall and long processes. We tried to be as Agile as we can, and during the prototyping phase, we actually succeeded – that is why we won the project. But when we started the actual writing of the project, hundreds of people came and worked on humongous documents that nobody read, and built this huge architecture with very very long processes. You could see that this is going to take a long time, and the project just suffered so badly… That was the point that I switched to a smaller team, to do that on the civilian market. We were just five people. We started about six months after the big project started. We were five people against a hundred people. After six months we were a year ahead of their development schedule.”

“They actually had to restart the project twice. [Ran: when you’re saying ‘restart’ you mean they just developed everything from the beginning?] Yes, they had to develop everything from the beginning. The architecture, everything. “

I asked Aviran why, in his opinion, the Waterfall methodology – which does a great job in other engineering disciplines – fails so miserably in software engineering.

“Several Things. One, you do the architecture up front and think you have all the answers. If you don’t have all the answers, you plan for the things you don’t actually know. So you build a lot of things that are wasting your time for future features that you think may affect the product or for requirements that may change in the future – but since the release cycle is so long, and it costs so much to do a new version, you try to cramp up as many features as you can into a project. This derails you from whatever really need to be achieved and have a quick feedback from the market and from the client. [it’s] A waste of time.”

Aviran’s view is echoed by many other developers, including those who investigated the failed FBI project. Waterfall assumes you have all the information you need before the project begins. As we have already seen, software projects tend to be so complex that this assumption is often wrong.

The Agile Methodology

So in the 1990s and early 2000s, an alternative to the unsuccessful Waterfall methodology appeared: its name was Agile Programming. The Agile methodology is almost the exact opposite of Waterfall: it encourages maximum flexibility during the development process. The strict requirement documents are abandoned in favor of constant and direct communication with the client.

Say, for example, that the customer wants an email client. The developers work for a week or two, and create a rough skeleton of the software: it might have just the very basic functions, or maybe just buttons that don’t do anything yet. They show the mockup to the customer, who then gives them his feedback: that button is good, that one is wrong, etc. The developers work on the mockup some more, fixing what needs to be fixed and maybe adding a few more features. There’s another demonstration and more feedback from the customer. This process repeats itself over and over until the software is perfected. Experience shows that projects developed using the Agile methodology are much less prone to failure: this is because the customers have ample time and opportunity to understand their true needs and “mold” the software to fit.

Not A Perfect Solution

So, is Agile the solution to the software crisis? Well, probably not. Aviran, Wix’s Head of Engineering, says that Agile is not suited to every kind of software project.

“It’s harder to plan ahead what’s the end goal. This [Agile] works well for a product which is innovative but it won’t work well for a product which is revolutionary. For example, the iPhone. The iPhone cannot work this way – it’s a really huge project with a lot of things in it, and it revolutionized the market. It could be developed with Agile, but you miss the step of getting it in front of your customer. So if you’re doing something which is ‘more of the same’ or you’re not trying to change the market – Agile works extremely well. If you’re trying to change the market and do something revolutionary, you could do Agile to some extent until you actually get to market.”

Also, Agile has been around for twenty years – and large-scale projects do still fail all around us. It is a big improvement over Waterfall, no doubt, but it probably won’t solve the software crisis.

So, what is the solution? Maybe a new and improved methodology? A higher-level programming language?

Dr. Frederick Brooks

One of the many computer scientists who tried to tackle this question was Dr. Frederick Brooks. In the 1960s and 70s, Brooks managed some of IBM’s most innovative software projects – and naturally suffered some failures himself. Prompted by his own painful experiences, Brooks wrote a book in 1975 called “The Mythical Man-Month”. A “Man-Month” is a sort of standard work-unit in a software project: the work an average programmer does in a single month.

Fred Brooks - Curious Minds Podcast
Fred Brooks

Brooks’ book, and another important article he published called “No Silver Bullet”, influenced a whole generation of software programmers and managers. In his writings, Brooks analyzed the main differences between the craft of software writing and other engineering disciplines, focusing on what he perceived to be the most critical characteristic of software: its complexity.

Brooks claims there is no such thing as “simple software”. Software, he writes, is a product of the human thinking process, which is unique to every individual. No two developers will write the exact same code, even when faced with the exact same problem. Each piece of software is unique because each brain is unique. Imagine a world where each and every clock has its own unique clockwork mechanism. Now imagine a clockmaker trying to fix these clocks: each and every clock he opens is different from the rest! The levers, the tiny screws, the springs – there’s no uniformity. A new clock, a whole new mechanism. This uniqueness would make clock-fixing a complex task that requires a great deal of expertise – and that’s exactly the situation with software.
Now, high complexity can be found in other engineering disciplines – but we can almost always find ways to overcome it. Take electronic circuits, for example: in many cases, it is possible to overcome complexity by duplicating identical sub-circuits. You could increase a computer’s memory capacity by adding more memory cells: you’ll get an improved computer but the design’s complexity would hardly change since we’re basically just adding more of the same thing. Sadly, that’s often not true for software: each feature you add to a piece of software is usually unique.

The Mythical Man-Month Cover - Curious Minds Podcast
The Mythical Man-Month Cover

What about architecture? Well, architects overcome the complexities of their designs by creating good schematics. Humans are generally good at working with drawings: an architect can open the schematics, take a quick look, and get a fairly good idea of what’s going on in the project. But software, explains Brooks, does not allow such visualization. A typical program has both a dimension of time – that is, do this, then do that – and a dimension of space, such as moving information from one file to another. A diagram can usually represent only a single dimension, and so can never capture the whole picture. Imagine an architectural schematic that tries to capture both the walls and levels of a building – and the organizational structure of the company that will populate it… Many times, a software programmer is left with no choice but to build a mental model of the entire project he is working on – or, if it is too complex, of parts of it.

In other words, Brooks argues that unlike electronics or architecture, it is impossible to avoid the “built-in” complexity of software. And if this is true, then computer software will always contain bugs and errors, and large projects will always have a high risk of failure. There is no “Silver Bullet” to solve the Software Crisis, says Brooks.

But you might be asking – what about high-level languages? As we learned in the previous episode, high-level languages like FORTRAN, C, and others greatly improved a programmer’s ability to tackle software complexity. They made programming easier, even suitable for kids. Isn’t it possible that in the future someone will invent a new, more successful programming language that would overcome the frustrating complexity of software?

Brooks’ Advice

The answer, according to Brooks, is no. High-level languages allow us to ignore the smaller details of programming and focus on more abstract problems, like thinking up new algorithms. In a way, they remove some of the barriers between our thought processes and their implementation in the computer. But since software complexity is a direct result of our own thought processes – the natural complexity of the human mind – a higher-level language will not solve the software crisis.

But Frederick Brooks does have a rather simple solution – one which might solve the problems of most companies. His suggestion: don’t write software – buy it!

Purchasing software, says Brooks, will always be cheaper and easier than developing it. After all, buying Microsoft’s Office suite takes minutes, while developing it from scratch might take years – and fail. True, a purchased one-size-fits-all product will never meet all of an organization’s needs – but it’s often easier to adjust to an existing piece of software than to create a new one. Brooks uses salary-calculation software to illustrate his point: in the 1960s, most companies preferred writing custom software to handle their salary calculations. Why? Because back then, a computer system cost millions of dollars while writing software cost only a few tens of thousands. The cost of customized software seemed reasonable when compared to the cost of the hardware. Nowadays, however, computers cost just a few thousand dollars – while large-scale software projects cost millions. Therefore, almost all organizations have learned to compromise and purchase standard office software such as Excel and Word. They adjusted their needs to fit the available, ready-made software.

Brooks gave another piece of advice, this time to the software companies themselves: nurture your developers! Some brains are better suited than others to handle the natural complexity of software development. Just as with sports and music, not all people are born equally talented. There are “good programmers” – and then there are “outstanding programmers”. Brooks’ experience taught him that a gifted programmer will create software that is ten times better than that of an average programmer. Software companies should try to identify these gifted individuals early in their careers and nurture them properly: assign them good mentors, invest in their professional education, and so on. That kind of nurturing isn’t cheap – but neither is a failed software project.

Conclusion

To summarize, we started by asking whether software bugs and errors are inevitable – or can we hope to eliminate them sometime in the future. Digging deeper, we discovered an even more fundamental problem: large-scale software projects not only have plenty of bugs – they also tend to fail spectacularly. This is known as the Software Crisis.

Modern software development methodologies such as Agile can sometimes improve the prospects of large-scale projects – but not always, and not completely. The only real solution, at least according to Dr. Frederick Brooks, is finding developers who are naturally gifted at handling the complexities of software – and helping them grow to their full potential. And until we find the Silver Bullet that will solve the Software Crisis, we’ll just have to… well – bite the bullet.


Credits

Mountain Emperor by KevinMacLeod
Hidden Wonders by KevinMacLeod
Clock Ticking Sound Effects by FreeHighQualitySoundFX, Sound Effects Central, Sound Effects HD Indie Studios, stunninglad1

FORTRAN - Curious Minds Podcast

Are Software Bugs Inevitable? Part 1: FORTRAN and the Denver Airport Baggage Disaster | Curious Minds Podcast

Software errors and random bugs are rather common: We’ve all seen the infamous Windows “blue screen of death”… But is there really nothing we can do about it? Are these errors – from small bugs to catastrophic mistakes – inevitable? In this episode, we’ll tell the story of FORTRAN, the groundbreaking high-level computer language, and the sad, sad tale of the Denver Airport Baggage Disaster. Don’t laugh, it’s a serious matter. 

Link To Part II


Download mp3
Subscribe: iTunes | Android App | RSS link | Facebook | Twitter

Are Software Bugs Inevitable?

Written By: Ran Levi

Part I

A car that breaks down once every few weeks is simply unacceptable. We expect our cars to have a certain level of reliability. But when it comes to computers – software errors and random bugs are rather common. We’ve all seen the infamous Windows “blue screen of death”, and most smartphones require a reboot every once in a while.

In a way, we’ve learned to live with software errors and take them for granted. But is there really nothing we can do about it? Are these errors – from small bugs to catastrophic mistakes – inevitable, or is there hope that as technology and innovation move forward, we’ll be able to overcome this annoying problem – and make software bugs a thing of the past? This will be the main question of this episode.

How Software Works

Software bugs are nothing new, of course. Writing computer software in the 1940s and 50s was a complicated and difficult task, so it’s no wonder that the first generation of computer programmers considered bugs and errors inevitable.


But let’s start from the basics, with a quick refresher on how computer software works. A computer is a pretty complex system whose two main parts are the processor and the memory. The memory cells contain numbers; the processor’s role is to read a number from memory, apply some mathematical operation to it – such as adding or subtracting – and then write the result back to memory.

Software is a sequence of commands telling the processor where certain information is located in memory and what needs to be done with it. Think of the information in a computer’s memory as food ingredients: the software is the recipe. It tells us what to do with the different ingredients at every given moment. A software error is an error in the sequence of commands: a missing command, two commands given in the wrong order, or an altogether wrong command.

Just Plain Numbers

So why were software bugs seen as a fact of life back in the 1950s? Back then, both information and commands were given not as words – but as plain numbers. For example, the number forty-two might represent the command “copy”, so that the sequence 42-11-56 might represent an action like: “copy the content of memory cell eleven to memory cell fifty-six.” Even a mildly complicated calculation, like solving a mathematical equation, might require hundreds – if not thousands – of such command sequences. Each sequence had to be perfectly correct, or else the entire calculation might fail – just like in baking: if we put the frosting on the cake before baking the batter, the cake will be a disaster.

But software is even less forgiving than baking: even a single small mistake can bring down the entire calculation, like a house of cards. It’s no wonder, then, that at the time only computer fanatics were willing to devote their time to programming. It was a truly Sisyphean task.
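To make the 42-11-56 example concrete, here is a toy sketch – written in Python purely for illustration, since nothing like Python existed in the 1950s – of a machine that executes such numeric command sequences. Only opcode 42 meaning “copy” comes from the text; the three-numbers-per-command format and everything else are assumptions made up for this sketch.

```python
# A toy machine: programs and data are nothing but plain numbers.
memory = [0] * 100
memory[11] = 7  # put a value in memory cell eleven

program = [42, 11, 56]  # "copy the content of cell 11 to cell 56"

pc = 0  # program counter
while pc < len(program):
    opcode, src, dst = program[pc], program[pc + 1], program[pc + 2]
    if opcode == 42:               # 42 = "copy"
        memory[dst] = memory[src]
    pc += 3  # every command in this toy machine is three numbers long

print(memory[56])  # -> 7
```

Change any one of those three numbers and the program silently does something entirely different – which is exactly why this style of programming was so error-prone.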

Assembly

In the late 1940s, a new computer language was developed: “Assembly” – a human-readable layer over the machine’s own numeric language. Assembly replaced the raw numbers with meaningful words that were somewhat easier to remember. For example, the number 42 might be replaced by the word MOV. These textual commands were then fed to a special program named the “Assembler”, which converted the words back into numbers – since that’s ultimately what computers understand.

Example of Assembler Code - Curious Minds Podcast
Example of Assembler Code

For programmers, Assembly represented a real improvement: words are much easier for humans to work with than featureless numbers. But Assembly didn’t eliminate software bugs. It was still too “low-level” – meaning even simple calculations required thousands upon thousands of lines of code. Programming remained an exhausting, Sisyphean task.
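What the Assembler actually does can be sketched in a few lines. This is a hypothetical illustration in Python, not any real assembler: only the MOV example comes from the text, and the opcode numbers for ADD and SUB are invented here.

```python
# A minimal sketch of an assembler: translate mnemonic words back into
# the numbers the processor understands.
OPCODES = {"MOV": 42, "ADD": 43, "SUB": 44}  # hypothetical opcode table

def assemble(lines):
    """Turn lines like 'MOV 11 56' into a flat list of numbers."""
    machine_code = []
    for line in lines:
        mnemonic, *operands = line.split()
        machine_code.append(OPCODES[mnemonic])          # word -> number
        machine_code.extend(int(op) for op in operands)  # operands pass through
    return machine_code

print(assemble(["MOV 11 56"]))  # -> [42, 11, 56]
```

The translation is purely mechanical – which is exactly why it could be handed off to a program instead of a human.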

John Backus

One of those exhausted programmers was John Backus. Backus was a mathematician who worked for IBM, calculating the trajectories of rockets. He absolutely hated programming in Assembly and the tedious process of finding and eliminating software bugs.

John Backus - Curious Minds Podcast
John Backus

So in 1953, Backus decided to do something about it. He wrote a memo to his supervisors suggesting that IBM should develop a new software language to replace Assembly. This new language, wrote Backus, would be a “High-Level Programming Language”: each individual command would represent numerous Assembly commands. For example, you could use the command “Print” – and “behind the scenes” it would invoke hundreds of simpler Assembly commands that handle the actual process of printing a character on the screen or on a page. This level of abstraction would make programming easier, simpler – and hopefully, a lot less prone to errors.
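As a rough illustration of that “behind the scenes” expansion, here is a hypothetical sketch in Python of a translator turning a single high-level PRINT command into many low-level commands. The LOAD and OUT commands and the whole expansion scheme are invented for this example – a real compiler is vastly more involved.

```python
# One high-level command hiding many low-level steps.
def compile_print(text):
    """Expand a single high-level PRINT into low-level commands."""
    low_level = []
    for ch in text:
        low_level.append(("LOAD", ord(ch)))  # put the character code in a register
        low_level.append(("OUT", 0))         # send the register to output device 0
    return low_level

ops = compile_print("Hi")
print(len(ops))  # one PRINT of two characters became 4 low-level commands
```

The programmer writes one line; the machine still gets its long sequence of primitive steps – it is just no longer the human who has to spell them out.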

But if a high-level programming language was such a great idea, how come no one had thought of it before? Well, it turns out that Backus wasn’t the first. In fact, similar computer languages had been developed as early as the 1940s, and many computer scientists found them to be a fascinating research subject. In practice, though, none of these early high-level languages threatened Assembly’s dominance. Why is that?

Recall that the computer, as a rule, understands nothing but numbers. Much as Assembly needed an “Assembler” to translate its textual commands into numbers, a high-level programming language needs a special piece of software called a “Compiler” to translate its high-level commands. Unfortunately, the translated code produced by the early compilers was very inefficient compared to code written in Assembly by a human programmer. A compiler might produce ten lines of code where a human, with a bit of creative thinking, could accomplish the same task with a single line.

This inefficiency meant that software written in a high-level language tended to be slow. And since the computers of the 1940s and 50s were weak and slow to begin with, this hit in performance was a penalty no one was willing to pay. And so, high-level programming languages remained an unfulfilled promise.
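The inefficiency problem can be illustrated with another hypothetical Python sketch: a naive code generator that, when summing several memory cells, reloads and re-stores the running total on every step, next to the tighter sequence a human would write. The opcodes and the exact ratio are invented; the point is the same “many lines where a human needs one” phenomenon described above.

```python
# Naive compiler output vs. hand-written code for "sum these memory cells".
def naive_codegen(cells):
    """The naive way: load, add, and store the total back on every step."""
    code = []
    for cell in cells:
        code.append(("LOAD", 0))    # reload the running total from cell 0
        code.append(("ADD", cell))  # add the next term
        code.append(("STORE", 0))   # write the total back to memory
    return code

def hand_written(cells):
    """What a human would write: keep the total in the register throughout."""
    code = [("LOAD", cells[0])]
    code += [("ADD", cell) for cell in cells[1:]]
    code.append(("STORE", 0))
    return code

print(len(naive_codegen([1, 2, 3, 4])))  # -> 12 commands
print(len(hand_written([1, 2, 3, 4])))   # -> 5 commands
```

On a machine where every command costs precious time, that gap between twelve commands and five is exactly the penalty early programmers refused to pay.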

A High-Level Language

Backus was aware of the challenge, but he was determined. In his memo to IBM, he stressed the potential financial benefits: programming in a high-level language could reduce the number of bugs in a software project, shorten development time, and reduce costs by as much as seventy-five percent.

Backus made a convincing argument, and IBM’s CEO approved his idea. Backus was made head of the development team and recruited talented, enthusiastic engineers who welcomed the challenge of creating this high-level language. They worked day and night, often sleeping at a hotel near the offices so they could grab available computer time even before sunrise.

Their main task was writing the compiler: the software that translated the high-level code to low-level machine language. Backus and his people knew that the success of their project depended on the compiler’s efficiency: if the code it produced was too inefficient, their new language would fail just as its predecessors did.

Four years later, in 1957, the first version of the new high-level programming language was ready. Backus named it FORTRAN, short for Formula Translation – a name reflecting what it was intended for: scientific and mathematical calculations. IBM had just launched a new computer model named the IBM-704, and Backus sent the new compiler and a detailed FORTRAN manual to all customers who bought the new computer.

FORTRAN - Curious Minds Podcast
FORTRAN

FORTRAN Takes The World By Storm

The response was overwhelmingly positive. Programmers were delighted by how easily and quickly they could write software using FORTRAN! One customer recalled how shocked he was to see a colleague – a physicist at a nuclear research institute – write a complicated piece of software in a single afternoon, whereas writing the same code in Assembly would have taken several weeks!

FORTRAN took the software world by storm! Within less than a year, almost half of the IBM-704 programmers were using FORTRAN on a daily basis. It was the salvation that all bleary-eyed programmers had prayed for: code written in FORTRAN was up to twenty times shorter than one written in Assembly, while still being efficient and blazingly fast. In fact, the new FORTRAN compiler was so successful that it was considered the best of its kind even twenty years later.

This was a big stride towards a future free of software bugs. Backus’ team was so thrilled by FORTRAN’s success that they included hardly any tools in the language for detecting and analyzing software errors. They believed such tools were no longer needed, since the new language had reduced the number of bugs considerably.

FORTRAN Is Made A Standard

In 1961, the American National Standards Institute decided to make FORTRAN a standard language. This was a big deal since until then FORTRAN only worked with IBM-704 computers: making it a national standard meant that it could now be used on computers from other manufacturers as well. Programming became easy and fun! It was no longer restricted to hardcore mathematicians and engineers. Amateur programmers taught themselves FORTRAN from books. Assembly was almost gone, and FORTRAN had taken over the market.

FORTRAN’s success ushered in a new generation of high-level programming languages like COBOL, ALGOL, ADA, and C – the most successful of them all. These languages not only improved on FORTRAN’s ideas and made programming even easier, but they also made programming suitable for a wider variety of tasks: from accounting to artificial intelligence.

Nowadays, there are hundreds of high-level programming languages to choose from. Some modern languages such as Python or JavaScript are so simple that even children can learn how to use them easily!

You might be surprised to learn that fifty years after it was first released – FORTRAN is still alive and kicking! It’s gone through changes and updates over the years, but is very much still relevant today – and some scientists and researchers still prefer it for complicated calculations. This is an amazing feat for a language designed in an era when programming was done with punched cards!

John Backus passed away in 2007. He was fortunate enough to see how the language he helped create changed the programming world. It transformed programming from a dreadful task – to a sort of positive challenge, even a hobby embraced by millions around the world.

What About The Bugs?!

All’s well that ends well! But wait a minute! Wait just a minute… are we not forgetting something? What about the bugs? Did high-level languages solve the problem of software errors?

No, unfortunately, they didn’t. You see, while high-level languages did make programming a lot easier – they also allowed software to become much more complex. It’s like being able to buy LEGO blocks instead of fabricating them yourself: you can invest all your time in building bigger and bigger creations instead of doing everything from scratch. Similarly, high-level languages allowed developers to add more and more features to their programs – and so the advantages of high-level languages were balanced by the ever-increasing complexity of the programs themselves. The bugs never went away.

A Fundamental Problem in Software Design

In the 1960s it became clear to many computer programmers that reliability was a fundamental problem in software design. Despite all the great developments, it was still almost impossible to create a piece of software with no errors – I mean complex, feature-rich software, not some trivial program.

Worse yet, it seemed that it was getting harder and harder to complete a software development project “successfully.” What do I mean by “successfully”? A successful software project is one whose output is a high-quality, bug-free piece of code, tailored to meet the customer’s specifications, completed within schedule and without budget overruns. This goal was proving to be more elusive with every passing year.

Modern research clearly shows that when it comes to software projects – failure rates are extremely high compared to other engineering projects. A basic software project has a twenty-five percent chance of going over budget, or over its deadline, or producing software that doesn’t meet the customer’s expectations. That’s one out of every four projects! And if the project lasts more than eighteen months, or if the staff is larger than twenty people – the risk of failing becomes more than fifty percent. When it comes to even larger projects that go on for years and involve many teams of programmers, the likelihood of failure is close to a hundred percent. About ten percent of those projects fail so dramatically, that the entire software is thrown away and never used at all. What a waste!

This realization hit many programmers hard. I’m a software developer myself, you know – and we developers take our profession very seriously. We can spend years – and I’m not joking – debating the proper way to capitalize variable names in code.

Over time, software developers started squinting at other engineering fields, like architecture, and asking themselves why software couldn’t be more like them. I mean, architects and engineers build tall buildings, long bridges, and other structures that are reliable; they don’t go over budget or schedule (well, at least most of the time), and they are built according to spec. Experience had taught software programmers that their projects usually wouldn’t reach the same result.

In 1968 some of the world’s leading software engineers and computer scientists gathered at a NATO conference in Germany to discuss this obvious difficulty of creating successful software. The conference didn’t produce any solution, but it did give the problem a name: “The Software Crisis.”

Building A New Airport

So, what does the software crisis look like in real life? Here’s an example.

In 1989, the city of Denver, Colorado, announced it was embarking on one of the most ambitious projects in its history: building a modern new airport that would carry state tourism and local business into the twenty-first century.

Denver International Airport - Curious Minds Podcast
Denver International Airport

The jewel in the modern airport’s crown was to be a new and sophisticated baggage transportation system. I mean, baggage transportation is an important part of every modern airport. If a suitcase takes too long to get from point to point – passengers might end up missing their connecting flights. If a suitcase gets lost – that’s a ruined vacation. It’s obvious why the project’s leaders took their new baggage transportation system very seriously.

Aerial View of the Denver Airport during construction - Curious Minds Podcast
Aerial View of the Denver Airport during construction

The company chosen to design the new system was BAE, an experienced company in its field. BAE’s engineers examined the blueprints of the future airport and realized that this was going to be a monumental challenge: Denver’s airport was going to be much larger than usual, which meant that transporting a piece of baggage from one end of the airport to the other might take up to 45 minutes! To solve that problem, BAE designed the most advanced baggage transportation system ever built.

A Breathtaking Design

The plan called for a fully automated system. From the moment an airplane landed until the moment the luggage was picked up from the carousel – no human hands would touch it. Barcode scanners would identify each suitcase’s destination, and a computerized control system would make sure an empty cart was waiting at just the right place and time to transport it, as quickly as possible, via underground tracks to the correct terminal. Timing was incredibly important: the carts weren’t even supposed to stop – suitcases were supposed to drop from one track onto a moving cart at exactly the right moment.

Just to give you some perspective on BAE’s breathtaking design: there were 435,000 miles of cables and wires, 25 miles of underground tracks, and 10,000 electrical engines to power them. The control system included about 300 computers to supervise the engines and carts. It’s no wonder Denver’s mayor said the new project was as challenging as, quote, “building the Panama Canal.”

The entire city followed the project with interest. The new airport was supposed to open at the end of 1993, but a few weeks before the due date the mayor announced that the opening would be delayed due to some final testing of the baggage transportation system. No one was too surprised: after all, this was a complicated and innovative new system, and testing it was likely to take some time.

A breathtaking Failure

But no one was prepared for the embarrassment that took place in March 1994, when the proud mayor invited the media to a celebratory demonstration of the new system.

Instead of an efficient and punctual transportation system, the amazed journalists watched, in a mixture of horror and delight, a sort of technological version of Dante’s Inferno. Carts that traveled too quickly fell off the tracks and rolled on the floor. Suitcases flew through the air because the carts that were supposed to catch them never came. Pieces of clothing that fell out of the suitcases got shredded in the engines or tangled in the wheels of passing carts. Suitcases that somehow made it onto an empty cart reached the wrong terminal because the barcode scanners failed to identify the stickers on them. In short – nothing worked properly.

The journalists who witnessed the demonstration didn’t know whether to laugh or cry. If the project hadn’t cost the taxpayers close to 200 million dollars, it could have been a classic Charlie Chaplin comedy. The headlines the following morning undoubtedly made the mayor cringe.

Behind the scenes, there were desperate attempts to salvage the system. Technicians ran to fix the tracks, but solving a problem at one spot created two new problems elsewhere. Each day of delay cost the city of Denver another million dollars. By the end of 1994, with the airport project nowhere near completion, Denver faced the very real possibility of bankruptcy.

The mayor had no choice; after several external reviews and emergency discussions, it was decided to dismantle a big chunk of the new automated system and put a more traditional, manual system in its place. The final cost of the airport project, including the cost of the new system, was 375 million dollars – twice the original budget.

In February 1995, almost a year and a half after the original date, the new Denver airport opened to flights, travelers, and suitcases. Only one airline, United, agreed to use the new baggage transportation system – and eventually it, too, gave up. The system suffered so many problems and errors that the monthly maintenance cost was close to a million dollars, and in 2005 United Airlines announced it would stop using the automated system and go back to manual baggage handling.

An Investigation

So what went wrong in Denver? Why was the project’s failure so massive and absolute?

Several investigations highlighted the many factors behind the failure of the Denver airport project: management decisions that were overly influenced by political considerations, changes made during construction just to please the airlines, and an electricity supply that was constantly interrupted. And if all that wasn’t enough, the project’s chief architect died unexpectedly. In short, everything that could have gone wrong – did. Having said that, most of the investigators agreed that the biggest problem of this ambitious project was its software component.

As we mentioned before, the original plan called for 3,000 carts traveling independently around the airport on the underground tracks. For that to happen, about 300 computers had to communicate with each other and make rapid decisions in “real time.”

For example, one common scenario involved an empty cart traveling to a specific location to pick up a suitcase. Its route could be quite complicated: the empty cart had to use several tracks, change directions, and perhaps pass over or under other carts. And a cart’s route depended not only on its own location but on the locations of the other carts as well, which meant that if a “traffic jam” occurred for any reason, the software had to recalculate an alternative route. Remember, we are talking about thousands of carts, each going to a different destination and each supposed to arrive on time – with the decisions being made by hundreds of computers running simultaneously!
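To get a feel for just one small slice of that complexity, here is a minimal sketch in Python (purely illustrative – the junction names and track layout are invented, and BAE’s real software was vastly more involved) of rerouting a single cart: find the shortest path through a track graph with a breadth-first search, then recompute it when one track segment becomes blocked.

```python
from collections import deque

def shortest_route(tracks, start, goal, blocked=frozenset()):
    """Breadth-first search over track junctions, skipping blocked segments."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in tracks.get(node, []):
            segment = frozenset((node, nxt))
            if nxt not in seen and segment not in blocked:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route available at all

# A tiny hypothetical track layout: junctions A through E.
tracks = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

print(shortest_route(tracks, "A", "E"))  # ['A', 'B', 'D', 'E']

# A "traffic jam" blocks the B-D segment; the cart must be rerouted.
jam = {frozenset(("B", "D"))}
print(shortest_route(tracks, "A", "E", jam))  # ['A', 'C', 'D', 'E']
```

Now imagine doing this for thousands of carts at once, where every cart’s movement changes which segments are free for all the others, spread across hundreds of computers that must agree in real time – and the scale of the Denver problem starts to come into focus.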

Twenty software programmers worked on the project for two whole years, but the system was so complicated that none of them could see the entire picture or truly understand, from a “bird’s eye view,” how the system actually behaved. The final result was a complex, convoluted piece of software that was full of errors. Denver’s airport project is a perfect example of the Software Crisis: a large project that went over schedule, went over budget, and didn’t reach its goals.

Is There A Solution To The Software Crisis?

Dr. Winston Royce, a leading computer scientist, defined the situation best in 1991 when he said:

“The construction of new software that is both pleasing to the user/buyer and without latent errors is an unexpectedly hard problem. It is perhaps the most difficult problem in engineering today and has been recognized as such for more than 15 years. […]. It has become the longest continuing “crisis” in the engineering world, and it continues unabated.”

So what can we do? Is there a solution to the software crisis? That question will be the focus of our next episode. We will get to know the two methods developed over time to tackle the challenges of creating complicated software: Waterfall and Agile. We’ll also tell the incredible story of a computerized system developed for the FBI at a cost of half a billion dollars… and how it turned out to be a breathtaking failure.

And finally, we will get to know Fred Brooks – a computer scientist who wrote an influential book called “The Mythical Man-Month”. In his book, Brooks asks – Is there a “silver bullet” that will allow us to solve the software crisis?

Read Part II

Credits

Maschinenpart By Jairo Boilla
Song 007 – By tarbolovanolo
Crypto, Industrial Cinematic, Batty Slower – By Kevin MacLeod
Cartoon Record Scratch Sound Effects By Nick Judy

Strategic Defense Initiative - Curious Minds Podcast

Ronald Reagan’s Strategic Defense (SDI) Initiative, AKA – “Star Wars” | Curious Minds Podcast

In 1983, President Ronald Reagan shocked the world when he announced that the United States was developing an ultra-modern defense system against intercontinental ballistic missiles. Hundreds of billions of dollars were invested in the system’s development – but then, in 1991, the Soviet Union collapsed, and with it – the Star Wars initiative. Was Reagan’s Strategic Defense Initiative the reason for the Soviet Union’s collapse?


Download mp3 | Read Transcript
Subscribe: iTunes | Android App | RSS link | Facebook | Twitter



Credits

Nuclear Deterrence cold war By Aviation Technology Space Channel
Vintage Nuclear Preparedness Educational Film from 1950’s By Radiation Prevention
President Ronald Reagan Introduces SDI on March 23, 1983 By High Frontier
2014 FTG 06b By MDAbmds
The Blueshift (Kxmode Remix) – Dorris Terrible by kxmode
BNA – DARK DANCE INTRO
alxdmusic – jade by alxd
Falling of the Berlin Wall 1989 By Video CC Comercial

Bittorrent - History of File Sharing - Curious Minds Podcast

The History of File Sharing, Part 2: Grokster & BitTorrent | Curious Minds Podcast

The fall of Napster (see Part I of this series) left a vacuum in the world of file sharing – and as the saying goes, the Internet abhors a vacuum… Various file sharing programs such as Gnutella, Kazaa and others quickly filled the void.
In this episode, we’ll describe Grokster’s legal battle against the Record Companies, the sinister poisoning of file sharing networks by OverPeer – and the rise of BitTorrent.

Guest in this episode: Brett Bendistis, from The Citizen’s Guide to the Supreme Court Podcast!


Download mp3 | Read Transcript
Subscribe: iTunes | Android App | RSS link | Facebook | Twitter



Credits

Penske File (demo mix) by Steely Chan
Darkness by xclntr
Sinister Pt. 2 by ¡SplastiC

File Sharing History - Curious Minds Podcast

The History of File Sharing, Part 1 (of 2): The Rise & Fall of Napster | Curious Minds Podcast

Napster, a revolutionary Peer-to-Peer file sharing software, was launched in 1999 – and forever changed the media world. In this episode, we’ll tell the story of Sean Fanning and Sean Parker, its creators, and talk about the legal battle it fought with the record companies – and Metallica.

Guest in this episode: Brett Bendistis, from The Citizen’s Guide to the Supreme Court Podcast!

Also recommended: The History of Byzantium – A podcast telling the story of the Roman Empire from 476 AD to 1453.


Download mp3 | Read Transcript
Subscribe: iTunes | Android App | RSS link | Facebook | Twitter

Credits

Nightmares by myuu
Exploring the Inferno by myuu
Crime Scene (Film Score) by myuu

Inventor of mp3 - Curious Minds Podcast

Heroes Of Podcasting #1: Inventor of MP3, Prof. Karlheinz Brandenburg | Curious Minds Podcast


This series explores the history and future of podcasting, and each episode will feature a single guest who is a pioneer of podcasting. This time, we’re interviewing Prof. Karlheinz Brandenburg – inventor of the popular MP3 format, which was a critical innovation in podcasting history.


Download mp3 | Read Transcript
Subscribe: iTunes | Android App | RSS link | Facebook | Twitter

Credits

Cloud Of Sense by Everknew
Welcome [Free/Libre music] by CyberSDF

The History of Open Source & Free Software, Pt. 1, w/ Special Guest: Richard Stallman| Curious Minds Podcast

The History of Open Source & Free Software - Curious Minds Podcast

In the early 1980s Richard Stallman founded the Free Software Foundation (FSF): a socio-technological movement that revolutionized the software world. In this episode, we’ll hear Stallman himself talk about the roots of the movement and learn of its early struggles.



Download mp3 | Download Ogg | Read Transcript
Subscribe: iTunes | Android App | RSS link | Facebook | Twitter


Credits

alxd – background music (tense)
ThusKann – Malfunqtionin’
myuu – Disintegrating
Copyright free Sound-Effect – Restaurant sound effect