

Deep Learning & Artificial Intelligence, Pt. I | Curious Minds Podcast

AlphaGo’s victory over South Korea’s champion Lee Se-Dol was a shock to many in the computer world – but it was a natural development in the story of Artificial Intelligence as it has unfolded over the last few years. What is Deep Learning, and how can computers learn ‘skills’ and ‘intuition’?

Download mp3
Subscribe: iTunes | Android App | RSS link | Facebook | Twitter

Deep Learning & Artificial Intelligence, Part I

(Full Transcript)

Written By: Ran Levi & Nate Nelson

It’s Game Two. We’re now watching the best player in the world, Lee Se-Dol, facing off against a machine, named AlphaGo, in one of humanity’s most storied and complex strategy games: ‘Go’.

Game one saw AlphaGo take an easy victory–one that shocked onlookers around the world. Se-Dol, the favorite, carries the weight of the world on his back as he fights to regain footing in the best of five series. An hour in, his prospects are looking good.

Lee Se-Dol

Then, AlphaGo plays the game’s 37th move. You can see the announcer’s hand shake as he copies the move to the internet broadcast’s big board. He adjusts the piece, unsure of whether he’s mistaken in its placement. Murmurs are rising from the crowd.

And just like that, it’s over.

Move 37

In the time since AlphaGo proved itself the world’s best Go player, move 37 has taken on a sort of cult status in pop culture. Major news outlets around the globe covered the story with articles like “Move 37, or How AI Can Change the World”, a deep learning startup was founded and named itself Move 37, and internet forums were swept with curiosity and speculation.

AlphaGo’s Logo

For those not experienced in Go, move 37 might appear totally nondescript: vertically centered, in the rightmost third of the grid, placed next to a white piece in an otherwise empty section of the board, it hardly looks like anything out of the ordinary. Yet, to those who know the game well, it was just about revolutionary.

Fan Hui, the European Go champion who had been cleanly swept by AlphaGo in a five-game series prior to its matches against Lee Se-Dol, eagerly observed game two. Of move 37, he told reporters, “It’s not a human move. I’ve never seen a human play this move.” Hui, knowing the game so intimately, was as shocked as anyone. “So beautiful,” he said, and repeated that word over and over again: beautiful. But how can a machine attain beauty?

To understand why move 37 was so amazing, you have to understand how the game of Go works. Invented by the ancient Chinese as a playful representation of war, Go pits two players, one with white stones and one with black, against each other in a battle to take as much territory as possible on a nineteen-by-nineteen board. Players are free to place stones on any of the board’s 361 points, and the goal is to surround your opponent’s pieces, thereby capturing them and collecting territory. Because the game is so nonlinear, with such an incomprehensible number of possible moves, sequences, and game arrangements, there’s really no catch-all strategy you can stick to. After all, the number of potential game possibilities in Go outnumbers the total atoms in our universe – a number something along the lines of 208168199381979… you get the picture.

Anyway, this number is so big there’s little point in even trying to comprehend it. Therefore, because any game of Go can go any number of directions, experienced players will tell you it’s all about feel. As the subconscious brain tries to do its best to analyze such a multifaceted and chaotic board, it’s up to a player’s intuition to determine the best course of action.

This is why Go seemed such an unattainable task for machines to master: computers are supposed to follow orders, so how can that translate into something like intuition? Move 37 appeared entirely unintuitive – DeepMind’s programmers later calculated that a human player would have had about a 1 in 10,000 chance of choosing it at that stage in the game – and yet it proved so effective that Lee Se-Dol immediately sensed its colossal genius; he stood up and left the room almost as soon as he realized what had been done to him.

AlphaGo marked the first time humanity was shown that machines can beat us at abilities we had considered uniquely human, like feel and intuition. Soon, perhaps, this could extend to emotion, at which point we’d all have a real existential crisis on our hands.

The Brain as A Machine

Before we fall too deep into hypotheticals, though, let’s begin with an even more fundamental question: is a computer at all capable of imitating what we might see as ‘human thought’ – for example, the ability to draw conclusions, to raise ideas, to intuit, and so on?

Well, the seventeenth-century French philosopher Rene Descartes was one of the first thinkers to try to discern the fundamental difference between humans and machines. Descartes argued that, in principle, every aspect of the human body can be explained in mechanical terms: the heart as a pump, the lungs as bellows, and so on. That is, the body is merely a kind of sophisticated machine.

Rene Descartes

The brain, however, cannot be explained in such mechanical terms, he said. Our thinking, speech, and ability to draw conclusions are so different from what those machines are capable of performing that they cannot be explained in engineering terms, and we must use words like ‘soul’, ‘intellect’ or similarly abstract concepts to describe them.

The invention of the computer in the middle of the twentieth century cast a heavy shadow on this hypothesis. The computer, remember, is basically a machine that performs a long series of mathematical operations. The programmer tells the computer what sequence it must follow to perform a task – for example, solving a mathematical equation or drawing an image on a screen – and the processor executes the commands quickly. Clearly, the computer does not “think” in the human sense of the word: it doesn’t solve the equation and doesn’t draw the picture – it is merely a machine that executes a sequence of commands. But outwardly, it certainly looks like the computer solved the equation and drew the picture. If you were to show a computer to someone who had never seen or heard of one before, they’d almost certainly assume it to be a magical or godly item – or perhaps something controlled by a little person inside!

And why shouldn’t they? The nature of computers has caused many scientists and non-scientists alike to question the idea of a separate ‘mind’ or ‘soul’ in humans. If a relatively simple machine like a computer can seem to solve a problem, couldn’t our brain itself be a kind of machine that performs calculations? In other words, is it possible that our brains are nothing more than very complex machines, and that all our thoughts and ideas are merely the result of calculations? If this hypothesis turns out to be correct, then the answer to the question I raised above may well be: yes, it is possible to construct a computer that will perform the required calculations and thus imitate the operation of the human brain.

Neurons and Perceptrons

To try to build such a machine, researchers’ first instinct was to turn to biology and neuroscience, and advances in the neurology and physiology of the brain gave them plenty of inspiration. Through the twentieth century, neuroscientists uncovered many of the biological mechanisms that underpin the brain’s activity, foremost among them the workings of neurons – the nerve cells from which the brain is made.

How do neurons work? Neurons are tiny cells with long arms that connect to each other and transmit information through electrical currents. Each neuron has multiple inputs, and it receives electrical pulses from other neurons. In response to these pulses, the neuron produces its own electrical pulse at the output.

Neurons

In 1949, a researcher named Donald Hebb proposed one of the most important fundamental principles of the brain: the way learning takes place. His work suggested that if two neurons – A and B – are interconnected, and neuron A repeatedly fires electrical pulses at neuron B, then over time neuron B will respond to those pulses more readily. The constant firing of neuron A causes neuron B to learn that the information coming from neuron A is important, and should be answered with a pulse of its own.

Donald Hebb

This insight inspired Frank Rosenblatt, an American psychologist and cognitive scientist at Cornell University. In 1958, Rosenblatt conceived a new type of electrical component called the “Perceptron” (from the word ‘perception’). Rosenblatt’s perceptron was a kind of artificial neuron: an abstract model of the biological one. It had a number of inputs that received binary values – 0 or 1 – and one output that could also produce 0 or 1, a sort of positive or negative result.

Frank Rosenblatt & The Perceptron

The interesting detail is that Rosenblatt found a way to simulate the brain’s learning process as described by neurologists. Imagine the perceptron as a black, opaque box with several inputs and one output. Each input has a dial that can be rotated, giving that input a higher or lower weight – that is, importance – relative to the other inputs. Turn a dial all the way to the right, and every little signal at that input makes the perceptron set its output to ‘1’: this teaches the perceptron that the input is very important and should not be ignored. Turn the dial all the way to the left, and the perceptron will “learn” that this input is not important at all; no matter what happens, the perceptron will not respond to its signals. The weights at the perceptron’s inputs simulate the strength of the connections between biological neurons in a human brain.

Rosenblatt set up a device in his lab that contained several such perceptrons connected to each other in a kind of artificial neural network, and connected their inputs to four hundred light receptors. Rosenblatt placed letters, numbers, and geometric shapes in front of the light receptors, and by fine-tuning the weights at the inputs, he managed to ‘teach’ the network to identify the shapes and in response to produce signals that meant things like: ‘the letter A’ or ‘a square’.

How did he do that? By strengthening or weakening the connections between the perceptrons: each time the perceptrons did not correctly identify a shape, he modified the weights slightly – played a little with the dials, if you will – until each perceptron learned that a certain combination of inputs yields one result, and another combination a different result. This game with the weights is a way to correct the system’s errors, to tell it: “What you did just now was wrong. Here’s the right way to do it.”
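To make this concrete, here is a minimal sketch of a single perceptron in Python. It is not Rosenblatt’s original design – his was built in hardware, with physical dials – but it illustrates the same principle: weighted inputs, a threshold, and an error-correction rule that “turns the dials” a little whenever the output is wrong. The task (learning a logical AND of two inputs) and all the numbers are arbitrary choices for the illustration.

```python
# A minimal perceptron sketch: weighted inputs, a threshold, and an
# error-correction rule that nudges the weights after every mistake.
# (Illustrative only -- not Rosenblatt's actual design.)

def predict(weights, bias, inputs):
    # Weighted sum of the inputs; "fire" (output 1) only above the threshold.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total + bias > 0 else 0

def train(samples, n_inputs, learning_rate=0.1, epochs=20):
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)  # -1, 0 or +1
            # "Turn the dials": strengthen or weaken each input's weight
            # in proportion to its part in the mistake.
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

# Toy task: learn the logical AND of two binary inputs from examples alone.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(samples, n_inputs=2)
print([predict(weights, bias, x) for x, _ in samples])  # [0, 0, 0, 1]
```

Notice that nowhere does the code state what “AND” means – the weights simply settle, correction by correction, into a combination that fires only when both inputs are on.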

Perceptrons and Programs

In order to understand the importance of Rosenblatt’s experiment, we need to sharpen the fundamental distinction between Rosenblatt’s perceptron machine and the way a “normal” computer program operates.

A program is a sequence of commands that a human programmer defines: if the programmer wants to make the computer recognize a square, for example, he must formulate rules for the machine that explicitly define the properties of the shape.

The perceptrons’ learning, on the other hand, did not happen by defining rules, but by presenting a variety of sample squares and strengthening or weakening the connections between the perceptrons until the right combination of weight values was found at each perceptron’s inputs.

These are two completely different approaches to problem solving: in the first, we dictate to the computer rules for solving the problem, such as “If a shape has four sides, is equilateral, and its angles are all 90 degrees, then it is a square.”

In the second, we provide it with examples of squares and a series of simple steps that, if executed, will gradually improve the network’s ability to identify squares.

Notice that at no point does Rosenblatt tell the machine what a square is. He only “taught” the machine by playing with its left-right-right-left dials until it developed a system for recognizing squares.

This approach, of learning from examples, is very close to the way people acquire many skills: for example, parents teach their children to identify a square by pointing to a square and saying “square” again and again, or pointing to a shape and asking ‘what is it?’. If the child is wrong, the parent corrects him or her, and if the child is right, they get rewarded.

The new approach to computer programming offered by Frank Rosenblatt does, in many ways, mirror what would happen half a century later when AlphaGo offered a paradigm shift in the field of AI.

AlphaGo vs. Deep Blue

One way to make sense of why AlphaGo, a machine that got good at a board game, means so much to the future of technology, is to compare it with its spiritual predecessor: IBM’s Deep Blue machine.

Deep Blue

Deep Blue was sort of like the original AlphaGo: in 1997, it beat the world’s greatest chess player, Garry Kasparov, in no less dramatic a fashion than Move 37. Kasparov, knowing his fate, stood up and walked off the television set with his arms out, frustrated, as if to say “What do you want me to do?”

IBM’s Deep Blue was a thoroughly programmed artificial intelligence, given a rulebook on how best to play chess. Every move Deep Blue executed was weighed against the other possible moves it could have played at any given stage in a chess match, keeping in mind the value of each piece and referencing pre-programmed chess information as it checked some 200 million positions per second. The algorithm sifted through a huge pool of possibilities and chose the move that would position it best moving forward. Because there are only a limited number of pieces, ways each piece can move, and spaces on the board to move to, Deep Blue’s processors were powerful enough to make such calculations.
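Deep Blue’s real search was a massively parallel, heavily hand-tuned affair, far beyond anything shown here. But the core idea – look ahead through the tree of possible moves, evaluate where each line leads, and pick the move with the best outcome – can be sketched in a few lines of Python on a deliberately tiny stand-in game (players alternately take one to three counters; whoever takes the last counter wins). The game and the numbers are invented purely for illustration:

```python
# Toy game-tree search in the spirit of Deep Blue's approach (not its actual
# algorithm): enumerate every possible continuation and pick the best one.
# Game: players alternately take 1-3 counters; taking the last counter wins.

def best_move(counters, maximizing=True):
    """Return (value, move) from the maximizing player's point of view:
    +1 means a forced win, -1 a forced loss."""
    if counters == 0:
        # The previous player took the last counter and won, so this
        # position is lost for whoever is now to move.
        return (-1 if maximizing else +1), None
    best_score, best_take = (-2, None) if maximizing else (+2, None)
    for take in (1, 2, 3):
        if take > counters:
            break
        score, _ = best_move(counters - take, not maximizing)  # look ahead
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_take = score, take
    return best_score, best_take

score, take = best_move(10)
print(f"From 10 counters, take {take} (position value {score})")  # take 2, value +1
```

Nothing is learned here: the program’s entire strength comes from enumerating possibilities and evaluating them, which worked for chess with enough hardware and pruning, but is hopeless at the scale of Go.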

AlphaGo, however, was created with a different approach. Because the game of Go has such an incomprehensible number of possible combinations of moves available to players throughout the game – it doesn’t even have the simplicity of direction that chess has – there’s no way even our most powerful computers can sift through every potential outcome at every stage of the game. Instead, the programmers at DeepMind fed their algorithm thousands of real games played by professionals – amounting to some 30 million moves – and the program itself learned how the pros do it. The program dutifully sifted through all of those scenarios and moves, recognizing patterns in how humans play the game and the strategies they employ, in order to be able to play like the best. Once AlphaGo could play like a pro, it was given the task of playing itself. Over many iterations, the program improved itself incrementally as it tried to find ways to beat previous versions of itself, adopting strategies that worked to its advantage and scrapping those that resulted in losses.

In this way, whereas Deep Blue calculated all possible scenarios and opted for the most effective outcome, AlphaGo had to learn from the ground up through trial and error. Where Deep Blue drew on a database of information to mathematically chart a decisive path to victory, AlphaGo was made strong by a wealth of experience built up over time: it doesn’t have to look back at every single data point it has been fed, because a process has been honed – something akin to “skill”.

Early Skepticism

But scientists in the mid-20th century couldn’t see so far ahead, and there were researchers who were not swept away by the general enthusiasm for neural networks. This is a good place to take a step back and look at the broader picture of artificial intelligence research.

Over the years, several different approaches have been developed to solve the question of how to make a computer behave intelligently. Some researchers prefer logic-based methods, others prefer methods based on statistical calculations, and several other approaches have also been floated.

We’re not going to review all of these approaches in depth in this episode, but I will note that there was no consensus among researchers as to which approach was superior to the others. Specifically for our purposes, in the 1960s and 1970s, many researchers believed that trying to imitate the biological mechanisms of the brain would not lead us to artificially intelligent computers.

Why? For the same reason that aircraft engineers do not try to imitate the flight of birds. If there is more than one way to solve a particular problem, the biological path is not necessarily the simplest and easiest way, from an engineering point of view.

One of those skeptics was Marvin Minsky. His objection stemmed from the fact that Rosenblatt’s perceptron had no solid theoretical basis – in other words, there was no mathematical theory that explained how the weights at the perceptron’s inputs should be adjusted to make it perform a given task. The success of Rosenblatt’s system in identifying shapes did not impress him; he claimed that the forms and letters Rosenblatt used to demonstrate the perceptron’s abilities were too simplistic and not a real challenge.

Marvin Minsky

In 1969, ten years after Rosenblatt first demonstrated the prototype of the perceptron, Minsky and another talented mathematician named Seymour Papert published a book called “Perceptrons”. The book analyzed in depth the component invented by Rosenblatt – and concluded that it would be impossible to develop artificial intelligence systems with it.
Why? The perceptron, they wrote, is an elegant component with impressive capabilities for its simplicity – but to build systems that can perform complex tasks, a great many perceptrons have to be connected together.

The Problem with Many Layers

Imagine a cake made of many layers: each layer has a number of perceptrons whose inputs are connected to the perceptrons in the layer above – and their outputs to the perceptrons in the layer below.

Layers of Artificial Neurons

The problem identified by Minsky and Papert lies in the inner layers of the cake. Rosenblatt’s system consisted of two layers: inputs, and outputs. When you have two layers of perceptrons it’s relatively easy to find the correct adjustment of the weights in order to reach the desired result. But if you have an ‘internal’ layer – that is, the one between the input layer and the output layer – it’s much harder to determine the proper weights.

To understand this, let’s imagine a group of children playing the game Telephone: You tell the first child a sentence, who whispers it to the next child in line, who whispers it to the next, and so on, until you reach the last child, who says aloud the sentence he heard. The first child in the row is, in our analogy, a perceptron in the input layer. The last child – the perceptron in the output layer. All the children in between are perceptrons in inner layers.

Now, suppose we have only two children in the game: you tell the first child the sentence, he tells the other child, who says it out loud. If something goes wrong and the sentence doesn’t come out right, it’s easy to find out where the problem was: either the first child passed it on wrong, or the other child misheard it. You find the problem child, explain what needs to be done – maybe hint at a possible candy waiting for a good boy – and that’s it, basically. However, if we have ten children in the game and the sentence comes out wrong, how will we know where the disruption occurred? This is much harder, and without that knowledge we cannot fix the system.

The same applies, in principle, to the perceptron machine. If we have only two layers – inputs and outputs – it is easy to find the desired weight adjustment. If we have internal layers, it is very difficult to know how to play with the dials. And without internal layers, Minsky and Papert determined, artificial neural networks are limited to simple tasks such as identifying basic shapes, and could never be used for more demanding tasks like facial recognition.

Backpropagation

But here and there were still some stubborn scientists who refused to abandon the artificial neurons. One of them was a doctoral student named Paul Werbos, who in 1974 found a solution to the problem of internal layers that bothered Minsky and Papert so much.

His solution was a method called ‘backpropagation’. It means, essentially, going backward through the neural network and modifying the weights of the connections between the artificial neurons: you start from the final output layer and go back one layer at a time, changing weights as you go, so that the next time the inputs arrive, the final result will be correct. With each iteration of trial and error, backpropagation allows the system to trace back its errors, gradually correcting where it went wrong until it reaches optimal results. The most significant thing to understand about backpropagation is that it rests on a sound and proven mathematical theory: that is, it has the solid theoretical basis that Marvin Minsky was looking for.
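Here is a minimal sketch of that idea in Python with NumPy: a tiny network with one hidden (“internal”) layer, trained by backpropagation on the XOR problem – a classic task a single perceptron cannot solve. It is only an illustration of the principle, not Werbos’ actual formulation; the layer sizes, learning rate and iteration count are arbitrary.

```python
# A minimal backpropagation sketch (NumPy): one hidden layer, trained on XOR.
# Illustrative only -- layer sizes, learning rate and step count are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

# XOR: output 1 only when exactly one of the two inputs is 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4))   # input  -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass: the inputs flow through the hidden ("internal") layer.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: measure the error at the output layer first...
    output_delta = (output - y) * output * (1 - output)
    # ...then propagate it back to assign each hidden unit its share of blame.
    hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)

    # Nudge every weight slightly in the direction that reduces the error.
    W2 -= lr * hidden.T @ output_delta
    b2 -= lr * output_delta.sum(axis=0)
    W1 -= lr * X.T @ hidden_delta
    b1 -= lr * hidden_delta.sum(axis=0)

print(np.round(output.ravel(), 2))  # typically close to [0, 1, 1, 0]
```

The two “delta” lines are the heart of it: the error is measured at the output and then passed backwards, so even the hidden layer finds out how much each of its “dials” contributed to the mistake – exactly the credit-assignment problem that stumped the Telephone-game children.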

But Werbos was only a doctoral student, and his ideas got little attention from other researchers. It would take more than ten years after Paul Werbos’ initial discovery for the idea of backpropagation to be rediscovered independently by several researchers at about the same time, among them Geoffrey Hinton, David Rumelhart, and James McClelland.

Learning a Language

Rumelhart and McClelland were psychologists by training and came to the field of artificial neural networks not from computer science, but from the study of human language. They took part in a debate in the academic community about how children learn to speak. Language is a distinctly human ability that sets us apart from the rest of the animals, so we can conclude that something in the structure of the human brain is unique in the natural world in this respect. The question is – what?

James McClelland

Most researchers speculated that the rules of language and speech are “encoded” within the brain in some way, like hidden software somewhere in your head. David Rumelhart and James McClelland advocated a different view. They believed that there is no ‘rulebook’ in the brain for learning language, and that our ability to learn a language is based on how neurons interact. In other words – there are no rules, there are only connections.

To prove their claim, the two psychologists turned to artificial neural networks. In 1986, they built a computerized model of a multi-layer neural network and, using backpropagation, taught it to produce the past tense of English verbs – for example, work-worked, begin-began, and so on. The network received a present-tense verb and had to guess what its past-tense form was. As anyone who has learned English knows, guessing the past-tense form of a verb isn’t trivial: some verbs take the ‘ed’ suffix – for example, work-worked, carry-carried – while others have a unique past form – sing-sang, begin-began and so on.

When small children learn to speak English, there is a phenomenon that repeats itself in almost all cases. At first, the child memorizes the past form of several verbs and says them properly. But then the child discovers the ‘rule’ of adding ‘ed’ at the end, and out of enthusiasm begins to add ‘ed’ even to verbs for which doing so is incorrect: for example, singed, begined and similar errors. Only after constant correction does the child understand their mistake and learn when to add ‘ed’ and when not to.

Amazingly, the artificial neural network of Rumelhart and McClelland made the exact same mistake as children do. At the beginning of the learning process, the system correctly predicted the past forms of verbs – but then, as more and more examples were fed into it, the neural network identified the rule of adding ‘ed’ to verbs – and, just like human children, it began to err and add ‘ed’ where it was not applicable. Only when the researchers introduced more and more examples of verb conjugation did the system learn when to add ‘ed’ and when not – and its predictive ability improved accordingly.

In other words, Rumelhart and McClelland demonstrated how a network of neurons can quite literally learn a characteristic of human language. Not only that, but the Artificial Neural Network, without any outside reason to do so, took the same exact path a human brain would to that end.

The real question now becomes: if this is also how babies learn language, do our brains perform their own version of backpropagation? Now that we’ve created machines that act like brains, maybe we need to ask less “Can computers think like humans?” and more “Do humans think like computers?”

To probe further, we’ll pick up in the next episode with the one advance that shot artificial intelligence research into the stratosphere, return to our tense series between AlphaGo and Lee Se-Dol, and interview the CEO of a deep learning company.

Part II Coming Soon!

Music Credits

Echoes of Time by Kevin MacLeod

Readers! Do You Read by Chris Zabriskie

Candlepower by Chris Zabriskie

Dopplerette by Kevin MacLeod



The Dark Avenger [From: Malicious.Life] | Curious Minds Podcast

We’re back from our short break, with a fantastically interesting episode:
In 1989, a message was found in a virus: “Eddie Lives…Somewhere in Time!”. ‘Eddie’ was a particularly nasty virus, and its discovery led a young Bulgarian security researcher down a rabbit hole, on a hunt for the prolific creator of the Eddie virus: The Dark Avenger.

Guests: Vesselin Bontchev, Graham Cluley

Link to more Malicious.Life episodes

Download mp3 | Read Transcript
Subscribe: iTunes | Android App | RSS link | Facebook | Twitter

 


Are Software Bugs Inevitable? Part 2: The Most Expensive Failed Software Project Ever | Curious Minds Podcast

After describing the Software Crisis in the previous episode, we discuss the various methodologies and practices implemented over the years to combat the complexities of software development. We’ll tell the sad story of the FBI’s VCF project – perhaps the most expensive failed software project ever – and hear about Dr. Fred Brooks’ classic book, ‘The Mythical Man-Month’.

Link to Part I

Big thanks to Aviran Mordo from Wix.com for appearing in the episode.  


Download mp3
Subscribe: iTunes | Android App | RSS link | Facebook | Twitter

Are Software Bugs Inevitable?

Written By: Ran Levi

Part II:  The Most Expensive Failed Software Project Ever

(Read Part I)

In this episode, we’re continuing our discussion about the Software Crisis, which we introduced last week. If you missed that episode, you might want to go back and have a read before listening to this one.

The question we asked was: why do so many large software projects fail so often? THAT is the Software Crisis, a term that was first coined in 1968.

“Failure”, in the context of software, has many aspects: software projects tend to deviate from schedules and budgets, and produce software that does not meet the customer’s specs or expectations – and often contains a significant number of errors and bugs. It is a question that has troubled many engineers and computer scientists over the years: what makes software so complicated to engineer?

The solutions to this problem changed over the years, along with changes in the business culture of the High Tech world.

The Waterfall Methodology

In the 1960s, the dominant approach to software development – especially when it came to complicated projects – was known as the “Waterfall” methodology. This approach divides a software project into well-defined stages: first, the customer defines the requirements for the product; a software architect – usually an experienced programmer – creates the outline of a system that fits these requirements; and then a team of programmers writes the actual code that fits this outline. In essence, the Waterfall approach is the same approach a carpenter would use when creating a new piece of furniture: learn what the customer wants, draw a schematic outline of the product, measure twice – and cut once.

The name “Waterfall” hints at the progression between stages: a new stage in the project will begin only when the last one is complete, just like water flowing down a waterfall. You can’t start writing code until the client has defined their needs and wants. Seems like a sensible approach, and indeed the Waterfall methodology served engineers outside of the software industry for hundreds of years. Why shouldn’t it be used here as well?

But in the last twenty years, the “Waterfall” method has been under constant and profound criticism, coming from software developers and business leaders. The main argument against Waterfall is that even though it served other engineering disciplines, from architecture to electronics – it is not well-suited to the software field.

And why is that? Let’s examine this question through an example which is, most likely, one of the most expensive failures in the history of software engineering.

VCF – Virtual Case File

In the year 2000, the FBI decided to replace its entire computer system and network. Many of the agency’s computers were old and outdated and no longer suited the needs of agents and investigators. Some of the computers still used 1980s green screens and didn’t even support using a mouse… After September 11th, FBI agents had to fax photos of suspects because the software they used couldn’t attach a document to their e-mails! It can’t get much worse than that…

Seal of The FBI

Finally, at the end of that year, Congress approved a budget of four hundred million dollars for upgrading the FBI computer system and network. The project was supposed to take three years, and replace all computer stations with modern and capable machines, connected via a fast optic cable network system.

The crowning glory of the new system was a software called VCF, for “Virtual Case File”. VCF was supposed to allow agents at a crime scene to upload documents, images, audio files, and any other investigation material to a central database, in order to cross-reference information on suspects that they could later present in court. The company hired to write the advanced software was called SAIC.

The FBI Goes Waterfall

It’s important to note that the FBI employs tens of thousands of people, and like most large organizations, it tends to be very bureaucratic and conservative. Naturally, the preferred methodology for the VCF project was Waterfall, and so the project managers began by writing an eight-hundred-page document that specified all the requirements for the new software. This document was extremely detailed, with sentences like: “In such and such screen, there will be a button on the top left corner. The button’s label will read – ‘E-Mail’.” They didn’t leave the developers a lot of room for questions…

But in an organization as huge and varied as the FBI, it’s doubtful that there is one person, or even one group, who understands the practical daily needs of all the departments and groups for which the program was written. As the project progressed, it became clear that the original requirements didn’t meet the day-to-day needs of agents.

So, special groups were assigned to study the needs of the agents in the field, and they constantly updated the project’s requirements. As you might imagine, the constant changes made the requirements document almost irrelevant to the developers in SAIC who actually wrote the code. The events of September 11th gave the project a new sense of urgency, and the tight schedule soon created conflicts between the programmers and FBI management. The software developers were frustrated over the ever-changing requirements, while FBI agents felt their needs were being ignored.

A Dramatic Failure

Things got worse and worse, and by the beginning of 2003, it was clear that the new software wouldn’t be ready on time. Since the VCF project was deemed a matter of importance to national security, Congress approved an additional budget of two hundred million dollars. But that didn’t help either.

In December of 2003, a year after it was supposed to be ready, SAIC finally released the VCF’s first version. The FBI rejected it almost immediately! Not only was the software buggy and error-prone, it also lacked basic functions like “bookmarking” and search history. It was totally inadequate for field or office work.

In an effort to save the failing project, the FBI invited a committee of outside experts to advise the agency. One of the committee members, Professor Matt Blaze, later said that he and his colleagues were shocked once they analyzed the software. He jokingly told a reporter, quote, “That was a little bit horrifying. A bunch of us were planning on committing a crime spree the day they switched over. If the new system didn’t work, it would have just put the FBI out of business.”

In January 2005, the head of the FBI decided to abandon the project. This wasn’t an easy decision since it meant that all the agency’s personnel would have to continue using the ancient computers from the 1980s and 90s for at least five more years. This had a sizeable impact on national security, not to mention all that money that was spent for nothing.

Why Did VCF Fail?

The VCF project failed even though the FBI used the age-old approach of “measure twice, cut once”. It defined all the software requirements up front and left nothing to chance – or so it seemed. Critics of the Waterfall methodology claim that the problem was that the FBI was never able to define its needs perfectly, or even adequately. In such a big and complex organization, defining all the software requirements up front is an almost hopeless task, since no single person knows everything that’s going on in the organization and also has a good grasp of what the technology can and can’t do.

The FBI’s VCF fiasco is typical, it seems, for large-scale software projects. I recently had a chance to speak with a very experienced programmer who has worked on a different large scale project.

“My name is Aviran Mordo, I’m the Head of Engineering for Wix.com. I’ve been working in the software industry for over twenty years, from startup companies here in Israel to developing the US National Archives.

Aviran Mordo

“So working on a government project is everything that you hear that is wrong with Software Development, like Waterfall and long processes. We tried to be as Agile as we can, and during the prototyping phase, we actually succeeded – that is why we won the project. But when we started the actual writing of the project, hundreds of people came and worked on humongous documents that nobody read, and built this huge architecture with very very long processes. You could see that this is going to take a long time, and the project just suffered so badly… That was the point that I switched to a smaller team, to do that on the civilian market. We were just five people. We started about six months after the big project started. We were five people against a hundred people. After six months we were a year ahead of their development schedule.”

“They actually had to restart the project twice. [Ran: when you’re saying ‘restart’ you mean they just developed everything from the beginning?] Yes, they had to develop everything from the beginning. The architecture, everything. “

I asked Aviran why, in his opinion, the Waterfall methodology – which does a great job in other engineering disciplines – fails so miserably in software engineering.

“Several Things. One, you do the architecture up front and think you have all the answers. If you don’t have all the answers, you plan for the things you don’t actually know. So you build a lot of things that are wasting your time for future features that you think may affect the product or for requirements that may change in the future – but since the release cycle is so long, and it costs so much to do a new version, you try to cramp up as many features as you can into a project. This derails you from whatever really need to be achieved and have a quick feedback from the market and from the client. [it’s] A waste of time.”

Aviran’s view is echoed by many other developers, including the ones who investigated the failed FBI project. Waterfall assumes you have all the information you need before the project begins. As we already saw, software projects tend to be so complex that this assumption is often wrong.

The Agile Methodology

So in the 1990s and early 2000s, an alternative to the unsuccessful Waterfall methodology appeared: its name was Agile Programming. The Agile methodology is almost the exact opposite of Waterfall: it encourages maximum flexibility during the development process. The strict requirement documents are abandoned in favor of constant and direct communication with the client.

Say, for example, that the customer wants an email client. The developers work for a week or two, and create a rough skeleton of the software: it might have just the very basic functions, or maybe just buttons that don’t do anything yet. They show the mockup to the customer, who then gives them his feedback: that button is good, that one is wrong, etc. The developers work on the mockup some more, fixing what needs to be fixed and maybe adding a few more features. There’s another demonstration and more feedback from the customer. This process repeats itself over and over until the software is perfected. Experience shows that projects developed using the Agile methodology are much less prone to failure: this is because the customers have ample time and opportunity to understand their true needs and “mold” the software to fit.

Not A Perfect Solution

So, is Agile the solution to the software crisis? Well, probably not. Aviran, Wix’s Head of Engineering, says that Agile is not suited for every kind of software project.

“It’s harder to plan ahead what’s the end goal. This [Agile] works well for a product which is innovative but it won’t work well for a product which is revolutionary. For example, the iPhone. The iPhone cannot work this way – it’s a really huge project with a lot of things in it, and it revolutionized the market. It could be developed with Agile, but you miss the step of getting it in front of your customer. So if you’re doing something which is ‘more of the same’ or you’re not trying to change the market – Agile works extremely well. If you’re trying to change the market and do something revolutionary, you could do Agile to some extent until you actually get to market.”

Also, Agile has been around for twenty years – and large-scale projects do still fail all around us. It is a big improvement over Waterfall, no doubt, but it probably won’t solve the software crisis.

So, what is the solution? Maybe a new and improved methodology? A higher-level programming language?

Dr. Frederick Brooks

One of the many computer scientists who tried to tackle this question was Dr. Frederick Brooks. In the 1960s and 70s, Brooks managed some of IBM’s most innovative software projects – and naturally suffered some failures himself. Prompted by his own painful experiences, Brooks wrote a book in 1975 called “The Mythical Man-Month”. A “Man-Month” is a sort of standard work-unit in a software project: the work an average programmer does in a single month.

Fred Brooks

Brooks’ book, and another important article he published called “No Silver Bullet”, influenced a whole generation of software programmers and managers. In his writings, Brooks analyzed the main differences between the craft of software writing and other engineering disciplines, focusing on what he perceived to be software’s most critical characteristic: its complexity.

Brooks claims there is no such thing as “simple software”. Software, he writes, is a product of the human thinking process, which is unique to every individual. No two developers will write the exact same code, even if they are faced with the exact same problem. Each piece of software is always unique, since each brain is unique. Imagine a world where each and every clock has its own unique clockwork mechanism. Now imagine a clockmaker trying to fix these clocks: each and every clock he opens is different from the rest! The levers, the tiny screws, the springs – there’s no uniformity. A new clock, a whole new mechanism. This uniqueness would make clock-fixing a complex task that requires a great deal of expertise – and that’s the same situation with software.
Now, high complexity can be found in other engineering disciplines – but we can almost always find ways to overcome it. Take electronic circuits, for example: in many cases, it is possible to overcome complexity by duplicating identical sub-circuits. You could increase a computer’s memory capacity by adding more memory cells: you’ll get an improved computer but the design’s complexity would hardly change since we’re basically just adding more of the same thing. Sadly, that’s often not true for software: each feature you add to a piece of software is usually unique.

The Mythical Man-Month Cover

What about architecture? Well, architects overcome the complexities of their designs by creating good schematics. Humans are generally good at working with drawings: an architect can open the schematics, take a quick look, and get a fairly good idea of what’s going on in the project. But software development, explains Brooks, does not allow such visualization. A typical piece of software has both a dimension of time – that is, do this, then do that – and a dimension of space, such as moving information from one file to another. A diagram can usually represent only a single dimension, and so can never capture the whole picture. Imagine an architectural schematic that tries to capture both the walls and levels of a building – and the organizational structure of the company that will populate it… Many times, a programmer is left with no choice but to build a mental model of the entire project he is working on – or, if it is too complex, of parts of it.

In other words, Brooks argues that unlike electronics or architecture, it is impossible to avoid the “built-in” complexity of software. And if this is true, then computer software will always contain bugs and errors, and large projects will always have a high risk of failure. There is no “Silver Bullet” to solve the Software Crisis, says Brooks.

But you might be asking – what about high-level languages? As we learned in the previous episode, advanced software languages like FORTRAN, C, and others greatly improved a programmer’s ability to tackle software complexity. They made programming easier, even suitable for kids. Isn’t it possible that in the future someone will invent a new, more successful programming language that would overcome the frustrating complexity of software?

Brooks’ Advice

The answer, according to Brooks, is no. High-level languages allow us to ignore the smaller details of programming and focus on the more abstract problems, like thinking up new algorithms. In a way, they remove some of the barriers between our thought processes and their implementation in the computer. But since software complexity is a direct result of our own thought processes, the natural complexity of the human mind – a higher level language will not solve the software crisis.

But Frederick Brooks does have a rather simple solution – one which might solve the problems of most companies. His suggestion: don’t write software – buy it!

Purchasing software, says Brooks, will always be cheaper and easier than developing it. After all, buying Microsoft’s Office suite takes minutes, while developing it from scratch might take years – and fail. True, a purchased one-size-fits-all product will never meet all of an organization’s needs – but it’s often easier to adjust to existing software than to create new software. Brooks uses salary calculation software to illustrate his point: in the 1960s, most companies preferred writing custom software to handle their salary calculations. Why? Because back then, a computer system cost millions of dollars, while writing the software cost only a few tens of thousands. The cost of customized software seemed reasonable when compared to the cost of the hardware. Nowadays, however, computers cost just a few thousand dollars – while large-scale software projects cost millions. Therefore, almost all organizations have learned to compromise and purchase standard office software such as “Excel” and “Word”. They adjusted their needs to fit the available, ready-made software.

Brooks gave another piece of advice, this time to the software companies themselves: nurture your developers! Some brains are better suited to handle the natural complexity of software development. Just as with sports and music, not all people are born equally talented. There are “good programmers” – and then there are “outstanding programmers”. Brooks’ experience taught him that a gifted programmer will create software that is ten times better than that made by an average programmer. Software companies should try to identify these gifted individuals early in their careers and nurture them properly: assign them good tutors, invest in their professional education, and so on. That kind of nurturing isn’t cheap – but neither is a failed software project.

Conclusion

To summarize, we started by asking whether software bugs and errors are inevitable – or can we hope to eliminate them sometime in the future. Digging deeper, we discovered an even more fundamental problem: large-scale software projects not only have plenty of bugs – they also tend to fail spectacularly. This is known as the Software Crisis.

Modern software development methodologies such as Agile can sometimes improve the prospects of large-scale projects – but not always, and not completely. The only real solution, at least according to Dr. Frederick Brooks, is finding developers who are naturally gifted at handling the complexities of software – and helping them grow to their full potential. And until we find the Silver Bullet that will solve the Software Crisis, we’ll just have to… well – bite the bullet.


Credits

Mountain Emperor by KevinMacLeod
Hidden Wonders by KevinMacLeod
Clock Ticking Sound Effects by FreeHighQualitySoundFX, Sound Effects Central, Sound Effects HD Indie Studios, and stunninglad1


Are Software Bugs Inevitable? Part 1: FORTRAN and the Denver Airport Baggage Disaster | Curious Minds Podcast

Software errors and random bugs are rather common: We’ve all seen the infamous Windows “blue screen of death”… But is there really nothing we can do about it? Are these errors – from small bugs to catastrophic mistakes – inevitable? In this episode, we’ll tell the story of FORTRAN, the groundbreaking high-level computer language, and the sad, sad tale of the Denver Airport Baggage Disaster. Don’t laugh, it’s a serious matter. 

Link To Part II


Download mp3
Subscribe: iTunes | Android App | RSS link | Facebook | Twitter

Are Software Bugs Inevitable?

Written By: Ran Levi

Part I

A car that breaks once every few weeks is simply unacceptable. We expect our cars to have a certain level of reliability. But when it comes to computers – software errors and random bugs are rather common. We’ve all seen the infamous Windows “blue screen of death”, and most smartphones require a reboot every once in a while.

In a way, we’ve learned to live with software errors and take them for granted. But is there really nothing we can do about it? Are these errors – from small bugs to catastrophic mistakes – inevitable, or is there hope that as technology and innovation move forward, we’ll be able to overcome this annoying problem – and make software bugs a thing of the past? This will be the main question of this episode.

How Software Works

Software bugs are nothing new, of course. Writing computer software in the 1940s and 50s was a complicated and difficult task, so it’s no wonder that the first generation of computer programmers considered bugs and errors inevitable.


But let’s start from the basics, with a quick refresher on how computer software works. A computer is a pretty complex system, and its two main parts are the processor and the memory. The memory cells contain numbers; the processor’s role is to read a number from the memory, apply some sort of a mathematical operation on it such as adding or subtracting and then write the result back to the memory.

Software is a sequence of commands telling the processor where certain information is located in the memory and what needs to be done with it. Think of the information in a computer’s memory as food ingredients: the software is the recipe – it tells us what to do with the different ingredients at every given moment. A software error is an error in the sequence of commands. It might be a missing command, two commands given in the wrong order, or an altogether wrong command.

Just Plain Numbers

So why were software bugs seen as a fact of life back in the 1950s? Back then, both information and commands were given not as words – but as plain numbers. For example, the number forty-two might represent the command “copy,” so that the sequence 42-11-56 might represent an action like: “copy the content of memory cell eleven to memory cell fifty-six.” Even a mildly complicated calculation, like solving a mathematical equation, might require hundreds – if not thousands – of such command sequences. Each such sequence had to be perfectly correct, or else the entire calculation might fail – just like in baking: if we put the frosting on the cake before baking the batter, that cake will be a disaster.
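To get a feel for what that was like, here is a toy “machine” sketched in Python. The opcodes are invented for this example (42 = copy, 10 = add, 99 = halt) and correspond to no real computer, but the flavour is right: the program is nothing but a list of numbers, and a single wrong number derails everything.

```python
# A toy numeric machine: memory is a list of numbers, and the program itself
# is just a list of numbers too. (Invented opcodes, for illustration only.)

def run(program, memory):
    pc = 0                                 # "program counter": position in the list
    while True:
        op = program[pc]
        if op == 42:                       # 42 a b  ->  copy cell a into cell b
            a, b = program[pc + 1], program[pc + 2]
            memory[b] = memory[a]
            pc += 3
        elif op == 10:                     # 10 a b c  ->  add cells a and b into cell c
            a, b, c = program[pc + 1], program[pc + 2], program[pc + 3]
            memory[c] = memory[a] + memory[b]
            pc += 4
        elif op == 99:                     # 99  ->  halt
            return memory
        else:
            raise ValueError(f"unknown opcode {op} at position {pc}")

memory = [0] * 64
memory[11] = 7
program = [42, 11, 56,      # copy cell 11 into cell 56
           10, 11, 56, 12,  # add cells 11 and 56, store the sum in cell 12
           99]              # halt
print(run(program, memory)[12])  # 14
```

Swap a single 11 for a 12 and the result silently changes; drop the 99 and the machine runs off the end of the program. Now imagine thousands of lines of this, written by hand.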

But software is even less forgiving than baking. You can’t even make a small mistake because then the entire equation will fail. It’s like a house of cards. It’s no wonder then that at the time, only computer fanatics were willing to devote their time to programming. It was a truly Sisyphean task.

Assembly

In the late 1940s, a new way of writing programs was developed: “Assembly” language, a human-readable notation for the machine’s numeric code. Assembly replaced the numbers with meaningful words that were easier to remember – for example, the number 42 might be replaced by the word MOV. These textual commands were then fed to a special program named the “Assembler” that converted the words back to numbers – since that’s ultimately what computers understand.

Example of Assembler Code

For programmers, Assembly represented a real improvement: for humans, words are much easier to work with than random numbers. But Assembly didn’t eliminate software bugs. It was still too “Low Level” – meaning even simple calculations still required thousands upon thousands of code lines. Programming was still an exhausting and Sisyphean task.

John Backus

One of those exhausted programmers was John Backus. Backus was a mathematician who worked for IBM, calculating the trajectories of rockets. He absolutely hated programming in Assembly and the tedious process of finding and eliminating software bugs.

John Backus

So in 1953, Backus decided to do something about it. He wrote a memo to his supervisors suggesting that IBM should develop a new programming language that would replace Assembly. This new language, wrote Backus, would be a “High-Level Programming Language”: each individual command would stand for numerous Assembly commands. For example, you could use the command “Print” – and “behind the scenes” it would invoke hundreds of simpler Assembly commands that handle the actual process of printing a character on the screen or on a page. This level of abstraction would make programming easier, simpler – and hopefully, a lot less prone to errors.

But if a High-Level programming language was such a great idea, how come no one thought about it before? Well, it turns out that Backus wasn’t the first. In fact, similar computer languages were developed as early as the 1940s, and many computer scientists found them to be a fascinating research subject. In reality, though, none of these early high-level languages threatened Assembly’s dominance. Why is that?

Recall that the computer, as a rule, understands nothing but numbers. Much like Assembly had to have an “Assembler” to translate the textual commands into numbers, a high-level programming language needs a special program called a “Compiler” to translate its high-level commands. Unfortunately, the translated code produced by the early compilers was very inefficient compared to code written in Assembly by a human programmer. The compiler could create ten lines of code where a human, with a bit of creative thinking, could accomplish the same task with a single line.
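As a rough illustration of what a compiler does – with invented syntax and the same made-up opcodes as the toy machine sketched earlier – here is a tiny “compiler” that turns a few readable statements into the raw numbers such a machine would execute:

```python
# A toy "compiler": translate readable statements into the numeric program
# of the toy machine above. (Invented syntax and opcodes, illustration only.)

OPCODES = {"COPY": 42, "ADD": 10, "HALT": 99}

def compile_line(line, cells):
    """Translate one statement into the numbers the toy machine executes."""
    parts = line.split()
    if parts[0] == "COPY":    # COPY x -> y
        return [OPCODES["COPY"], cells[parts[1]], cells[parts[3]]]
    if parts[0] == "ADD":     # ADD x y -> z
        return [OPCODES["ADD"], cells[parts[1]], cells[parts[2]], cells[parts[4]]]
    if parts[0] == "HALT":
        return [OPCODES["HALT"]]
    raise ValueError(f"unknown statement: {line}")

cells = {"x": 11, "y": 56, "z": 12}        # variable names -> memory cells
source = ["COPY x -> y", "ADD x y -> z", "HALT"]
program = [number for line in source for number in compile_line(line, cells)]
print(program)  # [42, 11, 56, 10, 11, 56, 12, 99]
```

A real compiler does vastly more, of course – and doing it efficiently was precisely the hard part.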

This inefficiency meant that software written in high-level code tended to be slow. And since computers in the 1940s and 50s were weak and slow to begin with, this hit to computation performance was a penalty no one was willing to pay. And so, high-level programming languages remained an unfulfilled promise.

A High-Level Language

Backus was aware of the challenge, but he was determined. In his memo to IBM, he stressed the potential financial benefits: programming in a high-level language could reduce the number of bugs in a software project, shorten development time, and reduce costs by as much as seventy-five percent.

Backus made a convincing argument, and IBM’s CEO approved his idea. Backus was made the head of the development team and recruited talented and enthusiastic engineers who welcomed the challenge of creating this high-level language. They worked days and nights, and often they slept at a hotel near the offices in order to get available computer time even before sunrise.

Their main task was writing the compiler: the software that translated the high-level code to low-level machine language. Backus and his people knew that the success of their project depended on the compiler’s efficiency: if the code it produced was too inefficient, their new language would fail just as its predecessors did.

Four years later, in 1957, the first version of the new high-level programming language was ready. Backus named it FORTRAN, short for Formula Translation. The name reflected what the language was intended for: scientific and mathematical calculations. IBM had just launched a new computer model named IBM-704, and Backus sent the new compiler and a detailed FORTRAN manual to every customer who bought the new computer.

FORTRAN - Curious Minds Podcast
FORTRAN

FORTRAN Takes The World By Storm

The response was overwhelmingly positive. Programmers were delighted by how easily and quickly they could write software using FORTRAN! One customer recalled how shocked he was to see a colleague, a physicist at a nuclear research institute, write a complicated piece of software in a single afternoon, when writing the same code in Assembly would have taken several weeks!

FORTRAN took the software world by storm! Within less than a year, almost half of all IBM-704 programmers were using FORTRAN on a daily basis. It was the salvation all those bleary-eyed programmers had prayed for: code written in FORTRAN was up to twenty times shorter than the equivalent Assembly code, while still being efficient and blazingly fast. In fact, the new FORTRAN compiler was so good that it was still considered the best of its kind twenty years later.

This was a big stride toward a future free of software bugs. Backus' team was so thrilled by FORTRAN's success that they included hardly any tools in the language for detecting and analyzing software errors. They believed such tools were no longer needed, since the new language had reduced the number of bugs considerably.

FORTRAN Is Made A Standard

In 1961, the American National Standards Institute decided to make FORTRAN a standard language. This was a big deal since until then FORTRAN only worked with IBM-704 computers: making it a national standard meant that it could now be used on computers from other manufacturers as well. Programming became easy and fun! It was no longer restricted to hardcore mathematicians and engineers. Amateur programmers taught themselves FORTRAN from books. Assembly was almost gone, and FORTRAN had taken over the market.

FORTRAN's success ushered in a new generation of high-level programming languages like COBOL, ALGOL, Ada, and C, the most successful of them all. These languages not only improved on FORTRAN's ideas and made programming even easier, they also made programming suitable for a much larger variety of tasks: from accounting to artificial intelligence.

Nowadays, there are hundreds of high-level programming languages to choose from. Some modern languages such as Python or JavaScript are so simple that even children can learn how to use them easily!

You might be surprised to learn that fifty years after it was first released – FORTRAN is still alive and kicking! It’s gone through changes and updates over the years, but is very much still relevant today – and some scientists and researchers still prefer it for complicated calculations. This is an amazing feat for a language designed in an era when programming was done with punched cards!

John Backus passed away in 2007. He was fortunate enough to see how the language he helped create changed the programming world. It transformed programming from a dreadful task – to a sort of positive challenge, even a hobby embraced by millions around the world.

What About The Bugs?!

All's well that ends well! But wait a minute! Wait just a minute… aren't we forgetting something? What about the bugs? Did high-level languages solve the problem of software errors?

No, unfortunately, they didn't. You see, while high-level languages did make programming a lot easier, they also allowed software to become much more complex. It's like having the option to buy LEGO blocks instead of fabricating them yourself: you can spend all your time building bigger and bigger creations instead of doing everything from scratch. Similarly, high-level languages allow developers to add more and more features to their programs, and so the advantages of high-level languages were balanced by the ever-increasing complexity of the programs themselves. The bugs never went away.

A Fundamental Problem in Software Design

In the 1960s it became clear to many computer programmers that reliability was a fundamental problem in software design. Despite all the great developments, it was still almost impossible to create a piece of software with no errors. I mean a complex, feature-rich piece of software, of course, not some trivial program.

Worse yet, it seemed that it was getting harder and harder to complete a software development project “successfully.” What do I mean by “successfully”? A successful software project is one whose output is a high-quality, bug-free piece of code, tailored to meet the customer’s specifications, completed within schedule and without budget overruns. This goal was proving to be more elusive with every passing year.

Modern research clearly shows that when it comes to software projects, failure rates are extremely high compared to other engineering disciplines. A typical software project has a twenty-five percent chance of going over budget, missing its deadline, or producing software that doesn't meet the customer's expectations. That's one out of every four projects! And if the project lasts more than eighteen months, or if the staff is larger than twenty people, the risk of failure rises above fifty percent. For even larger projects that go on for years and involve many teams of programmers, the likelihood of failure is close to a hundred percent. About ten percent of those projects fail so dramatically that the entire software is thrown away and never used at all. What a waste!

This realization hit many programmers hard. I'm a software developer myself, you know, and we developers take our profession very seriously. We can spend years, and I'm not joking, debating the proper way to capitalize variable names in the code.

Over time, software developers started squinting at other engineering fields, like architecture, and asking themselves why software couldn't be more like them. I mean, architects and engineers build tall buildings, long bridges and other structures that are reliable, that don't go over budget or schedule (well, at least most of the time), and that are built according to spec. Experience had taught software programmers that their projects usually wouldn't reach the same result.

In 1968, some of the world's leading software engineers and computer scientists gathered at a NATO conference in Germany to discuss this obvious difficulty of creating successful software. The conference didn't produce a solution, but it did give the problem a name: "The Software Crisis."

Building A New Airport

So, what does the software crisis look like in real life? Here's an example.

In 1989, the city of Denver, Colorado, announced that it was embarking on one of the most ambitious projects in its history: building a modern new airport that would carry state tourism and local business into the twenty-first century.

Denver International Airport - Curious Minds Podcast
Denver International Airport

The jewel in the modern airport’s crown was to be a new and sophisticated baggage transportation system. I mean, baggage transportation is an important part of every modern airport. If a suitcase takes too long to get from point to point – passengers might end up missing their connecting flights. If a suitcase gets lost – that’s a ruined vacation. It’s obvious why the project’s leaders took their new baggage transportation system very seriously.

Aerial View of the Denver Airport during construction - Curious Minds Podcast
Aerial View of the Denver Airport during construction

The company chosen to plan the new system was BAE, a company experienced in its field. BAE's engineers examined the blueprints of the future airport and realized that this was going to be a monumental challenge: Denver's airport was going to be much larger than usual, which meant that transporting a piece of baggage from one end of the airport to the other might take up to 45 minutes! To solve that problem, BAE designed the most advanced baggage transportation system ever built.

A Breathtaking Design

The plan called for a fully automated system. From the moment an airplane landed until the moment the luggage was picked up from the carousel, no human hands would touch it. Barcode scanners would identify each suitcase's destination, and a computerized control system would make sure an empty cart was waiting for it at just the right place and time, to carry it as quickly as possible, via underground tracks, to the correct terminal. Timing was incredibly important: the carts weren't even supposed to stop; suitcases were supposed to drop from one track onto a moving cart at exactly the right moment.

Just to give you some perspective on BAE's breathtaking design: there were 435,000 miles of cables and wires, 25 miles of underground tracks and 10,000 electric motors to power them. The control system included about 300 computers to supervise the motors and carts. It is no wonder that Denver's mayor said the new project was as challenging as, quote, "building the Panama Canal."
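Just to get a feel for the kind of timing problem the engineers were up against, here's a small, hypothetical sketch in Python. The numbers, names and logic are invented for this illustration and have nothing to do with BAE's actual control software; it simply picks, for an incoming suitcase, an empty cart whose predicted arrival at the transfer point matches the moment the suitcase will drop.

```python
# Hypothetical sketch of the timing problem: match a falling suitcase to a
# moving cart that will pass under the transfer point at (almost) the same
# moment. All numbers and names are invented for illustration only.

def pick_cart(bag_drop_time, carts, tolerance=0.5):
    """Return the empty cart whose arrival at the transfer point is closest
    to the bag's drop time, if it arrives within `tolerance` seconds;
    otherwise return None, meaning the bag has to be re-routed."""
    best, best_gap = None, None
    for cart in carts:
        if not cart["empty"]:
            continue  # a loaded cart can't take this suitcase
        gap = abs(cart["arrival_time"] - bag_drop_time)
        if gap <= tolerance and (best_gap is None or gap < best_gap):
            best, best_gap = cart, gap
    return best

carts = [
    {"id": 101, "empty": True,  "arrival_time": 12.1},
    {"id": 102, "empty": False, "arrival_time": 12.4},
    {"id": 103, "empty": True,  "arrival_time": 14.0},
]

match = pick_cart(bag_drop_time=12.3, carts=carts)
print("Send bag to cart:", match["id"] if match else "no cart in time, reroute")
```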

The entire city followed the project with interest. The new airport was supposed to open at the end of 1993, but a few weeks before the due date the mayor announced that the opening would be delayed due to some final testing of the baggage transportation system. No one was too surprised: after all, this was a complicated and innovative new system, and testing it was likely to take some time.

A breathtaking Failure

But no one was prepared for the embarrassment that took place in March 1994, when the proud mayor invited the media to a festive demonstration of the new system.

Instead of an efficient and punctual transportation system, the astonished journalists watched, in a mixture of horror and delight, a sort of technological version of Dante's Inferno. Carts that went too fast fell off the tracks and rolled onto the floor. Suitcases flew into the air because the carts that were supposed to be waiting for them never came. Pieces of clothing that fell out of suitcases were shredded by the motors or tangled in the wheels of passing carts. Suitcases that somehow made it onto an empty cart reached the wrong terminal because the barcode scanners failed to identify the stickers on them. In short, nothing worked properly.

The journalists who witnessed the demonstration didn't know whether to laugh or cry. If the project hadn't cost taxpayers close to 200 million dollars, it could have been like a classic Charlie Chaplin comedy. The next morning's headlines undoubtedly made the mayor cringe.

Behind the scenes, there were desperate attempts to salvage the system. Technicians rushed to fix the tracks, but solving a problem in one spot created two new problems elsewhere. Each day of delay cost the city of Denver another million dollars. By the end of 1994, with the airport project nowhere near completion, Denver faced the very real possibility of bankruptcy.

The mayor had no choice: after several external reviews and emergency discussions, it was decided to dismantle a big chunk of the new automated system and put a traditional, more manual system in its place. The final cost of the airport project, including the cost of the new system, was 375 million dollars, twice the original budget.

In February 1995, almost a year and a half after the original date, the new Denver airport opened to flights, travelers, and suitcases. Only one airline, United, agreed to use the new baggage transportation system, and even it eventually gave up: the system suffered so many problems and errors that its monthly maintenance cost was close to a million dollars, and in 2005 United Airlines announced it would stop using the automated system and go back to manual baggage handling.

An Investigation

So what went wrong in Denver? Why was the project’s failure so massive and absolute?

Several investigations highlighted the many factors that led to the failure of the Denver airport project: management decisions that were overly influenced by political considerations, changes made during construction just to please the airlines, and an electricity supply that was constantly being interrupted. If all that wasn't enough, the project's chief architect died unexpectedly. In short, everything that could have gone wrong did. That said, most of the investigators agreed that the biggest problem of this ambitious project was its software component.

As we mentioned before, the original plan called for 3,000 carts traveling independently around the airport on the underground tracks. For that to happen, about 300 computers had to communicate with each other and make rapid decisions in real time.

For example, one common scenario involved an empty cart traveling to a specific location to pick up a suitcase. Its route could be quite complicated: the empty cart had to use several tracks, change direction, and perhaps pass over or under other carts. A cart's route also depended not only on its own location but on the locations of all the other carts, which meant that if a "traffic jam" occurred for any reason, the software had to recalculate an alternative route. And let's not forget that we're talking about thousands of carts, each headed for a different destination and each supposed to arrive on time, with the decisions being made by hundreds of computers running simultaneously!
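To get a feel for just one slice of that problem, here is a small hypothetical sketch in Python: a toy track layout represented as a graph, a shortest-route search, and a recalculation when one track segment is "jammed". The layout and names are invented for illustration; the real system had to do something like this for thousands of carts, on hundreds of computers, all at once.

```python
from collections import deque

# A toy track layout: each junction lists the junctions reachable from it.
# The layout is invented for illustration; it is not Denver's actual track map.
TRACKS = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "Terminal"],
    "Terminal": ["D"],
}

def shortest_route(start, goal, blocked=frozenset()):
    """Breadth-first search for the shortest route, skipping blocked segments."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in TRACKS[node]:
            segment = frozenset((node, nxt))
            if nxt not in visited and segment not in blocked:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # no route available at all

# Normal route for an empty cart heading to the terminal...
print(shortest_route("A", "Terminal"))                                   # ['A', 'B', 'D', 'Terminal']

# ...and the recalculated route when the B-D segment is jammed.
print(shortest_route("A", "Terminal", blocked={frozenset(("B", "D"))}))  # ['A', 'C', 'D', 'Terminal']
```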

Twenty software programmers worked on the project for two whole years, but the system was so complicated that none of them could see the entire picture or truly understand, from a "bird's eye view," how the system actually behaved. The end result was complex, convoluted software that was full of errors. Denver's airport project is a perfect example of the Software Crisis: a large project that went over schedule, over budget, and didn't reach its goals.

Is There A Solution To The Software Crisis?

Dr. Winston Royce, a leading computer scientist, defined the situation best in 1991 when he said:

“The construction of new software that is both pleasing to the user/buyer and without latent errors is an unexpectedly hard problem. It is perhaps the most difficult problem in engineering today and has been recognized as such for more than 15 years. […]. It has become the longest continuing “crisis” in the engineering world, and it continues unabated.”

So what can we do? Is there a solution to the software crisis? That question will be the focus of our next episode. We will get to know the two methods developed over time to tackle the challenges of creating complicated software: Waterfall and Agile. We’ll also tell the incredible story of a computerized system developed for the FBI at a cost of half a billion dollars… and how it turned out to be a breathtaking failure.

And finally, we will get to know Fred Brooks, a computer scientist who wrote an influential book called "The Mythical Man-Month", in which he asks: is there a "silver bullet" that will allow us to solve the software crisis?

Read Part II

Credits

Maschinenpart By Jairo Boilla
Song 007 – By tarbolovanolo
Crypto, Industrial Cinematic, Batty Slower – By Kevin MacLeod
Cartoon Record Scratch Sound Effects By Nick Judy

Bittorrent - History of File Sharing - Curious Minds Podcast

The History of File Sharing, Part 2: Grokster & BitTorrent | Curious Minds Podcast

The fall of Napster (see Part I of this series) left a vacuum in the world of file sharing, and as the saying goes, the Internet abhors a vacuum… Various file sharing programs such as Gnutella, Kazaa and others quickly filled the void.
In this episode, we’ll describe Grokster’s legal battle against the Record Companies, the sinister poisoning of file sharing networks by OverPeer – and the rise of BitTorrent.

Guest in this episode: Brett Bendistis, from The Citizen’s Guide to the Supreme Court Podcast!


Download mp3 | Read Transcript
Subscribe: iTunes | Android App | RSS link | Facebook | Twitter



Credits

Penske File (demo mix) by Steely Chan
Darkness by xclntr
Sinister Pt. 2 by ¡SplastiC

File Sharing History - Curious Minds Podcast

The History of File Sharing, Part 1 (of 2): The Rise & Fall of Napster | Curious Minds Podcast

Napster, a revolutionary Peer-to-Peer file sharing software, was launched in 1999 – and forever changed the media world. In this episode, we’ll tell the story of Shawn Fanning and Sean Parker, its creators, and talk about the legal battle it fought with the record companies – and Metallica.

Guest in this episode: Brett Bendistis, from The Citizen’s Guide to the Supreme Court Podcast!

Also recommended: The History of Byzantium – A podcast telling the story of the Roman Empire from 476 AD to 1453.


Download mp3 | Read Transcript
Subscribe: iTunes | Android App | RSS link | Facebook | Twitter

Credits

Nightmares by myuu
Exploring the Inferno by myuu
Crime Scene (Film Score) by myuu

Todd Cochrane - Curious Minds Podcast

Heroes Of Podcasting #4: Todd Cochrane, CEO of RawVoice (Blubrry) | Curious Minds Podcast

 

This series explores the history and future of podcasting, and each episode will feature a single guest who is a pioneer of podcasting. This time, we’re interviewing Todd Cochrane, CEO of RawVoice (better known as Blubrry) and the host of Geek News Central Podcast.

Todd has an amazing story which began with a serious injury, but ultimately led to a surprising career as an early entrepreneur in the new medium of podcasting. He wrote the first book on podcasting and signed one of the first advertising deals. Today, Todd’s company is one of the biggest players in this new medium.


Download mp3
Subscribe: iTunes | Android App | RSS link | Facebook | Twitter

Leo Laporte & Ran Levi | Curious Minds Podcast

Heroes Of Podcasting #3: Leo Laporte, This Week In Tech | Curious Minds Podcast

Leo Laporte & Ran Levi | Curious Minds Podcast
Leo Laporte & Ran Levi

This series explores the history and future of podcasting, and each episode will feature a single guest who is a pioneer of podcasting. This time, we’re interviewing Leo Laporte, from This Week In Tech.

Leo Laporte is one of the very first podcasters. In 2005, Leo left – or almost left – traditional radio to start his own podcasting network, centered around cutting-edge technology news, called TWIT. TWIT quickly became one of the most successful podcast networks, with millions of downloads and award-winning shows such as This Week In Tech, Security Now and the New Screen Savers.


Download mp3
Subscribe: iTunes | Android App | RSS link | Facebook | Twitter

Jay Soderberg - VP at BlogTalkRadio - Curious Minds Podcast

Heroes Of Podcasting #2: Jay Soderberg (The PodVader), VP of BlogTalkRadio | Curious Minds Podcast

This series explores the history and future of podcasting, and each episode will feature a single guest who is a pioneer of podcasting. This time, we’re interviewing Jay Soderberg, AKA The Pod Vader, Head of Content at BlogTalkRadio and host of the Next Fan Up show.

Jay Soderberg started in podcasting back in 2006. Jay’s story is rather unique, since his first steps in podcasting were in the corporate world, whereas the vast majority of podcasters back then were independent creators. We talked about the advantages and disadvantages of podcasting in a corporate environment, Jay’s vision as Head of Content and, of course, the origins of his nickname – the Pod Vader.


Download mp3
Subscribe: iTunes | Android App | RSS link | Facebook | Twitter

Tim O'Reilly - The History of Open Source & Free Software - Curious Minds Podcast

The History of Open Source & Free Software, Pt. 2 | Curious Minds Podcast

Tim O'Reilly - The History of Open Source & Free Software - Curious Minds Podcast
With Special Guests: Richard Stallman & Tim O’Reilly!

In 1998, a group of people broke away from the Free Software Foundation and created the Open Source Initiative instead. What were their motives? Richard Stallman, the founder of the FSF, and Tim O’Reilly, who helped popularize the term ‘Open Source’, discuss the history of Open Source & Free Software.

 


Download mp3 | Download Ogg | Read Transcript
Subscribe: iTunes | Android App | RSS link | Facebook | Twitter

Credits

Dark By Alas Media
DraMATIC by pazurbeats
monorail by tonspender