Computer chips that mimic the brain may be the next big thing in computing
From vacuum tubes to neuromorphic chips, the modern computer has come a long way
Humans have been using computers for thousands of years.
From the abacus to the Antikythera mechanism, we've used mechanical devices to help us solve mathematical problems.
But it wasn't until the latter half of the 20th century that digital computers really took off.
The first general-purpose electronic digital computer, ENIAC, was the size of a large room, filled with patch cords and vacuum tubes. Finished in 1945, it could do mathematical calculations about a thousand times faster than a human, explained Paul Ceruzzi, Curator Emeritus at the Smithsonian National Air and Space Museum.
Ceruzzi is also the author of A History of Modern Computing.
The creators of the ENIAC went on, with the help of mathematician John von Neumann, to essentially create the computer architecture we know today, with processing, memory, storage and an interface.
And while the ENIAC's successor, the UNIVAC, was available commercially, its price tag of around half a million dollars in 1950 made success seem unlikely, until the Cold War, when the U.S. military realized computers could give it an edge over the Soviet Union.
Indeed, the Cold War and the Space Race helped with the miniaturization of components so they could be used on missiles and rockets. "They had to be small, and they had to be lightweight," Ceruzzi told Spark host Nora Young.
Eventually that need, as well as the baffling complexity of wiring as computers got faster, led to the invention of the integrated circuit—or microchip.
And once an engineer named Gordon Moore, who co-founded Intel, observed that the number of transistors on a circuit could be doubled roughly every year or so, the pace of progress accelerated dramatically.
Among them was a computer called the Altair, made in Albuquerque, New Mexico. This attracted the interest of two young engineers named Bill Gates and Paul Allen—with Gates dropping out of Harvard to move to New Mexico to make software for this new "personal" computer. And the rest is well-known history.
By the 1980s, big companies like IBM (and a smaller one called Apple) were making mass-market computers.
To the future
While current models of computing have given us a lot, there's ongoing research into ways of improving it: allowing computation of ever more complex problems, increasing storage or reining in energy consumption.
The human brain has been a focus of this research.
It has the ability to process massive amounts of information while running on minimal energy. That's something computer scientists have been trying to make sense of for years, because it offers insights into smarter and more efficient computing.
The field known as neuromorphic computing, pioneered in the 1980s by Carver Mead, involves electronic systems based on the organizing principles of the brain, and it has evolved considerably since its inception.
"Even just [in] the last 10 years, the amount we've learned about the brain has sort of exploded," engineer Kwabena Boahen told Young.
Boahen is a professor of bioengineering, electrical engineering and computer science at Stanford University. He's also one of the world's leading experts in low-power computer chips that operate like the human brain.
By studying the structure of the human brain, Boahen and others in the field have learned to simulate neuron-like functions in highly efficient supercomputers.
"Biology ultimately has this incredible solution because it's really doing much more, energy efficiently. And in a way that's sustainable."
Boahen says that while the human brain is slower at processing information than the technology in our current devices, neuromorphic computing could address the limitations of conventional computing, which are becoming more of a challenge as devices are built smaller and smaller.
He says that designers of current chips are "fascinated or enamored with speed," but that it shouldn't be a race.
"What's more important and what we're beginning to understand better now is that the real cost of computation is how much energy you use and how much heat you produce," he said.
Parallel processing is a computing model inspired by the brain's ability to process different types of information simultaneously — similar to an assembly line, said Boahen.
"What we find is that if we break it down into smaller jobs and work more slowly, we can actually reduce the voltage, which then reduces the amount of energy [used]. So you can trade speed for parallelism and you can save energy that way because doing things slowly is more energy efficient."
Boahen describes the future of processor chips as the Manhattan model: a move away from flat, sprawling, LA-style chips that use a lot of energy and give off a lot of heat, toward more efficient three-dimensional ones akin to skyscrapers.
He says that this technology is "fundamental" for future innovation.
Written and produced by Adam Killick and Samraweet Yohannes.