Could robots be dangerous?
Because of Moore's law, which states that computer power doubles roughly every eighteen months, it is conceivable that within a few decades robots will be created that have the intelligence, say, of a dog or a cat. But by 2020 Moore's law may well collapse and the age of silicon could come to an end. For the past fifty years or so the astounding growth in computer power has been fueled by the ability to create tiny silicon transistors, tens of millions of which can easily fit on your fingernail. Beams of ultraviolet radiation are used to etch microscopic transistors onto silicon wafers. But this process cannot last forever. Eventually these transistors will become so small that they approach the size of molecules, and the process will break down. Silicon Valley could become a Rust Belt after 2020.
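To see how fast this doubling compounds, the growth implied by Moore's law can be sketched in a few lines. The eighteen-month doubling period comes from the passage; the specific projection spans below are illustrative assumptions, not measured data.

```python
def moores_law_factor(years, doubling_period_years=1.5):
    """Growth factor in computing power after `years`, assuming
    power doubles every `doubling_period_years` (18 months)."""
    return 2 ** (years / doubling_period_years)

# Three years is exactly two doubling periods: a factor of 4.
print(moores_law_factor(3))    # prints 4.0

# A single decade of such doubling already yields roughly a
# hundredfold increase, which is why a few decades of continued
# growth could plausibly reach animal-level machine intelligence.
print(round(moores_law_factor(10)))
```

The same arithmetic also shows why the trend cannot continue indefinitely: a fixed-size transistor halving in scale every generation reaches molecular dimensions after only a few dozen doublings.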
The Pentium chip in your laptop computer contains a layer about twenty atoms across. By 2020 such a chip might have a layer only five atoms across. At that point the Heisenberg uncertainty principle kicks in, and you can no longer know where the electron is. Electricity will then leak out of the chip and the computer will short-circuit. At that point, the computer revolution and Moore's law will hit a dead end because of the laws of quantum theory. (Some people have claimed that the digital era is the "victory of bits over atoms." But eventually, when we hit the limit of Moore's law, atoms may have their revenge.)
Physicists are now working on the post-silicon technology that will dominate the computer world after 2020, but so far with mixed results. As we have seen, a variety of technologies are being studied that may eventually replace silicon technology, including quantum computers, DNA computers, optical computers, atomic computers, and so forth. But each of them faces huge hurdles before it can take on the mantle of silicon chips. Manipulating individual atoms and molecules is a technology that is still in its infancy, so making billions of transistors that are atomic in size is still beyond our ability.
But assume, for the moment, that physicists can bridge the gap between silicon chips and, say, quantum computers. And assume that some form of Moore's law continues into the post-silicon era. Then artificial intelligence might become a true possibility. At that point robots might master human logic and emotions and pass the Turing test every time. Steven Spielberg explored this question in his movie Artificial Intelligence: AI, in which a robot boy is created who can exhibit emotions and is hence suitable for adoption into a human family.
This raises the question: could such robots be dangerous? The answer is likely yes. They could become dangerous once they have the intelligence of a monkey, which is self-aware and can create an agenda of its own. It may take many decades to reach that point, so scientists will have plenty of time to observe robots before they pose a threat. For example, a special chip could be placed in their processors to prevent them from going on the rampage*. Or they could be fitted with a self-destruct or deactivation mechanism that would turn them off in an emergency.
Arthur C. Clarke wrote, "It is possible that we may become pets of the computers, leading pampered* existences like lapdogs, but I hope that we will always retain the ability to pull the plug if we feel like it."