Robots have become so omnipresent as to fade into the background as we go about our daily business. From software to hardware, they come in myriad flavors. They are our self-guided vacuums, our GPS, an automated line on the Paris metro linking two of the city’s principal train stations, internet search engines… Businesses and other institutions have become equally infatuated. Computerized robotic systems now distribute medicine in hospitals, Japanese students send machine surrogates to class, and visitors to Cluny Abbey may soon be guided through the ancient structure by docent robots. When did this evolution begin? What did it look like at its earliest stages? Where will it lead? From mechanized figurines to the first robot arm, here is a brief journey through a history 3,000 years in the making.
Pinpointing the emergence of three-dimensional copies of living organisms poses some difficulties. Nevertheless, at sites throughout Europe, we can trace the appearance of figurines modeled on animal or female forms to the Aurignacian period (32,000 – 28,000 BCE) when they were sculpted from ivory, stone, or bones.
The establishment of some simple mechanical principles sparked a major leap forward during the Neolithic Era, when masks and statues could be brought to life through the magic of articulated parts. Elaborate systems of pulleys and ropes could be set in motion by lighting a sacred flame: the flame heated the air, causing it to expand, and the increased volume forced out water that drove the mechanism. In ancient Egypt these statues were viewed as actual vessels through which the soul of the departed was reincarnated. From 800 BCE we can observe examples such as a representation of the god Amun capable of extending its arm, or an articulated totem mask housed in the Louvre Museum bearing an effigy of the jackal-headed Anubis.
Mirroring these developments, the Babylonians created what might be described as the first fully realized mechanical object in the form of the clepsydra or water clock. Water flowed out of one tank and into another raising a figure to point out the time. In Egypt, Ctesibius of Alexandria refined hydraulics to create the world’s first truly autonomous device in 246 BCE complete with a dial that followed the progress of the solar year.
Aristotle (384-322 BCE) improved the theoretical underpinnings of mechanics through his exploration of the concept of automated systems. “For if every instrument could accomplish its own work, obeying or anticipating the will of others, if […] in like manner, the shuttle would weave and the plectrum touch the lyre without a hand to guide them, chief workmen would not want servants, nor masters slaves.” Respectable men could abandon mundane tasks as beneath them, devoting themselves to the work of citizenship with all the knowledge and wisdom it bestows. (Politics, Book I)
The ideas promulgated by Aristotle over 2000 years ago remain relevant to the evolution of contemporary robotics and emerged at roughly the same time as the first identifiable automaton: The Pigeon. Documented by Aulus Gellius and Favorinus, it was created by an intimate of Plato named Archytas of Tarentum in 380 BCE. The self-propelled device was constructed from wood and could fly by way of an ingenious system of compressed air.
Nevertheless, the world would have to wait until the 1st century for Hero of Alexandria to create a more comprehensive framework for automata through his investigations into mechanics and mathematics. His innovations, flame-driven devices and bird-shaped bronzes that could drink, were overlooked by contemporaries but have propelled him to greatness in the eyes of history. His works on mathematics and physics inspired countless successors, particularly in the Arab world from the 9th century onwards.
The spread of automata across the Arabian Peninsula is in large part attributable to the work of the Banū Mūsā brothers, who published their Kitâb al-hiyal or Book of Ingenious Devices in 850. The engineer al-Jazari drew from his own experience, not least his construction of a series of colossal timepieces, to expand the tradition in the 12th century.
Clocks served the practical purpose of calling the faithful to prayer but could also be handy as gifts to foreign sovereigns, which is how an automatic clock made of brass was famously offered to Charlemagne by the Abbasid Caliph Harun al–Rashid in 807. Over the centuries that followed there was little to distinguish these devices from those originally proposed by Hero of Alexandria aside from a notable increase in pomp and detail. The decorative trend peaked in the Middle Ages, when playful figures began to appear at the top of towers to ring in the hour with the aid of metal hammers. Mechanized jacquemarts might be fashioned from copper, iron, or other metals, and two fine examples of the art are on display at the Cathedral of Strasbourg and Cluny Abbey.
Throughout the Renaissance, automata as objects of fascination were reserved largely for the enjoyment of the rich. At the royal Château of Saint-Germain-en-Laye a fantastical animated garden was constructed and its grottoes peopled by hydraulically powered curiosities. Thomas Francini and Leonardo da Vinci were masters of the art and opened the door to the Enlightenment era explorations of Jacques de Vaucanson.
Vaucanson’s creations pushed up against the limits of possibility and his Digesting Duck was the marvel of its age. The mechanical bird could drink, eat, quack, paddle, and digest like the real thing. Built around an engraved cylinder and a complex system of levers that extended as far as its legs, the device was programmable. Perched on an imposing pedestal made of perforated copper so observers could admire the complexity of its inner workings, the beast propelled Vaucanson to the head of a family tree for a new race of androids despite doubts over whether he ever succeeded in the creation of an anatomically correct human form.
By the 19th century the craze for automata had spread throughout the world and—from mechanical dolls in Osaka to conjuring tricks in Parisian pleasure palaces—devices began to demonstrate ever increasing levels of sophistication.
The leap necessary to transition from painted mechanical curiosities to actual robots was made possible in 1854 by the work of George Boole, a professor at Queen’s College, Cork (Ireland), whose Boolean algebra, based on zeros and ones, laid the foundations for the emergence of computers a century later.
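Boole’s insight—that logical propositions can be manipulated as an algebra over the values 0 and 1—is, a century and a half on, literally what every processor does. A minimal sketch (the function names are ours, for illustration only):

```python
# Boolean algebra over {0, 1}: conjunction, disjunction, negation.
def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def NOT(a):
    return 1 - a

# De Morgan's law, one of the algebra's identities:
# NOT(a AND b) is always equal to NOT(a) OR NOT(b).
for a in (0, 1):
    for b in (0, 1):
        assert NOT(AND(a, b)) == OR(NOT(a), NOT(b))
```

The same three operations, realized as transistor gates, are the building blocks of the computers discussed below.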
By the dawn of the 20th century the word robot entered the modern lexicon by way of Czech writer Karel Capek and his 1920 science-fiction play Rossumovi univerzální roboti (R.U.R.). Inspiration came from the Slavic root rabota, denoting a type of feudal forced labor or corvée, echoes of which can be heard in modern-day Polish and Slovak, where robotnik means laborer. It should be noted that Capek’s robots were not purely mechanical creations but artificial beings fashioned from organic matter.
The first known usage of the word “robotics” can be traced to Isaac Asimov’s short story Liar!, published in 1941, in which the celebrated author explores the limits of his self-created Three Laws of Robotics.
Indeed, the 1940s can be pinpointed as the period when advances in the field of electronics and the emergence of digital computers would shake up the trajectory of robotics. As early as 1936, Alan M. Turing was already at work answering the question, “Can machines think?” His article On Computable Numbers laid out a system for handling massive volumes of information at truly dazzling speeds. Among the many mathematicians and computer scientists who weighed in was Herbert Simon, a professor at Carnegie Mellon, who in 1956 developed a computer program capable of proving mathematical theorems. He answered Turing’s question and demonstrated the ability of a non-living machine to think logically, clearing the way for the rise of artificial intelligence. Working alongside him was Allen Newell, and the two elaborated various programs capable of modeling human problem solving that were in large part informed by advances in psychology.
In the meantime, scientists were coming together in the kind of intellectual ferment that gave birth to foundations such as the one that organized the Macy conferences. In a series of ten meetings at regular one-year intervals between 1942 and 1953, scholars from a range of disciplines including mathematics, logic, anthropology, psychology, and economics were brought together in New York to help define the contours of the discipline we now understand as cognitive science. In many ways, modern cybernetics can be traced to these forums and the influence MIT’s Norbert Wiener exerted on proceedings following the publication of Cybernetics: Or Control and Communication in the Animal and the Machine. The discipline emerged as a response to the need for a general scientific explanation for communication in natural and artificial systems and was used as an approach toward automatic control in machines.
Cybernetics seeks insight through a deeper understanding of the processes that lead to the execution of commands. Theory became reality when William Grey Walter began building tortoise-like mobile robotic vehicles in order to study the behavior of animals. His offspring, Elsie and Elmer, moved autonomously, guided by photoelectric sensors that allowed them to adjust course based on sources of light.
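Walter’s tortoises embodied what we would today call a simple feedback controller: compare two photocell readings and steer toward the brighter side. A toy sketch of that behavior (the sensor values, threshold, and steering rule are our illustrative assumptions, not Walter’s actual circuit):

```python
def steer(left_light, right_light, threshold=0.1):
    """Turn toward the brighter side; go straight if readings are close."""
    diff = right_light - left_light
    if abs(diff) < threshold:
        return "forward"
    return "right" if diff > 0 else "left"

# A machine facing two lamps: the right-hand one is brighter, so it turns right.
print(steer(0.3, 0.8))  # right
```

However crude, this closed loop—sense, compare, act—is the same pattern later refined in the sensor-laden mobile robots of the 1960s.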
The use of artificial intelligence was gradually democratized and advances in technology began to spread benefits across industrial settings. First was the energy sector, when Raymond Goertz created a teleoperated articulated arm for the US Atomic Energy Commission. Next was the automotive sector: in 1962, General Motors acquired the first industrial robot for its factory in New Jersey. The Unimate 001—a product of Unimation, short for Universal Automation, and essentially two boxes connected to an arm—was created by George Devol and Joseph F. Engelberger. Using coded commands, the two-ton behemoth sequenced and stacked hot pieces of metal in the company’s Trenton die-cast facility.
It was the moment when the United States became the undisputed leader in robotics worldwide, a position it would hold for the next thirty years thanks in no small part to the contributions of researchers at MIT and Stanford who produced their own versions of articulated arms. A diverse range of players (American Machine and Foundry, Tralfa, Vicarm, etc.) produced specialized robots for tasks such as welding and painting.
Far Eastern competitors were soon nipping at the heels of the pioneers, and by the 1980s Japanese policies for the promotion of robotics allowed firms such as Kawasaki to roll out mass-produced hydraulic arms.
At a time when US firms were bogged down in soul-searching discussions about the moral implications of robots, the Japanese were well on their way to assuming the mantle of leadership, creating a 46-member trade association in 1971 to promote cooperation and development. Headquartered in Tokyo, it unashamedly sought to cultivate the next world leaders within a decade.
The loss of nerve on the American side led to some troubling questions: would man be replaced by machines? Surpassed? Would machines one day turn against their creators? Social and critical commentators such as Gilbert Ryle and Hubert Dreyfus pushed back against such fantastical theories, while Herbert Simon went as far as to predict that by 1985 “machines will be capable... of doing any work a man can do.” The release of Stanley Kubrick’s 2001: A Space Odyssey, where theatergoers were subjected to the paranoid machinations of the computer HAL 9000 on the big screen, did little to soothe fears.
In industrial settings robots were refined and made more reliable, but gains in performance were tempered by the fact that they remained one-trick ponies with little capacity to adapt to their surroundings. These limitations sparked interest in a new research approach from the 1960s on. Inventors began to program and build mobile machines that exploited advances in sensor technology to react to different sets of predefined situations.
Initially, these robots resembled nothing more than a cleverly assembled jumble of mix and match parts. As they moved in different directions they were capable of detecting their surroundings through instrumentation consisting of cameras, range finders, and sensors.
Parading through laboratory corridors in places like Stanford and Johns Hopkins (Baltimore), the first experimental mobile platforms were baptized with names such as Cart, Beast, or Shakey. Clunky in the extreme, these specimens might take an hour or so to complete even the simplest of maneuvers.
With the arrival of microprocessors and microcomputing in the 1970s robots took another step forward in the march toward artificial intelligence. Form factors were made more manageable and software refined through the development of more robust programming languages such as Robot Basic in 1984.
No longer limited to predefined situations, the new breed of thinking machines became more reactive thanks to cameras, sensors, software, and actuators. They were equipped to adapt to the surrounding environment and in some cases change their programs on the fly.
For the first truly walking and talking robot we can look to Japan and the development of Wabot-1 at Waseda University in 1973. It could move, manipulate objects—even exchange pleasantries in Japanese—and might have been forgiven its only passing resemblance to actual humans.
This brings us to the current state of the art, and as robots become more commonplace it is hard to find any sector of activity where their services have not become essential. Performance (operation, behavior, precision, autonomy…) increases with each step forward in the innovation lifecycle. In one recent example, the discovery of graphene promises batteries that outperform any existing technology in terms of autonomy, charging time, and ecological footprint (the material is biodegradable).
Reading between the lines of technological advances and public acceptance of change it may just be possible to pick out the outlines of our robotic future.
The proliferation of mobile devices has concentrated the technology we carry in our pockets to such a degree that the keys to understanding the robots of the future can pass almost unnoticed for being right under our nose: miniaturization and connectivity.
The acceleration of the trend toward miniaturization of electronic components has been propelled by the arrival of nanotechnologies. Processors are now measured in nanometers, a mere 22 in the case of Intel’s Core i3/i5/i7 series, and have shrunk to a point where researchers have begun to propose quantum computers. These machines operate on the principle that their fundamental unit of computation, the “qubit”, is no longer limited to the value of 1 or 0 but can exist in a superposition of both. As qubits are added, the number of possible states rises exponentially, unleashing phenomenal powers of calculation.
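The exponential claim is easy to make concrete: fully describing the state of n qubits on a classical machine requires 2^n amplitudes, so the bookkeeping doubles with every qubit added. A quick back-of-envelope calculation:

```python
# A classical description of an n-qubit state needs 2**n amplitudes.
for n in (1, 10, 50):
    print(f"{n} qubits -> {2**n:,} amplitudes")
# At 50 qubits the count already exceeds a quadrillion, beyond
# what any classical memory can track exactly.
```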
Another non-negligible development, based on prevailing processor technology, is the arrival of open-source solutions such as the credit-card-sized computer Raspberry Pi. With sales of around one thousand devices a second, $35 now buys hobbyists the power to construct their own robots and places various vocations within the grasp of future generations.
As for connectivity, we’ve only just begun to explore the possibilities of a permanently networked world. Wi-Fi, 4G, WiMax, Bluetooth, and RFID are among the protocols from which innovators will weave the intelligent objects of the future. Some robots have been adapted to exploit the resources made available through the widespread diffusion of smartphones: one example is a bionic arm controlled by a downloadable application; another is a robot that mimics the movements of users holding an Android-powered handset.
The rise of the Internet has profoundly altered our relationship to information and is shaking up old habits. We are permanently connected and form part of an information ecosystem that we simultaneously consume and construct. Aided by ever more powerful machines, we in turn process larger chunks of data and are creating a sort of omniscient intelligence—many would hold up Google as an example—that could one day be introduced into the human body by means of an electronic chip. Such could be the face of humanity’s future…
The emergence of the loosely defined Transhumanist movement encourages a belief in a technological future that is expanding exponentially. The suggestion is that society’s only hope of keeping pace is to make peace with the notion of enhancements of the type that could one day lead to the creation of a hybrid form of humanity. Of course, certain assumptions have been made about possible paths toward the future of technological innovation suggesting the need for a far-reaching ethical discussion and a healthy dose of prudence as we advance toward an unknown world.