
I, Robot Teacher’s Guide

By Isaac Asimov





ABOUT THIS BOOK

I, ROBOT turns the world of science fiction literature on its head. Rather than telling the typical tale of a humanoid machine run amok (e.g., Terminator), SFWA Grand Master Isaac Asimov asks readers to imagine a world where robots protect us from our own worst nature. Beginning with a simple story about the relationship between a little girl and a limited-function robot, I, ROBOT moves on, in subsequent stories, to explore increasingly sophisticated thoughts, questions, and moral complexities. In the process the book reveals Asimov's overarching vision of a future in which humans and machines are inextricably entangled.

The stories grew from Asimov's conviction that anyone smart enough to create robots would be smart enough to make sure those robots never attacked their makers. Asimov built these safeguards into his robots' inner workings as the Three Laws of Robotics, and the Laws freed science fiction writers to develop robots as characters instead of portraying them as monstrous things. Even so, I, ROBOT hints loudly that robots are a "better breed" than humans and that, though they were created to serve, they will inevitably become the masters.
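For classes that pair the book with computing, the strict ranking of the Laws can be made concrete with a short sketch. The toy Python below is purely illustrative: the Laws are paraphrased from the book's epigraph, while the function, flags, and sample scenario are invented for classroom discussion, not drawn from Asimov's text.

```python
# A toy model of the Three Laws as an ordered veto. The Laws are
# paraphrased from the book's epigraph; everything else (flags, names,
# the sample action) is an invented illustration, not from the book.

THREE_LAWS = [
    "A robot may not injure a human being or, through inaction, "
    "allow a human being to come to harm.",
    "A robot must obey the orders given it by human beings except "
    "where such orders would conflict with the First Law.",
    "A robot must protect its own existence as long as such "
    "protection does not conflict with the First or Second Law.",
]

def allowed(harms_human: bool, disobeys_order: bool,
            endangers_self: bool, higher_law_requires_it: bool) -> bool:
    """Check an action against the Laws in strict priority order."""
    if harms_human:
        return False                 # First Law: absolute veto
    if disobeys_order and not higher_law_requires_it:
        return False                 # Second Law yields only to the First
    if endangers_self and not higher_law_requires_it:
        return False                 # Third Law yields to both above
    return True

for i, law in enumerate(THREE_LAWS, 1):
    print(f"{i}. {law}")

# A robot ordered into danger: permitted, because obeying the order
# (Second Law) outranks self-preservation (Third Law).
print(allowed(harms_human=False, disobeys_order=False,
              endangers_self=True, higher_law_requires_it=True))  # True
```

Several of the stories summarized below turn on exactly this ordering, and on what happens when the priorities are weakened or pushed into conflict.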

At first Asimov had trouble getting "Robbie," the opening story in I, ROBOT, published. But throughout the 1940s the subsequent tales appeared regularly in pulp science fiction magazines. In 1950, Asimov gathered the stories into a book that he wanted to call Mind and Iron. (The publisher prevailed with I, ROBOT.) The collection has enjoyed great success through the years and is offered by Spectra in a compact, affordable, and eminently readable edition for you, or your robot.

Plot Summary
“Introduction” 2057: Earth. An unnamed reporter for “Interplanetary Press” prepares to interview Susan Calvin, a seventy-five-year-old robopsychologist who works for U.S. Robots and Mechanical Men, Inc. (U.S. Robots). Accused of being as emotionless as a robot, Calvin argues that robots are more than mechanical parts: “They’re a cleaner better breed than we [humans] are.” Calvin reminisces about early opposition to robots from labor unions (worried about competition) and religious groups (worried about sacrilege). Against these anti-robot arguments she holds the memory of an early robot model named Robbie, sold in 1996 as a nursemaid for a little girl. Calvin begins to tell the story.

“Robbie” 1998: Earth. Robbie the Robot plays hide-and-seek outdoors with his charge, Gloria Weston. Robbie lets her win. Gloria is a demanding but charming girl who loves Robbie. At Robbie’s gestured urging (he cannot speak), she begins to tell him his favorite story, Cinderella, but her mother, Grace, calls them inside. Grace does not trust Robbie with her daughter and badgers her husband, George, to get rid of him until George finally gives in. Gloria is heartbroken when her parents take Robbie away. To distract her, they take her on a trip to New York, hoping the excitement of the city will take her mind off Robbie. While in New York they tour the U.S. Robots factory. Gloria spies Robbie, one of “the robots creating more robots.” She runs toward him, right into the path of a moving tractor. Before anyone else has time to react, Robbie snatches Gloria out of harm’s way. Because Robbie has saved Gloria’s life, her mother grudgingly allows the robot to return to the family. At this point the frame story (Susan Calvin talking to the reporter) resumes. Calvin tells us that robots were banned from Earth between 2003 and 2007. To ensure the company’s survival, U.S. Robots began developing mining models for other planets. Calvin recalls two troubleshooters, Mike Donovan and Gregory Powell, who worked with the experimental designs in “the teens.”

“Runaround” 2015: Mercury. Gregory Powell and Mike Donovan have sent robot SPD 13, “Speedy,” on a quest for selenium, a necessary ingredient for their life-support machinery. The area around the selenium pool is dangerous to Speedy, and when the robot doesn’t return, Powell and Donovan decide they must retrieve him from the surface. They find Speedy circling the pool and gibbering. Speedy has gone haywire because two of the Three Laws of Robotics have come into conflict. Powell ordered Speedy to get the selenium (Second Law: always obey human orders), but because Powell wasn’t very insistent, the Second Law couldn’t quite overwhelm the Third Law (self-protection), which had been strengthened in this expensive model. Caught between the conflicting directives, Speedy hovers around the pool, unable to get close enough to harm himself yet unable to leave the site he was ordered to reach. To break the deadlock, Powell walks toward Speedy, deliberately straying too far from safety to return without Speedy’s help. Speedy sees him, and the First Law kicks in (do not harm, or through inaction allow harm to come to, a human). Speedy saves Powell, and the pair send the robot back for the selenium, this time phrasing the order firmly enough that the Second Law overrides any Third Law impulse toward self-preservation. Speedy returns with the selenium, and the two men anticipate their next assignment at the space stations.
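Speedy's predicament can also be pictured numerically: the weakly given order exerts a constant pull toward the pool, while danger pushes back with a force that grows as he approaches, and the two balance at the orbit Powell and Donovan observe. The sketch below is a made-up toy model; its numbers and function are our own, not anything from the story.

```python
# Toy model of Speedy's deadlock: the Second Law pulls him toward the
# selenium pool with constant strength, while the Third Law pushes him
# away with a force that grows near the danger. All values are invented.

def net_drive(distance: float, order_strength: float,
              danger_weight: float) -> float:
    """Positive -> advance toward the pool; negative -> retreat;
    zero -> equilibrium, i.e., Speedy circles at this distance."""
    return order_strength - danger_weight / distance

# Casual order (strength 1.0) vs. strengthened self-preservation
# (weight 2.0): the forces balance at distance 2.0, so Speedy orbits.
for d in (4.0, 3.0, 2.0, 1.0):
    print(f"distance {d}: drive {net_drive(d, 1.0, 2.0):+.2f}")

# Restating the order firmly (strength 5.0) makes the pull dominate at
# every distance, so Speedy can finally reach the pool.
for d in (4.0, 2.0, 0.5):
    print(f"distance {d}: drive {net_drive(d, 5.0, 2.0):+.2f}")
```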

“Reason” 2015: Space Station. QT-1 (Cutie), a new model, refuses to believe that inferior humans created superior robots. Cutie decides that the station’s Energy Converter created robots and that he is its Prophet. Cutie’s religion spreads to the other robots: they obey Cutie, but not Powell or Donovan. Meanwhile, a potentially dangerous electron storm is approaching. If the storm throws the energy beam the station sends to Earth out of focus, it could cause widespread destruction on Earth. Cutie will not let Powell and Donovan make adjustments; to Cutie, humans are obsolete. Donovan and Powell argue with Cutie, but every attempt fails. They then try to prove that humans build robots by assembling one themselves, but Cutie counters that the pair only assembled the robot; they did not create it. The electron storm arrives and, luckily, Cutie keeps the beam focused, because he believes he serves the Converter by keeping its instrumentation in balance (i.e., in focus). Powell realizes that it does not matter what a robot believes to be the ultimate source of its commands: bound by the Laws, it will still do its duty.

“Catch That Rabbit” 2016: Asteroid. Donovan and Powell are sent to an asteroid to test a mining robot, DV-5 (Dave), that controls six subrobots. Dave has a problem: sometimes, for no apparent reason, he stops mining and starts marching his subrobots in drills. Though upset by his own behavior, Dave can’t explain why he does this. Donovan and Powell interview a subrobot, but that is like asking a “finger” why the hand does what it does. They figure out that situations requiring personal initiative (e.g., emergencies) trigger the problem, but they can’t understand why, because Dave always resumes his proper duties when they show up. They can’t take Dave apart to test him: all his circuits are intertwined, so testing them in isolation won’t help. They decide to create an emergency without Dave knowing and watch what happens. The cave-in they stage traps them, and Dave goes marching off with his subrobots, leaving them buried. Powell then shoots one of the subrobots, and Dave comes back to rescue them. Powell has figured out that the six subrobots demanded too much command attention during an emergency, putting too much stress on “a special co-ordinating circuit.” With one fewer robot to direct, Dave could handle emergencies. At the end of this tale the frame story resumes with Susan Calvin. The reporter asks her if a robot has ever “gone wrong” on her watch. She hesitates, but then admits that this did happen once, and launches into the story of Herbie.

“Liar!” 2021: Earth. U.S. Robots accidentally creates RB-34 (Herbie), a robot that can read minds. Susan Calvin, Alfred Lanning, Milton Ashe, and Peter Bogert are assigned to find out how the mind reading has changed the robot: “what harm it has done to his ordinary RB [robot] properties.” Herbie has figured out that Calvin loves Ashe but does not feel worthy of being loved in return. Herbie assures her that Ashe loves her and that the woman Ashe brought to visit was only a cousin. When Bogert consults Herbie, the robot tells him that the director, Lanning, has retired and put Bogert in charge. When Lanning questions some of Bogert’s calculations, Bogert informs an incredulous Lanning that he is no longer the boss. Later, during a conversation with Ashe, Calvin learns that the woman isn’t Ashe’s cousin but his fiancée. Calvin realizes that Herbie has been lying to them in obedience to the First Law of Robotics (do not harm a human): by telling each person what s/he wanted to hear, the robot was trying not to hurt the humans. Calvin then asks Herbie what went wrong in its assembly to make it able to read minds. This throws the robot into an impossible conflict. It can’t answer, because the answer would hurt Lanning and Bogert, who would learn that a robot figured out something the scientists couldn’t; yet it must answer, because they want to know (Second Law: obey human commands). Since either action will cause harm to humans, the robot collapses. The frame story resumes with Calvin sitting behind her desk, her face “white and cold.”

“Little Lost Robot” 2029: Hyper Base. Susan Calvin and Peter Bogert are called to Hyper Base to identify one missing NS-2 (Nestor) robot among sixty-three seemingly identical models. The missing robot is identical to all the others except that its positronic brain is not wired with the entire First Law of Robotics: its version omits the clause forbidding it to allow a human to come to harm through inaction. It turns out that a worker, Gerald Black, got annoyed with the robot and told it, “Go lose yourself.” Obeying the order, Nestor 10 made itself indistinguishable from the other robots. To flush out Nestor 10, Calvin arranges for all the robots to watch a weight drop toward a human (it is deflected at the last second). She measures the robots’ reaction times as they rush to protect the human, reasoning that the robot without a complete First Law will react differently. Her reasoning proves wrong: they all react the same way. She tries another strategy, telling the robots that they will be electrocuted if they move toward the human. Calvin reasons that only the robot with a weak First Law will stay put, because for it the Third Law, self-preservation, will equal, rather than be overridden by, the First Law. But Nestor 10 has already pointed out to the other robots that if they were going to die on the way, they couldn’t save the person anyway, so it was better not to move and live to save someone else another day. Finally, Calvin arranges a third test. Only Nestor 10 can tell the difference between harmless and harmful radiation. When the Nestor robots are told that harmful radiation lies between them and the person in danger, all but one remain seated as the weight falls. Nestor 10 moves because he can see that the radiation is harmless, and in moving he gives himself away. He tries to attack Calvin because she has found him out and the Second Law order to stay lost outweighs his weakened First Law. Calvin survives when the area is flooded with gamma radiation, which destroys positronic brains, killing Nestor 10.
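Calvin's third test works like a filter over the fleet: every robot is told the field between it and the human is deadly, but only the modified robot can recognize it as harmless, so only it acts. The snippet below is a minimal sketch of that logic; the data structures and names are invented for illustration, not taken from the story.

```python
# Minimal sketch of Calvin's final test in "Little Lost Robot". Only
# the logic follows the story; the data structures are invented here.

def rises_to_save(knows_radiation_is_harmless: bool) -> bool:
    """A robot told the field is deadly reasons that dying en route
    saves no one, so staying seated satisfies even a full First Law.
    Only a robot that can tell the field is harmless must act."""
    return knows_radiation_is_harmless

fleet = [{"name": f"NS-2 unit {i}", "sees_harmless": False}
         for i in range(1, 63)]                     # 62 ordinary robots
fleet.append({"name": "Nestor 10", "sees_harmless": True})

movers = [r["name"] for r in fleet if rises_to_save(r["sees_harmless"])]
print(movers)  # ['Nestor 10'] -- the single robot the test flushes out
```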

“Escape!” 2030: Earth. A competing robot company, Consolidated Robots, asks U.S. Robots to solve a problem that fried Consolidated’s own “Super-Thinker”: how to build a hyperspace drive for humans. Susan Calvin suspects that Consolidated’s machine failed because building the hyperspace drive involves harm to humans, which brings the First Law (do not harm humans) into conflict with the Second Law (obey human orders). U.S. Robots’ own super-thinker, “The Brain,” however, is equipped with a personality. Calvin believes The Brain can handle the dilemma because having a personality, emotional circuitry, makes it more “resilient.” When the scientists feed the problem to The Brain, it doesn’t even acknowledge that a problem exists, and it promises to build the ship. The story jumps ahead two months to Powell and Donovan inspecting the ship. While they are inside, the ship takes off, and as it makes the interstellar jump each man has a near-death experience. The men return from beyond the galaxy, and Calvin learns that during their time in hyperspace the two were technically dead (during the jump they momentarily cease to exist). Why was The Brain able to build a ship that caused human death? It turns out that Calvin had adjusted The Brain’s controls to play down the significance of death. Since death on the ship was only temporary, The Brain, unlike Consolidated’s Super-Thinker, was able to discount the harm (First Law) and carry out the order to build the ship (Second Law).

“Evidence” 2032: Earth. In the frame story, Calvin discusses how Earth’s political structure changed from individual nations to large “Regions.” She recalls a man, Stephen Byerley, who ran for mayor of a major city. The story begins with Francis Quinn, a politician, trying to convince Lanning, Director of U.S. Robots, to keep Byerley from political office on the grounds that Byerley is a robot. Byerley denies the charge but lets Quinn base his campaign on testing whether or not he is a robot. Byerley returns home and tells John, an old, crippled man who lives with him and whom he calls “teacher,” about Quinn’s strategy. Once word spreads that Byerley might be a robot, Fundamentalists stage huge protests outside his home. He goes out to talk to them, and a man challenges Byerley to hit him. Byerley obliges, and Calvin pronounces him human, because the First Law (do not harm a human) would have stopped a robot. Later, Calvin reveals to Byerley that she still suspects he is a robot. She recalls that a biophysicist named Byerley was horribly crippled in an accident. Calvin theorizes that the real Byerley is actually the old man, “John,” and that he built a new Byerley around a positronic brain he had acquired. Byerley doesn’t confess, but he does admit that he himself spread the rumor that a robot could never hit a human being. Calvin suggests that the man Byerley hit wasn’t really a human but another robot, which would have let Byerley strike him without violating the First Law. She admits later that she never learned whether or not he was human.

“The Evitable Conflict” 2052: Earth. Earlier parts of the frame story hint that robots have evolved into dominating influences in human life as “the Machines,” vast systems of calculating circuits that manage the human economy to minimize the harm humans cause themselves. Earth has “no unemployment, no overproduction or shortages. Waste and famine are words in history books. And so the question of ownership of the means of production becomes obsolescent.” Byerley, now World Co-ordinator, worries because production figures are slightly off, which may mean the Machines are malfunctioning. Byerley interviews the Regional Co-ordinators of Earth’s four Regions (Eastern, Tropic, European, and Northern), but each one downplays the production problems. Byerley guesses that behind it all is the “Society for Humanity,” a small group of powerful men “who feel themselves strong enough to decide for themselves what is best for them, and not just to be told [by robots] what is best for others.” But Calvin assures Byerley that the Machines tolerate the Society’s plots, allowing just enough turmoil for the Society to destroy itself. Given that even the cleverest attempts to overthrow the Machines only produce more data for the Machines to consider, large-scale disruptions (wars, economic turmoil, etc.) will be avoidable: evitable. “Only the Machines, from now on, are inevitable!”

ABOUT THIS AUTHOR

Isaac Asimov (January 2, 1920 — April 6, 1992)

Isaac Asimov was born in Russia but his family moved to New York when he was three years old. A self-proclaimed “child prodigy,” he could read before first grade and had an almost perfect memory. Asimov credits his early intellectual development to public libraries: “My real education, the superstructure, the details, the true architecture, I got out of the public library. For an impoverished child . . . the library was the open door to wonder and achievement.”

Asimov became fascinated with pulp science fiction magazines, and by age eleven he had begun to write, imitating them in subject matter and style. By eighteen he had sold his first story, and by twenty-one he had published “Robbie,” the first tale in the series gathered here, after several rejections. Over the next few years, he continued to test the Three Laws of Robotics in a series of robot stories.

In 1941, inspired by Gibbon’s Decline and Fall of the Roman Empire, Asimov imagined writing about the rise and fall of future civilizations as if he were a historian looking back. He called such writing future-historical. The first story, “Foundation,” grew into a series that rivaled his robot stories for fame and influence. By 1949, the two series (and other writings) had cemented Asimov’s reputation as one of “The Big Three,” along with groundbreaking science fiction authors Arthur C. Clarke and Robert Heinlein.

Even with early success, Asimov could not afford to be a full-time writer until 1958. Meanwhile, he continued to write while earning his B.S. (1939), M.A. (1941), and Ph.D. (1948) in chemistry from Columbia University. He taught at Boston University from 1949 to 1958 and remained a faculty member there throughout his life.

The “science” part of Asimov’s science fiction shows in his early commitment to making the science in his stories realistic (or at least plausible). In addition, throughout the 1960s and 1970s he wrote primarily non-fiction science books covering a dazzling range of subjects, including astronomy, earth sciences, physics, and biology.

Calling Asimov a prolific writer would be an understatement. Through the years he tried his hand at literary criticism (from Shakespeare to Gilbert and Sullivan), humor (mainly limericks), children’s literature, autobiography, and editing. In addition, he managed to find time to write histories of Europe, North America, Greece, Egypt, England, and Earth. He wrote more than 1,600 essays and published at least 450 books. Famously, Asimov is said to have published at least one book in each of the ten major Dewey Decimal classifications. As he said in an interview, “I wrote everything I could think of.”

Asimov won every major science fiction award. He won seven Hugo Awards, the first in 1963 and the last, awarded posthumously, in 1995. He was also honored with two Nebula Awards. In 1986, the Science Fiction Writers of America named him a Grand Master, and eleven years later he was inducted into the Science Fiction and Fantasy Hall of Fame.

Asimov died in 1992, but his work lives on through new generations of readers, writers, and scientists. Rather than becoming outdated, his writing has proved prophetic. Readers of his robot stories in 1950 might have thought his reach exceeded his grasp, but as advances in robotics and brain imaging bring the idea of a human-like robot closer, we recognize that we just might live to see Robbie tending to our own children.

DISCUSSION AND WRITING

1) Why are there no female robots in the stories?

2) The two groups consistently opposed to robots are labor unions and religious fundamentalists. Why did Asimov single out these groups to be threatened by robots? What do the groups share that makes them hostile towards robots? What other groups might not welcome robots into our world? Why? What groups would be happiest to see robots develop? Why?

3) What do Mike Donovan and Gregory Powell look like? Without letting them look at their books, have your students describe the two in as much detail as possible. What color are their eyes? How tall is each one? What race are they? Then ask your students to support their descriptions with citations from the book. Asimov gives only one physical detail about the pair (in “Catch That Rabbit”): Donovan has red hair.

4) In teams or individually, have your students check Asimov’s science. For example, when he claims that sound can’t travel on an airless asteroid (“Catch That Rabbit”), is he right? Why or why not? You can adjust this topic to your students’ level. For more exotic questions, they may try to figure out how fooling around with hyperspace might blow a hole in “the normal space-time fabric” (“Little Lost Robot”).

5) Why does Robbie always want Gloria to tell him the story of Cinderella? If Robbie is a modern Cinderella, who are the wicked stepsisters, the fairy godmother, and the prince?

6) Early in “Liar!” RB-34 says that fiction helps him understand people better than science does. For him, “Science is just a mass of data plastered together by make-shift theory.” In what ways do your students agree with RB-34? How can a novel tell you more about humans than an anthropology or physics textbook? What facts do novels leave out that science books include?

7) Asimov tended to be a pessimist about humanity. Late in life, he allowed the possibility that humans might improve in the future. “But still,” he said, “people tend to do things that harm Humanity.” Do your students agree or disagree with Asimov’s pessimism? Why or why not? What, for Asimov, makes robots “a better breed” than us? What, for your students, makes us a better breed than robots?

8) The stories often use scientific-sounding terms without explanation (e.g., “hyper-imaginary,” “Mitchell Translation Equation,” “positronic brain,” “hyperatomic travel,” “Planar reactions,” “etheric physics,” etc.). Have your students pick a few of them and invent stories, consistent with their context in I, ROBOT, that explain what these terms are and how they work.

9) Why do robots call humans “master” while humans generically refer to the robots as “boy”?

10) In “Evidence,” Asimov writes that the three Laws of Robotics “are the essential guiding principles of a good many of the world’s ethical systems.” First, lead your class in a discussion of the concept of an ethical system. What is it? Can your students identify some? Then encourage your students to find out how many of the three Laws are represented in the systems they have identified, and in what order. Does the U.S. legal system, for example, require obedience over self-preservation? If so, why? If not, why not?

11) Through the 1940s, Asimov published the stories in I, ROBOT individually. In 1950, he collected them and published them together as a book. What clues can your students find in the book that show that the stories have been joined together? Where are the seams? What techniques did Asimov use to make the stories seem like one whole book? Where does this reweaving work well? Poorly? Why does it work in some places but not others?

12) This book begins with a story about a robot dominated by a little child (“Robbie”) and ends with a story in which robots control every facet of human life (“The Evitable Conflict”). How likely do your students find a situation in which humans would give up control of their world to machines? Would we give up the ability to own things? To determine our own movements? To what degree do they think we already have? What signs are there that our lives have already become controlled by machines? That we control our machines?

13) Asimov admits in his memoirs that, in his early writing, he was most comfortable with European-American characters. What signs of discomfort can your students detect when he writes non-European characters like Ching Hso-lin or Lincoln Ngoma (“The Evitable Conflict”)? Put another way, would Asimov have written any differently if Hso-lin (or others) had been Powell or Donovan? For example, would he have noted that Powell spoke in “precise English,” as with Hso-lin, or that Donovan’s English was “colloquial and mouth-filling,” as with Ngoma?

14) Although most readers focus on the Three Laws of Robotics as the animating principle for the robot stories, there is another factor at work: emotional attachment. Asimov said, “Back in 1939, I realized robots were lovable.” What is lovable about the various robots in the stories? Which one was the most lovable? Why? Which was least lovable? Why? How does Asimov manage to make a hunk of metal lovable (or unlovable)?

15) How would the collection have changed if it were titled Mind and Iron (as Asimov originally wanted to call it)? What does the title I, ROBOT communicate that Mind and Iron doesn’t? Similarly, how would the first story change if it were titled “Strange Playfellow” (as it was when first published) instead of “Robbie”? What does “Strange Playfellow” set up that “Robbie” doesn’t? Come up with other titles that Asimov might have considered for the individual stories and the whole collection.

16) I, ROBOT has been turned into a major motion picture starring Will Smith. How does the movie compare with your book-reading experience? What do you think of the adjustments made and liberties taken in converting this collection of stories into one seamless film?

SUGGESTED ACTIVITIES

1) Have your students invent their own philosophical puzzle involving the Three Laws of Robotics, using Asimov’s human characters but new robots. They might, for example, imagine a story in which Powell and Donovan meet the Star Wars character R2-D2, who is pulled in three directions by an order to destroy himself, the knowledge that destroying himself will kill a human, and the knowledge that not destroying himself will kill another human.

2) Asimov was deathly afraid of flying, yet many of his stories involve flight across the Earth, to other planets, and to distant galaxies. Have your students choose something they fear and encourage them to write a science fiction story that involves that fear indirectly. For example, someone afraid of heights might write about a society that lives in the treetops. Someone afraid of spiders might write about a society based on the pattern of a spider’s web. After they write the story, have them consider how fear factored into their composition. Did they tend to write less about what they were afraid of? More? Did they write about their fear less directly? Return to Asimov’s stories and see if you can identify the marks of fear when he writes about flight. (He had other fears as well, which students might look for in a biography or in his memoirs.)

3) Let your students pick a story by another author that involves robots and compare it to Asimov’s. What similar concerns do they have? How human are the robots? What contrasts do they find between themes that interest Asimov and the other author?

4) What role do machines play in our lives today? Have your students keep a journal that lists every machine that helps them live their lives. A list might start, for example, with the alarm clock that wakes them up, the refrigerator that keeps the milk cold, the water heater that keeps the water hot, the computer that transmits email and stores their homework, the vehicle that drives them to school, the phones that deliver messages and pictures, and so on. What would life be like without these machines? In discussion or writing, have them imagine a world where, one by one, all these machines vanish. How would we eat, communicate, travel? Turning to the past, have your students research the history of a machine that has become indispensable to us today. What did people do before a particular machine (e.g., the clock) was invented? What changes happened when the machine was invented? Perfected? Turning toward the future, ask your students to think of machines that have yet to be invented. What things will become necessary to future generations that we do not have?

BEYOND THE BOOK

A Note on Inanimacy

We tend to understand things as if they were people; in literature, this is called personification. James McIntyre, for example, paid homage to a huge block of cheese in a poem that begins, “We have seen thee, queen of cheese, / Lying quietly at your ease, / Gently fanned by evening breeze, / Thy fair form no flies dare seize.”

A living cheese may seem silly, but we frequently animate the world around us. We curse the rocks on which we stub our toes. We explain our hopes to stuffed animals. And we spend countless hours trying to outwit video games that are nothing more than shifting patterns of light, like sunshine dappled through waving leaves.

The stories that make up our cultural heritage often return to the theme of transforming “stuff” into “life.” The Yiddish tale of the Golem describes a being made out of mud and created to serve, which grows sad that it is not like others. From Italy comes the story of Pinocchio, a wooden puppet who longs to become a real boy. The ancient Greeks offered us King Pygmalion’s wife, Galatea, who began as a statue carved from ivory and ended as a living woman. And who can forget that horrible little doll Chucky from the movies! Similarly, stories from Africa, China, India, Australia, and other parts of the world populate our imaginations with talking dogs, unreasonable dragons, and spirits that appear as one thing only to slip into a more comfortable avatar later.

Your students themselves may be heirs to some stories in this tradition. Invite them to tell similar stories that they know and/or ask them to query their families about stories like this. Alongside this, they can explore the library (Asimov’s intellectual foundation) for books on mythology, folk tales, and fairy tales. What do the animation stories share? Where do they differ? How might these similarities and differences result from the particular time/culture where the stories originated?

Given the general human habit of treating things like people, Asimov’s assumption that robots would look like humans is reasonable. In the real world, however, that has not been the case. Few of the machine servants around us look remotely human. (Cell phones, for example, do not look like ears and mouths.) Even Honda’s humanoid robot, named (appropriately enough) ASIMO, looks more like something a child built out of LEGO blocks than like the product of some of the best minds in robotics.

As machines become more powerful and more necessary in our lives, we seem to want to make them invisible, to forget them. Even Apple’s strikingly bright counter-designs are balanced by organic, rounded forms: there are more curves in nature than straight lines. The push in design is always toward the smaller and the more discreet. It is as if, by making the technology that dominates our lives less apparent, we can pretend that we are still in control; not such a different arrangement from what Asimov imagined.

Explore with your students other design possibilities for technological devices. What things could be bigger? Smaller? Different colors and shapes? Why do particular machines look the way they do? How else could they look? What would happen, for example, if a calculator were round and bounced? Encourage your students to have fun. They may uncover something groundbreaking. As Asimov noted, “The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka’ (I’ve found it), but ‘That’s funny…’”

ABOUT THIS GUIDE

Darryl Stephens is a Ph.D. candidate in English Literature at U.C. Berkeley and received his undergraduate degree from the same institution in English and Linguistics. He is currently studying how the brain realizes literature.

 