Aug 19, 1997 | ISBN 9780375700514
Jan 18, 2012 | ISBN 9780307766571
Date: Sun, 16 Jan 2000 14:27:39 (PST)
From: email@example.com
To: Alice@cs.stanford.edu
Subject: Hello

Hello, Alice.
Astro Teller is a scientist, entrepreneur, and the author of Exegesis and Sacred Cows. He holds a PhD in artificial intelligence from Carnegie Mellon University and completed degrees in computer science as well as symbolic and heuristic computation at Stanford University.
Q: Why did you write this book?

A: I'm a compulsive storyteller, an avid reader, and I have always nurtured the secret goal of spending my life as a writer. I wrote a story I wanted to read, not a story I thought others wanted me to tell. My desire to write and be a writer wasn't why I started the novel, and yet it's the real reason I wrote it.

Also, as an AI scientist, I've always been disappointed with the treatment that the great artificial intelligences of Western literature have received. Ultimately, a timeless story has to be about the human condition. In service of this goal, creatures like HAL and Frankenstein's monster have never been given a full chance either to be the protagonist of the story or to be fully inhuman. From a very real point of view, I wrote Exegesis because I felt there was a hole in the classic creation-of-life stories in our culture.

Q: What is artificial intelligence?

A: If artificial intelligence (AI) can be defined at all, it is the study of how to get computers to do what humans currently do better. AI is not the study of life; that's biology. AI concerns itself with thinking and consciousness, but not just as a study: the real goal of AI is to understand and build devices that can perceive, reason, act, and learn at least as well as we can. AI does not constrain itself to re-engineering the human brain, so there is little reason to expect that our results will become more human as they become more intelligent.

According to those who practice it, AI is "the study of ill-structured problems." The cultural definition is something like "AI is the science of how to get machines to do the things they do in the movies." The truth is somewhere in between. Just as airplanes are not mechanical birds, computers will never be mechanical humans.
Nevertheless, just as airplanes do "fly" in their own, equally valid way, AI will eventually produce a machine that is "intelligent" in its own, equally valid way.

Q: Why is the book called EXEGESIS?

A: "Exegesis" means the careful examination and interpretation of a text. The main reason I chose Exegesis as the title is that the story's protagonist, Edgar, is a computer program. As a computer program, Edgar lives his entire life through a process of exegesis; he only "sees" the world through text and must find what meaning he can entirely within that text.

A second reason for the title is that there are a number of different levels on which the book can be read. On the surface it is clearly a story with its own characteristics. It has, however, been constructed to comment on a wide variety of archetypal and specific stories of "other intelligences" that many readers are familiar with. For this reason, Exegesis as a title is a hint to the reader to pursue these deeper aspects of the book.

The third reason is that many people, when they hear "exegesis" rather than seeing it on paper, hear "Exit Jesus" instead. This is a purposeful pun. One of the "deeper" aspects of the story to which I just referred is that Exegesis can be read as an allegory for the second coming of Christ.

Q: Why did you choose email as the format for EXEGESIS?

A: Our culture has, particularly since Arthur C. Clarke's depiction of HAL, had some notion of what an artificially intelligent computer program would be like. One way in which I think this popular notion is wrong is the assumption that such a program would experience the world anything like the way people experience it. A software program is, unless it inhabits some robotic shell, disembodied.
That means it has a mental connection to the world, but no physical one. I'm confident that any consciousness (mechanical or otherwise) that received none of what we call "sense information" (e.g., taste, touch, sight) would grow to feel very differently from us about a great many aspects of life.

Edgar is a piece of software, and Exegesis makes it clear he cannot, for example, see images or hear sounds. I wanted the reader to really feel what it might be like to be in Edgar's shoes. Since Edgar gets no "description," it seemed appropriate to give none to the reader. In particular, Edgar can only get, process, and respond to symbols in the machine (e.g., ASCII text in an email message). By giving the reader this same limited view of the world (email), the reader can actually feel the limitations within which Edgar must learn and grow.

Q: Do you consider this book, which, like the cautionary tale FRANKENSTEIN, is about artificial life, to be anti-science?

A: To begin with, I try not to think of Exegesis as a cautionary tale of any sort. I went to some trouble to keep the story from having anything like a moral. If the story does have a moral, however, it would be a moral about engineering and the creation of knowledge, and it might sound something like this:

"Like a parent, a scientist is not accountable for all future ramifications of what she produces, but, like a parent, a scientist does have some responsibility to provide what structure she can so that the results of her labor tend toward the positive in society."

Some people will, no doubt, read Exegesis as a story about a monster and a tale of caution about the hubris of the scientist. The story is just the opposite. In it we see Edgar, a blameless creature despite what the xenophobes feel, and a scientist who was right to create him and wrong only in how she shepherded her creation. So my answer is that I am certainly pro-science, and I interpret Exegesis as pro-science too.
Whether others will too is still, I'd say, an open question.

Q: You've said that your text can be read on many levels: a fable, a parable, and even an allegory. Tell us what universal stories you are drawing upon.

A: Our culture already has a number of well-known stories about artificial life and non-human intelligence. In Exegesis I've tried not only to tell a new and engaging story, but also to comment on those well-known stories through the details of my novel.

For example, the story of Christ is our culture's dominant story of a non-human intelligence. Though it certainly doesn't have to be, Exegesis can be profitably read as an allegory for the second coming of Christ.

Other stories from our culture that I was particularly conscious of while writing Exegesis include Frankenstein, Pygmalion, and Flowers for Algernon. In some stories of the artificial creation of life (such as Frankenstein) the focus and flavor of the story is that of a monster to be hunted. I have had people describe Exegesis to me as a modern retelling of the Frankenstein myth. I think it's more than that, but they're not wrong that there are aspects of that archetypal story in Exegesis.

In other stories, such as Pygmalion, the story centers instead on a topic as different as love. I have also had people tell me Exegesis is a modern retelling of the creation of Galatea. Again, while that is not the extent of the story, echoes of and responses to that universal tale are definitely woven into Exegesis.

Q: What are some of the more interesting projects being developed in artificial intelligence?

A: There is an incredible range of projects going on in AI labs worldwide. I'll pick a few to try to show that range, but certainly much of the AI work being done does not fall inside these examples.
In addition, I will give examples from Carnegie Mellon's School of Computer Science, both because they are familiar to me and because CMU houses many of the most exciting AI projects anywhere.

For example, the Robotics Institute at CMU has a car that can partly drive itself. The limitations are still serious: RALPH (as the driving program is currently called) only steers the Ford minivan; they won't let him/it control the accelerator and the brakes yet. Still, RALPH drove 97% of the way from Washington, D.C. to San Diego on the highway at 65 mph with no one holding the wheel. The applications for this sort of program include not only the cliché robotic chauffeur, but also more modest devices like a warning noise and a gentle steering correction for drivers who fall asleep at the wheel.

The INTERACT project at CMU has the following as its goal: I pick up the phone in Pittsburgh and call my friend Arjuna in New Delhi. I don't speak Hindi and Arjuna doesn't speak English. However, if I call Arjuna through the INTERACT English-Hindi service, we can each speak our own language and hear the other as though he spoke ours fluently. Right now INTERACT can accomplish this only slowly and with a limited vocabulary, but this AI project will probably go a long way toward making the world even smaller in the very near future.

Researchers in computer vision at CMU have developed a camera that actually records 3-D information. That is, at 30 frames a second, it can record not only what the scene looks like, but also how far each "pixel" is from the center of the lens. This kind of camera has fabulous uses not only in entertainment, but also in robotics, industrial virtual reality, and a host of other fields.

Q: Do you ever feel, as a scientist, that you're always looking so far into the future that the technology of today is frustrating?

A: Frustrated? No, not frustrated.
I'm excited to see what the future holds for us (socially and politically, as well as technologically), but I'm even more interested in the process than in the end result. As someone interested in creating the future, I'm also definitely a student of the past. History teaches that people in general, and scientists in particular, are horrible at predicting the future.

There are two very different kinds of difficult problems in science. The first kind of goal is difficult to achieve because achieving it would contradict a law of physics. A room-temperature perpetual motion machine is a good example; such a machine violates the second law of thermodynamics. The second kind of problem is difficult because we don't really know even how to attack it. Artificial intelligence is in the second category, not the first. There are plenty of people who say that AI falls in the first category, that AI has set itself an obviously impossible goal. They are entitled to their opinion, but my opinion is that a failure of the imagination does not constitute a proof. History has repeatedly shown that if something is not actually contrary to the laws of physics, someone will eventually figure out a way to do it.

Q: Edgar is fiction. How soon can we expect to meet him in real life?

A: The realistic answer is that it could be tomorrow and it might be never. Neither of those extremes is particularly likely, though. I think the chances are very good (let's say, better than 50%) that some cousin of Edgar's will appear on the scene between 15 and 30 years from now.

The answer to your question depends very much on what you mean by "Edgar." If you feel comfortable calling the Model T "a car" and the Wright brothers' glider "an airplane," then I think you'll have a real "artificial intelligence" in about 20-25 years.
I'm making this distinction because the public has a way of raising the bar as science makes progress, so that they don't have to admit that, for example, machines can be creative.

These predictions are behavioral predictions, mind you. By that I mean that we'll have something that SEEMS just like Edgar. Whether this machine/program will FEEL just like Edgar (i.e., be self-aware) is an entirely different question. I wouldn't venture to say when that milestone will be reached because, in part, we don't really know how to measure it to see whether we've reached it. Can you describe a test to establish whether or not I'm conscious and/or self-aware? I can't.

Q: EXEGESIS paints a vivid future for AI scientists and the programs they create. How accurate would you say EXEGESIS is likely to be in that area?

A: My work as a scientist in the field of artificial intelligence obviously had a tremendous impact on the course and details of Exegesis. In particular, the ways in which a software entity might really be close to human is a serious scientific question that I did not ignore. That being said, the novel is not meant to be scientifically or socially prophetic. If Exegesis has a prophecy hidden in it, that prophecy is about how we will cope with new science, not what that new science will be.

I've had a number of questions about how exactly Edgar may have become self-aware. In the novel I purposefully avoided answering that question because, as a scientist, I know better than to try to predict the future of science. As a scientist involved in the process, however, I have an educated guess that I'll let you in on. Evolution is clearly one way to produce human-level intelligence. There is now a serious branch of AI that uses a form of artificial evolution in the computer to evolve complex computer behaviors.
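The "artificial evolution" described here (keeping the fitter candidates in a population, recombining them, and mutating the offspring) can be sketched as a toy genetic algorithm. This is a minimal illustration under assumed parameters, not anything from the novel or from Teller's actual research; the OneMax task and every name and setting below are hypothetical, chosen only to show the selection/crossover/mutation loop.

```python
import random

# Toy evolutionary computation: evolve bit strings toward all ones.
# "Fitness" is simply the number of 1 bits (the classic OneMax task).

def fitness(genome):
    return sum(genome)

def evolve(length=20, pop_size=30, generations=60, mutation_rate=0.05, seed=0):
    rng = random.Random(seed)
    # Start from a random population of candidate "behaviors" (bit strings).
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half of the population become parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)  # one-point crossover
            child = a[:cut] + b[cut:]
            # Mutation: flip each bit with small probability.
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # typically close to the maximum of 20
```

Nothing in the loop "knows" what a good solution looks like; selection pressure alone pushes the population toward higher fitness, which is the sense in which such systems can evolve behaviors their designers never specified.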
If I had to bet $1000 today on which technique had the best chance of paying off with an Edgar first, I'd put my money on evolutionary computation.

Q: What does the recent victory of Deep Blue over Garry Kasparov mean for future conflicts between people and machines?

A: We still remember some of the great moments in flight: Wilbur and his brother at Kitty Hawk, Lindbergh crossing the Atlantic, Yeager going faster than sound, Apollo 11 on the moon. What we more easily forget, despite history's repeated lessons, is that up until the very eve of each event, most of society lined up to ascend the public soapbox and proclaim, "It can't be done."

Kasparov's defeat by a computer will go down in the history of computation as one of the most important events since Charles Babbage's Analytical Engine started adding at superhuman speed. Deep Blue has not officially earned the title of "best chess player in the world," but it probably will soon. As a culture shock, Kasparov's loss was no more than a 4.5 on the Richter scale, but there is an 8.5 looming in our future.

Around 1870, the machine earned our respect for its strength. John Henry won the spike-driving contest but, dying at the climax, clearly demonstrated that the future of physical prowess lay with the machine. There is a good chance that the majority of people on this planet will live to see a generation of computers whose actions demand that we afford them not just the intellectual rights Deep Blue recently won for them, but the moral and personal rights our society is much less inclined to give away.

Twice already in written history, science has told us that, in a fundamental and dramatic way, we aren't the center of the universe. The first time was the Copernican revolution. Over time, most of society has come to accept that being human can be valuable without our planet being at the center of the astronomical universe. The second time was the Darwinian revolution.
Over time, much (though certainly less) of society has come to accept that being human can be valuable without our being the teleological center of the universe (its raison d'être).

Kasparov's defeat by a machine marks something special for scientists, but for society the real test is yet to come. If people think they had a hard time with Deep Blue, wait until it becomes clear that humans aren't even the mental center of the universe.

The uncomfortable truth that most of us now living will need to deal with before we die is this: humans may very well get the evolutionary consolation prize of being the last step in the evolutionary process before immortal, self-improving devices appear on the scene. Isaac Asimov once said about the future of computers, "If we're lucky, [they'll] keep us on as pets."