Artificial intelligence (AI), like time travel, is a perennial subject for writers of science fiction. And, like time travel, AI is subject to a number of misunderstandings that can make writing a story set in that world, built on that subject, using it as a MacGuffin, or casting it as a character, problematic.
With movies like Blade Runner 2049, Ex Machina, and Her keeping AI in our geeky gestalt, writers will continue to tackle the topic. A little research might spark a new, or more realistic, take on the AI tale, though.
The first thing to understand is that true AI, a machine that is self-aware and thinks independently, is still the stuff of science fiction. In August, I attended a panel on AI at WorldCon 75 with Greg Hullender, who used to work on Microsoft's machine learning project. According to Hullender, true AI is at least hundreds of years away; it may never be achieved at all.
Mr. Hullender pointed out that simulating the brain isn't the answer. Biology isn't efficient enough. Also, we humans barely understand the mechanism of how we think. Psychology and neurobiology continue to advance our understanding of the human mind, but how can we attempt to duplicate something whose mechanics elude our comprehension? So computer scientists don't try to copy the brain directly.
The scientific method is to design an experiment to test a theory—try something you think might work—and use the results of the experiment to feed back into the next attempt. This is what computer scientists are doing now. There is much thinking about thinking left to be done (how meta is that?).
What are scientists doing today that you might be able to extrapolate into the future? As a start, give some consideration to expert systems, neural networks, and autonomous robots.
1) Expert systems
What’s an expert system, you ask (and I’m so glad you did)? An expert system is a computer that is specialized and programmed to perform a particular task. IBM’s chess computer Deep Blue, which defeated world champion Garry Kasparov in 1997, is a classic example. Its task was to play chess: it was first programmed with the rules of the game and with all of the strategies of all the grand master chess players its programmers could find. Then, Deep Blue played games of chess against other computer and human opponents to apply those rules and “understand” how the strategies worked.
IBM’s Watson was built to answer questions. To that end, it draws on one of the biggest knowledge repositories in existence. Its programmers probably had to prepare Watson for Jeopardy by teaching it to work backward from the answer to the question, as Jeopardy competitors must.
Researchers have even developed computer diagnosticians that can extrapolate a diagnosis from a patient’s symptoms with accuracy equal to or greater than that of a human doctor.
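At its simplest, an expert system is a body of hand-written rules distilled from human specialists, plus a matcher that applies them. A minimal sketch of that idea, in Python, might look like the following; the condition names and rules here are invented for illustration, and real medical expert systems use far larger rule bases with confidence weights.

```python
# A toy rule-based "expert system": hand-written rules map observed
# symptoms to candidate diagnoses, the way early expert systems
# encoded the knowledge of human specialists. All rules are made up.

RULES = [
    # (required symptoms, diagnosis)
    ({"fever", "cough", "fatigue"}, "flu"),
    ({"sneezing", "runny nose"}, "common cold"),
    ({"fever", "stiff neck", "headache"}, "possible meningitis"),
]

def diagnose(symptoms):
    """Return every diagnosis whose rule is fully matched by the symptoms."""
    observed = set(symptoms)
    return [name for required, name in RULES if required <= observed]

print(diagnose(["fever", "cough", "fatigue", "headache"]))  # ['flu']
```

The system is only as good as its rules: it cannot generalize beyond what its programmers encoded, which is exactly why such machines are specialists rather than thinkers.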
2) Neural Networks
Recipes, paint colors, and My Little Pony: these are amusing examples of what can happen when you unleash a neural network on a task with a less-than-comprehensive data set. In each of the cases that follow, the same basic process was used: train the network on exemplars of a given data type, then ask it to generate new examples of that type.
Janelle Shane, who works with neural networks in her spare time, decided to teach one how to create recipe names. She trained it on thousands of recipe names and then set it the task of creating its own.
The result? Recipe names like Crimm Grunk Garlic Cleas, Cabbage Pot Cookies, and Artichoke Gelatin Dogs. Yum! Shane’s efforts were inspired by Tom Brewe’s neural network generated recipes (they’re even weirder).
Shane next turned her efforts to teaching a neural network to name colors of paint. She entered the names and color swatches of 7,700 Sherwin-Williams paint colors and let the neural network do its thing. Once again, the results were entertaining: Hurkey White, Stank Bean, and Turdly were among the titter-worthy. Some of the colors even matched the names (sort of).
My final example is pure fun. Once again, Shane is responsible. This time, she trained the neural network on more than 1,500 names from the My Little Pony Friendship is Magic Wiki and set it to work. The results tended to the dark side, with pony names like Sunder Bright, Deader Pony, and Bitter Star.
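The train-then-generate loop behind all three experiments can be sketched with a much simpler model than the character-level neural networks Shane uses: a Markov chain over characters. The statistics are cruder, but the workflow is the same, i.e., learn from example names, then sample new names character by character. The training names below are just a handful of canonical pony names chosen for illustration.

```python
import random

# A character-level Markov chain as a stand-in for Shane's neural
# networks: learn which character tends to follow each short context,
# then sample new names from those learned statistics.

def train(names, order=2):
    """Map each length-`order` context to the characters seen after it."""
    model = {}
    for name in names:
        padded = "^" * order + name.lower() + "$"   # ^ = start, $ = end
        for i in range(len(padded) - order):
            context = padded[i:i + order]
            model.setdefault(context, []).append(padded[i + order])
    return model

def generate(model, order=2, max_len=20):
    """Sample one new name, character by character."""
    context, out = "^" * order, []
    while len(out) < max_len:
        nxt = random.choice(model[context])
        if nxt == "$":          # the chain decided the name is finished
            break
        out.append(nxt)
        context = context[1:] + nxt
    return "".join(out).title()

names = ["Twilight Sparkle", "Rainbow Dash", "Pinkie Pie", "Applejack"]
model = train(names)
print(generate(model))
```

With only four training names, the output often reproduces chunks of the originals verbatim, which is the small-data problem in miniature: a less-than-comprehensive data set gives the model too little to generalize from, whether it is a Markov chain or a neural network.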
3) Autonomous Robots
Facebook tried programming their “dialog agents” to negotiate with one another and the results were fascinating. Not only did the agents simulate human negotiation strategies, like pretending to be interested in something worthless so that the other agent would want it more, but at one point, researchers had to change the bots’ programming to a “fixed supervised model” because the agents also started to communicate in their own, incomprehensible language.
Some media reported that Facebook shut down the experiment because the dialog agents were showing signs of independent thought, which isn’t the case. Adrienne LaFrance, who reported on the event for The Atlantic, noted that the language the agents created wasn’t even close to the dreaded singularity (true, emergent AI). The researchers just couldn’t interpret the results of their experiment because they couldn’t understand what the bots said.
The agents did what computers do best. They improved the efficiency of the task they were assigned by creating their own shorthand.
4) Robot workers and self-driving cars
As I often do, I listened to the Canadian Broadcasting Corporation’s (CBC’s) Podcast Playlist the other week. The most interesting part of the episode was a segment on the podcast Containers and on Freight, a robot made by Fetch Robotics. Freight transports goods to human factory workers. Humans still need to sort and pack the goods because the sensitivity and articulation of hands and fingers are difficult to simulate.
Since Freight works alongside humans, Fetch has had to program the machine to recognize human legs so it won’t injure workers. They’ve “shown” Freight long, short, thin, and thick legs in a variety of pants, from jeggings, through cargo pants, to slacks. The same principle applies to self-driving cars, but the complexity increases enormously. Freight operates within a small, controlled environment; self-driving cars have to be programmed with a lot more than just leg shapes.
Cars would have to be able to recognize other cars, other types of vehicles, people (of all shapes, sizes, and ages), animals, and obstacles, and they’d have to be able to adapt to different driving conditions, like weather. Programmers would also have to consider “edge cases” to enable self-driving vehicles to account for the unexpected.
Bringing it to the page
So where does this leave us writers of science fiction when it comes to telling an AI tale? Ultimately, and perhaps ironically, it’s the humanity of the characters and their struggles that attract readers. Science is the setting, not the story.
Think of Ann Leckie’s Ancillary series. Breq is not human in distinct ways, but her goal, revenge, is quintessentially human. Steven Spielberg’s A.I. Artificial Intelligence is a touching movie about an abandoned android designed to offer comfort to a grieving mother. The protagonist’s struggle is to find its purpose and a home once it has been discarded (the non-SF analog would be The Velveteen Rabbit). Of course, AI more often features as a story’s antagonist, as in the Terminator series of movies or the adaptation of Asimov’s I, Robot.
My best advice would be to read other novels about AI and robotics. See what’s already out there. Become well-versed in the sub-genre before you try your hand at it. You are free, of course, to write whatever you’d like. In science fiction, though, you can never go wrong with research, and savvy readers will call you out if your science is implausible. These interesting and entertaining resources will give you a solid grounding with which to work.
Melanie Marttila creates worlds from whole cloth. She’s a dreamsinger, an ink alchemist, and an unabashed learning mutt. Her speculative short fiction has appeared in Bastion Science Fiction Magazine, On Spec Magazine, and Sudbury Ink. She lives and writes in Sudbury, Ontario, Canada, where she spends her days working as a corporate trainer. She blogs at http://www.melaniemarttila.ca and you can find her on Facebook and Twitter.