What separates us from the robots? The answer might be common sense…

Richard Wallace

There’s a saying about common sense attributed to Albert Einstein (but possibly put into his mouth by his Life Magazine profiler, Lincoln Barnett) which goes: “Common sense is actually nothing more than a deposit of prejudices laid down in the mind prior to the age of eighteen.” Einstein was (supposedly) using this as a foundation to play with principles around gravity and relativity, but of course some of the simpler principles—or prejudices, if you will—that we learn before eighteen are extremely useful. Fire is hot. Water is wet, and possibly deep. Things are heavy or light, long or short, edible or poisonous. We generally know this instinctively, or at least can use our common sense to hazard a decent guess. An ability to learn, recall and anticipate how the world around us will affect us—to develop an instinct for whether something will burn us before we stick our hand in it—is a pretty decent marker of what it means to be human as opposed to a dumb animal.

Or, for that matter, a robot. But perhaps not for long: Microsoft co-founder Paul Allen’s non-profit AI lab, the Allen Institute for Artificial Intelligence, is now attempting to recreate common sense artificially. Undoubtedly the next frontier in the AI world is to give robots the reactive, analytical, learned abilities that help humans navigate the world.

Notwithstanding our fears about terrifying robot dogs that can open doors and Skynet-style dystopias, the current crop of AI robots are actually pretty stoopid. They can perform simple tasks on command, and even mimic human behaviour, but we won’t be seeing any robotic uprisings yet—not until they’ve at least mastered the art of encountering an unexpected task or obstacle and using common sense to solve it. Which is, er, what the Allen Institute’s Project Alexandria is trying to teach them to do. Thanks, guys. No, really.

Project Alexandria’s aims are threefold: to advance machine learning, to figure out how to actively measure the common-sense capabilities of an AI system, and to crowdsource common sense from humans. Essentially, it’s a large-scale data collection project that aims to create a repository of common sense that AI systems can draw on. Because in order to reach that hallowed goal of total robotic dominance of the human race, robots will need access to a mind-boggling array of basic facts that humans take for granted. At the moment, AI can be stumped by seemingly simple questions that humans learn to answer through experience from an early age: how to tell whether a milk carton is full, or how to guess the contents of a trash can. We humans have the luxury of living with milk cartons and trash cans on a daily basis; to the cold logic of a robot, it isn’t so straightforward.

In other words, Project Alexandria aims to bridge the gap between what makes us fundamentally human—the intuition and experience that tell us a trash can likely contains banana peels, food wrappers and used tissues—and the mathematical way AI experiences the world. Right now machine learning is a long way from perfect, as these hilarious outputs attest—but initiatives like Project Alexandria, if successful, will begin to change that. And in doing so, they will call into question what it means to be human in the first place.

Which leads us to the overwhelming question: should we be doing this at all? Such renowned minds as the late Stephen Hawking have publicly warned of the apocalyptic possibilities of AI, and so alongside the potential benefits of Project Alexandria—medical diagnosis, AI safety, intelligent home assistants—there is the broader question of whether technological progress is inherently beneficial, or whether that progress will ultimately slip from our control. Yesterday it was reported that one of Uber’s driverless cars had claimed the technology’s first pedestrian victim; there is always the possibility that tech will not act in the ways we have anticipated. And, like all tech, once Pandora’s box is open, it’s remarkably difficult to close the lid. If we are going to continue these high-minded developments, we must accept that strict regulation will be necessary to minimise any unwanted side-effects.

But as an innovation agency, we find it hard not to be seduced by the possibilities. We live in an age where the sci-fi fantasies we used to see as impossibly futuristic in movies and novels are suddenly possible, whether it’s HAL 9000-style smart homes or robot butlers and assistants. Shops in Scotland have already begun to trial robot shop assistants (with mixed results), and despite fears around automation (you can see it creeping in at self-service checkouts the world over), full automation is also often seen as a radical solution to the less savoury parts of capitalism. Proponents of “fully automated luxury communism” claim that outsourcing manual labour to robots need not be a crisis for humans who require employment—it may instead divorce us from the need to work at all, freeing up our time for education, self-care, leisure and other more edifying pursuits than performing repetitive tasks behind desks for eight hours a day. A post-work future in which we are not shackled to specific jobs in order to pay the rent is an exciting—if currently hard to imagine—prospect, one which would drastically redefine how we live and even how we see ourselves as humans.

As always, innovation carries both value and risk. Although luxury communism may well be the economic model Western society has been clamouring for, Hawking’s prediction of human extinction is also possible, along with myriad middle grounds that we haven’t yet anticipated. But unlike robots, we humans have the tools to negotiate the unanticipated. So while there’s no reason not to pursue our visions of luxury automation and ten-hour weeks, we also need to make sure we manage that technology carefully, so that we remain in control of our destiny at all times. That’s just common sense.