Severin Perez

Designing Our AI Future

May 02, 2021

If one term has captured the popular imagination of late, it’s artificial intelligence (AI). Depending on whom you ask, it’s either the panacea for humanity’s woes, or a harbinger of the end times. And yet, how many of us can truly say that we understand AI? Can either the optimists or the pessimists actually justify their positions? What does the AI future actually look like?

These aren’t easy questions to answer. In truth, there probably isn’t a single answer that describes our AI future. Rather, there is a range of possible futures, and it’s incumbent on us to make choices that lead toward one of them rather than another. And given that we can’t even agree on whether the AI future is something to pursue or something to defend against, working cooperatively is likely to be difficult.

Perhaps a first step then is to take a dispassionate look at what an AI future looks like, and what it’s going to take to design one that works for, rather than against, humanity.

The AI Future is Coming

The story of humanity is the story of progress. Where once we led short, miserable lives defined by subsistence farming and a precarious balance on the precipice of disaster, we now lead long lives of relative leisure, safety, and plenty. This is not to say that human suffering has disappeared — far from it; the world remains a deeply unequal and difficult place for many people. But on average, today is in almost all respects the best time to be alive.1 And what has been the driver of our progress? Technology.

Humans have long yearned for better ways to maintain their health, safeguard their interests, and increase their productivity. It’s this yearning that leads us to study the universe, build new tools, and innovate relentlessly. Still, marginal gains in these areas were the rule for most of human history — until the industrial revolution, when we saw massive increases in all three. And that trend now continues with the arrival of useful AI. Exponential improvements in computing power and the availability of information have led to AI systems that can solve certain problems faster and more effectively than humans.2

If we look at the world and identify all of the persistent problems we face — despite significant progress in the last 200 years — it’s hard to imagine that we would allow our progress to come to a halt now.3 Done right, AI promises to help us cure disease, end poverty, explore the universe, automate drudgery, and pursue our passions more freely. In other words, the march of progress will continue — humans will always be motivated to make their lives easier and more pleasant. And AI can help us do just that. Put simply, the benefits of AI are too great for us to ignore.4

At this point, we should pause to clarify a few definitions. What do we mean when we say artificial intelligence? U.C. Berkeley Professor Stuart Russell and his co-authors describe intelligence as the ability to “make good decisions, plans, or inferences” based on some set of environmental factors and associated goals.5 AI, then, is a system that can make such decisions on its own, without human intervention.

Such automated systems are already at work in the wild — at least in the narrowest of senses. Self-driving cars make decisions about how to move from one point to another safely and efficiently. AI systems have mastered decision-making in games like chess and Go. And, somewhat concerningly, AI programs are making decisions in distinctly human domains like parole hearings and home loans.6

But narrow decision-making such as that described above is just the beginning of our AI future. The endgame of AI development, or at least the end of humanity’s role as a builder of AI, is so-called artificial general intelligence (AGI). An AGI system is one that is capable of decision-making in a wide variety of domains, autonomously, and in the face of novel situations. Moreover, an AGI is not limited to human-level intelligence. On the contrary, it is almost guaranteed to surpass us on the overall intelligence spectrum.3

Machine circuitry can carry out computations faster than biological systems and can do so without exhaustion.7 Once we have created a machine that is smarter than us, it will be capable of improving on itself without limit. In 1965, long before modern AI advances, statistician I.J. Good referred to this as an intelligence explosion.8 More recently, University of Oxford philosopher Nick Bostrom described the result as a superintelligence, which would far exceed the problem-solving ability of humankind.9

All of this leads us to the conclusion that the AI future is coming. And it’s not just a future where constrained AI systems make decisions in a small set of domains. It’s a future where AGI and superintelligent systems make decisions in nearly all aspects of human life. The real question is, what does this future mean for humanity?

A Human-Centered Future

The problem with creating something smarter and more capable than you is that you had better hope it stays friendly. This is the inspiration behind countless science fiction stories about humankind’s eventual end. And yet, our instinct for innovation is moving us, perhaps inevitably, towards such a creation. As philosopher and neuroscientist Sam Harris put it, “We seem unable to marshal an appropriate emotional response to the dangers that lie ahead.”3

If we accept that the AI future is inevitable, and smarter-than-human machines along with it, then our focus ought to be squarely on designing that future for the benefit of humans. To this end, our first task should be understanding the workings of AI systems. Unlike humans, who often act in ways that are unpredictable, illogical, and against their own interests, AI systems are logical almost to a fault — they have some goal, and they pursue that goal relentlessly. It is humans who orient AI systems towards a goal, and it is thus incumbent on humans to ensure that we provide the right goal.

One assumes (or at least hopes) that we as a species would not knowingly give our superintelligent systems the goal of destroying humanity. But we might nonetheless arrive at that ignominious result if we create AI systems without care. An AI doesn’t have motivations in the same way that a human does — it has a utility function that it uses to decide which actions to take. If an action is more likely to lead the system to achieve some goal (that is, to gain utility), then it will take that action. This becomes a problem when the utility function is misspecified, resulting in a machine that pursues its goal without consideration for other things that its designers may have intended.10

Imagine, for example, a self-driving car. If the only goal you give it is to “move from point A to point B as quickly as possible,” then it won’t take long to see how dreadfully wrong you were in designing the machine’s utility function. Such a vehicle would break all traffic laws, run over curbs, ignore the safety of pedestrians, and generally be a menace to those around it. And that’s in the case of a domain-constrained system. A superintelligent AGI with a misspecified goal framework could put humanity itself at risk — not through malicious intent, but through the relentless pursuit of some other goal in a way that has adverse consequences for human lives.
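To make the idea concrete, here is a minimal sketch of a utility-maximizing agent choosing among routes. The routes, their attributes, and the penalty weights are invented for illustration — this is a toy model of goal misspecification, not a description of how any real autonomous vehicle works.

```python
# Toy illustration of a misspecified utility function.
# The candidate routes and their attributes are made up for this example.
routes = [
    {"name": "highway", "minutes": 12, "red_lights_run": 0, "near_misses": 0},
    {"name": "side_streets", "minutes": 18, "red_lights_run": 0, "near_misses": 0},
    {"name": "sidewalk_shortcut", "minutes": 8, "red_lights_run": 3, "near_misses": 5},
]

def naive_utility(route):
    # Misspecified: only speed matters, so faster is always "better".
    return -route["minutes"]

def safer_utility(route):
    # Still simplistic, but closer to what the designers presumably meant:
    # speed matters, but safety violations carry a heavy penalty.
    penalty = 100 * (route["red_lights_run"] + route["near_misses"])
    return -route["minutes"] - penalty

print(max(routes, key=naive_utility)["name"])   # -> sidewalk_shortcut
print(max(routes, key=safer_utility)["name"])   # -> highway
```

Both agents are doing exactly what they were told; the difference lies entirely in how well the stated goal captures what we actually care about.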

AI researchers refer to this issue as the value alignment problem. That is, how do we ensure that AI systems act in a way that is consistent with our values? If we do build a superintelligent AGI, we are only going to have one chance to align its values with ours. Rolling back deployment of an AGI would be nearly impossible, and it might well be motivated to defend itself from being turned off.7, 10

Considering the dangers of value misalignment, Prof. Russell argues that our focus should be on designing not just AI, but human-compatible AI, which would be provably beneficial to humankind. In Prof. Russell’s definition, a human-compatible AI would be altruistic, in that its only concern would be the realization of human values. Furthermore, it would approach this goal from a position of humility, meaning that through observation of human behavior it could update its definition of what constitutes human values.4
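The humility principle can also be sketched in toy form. The snippet below is my own illustrative construction, not Prof. Russell’s formulation: the machine starts uncertain about which of two hypothetical objectives the human actually holds and updates its belief with Bayes’ rule as it observes the human’s choices. The hypotheses and likelihood numbers are assumptions made up for the example.

```python
# Toy sketch of "humility": belief over what the human values, updated by observation.
# Hypothetical candidate objectives and likelihoods -- illustrative numbers only.
hypotheses = {"values_speed": 0.5, "values_safety": 0.5}
likelihood_of_cautious_choice = {"values_speed": 0.2, "values_safety": 0.9}

def update(belief, observation_is_cautious):
    # Standard Bayes rule over the two hypotheses.
    posterior = {}
    for h, prior in belief.items():
        p_obs = likelihood_of_cautious_choice[h]
        posterior[h] = prior * (p_obs if observation_is_cautious else 1 - p_obs)
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

belief = hypotheses
for _ in range(3):  # the human repeatedly makes cautious choices
    belief = update(belief, observation_is_cautious=True)

print(belief)  # probability mass shifts strongly toward "values_safety"
```

The point is not the arithmetic but the posture: rather than being handed a fixed objective, the machine treats human values as something it is still learning.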

All of this is a way of saying that caution is key. We can hardly dismiss AI development as too dangerous — to do so would be to miss out on the near endless potential benefits to humanity. But just as we can’t dismiss AI’s potential, neither can we dismiss the dangers.

Designing the Future

Earlier, we noted that there are two schools of thought about our AI future: the optimists, who see it as humanity’s saving grace, and the pessimists, who fully expect AI to bring about the apocalypse. Indeed, it’s hard to look at both the inevitability of AI and the difficulty of getting it right and not feel a deep sense of foreboding. Perhaps, though, there is something to be said for both positions.

If we consider the wonders that technology has given us to date, AI seems to have incredible potential to alleviate human suffering and help us achieve goals heretofore thought impossible. Today, we treat curing cancer, interplanetary travel, and radically longer lives as mere fantasies. But could a superintelligent machine, with its superior processing and pattern-recognition abilities, get us there? Maybe. In truth, we don’t know. But a few hundred years ago we didn’t know that flight, antibiotics, or worldwide instantaneous communication were possible. To say that we know what will remain impossible in the future feels a bit like hubris.

Of course, it’s also hubris to say that we can get a superintelligent machine right on the first try. Humankind has shown a remarkable capacity for self-destruction before. At this very moment, nuclear weapons, climate change, and unabated resource consumption threaten humanity’s continued existence. Who is to say that AI won’t be the thing that pushes us into oblivion? If we don’t approach the problem with care, our worst fears may in fact come to pass.

If we’re going to reap the rewards of our AI future without suffering its admittedly significant dangers, then we need to start designing the right future, right now. This is going to require a massive effort in both technical and value-oriented thinking. In the meantime, we’re going to have to hold off those who see AI as a winner-take-all proposition. If we don’t work cooperatively, then the likelihood of mistakes grows, and it only takes one poorly designed superintelligence to put an end to the game for everyone.

All of this may seem like a somewhat academic exercise. AI systems today are amazing, but they’re nowhere near the type of superintelligent systems that we’re discussing here. In some ways, our AI future seems a long way off. Why then worry about it? In short, because if we don’t worry about it now, then we may not have the chance to do so in the future. Technology has a way of surprising people, and exponential growth, as we have seen in computing power, brings change far faster than most people realize.

Long-term planning is not humanity’s strong suit. We are understandably focused on what is happening in people’s lives today. That said, even if we invested hundreds of billions of dollars into thinking about the design and operation of superintelligent machines, which may never even come to pass, it would be an investment worth making. Such research could prevent future catastrophe while simultaneously ensuring a bright future for humanity.

References

  1. Pinker, Steven (2018). “Enlightenment Now: The Case for Reason, Science, Humanism, and Progress.”
  2. Brynjolfsson, Erik and McAfee, Andrew (2016). “The Second Machine Age.”
  3. Harris, Sam (2016). “Can we Build AI Without Losing Control Over it?” TED. link
  4. Russell, Stuart (2017). “3 Principles for Creating Safer AI.” TED. link
  5. Russell, Stuart, Dewey, Daniel, and Tegmark, Max (2015). “Research Priorities for Robust and Beneficial Artificial Intelligence.” AI Magazine 36, no. 4: 105–14.
  6. Ramey, Corinne (2020). “Algorithm Helps New York Decide Who Goes Free Before Trial.” Wall Street Journal.
  7. Bostrom, Nick (2015). “What happens when our computers get smarter than we are?” TED. link
  8. Good, Irving John (1966). “Speculations Concerning the First Ultraintelligent Machine.”
  9. Bostrom, Nick (2014). “Superintelligence: Paths, Dangers, Strategies.”
  10. Soares, Nate (2017). “Ensuring Smarter-than-Human Intelligence Has a Positive Outcome.” Google Talks. link

This article originally appeared on Medium.
