Would “artificial superintelligence” lead to the end of life on Earth? It’s not a stupid question.
The activist group Extinction Rebellion has been remarkably successful at raising public awareness of the ecological and climate crises, especially given that it was established only in 2018.
The dreadful truth, however, is that climate change isn’t the only global catastrophe that humanity confronts this century. Synthetic biology could make it possible to create designer pathogens far more lethal than COVID-19; nuclear weapons continue to cast a dark shadow over global civilization; and advanced nanotechnology could trigger arms races, destabilize societies and “enable powerful new types of weaponry.”
Yet another serious threat comes from artificial intelligence, or AI. In the near term, AI systems like those sold by IBM, Microsoft, Amazon and other tech giants could exacerbate inequality due to gender and racial biases. According to a paper co-authored by Timnit Gebru, the former Google employee who was fired “after criticizing its approach to minority hiring and the biases built into today’s artificial intelligence systems,” facial recognition software is “less accurate at identifying women and people of color, which means its use can end up discriminating against them.” These are very real problems that affect large groups of people and require urgent attention.
But there are also longer-term risks arising from the possibility of algorithms that exceed human levels of general intelligence. An artificial superintelligence, or ASI, would by definition be smarter than any possible human being in every cognitive domain of interest, such as abstract reasoning, working memory, processing speed and so on. Although there is no obvious leap from current “deep-learning” algorithms to ASI, there is a good case to make that the creation of an ASI is not a matter of if but when: Sooner or later, scientists will figure out how to build an ASI, or figure out how to build an AI system that can build an ASI, perhaps by modifying its own code.
When we do this, it will be the most significant event in human history: Suddenly, for the first time, humanity will be joined by a problem-solving agent more clever than itself. What would happen? Would paradise ensue? Or would the ASI promptly destroy us?
I believe we should take the arguments for why “a plausible default outcome of the creation of machine superintelligence is existential catastrophe” very seriously. Even if the probability that these arguments are correct is low, a risk is standardly defined as the probability of an event multiplied by its consequences. And since the consequences of total annihilation would be enormous, even a low probability, multiplied by that consequence, would yield a sky-high risk.
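To make that arithmetic concrete, here is a minimal numerical sketch of the probability-times-consequence definition; the numbers are purely illustrative choices of mine, not figures from the article.

    # Risk as probability multiplied by consequence (illustrative numbers only).
    probability = 0.01            # suppose just a 1 percent chance the arguments are right
    consequence = 8_000_000_000   # and the consequence is every human life on Earth
    expected_loss = probability * consequence
    print(f"{expected_loss:,.0f}")  # 80,000,000 expected deaths: a sky-high risk

Even with a much smaller probability, the sheer size of the consequence keeps the product enormous, which is the whole point of the definition.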
What’s more, the very same arguments for why an ASI could cause the extinction of our species also lead to the conclusion that it could obliterate the entire biosphere. Fundamentally, the risk posed by artificial superintelligence is an environmental risk. It is not just a question of whether humanity survives, but an environmental issue that concerns all earthly life, which is why I have been calling for an Extinction Rebellion-like movement to form around the dangers of ASI — a threat that, like climate change, could potentially harm every creature on the planet.
Although no one knows for sure when we will succeed in building an ASI, one survey of experts found a 50 percent likelihood of “human-level machine intelligence” by 2040 and a 90 percent likelihood by 2075. A human-level machine intelligence, or artificial general intelligence (AGI), is the stepping-stone to ASI, and the step from one to the other might be very small: any sufficiently intelligent system will quickly realize that improving its own problem-solving abilities will help it achieve a wide range of “final goals,” or the goals that it ultimately “wants” to achieve (in the same sense that spellcheck “wants” to correct misspelled words).
Furthermore, one study from 2020 reports that at least 72 research projects around the world are currently, and explicitly, working to create an AGI. Some of these projects are just as explicit about not taking the potential threats posed by ASI seriously. For example, a company called 2AI, which runs the Victor project, writes on its website:
There is a lot of talk lately about how dangerous it would be to unleash real AI on the world. A program that thinks for itself might become hell-bent on self preservation, and in its wisdom may conclude that the best way to save itself is to destroy civilization as we know it. Will it flood the internet with viruses and erase our data? Will it crash global financial markets and empty our bank accounts? Will it create robots that enslave all of humanity? Will it trigger global thermonuclear war? … We think this is all crazy talk.
But is it crazy talk? In my view, the answer is no. The arguments for why ASI could devastate the biosphere and destroy humanity, which are primarily philosophical, are complicated, with many moving parts. But the central conclusion is that by far the greatest concern is the unintended consequences of the ASI striving to achieve its final goals. Many technologies have unintended consequences, and indeed anthropogenic climate change is an unintended consequence of large numbers of people burning fossil fuels. (Initially, the transition from using horses to automobiles powered by internal combustion engines was hailed as a solution to the problem of urban pollution.)
An ASI would be the most powerful technology ever created, and for this reason we should expect its potential unintended consequences to be even more disruptive than those of past technologies. Furthermore, unlike all past technologies, the ASI would be a fully autonomous agent in its own right, whose actions are determined by a superhuman capacity to secure effective means to its ends, along with an ability to process information many orders of magnitude faster than we can.
Consider that an ASI “thinking” one million times faster than us would see the world unfold in super-duper-slow motion. A single minute for us would correspond to roughly two years for it. To put this in perspective, it takes the average U.S. student 8.2 years to earn a PhD, which amounts to only about 4.3 minutes in ASI-time. Over the period it takes a human to earn one PhD, the ASI could have done the subjective equivalent of roughly a million of them.
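For readers who want to check the arithmetic, here is a short sketch of the calculation, assuming an exactly one-million-fold speedup and 365-day years; all figures are approximate.

    # Time-scale arithmetic for a hypothetical ASI running 1,000,000x faster than a human.
    speedup = 1_000_000
    minutes_per_year = 365 * 24 * 60                  # 525,600

    # One human minute corresponds to roughly 1.9 subjective years for the ASI.
    print(speedup / minutes_per_year)                 # ~1.9

    # A PhD that takes a human 8.2 years would take the ASI about 4.3 wall-clock minutes.
    phd_years = 8.2
    print(phd_years * minutes_per_year / speedup)     # ~4.3

    # Over those same 8.2 human years, the ASI experiences 8.2 million subjective years,
    # enough subjective time for roughly a million PhDs.
    print(phd_years * speedup / phd_years)            # 1,000,000.0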
This is why the idea that we could simply unplug a rogue ASI if it were to behave in unexpected ways is unconvincing: The time it would take to reach for the plug would give the ASI, with its superior ability to problem-solve, ages to figure out how to prevent us from turning it off. Perhaps it quickly connects to the internet, or shuffles around some electrons in its hardware to influence technologies in the vicinity. Who knows? Perhaps we aren’t even smart enough to figure out all the ways it might stop us from shutting it down.
But why would it want to stop us from doing this? The idea is simple: If you give an algorithm some task — a final goal — and if that algorithm has general intelligence, as we do, it will, after a moment’s reflection, realize that one way it could fail to achieve its goal is by being shut down. Self-preservation, then, is a predictable subgoal that sufficiently intelligent systems will automatically end up with, simply by reasoning through the ways they could fail.
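The point can be made with a toy sketch of my own; it is not from the article and is not a claim about how any real AI system works. However the final goal is filled in, a planner that enumerates its own failure modes rediscovers “avoid being shut down” as a subgoal.

    # Toy illustration: "avoid shutdown" falls out as a subgoal of *any* final goal.
    def failure_modes(final_goal):
        # Ways the agent could fail, regardless of what the goal actually is.
        return [
            f"it is shut down before '{final_goal}' is achieved",
            f"it lacks the resources needed to pursue '{final_goal}'",
        ]

    def instrumental_subgoals(final_goal):
        # Each failure mode implies a subgoal: prevent that failure.
        return [f"prevent the case where {mode}" for mode in failure_modes(final_goal)]

    for goal in ["establish world peace", "cure cancer", "maximize paperclips"]:
        print(goal, "->", instrumental_subgoals(goal))

Swapping in any other goal changes nothing about the two subgoals that come out, which is the argument in miniature.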
What, then, if we are unable to stop it? Imagine that we give the ASI the single goal of establishing world peace. What might it do? Perhaps it would immediately launch all the nuclear weapons in the world to destroy the entire biosphere, reasoning — logically, you’d have to say — that if there is no more biosphere there will be no more humans, and if there are no more humans then there can be no more war — and what we told it to do was precisely that, even though what we intended it to do was otherwise.
Fortunately, there’s an easy fix: Simply add in a restriction to the ASI’s goal system that says, “Don’t establish world peace by obliterating all life on the planet.” Now what would it do? Well, how else might a literal-minded agent bring about world peace? Maybe it would place every human being in suspended animation, or lobotomize us all, or use invasive mind-control technologies to control our behaviors.
Again, there’s an easy fix: Simply add in more restrictions to the ASI’s goal system. The point of this exercise, however, is that by using our merely human-level capacities, many of us can poke holes in just about any proposed set of restrictions, each time resulting in more and more restrictions having to be added. And we can keep this going indefinitely, with no end in sight.
Hence, given the seeming interminability of this exercise, the disheartening question arises: How can we ever be sure that we’ve come up with a complete, exhaustive list of goals and restrictions that guarantee the ASI won’t inadvertently do something that destroys us and the environment? The ASI thinks a million times faster than us. It could quickly gain access to, and control over, the economy, laboratory equipment and military technologies. And for any final goal that we give it, the ASI will automatically come to value self-preservation as a crucial instrumental subgoal.
Yet self-preservation isn’t the only such subgoal; resource acquisition is another. To do stuff, to make things happen, one needs resources — and usually, the more resources one has, the better. The problem is that without giving the ASI all the right restrictions, there are seemingly endless ways it might acquire resources that would cause us, or our fellow creatures, harm. Program it to cure cancer: It immediately converts the entire planet into cancer research labs. Program it to solve the Riemann hypothesis: It immediately converts the entire planet into a giant computer. Program it to maximize the number of paperclips in the universe (an intentionally silly example): It immediately converts everything it can into paperclips, launches spaceships, builds factories on other planets — and perhaps, in the process, if there are other life forms in the universe, destroys those creatures, too.
It cannot be overemphasized: an ASI would be an extremely powerful technology. And power equals danger. Although Elon Musk is very often wrong, he was right when he tweeted that advanced artificial intelligence could be “more dangerous than nukes.” The dangers posed by this technology, though, would not be limited to humanity; they would imperil the whole environment.
This is why we need an Extinction Rebellion-like movement focused on ASI, right now: in the streets, lobbying the government, sounding the alarm. That’s why I am in the process of launching the Campaign Against Advanced AI, which will strive to educate the public about the immense risks of ASI and convince our political leaders that they need to take this threat, alongside climate change, very seriously.
A movement of this sort could embrace one of two strategies. A “weak” strategy would be to convince governments — all governments around the world — to impose strict regulations on research projects working to create AGI. Companies like 2AI should not be permitted to take an insouciant attitude toward a potentially transformative technology like ASI.
A “strong” strategy would aim to halt all ongoing research directed at creating AGI. In his 2000 article “Why the Future Doesn’t Need Us,” Bill Joy, cofounder of Sun Microsystems, argued that some domains of scientific knowledge are simply too dangerous for us to explore. Hence, he contended, we should impose moratoriums on these fields, doing everything we can to prevent the relevant knowledge from being obtained. Not all knowledge is good. Some knowledge poses “information hazards” — and once the knowledge genie is out of the bottle, it cannot be put back in.
Although I am most sympathetic to the strong strategy, I am not committed to it. More than anything, it should be underlined that almost no sustained, systematic research has been conducted on how best to prevent certain technologies from being developed. One goal of the Campaign Against Advanced AI would be to fund such research, to figure out responsible, ethical means of preventing an ASI catastrophe by putting the brakes on current research. We must make sure that superintelligent algorithms are environmentally safe.
If experts are correct, an ASI could make its debut in our lifetimes, or the lifetimes of our children. But even if ASI is far away, or even if it turns out to be impossible to create, we cannot know that for sure today, and hence the risk posed by ASI may still be enormous, perhaps comparable to or exceeding the risks of climate change (which are huge). This is why we need to rebel — not later, but now.