Will artificial general intelligence end human life?
Apparently this is something smart people are debating.
I have listened to several podcasts recently (possibly because I follow EconTalk closely) that warn of the catastrophic risks of artificial general intelligence (AGI). The speakers acknowledge that Bard and ChatGPT are large language models and not the AGI they fear, but they caution that we are not far away.
These critics think there is a non-trivial chance AGI becomes smarter than us and decides that ending humanity is in its best interests. Having reached this conclusion, it is so smart that it finds a way to manipulate us into doing it ourselves, or into helping it inadvertently.
Is this science fiction, or a plausible scenario? If it is real, can it be averted, or is it inevitable? Some argue that even with strong laws against AGI, other nations will develop it and/or the laws cannot be enforced, so the die has already been cast.
First: the two extreme positions both require the same response. If something is completely implausible, or if it is totally inevitable, there is, as we say in medicine, nothing to do. That's hard for humans to accept, but only because they haven't understood the word 'inevitable'.
Second: can you trust the critics? Many have personal financial motivations to regulate AGI because it threatens their business model. I don't trust Facebook's CEO to tell me TikTok is bad. Many critics may be opportunists who want to shut down ChatGPT to gain a competitive advantage.
Third: the core argument is that we don't know how powerful AGI will be, nor what its behavior will be, just as you would never have predicted human behavior from what nature was selecting for. This argument interests me.
Humans evolved in Africa over millions of years alongside other hominids and apes, the argument goes. Ultimately, we were selected only for genetic fitness, i.e., making more copies of ourselves. Intelligence was selected insofar as it helped you hunt, survive, and make copies. Nature never selected for people to have the ambition to build rockets and go to the moon. Yet, ultimately, we did. Ergo, if you select a computer for being smarter at some cognitive tasks, you don't know what else it might wish to do.
Is this argument sound?
I am not a computer expert, but I know something about the body and survival. While it is true that nature selects only for reproduction, it is not true that hunting and evading predators are the only aspects of intelligence that lead to reproductive fitness.
Yes, much of the body keeps itself fit mindlessly (the immune system, for example), but intelligence is useful in myriad ways to improve longevity and health. The environment is not a series of a few tests but an extremely complex set of interlocking tasks in a rich surrounding, where other parts are also constantly evolving.
Can computer-based selection over years really rival biologic selection over millions of years, in an environment that is itself evolving?
This is to say: I am skeptical that AGI will truly rival human intelligence across the breadth of tasks we are capable of.
Next, the entire argument raises an existential question: why do human beings want things besides maximizing children? Is this a byproduct of fitness?
Intelligence is unique in that it can be hijacked against the host. Fundamentalists can kill themselves over an idea. Priests will become celibate over an idea. Intelligence arises insofar as it helps us survive, but some intelligence, and some goals, might be harmful rather than advantageous. They are not selected for but tolerated.
All the wonderful human desires, like going to the moon, might be due to suboptimal fitness pressures, not optimal ones. In that case, there is no assurance AGI will share them.
Many grand human quests are undertaken for legacy. Yet AGI may not die, nor may it wish to be remembered favorably in history, as so many dumb and hyper-intelligent people alike do. After all, when you are dead it doesn't matter (beyond your offspring's future, and they can pretend not to know you). If AGI does not care about its legacy or reputation, perhaps its desires too will differ from ours.
A perennial human weakness is to imagine the world as we would experience it ourselves. We have difficulty conceiving of an intelligence apart from what we think we would do. This piece suffers from this as well!
A dispassionate entity a million times smarter than us may not kill us but kill itself, prone to depression after realizing that ultimately everything is futile. This would be bad from a selection vantage, but the irony is that the intermediary steps of selection may favor such a mind even though the final culmination is totally maladaptive.
Of course, these two arguments are in tension: the first is that we may not select a mind like ours, and the second is that we may select something so wise it is smart enough to be bored already. Yet on further thought there is no contradiction. Both ideas are plausible, even at once!
What do you do in a situation like this? The risk is not quantifiable, not even estimable. Should we police AGI? The problem is that it is not clear we even know what to police. It is not clear we know what to look for, nor when to pull the plug. How could you police it? Soon more people could be working on the software than there are people to check up on them.
I am an evidence-based progressive. I believe regulations can help, but they have to be evidence-based. That's why I hated so much COVID policy: there was no evidence. Here we face an evidence-free zone. In such a setting, I am not persuaded any action will have a net favorable outcome. I am left, as we often are in medicine, in a stalemate, or as we say: nothing to do (ntd).
PS: I am sure many of you will disagree. Leave a comment below and explain your position, but at least acknowledge that you read this post.