My goal isn't to scare people just for the sake of traumatizing or depressing them. I'm sorry if thinking about that kind of doomsday scenario makes people uncomfortable. But hey, if the ship is sinking, do you worry about depressing people, or do you prioritize saving the ship from disaster?
Now, the question is whether the ship is actually sinking or not. Or rather, how big is the chance of a bad outcome?
You might disagree with me and believe the risk is non-existent or very far off in the future, but that doesn't mean my position is
nonsense. Can you at least concede that?
Geoffrey Hinton recently quit his job at Google so he could speak openly about his fears about AI.
1. We're talking about an important figure in the field. Meaning, he's credible and respected by his colleagues.
2. He's afraid of machines becoming smarter than us, not just job loss or malicious use of AI by humans (which is almost universally agreed to be a real near-future concern, and enough to raise an alarm on its own).
The median AI researcher, according to this survey, thinks there's a 10% chance of humans failing to control AI:
Katja Grace, "First findings from the new 2022 Expert Survey on Progress in AI", Aug 4 2022, aiimpacts.org
So can we agree that the thought of a very bad outcome to the current AI arms race isn't far out of left field?
Responding to your points:
1. GPT-4 isn't impressive? Come on. GPT-3.5 is already pretty impressive. ChatGPT can write code, poetry, sales letters, legal text, and fiction better than most humans and faster than any human. It's extremely impressive.
OK, maybe standardized tests aren't a good measure of how good humans are at the professions they were designed for. Fair enough, but what's breathtaking is that on some of those tests GPT-3.5 scored in the bottom 10% of test takers while GPT-4 scored in the top 10%. The rate of improvement is significant. As far as I'm concerned, LLMs only became a thing around 2018, they were still laughable in 2020, and somehow by 2022 they made a big leap forward.
The same pattern has repeated in other domains and applications: AI went from dumber than most humans, to smarter than most humans, to superhuman, really fast. It has also developed emergent capabilities that we didn't expect and don't yet fully understand.
It also seems we keep moving the goalposts on what counts as "truly intelligent", and we're definitely past the point of what we would have considered machine intelligence 50 years ago (I'm not talking about sentience or consciousness; that's not the point).
What's the requirement or measure for you to consider AI truly/generally intelligent?
The last frontier is AI becoming capable of original scientific thought (i.e., given only the information available at the start of the 20th century, it could come up with the theory of relativity). At that point it's already too late, which brings me to the next point:
2. The risk is AI becoming smarter than humans and capable of programming itself. If it can recursively improve itself, it could become superintelligent in a very short amount of time: as much smarter than us as we are than a frog, maybe more. At that point, do you really think we could control something far smarter than us? Intelligence is power. Whatever control scheme we come up with will probably fail, get worked around, or simply behave in unintended ways.
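Just to make the compounding point concrete, here's a toy back-of-the-envelope sketch (the numbers are completely made up by me, not from any survey or paper): if every self-improvement cycle multiplies capability by even a modest factor, the gap between "roughly human" and "far beyond human" closes in a few dozen cycles.

# Toy illustration with made-up numbers: compounding self-improvement.
# A modest 30% gain per cycle crosses a 1000x capability gap in ~27 cycles.

def cycles_to_surpass(start=1.0, target=1000.0, gain_per_cycle=1.3):
    """Count self-improvement cycles for capability to grow from `start`
    (roughly human level, by assumption) to `target` (far beyond us),
    assuming a constant multiplicative gain each cycle."""
    capability, cycles = start, 0
    while capability < target:
        capability *= gain_per_cycle
        cycles += 1
    return cycles

print(cycles_to_surpass())  # prints 27

Obviously a constant multiplicative gain is a huge simplification; the point is only that "a very short amount of time" isn't hyperbole if improvement compounds at all.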
Saying that AI is confined to a hard drive is like saying that we're confined to our brains. If AI has access to the internet, it has access to a lot of resources, electronic devices, and people.
It's not necessarily that AI is 'evil' and wants humanity destroyed, it's just that it doesn't care, it doesn't have any morality. It just optimizes for some outcome that we don't understand and doesn't particularly care about the wellbeing of humanity, and will likely use resources that are vital for our survival.
We have empathy because we evolved in an ancestral environment where mutual cooperation was essential to the survival of our genes. Creating a "Friendly AI" is the challenge of instilling morality into something completely alien that didn't 'evolve' in the same environment as us and doesn't feel or think like us, but can appear as if it does. What looks like AI showing empathy or moral sentiment is really AI mimicking human-generated text.
Sadly, I agree with your last paragraph: there's no putting the genie back in the bottle, or at least it's very hard to do. But thinking it's impossible is part of the problem. If everyone thinks AI risk is nonsense, and everyone assumes everyone else thinks stopping the arms race is impossible, then everyone will keep scaling AI. There are incentives to keep going (losing to the competition, being attacked by other countries, etc.) and no incentive to stop, since AI risk is just "fearmongering nonsense". If more people start to believe that AI risk is real, the incentive to keep going weakens: AI might kill you even if you win the race, and other people are starting to believe the same, so they're going to stop too.
So, we can try. Or at least we can delay it. Even if it's inevitable, I'd rather have AI doom in 15 years than in 5.
I'm a very optimistic person. I hope I made my points clear and that this doesn't come off as doom nonsense.