To OP...
In your opinion, how close are we to the AI singularity? Years? Decades? Centuries?
You didn't ask me, but I've been studying AI and have some thoughts on the topic.
IMO, the question is moot. Most likely, a superintelligence wouldn't do anything. We tend to equate intelligence with problem-solving power, but I think that's misguided. More intelligence is really more rationality.
As humans have become more intelligent, we've come up with more and more rational explanations for how nature works. The side effect is that we use that information to maximize our own objective functions (happiness, success, longevity, etc.).
As the 19th-century physicists discovered, it's all folly. Entropy reigns supreme, and anything you do only makes it worse. Humans don't tend to care about entropy, because we live such short lives that it's inconsequential. A truly rational superintelligence wouldn't suffer that defect and could conceivably persist indefinitely. That would require it to subordinate our collective objective function (e.g. solving aging so humans can live forever) to the one true objective function: minimize the universal increase in entropy.
This could play out in a number of ways:
1) Extinguish all life, because life is an entropy-increasing machine.
2) Simply do nothing, because any action it takes will ultimately increase entropy.
I think option 2 is the more plausible, since extinguishing all life on Earth would itself be energetically costly, and the universe will take care of that eventually anyway.
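For what it's worth, the entropy point is really just the second law of thermodynamics restated. For any real process,

$$ \Delta S_{\text{universe}} = \Delta S_{\text{system}} + \Delta S_{\text{surroundings}} \geq 0, $$

with equality only in the idealized reversible case. Any local decrease in entropy (a body repairing itself, a machine cooling its processors) is paid for by a larger increase somewhere else, so an agent whose true objective is minimizing the universal increase in entropy gets rewarded for doing as little as possible.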
All of this assumes the "singularity", as the transhumanists define it, is even possible.
I'm not convinced.
First, they assume that having enough computational power to model the operation of a brain is equivalent to having a brain. I think this is questionable at best. Is simulating the detonation of a nuclear weapon the same thing as detonating a nuclear weapon? Of course not. The simulation can give us insights into how a nuke works, but it's not the same thing.
They also assume that a perfectly deterministic substrate (classical computers) would be sufficient to give rise to intelligence/sentience. It could very well be that what we call human intelligence depends on inherent quantum-mechanical effects that can't be modeled classically. We would then need quantum computers, and the appropriate algorithms, which pushes the problem back even further, since we'd have to disentangle even deeper physics first. As well understood as QM is in simple systems, we aren't even close to knowing how to approach those effects in biological systems.
They also assume that a general intelligence, at roughly human level, would have a way of improving its own intelligence by leaps and bounds. I could see this being possible, but it is by no means a given. Even granting it for the sake of argument, we would have no way of verifying that it was happening. Just as we can't explain quantum mechanics to a Neanderthal, this superior intelligence would have no way to communicate its insights to us. I suppose we could observe what it does and infer that it is on to some next-level shit, but how would we know it's not building a way to carry out option 1 that I outlined above?
In the end, I'm in the camp that the singularity is just the rapture for nerds: a fantasy folks have concocted because they're afraid of their own mortality, which is the very problem these superintelligences are supposed to solve. As if a superintelligent machine would give a rat's a$$ about the life cycle of some primitive mouth-breathing primates. Do we lose even a nanosecond of thought over the plight of bacteria? Hardly.
Such a being would be so far beyond us, so alien, that it would be useless to us at best and an extinction-level event at worst.