Daniel Lemire has a short entry on the technological singularity, or in other words the idea that humanity may at some point develop technology that triggers a phase of progress so rapid and so far beyond our comprehension that we will be unable to predict even the relatively near future.
Consider a hundred years ago. Progress was slow enough that, while you couldn't predict most events accurately, you could fairly safely assume that things would be for the most part the same 10-20 years later.
Today, can we? The internet rose to prominence in just a few years. And while cell phones (like the internet) have been around for decades, they too have had transformative effects on society in just a few short years.
Consider a hundred years into the future: can we still assume we'll know for the most part what society will look like 2-3 years ahead? One year?
I find the concept fascinating in part because we know so little about it, or whether it is even a real possibility.
Even if the concept is valid, is the singularity a static point? That is, will we reach a point where humanity is transcended by technology? Or will humanity advance in capabilities sufficiently that the singularity is some sort of ever receding horizon beyond which we can't make predictions with any degree of accuracy?
Daniel mentions that he thinks AI is currently out of reach, and it's a view I share. A lot of AI technology, such as neural nets, is useful and will continue to improve, but we are still far from understanding enough to build something intelligent enough to call a real AI.
But he also raises the question of whether we want to create something more intelligent than ourselves.
To me that question is pointless. The real question is "will we?", and the answer is "yes": as soon as we have the ability, anyone not making use of it will be left behind, whether in terms of defence or of competitive ability in a marketplace.
Another important aspect of the idea of a singularity is that we don't NEED to get to the point where we can directly create something more intelligent than ourselves. We only need to get to the point where we can create a system with self-improving intelligence.
If we manage to create ANY form of real intelligence in software (whatever real intelligence is), we already know that genetic programming has the potential to evolve that software automatically. If we couple that intelligence with a good enough system of increasingly complex competitive pressure, we may at that stage already have created the singularity.
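To make the evolutionary mechanism concrete, here is a toy sketch of the kind of loop genetic programming relies on: a population of candidates improves under selection pressure alone, with no directed engineering. Everything here is illustrative; the fitness function (a simple "one-max" bit count), population size, and mutation rate are arbitrary choices for the sketch, not anything from Lemire's post.

```python
import random

POP_SIZE = 50
GENOME_LEN = 32

def fitness(individual):
    # "One-max" toy problem: fitness is simply the number of 1-bits.
    return sum(individual)

def mutate(individual, rate=0.02):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in individual]

def evolve(generations=200):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(generations):
        # Selection pressure: keep the fitter half unchanged (elitism),
        # then refill the population with mutated copies of survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # climbs toward GENOME_LEN as generations accumulate
```

The point of the sketch is that the improvement comes entirely from the selection-plus-variation loop; nobody designs the winning bit string. Real genetic programming evolves program trees rather than bit strings, but the dynamic is the same.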
Once that happens, the exponential improvement may proceed by itself, in the form of massively accelerated evolution rather than directed engineering.
Is it desirable or not? We don't know. There is no way of knowing whether the technological advances that result, when and if the singularity happens, will be benign.