The technological Singularity is the hypothesized creation of smarter-than-human entities who rapidly accelerate technological progress beyond the capability of human beings to participate.
Vernor Vinge originally coined the term "singularity" in observing that, just as our model of physics breaks down when it tries to model the singularity at the center of a black hole, our model of the world breaks down when it tries to model a future that contains entities smarter than human.
Approaching The Singularity
Surfing the web for The Singularity gives me a gnarly headache. No doubt a reaction to asking my brain to find ways of replacing itself.
The Singularity is both intriguing and frightening. My approach is to treat it like a dessert tray: the further away it is, the less infatuated I am with it.
Futurists have predicted that, through the exponential growth of computer processing power, biotechnology, or some other means, The Singularity could arrive as early as 2050.
Here are a few reasons why The Singularity might arrive later than expected.
Software is Hard
I agree that by mid-century, hardware or bioware could exist that is capable of housing a superintelligent entity. What we will not have by that time is the software to utilize it.
Put simply, until my computer is smarter than a trash can (a real-life receptacle never asks, "Do you really want to throw that away?"), I'm not worried about programmers developing a superintelligence.
It has been suggested that before AI experts understand the brain well enough to replicate it, communities of "dumb" computers could be linked together to program themselves into a superintelligence.
To this scenario, I submit my last family reunion as one example where adding more brains in the room did not increase the overall intelligence of the group.
Good Warning, Dave?
Futurists and science fiction writers are often optimistic when putting a date on future technology. Example: Stanley Kubrick and Arthur C. Clarke's 2001: A Space Odyssey (1968).
Better Me, Then You
What I cannot push past my comfort date is the enhanced human brain. It is conceivable that genetics, drugs, and/or brain-machine interfaces could help create a superintelligent brain by the year 2050. Would you mind terribly if I go first?
By the Way
The path to the pinnacle of humanity is a slippery slope. Is society ready for the precursive power presented by the building blocks of the Singularity--computers, nanotechnology, biotechnology, artificial intelligence and information?
Everywhere I look I see vanity, greed, hunger and waste (I'm writing this from a coffee house at the airport). Undesirable human traits could fuel global catastrophes in the technologically advanced civilization we are becoming.
Long before the arrival of the Singularity, we will need to change our ways. A benign, compassionate and sharing civilization has the best chance to survive the flood of information and technology that is headed our way. --ffa
Isaac Asimov's Three Laws of Robotics
Isaac Asimov's Three Laws of Robotics are one of the earliest examples of proposed safety measures for AI. The laws are intended to prevent artificially intelligent robots from harming humans.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Later, Asimov added the Zeroth Law: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm"; the rest of the laws are modified sequentially to acknowledge this.
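The key structural idea in the laws is strict priority: each law yields to every law above it. As an illustration only, here is a minimal Python sketch of that ordering; the boolean fields (`harms_humanity`, `harms_human`, and so on) are hypothetical stand-ins for judgments no real robot can actually make.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Hypothetical description of a proposed action."""
    harms_humanity: bool = False   # violates the Zeroth Law
    harms_human: bool = False      # violates the First Law
    is_human_order: bool = False   # relevant to the Second Law

def permitted(action: Action) -> bool:
    """An action is permitted only if no higher-priority law forbids it."""
    if action.harms_humanity:  # Zeroth Law outranks all others
        return False
    if action.harms_human:     # First Law outranks the Second and Third
        return False
    return True

def must_obey(order: Action) -> bool:
    """Second Law: obey a human order unless a higher law forbids it."""
    return order.is_human_order and permitted(order)

# An order to harm a human is refused; a harmless order is obeyed.
print(must_obey(Action(is_human_order=True, harms_human=True)))  # False
print(must_obey(Action(is_human_order=True)))                    # True
```

The point of the sketch is the cascade itself: adding the Zeroth Law meant inserting a new check above the existing ones, which is exactly the "modified sequentially" adjustment Asimov describes.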