
Apocalypse in the Wings—Hint: It’s Not Zombies, Part 2

April 17, 2017

Two rants ago I promised to return to the subject of the dangers of artificial intelligence. I made the assertion, likely outrageous to some (and dissonant with my inclinations as a technophile), that the threat it will pose is much more proximate than global warming. Years ago, based on the changes in technology we’d seen up until that point, I extrapolated what seemed to be its inevitable evolution, born of the exponential advancement in both computer hardware and software. Prior treatments of runaway technology in print and cinema are legion, and purveying them as entertainment, just like natural disasters and post-apocalyptic visions, frightens and desensitizes us to them at the same time. It’s a “Whoa—what if …? Nah!” reaction. The sad truth is that, zombies excluded, the scenarios all have terrifying plausibility, and they are so overwhelming that it’s easier and more palatable to shrug and say, “Good thing it’s not for real.” Unfortunately, while EMPs, asteroid collisions, and global pandemics are few and far between, wars and malevolent AI are neither. The former we’re familiar with because, large or small, they impact every generation. As our martial technologies progress they become more deadly, but they are the devil we know. Untethered artificial intelligence is something we write about but have yet to experience, because we’re approaching a nexus never before seen, what Ray Kurzweil calls the Singularity. In his book of the same name, written 12 years ago, he painstakingly outlines multiple technologies, each advancing geometrically, that will converge in the not-too-distant future and provide unimaginable benefits in terms of our health, energy availability, longevity, and much more (you can listen to him here). Artificial intelligence will augment human intelligence a thousand-, or perhaps a million-fold.
He sees it not as a competing force but as a symbiosis between the human and the artificial, one that will be realized when we reach the point of Singularity sometime in the 2040s. The 20/20 hindsight of the last 12 years perhaps exposes his timeline as a bit aggressive, but it is more on point than my prior estimates. I, like others, experts included, suffer from the same shortcoming of thinking linearly, not geometrically, in assessing change. But the truth is that, as Mr. Kurzweil documents, we are at the knee of the technological growth curve, where everything before appears to have progressed linearly and will soon take off like a Saturn rocket. If you closely examine the last ten years, this is already becoming evident: powerful computers in every pocket, incredible advances in voice recognition, and medicine on the brink of a major paradigm shift, to name a few.
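The gap between linear intuition and geometric reality is easy to show with a toy calculation. The numbers below are purely illustrative (a hypothetical capability that doubles every two years, roughly a Moore's-law pace), not figures from Kurzweil or this post:

```python
# Illustrative only: why straight-line intuition underestimates exponential change.
# Assumes a hypothetical capability that doubles every 2 years; the linear
# forecast is pegged to match the pace of the first doubling.

def linear_forecast(start, per_year, years):
    """Naive straight-line extrapolation."""
    return start + per_year * years

def geometric_forecast(start, doubling_years, years):
    """Exponential extrapolation with a fixed doubling time."""
    return start * 2 ** (years / doubling_years)

start = 1.0
for years in (10, 20, 30):
    lin = linear_forecast(start, per_year=0.5, years=years)
    geo = geometric_forecast(start, doubling_years=2, years=years)
    print(f"{years:>2} yrs: linear ~{lin:.0f}x, geometric ~{geo:.0f}x")
```

After 20 years the linear guess says 11x while the geometric curve says 1,024x; by 30 years the two answers differ by three orders of magnitude, which is the whole point of the knee of the curve.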

So this is a good thing, right? Yes, and no. With the unfathomable good comes an equally disturbing downside. Look at the bellwether of the Internet to see this more clearly: the old “World Wide Web” changed our lives forever, in ways big and small. Information-gathering, interaction, and commerce are instantaneous and global. With them come the thieves, the scammers, and the predators. With each new threat comes another solution, the good a half-step ahead of the bad. We talk about the risks of cyberattacks on the financial sector and the infrastructure, but thus far we have managed to limit these to skirmishes rather than world war, at increasing cost and with the recognition of increasing risk with every passing year.

Now let’s move the above reality into the arena of AI. As computers continue to grow in computational power at an accelerating rate and the software that controls them follows suit, program complexity also soars. Programming languages layer one atop another, increasing ease of use and automating more of the work of programming itself. In the near future, programs will write and improve programs. This is not conjecture but inevitability. At some point, and I argue this is within a few decades, machine intellect and speed will be so far beyond human reach that only machine intelligence will be able to write and debug code. Along with this, genetic manipulation, bio- and nanotechnology, and robotics will improve in parallel, becoming faster and more nimble in executing the changes in their own evolution. Recognizing the risk, we will of course build multiple layers of safeguards. History has shown, however, that there will always be bad people hijacking technology for their own foul purposes, and again we will witness the war between good and evil. Unfortunately, the risks and stakes in this new war will be unlike anything the human race has previously encountered. When the atomic bomb was introduced, we entered an era where, for the first time, mass destruction became possible. To date, we’ve been able to narrowly avert that catastrophe. With malevolent, runaway code combined with high-tech machines, we may not be in a position to decide.

The promise and risk of the next few decades is enormous and unprecedented. May God give us the wisdom to prevail.

***

Shameless plug alert: I’ve just released my second novel, The Nidus, on Amazon as a Kindle ebook, and it pertains to this very topic. It highlights near-future events as they might happen. I know it’s by no means the first treatment of this subject, only the first to get it right (yes, I am privy to future events, the stock market unfortunately excluded).
