A little over ten years ago, I bought myself a hand-held chess computer. This was before the days of smartphones, when now you can just download a free app. So I spent $60.00 on this one-hit wonder that came with a stylus. The purpose was twofold: one, I like chess and didn’t have anyone to play with other than online strangers who’d yell at me that I was wasting their time because I’d ranked myself incorrectly; and two, my son was about to be born, and I imagined whiling away my time on this hand-held device while he napped in my arms.
The only time I could beat the machine was when I played it on the dumb-dumb setting. Or something like that. If I went too high in level, I couldn’t match it. And at the top Grand Master level, I suffered an early thumping every time. But I don’t feel so bad.
In 1997, for the first time, a machine beat the best human chess player: IBM’s Deep Blue defeated World Champion Garry Kasparov. Since that time, the machine has only improved. Today, there is no human player that can match it. Short of total war and annihilation, this is how it’s going to be for the rest of the future. No biological entity will ever match it or catch up; biological evolution is much too slow. But the machine…it’s becoming terrifyingly fast and intelligent.
The machine isn’t just beating chess, it’s beaten Space Invaders. At this point, Google’s DeepMind has managed to become the best Space Invaders player, and no human will ever be as good. Sure, it’s just Space Invaders, but there are more important feats to come.
I’m a firm believer that the American Civil War did not have to happen to end slavery. I am a firm believer that the second industrial revolution would have brought an end to forced human labor even had the war not happened. It’s much easier to maintain a tractor or two with related equipment than a few families of people. And the machine’s productive value is much greater. I know, hindsight is twenty-twenty, but I’m still pretty certain the machine would have won out over human slaves.
Today we have replaced much of what was once human labor with machines. Two hundred years ago, ninety percent of people worked in agriculture. Today, it’s only two percent. And yet we have more food (at least in the West) than we know what to do with.
One hundred years ago, you were simply a doctor. Today, you have to specialize, because it’s impossible to know everything about a human being. You have doctors specializing in just the colon, and there are specializations within specializations. Even in the automotive world, there are shops that deal only with tires, shops that deal with oil changes, and bigger shops that deal with it all; and to do that, they employ specialists for every aspect of the vehicle. In engineering, there are whole businesses whose sole purpose is to design a better axle.
But what if we managed to create a machine that, instead of working piecemeal, could assimilate all the knowledge on a particular subject? In medicine, what if we could build an intelligence that knew everything about the human being? What if we could run a “systems check,” like our computers do, input the results of blood tests, spinal fluids, colon, heart, liver, and bladder, and let the machine crunch the numbers to tell us exactly what was going on? What if it could predict heart attacks, lung cancer, the next seizure? We’re not far from that. In fact, we’re already doing it by identifying genes responsible for some cancers. But what if we could go further?
A few years ago, researchers began studying blood-cell-sized devices that could be injected into people to do everything from increasing oxygen in the blood to monitoring the organs and making diagnoses, or corrections, when problems present themselves. And this could also apply to my personal vested interest: hunting down cancer cells.
Nanobots for cancer. The thought intrigues me. Imagine if, instead of being diagnosed with colon cancer four years ago via a colonoscopy, then going under the knife a week later, then undergoing six months of preventive chemotherapy, I’d had cell-sized bots patrolling my bloodstream, looking for mutations. And, without me even being aware, the bots zapped them and went on their way looking for others. Your body already does this: cells are programmed to expire on their own. Sometimes they don’t. This is how cancers can form.
What if, in the near future, a patient comes to a “clinic”? She is young and in good health and wants to keep it that way. In fact, this is a hypothetical future where everyone does this as commonly as entering high school. She sits in a comfortable chair, is injected with a solution containing General Non-Specific Information-Gathering Nanobots, and is asked to return in a week’s time. A week later, she returns, a blood draw is taken, and the sample is inserted into another machine. That machine makes some calculations, and she is told exactly how her body and organs are functioning, her risk of various problems as she ages, and every error or defect in her mapped genome and DNA. She can then make decisions on what to do next.
Let’s say said patient, like myself, is diagnosed with Lynch Syndrome, a syndrome in which the DNA doesn’t always repair itself when a defect occurs, and cancer, especially colon cancer, can form. In our future world, at this point the patient decides to be injected with Specialized Nanobots that hunt for colon cancer cells, and she has to return to the office every ten years for a new upgrade/injection. And with that, she never has to worry about colon cancer.
What does all this have to do with freedom? Well, it should be obvious by now: saving you from time-intensive labor and preventing debilitating diseases would free up much of your time to engage in…other things.
That’s where a problem presents itself. It sounds great on the surface but…
How will this affect jobs? What about surgeons being replaced by cancer-zapping nanobots? What about family doctors being put out of business because they’re no longer needed for yearly physicals or illness diagnoses? What about the pharmaceutical companies themselves? What about engineers being replaced by a perfect machine that can spot minor defects a person cannot? We’re already seeing minimum-wage fast-food jobs replace workers with machines. Could people employed in advanced areas of knowledge and learning be at risk from the creation of artificial intelligence? Could the human being end up like the people in the movie WALL-E, doing nothing but lounging on a mobile chair all day because the machine provides everything they need? Do we want this?
Imagine that with this technology, people get to live a very long time. Cells get repaired without the person even being aware it’s going on. They don’t age much anymore. Short of an accident, people now live to be about three hundred years old (I’m just making up this number as an example). This opens up so much more, and here’s at least one area where I see jobs developing: space travel now becomes doable, because people live long enough for the trip there and back. And colonization can begin.
That’s some pretty cool other things.
As optimistic as I am about all this, there is a dangerous side to continuing to build an artificial intelligence, especially one that can write its own code and evolve.
The same algorithm that let DeepMind beat Space Invaders has been used to beat several other games from the 1980s, like Breakout. The same algorithm! In other words, it is learning, even though each game’s rules are different.
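For the technically curious, the heart of that kind of algorithm is reinforcement learning: the program sees only states, actions, and a score, and learns which actions raise the score. Here is a minimal tabular sketch in Python (DeepMind paired an update rule like this with a deep neural network; the corridor “game” and every parameter below are invented for illustration):

```python
import random

def q_learning(env_reset, env_step, actions, episodes=200,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning: one update rule, any game that exposes
    states, actions, and rewards."""
    q = {}  # (state, action) -> estimated long-term value
    for _ in range(episodes):
        state, done, steps = env_reset(), False, 0
        while not done and steps < 100:
            if random.random() < epsilon:       # explore occasionally
                action = random.choice(actions)
            else:                               # exploit, random tie-break
                vals = [q.get((state, a), 0.0) for a in actions]
                best = max(vals)
                action = random.choice(
                    [a for a, v in zip(actions, vals) if v == best])
            nxt, reward, done = env_step(state, action)
            target = reward + (0.0 if done else
                               gamma * max(q.get((nxt, a), 0.0)
                                           for a in actions))
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (target - old)
            state, steps = nxt, steps + 1
    return q

# A toy "game": walk right along a corridor to reach the goal at cell 4.
def corridor_reset():
    return 0

def corridor_step(state, action):  # action: -1 = left, +1 = right
    s = max(0, state + action)
    return s, (1.0 if s == 4 else 0.0), s == 4

random.seed(0)  # for reproducibility
q = q_learning(corridor_reset, corridor_step, actions=[-1, +1])
```

Point the same `q_learning` function at a different pair of reset/step functions and it learns a different game, with no change to the learning code itself. That reuse is the sense in which “the same algorithm” can master many titles.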
It’s not just Atari games. An AI flying in a flight simulator has beaten human pilots in simulated aerial combat. This is where it gets scary and more real.
A machine is now the leading champion in chess, several Atari games and simulated aerial combat. And a human being will never be better. One day the machine will be the best at just about everything else. The machine will be the best oncologist, the best auto-technician, the best guitarist, the best race car driver, the best chef and so forth. And we’ll be able to combine these skills like in today’s smart phones with multiple apps. A machine will be master of, well, maybe everything. What will be our place in this new world?
A year ago, I upgraded to the, then, top-of-the-line smartphone. After about a month of use, I noticed something very interesting. Monday through Friday, a half hour before leaving for work, the phone would display something like, “Estimated time to work is 17 minutes.” Then, about a half hour before leaving from work, the display read, “Estimated time to home is 18 minutes.” This astounded me. I never told my phone about my schedule, and no one told me it would do this. But an algorithm running within was tracking my daily movements via GPS and correctly guessed that this is exactly what I was doing, going to the same address every morning and returning to the same address every afternoon. I wonder: what else is my phone keeping track of?
My personal machine “sees” me. It sees me when I’m about to go to work and come home. It knows my likes and makes suggestions. And it sometimes just starts playing music on its own when I get into my car, because I synced it up when I got it. We are on our way to creating something even more complicated, more helpful, and yet potentially more intrusive.
Human beings have been populating their world with gods from time immemorial. But this time, with artificial intelligence, we’re about to create the first one that actually does something, one that will truly be greater in intelligence than any human being. This god can be evil and it can be good. And it might not even know what those terms mean, yet end up doing what it thinks is right to the detriment of human beings. We’re talking about the eventual coming of self-replicating, self-reprogramming, self-improving machines that will outmatch anything the human brain can come up with. How are we ever going to account, in our programming, for every possibility our safety measures need to cover?
What happens when we program an AI for one purpose, but it ends up capable of doing something else? This is the realm of genetic programming and evolvable machines, where we’re not programming the machine with the way to solve a problem; we’re giving it the problem and letting it write its own software to get to a solution.
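A toy sketch of that idea in Python: instead of writing the formula f(x) = x·x + 1 ourselves, we hand the machine only a scoring function and let it evolve an expression that fits. Everything here (the operator set, the mutation rate, the population size, the target function) is invented for illustration; real genetic-programming systems are far more elaborate.

```python
import random

OPS = ['+', '*']  # the only building blocks the machine may use

def random_expr(depth=3):
    """Generate a random candidate 'program' as a small expression tree."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', 1, 2])   # a leaf: variable or constant
    return (random.choice(OPS), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == 'x':
        return x
    if isinstance(expr, int):
        return expr
    op, a, b = expr
    va, vb = evaluate(a, x), evaluate(b, x)
    return va + vb if op == '+' else va * vb

def fitness(expr):
    """Total error against the hidden target f(x) = x*x + 1; lower is better."""
    return sum(abs(evaluate(expr, x) - (x * x + 1)) for x in range(-5, 6))

def mutate(expr):
    """Rewrite a random piece of the program."""
    if random.random() < 0.3:
        return random_expr(2)
    if isinstance(expr, tuple):
        op, a, b = expr
        if random.random() < 0.5:
            return (op, mutate(a), b)
        return (op, a, mutate(b))
    return expr

def evolve(generations=100, pop_size=60):
    random.seed(1)  # for reproducibility
    pop = [random_expr() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                 # selection: fittest first
        survivors = pop[:pop_size // 3]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=fitness)

best = evolve()
```

Selection keeps the fittest candidate programs, mutation rewrites random pieces of them, and over generations the population drifts toward a program we never wrote by hand. Nothing in the loop knows what the target formula is; it only knows the score.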
It is important now to start the discussion on how we’re going to handle this. I don’t believe we can stop the development. No law will do. Even if the United States banned the development of AI, it doesn’t apply to the rest of the world. And like the drug war, even a ban wouldn’t stop the development from going underground. AI is going to happen.
The human race is working toward creating its first real god. We have to start planning for the implications right now. We may not all be programmers, but we’re all involved in politics, economics, and ethics. Our economy is going to change, and jobs will be radically altered. What kind of political system is capable of handling AI? Can we possibly program a safe AI, especially when we let it evolve? We all have to be part of this project, and we all have something to contribute.