In our digital age, a vast amount of information is at our fingertips. With a few keystrokes, we can answer many, many questions, and AI is helping to make that possible. A respected Swiss scientist, Conrad Gessner, may have been the first to raise the alarm about the effects of information overload. In a landmark book, he described how the modern world overwhelmed people with data and argued that this overabundance was both “confusing and harmful” to the mind. The media now echo his concerns with reports on the unprecedented risks of living in an “always on” digital environment. It’s worth noting that Gessner never once used e-mail and knew nothing about computers. That’s not because he was a technophobe but because he died in 1565. His warnings referred to the seemingly unmanageable flood of information unleashed by the printing press. If we had let fear of the unknown drive decision making, we would never have tamed fire; societies would be at a standstill. We grow and develop by questioning the status quo and, yes, taking risks… calculated risks.
Let’s start with the economics. There is an economic upside to using robots in certain types of jobs, and that shift will happen. A strong economy, however, runs on supply and demand. It does a company no good to create more supply if the demand isn’t there. Demand is created by spending, and a strong economy has strong spending. Companies realize this, so it behooves them to maintain many jobs: humans create spending, and with new technology come new jobs. Profitability is achieved in two ways, cutting expenses and increasing demand for the product. That combination will be the linchpin of safe, regulated AI.
Through mindful and knowledgeable regulation, in 30, 40, or 50 years artificial intelligence may very well be as harmless as the cell phone is today. Humans will decide what those regulations look like; therefore humans are in charge of how AI develops and is used. Humans still make the decisions. The biggest, and most important, example is nuclear weapons. Humans in multiple countries created weapons that could destroy the world. During the Cold War, two polar-opposite ideologies faced off but realized that nothing good would come of using them. So here we are, we flawed humans, recognizing that mutually assured destruction is utterly ridiculous. We have the ability to obliterate all life on Earth, but the ability to do something does not equal the intent to do it. Nuclear weapons are a perfect example of regulation at its finest.
“While we can’t predict the future we can look to the past and learn from it.” – Philip Wiser, Chief Technology Officer at Hearst. What the past teaches us is that societies advance through technology. From the printing press to electricity, from the telephone to the automobile, from the airplane to penicillin and the X-ray machine, society is better for its inventors: people unwilling to say no, to say it can’t be done. We are better for the dreamers. AI is simply the next invention created by humans, by humans for humans. The idea that sentient machines will inherently want to destroy humanity is unfounded. Why do we assume they won’t want to coexist with us? That is an assumption, and assumptions are dangerous. AI is already here and it’s not going anywhere; we need to embrace it, regulate it, and apply it responsibly.