Artificial Intelligence (AI) has been a prevalent theme of science fiction for decades. The idea that machines can exhibit the same level of intelligence as humans has held writers and audiences in a firm grip across all sorts of art, from best-selling novels to blockbuster movies. But artists and futurists have a habit of romanticizing the subject, such as the frequent inclusion of human-like robots. And I get it: that is easier than expecting your audience to empathize with long lines of code. But this artistic image of AI is wrong. Portraying AI as a humanoid robot is analogous to building the chassis of a car before the combustion engine has even been invented.
Why is there so much hype around AI now? Will it really destroy the world, or is it going to create a new utopia where all troublesome labor is handled by intelligent machines? Over the years, great minds like Elon Musk and Stephen Hawking have raised genuine concerns about Artificial Intelligence. But AI has also shown a promising future in many fields, such as healthcare, where prosthetic limbs for people with disabilities now work with the help of this new technology. So where will AI take us in the future?
In the 1960s, there was a similar hype around nuclear technology. Proponents dreamed of a future powered by nuclear energy, where cars would never run out of fuel and all of humanity's energy needs would be met by this seemingly inexhaustible source. And then there was an opposing side, which feared a dystopia, scared by the destructive power of nuclear technology as demonstrated by the US in 1945. But look at us now. AI is no different; there are two extreme positions, but as is the case with every issue that has two extremes, the truth lies somewhere in the middle.
Research in AI has seen exponential growth over the years, thanks to enabling factors such as ever-increasing computer processing power. As great minds around the world race to mimic millions of years of human evolution on computers, there are a few questions that need to be addressed by policymakers and scientists alike. How far is too far? How much control do we want AI to have? Can we trust AI enough to hand everything over to it? Is AI even capable enough? Many experts believe that AI can never reach the level of human intelligence.
Perhaps they are right. Perhaps we cannot create machine intelligence that matches our own; perhaps we are incomprehensibly complex and cannot mimic nature. But then again, if random mutations could produce human intelligence, what is stopping AI from reaching the same level? In fact, AI has already surpassed us in limited domains. The computer systems Deep Blue and AlphaGo have defeated the best human players at the popular board games Chess and Go. Think about it: even if you dedicated your whole life to mastering one of these games, you could never be as good as these systems. You can dismiss these achievements, but AI is not going to stop at beating us at Chess. It is going to be far more than a plaything in the near future.
If AI becomes capable enough to do all physical and mental labor, what purpose would we as humans serve? If, in this idealistic quest, we render an entire species useless, what would stop AI from recognizing our obsolescence and taking control? If AI one day surpasses human intelligence and we become like ants before it, how will it treat us? For that matter, how do we treat ants? Think of it this way: you are going to build a new home, but there is a colony of ants on the land where it will stand. You couldn't care less about those little ants. Try explaining to the animals the immense benefits of cutting down those trees and destroying their natural habitat. What if AI needs something and we are in its way? Would it care about us? These are not the scribblings of a paranoid mind; this is very much within the realm of possibility, and according to some experts, it could happen within the next few decades.
On October 5, 1960, an early warning system in Greenland issued a level-5 warning, meaning a long-range Soviet missile was about to hit the USA. The warning was quickly passed to the high command, but it was dismissed because Nikita Khrushchev, the head of the Soviet Union, was in the US at the time; you wouldn't expect him to be there during a nuclear attack. A later investigation found that the early warning system had mistaken the rising moon for a Soviet missile. But think about it for a second: what if Khrushchev had not been in the US? We were grazed by the apocalypse. We humans are fallible; we make mistakes. And now, with the advent of destructive technologies capable of extinguishing all life on earth, we are right to be concerned about giving up control and handing it all over to the machines.