Robotics & Artificial Intelligence : Future At A Glance
Introduction : This blog is written as part of a Thinking Activity assigned by Dr Dilip Barad on the topic of Digital Humanities. It also attempts to foresee a future shaped by robots and Artificial Intelligence, and the crisis this may pose for human life on the planet.
In the course of this blog, we will refer to the words of some of the world's leading scientists and writers regarding the future of Robotics and Artificial Intelligence and their role in the human life to come.
(1) Stephen Hawking :
Hawking made these comments during a talk at the Web Summit technology conference in Lisbon, Portugal, in which he said, "computers can, in theory, emulate human intelligence, and exceed it."
Hawking talked up the potential of AI to help undo damage done to the natural world, or eradicate poverty and disease, with every aspect of society being "transformed."
But he admitted the future was uncertain.
"Success in creating effective AI, could be the biggest event in the history of our civilization. Or the worst. We just don't know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it," Hawking said during the speech.
"Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy." (Source)
Some futurists try to foresee the future. Others attempt to shape it. Yet prolific science-fiction author and biochemist Isaac Asimov did both.
(2) Isaac Asimov :
Asimov not only invented the word "robotics"; his "Three Laws of Robotics," first written as part of a short story in 1942, have had a massive impact on framing how people think about the development of artificial intelligence and the field of robotics itself.
Perhaps most amazing are Asimov's many accurate predictions about the Internet and what the world would look like in this decade. Several were made in a famous article published in The New York Times in 1964, which envisioned life in 2014.
Below are some of Isaac Asimov’s most accurate predictions.
On robotics:
“Robots will neither be common nor very good in 2014, but they will be in existence.”
“Much effort will be put into the designing of vehicles with ‘Robot-brains’ – vehicles that can be set for particular destinations and that will then proceed there without interference by the slow reflexes of a human driver.”
On the human race:
“Not all the world’s population will enjoy the gadgety world of the future to the full. A larger portion than today will be deprived and although they may be better off, materially, than today, they will be further behind when compared with the advanced portions of the world. They will have moved backward, relatively.”
“Mankind will suffer badly from the disease of boredom, a disease spreading more widely each year and growing in intensity. This will have serious mental, emotional and sociological consequences, and I dare say that psychiatry will be far and away the most important medical specialty in 2014. The lucky few who can be involved in creative work of any sort will be the true elite of mankind, for they alone will do more than serve a machine.” (Source)
(3) Martin Rees :
In an article on the Evening Standard website, Rees writes about robots and the futuristic possibility of their taking control over human beings:
Way back in 1942, the great science fiction writer Isaac Asimov formulated three laws that robots should obey. First, a robot may not injure a human being or, through inaction, allow a human being to come to harm. Second, a robot must obey the orders given it by human beings, except where such orders would conflict with the First Law. Third, a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In later writings, Asimov added a fourth law: a robot may not harm humanity, or by inaction allow humanity to come to harm. Perhaps AI developers will need to be mindful of that law as well as the other three.
Seven decades later, intelligent machines pervade popular culture — most recently in the movie Ex Machina. But, more than that, the technology of artificial intelligence (AI) is advancing so fast that there’s already intense debate on how Asimov’s laws can be implemented in the real world.
Experts differ in assessing how close we are to human-level robots: will it take 20 years, 50 years, or longer? And philosophers debate whether “consciousness” is special to the wet, organic brains of humans, apes and dogs — so that robots, even if their intellects seem superhuman, will still lack self-awareness or inner life. But there’s agreement that we’re witnessing a momentous speed-up in the power of machines to learn, communicate and interact with us — which offers huge benefits but has downsides we must strive to avoid.
There is nothing new about machines that can surpass mental abilities in special areas. Even the pocket calculators of the 1970s could do arithmetic better than us.
Computers don’t learn like we do: they use “brute force” methods. Their internal network is far simpler than a human brain but they make up for this disadvantage because their “nerves” and neurons transmit messages electronically at the speed of light — millions of times faster than the chemical transmission in human brains. Computers learn to translate from foreign languages by reading multilingual versions of (for example) millions of pages of EU documents (they never get bored!). They learn to recognise dogs, cats and human faces by crunching through millions of images — not the way a baby learns.
In the 1960s the British mathematician I J Good — who worked at Bletchley Park with Alan Turing — pointed out that a super-intelligent robot (were it sufficiently versatile) could be the last invention that humans need ever make. Once machines have surpassed human capabilities, they could themselves design and assemble a new generation of even more powerful ones. Or could humans transcend biology by merging with computers, maybe losing their individuality and evolving into a common consciousness?
We don’t know where the boundary lies between what may happen and what will remain science fiction. But some of those with the strongest credentials think that the AI field already needs guidelines for “responsible innovation”. Many of them were among the signatories — along with anxious non-experts like me — of a recent open letter on the need to ensure responsible innovation in AI. (Source)
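One striking feature of Asimov's laws, as quoted above, is their strict priority ordering: each law yields to the ones before it. That ordering can be sketched in code as an ordered veto list. This is purely a toy illustration under invented assumptions (the predicates and field names are hypothetical), not a workable safety mechanism:

```python
# Toy illustration only: Asimov's laws rendered as an ordered veto list.
# Every predicate and field name here is invented for the example.

LAWS = [
    # Asimov's later addition (protecting humanity) takes highest priority.
    ("Fourth", lambda a: a.get("harms_humanity", False)),
    ("First",  lambda a: a.get("harms_human", False)),
    ("Second", lambda a: a.get("disobeys_order", False)),
    ("Third",  lambda a: a.get("self_destructive", False)),
]

def first_violation(action):
    """Return the name of the highest-priority law a proposed action
    violates, or None if it passes all four."""
    for name, violates in LAWS:
        if violates(action):
            return name
    return None

print(first_violation({"harms_human": True}))  # First
print(first_violation({}))                     # None
```

Because the list is scanned in order, an action that threatens humanity is vetoed by the Fourth Law before the others are even consulted, which mirrors how each of Asimov's laws defers to those above it.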
My Personal Point of View : After reading the predictions offered by these leading experts and scientists in the field of Robotics and Artificial Intelligence, I would like to opine that whatever the future may hold, whatever technology we invent and advance in, and whatever planetary shifts may occur, we must keep master control over whatever we manufacture as robots and artificially intelligent devices meant to serve humanity; otherwise, their way of learning and their speed of data transmission will become a danger to human beings in the near future.
Thank you!