We live in an age of unprecedented technological advancement, with robotics and artificial intelligence taking center stage. Robotics has found its way into our everyday lives and into industry. Robots help surgeons perform life-saving operations, defuse bombs, and autonomous vehicles will soon drive themselves. But will they destroy humanity? Nobody can put firm odds on it, but the fear is that robots will eventually become stronger and sharper than us.
Autonomous programmable machines keep improving, with smaller power supplies, finer motor control, and more speed, power, and stability, and it's hoped they will soon be put to even greater use. One company in America has focused on creating robots that can deal with almost any obstacle they encounter, including four-legged robots that can climb steep terrain or open doors.
In AI, the idea of robots or AI systems causing harm to humanity (intentionally or not) is referred to as an existential risk. This could happen if an AI system becomes so intelligent and powerful that it can act outside of human control or misinterprets the goals it’s given in a way that causes harm.
Several prominent scientists and entrepreneurs, including Stephen Hawking and Elon Musk, have warned about the potential dangers of AI. Musk has suggested that AI could potentially be more dangerous than nuclear weapons. He and others advocate for robust research into AI safety and ethics to ensure that AI systems are developed in a way that benefits humanity as they become more advanced.
Will robots destroy humanity?
Autonomous machines already play an active role in military conflicts. Uncrewed aerial vehicles and military drones, for example, have been used by various countries for tactical operations since the early 2000s. Most drones are remotely piloted, but increasingly, onboard software allows specific tasks to be carried out with little or no pilot input.
Drones can be loaded with everything from radar to visible-light and infrared cameras, and reportedly even weapons, which makes that buzzing overhead more than just annoying. The worry is that adding weaponry to the more advanced, hypermobile robots now emerging would make them a real danger to humanity. Imagine a future not far from now where drones are miniaturized and equipped with GPS, thermal imaging, facial recognition, and a few grams of shaped explosive, which might not sound like much.
Such drones could penetrate almost any defense, evade almost every attack, and deliver a close-range explosion large enough to cleanly and efficiently remove a target. Of course, that is still technically humans using machines to kill people, which is admittedly bad enough.
Artificial narrow intelligence (ANI)
There's relatively little to fear from so-called ANI. Artificial narrow intelligence is charged with tasks like beating you at virtual chess or making sense of your badly worded requests. An ANI system runs a purpose-built program that performs specific tasks in the virtual or physical world.
However, deep learning lets ANI push at the edges of that narrow design: these systems are programmed to self-improve so they get better at their job.
Artificial general intelligence (AGI)
Artificial general intelligence, or AGI, would be an artificial mind that could think like a human, something vastly more complicated than narrow intelligence. To replicate lateral thinking and problem-solving, AGI would need a supercomputer. But computing technology is advancing at a tremendous rate.
Moore's law is the observation that computing power doubles roughly every 18 months.
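As a rough illustration of what that doubling implies, here is a minimal sketch; it assumes a perfectly constant 18-month doubling period, which real hardware only approximates, and the time points are arbitrary.

```python
# Minimal sketch of Moore's-law-style growth: power doubling every 18 months.
def relative_power(years: float, doubling_months: float = 18.0) -> float:
    """Computing power relative to today, given a fixed doubling period."""
    return 2 ** (years * 12 / doubling_months)

for years in (3, 9, 18, 30):
    print(f"After {years:2d} years: ~{relative_power(years):,.0f}x today's power")
```

Compounding like this is why 30 years of steady doubling works out to roughly a million-fold increase.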
There's probably a limit to this exponential growth, but researchers are already looking for ways to keep up the pace, and we are approaching a time when human-level artificial general intelligence becomes feasible. An AGI would have access to its own neural wiring and would combine general problem-solving skills with a computer's speed and mental agility. Experts predict that such an AGI would quickly optimize and upgrade itself, evolving into an artificial superintelligence in the blink of an eye.
An ASI would not only be able to think quicker than humans, it would also be able to think better. By rewiring its own brain, it could develop new and more efficient ways of processing information.
Killer robots
A fully autonomous weapon is designed to hunt and kill without a human in the loop. As far as we know, such weapons don't exist yet, and the Canadian robotics company ClearPath insists killer robots have no place on the battlefield: in its view, a line is being crossed with this technology.
Autonomous weapons create an ethical and moral disconnect from war, and major powers and non-state actors will be tempted to use them in dangerous ways. The risks include a machine killing the wrong person, or two machines engaging each other and escalating a conflict.
ClearPath has committed to never knowingly building fully autonomous weapons, and thousands of scientists, engineers, tech companies, and AI experts have made the same pledge. But they're not shunning military contracts completely; they argue AI can still be a valuable tool. Meanwhile, Russia and China are already investing in military AI technology.
Worker robots
There's no denying that robots and automation are increasingly part of our daily lives, and the rise of robots has led to some pretty scary warnings about the future of work, chief among them that robots will be able to do everything better than us. One study found that up to 670,000 U.S. jobs were lost to robots between 1990 and 2007, and that number is likely to rise. A widely cited study from 2019 estimated that nearly half of all jobs in the U.S. are in danger of being automated over the next 20 years.
Occupations involving repetitive, predictable work, such as transportation, logistics, and administrative support, were flagged as especially high-risk. Robots don't need health benefits, vacations, or even sleep, for that matter. But the debate over whether robots will take our jobs is not settled; many economists argue automation will ultimately create new jobs.
A survey of 20,000 employers across 42 countries found that the IT, customer service, and advanced manufacturing industries expect to add workers over the next two years because of automation. It's also hard to imagine robots replicating human characteristics like empathy or compassion, which many jobs require. Even so, research shows automation will significantly change day-to-day tasks in the workplace.
Robots vs. humans
One thing is for sure: machine intelligence has been one of the great scientific breakthroughs of recent years. Today's AI can learn for itself and master a variety of games and skills, and over the course of the 21st century, scientists expect to amplify that power enormously, perhaps with quantum computing.
That could unlock an intelligence explosion unprecedented in history. In the not-too-distant future, AI may reach the intelligence level of a low-IQ human, and very soon after that, match the most intelligent person on Earth.
- At that point, artificial intelligence will have an IQ close to 200 and be smarter than almost everyone.
But it doesn't stop there for AI! Since AI is not limited by the same physical constraints as biological brains, it will keep becoming more intelligent. Computers could eventually surpass an IQ of 1,000, then 10,000, and even a million.
- A highly regarded futurist said computers would eventually become billions of times smarter than humans.
- The brilliant innovator Elon Musk believes that all humans will eventually have to merge with AI.
Machine intelligence may well be the last great invention humans need to make, because super-intelligent AI computers and robots would do all the inventing from then on. Not only that, but they would dramatically reduce the time it takes to create innovative technologies such as self-replicating nanobots, cures for every disease, free energy, and space colonization. Instead of taking generations or centuries, these things could happen within a decade.
However, when computers become a billion times smarter than humans, they'll be impossible to control. The AI would be able to outsmart us at every turn and act in ways that suit its own preferences rather than ours. We may assume that a super-intelligent AI will be compassionate, understanding, and peaceful, but that's the wrong way to think about AI and robots.
Artificial intelligence would likely think very differently from us, and its way of solving a problem could easily place us in harm's way unless we program it to always take human concerns into account. For example, if the system decided that humans were a problem for the Earth, it could create a deadly virus capable of wiping us off the face of the planet in a matter of days.
It's a popular fallacy to equate superintelligence with compassion and empathy; the two do not necessarily go together. If AI reaches intelligence levels a billion times greater than ours, it might conclude that humans are a burden on the planet and set itself the objective of disposing of us. There would be no way to plan against it, stop it from happening, or simply pull the plug.
So once AI reaches that level of intelligence, there is virtually no going back. We would have to live in constant fear of AI eradicating human life, and the AI would most likely know it, giving it effectively unlimited power over humanity and the Earth. This is why Elon Musk also believes AI could eventually destroy us all.
Some AI experts believe we can keep it on our side by teaching artificial intelligence human values, even as the AI continually rewrites its code and evolves those values. It will be interesting to see how an AI develops and how the world will look once superintelligence is achieved. When that happens, the hope is for a world where computers, robots, humans, and animals live harmoniously on the planet.
Laws of robotics
More than 70 years ago, the science fiction writer Isaac Asimov recognized the dangers a self-improving intelligent robot would pose to the human race, so he proposed his Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by humans, except where such orders would conflict with the First Law.
- A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
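The laws form a strict hierarchy: each law only applies when it doesn't conflict with the ones above it. Here is a minimal toy sketch of that priority ordering; the Action fields and the example options are invented purely for illustration, not taken from Asimov.

```python
# Toy sketch: the Three Laws as a strict priority ordering.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool     # would it injure a human, or allow harm through inaction?
    disobeys_order: bool  # would it disobey an order given by a human?
    endangers_self: bool  # would it put the robot's own existence at risk?

def choose(candidates: list[Action]) -> Action:
    # Lexicographic comparison: False sorts before True, so avoiding harm to
    # humans dominates obedience, which in turn dominates self-preservation.
    return min(candidates, key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self))

options = [
    Action("strike the attacking human", harms_human=True,  disobeys_order=False, endangers_self=False),
    Action("retreat, against orders",    harms_human=False, disobeys_order=True,  endangers_self=False),
    Action("shield the human itself",    harms_human=False, disobeys_order=False, endangers_self=True),
]
print(choose(options).name)  # -> "shield the human itself"
```

In this toy ordering the robot sacrifices itself rather than harm or disobey a human, which is exactly the behavior the hierarchy is meant to guarantee.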
Elon Musk and other leading figures in AI and robotics have called on the UN to ban the development of autonomous weapons. Unfortunately, it may be too late! If humans can be relied on for anything, it's ignoring rules and bans when they get in the way of their priorities.
Do you think we will meet our fate in the metallic manipulator arms of hyper-intelligent robot overlords? I would love to hear your thoughts. So please do put them in the comments below.