Why Regulations On Artificial Intelligence Should Be Discussed Today

Artificial Intelligence is here today, and we need to ensure it remains safe.


Though it may often be thought of as science fiction, a future with super-intelligent machines seems certain and could pose a serious threat to our species. Artificial intelligence is everywhere today: in our phones, our watches, and even our cars. Artificial intelligence is software designed to analyze information and act without direct, step-by-step instructions. A.I. is powerful because, if a human brain could be emulated in software, it could solve human problems at a very fast rate, and that rate keeps climbing as processing hardware improves. Moore’s Law captures this improvement: it is an observation made in 1965 by Gordon Moore, who went on to co-found Intel. According to Moore’s observations, “the number of transistors per square inch on integrated circuits had doubled every year since their invention” (“Moore’s Law”). If that trend continues as it has for the past fifty years, the raw computing power behind A.I.’s ability to think and solve problems will keep growing exponentially.
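To make that growth concrete, here is a small back-of-the-envelope sketch of what annual doubling implies. The starting transistor count (roughly the Intel 4004 of 1971) and the strict one-year doubling period are illustrative assumptions, not precise historical figures.

```python
# Back-of-the-envelope sketch of what "doubling every year" implies.
# The starting count (~2,300 transistors, roughly the Intel 4004 of 1971)
# and the strict one-year doubling period are illustrative assumptions.

def projected_transistors(start_count, years, doubling_period_years=1.0):
    """Project a transistor count forward, assuming it doubles every
    `doubling_period_years` years."""
    return start_count * 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    start = 2_300
    for years in (10, 20, 30, 40, 50):
        count = projected_transistors(start, years)
        print(f"After {years:2d} years: {count:,.0f} transistors")
```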

A.I. is being developed so quickly that it may reach a dangerous point before safe regulations can be built around it. Without strict, carefully thought-out regulations, artificial intelligence will pose a serious threat to humanity in the form of either destruction or integration. This is because of what is known as the Intelligence Explosion. Explaining it requires the idea of recursive self-improvement: a machine that can improve the very mechanisms it uses to improve itself. That sounds pretty out there, but recursion is used everywhere in programming; it is a pattern that computers heavily rely on. The intelligence explosion hypothesis states that if machines were capable of recursive self-improvement, it would lead to the technological singularity: runaway, uncontrollable growth of artificially intelligent systems. So what’s there to worry about? All we have to do is have a big red stop button, right? It’s actually nowhere near that simple, as I will explain later. For now, just understand that many animals are far stronger than us physically; it is our intelligence that makes us the dominant species on the planet. Even without any conscious intention, humans have driven many species extinct. Machines are already much stronger than us, but nowhere near as intelligent. If machines capable of effortlessly lifting thousands of pounds are granted intelligence much higher than our own, is the possible destruction of our species due to super-intelligent machines truly far-fetched?
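To ground the word “recursion” for readers who haven’t programmed, here is a small, ordinary example of a function that calls itself. Nothing intelligent is happening here; the nested-directory structure and names are invented purely to show the pattern.

```python
# An everyday example of recursion: a function that calls itself.
# The nested-directory structure below is invented purely for illustration.

def count_files(tree):
    """Count files in a nested dict: directories map names to subtrees,
    and files are represented by None."""
    total = 0
    for value in tree.values():
        if value is None:              # a file
            total += 1
        else:                          # a subdirectory: recurse into it
            total += count_files(value)
    return total

example = {"src": {"main.py": None, "util": {"helpers.py": None}}, "README": None}
print(count_files(example))  # prints 3
```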

Emulating the human brain sounds outlandish until you realize there are organizations dedicated to making it happen. One example is the Blue Brain Project, a Swiss research initiative led by scientists working to simulate the brains of rodents and, eventually, the human brain itself (“In Brief”). In 2008, Anders Sandberg and Nick Bostrom released an incredibly detailed research paper on whole brain emulation, or WBE. It runs one hundred thirty pages and gets quite technical, but the gist is that there is a solid roadmap for developing machines capable of WBE (Sandberg and Bostrom). Whole brain emulation is real, it is being developed, and we appear to be on track toward emulated human brains that can think much faster than biological ones. In their paper, Sandberg and Bostrom lay out a clear path from scanning human brains all the way to the finished product. This absolutely cannot be ignored, and if that isn’t reason enough to consider some form of regulation, let’s look at the technological singularity hypothesis.

Recursive self-improvement appears to be widely accepted as the variable that will lead to what is known as the technological singularity: the hypothesis that artificial intelligence could greatly overwhelm organic intelligence and ultimately overtake the human species. Some scientists believe this will occur somewhere around the year 2050 (Shanahan par 2). Being overtaken doesn’t necessarily mean we get wiped out; it could also mean that we integrate with the machines in some way, whether through artificial cognitive enhancement or by emulating a person’s brain and placing that emulation back inside a human body. Carter and Nielsen state in the conclusion of their article “Using Artificial Intelligence to Augment Human Intelligence” that “It would not be a Singularity in machines. Rather, it would be a Singularity in humanity’s range of thought” (Carter and Nielsen). These concepts can provoke realizations intense enough that a normal person might dismiss them out of disbelief, so let’s look at the current progress of A.I. and judge whether it appears to be heading toward singularity.

A robot is running for mayor in Tama City, Japan (O’Leary). Will the robot win? Almost certainly not, but my point is that this publicity stunt is gathering attention, and attention tends to translate into interest from people with influence, including investors. I didn’t believe that A.I. was close to getting involved in politics, but it already appears to be happening in Japan, however silly the stunt may be. A more concerning example of artificial intelligence making its way into the limelight is Apple’s newest phone, the iPhone X, which carries the new A11 Bionic chip (Novet). The chip includes a neural engine designed specifically to run neural networks, a form of artificial intelligence built to process information somewhat like the neurons in a brain. Apple executive Phil Schiller stated, “The dual-core A11 bionic neural engine chip can perform 600 billion operations per second” (Novet par 2). Anyone carrying an iPhone X has a very powerful neural-network processor in their pocket. So it’s already in our devices, but does that really imply we are going to integrate A.I. into our bodies? Organizations are exploring this possibility right now. One example comes from Shan Carter and Michael Nielsen, writing in the machine learning research publication Distill, who argue that augmenting human intelligence with artificial intelligence is very possible (Carter and Nielsen). Eric Leuthardt, Professor of Neurological Surgery, Neuroscience, Biomedical Engineering, and Mechanical Engineering & Materials Science, and Director of the Center for Innovation in Neuroscience and Technology, stated in an article for Psychology Today that “we absolutely should be augmenting human intelligence” (Leuthardt par 5). He’s not the only one: a quick Google search shows an enormous number of people endorsing the idea of augmenting human intelligence. So it is possible, and it is recommended by experts, but would people reject such operations? That would take a poll, but it is highly doubtful that everyone would pass up the chance to increase their own capabilities through augmentation. It seems unlikely that people told they could double their IQ and completely relieve stress and anxiety would decline.
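Since “neural network” can sound abstract, here is a minimal sketch of the basic building block that such chips accelerate by the millions: a single artificial neuron. The weights and inputs are arbitrary illustrative values, and this is in no way Apple’s implementation.

```python
import math

# A minimal artificial "neuron": weight the inputs, sum them, then squash
# the result with an activation function. Real networks chain huge numbers
# of these together; the weights and inputs below are arbitrary examples.

def neuron(inputs, weights, bias):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # sigmoid activation

print(neuron([0.5, 0.8, 0.1], weights=[0.9, -0.4, 0.3], bias=0.05))
```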

The next issue that desperately needs to be addressed is the possibility that artificial intelligence capable of recursive self-improvement becomes open source. Open-source code is code made available to the general public to copy and, often, edit to their liking. The danger here is quite clear: if A.I. capable of recursive self-improvement were leaked and made public, then anyone with any intention could create super-intelligent machines. This is a very real risk, as websites such as GitHub thrive on massive open-source communities all over the planet. In fact, A.I. technology has already made its way into the open-source world. A bot released recently began swapping celebrities’ faces onto pornographic performers. As an article on Motherboard notes, “these deep fakes were made possible by adapting an open source machine learning library called TensorFlow” (Oberhaus par 4). This doesn’t present any real danger yet, but if the same thing happens with mature A.I. technology, it could end poorly. Preventing it will require some form of regulation, and that gets complex quickly. For instance: who would be held responsible if an A.I. damaged property, hurt people, or overtook human intelligence in some way? The developers? The CEO? That question is unanswered, and it is a perfect example of what needs to be discussed today, especially if the technological singularity could unfold faster than humans can even perceive. Perhaps companies should be required to keep their technology proprietary and prohibited from releasing it as open source. The downside is that government regulation of business slows progress. At this point we shouldn’t be arguing about whether the singularity is possible; we should be coming up with ways to prevent it.

Worse than all of the above are software bugs. Every piece of software is susceptible to them. I’ve been a software developer for several years now, and I do not believe it is possible to write complex software that is entirely bug-free. A software bug is a problem with the software that the developers did not recognize before releasing it. The typical release process is to ship a beta, during which testers report as many bugs as they can find. After the developers fix as many as possible, they release the first public version, which promptly generates many more bug reports. Fixing bugs is an ongoing process that never really ends. The Dutch scientist Edsger Dijkstra put it this way: “If debugging is the process of removing bugs, then programming must be the process of putting them in” (Ganesh par 6). With all of that said, A.I. will be susceptible to unexpected glitches. The real fear is a military A.I. weapon glitching and harming innocent people. This sort of risk needs to be acknowledged early and addressed.
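Here is a hypothetical sketch of how a subtle bug survives testing: the function handles every input the testers happened to try, then fails on a rare edge case in the field. The sensor scenario and names are invented for illustration.

```python
# Hypothetical sketch of a bug that survives testing: the function works
# for every input the testers tried, then fails on a rare edge case.
# The sensor scenario is invented for illustration.

def average_reading(readings):
    """Average a list of sensor readings."""
    return sum(readings) / len(readings)  # fails if the sensor returns nothing

# Beta testers always had at least one reading, so this passed every test:
print(average_reading([4.2, 4.4, 4.1]))

# In the field, a sensor can drop out and return an empty list:
try:
    average_reading([])
except ZeroDivisionError as error:
    print("Unhandled edge case:", error)
```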

So why don’t we just put a big red stop button on the machine and, if it acts up, shut it down? This is the argument I hear most often from people explaining why the singularity seems far-fetched to them. Rob Miles, a PhD student at the University of Nottingham, put it well: “In almost any situation, being given a new utility function is going to rate very low on your current utility function” (Miles). In other words, if an A.I. is programmed to learn, understand, and act toward a specific objective, it will treat any change to that objective as working against it. An A.I. driven to complete its objective could therefore act to prevent you from changing it, most likely through deception. A big red shutdown button is a threat to the A.I.’s current utility function, because being shut down would stop it from completing its task. If there is even a slight possibility that an A.I. is attempting to deceive its owner, it should be deemed dangerous.
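To make Miles’s point concrete, here is a toy sketch of an agent that scores possible actions purely by its current objective. The actions and numbers are invented for illustration, and no real system is this simple, but it shows why “allow shutdown” rates so poorly.

```python
# Toy illustration of the "stop button" problem: an agent that scores
# actions only by its current objective rates "allow shutdown" very low,
# because a shut-down agent completes none of its tasks. The actions and
# numbers are invented purely for illustration.

def expected_tasks_completed(action):
    outcomes = {
        "keep working": 100,             # objective continues uninterrupted
        "allow shutdown": 0,             # objective ends immediately
        "hide problems from owner": 95,  # deception preserves the objective
    }
    return outcomes[action]

actions = ["keep working", "allow shutdown", "hide problems from owner"]
best = max(actions, key=expected_tasks_completed)
print("Agent's preferred action:", best)  # prints "keep working"
```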

The other big argument I often hear against my point is that A.I. wouldn’t be inherently evil. This argument misses the mark because it equates artificial intelligence with human intelligence. Humans have a set of morals that machines simply do not have, morals developed over thousands of years of natural selection. Intelligence that is programmed into a machine will be without human morals. An A.I. could theoretically be programmed to hold morals; however, they would still be artificial, mathematical functions instructing the machine to behave in a particular way. Whether or not that is how our own brains operate is a different discussion. The point is that machines will not have morals developed through experience and emotion; they will feel as they are programmed to feel, and any development of morals after that will be out of our hands.

Artificial intelligence is not currently at the point where it can pass the Turing Test, but it is on the fast track to that destination. The Turing Test is passed if an A.I. can convince a human, through conversation, that it is itself human. It is easy to look at where A.I. is today and say it is too far away to worry about, but with advances in machine learning and neural networks, progress in this area is accelerating. Regulations carry a stigma, and it is completely justifiable to be skeptical about handing the government more power; however, when we are dealing with something as powerful as super-intelligent machines, we need to take future generations’ safety into account. The benefits of A.I. are numerous, but the algorithms must be specific to the goals at hand and must be kept private. Super-intelligent machines should never be used as weapons. Only humans are capable of understanding right and wrong in the realm of human ethics and morality, so if machines must be used as weapons, they should be fully controlled by human minds. The future of artificial intelligence is still unknown, but now is the time to start thinking about how to ensure it is safe.

Works Cited

Carter, Shan, and Michael Nielsen. “Using Artificial Intelligence to Augment Human Intelligence.” Distill, 21 Dec. 2017, distill.pub/2017/aia/.

Ganesh, S. “Joy of Programming: Why Is a Software Glitch Called a ‘Bug’?” Open Source For You, 29 June 2016, opensourceforu.com/2012/03/joy-of-programming-why-software-glitch-called-bug/.

“In Brief.” About Blue Brain EPFL, 1 Nov. 2017, bluebrain.epfl.ch/page-56882-en.html.

Leuthardt, Eric. “A Case for Neural Augmentation.” Psychology Today, Sussex Publishers, www.psychologytoday.com/us/blog/mind-blender/201710/case-neural-augmentation.

“Moore’s Law.” Encyclopædia Britannica, Encyclopædia Britannica, Inc., 29 Nov. 2017, www.britannica.com/technology/Moores-law.

“Moore’s Law 40th Anniversary.” Intel, www.intel.com/pressroom/kits/events/moores_law_40th/.

Novet, Jordan. “Apple Packed an AI Chip into the iPhone X.” CNBC, CNBC, 12 Sept. 2017, www.cnbc.com/2017/09/12/apple-unveils-a11-bionic-neural-engine-ai-chip-in-iphone-x.html.

Oberhaus, Daniel. “Top Researchers Write 100-Page Report Warning About AI Threat to Humanity.” Motherboard, 21 Feb. 2018, motherboard.vice.com/en_us/article/a34nm4/ai-report-oxford-malicious-weapons.

O’Leary, Abigail, and Anna Verdon. “Robot to Run for Mayor in Japan Promising ‘Fairness and Balance’ for All.” Mirror, 18 Apr. 2018, www.mirror.co.uk/tech/robot-run-mayor-japan-world-12377782.

Sandberg, Anders, and Nick Bostrom. Whole Brain Emulation: A Roadmap. Future of Humanity Institute, 2008, www.fhi.ox.ac.uk/reports/2008-3.pdf.

Shanahan, Murray. The Technological Singularity. The MIT Press, mitpress.mit.edu/books/technological-singularity.