We see its use on our smartphones with Siri and in the introduction of self-driving cars. Often when we think of AI, we think of robots taking on human tasks, but AI can include anything from Google's search engine to independently functioning weapons.
Artificial Intelligence is commonly split into two categories: narrow (weak) and general (strong). Weak AI powers internet searches, facial recognition and self-driving cars. While Weak AI will certainly outdo any human at, say, the Grande Vegas online casino or a game of chess, Strong AI would outperform humans in almost all cognitive tasks.
The Need for Artificial Intelligence Safety
Ongoing AI research is essential to keep its use safe for society. AI needs to function reliably, not only on our personal laptops, but also when it controls medical devices, air traffic safety or the national power grid. Another area of Artificial Intelligence presenting a huge challenge is the development of autonomous weapons and the prevention of an arms race.
An important question is how far we want Strong Artificial Intelligence to go. Do we really want AI to outperform humans on every cognitive level? Artificial Intelligence and its technologies may go a long way toward eliminating world poverty and disease, and this is obviously a positive development.
However, we do need to ensure that we align the goals of Artificial Intelligence with the needs of human society. An artificial intelligence system is generally thought of as purely beneficial, but it also has the potential to cause great harm. It is therefore important that research continue into preventing negative consequences even as we enjoy the benefits.
What are the dangers of AI?
A super-intelligent AI is not going to share our human characteristics relating to feelings and emotions, like love and hate. There is no reason to believe that AI will deliberately become either benevolent or malevolent.
It is possible that a super-intelligent AI could be programmed to do something dreadful. The development of autonomous weapons driven by AI presents a real danger. In the hands of an unstable person, these weapons could have devastating results for humanity. They can be designed to do maximum damage while allowing little, or no, human intervention. The risk grows as the level of intelligence and autonomy increases.
It is also possible to program Artificial Intelligence to do something beneficial, yet have the means by which it pursues that goal result in destruction. A super-intelligent system must be fully aligned with our aims, and with how we want them attained, and must not slip out of our control. A super-intelligent system programmed to perform a task that is not properly aligned with our goals could cause irreparable damage to, say, our ecosystem. And it may well view human intervention as a threat to achieving its goal.
A major goal of AI safety research is to ensure that AI remains safe and is always aligned with the needs and values of a civilized society.
AI Safety Research
Interest in AI safety has been growing in recent years, and it is now considered a priority. With the incredible breakthroughs in technology, the idea of creating Strong AI, or superintelligence, is no longer considered fiction, nor something that could only come about in the distant future. Many leading figures in science and technology are expressing concern and calling for research. Experts now see the possibility of superintelligence within our lifetime, so research is needed now to examine the risks it poses and to be prepared.
A superintelligence smarter than any human would likely have the ability to outsmart us, and we could potentially lose control. We need to ensure that we remain in control of our technology and use it wisely for the benefit of civilization. For this reason, AI safety research is considered vital.
Myths about the Timeline of Artificial Intelligence
There are many myths about AI and its possible impact on human society, and many arguments about whether we should welcome it as something positive or actually fear it.
One common myth is that superhuman AI is imminent and will be with us within a few years. This is far from the truth. Some in the science and technology fields exaggerate, or are overly optimistic about, what can be accomplished, and this has happened before. In 1956, scientists proposed that a mere two-month study by ten researchers could significantly advance our understanding of machines that use language and solve problems usually reserved for humans.
Contrary to the above, there are those who say there is no way we will have superintelligent AI in this century, if at all. Some say it is impossible. But, again, we don't know for sure. In 1933, shortly before the invention of the nuclear chain reaction, the eminent nuclear physicist Ernest Rutherford dismissed nuclear energy as "moonshine".
Basically, the world's experts don't agree on a timeline. At a 2015 AI conference in Puerto Rico, a poll of the researchers was taken. The median answer was the year 2045, but many researchers suggested a time well beyond that.
There is a suggestion that only those with little knowledge of Artificial Intelligence advocate AI safety research, and that such research is simply unnecessary. This is not the case. The risks of AI do not have to be high to justify a modest investment in AI safety research. It can be compared to a standard insurance policy, where the risks may be minimal but the investment is still worthwhile.
It is possible that the AI safety debate has become more contentious because of the media coverage it gets. Doom and gloom attract attention, and fear sells; those are, therefore, the stories the media is likely to focus on.
Myths concerning the risks of Superhuman Artificial Intelligence
These myths usually involve evil robots rising up, taking over human society and killing off civilization as we know it. Such scenarios are not what AI safety researchers are concerned with, since they presume that AI has consciousness and can have subjective experiences, which is not the case. The more pertinent question is: how competent is AI at accomplishing its goals, and how do we make sure those goals align with ours, in order to avoid disaster?
The problem will always arise when an intelligence's goals are misaligned with ours. Superhuman intelligence that is misaligned could potentially outsmart us and cause havoc.
The Debate Goes On
There are more relatable controversies concerning the introduction of AI, ones that would affect how we live our lives. How do we see our future and that of our children? Do we want a society where work is obsolete and leisure is the norm, where jobs have been taken over by automation? What do we want our children to aspire to? Do we really want to develop deadly autonomous weapons? Will we create machines that we can control, or will they control us?
These are the questions being asked. What will life look like if, and when, we live in a world with super Artificial Intelligence?