Artificial intelligence (AI) is advancing at a breakneck pace, from Siri to self-driving cars. While science fiction often depicts AI as humanoid robots, the term “AI” can refer to anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.

Today’s artificial intelligence is known as narrow AI (or weak AI), because it is designed to perform a narrow task (e.g. only facial recognition, only internet searches, or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI, or strong AI). While narrow AI may surpass humans at a single specific skill, such as playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.


In the short term, the goal of minimising the negative impact of AI on society motivates research in many areas, from economics and law to technical topics such as verification, validity, security, and control. It may be little more than a nuisance if your laptop crashes or gets hacked, but it becomes critical that an AI system does what you want it to do if it controls your car, aeroplane, pacemaker, automated trading system, or electricity grid. Another short-term concern is preventing a devastating arms race in lethal autonomous weapons.

In the long run, a critical question is what will happen if the quest for strong AI succeeds and an AI system surpasses humans at all cognitive tasks. As I.J. Good noted in 1965, designing smarter AI systems is itself a cognitive task. Such a system could undergo recursive self-improvement, triggering an intelligence explosion that leaves human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help eradicate war, disease, and hunger, so the development of strong AI could be the single most significant event in human history. Some scientists have raised concerns, however, that it might also be the last, unless we learn to align the AI’s goals with ours before it becomes superintelligent.

Some argue that strong AI will never be achieved, while others insist that the creation of superintelligent AI is guaranteed to be beneficial. At FLI, we recognise both of these possibilities, but also the possibility that an artificial intelligence system could cause great harm, whether intentionally or unintentionally. We believe that research conducted today will help us better anticipate and prevent such potentially harmful consequences in the future, letting us enjoy the benefits of AI while avoiding the pitfalls.


Most experts agree that a superintelligent AI is unlikely to exhibit human emotions such as love or hatred, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when experts consider how AI might become a risk, they think two scenarios are most likely:

The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems designed to kill. In the wrong hands, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war with catastrophic casualties. To avoid being thwarted by the enemy, these weapons would be extremely difficult to simply “switch off,” so humans could plausibly lose control in such a situation. This risk exists even with narrow AI, but it grows as AI intelligence and autonomy increase.

The AI is programmed to do something beneficial, but it develops a destructive method of achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc on our biosphere as a side effect, and regard human attempts to stop it as a threat to be met.
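The misalignment in the airport example can be sketched as a toy optimisation problem (all route names, times, and penalty values below are made up for illustration, not drawn from any real system): an optimiser given only “minimise time” picks the reckless option, while one whose cost function also encodes the unstated human preferences picks the sensible one.

```python
# Toy illustration of a misspecified objective. The routes and numbers
# are hypothetical; the point is that the optimiser does exactly what
# the objective says, not what the human meant.
routes = [
    {"name": "run red lights at 120 km/h", "time_min": 12, "lawful": False, "comfortable": False},
    {"name": "highway at the speed limit", "time_min": 25, "lawful": True,  "comfortable": True},
    {"name": "scenic back roads",          "time_min": 40, "lawful": True,  "comfortable": True},
]

# Objective as literally stated: "get me to the airport as fast as possible".
fastest = min(routes, key=lambda r: r["time_min"])
print(fastest["name"])  # the reckless option wins on time alone

# Objective with the implicit human preferences made explicit.
def aligned_cost(route):
    penalty = 0 if route["lawful"] else 1000       # we implicitly value legality...
    penalty += 0 if route["comfortable"] else 100  # ...and not being covered in vomit
    return route["time_min"] + penalty

sensible = min(routes, key=aligned_cost)
print(sensible["name"])  # the lawful highway route wins
```

The hard part, of course, is that real preferences are far too numerous and subtle to enumerate as penalty terms; this sketch only shows why leaving them out changes the optimiser’s choice.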

Source: mobius.co
