AI Risks Need To Be Mitigated Urgently & At A Global Level


A question on many people's minds is whether Artificial Intelligence (AI) poses a threat to humanity on the scale of nuclear war and pandemics.


A new statement signed by notable public figures and AI scientists argues that AI should be treated as a threat to humanity.

The statement reads as follows:

 “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The statement was published by the Center for AI Safety (CAIS), a San Francisco-based non-profit organization. CAIS said the purpose of the statement is to open a global dialogue about the urgent risks posed by AI.

Geoffrey Hinton, the AI research pioneer who recently left Google, appears at the top of the list of signatories. He has been sounding the alarm about what he calls a clear and present danger posed by AI, saying:

“These things are getting smarter than us.”

Hinton, alongside Yoshua Bengio, pioneered the deep learning methods used by Large Language Models (LLMs) such as GPT-4.

One notable absence from the list is Yann LeCun, who shared the Turing Award with Hinton and Bengio and is the chief AI scientist at Meta.

A number of CEOs of major AI players also signed, including Sam Altman of OpenAI, Demis Hassabis of Google DeepMind, and Dario Amodei of Anthropic.

A section of the CAIS website lists potential AI risks, which include:

  • Weaponization
  • Misinformation
  • Deception
  • Power-seeking behaviour, among others

On the possibility of AI attempting to grab power, the website says the following:

“AIs that acquire substantial power can become especially dangerous if they are not aligned with human values. Power-seeking behaviour can also incentivize systems to pretend to be aligned, collude with other AIs, overpower monitors and so on. On this view, inventing machines that are more powerful than us is playing with fire.”

CAIS also asserts that the creation of power-hungry AI may be incentivized by political leaders who see its strategic advantages, quoting Vladimir Putin, who said:

“Whoever becomes the leader in [AI] will become the ruler of the world.”

The full CAIS statement can be read on its website. (Source: Center for AI Safety)

The CAIS statement is the latest in a series of high-profile initiatives focused on addressing AI safety.

Earlier in 2023, a highly controversial open letter, supported by some of the same people who endorsed the current warning, urged a six-month pause in AI development. It was met with mixed reactions in the scientific community.

Some critics of that letter said it overstated the risks AI poses, while others agreed the risks were real but disagreed with the proposed solution.


The earlier letter was authored by the Future of Life Institute (FLI), which had this to say about the statement issued by CAIS:

“Although FLI did not develop this statement, we strongly support it, and believe the progress in regulating nuclear technology and synthetic biology is instructive for mitigating AI risk.”

FLI recommends a number of actions to mitigate the risks of AI. These include developing and implementing international agreements that specifically limit the proliferation of high-risk AI and mitigate the risks of advanced AI. It also proposes setting up an intergovernmental organization, similar to the IAEA (International Atomic Energy Agency), that would promote the peaceful use of AI while ensuring risks are mitigated and agreed guardrails are enforced.

Some AI experts say that such letters are misguided and that AGI (Artificial General Intelligence) is not the most pressing concern.

Emily Bender, a professor of computational linguistics at the University of Washington and co-author of the paper at the center of Google's 2020 firing of members of its AI ethics team, said in a tweet that the statement is a:

“wall of shame where people are voluntarily adding their own names.”

She goes on to write:

“We should be concerned by the real harms that [corporations] and the people who make them up are doing in the name of ‘AI,’ not [about] Skynet.”

One example of such real-world harm involves an eating disorder helpline that recently fired its human team and replaced them with a chatbot called Tessa.

The National Eating Disorders Association (NEDA) had run the helpline for 20 years.

Vice reported that after NEDA workers took steps to unionize last month, the association announced that it would replace its employees with its own chatbot, Tessa, as the helpline's primary support system.

Tessa was taken down by the organization two days before it was due to go live, as it was found to encourage damaging behaviors that could make eating disorders worse, including severely restricting caloric intake and weighing oneself daily.

 

 
