
Tech leaders warn that AI poses the risk of extinction comparable to nuclear war

Author: The Unsolved Mysteries of Global Decryption

Illustrative image. (Pixabay)

Altman and other tech leaders warn that artificial intelligence (AI) could pose a risk of extinction comparable to the consequences of a nuclear war.

AI industry experts and tech leaders said in an open statement that AI could lead to human extinction, and that reducing the risks associated with the technology should be a global priority.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement released on Tuesday (May 30) reads.

Sam Altman, CEO of OpenAI, the maker of the AI chatbot ChatGPT, signed the brief statement from the Center for AI Safety, along with executives from Google's AI division DeepMind and from Microsoft.


After ChatGPT was released to the public in November 2022, AI technology quickly gained popularity, and its development accelerated in the months that followed.

ChatGPT reached 100 million users within two months of its launch. The AI-powered chatbot's ability to respond to users' prompts in a human-like way surprised both researchers and the public, suggesting that AI could convincingly imitate humans and take over some of their work.

Tuesday's statement said there is growing discussion of the "broad, important and urgent risks posed by AI," but that those discussions can "hardly address the most serious risks that state-of-the-art AI could cause." The statement's purpose, it said, is to overcome that obstacle and open discussion on the issue.

ChatGPT has sparked broader awareness of, and interest in, artificial intelligence. Major tech companies around the world are now racing to develop AI products and capabilities designed to outperform their competitors.


Altman admitted in March that he himself was "a little scared" of AI, in part because he feared authoritarian regimes could develop and exploit the technology. Other tech leaders, such as Tesla's Elon Musk and former Google CEO Eric Schmidt, have also warned of the risks AI could pose to society.

In an open letter in March, Musk, Apple co-founder Steve Wozniak, and other tech leaders urged AI research labs to pause for at least six months the training of AI systems more powerful than GPT-4, OpenAI's latest and largest language model.

"Contemporary AI systems are now becoming human-competitive at general tasks," the letter said.

The letter asks: "Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?"

Last week, Schmidt again warned that humans will face "existential risks" from AI as the technology develops.
