
Bill Gates' 5,500-word explanation of AI risk: real, but manageable


Focus:

  • 1 Gates argues that the future of artificial intelligence will be neither as grim nor as rosy as some people think. The risks of AI are real, but they remain manageable.
  • 2 Gates says AI's impact will not be as sweeping as the Industrial Revolution, but it will be as significant as the advent of the personal computer, and many of the problems AI creates can be solved by AI itself.
  • 3 AI's main effect on work will be to help people do their jobs more efficiently, and any unemployment it causes is also manageable.
  • 4 Humans can manage the risks of AI while maximizing its benefits, but governments need to move quickly.

Tencent Technology News, July 12: Microsoft co-founder Bill Gates published a blog post on Tuesday saying that the future of artificial intelligence will be neither as dire as some fear nor as rosy as others hope. The risks of AI are real, but they remain manageable. Gates believes that AI's impact will not be as sweeping as the Industrial Revolution, but it will be as significant as the advent of the personal computer, and that many of the problems AI creates can be solved by AI itself.

The following is the full text of the article:

The risks posed by AI may seem overwhelming. What happens to people who are robbed of their jobs by smart machines? Will AI affect election results? What if future AI decides that humans are no longer needed and wants to get rid of them?

These are fair questions, and the concerns they raise need to be taken seriously. But there's good reason to think we can deal with them: this is not the first time a major innovation has introduced new threats that had to be contained. We've been through it before.

Whether it was the arrival of the automobile or the rise of the personal computer and the Internet, humanity has weathered transformative moments before, and despite plenty of turbulence along the way, things turned out better in the end. Shortly after the first cars were on the road, the first car crash occurred. But we didn't ban cars; we adopted speed limits, safety standards, licensing requirements, drunk-driving laws, and other rules of the road.

We are now in the embryonic phase of another profound transformation, the era of artificial intelligence. This is similar to those uncertain times before speed limits and seat belts. AI is changing so quickly that it's unclear what will happen next. We are facing major questions about how current technology works, how people will use it for bad intentions, and how AI will change society.

In moments like these, it's natural to feel uneasy. However, history shows that it is possible to solve the challenges posed by new technologies.

I've written before about how AI will revolutionize our lives. It will help address issues such as health, education, climate change, etc., which seemed difficult to solve in the past. The Gates Foundation has made this a priority, and our CEO, Mark Suzman, recently shared his thoughts on the role of AI in reducing inequality.

I'll have more to say in the future about the benefits of AI, but in this article, I want to acknowledge the concerns I hear and read most often, many of which I share. I will also explain my opinion on them.

It's clear from all the articles on AI risks so far that no one knows all the answers. Another thing that is clear to me is that the future of AI is not as grim as some people think it is, nor as optimistic as others think. The risks are real, but I am optimistic that they are manageable. As I discuss each issue, I return to a few topics:

-- Many of the problems caused by AI have historical precedents. For example, AI will have a big impact on education, but so did handheld calculators a few decades ago and, more recently, computers in the classroom. We can learn from what has worked in the past.

-- Many problems brought about by artificial intelligence can also be managed with the help of artificial intelligence.

-- We will need to adapt old laws and adopt new ones, just as existing laws against fraud have had to be adapted to the online world.

In this article, I will focus on the risks that already exist or soon will. I'm not addressing what happens when we develop an AI that can learn any subject or task, unlike today's AI, which is built for specific purposes. Whether we reach that point in a decade or a century, society will have profound questions to consider. What if a super-intelligent AI establishes its own goals? What if those goals conflict with humanity's? Should we build a super AI at all?

But considering these long-term risks should not come at the expense of more immediate risks. Let me now turn to these issues:

AI-generated falsehoods and misinformation could undermine elections and democracy

The idea that technology can be used to spread lies and falsehoods is not new. People have been doing this with books and leaflets for centuries. With the advent of word processors, laser printers, email, and social networks, this has become much easier.

AI takes the problem of fake text and extends it, allowing almost anyone to create fake audio and video, known as deepfakes. Imagine receiving a voice message that sounds like your child saying, "I've been kidnapped. Please send $1,000 to this bank account within the next 10 minutes, and don't call the police." The emotional impact would be terrifying, far beyond that of an email with the same words.

On a larger scale, AI-generated deepfakes could be used to try to tilt elections. Of course, it doesn't take complicated technology to cast doubt on the legitimate winner of an election, but AI will make that easier.

There are already fake videos featuring fabricated footage of well-known politicians. Imagine that on the morning of a major election, a video showing a candidate robbing a bank goes viral. It's fake, but it takes news outlets and the campaign hours to prove it. How many people would see it and change their votes at the last minute? It could tip the scales, especially in a close election.

When OpenAI co-founder Sam Altman testified before a U.S. Senate committee recently, senators from both parties focused on AI's impact on elections and democracy. I hope this issue stays on everyone's agenda.

We certainly haven't solved the problem of misinformation and deepfakes. But two things make me cautiously optimistic. One is that people are capable of learning not to take everything at face value. For years, email users fell for scams in which someone posing as a Nigerian prince promised a big payout in exchange for sharing your credit card number. But eventually, most people learned to look twice at those emails. As the scams got more sophisticated, so did many of their targets.

Another thing that gives me hope is that AI can help identify deepfakes. For example, Intel has developed a deepfake detector, and the Defense Advanced Research Projects Agency (DARPA) is working on techniques to identify whether video or audio has been manipulated.

It will be a cyclical process: someone finds a way to detect fakes, someone else figures out how to defeat the detection, someone develops countermeasures, and so on. Success won't be perfect, but we won't be helpless either.

AI has made it easier to launch attacks on people and governments

Today, when hackers want to find exploitable flaws in software, they do it through brute force: writing code that pounds away at potential weaknesses until they find a way in. It involves going down a lot of dead ends, which means it takes time and patience.

Security experts who want to counter hackers have to do the same thing. Every software patch you install on your phone or laptop represents many hours of searching, by people with good intentions and bad ones alike.

AI models will speed up this process by helping hackers write more effective code. They will also be able to use publicly available information about individuals, such as where they work and who their friends are, to develop phishing attacks that are more advanced than the ones we see today.

The good news is that AI can be used for both good and bad purposes. Security teams in government and the private sector need to have up-to-date tools to find and fix security vulnerabilities before they can be exploited by criminals. I hope that the software security industry will expand on what they are already doing in this area – and that should be their top concern.

That's why we shouldn't try to temporarily pause new developments in AI, as some have proposed. Cybercriminals won't stop making new tools. Nor will people who want to use AI to design nuclear weapons and bioterror attacks. The effort to stop them needs to continue at the same pace.

There is also a related risk at the global level: an arms race over AI that can be used to design and launch cyberattacks against other countries. Every government wants the most powerful technology so it can deter attacks from its adversaries. This incentive not to let anyone get ahead could spark a race to create ever more dangerous cyberweapons. Everyone would be worse off.

It's a scary thought, but we have history to guide us. Despite its flaws, the world's nuclear nonproliferation regime has prevented the all-out nuclear war that our generation grew up fearing. Governments should consider creating a global body for AI similar to the International Atomic Energy Agency.

AI will take people's jobs

In the next few years, the main impact of AI on work will be to help people do their jobs more efficiently. That will be true whether they work in a factory or in an office handling sales calls and accounts payable. Eventually, AI will be good enough at expressing ideas that it will be able to write your emails and manage your inbox for you.

You will be able to write a request in plain English, or any other language, and generate a rich presentation of your work.

As I argued in my February article, productivity gains are good for society. They give people more time to do other things, at work and at home. And the need for people who help others, by teaching, caring for patients, and supporting the elderly, for example, will never go away. But it is true that some workers will need support and retraining as we make this transition to an AI-driven workplace. That is a role for governments and businesses, and they will need to manage it well so that workers aren't left behind and so that we avoid the kind of disruption to people's lives that occurred during the decline of manufacturing jobs in the United States.

Also, keep in mind that this is not the first time that new technologies have led to a major shift in the labor market. I don't think the impact of AI will be as dramatic as the Industrial Revolution, but it will certainly be as dramatic as the introduction of the personal computer. Word processing apps didn't eliminate office work, but they changed office work forever. Employers and employees have to adapt, and they do. The shift brought about by AI will be a bumpy one, but we have every reason to think that we can reduce the disruption to people's lives and livelihoods.

AI inherits human biases and makes things up

Hallucinations, the term for when an AI confidently makes a claim that simply isn't true, usually occur because the machine doesn't understand the context of your request. Ask an AI to write a short story about a vacation to the moon and it might give you a very imaginative answer. But ask it to help you plan a trip to Tanzania, and it might try to send you to a hotel that doesn't exist.

Another risk of AI is that it reflects or even reinforces existing biases against people of certain gender identities, races, ethnicities, and so on.

To understand why hallucinations and biases occur, it's important to know how today's most common AI models work. They're essentially very complex versions of code that allow your email application to predict the next word you're going to type: they scan vast amounts of text – in some cases, almost everything you can find online – and analyze it to find patterns in human language.

When you ask an AI a question, it looks at the words you used and then searches for chunks of text that are often associated with those words. If you write "list the ingredients for pancakes," it may notice that the words "flour, sugar, salt, baking powder, milk, and eggs" often appear with that phrase. Then, based on the order in which it knows those words usually appear, it generates an answer. (AI models that work this way use what are called transformers. GPT-4 is one such model.)
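To make this pattern-matching idea concrete, here is a toy sketch in Python. It is only an illustration of the kind of statistical next-word prediction described above, not how GPT-4 or any real transformer model actually works, and the "training text" is invented for the example.

```python
from collections import Counter, defaultdict

# Made-up "training text" for the illustration: two sentences about pancake
# ingredients, split into individual words.
training_text = (
    "to make pancakes mix flour sugar salt baking powder milk and eggs . "
    "to make pancakes whisk flour sugar salt baking powder milk and eggs ."
).split()

# Count which word tends to follow each word in the training text.
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the training text."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

# Starting from "flour", the model reproduces the ingredient list it has seen.
# It has no idea what pancakes are; it only echoes statistical patterns, which
# is also why a model like this can produce fluent text that isn't true.
word = "flour"
output = [word]
for _ in range(7):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # flour sugar salt baking powder milk and eggs
```

Real models predict over enormous vocabularies using billions of learned parameters rather than simple word counts, but the underlying idea, predicting what text is likely to come next based on patterns in training data, is the same.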

This process explains why an AI may hallucinate or appear biased: it has no context for the questions you ask or the things you tell it. If you tell one that it made a mistake, it might say, "Sorry, I mistyped that." But that's a hallucination; it didn't type anything. It says that only because it has scanned enough text to know that "Sorry, I mistyped that" is a sentence people often write after someone corrects them.

Similarly, AI models inherit whatever biases are baked into the text they are trained on. If a model reads a lot about physicians, and the texts mostly mention male doctors, then its answers will assume that most doctors are men.

While some researchers believe hallucinations are an inherent problem, I disagree. I am optimistic that, over time, AI models can be taught to distinguish between fact and fiction. OpenAI, for example, is doing promising work on this.

Other organizations, including the Alan Turing Institute and the National Institute of Standards and Technology, are also working on the bias problem. One approach is to build human values and higher-level reasoning into AI. This is similar to how a self-aware person works: maybe you assume most doctors are men, but you are conscious enough of that assumption to know you have to deliberately fight it. AI can operate in a similar way, especially if its models are designed by people from diverse backgrounds.

Finally, everyone who uses AI needs to be aware of the bias problem and become an informed user. The essay you ask an AI to draft could be riddled with biases or factual errors. You need to check the AI's biases as well as your own.

Students won't learn to write because AI will do it for them

Many teachers worry that AI will disrupt their work with students. In an age when anyone with internet access can use AI to write a decent first draft of an essay, what's to stop students from turning it in as their own work?

There are already AI tools that are learning to discern whether something was written by a human or a computer so that teachers can know when their students aren't doing their homework. But some teachers aren't trying to stop their students from using AI in their writing – they're actually encouraging it.

In January, a longtime English teacher named Cherie Shields wrote an article in Education Week about how she uses ChatGPT in her classroom. It has helped her students with everything from getting started on essays to writing outlines, and it has even given them feedback on their work.

"Teachers will have to embrace AI technology as another tool that students can use," she wrote. "Just as we used to teach students how to do proper Google searches, teachers should design clear lessons around how ChatGPT bots can help with essay writing. Acknowledging the existence of AI and helping students use it could revolutionize the way we teach. "Not every teacher has the time to learn and use a new tool, but educators like Cherie Shields make a good argument that those who do will benefit immensely."

This reminds me of the time when electronic calculators became widespread in the 1970s and '80s. Some math teachers worried that students would stop learning how to do basic arithmetic, but others embraced the new technology and focused on the thinking skills behind the arithmetic.

There is another way AI can help with writing and critical thinking. Especially in these early days, when hallucinations and biases are still a problem, educators can have AI generate articles and then work with their students to check the facts. Educational nonprofits like Khan Academy and the OER Project, which I fund, offer teachers and students free online tools that put a big emphasis on testing assertions. Few skills are more important than knowing how to distinguish what's true from what's false.

We really need to make sure that educational software helps close the achievement gap rather than making it worse. Today's software is mostly geared toward students who are already motivated. It can create a study plan for you, point you to good resources, and test your knowledge. But it doesn't yet know how to draw you into a subject you aren't yet interested in. That is a problem developers will need to solve so that students of all kinds can benefit from AI.

What's next?

I believe there are more reasons than not to be optimistic that we can manage the risks of AI while maximizing its benefits. But we need to act quickly.

Governments need to build up expertise in artificial intelligence so they can write smart laws and regulations for this new technology. They'll need to grapple with misinformation and deepfakes, security threats, changes to the job market, and the impact on education. To cite just one example: the law needs to be clear about which uses of deepfakes are legal and about how deepfakes should be labeled, so that everyone understands when something they see or hear is not genuine.

Political leaders need the ability to engage in informed, thoughtful conversations with voters. They also need to decide the extent to which they cooperate with other countries on these issues, rather than going it alone.

In the private sector, AI companies need to do their work safely and responsibly. That includes protecting people's privacy, making sure their AI models reflect basic human values, minimizing bias, spreading the benefits to as many people as possible, and preventing the technology from being used by criminals or terrorists. Companies in many sectors of the economy will need to help their employees transition to an AI-centered workplace so that no one is left behind. And customers should always know when they are interacting with an AI rather than a human.

Finally, I encourage everyone to pay as much attention as possible to the development of artificial intelligence. This is the most transformative innovation we will see in our lifetime, and a healthy public debate will depend on everyone's understanding of the technology and its benefits and risks. The benefits will be enormous, and the best reason to believe we can manage risk is that we've done it before. (Mowgli)
