
Zhang Yaqin: AI is risky, but we can make it kind 丨Human perspectives in the age of AI

Dear Readers,

It is my pleasure to share with you, through this letter, my thoughts and those of the Institute for AI Industry Research (AIR) at Tsinghua University on the future of AI (artificial intelligence). As 2023 draws to a close, AI technology, represented by generative AI, has stirred people's hearts, bringing new hope to an era full of uncertainty and conflict. As our friends at the Economic Observer have said, we are standing at a new turning point today. What will our world look like in another ten years?

I am very confident in the capabilities and potential of AI. In the next decade, AI will be ubiquitous. It will affect not only how we dress, eat, live, travel, learn, and work, but also our enterprises, our society, and our policies. At the same time, AI will match human capabilities in most fields, and even surpass them.

In Yizhuang, Beijing, it is now common to see autonomous vehicles on the road, but most of them still have safety drivers sitting inside. In another ten years, truly driverless vehicles will be the norm. Ten years from now, humanoid robots will appear in some homes, monitoring your physical condition and chatting with you like a friend or assistant. The communities we live in will have robot security guards, and it is quite possible that police will be robots in the future as well. Already, some hospitals use AI to read patients' medical images. Ten years from now, there will be more robots than humans.

Within ten years, the scenarios described above will gradually become reality. Their scope of application may not be very wide at first, but they will appear step by step.

At the same time, another direction of AI development, biological intelligence, will also have a great impact on society. For people with impaired vision, hearing, or other physical functions, biological intelligence can help restore what was lost. With the help of AI, people will be able to control objects using brain signals or heart signals. For example, people now play the piano with their fingers on a keyboard; ten years from now, they may play it through brainwave-controlled machinery. Similar operations can occur in many scenarios: controlling a prosthetic hand to write, pour coffee, or shake hands exactly like a natural hand, or making a prosthetic leg run, climb mountains, and scale rocks faster than a human.

At present, some scientists are already doing research in the direction of biological intelligence. Ten years from now, these research results will become products and truly enter human society.

If we look at AI from a longer perspective, in 50 or 100 years a new life form may be born: a combination of silicon-based and carbon-based life that brings humans and machines together. At that point, AI will make humans more capable, to some extent "superhuman".

It may sound fantastic or incredible right now, but we can't underestimate AI. The AI we see now is just the tip of the iceberg, and although it is already remarkable, it is still very young.

If we use the development of the Internet as an analogy, today's AI has just reached its Netscape moment: an underlying system has only just become available for everyone to use. After Netscape came IE (the browser), portals, social networking, e-commerce, and search, and only then did the Internet industry truly take off. In the future, AI, like the Internet, will re-optimize every industry.

Just as Windows was the operating system of the PC (personal computer) era and Android/iOS are the operating systems of the mobile era, the large model will be the "operating system" of the artificial intelligence era, reconstructing the application ecosystem and reshaping the industrial landscape. Compared with the mobile Internet era, the industrial opportunities of the large-model era are at least ten times greater; compared with the PC era, at least a hundred times greater.

However, there is a very important issue that I have raised on many occasions this year, and to which some practitioners still pay too little attention. I would like to emphasize it again.

The future world can be divided into three: the information world, the physical world, and the biological world. In all three, AI could run the risk of getting out of control.

In the information world, the risks posed by AI are relatively small. One risk is nonsense (hallucinated or fabricated content); the other is fraud. The risks of information getting out of control are manageable; the most serious consequence is that people are scammed out of money. The problem can be addressed through existing laws or by introducing new ones.

In the physical world, AI will be connected to robots, autonomous vehicles, drones, IT (information technology) equipment, and so on, and the number of these machines may be hundreds of times that of humans, or more. In the physical world, AI getting out of control would be a catastrophe. If it is not well controlled, it could create an existential crisis for humanity, just as nuclear weapons or the COVID-19 pandemic did.

AI in the biological world carries the greatest risk. Through biological intelligence, the human brain is connected to AI via chips or sensors. The advantage is that the body can be monitored in real time and diseases prevented or treated. But if something goes wrong, if biological intelligence gets out of control or is exploited by bad actors, the damage would be unimaginable.

At present, there is a view in the technology community that technology should come first: finish building the AI models, architectures, algorithms, and other technologies, and only then let government departments regulate them. I disagree with this approach. We should involve the government from the beginning and advance the technology together; if we wait until the technology is perfected to regulate it, it may be too late.

At the same time, I also recommend that the best and brightest people do research on governance and develop governance technologies. We can make AI smarter and more capable than humans, but it is more important to make it kinder and more creative, aligned with our values, so that it does not make big mistakes. The most important thing is to build a kind AI.

This is something human beings can do. It is like educating children: we teach them from an early age and let them innovate and explore as they grow, but the most important thing is that they have a kind heart. There will of course be many challenges, but this is the responsibility of AI technologists, entrepreneurs, and large corporations.

Like Asimov's Three Laws of Robotics, there should be some basic principles for the development of artificial intelligence. A few weeks ago, I convened a small workshop in the UK with two Turing Award winners, Yoshua Bengio and Andrew Chi-Chih Yao (Yao Qizhi), where we made some concrete recommendations.

On government regulation, we recommend mandatory registration of the development, sale, and use of AI systems whose models exceed certain capability thresholds, including their open-source copies and derivatives, to give governments critical, currently missing, visibility into emerging risks.

We also recommend drawing clear red lines and establishing a rapid and safe termination procedure: as soon as an AI system crosses a red line, the system and all of its copies are to be shut down immediately. Governments should work together to build and sustain this capacity.

For AI developers, we recommend that cutting-edge AI systems be demonstrably aligned with the intentions, social norms, and values of their designers. They must also be robust against malicious attacks and rare failure modes. We must make sure that these systems remain under full human control.

In addition, we call on leading AI developers to commit at least 10% of their AI R&D funding to AI safety research, and call on government agencies to fund academic and non-profit AI safety and governance research in at least the same proportion.

These measures to control AI risks should be in place within ten years; after that, it may be too late.

We humans have two kinds of wisdom: the wisdom to invent technology and the wisdom to steer its direction.

As an optimist, I firmly believe that we can maintain this balance and let AI innovations and technologies serve the goodness and well-being of humanity.

Zhang Yaqin

December 2023

(The author is an academician of the Chinese Academy of Engineering and dean of the Institute for AI Industry Research (AIR) at Tsinghua University.)
