The Paper
On October 14, the winners of the 2024 Nobel Prize in Economics were announced. MIT professors Daron Acemoglu and Simon Johnson and University of Chicago professor James A. Robinson received the award for their "research on how institutions are formed and how they affect prosperity."
Acemoglu has been a perennial favorite for the Nobel Prize in Economics in recent years. He is a professor in the Department of Economics at MIT, where his research interests include macroeconomics and political economy. Simon Johnson is a professor at the MIT Sloan School of Management and served as Chief Economist of the International Monetary Fund from 2007 to 2008. Acemoglu and Johnson co-authored Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity in 2023, which explores the history and economics of major technological change.
The book also discusses the artificial intelligence (AI) revolution that could upend human society, arguing that AI development has gone astray and that many algorithms are designed to replace humans as much as possible, "but the way technological progress is made is to make machines useful to humans, not to replace them."
Acemoglu believes generative AI is a promising technology but is skeptical of some of the overly optimistic predictions about AI's effects on productivity and economic growth. In an earlier paper published by the U.S. National Bureau of Economic Research, he argued that the productivity gains from future AI progress may not be large, estimating that AI will add no more than about 0.66% to total factor productivity (TFP) over the next ten years.
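That estimate reflects a task-based, back-of-the-envelope calculation in the spirit of Hulten's theorem. The sketch below is illustrative only; the symbols and numerical inputs are assumptions chosen to show the logic, not figures quoted from the paper:

\[
\Delta \text{TFP} \;\approx\; s_L \cdot \alpha \cdot \beta \cdot \gamma
\]

where \(s_L\) is the labor share of costs, \(\alpha\) the share of tasks exposed to AI, \(\beta\) the fraction of exposed tasks that are profitable to automate, and \(\gamma\) the average cost savings per automated task. With illustrative values \(s_L = 0.57\), \(\alpha = 0.20\), \(\beta = 0.23\), \(\gamma = 0.27\):

\[
\Delta \text{TFP} \;\approx\; 0.57 \times 0.20 \times 0.23 \times 0.27 \;\approx\; 0.007,
\]

that is, on the order of the cited 0.66% ceiling over a decade.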
In an exclusive interview with The Paper (www.thepaper.cn) in June this year, Acemoglu said that the more he delves into AI's capabilities and direction, the more convinced he is that its current trajectory is repeating and exacerbating some of the worst technological mistakes of the past few decades. Most of the top players in the AI space are driven by the unrealistic and dangerous dream of achieving artificial general intelligence, "which is to put machines and algorithms above humans."
Some analysts see Acemoglu as an AI pessimist. In the interview, he responded that, as a social scientist, he pays more attention to the negative social impacts.
As the race to commercialize artificial intelligence accelerates and AI models compete for position, technology giants such as OpenAI, Microsoft, Google, and Nvidia have undeniably seized the lead in AI development. Acemoglu said he is very concerned about AI becoming a way to transfer wealth and power from ordinary people to a small group of tech entrepreneurs, and that the "inequality" we see now is "a canary in a coal mine."
The following is the full text of the interview:
Technology & Society: The greatest asset is people
The Paper: Your research covers a wide range of fields, including political economy, technological change, and inequality. In what context did you begin to focus on the role of technological development in inequality? What was your initial view of technological development, and how did it evolve into your current position that "the current path of AI is neither good for the economy nor good for democracy"?
Acemoglu: Much of my research focuses on the interplay between political economy and technological change, two of the forces that shape our capabilities and opportunities for growth, as well as our political and economic choices.
AI has become the most important technology of this era, partly because it has attracted a great deal of attention and investment, partly because it has made some impressive advances, especially as GPU performance has improved, and partly because of its ubiquitous impact. These factors led me to conduct research in this area.
The more I delve into AI's capabilities and direction, the more convinced I am that its current trajectory is repeating and exacerbating some of the worst technological mistakes of the last few decades: an overemphasis on automation, just as we prioritized automation and other digital technologies without adequate investment in creating new tasks, as well as all the mistakes social platforms have made in trying to monetize people's data and interests.
I'm also particularly concerned that most of the top players in the AI space are driven by the unrealistic and dangerous dream of achieving artificial general intelligence, which puts machines and algorithms above humans, and which often serves as a way for these leading players to rise above everyone else.
The Paper: Advanced computer technology and the Internet have transferred enormous wealth to a number of billionaires and made tech giants more powerful than ever before. Still, we embrace this technological innovation because it also has positive effects. There are pros and cons to technological change, and historically society has always found a way to adapt to new technologies. With a new wave of technology sweeping in, why do you think inequality is particularly worrisome this time?
Acemoglu: When it comes to social platforms and artificial intelligence, I agree with that, but when it comes to the Internet, I have a different opinion. Although I think the Internet has been misused in some ways, I certainly don't deny that it is a very beneficial technology that plays an important role in connecting people, providing them with information, and creating new services and platforms.
As for AI, I'm very concerned about it becoming a way to transfer wealth and power from ordinary people to a small group of tech entrepreneurs. The problem is that we lack the control mechanisms needed to ensure that ordinary people benefit from AI, such as strong regulation, worker participation, civil society, and democratic oversight. The "inequality" we see is a "canary in the coal mine": it means that something worse is coming.
The Paper: You have pointed out that the inequality caused by automation is "the result of how companies and society choose to use technology." As the tech giants' market power and influence grow, perhaps even spiraling out of control, what is the key to our response? If you were the CEO of a big tech company, how would you use AI to manage the company?
Acemoglu: My advice to CEOs is to realize that their greatest asset is their workers and, instead of focusing on cutting costs, to look for ways to increase workers' productivity, capabilities, and impact. That means using new technologies to create new tasks and new capabilities for workers. Of course, automation is beneficial, and we will inevitably apply it more in the future, but it is not the only way to increase productivity, and it shouldn't be the only thing CEOs pursue and prioritize.
The Paper: U.S. antitrust enforcers have publicly voiced a series of concerns about artificial intelligence, and the U.S. Department of Justice and the Federal Trade Commission have reportedly reached an agreement that paves the way for antitrust investigations into Microsoft, OpenAI, and Nvidia. Will this antitrust action against big tech companies really increase competition in the market and prevent AI development from being dominated by a small number of companies?
Acemoglu: Absolutely, antitrust is important, and the root of some of the problems in the tech industry is the lack of antitrust enforcement in the United States. The Big Five tech companies have each established a strong monopoly in their field because they have been able to acquire potential competitors without any regulatory scrutiny. In some cases, to consolidate their monopoly positions, they have bought and shelved technologies that might compete with them. We absolutely need antitrust to break down the political power of big tech, which has grown enormously over the last three decades.
But I also want to emphasize that antitrust alone is not enough; we need to redirect technology in a direction that benefits society. Simply splitting Meta into Facebook, Instagram, and WhatsApp would not, by itself, increase competition in the market or prevent a few companies from dominating AI development. In the field of AI, if the concern is that the technology is being used for manipulation, surveillance, or other malicious purposes, antitrust by itself will not be the solution; it must be integrated into a broader regulatory agenda.
Technology & People: How to avoid repeating the mistakes of the past
The Paper: You've been emphasizing "machine usefulness," which means "trying to make machines better for humans." How do you think this should be achieved? What are the consequences of failing to achieve such a goal?
Acemoglu: It has to do with the advice to CEOs I mentioned above. We want machines that stretch human capabilities, and with AI there is a good chance that this can happen. AI is an information technology, so we should consider what kind of AI tools can provide useful, context-dependent, real-time information to human decision-makers, and how AI tools can be leveraged to make humans better problem solvers, capable of performing more complex tasks. This is not just for creatives, academics, or journalists, but also for blue-collar workers, electricians, plumbers, healthcare workers, and all other professions. Better access to information drives more informed decisions and higher-level tasks, and that is where machine usefulness comes in.
The Paper: You propose fair tax treatment of workers' labor. Is taxing equipment and software in the same way as human employees, or reforming taxes to encourage employment rather than automation, a practical solution?
Acemoglu: Yes, as Simon Johnson and I argue in Power and Progress, a fairer tax system could be part of the solution. In the United States, companies face a marginal tax rate of more than 30% when they hire labor; when they use computer equipment or other machinery to perform the same tasks, the tax rate is less than 5%. This creates an undue incentive for automation while discouraging employment and investment in training and human capital. Harmonizing the marginal tax rates on capital and labor is a sound policy idea.
The Paper: You're proposing tax reform to reward employment rather than automation. How would such a reform affect companies' adoption of and investment in automation technology?
Acemoglu: We have to be very careful in this area not to discourage investment, especially in the many countries that need rapid growth and new investment in areas such as renewable energy and healthcare technology. But if we can encourage technology to evolve in the right direction, that is also good for business. So my proposal is to remove the excessive incentives for automation, and I hope it can be implemented in a way that does not discourage business investment in general.
The Paper: The rapid development of social platforms has brought some negative effects, such as information bubbles and the spread of misinformation. How do you think we can avoid repeating the same mistakes in the further development of artificial intelligence?
Acemoglu: Three principles help avoid repeating the mistakes of the past: (1) prioritize machine usefulness, as I have argued; (2) empower workers and citizens instead of trying to manipulate them; and (3) introduce a better regulatory framework that holds tech companies accountable.
Technology & Industry: A digital advertising tax makes the industry more competitive
The Paper: Technologist Jaron Lanier has emphasized the issue of data ownership for Internet users. How do you think the ownership and control of personal data should be better protected in policy terms?
Acemoglu: I think that's an important direction. First, we will need more and more high-quality data, and the best way to produce that data is to reward the people who create it, which data markets can do. Second, data is currently being plundered by tech companies, which is both unfair and inefficient.
However, the point is that the data market is not like a fruit market: my data is usually highly substitutable for your data, so if tech companies negotiate with individuals one by one to buy their data, there will be a "race to the bottom," and negotiating data purchases individually would be very costly. So I think a well-functioning data marketplace requires some form of collective data ownership, which could be a data union, a data industry association, or some other form of collective organization.
The Paper: What do you think about introducing a digital advertising tax to limit the monetization of algorithm-driven misinformation? What impact might such a tax have on the digital advertising industry and on the dissemination of information?
Acemoglu: I support a digital advertising tax because digital advertising-based business models are extremely manipulative, and they go hand in hand with strategies that stoke emotional anger, digital addiction, extreme jealousy, and information cocoons. They also combine with business models that exploit personal data, leading to negative consequences such as mental health problems, social polarization, and the erosion of democratic citizenship.
To make matters worse, if we were to redirect AI as I have suggested, we would need new business models and new platforms, but today's digital advertising-based business models make that impossible. You can't launch a new social platform based on user subscriptions, and you can't replicate the success of Wikipedia, because you are up against companies that offer free services and already have a large customer base. So I see the digital advertising tax as a way to make the tech industry more competitive: if the "low-road tactics" of capturing user data and monetizing it through digital advertising can be curbed, new business models and more diverse products will emerge.
The Paper: Can you share some of the positive changes that you think are likely to be brought about by future technological developments, and how we should prepare for and promote these changes?
Acemoglu: If we use AI correctly, it can improve the vocational skills of workers in all walks of life, and it can also improve the process of scientific discovery. I also think there are ways to use AI democratically.
The Paper reporter Chen Qinhan