
Can the "toxic bubble" of the algorithm really be quit?

Image source: Visual China

Text | Alter; Author | Zhang Hefei; Editor | Shen Jie

After turning off the personalized recommendation function, Lin Mo fell into deep anxiety.

"Before opening today's headlines, there will always be a lot of content that interests you on the home page, although you feel that your own may be wrapped up in the algorithm, but it really saves a lot of unnecessary information screening." Now I turn off personalized recommendations, and the screen is full of spam messages that have nothing to do with myself. ”

After the Provisions on the Administration of Algorithm Recommendations for Internet Information Services formally took effect, many apps rolled out an algorithm off switch, letting users turn off "personalized recommendations" in settings. Lin Mo, who had long had mild complaints about algorithmic recommendation, switched off the feature in several apps at the first opportunity, but the result was not what was expected.

"You could have brushed your favorite short video or graphic content between work, but after turning off the algorithm recommendation, the matching degree of the content is not as good as before, and you can't find the favorite content after sliding several pages on the screen, and the fragmentation time is wasted on finding content, and the advertising recommendation is not less."

Users like Lin Mo are clearly not a minority. Having long lived in an environment woven by algorithms, people instinctively fear being dominated by them, yet have also, imperceptibly, grown dependent on them. Escaping the algorithm, let alone fighting it, turns out to be no easy task.

01 "Toxic" algorithms

In Technopoly, Neil Postman asserts that every new technology is both a burden and a gift: not an either-or proposition, but a package in which advantages and disadvantages coexist.

The emergence, application, and evolution of algorithms precisely corroborate Postman's assertion. An algorithm is essentially a mathematical technique of analysis and prediction that emphasizes correlation; it can improve the accuracy of information and product recommendations and reduce the time users spend filtering information. However, the lag in legal supervision and user awareness, combined with the deep binding of algorithms to Internet business models, has gradually bred considerable chaos.
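To make "correlation" concrete, here is a deliberately minimal sketch of content-based recommendation; the item names, topic dimensions, and weights below are invented for illustration and do not come from any real platform:

```python
# A minimal sketch of relevance-based recommendation (hypothetical data):
# score candidate items by cosine similarity to a user profile built
# from past clicks, then rank the most similar items first.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Item feature vectors over invented topics: [tech, sports, entertainment]
items = {
    "chip-shortage-analysis": [0.9, 0.0, 0.1],
    "league-final-recap":     [0.0, 1.0, 0.0],
    "celebrity-interview":    [0.1, 0.0, 0.9],
}

# User profile: the average of the vectors of previously clicked items.
clicked = [items["chip-shortage-analysis"]]
profile = [sum(col) / len(clicked) for col in zip(*clicked)]

# "Relevance" in practice: rank everything by similarity to past behavior.
ranked = sorted(items, key=lambda k: cosine(profile, items[k]), reverse=True)
print(ranked)  # the tech article ranks first, reinforcing past interests
```

The same scoring that saves a user's filtering time is what narrows the feed once the profile hardens around past clicks.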

The first to be widely discussed was the "information cocoon." Harvard Law School professor Cass Sunstein proposed the concept in Republic.com: in consuming information, the public leans excessively toward content that matches its own interests and reduces contact with everything else, unconsciously spinning an "information cocoon" around itself, sinking into an ever narrower world and even developing extreme cognition.

Sunstein proposed the "information cocoon" in 2006, but because the concept was too academic, and "algorithmic recommendation" was still an unfamiliar term at the time, it attracted little attention. Only when the mobile Internet wave arrived, and entrepreneurs began using algorithms to please users, endlessly guessing their preferences and recommending matching content to boost so-called stickiness and time spent, did the "information cocoon" gradually become a normalized social phenomenon.
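The guess-and-recommend loop described here can be shown with a toy simulation; the topics, the click model, and the greedy serving rule below are all assumptions made for illustration:

```python
# A toy feedback loop (all parameters invented): a recommender that
# greedily serves the historically most-clicked topic, and a user whose
# click probability grows with accumulated habit. Diversity collapses.
import random

random.seed(7)
topics = ["tech", "sports", "entertainment", "politics", "science"]
clicks = {t: 1 for t in topics}  # start from uniform interest

for _ in range(500):
    if random.random() < 0.9:                 # mostly exploit...
        shown = max(clicks, key=clicks.get)
    else:                                     # ...rarely explore
        shown = random.choice(topics)
    if random.random() < clicks[shown] / sum(clicks.values()):
        clicks[shown] += 1                    # habit reinforces itself

print(clicks)  # one topic ends up dominating: the "cocoon" is the loop itself
```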

Yet the fuse that drew the most condemnation was the algorithmic abuse that bears directly on users' wallets: "big data price discrimination." E-commerce platforms, travel apps, and food-delivery platforms have all been exposed in such cases, where the system pushes different discounts according to user tags, offering the same product to new users at a relatively low price while charging old users noticeably more. In a questionnaire survey on the issue by the Beijing Consumers Association, more than half of respondents said they had experienced "big data price discrimination."
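Mechanically, the pattern in these complaints is straightforward. A hypothetical sketch (the tags, base price, and multipliers are invented; no real platform's logic is implied):

```python
# Hypothetical tag-based differential pricing, as described in the
# complaints above: the same item is quoted differently per user tag.
BASE_PRICE = 100.0
TAG_MULTIPLIER = {
    "new_user": 0.80,         # steep acquisition discount
    "price_sensitive": 0.95,  # small coupon to prevent churn
    "loyal_user": 1.00,       # habitual buyers see the full price
}

def quote(user_tag: str) -> float:
    """Price shown to a user carrying the given tag."""
    return BASE_PRICE * TAG_MULTIPLIER.get(user_tag, 1.0)

for tag in TAG_MULTIPLIER:
    print(f"{tag:>16}: {quote(tag):.2f}")
```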

The full-scale public backlash against algorithms, however, had to wait for the "data privacy" issue to ferment. You mention a product in a chat on a social site, and moments later it appears on an e-commerce platform's recommendation page; some netizens even suspected that e-commerce apps were listening in on their phones. The causal chain is not hard to explain: training an algorithm requires raw data, so users' browsing history, dwell time, and interests are quietly recorded by platforms, and in the pursuit of recommendation accuracy, some platforms are not above suspicion of violating user privacy.

To borrow a somewhat conspiratorial framing, it is like boiling a frog in warm water: first, algorithmic recommendation builds a closed "information cocoon"; then users living inside the cocoon become the chives waiting to be cut, whether the harvesting takes the form of big-data price discrimination or of eavesdropping on user privacy... At this point the algorithm is no longer a "value-free" technical term; it has begun to be linked with the word "toxic."

02 The "boots" finally landed

As algorithms drew criticism from more and more people and edged step by step toward stigmatization, calls to govern platforms' recommendation algorithms intensified, and the regulatory boots landed one after another.

The aforementioned Provisions on the Administration of Algorithm Recommendations for Internet Information Services is the corresponding product. Proposed by the Cyberspace Administration of China in August 2021, it formally took effect on March 1, 2022. The Provisions take direct aim at the recommendation function, requiring algorithm recommendation service providers to offer users options not tailored to their personal characteristics, or a convenient way to switch off the algorithmic recommendation service altogether.

So far, many Internet companies have answered the call: popular apps such as WeChat, Douyin, Taobao, Meituan, Baidu, Weibo, and Xiaohongshu have all added switches for turning off personalized content recommendation and personalized advertising, but the implementation remains half-hearted.

For example, the Cyberspace Administration of China requires platforms to place the off switch in a prominent position, yet in reality many apps bury the function deep. Take Toutiao: to disable "personalized recommendation," a user must open the app, tap "Me," find "Privacy Settings," open "Personalized Recommendation Settings," and finally turn off "Personalized Recommendation," at least five steps through a deeply hidden entrance and a cumbersome flow.

Even with the regulatory boot on the ground and nearly seven months allowed for rectification, many apps' attitude toward compliance is subtle, even quietly adversarial. After all, algorithmic recommendation has long been the traffic password of many platforms; once users turn off "personalized recommendation," they may open the app less often and stay for shorter stretches, which in turn hits the advertising revenue the platform depends on.

As a result, some platforms have simply not optimized the non-algorithmic experience, adopting a "one-size-fits-all" product logic: with "personalized recommendation" off, users can hardly find content matching their interests. In essence, users are forced into a binary choice: sacrifice the experience, or obediently return to algorithmic recommendation.

And this only cuts into the cake of algorithmic recommendation; if governance extends further into product recommendation, privacy protection, and other links, some platforms may well answer with craftier coping strategies. Perhaps this is what makes "technology bullying" a live topic: the question is not only how often platforms overstep with technology, but also how user behavior is held hostage, so that escaping the "algorithm trap" exacts a steep price.

What is frustrating is that the rules of algorithms grow ever more complex over time. To most users they are an impenetrable "black box": users can neither understand how the algorithm works nor know what abuses it harbors. Linger a few extra seconds on a short video, or open a product link and forget to close it, and the platform records it, runs it through the algorithm, and invisibly weaves a net from which there is no escape.

At the very least, the regulatory action on "algorithm recommendation" has confirmed a sobering fact: most users find it hard to break out of the algorithm's "siege." They appear to have been granted the right to refuse, but the difficulty and cost of exercising that right will make most people abandon the idea of change.

03 The "addiction" that can't be quit

As the German existentialist philosopher Karl Jaspers put it: "Technology is merely a means, in itself neither good nor evil. Everything depends on what man makes of it, and on what ends it serves him."

This notion of "technological innocence" spawned the domestic conclusion that "algorithms have no values," and became the theoretical root for those who whitewash algorithms. For example, when the "information cocoon" triggered public debate, a well-known self-media account wrote: "If we really need a technological road out of the so-called information cocoon, the algorithm is clearly one of the road builders, certainly not a roadblock."

The underlying logic rests on the "self-discipline" of the general public: if you do not want to be controlled by the "information cocoon," simply jump out of your "comfort zone" and actively seek out the information you would rather avoid, instead of blaming everything on external, objective causes.

Such views seem hard to refute, but they selectively ignore another established reality: whenever the founders of the mobile Internet era are mentioned, "insight into human nature" is one of the most frequent appraisals, and Zhang Yiming, Huang Zheng, and others have been described this way by the media. Reflected in the string of problems algorithms have induced, what those "initiators" excel at is doing everything possible to get users hooked, and then destroying their self-discipline.

In Irresistible, Adam Alter, who holds a doctorate in psychology from Princeton, dissects the Internet's "addictive" designs into six ingredients: setting alluring goals that stay just out of reach, delivering irresistible positive feedback, letting users feel effortless progress, escalating the challenge step by step, creating the tension of the unfinished, and building compulsive social interaction.

If Alter dissects, from the product manager's perspective, how games and short videos steer users into addiction, the famous geostrategist Zbigniew Brzezinski offered the "tittytainment" theory from a god's-eye view: as productivity rises, 80% of the world's population will be marginalized, and one way to pacify society's "abandoned" is to have companies mass-produce "pacifiers," filling people's lives with addictive entertainment and sensory stimulation until, immersed in "happiness," they lose the ability to think without ever noticing.

Branding the algorithm an outright "accomplice" of tittytainment is no doubt somewhat arbitrary, but there is little question that algorithms reinforce users' craving for instant gratification, amplify the stimulus and temptation of dopamine, and acclimate most users to a state of "amusing themselves to death."

Unfortunately, most users do not actively restrain themselves. They habitually accept whatever the algorithm recommends, let it collect their data, unwittingly teach it what they like, and end up fenced into an invisible "comfort zone." Even though regulators have reacted and opened a door for escaping the algorithm, counting on users' self-discipline and an awakening of privacy awareness to break the addiction is not a feasible path; the bell can only be untied by the one who tied it.

04 The "Life Gate" of Governance

This is not to stoke confrontation between users and platforms. In the realistic context of "toxic algorithms," it is precisely the Internet platforms wielding algorithmic "power" that need "self-discipline."

British scholar Jamie Susskind notes in The Power of Algorithms (the Chinese edition of his Future Politics) that few engineers inside Internet companies think about the systemic consequences of their work; most of them only need to solve a few scattered technical problems to get the job done.

Whenever public opinion begins to denounce the algorithm's original sin, the Internet companies involved raise the shield of "technological innocence." The excuse rests on the "black box" character of algorithmic operation: engineers supply the data, models, and architecture; the algorithm produces the answer; the intermediate process and principles may be unclear even to the engineers themselves, who may feel they are the misunderstood party.

Yet as early as a few years ago, an Alibaba Cloud programmer proudly declared in a media interview: "We can now make the Taobao interface different for each of hundreds of millions of Chinese users, with nearly second-level updates." So much, too, for the algorithm's inexplicability: Internet companies are happy to see the positive results their algorithms produce, but unwilling to take corresponding responsibility for the algorithm's process and outcomes.

The documentary The Social Dilemma, released in 2020 under the Chinese title Surveillance Capitalism: The Smart Trap, captures the crux in a single line: "If you're not paying for the product, then you are the product." Consumers pay nothing directly for the services the algorithm provides, but the invisible price tag was attached long ago.

Real-world cases show that Internet companies are not without room for self-correction. As early as 2019, Google publicized a technique called Testing with Concept Activation Vectors (TCAV) to bring technical oversight to problems such as algorithmic bias and discrimination.
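For a rough sense of how TCAV works, here is a simplified sketch on synthetic activations with a toy model head; it follows the published idea (fit a linear classifier between "concept" and random activations, take its normal as the concept activation vector, then count how often the class logit's gradient points along it), but it is not Google's implementation:

```python
# Simplified TCAV-style sketch on synthetic data (not Google's code).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 16  # dimensionality of a hypothetical network layer

# Synthetic layer activations: concept examples are shifted along a
# hidden direction; random counterexamples are plain Gaussian noise.
concept_dir = rng.normal(size=d)
concept_acts = rng.normal(size=(100, d)) + concept_dir
random_acts = rng.normal(size=(100, d))

# Step 1: the CAV is the normal of the hyperplane separating the two sets.
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 100 + [0] * 100)
cav = LogisticRegression().fit(X, y).coef_[0]
cav /= np.linalg.norm(cav)

# Step 2: TCAV score = fraction of class examples whose logit increases
# when activations move along the CAV. Toy head: logit(x) = w . relu(x),
# whose gradient w.r.t. x is w masked by the positive activations.
w = rng.normal(size=d)
class_examples = rng.normal(size=(200, d))
grads = w * (class_examples > 0)        # per-example gradient of the logit
tcav_score = float(np.mean(grads @ cav > 0))
print(f"TCAV score: {tcav_score:.2f}")  # near 1.0 would mean the concept matters
```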

In the final analysis, it comes down to the balance between business interests and user rights, and "inaction" is the first choice of most companies. Beyond concealing the off switch for "personalized recommendation," many platforms, once user growth hits a bottleneck, deliberately use algorithms to "kill time": Douyin, Kuaishou, Bilibili, and even Zhihu have all been suspected of manufacturing "time black holes." To some degree, the algorithm has become a tool of "calculation."

Take the long view, and every business rests on user trust. Corporate profit-seeking, regulatory lag, and individual acquiescence have together stirred up public-opinion whirlpools such as the information cocoon and big-data price discrimination; are these not outward signs of users trying to fight back against the algorithm? Technology cuts both ways, but the platforms in the middle have the power to make trade-offs: to set appropriate limits on where algorithms are applied, and to adjust the "objective function" they choose to optimize.

Whether the "conscience" of Internet platforms can be awakened is the "life gate" of algorithm governance. Take the rectification of algorithmic recommendation again: given the algorithm's lock-in effect on user behavior, forcing users to choose between the algorithm and nothing is one strategy; governing algorithms scenario by scenario, in a fine-grained way, and offering better solutions from the user's standpoint is quite another.

One can only hope that these Internet platforms will choose to stand on the right side of history.

Note: Lin Mo is a pseudonym.
