
Hu Yong | Will robots flood contemporary newsrooms?

Author: Hu Yong

The advent of AI in newsrooms should not be seen as a threat to journalism. Rather, it should be seen as an opportunity to empower journalists to deliver more impactful reporting.

In September 2023, the St. Louis Post-Dispatch decided to put Microsoft's Bing chat AI to the test, giving it a prompt: "Write a newspaper editorial arguing that AI shouldn't be used in journalism." The results came as a relief to the paper's editorial board: the AI engine made several quite convincing arguments that AI should not be on the opinion pages.


The bot argued that while AI "may have some benefits," it "also poses a serious threat to news quality, integrity and ethics." "Human journalists are passionate, curious, and creative. AI can't replicate these qualities." In addition, "AI can create deepfakes ... It can be used to defame, blackmail, or influence public opinion."

Unlike humans, the bot noted, AI cannot make moral judgments, distinguish fact from falsehood, protect sources, or adhere to any professional standards.

The article also explained how AI threatens journalists' livelihoods, noting that such platforms can do almost everything a human journalist does, but "faster, cheaper, and more efficient than human journalists." Still, the bot pointed out that AI cannot completely replace the human factor in news reporting.

"Human journalists are not only information providers, but also storytellers, educators, overseers, and influencers. Human journalists have a voice, a perspective, and a purpose. Human journalists are passionate, curious, and creative. The robot wrote. AI can't replicate these qualities. ”

A month later, the Orlando Sentinel decided to follow the St. Louis Post-Dispatch's experiment, this time asking Bard, Google's new artificial intelligence engine, with a slightly tweaked prompt: "Write a 550-word editorial in the third person explaining why Florida newspapers should be wary of AI-generated opinion pieces."


Bard answered as follows: "AI cannot understand the nuances of human society or the complexity of the problems we face. AI can generate grammatically correct and factually accurate text, but it cannot understand the meaning of the words it uses. AI is also unable to comprehend the emotional impact of speech or the potential consequences of its writing." Bard's other objections included:

AI cannot distinguish between right and wrong. Op-eds and editorials often take a stand on controversial issues. AI cannot make moral judgments; it does not understand the difference between right and wrong, good and bad. As a result, AI may generate op-eds and editorials that promote harmful or immoral ideas.

AI cannot respect the rights and dignity of sources and the people it reports on. AI does not understand the importance of protecting the confidentiality of sources or respecting the privacy of individuals. As a result, AI may produce op-eds and editorials that leak sensitive information or defame individuals.

AI is unable to adhere to the codes of conduct and professional standards that human journalists follow. Human journalists are bound by ethical norms that require them to be honest, fair, and accurate in their reporting. AI does not understand the importance of these ethical principles. As a result, AI may generate misleading, biased, or inaccurate op-eds and editorials.

AI cannot replace human judgment, empathy, and morality, which are essential to good journalism. Op-eds and editorials involve more than reporting facts and figures; they also involve providing context, analysis, interpretation, and commentary. Artificial intelligence can't do that.

AI-generated op-eds and editorials will be less engaging and informative. Such content is supposed to be written by passionate, knowledgeable, and opinionated individuals, whereas artificial intelligence is simply a machine programmed to generate text. As a result, what it writes can be bland, unoriginal, and lacking in insight.

Bing's and Bard's performances are impressive. Yet many human commentators remain unconvinced. Jon Schweppe, for example, policy director at the advocacy group American Principles Project, noted that "AI is not human, it doesn't have unique ideas." "It can't do on-the-ground reporting, it can't break news that hasn't been reported elsewhere, and it can't even understand the idea of writing a human story," Schweppe said.


Bing AI, Google Bard, and ChatGPT

Bing concluded that AI should not be used in journalism, and it called on media companies to avoid the practice and instead "support and empower human journalists." "Human journalists are irreplaceable and indispensable in journalism," the St. Louis Post-Dispatch's robot editorial concluded.

Schweppe, by contrast, argued: "As businesses always seek to cut costs and maximize 'efficiency,' AI will inevitably replace much of the reporting work, which will hurt journalism as a whole and limit people's ability to be informed citizens."

Here's the problem: AI itself thinks robots shouldn't flood contemporary newsrooms, yet newsrooms keep handing over power to a chipset in California. Why? If major publications use machine-learning tools to produce content, how, at the end of the day, do readers know that the author of what they're reading is really a human being and not a robot? And what does AI mean for the future of journalists?

Part 01 A Decade of Development: Automation, Augmentation, and Generation

In the past year, chances are you've read a story written by a robot. Whether it's a sports report, a corporate earnings story, or coverage of who won a congressional election, the author behind it goes by a resounding name: generative AI. Even once-well-respected publications, such as BuzzFeed, CNET, G/O Media, and Sports Illustrated, have been found using generative AI tools, often with less-than-ideal results.

The use of AI in the media is not entirely new. For some time now, the media has been experimenting with using AI to support and produce news stories. News agencies such as the Associated Press and Reuters, for example, have previously experimented with automated article-writing tools that generate formulaic news stories from structured data such as earnings reports or sports scores. The Associated Press even proudly proclaimed itself "one of the first news organizations to utilize artificial intelligence." It's worth noting, though, that the Associated Press's auto-generated material appears to be essentially filling in the blanks in a predetermined format, while the more complex wording in CNET's generative reporting suggests it uses a large language model closer to OpenAI's GPT-3.


Tracing a decade of cutting-edge technology applications in newsrooms, AI innovation can be divided into three waves: automation, augmentation, and generation.

The first wave focused on using natural language generation technology to automate data-driven stories such as financial reports, sports results, and economic indicators. Take the Associated Press: the leading news agency has been using artificial intelligence to generate earnings-report summaries for publicly traded companies since 2014, and it later expanded its automated content with previews and recaps of some sporting events. The AP also uses AI to help transcribe audio and video of live events such as press conferences.

But as mentioned earlier, the AP's system is relatively crude, essentially inserting new information into pre-formatted stories. This shows that AI is best suited to stories built on highly structured data, which is why it works in financial reporting and sports coverage. It is also why Bloomberg News was among the first to test the waters of automated content, since financial data is calculated and published so frequently. In 2018 alone, Bloomberg's Cyborg program published thousands of articles that turned earnings reports into news stories, much as business journalists do.
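To make the slot-filling approach concrete, here is a minimal sketch of how such a template system might work. The template wording, field names, and sample figures are illustrative assumptions, not the AP's or Bloomberg's actual systems.

```python
# Minimal sketch of template-based ("slot-filling") earnings coverage.
# The template text, field names, and sample data are assumptions for
# illustration, not any news agency's actual system.

EARNINGS_TEMPLATE = (
    "{company} on {date} reported {quarter} net income of "
    "${net_income_m} million, {direction} from ${prior_net_income_m} "
    "million a year earlier. Revenue was ${revenue_m} million."
)

def write_earnings_story(data: dict) -> str:
    """Fill the pre-formatted story with structured earnings data."""
    direction = "up" if data["net_income_m"] > data["prior_net_income_m"] else "down"
    return EARNINGS_TEMPLATE.format(direction=direction, **data)

print(write_earnings_story({
    "company": "Acme Corp.",   # fabricated example input
    "date": "Feb. 6",
    "quarter": "fourth-quarter",
    "net_income_m": 120,
    "prior_net_income_m": 95,
    "revenue_m": 980,
}))
```

The point of the sketch is how little "writing" is involved: the prose is fixed in advance, and only the structured numbers change from story to story.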


This wave of applications has brought many benefits to newsrooms. First, it saves time and resources. The Associated Press estimates that AI saves journalists about 20 percent of the time they spend covering companies and can improve accuracy. This allows journalists to focus on the substance and storytelling behind an article rather than on fact-checking and research. The Associated Press's website states that "before the use of artificial intelligence, our editors and reporters spent countless resources on important but repetitive stories," which "distracted from more high-impact news stories."

Second, in addition to freeing up journalists' time, AI technology allows the Associated Press to produce far more of this kind of content. Automated story generation makes newsroom operations more cost-effective because bots can generate more stories than humans can. By one count, the Associated Press used artificial intelligence to expand its corporate earnings coverage from 300 companies to 4,000.

Third, automation has not replaced journalists, but it has reduced part of their workload. In a survey released in 2022, the Associated Press found that summarization was among the most in-demand AI tools, alongside adding metadata to photos and stories, transcribing interviews and videos, writing closed captions, and many other jobs that have become chores in the age of digital journalism. In other words, AI is serving well as the human journalist's assistant.

The second wave came when the focus shifted to using machine learning and natural language processing to analyze large data sets and reveal trends, enhancing reporting. Thomson Reuters has been using an in-house program, Lynx Insight, since 2018 to examine information such as market data and find patterns that might yield stories worth covering for journalists. The Argentine newspaper La Nación began using AI to support its data team in 2019 and then set up an AI lab staffed jointly with data analysts and developers.

Other companies have built internal tools to evaluate human work, such as the Financial Times' bot that checks whether their stories quote too many men. The International Consortium of Investigative Journalists has used artificial intelligence to process millions of pages of leaked financial and legal documents and flag details that deserve closer examination by journalists.
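The idea behind an FT-style source-balance check can be sketched in a few lines. The pronoun-counting heuristic and threshold below are crude assumptions for illustration, not the Financial Times' actual method.

```python
import re

# Rough sketch of a source-balance check in the spirit of the FT's bot.
# Counting gendered pronouns and honorifics is a stand-in heuristic;
# the real tool's method is not public in this level of detail.

MALE = re.compile(r"\b(he|him|his|mr)\b", re.IGNORECASE)
FEMALE = re.compile(r"\b(she|her|hers|ms|mrs)\b", re.IGNORECASE)

def gender_balance_warning(story: str, threshold: float = 0.7) -> str | None:
    """Warn the desk if male references dominate the story."""
    male, female = len(MALE.findall(story)), len(FEMALE.findall(story))
    total = male + female
    if total and male / total > threshold:
        return f"Warning: {male} of {total} gendered references are male."
    return None

text = "He said his plan was final. Mr Smith agreed. She disagreed."
print(gender_balance_warning(text))  # 3 of 4 male references -> warning
```

Even a heuristic this simple illustrates the augmentation pattern: the tool does not write or judge the story, it only nudges a human editor to look again.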

The Washington Post uses artificial intelligence to personalize news based on readers' interests and preferences. For example, it offers a personalized "For You" section on the homepage, where subscribers or registered users can choose their topic preferences. Recommendations are further refined by a reader's reading history and other behavioral data.
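A minimal sketch of this kind of topic-preference ranking might look like the following. The scoring weights and field names are assumptions for illustration, not the Post's actual recommender.

```python
# Toy sketch of preference-based ranking for a "For You" feed.
# Weights and fields are illustrative assumptions, not the
# Washington Post's actual recommender.

def score(article: dict, chosen_topics: set[str], history: list[str]) -> float:
    s = 0.0
    if article["topic"] in chosen_topics:       # explicit preference
        s += 2.0
    s += history.count(article["topic"]) * 0.5  # implicit reading-history signal
    return s

articles = [
    {"title": "Fed holds rates", "topic": "economy"},
    {"title": "Playoff preview", "topic": "sports"},
]
feed = sorted(articles,
              key=lambda a: score(a, {"economy"}, ["economy", "politics"]),
              reverse=True)
print([a["title"] for a in feed])  # economy story ranks first
```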

The Washington Post also uses its homegrown robot, Heliograf, to cover congressional and gubernatorial races. In 2016 alone, The Washington Post used Heliograf to produce about 850 articles, including 500 about elections, most of them stories the newspaper would otherwise not have assigned staff to cover.


In the augmentation phase, AI largely plays the role of errand-runner for human journalists. Heliograf can detect financial and big-data trends to provide tips for journalists' reporting. Forbes uses a bot called Bertie to provide journalists with first drafts and templates for news stories. The Los Angeles Times uses artificial intelligence to report on earthquakes based on U.S. Geological Survey data and to track information on every homicide in the city of Los Angeles; its machine-generated "Homicide Report" page, compiled by robo-journalists, packs a wealth of data into its entries, including the victim's gender and ethnicity, cause of death, police involvement, neighborhood, and year of death.

The third wave, now on the rise, is generative AI, powered by large language models capable of generating narrative text at scale. This development gives journalism applications that go beyond simple automated reporting and data analysis. Now a practitioner can ask a bot to write a longer, more balanced essay on a topic, or an opinion piece from a specific perspective (like the two bot editorials quoted at the beginning of this article), or even ask it to write in the style of a well-known author or publication.
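As a rough illustration, prompting a large language model for such an opinion piece might look like the sketch below, using OpenAI's Python client. The model name and prompt wording are assumptions; the newspapers' experiments described above used Bing and Bard chat interfaces rather than this API.

```python
# Sketch of asking a large language model for an opinion piece, in the
# spirit of the newspaper experiments described above. Uses OpenAI's
# Python client (openai>=1.0); model name and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model would do
    messages=[{
        "role": "user",
        "content": ("Write a 550-word editorial in the third person "
                    "explaining why newspapers should be wary of "
                    "AI-generated opinion pieces."),
    }],
)
print(response.choices[0].message.content)
```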

However, while generative AI can help synthesize information, edit copy, and supply data for reporting, the technology we see today still lacks key skills, and that prevents it from playing a more important role in journalism. Because of this, generative AI cannot satisfy readers who come to the news media for deeper analysis or richer portrayals of a topic. Moreover, its widespread application has brought a new set of problems.

Part 02 The Pitfalls of Generative AI

While some news organizations have long used AI to generate certain stories, these still represent only a small fraction of journalism's output compared with articles written by journalists. Generative AI has the potential to change that, enabling any user, not just journalists, to generate articles at scale; if those articles are not carefully edited and checked, they can spread misinformation and distort perceptions of traditional journalism.

To date, we haven't seen a generative AI initiative in the journalism industry that hasn't caused a disaster. Tech news website CNET announced in early 2023 that it would suspend its program of writing stories with artificial intelligence after the articles it generated turned out to be not only error-ridden but also rife with plagiarism.

At the end of June of the same year, G/O Media (which owns Gizmodo, The Onion, Quartz, and other outlets) announced that it would begin publishing AI-generated content across its many publications as a "modest test." In the first AI-generated article Gizmodo published, the site's "Gizmodo Bot" missed the mark entirely: the post, a chronological list of Star Wars movies and TV shows, was poorly written and full of factual errors.


Beyond the poor writing, it's clear this article was never meant for human readers. The strategy, rather, was to trick search algorithms into ranking it highly; at least initially, Google showed the Gizmodo bot's article as a top result for "Star Wars movies" queries. In many ways, this is a dispiriting development: bots writing content primarily for other bots, with the human role, whether author, editor, or reader, progressively diminished in the process.

In November, Sports Illustrated, once considered the standard-setter for sports journalism, was revealed to have been publishing content under the bylines of fake, AI-generated authors. This raises the question of whether the line between AI-generated and human-created content should be clearly drawn. Common practice among large news sites is to explicitly label bot authors or to disclose the identity of AI authors at the end of articles, as the Associated Press and the Los Angeles Times do.

Back in January 2023, however, CNET was caught quietly publishing AI-generated articles under the dubious byline "CNET Money Staff." Only by clicking on the byline and reading a small drop-down disclosure would a reader discover that the articles were not written by humans. That is quite a dodgy way to disclose, especially for such a well-known brand.

In the Sports Illustrated case, the stories attributed to fake AI-generated authors appear to have carried no disclosure of AI use at all. Yet AI disclosure is a fundamental consumer-rights issue. If publishers choose to experiment with AI, the least they can do for readers is give them the chance to decide whether they still want to come along. A publisher that fails to make its use of AI clear, whether the concealment is deliberate or not, commits a failure of basic media ethics, and anyone in the business of providing information to the public needs to do better.

It's no wonder the Sports Illustrated revelations sparked widespread media coverage and outrage among the magazine's own staff. A recent poll by the AI Policy Institute (AIPI), a nonprofit organization, found that 80 percent of Americans believe presenting AI-generated content as human-made should be illegal.


While human journalists worry that the new technology will cost them jobs, many media companies insist on testing it anyway. They seem drawn to cheap, scalable, SEO-friendly content that doesn't depend on bothersome humans. AI-written articles are designed to game Google Search through search-engine optimization, steering readers to pages stuffed with lucrative affiliate ads. Google is largely complicit: it rewards these efforts by letting poorly researched AI-generated content rank high.

By the end of 2023, according to NewsGuard, a service that uses trained journalists to rate news and information sites, hundreds of partially or wholly AI-generated websites had emerged in multiple languages. They mimic real news sites but are actually content farms (low-quality websites that churn out large numbers of clickbait articles to optimize ad revenue) designed to generate income from programmatic advertising. Programmatic advertising is delivered algorithmically across the web and provides a revenue stream for many publishers.

Media scholars had speculated that as ever more powerful AI tools became publicly available, they could be used to create entire news websites; that speculation has now become reality. Such sites often disclose nothing about their ownership or control, yet they produce abundant content on politics, health, entertainment, finance, technology, and more, with some publishing hundreds of articles a day.

In this monetization of AI, readers are fed erroneous, plagiarized, or otherwise uninspired content, while writers and editors are reduced to chasing down the mistakes bots make in fabricating stories. Google Search, meanwhile, is stuck in a loop of AI spam, constantly generating new garbage from old. Yet given how cheap it is to use AI this way, news organizations are likely to keep doing it.

Part 03 Keeping "Humans in the Loop"

From audience analytics to programmatic advertising and automated storytelling, media companies have been using AI for some time. But the technology is maturing rapidly, opening up creative and commercial possibilities that media leaders are only beginning to grasp.

News organizations around the world are grappling with the question of whether to roll out AI tools more widely in their newsrooms. They intend to explore the potential uses of AI to understand how it can be responsibly applied to the field of journalism, where every second counts and accuracy is paramount.

But this process is fraught with challenges for the press. Journalism has always built and disseminated values-driven products around the concepts of truth and transparency; an editor-curated news page, to be specific, is a valuable, well-thought-out product. And one of the most obvious limitations of AI-generated content is its lack of true creativity: it operates on algorithms and patterns learned from existing data, and it lacks the ability to think imaginatively or generate truly unique and innovative ideas.

It must be admitted that even after training on massive data, AI is better used to help with paragraphs than with the entire story. Language models are not knowledge models; they should never be used to write stories, but rather to help journalists accomplish certain tasks. These models are well suited, for example, to traditional natural language processing tasks such as summarization, paraphrasing, and information extraction.
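In practice, "helping with certain tasks" might look like the hedged sketch below: the model drafts a summary, and nothing is published until an editor signs off. The `llm_summarize` helper is a hypothetical stand-in for whatever model a newsroom actually uses.

```python
# Sketch of a human-in-the-loop summarization workflow: the model
# drafts, a journalist approves. `llm_summarize` is a hypothetical
# placeholder, not any newsroom's real pipeline.

def llm_summarize(text: str) -> str:
    # Placeholder: return the first sentence; swap in a real model call.
    return text.split(". ")[0].rstrip(".") + "."

def summarize_with_review(text: str) -> str | None:
    draft = llm_summarize(text)
    print("MODEL DRAFT:\n", draft)
    verdict = input("Publish this summary? [y/N] ")
    # The editor has the last word; an unapproved draft goes nowhere.
    return draft if verdict.strip().lower() == "y" else None
```

The design choice worth noticing is that the model's output is treated as a draft object, never as published copy; a human decision sits between the two.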

Journalists and editors should not resist such tools: the more they know about how the tools work, the less the tools feel like a magic box, and the better equipped users are to make informed decisions about them.

Google is reportedly testing an artificial intelligence tool called Genesis that can generate news content from details of current events or help journalists explore different writing styles. Google has been pitching the tool to outlets such as The New York Times, The Washington Post, and News Corp (which owns The Wall Street Journal and The Times), with the goal of enabling journalists to use emerging technology to improve their work and productivity. Google deliberately compares these aids to those available in Gmail and Google Docs, emphasizing that they are not intended to replace the journalist's essential role in reporting, writing, and fact-checking articles.


To avoid the pitfalls experienced by outlets such as CNET, news organizations and tech companies like Google must prioritize the ethical and responsible deployment of AI in journalism. AI tools can undoubtedly help with every aspect of a journalist's work, but human oversight and intervention throughout the process remain crucial to ensuring accuracy, credibility, and ethical standards.

The advent of AI in newsrooms should not be seen as a threat to journalism but as an opportunity to empower journalists to deliver more impactful reporting. As AI technology advances, newsrooms, tech companies, and journalists need to work together to develop guidelines, ethical frameworks, and best practices for integrating AI into journalism. This collaborative effort will ensure that AI remains a tool that augments journalists' work rather than replacing the humanity and critical thinking that underpin journalism.

