Leveraging AI to Boost Efficiency and Innovation in the News

By: Jordyn Habib | 04/19/2024

The rapid development of artificial intelligence (AI) has generated excitement and fear alike within the news industry, prompting many to ponder what lies in store for journalism’s future.

If approached smartly and leveraged strategically, AI offers journalists and their outlets promising potential to boost efficiency and innovation.

In an ICFJ Empowering Truth Summit session, ICFJ Knight Fellow Nikita Roy, founder of the digital news startup The NRI Nation and the podcast Newsroom Robots, discussed how journalists and newsrooms can use AI to enhance their reporting.

“I think we [journalists] need to be one of the most informed citizens on AI. We owe it to our audience,” said Roy.

Historical and current uses of AI in the newsroom

AI in the newsroom isn’t completely new, noted Roy. Some forms of AI have been utilized for years, in fact.

Journalists have used AI to transcribe interviews, create audio versions of written articles, and add image recognition to photo archives, allowing users to search for images by entering text descriptions.

Today, personalization, for instance of newsletters and ads for consumers, is perhaps one of AI's most useful benefits, said Roy: “We are moving towards a more personalized front towards news, and I think that's where the biggest gains are.” Apple and Google News, in particular, do this well, she added.

Outlets, including The New York Times, have also begun using AI-powered computer vision to digitize print articles and build digital archives. “They were able to convert all of the hard work of journalists that was done during the print era and put them [on] the web so that we could preserve the work that has been done for centuries [...] that we can go back to,” Roy explained.

 

Large language models: uses and pitfalls

Large language models, a subset of generative AI, are trained to understand text inputs, predict the next word in a sequence, and chat back with users. ChatGPT and GPT-4, developed by OpenAI, are popular examples.

“These are extremely large models. It takes a lot of time, money, technical resources to get to train these. That's why we are only seeing some of the biggest companies do this, like Microsoft and Google,” Roy said. She also cautioned: “Large language models are language generators – they are not knowledge generators.” 

Roy outlined four rules for journalists using large language models in their work.

  • Don’t use large language models to search for information or produce knowledge. This AI is trained to predict text and as a result is prone to “hallucinating” responses at times.
  • Don’t assume the responses you receive are factually accurate. Large language models are trained on information from the internet, and there is a chance they generate outdated or inaccurate results.
  • Don’t input sensitive data. Because these models memorize information, they might share private details with other users.
  • Don’t publish without checking for plagiarism. Due to their memorization function, large language models are also prone to sharing information that may already be published.

 

Helpful AI uses and tools

Despite the risks, AI has proven useful in the workplace already.

Roy cited a study conducted by Boston Consulting Group that found that people who used GPT-4 produced 40% higher-quality work. “From a business standpoint, it's raising real work performance,” she said.

Among the possible uses of AI today:
 

Generating headline suggestions

Roy highlighted a free tool on Slack called YESEO that helps users brainstorm SEO-friendly headlines and article descriptions.
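
YESEO runs inside Slack, but the same kind of brainstorming can be scripted directly against a large language model. Below is a minimal, hypothetical sketch in Python using the OpenAI SDK; it is not how YESEO works internally, and the model name is an assumption.

```python
# Hypothetical sketch of LLM-assisted headline brainstorming (not the YESEO app).
# Assumes the OpenAI Python SDK (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def suggest_headlines(article_text: str, n: int = 5) -> str:
    """Ask a chat model for SEO-friendly headline and description ideas."""
    prompt = (
        f"Suggest {n} SEO-friendly headlines and a one-sentence meta description "
        f"for this article:\n\n{article_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model will do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(suggest_headlines("Paste the draft article text here..."))
```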
 

Summarizing PDFs

Generative AI can help journalists glean information from PDF articles and reports by efficiently summarizing the contents of an uploaded file. The ChatGPT plugin ChatwithPDF is one tool offering this function. Importantly, Roy urged users always to verify results against the PDF itself.
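
For journalists who prefer to work outside a plugin, the same workflow can be reproduced with a short script: extract the text, then ask a model to summarize it. The sketch below is an illustrative example using the pypdf library and the OpenAI SDK, not the ChatwithPDF plugin itself; the file path and model name are assumptions.

```python
# Minimal sketch of PDF summarization (not the ChatwithPDF plugin).
# Assumes pypdf and the OpenAI SDK are installed; the file path is a placeholder.
from openai import OpenAI
from pypdf import PdfReader

client = OpenAI()

def summarize_pdf(path: str) -> str:
    """Extract text from a PDF and ask a chat model for a short summary."""
    reader = PdfReader(path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: substitute whichever model you use
        messages=[{
            "role": "user",
            "content": "Summarize the key points of this report:\n\n" + text[:20000],
        }],
    )
    return response.choices[0].message.content

# Per Roy's advice, always verify the summary against the PDF itself.
print(summarize_pdf("report.pdf"))
```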
 

Visualizing data

The ChatGPT plugin daigr.am can efficiently create an organized and concise chart from raw data.
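
For readers who want to see what that looks like without the plugin, a few lines of pandas and matplotlib accomplish much the same thing; the column names and figures below are invented purely for illustration.

```python
# Illustrative sketch of charting raw data (the figures below are made up).
import pandas as pd
import matplotlib.pyplot as plt

data = pd.DataFrame({
    "outlet": ["Outlet A", "Outlet B", "Outlet C"],
    "monthly_readers": [120_000, 95_000, 60_000],  # hypothetical values
})

ax = data.plot.bar(x="outlet", y="monthly_readers", legend=False)
ax.set_ylabel("Monthly readers")
ax.set_title("Readership by outlet (illustrative data)")
plt.tight_layout()
plt.savefig("readership_chart.png")
```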
 

Analyzing video content

Journalists can use Google Gemini, among other tools, to summarize and analyze information in videos. “You can go into a lot of things by using just a YouTube video as your source with which you want to then have a conversation with,” Roy said.
 

Analyzing large data sets

ChatGPT’s advanced data analysis feature is highly accurate because it writes and runs code in languages such as Python. “[...] with code it either works or it doesn't work; your code will either run or it doesn't run,” said Roy. Users can also ask questions about the data in plain language.
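
To make that concrete, the snippet below shows the kind of pandas code such a tool typically generates and runs when asked a plain-language question about a spreadsheet; the file name and column names here are hypothetical placeholders.

```python
# Sketch of the sort of Python a data-analysis assistant writes and executes.
# The CSV path and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("campaign_finance.csv")

# Quick sanity checks before any analysis.
print(df.shape)
print(df.isna().sum())

# Plain-language question: "Which donors gave the most in total?"
top_donors = (
    df.groupby("donor_name")["amount"]
      .sum()
      .sort_values(ascending=False)
      .head(10)
)
print(top_donors)
```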
 

Writing alt text

Both ChatGPT and Microsoft Copilot offer features that can quickly and accurately write detailed alt text for photos to make media content more accessible.
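
As a rough sketch of how that can be automated, the example below sends an image to a vision-capable model through the OpenAI SDK and asks for alt text. The model name and image URL are assumptions, and a human should still review the output before publishing.

```python
# Minimal sketch of AI-generated alt text; model name and URL are assumptions.
from openai import OpenAI

client = OpenAI()

def write_alt_text(image_url: str) -> str:
    """Ask a vision-capable chat model for one sentence of descriptive alt text."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any vision-capable chat model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Write one sentence of descriptive alt text for this image."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content

print(write_alt_text("https://example.com/photo.jpg"))  # placeholder URL
```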
 

Using chat-based search engines

Roy highlighted several search engines that can assist journalists' research efforts.

  • Perplexity can help users begin their research. The tool’s key benefit is that it identifies its sources of information, allowing users to check if any hallucinations or inaccuracies were produced.
  • Consensus can help users find insights and key features of research papers, such as sample size, outcomes, and whether they are widely cited. 
  • Elicit analyzes research papers to extract data, and summarize and organize findings. 
     
Transforming spoken ideas into text

The AI tool, Oasis, enables users to convert audio into written text. 
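
Oasis's own interface is not described in the session, but the underlying step, turning speech into text, can be sketched with any transcription API. The example below uses OpenAI's Whisper endpoint as a stand-in; the audio file name is a placeholder.

```python
# Minimal transcription sketch (a stand-in for Oasis, not its actual API).
from openai import OpenAI

client = OpenAI()

def transcribe(audio_path: str) -> str:
    """Transcribe a voice memo or recorded interview into text."""
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )
    return transcript.text

print(transcribe("voice_memo.m4a"))  # placeholder file name
```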
 

Manipulating images and videos

Adobe's AI Assistant offers the ability to change backgrounds or add details to multimedia. Adobe's AI features can also be used to create transcripts and storyboards for videos, and to generate B-roll from a video clip.
 

For those interested in learning more about AI tools for image and video, Roy suggested looking into AI Lemon Academy.
 

Additional issues around AI

Non-English AI tools remain in short supply today, Roy explained. She suggested journalists reach out to tech companies in their regions about helping to create these tools.

Copyright presents another major issue. It is critical, Roy urged, that journalists fact-check and review for plagiarism when using AI. Newsrooms should similarly put quality control checks in place around AI.

When used responsibly, AI can be incredibly useful for journalists and newsrooms. As long as journalists do not become completely reliant on the technology, generative AI tools can help push media content in innovative directions.

“I think the ways in using these AI tools and thinking about generative AI is kind of like your brainstorming partner – kind of a way in which you are using it to process your thoughts and getting that across quicker,” said Roy.
 


Anton Grabolle / Better Images of AI / AI Architecture / CC-BY 4.0.

Disarming Disinformation is run by ICFJ with lead funding from the Scripps Howard Foundation, an affiliated organization with the Scripps Howard Fund, which supports The E.W. Scripps Company’s charitable efforts. The three-year project will empower journalists and journalism students to fight disinformation in the news media.
