At Media Party Chicago, a conference exploring the intersection of artificial intelligence and journalism, attendees debated and learned about the opportunities and dangers of AI. Ethics experts proposed frameworks for responsible use of powerful new technologies, developers taught journalists how to use AI to bring customized content to their readers and reporters wrestled with how to maintain audiences’ trust while AI-aided disinformation abounds.
The International Center for Journalists (ICFJ) helped organize the three-day event, bringing together entrepreneurs, journalists, developers and designers from five continents to work on the future of media. The event culminated in a hackathon where participants devised AI-driven solutions.
Here are some of the key takeaways from the event:
What questions should newsrooms ask themselves before using AI?
In a discussion with ICFJ’s Senior Director of Innovation Maggie Farley, Dalia Hashim of Partnership on AI presented questions newsrooms should ask themselves before even starting to use generative artificial intelligence, AI systems capable of producing text and images in response to prompts. Communicating how and why you’re using AI, Hashim said, is also important for building trust with audiences. “The more open and transparent you are about it, the more ready the audience is to accept that [AI] is being used,” she explained.
Important considerations include:
- Are we comfortable with using generative AI tools that were trained using others’ content without consent? Can we find or make tools that are not derivative?
- How are we going to put guardrails around the use of AI tools in the newsroom?
- Where could our workflow be automated? Where do we need a human in the loop?
- If we are using AI to produce content, how will we label it?
- How will we ensure the accuracy of AI-aided content?
- If we’re collecting data from audiences, how is it going to be used and who owns it?
Hashim urged journalists to use the Partnership on AI’s framework on responsible practices for newsrooms’ AI use, alongside their AI tools database for local newsrooms.
How can we prevent AI from spreading disinformation? Is AI hallucinating?
Edward Tian of GPTZero highlighted some of the dangers of AI when it comes to dis/misinformation.
“AI generative text is prone to spitting out articles and hallucinating bouts,” he reminded the audience.
He recommended that newsrooms be conscious of that as they’re integrating AI technology into their work. It’s not just ChatGPT that’s spreading misinformation, he said. Outlets that use AI to put out as much content as possible are also part of the problem.
Tian presented a free tool his company created to detect AI-generated text. Newsrooms and audiences can use this tool to limit the spread of AI-driven misinformation, he explained.
What can AI do for audiences?
Jeremy Gilbert of the Knight Lab at Northwestern University said that “all too often, we spend time asking, ‘What can generative AI do?’ but we don't ask, ‘What does our audience actually want?’”
Gilbert explained that news consumers don’t necessarily want more content. Instead, they have specific questions that led them to seek out a story, and news outlets should give them specific answers. Generative AI, he said, can help newsrooms build tools that better respond to the audience’s needs.
Is AI going to replace journalists?
World News Media Network CEO Martha Williams delved into the pros and cons of generative AI.
People are already starting to use ChatGPT to get information directly instead of from news outlets – or even Google. That means that advertising and subscriptions will suffer, Williams said. Disinformation will also increase. The challenge for media is to create unique, trusted content that is valuable to their communities – and their own AI tools to power it.
“I do believe anything that can be automated will be, and that's not just journalism. It's media jobs in general,” she said. But Williams also said that automation could open up time and money to pursue more resource-intensive, large-scale journalism projects.
Hearken’s Jennifer Brandel made a similar point, explaining that in the future AI might be able to replace transactional jobs and create efficiencies in others. If AI can replace some of what journalists are doing, she said, we need to be doing something more human: creating connections with people and giving them the information they need to make lives and communities better.
Can AI improve how journalists work?
Fernanda Aguirre of Mexico and Rosario Marina of Argentina presented a project they collaborated on after meeting through the ICFJ-run Emerging Media Leaders fellowship. To work around data from the Argentinian judiciary that is difficult to access and analyze, Aguirre created an AI tool for Marina’s newsroom to use.
The tool turns PDF data into an easily readable format and then allows journalists to interview the data through the tool. “Of course, we’ve got limitations, generative AI is not perfect,” said Aguirre. To combat those pitfalls, Aguirre and Marina include fact-checking prompts when interrogating the data, to ensure that all the information they’re getting is actually coming from the original documents.
“There are a lot of stories journalists aren’t finding because of unfriendly data formats,” Aguirre said. AI tools like this one can now help journalists access datasets to create stories that hold the government accountable and keep communities informed.
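Aguirre and Marina’s exact prompts aren’t published, but the guardrail they describe — instructing the model to answer only from the original documents so every claim can be fact-checked — can be sketched roughly like this (the function name, wording and sample text are illustrative assumptions, not their implementation; the PDF-extraction step is omitted):

```python
def build_grounded_prompt(document_text: str, question: str) -> str:
    """Wrap a reporter's question in instructions that keep the model
    inside the source document and flag unsupported answers.
    Illustrative sketch only -- not the newsroom's actual prompt."""
    return (
        "Answer the question using ONLY the document below. "
        "Quote the exact passage that supports each claim. "
        "If the document does not contain the answer, reply 'NOT FOUND'.\n\n"
        f"--- DOCUMENT ---\n{document_text}\n--- END DOCUMENT ---\n\n"
        f"Question: {question}"
    )

# Hypothetical text already extracted from a court PDF:
doc = "Case 104/2023 was resolved on 2023-05-11 by Judge Perez."
prompt = build_grounded_prompt(doc, "Who resolved case 104/2023?")
print(prompt)
```

The prompt would then be sent to whatever generative model the newsroom uses; because the instructions demand a supporting quote, journalists can verify each answer against the source PDF.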
What can journalists do that AI can’t?
In her keynote session, Jennifer Brandel of Hearken focused on the value that what she calls “actual experience” (or AE) can bring to journalism.
“We have a 521 million-year-old technology called the human brain, which needs equal amounts of investment so it can optimize for things like care, compassion, deep listening, fully embodied information gathering, co-creation and dissemination,” she said.
“We humans still have a competitive advantage when it comes to one dimension against AI, that is care,” she said. “AI couldn't care less. It cannot intrinsically care. So journalists or those doing acts of journalism need to make up for what’s lost, and care more.”
On the final day of Media Party, journalists, developers, designers and more got together for a hackathon. Seven teams presented ideas centered on the opportunities and challenges of AI in journalism. The following teams won awards from the Knight Lab and GPTZero, and future mentoring from ICFJ:
- SourceScout, a platform that uses AI to help media outlets find diverse and under-recognized sources, won the top prize.
- The second prize went to Scroll News, a tool for news organizations to create social media-style news posts and short videos to engage young readers.
- Share a Story, a tool developed by journalism professor Blake Eskin to engage students in news selection and production, tied for third prize.
- Quick Trace, a ChatGPT-aided tool to help reporters parse large amounts of reported material, also received third prize.