News Outlets Respond to AI-Generated Summer Reading List Insert

An AI-generated, syndicated Summer 2025 reading list was printed in the Chicago Sun-Times and The Philadelphia Inquirer

The Chicago Sun-Times and The Philadelphia Inquirer did nothing to boost trust in news media when they were caught in an AI content controversy. Both outlets ran a syndicated insert that included a summer 2025 reading list article. The piece recommended books that do not exist.

After some investigation, the outlets discovered that a freelancer had written the article for the insert, which was produced by Hearst’s King Features. According to The Washington Post, the article’s writer, Marco Buscaglia, admitted to using AI tools such as ChatGPT and Claude to create the piece.

“I’m very responsible about it,” Buscaglia said about using AI for research and reporting. “I do check things out, but in this case, I mean, I totally missed it. I feel like, if given the opportunity, I would approach these things differently and have a lot, you know, obviously better set of filters.”

How the Outlets Responded to Fake News Created by AI

Both newspapers issued statements about the incident. Chicago Sun-Times CEO Melissa Bell published a response to the May 18 section, saying, “We are in a moment of great transformation in journalism and technology, and at the same time our industry continues to be besieged by business challenges. This should be a learning moment for all journalism organizations: Our work is valued — and valuable — because of the humanity behind it.”

The outlet also listed the actions it was taking in response:

  • Subscribers will not be charged for this premium edition.
  • The section has been removed from our e-paper version.
  • We are updating our policies so that all our third-party licensed editorial content complies with our journalistic standards.
  • Moving forward, we will explicitly identify third-party licensed editorial content and provide transparency about its origin.
  • We are reviewing our relationship with our national content partners to avoid future mistakes of this nature.

King Features also released a statement to Chicago Public Media saying it has “a strict policy with our staff, cartoonists, columnists, and freelance writers against the use of AI to create content. The Heat Index summer supplement was created by a freelance content creator who used AI in its story development without disclosing the use of AI. We are terminating our relationship with this individual. We regret this incident and are working with the handful of publishing partners who acquired this supplement.”

The Inquirer posted an article explaining the situation, which included a short statement from Gabriel Escobar, The Inquirer’s editor and senior vice president. He said using AI to produce content was a “violation of our own internal policies and a serious breach. We are looking at ways to improve the vetting of content in these supplements going forward.” 

The article also said that no Inquirer newsroom staff were involved in the creation of the piece.

The Impact on Media Trust

According to a February 2025 Gallup report, Americans’ trust in the mass media is at its lowest point in more than 50 years. While both newspapers issued statements distancing themselves from the content and reaffirming their editorial standards, the incident underscores how quickly AI-generated content can undermine reader trust—even when buried in features articles.

This episode is another cautionary tale for communicators and journalists alike: syndication is not an exemption from fact-checking. As AI becomes a more common tool in content creation, the responsibility to verify facts remains squarely on human shoulders. While fake book titles may seem like a joke to some, such mistakes carry serious reputational risks for the media.

Curtis Sparrer, a former journalist and now Principal at Bospar PR, says that while the outlets apologized, he fears the responses are not enough and do not hit the right notes with their audiences.

“I feel that the 636-word treatise [in the Times] is as clinical as AI itself,” Sparrer says. “I’d recommend Bell go on video with a journalist, and let us hear her explain [the situation] like a human being, with all the ‘urms,’ ‘uhs’ and other tics that differentiate us as human rather than an algorithm.”

Jeromee Scott, Assistant News Director at KOTV in Tulsa, may have put it best in a LinkedIn post about using AI.

“AI doesn’t understand truth,” he wrote. “It generates what sounds right, not what is right. If your name or brand is on something, it’s your job to check the facts. No exceptions.”


Nicole Schuman is Managing Editor at PRNEWS.