On May 18, the Chicago Sun-Times printed a summer special section called the Heat Index. It included a reading list of books that, as it turns out, don’t exist. The problem wasn’t a simple typo or missed edit. The entire list was created by AI and printed without review from the Sun-Times editorial team. Readers went looking for the books, only to realize they didn’t exist.

It’s a mess, but it’s not just about one freelancer or one newspaper. It’s about what happens when people use AI without understanding it and skip the human oversight that should never be optional. This isn’t just about automation. It’s about AI in journalism, credibility, and what happens when the wrong systems are trusted.

AI in Journalism: How Fake Books Got Published

The summer section was created by freelance journalist Marco Buscaglia, working with a syndicate called King Features. He said he used AI for “research,” but there was no research involved: he let it write the whole article. The AI listed book titles and authors that did not exist.

What makes matters worse: the Sun-Times ran it without any editorial eyes on it. No fact-checking. No confirmation that the books existed. And it was printed next to actual reporting by real journalists.

This isn’t just an embarrassing slip-up. It’s a signal that systems are broken:

  • Syndicated content is getting a pass with no internal review.
  • AI is being used to generate fake news without proper checks or guardrails.
  • Understaffed newsrooms, like the Chicago Sun-Times after laying off 20% of its team, are more vulnerable to these kinds of failures.

This is a textbook example of fake news generated by AI, published in a credible outlet without proper oversight. No AI fact-checking tools were applied. No human safety net was in place.


Why AI Oversight in Media Matters

Let’s be clear: AI is not the villain here. Misusing it is.

You wouldn’t let a five-year-old run your kitchen unsupervised. Same idea with AI. It needs guardrails.

AI is there to help people do their jobs, not to make decisions for them or take the place of their experience, judgment, or values. In fields like journalism, healthcare, and government, where the stakes are high and people’s trust matters, you still need a real person making the call. Experience, judgment, and values aren’t programmable.

Too many people want to use AI to do their job for them instead of using it to do their job better. Companies are no better: they treat AI as a shortcut to cut staff and costs, only to backpedal when the tool doesn’t deliver the human touch.

Example: Klarna tried replacing customer service agents with AI bots. It didn’t go well. They had to bring the humans back.

Example: Sports Illustrated used fake AI writers with fake bios and headshots. That blew up fast and did major damage to its journalistic credibility.

This is a clear warning about the misuse of AI in media. Without AI content oversight, it’s easy to blur the line between journalism and fiction.


Journalism Credibility and AI Mistakes

People come to journalism for facts and accountability. When you hand that responsibility over to a machine and skip human review, you are telling readers you don’t care if the content is real.

Once trust is gone, it’s hard to get back.

Institutions already fighting for public trust can’t afford to make errors like this. The Sun-Times tried to explain it away, blaming staff cuts and syndication deals. But at the end of the day, they published false information. That’s on them.

If companies and media outlets want to use AI, they need to train people to use it correctly, not just toss it in and hope it works out. Because when it doesn’t, the damage is real, especially to your reputation and long-term audience trust.

AI Is a Tool and Not a Replacement

AI isn’t going away, but how we use it matters. It’s a tool. That’s all. If you treat it like a replacement for real work, real thinking, or real people, it’s going to cost you.

Sometimes in credibility.
Sometimes in business.
Sometimes both.