NOT AI-Generated Slop
- Emma
- May 21
- 4 min read
I didn’t know if I would post a blog this week.
I haven’t had much time for writing outside of work, and my creative well is dry—in part because work has been incredibly busy. But I have also felt discouraged lately.
Have you noticed the increase in AI-generated posts on your Facebook feed? Photos, ads, and even comments are generated by artificial intelligence, and I’m sick of it. Every day, my friends repost AI-generated content. Sometimes it’s a newsy post. Other times, it’s a heartwarming story or photo. There are also lots of lost dog and missing child posts. Facebook fills my feed with sponsored posts and pages that are completely AI-generated, nary a shred of human-created content in sight. AI posts circulate social media like the stomach flu in a daycare.
There’s a term for this type of content—slop.
New York Times journalist Emma Goldberg wrote an entire article about the slop in our lives. While a big part of the article focused on AI-generated slop, it also mentioned things like fast-casual dining—which slops a bunch of ingredients into a bowl and calls it a meal—and fast fashion. I read the article out loud to my husband, and he made a sobering observation.
“What eats slop?” he asked. “Pigs.”

The bottom line is that our culture cares more about convenience, speed, and quantity than quality, intentionality, or skill.
In my view, the current AI boom is a natural outpouring of the values we’ve been fostering as a culture for years. When we prioritize profit, why not have machines or computers do everything for us? If we care about getting something done quickly more than we care about doing it right, why not have an algorithm handle it?
Earlier this week, a friend shared an article about an AI-generated story published by the Chicago Sun-Times, the second-largest newspaper in Chicago. The publication put out a summer reading list. Turns out it was generated by AI—the first ten books on the list don’t even exist. They were “hallucinated” by artificial intelligence.
Later, the newspaper responded to the incident and said the article was pulled from one of their national content partners—it wasn’t written by a Chicago Sun-Times reporter. The content partner said it had been submitted by a freelancer who generated the article with AI.
I have lots of feelings about this situation, and none of them are good. As a professional writer who holds a bachelor’s degree in journalism, I’m disgusted by the level of carelessness displayed by these so-called professionals. Where were the editors? The fact checkers? This debacle could have been avoided if someone had taken just a few minutes to read the list of book recommendations.

Unfortunately, I think this example of negligence is just a symptom of a disease that has been festering within our culture for years, and artificial intelligence has capitalized on it: laziness. Laziness is not new to humankind—we’ve been uninterested in doing work since the dawn of time—but advances in technology have made avoiding it easier than ever.
While there are products and services that promise an easier life, the line between convenience and laziness is blurrier than ever before.
Let’s use the Chicago Sun-Times as an example. Pulling content from a national content partner is pretty standard practice among newspapers. When I was a copyeditor at the local paper, we’d pull stories from the AP Newswire or other papers owned by our publisher to fill space when we didn’t have enough content from our reporters or if we needed national coverage on something. This service was a convenience. However, it becomes lazy when you don’t take the time to read the content you’re pasting into your InDesign document. While the Associated Press may be one of the most trustworthy content providers out there, we still read the stories to ensure they were a good fit for our publication. The editor read everything we put on the pages, whether our team wrote it or not.

But laziness isn’t the only issue. Despite AI tools being pitched as a win for productivity and efficiency, employee workloads have increased. I’ve experienced this myself. Employers may think AI will make their employees more productive, but the picture is far different: a 2024 Upwork study found that 77% of employees said AI tools actually increased their workloads, and nearly three-quarters said they were burned out.
Laziness and burnout seem at odds with each other, yet they are both negative effects of the same thing: generative AI.
Maybe there are positive outcomes of this technology, but I’ve yet to see any. While the ads are overwhelmingly positive, the news articles are not. Generative AI companies, including Meta, have violated copyright laws to train their technology. Most college students are using the technology to cheat their way through school—and companies are targeting them with ads promoting academic dishonesty. There are even apps that allow you to have “unrestricted chats” with AI “friends.”
We are just beginning to see the fallout from generative AI, and the outlook is grim. I posted about this on Facebook a few days ago, and my followers responded with similar dismay. One of them even said, “I don’t know what’s gonna happen to writing.”
I don’t either.
Maybe my profession will become irrelevant. Maybe it won’t.
I’m always going to love writing and benefit from it. I don’t need it to be fast or efficient or productive. I just need it to come from me.

Man, AI is kinda terrifying to me. While I use it occasionally for questions like “what product would you recommend for this situation?” at my job, I still do the work to make sure I actually believe it’s a good fit for what I’m looking for. Asking AI is basically a last-ditch effort, as I’d rather ask a coworker.
I’ve also used it recently for planning a girls’ trip. While it was kind of helpful for thought starters, I ended up doing the work to find lodging that fit our needs better.
I have heard people using AI to help with meal planning - now that might be something I would actually do if I didn’t subscribe to…
I've had patients come in with their own medical diagnosis from ChatGPT, or they are using ChatGPT as their therapist. It's sad.
Well said, Emma and Linda M.!!
Great blog post, Emma!
I have seen AI creeping into my field of meteorology, too. Granted, powerful computer models generate the 50+ forecasts used to get an idea of where storm systems are going to track, but these models are meant to serve only as guidance, not as gospel.
In our area of Virginia, elevation, surrounding terrain, proximity to water sources, and other factors have a major influence on weather, but AI is generally unable to account for these. Therefore, I have had people at work ask me why their phone just told them we are getting 10" of snow, when my forecast was for 1" of a wintry mix.
Will AI ever learn…
What will happen to mankind when there is a surrender of human thinking and thought? Good words, Emma.