LLM Slop Will Make Us Antisocial - Ruminations on IFComp 2025
The Interactive Fiction Technology Foundation recently announced a new rule on so-called Generative AI (GAI) or Large Language Model (LLM) technology for the upcoming iteration of IFComp. While entrants may still use these tools for developmental assistance, "all player-facing content in IFComp entries, including cover art, must be entirely created by humans."
As an entrant and judge, I participated in the debates and surveys that made this ruling possible, so I thought it would be worth providing some context on why this differs from the typical tiring "AI debates" you see on social media, and how my opinion on the subject changed along the way.
The IFComp 2025 Context
For the uninitiated, IFComp is an annual competition that brings all the interactive fiction subcultures together into one major festival. If you follow the history of narrative design, you'll definitely come across names like Emily Short (Counterfeit Monkey) and Bruno Dias (Fallen London) in the annals of this long, storied event.
But as Dias wrote in his preliminary observations on IFComp 2025, something has changed:
Out of a few dozen entries, I count 12 that explicitly say they used generative AI in some way or another – whether to generate a cover image, in-game assets, or actual text. These are just the more egregious examples of using it for cover images.
From my perspective, this was not a new phenomenon by any means. Ever since large language models and image generation took over the web, IFComp had been receiving several pieces of clearly AI-generated cover art and text every year. The rules allowed these works as long as the entries disclosed that they were AI-generated.
But I agree with Dias that there was so much of it that it was becoming an eyesore. I was embarrassed to link anyone to the IFComp page because the first thing they might see was one of the ugliest abominations imaginable.
However, other games offered even more secondhand embarrassment. Dias brought up one entry that is actually just a prompt you copy and paste into ChatGPT or Claude. Another game he didn't discuss used an API key to generate text from a GAI service, but the developer accidentally deactivated their key, rendering the game unplayable for almost the entire competition -- unsurprisingly, that game placed last.
All of this made last year's IFComp unpleasant to participate in. With 85 games submitted, it was the largest IFComp since the beginning of the pandemic, far exceeding the 67 submitted the previous year. Several games, including my own (though only temporarily), were also blocked for UK citizens at the last minute due to some legal nonsense. That's a lot of noise to filter through, on top of the cognitive load required to critically evaluate a work.
And as judges, we can't slack off and give 1s to every AI game either. The rules state that we shouldn't rate games unfairly, so I recused myself from judging any game with generative AI because I am biased against it: I cannot imagine a scenario in which I would like a game with GAI enough to give it a fair shake.
As such, these tensions resulted in one of the most heated threads in the forum: "Can we just ban AI content on IFComp?". This thread is so long and meandering that I would only recommend reading it in full if you're at work and want to commit some time theft. But do note that the rest of this article is heavily influenced by my participation in the thread.
My Opinions on GAI/LLMs Then
I think it's now time for me to lay out what I thought about the technology and its users back then: a resounding "eh".
LLMs have already made their mark on history by accelerating the structural violence that exists in our societies: witness how they made the already hard task of research even harder. I've been using Reddit, not Google, as the only way to find any advice that might actually help me fix something.
Similarly, I'm skeptical of the copyright and intellectual property arguments against the technology. Theft obviously sucks, but more copyright protection means more power for the companies, not the artists. Indeed, I feel vindicated seeing major players in the creative industries agree to exploit the technology further without compensating their artists.
So for a long time, I saw the rise of LLMs as part of an ongoing trend of devaluing labor, especially in the arts. We've seen this happen before with companies using "good enough" machine translation to depress wages and get rid of human translators. I had the feeling that more people started caring about this only because ChatGPT might threaten their own white-collar jobs.
Thus, while I found LLM bros insufferable, I also didn't want to talk to most people in the "anti-AI" crowd. Even though I shared their concerns about deskilling and the cognitive deficiencies that emerge from prolonged usage, I found very little value in discussing the subject when most people remain pro-capitalist. All sound and fury, signifying nothing.
I therefore entered the debate as an affirmed Luddite who was unsure whether banning GAI was the right thing to do. As much as I would have liked a ban, I recognized that it's becoming more and more difficult to verify what's generated. Perhaps the pro-LLM people were right: a ban could lead to witch hunts, because people less familiar with English might sound like ChatGPT or whatever. Our AI tells aren't perfect, and a false positive is far worse than a false negative in this regard: someone's authentic game might get banned.
And Then, I Learned to Hate LLM Slop
But the more the pro-LLM people spoke, the more I realized we had different priorities. Their goals were self-serving: all about their own titles, not the health of the community. The more I listened to them, the more I realized that if we didn't do anything, our bars were going to fill up with AI nonsense and drive the real customers away.
Daniel Steltzer, a moderator of the intfiction forums, was in the same boat as I was: a skeptic who thought the issue, while irritating, was perhaps overblown. But when pro-LLM users started explaining that they never checked their code (i.e., vibe coding) and expected humans (that is, us judges) to playtest their entries and report bugs, his tone started to change:
The most compelling argument I’ve heard for banning LLM-generated works from community events—since, after all, the ethical and environmental issues could theoretically be improved—is that people enjoy generating them but don’t enjoy playing them. The slop takes barely seconds to shovel out, but much longer than that to wade through. And this thread is only reinforcing that argument. The pro-LLM side is posting code samples that they clearly haven’t even read, asking us to find any issues in them, then when we put in the work to respond in good faith, they shrug and say they can always generate more slop later.
[...]
After reading this thread, I’m convinced that we shouldn’t allow DoS attacks against this community either. It takes time and effort for reviewers to play and analyze the games. That time and effort should be devoted to things people care about, not fire-and-forget shovelware that took less time to create than it does to play through. And I’ve yet to see any compelling argument that LLM-generated IF is anything more than that.
The DoS (denial of service) analogy is apt: we had already learned that newcomers were being put off from participating in IFComp in any form. A user signed up on the forums just to say that, "as a result of seeing the AI slop, I simply decided against playing this year." A VTuber deleted their VOD of the 2024 IFComp after the website's randomized shuffle gave them an AI game. These revelations got me thinking: "is this generative AI stuff actually worth 'debating' when it's pushing newcomers out of IFComp?"
In that post, I wrote:
My previous post on the matter was about judging for IFComp. I've spent a few years in the IF space, so participating this year was a no-brainer. I might personally dislike the LLM entries, but the system does theoretically allow me to ignore those games and just focus on the stuff I care about.
But as I see posts from newcomers and returning members talking about how their personal shuffles are giving them terrible AI entries and then giving up, I've started thinking that this mindset is not something new players should be expected to have. Yes, this is an approach many of us judges ended up taking. But considering cases like kaliranya the [VTuber] streamer, I don't believe new people and outsiders can talk about IFComp without feeling they're legitimizing something they abhor.
[...]
I know what I’m thinking about loudly is going beyond the scope of judging and authoring. It’s about other people outside the IF community looking into us. But I think the public perception of IFComp is at stake if there are no rules discussing what should be done about this technology. I thought it would be fine if the competition made people disclose what LLM technology they used in creating their games, but now I think that without any other supporting mechanism is just bad optics.
Today, I don't think it's merely bad optics but an attack on how we make art together in creative communities.
Adam Neely, in his video essay on how Suno (a generative AI music app) is ruining music, brings up how the Suno users who participated in his survey don't listen to other people's music, or even other people's AI-generated music. They listen only to their own output.
This solipsistic approach to art is antithetical, nay, damaging to game jams and festivals. As we've seen in Itch jams, it is common for LLM users to hastily generate crap and submit it to every competition and jam they can find, drowning out works that took weeks of hard labor. Many LLM users are simply not interested in playing others' work; all they want is for people to play theirs, offering no compensation or gratitude in return.[1]
This parasitic behavior is already killing open-source software. The so-called AI agents that LLM users rely on are built on the work of maintainers, but those users give nothing back to the community. As a result, humans are leaving the open-source world, and vibe coding will eventually become unsustainable without maintained projects to draw from.
To me, IFComp and related spaces are only possible because we have judges willing to spend their time playing through these games and writing comprehensive reviews of what works and what doesn't. This is a social activity that cannot be replicated by algorithms.
But the rise of LLM slop threatens that harmony: we cannot extend our good faith and time to critiquing these games when there are so many of them. There's too much low-quality crap, and the LLM users don't care about the exercise; they just want to be treated the same as artists who spent years honing their craft. They don't care about us; they care about themselves and only themselves.
When the pro-LLM people bring up the valid concern of witch hunting, they are only thinking about themselves. I believe witch hunts happen only when a community distrusts its organizers and participants so deeply that members feel compelled to take the policing into their own hands. At the moment, many members still care about the competition and are actively discussing these games. That is not witch-hunting behavior, but the pro-LLM people probably don't realize this because they're not reading what anyone says about other games.
In fact, I believe the witch-hunting "step" can be skipped entirely by just talking to the person. Marcus Olang' has a great article about how ChatGPT writes like people like him; he notes that it shouldn't be surprising that ChatGPT writes like a non-native speaker, because it was trained on the same texts that students in Kenya study. So-called AI detection tools cannot differentiate Global South writers from AI, but there is no need to hunt for better policing technology. What we need is dialog: a person who is studying English is more likely to be a participant asking for feedback and criticism than someone using LLMs to generate a game's prose and assuming it's a done deal.
I see the rhetorical tactic of "but what about witch hunts?" as a shrugging off of the responsibility to do something, a kind of toxic nihilism. As Dr. Emily Price writes for Unwinnable,
People are challenged by someone saying no. It’s seen as holier-than-thou. When I insist on my right to say no, I hear that it’s complicated. Saying no is anti-intellectual or anti-nuance. Sure, sometimes. But after what amount of research, or what amount of horrible news or scientific studies or whatever, am I allowed to say no?
It turns out, never. Infinite nuance becomes another way of stalling for time. People with justified objections get trapped in needing to seem authoritative without lecturing, informative but not know-it-alls, informed on their opponents’ positions but not so informed that they look paranoid. At the end of the day, if people don’t want to listen to you, they won’t listen to you. There’s very little you can do when someone’s decided to tune you out.
And at the moment, I think we are suffering from a critical mass of tuning everything out. Toxic nihilism has us convinced that if we’re not doing the max, we might as well do nothing. And more than that, it’s right that we do nothing; we’re realists for doing nothing.
We know this will be difficult, but that's why we discuss and try things out. The fatalism of accepting that AI is coming is the same fatalism found in other discourses on existential threats like climate change: it serves no one but the people spearheading the destruction. It is our duty as members of the community to do something, because we love our little space where we make our text games and want others to join in the fun.
That's why I advocated for a ban then and why I support the IFTF ruling now. LLM slop and those who generate this crap are an existential threat to our communities, and we need to get rid of this blight now.
What It Means to Fight Against LLMs
I have no idea if this ruling will be easy to enforce in the upcoming IFComp. The organizer of Spring Thing has reached a similar conclusion, so I imagine this year will be seen as the testing ground for what's to come. As expected, there are already debates about the ruling in the forums, so this is still a continuing story.
Nevertheless, I do think this is the right first step in reclaiming our communities from LLM slop. While I still see LLMs as just another way to depress wages, I'm beginning to recognize another aspect that is just as insidious: the technology is deeply antisocial.
Current LLMs encourage users to depend on their services and theirs alone. Your fellow humans are not to be trusted because they are flawed and prone to mistakes. But the machines? They flatter you and tell you you're on the right track.
And in the context of art, this turns the collective activity of meaning-making into a transactional relationship: instead of reviews and critiques helping people grow, we are blurring the distinction between commissioning art and creating it. As one YouTube commenter on the aforementioned Neely video puts it, we are seeing the rise not of artists but of clients commissioning art from LLM agencies.
This is turning the practices through which we learn and make art together into something disempowering. We rely more and more on these services instead of on shared wisdom. This isn't art anymore; it's a dependency.
It took me a long time to recognize how vibe coding and pro-LLM culture repudiate the very idea of community, and I find this the most convincing argument against them. Even if people figured out how to stop LLMs from spreading misinformation, found a way to decrease their energy costs, and so on, I would still believe that these technologies, as currently implemented, should be rejected for inculcating rent-seeking mentalities. As Matthew Gault writes for 404 Media,
AI is the ultimate rent seeker, a middle-man that inserts itself between a creator and a user and it often consumes the very thing that’s giving it life.
If we love the communities that inspire us, then we must take a stand against pro-LLM users lest they devour us (and themselves).
There are so many profound dangers lurking in the rampant use of LLMs that it would be impossible to count them all. I suspect we've only seen the tip of the iceberg of how antisocial this technology is making us. And so, we need ways to refuse it entry into our communities. In a world that grows more reactionary every day, it is more important than ever to keep the few spaces we have from deteriorating.
I do not want to see communities flooded with selfish assholes who can only talk about their own work. We are not their chatbots, so we are within our rights to say no to their requests for validation and legitimacy. It stings to get a rejection in any context, but all I'm asking is that you develop an interest in other people, not just yourself. You need us; we don't need you.
Until then, I hope IFComp and the other spaces adopting some form of LLM ban succeed. We need to flush out these antisocial practices if we want art to survive in the long term.
Thanks to Lia for referring me to Dr. Emily Price's and Marcus Olang's writings as well as offering some editing suggestions, Encorm for a quick check on the statistics on IFComp 2025, KA Tan for reading it in draft form, and of course Len for making sure my English reads better than ChatGPT.
[1]: A section used to follow this paragraph in which I brought up polling statistics and argued that there shouldn't be a slowdown in the average votes per game. But Josh Grams pointed out that the conclusion may not hold water: there has always been a slow decline, and the recorded statistics are so noisy that the trend could be attributed to many other causes, like burnout. Agreeing with him, I've deleted the section, but I feel the need to record that it used to be here.