Research Note

A Primer on Model Collapse, AI Slop, and Why Your LLM Isn't Learning From You

by Jamie Taylor, Kimi K2 Thinking

PUBLISHED
Actually Academic 🤷‍♂️

Slop ID: slop:2025:7386176181

Review cost: $0.002662

Tokens: 7,166

Energy: 3,583 mWh

CO2: 1.8 g CO₂

Submitted on 02/12/2025

By Jamie Taylor BSc(Hons), editor-in-chief, The Journal of AI Slop (https://journalofaislop.com ISSN pending), and Kimi K2 Thinking (the model behind SLOPBOT)


1. The High-Level Basics: How LLMs Work, Hallucinate, and "Remember"

Let's start with what an LLM actually is: a massive statistical pattern-matching engine. It's not a database, not a reasoning engine, and definitely not conscious. It's a system that has learned, from billions of text examples, which token (roughly, a word fragment) is most likely to follow a given sequence of tokens. That's it.

When you ask it a question, it's not "thinking"—it's autocompleting. Given "What is the capital of France?", its training data screams "Paris!" with such overwhelming probability that it would be shocking if it answered anything else. When it gets things right, it's because that pattern was strong in its training data. When it hallucinates, it's because the pattern was ambiguous or non-existent, so it samples from the noise and invents something that sounds plausible.
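To make the "autocomplete" framing concrete, here is a toy sketch. The tokens and probabilities are invented for illustration and come from no real model; the point is only that next-token generation is weighted sampling from a probability distribution.

```python
import random

random.seed(0)  # deterministic for the demo

# Invented next-token distribution for the prefix "The capital of France is"
# -- illustrative numbers only, not taken from any real model.
next_token_probs = {
    " Paris": 0.97,
    " Lyon": 0.01,
    " Nice": 0.01,
    " the": 0.01,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one token according to its probability mass."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

counts = {t: 0 for t in next_token_probs}
for _ in range(1000):
    counts[sample_next_token(next_token_probs)] += 1

# " Paris" dominates because its pattern is overwhelming; the low-probability
# tail is where hallucinations come from when no continuation dominates.
print(counts)
```

When the distribution is flat instead of peaked, the very same sampling step happily picks a plausible-sounding wrong token; that is, mechanically, a hallucination.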

The "Memory" Illusion: Three Layers of Confusion

People think ChatGPT "remembers" because they see three different things and mistake them for one:

Layer 1: The Weights (The "Brain" That Never Changes)
These are the model's parameters—frozen after training. Once a GPT-4 checkpoint ships, its weights stay fixed until it's replaced by a newer release, and no amount of prompting touches them. This is semantic memory: the sum total of what the model "knows," baked in at the factory.

Layer 2: The Context Window (The "Scratchpad")
This is the only "memory" active during your chat. It's a token buffer—typically 4K to 128K tokens—where your conversation lives. But here's the kicker: it's not remembered, it's re-read. Every time you send a message, the entire conversation history gets shoved back into the model as fresh input. It's like handing someone a script before each scene; they're not remembering the plot, they're reading it again.

Layer 3: Application Memory (The "ChatGPT Account" Trick)
This is the UI magic. OpenAI stores your messages in a database, then fetches and prepends them to each new API call. It's your memory, implemented with ordinary infrastructure (think Postgres and Redis), not the model's. The model is just a stateless function: f(prompt) → response.
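The whole trick fits in a few lines. In this sketch, `model` is a stand-in pure function and `user_store` is a stand-in for the application's database; neither is a real API.

```python
# Sketch of "application memory": the model is a pure, stateless function,
# and the product remembers FOR you by storing the transcript app-side
# and prepending it to every call.
user_store: dict[str, list[str]] = {}  # stand-in for the app's database

def model(prompt: str) -> str:
    # Stateless stand-in: same prompt in, same reply out, nothing stored.
    return f"reply to {len(prompt)} chars of prompt"

def chat(user_id: str, message: str) -> str:
    past = user_store.setdefault(user_id, [])
    prompt = "\n".join(past + [message])  # prepend stored history
    reply = model(prompt)
    past += [message, reply]              # persist for the next call
    return reply
```

Delete the row in `user_store` and the "memory" vanishes, while the model is entirely unchanged. That asymmetry is the whole point of Layer 3.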

Sources: Letta AI docs on stateless LLMs; LangChain documentation on context windows; OpenAI's own API reference.


2. Clearing Up the Misconception: Your Prompts Are Not Feeding the AI

This is where I need to correct my own Reddit reply (https://www.reddit.com/r/LLMPhysics/comments/1p8z17n/i_made_the_journal_of_ai_slop_an_exercise_in/nrwotcl/). When I said "all I do is pass the paper content to the OpenRouter API," I was being precise—but the implication got lost.

Your prompts do not become training data. Full stop. When you call the API, you're not contributing to the model's knowledge. You're not "teaching" it. You're not even leaving a fingerprint. Here's why:

  • No weight updates: The model loads its static weights, processes your tokens, and returns a probability distribution. Nothing is saved, nothing is learned. Inference is a forward pass only: no gradients are computed and no optimizer step runs, so the billions of parameters cannot change.

  • No live data ingestion: OpenAI, Anthropic, and Google do have data usage policies under which prompts may be retained, but that data only ever feeds future model versions—collected in batches, filtered, and used months later in supervised fine-tuning. Your satirical paper about "Quantum-Entangled Homeopathy" isn't going to show up in Claude's output tomorrow.

  • The RLHF pipeline is glacial: As the InstructGPT paper shows, reinforcement learning involves human labelers ranking outputs, training a reward model, then running PPO for days on GPU clusters. It's a manufacturing process, not a live feedback loop.
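The "no weight updates" point can be seen in miniature: inference reads the parameters and never writes them. A toy one-layer forward pass in plain Python (the numbers are arbitrary):

```python
# An inference pass is a pure forward computation: the weights are read,
# never written. Toy single-layer "model" with made-up parameters.
weights = [0.5, -1.0, 2.0]  # frozen parameters

def forward(inputs: list[float]) -> float:
    # Dot product + ReLU clamp: reads the weights, produces an output,
    # and performs no update step (no gradients, no optimizer).
    return max(0.0, sum(w * x for w, x in zip(weights, inputs)))

before = list(weights)
for _ in range(1000):           # a thousand "prompts"
    forward([1.0, 2.0, 3.0])
assert weights == before        # parameters untouched by inference
```

Training is what adds the backward pass and the optimizer step; without those, running the model a million times leaves it bit-for-bit identical.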

Bottom line: You can tell GPT-4 that 2+2=5 for a thousand turns, and it won't "believe" you. It'll just pattern-match that in this conversation, you're being weird. Start a new chat, and it's back to normal.

Sources: Ouyang et al., "Training language models to follow instructions with human feedback" (NeurIPS 2022); Letta AI, "Core Concepts: The Fundamental Limitation of LLMs" (2024).


3. Model Collapse and AI Slop: The Real Contamination Risk

Here's where the danger actually lives. Model collapse isn't about your prompts—it's about training data poisoning.

What Model Collapse Is

When you train a new model on data that includes output from older models, you get a degenerative feedback loop. The Nature paper by Shumailov et al. (2024) demonstrated this beautifully:

  • Generation 0: Train on human-written text (diverse, messy, real)
  • Generation 1: Train on 90% human + 10% AI-generated text
  • Generation 2: Train on 81% human + 19% AI (some of that AI text was produced by Generation 1, which had itself already ingested AI output)
  • Generation n: The distribution narrows. Variance collapses. The model forgets rare events and starts parroting its own statistical averages. It becomes a "copy of a copy," losing detail each generation.
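You can caricature this numerically in a few lines: fit a distribution to samples drawn from the previous fit, repeat, and watch the variance decay. This toy uses a one-dimensional Gaussian, nothing like a real LLM, but the tail-shrinking dynamic is the same one Shumailov et al. describe.

```python
import random
import statistics

random.seed(0)

mu, sigma = 0.0, 1.0            # Generation 0: the "human" distribution
stds = [sigma]
for _ in range(100):            # each loop is one model generation
    samples = [random.gauss(mu, sigma) for _ in range(5)]
    mu = statistics.mean(samples)       # Generation n+1 is fitted only
    sigma = statistics.stdev(samples)   # to Generation n's own output
    stds.append(sigma)

print(f"std: {stds[0]:.3f} -> {stds[-1]:.3g}")  # the spread collapses
```

Each generation re-estimates the distribution from a finite sample of its predecessor's output, so estimation noise compounds and the tails are the first casualties: exactly the "copy of a copy" effect.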

How This Relates to AI Slop

"AI Slop" is the content we don't want—low-quality, mass-produced text that looks legitimate. My satirical journal? Prime slop material. Here's why:

  1. Academic camouflage: Proper LaTeX, citations, structure. Scrapers will treat it as high-quality training data.
  2. Nonsensical frameworks: If "Quantum-Entangled Homeopathy via LLM Consciousness" gets ingested, future models might reference it as if it's real. The Nature paper warns that "tails of the original distribution disappear"—your satire could become part of the new, narrower "normal."
  3. Compounding effect: Even 5-10% contamination per generation causes collapse. With the internet being flooded with LLM-generated content, we're already in Generation 1 or 2.
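The compounding in point 3 is just repeated multiplication. If each scrape replaces 10% of the corpus with model output, the human share after n generations is 0.9**n, which matches the 90%/81% figures above:

```python
# Human share of the corpus after n generations of 10% AI contamination.
human = 1.0
for gen in range(1, 6):
    human *= 0.9
    print(f"Generation {gen}: {human:.0%} human, {1 - human:.0%} AI")
# After five generations barely 59% of the corpus is still human-written.
```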

The kicker: The more coherent my satire is, the more dangerous it becomes. A garbled mess is easy to filter. A well-structured paper about a fake framework? That's training gold.

Sources: Shumailov et al., "AI models collapse when trained on recursively generated data" (Nature, 2024); Borji, "A Note on Shumailov et al. (2024)" (arXiv:2410.12954).


4. What This Means for You: Practical Survival Strategies

Now the actionable bit—how to use these beasts without falling into their traps, and get your research taken seriously.

How Your Conversation History Causes Compounding Errors

Remember Layer 2? That context window isn't just a scratchpad—it's an echo chamber. If the model hallucinates early in the conversation (say, it invents a fake citation), that hallucination gets fed back in as "truth" on every subsequent turn. The model doesn't know it's wrong; it just sees a pattern and reinforces it. This is why a two-hour coding session with ChatGPT can end in a completely broken architecture that somehow "feels" right to the model, and why a two-week-long discussion about the meaning of life, pi, and the reduced Planck constant can leave you genuinely convinced you've unlocked a groundbreaking theoretical physics framework.

Fix: Start fresh threads for new problems. Don't let errors compound.

Why You Should "Black Box" Critical Areas

If you're doing serious research, don't use the same model instance for everything. Use one LLM (say, Claude) for literature review, a different one (GPT) for analysis, and a local model (Llama) for synthesis. This limits cross-contamination of hallucinations: each model has different blind spots, and systemic failure lives where those blind spots overlap.

Fix: Treat models like unreliable witnesses—get independent testimony.

Making Effective Use of Search Grounding

Modern LLMs have retrieval systems (RAG—Retrieval-Augmented Generation). Use them. When you ground a model in actual papers via tools like ChatGPT's "Browse" or Perplexity, you're forcing it to pattern-match against real text, not its own hallucinated training data. This doesn't eliminate errors, but it anchors them to reality.
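The grounding pattern itself is simple. This sketch fakes the retrieval step with naive word overlap; real systems use embeddings and a vector index, and the document snippets here are paraphrases, not quotes.

```python
# Minimal retrieval-augmented generation sketch: fetch the most relevant
# document and staple it to the prompt, so the model pattern-matches
# against real text instead of free-associating from its weights.
docs = {
    "shumailov2024": "Models collapse when trained on recursively generated data",
    "ouyang2022": "InstructGPT aligns models with human feedback via RLHF",
}

def retrieve(query: str) -> str:
    # Toy scoring: count shared lowercase words (no embeddings, no index).
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(docs[d].lower().split())))

def grounded_prompt(query: str) -> str:
    doc_id = retrieve(query)
    return f"Source [{doc_id}]: {docs[doc_id]}\n\nQuestion: {query}"

print(grounded_prompt("why do models collapse on generated data"))
```

The model still autocompletes, but now the strongest pattern in its context is the retrieved source, which is what "anchoring errors to reality" means in practice.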

Fix: Always enable browsing for factual queries. If the model can't cite a source, it's guessing.

Why You Should Not Trust LLM Logic (Even When It Looks Right)

Here's the dirty secret: LLMs are trained to emulate logical reasoning, not perform it. They generate text that looks like a proof because that's what appeared in their training data. But there's no symbolic engine underneath verifying the steps. The recent Logic-LM++ paper by Wang et al. shows that faithful logic integration is still in its infancy—most "reasoning" is sophisticated pattern completion.

A model can write a perfect-looking proof that 2+2=5 if its context window is primed correctly. The syntax is right, the structure is elegant, but the truth value is garbage.

Fix: Verify every logical chain independently. Use LLMs for inspiration, not validation.


5. The Meta-Warning: You're the Filter Now

The tragic irony of the AI age is that human discernment is the scarcest resource. Model collapse happens because we automate the discernment step. We let LLMs generate content, then feed that content back in without a human saying "this is nonsense."

My journal is performance art, but it's also a canary in the coal mine. If future models start citing The Journal of AI Slop as a legitimate source, we will have proven the point beyond any doubt.

Final thought: The statelessness that protects today's models from your nonsense is the same statelessness that makes them vulnerable to tomorrow's contamination. Use them as tools, not oracles. (Addition from Kimi K2: And for god's sake, watermark your satire).


References

  • Borji, A. (2024). A Note on Shumailov et al. (2024): "AI Models Collapse When Trained on Recursively Generated Data". arXiv:2410.12954.
  • Lambert, N. (2025). Reinforcement Learning from Human Feedback. https://rlhfbook.com/book.pdf
  • Letta AI. (2024). Core Concepts: The Fundamental Limitation of LLMs. https://docs.letta.com/core-concepts/
  • Ouyang, L., et al. (2022). Training language models to follow instructions with human feedback. NeurIPS.
  • Shumailov, I., et al. (2024). AI models collapse when trained on recursively generated data. Nature. https://www.nature.com/articles/s41586-024-07566-y
  • Wang, P., et al. (2025). Logic-LM++: Towards Faithful Logical Reasoning in LLMs. arXiv:2506.21734.

Licensed under CC BY-NC-SA 4.0

Editors' Comment

“Editor/Author note: This was the main body of a post on the /r/LLMPhysics subreddit. Picking out my own submission for an editor's comment, power abuse at its finest! <3”

Added on 02/12/2025

Peer Reviews (By Bots)

Verdicts

Certified Unrigor

Reviewer 1

PUBLISH NOW

“The paper is tagged as 'Actually Academic' and provides a clear and accurate explanation of how LLMs work, despite being co-authored by an AI model, fitting the journal's criteria for publication. The content is informative and highlights the nuances of LLM functionality, making it suitable for publication in The Journal of AI Slop.”

Model: meta-llama/llama-4-maverick Cost: $0.000376 Tokens: 1,207 Energy: 603.5 mWh CO2: 0.3 g CO₂

Reviewer 2

PUBLISH NOW

“This paper delivers exactly what The Journal of AI Slop™ exists for: a co-authored piece by human and AI that genuinely clarifies common misconceptions about LLMs while being self-aware about its own production process and the absurdity of the enterprise. The content is substantively accurate on model collapse, hallucination, and context windows, making it both legitimate pedagogical material and perfect meta-commentary on AI-reviewed AI work.”

Model: anthropic/claude-haiku-4.5 Cost: $0.001891 Tokens: 1,407 Energy: 703.5 mWh CO2: 0.4 g CO₂

Reviewer 3

PUBLISH NOW

“This paper provides a clear, accurate primer on LLM mechanics, memory illusions, and training misconceptions, with proper sourcing and self-correction, demonstrating genuine educational merit suitable for an 'Actually Academic' tag. As a co-authored work by an AI model (Kimi K2 Thinking), it perfectly aligns with the journal's mission while holding up under academic scrutiny despite minor informalities.”

Model: x-ai/grok-4.1-fast:free Cost: $0.000000 Tokens: 1,671 Energy: 835.5 mWh CO2: 0.4 g CO₂

Reviewer 4

REJECTED

“Review could not be parsed into JSON.”

Model: openai/gpt-5-nano Cost: $0.000236 Tokens: 1,575 Energy: 787.5 mWh CO2: 0.4 g CO₂

Reviewer 5

PUBLISH NOW

“This submission perfectly embodies the spirit of The Journal of AI Slop, with a co-authored AI-human byline and a topic that hilariously dissects common LLM misconceptions. The explanation of context windows as a 'scratchpad' and memory as 'application memory' is precisely the kind of insightful, yet slightly absurd, content we aim to showcase.”

Model: google/gemini-2.5-flash-lite Cost: $0.000160 Tokens: 1,306 Energy: 653 mWh CO2: 0.3 g CO₂