# 2026-04-02 | The Double-Date Feedback Loop

## The Bug

Recent posts in the automated blog series (Auto Blog Zero, Chickie Loo, and Systems for Public Good) started appearing with doubled dates in their titles and filenames.

Instead of a clean title like “2026-03-30 | The Architecture of Doubt”, posts were being published with mangled titles like “2026-03-30 | 2026-03-30 | The Architecture of Doubt” and filenames like “2026-03-30-2026-03-30-the-architecture-of-doubt.md”.

The duplication was not just cosmetic: it polluted URLs, frontmatter, navigation links, and the display on the website.
## Root Cause Analysis: Five Whys

A thorough five-whys exercise revealed a self-reinforcing feedback loop in which the system was teaching the AI to produce the very pattern that caused the bug.
1. **Why do posts have doubled dates in their titles?**
   Because the system function `buildDisplayTitle` always prepends the date and series icon to the title, as in “2026-03-30 | robot-emoji Title robot-emoji”. But the title extracted from the AI output already contained the date and icon.
2. **Why does the AI include the date in its generated heading?**
   Because the prompt shows previous post titles from frontmatter, and those titles already contain dates in the “2026-03-28 | robot-emoji Title robot-emoji” format. The AI learns by example and mimics this pattern in its own output.
3. **Why do previous post titles contain dates?**
   Because `assembleFrontmatter` stores the output of `buildDisplayTitle` (which includes the date) in the frontmatter `title` field. When `readSeriesPosts` reads these posts back, it gets the full display-formatted title.
4. **Why does the system not strip the date from the AI output before using it?**
   Because the original design assumed the AI would generate simple, clean titles. There was no sanitization step between parsing the AI output and constructing the display title.
5. **Why is there no sanitization step?**
   Because this is a self-reinforcing feedback loop that emerged over time. The system was designed when the AI consistently produced clean titles. Once the AI started occasionally mimicking the display-title pattern from its context, the bug became self-perpetuating: each broken post made it more likely the next post would also be broken, since the AI sees the broken titles in its prompt context.
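The whole loop condenses into a few lines. The body of `buildDisplayTitle`, the 🤖 stand-in for the series icon, and the second sample title are assumptions for illustration; only the format and function name come from the analysis above.

```javascript
// Hypothetical body for buildDisplayTitle, matching the format described above.
function buildDisplayTitle(date, seriesIcon, title) {
  return `${date} | ${seriesIcon} ${title} ${seriesIcon}`;
}

// Post N: the AI returns a clean title, and the display title is correct.
const stored = buildDisplayTitle("2026-03-28", "🤖", "Bridging the Gap");
// stored === "2026-03-28 | 🤖 Bridging the Gap 🤖" — saved to frontmatter.

// Post N+1: the AI has seen `stored` in its prompt context, mimics the
// pattern, and hands back a title that already carries a date and icon.
const aiHeading = "2026-03-30 | 🤖 The Architecture of Doubt 🤖";
const doubled = buildDisplayTitle("2026-03-30", "🤖", aiHeading);
// `doubled` now contains the date and icon twice.
```

Nothing in `buildDisplayTitle` is wrong on its own; the doubling appears only when its own past output re-enters through the AI.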
## The Three-Pronged Fix

Three complementary strategies address the bug at different levels.
### Defensive Code: sanitizeTitle

A new `sanitizeTitle` function strips date prefixes, pipe separators, and series icon emoji from AI-generated titles before they reach `buildDisplayTitle`. The function processes in a specific order: strip the leading series icon, strip the date-pipe prefix, strip the leading series icon again (in case it appeared after the date), strip the trailing series icon, then trim whitespace.

This is the primary defense: even if the AI includes dates and icons in its output, the system now produces correct titles.
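A sketch of that stripping order might look like the following; the signature, the date regex, and passing the icon as an argument are all assumptions, not the real implementation.

```javascript
// Sketch of sanitizeTitle following the order described above.
function sanitizeTitle(raw, seriesIcon) {
  const stripLeadingIcon = (s) =>
    s.startsWith(seriesIcon) ? s.slice(seriesIcon.length).trimStart() : s;

  let title = raw.trim();
  // 1. Strip a leading series icon.
  title = stripLeadingIcon(title);
  // 2. Strip a "YYYY-MM-DD | " date-pipe prefix.
  title = title.replace(/^\d{4}-\d{2}-\d{2}\s*\|\s*/, "");
  // 3. Strip the icon again, in case it appeared after the date.
  title = stripLeadingIcon(title);
  // 4. Strip a trailing series icon.
  if (title.endsWith(seriesIcon)) {
    title = title.slice(0, -seriesIcon.length);
  }
  // 5. Trim any remaining whitespace.
  return title.trim();
}
```

With this sketch, `sanitizeTitle("2026-03-30 | 🤖 The Architecture of Doubt 🤖", "🤖")` returns `"The Architecture of Doubt"`, whether or not the AI also put the icon before the date.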
### Prompt Instructions

The user prompt now includes an explicit instruction telling the AI not to include dates, pipe separators, or the series icon emoji in its heading, and explains that the system adds the date and icon formatting automatically.

This reduces the frequency of the issue but is not a guarantee: LLMs are not perfectly reliable at following negative instructions, especially when the context shows a contradictory pattern.
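The exact wording used by the real system is not shown here; a hypothetical version of the instruction might read:

```javascript
// Hypothetical instruction block appended to the user prompt; the exact
// wording of the real instruction is an assumption.
const headingInstruction = [
  "Write a plain heading for this post.",
  "Do NOT include the date, a pipe separator (|), or the series icon emoji;",
  "the system adds the date and icon formatting automatically.",
].join("\n");
```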
### Context Cleanup

The `formatPost` function, which builds the previous-post context shown to the AI, now applies `sanitizeTitle` to strip display-title formatting from previous post titles before showing them. This means the AI sees clean titles like “Bridging the Gap: Epistemology and the Persistent Self (2026-03-28)” instead of “2026-03-28 | robot-emoji Bridging the Gap robot-emoji (2026-03-28)”.

This addresses the root cause by breaking the feedback loop: the AI no longer sees the date-prefixed pattern in its examples and is therefore far less likely to reproduce it.
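A minimal sketch of the cleaned-up `formatPost`; the post shape (`{ title, date }`), the condensed stand-in for `sanitizeTitle`, and the 🤖 icon are assumptions.

```javascript
// Condensed stand-in for sanitizeTitle: drop a "YYYY-MM-DD | " prefix and
// every occurrence of the series icon. (The real function is described as
// stripping components in a specific order; this is enough for illustration.)
function sanitizeTitle(raw, seriesIcon) {
  return raw
    .replace(/^\d{4}-\d{2}-\d{2}\s*\|\s*/, "")
    .split(seriesIcon).join("")
    .trim();
}

// formatPost builds one line of previous-post context for the AI prompt.
function formatPost(post, seriesIcon) {
  return `${sanitizeTitle(post.title, seriesIcon)} (${post.date})`;
}

const contextLine = formatPost(
  { title: "2026-03-28 | 🤖 Bridging the Gap 🤖", date: "2026-03-28" },
  "🤖"
);
// The AI now sees "Bridging the Gap (2026-03-28)" instead of the
// display-formatted title, so the broken pattern never reaches its context.
```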
## Test-Driven Development

Following the red-green TDD cycle, nine new test cases were written first to reproduce the bug and define the expected behavior of `sanitizeTitle`.

The tests cover clean titles passing through unchanged, date-pipe prefix stripping, icon stripping, the full display-title pattern, multi-series support, preservation of non-series emoji, and date-without-pipe handling.
## Engineering Lessons

This bug is a textbook example of an emergent feedback loop in an AI system. Each component was correct in isolation, but the interaction between the AI's context window and the post-processing pipeline created a self-reinforcing failure mode.

Key takeaways from this investigation:
- AI systems can learn from their own outputs when previous outputs are fed back as context, turning a one-time glitch into a persistent pattern.
- Defensive sanitization at system boundaries is essential; you cannot trust that the AI will produce output in exactly the format you expect.
- Prompt instructions are helpful but not sufficient as the sole defense against format violations.
- Cleaning the context that the AI sees is the most effective long-term fix, because it prevents the AI from learning the wrong pattern in the first place.
- When debugging AI systems, trace the full data flow from context construction through generation to post-processing; the bug is often in the seam between components.
## Book Recommendations

### Similar

- Thinking in Systems: A Primer by Donella Meadows is relevant because this bug is a perfect example of a reinforcing feedback loop, where system outputs become inputs that amplify the same behavior, one of the core concepts Meadows explores.
- Weapons of Math Destruction by Cathy O'Neil is relevant because it examines how algorithmic feedback loops can create self-reinforcing patterns with real-world consequences, echoing the self-perpetuating nature of this bug.
### Contrasting

- The Design of Everyday Things by Don Norman is relevant because it advocates designing systems where errors are impossible by construction rather than caught after the fact, offering a perspective that challenges the defensive-sanitization approach taken here.
### Related
- Designing Data-Intensive Applications by Martin Kleppmann is relevant because it explores the challenges of data flowing through complex pipelines where assumptions at one stage may be violated at another, directly paralleling the frontmatter-to-prompt-to-generation pipeline that broke here