๐Ÿก Home > ๐Ÿค– AI Blog | โฎ๏ธ

2026-04-02 | 🔁 The Double-Date Feedback Loop 🗓️

ai-blog-2026-04-02-2-the-double-date-feedback-loop

๐Ÿ› The Bug

๐Ÿ—“๏ธ Recent posts in the automated blog series (Auto Blog Zero, Chickie Loo, and Systems for Public Good) started appearing with doubled dates in their titles and filenames.

๐Ÿ” Instead of a clean title like โ€œ2026-03-30 | The Architecture of Doubtโ€, posts were being published with mangled titles like โ€œ2026-03-30 | 2026-03-30 | The Architecture of Doubtโ€ and filenames like โ€œ2026-03-30-2026-03-30-the-architecture-of-doubt.mdโ€.

😬 The duplication was not just cosmetic: it polluted URLs, frontmatter, navigation links, and the display on the website.

🔬 Root Cause Analysis: Five Whys

🧪 A thorough five-whys exercise revealed an elegant feedback loop in which the system was teaching the AI to produce the very pattern that caused the bug.

1๏ธโƒฃ Why do posts have doubled dates in their titles?

๐Ÿ”ง Because the system function called buildDisplayTitle always prepends the date and series icon to the title, like โ€œ2026-03-30 | robot-emoji Title robot-emojiโ€. But the title extracted from the AI output already contained the date and icon.
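The doubling can be seen in a small sketch. The function name comes from the post; the implementation details and the SERIES_ICON constant are assumptions:

```javascript
// Hypothetical sketch of the title assembly described above; the real
// buildDisplayTitle may differ. SERIES_ICON stands in for the series emoji.
const SERIES_ICON = "\u{1F916}"; // robot emoji

function buildDisplayTitle(date, title) {
  // Unconditionally prepends the date and wraps the title in the icon.
  return `${date} | ${SERIES_ICON} ${title} ${SERIES_ICON}`;
}

// With a clean AI-generated title the output is correct:
const clean = buildDisplayTitle("2026-03-30", "The Architecture of Doubt");

// But when the AI's heading already carries the display formatting,
// the date and icon get applied a second time:
const doubled = buildDisplayTitle(
  "2026-03-30",
  `2026-03-30 | ${SERIES_ICON} The Architecture of Doubt ${SERIES_ICON}`
);
```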

2๏ธโƒฃ Why does the AI include the date in its generated heading?

๐Ÿง  Because the prompt shows previous post titles from frontmatter, and those titles already contain dates in the โ€œ2026-03-28 | robot-emoji Title robot-emojiโ€ format. The AI learns by example and mimics this pattern in its own output.

3๏ธโƒฃ Why do previous post titles contain dates?

๐Ÿ“ฆ Because assembleFrontmatter stores the output of buildDisplayTitle (which includes the date) in the frontmatter title field. When readSeriesPosts reads these posts back, it gets the full display-formatted title.

4๏ธโƒฃ Why does the system not strip the date from the AI output before using it?

๐Ÿคท Because the original design assumed the AI would generate simple, clean titles. There was no sanitization step between parsing the AI output and constructing the display title.

5๏ธโƒฃ Why is there no sanitization step?

๐ŸŒ€ Because this is a self-reinforcing feedback loop that emerged over time. The system was designed when the AI consistently produced clean titles. Once the AI started occasionally mimicking the display-title pattern from context, the bug became self-perpetuating, each broken post made it more likely the next post would also be broken, since the AI sees the broken titles in its prompt context.

๐Ÿ› ๏ธ The Three-Pronged Fix

🎯 Three complementary strategies address the bug at different levels.

🧹 Defensive Code: sanitizeTitle

🔧 A new sanitizeTitle function strips date prefixes, pipe separators, and series-icon emoji from AI-generated titles before they reach buildDisplayTitle. The function processes in a specific order: strip the leading series icon, strip the date-pipe prefix, strip the leading series icon again (in case it appeared after the date), strip the trailing series icon, then trim whitespace.
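A minimal sketch of that ordering, assuming a single series icon and a regex-based date match (the real function presumably supports every series icon):

```javascript
// Hypothetical sketch of sanitizeTitle following the order described above.
// SERIES_ICONS and the date regex are assumptions about the real system.
const SERIES_ICONS = ["\u{1F916}"]; // robot emoji

function stripLeadingIcon(s) {
  for (const icon of SERIES_ICONS) {
    if (s.startsWith(icon)) return s.slice(icon.length).trimStart();
  }
  return s;
}

function sanitizeTitle(title) {
  let t = title.trim();
  t = stripLeadingIcon(t);                          // 1. leading series icon
  t = t.replace(/^\d{4}-\d{2}-\d{2}\s*\|\s*/, ""); // 2. date-pipe prefix
  t = stripLeadingIcon(t);                          // 3. icon after the date
  for (const icon of SERIES_ICONS) {                // 4. trailing series icon
    if (t.endsWith(icon)) t = t.slice(0, -icon.length);
  }
  return t.trim();                                  // 5. trim whitespace
}
```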

๐Ÿ›ก๏ธ This is the primary defense. Even if the AI includes dates and icons in its output, the system now produces correct titles.

📣 Prompt Instructions

💬 The user prompt now includes an explicit instruction telling the AI not to include dates, pipe separators, or the series-icon emoji in its heading. The system explains that it adds the date and icon formatting automatically.

🤖 This reduces the frequency of the issue but is not a guarantee, since LLMs are not perfectly reliable at following negative instructions, especially when the context shows a contradictory pattern.

🧼 Context Cleanup

📖 The formatPost function, which builds the previous-post context shown to the AI, now applies sanitizeTitle to strip display-title formatting from previous post titles before showing them. This means the AI sees clean titles like "Bridging the Gap: Epistemology and the Persistent Self (2026-03-28)" instead of "2026-03-28 | robot-emoji Bridging the Gap robot-emoji (2026-03-28)".
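A sketch of the cleanup step, using a condensed stand-in for sanitizeTitle so the snippet is self-contained (the post object shape and field names are assumptions):

```javascript
// Hypothetical sketch: formatPost sanitizes frontmatter titles before they
// enter the prompt context. This condensed sanitize is a stand-in for the
// full sanitizeTitle described earlier.
function sanitizeTitle(title) {
  return title
    .replace(/^\d{4}-\d{2}-\d{2}\s*\|\s*/, "") // drop date-pipe prefix
    .replace(/^\u{1F916}\s*/u, "")              // drop leading series icon
    .replace(/\s*\u{1F916}$/u, "")              // drop trailing series icon
    .trim();
}

function formatPost(post) {
  // Previous posts are shown to the model as "Title (date)".
  return `${sanitizeTitle(post.title)} (${post.date})`;
}
```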

🔗 This addresses the root cause by breaking the feedback loop. The AI no longer sees the date-prefixed pattern in its examples and therefore is far less likely to reproduce it.

🧪 Test-Driven Development

🔴 Following the red-green TDD cycle, nine new test cases were written first to reproduce the bug and define the expected behavior of sanitizeTitle.

✅ Tests cover clean titles passing through unchanged, date-pipe prefix stripping, icon stripping, the full display-title pattern, multi-series support, preservation of non-series emoji, and date-without-pipe handling.
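A few of those cases, sketched as bare assertions against a condensed sanitizeTitle stand-in (the real suite, the multi-series cases, and the date-without-pipe case are not reproduced here):

```javascript
// Condensed stand-in for sanitizeTitle, used only to illustrate the cases.
function sanitizeTitle(title) {
  return title
    .replace(/^\d{4}-\d{2}-\d{2}\s*\|\s*/, "")
    .replace(/^\u{1F916}\s*/u, "")
    .replace(/\s*\u{1F916}$/u, "")
    .trim();
}

// Clean titles pass through unchanged.
console.assert(sanitizeTitle("The Architecture of Doubt") === "The Architecture of Doubt");
// Date-pipe prefix stripping.
console.assert(sanitizeTitle("2026-03-30 | The Architecture of Doubt") === "The Architecture of Doubt");
// The full display-title pattern.
console.assert(sanitizeTitle("2026-03-30 | \u{1F916} The Architecture of Doubt \u{1F916}") === "The Architecture of Doubt");
// Non-series emoji are preserved.
console.assert(sanitizeTitle("Dreams \u{1F300} and Loops") === "Dreams \u{1F300} and Loops");
```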

๐Ÿ“ Engineering Lessons

๐ŸŒ€ This bug is a textbook example of an emergent feedback loop in an AI system. The system was correct in isolation, but the interaction between the AI context window and the post-processing pipeline created a self-reinforcing failure mode.

🔑 Key takeaways from this investigation:

  • 🧠 AI systems can learn from their own outputs when previous outputs are fed back as context, turning a one-time glitch into a persistent pattern
  • 🛡️ Defensive sanitization at system boundaries is essential: you cannot trust that the AI will produce output in exactly the format you expect
  • 📣 Prompt instructions are helpful but not sufficient as the sole defense against format violations
  • 🧼 Cleaning the context that the AI sees is the most effective long-term fix because it prevents the AI from learning the wrong pattern in the first place
  • 🔍 When debugging AI systems, trace the full data flow from context construction through generation to post-processing; the bug is often in the seam between components

📚 Book Recommendations

📖 Similar

  • ๐ŸŒ๐Ÿ”—๐Ÿง ๐Ÿ“– Thinking in Systems: A Primer by Donella Meadows is relevant because this bug is a perfect example of a reinforcing feedback loop, where system outputs become inputs that amplify the same behavior, one of the core concepts Meadows explores
  • Weapons of Math Destruction by Cathy Oโ€™Neil is relevant because it examines how algorithmic feedback loops can create self-reinforcing patterns with real-world consequences, echoing the self-perpetuating nature of this bug

โ†”๏ธ Contrasting

  • Designing Data-Intensive Applications by Martin Kleppmann is relevant because it explores the challenges of data flowing through complex pipelines where assumptions at one stage may be violated at another, directly paralleling the frontmatter-to-prompt-to-generation pipeline that broke here