<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Agentic DevOps | The .NET Blog</title><link>https://thedotnetblog.com/tags/agentic-devops/</link><description>Articles, tutorials and insights from the .NET community.</description><generator>Hugo</generator><language>en</language><managingEditor>@thedotnetblog (The .NET Blog)</managingEditor><webMaster>@thedotnetblog</webMaster><lastBuildDate>Thu, 23 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://thedotnetblog.com/tags/agentic-devops/index.xml" rel="self" type="application/rss+xml"/><item><title>68 Minutes a Day Re-Explaining Code to Copilot? There's a Fix for That</title><link>https://thedotnetblog.com/posts/emiliano-montesdeoca/auto-memory-stop-re-explaining-code-to-copilot/</link><pubDate>Thu, 23 Apr 2026 00:00:00 +0000</pubDate><author>Emiliano Montesdeoca</author><guid>https://thedotnetblog.com/posts/emiliano-montesdeoca/auto-memory-stop-re-explaining-code-to-copilot/</guid><description>Context rot is real — your AI agent drifts after 30 turns, and you're paying the compaction tax every hour. auto-memory gives GitHub Copilot CLI surgical recall without burning thousands of tokens.</description><content:encoded>&lt;p&gt;You know that moment when your Copilot session hits &lt;code&gt;/compact&lt;/code&gt; and the agent completely forgets what you were doing? You spend the next five minutes re-explaining the file structure, the failing test, the three approaches you already tried. Then it happens again. And again.&lt;/p&gt;
&lt;p&gt;Desi Villanueva timed it: &lt;strong&gt;68 minutes per day&lt;/strong&gt; — just on re-orientation. Not writing code. Not reviewing PRs. Just catching the AI up on things it already knew.&lt;/p&gt;
&lt;p&gt;Turns out there&amp;rsquo;s a concrete reason this happens, and a concrete fix.&lt;/p&gt;
&lt;h2 id="the-context-window-lie"&gt;The Context Window Lie&lt;/h2&gt;
&lt;p&gt;Your agent ships with a big number on the box. 200K tokens. Sounds massive. In practice it&amp;rsquo;s a ceiling, not a guarantee.&lt;/p&gt;
&lt;p&gt;Here&amp;rsquo;s the actual math:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;200K total context&lt;/li&gt;
&lt;li&gt;Minus ~65K for MCP tools loaded at startup (~33%)&lt;/li&gt;
&lt;li&gt;Minus ~10K for instruction files like &lt;code&gt;AGENTS.md&lt;/code&gt; or &lt;code&gt;copilot-instructions.md&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That leaves you with roughly &lt;strong&gt;125K before you type a word&lt;/strong&gt;. And it gets worse — LLMs don&amp;rsquo;t degrade gracefully as context fills up. They hit a wall at around 60% capacity. The model starts losing things mentioned 30 turns ago, contradicts earlier responses, hallucinates file names it stated confidently 10 minutes prior. The industry calls this the &amp;ldquo;lost in the middle&amp;rdquo; problem.&lt;/p&gt;
&lt;p&gt;Effective limit: &lt;strong&gt;45K tokens of actual conversation&lt;/strong&gt; before quality degrades (the 120K wall minus the 75K baseline overhead). That&amp;rsquo;s maybe 20-30 turns of active conversation before the agent starts drifting. Which is why you&amp;rsquo;re hitting &lt;code&gt;/compact&lt;/code&gt; every 45 minutes &amp;mdash; not because you&amp;rsquo;ve filled 200K tokens, but because the model is already rotting at 120K.&lt;/p&gt;
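&lt;p&gt;The budget arithmetic above fits in a few lines of Python. The numbers are the article&amp;rsquo;s rough estimates, not fixed constants of any particular model:&lt;/p&gt;

```python
# Back-of-the-envelope context budget (numbers are estimates, not measurements).
TOTAL = 200_000        # advertised context window
MCP_TOOLS = 65_000     # ~33% eaten by MCP tool definitions at startup
INSTRUCTIONS = 10_000  # AGENTS.md / copilot-instructions.md
WALL = 0.60            # quality wall at ~60% of total capacity

# What's left before you type a word vs. the real conversation budget.
usable = TOTAL - MCP_TOOLS - INSTRUCTIONS
effective = int(TOTAL * WALL) - MCP_TOOLS - INSTRUCTIONS
print(usable, effective)  # 125000 45000
```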
&lt;h2 id="the-compaction-tax"&gt;The Compaction Tax&lt;/h2&gt;
&lt;p&gt;Every &lt;code&gt;/compact&lt;/code&gt; costs you flow state. You&amp;rsquo;re deep in a debugging session. Shared context built up over 30 minutes. The agent knows the file structure, the failing test, the hypothesis. Then the warning hits.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ignore it → agent gets progressively dumber, hallucinates old state&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;/compact&lt;/code&gt; → agent has a 2-paragraph summary of a 30-minute investigation&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Either way you lose. Either way you&amp;rsquo;re narrating your own project back to it like a new hire on day one.&lt;/p&gt;
&lt;p&gt;The cruel part? &lt;strong&gt;The memory already exists.&lt;/strong&gt; Copilot CLI writes every session to a local SQLite database at &lt;code&gt;~/.copilot/session-store.db&lt;/code&gt; — every file touched, every turn, every checkpoint. It&amp;rsquo;s all sitting on disk. The agent just can&amp;rsquo;t read it.&lt;/p&gt;
&lt;h2 id="auto-memory-a-recall-layer-not-a-memory-system"&gt;auto-memory: A Recall Layer, Not a Memory System&lt;/h2&gt;
&lt;p&gt;That&amp;rsquo;s the key insight behind &lt;a href="https://github.com/dezgit2025/auto-memory"&gt;auto-memory&lt;/a&gt;: don&amp;rsquo;t build a new memory system — build a read-only query layer over the one that already exists.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pip install auto-memory
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;~1,900 lines of Python. Zero dependencies. Installs in 30 seconds.&lt;/p&gt;
&lt;p&gt;Instead of flooding the context with grep results, you give the agent surgical access to what actually matters:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;Tokens&lt;/th&gt;
&lt;th&gt;What you get&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;grep -r &amp;quot;auth&amp;quot; src/&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;~5,000–10,000&lt;/td&gt;
&lt;td&gt;500 results, most irrelevant&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;find . -name &amp;quot;*.py&amp;quot;&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;~2,000&lt;/td&gt;
&lt;td&gt;Every Python file, no context&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Agent re-orientation&lt;/td&gt;
&lt;td&gt;~2,000&lt;/td&gt;
&lt;td&gt;You explaining what it should know&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;code&gt;auto-memory files --json --limit 10&lt;/code&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~50&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;The 10 files you touched yesterday&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;That&amp;rsquo;s a 200x improvement. The agent skips the archaeological dig and goes straight to what matters.&lt;/p&gt;
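&lt;p&gt;The 200x figure falls straight out of the table&amp;rsquo;s estimates:&lt;/p&gt;

```python
# Rough token costs from the comparison table above (estimates, not measurements).
costs = {
    "grep -r 'auth' src/": 10_000,
    "find . -name '*.py'": 2_000,
    "manual re-orientation": 2_000,
    "auto-memory files --json --limit 10": 50,
}
savings = costs["grep -r 'auth' src/"] // costs["auto-memory files --json --limit 10"]
print(savings)  # 200
```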
&lt;p&gt;The recommended flow: when you&amp;rsquo;re approaching 50-70% context usage, run &lt;code&gt;/clear&lt;/code&gt; and then prompt with &amp;ldquo;review the last sessions where we discussed topic X&amp;rdquo;. Instead of burning 12K tokens on blind searches, auto-memory pulls the relevant context in 50.&lt;/p&gt;
&lt;h2 id="why-this-matters-for-net-developers"&gt;Why This Matters for .NET Developers&lt;/h2&gt;
&lt;p&gt;If you&amp;rsquo;re using GitHub Copilot CLI for .NET work — scaffolding services, debugging EF Core queries, iterating on Blazor components — the context rot problem hits just as hard. Complex solutions with multiple projects, shared libraries, and deep call chains are exactly the kind of codebase where the agent loses track fastest.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://github.com/dezgit2025/auto-memory/blob/main/deploy/install.md"&gt;install guide&lt;/a&gt; walks through pointing Copilot CLI at it. It&amp;rsquo;s a one-time setup.&lt;/p&gt;
&lt;p&gt;Honestly? 68 minutes a day back in your pocket is not a minor quality-of-life tweak. That&amp;rsquo;s almost 6 hours a week.&lt;/p&gt;
&lt;h2 id="wrapping-up"&gt;Wrapping up&lt;/h2&gt;
&lt;p&gt;Context rot is a real architectural constraint, not a bug that will get patched. auto-memory works around it by giving your agent a cheap, precise recall mechanism instead of expensive, noisy re-exploration. If you&amp;rsquo;re doing serious AI-assisted development with GitHub Copilot CLI, it&amp;rsquo;s worth the 30-second install.&lt;/p&gt;
&lt;p&gt;Check it out: &lt;a href="https://github.com/dezgit2025/auto-memory"&gt;auto-memory on GitHub&lt;/a&gt;. Original post by Desi Villanueva: &lt;a href="https://devblogs.microsoft.com/all-things-azure/i-wasted-68-minutes-a-day-re-explaining-my-code-then-i-built-auto-memory/"&gt;I Wasted 68 Minutes a Day Re-Explaining My Code&lt;/a&gt;.&lt;/p&gt;</content:encoded></item></channel></rss>