# Patterns from the Claude Code source leak mapped to our Continuous Claude setup — April 2026
Right now, 46% of MEMORY.md is silently truncated every session. Everything below line 200 (venture portfolio, Flippa, Gumroad, video pipeline, social stack, document extraction, PepGuide socials) is invisible. We're paying for memory we can't use.
Claude Code's autoDream enforces: ≤200 lines, ≤25KB. MEMORY.md is an index of pointers, not the memory itself. Each entry <150 chars pointing to a topic file.
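Those budgets can be checked mechanically before every session. A minimal sketch (the limits come from the numbers above; the entry heuristic, assuming index entries are `- ` bullets, is our own):

```python
from pathlib import Path

MAX_LINES = 200          # autoDream line budget
MAX_BYTES = 25 * 1024    # autoDream size budget (25KB)
MAX_ENTRY_CHARS = 150    # each index entry is a short pointer

def check_memory_budget(path: str = "MEMORY.md") -> list[str]:
    """Return a list of budget violations for the memory index."""
    text = Path(path).read_text(encoding="utf-8")
    lines = text.splitlines()
    problems = []
    if len(lines) > MAX_LINES:
        problems.append(
            f"{len(lines)} lines (budget {MAX_LINES}): "
            f"everything past line {MAX_LINES} is silently truncated"
        )
    size = len(text.encode("utf-8"))
    if size > MAX_BYTES:
        problems.append(f"{size} bytes (budget {MAX_BYTES})")
    for i, line in enumerate(lines, 1):
        # assumption: index entries are "- " bullets; long ones are detail leaking in
        if line.startswith("- ") and len(line) > MAX_ENTRY_CHARS:
            problems.append(f"line {i}: entry is {len(line)} chars (budget {MAX_ENTRY_CHARS})")
    return problems
```

Wired into a pre-session hook, this turns the silent 46% truncation into a loud failure.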
A full section like this moves out of MEMORY.md into a topic file:

## Flippa Deal Scout
Runs daily at 8:30 AM. Scans Flippa content sites, scores 0-100, classifies into three deal types.
| Field | Detail |
|-------|--------|
| Location | opc/scripts/flippa/ |
... (15 lines)
MEMORY.md keeps only the compressed index entry for the same section:

## Flippa Deal Scout
- [Details](flippa_deal_scout.md) — Daily 8:30AM, scores listings 0-100. `scripts/flippa/`
Other topic files to create: `venture_portfolio.md`, `video_pipeline.md`, `social_media_stack.md`, `typefully_api.md`.

Every session loads 3,756 lines of rules. The 6 postgres-*.md files alone are 1,500+ lines of reference material that's only relevant during database work. This wastes context and degrades prompt cache hit rates.
CLAUDE.md gets reinserted every turn change. Bloated instructions = tokens re-paid on every single message. The tool system uses on-demand loading — fetch knowledge when needed, not upfront.
| Tier | What | Lines | Load Strategy |
|---|---|---|---|
| Always | Core behavior (destructive-commands, git, api-keys, claim-verification, no-haiku) | ~300 | Keep in ~/.claude/rules/ |
| On-Demand | Postgres (6 files), Supabase, WordPress, content pipeline, video pipeline | ~2,500 | Move to ~/.claude/references/ — load via skill or explicit request |
| Project | Docker rebuild, team knowledge, dockerize-services | ~150 | Keep in project .claude/rules/ |
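The on-demand tier can be wired up as a skill. A sketch of the skill file, assuming the usual SKILL.md shape (YAML frontmatter with `name` and `description`); the instruction body is ours:

```markdown
---
name: postgres
description: Load postgres reference material on demand for database work
---

When a task involves Postgres (queries, migrations, tuning), read the
six `postgres-*.md` files from `~/.claude/references/` before proceeding.
Do not load them for non-database work.
```

The references stay out of every-turn context; the skill's short description is all that's always loaded.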
Migration steps:

- Create `~/.claude/references/` directory for on-demand material
- Move `postgres-*.md` (6 files, ~1,500 lines) to references
- Move `supabase-workflow.md` (~200 lines) to references
- Move `wordpress-server-optimization.md` (~150 lines) to references
- Move `content-pipeline.md`, `video-pipeline.md`, `internal-linking.md` to references
- Add a `postgres` skill that loads the postgres references on demand

No autoDream equivalent. Memory grows monotonically. Contradictions accumulate. Stale entries persist.
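A consolidation pass could start as simply as deduplicating index entries that point at the same topic file. A sketch, assuming entries use the `[Details](topic.md)` pointer form shown in the Flippa example above:

```python
import re

# target of a [Details](topic.md) pointer; assumes the link form used in the index
LINK_RE = re.compile(r"\]\(([^)]+\.md)\)")

def consolidate(text: str) -> str:
    """Drop later index entries whose topic file is already indexed."""
    seen: set[str] = set()
    kept = []
    for line in text.splitlines():
        m = LINK_RE.search(line)
        if m:
            target = m.group(1)
            if target in seen:
                continue  # duplicate pointer: keep only the first occurrence
            seen.add(target)
        kept.append(line)
    return "\n".join(kept) + "\n"
```

Contradiction detection needs model judgment, but duplicate pruning alone stops the most common form of rot.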
We have claim-verification.md but it's a rule, not enforced mechanically. Memory claims get trusted without verification.
- `[verified 2026-03-15]` = confirmed against codebase
- `[stale?]` = not verified in >30 days
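The tag convention above can be enforced mechanically. A sketch that rewrites expired `[verified ...]` tags (tag format from the examples above, 30-day window from the `[stale?]` rule):

```python
import re
from datetime import date, timedelta

VERIFIED_RE = re.compile(r"\[verified (\d{4}-\d{2}-\d{2})\]")
STALE_AFTER = timedelta(days=30)

def mark_stale(line: str, today: date) -> str:
    """Rewrite [verified YYYY-MM-DD] tags older than 30 days as [stale?]."""
    m = VERIFIED_RE.search(line)
    if not m:
        return line
    verified = date.fromisoformat(m.group(1))
    if today - verified > STALE_AFTER:
        return VERIFIED_RE.sub("[stale?]", line)
    return line
```

Run over MEMORY.md at session start, this surfaces stale claims instead of letting them be trusted silently.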
Our /review skill does parallel reviews but doesn't have an explicit "try to break it" adversarial agent. The same gap applies to the /build and /fix workflows.

No visibility into cache hit rates. We don't know how much context bloat costs us.
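A per-call cache hit rate is cheap to compute from the usage block each API response carries. A sketch, assuming the usage dict shape the Anthropic Messages API reports (`input_tokens`, `cache_read_input_tokens`, `cache_creation_input_tokens`):

```python
def cache_hit_rate(usage: dict) -> float:
    """Fraction of prompt tokens served from cache for one API call.

    Assumes Anthropic-style usage fields: input_tokens (uncached),
    cache_read_input_tokens (cache hits), cache_creation_input_tokens
    (cache writes). Missing fields count as zero.
    """
    read = usage.get("cache_read_input_tokens", 0)
    total = (usage.get("input_tokens", 0)
             + read
             + usage.get("cache_creation_input_tokens", 0))
    return read / total if total else 0.0
```

Logging this per session would show directly how much the P1 rules migration improves cache behavior.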
| Priority | Task | Effort | Impact | Status |
|---|---|---|---|---|
| P0 | Compress MEMORY.md to pure index (≤150 lines) | 30 min | Restores 173 lines of invisible memory | Ready now |
| P1 | Move reference rules to ~/.claude/references/ | 20 min | -2,500 lines from every session context | Ready now |
| P2 | Build memory consolidation script | 2 hrs | Prevents memory rot over time | Design ready |
| P2 | Add verification timestamps to memory | 1 hr | Catches stale/wrong memories | Design ready |
| P3 | Add adversarial verification to /build and /fix | 1 hr | Catches bugs before completion | Design ready |
| P3 | Cache hit rate monitoring | 30 min | Cost visibility | After P1 |
Our workspace already has the right architecture (3-layer memory, multi-agent, risk tiers). The problem is discipline: MEMORY.md grew past its budget, reference material leaked into always-on rules, and there's no automated pruning. The leak confirms we built the right patterns — we just need to enforce the budgets that make them work.
P0 + P1 together take ~50 minutes and recover ~2,700 lines of wasted context per session. That's the highest-ROI optimization available right now.