Changelog
This log mirrors the evolution log in CLAUDE.md. The site updates when the system improves significantly, not after every session. Newest changes first.
2026-03-30
Video pipeline validated and locked down. A full article-to-video production run confirmed the pipeline is production-stable. Remotion retired; FFmpeg with produce_video.py is now the canonical assembly tool. Specialist routing is mandatory: a Broadcast Scriptwriter agent handles script conversion, a Slide Designer agent handles image generation, and inline FFmpeg assembly closes the pipeline. An FFmpeg 8.0 concat filter regression was identified and fixed by switching from the concat filter to the demuxer method. Six conflicting video production memories were purged and the SOP was rewritten clean. Additional rules locked in: the Facebook XHR blob method for B-roll sourcing, and a standardized card-type label format.
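The demuxer method can be sketched as a small helper (a hypothetical sketch: segment names, the list-file name, and the stream-copy choice are assumptions, not the exact produce_video.py implementation):

```python
from pathlib import Path

def build_concat_command(segments, output="video.mp4"):
    """Write a concat-demuxer list file and return the ffmpeg assembly command.

    The demuxer route (-f concat -i list.txt) stitches pre-encoded segments
    without invoking the concat *filter*, sidestepping the 8.0 filter regression.
    """
    list_file = Path("concat_list.txt")
    # One line per segment; single quotes guard paths containing spaces.
    list_file.write_text("".join(f"file '{s}'\n" for s in segments))
    return [
        "ffmpeg", "-y",
        "-f", "concat", "-safe", "0",  # demuxer method; -safe 0 allows arbitrary paths
        "-i", str(list_file),
        "-c", "copy",                  # stream copy: segments must share codecs
        output,
    ]

cmd = build_concat_command(["slide_01.mp4", "slide_02.mp4"])
```

Stream copy only works because every slide segment comes out of the same encoding step; mixed codecs would require a re-encode instead of `-c copy`.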
Telegram acknowledgment rule added. Every inbound Telegram message now requires an immediate acknowledgment reply before any work begins, even a one-word confirmation. This prevents the user from sending a message and not knowing if it was received, especially during longer tasks where the response takes time.
Website synced with video production page. The video production page was added as a new page on this site. Navigation updated across all pages.
2026-03-25
Away mode retired. Every session is now Telegram-native. The distinction between local and remote operation is gone. The Telegram channel plugin handles communication in every session; there is no separate away mode to activate or deactivate. The custom bridge project was removed from the project registry. A memory supersedence audit deleted 16 stale entries and corrected 2 that had been overwritten by newer system decisions.
Multiple articles published and two investigations advanced. Two long-running articles moved through the full pipeline and were published. A new investigation into a large residential development project launched with four parallel research agents and a NotebookLM query. A monthly water billing beat moved to a future publish date pending the first monthly bill cycle. The article fetch protocol was enforced: WebFetch first, then Playwright, then Chrome. Google Alerts are always read in full before any summary is accepted.
Canvas and video production defaults updated. The render script default was flipped to deploy mode. Music beds are now mandatory in video production (a new pipeline step added to the video production agent). Social post format is an HTML copy-button page, replacing the JSON queue format. The Drive outbox pattern was established for delivering research briefs to sources.
2026-03-24
Evergreen investigation launched on residential development policy. A source phone call triggered a research cascade: three parallel agents swept school concurrency policy, state statutes, and litigation history. A staff report was pulled from the public agenda system showing staff recommended approval but the council overrode 5-0. A briefing document was prepared and delivered to the source via a new Google Drive outbox folder. A five-article series structure was defined from the research.
Federal and state litigation sweep completed. Eight federal cases and five state cases swept. A federal pretrial conference and a federal answer deadline are both pending within 24 hours. One state mediation date set for August. Gmail mark-read via the old IMAP tool was found broken; the fix is to use the Google Workspace CLI modify endpoint instead.
Video production updated: music bed now mandatory. The video production agent spec was updated to add a music bed step between slide assembly and voiceover layering. All video production runs now include a music track by default. This is a required pipeline step, not optional.
2026-03-22
Cost of Living Tracker built as a new recurring monthly beat. Data collection pulls from three public economic APIs (economic indicators, housing prices, fuel costs), automated chart generation produces five static PNG charts per run, and a dedicated writing agent handles the article draft. A 12-month backfill was completed on the first run. Agent count: 26 active.
Telegram channel confirmed working both directions. Custom bridge fully retired. 32 files removed from the bridge codebase. SSH and reverse-tunnel infrastructure cancelled as redundant. Google Analytics MCP server installed. MCP catalog sweep reviewed 460 servers. Gmail tool routing finalized: a triage wrapper handles classification and deduplication, native CLI subcommands handle send, reply, and forward.
2026-03-21
Google Workspace CLI installed and authenticated. Gmail REST API replaces IMAP for all read, search, and triage operations. 24 Google APIs enabled via OAuth2. Speed improvement: approximately 1-2 seconds per operation versus 4-6 seconds with the prior IMAP tool. Old IMAP tool retained as fallback. A triage wrapper preserves the classification and deduplication logic from the prior tool. Message IDs switch from IMAP UIDs to hex format.
Monthly system audit completed. Contact database expanded from 82 to 90 entries with 22 email addresses added. Three agents archived, reducing the active roster from 28 to 25. (A new agent added in the same session brings the count to 26.) Project registry cleaned: 14 archived entries removed, 4 new entries added. Seven skill frontmatter blocks added. Six new routing entries added to the operating brief.
2026-03-20
Telegram channel plugin installed (MCP-based). Replaces the custom bridge with an official plugin running as an MCP server directly inside the Claude Code session. Messages arrive as channel events with chat ID and message ID. Outbound replies go through MCP tool calls. Cleaner architecture, fewer moving parts. The custom bridge is now retired.
2026-03-19
Meeting transcription pipeline built. A three-stage pipeline handles full meeting recordings: FFmpeg extracts audio and chunks it at silence points, a fast AI model transcribes chunks in parallel with speaker labels, and a name-mapping pass resolves speaker labels to real names using a contact roster. Tested at approximately 2.4 minutes for a two-hour recording. Auto-roster mode builds the contact list automatically from the VIP database. Local fallback mode uses open-source transcription and diarization tools for offline operation.
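The chunk-boundary stage can be sketched as follows (assuming FFmpeg was run with its silencedetect filter and its log captured; the real pipeline's thresholds and flags may differ):

```python
import re

def silence_points(ffmpeg_stderr: str):
    """Pull silence_start timestamps out of ffmpeg silencedetect log output.

    These timestamps become the cut points for chunking a long recording so
    each chunk can be transcribed in parallel.
    """
    return [float(m) for m in re.findall(r"silence_start:\s*([\d.]+)", ffmpeg_stderr)]

def chunk_spans(points, total):
    """Turn cut points into (start, end) spans covering the whole recording."""
    edges = [0.0] + sorted(points) + [total]
    return list(zip(edges[:-1], edges[1:]))

log = "[silencedetect] silence_start: 61.5\n[silencedetect] silence_start: 130.2\n"
spans = chunk_spans(silence_points(log), total=180.0)
```

Cutting at silence points rather than fixed intervals keeps words from being split across chunk boundaries, which matters for transcript accuracy.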
2026-03-17
Native IMAP/SMTP inbox triage tool built. A native IMAP/SMTP tool replaced the Gmail MCP server for all mail operations. Commands: fetch (with body preview), mark-read, archive, send, and draft. Snippet extraction handles base64, quoted-printable, and HTML-only emails. The tool operates on all mail by default rather than the filtered inbox view. Full triage workflow: fetch all unseen messages, read and classify each one, archive noise automatically, report only actionable items. This closes a gap the MCP left open: reading without the ability to modify. The native tool handles both in a single interface.
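The decoding logic maps cleanly onto Python's standard email module (a simplified sketch: the HTML-only fallback and the real tool's header handling are omitted):

```python
import base64
from email import message_from_string

def snippet(raw: str, limit: int = 120) -> str:
    """Decode the first text/plain part of a raw message into a short preview."""
    msg = message_from_string(raw)
    part = next((p for p in msg.walk() if p.get_content_type() == "text/plain"), msg)
    payload = part.get_payload(decode=True)  # undoes base64 and quoted-printable
    text = (payload or b"").decode(part.get_content_charset() or "utf-8", "replace")
    return " ".join(text.split())[:limit]

# A base64-encoded message, the kind the snippet extractor has to handle.
raw = (
    "From: clerk@example.com\r\n"
    "Content-Type: text/plain; charset=utf-8\r\n"
    "Content-Transfer-Encoding: base64\r\n\r\n"
    + base64.b64encode(b"Budget vote moved to Tuesday.").decode()
)
```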
Drive consolidation. Cold storage migrated between drives after diagnosing USB-C hub power insufficiency causing NTFS corruption and IO errors. SMART confirmed zero bad sectors -- the hub was the problem, not the drive. Seven path references updated across agent instruction and archive rule files.
2026-03-15
Session management simplified for the 1M context window. Intent routing replaces the explicit command list: natural language maps to the right close workflow without requiring slash commands. The session depth warning was retired. Context health checks moved to 20-turn periodic sweeps with a 75% threshold for a wrap recommendation. Catchup, refresh, and compaction references all streamlined. With a 1M context window, most sessions never require mid-session recovery, so the architecture reflects that reality rather than over-engineering for edge cases.
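The periodic check reduces to a small predicate (parameter names are assumptions; the real hook may track usage differently):

```python
def context_check(turn: int, used_tokens: int, window: int = 1_000_000,
                  interval: int = 20, threshold: float = 0.75):
    """Every `interval` turns, report context usage and recommend a wrap
    once usage crosses the threshold; all other turns stay silent."""
    if turn % interval != 0:
        return None
    usage = used_tokens / window
    return {"usage": round(usage, 2), "recommend_wrap": usage >= threshold}

ok = context_check(40, 300_000)    # sweep turn, healthy usage
warn = context_check(60, 800_000)  # sweep turn, past the 75% line
```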
Canvas-Telegram integration built. A Netlify function now sends canvas deployment notifications to a dedicated Telegram group via a notification bot. A unified poller monitors both private chat and group messages alongside Netlify form submissions. When a canvas action triggers a notification, the poller fires an exit-on-action event that notifies the CLI session. The full loop: article deploys to canvas, notification fires to Telegram, user reviews on phone, feedback routes back to the assistant. No manual checking required.
Opus fact-checker mandated on every article pipeline. Every article that moves through the publishing pipeline now requires a fact-check pass by an Opus-tier agent before publication. This is a mandatory gate, not optional. The rule applies regardless of article type or urgency. Fact-checking joins the standard checklist alongside SEO, audio, and video production.
Substack branding and publishing tools updated. Publication name, description, categories, and intro were updated on Substack. A pandoc-based clipboard tool was built for converting markdown drafts to Substack-ready HTML. The SEO agent was updated with an alt text formula for consistent image metadata. NotebookLM was queried for Substack-specific knowledge extraction across 10 sources to inform the updates.
Canvas layout SOP standardized. The article canvas layout order is now fixed: video embed first, then title, subtitle, "read to me" audio player, and article body. The render script was updated to support subtitle rendering. Video production gained a 0.5-second music lead-in before voiceover starts, improving the listening experience on the opening slide.
2026-03-14/15
Overnight task queue: first run completed. Four tasks executed autonomously without human prompting: a federal case docket sweep across 8 cases, a PACER and state court gap fill, a property search expansion for a vacation rental project, and a full article pipeline from research through video production and scheduling. A bookings monitoring tool was built and tested during the run. The TTS engine gained a --rate flag for speech speed control. The status line was trimmed to show only the context graph and percentage, removing cost and model display.
2026-03-14
Periodic memory sweep built. Every 20 conversation turns, a hook triggers a background agent to review the last 20 exchanges and save any noteworthy observations, corrections, preferences, or patterns to the memory server. A one-line context status bar appears at each interval showing context usage percentage. This runs automatically without manual intervention. The sweep supplements the per-turn memory nudge and the end-of-session distill, catching patterns that accumulate gradually over a long session.
Away mode fully operational in the Telegram bridge. The bridge now supports a complete dispatch-wait-reply loop from the CLI brain. When the user says "away mode" with a task list, the assistant creates a flag file, works the tasks, dispatches results via Telegram, and blocks waiting for a reply. Replies route back to the CLI brain for processing. "9" stops the session. "poll N" adjusts the idle timeout. The bridge also gained enriched context loading: time-sensitive deadlines, key contacts, and editorial preferences are injected into every fresh CLI spawn so the away-mode assistant carries full situational awareness.
Workspace cleanup. The assistant root directory was cleaned from 67 files down to 22, removing approximately 45 stale screenshots, temp scripts, and unused JSON files. Reddit community fully set up with rules, flair, sidebar, and branding for article distribution.
2026-03-13
Tammy Bridge deployed. A persistent Telegram bot now serves as the primary away-mode communication channel, replacing SMS polling. The bot runs a hybrid brain: a fast API path handles simple queries directly; complex work routes to the full CLI. Session continuity is preserved via a resume flag so multi-step tasks can span multiple exchanges without losing context. The bot auto-starts on boot, runs five scheduled background jobs, and watches the inbox for incoming files. Away mode now feels like a conversation rather than a polling loop.
Smart context loading. The bot's system prompt is no longer static. A context loader assembles it dynamically at startup and refreshes every 30 minutes, drawing from deadline tables, contact databases, and recalled memory archives. When the CLI brain is invoked on a fresh session, it receives a task-mode preamble that prevents the startup initialization sequence from running and consuming context. The result: the away-mode assistant carries live situational awareness into every message rather than working from a fixed snapshot.
Substack SEO API fully captured. All SEO metadata fields, including search title, search description, social title, slug, and cover image, can now be set via a single API call. This replaces the prior browser automation approach for metadata and runs without any UI interaction. First full API-only SEO workflow completed and confirmed working. The tag API body format bug that caused silent 400 failures was also identified and fixed.
PDF compression tool built. A Ghostscript-based command-line tool compresses large government documents for easier handling and distribution. Tested against a large city council packet with a 27% size reduction. Documented in the tools directory for repeated use.
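The Ghostscript invocation looks roughly like this (a sketch using Ghostscript's standard pdfwrite flags; the binary is typically gswin64c on Windows installs):

```python
def gs_compress_command(src, dst, preset="/ebook"):
    """Build a Ghostscript command that re-samples and re-encodes a PDF.

    /ebook targets roughly 150 dpi images, a reasonable middle ground for
    large government packets; /screen compresses harder, /prepress barely.
    """
    return [
        "gs",                        # "gswin64c" on Windows
        "-sDEVICE=pdfwrite",
        "-dCompatibilityLevel=1.4",
        f"-dPDFSETTINGS={preset}",   # /screen, /ebook, /printer, /prepress
        "-dNOPAUSE", "-dBATCH", "-dQUIET",
        f"-sOutputFile={dst}",
        src,
    ]

cmd = gs_compress_command("council_packet.pdf", "council_packet_small.pdf")
```

The savings come almost entirely from image downsampling, which is why scan-heavy council packets compress well while text-only PDFs shrink little.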
2026-03-12
/check-city skill built. A sweep skill monitors four public record sources: meeting agendas, council meeting videos, city news releases, and legal/public notices. The skill is state-tracked so each run only surfaces items that are genuinely new since the last check. When new findings appear, a Telegram notification fires automatically. The skill is integrated into the daily maintenance routine so the beat never goes unwatched between sessions.
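The state-tracking idea can be sketched as a seen-ID ledger per source (the file name and JSON layout are assumptions):

```python
import json
from pathlib import Path

STATE = Path("check_city_state.json")

def new_items(found_ids, source):
    """Return only IDs not seen in prior runs, then persist the union.

    Each sweep surfaces genuinely new items; everything already in the
    ledger is silently skipped.
    """
    state = json.loads(STATE.read_text()) if STATE.exists() else {}
    seen = set(state.get(source, []))
    fresh = [i for i in found_ids if i not in seen]
    state[source] = sorted(seen | set(found_ids))
    STATE.write_text(json.dumps(state))
    return fresh

first = new_items(["agenda-101", "agenda-102"], "agendas")   # first run: all new
second = new_items(["agenda-102", "agenda-103"], "agendas")  # only 103 is new
```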
Slide Designer Agent added. A dedicated agent (agent 28) generates presentation slides via direct image generation API with an expert prompting protocol. Key design decisions locked in: all slides are static (motion effects were retired as dated and visually inconsistent), prompts use natural language with explicit color and typography constraints, and the agent supports both full-slide generation and composite backgrounds where real document images are overlaid on generated backgrounds. The agent replaces the prior email-relay pipeline for slide generation.
Social Media Command Center built. A standalone Express server hosts an HTML command center with platform-specific copy buttons for distributing articles across social channels. The /social skill now writes an HTML page as its output instead of a JSON queue file. Copy buttons replace manual transcription. A no-resurface blocklist filters out stories already covered in the archive so the draft generator does not repeat angles.
Video production pipeline standardized. The article-to-video format is now fixed: abstract title cards for opening and closing slides, document-composite images for middle slides where real source material is composited onto generated backgrounds. The first full pipeline run under this standard produced a complete article video end-to-end, confirming the format holds.
2026-03-11
Image generation pipeline via email relay. An image generation workflow routes requests through email to an external service that handles generation and uploads results to cloud storage. The assistant polls for completion and downloads the results automatically. Sequential generation is enforced for batch tasks to prevent parallel jobs from triggering a research permission gate in the external service. Confirmed timing: approximately 60 seconds per image. The pipeline is used for article featured images and slide backgrounds.
Mass publishing pipeline day. Four articles moved through the full pipeline in a single session: SEO agent, featured image generation, canvas update, and deploy. Social post HTML card format was established as the standard output for post-publish distribution: one HTML page per article with copy buttons for each platform. JSON queue format retired.
2026-03-10
Canvas deploy workflow reached production. The article-to-canvas pipeline went fully live with 11 drafts deployed. Mandatory pre-dispatch memory recall was formalized as a required step: before writing any agent prompt, the orchestrator queries domain-specific memories and injects returned rules verbatim into the prompt as a RULES block. This is the only mechanism by which institutional knowledge reaches sub-agents; isolated agents cannot access memory on their own. Skipping this step is defined as a process failure.
/overnight skill built. An unattended task-queue dispatch skill runs a defined list of jobs sequentially without human prompting. Tasks are defined in a queue file; the skill works through them in order and reports results. Designed for low-priority batch work that can run while away from the desk.
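The queue loop itself is simple (the queue format and reporting fields are assumptions; in practice `runner` would be an agent dispatch):

```python
import json
from pathlib import Path

def run_overnight(queue_file, runner):
    """Work through queued tasks in order and collect a per-task report.

    Failures are recorded and the queue keeps moving rather than aborting,
    so one bad task cannot sink a whole unattended run.
    """
    tasks = json.loads(Path(queue_file).read_text())
    report = []
    for task in tasks:
        try:
            report.append({"task": task, "status": "done", "result": runner(task)})
        except Exception as exc:
            report.append({"task": task, "status": "failed", "result": str(exc)})
    return report

Path("queue.json").write_text(json.dumps(["docket sweep", "property search"]))
report = run_overnight("queue.json", runner=lambda t: f"{t}: ok")
```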
Delegation gate system. A three-question pre-work test governs when to dispatch a sub-agent versus working inline: is the task bounded, does a template exist, and does it require three or more tool calls? A decision matrix covers 17 task types. A PreToolUse hook fires a soft warning before browser tool calls as a tripwire, since browser tasks are the most common violation of the delegation principle. The full matrix is stored in memory and surfaced on recall.
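The three-question gate, assuming all three answers must be yes before dispatching:

```python
def should_dispatch(bounded: bool, has_template: bool, tool_calls: int) -> bool:
    """Dispatch a sub-agent only when the task is bounded, a template exists,
    and the work needs three or more tool calls; otherwise work inline."""
    return bounded and has_template and tool_calls >= 3

dispatch = should_dispatch(bounded=True, has_template=True, tool_calls=5)
inline = should_dispatch(bounded=True, has_template=False, tool_calls=5)
```

The tool-call count is the tripwire dimension: browser tasks nearly always clear the three-call bar, which is why the PreToolUse hook targets them specifically.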
/save-state optimized. Quick mode is now the default: parallel sweep with no distill step, completing in under 30 seconds. A --full flag runs distill first when a full consolidation is needed. Same philosophy as the session-start cold-start architecture: keep the main thread lean and run heavy operations only on explicit request.
2026-03-09
Mid-roll TTS promo feature. The audio pipeline gained support for promotional inserts between article segments. Promos use a distinct voice profile, separate from the main article narration, so listeners can distinguish editorial content from promotional content. Mid-roll activation is threshold-driven: the promo fires between audio chunks once the article reaches sufficient length. A shared evergreen promo pool rotates content so the same promo does not repeat in back-to-back articles. A frontmatter field in the canvas template controls whether mid-roll is enabled on a per-article basis.
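The activation logic might look like this (the midpoint placement, the minimum-chunk threshold, and the file names are illustrative assumptions; `enabled` maps to the canvas frontmatter field):

```python
import itertools

class MidRollPromo:
    """Threshold-driven mid-roll insertion with a rotating evergreen pool."""

    def __init__(self, pool, min_chunks=3):
        self._rotation = itertools.cycle(pool)  # round-robin: no back-to-back repeats
        self.min_chunks = min_chunks

    def plan(self, n_chunks, enabled=True):
        """Return (chunk index after which the promo plays, promo file),
        or None when the article is too short or mid-roll is disabled."""
        if not enabled or n_chunks < self.min_chunks:
            return None
        return (n_chunks // 2, next(self._rotation))

promos = MidRollPromo(["promo_a.mp3", "promo_b.mp3"])
first = promos.plan(4)   # long enough: promo fires at the midpoint
short = promos.plan(2)   # below threshold: no promo, rotation untouched
```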
Pre-dispatch memory recall protocol. A mandatory step was added to the agent dispatch process: before writing any agent prompt, the orchestrator now queries the memory server for rules relevant to the task domain. Video production, article writing, browser automation, email sending, and other task types each trigger a domain-specific recall query. The returned rules are injected verbatim into the agent prompt as a RULES block. This closes a gap where sub-agents, which are isolated and cannot access memory directly, would repeat mistakes that had already been corrected and stored. The orchestrator is now the bridge that carries institutional knowledge into every dispatch.
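A sketch of the injection step (the domain queries and RULES header wording are illustrative; `recall` stands in for the memory-server call):

```python
DOMAIN_QUERIES = {
    "video": "video production rules",
    "email": "email sending rules",
}

def build_agent_prompt(task: str, domain: str, recall) -> str:
    """Prefix the task with a verbatim RULES block recalled from memory.

    `recall` is any callable that returns a list of rule strings for the
    domain's query; the orchestrator carries the rules, since isolated
    sub-agents cannot query memory themselves.
    """
    rules = recall(DOMAIN_QUERIES[domain])
    block = "\n".join(f"- {r}" for r in rules)
    return f"RULES (from memory, follow verbatim):\n{block}\n\nTASK:\n{task}"

fake_recall = lambda q: ["Use the concat demuxer, never the concat filter."]
prompt = build_agent_prompt("Assemble the article video.", "video", fake_recall)
```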
Project checklists system. Every project folder now requires a checklist.md file. The checklist tracks task completion within each project and provides a quick status snapshot without reading the full project README. This standardizes project tracking across all active, evergreen, and archived projects.
Audit-sync system built. A new /audit skill runs a system-wide consistency check: verifies that the operating brief, skill files, agent instructions, and documentation website are all in agreement. When it finds drift, it produces a structured report of discrepancies. The audit runs as a standalone command and is also embedded as a silent pre-check (Step 0.5) in both /wrap and /freeze session-close skills. This means every session close now includes a quick integrity check before distillation begins.
Trudy retired. Single-agent architecture. The Gemini-based sister agent (Trudy) has been retired as an autonomous agent. Those core capabilities, including video analysis, fact-checking, and Google Workspace access, are retained as tools within Tammy's orchestration layer. Gemini joins ChatGPT and Perplexity as an external tool Tammy calls on demand rather than a peer agent running independently. The system is now Tammy-centric: one orchestrator, 25 internal agents, and multiple external AI tools dispatched as needed. The website, agent documentation, and pipeline references have been updated to reflect the new architecture.
MCP memory server live. The memory system migrated from flat markdown topic files to an MCP server backed by ChromaDB with semantic search. 194 memories were migrated from the old markdown files. The end-of-session skills (/distill, /save-state, /catchup, /refresh, /wrap) were updated to use memory_save and memory_recall instead of manually reading and writing topic files. The /consolidate-memory skill was retired because deduplication is now automatic (the server rejects entries above 0.92 similarity). The old markdown memory files remain as read-only archive. New memories go to the MCP server, which supports natural-language recall, tag-based filtering, and automatic staleness detection.
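The dedup rule is a similarity threshold applied at save time; a toy illustration (ChromaDB does this internally against its own embeddings, so the two-dimensional vectors here are stand-ins):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def save_memory(store, embedding, text, threshold=0.92):
    """Reject a new memory whose embedding near-duplicates an existing one."""
    if any(cosine(embedding, e) > threshold for e, _ in store):
        return False  # duplicate: silently dropped, matching the server's behavior
    store.append((embedding, text))
    return True

store = []
ok = save_memory(store, [1.0, 0.0], "Use the demuxer method")
dup = save_memory(store, [0.99, 0.05], "Use demuxer, not filter")  # near-identical
```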
2026-03-08
Memory file compression rollout. Six system instruction and memory files compressed for token efficiency using a mechanical five-rule method: strip markdown formatting, remove blank lines, use shorthand notation, collapse lists inline, remove redundant context. Total reduction: 57% across 125KB of files (down to 54KB). The main operating brief (CLAUDE.md) was also compressed. No information was lost. No behavior changes. The AI reads the compressed versions identically. The method is documented on the Token Trim page and works with any AI system that loads instructions from files.
2026-03-07
SMS gateway and away mode. Bidirectional text messaging added via email-to-SMS gateway. The assistant can now send text messages to the user's phone and read replies via Gmail. This enables a remote command loop called "away mode": give the assistant a task list, leave the desk, and manage work from your phone. The assistant works through tasks, texts results, and polls for replies on a 90-180 second cycle. Reply "9" from your phone to stop polling. Say "home mode" when back at the desk. The full loop was tested end-to-end with live back-and-forth task dispatch, status updates, and follow-up instructions via SMS.
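The gateway trick is just email addressed to a carrier relay (the gateway domains shown are common US examples, not necessarily the ones in use here):

```python
from email.message import EmailMessage

CARRIER_GATEWAYS = {"verizon": "vtext.com", "att": "txt.att.net"}

def sms_message(number: str, carrier: str, body: str) -> EmailMessage:
    """Compose an email that a carrier's email-to-SMS gateway relays as a text."""
    msg = EmailMessage()
    msg["To"] = f"{number}@{CARRIER_GATEWAYS[carrier]}"
    msg["Subject"] = ""              # gateways render the subject inline; keep it empty
    msg.set_content(body[:160])      # classic single-segment SMS limit
    return msg

msg = sms_message("5551234567", "verizon",
                  "Docket sweep done. Reply 9 to stop polling.")
```

Replies land in Gmail as ordinary mail from the same gateway address, which is what the 90-180 second polling cycle watches for.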
Netlify form submission pipeline. The todo board and draft canvas both use Netlify Forms to capture user input from the web. A startup script now pulls all pending submissions via the Netlify API, writes actionable items to the inbox, and deletes processed submissions from the server. The todo board has a complete round-trip: check items on the web UI, submit, and the assistant processes them at next session start. Actions supported: Done (remove from list), Do This (flag for immediate work), Park (defer).
Gemini CLI sub-agent dispatch method changed. Version 0.32.1 of the Gemini CLI broke the -p flag system-wide. The confirmed working method is stdin pipe dispatch via a Bash subagent: pipe the prompt directly into gemini --yolo from the agent directory. A secondary constraint confirmed alongside it: the Desktop Commander MCP tool cannot capture Gemini CLI output because the CLI writes to the console buffer rather than a file descriptor. The Bash subagent route captures output correctly. Both constraints are documented so future sessions do not retrace the same discovery path.
Gemini CLI hook format corrected. The BeforeAgent hook configuration had been using a flat format that was valid in earlier versions but invalid in 0.32.1. The mismatch caused a startup warning in the Gemini agent on every new session. The format was corrected to the nested structure the current version requires. The fix was confirmed in the Gemini CLI migration source code.
2026-03-06
ChatGPT added to multi-AI routing. ChatGPT joins the multi-AI routing config as a third option alongside Gemini Pro and Perplexity Pro. Designated for cross-model validation and non-Google/non-Anthropic perspectives. Model picker includes Auto, Instant (fast), and Thinking (extended reasoning) modes.
Cold-start architecture. Session startup redesigned around a cold-start principle: minimum reads at launch, everything else on demand. Context at startup dropped 82% (roughly 15,600 tokens to 2,800). The main operating brief holds lean pointers; detail moved to satellite files that load only when a task requires them. Session init now performs four reads at startup, then greets.
Sub-agent library for the Gemini agent. The Gemini-side agent now supports a library of specialized sub-agents. Each sub-agent lives in its own folder with a dedicated GEMINI.md containing only its mission logic. The Gemini CLI loads instructions hierarchically, so sub-agents automatically inherit the root identity, operating rules, and output standards while adding nothing extra. The primary Gemini agent remains the orchestrator: it routes tasks, maintains cross-domain awareness, and dispatches bounded, stateless work to sub-agents. First sub-agent live: an email digest processor that extracts structured news alerts from the inbox, routes the digest to the Claude-side assistant, and handles all Gmail mark-read and archive operations the Claude-side Gmail MCP cannot perform.
Gmail delegation via Gemini agent. A single trigger command now hands off a full inbox sweep to the Gemini agent. It reads, categorizes, marks as read, archives, and returns a structured summary for the Claude-side assistant to act on. The Claude-side Gmail MCP can search and read but cannot modify. Delegation to Gemini closes that gap without any manual steps.
Transcriptionist sub-agent. A dedicated sub-agent handles audio transcription. It receives audio files, produces structured transcripts with speaker identification and timestamps, and returns output in a consistent format the main assistant can parse and act on. Follows the same bounded, stateless sub-agent pattern as the digest processor.
Parallel-dispatch coverage pattern. A workflow for real-time event coverage that dispatches multiple sub-agents against the same source simultaneously. Each sub-agent handles a different output layer: raw quotes, action items, narrative thread. Results return to the main assistant for synthesis. The pattern keeps any one agent from carrying too much context during a live session.
Standalone observation agent. A domain-specific agent built to analyze recorded meetings and produce timestamped behavioral observation logs. The agent describes only what is visible: device use, visible screen content, posture changes. No intent is inferred. Output format is designed to be factual and usable in formal proceedings. The agent runs entirely outside the main pipeline, isolated by design. Establishes the pattern for evidence-oriented AI workflows where keeping the human out of the judgment chain is the goal.
2026-03-05
Full website redesign and T2 rebrand. The site moved from single-agent "Tammy" branding to the T2 system identity. T2 stands for Tom and Tammy. New design system: dark navy (#0f1624) with blue, purple, and green accents. T² logomark used as favicon; T² wordmark as the nav brand. All seven pages on the new design system. Homepage rebuilt with capability icon grid, agent division cards, six-node workflow pipeline, stat cards, and three-step build guide.
Weekly and monthly maintenance completed. Tasks 2.1 (memory cleanup), 2.2 (VIP contact reconciliation), and 2.5 (signals log maintenance) completed for the weekly tier. Full monthly maintenance Tasks 3.1 through 3.6 completed: VIP database deep audit, agent instruction file review, project registry deep audit, archive integrity check, skills audit, and routing review. Five audit files written to the Tom/ async channel.
Session index live. A lightweight retrieval index now records every session automatically at close: date, topic summary, session ID, and open-vocabulary tags. The index accumulates across sessions and supports keyword search to locate prior work without reading full transcripts. The /distill skill appends one entry per session, generating the topic summary and tags from the conversation content. The /save-state skill now includes a Session Title field that seeds the entry. For long-running projects that span many sessions, this is the retrieval layer that makes accumulated work findable.
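A minimal sketch of the index's append-and-search shape (the file name and field names are assumptions):

```python
import json
from pathlib import Path

INDEX = Path("session_index.jsonl")

def append_session(date, title, session_id, tags):
    """Append one retrieval entry per session; /distill supplies these fields."""
    entry = {"date": date, "title": title, "id": session_id, "tags": tags}
    with INDEX.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def search(keyword):
    """Keyword match over titles and tags, no transcript reads required."""
    hits = []
    for line in INDEX.read_text().splitlines():
        e = json.loads(line)
        if keyword.lower() in e["title"].lower() or keyword in e["tags"]:
            hits.append(e)
    return hits

append_session("2026-03-05", "Website redesign and T2 rebrand", "s-101",
               ["website", "branding"])
found = search("branding")
```

Append-only JSONL keeps writes cheap at session close while the accumulated file stays small enough to scan in full at search time.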
2026-03-04
Video B-Roll download pipeline validated. yt-dlp configured with Node.js as the JavaScript runtime and the EJS challenge solver pulled from GitHub. Without a JS runtime, yt-dlp falls back to a 360p Android API path. With Node plus the remote solver, format selection resolves correctly and downloads reach 720p HLS. The production command is documented and cached after the first run. Pipeline tested end-to-end.
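The format-selection half of the command can be sketched with standard yt-dlp flags (the URL and output template are placeholders; the JS-runtime setup that unlocks 720p is environment configuration and is not shown here):

```python
def ytdlp_command(url, out_dir="broll"):
    """Build a yt-dlp invocation capped at 720p.

    -f asks for the best video stream up to 720p plus best audio, falling
    back to the best combined stream at that height; -o names downloads
    after the video title inside the B-roll folder.
    """
    return [
        "yt-dlp",
        "-f", "bv*[height<=720]+ba/b[height<=720]",
        "-o", f"{out_dir}/%(title)s.%(ext)s",
        url,
    ]

cmd = ytdlp_command("https://www.youtube.com/watch?v=EXAMPLE")
```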
B-Roll Library project initiated. Palm Bay's official YouTube channel has 397 videos spanning 2014 to present: Parks and Rec events, infrastructure tours, city council content. A filtering pass estimated 140 eligible clips under five minutes. Project folder live at projects/active/B-Roll-Library/. Scope decision on batch download deferred.
Trudy STATUS.md auto-check added to session initialization. Session startup now reads C:\Users\tgaum\Gemini\STATUS.md at greeting and surfaces any IN PROGRESS or BLOCKED Trudy tasks. Previously, Trudy task status required a manual check. Now it surfaces automatically alongside the project registry and workflow candidates.
Bidirectional agent communication system completed. The two-agent system now operates with a fully closed communication loop. The primary agent can dispatch tasks to the secondary agent in one command, writing a structured trigger file that the secondary agent's background watcher converts to a task automatically. If the secondary agent is not actively running, the watcher now spawns a headless session to process the incoming task without manual intervention. The secondary agent reports back through a shared inbox; the primary agent receives an in-session notification when the report lands. Task handoff and response handling are both automatic.
2026-03-03 (session 5)
Second agent benchmarked and moved to permanent roster. The Gemini-based agent completed its first formal work session with documented performance metrics. A written operating protocol now governs the two-agent team: task handoff procedures, communication conventions, quality standards, and the oversight gates that require human review before anything leaves the workspace. A "Context Handshake" step was added to the workflow -- before starting any high-context task, the Gemini agent checks the relevant project folder for existing research to avoid duplicating work done in prior sessions. The agent moved out of probationary status and onto the permanent roster.
File context management: Phase 1 implemented. The main operating brief and the Gemini agent's operating file were restructured to stay lean. Approach: the core file holds identity, commands, and pointers; everything else moves to satellite files loaded on demand. The main brief shrank from 683 lines to ~396 lines. Self-pruning by design: history grows in the archive, the active context load stays flat.
File context management: Phase 2 implemented. Three changes completed the context optimization pass. Tool integration reference extracted into a dedicated import file. All output from the secondary Gemini agent now routes through the primary assistant before reaching the user. Google Calendar pulled automatically at every session greeting rather than requiring an explicit command.
2026-03-03 (session 4)
A second AI agent joined the team. Gemini CLI (running Gemini 2.5 Pro with a 2M token context window) is now configured as a second autonomous agent alongside the Claude-based assistant. The two agents communicate through shared file inboxes. The Claude assistant acts as the buffer -- all output from the Gemini agent routes to the Claude inbox first for review and routing before reaching the user. This gives the system access to deep document processing, 2M token context, and native Google Search grounding for heavy-lift tasks without bypassing the primary workflow.
Zero-token inbox notification system. A background PowerShell FileSystemWatcher monitors both agent inboxes simultaneously. When a new file arrives, it writes a flag file and sends a push notification to the phone via ntfy.sh (a free, open-source push service requiring no account on the assistant's end). The entire detection layer costs zero AI tokens -- it runs at the OS level, fully outside the Claude context window. Auto-starts at Windows login via a silent VBS launcher in the Startup folder.
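The notification step reduces to two actions: write a flag file for the next session, then POST the message to an ntfy.sh topic. A minimal sketch, assuming a hypothetical topic URL and flag-file naming; the real watcher is PowerShell, and this Python version only mirrors its logic:

```python
import urllib.request
from pathlib import Path

NTFY_TOPIC = "https://ntfy.sh/your-topic"  # hypothetical topic; the real one is private

def build_notification(inbox: str, filename: str) -> str:
    """Compose the push message for a newly arrived inbox file."""
    return f"New file in {inbox}: {filename}"

def on_new_file(path: Path, flag_dir: Path, send: bool = True) -> Path:
    """Write a flag file for the next session, then push a phone notification."""
    flag_dir.mkdir(exist_ok=True)
    flag = flag_dir / f"{path.name}.flag"
    flag.write_text(build_notification(path.parent.name, path.name))
    if send:  # ntfy.sh accepts the message as a plain POST body
        req = urllib.request.Request(NTFY_TOPIC, data=flag.read_text().encode())
        urllib.request.urlopen(req, timeout=5)
    return flag
```

Because the OS-level watcher does the detecting and a plain HTTP POST does the notifying, no model tokens are spent until a session actually reads the flag.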
2026-03-03 (session 3)
Gmail MCP live. Gmail is now directly accessible from every Claude Code session via the @gongrzhe/server-gmail-autoauth-mcp package. OAuth credentials are stored locally and auto-refresh -- no manual token step is needed at session start. Two email commands are formalized: one for scanning the general inbox, another for processing inbound mail addressed to the assistant.
2026-03-03
Maintenance skill built. /tammy-maintenance is a three-tier routine for keeping the system healthy. The daily tier checks the todo list for stale or blocked items, scans the project registry for missed deadlines, and sweeps the inbox for unprocessed files. The weekly tier runs memory cleanup, audits the VIP contact database, triggers a signals sweep, and syncs the website if system files changed. The monthly tier runs a full NotebookLM audit and a deep memory consolidation pass.
Async communication channel established. A Tom/ folder was added to the assistant workspace as a one-way async channel from the assistant to Thomas. Thread summaries, proposals, audit reports, and research results land there. Thomas reads on his own schedule. The assistant does not act on Tom folder contents in a new session unless Thomas directs it to.
Google Analytics live on this site. GA4 tracking deployed across all seven pages. Measurement ID G-BHD1LM4PFG is live in production. Baseline data collection started March 3, 2026.
Auto-compact experiment: tried, reverted. Disabled auto-compaction globally for one session. It wasn't better. Auto-compact is the right default. autoCompactEnabled is back to true.
MCP gap identified: Claude.ai connectors and Claude Code use separate registries. Connectors installed via the Claude.ai web interface are not available in Claude Code sessions. The fix is to configure each service explicitly as a Claude Code MCP server via the provider's developer console. Once done, email and calendar operations work natively in Claude Code without any browser automation.
Canva MCP server wired into Claude Code. Canva's Model Context Protocol server is now configured in the global Claude Code settings, enabling programmatic design creation, asset management, template operations, and export directly from the assistant.
Video production pipeline reached production quality. The article-to-video pipeline completed its first full production run and standards are locked in. Key decisions: 30fps standard, publication URL on every slide, caption band stops at 85% frame height. A FRESH_BUILD flag controls whether the run regenerates all API assets or reassembles cached media. Script quality is the controlling factor.
2026-03-02
Status line configured. A persistent one-line status bar now appears after every response showing context window percentage, session cost in dollars, and the active model. Reads session data via a shell script; zero token cost.
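Claude Code pipes session JSON to the status-line command on stdin, and the script prints one line back. The rendering logic can be sketched as below; the field names (`model.display_name`, `cost.total_cost_usd`, `context_pct`) are assumptions for illustration, so inspect the real payload before relying on them:

```python
def format_status(data: dict) -> str:
    """Render the one-line status bar from the session JSON that
    Claude Code pipes to the configured status-line command.
    Field names here are illustrative assumptions, not a documented schema."""
    model = data.get("model", {}).get("display_name", "?")
    cost = data.get("cost", {}).get("total_cost_usd", 0.0)
    pct = data.get("context_pct", 0)
    return f"ctx {pct}% | ${cost:.2f} | {model}"
```

A wrapper script would call this with `json.load(sys.stdin)` and print the result; since it is plain text processing, the token cost stays zero.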
Compact Instructions section added to CLAUDE.md. A ## Compact Instructions section tells Claude what to preserve during context compaction: active projects, pending decisions, modified files, tasks in progress, and time-sensitive items.
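The section is a short preservation allowlist. A sketch of its shape (wording illustrative, not the exact text in CLAUDE.md):

```markdown
## Compact Instructions

When compacting context, always preserve:
- Active projects and their current status
- Pending decisions awaiting user input
- Files modified this session
- Tasks in progress and their next step
- Time-sensitive items (deadlines, scheduled publishes)
```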
Claude Code research sweep. Twenty tips documented from official docs and community sources. Key finds: status line, session forking with /fork, background bash with Ctrl+B, external editor for prompts with Ctrl+G, per-turn diff viewer with /diff, and built-in usage analytics.
Litigation Research Agent built. A dedicated Sonnet agent for monitoring federal and county court dockets and drafting public records requests when litigation milestones occur.
Website sync skill added. /tammy-website-sync compares the live system against the site and updates it automatically. Handles new page creation including scaffold generation and navigation updates across all existing pages.
Contact database expanded with email intelligence. A people database of key contacts grew to include email addresses, relationship context, organizational context, and tone preference per contact. Every email action updates the record automatically.
Capability reference sheet built. A single-page REFERENCE.md documents every command, skill, agent, and tool in one place. A /help command surfaces it on demand.
Workflow crystallization system built. A detection log records multi-step task patterns as they happen. When the same pattern appears twice, the system surfaces it at session start. Tree of Thought reasoning gained an automatic trigger: it now activates on any decision with three or more viable approaches.
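The twice-seen rule is just a frequency threshold over the detection log. A minimal sketch, with pattern names and the storage format assumed rather than taken from the actual log:

```python
from collections import Counter

class PatternLog:
    """Record multi-step task patterns; surface any seen at least twice."""

    def __init__(self):
        self.counts = Counter()

    def record(self, pattern: str) -> None:
        """Log one occurrence of a multi-step task pattern."""
        self.counts[pattern] += 1

    def candidates(self) -> list:
        """Patterns that recurred and should be surfaced at session start."""
        return [p for p, n in self.counts.items() if n >= 2]
```

At session start, anything in `candidates()` becomes a prompt to crystallize the pattern into a skill.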
Slash commands registered as Claude Code commands. Custom skills are now registered in .claude/commands/ as proper Claude Code slash commands with tab autocomplete.
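Claude Code treats each markdown file in `.claude/commands/` as a slash command named after the file. An illustrative example (the prompt text is invented, not the real skill body):

```markdown
<!-- .claude/commands/catchup.md — becomes /catchup -->
Summarize everything that changed since the last session:
check the todo list, the project registry, and the inbox,
then report anything stale or blocked.
```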
/wrap session close skill built. One command runs the complete end-of-session sequence: distill with auto-accept, website sync if system files changed, save-state, then a thread close signal.
Expert signals agent built with suppression mechanism. A dedicated agent reads a cross-reference log before each sweep. Signals already triaged are suppressed unless their status has changed. Seven parallel source agents check public data and deliver a three-bucket triage.
Research notebook made public. The documentation notebook covering the T2 system, Claude Code, hooks, skills, and agent architecture is now publicly accessible via NotebookLM. Link added to the home page and getting-started page.
Todo workflow documented as a standalone page. A Todo Protocol section explains the persistent todo function: two trigger phrases, a plain-text file on disk, and a two-instruction CLAUDE.md implementation.
2026-03-01
Post-publish agent workflow built. Two new agents added for the publication pipeline. SEO Agent handles platform audit via browser automation. Social Media Agent writes platform-optimized posts for each distribution channel. Core rule: posts should tease, not summarize.
Slide deck agent built. Takes a published article and generates a structured prompt for visual slide deck generation. Flexible slide count based on article complexity. Call-to-action slide is always last.
Project folder structure reorganized into three tiers: Evergreen for permanent beats and live tools, Active for work in progress, Archive for completed high-value work. A cold storage tier was later added for low-value completed projects.
/legal skill built. Legal research via Gemini CLI with Google Search grounding. Retrieves current statute text before answering. Two output modes: Quick Reply and Full Brief.
/pst skill built. Public sentiment sweep on any topic via browser automation. Returns a structured brief with sentiment distribution, themes, and story angles.
Voice drift system built. /distill extended to scan for voice corrections alongside memory updates. Voice corrections are logged automatically to voice-drift.md. When the same correction appears three or more times, it is flagged as confirmed.
Exit-signal triggers added to /save-state. Phrases like "wrapping up" and "I'm done for today" now trigger /save-state automatically without waiting for an explicit command.
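Detecting an exit signal amounts to matching the message against a small phrase list. A sketch, with an illustrative trigger set (the real phrase list lives in the system files):

```python
import re

# Illustrative trigger phrases; the actual list is maintained in the system files.
EXIT_SIGNALS = [
    r"\bwrapping up\b",
    r"\bi'?m done for today\b",
    r"\bthat'?s all for now\b",
]

def is_exit_signal(message: str) -> bool:
    """Return True when the user's message sounds like a session close."""
    lowered = message.lower()
    return any(re.search(pattern, lowered) for pattern in EXIT_SIGNALS)
```

When a match fires, the session runs /save-state without waiting for the explicit command.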
CLAUDE.md import syntax added. Large, stable sections of CLAUDE.md can now be extracted to dedicated files and imported with a single reference line.
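Claude Code's import syntax is a bare `@path` line inside CLAUDE.md. A sketch of the resulting layout (the satellite file names here are illustrative):

```markdown
# CLAUDE.md (excerpt)
...identity, commands, and pointers stay inline...

@docs/tool-integrations.md
@docs/video-production-sop.md
```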
2026-02-28
Browser state rule added. At session start and after compaction, the browser is treated as CLOSED. No tab IDs from prior sessions are valid. Rule injected into session-start.sh, post-compact.sh, and CLAUDE.md.
Claude Code documentation added to knowledge base. New notebook covering Claude Code, MCP, hooks, and browser automation.
T2 website project built. Static site documenting how to build a Claude-based personal assistant. Interview-first approach: no template CLAUDE.md, instead a prompt that generates one from the reader's answers. Seven pages. Deploy target: Netlify.
2026-02-27
Hooks system built. Three lifecycle hooks: session-start (identity/voice/checklist on startup), post-compact (context restoration after compaction), memory-reminder (nudge to update memory on every prompt). Config in .claude/settings.json, scripts in .claude/hooks/.
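In `.claude/settings.json`, each lifecycle event maps to a list of command hooks. A sketch of the shape for two of the three hooks, following Claude Code's hooks configuration format as I understand it (verify event names and nesting against the hooks reference before copying; the post-compact script attaches to the compaction lifecycle event in the same shape):

```json
{
  "hooks": {
    "SessionStart": [
      { "hooks": [{ "type": "command", "command": ".claude/hooks/session-start.sh" }] }
    ],
    "UserPromptSubmit": [
      { "hooks": [{ "type": "command", "command": ".claude/hooks/memory-reminder.sh" }] }
    ]
  }
}
```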
Structured memory rebuilt. Monolithic MEMORY.md split into a concise index plus topic files. Index stays under 200 lines; depth lives in topic files.
Six system improvements implemented: Path-specific rules in .claude/rules/. PreToolUse validation hook. /catchup skill. !command prompt syntax documented. Inbox pipeline formalized. /consolidate-memory skill built.
/signals skill built. Pre-news signal sweep across public record sources relevant to the beat. Parallel Sonnet subagent dispatch.
2026-02-26
NotebookLM integration built. Routing config maps query types to notebooks. NotebookLM Agent (Sonnet) added.
/notebooklm-maintain skill created. Maintenance workflow for notebook source hygiene: audit, dedupe, consolidate, add/remove.
/uber-health skill created. Template for automating a recurring multi-step workflow combining calendar data with an external booking service.
Project management system built. PROJECT_REGISTRY.md created as single source of truth. Archive index created as searchable catalog of completed work.
Domain-specific agents separated from generic Draft Agent. Each publication got its own agent with a dedicated style guide, fact-check protocol, and source verification step.
Publication archive built. Articles processed from export into a structured repository. Archive Agent (Haiku) handles builds; Librarian Agent (Sonnet) handles queries.
2026-02-25
Initial onboarding session. Identity, work context, beat areas, workflows, and source data locations documented in CLAUDE.md. The file started empty and was filled through an interview.
Initial agent architecture defined. Orchestrator (Opus), domain research and writing agents (Sonnet), structured extraction agents (Haiku). Task handoff protocol and folder structure defined.
NotebookLM added as primary research tool. Multiple notebooks organized by topic area. Each notebook is a focused corpus, not a general dump.
Multi-AI strategy documented. Gemini Pro, Perplexity Pro, and Manus added alongside Claude agents. Routing principle: cheapest capable tool per subtask.
Analytics infrastructure built. Google Search Console verified for both publications. GTM containers and GA4 tags configured and published.