Core Features

This is not a chatbot with a long system prompt. It is a persistent operating environment: memory that survives between sessions, hooks that enforce consistency, skills that encode repeatable workflows, save and resume across threads, workflow detection that proposes its own growth, and a todo system that knows when to defer. Each capability is described below as it actually works.


Persistent Memory

CLAUDE.md is the permanent operating brief. Claude reads it at the start of every session. It contains your identity, your projects, your voice rules, your file structure, your current priorities, and your working style. Nothing in it evaporates. When you update it, the update is live the next session.

voice_profile.md is the voice calibration file. Generated during the interview by analyzing how you actually wrote across your answers: sentence length, register shifts, punctuation habits, what you avoided. The assistant reads it before writing anything in your name. It is not a description of your style; it is a behavioral analysis of it. It improves over time through a refinement interview run after a few weeks of real use.

MCP memory server is the learning layer. Backed by ChromaDB with semantic search, it replaces the old flat-file memory system. The assistant saves memories as it works with you: corrections, patterns, preferences discovered through use. Each memory carries text, tags, and a source reference. Five tools handle the full memory lifecycle.

The old markdown memory files (topic files like editorial-judgment.md, workflow.md, browser-automation.md) are kept as read-only archive backups. All new memories go to the MCP server. If the server is down, the assistant falls back to reading the archive files until it is fixed.

Periodic memory sweep runs automatically every 20 conversation turns. A hook triggers a silent background agent to review the last 20 exchanges and save any noteworthy patterns, corrections, or preferences to the memory server. No output to the user. This supplements the per-turn memory nudge and the end-of-session distill, catching patterns that accumulate gradually over long sessions without requiring manual intervention.

Pre-dispatch memory recall bridges the gap between the orchestrator and its sub-agents. Sub-agents are isolated; they cannot access the memory server directly. Before dispatching any agent, the orchestrator queries the memory server for rules relevant to the task domain (video production, article writing, browser automation, email, and others each have their own recall trigger). The returned rules are injected verbatim into the agent prompt. This means institutional knowledge, corrections from past sessions, and process rules reach every agent without the agent needing its own memory access. The orchestrator carries the lessons forward.
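The recall-and-inject step can be sketched in a few lines of shell. This is illustrative, not the actual MCP tooling: it assumes a flat rules file with a hypothetical `[domain]` prefix per line, standing in for a real memory-server query.

```shell
# Sketch of pre-dispatch recall. The real system queries the MCP memory
# server; the shape is the same: fetch domain-relevant rules, then
# prepend them verbatim to the agent prompt.
# The "[domain] rule text" line format is hypothetical.
recall_rules() {
  local domain="$1" rules_file="$2"
  grep "^\[$domain\]" "$rules_file" | sed "s/^\[$domain\] //"
}

build_agent_prompt() {
  local domain="$1" rules_file="$2" task="$3"
  printf 'Rules from memory:\n%s\n\nTask:\n%s\n' \
    "$(recall_rules "$domain" "$rules_file")" "$task"
}
```

The agent never touches memory itself; it simply receives a prompt whose first section is the distilled rules for its domain.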

The distinction matters: CLAUDE.md is what you tell the assistant up front. voice_profile.md is how it learns to write like you. The memory server is what it learns over time. Pre-dispatch recall is how that knowledge reaches sub-agents. All four layers persist. None of them require you to paste context into a new chat.


Hooks

Lifecycle hooks are shell scripts that fire automatically at session events. They are configured in .claude/settings.json and live in .claude/hooks/.

SessionStart : session-start.sh

Fires when the session begins. Injects your identity, a voice summary, the list of banned words, and a session checklist. Runs fast (roughly 0.4 seconds) so it does not delay the greeting. Heavy tasks, including Netlify form submission fetches, background monitoring checks, and inbox cleanup, are dispatched to a background agent after the greeting completes. Every session starts with the same context regardless of what was discussed last time. It also scans the inbox directory and surfaces any unprocessed files waiting for routing. This hook also enforces the browser state rule: the browser is closed at session start and after compaction. No tab IDs from prior sessions are valid.

SessionStart : post-compact.sh

Fires specifically after a compaction event (when Claude summarizes and compresses context). Restores the critical context that compaction would otherwise lose.

UserPromptSubmit : memory-reminder.sh

Fires after each of your messages. Nudges Claude to consider whether something in the conversation is worth writing to memory. It does not force updates. It creates the habit.

PreToolUse : pre-tool-validate.sh

Fires before any tool call executes. Validates the operation before it runs. Blocks destructive commands. Protects critical files: CLAUDE.md, PROJECT_REGISTRY.md, catalog.json, VIP.csv, Staff directory. Enforces read-only on the article archive. Prevents inbox files from being deleted (they get moved to processed/ instead). Exit code 0 allows the tool call. Exit code 1 blocks it and returns the reason.

Setting them up

Create a .claude/hooks/ directory in your project folder and a .claude/settings.json config file. A minimal configuration:

{
  "hooks": {
    "SessionStart": [
      {
        "matcher": "startup|resume",
        "hooks": [{ "type": "command", "command": "bash .claude/hooks/session-start.sh", "timeout": 10 }]
      }
    ],
    "PreToolUse": [
      {
        "hooks": [{ "type": "command", "command": "bash .claude/hooks/pre-tool-validate.sh", "timeout": 5 }]
      }
    ],
    "UserPromptSubmit": [
      {
        "hooks": [{ "type": "command", "command": "bash .claude/hooks/memory-reminder.sh", "timeout": 5 }]
      }
    ]
  }
}

Hook scripts are shell scripts. They write to stdout and exit with a code. Exit 0 allows the operation; any stdout content is injected as context into the session. Exit 1 blocks the operation and returns the stdout content as the reason. A session-start script echoes your identity and checklist. A pre-tool-validate script returns exit 1 with an explanation when a blocked command is attempted.
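The validation pattern is easy to sketch. A minimal validator, assuming the tool-call payload arrives as JSON on stdin; the exact payload shape varies by version, and the patterns matched here are illustrative, not the production rule set:

```shell
# validate_tool: reads the tool-call payload on stdin.
# Exit 0 allows the call; exit 1 blocks it and prints the reason,
# matching the convention described above.
validate_tool() {
  local payload
  payload=$(cat)

  # Block obviously destructive shell patterns (illustrative list).
  if printf '%s' "$payload" | grep -Eq 'rm -rf|mkfs'; then
    echo "Blocked: destructive command pattern."
    return 1
  fi

  # Protect a critical file from write operations.
  if printf '%s' "$payload" | grep -Eq '"(Write|Edit)"' &&
     printf '%s' "$payload" | grep -q 'CLAUDE.md'; then
    echo "Blocked: CLAUDE.md is protected."
    return 1
  fi
  return 0
}
```

A real script would match on the parsed tool name and file path rather than raw substrings, but the contract is the same: silent exit 0 to allow, printed reason plus exit 1 to block.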

Hooks are not required on day one. The assistant works without them. They become valuable once the assistant is calibrated and you want the same context injected reliably on every session start.


Status Line

A persistent one-line status bar appears after every assistant response. It shows the context window graph and percentage. The display updates in real time. No tokens are consumed generating it.

Configured in the global settings.json with a single entry pointing to a shell script. The script receives session data as JSON via stdin and outputs whatever text you want displayed. A minimal setup shows the context graph and percentage. A more elaborate one adds color coding: neutral below 60%, a warning at 60%, a high alert at 80%.

{
  "statusLine": {
    "type": "command",
    "command": "bash ~/.claude/statusline.sh"
  }
}
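The script on the other end might look like this. It is a sketch: the `context_pct` field name is an assumption for illustration, not a documented key, so check what your version of Claude Code actually pipes in before relying on it.

```shell
# render_status: reads session JSON on stdin, prints a one-line bar.
# Assumes a numeric "context_pct" field (hypothetical name).
render_status() {
  local input pct filled i bar=""
  input=$(cat)
  pct=$(printf '%s' "$input" | sed -n 's/.*"context_pct":[[:space:]]*\([0-9][0-9]*\).*/\1/p')
  pct=${pct:-0}

  # Ten-segment bar, one filled segment per 10%.
  filled=$(( pct / 10 ))
  for (( i = 0; i < 10; i++ )); do
    (( i < filled )) && bar="${bar}#" || bar="${bar}-"
  done

  # Thresholds from above: neutral below 60, warning at 60, alert at 80.
  if   (( pct >= 80 )); then printf '\033[31m[%s] %s%%\033[0m\n' "$bar" "$pct"
  elif (( pct >= 60 )); then printf '\033[33m[%s] %s%%\033[0m\n' "$bar" "$pct"
  else                       printf '[%s] %s%%\n' "$bar" "$pct"
  fi
}

echo '{"context_pct": 45}' | render_status
```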

Context percentage is the most useful number. Auto-compaction triggers near 95%, and the conversation degrades before that. Watching the number lets you choose when to compact rather than getting caught by it.


Skills

Skills are markdown files that define custom workflows. Claude reads the file and executes the steps. There is no code. If you can describe a workflow, you can make it a skill.

Skills live in skills/ and are listed as recognized commands in CLAUDE.md. Claude handles them directly without any plugin or tool integration.

Core skills

The first four are generated during setup. Every assistant built on this approach gets them. The final entry is a bonus example of what a maintenance skill looks like once you have a research library worth managing.

/save-state -- Quick checkpoint. Writes state.md in parallel in under 30 seconds without running /distill. Use /save-state --full to run distill first. Exit signals route to the appropriate close workflow based on context analysis: freeze for topic switches with nothing pending, wrap for full session closes, and save-state for mid-session checkpoints. No single exit phrase auto-triggers save-state for everything.

/distill -- Sweeps the current conversation for new patterns and proposes memory file updates. You approve or reject each one. Also scans for voice corrections and logs them automatically to the memory server with voice-drift tags. Stable patterns accumulate. At close, appends one entry to the session index: date, topic summary, tags, and session ID. The index grows silently.

/catchup -- Controlled context reload. Reads five essential sources (operating brief, time-sensitive table, state file, memory session start, and todo list) without the information loss of a full /compact. Preferred when context gets long.

/help -- Displays a capability reference listing every command, skill, agent, and tool in one file. Faster than reading CLAUDE.md when you just need to know what is available.

/wrap -- Runs the full session-close sequence in one command: distill with auto-accept, then website sync, then save-state, then a thread close signal. Use when finishing a full session and wanting everything captured before closing the thread.

/freeze -- Lean session close for mid-day topic switches. A silent audit check runs first, then deletes state.md, runs /distill, and stops. No website sync, no save-state. Use when you are switching topics and do not need carry-forwards.

/refresh -- Unattended mid-session reset. Runs /distill with auto-approve, deletes state.md, then reloads context via /catchup without any prompts. Reclaims context space without starting a new thread.

/recover -- Heavy reset for badly bloated sessions. Runs /distill with auto-approve, deletes state.md, then prompts you to /clear and /catchup for a full 200K context reclaim. The nuclear option when /refresh is not enough.

/audit -- System-wide consistency check. Verifies that the operating brief, skill files, agent instructions, and documentation are all in agreement. Produces a structured report of any discrepancies. Also runs silently as a pre-check in /wrap and /freeze so every session close includes an integrity check.

/check-city -- Automated monitoring sweep of public record sources: meeting agendas, council videos, government news, and legal notices. State-tracked so each sweep only surfaces new items. Sends a notification when something drops.

/overnight -- Dispatches a queued task list as an unattended session. Tasks defined in a queue file run sequentially without human prompting.

Bonus example

/notebooklm-maintain -- Audits and cleans NotebookLM source libraries. Interactive menu: audit, dedupe, consolidate, add, remove. Unattended batch mode for large notebooks.

What people build

These are skills built on this system. They are here as examples of what the pattern makes possible, not as a default list. Each one started as a description of a repeated workflow.

Civic beat stack -- The first workflow built on this system. Covers local government from public records to published story: a monitoring sweep surfaces what is worth covering, an agenda agent extracts action items and dollar figures from meeting documents, an OSINT agent profiles the names, a writing agent drafts from the research, an SEO agent sets metadata, a social agent generates distribution posts. Start to publish without leaving the terminal. Built for municipal journalism; the pattern applies to any domain with recurring public records and a repeat publication workflow.

/legal -- Routes a legal or regulatory question to an external AI with live search grounding. Gets current statute or rule text before answering. Eliminates training-data hallucination on time-sensitive law. Two output modes: quick reply and full brief.

/pst -- Public sentiment sweep on any topic. Dispatches to a social media research tool with a multi-analyst expert panel prompt. Returns a structured brief with themes, distribution, and angles. Useful before publishing on contested subjects.

/signals -- Automated monitoring sweep across multiple public data sources. Dispatches parallel agents. Returns structured triage of what is worth acting on. Built for a journalism beat; the pattern works for any domain that has public records.

/uber-health -- Automates a recurring workflow that combines calendar data with an external service. This one handles medical transportation logistics. The pattern applies to anything you book or request on a schedule.

/map-advisory -- Generates a map image and shareable URL for a location-based article or advisory. Automates map creation and screenshot capture in one workflow. Built for journalism coverage of road closures and utility work; the pattern applies to any domain with geographic data to visualize.

/social -- Generates platform-optimized social media posts for a published article across multiple distribution channels. Each channel gets a tailored post with the right length, tone, and format for that platform. The user picks, edits, and posts. The agent drafts only. Collapses a multi-step distribution workflow into a single command.

/website-sync -- Compares the live system against its own documentation website, identifies gaps, proposes updates, applies them after approval, and deploys. Self-documenting system maintenance. The system keeps its own website current.

/video-production -- Article-to-video pipeline. Rewrites article text as a newscast script, generates TTS narration, extracts word-level timestamps, creates AI-generated slides timed to the script, and assembles everything with FFmpeg including a music bed. Standard articles target 5-7 minutes. Deep investigations can run 10-14 minutes. Agent-dispatched. See the video production page for full pipeline details.

/tts-article-audio -- Generates narrated audio from article text. Reads the source article, styles the narration for the appropriate publication, runs it through the TTS engine, and outputs an audio file ready for embedding in the canvas player. Supports speech rate control via a flag.

/transcript-to-article -- Converts a meeting transcript into a publishable article. Takes speaker-labeled transcript output, maps speakers to real names via the contact roster, extracts key decisions and quotes, and drafts a structured article in the publication's house style. Useful after any recorded meeting.

/check-bookings -- Daily felony booking sweep filtered by jurisdiction. Pulls arrest records from a public jail records source, filters by county, and surfaces bookings that meet editorial threshold for coverage. State-tracked so each run only reports new entries since the last check.

/context-status -- Displays current session context usage and status. Shows context window consumption and any carry-forwards worth noting. Use when you want a quick read on session health without interrupting work.

/maintenance -- Three-tier maintenance routine. Daily: todo health check, project registry pulse, inbox sweep. Weekly: memory cleanup, contact reconciliation, signals log maintenance, conditional website sync. Monthly: contact database deep audit, agent instruction file review, project registry deep audit, archive integrity check, skills audit, routing review. One command per tier. The system builds continuously; this maintains it.

Any repeatable workflow becomes a skill when you write it down. The assistant builds the file. You describe the workflow. One pattern worth noting: mandatory quality gates. Every article pipeline now requires an Opus-tier fact-check pass before publication. This is enforced as a checklist item, not left to judgment. Quality gates like this compound over time as the system learns which steps cannot be skipped.


Save and Resume

Type /save-state at the end of any session.

By default, /save-state does not run /distill. It writes a structured summary to state.md directly, completing in under 30 seconds. The summary includes what was accomplished, what is pending, and instructions for the next session. To run distill first and then save, use /save-state --full. Quick mode is the right default for mid-session checkpoints. Full mode is for end-of-day closes when you want everything captured and consolidated.

Next session, Claude reads state.md. If the save date is today or yesterday, it resumes from the pending item. It also checks the clock against any time-sensitive claims in state.md before accepting past-tense language. If state.md says a vote was taken but the session was saved that morning and the vote is scheduled for tonight, Claude checks the time before treating it as done.
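The freshness rule reduces to simple date arithmetic. A sketch (GNU date; BSD/macOS date spells the parsing flag differently):

```shell
# fresh_enough: is state.md recent enough to auto-resume?
# Prints "resume" for a save dated today or yesterday, "stale" otherwise.
fresh_enough() {
  local saved="$1" now_s saved_s age_days
  now_s=$(date +%s)
  saved_s=$(date -d "$saved" +%s)
  age_days=$(( (now_s - saved_s) / 86400 ))
  if [ "$age_days" -le 1 ]; then echo "resume"; else echo "stale"; fi
}
```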

No copy-pasting context between sessions. No re-explaining what you were doing. The thread picks up.

For long-running projects, /distill also maintains a session index: a flat list of every session with its date, topic summary, and keyword tags. The index accumulates across weeks and months. When you need to find prior work, a simple keyword search against the index surfaces the relevant session IDs, which point to full transcripts. This is the retrieval layer that makes accumulated work findable without reading everything.
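The search itself can be a single grep. A sketch, assuming a hypothetical one-line-per-session index format of `date | topic | tags | session-id`:

```shell
# find_sessions: case-insensitive keyword search over the session index,
# printing the session ID (assumed here to be the last |-separated field).
find_sessions() {
  local keyword="$1" index="$2"
  grep -i "$keyword" "$index" | awk -F'|' '{ gsub(/ /, "", $NF); print $NF }'
}
```

`find_sessions budget path/to/index` prints the IDs of every session whose date, topic, or tags mention "budget"; the IDs point to full transcripts.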

Compaction is lossy by default. Adding a ## Compact Instructions section to CLAUDE.md changes that. Claude Code recognizes this section and uses it to guide what gets preserved when the context window fills. List what matters: active projects, pending decisions, files modified in the session, tasks in progress, time-sensitive deadlines. Claude protects those items even when the rest of the context gets compressed.
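A sketch of what such a section might contain. The heading name is the one Claude Code recognizes; the items listed are illustrative and should match your own priorities:

```markdown
## Compact Instructions

When compacting, always preserve:
- Active projects and their current status
- Pending decisions awaiting user input
- Files created or modified this session
- Tasks in progress and their next step
- Time-sensitive deadlines and scheduled events
```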


Workflow Detection

The system watches itself work. When the Orchestrator completes a task that involved two or more agent dispatches or two or more distinct tool sequences, it logs the pattern to a detection file. Name, date, sequence of steps. This happens inline, right after the task finishes. Not at the end of the session.

When the same pattern appears twice, the system surfaces it at the next session start: "Workflow candidate ready to formalize: [name] (2 occurrences)." You decide whether to turn it into a skill. The system proposes its own growth, but it does not build speculatively. Nothing gets formalized without a decision. Separately, Tree of Thought reasoning activates automatically on design decisions with three or more viable approaches. No trigger phrase required. The assistant shows the branches and tradeoffs before recommending.
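The mechanism needs nothing more than append-and-count. A sketch, assuming a hypothetical tab-separated detection file of `date<TAB>pattern-name` entries:

```shell
# log_pattern: append a completed pattern to the detection file, then
# surface it as a skill candidate once it has been seen twice or more.
log_pattern() {
  local name="$1" file="$2" count
  printf '%s\t%s\n' "$(date +%F)" "$name" >> "$file"
  count=$(awk -F'\t' -v n="$name" '$2 == n { c++ } END { print c + 0 }' "$file")
  if [ "$count" -ge 2 ]; then
    echo "Workflow candidate ready to formalize: $name ($count occurrences)"
  fi
}
```

The first occurrence logs silently; the second produces the candidate line that gets surfaced at the next session start.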

Delegation gate. Before starting any task, the assistant runs a three-question test: Is the work bounded? Does a template or agent already exist for it? Will it require three or more tool calls? Yes to all three means the work gets dispatched to an agent, not handled inline. This prevents the main thread from doing work that belongs downstream, which is one of the more common ways a capable assistant wastes tokens and time.
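The gate reduces to three booleans. A trivial sketch of the decision:

```shell
# should_dispatch: the three-question delegation gate.
# Arguments are yes/no answers to: bounded? template or agent exists?
# three or more tool calls? All three yes means dispatch to an agent.
should_dispatch() {
  local bounded="$1" template_exists="$2" three_plus_calls="$3"
  if [ "$bounded" = yes ] && [ "$template_exists" = yes ] &&
     [ "$three_plus_calls" = yes ]; then
    echo "dispatch"
  else
    echo "inline"
  fi
}
```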


Todo Protocol

A persistent text file on disk with two trigger phrases. Say "todo:" followed by anything, and Claude appends it to the file with a date and keeps working. Say "anything on the todo list?" and Claude reads the file and reports back. That is the entire interface.

The useful part is the deferral behavior. When a todo arrives mid-task, Claude assesses whether it needs immediate action or can wait. If it can wait, the current task continues without interruption. The todo is captured, not acted on. This keeps context-intensive work from getting derailed by incoming items.

Implementation is two lines in your CLAUDE.md: one instruction for capture (todo: [anything] appends to the file with a date), one for recall (anything on the todo list? reads and reports). The file is plain text. One entry per line. Claude writes it; Claude reads it. No app, no database, no code running in the background.

Optional upgrade: render the todo file into an HTML page with radio buttons (Done, Do This, Park) and deploy it to Netlify. A fetch script pulls form submissions back to your local inbox on next session start. The round-trip is complete: triage from your phone, and the assistant picks up your choices at the next session. A text file, a renderer, a Netlify form, and a fetch script. No backend.
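The renderer itself can be a short script. A sketch that emits a bare HTML form from the todo file; the Netlify deployment and the fetch script are out of scope here, and the markup and form name are illustrative:

```shell
# render_todos: reads a plain-text todo file (one entry per line) and
# writes an HTML fragment with a three-way radio group per entry.
render_todos() {
  local file="$1" n=0 line
  echo '<form method="POST" data-netlify="true" name="todo-triage">'
  while IFS= read -r line; do
    [ -z "$line" ] && continue
    n=$(( n + 1 ))
    printf '<p>%s<br>\n' "$line"
    printf '<label><input type="radio" name="item%d" value="done"> Done</label>\n' "$n"
    printf '<label><input type="radio" name="item%d" value="do"> Do This</label>\n' "$n"
    printf '<label><input type="radio" name="item%d" value="park"> Park</label></p>\n' "$n"
  done < "$file"
  echo '<button type="submit">Send choices</button></form>'
}
```

Pipe the output into an HTML template, deploy, and the form submissions carry one Done/Do This/Park choice per todo line back through Netlify.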


Inline Commands

Not every interaction is a skill. Some inputs are recognized patterns that Claude handles directly without loading a skill file. Four worth knowing:

calendar -- Pulls Google Calendar for the next five days on demand. One word, no slash required. Calendar does not load at session start; it loads when you ask for it.

note 1: [message] -- Interrupt. Claude stops what it is doing, handles the note immediately, and returns to the task. Use for things that cannot wait.

note 2: [message] -- High priority. Claude addresses it at the next natural break without dropping the current task. Use for things that matter but are not on fire.

note 3: [message] -- Informational. Claude acknowledges silently and keeps working. No interruption. Use for things you want captured but do not need acted on now.

The three-tier system keeps interruptions from derailing context-intensive work. You get the acknowledgment you need at the priority level the item actually warrants.


Messaging Integration

There is no separate away mode to activate. Every session is Telegram-native. The Telegram channel plugin runs as an MCP server directly inside the active Claude Code session. Messages arrive as channel events with chat ID and message ID. Outbound replies go through the MCP reply tool. Nothing needs to be turned on or switched. The channel is live from the moment the session starts.

When the assistant finishes a task batch, it dispatches a Telegram message and can wait for a reply. When you reply from your phone, the message arrives as a channel event in the current session. The assistant reads it, processes whatever you said, and sends back results. This means you can direct work from your phone without sitting at the terminal.

Inbound reliability: channel event notifications silently fail on some platforms roughly half the time. The system uses a file-based fallback. Incoming messages are also written to a local inbox directory as JSON files. At session start, the assistant checks that inbox directly rather than relying on notifications. The reliable path is the file check; notifications are best-effort on top of that.
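The file check at session start is deliberately simple. A sketch, assuming messages land as individual JSON files in an inbox directory:

```shell
# check_inbox: list unprocessed message files, or report an empty inbox.
# Processed files get moved aside rather than deleted, matching the
# protection rule described earlier.
check_inbox() {
  local dir="$1" found=0 f
  for f in "$dir"/*.json; do
    [ -e "$f" ] || continue   # glob matched nothing
    found=1
    echo "unprocessed: $f"
  done
  [ "$found" -eq 1 ] || echo "inbox empty"
}
```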

A Canvas-Telegram integration extends the channel further. When an article deploys to the web canvas, a notification fires to a dedicated Telegram group. The same channel plugin monitors both private messages and group notifications alongside form submissions. The result: article deployments, review requests, and user feedback all flow through the same messaging channel the assistant already monitors.

If Telegram fails, the system falls back to SMS via a carrier's email-to-SMS gateway. Email operations run through a Google Workspace CLI with a triage wrapper. The triage wrapper handles classification and deduplication. Native CLI subcommands handle send, reply, forward, and Drive uploads. The wrapper operates on all mail rather than the filtered inbox view, which prevents categorized messages from falling out of triage.


Live Data Injection

Prefix any shell command with ! and the output lands directly in your prompt before Claude sees it. No tool call. No round-trip. The data is already there.

!date                    — injects current date and time
!cat documents/notes.md  — injects the file's contents
!git diff                — injects the current diff

This is useful whenever you want to combine natural language with live data. "Summarize this: !cat report.md" sends Claude both the instruction and the file in one step, without a separate read request.