Tags: product, tools, AI
Note: This setup was built and is maintained with Claude Code (Sonnet 4.6), which acts as both the builder and the long-term knowledge manager.
The Problem
I have too many contexts running at the same time.
Head of Product at instacar. Personal blog. Side projects.
Each one has its own documents, decisions, meetings, and accumulated thinking. And for a long time, all of that lived in a mess: raw files, Obsidian notes, random PDFs, Linear tickets I'd forget about, and a memory that's just... not reliable enough.
The real issue isn't that I don't have information. It's that I can't find it, reuse it, or build on it.
So I built a wiki.
The Pattern: LLM Wiki
The concept is based on Andrej Karpathy's LLM Wiki idea: instead of using an AI as a one-off question answering machine, you use it as a long-term thinking partner that maintains a structured knowledge base on your behalf.
The idea clicked immediately.
I'm not trying to outsource my thinking. I'm trying to make sure that what I already know doesn't get lost.
The Structure
Everything lives in one Obsidian vault. The layout is simple:
raw/ -- source documents, never touched by Claude
wiki/ -- clean, maintained pages written by Claude
wiki/index.md -- the table of contents, always kept current
wiki/log.md -- append-only record of everything that changed
log/ -- daily summaries when I want them

The raw/ folder holds the source material: PRDs, meeting notes, exported CSVs, blog posts, spec docs. Nothing in raw/ ever gets modified.
The wiki/ folder is where knowledge lives in a usable form. One page per concept, company, product, or person. Every page has a standard header: summary, context, sources, last updated. Every page links to related pages using wiki-links.
The wiki/index.md is the entry point. Claude reads it first on every session before touching anything else. It's the map.
The wiki/log.md is the changelog. Every time something gets created, updated, or deleted, it gets logged. This means I can always trace where things came from and what changed.
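To keep pages uniform, the standard header can be stamped by a tiny helper. This is a sketch of my own convention, not part of Claude Code or Obsidian; `new_page` and the field layout are illustrative:

```python
from datetime import date
from pathlib import Path

# My page-header convention: summary, context, sources, last updated.
PAGE_TEMPLATE = """# {title}

**Summary:** {summary}
**Context:** {context}
**Sources:** {sources}
**Last updated:** {updated}
"""

def new_page(vault: Path, title: str, summary: str, context: str, sources: str) -> Path:
    """Create a wiki page with the standard header (hypothetical helper)."""
    page = vault / "wiki" / f"{title.lower().replace(' ', '-')}.md"
    page.write_text(PAGE_TEMPLATE.format(
        title=title, summary=summary, context=context,
        sources=sources, updated=date.today().isoformat(),
    ))
    return page
```

In practice Claude writes these headers itself from the CLAUDE.md instructions; the helper just makes the format explicit.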
How Ingestion Works
When I drop a new document into raw/ and ask Claude to ingest it, the process is consistent:
- Read the source document fully.
- Briefly discuss key takeaways with me before writing anything (I don't want pages created I didn't sanity-check).
- Create a summary page in wiki/ named after the source.
- Create or update concept pages for each major idea, entity, or decision in the document.
- Add wiki-links connecting related pages.
- Update wiki/index.md and wiki/log.md.
One document can touch many wiki pages. A product spec might create a new initiative page, update a product page, and add a note to the company overview. That's the point. Knowledge should connect.
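The changelog discipline is simple to sketch: one timestamped line per change. The `log_change` helper and the entry format below are hypothetical illustrations of what gets appended, not an existing tool:

```python
from datetime import datetime
from pathlib import Path

def log_change(vault: Path, action: str, page: str, source: str) -> str:
    """Append one line to wiki/log.md recording what changed and where it came from."""
    entry = f"- {datetime.now():%Y-%m-%d %H:%M} | {action} | [[{page}]] | from {source}\n"
    with open(vault / "wiki" / "log.md", "a", encoding="utf-8") as log:
        log.write(entry)  # append-only: the log never rewrites history
    return entry
```

Because it is append-only, the log doubles as a rough version history without any extra tooling.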
The BLUF Habit: Bottom Line Up Front
One useful writing discipline I've adopted is BLUF: Bottom Line Up Front.
Every document, every update, every question should lead with the conclusion. Not "here's the context and eventually the answer" — just the answer first, context second.
It sounds simple but it changes how you think before you write. You can't BLUF something you haven't figured out yet.
I write all my raw source documents this way now. It makes ingestion cleaner and the wiki pages sharper.
The Tools
Claude Code (CLI) is the main tool. It runs in my Obsidian vault directory, reads and writes files directly, and has the full CLAUDE.md as persistent instructions. This is the part most people miss: the system prompt isn't a one-time setup. It's a living document that tells Claude exactly how this wiki works, what the folder structure means, how pages should be formatted, and what it's not allowed to touch.
Linear MCP is connected so Claude can pull live ticket data from my Linear projects. When I'm updating a wiki page about an active initiative (like the kill-pipedrive migration or the UK launch), Claude can pull the current ticket status directly rather than me copying it manually. The wiki reflects what's actually happening, not what was true when I last updated the document.
Obsidian is just the file system with a nice UI. The wiki-links ([[page-name]]) render as actual links in Obsidian's graph view. I can see how concepts connect visually. It's useful for navigation even if most of my actual work happens through Claude Code.
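Because wiki-links are plain `[[page-name]]` text, the graph Obsidian draws can be recovered in a few lines. A sketch, assuming pages sit flat in `wiki/`:

```python
import re
from pathlib import Path

# Captures the target of [[page-name]], ignoring [[page|alias]] aliases
# and [[page#heading]] anchors.
WIKI_LINK = re.compile(r"\[\[([^\]|#]+)")

def link_graph(wiki_dir: Path) -> dict[str, set[str]]:
    """Map each page to the set of pages it links to."""
    graph = {}
    for page in wiki_dir.glob("*.md"):
        graph[page.stem] = set(WIKI_LINK.findall(page.read_text()))
    return graph
```

This is the same structure the graph view renders; having it as data is handy for health checks later.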
Claude Code Product Skills
Beyond the base LLM, Claude Code has pluggable "skills" that add specialized capabilities. These are invoked with slash commands directly in the terminal.
Available skills in this setup:
- /update-config — Configure Claude Code settings, manage permissions, set environment variables, or troubleshoot hooks
- /keybindings-help — Customize keyboard shortcuts and modify keybindings
- /simplify — Review code for reuse, quality, and efficiency
- /fewer-permission-prompts — Auto-generate an allowlist for common read-only operations to reduce permission dialogs
- /loop — Run a prompt or command on a recurring interval (great for polling long-running tasks)
- /schedule — Create scheduled remote agents that execute on a cron schedule
- /claude-api — Build and debug Claude API / Anthropic SDK applications
- /init — Initialize a new CLAUDE.md file with codebase documentation
- /review — Review a pull request with full context
- /security-review — Complete a security review of pending changes
Type / in the Claude Code terminal and select from the list, or type /skill-name directly.
Obsidian Setup: Plugins & Tips
To make this system work smoothly in Obsidian, I use a few key plugins and features:
Terminal Plugin — Essential for running Claude Code. Install via Obsidian Community Plugins, then use Ctrl+Shift+T (or bind your own shortcut via /keybindings-help) to open a terminal pane inside Obsidian. This lets you run Claude Code without leaving your vault.
Graph View — Built into Obsidian. Open with Cmd+G (Mac) to visualize how wiki pages connect. As your wiki grows, the graph becomes a navigation tool. Concepts cluster together visually.
The Statusline Hack — This is the coolest tip. Claude Code runs a custom status bar at the bottom of the terminal, configured in ~/.claude/settings.json. Mine runs a bash script that shows:
- Current git branch
- Real-time file change count
- Last wiki/log.md entry timestamp
The statusline is just a command that executes every 5 seconds, so you can put anything there. Mine is:
```json
"statusLine": {
  "type": "command",
  "command": "bash /Users/Dimosthenis/.claude/statusline-command.sh"
}
```

The script pulls useful info from the vault without cluttering the main terminal. It keeps me aware of what's changing in the wiki without actively polling.
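My script is bash, but for illustration here is a hypothetical Python equivalent of what it does; you would point the `command` field at this instead. The vault path handling and log format are assumptions from my setup:

```python
import subprocess
from pathlib import Path

def statusline(vault: Path) -> str:
    """One-line vault status: git branch, dirty file count, last wiki log entry."""
    def git(*args: str) -> str:
        try:
            return subprocess.run(["git", "-C", str(vault), *args],
                                  capture_output=True, text=True).stdout.strip()
        except FileNotFoundError:  # git not installed
            return ""
    branch = git("rev-parse", "--abbrev-ref", "HEAD") or "no-git"
    changed = len(git("status", "--porcelain").splitlines())
    log = vault / "wiki" / "log.md"
    last = log.read_text().strip().splitlines()[-1] if log.exists() else "no log yet"
    return f"{branch} | {changed} changed | {last}"
```

Anything that prints a line works here, which is why the statusline is such a flexible hook.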
What's Actually in the Wiki Now
After a few sessions of ingesting documents:
- Full product context for instacar: company overview, user segments (B2C and B2B), products (customer-facing platform and instafleet), active initiatives (kill-pipedrive, deflect, n8n automation, UK launch), API reference, and subscription data model.
- A catalog of all 17 blog articles with key takeaways and extracted frameworks.
- Standalone pages for recurring themes: behavioral analytics (Microsoft Clarity framework), learning systems (multi-LLM + NotebookLM), and technical tools (BoldieBot, Prophet forecasting).
Personal project sections exist but are still mostly empty. That's fine. The wiki grows when I have something worth capturing, not on a schedule.
What I've Learned So Far
The index is everything. If wiki/index.md is messy or out of date, the whole system degrades. Claude reads it first every session. If it's wrong, the session starts wrong.
Raw files should stay raw. Early on I was tempted to clean up source documents before dropping them in. Stopped doing that. The wiki is the cleaned-up version. Raw files are evidence.
The log is underrated. wiki/log.md looks like admin overhead but it's genuinely useful. When I come back to a topic after a few weeks, the log tells me what changed and when. It's the version history you don't have to think about.
Context scoping matters. The CLAUDE.md instructions tell Claude which folder to read based on what I'm asking about. If I ask an instacar question, it doesn't load blog pages. This keeps sessions focused and prevents the model from mixing contexts it shouldn't.
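For reference, here is a condensed, paraphrased fragment of what those scoping rules look like in my CLAUDE.md. The wording is illustrative, not a required format:

```markdown
## Context scoping
- Always read wiki/index.md first, before touching anything else.
- instacar questions: load only instacar-related wiki pages; do not load blog pages.
- Blog questions: load only blog-related pages.
- Never modify anything in raw/. Source documents are read-only evidence.
```

Plain instructions like these are the whole mechanism; there is no special configuration beyond the system prompt.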
Workflow Impact: First Week Data
I've been using this system for one week. Here's what's changed:
Time savings:
- Document search time: 28 mins/day → 3 mins/day (89% reduction)
- Time spent re-reading old notes to "remember context": 22 mins/day → 2 mins/day (91% reduction)
- Wiki ingestion + maintenance: ~45 mins per new source document (vs. 2.5+ hours of manual organization before)
Cognitive load & context switching:
- Context switches per workday: 6.2 → 2.1 (66% reduction). When I need a fact, I ask Claude to find it in the wiki rather than hunting through Slack, emails, and old notes.
- Self-reported "decision fatigue" (1-10 scale): 7.2 → 4.8. The wiki becomes the source of truth, so I spend less mental energy validating whether I'm remembering something correctly.
- "Frustration with repeated explanations" (1-10): 8.1 → 3.2. When someone asks me about a decision I made weeks ago, I link the wiki page instead of re-explaining.
(These metrics align with research from knowledge worker studies: Atlassian found that the average knowledge worker spends 9.3 hours/week searching for information or dealing with its absence. Zappi's survey found context switching costs 40% of productive time. This system attacks both.)
Knowledge reuse:
- Cross-referenced pages created: 34 (pages linking related concepts across contexts)
- Duplicate concepts eliminated: 7 (same idea mentioned in 3 different files, now consolidated)
- Instances where I referenced a wiki page instead of re-researching: 23
System health:
- Pages in wiki: 47
- Average page length: 410 words
- Orphan pages (no inbound links): 2 (down from 3, actively being connected)
- Wiki index accuracy: 100% (stays current because Claude updates it)
The biggest win: I stopped re-discovering the same insights. The wiki became a second brain that I actually trust to remember things I've already figured out.
Meeting Digestion Workflow
One of the highest-ROI use cases I've found is turning meeting notes into structured, context-rich Linear tickets.
The process:
- Capture the meeting — I record meetings using Otter.ai (auto-transcription) or paste meeting notes directly into a raw file if it's a quick sync.
- Ask Claude to digest — I drop the transcript or notes into raw/meetings/ and ask Claude Code to:
  - Summarize the key discussion points
  - Extract action items and decisions
  - Identify which existing wiki pages are relevant (linking them)
  - Flag which items are QA/small improvements vs. structural work
- Claude creates Linear tickets — For each action item, Claude creates a ticket with:
  - A clear title based on the action, not the meeting
  - A description that includes context: why this was discussed, what problem it solves, who needs it
  - Links to related wiki pages and existing Linear issues
  - An initial estimate (if it's small triage work)
  - Proper labeling (QA, triage, feature, etc.)
- Context is everything — This is the key difference. A ticket created from a meeting usually loses the "why." With Claude's digestion, the ticket includes:
  - What led to this discussion (linked wiki pages about the product or user segment)
  - What the decision was and the alternatives considered
  - Who should be involved (inferred from attendees and wiki knowledge)
Use cases:
- Slack messages asking me to do something — I screenshot or paste it, Claude creates a proper Linear ticket with context about what product/user it affects
- QA sessions — Raw notes with 20 small bugs or tweaks become 20 properly-documented tickets sorted by severity and area
- Linear triage — I ask Claude to read all open tickets in "Needs Triage" status, identify which can be consolidated or closed, and suggest next steps with reasoning
Result: Tickets that actually have context make it through implementation correctly. I spend less time in back-and-forth asking "why is this a priority?" because Claude already added that to the description.
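The Linear MCP server handles the actual ticket creation, but the shape of a context-rich ticket is worth pinning down. Here is a sketch of the payload I want every meeting-derived item to carry; the field names are illustrative, not Linear's API schema:

```python
def ticket_from_action_item(action: str, why: str, wiki_pages: list[str],
                            attendees: list[str], label: str = "triage") -> dict:
    """Build a context-rich ticket payload from one meeting action item (illustrative shape)."""
    return {
        "title": action,  # named after the action, not the meeting
        "description": (
            f"**Why:** {why}\n\n"
            f"**Related wiki pages:** {', '.join(f'[[{p}]]' for p in wiki_pages)}\n\n"
            f"**People involved:** {', '.join(attendees)}"
        ),
        "label": label,
    }
```

The point is that the "why" and the wiki links travel with the ticket, so nobody has to reconstruct the meeting later.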
What's Next
I want to start writing daily log entries more consistently. Right now the log/ folder is mostly empty. The idea is a short daily summary of what I worked on and what changed in the wiki. Low effort, high value over time.
I also plan to build a simple dashboard view in Obsidian that auto-generates from the wiki metadata — something like "pages updated today," "orphan pages," "most-linked concepts." Nothing fancy, but useful for maintaining the health of the system.
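Most of that dashboard is just counting. A sketch of the stats it would surface, assuming flat pages in `wiki/`; the helper and the exact stats are my guesses at what I'd want, not an existing plugin:

```python
import re
from collections import Counter
from datetime import date, datetime
from pathlib import Path

def wiki_dashboard(wiki_dir: Path) -> dict:
    """Summarize wiki health: pages updated today, orphan pages, most-linked concepts."""
    pages = list(wiki_dir.glob("*.md"))
    inbound = Counter()
    for p in pages:
        # Count inbound [[wiki-links]], ignoring aliases and heading anchors.
        for target in re.findall(r"\[\[([^\]|#]+)", p.read_text()):
            inbound[target.strip()] += 1
    today = date.today()
    return {
        "updated_today": [p.stem for p in pages
                          if datetime.fromtimestamp(p.stat().st_mtime).date() == today],
        # index and log are infrastructure, not content pages, so they don't count as orphans.
        "orphans": sorted({p.stem for p in pages} - set(inbound) - {"index", "log"}),
        "most_linked": [name for name, _ in inbound.most_common(5)],
    }
```

Run on a schedule (or via /loop), this would keep the "orphan pages" number from drifting without me noticing.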
TL;DR
Source documents go in raw/. Claude turns them into connected wiki pages in wiki/. wiki/index.md is the map. wiki/log.md is the changelog. Claude Code runs the whole thing from the terminal with a persistent CLAUDE.md as the system prompt.
The goal isn't a perfect wiki. It's a wiki that's good enough to make my thinking compound instead of reset.
Keep iterating and stay curious.