01

What Is This Thing?

You type one word. An entire AI dev team wakes up. Here's what's really happening.

The Problem: AI Coding Tools Are Fragmented

Imagine you're building software with AI assistants. You've got Claude Code for orchestration, GPT for reasoning, Gemini for creativity. Each one is good at different things. But switching between them? Configuring workflows? Debugging when things break?

That's what Oh My OpenCode solves. It's a single plugin that turns a basic AI coding tool into a full AI development team.

😫

Without OmO

Manually switching between AI models. Copy-pasting context. Configuring each tool separately. Debugging agents yourself.

✨

With OmO

Type "ultrawork". The right model picks the right task. Agents run in parallel. Everything just works.

One Word: ultrawork

When you type ultrawork (or just ulw), here's the chain reaction that fires off under the hood:

1
Plugin Loads

OpenCode calls the Oh My OpenCode plugin. It reads your config, sets up tools, and wires everything together.

2
Sisyphus Wakes Up

The main orchestrator agent — Sisyphus — activates. He's your project manager who delegates to specialists.

3
Agents Get Assigned

Sisyphus analyzes your request, creates a todo list, and fires off background agents in parallel — each on the AI model best suited for the job.

4
Work Gets Done

Agents read files, write code, run diagnostics, and verify their own output. Hooks silently guard against mistakes. The loop continues until every todo is complete.

Under the Hood: A Plugin for OpenCode

Oh My OpenCode is a TypeScript plugin for OpenCode. When OpenCode starts, it loads the plugin, which injects agents, tools, and hooks into the system.

CODE

const OhMyOpenCodePlugin: Plugin = async (ctx) => {
  const pluginConfig = loadPluginConfig(ctx.directory, ctx)
  const managers = createManagers({ ctx, pluginConfig, ... })
  const toolsResult = await createTools({ ctx, pluginConfig, managers })
  const hooks = createHooks({ ctx, pluginConfig, ... })
  return { name: "oh-my-openagent", ...pluginInterface }
}
          
PLAIN ENGLISH

When OpenCode starts up, run this setup function...

First, read the user's configuration file to know their preferences

Create the "managers" โ€” background task runner, skill loader, etc.

Set up all the tools agents can use (LSP, edit, grep, etc.)

Wire up 80+ hooks that silently guard against mistakes

Hand everything back to OpenCode as a named plugin

💡
Key Insight: The Plugin Pattern

Instead of building a whole new app, OmO extends an existing one. This is like adding a turbocharger to an engine — the car is the same, but it performs completely differently. In software, this pattern is called a "plugin architecture" and it's how tools like VS Code extensions and browser add-ons work too.

The Tech Stack at a Glance

🔷
TypeScript + Bun

Bun runs the code. TypeScript catches bugs before they happen. Together they make a fast, reliable foundation.

🤖
Multi-Model AI

Claude Opus for orchestration. GPT-5.4 for deep work. Kimi K2.5 and GLM-5 as alternatives. Each agent gets the model that's best at its job.

🔌
OpenCode Plugin SDK

The SDK from OpenCode that lets plugins register agents, tools, hooks, and commands.

🛠️
LSP + AST-Grep

LSP gives IDE-level code intelligence. AST-Grep enables structure-aware code search and rewriting.

Check Your Understanding

A friend asks: "Should I uninstall Claude Code to use Oh My OpenCode?" What do you tell them?

When you type "ultrawork", what actually happens first?

02

Meet the Agents

A film crew where every member is an AI. The director delegates, the lead actor improvises, and the script doctor never touches the camera.

The Production Crew

Oh My OpenCode doesn't use one AI for everything. It runs a team of specialist agents, each cast for a specific role. Think of it like a film production crew.

S

Sisyphus — The Director

Claude Opus / Kimi / GLM. Orchestrator who reads the script, assigns scenes, and never picks up a camera.

H

Hephaestus — Lead Actor

GPT-5.4. Deep autonomous worker who figures things out on set, improvising through complex scenes.

O

Oracle — Script Doctor

Read-only strategic advisor. Called in when something fails — reviews the situation and recommends rewrites, but never touches the code.

P

Prometheus — Pre-Production

Strategic planner using interview mode. Maps out the entire project before anyone starts working.

E

Explore — Location Scout

Fast codebase grep. Scans the entire project in seconds to find relevant files, patterns, and structure before the real work begins.

L

Librarian — Research Asst.

External docs search. When the crew needs to reference a framework's API or a library's changelog, the Librarian fetches it.

Watch Them Work Together

Here is a realistic delegation sequence. Sisyphus receives a task, scouts the codebase, delegates the heavy lifting, and calls in an advisor when things go sideways.

How an Agent Is Born

Every agent is created by a factory function. Here is the real code that builds Sisyphus — the orchestrator that manages all other agents.

CODE

const MODE: AgentMode = "primary";

export function createSisyphusAgent(
  model, availableAgents, ...
): AgentConfig {
  const tools = categorizeTools(availableToolNames);
  const prompt = buildDynamicSisyphusPrompt(
    model, agents, tools, skills, ...
  );
  return {
    description: "Powerful AI orchestrator...",
    mode: MODE,
    model,
    prompt,
    ...
  };
}
          
PLAIN ENGLISH

This agent runs in "primary" mode — it's the main agent the user talks to directly

Define a factory function that creates a Sisyphus agent, given a model and a list of available teammates

First, sort all available tools into categories (edit, search, run, etc.) so the prompt can explain each one

Then dynamically generate the system prompt — it changes based on which agents, tools, and skills are actually available right now

Finally, return the agent config object: a description for UIs, the mode, the AI model to use, and the generated prompt

💡
CS Insight: Separation of Concerns

Each agent has exactly one job and knows nothing about the others' internals. Sisyphus never writes code. Hephaestus never delegates. Oracle never edits files. This is the single responsibility principle applied to AI — the same pattern that makes microservices, Unix pipes, and React components work. When each piece does one thing well, you can swap, upgrade, or debug any piece without breaking the rest.

Why Different Models for Different Agents?

Not every task needs the most expensive AI. The system matches each agent to the right model for its job — like casting the right actor for each role.

S
Sisyphus → Claude Opus / Kimi / GLM

Orchestration requires understanding complex, multi-step requests and deciding which specialist handles what. Needs the smartest model available.

H
Hephaestus → GPT-5.4

Deep autonomous coding requires sustained reasoning across many files. GPT-5.4 excels at long, self-directed implementation sessions.

O
Oracle → Read-only Advisor

Only activated on failure escalation. Reads code and diagnostics, then returns strategic advice. Never writes — so it can use any strong reasoning model.

E
Explore → Fast, Cheap Model

Just runs grep and glob. Doesn't need intelligence — needs speed. Uses the cheapest available model to keep costs down.

Check Your Understanding

A user asks Sisyphus to fix a CSS bug. Why doesn't Sisyphus fix it directly, since it uses a powerful model?

Oracle identifies a performance issue in a database query. What happens next?

You're designing a new agent that only needs to check whether a file exists. Which model strategy makes the most sense?

03

How Agents Think

Before any code gets written, every request passes through a triage system. Like a 911 dispatch center, the agent classifies your intent and routes to the right specialist.

The 911 Dispatch Center in Your Terminal

When you call 911, the operator doesn't fight fires or chase criminals. They ask one question: what kind of emergency is this? Then they route to fire, medical, or police. Sisyphus works the same way.

Every single message you send hits the Intent Gate before anything else happens. This is Phase 0 — the moment the agent decides how to think about your request.

👤
User
🛡
Intent Gate
💬
Verbalize
🤖
Specialist
🎯
CS Concept: Routing & Single Responsibility

This is the Single Responsibility Principle in action. The Intent Gate only classifies — it never executes. The dispatcher only routes — it never classifies. Each component does one job well, like a well-run emergency center where the operator, dispatcher, and responders each have distinct roles.

The Five Types of Intent

Sisyphus follows an explicit ruleset to classify every incoming message. These aren't vague guidelines — they're deterministic rules baked into the system prompt.

SISYPHUS RULES

### Step 1: Classify Request Type

Trivial (single file, known location)
  -> Direct tools only

Explicit (specific file/line, clear command)
  -> Execute directly

Exploratory ("How does X work?")
  -> Fire explore (1-3) + tools in parallel

Open-ended ("Improve", "Refactor")
  -> Assess codebase first

Ambiguous (unclear scope)
  -> Ask ONE clarifying question
          
PLAIN ENGLISH

First, figure out what kind of request this is...

Trivial: You know exactly where the file is and what to change. Just do it with tools, no delegation needed.

Explicit: The user gave a clear command like "fix line 42 in utils.ts". Execute it directly, no questions asked.

Exploratory: The user wants to understand something. Launch 1-3 explore agents in parallel to search the codebase simultaneously.

Open-ended: Vague requests like "make this better". First assess the full scope before doing anything.

Ambiguous: Can't tell what they mean? Ask exactly one clarifying question — never two, never zero.
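In OmO these five rules live in the system prompt, so the model itself performs the classification. Purely to illustrate the decision structure, here is a toy classifier; the regex cues are invented for this sketch and are not taken from the codebase.

```typescript
// Toy illustration only: the real classification happens inside the LLM,
// guided by the prompt rules above. This sketch just encodes the same
// five-way decision as code, using crude surface cues.
type Intent = "trivial" | "explicit" | "exploratory" | "open-ended" | "ambiguous";

function classifyIntent(msg: string): Intent {
  // Exploratory: the user wants to understand something
  if (/^(how|why|what|where)\b/i.test(msg)) return "exploratory";
  // Explicit: a concrete file or line is named
  if (/\b(line \d+|in [\w./-]+\.(ts|js|py|css))\b/i.test(msg)) return "explicit";
  // Open-ended: vague improvement verbs
  if (/\b(improve|refactor|clean up|make .* better)\b/i.test(msg)) return "open-ended";
  // Ambiguous: too little signal to route at all
  if (msg.trim().split(/\s+/).length <= 3) return "ambiguous";
  // Everything else: treat as a trivial direct-tools task
  return "trivial";
}
```

The point is the shape, not the regexes: exactly one route fires, checked in a fixed priority order.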

💡
Intent Verbalization

After classifying, Sisyphus announces its routing decision out loud before acting — "This is an exploratory request, I'll fire parallel explore agents." This makes the agent's thinking transparent and debuggable, like a 911 operator saying "I'm dispatching fire and medical" before pressing the button.

Delegation by Category, Not by Model

Sisyphus never says "send this to GPT-5.4" or "use Claude Opus." Instead, it delegates by category — describing what kind of work needs doing. The system maps categories to models behind the scenes.

This is like a hospital where doctors order "imaging" not "use the Siemens MRI machine." The abstraction means you can swap models without rewriting any agent logic.

🎨

visual-engineering

UI work, CSS, component styling, layout fixes. Routes to models strong at visual reasoning and frontend code generation.

🧠

deep

Complex multi-file refactors, architecture decisions, intricate debugging. Gets the most powerful reasoning model available.

quick

Simple edits, typo fixes, straightforward tasks. Routes to fast, cost-efficient models that don't need heavy reasoning.

🔮

ultrabrain

The hardest problems: novel algorithms, cross-system design, critical decisions. Reserved for the absolute strongest model in the roster.
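Behind the scenes, this abstraction can be as simple as one lookup table. The sketch below is illustrative: the category names come from the cards above, but the model IDs are invented placeholders, not OmO's actual defaults.

```typescript
// Category-based delegation sketch: agents name a category; one table maps
// categories to concrete models. The model IDs here are placeholders.
type Category = "visual-engineering" | "deep" | "quick" | "ultrabrain";

const CATEGORY_MODELS: Record<Category, string> = {
  "visual-engineering": "anthropic/claude-sonnet",
  deep: "openai/gpt-5.4",
  quick: "google/gemini-flash",
  ultrabrain: "openai/gpt-5.4-high",
};

// Swapping a model means editing one table entry; no agent logic changes.
function resolveModel(category: Category): string {
  return CATEGORY_MODELS[category];
}
```

This is the "order imaging, not the Siemens MRI" idea as code: the caller names intent, the table names hardware.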

🔄
Session Continuity Saves 70%+ Tokens

When a specialist is delegated work, it reuses the same session_id across interactions. Instead of re-reading the entire conversation each time, agents continue where they left off — like a doctor reviewing their own notes instead of re-interviewing the patient.
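A rough sketch of why reuse is cheaper, using word counts as a stand-in for tokens; everything below is invented for illustration:

```typescript
// With a stable session id, only the new message travels: the harness
// already holds the earlier turns. Without one, the whole transcript is
// re-sent on every call. Word count stands in for token count here.
const sessions = new Map<string, string[]>();

function sendInSession(sessionId: string, message: string): number {
  const history = sessions.get(sessionId) ?? [];
  history.push(message);
  sessions.set(sessionId, history);
  return message.split(/\s+/).length; // only the delta is paid for
}

function sendWithoutSession(history: string[], message: string): number {
  return [...history, message].join(" ").split(/\s+/).length; // full replay
}
```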

🔀
Parallel Execution

For exploratory requests, multiple explore agents and librarian agents fire simultaneously. Three agents searching different parts of the codebase at once is faster than one agent searching sequentially.

The Context-Completion Gate

Even after classification and delegation, there's one more checkpoint. Before any implementation begins, the Context-Completion Gate must be satisfied. Think of it as the surgical timeout — the pause before the first incision where the team confirms they have the right patient, right procedure, right site.

Condition 1: Scope Clarity

The agent must know exactly which files and functions will be touched. No guessing, no "I'll figure it out as I go."

Condition 2: Pattern Knowledge

The agent must understand the existing code patterns and conventions in the target area. No introducing alien styles.

Condition 3: Dependency Awareness

The agent must map what depends on the code it's changing. Breaking a function that 12 other files import is not acceptable.
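The three conditions collapse into a single yes/no predicate. This is an assumed shape for illustration, not code from the repository:

```typescript
// Context-Completion Gate as a predicate: implementation may begin only
// when all three conditions hold. Types and field names are invented.
interface TaskContext {
  targetFiles: string[];       // Condition 1: exact files/functions to touch
  patternsKnown: boolean;      // Condition 2: local conventions understood
  dependents: string[] | null; // Condition 3: who uses this code (null = unmapped)
}

function contextComplete(ctx: TaskContext): boolean {
  return (
    ctx.targetFiles.length > 0 && // scope is clear
    ctx.patternsKnown &&          // patterns are known
    ctx.dependents !== null       // dependencies are mapped
  );
}
```

If the predicate is false, the agent loops back to exploration rather than starting to write.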

🛡
Why This Matters

Without this gate, agents would start writing code the moment they see a request — like a surgeon cutting before checking the X-rays. The Context-Completion Gate is what separates a thoughtful agent from a reckless autocomplete. If any condition fails, the agent goes back to exploration.

Check Your Understanding

Scenario

A developer types: "How does the hook system work? I need to understand the lifecycle before I add a new hook."

How does the Intent Gate classify this request?

Scenario

You're reading the Sisyphus prompt and see a delegation call. Which of these would you expect to find?

How does Sisyphus specify which agent to delegate to?

A specialist agent has been delegated a refactoring task. What must happen before it starts writing code?

04

The Tools Arsenal

Agents don't just think — they act. Here are the surgical instruments they use to read, edit, and understand your code.

A Surgeon's Instruments, Not a Sledgehammer

Think of a surgeon's operating room. They don't use one big tool for everything — they have scalpels, clamps, retractors, each designed for a specific job. One wrong instrument and the patient is in trouble.

OmO's agents work the same way. Each tool is purpose-built for a specific kind of code manipulation. The right tool for the right job means fewer mistakes and faster work.

โœ๏ธ

Hashline Edit

Content-hash validated edits. Every line tagged with a fingerprint. Zero stale-line errors.

🔍

LSP Tools

IDE-level intelligence: rename across files, find references, jump to definitions, run diagnostics.

🌳

AST-Grep

Search code by structure, not text. Find "all functions that take 3 arguments" across 25 languages.

🖥️

Tmux / Interactive Bash

Full interactive terminal. Run REPLs, debuggers, and TUI apps — the agent stays in session.

Hashline Edit: The Breakthrough

Most AI coding failures aren't the model's fault — they're the harness's fault. The edit tool is usually the weak link. Here's the problem most tools have:

โŒ

Traditional Edit

AI reads a file, then later tries to edit it by reproducing the exact text it saw. If the file changed (or the AI misremembers whitespace), the edit fails or corrupts the file.

✅

Hashline Edit

Every line gets a unique content hash. The AI edits by referencing hash tags, not text. If the file changed, the hash won't match — edit rejected before corruption.

The result? Success rate jumped from 6.7% to 68.3% — just by changing the edit tool.

WHAT THE AGENT SEES

11#VK| function hello() {
22#XJ|   return "world";
33#MB| }
          
PLAIN ENGLISH

Line 11, hash "VK" — this is the function declaration. The hash is a fingerprint of this line's content.

Line 22, hash "XJ" — the return statement. If someone changes this line, "XJ" would become a different hash.

Line 33, hash "MB" — closing brace. The agent says "edit line #XJ" instead of reproducing text.

💡
Key Insight: Content Addressing

This idea — identifying data by its content hash instead of its position — is everywhere in computing. Git uses it to track file versions. IPFS uses it to store files across the internet. Bitcoin uses it to chain blocks together. It's one of the most powerful ideas in computer science: if the content changes, the address changes, so you always know if something has been tampered with.
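A toy version makes the mechanism concrete. The hash function below is an invented stand-in, not the real tool's scheme; the point is that an edit must name the fingerprint it expects, so a stale edit is rejected instead of corrupting the file.

```typescript
// Invented stand-in for hashline-style edits: fingerprint each line, and
// require an edit to present the fingerprint it saw when it read the file.
function lineHash(text: string): string {
  let h = 0;
  for (const ch of text) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h.toString(36).toUpperCase().slice(0, 2);
}

function applyEdit(
  lines: string[],
  lineNo: number,       // 0-based line to replace
  expectedHash: string, // fingerprint the agent saw earlier
  replacement: string
): boolean {
  if (lineHash(lines[lineNo]) !== expectedHash) return false; // stale: reject
  lines[lineNo] = replacement; // fingerprints match: safe to apply
  return true;
}
```

If the file changed between read and write, the fingerprint no longer matches and applyEdit refuses to touch it.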

LSP: IDE Superpowers for Agents

When you use VS Code and right-click "Go to Definition" or "Rename Symbol" — that's LSP at work. OmO gives agents those exact same powers.

lsp_goto_definition Jump to where a function or variable is defined — across any file in the project
lsp_find_references Find every place in the codebase that uses a specific function or variable
lsp_rename Rename a symbol everywhere it appears — safely, across all files at once
lsp_diagnostics Check for errors and warnings — like a spell-checker, but for code
lsp_symbols List all functions, classes, and variables in a file — a table of contents for code
lsp_prepare_rename Check if a rename is safe before doing it — like a dry run
CODE

export const builtinTools: Record<string, ToolDefinition> = {
  lsp_goto_definition,
  lsp_find_references,
  lsp_symbols,
  lsp_diagnostics,
  lsp_prepare_rename,
  lsp_rename,
}
          
PLAIN ENGLISH

Here's the list of tools that every agent gets by default...

Jump to where something is defined

Find everywhere something is used

List all the "ingredients" in a file

Run a health check on the code

Preview what a rename would change

Actually rename it across the whole project

All bundled together as "built-in tools"

Background Agents & Task Delegation

Beyond the surgical tools, agents also have the power to spawn other agents. The background agent system lets Sisyphus fire up 5+ specialists at once, each working on a different part of the problem.

1
task() — Delegate Work

Sisyphus fires a task with a category and skills. The system picks the right model automatically.

2
background_output() — Collect Results

When a background agent finishes, Sisyphus retrieves its output and verifies the work.

3
background_cancel() — Clean Up

If an agent is no longer needed (task superseded or already answered), cancel it to save tokens.
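The three steps can be sketched as a tiny in-memory lifecycle. The function names mirror the tools above; the bodies are invented for illustration (the real versions are asynchronous and model-backed, and a delegated task takes time to finish):

```typescript
// Delegation lifecycle sketch: fire a task, collect its output, cancel
// what's no longer needed. Everything completes instantly in this toy.
type TaskState = { status: "running" | "done" | "cancelled"; output?: string };

const tasks = new Map<string, TaskState>();
let nextId = 0;

function task(category: string, description: string): string {
  const id = `bg-${nextId++}`;
  // Pretend the delegated specialist finishes immediately.
  tasks.set(id, { status: "done", output: `[${category}] ${description}: ok` });
  return id;
}

function background_output(id: string): string | undefined {
  const t = tasks.get(id);
  return t?.status === "done" ? t.output : undefined; // nothing until done
}

function background_cancel(id: string): void {
  const t = tasks.get(id);
  if (t?.status === "running") t.status = "cancelled"; // save tokens
}
```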

🔧
Why This Matters for You

When you tell an AI agent to "add dark mode to my app", knowing about these tools helps you understand WHY the agent is running lsp_diagnostics after every edit (it's checking its own work), or why it fires 3 explore agents in parallel (it's searching different parts of the codebase simultaneously). You can steer it better when you know what instruments it has.

Check Your Understanding

An agent tries to edit line #XJ in a file, but the edit is rejected. What most likely happened?

You want to rename a variable called "user" to "currentUser" across your whole project. Which approach is safest?

Why does Sisyphus run lsp_diagnostics after every significant code change?

05

The Safety Net

80+ invisible hooks silently intercept, guard, and recover — so agents never go off the rails.

An Invisible Army of Bodyguards

Imagine a skyscraper with motion sensors, fire alarms, and security cameras on every floor. You never notice them — until something goes wrong. Then they kick in instantly, preventing disaster.

OmO has 80+ hooks that work exactly like that. They intercept agent actions at every stage: before a message is sent, after a tool runs, when context gets too large, when an error occurs. The agent never sees most of them — they just silently make everything work better.

👤
User
🛡️
Pre-Hooks
🤖
Agent
✅
Post-Hooks
🔄
Recovery

Three Layers of Protection

Hooks are organized into three categories, created by three factory functions in the code:

CODE

const core = createCoreHooks({
  ctx, pluginConfig, modelCacheState, isHookEnabled
})
const continuation = createContinuationHooks({
  ctx, pluginConfig, backgroundManager, sessionRecovery
})
const skill = createSkillHooks({
  ctx, pluginConfig, mergedSkills, availableSkills
})
          
PLAIN ENGLISH

Set up the core guardrails — comment checking, model fallbacks, context monitoring, error recovery...

Pass in the config so hooks know what the user wants

(end of core setup)

Set up the "keep going" hooks — todo enforcement, session recovery, background notifications...

These need access to the background agent manager to track parallel work

(end of continuation setup)

Set up skill-specific hooks — auto slash commands, category reminders...

These need to know which skills are loaded so they can inject the right context

(end of skill setup)

🛡️
Core Hooks (25+)

Comment checker (no AI slop), model fallback, context window monitor, edit error recovery, hashline enhancer, tool output truncation, thinking block validator

🔄
Continuation Hooks (15+)

Todo enforcer (yanks idle agents back to work), Ralph Loop (self-referential completion), session recovery, background notifications, preemptive compaction

⚡
Skill Hooks (10+)

Auto slash command detection, category-skill reminders, agent usage hints, skill-embedded MCP activation

The MVPs: Hooks That Save Your Project

!
Comment Checker

Scans every code change for AI-generated comment slop — vague comments like "// Handle the logic" or "// Process data". Forces agents to write comments a senior engineer would write, or skip them entirely.

โฐ
Todo Continuation Enforcer

If an agent goes idle with uncompleted todos, this hook injects a system reminder that essentially says "Hey, you're not done yet." It yanks the agent back to work. Your task gets finished, period.

🔀
Model Fallback

If the primary AI model fails (rate limit, outage, error), this hook automatically switches to a fallback model and retries. You configured the fallback chain; the hook handles the switching invisibly.
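The switching behavior can be sketched as a walk down a chain; the shape is what the card describes, while the types, names, and error handling below are invented for the example:

```typescript
// Fallback chain sketch: try each model in order; an error (rate limit,
// outage) moves on to the next. Only if every model fails does the error
// surface to the caller.
type ModelCall = (model: string, prompt: string) => string;

function withFallback(chain: string[], call: ModelCall, prompt: string): string {
  let lastError: unknown;
  for (const model of chain) {
    try {
      return call(model, prompt); // first model that answers wins
    } catch (err) {
      lastError = err; // this model failed: try the next one
    }
  }
  throw lastError; // the whole chain failed
}
```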

💾
Session Recovery

If a session crashes — context window overflow, API failure, random error — this hook automatically recovers it. Saves state, restarts from where it left off. No lost work.

💡
Key Insight: Defense in Depth

This strategy — having multiple independent safety layers, each catching different kinds of failures — is called "defense in depth." It's the same principle behind airplane safety: seatbelts, airbags, crumple zones, and ABS brakes each handle a different failure mode. No single layer catches everything, but together they make catastrophic failure extremely unlikely.

You're in Control

Every hook can be disabled via your config file. The plugin reads a disabled_hooks list and skips any hook you don't want:

CODE

const disabledHooks = new Set(
  pluginConfig.disabled_hooks ?? []
)
const isHookEnabled = (hookName) =>
  !disabledHooks.has(hookName)
          
PLAIN ENGLISH

Make a list of hooks the user wants to turn off...

Read the list from their config (if they didn't set one, use an empty list)

(end of list creation)

To check if a hook should run...

Just check: "is this hook's name NOT in the disabled list?"

This gives you fine-grained control. Don't like the comment checker? Disable just that one hook. Everything else keeps working.

Check Your Understanding

Your agent is using Claude Opus but Anthropic's API goes down mid-task. What happens?

An agent completes 3 out of 5 todo items and then seems to stop working. Which hook saves you?

06

The Big Picture

Zoom out. See how every piece connects — and why this architecture lets you build faster than ever.

The Full Map

Think of the entire system as a city. Each neighborhood has a purpose, roads connect them, and traffic lights (hooks) keep everything flowing safely. Let's see the whole city at once.

Entry Point — Plugin Initialization

🚀
index.ts
⚙️
plugin-config.ts
🏗️
create-managers.ts

Agents — The AI Team

👑
Sisyphus
🔨
Hephaestus
🔮
Oracle
📋
Prometheus

Tools — What Agents Can Do

✍️
Hashline Edit
🔍
LSP Suite
🌳
AST-Grep
🖥️
Tmux

Hooks — The Invisible Guardrails

🛡️
Core Hooks
🔄
Continuation
⚡
Skill Hooks

How the Code Is Organized

The monorepo has a clear hierarchy. Every directory has a single responsibility.

src/ All source code lives here
agents/ Agent definitions — Sisyphus, Hephaestus, Oracle, Prometheus, Explore, Librarian, Momus, Metis
tools/ Everything agents can DO — hashline-edit, LSP, AST-grep, interactive-bash, skill tools, session manager
hooks/ 80+ event interceptors organized by concern — each hook is its own directory with focused logic
features/ Major subsystems — background agents, skills loader, Claude Code compatibility, MCP integration, task management
shared/ 170+ utility files — model resolution, config parsing, session management, error classification
plugin/ The glue layer — tool registry, hook factories, agent builders, skill context
config/ Configuration schema (Zod validated), type definitions, defaults
cli/ Command-line tools — the "doctor" diagnostic command, config manager
index.ts The entry point — where everything comes together
packages/ Platform-specific binary builds (darwin-arm64, linux-x64, windows-x64, etc.)
docs/ User-facing documentation — installation guide, orchestration guide, features reference

From Boot to Shutdown: The Full Lifecycle

1
OpenCode Calls the Plugin

OpenCode finds "oh-my-openagent" in its plugin list and calls the exported async function with a context object containing the project directory and client.

2
Config Loading

The plugin reads your JSONC config file, validates it with Zod schemas, and gracefully handles invalid sections (skipping them while loading the rest).

3
Manager Creation

Background agent manager, skill MCP manager, and tmux session manager are initialized. These are the runtime engines that power parallel execution.

4
Tool & Skill Registration

All built-in tools (LSP, hashline, grep, glob) plus skill-provided tools are assembled. Skills can bring their own MCP servers that spin up on-demand.

5
Hook Wiring

80+ hooks are created across three layers (core, continuation, skill). Each one registers for specific events and runs its logic when triggered.

6
Plugin Interface Returned

The assembled agents, tools, hooks, and commands are packaged into a plugin interface and handed back to OpenCode. The system is live.
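Step 2's graceful handling can be sketched as per-section validation: check each top-level config section independently and skip only the broken ones. The validators below are invented stand-ins for the real Zod schemas:

```typescript
// Per-section "graceful" config loading sketch. A bad section is skipped
// (and reported) instead of crashing the whole plugin.
type Validator = (value: unknown) => boolean;

const sectionValidators: Record<string, Validator> = {
  disabled_hooks: (v) => Array.isArray(v) && v.every((x) => typeof x === "string"),
  agents: (v) => typeof v === "object" && v !== null && !Array.isArray(v),
};

function loadConfig(raw: Record<string, unknown>) {
  const config: Record<string, unknown> = {};
  const skipped: string[] = [];
  for (const [name, isValid] of Object.entries(sectionValidators)) {
    if (!(name in raw)) continue;            // absent: defaults apply
    if (isValid(raw[name])) config[name] = raw[name];
    else skipped.push(name);                 // invalid: skip, keep the rest
  }
  return { config, skipped };
}
```

A typo in one section costs you that section's overrides, not the whole plugin.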

The Design Philosophy

Now that you've seen every layer, here are the principles that hold everything together:

🎯

Right Model, Right Job

Instead of using one AI for everything, each agent gets the model that's strongest at its specific task. Claude for orchestration, GPT for deep work, models matched to strengths.

⚡

Parallel by Default

Explore agents, background tasks, file reads — everything that CAN run in parallel DOES run in parallel. Sequential execution is the exception, not the rule.

🛡️

Defense in Depth

80+ hooks at every checkpoint. Model fallbacks. Session recovery. Hash-validated edits. Multiple independent safety layers catch different failure modes.

🔌

Everything Is Configurable

Disable any hook, override any agent's model, add custom skills, change delegation categories. Opinionated defaults, but nothing is locked down.

💡
Key Insight: Composition Over Inheritance

Notice how OmO doesn't build one giant monolithic system. Instead, it composes small, focused pieces: agents are separate from tools, tools are separate from hooks, hooks are separate from skills. Each piece can be added, removed, or replaced independently. This "composition" approach is one of the most important architectural patterns in modern software — it's why LEGO blocks are more versatile than a pre-molded toy.

Final Check: Can You Navigate This Codebase?

You want to modify how the "todo enforcer" works. Where in the codebase would you look?

A user has a typo in their agent override config. What happens when the plugin loads?

Why can you disable a single hook without breaking the rest of the system?

🎓
You Made It!

You now understand how Oh My OpenCode works under the hood — from the plugin entry point, through the agent orchestration system, to the tools and hooks that make it all reliable. Next time you type "ultrawork", you'll know exactly what's happening behind the scenes.