Commit `10d660cbcb` (2026-04-12 01:06:31 +07:00): 1066 changed files with 228596 additions and 0 deletions.
---
name: ck:fix
description: "ALWAYS activate this skill before fixing ANY bug, error, test failure, CI/CD issue, type error, lint, log error, UI issue, code problem."
argument-hint: "[issue] --auto|--review|--quick|--parallel"
metadata:
author: claudekit
version: "2.0.0"
---
# Fixing
Unified skill for fixing issues of any complexity with intelligent routing.
## Arguments
- `--auto` - Activate autonomous mode (**default**)
- `--review` - Activate human-in-the-loop review mode
- `--quick` - Activate quick mode
- `--parallel` - Activate parallel mode: route to parallel `fullstack-developer` agents per issue
<HARD-GATE>
Do NOT propose or implement fixes before completing Steps 1-2 (Scout + Diagnose).
Symptom fixes are failure. Find the cause first through structured analysis, NEVER guessing.
If 3+ fix attempts fail, STOP and question the architecture — discuss with user before attempting more.
User override: `--quick` mode allows fast scout→diagnose→fix cycle for trivial issues (lint, type errors).
</HARD-GATE>
## Anti-Rationalization
| Thought | Reality |
|---------|---------|
| "I can see the problem, let me fix it" | Seeing symptoms ≠ understanding root cause. Scout first. |
| "Quick fix for now, investigate later" | "Later" never comes. Fix properly now. |
| "Just try changing X" | Random fixes waste time and create new bugs. Diagnose first. |
| "It's probably X" | "Probably" = guessing. Use structured diagnosis. Verify first. |
| "One more fix attempt" (after 2+) | 3+ failures = wrong approach. Question architecture. |
| "Emergency, no time for process" | Systematic diagnosis is FASTER than guess-and-check. |
| "I already know the codebase" | Knowledge decays. Scout to verify assumptions before acting. |
| "The fix is done, tests pass" | Without prevention, same bug class will recur. Add guards. |
## Process Flow (Authoritative)
```mermaid
flowchart TD
A[Issue Input] --> B[Step 0: Mode Selection]
B --> C[Step 1: Scout - Understand Context]
C --> D[Step 2: Diagnose - Structured Root Cause Analysis]
D --> E[Step 3: Complexity Assessment + Task Orchestration]
E -->|Simple| F[Quick Workflow]
E -->|Moderate| G[Standard Workflow]
E -->|Complex| H[Deep Workflow]
E -->|Parallel| I[Multi-Agent Fix]
F --> J[Step 4: Fix Implementation]
G --> J
H --> J
I --> J
J --> K[Step 5: Verify + Prevent]
K -->|Pass + Prevention in place| L[Step 6: Finalize]
K -->|Fail, <3 attempts| D
K -->|Fail, 3+ attempts| M[Question Architecture]
M --> N[Discuss with User]
L --> O[Report + Docs + Journal]
```
**This diagram is the authoritative workflow.** If prose conflicts with this flow, follow the diagram.
## Workflow
### Step 0: Mode Selection
**First action:** If there is no "auto" keyword in the request, use `AskUserQuestion` to determine workflow mode:
| Option | Recommend When | Behavior |
|--------|----------------|----------|
| **Autonomous** (default) | Simple/moderate issues | Auto-approve if score >= 9.5 & 0 critical |
| **Human-in-the-loop Review** | Critical/production code | Pause for approval at each step |
| **Quick** | Type errors, lint, trivial bugs | Fast scout → diagnose → fix → review cycle |
See `references/mode-selection.md` for AskUserQuestion format.
### Step 1: Scout (MANDATORY — never skip)
**Purpose:** Understand the affected codebase BEFORE forming any hypotheses.
**Mandatory skill chain:**
1. Activate `ck:scout` skill OR launch 2-3 parallel `Explore` subagents
2. Discover: affected files, dependencies, related tests, recent changes (`git log`)
3. Read `./docs` for project context if unfamiliar
**Quick mode:** Minimal scout — locate affected file(s) and their direct dependencies only.
**Standard/Deep mode:** Full scout — map module boundaries, test coverage, call chains.
**Output:** `✓ Step 1: Scouted - [N] files mapped, [M] dependencies, [K] tests found`
### Step 2: Diagnose (MANDATORY — never skip)
**Purpose:** Structured root cause analysis. NO guessing. Evidence-based only.
**Mandatory skill chain:**
1. **Capture pre-fix state:** Record exact error messages, failing test output, stack traces, log snippets. This becomes the baseline for Step 5 verification.
2. Activate `ck:debug` skill (systematic-debugging + root-cause-tracing techniques).
3. Activate `ck:sequential-thinking` skill — form hypotheses through structured reasoning, NOT guessing.
4. Spawn parallel `Explore` subagents to test each hypothesis against codebase evidence.
5. If 2+ hypotheses fail → auto-activate `ck:problem-solving` skill for alternative approaches.
6. Create diagnosis report: confirmed root cause, evidence chain, affected scope.
See `references/diagnosis-protocol.md` for full methodology.
**Output:** `✓ Step 2: Diagnosed - Root cause: [summary], Evidence: [brief], Scope: [N files]`
### Step 3: Complexity Assessment & Task Orchestration
Classify before routing. See `references/complexity-assessment.md`.
| Level | Indicators | Workflow |
|-------|------------|----------|
| **Simple** | Single file, clear error, type/lint | `references/workflow-quick.md` |
| **Moderate** | Multi-file, root cause unclear | `references/workflow-standard.md` |
| **Complex** | System-wide, architecture impact | `references/workflow-deep.md` |
| **Parallel** | 2+ independent issues OR `--parallel` flag | Parallel `fullstack-developer` agents |
**Task Orchestration (Moderate+ only):** After classifying, create native Claude Tasks for all phases upfront with dependencies. See `references/task-orchestration.md`.
- Skip for Quick workflow (< 3 steps, overhead exceeds benefit)
- Use `TaskCreate` with `addBlockedBy` for dependency chains
- Update via `TaskUpdate` as each phase completes
- For Parallel: create separate task trees per independent issue
- **Fallback:** Task tools (`TaskCreate`/`TaskUpdate`/`TaskGet`/`TaskList`) are CLI-only and unavailable in the VSCode extension. If they error, use `TodoWrite` for progress tracking. The fix workflow remains fully functional without them.
### Step 4: Fix Implementation
- Implement fix per selected workflow, updating Tasks as phases complete.
- Follow diagnosis findings: fix the ROOT CAUSE, not symptoms.
- Minimal changes only. Follow existing patterns.
### Step 5: Verify + Prevent (MANDATORY — never skip)
**Purpose:** Prove the fix works AND prevent the same bug class from recurring.
**Mandatory skill chain:**
1. **Verify (iron-law):** Run the EXACT commands from pre-fix state capture. Compare output. NO claims without fresh evidence.
2. **Regression test:** Add or update test(s) that specifically cover the fixed issue. The test MUST fail without the fix and pass with it.
3. **Prevention gate:** Apply defense-in-depth validation where applicable. See `references/prevention-gate.md`.
4. **Parallel verification:** Launch `Bash` agents for typecheck + lint + build + test.
**If verification fails:** Loop back to Step 2 (re-diagnose). After 3 failures, question the architecture and discuss with the user.
See `references/prevention-gate.md` for prevention requirements.
**Output:** `✓ Step 5: Verified + Prevented - [before/after comparison], [N] tests added, [M] guards added`
### Step 6: Finalize (MANDATORY — never skip)
1. Report summary: confidence score, root cause, changes, files, prevention measures
2. Launch `docs-manager` subagent to update `./docs` if changes warrant it (NON-OPTIONAL)
3. Use `TaskUpdate` to mark ALL Claude Tasks `completed` (skip if Task tools unavailable)
4. Ask user if they want to commit via `git-manager` subagent
5. Run `/ck:journal` to write a concise technical journal entry upon completion
---
## IMPORTANT: Skill/Subagent Activation Matrix
See `references/skill-activation-matrix.md` for complete matrix.
**Always activate (ALL workflows):**
- `ck:scout` (Step 1): understand before diagnosing
- `ck:debug` (Step 2): systematic root cause investigation
- `ck:sequential-thinking` (Step 2): structured hypothesis formation
**Conditional:**
- `ck:problem-solving`: auto-triggers when 2+ hypotheses fail in Step 2
- `ck:brainstorm`: multiple valid approaches, architecture decision (Deep only)
- `ck:context-engineering`: fixing AI/LLM/agent code
- `ck:project-management`: moderate+ workflows, for task hydration/sync-back
**Subagents:** `debugger`, `researcher`, `planner`, `code-reviewer`, `tester`, `Bash`
**Parallel:** Multiple `Explore` agents for scouting, `Bash` agents for verification
## Output Format
Unified step markers:
```
✓ Step 0: [Mode] selected
✓ Step 1: Scouted - [N] files, [M] deps
✓ Step 2: Diagnosed - Root cause: [summary]
✓ Step 3: [Complexity] detected - [workflow] selected
✓ Step 4: Fixed - [N] files changed
✓ Step 5: Verified + Prevented - [tests added], [guards added]
✓ Step 6: Complete - [action taken]
```
## References
Load as needed:
- `references/mode-selection.md` - AskUserQuestion format for mode
- `references/diagnosis-protocol.md` - Structured diagnosis methodology (NEW)
- `references/prevention-gate.md` - Prevention requirements after fix (NEW)
- `references/complexity-assessment.md` - Classification criteria
- `references/task-orchestration.md` - Native Claude Task patterns for moderate+ workflows
- `references/workflow-quick.md` - Quick: scout → diagnose → fix → verify+prevent → review
- `references/workflow-standard.md` - Standard: full pipeline with Tasks
- `references/workflow-deep.md` - Deep: research + brainstorm + plan with Tasks
- `references/review-cycle.md` - Review logic (autonomous vs HITL)
- `references/skill-activation-matrix.md` - When to activate each skill
- `references/parallel-exploration.md` - Parallel Explore/Bash/Task coordination patterns
**Specialized Workflows:**
- `references/workflow-ci.md` - GitHub Actions/CI failures
- `references/workflow-logs.md` - Application log analysis
- `references/workflow-test.md` - Test suite failures
- `references/workflow-types.md` - TypeScript type errors
- `references/workflow-ui.md` - Visual/UI issues (requires design skills)

# Complexity Assessment
Classify issue complexity before routing to workflow. Assessment happens AFTER Step 1 (Scout) and Step 2 (Diagnose).
## Classification Criteria
### Simple (→ workflow-quick.md) — No Tasks
**Indicators:**
- Single file affected
- Clear error message (type error, syntax, lint)
- Keywords: `type`, `typescript`, `tsc`, `lint`, `eslint`, `syntax`
- Obvious fix location
- Root cause confirmed by diagnosis (not assumed)
**Task usage:** Skip. < 3 steps, overhead exceeds benefit.
**Examples:**
- "Fix type error in auth.ts"
- "ESLint errors after upgrade"
- "Syntax error in config file"
### Moderate (→ workflow-standard.md) — Use Tasks (6 phases)
**Indicators:**
- 2-5 files affected
- Root cause identified but fix spans multiple files
- Needs investigation to confirm diagnosis
- Keywords: `bug`, `broken`, `not working`, `fails sometimes`
- Test failures with root cause traced
**Task usage:** Create 6 phase tasks with dependencies. See `references/task-orchestration.md`.
**Examples:**
- "Login sometimes fails"
- "API returns wrong data"
- "Component not rendering correctly"
### Complex (→ workflow-deep.md) — Use Tasks with Dependency Chains (9 phases)
**Indicators:**
- System-wide impact (5+ files)
- Architecture decision needed
- Research required for solution
- Keywords: `architecture`, `refactor`, `system-wide`, `design issue`
- Performance/security vulnerabilities
- Multiple interacting components
- Root cause spans multiple layers/modules
**Task usage:** Create 9 phase tasks. Steps 1+2+3 run in parallel (scout + diagnose + research). Full dependency chains. See `references/task-orchestration.md`.
**Examples:**
- "Memory leak in production"
- "Database deadlocks under load"
- "Security vulnerability in auth flow"
### Parallel (→ multiple fullstack-developer agents) — Use Task Trees
**Triggers:**
- `--parallel` flag explicitly passed (activate parallel routing regardless of auto-classification)
**Indicators:**
- 2+ independent issues mentioned
- Issues in different areas (frontend + backend, auth + payments)
- No dependencies between issues
- Keywords: list of issues, "and", "also", multiple error types
**Task usage:** Create separate task trees per independent issue (each with scout+diagnose+fix+verify). Spawn `fullstack-developer` agent per tree. See `references/task-orchestration.md`.
**Examples:**
- "Fix type errors AND update UI styling"
- "Auth bug + payment integration issue"
- "3 different test failures in unrelated modules"

# Diagnosis Protocol
Structured root cause analysis methodology. Replaces ad-hoc guessing with evidence-based investigation.
## Core Principle
**NEVER guess root causes.** Form hypotheses through structured reasoning and test them against evidence.
## Pre-Diagnosis: Capture State (MANDATORY)
Before any investigation, capture the current broken state as baseline:
```
1. Record exact error messages (copy-paste, not paraphrase)
2. Record failing test output (full command + output)
3. Record relevant stack traces
4. Record relevant log snippets with timestamps
5. Record git status / recent changes: git log --oneline -10
```
This baseline is required for Step 5 (Verify) — you MUST compare before/after.
## Diagnosis Chain (Follow in Order)
### Phase 1: Observe — What is actually happening?
Read, don't assume. Use `ck:debug` (systematic-debugging Phase 1).
- What is the exact error message?
- Where does it occur? (file, line, function)
- When did it start? (check `git log`, `git bisect`)
- Can it be reproduced consistently?
- What is the expected vs actual behavior?
### Phase 2: Hypothesize — Why might this happen?
Activate `ck:sequential-thinking` skill. Form hypotheses through structured reasoning.
**Structured hypothesis formation:**
```
For each hypothesis:
1. State the hypothesis clearly
2. What evidence would CONFIRM it?
3. What evidence would REFUTE it?
4. How to test it quickly?
```
**Common hypothesis categories:**
- Recent code change introduced regression (`git log`, `git diff`)
- Data/state mismatch (wrong input, stale cache, race condition)
- Environment difference (deps version, config, platform)
- Missing validation (null check, type guard, boundary)
- Incorrect assumption (API contract, data shape, ordering)
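
The four-question template above can be captured as a small record so every hypothesis is tracked the same way. A minimal sketch; the type and field names are illustrative, not part of the skill:

```typescript
// Illustrative hypothesis record; names are invented for this sketch.
type HypothesisStatus = "untested" | "confirmed" | "refuted" | "inconclusive";

interface Hypothesis {
  statement: string;          // 1. the hypothesis, stated clearly
  confirmingEvidence: string; // 2. what evidence would CONFIRM it
  refutingEvidence: string;   // 3. what evidence would REFUTE it
  quickTest: string;          // 4. how to test it quickly
  status: HypothesisStatus;
}

const h1: Hypothesis = {
  statement: "A recent commit changed the cache key format",
  confirmingEvidence: "git diff shows key-builder change; stale entries in cache",
  refutingEvidence: "bug reproduces on a commit before the change",
  quickTest: "stash the change and re-run the failing test",
  status: "untested",
};
```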
### Phase 3: Test — Verify hypotheses against evidence
Spawn parallel `Explore` subagents to test each hypothesis simultaneously:
```
// Launch in SINGLE message — max 3 parallel agents
Task("Explore", "Test hypothesis A: [specific search/check]", "Verify H-A")
Task("Explore", "Test hypothesis B: [specific search/check]", "Verify H-B")
Task("Explore", "Test hypothesis C: [specific search/check]", "Verify H-C")
```
**For each hypothesis result:**
- CONFIRMED: Evidence supports this as root cause → proceed to root cause tracing
- REFUTED: Evidence contradicts → discard, note why
- INCONCLUSIVE: Need more data → refine hypothesis or gather more evidence
### Phase 4: Trace — Follow the root cause chain
Use `ck:debug` (root-cause-tracing technique). Trace backward:
```
Symptom (where error appears)
↑ Immediate cause (what triggered the error)
↑ Contributing factor (what set up the bad state)
↑ ROOT CAUSE (the original trigger that must be fixed)
```
**Rule:** NEVER fix where the error appears. Trace back to the source.
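
A hypothetical TypeScript illustration of this rule (every name here is invented for the example): the crash surfaces in `renderGreeting`, but the fix belongs at the boundary where the bad data first enters.

```typescript
interface User { id: string; name: string }

// ROOT CAUSE fix: validate where untrusted data enters the system.
function parseUser(raw: unknown): User {
  if (
    typeof raw !== "object" || raw === null ||
    typeof (raw as { id?: unknown }).id !== "string" ||
    typeof (raw as { name?: unknown }).name !== "string"
  ) {
    throw new Error(`Invalid user payload: ${JSON.stringify(raw)}`);
  }
  return raw as User;
}

// Symptom site: needs no defensive patching, because invalid
// data can no longer reach it.
function renderGreeting(user: User): string {
  return `Hello, ${user.name}`;
}
```

Patching `renderGreeting` with a null check would silence the symptom while every other consumer of the same payload stays broken.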
### Phase 5: Escalate — When hypotheses fail
If 2+ hypotheses are REFUTED:
1. Auto-activate `ck:problem-solving` skill
2. Apply Inversion Exercise: "What would CAUSE this bug intentionally?"
3. Apply Scale Game: "Does this fail with 1 item? 100? 10000?"
4. Consider environmental factors (timing, concurrency, platform)
If 3+ fix attempts fail after diagnosis:
1. STOP immediately
2. Question the architecture — is the design fundamentally flawed?
3. Discuss with user before attempting more
## Diagnosis Report Format
```markdown
## Diagnosis Report
**Issue:** [one-line description]
**Pre-fix state captured:** Yes/No
### Root Cause
[Clear explanation of the root cause, traced back to origin]
### Evidence Chain
1. [Observation] → led to hypothesis [X]
2. [Test result] → confirmed/refuted [X]
3. [Trace] → root cause at [file:line]
### Affected Scope
- Files: [list]
- Functions: [list]
- Dependencies: [list]
### Recommended Fix
[What to change and why — addressing root cause, not symptoms]
### Prevention Needed
[What guards/tests to add to prevent recurrence]
```
## Quick Mode Diagnosis
For trivial issues (type errors, lint, syntax), abbreviated diagnosis:
1. Read error message
2. Locate affected file(s) via scout results
3. Identify root cause (usually obvious for simple issues)
4. Skip parallel hypothesis testing
5. Still capture pre-fix state for verification

# Mode Selection
Use `AskUserQuestion` at start of fixing workflow.
## AskUserQuestion Format
```json
{
"questions": [{
"question": "How should I handle the fix workflow?",
"header": "Fix Mode",
"options": [
{
"label": "Autonomous (Recommended)",
"description": "Auto-approve if quality high, only ask when stuck"
},
{
"label": "Human-in-the-loop",
"description": "Pause for approval at each major step"
},
{
"label": "Quick fix",
"description": "Fast debug-fix-review cycle for simple issues"
}
],
"multiSelect": false
}]
}
```
## Mode Recommendations
| Issue Type | Recommended Mode |
|------------|------------------|
| Type errors, lint errors | Quick |
| Single file bugs | Quick or Autonomous |
| Multi-file, unclear root cause | Autonomous |
| Production/critical code | Human-in-the-loop |
| System-wide/architecture | Human-in-the-loop |
| Security vulnerabilities | Human-in-the-loop |
## Skip Mode Selection When
- Issue is clearly trivial (type error keyword detected) → default Quick
- User explicitly specified mode in prompt
- Previous context already established mode

# Parallel Exploration
Patterns for launching multiple subagents in parallel to scout codebase, verify implementation, and coordinate via native Tasks.
## Parallel Exploration (Scouting)
Launch multiple `Explore` subagents simultaneously when needing to find:
- Related files across different areas
- Similar implementations/patterns
- Dependencies and usage
**Pattern:**
```
Task(subagent_type="Explore", prompt="Find [X] in [area1]", description="Scout area1")
Task(subagent_type="Explore", prompt="Find [Y] in [area2]", description="Scout area2")
Task(subagent_type="Explore", prompt="Find [Z] in [area3]", description="Scout area3")
```
**Example - Multi-area scouting:**
```
// Launch in SINGLE message with multiple Task calls:
Task("Explore", "Find auth-related files in src/", "Scout auth")
Task("Explore", "Find API routes handling users", "Scout API")
Task("Explore", "Find test files for auth module", "Scout tests")
```
## Parallel Verification (Bash)
Launch multiple `Bash` subagents to verify implementation from different angles.
**Pattern:**
```
Task(subagent_type="Bash", prompt="Run [command1]", description="Verify X")
Task(subagent_type="Bash", prompt="Run [command2]", description="Verify Y")
```
**Example - Multi-verification:**
```
// Launch in SINGLE message:
Task("Bash", "Run typecheck: bun run typecheck", "Verify types")
Task("Bash", "Run lint: bun run lint", "Verify lint")
Task("Bash", "Run build: bun run build", "Verify build")
```
## Task-Coordinated Parallel (Moderate+)
For multi-phase fixes, use native Tasks to coordinate parallel agents.
See `references/task-orchestration.md` for full patterns.
**Pattern - Parallel issue trees:**
```
// Create separate task trees per independent issue
T_A1 = TaskCreate(subject="[Issue A] Debug", activeForm="Debugging A")
T_A2 = TaskCreate(subject="[Issue A] Fix", activeForm="Fixing A", addBlockedBy=[T_A1])
T_B1 = TaskCreate(subject="[Issue B] Debug", activeForm="Debugging B")
T_B2 = TaskCreate(subject="[Issue B] Fix", activeForm="Fixing B", addBlockedBy=[T_B1])
T_final = TaskCreate(subject="Integration verify", addBlockedBy=[T_A2, T_B2])
// Spawn agents per issue tree
Task("fullstack-developer", "Fix Issue A. Claim tasks via TaskUpdate.", "Fix A")
Task("fullstack-developer", "Fix Issue B. Claim tasks via TaskUpdate.", "Fix B")
```
Agents claim work via `TaskUpdate(status="in_progress")` and complete via `TaskUpdate(status="completed")`. Blocked tasks auto-unblock when dependencies resolve.
## When to Use Parallel
| Scenario | Parallel Strategy |
|----------|-------------------|
| Root cause unclear, multiple suspects | 2-3 Explore agents on different areas |
| Multi-module fix | Explore each module in parallel |
| After implementation | Bash agents for typecheck + lint + build |
| Before commit | Bash agents for test + build + lint |
| 2+ independent issues | Task trees per issue + fullstack-developer agents |
## Combining Explore + Tasks + Bash
**Step 1:** Parallel Explore to scout
**Step 2:** Sequential implementation (update Tasks as phases complete)
**Step 3:** Parallel Bash to verify
```
// Scout phase - parallel
Task("Explore", "Find payment handlers", "Scout payments")
Task("Explore", "Find order processors", "Scout orders")
// Wait for results, implement fix, TaskUpdate each phase
// Verify phase - parallel
Task("Bash", "Run tests: bun test", "Run tests")
Task("Bash", "Run typecheck", "Check types")
Task("Bash", "Run build", "Verify build")
```
## Resource Limits
- Max 3 parallel agents recommended (system resources)
- Each subagent has 200K token context limit
- Keep prompts concise to avoid context bloat
- Use `TaskList()` to check for available unblocked work

# Prevention Gate
After fixing a bug, prevent the same class of issues from recurring. This step is MANDATORY.
## Core Principle
A fix without prevention is incomplete. The same bug pattern WILL recur if you only patch the symptom.
## Prevention Requirements (Check All That Apply)
### 1. Regression Test (ALWAYS required)
Every fix MUST have a test that:
- **Fails** without the fix applied (proves the test catches the bug)
- **Passes** with the fix applied (proves the fix works)
```
If no test framework exists:
→ Add inline verification or assertion at minimum
→ Note in report: "No test framework — added runtime assertion"
```
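
A minimal sketch of the fail-without/pass-with requirement, using an invented off-by-one bug (nothing here is from an actual codebase):

```typescript
// The bug: lastItem() returned xs[xs.length], which is always undefined.
// The fixed version is shown; the regression test below would FAIL
// against the buggy version and PASS against this one.
function lastItem<T>(xs: T[]): T | undefined {
  return xs[xs.length - 1]; // fix: was xs[xs.length]
}

// Regression test covering the specific fixed scenario.
function testLastItemReturnsFinalElement(): void {
  const result = lastItem([1, 2, 3]);
  if (result !== 3) {
    throw new Error(`expected 3, got ${result}`);
  }
}
testLastItemReturnsFinalElement();
```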
### 2. Defense-in-Depth Validation (When applicable)
Apply layered validation from `ck:debug` defense-in-depth technique:
| Layer | Apply When | Example |
|-------|-----------|---------|
| **Entry point validation** | Fix involves user/external input | Reject invalid input at API boundary |
| **Business logic validation** | Fix involves data processing | Assert data makes sense for operation |
| **Environment guards** | Fix involves env-sensitive operations | Prevent dangerous ops in wrong context |
| **Debug instrumentation** | Fix was hard to diagnose | Add logging/context capture for forensics |
**Rule:** Not every fix needs all 4 layers. Apply what's relevant. But ALWAYS consider each.
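
As an illustration only, the four layers might look like this around an invented refund operation (`processRefund`, the read-only flag, and the amounts are all assumptions for the sketch):

```typescript
// Hypothetical layered validation; every name is invented for this sketch.
function processRefund(
  amountCents: number,
  orderTotalCents: number,
  readOnlyMode = false, // stand-in for a real environment check
): string {
  // Layer 1, entry point: reject structurally invalid input at the boundary.
  if (!Number.isInteger(amountCents) || amountCents <= 0) {
    throw new Error(`Invalid refund amount: ${amountCents}`);
  }
  // Layer 2, business logic: the amount must make sense for this operation.
  if (amountCents > orderTotalCents) {
    throw new Error("Refund exceeds order total");
  }
  // Layer 3, environment guard: block dangerous ops in the wrong context.
  if (readOnlyMode) {
    throw new Error("Refunds disabled in read-only mode");
  }
  // Layer 4, debug instrumentation: leave forensics for the next diagnosis.
  return `refund ok: ${amountCents}/${orderTotalCents}`;
}
```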
### 3. Type Safety (When applicable)
| Scenario | Prevention |
|----------|-----------|
| Null/undefined caused the bug | Add strict null checks, use `??` or `?.` |
| Wrong type passed | Add type guard or runtime validation |
| Missing property | Add required field to interface/type |
| Implicit any | Add explicit types |
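
A short sketch of the first two rows (the `Config` shape is invented for the example): `??` falls back only on `null`/`undefined`, so legitimate falsy values like `0` survive, and a type guard turns `unknown` into a checked type at the boundary.

```typescript
// Invented config shape for illustration.
interface Config { retries?: number }

// Null/undefined bug: ?? preserves retries: 0, where || would not.
function retryCount(cfg: Config | null): number {
  return cfg?.retries ?? 3;
}

// Wrong type passed: runtime type guard narrows unknown to Config.
function isConfig(x: unknown): x is Config {
  if (typeof x !== "object" || x === null) return false;
  const r = (x as { retries?: unknown }).retries;
  return r === undefined || typeof r === "number";
}
```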
### 4. Error Handling (When applicable)
| Scenario | Prevention |
|----------|-----------|
| Unhandled promise rejection | Add `.catch()` or try/catch |
| Missing error boundary | Add error boundary component |
| Silent failure | Add explicit error logging |
| No fallback for external dependency | Add timeout + fallback |
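
For the last row, a generic timeout-plus-fallback wrapper might look like this. A sketch only; production code should also cancel the underlying request (e.g. via `AbortController`) rather than just abandoning it.

```typescript
// Hypothetical wrapper: resolve with a fallback if the external call is too slow.
async function withTimeout<T>(work: Promise<T>, ms: number, fallback: T): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(fallback), ms);
  });
  try {
    return await Promise.race([work, timeout]);
  } finally {
    if (timer !== undefined) clearTimeout(timer); // no leaked timer when work wins
  }
}
```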
## Verification Checklist (Before Completing Step 5)
```
□ Pre-fix state captured? (error messages, test output)
□ Fix applied to ROOT CAUSE (not symptom)?
□ Fresh verification run? (exact same commands as pre-fix)
□ Before/after comparison documented?
□ Regression test added? (fails without fix, passes with fix)
□ Defense-in-depth layers considered? (applied where relevant)
□ No new warnings/errors introduced?
□ Parallel verification passed? (typecheck + lint + build + test)
```
## Output Format
```
Prevention measures applied:
- Regression test: [test file:line] — covers [specific scenario]
- Guard added: [file:line] — [description of guard]
- Type safety: [file:line] — [what was strengthened]
- Error handling: [file:line] — [what was added]
Before/After comparison:
- Before: [exact error/failure]
- After: [exact success output]
```
## Quick Mode Prevention
For trivial issues (type errors, lint), abbreviated prevention:
- Regression test: optional (type system IS the test)
- Parallel verification: typecheck + lint only
- Defense-in-depth: skip (not applicable for type fixes)
- Still require before/after comparison of typecheck output

# Review Cycle
Mode-aware review handling for code-reviewer results.
## Autonomous Mode
```
cycle = 0
LOOP:
1. Run code-reviewer → score, critical_count, warnings, suggestions
2. IF score >= 9.5 AND critical_count == 0:
→ Output: "✓ Review [score]/10 - Auto-approved"
→ PROCEED to next step
3. ELSE IF critical_count > 0 AND cycle < 3:
→ Output: "⚙ Auto-fixing [N] critical issues (cycle [cycle+1]/3)"
→ Fix critical issues
→ Re-run tests
→ cycle++, GOTO LOOP
4. ELSE IF cycle >= 3:
→ ESCALATE to user via AskUserQuestion
→ Display findings
→ Options: "Fix manually" / "Approve anyway" / "Abort"
5. ELSE (score < 9.5, no critical):
→ Output: "✓ Review [score]/10 - Approved with [N] warnings"
→ PROCEED (warnings logged, not blocking)
```
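
The branch order above can be sketched as a pure decision function. The function name and action strings are invented; the thresholds (9.5, zero critical, max 3 cycles) are the ones stated in the loop:

```typescript
type ReviewAction = "approve" | "auto-fix" | "escalate" | "approve-with-warnings";

// Mirrors the loop above; branch order matters.
function decideReview(score: number, criticalCount: number, cycle: number): ReviewAction {
  if (score >= 9.5 && criticalCount === 0) return "approve"; // case 2
  if (criticalCount > 0 && cycle < 3) return "auto-fix";     // case 3
  if (cycle >= 3) return "escalate";                         // case 4
  return "approve-with-warnings";                            // case 5
}
```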
## Human-in-the-Loop Mode
```
ALWAYS:
1. Run code-reviewer → score, critical_count, warnings, suggestions
2. Display findings:
┌─────────────────────────────────────┐
│ Review: [score]/10 │
├─────────────────────────────────────┤
│ Critical ([N]): [list] │
│ Warnings ([N]): [list] │
│ Suggestions ([N]): [list] │
└─────────────────────────────────────┘
3. Use AskUserQuestion:
IF critical_count > 0:
- "Fix critical issues"
- "Fix all issues"
- "Approve anyway"
- "Abort"
ELSE:
- "Approve"
- "Fix warnings/suggestions"
- "Abort"
4. Handle response:
- Fix → implement, re-test, re-review (max 3 cycles)
- Approve → proceed
- Abort → stop workflow
```
## Quick Mode Review
Uses same logic as Autonomous but:
- Lower threshold: score >= 8.5 acceptable
- Only 1 auto-fix cycle before escalate
- Focus on: correctness, security, no regressions
## Critical Issues (Always Block)
- Security vulnerabilities (XSS, SQL injection, OWASP)
- Performance bottlenecks (O(n²) when O(n) possible)
- Architectural violations
- Data loss risks
- Breaking changes without migration

# Skill Activation Matrix
When to activate each skill and tool during fixing workflows.
## Always Activate (ALL Workflows)
| Skill/Tool | Step | Reason |
|------------|------|--------|
| `ck:scout` OR parallel `Explore` | Step 1 | Understand codebase context before diagnosing |
| `ck:debug` | Step 2 | Systematic root cause investigation |
| `ck:sequential-thinking` | Step 2 | Structured hypothesis formation — NO guessing |
## Task Orchestration (Moderate+ Only)
| Tool | Activate When |
|------|---------------|
| `TaskCreate` | After complexity assessment, create all phase tasks upfront |
| `TaskUpdate` | At start/completion of each phase |
| `TaskList` | Check available unblocked work, coordinate parallel agents |
| `TaskGet` | Retrieve full task details before starting work |
Skip Tasks for Quick workflow (< 3 steps). See `references/task-orchestration.md`.
## Auto-Triggered Activation
| Skill | Auto-Trigger Condition |
|-------|------------------------|
| `ck:problem-solving` | 2+ hypotheses REFUTED in Step 2 diagnosis |
| `ck:sequential-thinking` | Always in Step 2 (mandatory for hypothesis formation) |
## Conditional Activation
| Skill | Activate When |
|-------|---------------|
| `ck:brainstorm` | Multiple valid fix approaches, architecture decision (Deep only) |
| `ck:context-engineering` | Fixing AI/LLM/agent code, context window issues |
| `ck:ai-multimodal` | UI issues, screenshots provided, visual bugs |
| `ck:project-management` | Moderate+ workflows: task hydration, sync-back, progress tracking |
## Subagent Usage
| Subagent | Activate When |
|----------|---------------|
| `debugger` | Root cause unclear, need deep investigation (Step 2) |
| `Explore` (parallel) | Scout multiple areas simultaneously (Step 1), test hypotheses (Step 2) |
| `Bash` (parallel) | Verify implementation: typecheck, lint, build, test (Step 5) |
| `researcher` | External docs needed, latest best practices (Deep only) |
| `planner` | Complex fix needs breakdown, multiple phases (Deep only) |
| `tester` | After implementation, verify fix works (Step 5) |
| `ck:code-review` | After fix, verify quality and security (Step 5) |
| `git-manager` | After approval, commit changes (Step 6) |
| `docs-manager` | API/behavior changes need doc updates (Step 6) |
| `project-manager` | Major fix impacts roadmap/plan status (Step 6) |
| `fullstack-developer` | Parallel independent issues (each gets own agent) |
## Parallel Patterns
See `references/parallel-exploration.md` for detailed patterns.
| When | Parallel Strategy |
|------|-------------------|
| Scouting (Step 1) | 2-3 `Explore` agents on different areas |
| Testing hypotheses (Step 2) | 2-3 `Explore` agents per hypothesis |
| Multi-module fix | `Explore` each module in parallel |
| After implementation (Step 5) | `Bash` agents: typecheck + lint + build + test |
| 2+ independent issues | Task trees + `fullstack-developer` agents per issue |
## Workflow → Skills Map
| Workflow | Skills Activated |
|----------|------------------|
| Quick | `ck:scout` (minimal), `ck:debug`, `ck:sequential-thinking`, `ck:code-review`, parallel `Bash` verification |
| Standard | Above + Tasks, `ck:problem-solving` (auto), `ck:project-management`, `tester`, parallel `Explore` |
| Deep | All above + `ck:brainstorm`, `ck:context-engineering`, `researcher`, `planner` |
| Parallel | Per-issue Task trees + `ck:project-management` + `fullstack-developer` agents + coordination via `TaskList` |
## Step → Skills Chain (Mandatory Order)
| Step | Mandatory Chain |
|------|----------------|
| Step 0: Mode | `AskUserQuestion` (unless auto/quick detected) |
| Step 1: Scout | `ck:scout` OR 2-3 parallel `Explore` → map files, deps, tests |
| Step 2: Diagnose | Capture pre-fix state → `ck:debug` → `ck:sequential-thinking` → parallel `Explore` hypotheses (`ck:problem-solving` if 2+ fail) |
| Step 3: Assess | Classify complexity → create Tasks (moderate+) |
| Step 4: Fix | Implement per workflow → fix the root cause |
| Step 5: Verify+Prevent | Iron-law verify → regression test → defense-in-depth → parallel `Bash` verify |
| Step 6: Finalize | Report → `docs-manager` → `TaskUpdate` → `git-manager` → `/ck:journal` |
## Detection Triggers
| Keyword/Pattern | Skill to Consider |
|-----------------|-------------------|
| "AI", "LLM", "agent", "context" | `ck:context-engineering` |
| "stuck", "tried everything" | `ck:problem-solving` |
| "complex", "multi-step" | `ck:sequential-thinking` |
| "which approach", "options" | `ck:brainstorm` |
| "latest docs", "best practice" | `researcher` subagent |
| Screenshot attached | `ck:ai-multimodal` |

# Task Orchestration
Native Claude Task tools for tracking and coordinating fix workflows.
**Skill:** Activate `ck:project-management` for advanced task orchestration — provides hydration (plan checkboxes → Tasks), sync-back (Tasks → plan checkboxes), cross-session resume, and progress tracking patterns.
**Tool Availability:** `TaskCreate`, `TaskUpdate`, `TaskGet`, `TaskList` are **CLI-only** — disabled in VSCode extension (`isTTY` check). If these tools error, use `TodoWrite` for progress tracking instead. Fix workflow remains fully functional — Tasks add visibility and coordination, not core functionality.
## When to Use Tasks
| Complexity | Use Tasks? | Reason |
|-----------|-----------|--------|
| Simple/Quick | No | < 3 steps, overhead exceeds benefit |
| Moderate (Standard) | Yes | 6 steps, multi-subagent coordination |
| Complex (Deep) | Yes | 9 steps, dependency chains, parallel agents |
| Parallel | Yes | Multiple independent issue trees |
## Task Tools
- `TaskCreate(subject, description, activeForm, metadata)` - Create task
- `TaskUpdate(taskId, status, addBlockedBy, addBlocks)` - Update status/deps
- `TaskGet(taskId)` - Get full task details
- `TaskList()` - List all tasks with status
**Lifecycle:** `pending` → `in_progress` → `completed`
## Standard Workflow Tasks (6 phases)
Create all tasks upfront, then work through them:
```
T1 = TaskCreate(subject="Scout codebase", activeForm="Scouting codebase", metadata={step: 1, phase: "investigate"})
T2 = TaskCreate(subject="Diagnose root cause", activeForm="Diagnosing root cause", metadata={step: 2, phase: "investigate"})
T3 = TaskCreate(subject="Implement fix", activeForm="Implementing fix", metadata={step: 3, phase: "implement"}, addBlockedBy=[T1, T2])
T4 = TaskCreate(subject="Verify + prevent", activeForm="Verifying fix", metadata={step: 4, phase: "verify"}, addBlockedBy=[T3])
T5 = TaskCreate(subject="Code review", activeForm="Reviewing code", metadata={step: 5, phase: "verify"}, addBlockedBy=[T4])
T6 = TaskCreate(subject="Finalize", activeForm="Finalizing", metadata={step: 6, phase: "finalize"}, addBlockedBy=[T5])
```
Update as work progresses:
```
TaskUpdate(taskId=T1, status="in_progress")
// ... scout codebase ...
TaskUpdate(taskId=T1, status="completed")
// T3 auto-unblocks when T1 + T2 complete
```
## Deep Workflow Tasks (9 phases)
Steps 1+2+3 run in parallel (scout + diagnose + research).
```
T1 = TaskCreate(subject="Scout codebase", metadata={step: 1, phase: "investigate"})
T2 = TaskCreate(subject="Diagnose root cause", metadata={step: 2, phase: "investigate"})
T3 = TaskCreate(subject="Research solutions", metadata={step: 3, phase: "investigate"})
T4 = TaskCreate(subject="Brainstorm approaches", metadata={step: 4, phase: "design"}, addBlockedBy=[T1, T2, T3])
T5 = TaskCreate(subject="Create implementation plan", metadata={step: 5, phase: "design"}, addBlockedBy=[T4])
T6 = TaskCreate(subject="Implement fix", metadata={step: 6, phase: "implement"}, addBlockedBy=[T5])
T7 = TaskCreate(subject="Verify + prevent", metadata={step: 7, phase: "verify"}, addBlockedBy=[T6])
T8 = TaskCreate(subject="Code review", metadata={step: 8, phase: "verify"}, addBlockedBy=[T7])
T9 = TaskCreate(subject="Finalize & docs", metadata={step: 9, phase: "finalize"}, addBlockedBy=[T8])
```
## Parallel Issue Coordination
For 2+ independent issues, create separate task trees per issue:
```
// Issue A tree
TaskCreate(subject="[Issue A] Scout", metadata={issue: "A", step: 1})
TaskCreate(subject="[Issue A] Diagnose", metadata={issue: "A", step: 2})
TaskCreate(subject="[Issue A] Fix", metadata={issue: "A", step: 3}, addBlockedBy=[A-step1, A-step2])
TaskCreate(subject="[Issue A] Verify", metadata={issue: "A", step: 4}, addBlockedBy=[A-step3])
// Issue B tree
TaskCreate(subject="[Issue B] Scout", metadata={issue: "B", step: 1})
TaskCreate(subject="[Issue B] Diagnose", metadata={issue: "B", step: 2})
TaskCreate(subject="[Issue B] Fix", metadata={issue: "B", step: 3}, addBlockedBy=[B-step1, B-step2])
TaskCreate(subject="[Issue B] Verify", metadata={issue: "B", step: 4}, addBlockedBy=[B-step3])
// Final shared task
TaskCreate(subject="Integration verify", addBlockedBy=[A-step4, B-step4])
```
Spawn `fullstack-developer` subagents per issue tree. Each agent:
1. Claims tasks via `TaskUpdate(status="in_progress")`
2. Completes tasks via `TaskUpdate(status="completed")`
3. Blocked tasks auto-unblock when dependencies resolve
## Subagent Task Assignment
Assign tasks to subagents via `owner` field:
```
TaskUpdate(taskId=taskA, owner="agent-scout")
TaskUpdate(taskId=taskB, owner="agent-diagnose")
```
Check available work: `TaskList()` → filter by `status=pending`, `blockedBy=[]`, `owner=null`
## Rules
- Create tasks BEFORE starting work (upfront planning)
- Only 1 task `in_progress` per agent at a time
- Mark complete IMMEDIATELY after finishing (don't batch)
- Use `metadata` for filtering: `{step, phase, issue, severity}`
- If task fails → keep `in_progress`, create subtask for blocker
- Skip Tasks entirely for Quick workflow (< 3 steps)


@@ -0,0 +1,28 @@
# CI/CD Fix Workflow
For GitHub Actions failures and CI/CD pipeline issues.
## Prerequisites
- `gh` CLI installed and authorized
- GitHub Actions URL or run ID
## Workflow
1. **Fetch logs** with `debugger` agent:
```bash
gh run view <run-id> --log-failed
gh run view <run-id> --log
```
2. **Analyze** root cause from logs
3. **Implement fix** based on analysis
4. **Test locally** with `tester` agent before pushing
5. **Iterate**: if tests fail, repeat from step 3
## Notes
- If `gh` is unavailable, instruct the user to install it, then authenticate: `gh auth login`
- Check both failed step and preceding steps for context
- Common issues: env vars, dependencies, permissions, timeouts
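The fetch-and-filter step can be sketched in plain shell. This is a sketch, not a definitive script: it assumes `gh` is installed and authenticated, and the error patterns are illustrative guesses to tune per pipeline.

```shell
#!/usr/bin/env bash
# Sketch: pull failed-step logs for a run and surface likely error lines.
# Assumes `gh` is installed and authenticated; patterns are illustrative.
set -euo pipefail

extract_errors() {
  # Keep only lines that commonly mark CI failures.
  grep -E -i 'error|fail(ed|ure)?|timeout|exit code' || true
}

# Pass a run ID to fetch real logs; with no argument this is a no-op.
if [ -n "${1:-}" ]; then
  gh run view "$1" --log-failed | extract_errors | head -n 40
fi
```

Filtering before reading keeps the `debugger` agent focused on the failing step rather than the full transcript.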


@@ -0,0 +1,154 @@
# Deep Workflow
Full pipeline with research, brainstorming, and planning for complex issues. Uses native Claude Tasks with dependency chains.
## Task Setup (Before Starting)
Create all phase tasks upfront. Steps 1+2+3 run in parallel (scout + diagnose + research).
```
T1 = TaskCreate(subject="Scout codebase", activeForm="Scouting codebase", metadata={phase: "investigate"})
T2 = TaskCreate(subject="Diagnose root cause", activeForm="Diagnosing root cause", metadata={phase: "investigate"})
T3 = TaskCreate(subject="Research solutions", activeForm="Researching solutions", metadata={phase: "investigate"})
T4 = TaskCreate(subject="Brainstorm approaches", activeForm="Brainstorming", metadata={phase: "design"}, addBlockedBy=[T1, T2, T3])
T5 = TaskCreate(subject="Create implementation plan", activeForm="Planning implementation", metadata={phase: "design"}, addBlockedBy=[T4])
T6 = TaskCreate(subject="Implement fix", activeForm="Implementing fix", metadata={phase: "implement"}, addBlockedBy=[T5])
T7 = TaskCreate(subject="Verify + prevent", activeForm="Verifying fix", metadata={phase: "verify"}, addBlockedBy=[T6])
T8 = TaskCreate(subject="Code review", activeForm="Reviewing code", metadata={phase: "verify"}, addBlockedBy=[T7])
T9 = TaskCreate(subject="Finalize & docs", activeForm="Finalizing", metadata={phase: "finalize"}, addBlockedBy=[T8])
```
## Steps
### Step 1: Scout Codebase (parallel with Steps 2+3)
`TaskUpdate(T1, status="in_progress")`
**Mandatory:** Activate `ck:scout` skill or launch 2-3 `Explore` subagents in parallel:
```
Task("Explore", "Find error origin and affected components", "Trace error")
Task("Explore", "Find module boundaries and dependencies", "Map deps")
Task("Explore", "Find related tests and similar patterns", "Find patterns")
```
Map: all affected files, module boundaries, call chains, test coverage gaps.
See `references/parallel-exploration.md` for patterns.
`TaskUpdate(T1, status="completed")`
**Output:** `✓ Step 1: Scouted - [N] files, system impact: [scope]`
### Step 2: Diagnose Root Cause (parallel with Steps 1+3)
`TaskUpdate(T2, status="in_progress")`
**Mandatory skill chain:**
1. **Capture pre-fix state:** Record ALL error messages, failing tests, stack traces, logs.
2. Activate `ck:debug` skill (systematic-debugging + root-cause-tracing).
3. Activate `ck:sequential-thinking` — structured hypothesis formation.
4. Spawn parallel `Explore` subagents to test each hypothesis.
5. If 2+ hypotheses fail → auto-activate `ck:problem-solving`.
6. Trace backward through call chain to ROOT CAUSE origin.
See `references/diagnosis-protocol.md` for full methodology.
`TaskUpdate(T2, status="completed")`
**Output:** `✓ Step 2: Diagnosed - Root cause: [summary], Evidence: [chain]`
### Step 3: Research (parallel with Steps 1+2)
`TaskUpdate(T3, status="in_progress")`
Use `researcher` subagent for external knowledge.
- Search latest docs, best practices
- Find similar issues/solutions
- Gather security advisories if relevant
`TaskUpdate(T3, status="completed")`
**Output:** `✓ Step 3: Research complete - [key findings]`
### Step 4: Brainstorm
`TaskUpdate(T4, status="in_progress")` — auto-unblocks when T1 + T2 + T3 complete.
Activate `ck:brainstorm` skill.
- Evaluate multiple approaches using scout + diagnosis + research findings
- Consider trade-offs
- Get user input on preferred direction
`TaskUpdate(T4, status="completed")`
**Output:** `✓ Step 4: Approach selected - [chosen approach]`
### Step 5: Plan
`TaskUpdate(T5, status="in_progress")`
Use `planner` subagent to create implementation plan.
- Break down into phases
- Identify dependencies
- Define success criteria
- Include prevention measures in plan
`TaskUpdate(T5, status="completed")`
**Output:** `✓ Step 5: Plan created - [N] phases`
### Step 6: Implement
`TaskUpdate(T6, status="in_progress")`
Implement per plan. Use `ck:context-engineering`, `ck:sequential-thinking`, `ck:problem-solving`.
- Fix ROOT CAUSE per diagnosis — not symptoms
- Follow plan phases
- Minimal changes per phase
`TaskUpdate(T6, status="completed")`
**Output:** `✓ Step 6: Implemented - [N] files, [M] phases`
### Step 7: Verify + Prevent
`TaskUpdate(T7, status="in_progress")`
**Mandatory skill chain:**
1. **Iron-law verify:** Re-run EXACT commands from pre-fix state. Compare before/after.
2. **Regression test:** Add comprehensive tests. Tests MUST fail without fix, pass with fix.
3. **Defense-in-depth:** Apply all relevant prevention layers (see `references/prevention-gate.md`).
4. **Parallel verification:** Launch `Bash` agents: typecheck + lint + build + test.
5. **Edge cases:** Test boundary conditions, security implications, performance impact.
**If verification fails:** Loop back to Step 2 (re-diagnose). Max 3 attempts → question architecture.
See `references/prevention-gate.md` for prevention requirements.
`TaskUpdate(T7, status="completed")`
**Output:** `✓ Step 7: Verified + Prevented - [before/after], [N] tests, [M] guards`
### Step 8: Code Review
`TaskUpdate(T8, status="in_progress")`
Use `code-reviewer` subagent.
See `references/review-cycle.md` for mode-specific handling.
`TaskUpdate(T8, status="completed")`
**Output:** `✓ Step 8: Review [score]/10 - [status]`
### Step 9: Finalize
`TaskUpdate(T9, status="in_progress")`
- Report summary: root cause, evidence chain, changes, prevention measures, confidence score
- Activate `ck:project-management` for task sync-back, plan status updates, and progress tracking
- Use `docs-manager` subagent for documentation
- Use `git-manager` subagent for commit
- Run `/ck:journal`
`TaskUpdate(T9, status="completed")`
**Output:** `✓ Step 9: Complete - [actions taken]`
## Skills/Subagents Activated
| Step | Skills/Subagents |
|------|------------------|
| 1 | `ck:scout` OR parallel `Explore` subagents |
| 2 | `ck:debug`, `ck:sequential-thinking`, parallel `Explore`, (`ck:problem-solving` auto) |
| 3 | `researcher` (runs parallel with steps 1+2) |
| 4 | `ck:brainstorm` |
| 5 | `planner` |
| 6 | `ck:problem-solving`, `ck:sequential-thinking`, `ck:context-engineering` |
| 7 | `tester`, parallel `Bash` verification |
| 8 | `code-reviewer` |
| 9 | `ck:project-management`, `docs-manager`, `git-manager` |
**Rules:** Don't skip steps. Validate before proceeding. One phase at a time.
**Frontend:** Use `chrome`, `ck:chrome-devtools` or any relevant skills/tools to verify.
**Visual Assets:** Use `ck:ai-multimodal` for visual assets generation, analysis and verification.


@@ -0,0 +1,72 @@
# Log Analysis Fix Workflow
For fixing issues from application logs. Uses native Claude Tasks for phase tracking.
## Prerequisites
- Log file at `./logs.txt` or similar
## Setup (if logs missing)
Add permanent log piping to project config:
- **Bash/Unix**: `command 2>&1 | tee logs.txt`
- **PowerShell**: `command *>&1 | Tee-Object logs.txt`
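A minimal sketch of the Unix variant; `pipefail` matters because without it `tee`'s success masks the real exit code. The logged command below is a stand-in.

```shell
# Sketch: log everything to logs.txt while preserving the exit code.
# Without `pipefail`, a failing command would look successful via `tee`.
set -o pipefail

run_logged() {
  "$@" 2>&1 | tee -a logs.txt   # append so earlier context survives
}

run_logged echo "server started"   # stand-in for the real command
```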
## Task Setup (Before Starting)
```
T1 = TaskCreate(subject="Read & analyze logs", activeForm="Analyzing logs")
T2 = TaskCreate(subject="Scout codebase", activeForm="Scouting codebase", addBlockedBy=[T1])
T3 = TaskCreate(subject="Plan fix", activeForm="Planning fix", addBlockedBy=[T1, T2])
T4 = TaskCreate(subject="Implement fix", activeForm="Implementing fix", addBlockedBy=[T3])
T5 = TaskCreate(subject="Test fix", activeForm="Testing fix", addBlockedBy=[T4])
T6 = TaskCreate(subject="Code review", activeForm="Reviewing code", addBlockedBy=[T5])
```
## Workflow
### Step 1: Read & Analyze Logs
`TaskUpdate(T1, status="in_progress")`
- Read logs with `Grep` (use `head_limit: 30` initially, increase if needed)
- Use `debugger` agent for root cause analysis
- Focus on last N lines first (most recent errors)
- Look for stack traces, error codes, timestamps, repeated patterns
`TaskUpdate(T1, status="completed")`
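The triage above (recent lines first, repeated patterns) can be sketched in shell; the 200-line window and the error patterns are assumptions to adapt per stack.

```shell
# Sketch: read the tail of the log first, then count repeated errors.
# Patterns and the 200-line window are assumptions; tune per project.
recent_errors() {
  tail -n 200 "$1" | grep -E 'ERROR|FATAL|Traceback|Exception' || true
}

top_repeats() {
  # Identical error lines ranked by frequency: repeated failures first.
  recent_errors "$1" | sort | uniq -c | sort -rn | head -n 10
}
```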
### Step 2: Scout Codebase
`TaskUpdate(T2, status="in_progress")`
Use the `ck:scout` skill or parallel `Explore` subagents to find issue locations.
See `references/parallel-exploration.md` for patterns.
`TaskUpdate(T2, status="completed")`
### Step 3: Plan Fix
`TaskUpdate(T3, status="in_progress")` — auto-unblocks when T1 + T2 complete.
Use `planner` agent.
`TaskUpdate(T3, status="completed")`
### Step 4: Implement
`TaskUpdate(T4, status="in_progress")`
Implement the fix.
`TaskUpdate(T4, status="completed")`
### Step 5: Test
`TaskUpdate(T5, status="in_progress")`
Use `tester` agent. If issues remain → keep T5 `in_progress`, loop back to Step 2.
`TaskUpdate(T5, status="completed")`
### Step 6: Review
`TaskUpdate(T6, status="in_progress")`
Use `code-reviewer` agent.
`TaskUpdate(T6, status="completed")`
## Tips
- Check for patterns/repeated errors


@@ -0,0 +1,82 @@
# Quick Workflow
Fast scout-diagnose-fix-verify cycle for simple issues.
## Steps
### Step 1: Scout (Minimal)
Locate affected file(s) and their direct dependencies only.
- Read error message → identify file path
- Check direct imports/dependencies of affected file
- Skip full codebase mapping
**Output:** `✓ Step 1: Scouted - [file], [N] direct deps`
### Step 2: Diagnose (Abbreviated)
Activate `ck:debug` skill. Activate `ck:sequential-thinking` for structured analysis.
- Read error message/logs
- **Capture pre-fix state:** Record exact error output (this is your verification baseline)
- Identify root cause (usually obvious for simple issues)
- Skip parallel hypothesis testing for trivial cases
**Output:** `✓ Step 2: Diagnosed - Root cause: [brief description]`
### Step 3: Fix & Verify
Implement the fix directly.
- Make minimal changes
- Follow existing patterns
**Parallel Verification:**
Launch `Bash` agents in parallel:
```
Task("Bash", "Run typecheck", "Verify types")
Task("Bash", "Run lint", "Verify lint")
```
**Before/After comparison:** Re-run the EXACT command from pre-fix state capture. Compare output.
See `references/parallel-exploration.md` for patterns.
**Output:** `✓ Step 3: Fixed - [N] files, verified (types/lint passed)`
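The before/after comparison can be made concrete in shell. A sketch, with `echo` standing in for whatever command reproduced the bug (test run, curl, build):

```shell
# Sketch: capture pre-fix output, re-run the same command post-fix, diff.
# `echo` is a stand-in for the real repro command.
capture() {
  out="$1"; shift
  "$@" > "$out" 2>&1 || true   # keep output even when the command fails
}

capture before.txt echo "1 failing test"
# ... apply the fix here ...
capture after.txt echo "0 failing tests"
diff before.txt after.txt || true   # the diff is the evidence of the fix
```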
### Step 4: Review + Prevent
Use `code-reviewer` subagent for quick review.
Prompt: "Quick review of fix for [issue]. Check: correctness, security, no regressions. Score X/10."
**Prevention (abbreviated for Quick):**
- Type errors/lint: type system IS the test → regression test optional
- Bug fixes: add at least 1 test covering the fixed scenario
- Still require before/after comparison of verification output
**Review handling:** See `references/review-cycle.md`
**Output:** `✓ Step 4: Review [score]/10 - [prevention measures]`
### Step 5: Complete
Report summary to user.
**If autonomous mode:** Commit via `git-manager` subagent if score >= 9.0
**If HITL mode:** Ask user next action
**Output:** `✓ Step 5: Complete - [action]`
## Skills/Subagents Activated
| Step | Skills/Subagents |
|------|------------------|
| 1 | `ck:scout` (minimal) or direct file read |
| 2 | `ck:debug`, `ck:sequential-thinking` |
| 3 | Parallel `Bash` for verification |
| 4 | `code-reviewer` subagent |
| 5 | `git-manager` subagent |
**Extra:** `ck:context-engineering` if dealing with AI/LLM code
## Notes
- If review fails → escalate to Standard workflow
- Total steps: 5
- No planning phase needed
- Pre-fix state capture is STILL mandatory (even for quick fixes)


@@ -0,0 +1,120 @@
# Standard Workflow
Full pipeline for moderate complexity issues. Uses native Claude Tasks for phase tracking.
## Task Setup (Before Starting)
Create all phase tasks upfront with dependencies. See `references/task-orchestration.md`.
```
T1 = TaskCreate(subject="Scout codebase", activeForm="Scouting codebase")
T2 = TaskCreate(subject="Diagnose root cause", activeForm="Diagnosing root cause")
T3 = TaskCreate(subject="Implement fix", activeForm="Implementing fix", addBlockedBy=[T1, T2])
T4 = TaskCreate(subject="Verify + prevent", activeForm="Verifying fix", addBlockedBy=[T3])
T5 = TaskCreate(subject="Code review", activeForm="Reviewing code", addBlockedBy=[T4])
T6 = TaskCreate(subject="Finalize", activeForm="Finalizing", addBlockedBy=[T5])
```
## Steps
### Step 1: Scout Codebase
`TaskUpdate(T1, status="in_progress")`
**Mandatory skill chain:**
1. Activate `ck:scout` skill OR launch 2-3 parallel `Explore` subagents.
2. Map: affected files, module boundaries, dependencies, related tests, recent git changes.
**Pattern:** In a SINGLE message, launch 2-3 `Explore` agents:
```
Task("Explore", "Find [area1] files related to issue", "Scout area1")
Task("Explore", "Find [area2] patterns/usage", "Scout area2")
Task("Explore", "Find [area3] tests/dependencies", "Scout area3")
```
See `references/parallel-exploration.md` for patterns.
`TaskUpdate(T1, status="completed")`
**Output:** `✓ Step 1: Scouted [N] areas - [M] files, [K] tests found`
### Step 2: Diagnose Root Cause
`TaskUpdate(T2, status="in_progress")`
**Mandatory skill chain:**
1. **Capture pre-fix state:** Record exact error messages, failing test output, stack traces.
2. Activate `ck:debug` skill. Use `debugger` subagent if needed.
3. Activate `ck:sequential-thinking` — form hypotheses through structured reasoning.
4. Spawn parallel `Explore` subagents to test hypotheses against codebase evidence.
5. If 2+ hypotheses fail → auto-activate `ck:problem-solving`.
6. Trace backward to root cause (not just symptom location).
See `references/diagnosis-protocol.md` for full methodology.
`TaskUpdate(T2, status="completed")`
**Output:** `✓ Step 2: Diagnosed - Root cause: [summary], Evidence: [brief], Scope: [N files]`
### Step 3: Implement Fix
`TaskUpdate(T3, status="in_progress")` — auto-unblocked when T1 + T2 complete.
Fix the ROOT CAUSE per diagnosis findings. Not symptoms.
- Apply `ck:problem-solving` skill if stuck
- Use `ck:sequential-thinking` for complex logic
- Minimal changes. Follow existing patterns.
`TaskUpdate(T3, status="completed")`
**Output:** `✓ Step 3: Implemented - [N] files changed`
### Step 4: Verify + Prevent
`TaskUpdate(T4, status="in_progress")`
**Mandatory skill chain:**
1. **Iron-law verify:** Re-run the EXACT commands from pre-fix state capture. Compare before/after.
2. **Regression test:** Add/update test(s) covering the fixed issue. Test MUST fail without fix, pass with fix.
3. **Defense-in-depth:** Apply prevention layers where applicable (see `references/prevention-gate.md`).
4. **Parallel verification:** Launch `Bash` agents:
```
Task("Bash", "Run typecheck", "Verify types")
Task("Bash", "Run lint", "Verify lint")
Task("Bash", "Run build", "Verify build")
Task("Bash", "Run tests", "Verify tests")
```
**If verification fails:** Loop back to Step 2 (re-diagnose). Max 3 attempts.
`TaskUpdate(T4, status="completed")`
**Output:** `✓ Step 4: Verified + Prevented - [before/after], [N] tests added, [M] guards`
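If the `Bash` subagent tool is unavailable, the same independent checks can run concurrently in plain shell. A sketch; `true`/`false` stand in for the project's real scripts (`npm run typecheck`, `npm run lint`, build, tests):

```shell
# Sketch: run independent checks concurrently, then report each verdict.
run_check() {
  name="$1"; shift
  if "$@" > "/tmp/check-$name.log" 2>&1; then
    echo "$name: ok"
  else
    echo "$name: FAILED (see /tmp/check-$name.log)"
  fi
}

run_check types true &   # stand-in for `npm run typecheck`
run_check lint true &    # stand-in for `npm run lint`
wait                     # block until every background check finishes
```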
### Step 5: Code Review
`TaskUpdate(T5, status="in_progress")`
Use `code-reviewer` subagent.
See `references/review-cycle.md` for mode-specific handling.
`TaskUpdate(T5, status="completed")`
**Output:** `✓ Step 5: Review [score]/10 - [status]`
### Step 6: Finalize
`TaskUpdate(T6, status="in_progress")`
- Report summary: root cause, changes, prevention measures, confidence score
- Activate `ck:project-management` for task sync-back and plan status updates
- Update docs if needed via `docs-manager`
- Ask to commit via `git-manager` subagent
- Run `/ck:journal`
`TaskUpdate(T6, status="completed")`
**Output:** `✓ Step 6: Complete - [action]`
## Skills/Subagents Activated
| Step | Skills/Subagents |
|------|------------------|
| 1 | `ck:scout` OR parallel `Explore` subagents |
| 2 | `ck:debug`, `ck:sequential-thinking`, `debugger` subagent, parallel `Explore`, (`ck:problem-solving` auto) |
| 3 | `ck:problem-solving` (if stuck), `ck:sequential-thinking` (complex logic) |
| 4 | `tester` subagent, parallel `Bash` verification |
| 5 | `code-reviewer` subagent |
| 6 | `ck:project-management`, `git-manager`, `docs-manager` subagents |
**Rules:** Don't skip steps. Validate before proceeding. One phase at a time.
**Frontend:** Use `chrome`, `ck:chrome-devtools` or any relevant skills/tools to verify.
**Visual Assets:** Use `ck:ai-multimodal` for visual assets generation, analysis and verification.


@@ -0,0 +1,75 @@
# Test Failure Fix Workflow
For fixing failing tests and test suite issues. Uses native Claude Tasks for phase tracking.
## Task Setup (Before Starting)
```
T1 = TaskCreate(subject="Compile & collect failures", activeForm="Compiling and collecting failures")
T2 = TaskCreate(subject="Debug root causes", activeForm="Debugging test failures", addBlockedBy=[T1])
T3 = TaskCreate(subject="Plan fixes", activeForm="Planning fixes", addBlockedBy=[T2])
T4 = TaskCreate(subject="Implement fixes", activeForm="Implementing fixes", addBlockedBy=[T3])
T5 = TaskCreate(subject="Re-test", activeForm="Re-running tests", addBlockedBy=[T4])
T6 = TaskCreate(subject="Code review", activeForm="Reviewing code", addBlockedBy=[T5])
```
## Workflow
### Step 1: Compile & Collect Failures
`TaskUpdate(T1, status="in_progress")`
Use `tester` agent. Fix all syntax errors before running tests.
- Run full test suite, collect all failures
- Group failures by module/area
`TaskUpdate(T1, status="completed")`
### Step 2: Debug
`TaskUpdate(T2, status="in_progress")`
Use `debugger` agent for root cause analysis.
- Analyze each failure group
- Identify shared root causes across failures
`TaskUpdate(T2, status="completed")`
### Step 3: Plan
`TaskUpdate(T3, status="in_progress")`
Use `planner` agent for fix strategy.
- Prioritize fixes (shared root causes first)
- Identify dependencies between fixes
`TaskUpdate(T3, status="completed")`
### Step 4: Implement
`TaskUpdate(T4, status="in_progress")`
Implement fixes step by step per plan.
`TaskUpdate(T4, status="completed")`
### Step 5: Re-test
`TaskUpdate(T5, status="in_progress")`
Use `tester` agent. If tests still fail → keep T5 `in_progress`, loop back to Step 2.
`TaskUpdate(T5, status="completed")`
### Step 6: Review
`TaskUpdate(T6, status="in_progress")`
Use `code-reviewer` agent.
`TaskUpdate(T6, status="completed")`
## Common Commands
```bash
npm test
bun test
pytest
go test ./...
```
## Tips
- Run single failing test first for faster iteration
- Check test assertions vs actual behavior
- Verify test fixtures/mocks are correct
- Don't modify tests to pass unless test is wrong
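Single-test invocations vary by runner; a sketch of the common shapes (flags differ across versions, and the test names here are placeholders):

```shell
# Sketch: run just the failing test for fast iteration.
# Flags vary by runner version; test names are placeholders.
runner="${RUNNER:-none}"
case "$runner" in
  jest)   npm test -- -t "name of failing test" ;;
  pytest) pytest tests/test_mod.py::test_name -x ;;   # -x: stop on first failure
  go)     go test -run 'TestName' ./pkg/... ;;
  *)      echo "set RUNNER to jest|pytest|go" ;;
esac
```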


@@ -0,0 +1,33 @@
# Type Error Fix Workflow
Quick workflow for TypeScript/type errors.
## Commands
```bash
bun run typecheck
tsc --noEmit
npx tsc --noEmit
```
## Rules
- Fix ALL type errors; don't stop at the first one
- **NEVER use `any` just to pass** - find proper types
- Repeat until zero errors
## Common Fixes
- Missing type imports
- Incorrect property access
- Null/undefined handling
- Generic type parameters
- Union type narrowing
## Workflow
1. Run typecheck command
2. Fix errors one by one
3. Re-run typecheck
4. Repeat until clean
## Tips
- Group related errors (same root cause)
- Check `@types/*` packages for library types
- Use `unknown` + type guards instead of `any`
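The grouping tip can be automated: tally error codes from `tsc` output to find shared root causes. A sketch; the real invocation (commented) assumes a `tsconfig.json` in the working directory.

```shell
# Sketch: tally TypeScript error codes; the biggest bucket is usually
# one root cause.
group_ts_errors() {
  grep -oE 'TS[0-9]+' | sort | uniq -c | sort -rn
}

# Typical use (requires a TypeScript project; `--pretty false` gives
# grep-friendly single-line diagnostics):
# npx tsc --noEmit --pretty false 2>&1 | group_ts_errors
```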


@@ -0,0 +1,75 @@
# UI Fix Workflow
For fixing visual/UI issues. Requires design skills. Uses native Claude Tasks for phase tracking.
## Required Skills (activate in order)
1. `ck:ui-ux-pro-max` - Design database (ALWAYS FIRST)
2. `ck:ui-ux-pro-max` - Design principles
3. `ck:frontend-design` - Implementation patterns
## Pre-fix Research
```bash
python3 .opencode/skills/ui-ux-pro-max/scripts/search.py "<product-type>" --domain product
python3 .opencode/skills/ui-ux-pro-max/scripts/search.py "<style>" --domain style
python3 .opencode/skills/ui-ux-pro-max/scripts/search.py "accessibility" --domain ux
```
## Task Setup (Before Starting)
```
T1 = TaskCreate(subject="Analyze visual issue", activeForm="Analyzing visual issue")
T2 = TaskCreate(subject="Implement UI fix", activeForm="Implementing UI fix", addBlockedBy=[T1])
T3 = TaskCreate(subject="Verify visually", activeForm="Verifying visually", addBlockedBy=[T2])
T4 = TaskCreate(subject="DevTools check", activeForm="Checking with DevTools", addBlockedBy=[T3])
T5 = TaskCreate(subject="Test compilation", activeForm="Testing compilation", addBlockedBy=[T4])
T6 = TaskCreate(subject="Update design docs", activeForm="Updating design docs", addBlockedBy=[T5])
```
## Workflow
### Step 1: Analyze
`TaskUpdate(T1, status="in_progress")`
Analyze screenshots/videos with `ck:ai-multimodal` skill.
- Read `./docs/design-guidelines.md` first
- Identify exact visual discrepancy
`TaskUpdate(T1, status="completed")`
### Step 2: Implement
`TaskUpdate(T2, status="in_progress")`
Use `ui-ux-designer` agent.
`TaskUpdate(T2, status="completed")`
### Step 3: Verify Visually
`TaskUpdate(T3, status="in_progress")`
Screenshot + `ck:ai-multimodal` analysis.
- Capture parent container, not whole page
- Compare to design guidelines
- If incorrect → keep T3 `in_progress`, loop back to Step 2
`TaskUpdate(T3, status="completed")`
### Step 4: DevTools Check
`TaskUpdate(T4, status="in_progress")`
Use `ck:chrome-devtools` skill.
`TaskUpdate(T4, status="completed")`
### Step 5: Test
`TaskUpdate(T5, status="in_progress")`
Use `tester` agent for compilation check.
`TaskUpdate(T5, status="completed")`
### Step 6: Document
`TaskUpdate(T6, status="in_progress")`
Update `./docs/design-guidelines.md` if needed.
`TaskUpdate(T6, status="completed")`
## Tips
- Use `ck:ai-multimodal` for generating visual assets
- Use `ImageMagick` for image editing