commit 10d660cbcb (2026-04-12 01:06:31 +07:00)
1066 changed files with 228596 additions and 0 deletions

# General Library Documentation Search
**Use when:** User asks about entire library/framework
**Speed:** ⚡⚡ Moderate (30-60s)
**Token usage:** 🟡 Medium
**Accuracy:** 📚 Comprehensive
## Trigger Patterns
- "Documentation for [LIBRARY]"
- "[LIBRARY] getting started"
- "How to use [LIBRARY]"
- "[LIBRARY] API reference"
## Workflow (Script-First)
```bash
# STEP 1: Execute detect-topic.js script
node scripts/detect-topic.js "<user query>"
# Returns: {"isTopicSpecific": false} for general queries
# STEP 2: Execute fetch-docs.js script (handles URL construction)
node scripts/fetch-docs.js "<user query>"
# Script constructs context7.com URL automatically
# Script handles GitHub/website URL patterns
# Returns: llms.txt content with 5-20+ URLs
# STEP 3: Execute analyze-llms-txt.js script
node scripts/analyze-llms-txt.js < llms.txt
# Groups URLs: critical, important, supplementary
# Recommends: agent distribution strategy
# Returns: {totalUrls, grouped, distribution}
# STEP 4: Deploy agents based on script recommendation
# - 1-3 URLs: Single agent or direct WebFetch
# - 4-10 URLs: Deploy 3-5 Explorer agents
# - 11+ URLs: Deploy 7 agents or phased approach
# STEP 5: Aggregate and present
# Synthesize findings: installation, concepts, API, examples
```
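The scripts themselves are not shown here, but STEP 1's classification can be sketched as a pattern check against the trigger list above. A minimal illustration only; the real `detect-topic.js` may use different heuristics:

```javascript
// Hypothetical sketch of detect-topic.js's general-vs-topic classification.
// A query matching one of the general trigger patterns is treated as a
// whole-library request; anything else falls through to topic-specific.
const GENERAL_PATTERNS = [
  /documentation for\s+\S+/i,
  /getting started/i,
  /how to use\s+\S+\s*$/i,
  /api reference/i,
];

function detectTopic(query) {
  const isGeneral = GENERAL_PATTERNS.some((re) => re.test(query.trim()));
  return { isTopicSpecific: !isGeneral };
}

console.log(detectTopic("Documentation for Astro")); // { isTopicSpecific: false }
```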
## Examples
**Astro framework:**
```bash
# Execute scripts (no manual URL construction)
node scripts/detect-topic.js "Documentation for Astro"
# {"isTopicSpecific": false}
node scripts/fetch-docs.js "Documentation for Astro"
# Script fetches: context7.com/withastro/astro/llms.txt
# Returns: llms.txt with 8 URLs
node scripts/analyze-llms-txt.js < llms.txt
# {totalUrls: 8, distribution: "3-agents", grouped: {...}}
# Deploy 3 Explorer agents as recommended:
# Agent 1: Getting started, installation, setup
# Agent 2: Core concepts, components, layouts
# Agent 3: Configuration, API reference
# Aggregate and present comprehensive report
```
## Agent Distribution
**1-3 URLs:** Single agent
**4-10 URLs:** 3-5 agents (2-3 URLs each)
**11-20 URLs:** 7 agents (balanced)
**21+ URLs:** Two-phase (critical first, then important)
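The table above maps directly to a small dispatch function. This sketch mirrors the thresholds as written; the tier labels are illustrative, not the actual script output:

```javascript
// Agent distribution by URL count, following the tiers listed above.
function agentDistribution(totalUrls) {
  if (totalUrls <= 3) return "single-agent"; // or direct WebFetch
  if (totalUrls <= 10) return "3-5-agents";  // 2-3 URLs each
  if (totalUrls <= 20) return "7-agents";    // balanced split
  return "two-phase";                        // critical first, then important
}
```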
## Known Libraries
- Next.js: `vercel/next.js`
- Astro: `withastro/astro`
- Remix: `remix-run/remix`
- shadcn/ui: `shadcn-ui/ui`
- Better Auth: `better-auth/better-auth`
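These slugs plug into the context7.com URL pattern shown in the examples (`context7.com/{slug}/llms.txt`, optionally with `?topic=`). A lookup sketch, assuming `fetch-docs.js` keeps a similar table internally:

```javascript
// Known library → context7 slug table, as listed above.
const KNOWN_LIBRARIES = {
  "next.js": "vercel/next.js",
  "astro": "withastro/astro",
  "remix": "remix-run/remix",
  "shadcn/ui": "shadcn-ui/ui",
  "better auth": "better-auth/better-auth",
};

// Build the llms.txt URL for a library, optionally scoped to a topic.
function docsUrl(library, topic) {
  const slug = KNOWN_LIBRARIES[library.toLowerCase()];
  if (!slug) return null; // unknown library → fall back to WebSearch
  const base = `https://context7.com/${slug}/llms.txt`;
  return topic ? `${base}?topic=${encodeURIComponent(topic)}` : base;
}
```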
## Fallback
Scripts handle fallback automatically:
1. `fetch-docs.js` tries context7.com
2. If 404, script suggests WebSearch for llms.txt
3. If still unavailable: [Repository Analysis](./repo-analysis.md)
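The three-stage fallback above can be summarized as one decision function (illustrative names; the real scripts report this state differently):

```javascript
// Decide the next action given what each fallback stage has found so far.
function fallbackPlan({ context7Found, searchFound }) {
  if (context7Found) return "use-context7-llms-txt";
  if (searchFound) return "use-discovered-llms-txt"; // WebSearch located one
  return "repo-analysis";                            // see repo-analysis.md
}
```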

# Repository Analysis (No llms.txt)
**Use when:** llms.txt not available on context7.com or official site
**Speed:** ⚡⚡⚡ Slower (5-10min)
**Token usage:** 🔴 High
**Accuracy:** 🔍 Code-based
## When to Use
- Library not on context7.com
- No llms.txt on official site
- Need to analyze code structure
- Documentation incomplete
## Workflow
```
1. Find repository
→ WebSearch: "[library] github repository"
→ Verify: Official, active, has docs/
2. Clone repository
→ Bash: git clone [repo-url] /tmp/docs-analysis
→ Optional: checkout specific version/tag
3. Install Repomix (if needed)
→ Bash: npm install -g repomix
4. Pack repository
→ Bash: cd /tmp/docs-analysis && repomix --output repomix-output.xml
→ Repomix creates AI-friendly single file
5. Read packed file
→ Read: /tmp/docs-analysis/repomix-output.xml
→ Extract: README, docs/, examples/, API files
6. Analyze structure
→ Identify: Documentation sections
→ Extract: Installation, usage, API, examples
→ Note: Code patterns, best practices
7. Present findings
→ Source: Repository analysis
→ Caveat: Based on code, not official docs
→ Include: Repository health (stars, activity)
```
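Steps 5-6 amount to filtering the packed file down to its documentation sections. A sketch, assuming Repomix's XML output wraps each file in `<file path="...">...</file>` blocks (verify against your Repomix version's actual format):

```javascript
// Extract documentation-relevant files from a Repomix-packed repository.
const DOC_PATTERNS = [/^README/i, /^docs\//, /^examples\//];

function extractDocFiles(packed) {
  const files = [];
  const fileRe = /<file path="([^"]+)">([\s\S]*?)<\/file>/g;
  let m;
  while ((m = fileRe.exec(packed)) !== null) {
    const [, path, content] = m;
    if (DOC_PATTERNS.some((p) => p.test(path))) {
      files.push({ path, content: content.trim() });
    }
  }
  return files;
}
```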
## Example
**Obscure library without llms.txt:**
```bash
# 1. Find
WebSearch: "MyLibrary github repository"
# Found: https://github.com/org/mylibrary
# 2. Clone
git clone https://github.com/org/mylibrary /tmp/docs-analysis
# 3. Pack with Repomix
cd /tmp/docs-analysis
repomix --output repomix-output.xml
# 4. Read
Read: /tmp/docs-analysis/repomix-output.xml
# Single XML file with entire codebase
# 5. Extract documentation
- README.md: Installation, overview
- docs/: Usage guides, API reference
- examples/: Code samples
- src/: Implementation patterns
# 6. Present
Source: Repository analysis (no llms.txt)
Health: 1.2K stars, active
```
## Repomix Benefits
✅ Entire repo in single file
✅ Preserves directory structure
✅ AI-optimized format
✅ Includes metadata
## Alternative
If no GitHub repo exists:
→ Deploy multiple Researcher agents
→ Gather: Official site, blog posts, tutorials, Stack Overflow
→ Note: Quality varies, cross-reference sources

# Topic-Specific Documentation Search
**Use when:** User asks about specific feature/component/concept
**Speed:** ⚡ Fastest (10-15s)
**Token usage:** 🟢 Minimal
**Accuracy:** 🎯 Highly targeted
## Trigger Patterns
- "How do I use [FEATURE] in [LIBRARY]?"
- "[LIBRARY] [COMPONENT] documentation"
- "Implement [FEATURE] with [LIBRARY]"
- "[LIBRARY] [CONCEPT] guide"
## Workflow (Script-First)
```bash
# STEP 1: Execute detect-topic.js script
node scripts/detect-topic.js "<user query>"
# Returns: {"topic": "X", "library": "Y", "isTopicSpecific": true}
# STEP 2: Execute fetch-docs.js script (handles URL construction automatically)
node scripts/fetch-docs.js "<user query>"
# Script constructs: context7.com/{library}/llms.txt?topic={topic}
# Script handles fallback if topic URL fails
# Returns: llms.txt content with 1-5 URLs
# STEP 3: Process results based on URL count
# - 1-3 URLs: Read directly with WebFetch tool
# - 4-5 URLs: Deploy 2-3 Explorer agents in parallel
# STEP 4: Present findings
# Focus on specific feature: installation, usage, examples
```
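STEP 3's branch on URL count can be sketched as follows, assuming llms.txt lists one bare URL or markdown link per line (the common llms.txt convention):

```javascript
// Count URLs in fetched llms.txt content and pick a processing strategy.
function planFromLlmsTxt(text) {
  const urls = [...text.matchAll(/https?:\/\/\S+/g)]
    .map((m) => m[0].replace(/[)\].,]+$/, "")); // trim markdown-link tails
  const strategy = urls.length <= 3 ? "webfetch-directly" : "2-3-explorer-agents";
  return { urls, strategy };
}
```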
## Examples
**shadcn date picker:**
```bash
# Execute script (automatic URL construction)
node scripts/detect-topic.js "How do I use date picker in shadcn?"
# {"topic": "date", "library": "shadcn/ui", "isTopicSpecific": true}
node scripts/fetch-docs.js "How do I use date picker in shadcn?"
# Script fetches: context7.com/shadcn-ui/ui/llms.txt?topic=date
# Returns: 2-3 date-specific URLs
# Read URLs directly with WebFetch
# Present date picker documentation
```
**Next.js caching:**
```bash
# Execute scripts (no manual URL needed)
node scripts/detect-topic.js "Next.js caching strategies"
# {"topic": "cache", "library": "next.js", "isTopicSpecific": true}
node scripts/fetch-docs.js "Next.js caching strategies"
# Script fetches: context7.com/vercel/next.js/llms.txt?topic=cache
# Returns: 3-4 URLs
# Process URLs via 2 Explorer agents
# Present caching strategies
```
## Benefits
✅ Several times faster than a full-library search (10-15s vs 30-60s)
✅ No filtering needed
✅ Minimal context load
✅ Best user experience
## Fallback
If topic URL returns 404:
→ Fallback to [General Library Search](./library-search.md)