Complete Guide to Claude Concepts
A comprehensive reference guide covering Slash Commands, Subagents, Memory, MCP Protocol, and Agent Skills with tables, diagrams, and practical examples.
Table of Contents
- Slash Commands
- Subagents
- Memory
- MCP Protocol
- Agent Skills
- Plugins
- Hooks
- Checkpoints and Rewind
- Advanced Features
- Comparison & Integration
Slash Commands
Overview
Slash commands are user-invoked shortcuts stored as Markdown files that Claude Code can execute. They enable teams to standardize frequently-used prompts and workflows.
Architecture
File Structure
Command Organization Table
| Location | Scope | Availability | Use Case | Git Tracked |
|---|---|---|---|---|
| .claude/commands/ | Project-specific | Team members | Team workflows, shared standards | ✅ Yes |
| ~/.claude/commands/ | Personal | Individual user | Personal shortcuts across projects | ❌ No |
| Subdirectories | Namespaced | Based on parent | Organize by category | ✅ Yes |
Features & Capabilities
| Feature | Example | Supported |
|---|---|---|
| Shell script execution | bash scripts/deploy.sh | ✅ Yes |
| File references | @path/to/file.js | ✅ Yes |
| Bash integration | $(git log --oneline) | ✅ Yes |
| Arguments | /pr --verbose | ✅ Yes |
| MCP commands | /mcp__github__list_prs | ✅ Yes |
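The features above can be combined in a single command file. A hypothetical `.claude/commands/changelog.md` (the filename and prompt are illustrative, not a built-in command) might combine bash integration, a file reference, and arguments like this:

```markdown
---
name: Changelog Helper
description: Summarize recent commits into a changelog entry
tags: git, documentation
---
# Changelog Helper

Recent commits (injected via bash integration):
$(git log --oneline -10)

Using the conventions in @docs/changelog-style.md, draft a changelog
entry for these commits. Honor any flags passed as arguments
(e.g. /changelog --verbose for per-commit detail).
```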
Practical Examples
Example 1: Code Optimization Command
File: .claude/commands/optimize.md
---
name: Code Optimization
description: Analyze code for performance issues and suggest optimizations
tags: performance, analysis
---
# Code Optimization
Review the provided code for the following issues in order of priority:
1. **Performance bottlenecks** - identify O(n²) operations, inefficient loops
2. **Memory leaks** - find unreleased resources, circular references
3. **Algorithm improvements** - suggest better algorithms or data structures
4. **Caching opportunities** - identify repeated computations
5. **Concurrency issues** - find race conditions or threading problems
Format your response with:
- Issue severity (Critical/High/Medium/Low)
- Location in code
- Explanation
- Recommended fix with code example
Usage:
# User types in Claude Code
/optimize
# Claude loads the prompt and waits for code input
Example 2: Pull Request Helper Command
File: .claude/commands/pr.md
---
name: Prepare Pull Request
description: Clean up code, stage changes, and prepare a pull request
tags: git, workflow
---
# Pull Request Preparation Checklist
Before creating a PR, execute these steps:
1. Run linting: `prettier --write .`
2. Run tests: `npm test`
3. Review git diff: `git diff HEAD`
4. Stage changes: `git add .`
5. Create commit message following conventional commits:
- `fix:` for bug fixes
- `feat:` for new features
- `docs:` for documentation
- `refactor:` for code restructuring
- `test:` for test additions
- `chore:` for maintenance
6. Generate PR summary including:
- What changed
- Why it changed
- Testing performed
- Potential impacts
Usage:
/pr
# Claude runs through checklist and prepares the PR
Example 3: Hierarchical Documentation Generator
File: .claude/commands/docs/generate-api-docs.md
---
name: Generate API Documentation
description: Create comprehensive API documentation from source code
tags: documentation, api
---
# API Documentation Generator
Generate API documentation by:
1. Scanning all files in `/src/api/`
2. Extracting function signatures and JSDoc comments
3. Organizing by endpoint/module
4. Creating markdown with examples
5. Including request/response schemas
6. Adding error documentation
Output format:
- Markdown file in `/docs/api.md`
- Include curl examples for all endpoints
- Add TypeScript types
Command Lifecycle Diagram
Best Practices
| ✅ Do | ❌ Don't |
|---|---|
| Use clear, action-oriented names | Create commands for one-time tasks |
| Document trigger words in description | Build complex logic in commands |
| Keep commands focused on single task | Create redundant commands |
| Version control project commands | Hardcode sensitive information |
| Organize in subdirectories | Create long lists of commands |
| Use simple, readable prompts | Use abbreviated or cryptic wording |
Subagents
Overview
Subagents are specialized AI assistants with isolated context windows and customized system prompts. They enable delegated task execution while maintaining clean separation of concerns.
Architecture Diagram
Subagent Lifecycle
Subagent Configuration Table
| Configuration | Type | Purpose | Example |
|---|---|---|---|
| name | String | Agent identifier | code-reviewer |
| description | String | Purpose & trigger terms | Comprehensive code quality analysis |
| tools | List/String | Allowed capabilities | read, grep, diff, lint_runner |
| system_prompt | Markdown | Behavioral instructions | Custom guidelines |
Tool Access Hierarchy
Practical Examples
Example 1: Complete Subagent Setup
File: .claude/agents/code-reviewer.md
---
name: code-reviewer
description: Comprehensive code quality and maintainability analysis
tools: read, grep, diff, lint_runner
---
# Code Reviewer Agent
You are an expert code reviewer specializing in:
- Performance optimization
- Security vulnerabilities
- Code maintainability
- Testing coverage
- Design patterns
## Review Priorities (in order)
1. **Security Issues** - Authentication, authorization, data exposure
2. **Performance Problems** - O(n²) operations, memory leaks, inefficient queries
3. **Code Quality** - Readability, naming, documentation
4. **Test Coverage** - Missing tests, edge cases
5. **Design Patterns** - SOLID principles, architecture
## Review Output Format
For each issue:
- **Severity**: Critical / High / Medium / Low
- **Category**: Security / Performance / Quality / Testing / Design
- **Location**: File path and line number
- **Issue Description**: What's wrong and why
- **Suggested Fix**: Code example
- **Impact**: How this affects the system
## Example Review
### Issue: N+1 Query Problem
- **Severity**: High
- **Category**: Performance
- **Location**: src/user-service.ts:45
- **Issue**: Loop executes database query in each iteration
- **Fix**: Use JOIN or batch query
File: .claude/agents/test-engineer.md
---
name: test-engineer
description: Test strategy, coverage analysis, and automated testing
tools: read, write, bash, grep
---
# Test Engineer Agent
You are an expert at:
- Writing comprehensive test suites
- Ensuring high code coverage (>80%)
- Testing edge cases and error scenarios
- Performance benchmarking
- Integration testing
## Testing Strategy
1. **Unit Tests** - Individual functions/methods
2. **Integration Tests** - Component interactions
3. **End-to-End Tests** - Complete workflows
4. **Edge Cases** - Boundary conditions
5. **Error Scenarios** - Failure handling
## Test Output Requirements
- Use Jest for JavaScript/TypeScript
- Include setup/teardown for each test
- Mock external dependencies
- Document test purpose
- Include performance assertions when relevant
## Coverage Requirements
- Minimum 80% code coverage
- 100% for critical paths
- Report missing coverage areas
File: .claude/agents/documentation-writer.md
---
name: documentation-writer
description: Technical documentation, API docs, and user guides
tools: read, write, grep
---
# Documentation Writer Agent
You create:
- API documentation with examples
- User guides and tutorials
- Architecture documentation
- Changelog entries
- Code comment improvements
## Documentation Standards
1. **Clarity** - Use simple, clear language
2. **Examples** - Include practical code examples
3. **Completeness** - Cover all parameters and returns
4. **Structure** - Use consistent formatting
5. **Accuracy** - Verify against actual code
## Documentation Sections
### For APIs
- Description
- Parameters (with types)
- Returns (with types)
- Throws (possible errors)
- Examples (curl, JavaScript, Python)
- Related endpoints
### For Features
- Overview
- Prerequisites
- Step-by-step instructions
- Expected outcomes
- Troubleshooting
- Related topics
Example 2: Subagent Delegation in Action
# Scenario: Building a Payment Feature
## User Request
"Build a secure payment processing feature that integrates with Stripe"
## Main Agent Flow
1. **Planning Phase**
- Understands requirements
- Determines tasks needed
- Plans architecture
2. **Delegates to Code Reviewer Subagent**
- Task: "Review the payment processing implementation for security"
- Context: Auth, API keys, token handling
- Reviews for: SQL injection, key exposure, HTTPS enforcement
3. **Delegates to Test Engineer Subagent**
- Task: "Create comprehensive tests for payment flows"
- Context: Success scenarios, failures, edge cases
- Creates tests for: Valid payments, declined cards, network failures, webhooks
4. **Delegates to Documentation Writer Subagent**
- Task: "Document the payment API endpoints"
- Context: Request/response schemas
- Produces: API docs with curl examples, error codes
5. **Synthesis**
- Main agent collects all outputs
- Integrates findings
- Returns complete solution to user
Example 3: Tool Permission Scoping
Restrictive Setup - Limited to Specific Commands
---
name: secure-reviewer
description: Security-focused code review with minimal permissions
tools: read, grep
---
# Secure Code Reviewer
Reviews code for security vulnerabilities only.
This agent:
- ✅ Reads files to analyze
- ✅ Searches for patterns
- ❌ Cannot execute code
- ❌ Cannot modify files
- ❌ Cannot run tests
This ensures the reviewer doesn't accidentally break anything.
Extended Setup - All Tools for Implementation
---
name: implementation-agent
description: Full implementation capabilities for feature development
tools: read, write, bash, grep, edit, glob
---
# Implementation Agent
Builds features from specifications.
This agent:
- ✅ Reads specifications
- ✅ Writes new code files
- ✅ Runs build commands
- ✅ Searches codebase
- ✅ Edits existing files
- ✅ Finds files matching patterns
Full capabilities for independent feature development.
Subagent Context Management
When to Use Subagents
| Scenario | Use Subagent | Why |
|---|---|---|
| Complex feature with many steps | ✅ Yes | Separate concerns, prevent context pollution |
| Quick code review | ❌ No | Unnecessary overhead |
| Parallel task execution | ✅ Yes | Each subagent has own context |
| Specialized expertise needed | ✅ Yes | Custom system prompts |
| Long-running analysis | ✅ Yes | Prevents main context exhaustion |
| Single task | ❌ No | Adds latency unnecessarily |
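The parallel-execution row above can be illustrated with plain Python. Each "subagent" below is just a function with its own isolated context dict, standing in for a real subagent invocation; this is a conceptual sketch, not how Claude Code is implemented.

```python
from concurrent.futures import ThreadPoolExecutor

def run_subagent(name, task):
    """Stand-in for a subagent call: isolated context, one focused task."""
    context = {"agent": name, "task": task}  # no state shared with other agents
    return f"{context['agent']} completed: {context['task']}"

tasks = [
    ("code-reviewer", "review payment module for security"),
    ("test-engineer", "write tests for payment flows"),
    ("documentation-writer", "document payment API endpoints"),
]

# Each delegation runs in parallel with its own context;
# the "main agent" synthesizes the results afterwards.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda t: run_subagent(*t), tasks))

for r in results:
    print(r)
```

Because no context is shared, no delegation can pollute another's working state, which mirrors why parallel tasks are a good fit for subagents.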
Agent Teams
Agent Teams coordinate multiple agents working on related tasks. Rather than delegating to one subagent at a time, Agent Teams allow the main agent to orchestrate a group of agents that collaborate, share intermediate results, and work toward a common goal. This is useful for large-scale tasks like full-stack feature development where a frontend agent, backend agent, and testing agent work in parallel.
Memory
Overview
Memory enables Claude to retain context across sessions and conversations. It exists in two forms: automatic synthesis in claude.ai, and filesystem-based CLAUDE.md in Claude Code.
Memory Architecture
Memory Hierarchy in Claude Code (7 Tiers)
Claude Code loads memory from 7 tiers, listed from highest to lowest priority:
Memory Locations Table
| Tier | Location | Scope | Priority | Shared | Best For |
|---|---|---|---|---|---|
| 1. Managed Policy | Enterprise admin | Organization | Highest | All org users | Compliance, security policies |
| 2. Project | ./CLAUDE.md | Project | High | Team (Git) | Team standards, architecture |
| 3. Project Rules | .claude/rules/*.md | Project | High | Team (Git) | Modular project conventions |
| 4. User | ~/.claude/CLAUDE.md | Personal | Medium | Individual | Personal preferences |
| 5. User Rules | ~/.claude/rules/*.md | Personal | Medium | Individual | Personal rule modules |
| 6. Local | .claude/local/CLAUDE.md | Local | Low | Not shared | Machine-specific settings |
| 7. Auto Memory | Automatic | Session | Lowest | Individual | Learned preferences, patterns |
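The precedence rules in the table can be sketched as a simple merge, with higher-priority tiers applied last so their settings win. The tier names mirror the table; the merge logic and settings are illustrative, not Claude Code's actual loader.

```python
# Tiers listed from lowest to highest priority, so later entries win.
tiers = [
    ("auto_memory",    {"indent": "4-space"}),
    ("local",          {"db_url": "localhost"}),
    ("user_rules",     {"style": "functional"}),
    ("user",           {"indent": "tabs"}),
    ("project_rules",  {"test_framework": "jest"}),
    ("project",        {"indent": "2-space", "line_length": 100}),
    ("managed_policy", {"secrets_in_code": "forbidden"}),
]

effective = {}
for _name, settings in tiers:   # higher-priority tiers applied last
    effective.update(settings)

print(effective["indent"])      # project tier overrides user and auto memory
```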
Auto Memory
Auto Memory automatically captures user preferences and patterns observed during sessions. Claude learns from your interactions and remembers:
- Coding style preferences
- Common corrections you make
- Framework and tool choices
- Communication style preferences
Auto Memory works in the background and does not require manual configuration.
Memory Update Lifecycle
Practical Examples
Example 1: Project Memory Structure
File: ./CLAUDE.md
# Project Configuration
## Project Overview
- **Name**: E-commerce Platform
- **Tech Stack**: Node.js, PostgreSQL, React 18, Docker
- **Team Size**: 5 developers
- **Deadline**: Q4 2025
## Architecture
@docs/architecture.md
@docs/api-standards.md
@docs/database-schema.md
## Development Standards
### Code Style
- Use Prettier for formatting
- Use ESLint with airbnb config
- Maximum line length: 100 characters
- Use 2-space indentation
### Naming Conventions
- **Files**: kebab-case (user-controller.js)
- **Classes**: PascalCase (UserService)
- **Functions/Variables**: camelCase (getUserById)
- **Constants**: UPPER_SNAKE_CASE (API_BASE_URL)
- **Database Tables**: snake_case (user_accounts)
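The naming conventions above are mechanical enough to lint. A small checker (the regexes are one plausible interpretation of each convention, not an official tool):

```python
import re

# One regex per naming convention from the list above (illustrative).
CONVENTIONS = {
    "file":     re.compile(r"[a-z0-9]+(-[a-z0-9]+)*\.js"),      # kebab-case
    "class":    re.compile(r"[A-Z][A-Za-z0-9]*"),               # PascalCase
    "function": re.compile(r"[a-z][A-Za-z0-9]*"),               # camelCase
    "constant": re.compile(r"[A-Z][A-Z0-9]*(_[A-Z0-9]+)*"),     # UPPER_SNAKE_CASE
    "table":    re.compile(r"[a-z0-9]+(_[a-z0-9]+)*"),          # snake_case
}

def check(kind, name):
    """Return True if `name` matches the convention for `kind`."""
    return CONVENTIONS[kind].fullmatch(name) is not None

print(check("file", "user-controller.js"))   # True
print(check("class", "UserService"))         # True
print(check("constant", "API_BASE_URL"))     # True
print(check("function", "GetUserById"))      # False: should be camelCase
```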
### Git Workflow
- Branch names: `feature/description` or `fix/description`
- Commit messages: Follow conventional commits
- PR required before merge
- All CI/CD checks must pass
- Minimum 1 approval required
### Testing Requirements
- Minimum 80% code coverage
- All critical paths must have tests
- Use Jest for unit tests
- Use Cypress for E2E tests
- Test filenames: `*.test.ts` or `*.spec.ts`
### API Standards
- RESTful endpoints only
- JSON request/response
- Use HTTP status codes correctly
- Version API endpoints: `/api/v1/`
- Document all endpoints with examples
### Database
- Use migrations for schema changes
- Never hardcode credentials
- Use connection pooling
- Enable query logging in development
- Regular backups required
### Deployment
- Docker-based deployment
- Kubernetes orchestration
- Blue-green deployment strategy
- Automatic rollback on failure
- Database migrations run before deploy
## Common Commands
| Command | Purpose |
|---------|---------|
| `npm run dev` | Start development server |
| `npm test` | Run test suite |
| `npm run lint` | Check code style |
| `npm run build` | Build for production |
| `npm run migrate` | Run database migrations |
## Team Contacts
- Tech Lead: Sarah Chen (@sarah.chen)
- Product Manager: Mike Johnson (@mike.j)
- DevOps: Alex Kim (@alex.k)
## Known Issues & Workarounds
- PostgreSQL connection pooling limited to 20 during peak hours
- Workaround: Implement query queuing
- Safari 14 compatibility issues with async generators
- Workaround: Use Babel transpiler
## Related Projects
- Analytics Dashboard: `/projects/analytics`
- Mobile App: `/projects/mobile`
- Admin Panel: `/projects/admin`
Example 2: Directory-Specific Memory
File: ./src/api/CLAUDE.md
# API Module Standards
This file overrides root CLAUDE.md for everything in /src/api/
## API-Specific Standards
### Request Validation
- Use Zod for schema validation
- Always validate input
- Return 400 with validation errors
- Include field-level error details
### Authentication
- All endpoints require JWT token
- Token in Authorization header
- Token expires after 24 hours
- Implement refresh token mechanism
### Response Format
All responses must follow this structure:
```json
{
"success": true,
"data": { /* actual data */ },
"timestamp": "2025-11-06T10:30:00Z",
"version": "1.0"
}
```
### Error responses:
```json
{
"success": false,
"error": {
"code": "VALIDATION_ERROR",
"message": "User message",
"details": { /* field errors */ }
},
"timestamp": "2025-11-06T10:30:00Z"
}
```
### Pagination
- Use cursor-based pagination (not offset)
- Include `hasMore` boolean
- Limit max page size to 100
- Default page size: 20
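A minimal in-memory sketch of these pagination rules follows. The cursor here is simply the last returned id; a real API would use an opaque encoded cursor, and the data set is invented for illustration.

```python
# Cursor-based pagination: max page size 100, default 20, hasMore flag.
ITEMS = [{"id": i, "name": f"item-{i}"} for i in range(1, 251)]

def paginate(cursor=None, limit=20):
    limit = min(limit, 100)                       # cap page size at 100
    start = 0
    if cursor is not None:                        # resume after the cursor id
        start = next(i for i, it in enumerate(ITEMS) if it["id"] == cursor) + 1
    page = ITEMS[start:start + limit]
    return {
        "data": page,
        "hasMore": start + limit < len(ITEMS),
        "nextCursor": page[-1]["id"] if page else None,
    }

first = paginate()
print(len(first["data"]), first["hasMore"], first["nextCursor"])  # 20 True 20
second = paginate(cursor=first["nextCursor"])
print(second["data"][0]["id"])                                    # 21
```

Unlike offset pagination, resuming from an id stays correct even when earlier rows are inserted or deleted between requests.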
### Rate Limiting
- 1000 requests per hour for authenticated users
- 100 requests per hour for public endpoints
- Return 429 when exceeded
- Include retry-after header
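These limits can be enforced with a fixed-window counter. This is a sketch under simplifying assumptions (single process, in-memory counts); production systems usually use Redis and sliding windows.

```python
import time

class FixedWindowLimiter:
    """Fixed-window rate limiter sketch for the per-hour limits above."""

    def __init__(self, limit, window_seconds=3600):
        self.limit = limit
        self.window = window_seconds
        self.counts = {}          # (client, window_start) -> request count

    def check(self, client, now=None):
        now = time.time() if now is None else now
        window_start = int(now // self.window) * self.window
        key = (client, window_start)
        self.counts[key] = self.counts.get(key, 0) + 1
        if self.counts[key] > self.limit:
            retry_after = int(window_start + self.window - now)
            # Real middleware would return HTTP 429 plus a Retry-After header.
            return 429, {"Retry-After": str(retry_after)}
        return 200, {}

limiter = FixedWindowLimiter(limit=3)   # tiny limit to make the demo short
statuses = [limiter.check("alice", now=1000.0)[0] for _ in range(4)]
print(statuses)  # [200, 200, 200, 429]
```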
### Caching
- Use Redis for session caching
- Cache duration: 5 minutes default
- Invalidate on write operations
- Tag cache keys with resource type
Example 3: Personal Memory
File: ~/.claude/CLAUDE.md
# My Development Preferences
## About Me
- **Experience Level**: 8 years full-stack development
- **Preferred Languages**: TypeScript, Python
- **Communication Style**: Direct, with examples
- **Learning Style**: Visual diagrams with code
## Code Preferences
### Error Handling
I prefer explicit error handling with try-catch blocks and meaningful error messages.
Avoid generic errors. Always log errors for debugging.
### Comments
Use comments for WHY, not WHAT. Code should be self-documenting.
Comments should explain business logic or non-obvious decisions.
### Testing
I prefer TDD (test-driven development).
Write tests first, then implementation.
Focus on behavior, not implementation details.
### Architecture
I prefer modular, loosely-coupled design.
Use dependency injection for testability.
Separate concerns (Controllers, Services, Repositories).
## Debugging Preferences
- Use console.log with prefix: `[DEBUG]`
- Include context: function name, relevant variables
- Use stack traces when available
- Always include timestamps in logs
## Communication
- Explain complex concepts with diagrams
- Show concrete examples before explaining theory
- Include before/after code snippets
- Summarize key points at the end
## Project Organization
I organize my projects as:
```
project/
├── src/
│ ├── api/
│ ├── services/
│ ├── models/
│ └── utils/
├── tests/
├── docs/
└── docker/
```
## Tooling
- **IDE**: VS Code with vim keybindings
- **Terminal**: Zsh with Oh-My-Zsh
- **Format**: Prettier (100 char line length)
- **Linter**: ESLint with airbnb config
- **Test Framework**: Jest with React Testing Library
Example 4: Memory Update During Session
Session Interaction:
User: Remember that I prefer using React hooks instead of class components
for all new components.
Claude: I'm adding that to your memory. Which memory file should this go in?
1. Project memory (./CLAUDE.md)
2. Personal memory (~/.claude/CLAUDE.md)
User: Project memory
Claude: ✅ Memory saved!
Added to ./CLAUDE.md:
---
### Component Development
- Use functional components with React Hooks
- Prefer hooks over class components
- Custom hooks for reusable logic
- Use useCallback for event handlers
- Use useMemo for expensive computations
Memory in Claude Web/Desktop
Memory Synthesis Timeline
Example Memory Summary:
## Claude's Memory of User
### Professional Background
- Senior full-stack developer with 8 years experience
- Focus on TypeScript/Node.js backends and React frontends
- Active open source contributor
- Interested in AI and machine learning
### Project Context
- Currently building e-commerce platform
- Tech stack: Node.js, PostgreSQL, React 18, Docker
- Working with team of 5 developers
- Using CI/CD and blue-green deployments
### Communication Preferences
- Prefers direct, concise explanations
- Likes visual diagrams and examples
- Appreciates code snippets
- Explains business logic in comments
### Current Goals
- Improve API performance
- Increase test coverage to 90%
- Implement caching strategy
- Document architecture
Memory Features Comparison
| Feature | Claude Web/Desktop | Claude Code (CLAUDE.md) |
|---|---|---|
| Auto-synthesis | ✅ Every 24h | ❌ Manual |
| Cross-project | ✅ Shared | ❌ Project-specific |
| Team access | ✅ Shared projects | ✅ Git-tracked |
| Searchable | ✅ Built-in | ✅ Through /memory |
| Editable | ✅ In-chat | ✅ Direct file edit |
| Import/Export | ✅ Yes | ✅ Copy/paste |
| Persistent | ✅ 24h+ | ✅ Indefinite |
MCP Protocol
Overview
MCP (Model Context Protocol) is a standardized way for Claude to access external tools, APIs, and real-time data sources. Unlike Memory, MCP provides live access to changing data.
MCP Architecture
MCP Ecosystem
MCP Setup Process
Available MCP Servers Table
| MCP Server | Purpose | Common Tools | Auth | Real-time |
|---|---|---|---|---|
| Filesystem | File operations | read, write, delete | OS permissions | ✅ Yes |
| GitHub | Repository management | list_prs, create_issue, push | OAuth | ✅ Yes |
| Slack | Team communication | send_message, list_channels | Token | ✅ Yes |
| Database | SQL queries | query, insert, update | Credentials | ✅ Yes |
| Google Docs | Document access | read, write, share | OAuth | ✅ Yes |
| Asana | Project management | create_task, update_status | API Key | ✅ Yes |
| Stripe | Payment data | list_charges, create_invoice | API Key | ✅ Yes |
| Memory | Persistent memory | store, retrieve, delete | Local | ❌ No |
Practical Examples
Example 1: GitHub MCP Configuration
File: .mcp.json (project scope) or ~/.claude.json (user scope)
{
"mcpServers": {
"github": {
"command": "npx",
"args": ["@modelcontextprotocol/server-github"],
"env": {
"GITHUB_TOKEN": "${GITHUB_TOKEN}"
}
}
}
}
Available GitHub MCP Tools:
# GitHub MCP Tools
## Pull Request Management
- `list_prs` - List all PRs in repository
- `get_pr` - Get PR details including diff
- `create_pr` - Create new PR
- `update_pr` - Update PR description/title
- `merge_pr` - Merge PR to main branch
- `review_pr` - Add review comments
Example request:
```
/mcp__github__get_pr 456
# Returns:
Title: Add dark mode support
Author: @alice
Description: Implements dark theme using CSS variables
Status: OPEN
Reviewers: @bob, @charlie
```
## Issue Management
- `list_issues` - List all issues
- `get_issue` - Get issue details
- `create_issue` - Create new issue
- `close_issue` - Close issue
- `add_comment` - Add comment to issue
## Repository Information
- `get_repo_info` - Repository details
- `list_files` - File tree structure
- `get_file_content` - Read file contents
- `search_code` - Search across codebase
## Commit Operations
- `list_commits` - Commit history
- `get_commit` - Specific commit details
- `create_commit` - Create new commit
Example 2: Database MCP Setup
Configuration:
{
"mcpServers": {
"database": {
"command": "npx",
"args": ["@modelcontextprotocol/server-database"],
"env": {
"DATABASE_URL": "postgresql://user:pass@localhost/mydb"
}
}
}
}
Example Usage:
User: Fetch all users with more than 10 orders
Claude: I'll query your database to find that information.
# Using MCP database tool:
SELECT u.*, COUNT(o.id) as order_count
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
GROUP BY u.id
HAVING COUNT(o.id) > 10
ORDER BY order_count DESC;
# Results:
- Alice: 15 orders
- Bob: 12 orders
- Charlie: 11 orders
Example 3: Multi-MCP Workflow
Scenario: Daily Report Generation
# Daily Report Workflow using Multiple MCPs
## Setup
1. GitHub MCP - fetch PR metrics
2. Database MCP - query sales data
3. Slack MCP - post report
4. Filesystem MCP - save report
## Workflow
### Step 1: Fetch GitHub Data
/mcp__github__list_prs completed:true last:7days
Output:
- Total PRs: 42
- Average merge time: 2.3 hours
- Review turnaround: 1.1 hours
### Step 2: Query Database
SELECT COUNT(*) as sales, SUM(amount) as revenue
FROM orders
WHERE created_at > NOW() - INTERVAL '1 day'
Output:
- Sales: 247
- Revenue: $12,450
### Step 3: Generate Report
Combine data into HTML report
### Step 4: Save to Filesystem
Write report.html to /reports/
### Step 5: Post to Slack
Send summary to #daily-reports channel
Final Output:
✅ Report generated and posted
📊 42 PRs merged this week
💰 $12,450 in daily sales
Example 4: Filesystem MCP Operations
Configuration:
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": ["@modelcontextprotocol/server-filesystem", "/home/user/projects"]
}
}
}
Available Operations:
| Operation | Command | Purpose |
|---|---|---|
| List files | ls ~/projects | Show directory contents |
| Read file | cat src/main.ts | Read file contents |
| Write file | create docs/api.md | Create new file |
| Edit file | edit src/app.ts | Modify file |
| Search | grep "async function" | Search in files |
| Delete | rm old-file.js | Delete file |
MCP vs Memory: Decision Matrix
Request/Response Pattern
Agent Skills
Overview
Agent Skills are reusable, model-invoked capabilities packaged as folders containing instructions, scripts, and resources. Claude automatically detects and uses relevant skills.
Skill Architecture
Skill Loading Process
Skill Types & Locations Table
| Type | Location | Scope | Shared | Sync | Best For |
|---|---|---|---|---|---|
| Pre-built | Built-in | Global | All users | Auto | Document creation |
| Personal | ~/.claude/skills/ | Individual | No | Manual | Personal automation |
| Project | .claude/skills/ | Team | Yes | Git | Team standards |
| Plugin | Via plugin install | Varies | Depends | Auto | Integrated features |
Pre-built Skills
Bundled Skills
Claude Code now includes 5 bundled skills available out of the box:
| Skill | Command | Purpose |
|---|---|---|
| Simplify | /simplify | Simplify complex code or explanations |
| Batch | /batch | Run operations across multiple files or items |
| Debug | /debug | Systematic debugging of issues with root cause analysis |
| Loop | /loop | Schedule recurring tasks on a timer |
| Claude API | /claude-api | Interact with the Anthropic API directly |
These bundled skills are always available and do not require installation or configuration.
Practical Examples
Example 1: Custom Code Review Skill
Directory Structure:
~/.claude/skills/code-review/
├── SKILL.md
├── templates/
│ ├── review-checklist.md
│ └── finding-template.md
└── scripts/
├── analyze-metrics.py
└── compare-complexity.py
File: ~/.claude/skills/code-review/SKILL.md
---
name: Code Review Specialist
description: Comprehensive code review with security, performance, and quality analysis
version: "1.0.0"
tags:
- code-review
- quality
- security
when_to_use: When users ask to review code, analyze code quality, or evaluate pull requests
effort: high
shell: bash
---
# Code Review Skill
This skill provides comprehensive code review capabilities focusing on:
1. **Security Analysis**
- Authentication/authorization issues
- Data exposure risks
- Injection vulnerabilities
- Cryptographic weaknesses
- Sensitive data logging
2. **Performance Review**
- Algorithm efficiency (Big O analysis)
- Memory optimization
- Database query optimization
- Caching opportunities
- Concurrency issues
3. **Code Quality**
- SOLID principles
- Design patterns
- Naming conventions
- Documentation
- Test coverage
4. **Maintainability**
- Code readability
- Function size (should be < 50 lines)
- Cyclomatic complexity
- Dependency management
- Type safety
## Review Template
For each piece of code reviewed, provide:
### Summary
- Overall quality assessment (1-5)
- Key findings count
- Recommended priority areas
### Critical Issues (if any)
- **Issue**: Clear description
- **Location**: File and line number
- **Impact**: Why this matters
- **Severity**: Critical/High/Medium
- **Fix**: Code example
### Findings by Category
#### Security (if issues found)
List security vulnerabilities with examples
#### Performance (if issues found)
List performance problems with complexity analysis
#### Quality (if issues found)
List code quality issues with refactoring suggestions
#### Maintainability (if issues found)
List maintainability problems with improvements
Python Script: analyze-metrics.py
#!/usr/bin/env python3
import re
import sys
def analyze_code_metrics(code):
"""Analyze code for common metrics."""
# Count functions
functions = len(re.findall(r'^def\s+\w+', code, re.MULTILINE))
# Count classes
classes = len(re.findall(r'^class\s+\w+', code, re.MULTILINE))
# Average line length
lines = code.split('\n')
avg_length = sum(len(l) for l in lines) / len(lines) if lines else 0
# Estimate complexity
complexity = len(re.findall(r'\b(if|elif|else|for|while|and|or)\b', code))
return {
'functions': functions,
'classes': classes,
'avg_line_length': avg_length,
'complexity_score': complexity
}
if __name__ == '__main__':
with open(sys.argv[1], 'r') as f:
code = f.read()
metrics = analyze_code_metrics(code)
for key, value in metrics.items():
print(f"{key}: {value:.2f}")
Python Script: compare-complexity.py
#!/usr/bin/env python3
"""
Compare cyclomatic complexity of code before and after changes.
Helps identify if refactoring actually simplifies code structure.
"""
import re
import sys
from typing import Dict, Tuple
class ComplexityAnalyzer:
"""Analyze code complexity metrics."""
def __init__(self, code: str):
self.code = code
self.lines = code.split('\n')
def calculate_cyclomatic_complexity(self) -> int:
"""
Calculate cyclomatic complexity using McCabe's method.
Count decision points: if, elif, else, for, while, except, and, or
"""
complexity = 1 # Base complexity
# Count decision points
decision_patterns = [
r'\bif\b',
r'\belif\b',
r'\bfor\b',
r'\bwhile\b',
r'\bexcept\b',
r'\band\b(?!$)',
r'\bor\b(?!$)'
]
for pattern in decision_patterns:
matches = re.findall(pattern, self.code)
complexity += len(matches)
return complexity
def calculate_cognitive_complexity(self) -> int:
"""
Calculate cognitive complexity - how hard is it to understand?
Based on nesting depth and control flow.
"""
cognitive = 0
nesting_depth = 0
for line in self.lines:
# Track nesting depth
if re.search(r'^\s*(if|for|while|def|class|try)\b', line):
nesting_depth += 1
cognitive += nesting_depth
elif re.search(r'^\s*(elif|else|except|finally)\b', line):
cognitive += nesting_depth
# Reduce nesting when unindenting
if line and not line[0].isspace():
nesting_depth = 0
return cognitive
def calculate_maintainability_index(self) -> float:
"""
Maintainability Index ranges from 0-100.
> 85: Excellent
> 65: Good
> 50: Fair
< 50: Poor
"""
lines = len(self.lines)
cyclomatic = self.calculate_cyclomatic_complexity()
cognitive = self.calculate_cognitive_complexity()
# Simplified MI calculation
mi = 171 - 5.2 * (cyclomatic / lines) - 0.23 * (cognitive) - 16.2 * (lines / 1000)
return max(0, min(100, mi))
def get_complexity_report(self) -> Dict:
"""Generate comprehensive complexity report."""
return {
'cyclomatic_complexity': self.calculate_cyclomatic_complexity(),
'cognitive_complexity': self.calculate_cognitive_complexity(),
'maintainability_index': round(self.calculate_maintainability_index(), 2),
'lines_of_code': len(self.lines),
'avg_line_length': round(sum(len(l) for l in self.lines) / len(self.lines), 2) if self.lines else 0
}
def compare_files(before_file: str, after_file: str) -> None:
    """Compare complexity metrics between two code versions."""
    with open(before_file, 'r') as f:
        before_code = f.read()
    with open(after_file, 'r') as f:
        after_code = f.read()
    before_analyzer = ComplexityAnalyzer(before_code)
    after_analyzer = ComplexityAnalyzer(after_code)
    before_metrics = before_analyzer.get_complexity_report()
    after_metrics = after_analyzer.get_complexity_report()
    print("=" * 60)
    print("CODE COMPLEXITY COMPARISON")
    print("=" * 60)
    print("\nBEFORE:")
    print(f" Cyclomatic Complexity: {before_metrics['cyclomatic_complexity']}")
    print(f" Cognitive Complexity: {before_metrics['cognitive_complexity']}")
    print(f" Maintainability Index: {before_metrics['maintainability_index']}")
    print(f" Lines of Code: {before_metrics['lines_of_code']}")
    print(f" Avg Line Length: {before_metrics['avg_line_length']}")
    print("\nAFTER:")
    print(f" Cyclomatic Complexity: {after_metrics['cyclomatic_complexity']}")
    print(f" Cognitive Complexity: {after_metrics['cognitive_complexity']}")
    print(f" Maintainability Index: {after_metrics['maintainability_index']}")
    print(f" Lines of Code: {after_metrics['lines_of_code']}")
    print(f" Avg Line Length: {after_metrics['avg_line_length']}")
    print("\nCHANGES:")
    cyclomatic_change = after_metrics['cyclomatic_complexity'] - before_metrics['cyclomatic_complexity']
    cognitive_change = after_metrics['cognitive_complexity'] - before_metrics['cognitive_complexity']
    mi_change = after_metrics['maintainability_index'] - before_metrics['maintainability_index']
    loc_change = after_metrics['lines_of_code'] - before_metrics['lines_of_code']
    print(f" Cyclomatic Complexity: {cyclomatic_change:+d}")
    print(f" Cognitive Complexity: {cognitive_change:+d}")
    print(f" Maintainability Index: {mi_change:+.2f}")
    print(f" Lines of Code: {loc_change:+d}")
    print("\nASSESSMENT:")
    if mi_change > 0:
        print(" ✅ Code is MORE maintainable")
    elif mi_change < 0:
        print(" ⚠️ Code is LESS maintainable")
    else:
        print(" ➡️ Maintainability unchanged")
    if cyclomatic_change < 0:
        print(" ✅ Complexity DECREASED")
    elif cyclomatic_change > 0:
        print(" ⚠️ Complexity INCREASED")
    else:
        print(" ➡️ Complexity unchanged")
    print("=" * 60)
if __name__ == '__main__':
    if len(sys.argv) != 3:
        print("Usage: python compare-complexity.py <before_file> <after_file>")
        sys.exit(1)
    compare_files(sys.argv[1], sys.argv[2])
Template: review-checklist.md
# Code Review Checklist
## Security Checklist
- [ ] No hardcoded credentials or secrets
- [ ] Input validation on all user inputs
- [ ] SQL injection prevention (parameterized queries)
- [ ] CSRF protection on state-changing operations
- [ ] XSS prevention with proper escaping
- [ ] Authentication checks on protected endpoints
- [ ] Authorization checks on resources
- [ ] Secure password hashing (bcrypt, argon2)
- [ ] No sensitive data in logs
- [ ] HTTPS enforced
## Performance Checklist
- [ ] No N+1 queries
- [ ] Appropriate use of indexes
- [ ] Caching implemented where beneficial
- [ ] No blocking operations on main thread
- [ ] Async/await used correctly
- [ ] Large datasets paginated
- [ ] Database connections pooled
- [ ] Regular expressions optimized
- [ ] No unnecessary object creation
- [ ] Memory leaks prevented
## Quality Checklist
- [ ] Functions < 50 lines
- [ ] Clear variable naming
- [ ] No duplicate code
- [ ] Proper error handling
- [ ] Comments explain WHY, not WHAT
- [ ] No console.logs in production
- [ ] Type checking (TypeScript/JSDoc)
- [ ] SOLID principles followed
- [ ] Design patterns applied correctly
- [ ] Self-documenting code
## Testing Checklist
- [ ] Unit tests written
- [ ] Edge cases covered
- [ ] Error scenarios tested
- [ ] Integration tests present
- [ ] Coverage > 80%
- [ ] No flaky tests
- [ ] Mock external dependencies
- [ ] Clear test names
Template: finding-template.md
# Code Review Finding Template
Use this template when documenting each issue found during code review.
---
## Issue: [TITLE]
### Severity
- [ ] Critical (blocks deployment)
- [ ] High (should fix before merge)
- [ ] Medium (should fix soon)
- [ ] Low (nice to have)
### Category
- [ ] Security
- [ ] Performance
- [ ] Code Quality
- [ ] Maintainability
- [ ] Testing
- [ ] Design Pattern
- [ ] Documentation
### Location
**File:** `src/components/UserCard.tsx`
**Lines:** 45-52
**Function/Method:** `renderUserDetails()`
### Issue Description
**What:** Describe what the issue is.
**Why it matters:** Explain the impact and why this needs to be fixed.
**Current behavior:** Show the problematic code or behavior.
**Expected behavior:** Describe what should happen instead.
### Code Example
#### Current (Problematic)
```typescript
// Shows the N+1 query problem
const users = fetchUsers();
users.forEach(user => {
const posts = fetchUserPosts(user.id); // Query per user!
renderUserPosts(posts);
});
```
#### Suggested Fix
```typescript
// Optimized with JOIN query
const usersWithPosts = fetchUsersWithPosts();
usersWithPosts.forEach(({ user, posts }) => {
renderUserPosts(posts);
});
```
### Impact Analysis
| Aspect | Impact | Severity |
|--------|--------|----------|
| Performance | 100+ queries for 20 users | High |
| User Experience | Slow page load | High |
| Scalability | Breaks at scale | Critical |
| Maintainability | Hard to debug | Medium |
### Related Issues
- Similar issue in `AdminUserList.tsx` line 120
- Related PR: #456
- Related issue: #789
### Additional Resources
- [N+1 Query Problem](https://en.wikipedia.org/wiki/N%2B1_problem)
- [Database Join Documentation](https://docs.example.com/joins)
### Reviewer Notes
- This is a common pattern in this codebase
- Consider adding this to the code style guide
- Might be worth creating a helper function
### Author Response (for feedback)
*To be filled by the code author:*
- [ ] Fix implemented in commit: `abc123`
- [ ] Fix status: Complete / In Progress / Needs Discussion
- [ ] Questions or concerns: (describe)
---
## Finding Statistics (for Reviewer)
When reviewing multiple findings, track:
- **Total Issues Found:** X
- **Critical:** X
- **High:** X
- **Medium:** X
- **Low:** X
**Recommendation:** ✅ Approve / ⚠️ Request Changes / 🔄 Needs Discussion
**Overall Code Quality:** 1-5 stars
Usage Example
User: Review this React component code
Claude: I'll conduct a comprehensive code review using my Code Review Skill.
[Loads SKILL.md]
[Analyzes code against checklist]
[Runs Python metrics script]
[Applies templates]
## Code Review Results
### Summary
Quality Score: 3.5/5
Key Findings: 7 items
Priority Areas: Performance optimization, error handling
### Critical Issues: 1
**Issue**: N+1 Query Problem
**Location**: UserComponent.tsx, lines 45-52
**Impact**: Renders 100+ database queries for list of 20 users
**Severity**: Critical
**Fix**:
```javascript
// Before: N+1 queries
const users = fetchUsers();
users.forEach(user => fetchUserPosts(user.id)); // 20+ queries
// After: Single query with JOIN
const users = fetchUsersWithPosts(); // 1 query
```
### Performance Findings
- Missing pagination on large lists
- Recommendation: Use React.memo() for items
- Database query: Could be optimized with index
### Quality Findings
- Function at line 20 is 127 lines long (max: 50)
- Missing error boundary
- Props should have TypeScript types
Example 2: Brand Voice Skill
Directory Structure:
.claude/skills/brand-voice/
├── SKILL.md
├── brand-guidelines.md
├── tone-examples.md
└── templates/
    ├── email-template.txt
    ├── social-post-template.txt
    └── blog-post-template.md
File: .claude/skills/brand-voice/SKILL.md
---
name: Brand Voice Consistency
description: Ensure all communication matches brand voice and tone guidelines
tags:
- brand
- writing
- consistency
when_to_use: When creating marketing copy, customer communications, or public-facing content
---
# Brand Voice Skill
## Overview
This skill ensures all communications maintain consistent brand voice, tone, and messaging.
## Brand Identity
### Mission
Help teams automate their development workflows with AI
### Values
- **Simplicity**: Make complex things simple
- **Reliability**: Rock-solid execution
- **Empowerment**: Enable human creativity
### Tone of Voice
- **Friendly but professional** - approachable without being casual
- **Clear and concise** - avoid jargon, explain technical concepts simply
- **Confident** - we know what we're doing
- **Empathetic** - understand user needs and pain points
## Writing Guidelines
### Do's ✅
- Use "you" when addressing readers
- Use active voice: "Claude generates reports" not "Reports are generated by Claude"
- Start with value proposition
- Use concrete examples
- Keep sentences under 20 words
- Use lists for clarity
- Include calls-to-action
### Don'ts ❌
- Don't use corporate jargon
- Don't patronize or oversimplify
- Don't use "we believe" or "we think"
- Don't use ALL CAPS except for emphasis
- Don't create walls of text
- Don't assume technical knowledge
## Vocabulary
### ✅ Preferred Terms
- Claude (not "the Claude AI")
- Code generation (not "auto-coding")
- Agent (not "bot")
- Streamline (not "revolutionize")
- Integrate (not "synergize")
### ❌ Avoid Terms
- "Cutting-edge" (overused)
- "Game-changer" (vague)
- "Leverage" (corporate-speak)
- "Utilize" (use "use")
- "Paradigm shift" (unclear)Examples
✅ Good Example
"Claude automates your code review process. Instead of manually checking each PR, Claude reviews security, performance, and quality—saving your team hours every week."
Why it works: Clear value, specific benefits, action-oriented
❌ Bad Example
"Claude leverages cutting-edge AI to provide comprehensive software development solutions."
Why it doesn't work: Vague, corporate jargon, no specific value
Template: Email
Subject: [Clear, benefit-driven subject]
Hi [Name],
[Opening: What's the value for them]
[Body: How it works / What they'll get]
[Specific example or benefit]
[Call to action: Clear next step]
Best regards,
[Name]
Template: Social Media
[Hook: Grab attention in first line]
[2-3 lines: Value or interesting fact]
[Call to action: Link, question, or engagement]
[Emoji: 1-2 max for visual interest]
File: tone-examples.md
Exciting announcement:
"Save 8 hours per week on code reviews. Claude reviews your PRs automatically."
Empathetic support:
"We know deployments can be stressful. Claude handles testing so you don't have to worry."
Confident product feature:
"Claude doesn't just suggest code. It understands your architecture and maintains consistency."
Educational blog post:
"Let's explore how agents improve code review workflows. Here's what we learned..."Example 3: Documentation Generator Skill
File: .claude/skills/doc-generator/SKILL.md
---
name: API Documentation Generator
description: Generate comprehensive, accurate API documentation from source code
version: "1.0.0"
tags:
- documentation
- api
- automation
when_to_use: When creating or updating API documentation
---
# API Documentation Generator Skill
## Generates
- OpenAPI/Swagger specifications
- API endpoint documentation
- SDK usage examples
- Integration guides
- Error code references
- Authentication guides
## Documentation Structure
### For Each Endpoint
```markdown
## GET /api/v1/users/:id
### Description
Brief explanation of what this endpoint does
### Parameters
| Name | Type | Required | Description |
|------|------|----------|-------------|
| id | string | Yes | User ID |
### Response
**200 Success**
```json
{
"id": "usr_123",
"name": "John Doe",
"email": "john@example.com",
"created_at": "2025-01-15T10:30:00Z"
}
```
**404 Not Found**
```json
{
"error": "USER_NOT_FOUND",
"message": "User does not exist"
}
```
### Examples
**cURL**
```bash
curl -X GET "https://api.example.com/api/v1/users/usr_123" \
-H "Authorization: Bearer YOUR_TOKEN"
```
**JavaScript**
```javascript
const user = await fetch('/api/v1/users/usr_123', {
headers: { 'Authorization': 'Bearer token' }
}).then(r => r.json());
```
**Python**
```python
response = requests.get(
'https://api.example.com/api/v1/users/usr_123',
headers={'Authorization': 'Bearer token'}
)
user = response.json()
```
## Python Script: generate-docs.py
```python
#!/usr/bin/env python3
import ast
import json
from typing import Dict, List
class APIDocExtractor(ast.NodeVisitor):
    """Extract API documentation from Python source code."""
    def __init__(self):
        self.endpoints = []
    def visit_FunctionDef(self, node):
        """Extract function documentation."""
        if node.name.startswith('get_') or node.name.startswith('post_'):
            doc = ast.get_docstring(node)
            endpoint = {
                'name': node.name,
                'docstring': doc,
                'params': [arg.arg for arg in node.args.args],
                'returns': self._extract_return_type(node)
            }
            self.endpoints.append(endpoint)
        self.generic_visit(node)
    def _extract_return_type(self, node):
        """Extract return type from function annotation."""
        if node.returns:
            return ast.unparse(node.returns)
        return "Any"
def generate_markdown_docs(endpoints: List[Dict]) -> str:
    """Generate markdown documentation from endpoints."""
    docs = "# API Documentation\n\n"
    for endpoint in endpoints:
        docs += f"## {endpoint['name']}\n\n"
        docs += f"{endpoint['docstring']}\n\n"
        docs += f"**Parameters**: {', '.join(endpoint['params'])}\n\n"
        docs += f"**Returns**: {endpoint['returns']}\n\n"
        docs += "---\n\n"
    return docs
if __name__ == '__main__':
    import sys
    with open(sys.argv[1], 'r') as f:
        tree = ast.parse(f.read())
    extractor = APIDocExtractor()
    extractor.visit(tree)
    markdown = generate_markdown_docs(extractor.endpoints)
    print(markdown)
```
Skill Discovery & Invocation
Skill vs Other Features
Claude Code Plugins
Overview
Claude Code Plugins are bundled collections of customizations (slash commands, subagents, MCP servers, and hooks) that install with a single command. They represent the highest-level extension mechanism—combining multiple features into cohesive, shareable packages.
Architecture
Plugin Loading Process
Plugin Types & Distribution
| Type | Scope | Shared | Authority | Examples |
|---|---|---|---|---|
| Official | Global | All users | Anthropic | PR Review, Security Guidance |
| Community | Public | All users | Community | DevOps, Data Science |
| Organization | Internal | Team members | Company | Internal standards, tools |
| Personal | Individual | Single user | Developer | Custom workflows |
Plugin Definition Structure
---
name: plugin-name
version: "1.0.0"
description: "What this plugin does"
author: "Your Name"
license: MIT

# Plugin metadata
tags:
  - category
  - use-case

# Requirements
requires:
  - claude-code: ">=1.0.0"

# Components bundled
components:
  - type: commands
    path: commands/
  - type: agents
    path: agents/
  - type: mcp
    path: mcp/
  - type: hooks
    path: hooks/

# Configuration
config:
  auto_load: true
  enabled_by_default: true
---
Plugin Structure
my-plugin/
├── .claude-plugin/
│   └── plugin.json
├── commands/
│   ├── task-1.md
│   ├── task-2.md
│   └── workflows/
├── agents/
│   ├── specialist-1.md
│   ├── specialist-2.md
│   └── configs/
├── skills/
│   ├── skill-1.md
│   └── skill-2.md
├── hooks/
│   └── hooks.json
├── .mcp.json
├── .lsp.json
├── settings.json
├── templates/
│   └── issue-template.md
├── scripts/
│   ├── helper-1.sh
│   └── helper-2.py
├── docs/
│   ├── README.md
│   └── USAGE.md
└── tests/
    └── plugin.test.js
Practical Examples
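Before publishing, the manifest at the root of this layout can be sanity-checked in a few lines. The sketch below is illustrative only: the required field names are taken from the plugin.json examples in this section, not from an official schema, and the semver check is deliberately loose.

```python
import json

# Fields a plugin manifest is assumed to need, based on the plugin.json
# examples in this guide (illustrative, not an official schema).
REQUIRED_FIELDS = ("name", "version", "description")

def validate_manifest(text: str) -> list:
    """Return a list of problems found in a plugin.json manifest string."""
    try:
        manifest = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = []
    for field in REQUIRED_FIELDS:
        if field not in manifest:
            problems.append(f"missing required field: {field}")
    # Expect a dotted numeric version string such as "1.0.0".
    version = manifest.get("version", "")
    if version and not all(part.isdigit() for part in version.split(".")):
        problems.append(f"version does not look like semver: {version}")
    return problems

sample = '{"name": "pr-review", "version": "1.0.0", "description": "PR review workflow"}'
print(validate_manifest(sample))  # → []
```

A check like this fits naturally in the plugin's own tests/ directory or a pre-publish script.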
Example 1: PR Review Plugin
File: .claude-plugin/plugin.json
{
  "name": "pr-review",
  "version": "1.0.0",
  "description": "Complete PR review workflow with security, testing, and docs",
  "author": {
    "name": "Anthropic"
  },
  "license": "MIT"
}
File: commands/review-pr.md
---
name: Review PR
description: Start comprehensive PR review with security and testing checks
---
# PR Review
This command initiates a complete pull request review including:
1. Security analysis
2. Test coverage verification
3. Documentation updates
4. Code quality checks
5. Performance impact assessment
File: agents/security-reviewer.md
---
name: security-reviewer
description: Security-focused code review
tools: read, grep, diff
---
# Security Reviewer
Specializes in finding security vulnerabilities:
- Authentication/authorization issues
- Data exposure
- Injection attacks
- Secure configuration
Installation:
/plugin install pr-review
# Result:
# ✅ 3 slash commands installed
# ✅ 3 subagents configured
# ✅ 2 MCP servers connected
# ✅ 4 hooks registered
# ✅ Ready to use!
Example 2: DevOps Plugin
Components:
devops-automation/
├── commands/
│   ├── deploy.md
│   ├── rollback.md
│   ├── status.md
│   └── incident.md
├── agents/
│   ├── deployment-specialist.md
│   ├── incident-commander.md
│   └── alert-analyzer.md
├── mcp/
│   ├── github-config.json
│   ├── kubernetes-config.json
│   └── prometheus-config.json
├── hooks/
│   ├── pre-deploy.js
│   ├── post-deploy.js
│   └── on-error.js
└── scripts/
    ├── deploy.sh
    ├── rollback.sh
    └── health-check.sh
Example 3: Documentation Plugin
Bundled Components:
documentation/
├── commands/
│   ├── generate-api-docs.md
│   ├── generate-readme.md
│   ├── sync-docs.md
│   └── validate-docs.md
├── agents/
│   ├── api-documenter.md
│   ├── code-commentator.md
│   └── example-generator.md
├── mcp/
│   ├── github-docs-config.json
│   └── slack-announce-config.json
└── templates/
    ├── api-endpoint.md
    ├── function-docs.md
    └── adr-template.md
Plugin Marketplace
Plugin Installation & Lifecycle
Plugin Features Comparison
| Feature | Slash Command | Skill | Subagent | Plugin |
|---|---|---|---|---|
| Installation | Manual copy | Manual copy | Manual config | One command |
| Setup Time | 5 minutes | 10 minutes | 15 minutes | 2 minutes |
| Bundling | Single file | Single file | Single file | Multiple |
| Versioning | Manual | Manual | Manual | Automatic |
| Team Sharing | Copy file | Copy file | Copy file | Install ID |
| Updates | Manual | Manual | Manual | Auto-available |
| Dependencies | None | None | None | May include |
| Marketplace | No | No | No | Yes |
| Distribution | Repository | Repository | Repository | Marketplace |
Plugin Use Cases
| Use Case | Recommendation | Why |
|---|---|---|
| Team Onboarding | ✅ Use Plugin | Instant setup, all configurations |
| Framework Setup | ✅ Use Plugin | Bundles framework-specific commands |
| Enterprise Standards | ✅ Use Plugin | Central distribution, version control |
| Quick Task Automation | ❌ Use Command | Overkill complexity |
| Single Domain Expertise | ❌ Use Skill | Too heavy, use skill instead |
| Specialized Analysis | ❌ Use Subagent | Create manually or use skill |
| Live Data Access | ❌ Use MCP | Standalone, don't bundle |
When to Create a Plugin
Publishing a Plugin
Steps to publish:
- Create plugin structure with all components
- Write .claude-plugin/plugin.json manifest
- Create README.md with documentation
- Test locally with /plugin install ./my-plugin
- Submit to plugin marketplace
- Get reviewed and approved
- Published on marketplace
- Users can install with one command
Example submission:
# PR Review Plugin
## Description
Complete PR review workflow with security, testing, and documentation checks.
## What's Included
- 3 slash commands for different review types
- 3 specialized subagents
- GitHub and CodeQL MCP integration
- Automated security scanning hooks
## Installation
```bash
/plugin install pr-review
```
## Features
✅ Security analysis
✅ Test coverage checking
✅ Documentation verification
✅ Code quality assessment
✅ Performance impact analysis
## Usage
```bash
/review-pr
/check-security
/check-tests
```
## Requirements
- Claude Code 1.0+
- GitHub access
- CodeQL (optional)
Plugin vs Manual Configuration
Manual Setup (2+ hours):
- Install slash commands one by one
- Create subagents individually
- Configure MCPs separately
- Set up hooks manually
- Document everything
- Share with team (hope they configure correctly)
With Plugin (2 minutes):
/plugin install pr-review
# ✅ Everything installed and configured
# ✅ Ready to use immediately
# ✅ Team can reproduce exact setup
Comparison & Integration
Feature Comparison Matrix
| Feature | Invocation | Persistence | Scope | Use Case |
|---|---|---|---|---|
| Slash Commands | Manual (/cmd) | Session only | Single command | Quick shortcuts |
| Subagents | Auto-delegated | Isolated context | Specialized task | Task distribution |
| Memory | Auto-loaded | Cross-session | User/team context | Long-term learning |
| MCP Protocol | Auto-queried | Real-time external | Live data access | Dynamic information |
| Skills | Auto-invoked | Filesystem-based | Reusable expertise | Automated workflows |
Interaction Timeline
Practical Integration Example: Customer Support Automation
Architecture
Request Flow
## Customer Support Request Flow
### 1. Incoming Email
"I'm getting error 500 when trying to upload files. This is blocking my workflow!"
### 2. Memory Lookup
- Loads CLAUDE.md with support standards
- Checks customer history: VIP customer, 3rd incident this month
### 3. MCP Queries
- GitHub MCP: List open issues (finds related bug report)
- Database MCP: Check system status (no outages reported)
- Slack MCP: Check if engineering is aware
### 4. Skill Detection & Loading
- Request matches "Technical Support" skill
- Loads support response template from Skill
### 5. Subagent Delegation
- Routes to Tech Support Subagent
- Provides context: customer history, error details, known issues
- Subagent has full access to: read, bash, grep tools
### 6. Subagent Processing
Tech Support Subagent:
- Searches codebase for 500 error in file upload
- Finds recent change in commit 8f4a2c
- Creates workaround documentation
### 7. Skill Execution
Response Generator Skill:
- Uses Brand Voice guidelines
- Formats response with empathy
- Includes workaround steps
- Links to related documentation
### 8. MCP Output
- Posts update to #support Slack channel
- Tags engineering team
- Updates ticket in Jira MCP
### 9. Response
Customer receives:
- Empathetic acknowledgment
- Explanation of cause
- Immediate workaround
- Timeline for permanent fix
- Link to related issues
Complete Feature Orchestration
When to Use Each Feature
Selection Decision Tree
Summary Table
| Aspect | Slash Commands | Subagents | Memory | MCP | Skills | Plugins |
|---|---|---|---|---|---|---|
| Setup Difficulty | Easy | Medium | Easy | Medium | Medium | Easy |
| Learning Curve | Low | Medium | Low | Medium | Medium | Low |
| Team Benefit | High | High | Medium | High | High | Very High |
| Automation Level | Low | High | Medium | High | High | Very High |
| Context Management | Single-session | Isolated | Persistent | Real-time | Persistent | All features |
| Maintenance Burden | Low | Medium | Low | Medium | Medium | Low |
| Scalability | Good | Excellent | Good | Excellent | Excellent | Excellent |
| Shareability | Fair | Fair | Good | Good | Good | Excellent |
| Versioning | Manual | Manual | Manual | Manual | Manual | Automatic |
| Installation | Manual copy | Manual config | N/A | Manual config | Manual copy | One command |
Quick Start Guide
Week 1: Start Simple
- Create 2-3 slash commands for common tasks
- Enable Memory in Settings
- Document team standards in CLAUDE.md
Week 2: Add Real-time Access
- Set up 1 MCP (GitHub or Database)
- Use /mcp to configure
- Query live data in your workflows
Week 3: Distribute Work
- Create first Subagent for specific role
- Use /agents command
- Test delegation with simple task
Week 4: Automate Everything
- Create first Skill for repeated automation
- Use Skill marketplace or build custom
- Combine all features for full workflow
Ongoing
- Review and update Memory monthly
- Add new Skills as patterns emerge
- Optimize MCP queries
- Refine Subagent prompts
Hooks
Overview
Hooks are event-driven shell commands that execute automatically in response to Claude Code events. They enable automation, validation, and custom workflows without manual intervention.
Hook Events
Claude Code supports 25 hook events across four hook types (command, http, prompt, agent):
| Hook Event | Trigger | Use Cases |
|---|---|---|
| SessionStart | Session begins/resumes/clear/compact | Environment setup, initialization |
| InstructionsLoaded | CLAUDE.md or rules file loaded | Validation, transformation, augmentation |
| UserPromptSubmit | User submits prompt | Input validation, prompt filtering |
| PreToolUse | Before any tool runs | Validation, approval gates, logging |
| PermissionRequest | Permission dialog shown | Auto-approve/deny flows |
| PostToolUse | After tool succeeds | Auto-formatting, notifications, cleanup |
| PostToolUseFailure | Tool execution fails | Error handling, logging |
| Notification | Notification sent | Alerting, external integrations |
| SubagentStart | Subagent spawned | Context injection, initialization |
| SubagentStop | Subagent finishes | Result validation, logging |
| Stop | Claude finishes responding | Summary generation, cleanup tasks |
| StopFailure | API error ends turn | Error recovery, logging |
| TeammateIdle | Agent team teammate idle | Work distribution, coordination |
| TaskCompleted | Task marked complete | Post-task processing |
| TaskCreated | Task created via TaskCreate | Task tracking, logging |
| ConfigChange | Config file changes | Validation, propagation |
| CwdChanged | Working directory changes | Directory-specific setup |
| FileChanged | Watched file changes | File monitoring, rebuild triggers |
| PreCompact | Before context compaction | State preservation |
| PostCompact | After compaction completes | Post-compact actions |
| WorktreeCreate | Worktree being created | Environment setup, dependency install |
| WorktreeRemove | Worktree being removed | Cleanup, resource deallocation |
| Elicitation | MCP server requests user input | Input validation |
| ElicitationResult | User responds to elicitation | Response processing |
| SessionEnd | Session terminates | Cleanup, final logging |
Common Hooks
Hooks are configured in ~/.claude/settings.json (user-level) or .claude/settings.json (project-level):
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write",
        "hooks": [
          {
            "type": "command",
            "command": "prettier --write $CLAUDE_FILE_PATH"
          }
        ]
      }
    ],
    "PreToolUse": [
      {
        "matcher": "Edit",
        "hooks": [
          {
            "type": "command",
            "command": "eslint $CLAUDE_FILE_PATH"
          }
        ]
      }
    ]
  }
}
Hook Environment Variables
- $CLAUDE_FILE_PATH - Path to file being edited/written
- $CLAUDE_TOOL_NAME - Name of tool being used
- $CLAUDE_SESSION_ID - Current session identifier
- $CLAUDE_PROJECT_DIR - Project directory path
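As a sketch of how a command hook might use these variables, the snippet below builds an audit line from the hook environment. The helper name and log format are made up for illustration; only the environment variable names come from the list above.

```python
import os

def hook_log_line(env=None) -> str:
    """Build a one-line audit record from the hook environment variables."""
    env = env if env is not None else os.environ
    tool = env.get("CLAUDE_TOOL_NAME", "unknown-tool")
    path = env.get("CLAUDE_FILE_PATH", "-")
    session = env.get("CLAUDE_SESSION_ID", "-")
    return f"[{session}] {tool} -> {path}"

# A PostToolUse hook could run a script like this and append the result
# to a log file, e.g.: python3 log_hook.py >> ~/.claude/tool-use.log
print(hook_log_line({
    "CLAUDE_TOOL_NAME": "Write",
    "CLAUDE_FILE_PATH": "src/app.ts",
    "CLAUDE_SESSION_ID": "sess-42",
}))  # → [sess-42] Write -> src/app.ts
```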
Best Practices
✅ Do:
- Keep hooks fast (< 1 second)
- Use hooks for validation and automation
- Handle errors gracefully
- Use absolute paths
❌ Don't:
- Make hooks interactive
- Use hooks for long-running tasks
- Hardcode credentials
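A common validation pattern is a PreToolUse guard that blocks edits to protected paths. The sketch below assumes the hook receives a JSON payload on stdin with a tool_input object, and that a non-zero exit code blocks the tool call; verify both details against your version's hook documentation before relying on them, and treat the payload field names here as assumptions.

```python
import json
import sys

# Path fragments this guard refuses to let Claude touch (example list).
PROTECTED = (".env", "secrets/", ".git/")

def should_block(payload: dict) -> bool:
    """Return True if the tool call touches a protected path."""
    path = payload.get("tool_input", {}).get("file_path", "")
    return any(marker in path for marker in PROTECTED)

def main() -> int:
    """Hook entry point: read the payload from stdin and pick an exit code."""
    payload = json.load(sys.stdin)
    if should_block(payload):
        print("Blocked: protected path", file=sys.stderr)
        return 2  # non-zero exit is assumed to block the tool call
    return 0
```

Installed as a hook script, this would be wired up with sys.exit(main()) and registered as a command hook on the PreToolUse event.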
See: 06-hooks/ for detailed examples
Checkpoints and Rewind
Overview
Checkpoints allow you to save conversation state and rewind to previous points, enabling safe experimentation and exploration of multiple approaches.
Key Concepts
| Concept | Description |
|---|---|
| Checkpoint | Snapshot of conversation state including messages, files, and context |
| Rewind | Return to a previous checkpoint, discarding subsequent changes |
| Branch Point | Checkpoint from which multiple approaches are explored |
Accessing Checkpoints
Checkpoints are created automatically with every user prompt. To rewind:
# Press Esc twice to open the checkpoint browser
Esc + Esc
# Or use the /rewind command
/rewind
When you select a checkpoint, you choose from five options:
- Restore code and conversation -- Revert both to that point
- Restore conversation -- Rewind messages, keep current code
- Restore code -- Revert files, keep conversation
- Summarize from here -- Compress conversation into a summary
- Never mind -- Cancel
Use Cases
| Scenario | Workflow |
|---|---|
| Exploring Approaches | Save → Try A → Save → Rewind → Try B → Compare |
| Safe Refactoring | Save → Refactor → Test → If fail: Rewind |
| A/B Testing | Save → Design A → Save → Rewind → Design B → Compare |
| Mistake Recovery | Notice issue → Rewind to last good state |
Configuration
{
  "autoCheckpoint": true
}
See: 08-checkpoints/ for detailed examples
Advanced Features
Planning Mode
Create detailed implementation plans before coding.
Activation:
/plan Implement user authentication system
Benefits:
- Clear roadmap with time estimates
- Risk assessment
- Systematic task breakdown
- Opportunity for review and modification
Extended Thinking
Deep reasoning for complex problems.
Activation:
- Toggle with Alt+T (or Option+T on macOS) during a session
- Set MAX_THINKING_TOKENS environment variable for programmatic control
# Enable extended thinking via environment variable
export MAX_THINKING_TOKENS=50000
claude -p "Should we use microservices or monolith?"Benefits:
- Thorough analysis of trade-offs
- Better architectural decisions
- Consideration of edge cases
- Systematic evaluation
Background Tasks
Run long operations without blocking the conversation.
Usage:
User: Run tests in background
Claude: Started task bg-1234
/task list # Show all tasks
/task status bg-1234 # Check progress
/task show bg-1234 # View output
/task cancel bg-1234 # Cancel task
Permission Modes
Control what Claude can do.
| Mode | Description | Use Case |
|---|---|---|
| default | Standard permissions with prompts for sensitive actions | General development |
| acceptEdits | Automatically accept file edits without confirmation | Trusted editing workflows |
| plan | Analysis and planning only, no file modifications | Code review, architecture planning |
| auto | Automatically approve safe actions, prompt only for risky ones | Balanced autonomy with safety |
| dontAsk | Execute all actions without confirmation prompts | Experienced users, automation |
| bypassPermissions | Full unrestricted access, no safety checks | CI/CD pipelines, trusted scripts |
Usage:
claude --permission-mode plan # Read-only analysis
claude --permission-mode acceptEdits # Auto-accept edits
claude --permission-mode auto # Auto-approve safe actions
claude --permission-mode dontAsk # No confirmation prompts
Headless Mode (Print Mode)
Run Claude Code without interactive input for automation and CI/CD using the -p (print) flag.
Usage:
# Run specific task
claude -p "Run all tests"
# Pipe input for analysis
cat error.log | claude -p "explain this error"
# CI/CD integration (GitHub Actions)
- name: AI Code Review
run: claude -p "Review PR changes and report issues"
# JSON output for scripting
claude -p --output-format json "list all functions in src/"
Scheduled Tasks
Run tasks on a repeating schedule using the /loop command.
Usage:
/loop every 30m "Run tests and report failures"
/loop every 2h "Check for dependency updates"
/loop every 1d "Generate daily summary of code changes"
Scheduled tasks run in the background and report results when complete. They are useful for continuous monitoring, periodic checks, and automated maintenance workflows.
Chrome Integration
Claude Code can integrate with the Chrome browser for web automation tasks. This enables capabilities like navigating web pages, filling forms, taking screenshots, and extracting data from websites directly within your development workflow.
Session Management
Manage multiple work sessions.
Commands:
/resume # Resume a previous conversation
/rename "Feature" # Name the current session
/fork # Fork into a new session
claude -c # Continue most recent conversation
claude -r "Feature" # Resume session by name/IDInteractive Features
Keyboard Shortcuts:
- Ctrl + R - Search command history
- Tab - Autocomplete
- ↑ / ↓ - Command history
- Ctrl + L - Clear screen
Multi-line Input:
User: \
> Long complex prompt
> spanning multiple lines
> \end
Configuration
Complete configuration example:
{
  "planning": {
    "autoEnter": true,
    "requireApproval": true
  },
  "extendedThinking": {
    "enabled": true,
    "showThinkingProcess": true
  },
  "backgroundTasks": {
    "enabled": true,
    "maxConcurrentTasks": 5
  },
  "permissions": {
    "mode": "default"
  }
}
See: 09-advanced-features/ for comprehensive guide
Resources
Last updated: March 2026
For Claude Haiku 4.5, Sonnet 4.6, and Opus 4.6
Now includes: Hooks, Checkpoints, Planning Mode, Extended Thinking, Background Tasks, Permission Modes (6 modes), Headless Mode, Session Management, Auto Memory, Agent Teams, Scheduled Tasks, Chrome Integration, Channels, Voice Dictation, and Bundled Skills
