# Onboarding & Quality

## Getting Started with Onboarding & Quality
To transform Claude Code from a general assistant into a specialized member of your engineering team, you need to establish a baseline of project context and quality standards. This is handled through two primary workflows: project initialization via /onboard and automated health checks via /code-quality.
## Project Initialization with `/onboard`
The /onboard command is the entry point for any new repository or developer environment. Instead of manually explaining your architecture to the LLM, this command triggers a discovery process that indexes your conventions.
When you run /onboard, the system:
- Scans Project Structure: Identifies core directories (e.g., `/src`, `/app`, `/components`) and configuration files.
- Maps Tech Stack: Detects languages, frameworks (TypeScript, GraphQL, React), and testing utilities.
- Identifies Skills: Connects the repository to existing `.claude/skills` to ensure the agent uses your specific coding patterns immediately.
- Validates Environment: Checks for necessary MCP servers and tool configurations (like JIRA or Linear integrations).
```
# Initialize the project context
/onboard
```
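For a Node-style repository, the stack-mapping step can be pictured as a small `package.json` probe. This is a sketch for illustration only: the framework list and the `detectStack`/`detectStackFromRepo` names are assumptions, not part of `/onboard`'s actual implementation.

```typescript
import * as fs from "fs";
import * as path from "path";

// Illustrative subset of frameworks the scan might recognize.
const KNOWN_FRAMEWORKS = ["typescript", "graphql", "react", "jest"];

interface PackageJson {
  dependencies?: Record<string, string>;
  devDependencies?: Record<string, string>;
}

// Report which known frameworks appear among the declared dependencies.
function detectStack(pkg: PackageJson): string[] {
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  return KNOWN_FRAMEWORKS.filter((name) => name in deps);
}

// Thin wrapper: read package.json from the repository root, if present.
function detectStackFromRepo(repoRoot: string): string[] {
  const pkgPath = path.join(repoRoot, "package.json");
  if (!fs.existsSync(pkgPath)) return []; // not a Node-style repo
  return detectStack(JSON.parse(fs.readFileSync(pkgPath, "utf8")));
}
```

The real command layers more on top (directory scanning, config-file detection), but the dependency probe conveys the idea.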
## Automated Sanity Checks with `/code-quality`
Maintaining high standards across a growing codebase requires more than just static linting. The /code-quality command initiates an agentic audit of your current workspace.
Unlike a standard linter, /code-quality uses the Code Review Agent to perform a "Deep Review." It checks for:
- TypeScript Strictness: Ensuring types aren't bypassed with `any`.
- Error Handling: Verifying that new mutations or logic blocks include proper try/catch/finally patterns.
- Pattern Consistency: Confirming that new code matches the "skills" defined in your `.claude/skills` directory.
- Side Effects: Scanning for unintended changes to global state or protected configuration files.
```
# Run a comprehensive quality audit on the current branch
/code-quality
```
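The strictness check, in spirit, looks for places where a type annotation falls back to `any`. The toy scanner below is only a stand-in for the agentic review; a regex pass like this is far cruder than what the Code Review Agent does, and the function name is made up.

```typescript
interface Finding {
  line: number; // 1-based line number of the offending code
  text: string;
}

// Flag lines where a TypeScript type is bypassed with `any`.
function findAnyUsage(source: string): Finding[] {
  const findings: Finding[] = [];
  source.split("\n").forEach((text, i) => {
    // Matches annotations like `: any` and casts like `as any`.
    if (/:\s*any\b|\bas\s+any\b/.test(text)) {
      findings.push({ line: i + 1, text: text.trim() });
    }
  });
  return findings;
}
```

A deep review can go beyond this kind of pattern match, e.g., reasoning about whether an `any` is ever justified, which is why the audit is agentic rather than purely static.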
## Intelligent Quality Gates
The project uses a multi-layered approach to ensure quality is "baked in" rather than added as an afterthought.
### Automated Hooks
Configured in `.claude/settings.json`, these hooks act as automated quality gates that trigger based on your activity:
- Pre-commit Checks: Automatically formats code and runs type-checking before Claude finalizes an edit.
- Branch Protection: Includes logic to block direct edits on the main branch, forcing quality checks through a PR workflow.
- Contextual Testing: Runs specific test suites only when relevant files (e.g., `*.test.tsx`) are modified.
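Claude Code hooks follow a matcher-plus-command shape in `settings.json`. The excerpt below is hypothetical: the `Edit|Write` matcher and the `npm run typecheck` command are illustrative, not this project's actual configuration.

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npm run typecheck" }
        ]
      }
    ]
  }
}
```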
### Skill Evaluation Engine
A core component of the quality pipeline is the Skill Evaluation Engine (`.claude/hooks/skill-eval.js`). This engine ensures that Claude always uses the most relevant project-specific knowledge for the task at hand.
The engine analyzes your prompts and file paths to calculate a "Skill Match" score based on:
- Keywords: Detects intent (e.g., "GraphQL," "UI," "Testing").
- Directory Mapping: Automatically activates the `core-components` skill when you are working inside the `/components` directory.
- Pattern Matching: Recognizes file extensions and naming conventions to suggest the appropriate coding standard.
Example of how skills are prioritized during quality checks:

```json
{
  "skill": "testing-patterns",
  "priority": 10,
  "triggers": {
    "keywords": ["test", "jest", "maestro"],
    "pathPatterns": ["**/*.test.ts", "**/__tests__/**"]
  }
}
```
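As a sketch of how such a score might be computed, the snippet below reuses the same trigger shape: keyword hits in the prompt and glob hits on the file path, weighted by the skill's priority. The glob handling and the weighting scheme are illustrative assumptions, not the engine's actual logic.

```typescript
interface Skill {
  skill: string;
  priority: number;
  triggers: { keywords: string[]; pathPatterns: string[] };
}

// Crude glob-to-regex conversion, good enough for a sketch.
function globToRegExp(pattern: string): RegExp {
  const source = pattern.replace(/\*\*\/|\*\*|\*|[.+^${}()|[\]\\]/g, (m) => {
    if (m === "**/") return "(.*/)?"; // "**/" matches any directory prefix
    if (m === "**") return ".*";      // bare "**" matches across segments
    if (m === "*") return "[^/]*";    // "*" matches within one segment
    return "\\" + m;                  // escape regex metacharacters
  });
  return new RegExp("^" + source + "$");
}

// Weight keyword and path-pattern hits by the skill's priority.
function skillMatchScore(skill: Skill, prompt: string, filePath: string): number {
  const lower = prompt.toLowerCase();
  const keywordHits = skill.triggers.keywords.filter((kw) => lower.includes(kw)).length;
  const pathHits = skill.triggers.pathPatterns.filter((p) => globToRegExp(p).test(filePath)).length;
  return skill.priority * (keywordHits + pathHits);
}
```

With the `testing-patterns` skill above, a prompt mentioning "jest" while editing `src/utils/date.test.ts` scores on both keyword and path triggers, so the skill outranks unrelated ones.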
### Scheduled Maintenance Agents
Quality is maintained over time through GitHub Workflows that run on a schedule:
- Monthly Docs Sync: Scans recent commits to ensure documentation hasn't drifted from the implementation.
- Weekly Quality Audit: Randomly reviews directories and auto-fixes minor style inconsistencies.
- Dependency Audit: Safely updates dependencies and verifies them against the existing test suite.
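A scheduled agent of this kind might be wired up with a cron trigger. The workflow below is a hypothetical sketch (the filename, schedule, and `npm run quality-audit` script are illustrative, not taken from this project):

```yaml
# .github/workflows/quality-audit.yml (hypothetical)
name: Weekly Quality Audit
on:
  schedule:
    - cron: "0 6 * * 1"  # Mondays at 06:00 UTC
  workflow_dispatch: {}  # allow manual runs
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run the audit
        run: npm run quality-audit  # hypothetical script invoking the audit agent
```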