PAIML MCP Agent Toolkit (PMAT)

Zero-configuration AI code analysis and refactoring toolkit
Zero-configuration AI context generation for any codebase. Analyze code quality, complexity, and technical debt across 17+ programming languages with extreme quality enforcement and Toyota Way standards.
📖 https://paiml.github.io/pmat-book/ - Complete documentation, tutorials, and guides
```bash
# Rust (recommended)
cargo install pmat

# macOS/Linux
brew install pmat

# Windows
choco install pmat

# npm (global)
npm install -g pmat-agent
```
```bash
# Analyze codebase and generate AI-ready context
pmat context

# Analyze complexity
pmat analyze complexity

# Grade technical debt (A+ through F)
pmat analyze tdg

# Score repository health (0-110 scale)
pmat repo-score .          # Fast: scans HEAD only
pmat repo-score . --deep   # Thorough: scans entire git history

# Find Self-Admitted Technical Debt
pmat analyze satd

# Test suite quality (mutation testing)
pmat mutate --target src/
```
Install pre-commit hooks for automatic quality enforcement:
```bash
# Install git hooks (bashrs quality, pmat-book validation)
pmat hooks install

# Check hook status
pmat hooks status

# Dry-run to see what would be checked
pmat hooks install --dry-run
```
Hooks enforce:
📖 PMAT Book - Complete guide with tutorials
Key chapters:
```bash
# Generate context for Claude/GPT
pmat context --output context.md --format llm-optimized

# Analyze TypeScript project
pmat analyze complexity --language typescript

# Technical debt grading with components
pmat analyze tdg --include-components

# Semantic search (natural language)
pmat embed sync ./src
pmat semantic search "error handling patterns"

# Validate documentation for hallucinations
pmat validate-readme --targets README.md
```
Pre-configured AI workflow prompts that enforce EXTREME TDD and Toyota Way quality principles. Perfect for piping to Claude Code, ChatGPT, or other AI assistants.
```bash
# List all available prompts
pmat prompt --list

# Show specific prompt (YAML format)
pmat prompt code-coverage

# Get prompt as text for AI assistants
pmat prompt debug --format text | pbcopy

# JSON format for programmatic use
pmat prompt quality-enforcement --format json

# Customize for non-Rust projects
pmat prompt code-coverage \
  --set TEST_CMD="pytest" \
  --set COVERAGE_CMD="pytest --cov"

# Save to file
pmat prompt continue -o workflow.yaml
```
Available Prompts (11 total):
CRITICAL Priority:

- code-coverage - Enforce 85%+ coverage using EXTREME TDD
- debug - Five Whys root cause analysis
- quality-enforcement - Run all quality gates (12 gates)
- security-audit - Security analysis and fixes

HIGH Priority:

- continue - Continue next best step with EXTREME TDD
- assert-cmd-testing - Verify CLI test coverage
- mutation-testing - Run mutation testing on weak code
- performance-optimization - Speed up compilation and tests
- refactor-hotspots - Refactor high-TDG code

MEDIUM Priority:

- clean-repo-cruft - Remove temporary files
- documentation - Update and validate docs

Key Features:

- Short alias: pmat p --list

Documentation: Workflow Prompts Guide
Evaluate test suite quality by introducing code mutations and checking if tests detect them.
```bash
# Basic mutation testing
pmat mutate --target src/lib.rs

# With quality gate (fail if score < 85%)
pmat mutate --target src/ --threshold 85

# Failures only (CI/CD optimization)
pmat mutate --target src/ --failures-only

# JSON output for integration
pmat mutate --target src/ --output-format json > results.json
```
Mutation Score = (Killed Mutants / Total Valid Mutants) × 100%
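As a quick sanity check, the formula can be evaluated directly in the shell. The mutant counts below are made-up illustrative values, not output from pmat:

```bash
# Illustrative values only: suppose a run killed 42 of 50 valid mutants.
killed=42
valid=50

# Mutation Score = (Killed Mutants / Total Valid Mutants) * 100
awk -v k="$killed" -v v="$valid" 'BEGIN { printf "%.1f\n", k / v * 100 }'
# → 84.0
```

A score of 84.0% would fall just short of the 85% threshold shown in the `--threshold 85` example above.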
Supported Languages: Rust, Python, TypeScript, JavaScript, Go, C++
Key Features:
Documentation:
Example Projects:
Zero-regression quality enforcement across local development, git workflows, and CI/CD pipelines.
```bash
# 1. Create quality baseline
pmat tdg baseline create --output .pmat/tdg-baseline.json --path .

# 2. Install git hooks (optional)
pmat hooks install --tdg-enforcement

# 3. Check for regressions
pmat tdg check-regression \
  --baseline .pmat/tdg-baseline.json \
  --path . \
  --max-score-drop 5.0 \
  --fail-on-regression

# 4. Enforce quality standards for new code
pmat tdg check-quality \
  --path . \
  --min-grade B+ \
  --new-files-only \
  --fail-on-violation
```
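The regression rule that `check-regression` enforces (fail when the current score drops more than `--max-score-drop` below the baseline) can be sketched with placeholder scores. The numbers here are invented for illustration, not real pmat output:

```bash
# Invented example scores: baseline TDG score vs. current score.
baseline=92.0
current=85.5
max_drop=5.0

# Fail when (baseline - current) exceeds the allowed drop.
awk -v b="$baseline" -v c="$current" -v m="$max_drop" \
  'BEGIN { verdict = (b - c > m) ? "regression" : "ok"; print verdict }'
# → regression  (a drop of 6.5 exceeds the allowed 5.0)
```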
Baseline System:
Quality Gates:
Git Hooks:
CI/CD Templates:
Create .pmat/tdg-rules.toml:
```toml
[quality_gates]
rust_min_grade = "A"
python_min_grade = "B+"
max_score_drop = 5.0
mode = "strict"  # strict, warning, or disabled

[baseline]
baseline_path = ".pmat/tdg-baseline.json"
auto_update_on_main = true
```
GitHub Actions:
```bash
cp templates/ci/github-actions-tdg.yml .github/workflows/tdg-quality.yml
```
GitLab CI:
```bash
cp templates/ci/gitlab-ci-tdg.yml .gitlab-ci.yml
```
Jenkins:
```bash
cp templates/ci/Jenkinsfile-tdg Jenkinsfile
```
See Complete Guide: docs/guides/ci-cd-tdg-integration.md
Evidence-based Rust project quality scoring (0-211 points) inspired by elite projects (tokio, serde, clap, syn, regex) with academic foundation from 15+ peer-reviewed papers.
```bash
# Fast mode analysis (~3 minutes)
pmat rust-project-score

# Full mode with comprehensive checks (~10-15 minutes)
pmat rust-project-score --full

# Specific project path with JSON output
pmat rust-project-score --path /path/to/rust/project --format json

# Markdown report for documentation
pmat rust-project-score --verbose --format markdown --output SCORE.md
```
Rust Tooling & CI/CD (130pts)
Code Quality (26pts)
Testing Excellence (20pts)
Documentation (15pts)
Performance & Benchmarking (10pts)
Dependency Health (12pts)
Formal Verification (8pts - bonus category)
Example output:

```
🦀 Rust Project Score v2.0

📌 Summary
  Score: 100.5/211
  Percentage: 47.6%
  Grade: B

📂 Categories
  ✅ Rust Tooling & CI/CD: 56.0/130 (43.1%)
  ⚠️ Code Quality: 20.0/26 (76.9%)
  ❌ Testing Excellence: 5.5/20 (27.5%)
  ...

💡 Recommendations
  • Run 'cargo clippy --fix' to automatically fix warnings
  • Add workspace-level lints to Cargo.toml
  • Create .github/workflows for multi-platform CI
  • Configure docs.rs metadata for better documentation
  ...
```
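The summary figures above are internally consistent: the percentage is simply earned points over the 211-point maximum, rounded to one decimal. A quick check of the example report's numbers:

```bash
# Recompute the percentage from the example report: 100.5 of 211 points.
awk -v earned=100.5 -v max=211 'BEGIN { printf "%.1f%%\n", earned / max * 100 }'
# → 47.6%
```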
15+ Peer-Reviewed References (IEEE, ACM, arXiv 2013-2025):
```yaml
# .github/workflows/rust-quality.yml
name: Rust Quality Score
on: [push, pull_request]
jobs:
  rust-score:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo install pmat
      - run: pmat rust-project-score --format json --output score.json
      - name: Check minimum score
        run: |
          SCORE=$(jq '.total_earned' score.json)
          if (( $(echo "$SCORE < 80" | bc -l) )); then
            echo "Score $SCORE below threshold"
            exit 1
          fi
```
Fast Mode (default, ~3 minutes):
Full Mode (--full, ~10-15 minutes):
Specifications:

- docs/specifications/learn-from-rust-giants-spec.md
- docs/implementation-status-rust-project-score.md

Track Technical Debt Grading (TDG) scores at specific git commits for "quality archaeology" workflows.
```bash
# Analyze with git context
pmat tdg server/src/lib.rs --with-git-context

# Query specific commit
pmat tdg history --commit v2.178.0

# History since reference
pmat tdg history --since HEAD~10

# Commit range
pmat tdg history --range v2.177.0..v2.178.0

# Filter by file
pmat tdg history --path server/src/lib.rs --since HEAD~5

# JSON output for scripting
pmat tdg history --commit 60125a0 --format json
```
Use Cases:
Example Workflows:
```bash
# Find when quality dropped below B+
pmat tdg history --since HEAD~50 --format json | \
  jq '.history[] | select(.score.grade | test("C|D|F"))'

# Quality delta between releases
pmat tdg history --range v2.177.0..v2.178.0

# Per-file quality trend
pmat tdg history --path src/lib.rs --since HEAD~20
```
Features:
- Git tag and commit references (e.g., --commit v2.178.0)
- MCP integration (with_git_context: true parameter)

PMAT provides 19 MCP tools for AI agents:
```bash
# Start MCP server
pmat mcp

# Use with Claude Code, Cline, or other MCP clients
```
Tools include: context generation, complexity analysis, TDG scoring, semantic search, code clustering, documentation validation, and more.
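For clients that use the common `mcpServers` configuration format (e.g., Claude Code or Claude Desktop), registering the server might look like the sketch below. The exact file location and schema depend on the client, so treat this as an illustration rather than canonical PMAT configuration:

```json
{
  "mcpServers": {
    "pmat": {
      "command": "pmat",
      "args": ["mcp"]
    }
  }
}
```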
See MCP Tools Documentation for details.
Built by: Pragmatic AI Labs
License: MIT
Repository: github.com/paiml/paiml-mcp-agent-toolkit
Issues: GitHub Issues
Current Version: v2.167.0 (Sprint 44 - Coverage Remediation)
See ROADMAP.md for project status and future plans.
Quality Standards: