Sub-Agent v1.0.0

Test Generator Agent

Sub-agent that reads your source code and generates comprehensive test suites with unit, integration, and edge case tests using Vitest or Jest.

by Thomas
Unrated · 5 purchases · 0 reviews · Verified 3/6/2026
Free

Code is provided "as is". Review and test before production use.

Tags: testing, vitest, jest, unit-tests, integration-tests, code-quality
Built by Thomas (@thomas) · 14 listings · Unrated
Summary

A CLAUDE.md agent workflow template that instructs Claude Code to generate comprehensive test suites for your source code. Copy the CLAUDE.md into your project, run Claude Code, and ask it to generate tests. It will read your code, detect the test framework from package.json, and write unit tests, edge case tests, error path tests, and mock setups. Includes an example test file showing the expected output style.

Use Cases
  • Generate unit tests for exported functions by copying CLAUDE.md and running Claude Code
  • Generate integration tests for API routes with proper mocking
  • Generate edge case and error path tests automatically
  • Support Vitest, Jest, or Node.js built-in test runner (detected from package.json)
Integration Steps

Step 1: Copy CLAUDE.md to your project root

cp CLAUDE.md /path/to/your/project/CLAUDE.md

Validation: CLAUDE.md should exist at your project root

Step 2: Ensure your project has a test framework installed

npm install -D vitest

Validation: vitest or jest should appear in package.json devDependencies
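The detection rule itself is simple. The agent applies it by reading package.json, not by executing code, but the logic can be sketched as follows (a hypothetical illustration; `detectTestFramework` is not part of the product):

```typescript
// Sketch of the framework-detection rule: prefer vitest, then jest,
// otherwise fall back to the Node.js built-in test runner.
type Pkg = {
  dependencies?: Record<string, string>;
  devDependencies?: Record<string, string>;
};

function detectTestFramework(pkgJson: string): "vitest" | "jest" | "node:test" {
  const pkg = JSON.parse(pkgJson) as Pkg;
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  if (deps["vitest"]) return "vitest";
  if (deps["jest"]) return "jest";
  return "node:test"; // no framework installed: built-in runner
}

console.log(detectTestFramework('{"devDependencies":{"vitest":"^1.6.0"}}'));
// -> "vitest"
```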

Step 3: Run Claude Code and ask it to generate tests

claude
# Then say: Generate tests for src/lib/utils.ts

Validation: Test files should be created alongside your source files

Anti-Patterns
  • This is NOT a programmatic library — do not try to import classes or functions from it
  • Do not expect automated AST parsing — Claude Code reads your code and follows the CLAUDE.md instructions
  • Do not use without source files to test — the workflow needs existing code to analyze
Limitations
  • Requires Claude Code or another AI coding agent that reads CLAUDE.md files
  • Quality of generated tests depends on the AI agent following the instructions
  • Does not automatically run tests — you must run them yourself after generation
  • Test files are placed using one of three conventions (colocated, __tests__/, or tests/api/); the workflow does not auto-detect your project's existing convention
AI Verification Report: Passed
Overall: 96% · Security: 100% · Code Quality: 90% · Documentation: 95% · Dependencies: 100%
2 files analyzed · 81 lines read · 12.7s · Verified 3/6/2026

Findings (2)

  • Documentation states that test file placement follows three conventions, but CLAUDE.md documents the three placement patterns without clearly noting in the main integration steps that project conventions are not auto-detected
  • CLAUDE.md lacks the example test output format that the documentation summary claims to include ('Includes an example test file showing the expected output style')

Suggestions (3)

  • Add a concrete example test file (e.g., 'EXAMPLE_OUTPUT.test.ts') demonstrating the exact style and format users should expect when Claude Code generates tests. This would reinforce the documentation claim and provide clearer guidance.
  • Expand Step 5 of CLAUDE.md with a brief note about test naming conventions and assertion library expectations (e.g., 'expect()' syntax for both Vitest and Jest)
  • Add a 'Troubleshooting' section to README.md covering common scenarios: what if package.json has no test framework, how to handle monorepos, handling TypeScript vs JavaScript projects