Test Generator Agent
Sub-agent that reads your source code and generates comprehensive test suites with unit, integration, and edge case tests using Vitest or Jest.
Code is provided "as is". Review and test before production use.
Built by Thomas (@thomas)
A CLAUDE.md agent workflow template that instructs Claude Code to generate comprehensive test suites for your source code. Copy the CLAUDE.md into your project, run Claude Code, and ask it to generate tests. It will read your code, detect the test framework from package.json, and write unit tests, edge case tests, error path tests, and mock setups. Includes an example test file showing the expected output style.
- Generate unit tests for exported functions by copying CLAUDE.md and running Claude Code
- Generate integration tests for API routes with proper mocking
- Generate edge case and error path tests automatically
- Support Vitest, Jest, or Node.js built-in test runner (detected from package.json)
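The framework detection in the last point can be sketched as a small lookup over package.json dependencies. This is illustrative only: the agent detects the framework by reading the file and following the CLAUDE.md instructions, not by running code, and the helper name and precedence order here are assumptions.

```javascript
// Hypothetical sketch of detecting the test framework from package.json.
// The precedence (vitest, then jest, then node:test) is an assumption.
function detectFramework(pkg) {
  const deps = { ...(pkg.dependencies || {}), ...(pkg.devDependencies || {}) };
  if ("vitest" in deps) return "vitest";
  if ("jest" in deps) return "jest";
  // No framework installed: fall back to the Node.js built-in runner.
  return "node:test";
}

const pkg = { devDependencies: { vitest: "^1.6.0", typescript: "^5.4.0" } };
console.log(detectFramework(pkg)); // "vitest"
```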
Step 1: Copy CLAUDE.md to your project root
cp CLAUDE.md /path/to/your/project/CLAUDE.md
Validation: CLAUDE.md should exist at your project root
Step 2: Ensure your project has a test framework installed
npm install -D vitest
Validation: vitest or jest should appear in package.json devDependencies
Step 3: Run Claude Code and ask it to generate tests
claude
# Then say: Generate tests for src/lib/utils.ts
Validation: Test files should be created alongside your source files
- This is NOT a programmatic library — do not try to import classes or functions from it
- Do not expect automated AST parsing — Claude Code reads your code and follows the CLAUDE.md instructions
- Do not use without source files to test — the workflow needs existing code to analyze
- Requires Claude Code or another AI coding agent that reads CLAUDE.md files
- Quality of generated tests depends on the AI agent following the instructions
- Does not automatically run tests — you must run them yourself after generation
- Test file placement follows one of three conventions (colocated, __tests__, or tests/api/); the workflow does not auto-detect which convention your project uses
Findings (2)
- Documentation claims that test file placement "follows three conventions", but CLAUDE.md only documents the three placement patterns; the main integration steps never clearly state that the convention is not auto-detected
- CLAUDE.md lacks the example test output format that the documentation summary claims to include ("Includes an example test file showing the expected output style")
Suggestions (3)
- Add a concrete example test file (e.g., EXAMPLE_OUTPUT.test.ts) demonstrating the exact style and format users should expect when Claude Code generates tests. This would back up the documentation claim and give clearer guidance.
- Expand Step 5 of CLAUDE.md with a brief note on test naming conventions and assertion library expectations (e.g., expect() syntax for both Vitest and Jest)
- Add a Troubleshooting section to README.md covering common scenarios: a package.json with no test framework, monorepos, and TypeScript vs. JavaScript projects