1. Prompt engineering — generate tests that don't lie
LLMs generate plausible-looking tests that often assert on nothing. Structure the prompt so the model must commit to behaviour.
Specific, behaviour-focused, with constraints
Given this function:
```ts
function discountPrice(price: number, percent: number): number {
  if (percent < 0 || percent > 100) throw new Error('Invalid percent');
  return price * (1 - percent / 100);
}
```
Write Vitest tests covering:
1. Happy path at 0%, 50%, 100%
2. Boundary values: percent = -1, 0, 100, 101
3. Edge cases: price = 0, price < 0, non-integer percent
4. Error cases via expect().toThrow()
DO NOT:
- use `expect(true).toBe(true)` or other no-op assertions
- test implementation details (internal variable names)
- write a test that always passes regardless of bugs
Return ONLY the test file. Import from 'vitest'.
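For reference, this is the shape of output the prompt should produce: every assertion pins an exact value derived from the function above, so a broken implementation fails. The import path is a stand-in for wherever the function actually lives.
```ts
import { describe, expect, it } from 'vitest';
import { discountPrice } from './discountPrice'; // stand-in path

describe('discountPrice', () => {
  it('applies happy-path percentages', () => {
    expect(discountPrice(100, 0)).toBe(100);  // 0% leaves the price unchanged
    expect(discountPrice(100, 50)).toBe(50);  // 50% halves it
    expect(discountPrice(100, 100)).toBe(0);  // 100% zeroes it
  });

  it('throws on out-of-range percentages', () => {
    expect(() => discountPrice(100, -1)).toThrow('Invalid percent');
    expect(() => discountPrice(100, 101)).toThrow('Invalid percent');
  });

  it('handles edge-case prices and non-integer percentages', () => {
    expect(discountPrice(0, 50)).toBe(0);       // zero price stays zero
    expect(discountPrice(-100, 50)).toBe(-50);  // current code scales negative prices too
    expect(discountPrice(200, 12.5)).toBe(175); // 200 * 0.875
  });
});
```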
Contrast this with the vague version, which gives the model nothing to commit to:

Write tests for this function:
function discountPrice(price, percent) { ... }

How to verify
Read every generated test — does each assertion catch a real bug if the function is broken? Mutate the function slightly (flip a > to <) and run the tests. If they still all pass, the LLM generated decorative tests.
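A sketch of that mutation and why only a committed assertion notices it (the mutant function name is illustrative):
```ts
import { expect, it } from 'vitest';

// Mutant: upper-bound guard flipped from `percent > 100` to `percent < 100`
function discountPriceMutant(price: number, percent: number): number {
  if (percent < 0 || percent < 100) throw new Error('Invalid percent');
  return price * (1 - percent / 100);
}

it('committed assertion: fails against the mutant', () => {
  // The call throws instead of returning 50, so this test fails, as it should
  expect(discountPriceMutant(100, 50)).toBe(50);
});

it('decorative assertion: still passes against the mutant', () => {
  // True for the mutant and the original alike, so it proves nothing
  expect(discountPriceMutant).toBeDefined();
});
```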
Gotcha
LLMs excel at syntax, fail at semantics. `expect(result).toBeDefined()` is a common fake assertion — looks legitimate, passes for any non-null return. Read every `expect(...)` and check what would make it fail.
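Concretely, against discountPrice above, this is the difference to look for:
```ts
// Decorative: passes for any non-nullish return value, so it catches nothing
expect(discountPrice(100, 50)).toBeDefined();

// Committed: only 50 passes; a flipped operator or off-by-one fails it
expect(discountPrice(100, 50)).toBe(50);
```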