The first platform where candidates use Claude Code — a real AI coding agent, not a chatbot. Full session capture. AI-powered scoring.
Free early access · No card required
The Problem
Whiteboard algorithms. Toy problems. No AI allowed. These interviews measure memorization, not engineering. The best engineers ship with agents — your interview should test that.
Candidates use Claude Code — a full autonomous agent with file system, terminal, and multi-file project access. Not a prompt box.
Every keystroke, architectural decision, and agent interaction is captured and scored. You see exactly how engineers think.
The old whiteboard is dead. kodwai tests the skill that actually matters: wielding AI agents to ship production code.
How It Works
1. Pick from our library or create custom system-design and coding challenges calibrated to your stack.
2. Candidates get a full Claude Code environment. Real agent, real tools, real constraints. 60 minutes.
3. Receive an AI-generated scorecard with granular breakdowns: decomposition, agent mastery, code quality, verification.
Comparison
| Feature | Traditional | kodwai |
|---|---|---|
| AI agents in interview | ✗ | ✓ |
| Real-world environment | ✗ | ✓ |
| Automated scoring | Partial | Full |
| Agent interaction analysis | ✗ | ✓ |
| Time to evaluate | 5–7 days | 60 min |
| Candidate experience | Stressful | Authentic |
| Signal-to-noise ratio | Low | Very High |
“We stopped asking candidates to reverse linked lists. Now we see how they actually build with AI.”
— VP Engineering, Series B startup
Join the waitlist. Be the first to interview engineers the way they actually work.