Let Candidates Use AI: Rethinking Technical Interviews
TL;DR
We say we want AI experience, then forbid candidates from using it in interviews. That’s not just inconsistent—it selects for the wrong skills. In most modern roles, the advantage isn’t typing everything from scratch; it’s knowing how to combine domain knowledge, AI, and light automation into reliable outcomes. Interviews should measure that.
Why banning AI is counterproductive
- Mismatch with real work: On the job, people use copilots, search, docs, and templates. Prohibiting those tools tests a world that doesn’t exist.
- Bias toward memorization: You reward recall over judgment, decomposition, and verification—the actual levers of quality.
- Signals the wrong culture: If you want innovators, let them demonstrate how they leverage new tech responsibly.
What you really want to measure
- Problem framing: Can they restate constraints, edge cases, and success metrics?
- Tool choice & prompts: Do they choose the right copilot/agent and craft effective, auditable prompts?
- Verification: Do they test, critique AI output, and add guardrails (types, contracts, checks)?
- Ethics & safety: Do they avoid leaking sensitive data and cite sources/assumptions?
- Iteration speed: Can they get from vague brief → shippable draft → measured improvement quickly?
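The verification point above is the one most worth probing, and it can be made concrete. A minimal sketch (the function and test values are hypothetical): a strong candidate treats an AI-drafted helper as untrusted and pins down its contract with explicit checks before relying on it.

```python
# Hypothetical example: a copilot drafts a price parser; the candidate adds
# type hints and guardrails instead of trusting the output as-is.

def parse_price(text: str) -> float:
    """Parse a currency string such as '$1,299.50' into a float."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    value = float(cleaned)
    if value < 0:
        # Contract: prices are non-negative; reject rather than pass garbage on.
        raise ValueError(f"negative price: {text!r}")
    return value

# Guardrails the candidate adds: checks on normal and edge cases, so AI
# mistakes surface in the interview instead of in production.
assert parse_price("$1,299.50") == 1299.50
assert parse_price("  $0.99 ") == 0.99
try:
    parse_price("$-5.00")
except ValueError:
    pass  # rejecting negative prices is the intended contract
else:
    raise AssertionError("negative price should have been rejected")
```

What you score here is not the parser itself but the instinct to write the assertions at all.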
A fair interview policy (copy/adapt)
- AI-Allowed, Evidence-Required: Candidates may use approved tools (copilots, docs, search). They must show their work: prompts, iterations, and reasoning.
- Privacy Guardrails: No past employer IP or personal data. Use provided mock repos, redacted datasets, and sandbox keys.
- Time-Boxed Access: Internet + AI access is on, but the task is scoped and observable; logs are saved for discussion.
- Originality Clause: AI assistance is expected; wholesale copy/paste without understanding is disqualifying.
Suggested format (60–90 minutes)
- 5 min — Brief: Problem, success criteria, constraints.
- 15 min — Plan: Candidate outlines approach, tools, risks, and test plan.
- 35–50 min — Build with AI: Use copilot/automation to draft, refine, and instrument.
- 10–15 min — Verify: Run tests, add checks, discuss failure modes and rollbacks.
- 10 min — Retro: What the AI got wrong, what the candidate fixed, and what they’d do next.
Evaluation rubric
- Decomposition (25%) — clear steps, trade-offs, and prior art considered.
- Prompt & Tool Use (20%) — targeted prompts, smart tool selection, minimal thrash.
- Verification (25%) — tests, contracts, sanity checks; catches AI errors.
- Security/Ethics (15%) — data handling, license awareness, privacy.
- Communication (15%) — concise narration, decisions recorded, rationale explained.
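To keep scoring consistent across interviewers, the rubric above can be mechanized. A minimal sketch (the 0–5 scale and function names are assumptions, the weights come from the rubric): each dimension is scored independently, then combined into a weighted total.

```python
# Weights taken from the rubric above; scores assumed to be on a 0-5 scale.
RUBRIC_WEIGHTS = {
    "decomposition": 0.25,
    "prompt_tool_use": 0.20,
    "verification": 0.25,
    "security_ethics": 0.15,
    "communication": 0.15,
}

def weighted_score(scores: dict[str, float]) -> float:
    """Weighted average of rubric scores; fails loudly if a dimension is unscored."""
    missing = RUBRIC_WEIGHTS.keys() - scores.keys()
    if missing:
        raise ValueError(f"unscored dimensions: {sorted(missing)}")
    return sum(scores[k] * w for k, w in RUBRIC_WEIGHTS.items())

candidate = {
    "decomposition": 4,
    "prompt_tool_use": 5,
    "verification": 3,
    "security_ethics": 5,
    "communication": 4,
}
print(f"weighted total: {weighted_score(candidate):.2f}")
```

Requiring every dimension to be scored (rather than defaulting missing ones to zero) forces interviewers to actually evaluate each axis, which is the point of having a rubric.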
Addressing common concerns
- “But then the AI did the work.” Good—so does your IDE, framework, and CI. We’re hiring for judgment, not keystrokes.
- “How do we compare candidates fairly?” Use the rubric above and standardize tasks, datasets, and allowed tool list.
- “What about cheating?” Require screen-share or hosted IDEs, capture prompts/artifacts, and include a follow-up deep dive on decisions.
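The prompt/artifact capture mentioned above does not require special tooling. A minimal sketch (file name and record fields are assumptions): append each tool interaction to a JSONL log that the follow-up discussion can walk through.

```python
# Minimal session logger: one JSON record per prompt/response pair, appended
# to a JSONL file the interviewer reviews afterward.
import json
import time
from pathlib import Path

LOG_PATH = Path("interview_session.jsonl")

def log_interaction(prompt: str, response: str, tool: str = "copilot") -> None:
    """Append one AI interaction to the session log for later discussion."""
    record = {
        "ts": time.time(),      # when the prompt was sent
        "tool": tool,           # which approved tool was used
        "prompt": prompt,
        "response": response,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Having the log also makes the follow-up deep dive concrete: you can point at a specific prompt and ask why the candidate phrased it that way or how they checked the answer.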
For non-engineering roles too
This applies to research, BPO, sales, marketing, design, finance, and ops. Replace code with briefs, analyses, campaigns, or workflows—but keep the same principles: tool literacy, verification, and measurable outcomes.
Bottom line
If AI fluency is essential at work, it should be essential in interviews. Stop screening for what people can memorize under pressure; start screening for how they think with tools, verify results, and ship value responsibly.