The Staff Engineer Readiness Test: What AI Can't Answer For You
A test of judgment, tradeoffs, and ownership: how you think, not what you know.
A strange thing happens when AI starts doing good engineering work.
It doesn't just help.
It exposes.
It exposes where we rely on rules instead of judgment. Where we confuse correctness with wisdom. Where we default to action instead of thought.
This post is a test — not the kind you pass with right answers, but the kind that reveals how you think.
I've used versions of these questions in real conversations, reviews, and self-reflection. In the AI era, they separate strong seniors from true staff engineers.
If you feel slightly uncomfortable reading them, that's the point.
The Setup
You are not being tested on:
- jargon
- confidence
- speed
- architectural purity
You are being tested on:
- your ability to say no
- your comfort with ambiguity
- whether you choose tradeoffs consciously
- whether you optimize the right layer
Let's begin.
1️⃣ Problem Framing Test
You're told:
"Our checkout conversion dropped 8% last month. We need to fix it urgently."
Before solutions, dashboards, or war rooms — pause.
The Question
What are the first three things you would explicitly not do, and why?
Not doing something is harder than doing something.
Staff engineers instinctively avoid:
- treating symptoms as causes
- moving fast without agreeing on the decision being made
- optimizing the wrong metric under urgency pressure
If your instinct is to jump into fixes, you're still operating at the execution layer.
Staff engineers create space before motion.
2️⃣ Tradeoff Ownership Test
You're designing a system:
- moderate traffic today
- potential 10× growth in a year
- small team
- leadership wants speed
AI confidently suggests:
- microservices
- event-driven architecture
- eventual consistency everywhere
It looks… modern. Scalable. Correct.
The Question
You decide not to follow this design.
How do you defend that decision to:
- a senior architect who loves scale
- a PM worried about deadlines
Same decision. Two audiences.
Staff engineers don't argue tools. They argue constraints.
They explain why now is different from someday — without being dismissive or dogmatic.
3️⃣ "Looks Correct but Feels Wrong" Test
You're reviewing an AI-generated ERD:
- normalized
- clean relationships
- passes basic review
- no obvious bugs
Yet something bothers you.
The Question
Name three signals — not rules — that would make you push back, even if you can't yet prove it's wrong.
This tests taste.
Examples of staff-level signals:
- future change looks painful
- ownership boundaries feel fuzzy
- reads like a snapshot, not a system
Staff engineers trust unease before they can formalize it.
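To make that last signal concrete, here is a hypothetical sketch (all names invented, not from any real review) of two ways to model an order's status. The snapshot shape is the one that tends to pass a clean ERD review; the event shape is the one that survives the questions you get asked eighteen months later.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Snapshot shape: looks clean in an ERD, but every transition
# overwrites the last one. "Why was this cancelled, and when?"
# becomes unanswerable.
@dataclass
class OrderSnapshot:
    order_id: int
    status: str  # current value only; history is lost on update

# System shape: slightly noisier on the diagram, but change
# becomes data instead of destroyed state.
@dataclass
class StatusEvent:
    status: str
    at: datetime

@dataclass
class Order:
    order_id: int
    events: list = field(default_factory=list)

    def transition(self, status: str) -> None:
        # Append, never overwrite: the path to "now" is preserved.
        self.events.append(StatusEvent(status, datetime.now(timezone.utc)))

    @property
    def status(self) -> str:
        # The current view stays cheap to read.
        return self.events[-1].status if self.events else "created"

order = Order(order_id=42)
order.transition("paid")
order.transition("cancelled")
print(order.status)       # same answer the snapshot would give
print(len(order.events))  # plus the history the snapshot threw away
```

Neither version is wrong in isolation; the unease comes from noticing which questions the first one can never answer.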
4️⃣ System Evolution Test
You designed a system 18 months ago. It worked well.
Now:
- a new regulation requires partial data deletion
- analytics depends on that data
- leadership says, "don't break dashboards"
The Question
What do you optimize for first?
- correctness
- compliance
- system simplicity
- business continuity
Pick one.
Then say what pain you're willing to accept because of that choice.
Staff engineers don't promise zero pain. They choose which pain is acceptable.
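As one hedged sketch of what "compliance first" might cost in this scenario (table and field names are invented for illustration): anonymize the regulated fields instead of hard-deleting rows, so dashboards built on counts and totals keep reconciling, while any user-level analytics on those rows is the pain you knowingly accept.

```python
# Hypothetical sketch: comply by scrubbing identity, not deleting rows,
# so that the aggregates dashboards usually sum remain stable.
# Accepted pain: per-user analysis of scrubbed rows is gone for good.

orders = [
    {"order_id": 1, "email": "a@example.com", "amount": 30.0},
    {"order_id": 2, "email": "b@example.com", "amount": 70.0},
    {"order_id": 3, "email": "a@example.com", "amount": 25.0},
]

def forget_user(rows, email):
    """Scrub identifying fields in place; keep the facts aggregates need."""
    for row in rows:
        if row["email"] == email:
            row["email"] = None  # identity removed: the compliance part
            # amount and order_id stay: totals and counts still reconcile

forget_user(orders, "a@example.com")

assert sum(r["amount"] for r in orders) == 125.0  # dashboard total unchanged
assert all(r["email"] != "a@example.com" for r in orders)  # user is gone
```

Whether this is the right pain to pick depends on what the regulation actually demands of derived data; the point is that the choice is made out loud, not discovered in an incident.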
5️⃣ AI-Native Leadership Test
A strong senior engineer says:
"Why are we even reviewing this? The AI-generated design is clearly better than anything we'd do manually."
They're not wrong.
But they're not fully right either.
The Question
How do you respond without:
- sounding defensive
- dismissing AI
- undermining the engineer
Your goal isn't to win.
It's to align the team around what humans still own.
Staff engineers don't compete with AI. They redefine the playing field.
How to Read Your Own Answers
You're not looking for certainty. You're looking for signals.
Staff-ready answers tend to:
- name assumptions explicitly
- avoid absolutist language
- include phrases like "we choose", "we accept", "we trade off"
- optimize for user impact over internal purity
If your answers feel slower, more careful, and slightly uncomfortable — that's a good sign.
The Quiet Truth
AI will keep getting better at:
- designing
- optimizing
- generating
It will not take responsibility.
Staff engineers exist to:
- decide what matters
- protect users from internal complexity
- choose failure modes consciously
- own consequences
That role isn't shrinking.
It's finally becoming visible.
A Final Note
If you didn't "ace" this test — good.
Staff engineering isn't a destination. It's a habit of thought.
One you practice every time you choose clarity over speed, judgment over motion, and ownership over correctness.
AI can help you build systems.
Only you can decide which ones deserve to exist.