Why “Common Sense” Isn’t Always Common
You’re in a meeting where a rollout plan is on the screen. Someone says, “Come on, it’s just common sense: we ship Friday and fix whatever comes up.” A few people nod. You feel the pressure to nod too—because disagreeing makes you sound dramatic. Two weeks later, support tickets spike, refunds follow, and the team spends nights patching problems that were predictable.
What happened wasn’t a lack of intelligence. It was a mismatch between what felt obvious and what was actually true in that specific context.
This article is about using “common sense” the way it works in real life: as a rough starting point, not a decision tool. You’ll walk away with a practical framework for spotting when “common sense” is likely to fail, ways to replace it with fast (not bureaucratic) reasoning, and a set of immediate tactics you can use in meetings, operations, parenting, product work, hiring—any environment where people assume the obvious is universal.
Why this matters right now (even if you’re not “in leadership”)
Complexity has crept into ordinary decisions. Many of us now operate inside systems with:
- Hidden dependencies (one change affects six other things you don’t see)
- Asymmetric risk (a small mistake costs far more than a small win helps)
- Distributed responsibility (the person deciding isn’t the person cleaning it up)
- Speed pressure (you’re rewarded for shipping, not for being right)
In those environments, “common sense” becomes a social shortcut: it signals confidence, practicality, and belonging. It also becomes a way to end debate. That’s the danger.
When someone says “it’s just common sense,” they’re often saying, “I don’t want to examine my assumptions out loud.”
Behavioral science offers a useful lens here. Our brains rely on heuristics—fast rules of thumb—to reduce cognitive load. Daniel Kahneman’s work popularized the idea of fast vs. slow thinking: quick intuition is efficient, but it’s sensitive to context, framing, and bias. In stable environments (driving the same route daily), intuition can be excellent. In changing environments (new product, new team, new market, new baby), it can be confidently wrong.
Research across safety engineering, healthcare, and aviation points to the same conclusion: many high-severity failures are caused not by “unknown unknowns” but by known risks that were minimized because they seemed unlikely or “overcautious.” The pattern repeats: what’s “obvious” to one person is invisible to another, and the system punishes the mismatch.
The core problem “common sense” doesn’t solve
“Common sense” is a bundle of three things that get confused:
1) Personal experience (valuable, but narrow)
If you’ve only seen one version of a situation, your “sense” will fit that version. Example: if every team you worked on treated documentation as optional, “common sense” says documentation is waste. Until you join a regulated environment or a distributed team across time zones where not documenting becomes operational debt.
2) Cultural norms (shared, but not universal)
What’s “obvious” about punctuality, direct feedback, risk tolerance, or authority varies widely. Even inside one country, different industries have different “commons.” A startup’s default is often experimentation; a hospital’s default is safety and traceability.
3) Cognitive shortcuts (fast, but distortable)
We use availability bias (recent events feel more likely), confirmation bias (we notice what supports our view), and authority bias (confident speakers feel correct). “Common sense” often smuggles these biases into decisions without labeling them.
So the real problem is not that common sense is “bad.” The problem is that it’s un-audited. It’s reasoning without a receipt.
Where “common sense” actually works—and where it breaks
You don’t need to banish intuition. You need to know when it’s safe to lean on it.
Common sense performs well when:
- The environment is stable (rules don’t change often)
- Feedback is fast and clear (you learn quickly when you’re wrong)
- Stakes are low (mistakes are recoverable)
- You have repeated practice (your intuition is trained by many cycles)
Common sense breaks when:
- Feedback is delayed (bad decisions look fine for weeks)
- Systems are coupled (small changes cascade)
- Incentives are misaligned (the “win” is local, the cost is global)
- The situation is novel (new market, new tool, new risk profile)
- Rare outcomes dominate (low probability, high impact)
Rule of thumb: The more a decision resembles risk management, the less you should trust “obvious.”
A structured framework: The C.O.M.M.O.N. Sense Audit
When someone invokes “common sense”—or when you feel yourself thinking it—run this quick audit. It’s designed to be fast enough for real meetings and real life, without turning everything into a spreadsheet.
C — Context: “Common where?”
Ask: In what environment is this ‘sense’ common? The fastest way to surface hidden assumptions is to name the context that trained them.
Useful prompts:
- “Is that common sense in our industry, or in your previous one?”
- “Is this how we’ve done it before, or how we should do it now?”
O — Outcomes: “What are we optimizing for?”
Two people can share “common sense” but optimize different outcomes: speed vs. quality, cost vs. safety, short-term metrics vs. long-term trust.
Useful prompts:
- “Which failure is worse: shipping late or shipping broken?”
- “What does success look like in 30 days and in 12 months?”
M — Mechanisms: “What has to be true for this to work?”
This is the anti-handwaving step. Turn the intuition into testable statements.
Useful prompts:
- “What assumptions are we making about users, volume, or edge cases?”
- “What dependencies could invalidate the ‘obvious’ path?”
M — Measurement: “How will we know early?”
Common sense often skips observability. If you can’t detect failure early, you’re betting.
Useful prompts:
- “What’s our earliest warning signal?”
- “Who sees it first—support, ops, customers?”
O — Ownership: “Who pays for being wrong?”
This is where many “sensible” decisions quietly turn unethical or impractical. If the decision-maker doesn’t bear the cost, the sense can look common while the burden is outsourced.
Useful prompts:
- “Who is on call if this breaks?”
- “Who absorbs the rework, reputation hit, or safety risk?”
N — Next step: “What’s the smallest safe move?”
Replace “do it” with “do the smallest version that teaches us.” This preserves speed without gambling.
Useful prompts:
- “Can we pilot this with a small segment?”
- “Can we add a rollback, kill switch, or review gate?”
The audit isn’t about slowing down. It’s about buying clarity at the cheapest possible price—before reality charges you interest.
What This Looks Like in Practice
Mini case scenario #1: The “obvious” policy change
Situation: A manager proposes removing approval steps because “everyone should just use common sense.”
Audit in action:
- Context: Team recently doubled; half are new hires.
- Outcomes: Want faster delivery, but also consistent compliance.
- Mechanisms: Assumes shared judgment about what needs review.
- Measurement: No leading indicators—issues show up in audits months later.
- Ownership: Ops and legal will clean up errors, not the manager.
- Next step: Replace blanket removal with a tiered policy: low-risk changes auto-approved; high-risk changes require review; random sampling for quality.
Result: Speed improves without turning compliance into roulette.
Mini case scenario #2: The “common sense” customer message
Situation: A team wants to email all customers about a pricing change with a single blunt sentence. “People hate long emails—common sense.”
Audit in action:
- Context: Customers vary: long-term enterprise vs. casual users.
- Outcomes: Reduce churn risk, maintain trust, reduce support volume.
- Mechanisms: Assumes brevity always signals clarity, not evasiveness.
- Measurement: Watch open rates, plus downstream support tickets and cancellations, within 72 hours.
- Ownership: Support team will absorb the confusion if the message is too short.
- Next step: Segment: enterprise gets a clear rationale and options; casual users get a short note with a simple FAQ link. Pre-brief support with scripts.
Result: Less backlash, fewer angry tickets, and customers feel respected.
Imagine this scenario: The “obvious” safety shortcut
You’re helping a friend who runs a small warehouse. A veteran worker says, “Common sense—don’t waste time with the guard on that machine.” Nothing bad happens… until the day a new hire reaches in the wrong way. This is the harshest version of the principle: common sense often describes what went right so far, not what is safe.
Decision Traps That Hide Inside “Common Sense” (and how to neutralize them)
This is where things usually go wrong—not because people are careless, but because the phrase “common sense” triggers social and cognitive traps.
Trap 1: “If it’s obvious, we don’t need to define it”
In practice, “obvious” is a sign you should define terms. Teams blow up over words like “done,” “urgent,” “high priority,” “safe,” or “acceptable.”
Neutralizer: Ask for a one-sentence operational definition. Example: “When we say ‘safe to ship,’ do we mean no P0 bugs, or no known data-loss paths, or passing a checklist?”
Trap 2: Social proof as a substitute for evidence
When several people nod, dissent feels irrational. But nodding may only mean “I want this meeting to end.”
Neutralizer: Use a quick silent vote with reasons. It reduces conformity pressure. In person, index cards work; in remote settings, chat works if you ask for one line of rationale.
Trap 3: The “reasonable person” myth
Many organizations design policies around a fictional “reasonable user/employee.” Real humans are tired, rushed, distracted, and sometimes new. Systems must be robust to that.
Neutralizer: Design for the 10th percentile day, not the best day. Ask: “What does this look like when someone is new, overloaded, or making their last task before heading home?”
Trap 4: Confusing confidence with competence
People who speak in “common sense” often sound decisive. But confidence is not calibration. Overconfident judgments are a known risk factor in forecasting and planning.
Neutralizer: Ask for probability ranges, not certainty. “How likely is this to work as expected—60% or 90%? What would move that number?”
When you turn certainty into odds, you turn ego into a conversation.
A fast comparison framework: Intuition vs. Checklist vs. Experiment
Not every decision needs the same tool. Here’s a practical way to choose.
| Approach | Best for | Main risk | How to keep it honest |
|---|---|---|---|
| Intuition (“common sense”) | Repeating patterns, low stakes, clear feedback | Overgeneralizing from personal experience | State assumptions out loud; set a tripwire metric |
| Checklist | Safety, compliance, handoffs, routine high-stakes tasks | Box-checking without thinking | Keep it short; tie items to specific failure modes |
| Experiment / pilot | Uncertainty, novelty, big downside, unclear demand | False confidence from weak tests | Define success metrics; limit scope; ensure reversibility |
If you’re busy, the key is this: when the downside is large or irreversible, graduate from intuition to structure.
The most common mistakes people make (and what to do instead)
Mistake #1: Using “common sense” to end a disagreement
It’s tempting because it sounds mature. But it usually shuts down the one person who sees the edge case.
Do instead: Invite the edge case explicitly: “Before we lock this in, what’s the failure mode we’d be most embarrassed by?”
Mistake #2: Treating “common sense” as a personality trait
People say, “She has great common sense.” What they often mean is “She matches my default assumptions.” In hiring and performance feedback, this becomes a bias amplifier.
Do instead: Evaluate judgment with scenarios. “Here’s a messy situation—what would you do first, and what would you measure?”
Mistake #3: Confusing simplicity with absence of detail
Simple decisions can still require specific guardrails. “We’ll just communicate clearly” is not a plan; it’s a wish.
Do instead: Make the simple plan explicit: owner, timeframe, definition of done, escalation path.
Mistake #4: Ignoring second-order effects
Common sense tends to focus on immediate outcomes (“ship faster”) while ignoring downstream costs (support load, trust loss, maintenance burden, burnout).
Do instead: Add a 2-minute second-order scan: “If this works, what becomes harder? If it fails, what breaks first?”
Mistake #5: Designing rules for experts and hoping novices behave like experts
The most “sensible” process can be unusable when you’re new. This shows up in onboarding, customer flows, and safety operations.
Do instead: Test with the newest person. If they can’t follow it without heroics, it’s not common sense—it’s tribal knowledge.
An immediate implementation plan you can use this week
You don’t need a culture overhaul to make this real. You need a few repeatable moves that reduce ambiguity and protect against predictable failure.
Step 1: Replace “common sense” with one of three acceptable statements
In your own speech, swap the phrase for something auditable:
- “My assumption is…” (makes it discussable)
- “Based on past cases…” (anchors it in evidence)
- “My preference is…” (admits it’s partly subjective)
This tiny change lowers defensiveness and improves clarity.
Step 2: Add one “tripwire metric” to any intuitive decision
If you’re going with the obvious path, name what would make you reverse course. Examples:
- “If support tickets increase by 20% in 48 hours, we pause.”
- “If error rates exceed X, we roll back.”
- “If the kid’s sleep gets worse for three nights, we stop the new routine and reassess.”
Tripwires convert hand-wavy confidence into accountable learning.
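For product or ops teams, a tripwire can be as literal as a few lines of monitoring glue. Here is a minimal sketch; the metric name, baseline, and threshold are hypothetical, and in practice the current value would come from your own analytics or alerting system:

```python
# Minimal tripwire sketch: compare a live metric against a pre-agreed
# threshold and report whether the team's reversal condition has fired.
from dataclasses import dataclass

@dataclass
class Tripwire:
    metric: str          # e.g. "support_tickets_48h" (hypothetical name)
    baseline: float      # value measured before the change
    max_increase: float  # agreed relative increase, e.g. 0.20 for +20%

    def fired(self, current: float) -> bool:
        """True when the metric has risen past the agreed limit."""
        return current > self.baseline * (1 + self.max_increase)

# "If support tickets increase by 20% in 48 hours, we pause."
pause_rule = Tripwire("support_tickets_48h", baseline=100, max_increase=0.20)
print(pause_rule.fired(115))  # within tolerance: keep going
print(pause_rule.fired(130))  # past the line: pause and reassess
```

The point isn’t the code itself; it’s that the threshold is written down and agreed on before the change ships, so “reverse course” is a mechanical check rather than a debate.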
Step 3: Build a two-tier checklist for anything high-stakes
Long checklists get ignored. Use two tiers:
- Tier 1 (always): 5–7 items that prevent catastrophic failure
- Tier 2 (when relevant): conditional checks based on context
This mirrors how high-reliability fields operate: a small set of non-negotiables plus situational judgment.
Step 4: Schedule a 10-minute “pre-mortem” on big bets
A pre-mortem (a technique often used in project risk management) asks: “Assume this failed. What caused it?” It legitimizes skepticism without making anyone the villain.
Keep it tight: 10 minutes, each person names one failure cause, then you choose the top two to mitigate.
Step 5: Close the loop with a short after-action note
The most practical upgrade to “common sense” is institutional memory. Write 5 lines:
- What we expected
- What happened
- What surprised us
- What we’ll change next time
- Owner of the follow-up
This is how you turn individual experience into shared judgment—actual “common” sense.
A quick self-assessment: Is your “common sense” calibrated or just familiar?
Use this as a fast gut-check before you push an “obvious” decision through.
- Recency check: Am I overweighting something that happened last week?
- Reversibility check: Can we undo this cheaply if wrong?
- Stake check: Who gets hurt if this fails—and are they in the room?
- Novelty check: Have I actually seen this exact situation repeat, or just something vaguely similar?
- Signal check: What would tell me I’m wrong early?
If you can’t name the early signal, you’re not using common sense—you’re using hope.
Addressing the pushback: “Are we overthinking everything?”
This concern is legitimate. Over-structuring decisions can create its own failure mode: paralysis, bureaucracy, and performative process.
The goal isn’t to replace intuition with endless analysis. The goal is to match the decision tool to the risk.
Here’s a clean tradeoff:
- More structure buys you safety, consistency, and shared understanding.
- Less structure buys you speed, autonomy, and adaptability.
The mistake is committing to one mode as a moral identity (“We move fast” or “We’re cautious”). Competent teams and households switch modes deliberately.
Walking away with better judgment (not louder opinions)
“Common sense” isn’t useless—it’s incomplete. It becomes powerful when you treat it as a hypothesis and add lightweight validation.
Keep these takeaways practical:
- Use the C.O.M.M.O.N. Sense Audit when stakes are high, context is new, or feedback is delayed.
- Convert certainty into assumptions you can test or observe.
- Add tripwires so you can reverse course before damage compounds.
- Prefer small pilots and reversibility over bold leaps when downside is asymmetric.
- Write short after-action notes to turn individual experience into shared judgment.
The mindset shift is simple but durable: stop treating “common sense” as a trump card, and start treating it as raw material—useful, imperfect, and worth refining.
If you want one immediate move: the next time someone says “it’s just common sense,” ask, calmly, “Common where?” That question alone can save weeks of rework—or prevent the kind of failure you don’t get to redo.

