Guide
Analysis Fundamentals for Case Competitions
The core analysis moves every case team should master: market sizing, segmentation, and assumption hygiene.
A rigorous but practical framework for building market-sizing models and evidence stacks that judges trust.
Good case teams do not just present numbers. They present decisions supported by numbers that are coherent, transparent, and actionable. This guide covers how to build a full market-sizing and research workflow that holds up in judging panels and Q&A.
If you want to apply these techniques against live deadlines, start by selecting a target brief from current CaseCrest competitions. A real prompt exposes weaknesses in your process faster than any generic drill.
Market sizing usually fails for one of four reasons:
A polished chart does not solve any of these issues. Only process discipline does.
Before calculating anything, define the decision objective in one sentence. Then define what market variable directly informs that decision. For instance, if the decision is whether to enter a market, the decision variable might be obtainable revenue within the planning horizon, not total category size.
Your market-sizing output should always map to a decision variable.
Use a three-level question hierarchy.
Level 1: strategic question
Level 2: analytical questions
Level 3: evidence questions
When teams skip hierarchy, they collect data fragments that cannot be assembled into a defensible answer.
Top-down sizing starts from macro totals and applies filters. Bottom-up sizing starts from unit behavior and scales up. Hybrid combines both and reconciles differences.
Use top-down when:
Use bottom-up when:
Use hybrid when:
A hybrid approach is usually best in competition finals because it provides both speed and defensibility.
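The reconciliation step can be sketched in a few lines of Python. All figures below are hypothetical placeholders, not case data:

```python
def top_down(macro_total, *filters):
    """Apply successive penetration filters to a macro market total."""
    size = macro_total
    for f in filters:
        size *= f
    return size

def bottom_up(units, adoption_rate, avg_spend):
    """Scale from unit behavior: units x adoption x average spend."""
    return units * adoption_rate * avg_spend

# Hypothetical inputs: $5B category total, 40% relevant segment, 25% reachable;
# 2M households, 80% adoption, $300 average annual spend.
td = top_down(5_000_000_000, 0.40, 0.25)
bu = bottom_up(2_000_000, 0.80, 300)

# Reconcile: a small gap supports the estimate; a large gap flags a bad filter.
gap_pct = abs(td - bu) / ((td + bu) / 2) * 100
print(f"top-down ${td:,.0f} | bottom-up ${bu:,.0f} | gap {gap_pct:.1f}%")
```

A gap under roughly 10-20% suggests the two methods corroborate each other; anything larger means revisiting the filters or the unit assumptions before presenting either number.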
Most teams misuse these terms. Keep definitions strict: TAM (total addressable market) is total demand if every potential customer were served; SAM (serviceable addressable market) is the portion of TAM your offer and channels can actually reach; SOM (serviceable obtainable market) is the share of SAM you can realistically capture given competition and execution constraints.
Your recommendation should never rely on TAM directly. Strategy decisions should be linked to SAM and SOM under explicit assumptions.
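The TAM-to-SOM cascade is just two explicit filters. A minimal sketch with hypothetical ratios (a real case must justify each one):

```python
# Hypothetical filters; in a real case each ratio needs a documented rationale.
tam = 2_000_000_000          # total addressable market ($)
serviceable_share = 0.30     # segments and geographies you can actually serve
obtainable_share = 0.10      # realistic capture of SAM given competition

sam = tam * serviceable_share
som = sam * obtainable_share
print(f"TAM ${tam:,} -> SAM ${sam:,.0f} -> SOM ${som:,.0f}")
```

Writing the cascade this way forces both filters onto the slide, which is exactly what makes the SOM figure defensible under questioning.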
Also define time horizon:
Judges care about transition logic, not just static market snapshots.
Every assumption should pass three tests:
Assumptions that do not materially affect the decision can be simplified. Assumptions that materially affect the decision should be stress-tested.
Use assumption tables with columns:
This turns your model into a decision instrument rather than a black box.
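One way to keep the assumption table machine-checkable is to hold it as structured records rather than slide text. The column names and figures below are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    name: str
    value: float
    low: float            # downside bound
    high: float           # upside bound
    source: str           # where the figure comes from
    decision_impact: str  # "high" | "medium" | "low"

assumptions = [
    Assumption("category CAGR", 0.06, 0.04, 0.08, "two analyst reports", "high"),
    Assumption("avg annual spend", 300, 250, 350, "customer survey", "medium"),
    Assumption("returns rate", 0.03, 0.02, 0.05, "industry benchmark", "low"),
]

# Only assumptions that materially affect the decision get stress-tested.
to_stress = [a.name for a in assumptions if a.decision_impact == "high"]
```

Filtering on `decision_impact` operationalizes the rule above: simplify immaterial assumptions, stress-test material ones.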
Market size without economics is incomplete. Build at least one unit economic lens:
If a large market has poor unit economics, recommendation viability is weak. If a smaller market has superior economics and execution fit, it may dominate strategically.
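A minimal unit-economics lens can be computed alongside the sizing model. All inputs below are hypothetical, and the three-year value deliberately ignores churn and discounting for brevity:

```python
# All figures hypothetical; swap in case data.
avg_revenue_per_customer = 300   # $/year
gross_margin = 0.45
cac = 90                         # customer acquisition cost ($)

contribution = avg_revenue_per_customer * gross_margin   # $/year per customer
payback_months = cac / (contribution / 12)
ltv_3yr = contribution * 3       # crude 3-year value, no churn or discounting
print(f"contribution ${contribution:.0f}/yr | payback {payback_months:.1f} mo")
```

Pairing a payback figure like this with the SOM estimate is what lets a smaller market with superior economics beat a larger one in the recommendation.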
Tie market potential to economic quality:
This matrix helps judges see why your recommendation is selective rather than simplistic.
One-point forecasts are fragile. Build scenario bands.
At minimum, use a conservative, a base, and an accelerated scenario.
For each scenario, vary only key drivers:
Then show recommendation stability. If your strategy only works in the accelerated case, it is risky. If it remains viable in conservative conditions, confidence increases.
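The stability check above can be sketched by varying one key driver across the three scenarios. The base market, growth bands, and viability bar below are all assumed figures:

```python
# Hypothetical base market, growth bands, and viability threshold.
base_market = 480_000_000
years = 5
growth = {"conservative": 0.04, "base": 0.06, "accelerated": 0.08}
viability_bar = 550_000_000   # assumed minimum market to justify entry

projected = {name: base_market * (1 + g) ** years for name, g in growth.items()}
viable = {name: size >= viability_bar for name, size in projected.items()}
# A recommendation that survives the conservative case is far more defensible.
```

If `viable["conservative"]` is false while the others are true, that is exactly the "only works in the accelerated case" fragility to flag before judges flag it for you.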
Do not rely on a single source for critical claims. Triangulate with at least two independent perspectives:
For each critical metric, annotate:
This signals analytical maturity and reduces vulnerability in Q&A.
Student teams often freeze when data is incomplete. Instead, use bounded estimation:
If sources conflict, do not hide it. Show conflict explicitly and justify your selection logic.
A sentence like “Sources vary from 4% to 9% CAGR; we use 6% base with 4% downside and 8% upside due to category maturity and channel concentration” demonstrates control over uncertainty.
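That sentence translates directly into a bounded estimate with an explicit sanity check. The individual source figures below are hypothetical; only the 4-9% range and the 6%/4%/8% selections come from the example above:

```python
# Mirrors the narrated example: sources report 4%-9% CAGR; team picks 6% base.
source_cagrs = [0.04, 0.05, 0.07, 0.09]    # hypothetical independent estimates
low, high = min(source_cagrs), max(source_cagrs)
base, downside, upside = 0.06, 0.04, 0.08  # documented judgment calls

# The chosen band must sit inside the evidence range, or the selection is arbitrary.
assert low <= downside <= base <= upside <= high
print(f"range {low:.0%}-{high:.0%} | base {base:.0%} ({downside:.0%}/{upside:.0%} band)")
```

Keeping the raw source list next to the chosen band makes the "why 6%?" question answerable in one breath.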
Avoid demographic-only segmentation unless the prompt requires it. Prefer behavior and economics-based segments:
Then map strategy to segment priorities:
This makes your go-to-market recommendation actionable.
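A behavior-and-economics segmentation can be as simple as two thresholds. The customer records, cut-offs, and segment names below are illustrative assumptions:

```python
# Hypothetical behavioral/economic segmentation instead of demographics.
customers = [
    {"id": "A", "purchases_per_year": 12, "margin": 0.50},
    {"id": "B", "purchases_per_year": 2,  "margin": 0.55},
    {"id": "C", "purchases_per_year": 10, "margin": 0.20},
]

def segment(c):
    frequent = c["purchases_per_year"] >= 6
    high_margin = c["margin"] >= 0.40
    if frequent and high_margin:
        return "core"       # defend and deepen
    if high_margin:
        return "upsell"     # grow purchase frequency
    return "volume"         # serve efficiently or deprioritize

labels = {c["id"]: segment(c) for c in customers}
```

Each segment label maps to a strategic action, which is what makes the go-to-market recommendation actionable rather than descriptive.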
Many cases require market selection across geographies or channels. Build overlays rather than separate disconnected analyses.
For geography, compare:
For channels, compare:
Then produce a prioritized matrix. The recommendation should specify a sequence, not just a single target.
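A weighted scoring matrix makes the sequencing explicit. The weights, option names, and 1-10 scores below are hypothetical; in a real case they should be derived from the sizing and unit-economics work:

```python
# Hypothetical weights and 1-10 scores; derive these from the sizing work.
weights = {"market_size": 0.40, "unit_economics": 0.35, "execution_fit": 0.25}
options = {
    "metro stores": {"market_size": 8, "unit_economics": 6, "execution_fit": 9},
    "online":       {"market_size": 7, "unit_economics": 8, "execution_fit": 6},
    "regional":     {"market_size": 5, "unit_economics": 7, "execution_fit": 7},
}

scores = {name: sum(weights[k] * v for k, v in attrs.items())
          for name, attrs in options.items()}
sequence = sorted(scores, key=scores.get, reverse=True)   # rollout order
```

Presenting `sequence` rather than a single winner answers the "why this order?" question before it is asked.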
Judges value clarity more than sophistication. Your model should be auditable quickly.
Rules:
In slides, you do not show all formulas. You show model logic and key drivers. Keep detailed structure in backup.
Analysis wins only when translated into story. Use this structure:
Each section should reference explicit evidence from your sizing work.
Avoid data dumping. Every chart should answer a specific question that advances the recommendation.
Prepare for recurring challenges:
Prepare concise responses with one evidence point and one mitigation.
Example response pattern: "We use 6% growth because independent sources bracket the range at 4% to 9% (evidence); if growth lands at the low end, the phased rollout caps downside exposure (mitigation)."
This pattern prevents rambling and improves perceived command.
Do not spend all your time modeling. Use a balanced allocation:
If your team enters final rehearsal without a tested recommendation, reallocate immediately. Presentation polish cannot rescue weak logic.
Run this final checklist:
If any answer is no, fix that gap first.
Maintain shared templates for:
Template reuse improves speed and reduces errors. Over time, your team's quality improves through process, not luck.
Pick a current brief and execute this workflow from setup to final. Use the public competition feed to select one with timelines that force discipline.
When your sizing process is coherent, your recommendation becomes harder to attack and easier to trust. That is the difference between a decent submission and a finals-ready one.