A full operating playbook for turning a raw case brief into a final presentation that stands up under judge pressure.
Most teams lose before they present. They lose in the first ninety minutes because nobody converts the brief into decisions, ownership, and a delivery rhythm. This playbook is designed to prevent that collapse. It gives you an operating model you can apply across strategy cases, finance-heavy prompts, social impact challenges, and sponsor-led commercial briefs.
If you need live rounds to apply this system immediately, start with the public listings in the CaseCrest competitions directory. Build your sprint schedule around actual deadlines instead of theoretical practice windows.
Treat the opening window as your setup phase, not your research phase. You are not trying to be smart in this phase. You are trying to be aligned.
Start by reading the brief silently once. Then read it aloud as a team and force everyone to mark hard constraints. Hard constraints are requirements that a judge can objectively check: timeline limits, budget constraints, target market boundaries, implementation limitations, legal assumptions, data restrictions, and deliverable format.
After that, write one shared problem statement in plain language. A strong problem statement contains four parts: who must decide, what they must decide, under which constraints, and within what time window.
For example: “The retail bank strategy team must choose a profitable SME growth path in regional corridors while protecting customer acquisition cost and maintaining service quality within a 12-month operating window.”
Once you have this statement, assign explicit owners for five workstreams.
Do not let one person own two critical workstreams unless your team size forces it. If someone is overloaded, the team is already in failure mode.
Judges do not award points for effort. They award points for decision quality under uncertainty. The fastest way to create decision quality is to map the decision tree early.
A decision tree for case competitions does not need to be mathematically complex. It needs to expose dependencies. Start with the final recommendation node, then branch backward into the conditions that make each option viable.
Example branches will vary with the brief; a couple of illustrative ones appear in the sketch below.
For each branch, define the minimum evidence threshold required to claim confidence. This keeps your team from collecting random data. If a data point does not move a branch decision, it is noise.
At the same time, define kill criteria. Kill criteria are explicit reasons to stop pursuing an option. Teams that fail to define kill criteria tend to spend half their cycle polishing dead-end directions because sunk cost feels safe.
Create a hypothesis backlog immediately after the decision tree. A good backlog contains between eight and fifteen hypotheses. More than that and the team diffuses effort.
Each hypothesis should follow the same pattern: a clear claim, the evidence that would validate or reject it, and the branch decision it moves.
Run the backlog like a product sprint board with four columns: untested, in progress, validated, and rejected.
Revisit the board every few hours. This creates momentum and avoids emotional arguments because the team is debating evidence status, not personalities.
A common mistake is over-indexing on quantity. Teams download reports, pull many charts, and feel productive while failing to sharpen the recommendation.
Use a three-layer evidence stack instead.
You only need enough evidence to cross the decision threshold for each branch in your tree. If your evidence stack crosses those thresholds, stop collecting and switch to synthesis.
When citing external facts, record source reliability quickly:
- high: audited reports, regulator releases, earnings calls
- medium: reputable industry analysis and benchmark datasets
- low: unattributed commentary or anecdotal sources
If a critical claim relies on low-reliability evidence, either weaken the claim or add a mitigation statement. Judges can forgive uncertainty. They do not forgive false confidence.
Most judging rubrics implicitly score four dimensions.
Your deck architecture should map one-to-one to those scoring dimensions.
A robust architecture is:
1. Executive summary with recommendation first
2. Decision criteria and evaluation framework
3. Option analysis with tradeoffs
4. Chosen strategy and supporting economics
5. Implementation roadmap and capability plan
6. Risk register with mitigation triggers
7. KPI system and governance cadence
Do not hide the recommendation until the end. Put it early, then earn credibility through evidence and tradeoff clarity.
Each section should answer one judge question. If one section answers multiple questions, split it.
You do not need a massive model to win. You need a transparent model.
Every financial section should make its assumptions and key drivers explicit.
Use conservative assumptions where uncertainty is high, and explicitly label them. If you use aggressive assumptions, attach a mitigation plan or staged rollout.
A practical method is to run three scenarios: base, upside, and stress.
Then show how your recommendation still performs in the stress case with adaptive controls. Judges care about resilience as much as peak return.
If your case is not finance-heavy, still quantify impact in operational units.
Numbers anchor credibility.
Weak teams present strategy as if execution is automatic. Strong teams present execution as a sequence of constrained choices.
Your implementation slide should make that sequence of choices explicit.
Include one “no-regret move” the organization can start immediately even before full rollout approval. This demonstrates practicality.
Also include one contingency trigger. For example, “If CAC exceeds threshold by month two, pause paid channel expansion and reallocate to referral-led acquisition.”
Judges view contingency planning as maturity, not pessimism.
Risk slides often become generic lists. Instead, use a risk matrix linked to your core recommendation.
For each critical risk, specify an owner, a trigger threshold, and a mitigation action.
This converts risk from a compliance artifact into a management system.
You should also include one “assumption watchlist” section in speaker notes.
If judges challenge an assumption, you can respond with prepared alternatives rather than improvising.
Strong content fails when slides are hard to parse. Use these standards:
Avoid dense paragraphs. Replace them with structured bullets that encode a logical progression.
For visuals, prefer simple comparison structures.
If a visual needs a long explanation, the visual is not ready.
Rehearsal is not about memorization. It is about reducing coordination error.
Run three passes.
During each pass, log issues in one shared list with severity tags:
- critical: breaks logic or invalidates recommendation
- major: weakens persuasiveness
- minor: polish issue
Fix critical and major items first. Never spend the final two hours polishing visuals if critical logic defects remain.
Many finals are decided in Q&A. Prepare for it like a separate deliverable.
Build a Q&A bank using four buckets.
For each likely question, prepare a short, structured answer in advance.
This structure keeps answers crisp and credible.
Also pre-assign question ownership by topic, but train cross-cover. If the owner freezes, another teammate should step in within two seconds.
Performance declines under stress when roles blur. Use explicit communication rules.
Run short standups every two to three hours.
This prevents silent divergence where each member builds a different answer.
Conflict is normal in high-pressure cases. Resolve it with decision criteria, not seniority. If criteria are unclear, pause and redefine them before deciding.
On the final day, cognitive load is the main risk. Reduce it by freezing components in stages.
Do not keep changing the recommendation unless new evidence fundamentally breaks it.
Run a final compliance check against the submission rules.
Small procedural misses can disqualify otherwise strong work.
After each competition, conduct a structured debrief within 48 hours. Capture what worked, what broke, and what you would change.
Maintain a shared library of past decks, models, and Q&A banks.
This turns every competition into training data for the next one.
If your goal is consistent high placement, this compounding loop matters more than one-off brilliance.
Keep a regular practice cadence between live competitions.
Rotate team roles during practice so everyone understands cross-functional constraints. A presenter who has never touched modeling tends to over-promise. A modeler who has never presented tends to over-specify.
Consistent role rotation builds empathy and improves decision speed under pressure.
Theory matters only if applied against real constraints. Pick a live competition and run this system end-to-end.
Use the competitions directory to select your next round, then map your team schedule to actual deadlines.
The teams that improve fastest are not necessarily the smartest teams. They are the teams with the cleanest execution loop: clear hypotheses, disciplined evidence, explicit tradeoffs, practical implementation, and rehearsed Q&A.
That is what this playbook is built to produce.