How Many People Fail McKinsey Solve?

The short answer: most people. But if you assume this is just another corporate test, you’re far off the mark. The thing is, McKinsey doesn’t release official pass rates. They never have. So every number floating around comes from candidate self-reporting, forums like GMAT Club, and third-party prep platforms. That’s messy data. But patterns emerge. And those patterns tell a story—not of brilliance filtered out, but of smart people tripped up by something they didn’t prepare for.

Understanding the McKinsey Solve Assessment: What It Replaces and Why

The old gateway was the PST—the Problem Solving Test. Brutal, time-pressured, logic-heavy. McKinsey retired it globally by late 2022. In its place? The Solve assessment, developed with gaming company Imbellus. This isn’t a spreadsheet or a multiple-choice quiz. It’s a simulation. Two scenarios, 60 to 75 minutes total. You’re building ecosystems or managing a plague on an island. The screen looks more like a nature documentary than a job exam.

It measures cognitive processes, not business knowledge. McKinsey claims it evaluates systems thinking, critical reasoning, decision-making, and situational awareness. No finance formulas, no market sizing. Just you, a virtual environment, and a ticking clock. The shift reflects a broader trend: elite firms ditching traditional metrics for behavioral simulations that mimic real consulting chaos.

Here’s the twist—candidates walk in thinking they need to “win” the game. But McKinsey says it’s not about winning. It’s about how you play. Did you test assumptions? Did you adjust when new data arrived? Or did you bulldoze forward with a flawed plan? That’s where the scoring hides. And that’s exactly where most fail without even knowing it.

From PST to Solve: A Fundamental Shift in Evaluation Style

The Problem Solving Test had clear right and wrong answers. You could prep with practice tests, improve speed, learn shortcuts. Solve? You don’t know what “right” looks like. There are thousands of possible ecosystem configurations. Each decision branches into new variables. You can’t memorize. You can only adapt.

One candidate told me they spent 40 hours grinding case math. Then walked into Solve and spent 10 minutes wondering how to place coral reefs on a reef map. They failed. Not because they weren’t smart. Because their prep was backward. The skill set shifted—and most prep didn’t catch up.

The Role of Imbellus in Redefining Assessment Design

Imbellus, acquired by Pearson in 2020, builds what they call “eco-systemic” simulations. Think video games with academic rigor. Their engine tracks every click, hesitation, backspace. How long did you spend analyzing before acting? Did you revisit earlier choices? These micro-behaviors form your cognitive signature.

McKinsey isn’t just scoring outcomes. They’re scoring process. And if your process is chaotic—even if you “succeed” in the game—you lose points. That’s why some candidates feel they aced it and get rejected. The problem is, the game saw hesitation masked as confidence.

Failure Rates: What the Data Suggests (And What It Doesn’t)

McKinsey won’t publish pass rates. Not surprising. They’ve never been transparent about selection stats. But prep platforms like CaseCoach and Management Consulted have pulled anonymized user data. Their aggregated reports show 60% to 75% failure across regions. In competitive pools—like MBA applicants from top schools—the bar feels even higher. Some coaches estimate the effective pass rate drops to 20% among over-prepared candidates, simply because everyone looks strong on paper and differentiation matters more.

Then there’s geographic variance. Anecdotally, European candidates report slightly lower stress around Solve than Americans. Could cultural familiarity with gamified assessments play a role? Possibly. But data is still lacking. Experts disagree on whether regional differences reflect test design bias or just preparation gaps.

Failure doesn’t mean poor performance. Some candidates fail because they’re strong in traditional interviews but weak in adaptive reasoning. Others fail because they overthink. One applicant told me they spent 15 minutes optimizing a food chain that only needed to be “good enough.” Perfectionism killed them. The clock ran out. And that’s the trap—Solve rewards progress, not polish.

Regional Variations in Performance and Reporting Bias

India and Southeast Asia report higher volumes of Solve takers. Forums from these regions suggest pass rates might be lower—not because candidates are weaker, but because competition is denser. One recruiter mentioned seeing 800 applications for 15 spots in Jakarta. That changes everything about how you’re evaluated.

But here’s the catch: online forums overrepresent the anxious, the over-prepared, and the tech-savvy. The candidate who passes without drama? They don’t post. So failure narratives dominate, and we’re left with a distorted view. It’s like judging airplane safety by reading only passenger complaints.

Why Self-Reporting Data Is Flawed but Still Useful

You can’t trust every Reddit thread. “I failed and I scored 780 on the GMAT” sounds dramatic—but was the GMAT even relevant? Yet, when hundreds say similar things—“I didn’t know what I was doing,” “It felt nothing like practice”—a pattern emerges. The gap isn’t raw IQ. It’s simulation fluency. And most prep doesn’t teach that.

Common Reasons Candidates Fail the Solve Assessment

It’s not lack of intelligence. It’s misalignment. You walk in thinking it’s a logic test. But it’s a stress test in disguise. The screen changes. New animals appear. Parameters shift. If you can’t pivot, you drown. And that’s where most fail—not in knowledge, but in flexibility.

Poor time allocation is the top killer. Some burn 50 minutes on the first scenario, leaving 10 for the second. Game over. Others rush through both, making shallow choices. Balance is everything. McKinsey’s internal data (leaked indirectly) suggests optimal pacing is within 5% of recommended time per phase.
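To make the pacing point concrete, here is a minimal sketch of a time-budget check. The 40/25-minute split and the two scenario names are assumptions for illustration; only the ±5% tolerance comes from the reported figure above, and none of this is an official McKinsey number.

```python
# Illustrative pacing check. The recommended splits below are assumed
# for the sketch, not McKinsey's actual per-phase recommendations.

def within_pacing(actual_minutes, recommended_minutes, tolerance=0.05):
    """True if actual time spent is within ±5% of the recommended time."""
    return abs(actual_minutes - recommended_minutes) <= tolerance * recommended_minutes

plan = {"ecosystem": 40, "disease": 25}  # hypothetical 65-minute split

# Burning 50 minutes on the first scenario blows the budget:
print(within_pacing(50, plan["ecosystem"]))  # False: 25% over, not within 5%
print(within_pacing(41, plan["ecosystem"]))  # True: within ±2 minutes of 40
```

The point of the check isn’t the exact numbers, it’s the discipline: decide a split up front, and treat a large overrun on scenario one as a signal to move on.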

Another killer? Over-engineering solutions. One candidate built a perfectly balanced coral reef ecosystem—only to realize at the end that stability wasn’t the only metric. Resilience to sudden temperature swings was weighted higher. They hadn’t tested for disruption. And that’s exactly where the simulation exposes you: did you consider second-order effects?

Decision-Making Under Uncertainty: Where Most Candidates Break

You’re given partial data. You must act anyway. That’s consulting. But humans hate uncertainty. So we stall. We click around. We second-guess. The game sees that. It records every hesitation. And it flags low confidence. I find this overrated—the idea that “trusting your gut” works here. It doesn’t. You need structured experimentation: change one variable, observe, adjust. Not guesswork. Not paralysis.
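The “change one variable, observe, adjust” loop is essentially one-factor-at-a-time experimentation. A rough sketch, with a hypothetical `score()` standing in for the simulation’s feedback (the real scoring is opaque, so every number and parameter name here is an assumption):

```python
# One-factor-at-a-time search sketch. score() is a hypothetical stand-in
# for the simulation's feedback, not the actual Solve mechanics.

def score(config):
    # Hypothetical fitness: best at predators=3, prey=8.
    return -((config["predators"] - 3) ** 2 + (config["prey"] - 8) ** 2)

def improve_one_variable(config, variable, candidates):
    """Change ONE variable at a time, observe the score, keep the best."""
    best = dict(config)
    for value in candidates:
        trial = dict(config, **{variable: value})
        if score(trial) > score(best):
            best = trial  # observe, then adjust
    return best

config = {"predators": 1, "prey": 5}
config = improve_one_variable(config, "predators", range(0, 6))
config = improve_one_variable(config, "prey", range(0, 12))
print(config)  # {'predators': 3, 'prey': 8}
```

Because only one variable moves per pass, you can attribute every change in outcome to a single cause—which is exactly the structured behavior the assessment is said to reward.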

Interface Misunderstanding: The Hidden Trap

No one fails because they can’t use a mouse. But people fail because they misread the interface. For example: in the disease scenario, some don’t realize they can simulate outbreak spread before committing. They act blind. That’s a process flaw. The tool was there. They didn’t use it. And that changes the outcome completely.

Solve vs. Traditional Case Interviews: A Misguided Comparison

People keep asking: “Is Solve harder than case interviews?” That’s like asking if swimming is harder than chess. They’re different sports. Case interviews test structured problem-solving, communication, clarity. Solve tests adaptability, systems thinking, quiet decision-making. One is social. The other is solitary. Yet both matter. You can ace Solve and bomb the live case. Or shine in interviews but freeze in simulation.

What’s measured in Solve isn’t visible in a 45-minute conversation. Can you handle ambiguity without asking for help? Can you revise your plan when new data breaks your model? These are survival traits in real projects, which is why McKinsey added it. But that doesn’t mean Solve should carry more weight. In short, it’s a filter, not a final judge.

Skill Sets Compared: Cognitive Processing vs. Verbal Logic

Case interviews reward verbal precision. You say, “First, I’ll assess market size,” and the interviewer nods. Solve doesn’t nod. It watches. It sees if you actually check population density before placing a predator. One is performative. The other is behavioral. And because they measure different things, comparing them is useless.

Preparation Methods: Why Case Practice Doesn’t Transfer

You can’t case-practice your way into Solve success. No framework helps when you’re placing towers to stop an invasive species. You need spatial reasoning, not MECE. You need trial-and-error instinct, not storytelling. Some prep companies sell “Solve strategies” that are just repackaged case techniques. That’s misleading. The only real prep? Simulating the environment. Even then—Imbellus changes scenarios every 6–8 months. What worked in January may not work in June.

Frequently Asked Questions

Is the McKinsey Solve Assessment adaptive?

Not in the way the GMAT is. The difficulty doesn’t shift in real time based on performance. But the scenarios introduce dynamic events—like sudden storms or species mutations—that force adaptation. So while the test isn’t algorithmically adaptive, the environment is. That’s what makes it feel unpredictable.

Can you retake the Solve assessment?

Generally, no. McKinsey enforces a 12- to 18-month lockout after a failed attempt. Some offices make exceptions for lateral hires or internal transfers. But for entry-level roles? One shot. That said, policies vary and there’s no central rulebook: one office says yes, another says no.

Does the Solve score affect your interview ranking?

It does—but not linearly. A high Solve score doesn’t guarantee an offer. But a low one kills your chances. Think of it as a threshold. Pass, and you’re in the pool. Fail, and you’re out. Once you’re in, interviews decide everything. The score doesn’t follow you into the room.

The Bottom Line

Most people fail McKinsey Solve. Not because they’re not smart enough, but because they prepare the wrong way. They study cases, memorize frameworks, drill math. Then they hit a simulation that cares nothing for any of it. The real filter isn’t IQ—it’s cognitive flexibility. Can you think in systems? Can you act without complete data? Can you let go of perfection?

My advice? Stop treating it like a test. Start treating it like a lab experiment. Test one variable. Observe. Adjust. Repeat. Don’t aim for perfect. Aim for learning. Because in the end, McKinsey isn’t looking for winners. They’re looking for learners. And that’s the irony—sometimes failing fast in the simulation is the only way to pass. Suffice it to say, the game isn’t what it seems.
