What Are the Five Types of Evaluation?

Think about the last time you gave feedback on a draft proposal. Or graded a midterm. Or reviewed customer satisfaction scores after launching a new feature. That’s evaluation in motion—structured, intentional, and often mislabeled as just “assessment.” The thing is, not all evaluations are created equal. Some shape what happens next. Others judge what already occurred. And that’s exactly where confusion starts.

How Diagnostic Evaluation Shapes What Comes Next

Diagnostic evaluation happens before anything begins. It’s not about grading. It’s about gauging baseline knowledge, identifying gaps, or uncovering hidden assumptions. A teacher might use a short quiz on day one to see what students remember from last year. A software team might run discovery interviews before writing a single line of code. This kind of evaluation is less formal than others—but no less powerful.

It operates on the principle that you can’t plan effectively if you don’t know where you’re starting from. Imagine launching a corporate training program on data privacy without first checking whether employees understand basic cybersecurity hygiene. You’d risk either oversimplifying or losing people entirely. In one 2021 case at a mid-sized insurance firm in Zurich, a pre-training diagnostic revealed that 68% of staff couldn’t identify a phishing email. That changes everything.

And that’s why skipping diagnostic steps feels like building a house on sand. Because even if the blueprint looks perfect, the foundation might be cracked.

When to Use Diagnostic Evaluation

Timing is everything. You deploy this type when preparation matters more than results. Onboarding processes, academic course design, pilot programs—these are all spaces where insight beats output. But—and this is where people get tripped up—you don’t need an elaborate instrument. A five-question survey can do more than a 50-item exam if it hits the right nerve.

Common Tools in Diagnostic Evaluation

Simple checklists, concept maps, KWL charts (what I know, what I want to know, what I learned), and informal interviews dominate here. No statistical rigor required. What matters is relevance. And yes, sometimes it’s just a five-minute chat over coffee that reveals more than any standardized test ever could (not that HR departments like hearing that).

Formative Evaluation: The Real-Time Feedback Engine

Formative evaluation unfolds during the process. It’s ongoing, adaptive, and often low-stakes. Think of it as coaching rather than judging. A writer revising a manuscript based on peer comments. A surgeon refining technique after video review. Even a city council tweaking a bike lane design after public input. This kind of evaluation thrives on iteration.

Unlike summative methods, which wait until the end to pass judgment, formative evaluation intervenes while there’s still time to change course. That’s its superpower. One study in Ontario schools found that students receiving regular, actionable feedback improved test scores by an average of 23% over those who didn’t—despite identical teaching content. Same curriculum. Different feedback loops. That’s the gap.

And yet, so many organizations treat feedback as an afterthought. Quarterly reviews. Annual surveys. By then, it's too late. The project shipped. The semester ended. The campaign ran. Continuous feedback is still far from standard practice, even though the evidence is overwhelming.

But here’s the rub: formative evaluation only works when recipients feel safe acting on it. If your team fears retaliation for admitting mistakes, they’ll ignore the feedback no matter how well-crafted. Culture eats methodology for breakfast, every time.

Strategies for Effective Formative Feedback

The most effective versions are specific, timely, and tied to observable behaviors. “Your presentation lacked structure” isn’t helpful. “The first 10 minutes jumped between three topics without transitions—consider outlining key points upfront” is. The latter gives someone something to work with.

Why Most Formative Systems Fail

Because they’re bolted on, not built in. Managers schedule “feedback sessions” like dental appointments—infrequent and dreaded. But real formative evaluation should be woven into the workflow. Slack comments. Pair programming notes. Quick stand-up reflections. It doesn’t need a title or a template. In fact, the more natural it feels, the better it works.

Summative Evaluation: Judgment at the Finish Line

Summative evaluation answers the question: Did it work? It arrives at the end of a cycle—semester, project, policy rollout—and makes a determination. Final grades. ROI calculations. Post-mortems. These are high-stakes assessments that often dictate rewards, promotions, or program survival.

It’s the performance review, the final exam, the auditor’s report. Because it carries consequences, it tends to be highly structured and quantifiable. Standardized tests, net promoter scores, budget variance analyses—all summative tools designed to deliver a verdict.

Suffice it to say, it's the most visible type of evaluation. But visibility doesn't mean effectiveness. A famous example: in 2010, New York City spent over $20 million on a teacher evaluation system based largely on student test scores. After years of implementation, researchers found no significant improvement in learning outcomes. The system was summative, yes—but also reductive.

Because it reduced complex teaching practices to a single number, it incentivized test prep over critical thinking. Which explains why many educators resented it. You can measure what’s easy. Measuring what matters? Entirely different challenge.

Limitations of Summative Approaches

They’re backward-looking. They don’t help improve the current effort. And they often miss context—like external factors affecting results. A nonprofit might fail to meet fundraising targets not due to poor strategy, but because a major donor passed away mid-year. The report card says “F.” The reality is more complicated.

When Summative Evaluation Is Justified

Accountability demands it. Funders want proof. Boards require transparency. In those cases, summative evaluation isn't just useful—it's necessary. The open question is how much weight to assign it. One number shouldn't erase 12 months of nuanced effort.

Confirmative Evaluation: Testing Long-Term Relevance

This one flies under the radar. Confirmative evaluation checks whether a program, product, or policy still serves its purpose months or years after launch. It asks: Is this still needed? Is it still effective? Has the environment changed?

It’s the post-launch reality check most organizations skip. A hospital might introduce a telehealth platform during the pandemic. Usage spikes. Success declared. But two years later, patient adoption drops by 41%. Why? Because in-person visits resumed, and the interface never improved. No confirmative evaluation meant no warning.

And that’s exactly where many digital transformation projects die—not with a crash, but a slow fade. Because nobody bothered to ask, “Is this still working?” six months later. It’s a bit like buying a subscription you no longer use but keep paying for. To give a sense of scale, Gartner estimated in 2023 that enterprises waste $118 billion annually on underused software licenses.

So confirmative evaluation isn’t about fault. It’s about relevance. Because needs shift. Markets evolve. People change. And what worked yesterday might just be noise today.

How to Implement Confirmative Checks

Set milestones. At 6, 12, and 24 months post-launch, reassess impact, user satisfaction, and alignment with current goals. Use both quantitative metrics and qualitative input. Then act—update, sunset, or double down.
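For readers who track this programmatically, the milestone schedule above can be sketched as a small helper. This is an illustrative toy, not a real tool: the function names, the milestone list, and the phrasing are my own assumptions based on the 6/12/24-month cadence described.

```python
# Illustrative sketch of the 6/12/24-month confirmative cadence.
# All names here (MILESTONE_MONTHS, confirmative_due, next_milestone)
# are hypothetical, invented for this example.

MILESTONE_MONTHS = [6, 12, 24]  # reassess at these points post-launch

def confirmative_due(months_since_launch: int) -> bool:
    """True if a confirmative review falls exactly at this month mark."""
    return months_since_launch in MILESTONE_MONTHS

def next_milestone(months_since_launch: int):
    """The next scheduled review, or None once all milestones have passed."""
    upcoming = [m for m in MILESTONE_MONTHS if m > months_since_launch]
    return upcoming[0] if upcoming else None
```

In practice the point is less the code than the discipline: a review that isn't scheduled somewhere concrete tends not to happen.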

Meta-Evaluation: Judging the Judges

Meta-evaluation assesses the evaluation itself. Is the assessment valid? Was it biased? Did it measure what it claimed to? Think of it as quality control for evaluative processes. An external panel reviewing a university’s accreditation self-study. A third-party audit of a government impact report. This level of scrutiny is rare—but vital.

If we don’t evaluate our evaluations, how do we know they’re trustworthy? In 2019, a WHO-commissioned meta-evaluation of global maternal health programs found that nearly 30% of impact claims were based on flawed methodologies. Some used non-representative samples. Others confused correlation with causation. Without meta-evaluation, those errors go unnoticed.

Yet, few institutions budget time or resources for this. It feels meta, abstract, even indulgent. But because the credibility of any judgment depends on the rigor behind it, skipping this step risks building entire strategies on shaky ground. Honestly, it is unclear how many organizations routinely conduct meta-evaluations—data is still lacking.

I find this undervalued in practice. Everyone wants results. Nobody wants to audit the machine producing them.

Evaluation Types Compared: Purpose, Timing, and Pitfalls

Let’s lay them side by side. Diagnostic: before. Formative: during. Summative: after. Confirmative: much later. Meta-evaluation: always questioning the process. Each has strengths, each has blind spots.

Diagnostic prevents misalignment but can be dismissed as “just another survey.” Formative enables growth but requires psychological safety. Summative delivers accountability but often oversimplifies. Confirmative ensures longevity but gets ignored in favor of new initiatives. Meta-evaluation secures validity but feels like navel-gazing to skeptics.

The problem is, most organizations over-rely on summative methods while underinvesting in the rest. That’s like only checking your car’s odometer and never looking at the oil level. One tells you distance. The other might save the engine.

Frequently Asked Questions

Can one evaluation serve multiple purposes?

Yes—but carefully. A mid-year employee review might include formative feedback (to improve performance) and summative elements (to determine bonus eligibility). But mixing them risks diluting intent. Employees may distrust "developmental" feedback if it's tied to pay. The safeguard is transparency about purpose.

Which type is most important?

It depends. In fast-moving environments, formative evaluation wins—it keeps things agile. In compliance-heavy fields, summative takes priority. For innovation, diagnostic is key. There’s no universal answer. That said, I am convinced that confirmative evaluation deserves more attention. Most failures aren’t sudden. They’re slow drifts no one noticed.

How do you choose the right evaluation type?

Ask: What do you need to know, and when? If it’s before launch—diagnostic. During—formative. After—summative. Months later—confirmative. And if you’re relying on any evaluation to make big decisions, consider a meta-evaluation first. Because trust isn’t assumed. It’s earned.
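The timing rule above is essentially a lookup table, and for illustration it can be written as one. This is a toy sketch; the phase labels and function name are my own invention, not anything from a real framework.

```python
# Toy mapping of the "when do you need to know?" rule to a lookup.
# Phase labels and names are hypothetical, chosen for this example.
EVALUATION_BY_PHASE = {
    "before launch": "diagnostic",
    "during execution": "formative",
    "at completion": "summative",
    "months after launch": "confirmative",
}

def pick_evaluation(phase: str) -> str:
    """Return the evaluation type suited to a project phase."""
    return EVALUATION_BY_PHASE.get(phase, "unknown phase")
```

And, per the point above, if the answer is feeding a big decision, a meta-evaluation of the process itself comes first.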

The Bottom Line

These five types aren’t a checklist. They’re a toolkit. Smart organizations use them in sequence, not isolation. They diagnose before they design. They form feedback loops while executing. They summarize for accountability. They confirm long-term value. And they meta-evaluate to stay honest.

Because evaluation isn’t just about measuring success. It’s about understanding it. And sometimes, the most powerful question isn’t “Did we hit the target?” but “Was it the right target to begin with?” That’s where real insight begins.
