Beyond the Surface: Decoding the Six Assessment Criteria Used to Evaluate Global Development Projects

The Evolution of Accountability: Why We Need the Six Assessment Criteria Today

Evaluation used to be a messy, subjective affair where success was often defined by the person holding the loudest megaphone. But that changed. In 1991, the OECD's Development Assistance Committee (DAC) published a set of evaluation principles to bring order to the chaos, though it wasn't until the 2019 revision that the framework we use today, the "DAC Criteria," was truly solidified with the addition of coherence as the sixth criterion. The thing is, evaluating a water sanitation project in rural Ethiopia is fundamentally different from auditing a tech startup in San Francisco. You aren't just looking at profit; you are looking at human lives. Because these environments are so volatile, the criteria act as a North Star for evaluators trying to navigate the fog of conflicting data and local politics. And yet, even with these tools, the industry still struggles with bias. It's an imperfect science, but it's the best one we've got.

From Five to Six: The 2019 Paradigm Shift

For nearly three decades, the world got by with only five benchmarks. Then, the evaluation community realized a glaring hole existed: projects were operating in silos. Where it gets tricky is when a health initiative inadvertently undermines a local education program because no one checked if they were aligned. Coherence was introduced to fix this specific fracture. It asks if a project fits within the existing ecosystem of policies and other interventions. Honestly, it’s unclear why it took so long to realize that doing good in a vacuum isn't enough. We finally moved toward a "nexus" approach, recognizing that climate change, conflict, and poverty are tangled threads that cannot be pulled individually without affecting the whole tapestry.

The Weight of Subjectivity in Objective Frameworks

I find it fascinating that we pretend these criteria are purely mathematical. They aren't. While efficiency might involve hard data, like calculating the cost per vaccine delivered, something like sustainability is essentially a high-stakes prediction about the future. Evaluators often disagree on which criterion matters most. Is a project a "success" if it was wildly effective but completely unsustainable once the foreign funding dried up? Some say yes; I'd argue it isn't. This tension keeps the sector honest, forcing a constant debate between short-term wins and long-term systemic change.

Deconstructing Relevance and Coherence: The Foundation of Strategic Alignment

The first hurdle any project must clear is proving it actually belongs in the room. Relevance is about the "why" and the "who." It examines the extent to which the objectives and design of the intervention respond to beneficiaries' needs, global priorities, and local policies. If you build a state-of-the-art digital library in a region where the electricity grid is down 18 hours a day, your project is irrelevant. It's that simple. You have to look at the target population's actual lived reality rather than what looked good on a grant application in London or Washington D.C. Consistency with the 2030 Agenda for Sustainable Development is usually the benchmark here, ensuring that local actions feed into global Sustainable Development Goals (SDGs).

The Intricacies of External and Internal Coherence

Coherence is the new kid on the block, and it’s surprisingly complex. It is divided into two distinct streams. Internal coherence looks at the "synergy" within a specific organization—does the UN's food program talk to its refugee agency? External coherence, meanwhile, looks at how the project meshes with the work of other actors, like local NGOs or the private sector. People don't think about this enough, but duplication of effort is one of the greatest wastes of resources in the history of development aid. By forcing evaluators to check for "complementarity," the six assessment criteria actively discourage the "hero complex" where one organization tries to do everything alone. This shifts the focus from individual glory to collective impact, which changes everything about how we design interventions.

Adapting to Changing Contexts: The Fluidity of Relevance

Relevance isn't a "set it and forget it" metric. A project that was perfectly relevant in January 2020 might have become entirely useless by June 2020 due to the COVID-19 pandemic. That's a massive point of contention. Should we judge a project based on what we knew at the start (the ex-ante view) or what we know now (the ex-post reality)? Most experts now argue for "adaptive management," where the project design is tweaked mid-stream to stay relevant. But this is hard. It requires donors to be flexible with their money, and as anyone who has worked with a government agency knows, flexibility is not exactly their middle name. Yet, if the context shifts—say, through a sudden economic collapse or a natural disaster—and the project doesn't shift with it, the Theory of Change falls apart.

Effectiveness versus Efficiency: The Constant Tug-of-War Between Results and Resources

Once you've established that a project is relevant, you have to ask: did it actually work? This is effectiveness. It measures the extent to which the intervention achieved its objectives, including any differential results across different groups. But, and here is the kicker, effectiveness doesn't care about the price tag. You could spend $10 million to save one person, and technically, the project was effective. That's why we immediately pair it with efficiency. Efficiency is the economic lens. It asks if the results were delivered in a timely and cost-effective way. We are talking about the "Value for Money" (VfM) ratio. In a world of finite resources, being effective but inefficient is a luxury we can no longer afford. As a result, evaluators spend an enormous amount of time poring over financial audits and logframes to see if there was a cheaper way to get the same result.
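
To make the VfM arithmetic concrete, here is a minimal sketch in Python. Every number in it is invented for illustration; the point is the unit-cost comparison an evaluator runs when two delivery options both "worked."

```python
# Hypothetical Value-for-Money comparison. "cost" is total spend in USD;
# "outputs" is units delivered (e.g., vaccines administered).
# All figures are invented, not drawn from any real audit.

programs = {
    "mobile clinics": {"cost": 450_000, "outputs": 90_000},
    "fixed sites": {"cost": 300_000, "outputs": 50_000},
}

for name, p in programs.items():
    unit_cost = p["cost"] / p["outputs"]  # cost per vaccine delivered
    print(f"{name}: ${unit_cost:.2f} per output")

# mobile clinics: $5.00 per output
# fixed sites: $6.00 per output
```

Both options could score well on effectiveness; only the efficiency lens reveals that one delivers the same output for a dollar less per unit.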

Measuring the Intangible: When Effectiveness Isn't a Number

How do you measure the effectiveness of a "peace-building" seminar or a "women's empowerment" workshop? You can't just count heads. You have to look at qualitative indicators, which is where things get messy and beautiful. We use tools like the Most Significant Change (MSC) technique or "Outcome Harvesting" to capture stories that numbers miss. But don't be fooled; the donors still want their spreadsheets. The issue remains that we often favor things that are easy to measure (like "number of books distributed") over things that actually matter (like "improvement in literacy rates"). It is a trap that even seasoned evaluators fall into because it's safer to report a hard number than a nuanced sociological shift.

The Efficiency Trap: Is Cheaper Always Better?

There is a dangerous tendency to confuse efficiency with being "cheap." If you buy the lowest-quality seeds for a farming project to save money, and they all die in a week, you haven't been efficient—you've been wasteful. Real efficiency means optimal resource transformation. It takes into account the opportunity cost of the funds. Could that money have done more good elsewhere? For example, the Sightsavers organization is often cited for its high efficiency because it can perform a trachoma surgery for just a few dollars, but that efficiency is built on decades of supply-chain optimization, not just cutting corners. Which explains why we can't judge efficiency without looking at the quality of the outputs. A cheap bridge that collapses in three years is the height of inefficiency, no matter how little you spent on the concrete.
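
A quick worked example, with invented seed prices and survival rates, shows why the quality-adjusted unit cost, not the sticker price, is the figure that matters:

```python
# Why raw unit cost misleads: divide the budget by *surviving* output,
# not purchased output. Prices and survival rates are hypothetical.

budget = 10_000  # USD

options = {
    "cheap seeds": {"price_per_kg": 2.0, "survival_rate": 0.10},
    "quality seeds": {"price_per_kg": 5.0, "survival_rate": 0.85},
}

for name, o in options.items():
    kg_bought = budget / o["price_per_kg"]
    kg_surviving = kg_bought * o["survival_rate"]
    print(f"{name}: ${budget / kg_surviving:.2f} per surviving kg")

# cheap seeds: $20.00 per surviving kg
# quality seeds: $5.88 per surviving kg
```

Under these assumptions, the "cheap" option costs more than three times as much per unit of output that actually survives.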

Alternative Frameworks: Can the Six Assessment Criteria Be Replaced?

Despite their dominance, the DAC criteria are not the only game in town. Some critics argue they are too "Western-centric" or too focused on top-down accountability. Alternatives like the ALNAP criteria for humanitarian action or the Utilization-Focused Evaluation (UFE) model proposed by Michael Quinn Patton offer different perspectives. UFE, for instance, argues that the most important thing is whether the evaluation is actually used by the people on the ground, regardless of whether it checks all six boxes. Except that most major donors—the World Bank, USAID, the EU—still demand the "Big Six." It's the universal language of aid. If you want the funding, you play by the rules, which is a bit of a cynical reality, but it’s the reality nonetheless.

Indigenous and Localized Evaluation Perspectives

What if the "beneficiaries" have a completely different definition of success? This is where the standard criteria often stumble. In many Pacific Island cultures, for example, the concept of reciprocity and relational harmony is far more important than "efficiency." If a project achieves its goals but destroys the social fabric of a village, was it a success? Under the traditional six criteria, it might still get a passing grade. But under a localized framework, it would be a failure. We are starting to see a push for "culturally responsive evaluation," which tries to blend the rigor of the DAC criteria with local values. It’s a slow transition, and many bureaucrats are resistant because it’s harder to standardize. But, if we want to move beyond the colonial roots of development, it's a conversation we have to have.

Common Pitfalls and the Trap of Subjectivity

The problem is that most evaluators fall headfirst into the trap of confirmation bias before they even open the dossier. We often see practitioners treating assessment frameworks as a rigid checklist rather than a living dialogue. Because they seek to validate their initial gut feeling, they ignore the nuanced overlap between efficiency and impact. But how can one truly separate the two when a project’s internal mechanics are failing? The data suggests that roughly 38% of evaluations suffer from "halo effects" where a high score in relevance artificially inflates the sustainability rating. It is a mess of circular logic. Yet, we continue to pretend these pillars exist in a vacuum.
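
If an organization keeps its historical criterion ratings in a table, the halo effect is at least screenable. Here is a hedged sketch using invented scores on a hypothetical 1-to-6 rating scale; the statistics.correlation function requires Python 3.10 or later.

```python
# Screen past evaluations for halo effects: if relevance and
# sustainability ratings move in lockstep, dig deeper.
from statistics import correlation  # Python 3.10+

relevance      = [6, 5, 6, 4, 5, 6, 3, 5]  # invented panel scores
sustainability = [5, 5, 6, 3, 5, 6, 3, 4]

r = correlation(relevance, sustainability)  # Pearson's r
print(f"Pearson r = {r:.2f}")

# A value near 1.0 is a prompt for scrutiny, not proof of bias:
# well-designed projects can legitimately score high on both criteria.
```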

The Transparency Illusion

Let's be clear: citing "insufficient data" is frequently a convenient mask for a lack of methodological rigor. Professionals often bypass the six assessment criteria by focusing on the easiest metrics to quantify, such as budget burn rates or attendance figures. This is a catastrophic error. You cannot measure human dignity or systemic shifts with a simple spreadsheet. In short, when the metric becomes the goal, it ceases to be a good metric. We see this in 15% of public sector audits where "effectiveness" is claimed simply because the money was spent on time, regardless of whether the actual problem was solved. It would almost be funny if it weren't so expensive.

Conflating Output with Outcome

Confusion reigns supreme here. An output is the physical bridge; the outcome is the increased trade between two previously isolated villages. Except that most reports stop at the bricks and mortar. If you fail to distinguish between these layers, your evaluative process is effectively toothless. Experts suggest that distinguishing these requires a 40% increase in qualitative interview time compared to standard quantitative surveys. The issue remains that stakeholders want quick wins, not deep truths. As a result, we get superficial success stories that crumble under the slightest longitudinal scrutiny.
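
One pragmatic guard against this conflation is structural: tag every indicator with its level in the results chain, so a report that contains only outputs is visibly incomplete. A minimal sketch follows; the field names are illustrative, not any agency's standard.

```python
# Tag indicators by their level in the results chain so outputs
# can never silently masquerade as outcomes.
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    OUTPUT = "output"    # what was delivered (the bridge)
    OUTCOME = "outcome"  # the change it enabled (trade between villages)

@dataclass
class Indicator:
    name: str
    level: Level
    value: float
    unit: str

report = [
    Indicator("bridges completed", Level.OUTPUT, 1.0, "structures"),
    Indicator("cross-river trade volume", Level.OUTCOME, 34.0, "% increase"),
]

outcomes = [i for i in report if i.level is Level.OUTCOME]
print(f"{len(outcomes)} outcome indicator(s) out of {len(report)} total")
# A report with zero OUTCOME rows is bricks and mortar, nothing more.
```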

The Hidden Lever: Coherence and Systemic Synergy

Often relegated to the sidelines, the criterion of coherence is actually the secret sauce of any high-level analysis. It asks whether the intervention fights against or aligns with existing policies. It is the "internal versus external" harmony. (Think of it as tuning one instrument while the rest of the orchestra plays slightly off-key.) If your project provides clean water but another agency is subsidizing a nearby polluting factory, your project evaluation score for coherence should be zero. We must stop viewing interventions as lonely islands in a vast sea of indifference.

The Expert Edge on Adaptive Management

You should prioritize the "pivot" over the "plan." The most sophisticated evaluators are now using real-time feedback loops to adjust the weightings of the six assessment criteria mid-cycle, as in the sketch below. Static evaluations are relics of a slower age. Which explains why 72% of top-tier NGOs have moved toward iterative assessment models that prioritize learning over policing. My stance is firm: if you aren't willing to change your criteria when the context shifts, you aren't evaluating; you are merely documenting a failure you were too rigid to prevent. We admit that this requires a level of bravery many bureaucrats simply do not possess.
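
Here is what mid-cycle re-weighting can look like, as a hedged sketch: assume each criterion is scored 1 to 6 and the weights sum to one; after a context shock, the panel shifts weight toward relevance and sustainability. Scores and weights alike are invented.

```python
# Recompute a composite rating after the evaluation panel re-weights
# the criteria mid-cycle. All numbers are hypothetical.

scores = {
    "relevance": 5, "coherence": 4, "effectiveness": 5,
    "efficiency": 3, "impact": 4, "sustainability": 2,
}

baseline_weights = {k: 1 / 6 for k in scores}  # equal weighting
shock_weights = {  # after, e.g., an economic collapse
    "relevance": 0.25, "coherence": 0.15, "effectiveness": 0.15,
    "efficiency": 0.10, "impact": 0.10, "sustainability": 0.25,
}

def composite(scores, weights):
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(scores[k] * w for k, w in weights.items())

print(f"baseline composite:   {composite(scores, baseline_weights):.2f}")
print(f"post-shock composite: {composite(scores, shock_weights):.2f}")
```

Under these invented numbers the headline score barely moves, but the story changes: the weak sustainability rating now carries half again as much weight as it did at baseline.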

Frequently Asked Questions

How do these standards apply to private sector startups?

While the six assessment criteria were birthed in the world of international development, venture capital firms are increasingly adopting modified versions to track Environmental, Social, and Governance (ESG) performance. Data from recent market shifts indicates that startups with high strategic alignment scores are 2.4 times more likely to secure Series B funding. The logic is simple: investors want to see that a company isn't just profitable today, but sustainable and relevant in a shifting regulatory landscape. You have to prove that your "disruption" isn't just chaos, but a coherent response to a market gap. It turns out that impact assessment is just as much about survival as it is about altruism.

Can you skip one of the criteria if it feels irrelevant?

The short answer is a resounding no, although many try. Skipping a pillar like sustainability creates a massive blind spot that usually haunts the organization three years down the line. A study of 500 discontinued projects found that 60% failed because the "efficiency" focus ignored the long-term "impact" requirements. You might think you are saving time, but you are actually just deferring a crisis. Each of the six assessment criteria acts as a weight-bearing wall in the house of your project. Removing one might not cause an immediate collapse, but the cracks will appear the moment the wind blows.

What is the most difficult criterion to measure accurately?

Impact takes the crown for difficulty because it requires isolating the specific changes caused by your intervention from the noise of the global environment. To do this properly, you need counterfactual analysis or randomized control trials, which can increase evaluation costs by 25% or more. Most organizations lack the analytical capacity to track long-term shifts that occur five to ten years after the funding has dried up. Because human behavior is inherently unpredictable, mapping the causal chain remains the "holy grail" of the evaluative field. It is a grueling, expensive, yet unavoidable task if you want to claim any degree of real success.
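
To make "counterfactual" concrete, here is a difference-in-differences sketch with made-up means; a real study would need comparable groups, far more than four numbers, and standard errors around every estimate.

```python
# Difference-in-differences: subtract the trend observed in untouched
# control villages from the raw change in project villages.
# All four means are hypothetical (household income, USD/month).

treated_before, treated_after = 120.0, 165.0  # villages with the project
control_before, control_after = 118.0, 140.0  # matched villages without it

naive_change = treated_after - treated_before    # +45.0
secular_trend = control_after - control_before   # +22.0
impact_estimate = naive_change - secular_trend   # +23.0

print(f"naive change:     {naive_change:+.1f}")
print(f"secular trend:    {secular_trend:+.1f}")
print(f"estimated impact: {impact_estimate:+.1f}")

# Roughly half of the raw gain would have happened anyway.
```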

Towards a More Honest Evaluative Future

The six assessment criteria are not a divine commandment, but they are the best shield we have against institutional delusion. We must stop treating these performance benchmarks as a bureaucratic hurdle to be cleared with the minimum amount of honesty required. The reality is that most assessments are far too kind, shielding stakeholders from the uncomfortable truth that their "innovative" solutions are often redundant or inefficient. I believe we need a radical shift toward negative evaluation, where we actively hunt for reasons why a project failed rather than polishing its mediocre successes. If we continue to fear the data, we will continue to waste billions on projects that look good on paper but leave no trace on the ground. True expertise lies in the willingness to see the evaluative gaps and call them by their real names. It is time to stop the charade of perfect scores and start the difficult work of real accountability.
