Beyond the Silicon Hype: Is 20% of AI Bad for the Future of Human Labor and Creativity?

The Ghost in the Code: Defining the 20% of AI Bad for Global Progress

When people talk about artificial intelligence, they usually picture a seamless Jarvis-like assistant or a robotic surgeon. That's the shiny marketing brochure version. But there is a dark underbelly, involving everything from Reinforcement Learning from Human Feedback (RLHF) pipelines to the carbon footprint of H100 GPUs and the erosion of "human-in-the-loop" oversight, that most developers don't like to broadcast. That is the 20% slice of the industry this piece is about. We aren't just talking about a few wrong answers here and there; we're talking about a fundamental shift in how information is verified. Once a model starts hallucinating with 99% confidence, the distinction between a helpful tool and a misinformation engine vanishes.

The Architecture of Error

Why do these systems fail so predictably? It isn't because the math is broken, but because the objective functions we set for these machines often prioritize plausibility over truth. Imagine a lawyer using a GPT-4 variant to cite precedents, only to find the machine invented three non-existent cases from the 1990s. This happened in New York in 2023. This is what makes that 20% of AI bad—it is "confidently wrong." The architecture relies on probabilistic next-token prediction, which means the machine doesn't "know" anything; it just guesses the most likely next word based on a massive, messy scrape of the internet. And let's be real, the internet is a landfill of bad ideas.
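
To make the "plausibility over truth" point concrete, here is a minimal, purely illustrative sketch of probabilistic next-token prediction. The vocabulary, probabilities, and prompt are invented for illustration; real models work over tens of thousands of tokens, but the selection rule is the same: pick whatever is statistically likely, with no check against reality.

```python
import random

# Toy next-token distribution for the prompt "The capital of Australia is".
# The probabilities are made up for illustration; note that the most
# statistically "plausible" continuation is not the factually correct one.
next_token_probs = {
    "Sydney": 0.48,    # frequent in web text, but wrong
    "Canberra": 0.35,  # correct, but less common in casual writing
    "Melbourne": 0.12,
    "Perth": 0.05,
}

def sample_next_token(probs, temperature=1.0):
    """Sample a token in proportion to its (temperature-scaled) probability."""
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

# Greedy decoding always picks the highest-probability token: here, the error.
greedy = max(next_token_probs, key=next_token_probs.get)
print("greedy choice:", greedy)                  # -> Sydney (confidently wrong)
print("sampled choice:", sample_next_token(next_token_probs))
```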

Market Saturation and the Dead Internet Theory

There is a growing fear, known as the "Dead Internet Theory," that the web becomes a closed loop of AI-generated content feeding other AI. If 20% of all content online is already synthetic and slightly "off," then future models will be trained on that flawed output. As a result, we face model collapse: the entropy of the "bad" 20% begins to degrade the "good" 80% through a process of digital inbreeding. If we don't fix data provenance now, we're essentially building a cathedral on a foundation of quicksand (and the sand is made of Reddit comments from 2014).
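
A toy way to see this "digital inbreeding" is to fit a distribution, sample from it, refit on the samples, and repeat. The numbers below are arbitrary; the point is only that repeatedly training on a model's own finite output tends to shrink the spread generation after generation, which is the intuition behind model collapse.

```python
import random
import statistics

def simulate_collapse(generations=30, sample_size=20, seed=0):
    """Refit a Gaussian on its own samples each generation; the spread tends to shrink."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0                      # generation 0: the "real" data
    for gen in range(1, generations + 1):
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.fmean(samples)        # the next "model" sees only model output
        sigma = statistics.stdev(samples)
        if gen % 10 == 0:
            print(f"generation {gen}: estimated sigma = {sigma:.3f}")

simulate_collapse()
```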

The Hidden Infrastructure of Inequity: Why 20% of AI Bad Outcomes are Built-In

People don't think about this enough, but the most dangerous part of AI isn't a Terminator; it’s a spreadsheet that says "No" to your mortgage application for no discernible reason. This algorithmic opacity is the core of the problem. When we look at automated decision systems (ADS) used in 2024 for hiring or parole, we see that the 20% of AI bad output is often just a mirror of historical human prejudice. But wait, isn't AI supposed to be objective? That's the lie we've been sold. If you feed a machine 50 years of biased hiring data, it doesn't become fair; it becomes a more efficient way to be unfair at scale.
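
Here is a deliberately simplified sketch of how historical prejudice gets laundered into a "neutral" model. The dataset is synthetic and the single-feature "model" is just a threshold, but it shows the mechanism: if past hiring decisions favored group A, a model fit to those decisions reproduces the same gap, only faster and at scale.

```python
import random

random.seed(0)

# Synthetic "historical hiring" data: identical skill distributions,
# but past managers hired group A at a much higher rate than group B.
def make_history(n=10_000):
    rows = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        skill = random.gauss(50, 10)                 # same skill for both groups
        hire_rate = 0.60 if group == "A" else 0.20   # the historical prejudice
        hired = random.random() < hire_rate
        rows.append((group, skill, hired))
    return rows

history = make_history()

# A "model" trained to imitate past decisions: predict hire whenever the
# historical hire probability for that group exceeds 50%. Skill never matters.
def model_predicts_hire(group):
    past = [hired for g, _, hired in history if g == group]
    return sum(past) / len(past) > 0.5

for group in ["A", "B"]:
    print(group, "predicted hire:", model_predicts_hire(group))
# -> A: True, B: False. Equal skill in, unequal outcomes out.
```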

Data Sweatshops and Ethical Debt

Behind the sleek interfaces of Silicon Valley lies a massive workforce in Kenya, India, and the Philippines. These "ghost workers" spend 10 hours a day labeling horrific imagery to teach the AI what to filter out. This human-in-the-loop exploitation is the hidden cost of the 20% of AI bad ethics. We have offloaded the psychological trauma of cleaning the internet to underpaid contractors. Is a chatbot "good" if its politeness was bought with the mental health of thousands of invisible laborers? Yet, the industry continues to treat this as an "externality," a fancy word for someone else's problem.

The Power Consumption Paradox

Then there is the electricity. A single query to a large language model consumes about 10 times more electricity than a Google search. With data centers in Virginia and Ireland straining the local power grids, we have to ask if the utility of a generative AI poem about cats justifies the 2.9 liters of water evaporated for cooling every few dozen prompts. The issue remains that we are burning the planet to automate the very things—art and conversation—that make us feel human. Which explains why climate activists are growing increasingly hostile toward the rapid scaling of "compute-heavy" models.
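
To put those figures in per-prompt terms, here is a back-of-the-envelope calculation. The ~0.3 Wh per conventional web search baseline and the reading of "a few dozen prompts" as roughly 25 are my own assumptions, not established measurements, so treat the outputs as order-of-magnitude only.

```python
# Back-of-the-envelope figures; the baselines below are assumptions, not measurements.
GOOGLE_SEARCH_WH = 0.3          # assumed energy per conventional web search (Wh)
LLM_MULTIPLIER = 10             # article's claim: an LLM query uses ~10x a search
WATER_LITERS = 2.9              # article's figure for cooling water evaporated...
PROMPTS_PER_WATER_FIGURE = 25   # ...per "few dozen prompts" (assumed to mean ~25)

llm_query_wh = GOOGLE_SEARCH_WH * LLM_MULTIPLIER
water_per_prompt_ml = WATER_LITERS / PROMPTS_PER_WATER_FIGURE * 1000

daily_prompts = 1_000_000_000   # hypothetical global prompt volume per day
print(f"energy per LLM query: ~{llm_query_wh:.1f} Wh")
print(f"water per prompt:     ~{water_per_prompt_ml:.0f} ml")
print(f"daily energy at 1B prompts: ~{daily_prompts * llm_query_wh / 1e6:.0f} MWh")
```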

Algorithmic Bias and the Myth of Neutrality in Machine Learning

I find it fascinating that we trust code more than we trust people, even when the code is written by people we don't trust. Take facial recognition technology. In 2020, Robert Williams was wrongfully arrested in Detroit because an algorithm flagged his face as a match for a shoplifter. The tech had a significantly higher error rate for Black men than for White men. This is the error rate disparity that constitutes the "bad" 20%. It isn't just a bug to be patched in the next update; it’s a reflection of the training set imbalance. But the tech companies kept selling the software to police departments anyway, because the 80% success rate looked good on a quarterly report.
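
Error rate disparity is easy to state precisely. The confusion counts below are invented purely to show the metric; they do not attempt to reproduce any real audit's numbers.

```python
# Hypothetical match/non-match counts from a face-recognition audit.
# All numbers are invented to illustrate the metric, not taken from a real study.
audit = {
    "group_1": {"false_matches": 4,  "non_matches_tested": 10_000},
    "group_2": {"false_matches": 40, "non_matches_tested": 10_000},
}

for group, counts in audit.items():
    fmr = counts["false_matches"] / counts["non_matches_tested"]
    print(f"{group}: false match rate = {fmr:.2%}")

# A 10x gap in false match rates means the "same" software delivers a very
# different risk of wrongful identification depending on who is in front of it.
```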

The Black Box Problem

Where it gets tricky is when the engineers themselves can't explain why a model made a specific choice. We call these Neural Networks, but unlike biological brains, they lack a "moral compass" or a sense of context. They operate in a high-dimensional vector space where "justice" is just another coordinate. If a Convolutional Neural Network decides that a certain zip code is a "high risk" for insurance, it’s not being malicious; it’s just finding a pattern. Except that pattern might be a proxy for race or income level. That changes everything about the "neutrality" of math.
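
The zip-code-as-proxy problem can be shown without any neural network at all. In the synthetic data below, the model never sees the protected attribute, yet its risk scores split cleanly along it, because the zip code encodes it. Every number is fabricated for illustration.

```python
import random

random.seed(1)

# Synthetic records: the protected attribute is correlated with zip code
# because of historical segregation, not because of anything the model "wants".
def make_record():
    protected = random.random() < 0.5
    zip_code = "10001" if (protected ^ (random.random() < 0.1)) else "10002"
    return {"protected": protected, "zip": zip_code}

records = [make_record() for _ in range(10_000)]

# A "risk model" that only looks at zip code; no protected attribute in sight.
def risk_score(record):
    return 0.8 if record["zip"] == "10001" else 0.2

for value in (True, False):
    scores = [risk_score(r) for r in records if r["protected"] == value]
    print(f"protected={value}: mean risk score = {sum(scores) / len(scores):.2f}")
# The gap survives even though "protected" was never an input feature.
```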

Comparing Generative AI to Traditional Software: The 20% Gap

Traditional software is deterministic—if you press "A," you get "A." AI is probabilistic. This shift represents the most significant change in engineering since the industrial revolution. In traditional coding, "bad" software is just broken software. In the world of Large Language Models (LLMs), "bad" AI is software that works perfectly according to its math but fails according to human values. We are far from a solution where we can guarantee a model won't turn "toxic" after three days of exposure to the public internet (remember Microsoft’s Tay in 2016?).
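
The deterministic/probabilistic split is worth making literal. Below, the same input goes through a pure function (identical output every time) and through temperature sampling over a made-up distribution (output varies run to run). The token set and probabilities are placeholders, not anything a real model emits.

```python
import random

def traditional_software(key: str) -> str:
    """Deterministic: identical input, identical output, every single time."""
    return key.upper()

def generative_model(prompt: str, temperature: float = 1.0) -> str:
    """Probabilistic: the 'answer' is a draw from a distribution over tokens."""
    candidates = ["A", "a", "4", "apple"]          # placeholder vocabulary
    weights = [w ** (1.0 / temperature) for w in (0.7, 0.2, 0.06, 0.04)]
    return random.choices(candidates, weights=weights, k=1)[0]

print([traditional_software("a") for _ in range(3)])   # ['A', 'A', 'A']
print([generative_model("a") for _ in range(3)])       # varies between runs
```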

Symbolic AI vs. Connectionism

Maybe the answer lies in the past? Back in the 1980s, we had Symbolic AI, which relied on hard-coded rules and logic. It was rigid and couldn't handle nuance, but it was 100% explainable. Modern AI, or Connectionism, is the opposite—it’s incredibly fluid but 0% explainable. This creates a 20% of AI bad outcome because we've traded "knowing why" for "getting a result." And while that works for suggesting a playlist on Spotify, it is a catastrophic trade-off for medical diagnoses or autonomous weapon systems. Hence, the push for Explainable AI (XAI) is more than a trend; it's a desperate attempt to regain control of the steering wheel.
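
The trade-off between the two paradigms can be caricatured in a few lines. The rule-based function below can always print the rule that fired; the "connectionist" stand-in returns a score from opaque weights (random here, standing in for millions of learned parameters) and has nothing to say about why. Both functions and their thresholds are hypothetical.

```python
import random

# Symbolic-style decision: rigid, but every outcome comes with its rule.
def symbolic_loan_decision(income: float, debt: float) -> tuple[bool, str]:
    if debt > 0.5 * income:
        return False, "rule: debt exceeds 50% of income"
    if income < 20_000:
        return False, "rule: income below 20,000 threshold"
    return True, "rule: all checks passed"

# Connectionist-style decision: flexible, but the weights explain nothing.
weights = [random.gauss(0, 1) for _ in range(2)]   # stand-in for a trained network
def neural_loan_decision(income: float, debt: float) -> bool:
    score = weights[0] * income + weights[1] * debt
    return score > 0          # why this boundary? the model cannot tell you

print(symbolic_loan_decision(45_000, 30_000))   # decision plus a human-readable reason
print(neural_loan_decision(45_000, 30_000))     # decision, full stop
```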

The "Good Enough" Fallacy

In short, we've accepted a "good enough" standard for technology that is rapidly becoming the infrastructure of our lives. We allow for a 20% failure rate in AI that we would never accept in a bridge or a passenger jet. Why? Because the productivity gains are too seductive to ignore. We are gambling with the integrity of our information ecosystem for the sake of 0.5% GDP growth. It’s a risky bet, and honestly, the house (Big Tech) always seems to win while the users bear the "probabilistic" risks. But what happens when the 20% starts to leak into the 80%, and we can no longer tell the difference between the signal and the noise? That’s where the real trouble begins.

The Trap of Binary Morality and Statistical Gaps

The problem is that we treat artificial intelligence like a sentient entity with a conscience rather than a high-dimensional mathematical projection. Most observers fall into the trap of assuming that if 20% of AI bad outputs exist, it is due to a ghost in the machine. It is not. It is data exhaustion. When we discuss whether 20% of AI bad behavior is inherent, we often ignore that hallucination rates in Large Language Models typically hover between 3% and 10% depending on the complexity of the prompt, yet the perception of "badness" scales with the stakes of the task. If a bot suggests a recipe that tastes like cardboard, you shrug. But if it hallucinates a legal precedent in a federal filing, as happened in the 2023 Mata v. Avianca case, that small percentage becomes a professional catastrophe. Accuracy is not a sliding scale of morality.
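
The point that "badness" scales with stakes is just expected-cost arithmetic. The rates below use the 3-10% range quoted above; the cost figures are invented placeholders to show why the same error rate is a shrug in one context and a catastrophe in another.

```python
# Expected cost = probability of a hallucination x cost when it happens.
# Rates come from the 3-10% range cited in the text; costs are invented.
scenarios = [
    ("bad recipe suggestion",        0.10, 5),          # cost: a wasted dinner ($)
    ("fabricated legal citation",    0.03, 50_000),     # cost: sanctions, reputation ($)
    ("wrong drug interaction note",  0.03, 1_000_000),  # cost: patient harm ($)
]

for name, rate, cost in scenarios:
    print(f"{name}: expected cost per use = ${rate * cost:,.2f}")
# Identical-looking error rates, wildly different expected damage.
```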

The Myth of Universal Bias Correction

You cannot simply "patch" bias out of a neural network like you fix a leaky faucet. Because these models are trained on the Common Crawl dataset, which contains petabytes of unfiltered human internet discourse, the rot is structural. Many believe that Reinforcement Learning from Human Feedback (RLHF) is a magic wand. It is not. Research from Stanford has shown that aggressive safety filtering can actually degrade the reasoning capabilities of a model by up to 25% in specific logic tasks. We are effectively lobotomizing the engine to keep it from swearing, which creates a different kind of "bad" AI: one that is polite but functionally useless. Is a stupid, safe AI better than a brilliant, edgy one? This tension defines the current development bottleneck.

The Confusion Between Logic and Probability

We often mistake fluent syntax for cognitive reasoning. But let's be clear: a model does not know that "2+2=4" because it understands math; it knows it because that string of characters is a statistical certainty in its training data. When a model fails a basic reasoning test, like asking how many sisters a man has if he has three brothers and each brother has one sister, it isn't being "bad." It is simply failing to navigate a low-probability linguistic path. The issue remains that we anthropomorphize these failures. We call them lies, when they are really just noisy vectors. As a result, we blame the tool for lacking a soul it never claimed to possess in the first place.

The Ghost in the Infrastructure: Latent Toxicity

Beyond the visible errors lies a darker, less-discussed reality regarding the environmental and labor costs of maintaining the "good" 80%. Every time you generate a whimsical image of a cat in a space suit, you are consuming approximately 0.3 kWh of energy, which is equivalent to charging your smartphone sixty times over. If 20% of AI bad usage includes frivolous, high-energy waste, then the ecological footprint becomes a primary ethical concern. We focus on what the AI says, yet we rarely look at the lithium mines or the cooling systems sucking millions of gallons of water from local aquifers in Iowa and Arizona. Expert advice is simple: stop using generative tools for tasks that a simple search engine or a calculator can solve. Efficiency is the only true hedge against systemic misuse.

Shadow Labor and the Human Cost

The "bad" 20% is often scrubbed away by a hidden army of data labelers in the Global South, specifically in Kenya and the Philippines, who earn less than $2 per hour. These workers spend eight hours a day viewing graphic violence, hate speech, and child sexual abuse material to ensure your corporate chatbot stays "brand safe." This is the invisible trauma tax of the AI revolution, and it explains why your experience feels so clean: it was filtered through human misery. To truly mitigate the negative impact of these systems, we must demand supply chain transparency in AI development. If the "good" parts of the model are built on the exploitation of the "bad" parts of the labor market, the entire mathematical architecture is compromised. I admit that we are currently addicted to this cheap intelligence, but the bill will eventually come due.

Frequently Asked Questions

Does the 20% figure represent a permanent technical limitation?

The idea of 20% of AI bad performance is a moving target rather than a fixed physical constant. While error rates in computer vision have dropped significantly, with Top-5 error rates on ImageNet falling below 2% in recent years, the complexity of "badness" in natural language is harder to quantify. Current transformer architectures suffer from diminishing returns, meaning we might need 10x more data to fix the final 5% of errors. As a result, we may be stuck with a baseline level of unpredictability for the foreseeable future. Total elimination of error is a mathematical impossibility in stochastic systems.
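
One way to picture those diminishing returns is a simple power-law error curve, the general shape scaling-law studies tend to report. The exponent and constant below are arbitrary placeholders, chosen only to show how quickly the data requirement explodes as the target error shrinks.

```python
# Toy scaling law: error ~ C * data^(-ALPHA). Both constants are placeholders.
C, ALPHA = 1.0, 0.3

def data_needed(target_error: float) -> float:
    """Invert error = C * data**(-ALPHA) to get the relative data volume required."""
    return (C / target_error) ** (1 / ALPHA)

for err in (0.10, 0.05, 0.02, 0.01):
    print(f"target error {err:.0%}: relative data needed = {data_needed(err):,.0f}x")
# With ALPHA = 0.3, halving the error requires roughly 10x more data,
# which is the "diminishing returns" shape described above.
```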

Can we use AI to police other AI systems?

Using a "Constitutional AI" approach, companies like Anthropic use a secondary model to critique and filter the outputs of a primary model. This creates a recursive feedback loop where the watchdog is just as prone to hallucinations as the actor. Data suggests that automated moderation systems still have a false positive rate of roughly 12% when detecting nuanced sarcasm or cultural slang. You cannot fix a mirror by looking at it through another cracked mirror. We need human-in-the-loop oversight to ensure the alignment problem doesn't spiral into a hall of mirrors.
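
A quick way to see why a noisy watchdog is a problem: feed a stream of benign outputs through a filter with the 12% false-positive rate mentioned above and count what gets blocked. The counts are synthetic, and the 12% figure is simply taken from the paragraph, not re-verified.

```python
import random

random.seed(7)

FALSE_POSITIVE_RATE = 0.12   # the moderation false-positive rate quoted in the text
N_BENIGN_OUTPUTS = 100_000   # synthetic stream of perfectly acceptable replies

blocked = sum(random.random() < FALSE_POSITIVE_RATE for _ in range(N_BENIGN_OUTPUTS))
print(f"benign outputs wrongly blocked: {blocked} (~{blocked / N_BENIGN_OUTPUTS:.0%})")

# Stacking a second, equally noisy critic does not fix this; it compounds it:
# a benign output survives both filters with probability (1 - 0.12)**2.
print(f"benign survival rate under two critics: {(1 - FALSE_POSITIVE_RATE) ** 2:.0%}")
```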

Will the bad 20% eventually lead to a catastrophic AI takeover?

The fear of a "paperclip maximizer" or a rogue AGI is largely a distraction from the algorithmic harms happening right now. We are far more likely to be harmed by a biased credit scoring algorithm or a flawed recidivism prediction tool than a killer robot. In 2022, a survey of AI researchers found that while 36% feared a "catastrophic" outcome, the vast majority were concerned with immediate socio-economic displacement. The issue remains one of policy and corporate accountability. We don't need to fear the machine's intent; we need to fear the incompetence of its deployment.

Toward a Pragmatic Symbiosis

The obsession with whether 20% of AI bad outcomes define the technology is a binary distraction we can no longer afford. We must stop waiting for a perfectly sanitized oracle that will never arrive. The reality is that AI is a force multiplier for existing human intent, amplifying both our brilliance and our bigotry with equal indifference. I take the position that the "bad" 20% is not a bug to be deleted, but a mirror of our own messy data that we are finally being forced to confront. We should treat AI like a high-performance engine that lacks a steering wheel; the power is undeniable, but the direction is entirely our responsibility. If we fail to regulate the deployment context rather than just the code, we deserve the chaos that follows. Stop blaming the math for the sins of the architect.
