The Growing List of Forbidden Algorithms: Deciphering What Kind of AI is Illegal in the Modern Global Market

The Jurisdictional Nightmare of Defining Unlawful Intelligence

The thing is, "illegal" isn't a universal constant like the speed of light; it is a messy, shifting patchwork of territorial anxieties. If you are operating in the European Union, the EU AI Act (officially adopted in early 2024) is the heavy hitter, creating a strict hierarchy of risk. But try explaining that to a developer in a jurisdiction with zero oversight, and they might just laugh at you. We are witnessing a fragmented digital world where a facial recognition tool used by police in one city is a standard security feature, while in another, it’s a direct ticket to a multi-million-dollar lawsuit. The issue remains that the technology doesn't respect borders, yet the police officers at those borders definitely do. Because of this, what kind of AI is illegal often comes down to the specific data privacy and civil liberty frameworks protecting the person being "processed" by the machine.

The Ghost in the Machine: Why Code Becomes Criminal

How does a mathematical model actually become a "crime"? It happens when the output of that model—those seemingly neutral weights and biases—violates a protected human right. When an algorithm decides who gets a mortgage or who stays in jail based on proxies for race, it isn't just a "bad model"; it's a violation of the Equal Credit Opportunity Act or similar anti-discrimination laws. Honestly, it's unclear if we can ever fully scrub these biases out, but the legal hammer doesn't care about your technical difficulties. I believe we have spent too much time worshipping "efficiency" while ignoring the fact that an efficient tool for discrimination is just a more dangerous weapon. Yet, we see companies still pushing the envelope, hoping their "black box" defense will hold up in a court of law (it usually doesn't).

Systems of Control: The Prohibited List You Need to Know

When asking what kind of AI is illegal, you have to look at the "Unacceptable Risk" category. The most notorious example is Social Scoring, a concept often associated with China's 2014 Social Credit System planning outline but now explicitly banned by the EU AI Act. This involves systems that track your everyday behavior—what you buy, who you talk to, whether you cross the street against a red light—and distill your worth into a single numerical value. It’s the ultimate algorithmic panopticon. If a government uses an AI to grade its citizens and subsequently denies them access to public services based on that score, that system is illegal under the emerging global standards of human-centric AI.

Biometric Identification and the End of Anonymity

Real-time Remote Biometric Identification (RBI) in publicly accessible spaces is the new frontline of the legal war. Imagine walking through a crowded square in London or Paris and having a machine instantly cross-reference your face against a database of millions in under 200 milliseconds. Except that in most of Europe, doing this for general policing is now largely forbidden, save for very specific "ticking time bomb" scenarios like searching for a kidnapped child or preventing an imminent terrorist attack. People don't think about this enough, but the right to be anonymous in a crowd is a cornerstone of a free society. That explains why Clearview AI has faced such a massive backlash and numerous cease-and-desist orders across multiple continents; scraping billions of photos from social media to build a private "search engine for faces" is a textbook example of what kind of AI is illegal in the eyes of privacy advocates.

Cognitive Behavioral Manipulation: The Subliminal Threat

Then we have the weirder, more insidious stuff. Systems that use subliminal techniques to distort a person's behavior in a way that causes physical or psychological harm are strictly off-limits. Think of an AI-driven toy that uses "secret" audio cues to encourage a child to engage in dangerous activities, or a workplace algorithm that uses micro-nudges to push employees past the point of exhaustion. That changes everything. We aren't just talking about privacy anymore; we are talking about the integrity of the human will. Is it illegal to be a little bit manipulative in advertising? No. But is it illegal to use AI to bypass a person's conscious decision-making? In the EU, absolutely.

The Technical Threshold: When High-Risk Becomes "Too High"

Where it gets tricky is the "High-Risk" category, which isn't illegal per se, but is so heavily regulated that it might as well be if you don't have a team of fifty lawyers. These are systems used in critical infrastructure, education, or law enforcement. If you are building an AI to grade SAT essays or to filter resumes for a Fortune 500 company, you are playing in a legal minefield. As a result, you must provide technical documentation, ensure human oversight, and guarantee a level of robustness and accuracy that most startups frankly can't meet. The issue remains that "accuracy" is a moving target. Can you really prove your AI is 99% accurate across every demographic? Many companies are finding out the hard way that their "proprietary" algorithms carry illegal undercurrents of bias.
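The "accurate across every demographic" question is one of the few parts of this that can be checked mechanically: you break the headline accuracy number apart by group. The sketch below is a minimal illustration with invented data and hypothetical group labels, not a compliance procedure.

```python
# Minimal sketch: disaggregated accuracy audit for a binary classifier.
# The predictions, labels, and group tags are all invented for illustration.

def accuracy_by_group(predictions, labels, groups):
    """Return accuracy computed separately for each demographic group."""
    totals, correct = {}, {}
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + int(pred == label)
    return {g: correct[g] / totals[g] for g in totals}

# Toy data: the overall number looks respectable, but group B lags badly.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
labels = [1, 1, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

per_group = accuracy_by_group(preds, labels, groups)
print(per_group)  # {'A': 1.0, 'B': 0.5}

overall = sum(p == l for p, l in zip(preds, labels)) / len(preds)
print(f"overall: {overall:.2f}")  # 0.75 -- the single number hides the gap
```

The point is that a single aggregate metric can mask exactly the kind of disparity regulators are now asking vendors to document.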

Predictive Policing and the Fallacy of Minority Report

Predictive policing—using algorithms to "forecast" where crimes will happen or who will commit them—is a primary target for regulators. The United Nations has repeatedly warned that these tools often just automate existing police bias, creating a feedback loop where marginalized neighborhoods are over-policed because the data says they are high-crime, which leads to more arrests, which reinforces the data. It's a self-fulfilling prophecy. Because of this, using AI to predict an individual’s likelihood of committing a crime based solely on personality traits or past behavior is increasingly viewed as an illegal infringement on the presumption of innocence. We are far from a world where "Pre-Crime" is a reality, and the law is making sure it stays that way.
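The self-fulfilling prophecy described above can be simulated in a handful of lines: patrols are sent wherever the historical record says crime is highest, new arrests can only be recorded where officers actually are, and a tiny initial imbalance snowballs. A toy sketch, with every number invented and both districts given identical true crime rates:

```python
# Toy simulation of a predictive-policing feedback loop. Two districts
# have IDENTICAL underlying crime; district A merely starts with a couple
# more *recorded* incidents. All numbers are invented for illustration.

def run_feedback_loop(recorded, arrests_per_round=5, rounds=20):
    """Each round, patrols go where the data says crime is highest;
    arrests happen only where officers are, so the data reinforces itself."""
    for _ in range(rounds):
        target = max(recorded, key=recorded.get)  # "predictive" allocation
        recorded[target] += arrests_per_round     # new data from patrolled area only
    return recorded

final = run_feedback_loop({"A": 12, "B": 10})
print(final)  # {'A': 112, 'B': 10}

share_A = final["A"] / sum(final.values())
print(f"{share_A:.2f}")  # 0.92 -- a 2-incident head start became near-total focus
```

District B's crime never stops; it simply stops being observed, which is precisely why "the data" cannot exonerate the system that produced it.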

Comparison: The Narrow Gap Between Innovation and Infringement

To understand what kind of AI is illegal, we must compare "permissible" data scraping with "illegal" mass surveillance. For instance, using a Large Language Model (LLM) to summarize public court records is generally fine, even if it’s transformative. However, using that same LLM to scrape private LinkedIn profiles to create a "reliability index" for insurance companies without the users' consent? That’s where you hit a wall. The difference isn't the code; it's the context of the data and the power dynamic between the user and the system. Experts disagree on exactly where the line should be drawn for "fair use" in training data, but The New York Times v. OpenAI lawsuit (filed in late 2023) suggests that the era of "free-for-all" data harvesting is ending.

The "Black Box" Problem vs. The Right to Explanation

In the past, you could hide behind the "it's too complex to explain" excuse. Not anymore. The General Data Protection Regulation (GDPR) and the new AI-specific laws have introduced the "Right to Explanation." If an AI rejects your loan application, the bank can't just say "the computer said no." They have to explain the principal parameters that led to that decision. But here is the irony: many of the most advanced neural networks are so complex that even their creators don't fully understand why a specific neuron fired. This creates a fascinating legal paradox. If an AI's decision-making process is fundamentally unexplainable, and the law requires an explanation, does that make the AI itself illegal? For certain high-stakes applications, the answer is increasingly "yes."
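For models that are transparent by construction, the "principal parameters" demand is easy to satisfy: a linear credit model decomposes its own score into per-feature contributions. The sketch below shows what such an explanation could look like; the feature names, weights, and applicant values are all hypothetical.

```python
# Minimal sketch of a "principal parameters" explanation for a linear
# (logistic-regression-style) credit score. Features, weights, and the
# applicant's values are hypothetical illustrations.

FEATURES = ["income", "debt_ratio", "missed_payments", "account_age_years"]
WEIGHTS  = {"income": 0.8, "debt_ratio": -1.5,
            "missed_payments": -2.0, "account_age_years": 0.4}

def explain_decision(applicant):
    """Rank features by how much each one moved the score."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in FEATURES}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

score, ranked = explain_decision(
    {"income": 1.2, "debt_ratio": 0.9, "missed_payments": 2, "account_age_years": 3})
print(f"score = {score:+.2f}")
for feature, contribution in ranked:
    print(f"  {feature:>18}: {contribution:+.2f}")
```

Here "the computer said no" becomes "primarily because of two missed payments." No comparable decomposition falls out of a deep network's weights, which is exactly the paradox the paragraph describes.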
