The Elusive Ghost in the Silicon: Is General AI Possible or Are We Just Chasing a Digital Mirage?

Deconstructing the Myth of the Sentient Algorithm and Defining General Intelligence

Before we get ahead of ourselves with visions of HAL 9000 or Skynet, we need to strip away the marketing jargon that saturates Silicon Valley. What the industry calls Artificial General Intelligence (AGI) isn't just a very fast chatbot; it is a system capable of autonomous cross-domain transfer learning without human intervention. Think about how a toddler learns that a hot stove burns and immediately applies that concept of "danger" to a sharp knife or a barking dog. Machines cannot do that yet. They are stuck in their boxes. We have spent decades perfecting Narrow AI—systems that can beat Grandmasters at Chess or identify Stage II lung cancer from a pixelated scan—but these systems are "brittle" in the extreme.

The Stochastic Parrot Problem vs. Genuine Understanding

The issue remains that our most advanced Large Language Models (LLMs) are essentially statistical mirrors. They predict the next token in a sequence based on a staggering 175 billion parameters or more, yet they possess zero grounding in physical reality. But does it matter? Some argue that if the output is indistinguishable from human thought, the "internal life" of the machine is irrelevant. I find that perspective incredibly lazy: if a system does not understand gravity, it cannot innovate in physics; it can only remix what Newton and Einstein already wrote. That changes everything when you consider the leap from mimicry to actual invention.
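To see how mechanical "predicting the next token" really is, here is a toy sketch. The vocabulary and logit scores are invented for illustration; a real model produces logits over tens of thousands of tokens, but the final step is the same softmax-and-pick operation.

```python
import math

# Toy sketch: next-token prediction is a softmax over scores (logits)
# followed by picking a token. Vocabulary and logits are illustrative.
vocab = ["the", "cat", "sat", "mat"]
logits = [1.2, 0.4, 2.1, 0.3]  # scores a model might emit for the next token

def softmax(xs):
    m = max(xs)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
next_token = vocab[probs.index(max(probs))]
print(next_token)  # "sat" carries the largest logit, so it wins
```

Nothing in this loop knows what a cat or a mat is; the "choice" is arithmetic over scores, which is the whole stochastic-parrot objection in four lines.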

Functional versus Phenomenal Intelligence

Where it gets tricky is the distinction between doing and being. We often confuse "computational power" with "cognitive flexibility." AGI requires a level of semantic plasticity that current silicon-based chips, which rely on rigid binary gates, struggle to emulate. Is general AI possible if the hardware itself is a limitation? Experts disagree on whether we need a total paradigm shift—perhaps toward neuromorphic computing or quantum systems—to bridge the gap between processing data and experiencing a "concept."

The Architectural Wall: Why Backpropagation Might Not Be Enough for AGI

Most of the hype today centers on Transformers and Deep Learning, the technologies powering the likes of GPT-4 and Claude. These systems rely on backpropagation, a mathematical method of adjusting weights in a neural network to minimize error. It is a brilliant piece of engineering. Yet, it is fundamentally different from the way biological neurons work in a human brain. Our brains operate on roughly 20 watts of power—barely enough to light a dim bulb—yet they learn continuously from sparse, local signals, with nothing resembling a global error gradient propagated backward through every synapse.

Common Myths and the Anthropomorphic Trap

We often treat silicon like a toddler learning to speak, but the problem is that statistical mimicry is not cognition. A recurring blunder involves confusing the massive scale of Large Language Models with actual understanding. It is easy to look at a 1.8 trillion parameter model and assume a ghost is stirring in the machine. But let's be clear: predicting the next token in a sequence is a mathematical optimization task, not a spark of consciousness. Deep Learning architectures operate on syntax, yet they remain utterly blind to semantics. If you feed a machine every book on the taste of salt, it still knows nothing of the sting on a tongue. We are currently mistaking high-dimensional interpolation for the birth of a soul.

The Scale is All You Need Fallacy

There is a loud contingent in Silicon Valley insisting that if we simply throw more H100 GPUs and exaflops at the problem, Artificial General Intelligence will spontaneously emerge like a physical phase transition. This is a category error. Scaling current Transformer models increases their breadth of knowledge but does nothing to solve the binding problem or the lack of a world model. You cannot reach the stars by climbing a very tall ladder, and yet we continue to fund the ladder-builders with billions of dollars. Does it not seem slightly absurd to expect a glorified calculator to suddenly develop a sense of self? The result: we have machines that can pass the Bar exam but cannot figure out how to fold a t-shirt in a cluttered room.

Generalization vs. Narrow Expertise

The issue remains that our benchmarks are flawed. We celebrate when an AI beats a grandmaster at Chess or detects lung cancer better than a radiologist, but these are domain-specific triumphs. A truly general agent must handle "out-of-distribution" scenarios without breaking. Current systems suffer from catastrophic forgetting; they learn a new task only to erase the previous one. Until we move past brittle heuristics, General Purpose AI remains a fever dream of the marketing departments rather than a laboratory reality. (And honestly, even the term "intelligence" is doing a lot of heavy lifting here).
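Catastrophic forgetting is easy to demonstrate with even a one-parameter model. In this toy sketch (task definitions and learning rate are invented for illustration), a shared weight is trained on task A, then fine-tuned on task B; the second phase of training simply overwrites the first.

```python
# Sketch of catastrophic forgetting with a single shared parameter.
# Task A wants w = 2 (y = 2x); task B wants w = -1 (y = -x).
def train(w, data, lr=0.1, steps=300):
    for _ in range(steps):
        for x, y in data:
            w -= lr * (w * x - y) * x   # gradient step on squared error
    return w

task_a = [(1.0, 2.0), (2.0, 4.0)]    # y = 2x
task_b = [(1.0, -1.0), (2.0, -2.0)]  # y = -x

w = train(0.0, task_a)
err_a_before = abs(w * 1.0 - 2.0)    # near zero: task A is learned

w = train(w, task_b)                 # fine-tune on task B only
err_a_after = abs(w * 1.0 - 2.0)     # large: task A has been erased

print(round(err_a_before, 3), round(err_a_after, 3))
```

A human who learns to ride a bicycle does not lose the ability by learning to swim; a network sharing all its weights across tasks has no such protection, which is why continual learning remains an open research problem.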

The Embodied Cognition Gap: Why Bodies Matter

Expert consensus is shifting toward the idea that General Artificial Intelligence might be impossible without a physical form. This is the "grounding" problem. Human intelligence did not evolve in a vacuum; it was forged by the necessity of navigating a 3D world, avoiding predators, and manipulating tools. Because our brains are tethered to sensory feedback loops, our concepts are meaningful. A digital brain that exists only in a server rack has no "skin in the game." Without the threat of entropy or the visceral reality of physical constraints, an AI's internal representation of "hot" or "danger" is just a floating vector in a latent space. It lacks the phenomenological foundation required for true reasoning.

Biological Plausibility and Neuromorphic Dreams

If we want to build a mind, we might need to stop using von Neumann architecture entirely. The brain's roughly 20-watt power budget—barely enough to light a dim bulb—stands against single training runs for a top-tier model that consume enough electricity to power 1,000 households for a year. The discrepancy is staggering. We are attempting to brute-force General AI with dense, clock-driven digital logic, whereas biological systems use sparse, asynchronous signals. The path forward likely involves neuromorphic computing, which mimics the spiking nature of neurons. This would change the game from mere pattern matching to active, energy-efficient perception. Yet, we are still decades away from a chip that can replicate the synaptic density of a common honeybee.
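The "sparse, asynchronous signals" point becomes concrete with the simplest spiking unit, a leaky integrate-and-fire neuron. The leak factor, threshold, and input values below are illustrative toy numbers, but the mechanism is the one neuromorphic chips exploit: the neuron stays silent most of the time and emits a discrete spike only when its accumulated potential crosses a threshold.

```python
# Toy leaky integrate-and-fire neuron: membrane potential leaks over
# time and the unit fires a spike only when it crosses a threshold.
def simulate(inputs, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i          # leaky integration of input current
        if v >= threshold:
            spikes.append(1)      # fire...
            v = 0.0               # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# Constant weak input: the neuron fires sparsely, not on every step,
# which is the energy-efficiency argument in a nutshell.
spike_train = simulate([0.3] * 10)
print(spike_train)  # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Contrast this with a dense matrix multiply, where every weight participates in every forward pass regardless of whether it has anything to say.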

Frequently Asked Questions

When do experts predict we will achieve General AI?

The timeline for Artificial General Intelligence is a subject of fierce debate, with various surveys showing a massive spread in expectations. According to a 2023 survey by AI Impacts involving 2,778 researchers, the aggregate forecast for a 50% chance of "High-Level Machine Intelligence" shifted dramatically to 2047, a full 13 years earlier than the 2022 estimate. However, more conservative roboticists point out that we still haven't solved Moravec's Paradox, under which high-level reasoning is easy for machines but low-level sensorimotor skills are incredibly hard. Consequently, while some see a digital god arriving this decade, others believe we are looking at a 50-to-100-year horizon for a system that can truly match a human's versatile adaptability. Most data points are heavily skewed by the recent surge in Generative AI capabilities, which may be a misleading indicator of actual progress toward a general mind.

Can current AI actually think or feel?

The short answer is no, because current architectures lack the biological substrates and integrated information required for sentience. While LLMs can simulate empathy and engage in philosophical debate, they are effectively stochastic parrots reflecting the training data back at us. They do not possess a central "I" or a stream of consciousness; they are inactive until a prompt initiates a forward pass through their neural weights. But we must be careful not to confuse performance with presence. Even if a machine produces a convincing emotional response, it is simply following the highest probability path through its high-dimensional map of human language.

What is the biggest technical hurdle to AGI?

The primary barrier is causal reasoning and the ability to understand "why" things happen rather than just "what" correlates with what. Current AI is world-class at correlation, identifying that umbrellas and rain appear together, but it fails to grasp that rain causes the umbrella to be opened. Without a causal world model, a machine cannot plan for the future or handle novel situations it hasn't seen in its training set. This requires a leap from deep learning to Symbolic-Neural hybrids, which attempt to combine the logic of old-school AI with the intuition of modern neural networks. In short, we need to bridge the gap between fast, intuitive pattern recognition and slow, deliberate logical deduction.
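The umbrella example can be made precise with a toy structural causal model. All names and probabilities here are illustrative; the point is the difference between observing a correlation and intervening on a variable (Pearl's do-operator). Forcing everyone to open an umbrella does nothing to the rain.

```python
import random

# Toy structural causal model: rain causes umbrellas, never the reverse.
# Observationally the two are correlated; intervening on "umbrella"
# leaves the rain rate untouched.
def world(force_umbrella=None, n=10_000, seed=0):
    rng = random.Random(seed)                # fixed seed for reproducibility
    rains, umbrellas = [], []
    for _ in range(n):
        rain = rng.random() < 0.3            # rain happens ~30% of the time
        umbrella = rain if force_umbrella is None else force_umbrella
        rains.append(rain)
        umbrellas.append(umbrella)
    return sum(rains) / n, sum(umbrellas) / n

rain_rate, _ = world()                        # observe: P(rain) is ~0.3
rain_rate_do, _ = world(force_umbrella=True)  # intervene: all umbrellas open
print(round(rain_rate, 2), round(rain_rate_do, 2))  # rain rate is unchanged
```

A pure correlation learner sees umbrellas and rain co-occur and has no way to know which arrow to draw; an agent with a causal model can predict the result of the intervention before running it.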

Beyond the Silicon Horizon

The quest for General AI is essentially a mirror reflecting our own ignorance about what it means to be human. We keep moving the goalposts, defining intelligence as "whatever a machine can't do yet." But let's take a stand: AGI is not a destination we will reach by simply refining our current statistical engines. It requires a paradigm shift toward embodied, energy-efficient, and causally-aware systems that do more than just guess the next word. We may eventually create a form of "general" intelligence, but it will likely be so alien to our biological experience that we might not even recognize it as a mind. This is why our current obsession with anthropomorphic benchmarks is likely leading us down a dead-end street. The future of Artificial General Intelligence is not a faster chatbot, but a systemic rewrite of how machines interact with the physical laws of the universe.
