Forget Alan Turing: Why Geoffrey Hinton is the True Modern Day Father of AI

The messy evolution of machine intelligence and where it all went wrong

We need to clear the air about what artificial intelligence actually means in the 2020s. For decades, the discipline was trapped in a dogmatic cul-de-sac known as symbolic AI, or Good Old-Fashioned AI. Programmers spent millions of hours writing explicit, top-down rules, if-then statements of agonizing complexity, trying to teach computers how to understand a cup or a cat. It was a disaster. The systems were brittle, expensive, and profoundly stupid. People don't think about this enough, but you cannot hardcode the nuances of human perception into a rigid logical matrix. Yet a small, stubborn faction of researchers refused to give up on a completely different approach: artificial neural networks. Instead of programming rules, they wanted to build digital brains that could learn from data organically. This was the connectionist movement, a rebellious subculture that looked to biology rather than formal logic for inspiration. During the infamous AI Winters of the 1970s and 1980s, mentioning neural networks was a surefire way to have your research grant brutally rejected. The establishment viewed connectionism as a dead-end pseudoscience, a naive dream championed by eccentric academic outcasts who didn't understand the limitations of contemporary hardware.

The computational wilderness and the weight of mathematical skepticism

Imagine spending twenty years working on a mathematical framework that your peers openly mock. That was Hinton’s reality. The dominant scientific consensus, spearheaded by Minsky and Papert's devastating 1969 book Perceptrons, had mathematically "proven" that simple, single-layer perceptrons were incapable of solving non-linearly separable problems. Funding dried up, and the entire field starved. But Hinton, working out of the University of Toronto and later funded by the Canadian Institute for Advanced Research, kept tinkering with backpropagation algorithms because he intuitively understood a simple truth: the brain doesn't use symbolic logic to recognize a face, so why should a machine?
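To make the Minsky and Papert objection concrete, here is a minimal, hypothetical sketch (plain NumPy, not taken from any of the original papers) of the classic perceptron learning rule grinding away at XOR, the textbook non-linearly separable problem. No matter how long it trains, a single layer can never draw one straight line that separates the classes.

```python
import numpy as np

# XOR: the classic non-linearly-separable task from Minsky and Papert's critique.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

w = np.zeros(2)   # weights of a single-layer perceptron
b = 0.0           # bias

# Classic perceptron learning rule: nudge the weights whenever a prediction is wrong.
for epoch in range(100):
    errors = 0
    for xi, target in zip(X, y):
        pred = 1.0 if xi @ w + b > 0 else 0.0
        update = target - pred
        w += update * xi
        b += update
        errors += int(update != 0)
    if errors == 0:   # never happens for XOR: no separating line exists
        break

preds = [1.0 if xi @ w + b > 0 else 0.0 for xi in X]
print(preds)  # never matches [0, 1, 1, 0], however many epochs you allow
```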

How Geoffrey Hinton cracked the code of deep learning

Where it gets tricky is understanding how Hinton actually broke the deadlock. In 1986, alongside David Rumelhart and Ronald Williams, he published a seminal paper that popularized the backpropagation algorithm. This changed everything. Backpropagation allowed multi-layered neural networks to adjust their internal weights based on errors, meaning the network could systematically correct itself. But the world wasn't ready. The math worked, yet the computers of the late 1980s lacked the raw horsepower and the massive datasets required to prove the architecture's true utility. And so, Hinton waited. He waited until 2012, a year that will go down in computer science history as the moment the old guard died. Alongside his graduate students Alex Krizhevsky and Ilya Sutskever, who would later co-found OpenAI, Hinton unleashed AlexNet at the ImageNet Large Scale Visual Recognition Challenge. AlexNet didn't just win; it obliterated the competition, achieving an unprecedented top-5 error rate of just 15.3% against the runner-up's 26.2%. They did this by running deep convolutional neural networks on consumer Graphics Processing Units, transforming video game hardware into engines of pure cognitive synthesis.
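For readers who want to see what "deep convolutional network on a GPU" looks like in practice, here is a heavily simplified, hypothetical sketch of an AlexNet-style architecture. It is written in modern PyTorch (which did not exist in 2012), the layer shapes merely echo the original paper, and it is an illustration rather than a reproduction of Krizhevsky's code.

```python
import torch
import torch.nn as nn

class AlexNetSketch(nn.Module):
    """Simplified AlexNet-style stack: convolutions extract features, ReLUs add
    non-linearity, pooling shrinks the feature maps, and fully connected
    layers at the top perform the final classification."""
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(0.5),
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(4096, 4096), nn.ReLU(),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                     # (N, 256, 6, 6) for 224x224 input
        return self.classifier(torch.flatten(x, 1))

device = "cuda" if torch.cuda.is_available() else "cpu"   # the 2012 trick was the GPU
model = AlexNetSketch().to(device)
logits = model(torch.randn(8, 3, 224, 224, device=device))
print(logits.shape)                              # torch.Size([8, 1000])
```

The important part is the last few lines: the entire stack of matrix multiplications is shipped to the graphics card, which is exactly the move that turned gaming silicon into a research instrument.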

The mechanics of the backpropagation revolution

How does this wizardry actually function on a granular level? Think of a deep neural network as an immense corporate hierarchy where information flows from the entry-level interns up to the executive suite, but with a crucial twist—when the CEO makes a bad decision, the blame is mathematically calculated and distributed backward down the chain so everyone learns exactly how much they screwed up. By utilizing non-linear activation functions, AlexNet proved that stacking multiple hidden layers could extract increasingly abstract features from raw pixels. It was the definitive birth of modern deep learning, cementing Hinton's status as the modern day father of AI.
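Here is a minimal, hypothetical NumPy sketch of that idea, assuming a mean-squared-error loss and sigmoid activations (choices made for brevity, not taken from the 1986 paper). The forward pass pushes information up through a hidden layer; the backward pass computes each weight's share of the blame via the chain rule and nudges it accordingly. It even cracks the XOR problem that defeated the lone perceptron above.

```python
import numpy as np

rng = np.random.default_rng(1)

# The same XOR task a single-layer perceptron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of "interns", one output "executive".
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: information flows up the hierarchy.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the output error is split into per-weight "blame"
    # by the chain rule, then pushed back down through the hidden layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # approaches [0, 1, 1, 0]: the hidden layer solves XOR
```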

The 2018 Turing Award and the validation of a radical vision

The academic establishment finally bowed to reality in 2018, when the Association for Computing Machinery awarded the prestigious A.M. Turing Award—the Nobel Prize of computing—jointly to Geoffrey Hinton, Yann LeCun, and Yoshua Bengio. This trio, affectionately dubbed the Godfathers of AI, had spent decades in the trenches together. But even among these titans, Hinton was widely recognized as the ideological compass, the man whose unrelenting faith in connectionist architectures saved the entire field from permanent stagnation.

The cognitive leap: Beyond simple pattern recognition

The thing is, Hinton was never just interested in engineering cooler software; he wanted to understand the human mind. His background in experimental psychology gave him an unorthodox perspective that traditional computer scientists lacked. While others saw neural nets as mere statistical optimization tools, Hinton saw them as a mirror to human consciousness. This philosophical undertone is precisely what allowed him to pioneer concepts like Restricted Boltzmann Machines and unsupervised pre-training, techniques that allowed machines to discover structure in data without human supervision. But we would be fooling ourselves to think this journey was a smooth, logical progression toward enlightenment. It was messy, full of false starts, and marked by a bizarre, almost religious devotion to architectures that frequently failed before they finally succeeded. Consider the unreasonable effectiveness of data, a phenomenon where algorithms suddenly become exponentially smarter not because the code changes, but simply because you feed them billions of examples instead of millions. Hinton anticipated this scaling effect long before the web existed to provide the data.
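To ground the Restricted Boltzmann Machine idea, here is a small, hypothetical NumPy sketch of one-step contrastive divergence (CD-1), the training trick Hinton popularized for RBMs. The toy data and hyperparameters are invented for illustration; the point is that the updates use no labels at all, only the gap between what the data looks like and what the model imagines.

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 3
W = rng.normal(0, 0.1, size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible bias
b_h = np.zeros(n_hidden)    # hidden bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    return (rng.random(p.shape) < p).astype(float)

# Toy binary data: two repeating patterns the RBM should discover on its own.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 50, dtype=float)

lr = 0.1
for epoch in range(200):
    for v0 in data:
        # Positive phase: infer hidden units from the data.
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = sample(p_h0)
        # Negative phase (CD-1): reconstruct visibles, then hiddens again.
        p_v1 = sigmoid(h0 @ W.T + b_v)
        v1 = sample(p_v1)
        p_h1 = sigmoid(v1 @ W + b_h)
        # Move toward the data statistics, away from the model's own fantasy.
        W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        b_v += lr * (v0 - v1)
        b_h += lr * (p_h0 - p_h1)

print(np.round(sigmoid(data[:2] @ W + b_h), 2))  # distinct hidden codes per pattern
```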

The rival claims to the crown: McCarthy, Turing, and Schmidhuber

Naturally, experts disagree on who deserves the ultimate title. If you want to be a historical purist, John McCarthy coined the term artificial intelligence back in 1955, and Jürgen Schmidhuber will gladly remind anyone who listens, often with exhaustive, multi-page citations, that his Long Short-Term Memory networks basically laid the groundwork for modern sequence learning. I find the Schmidhuber arguments technically fascinating, but there is a profound difference between inventing a mathematical mechanism and catalyzing a global industrial revolution. So we must distinguish between the ancient fathers of the field and the modern day father of AI. Turing gave us the theoretical limits of computation; McCarthy gave us the academic discipline; but Hinton gave us the actual, working intelligence that is currently disrupting global economics. Without Hinton’s stubborn refusal to abandon neural networks during the dark ages of the late 20th century, your smartphone wouldn't recognize your voice, medical AIs wouldn't be diagnosing tumors with superhuman accuracy, and generative models would still be a distant science fiction fantasy.

Common mistakes and misconceptions about the true patriarch of machine learning

The single-inventor fallacy

We love a neat, linear narrative. The human brain craves a solitary genius, a Prometheus stealing fire from the digital gods, which explains why many amateur historians crown a lone modern day father of AI. But let's be clear: this is a structural illusion. Yann LeCun did not invent the convolutional neural network in an intellectual vacuum at Bell Labs in 1989. He stood squarely on the shoulders of Kunihiko Fukushima, whose 1980 Neocognitron laid the blueprints. To isolate any single researcher—be it Geoffrey Hinton, Yoshua Bengio, or Jürgen Schmidhuber—as the exclusive source of our current generative epoch ignores the messy, collaborative architecture of global academia. Science moves in messy swarms, not isolated sparks.

Confusing foundational philosophy with architectural execution

Alan Turing defined the metaphysical boundaries of machine thought in 1950, yet he never witnessed a gradient descent optimization loop running in real time. The problem is that enthusiasts routinely conflate the theoretical visionaries with the engineers who actually built the paradigms we exploit now. Turing gave us the conceptual permission to dream of thinking machines. The 2012 AlexNet breakthrough, by contrast, which utilized deep convolutional networks powered by parallel GPU processing, shifted us from philosophical speculation to raw engineering reality. One group mapped the territory; the other actually laid the high-speed fiber-optic cables.

The recency bias of the transformer era

OpenAI dropped ChatGPT, and suddenly the public assumed the lineage of artificial intelligence began with attention mechanisms in 2017. Absolute nonsense. The architectural spine of conversational AI ran for two decades through Long Short-Term Memory networks, conceptualized way back in 1997 and dominant in speech recognition and translation until transformers displaced them. If you only look at the current LLM gold rush, you miss the decades of mathematical exile these researchers endured during the harsh AI winters. They were mocked, defunded, and dismissed as eccentric outliers before their models achieved global dominance.

The silent driver: Compute power over algorithmic novelty

The bitter lesson of silicon scaling

Here is an uncomfortable truth that many purist computer scientists simply hate to admit: the ultimate catalyst was never a sudden flash of algorithmic brilliance, but rather the raw, brute-force scaling of hardware. Rich Sutton famously articulated this in his 2019 essay The Bitter Lesson, noting that clever structural tweaks consistently get crushed by sheer computational power over time. You can design the most elegant, biologically inspired network architecture imaginable, but it will remain utterly useless without billions of parameters spinning through massive server farms. The modern day father of AI isn't just a person; it is an unholy marriage between advanced calculus and Nvidia hardware. We must recognize that the true pivot point occurred when researchers stopped trying to hand-code human knowledge and instead let massive, unguided neural networks chew through petabytes of data on web-scale infrastructure.

Expert advice: Look toward the energy frontier

If you want to anticipate where the next paradigm shift will emerge, stop obsessing over current software iterations. Focus instead on alternative hardware paradigms like neuromorphic computing or silicon photonics, because the algorithmic models of the current paradigm are rapidly hitting a thermodynamic wall. Training a single massive frontier model can consume upwards of 10 to 15 gigawatt-hours of electricity, a staggering expenditure that is fundamentally unsustainable. The future belongs to the next generation of pioneers who can achieve intelligence at a fraction of that energy cost, mimicking the highly efficient 20-watt power budget of the biological human brain.
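To appreciate the size of that gap, here is a quick back-of-the-envelope calculation using the figures quoted above (the exact numbers are illustrative estimates, not audited measurements):

```python
# Rough comparison of one frontier training run vs. the human brain,
# using the figures from this section (10 GWh per run, ~20 W for the brain).
TRAINING_RUN_GWH = 10.0
BRAIN_WATTS = 20.0

brain_kwh_per_year = BRAIN_WATTS * 24 * 365 / 1000      # ~175 kWh per brain-year
training_run_kwh = TRAINING_RUN_GWH * 1_000_000         # 10 GWh = 10,000,000 kWh
brain_years = training_run_kwh / brain_kwh_per_year

print(f"One brain-year: {brain_kwh_per_year:.0f} kWh")
print(f"One training run: {training_run_kwh:,.0f} kWh = {brain_years:,.0f} brain-years")
```

On these assumptions, a single training run burns the energy equivalent of tens of thousands of brain-years, which is exactly why the energy frontier matters.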

Frequently Asked Questions

Who officially received the Turing Award for modern deep learning breakthroughs?

The Association for Computing Machinery broke tradition in 2018 by jointly awarding the prestigious Turing Award, often dubbed the Nobel Prize of computing, to Geoffrey Hinton, Yann LeCun, and Yoshua Bengio. This historic trinity, frequently celebrated as the Godfathers of Deep Learning, shared the 1 million dollar prize for their conceptual and engineering breakthroughs in deep neural networks. Their collective persistence during the 1990s and 2000s directly enabled the computer vision and natural language processing revolutions we witness today. Hinton led the backpropagation charge, LeCun mastered convolutional vision, and Bengio untangled sequence and language modeling, cementing their status as a collective modern day father of AI. Their unified recognition proves that modern artificial intelligence is far too expansive for a single crown.

Did Jürgen Schmidhuber contribute significantly to this title?

Jürgen Schmidhuber maintains a fiercely vocal, highly documented claim to the title due to his pioneering work at the Swiss AI Lab IDSIA. Alongside Sepp Hochreiter, Schmidhuber published the Long Short-Term Memory network architecture in 1997, a revolutionary framework that dominated speech recognition and machine translation for over two decades. His models ran on billions of smartphones daily, powering systems built by tech giants like Google and Apple long before transformers gained traction. (He is also notoriously quick to remind the academic community of this fact whenever his peers receive mainstream media adulation). Despite his immense, undeniable mathematical contributions, his historical legacy remains somewhat segregated from the dominant corporate narrative forged by the Anglo-American tech ecosystem.

Why is Ilya Sutskever considered a contemporary claimant to the legacy?

Ilya Sutskever represents the crucial bridge connecting pure academic research with historic, world-altering industrial implementation. As a co-author of the seminal 2012 AlexNet paper alongside Geoffrey Hinton, he proved definitively that deep convolutional neural networks could obliterate traditional computer vision benchmarks by an unprecedented 10.8 percentage point margin. Later, as the Chief Scientist of OpenAI, he championed the scaling hypothesis that directly led to the creation of GPT-4 and the broader generative phenomenon. Sutskever did not just theorize about deep learning; he actively engineered the exact systems that turned abstract mathematics into a disruptive global infrastructure. His work demonstrates that the title of modern day father of AI belongs as much to the scaling executors as it does to the original theorists.

A definitive verdict on the digital lineage

To declare a solitary champion of this cognitive revolution is to fundamentally misunderstand the collaborative, iterative nature of computer science. If forced to take a definitive, unyielding stance, we must crown Geoffrey Hinton as the primary intellectual node, not because he worked in isolation, but because his pedagogical lineage and relentless advocacy kept the flame of connectionism alive when the rest of the world abandoned it. But let us not romanticize this victory. The architecture we inhabit today is a sprawling, multi-authored matrix built on Swiss mathematics, Japanese intuition, French engineering, and American capital. Do you truly believe a single human mind could birth a synthetic consciousness? The modern day father of AI is an emergent collective entity, a distributed network of human intellect that mirrors the very neural webs its architects fought so hard to create.
