
The Silicon Schism: Why Elon Musk Really Soured on OpenAI and the Future of AGI


The Genesis of a Grudge: From Altruism to a Corporate Fortress

The Non-Profit Mirage of 2015

In the beginning, specifically back in December 2015, the vibe was almost utopian. Musk, along with Sam Altman, Greg Brockman, and several others, threw $1 billion in pledged funding at a new entity. The goal? Creating a counterweight to Google’s dominance in the AI space. At the time, Google had just swallowed DeepMind, and the fear was that the most powerful technology in human history would be locked behind a search engine’s proprietary walls. We were told this would be an open playground for researchers. Musk’s original $45 million contribution was supposed to buy a future where "Open" actually meant open-source. But things didn't stay idyllic for long. Because, as it turns out, training Large Language Models is an expensive habit that eats through cash faster than a rocket launch sequence.

The 2018 Departure and the Power Vacuum

Musk walked away from the board in 2018. The official story? A potential conflict of interest with Tesla’s Autopilot development. Behind the scenes, though, there had been a failed takeover attempt: Musk allegedly offered to run OpenAI himself because he thought it was falling behind Google, and when the board said no, he took his ball and went home. People don't think about this enough, but that rejection likely bruised an ego accustomed to being the smartest guy in the room. By 2019, OpenAI underwent a radical transformation, creating a "capped-profit" subsidiary that allowed it to take massive investments. But you can’t claim to be a charity while building a multi-billion-dollar product funnel for Redmond, Washington. Honestly, it’s unclear whether Musk is more upset about the safety risks or the fact that he isn't the one steering the ship toward Artificial General Intelligence (AGI).

The Microsoft Marriage and the Death of "Open" Source

The Multi-Billion Dollar Betrayal

When Microsoft injected its first $1 billion investment in 2019—a figure that has since ballooned to a rumored $13 billion total—the original OpenAI died. At least, that is the version of the story Musk tells in his recent lawsuits. Think about it: a company founded to be the literal opposite of a closed corporate monopoly is now essentially the research and development wing of the Windows empire. Which explains why Musk is so litigious lately. He views the release of GPT-4 not as a triumph for mankind, but as a proprietary secret that serves Microsoft’s bottom line first and foremost. We are far from the days of GitHub repos and shared weights. Today, if you want the "secret sauce," you pay for an API key or a Plus subscription.

Closed Models vs. The Cathedral and the Bazaar

The technical shift from open source to closed source is one of the biggest pivots in the history of Silicon Valley. Early models like GPT-2 were released with warnings, sure, but their weights were eventually made accessible. GPT-4? That’s a black box. And this is where the nuance gets interesting. While Musk screams about transparency, his own venture, xAI, only open-sourced the weights of Grok-1 after facing immense public pressure to prove he wasn't a hypocrite. Is it a genuine ideological stance or just a clever marketing tactic for his own AI startup? Experts disagree on his motives, but the reality is that the 175-billion-parameter GPT-3 marked the end of the "sharing is caring" era of high-level machine learning. That changes everything for researchers, who can no longer audit the bias or safety protocols of the models they rely on.

Waging War on "Woke" Algorithms and Safety Goggles

The Crusade Against Correctness

Musk often complains that OpenAI is training its models to be "woke"—a term he uses to describe what he sees as a political bias ingrained in the RLHF (Reinforcement Learning from Human Feedback) process. He argues that by forcing an AI to be polite or socially conscious, the developers are essentially teaching it to lie. And that, in his view, is the first step toward a catastrophic failure of alignment. But isn't it also possible that he just wants a chatbot that reflects his own "anti-woke" sensibilities? He envisions a "Maximum Truth-Seeking AI" that doesn't care about your feelings, which is a stark contrast to the safety-guarded, sanitized experience you get with ChatGPT. It’s a battle over the soul of the machine’s personality. One side wants a safe, corporate assistant; the other wants a digital frontier scout that says the quiet parts out loud.
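For readers unfamiliar with the mechanism Musk is criticizing: the reward-modeling stage of RLHF typically trains a scoring model on pairs of responses that human labelers have ranked, nudging the model to score the preferred answer higher. Here is a minimal sketch of the standard pairwise (Bradley-Terry) loss; the function name and the scalar reward values are illustrative, not taken from any particular implementation.

```python
import math

def reward_model_loss(reward_chosen, reward_rejected):
    """Pairwise loss used in RLHF reward modeling.

    The loss is small when the human-preferred ("chosen") response
    already scores higher than the "rejected" one, and large when the
    model ranks them the wrong way around.
    """
    diff = reward_chosen - reward_rejected
    # -log(sigmoid(diff)), written to avoid a separate sigmoid helper
    return math.log(1.0 + math.exp(-diff))

# Preferred answer scores higher: small loss.
low = reward_model_loss(2.0, -1.0)
# Rankings inverted: large loss, pushing the model toward the labelers' ranking.
high = reward_model_loss(-1.0, 2.0)
```

Whatever one thinks of Musk's "woke" framing, this is the lever he is pointing at: whoever supplies the preference rankings decides which answers the model learns to favor.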

Existential Risk as a Legal Strategy

In his 2024 lawsuit filed in San Francisco, Musk’s lawyers leaned heavily on the idea that AGI has already been reached. Why does this matter? Because the contract with Microsoft reportedly excludes AGI-level technology. If Musk can prove that GPT-4 is a "proto-AGI," he can theoretically blow up the entire Microsoft deal. It is a bold, perhaps desperate, legal maneuver—but it highlights his obsession with the singularity. He truly believes we are summoning a demon. Except that he's also building a humanoid robot named Optimus at Tesla that will eventually run on similar neural networks. The irony is thicker than a rocket's heat shield. You have to wonder: if the tech is so dangerous, why is the solution always just "let Elon build it instead"?

The Pivot to xAI: Competition Under the Guise of Protection

The Birth of Grok and the Compute War

While trashing Sam Altman on X (formerly Twitter), Musk was busy poaching engineers from DeepMind and Tesla to form xAI. This new entity, incorporated in Nevada, is his direct answer to the "OpenAI lie." By leveraging the real-time data stream of the X platform, he’s trying to build a model that understands the world as it happens rather than relying on stale datasets. The result is vertical integration: his social media company feeds his AI company, which in turn powers his car company. It’s a massive play for compute dominance, especially considering his reported purchase of 10,000 Nvidia H100 GPUs when everyone else was struggling to find even a dozen. He isn't just complaining about the game; he’s trying to buy the stadium and change the rules.

How xAI Differs from the GPT Paradigm

Musk’s approach with xAI is supposedly more "truth-oriented," but let's be real—it’s also deeply integrated with his personal brand. Grok is designed to have a "rebellious streak" and answer the "spicy" questions that ChatGPT would normally dodge with a canned response about inclusivity. This represents a divergence in AI philosophy. On one side, you have the institutional safety of OpenAI, which treats the model like a high-risk medical device. On the other, Musk treats it like a First Amendment tool. But the issue remains that training a model on Twitter data might just give us an AI that is really good at arguing and posting memes, rather than solving cold fusion. Still, it provides a necessary alternative to the duopoly of Google and Microsoft, even if the alternative comes with its own set of billionaire-flavored biases.

Common Misconceptions Regarding the Musk-OpenAI Rift

The prevailing narrative often paints this friction as a simple case of founder's remorse or petty jealousy. It is easy to assume Elon Musk hates the entity because it succeeded without him. Except that the problem is far more structural than a bruised ego. Many observers wrongly claim Musk walked away in 2018 solely because he wanted to merge OpenAI with Tesla. While a merger proposal existed, the core fracture occurred because the organization transitioned from a non-profit research lab to a capped-profit vehicle. We must distinguish between professional rejection and ideological betrayal. Musk argues the original Founding Agreement was a binding pact to prevent AGI from becoming a proprietary weapon.

The Myth of the Failed Takeover

Critics frequently cite Musk’s 2018 offer to run the company as evidence of his desire for absolute control. Yet his primary concern was that OpenAI was falling hopelessly behind Google’s DeepMind. At the time, Google’s compute resources were massive, and Musk believed even a $1 billion funding commitment from him would be insufficient to compete. He did not seek a crown; he sought a defensive wall against a perceived monopoly. When the board rejected his leadership bid, he withdrew his funding, which eventually pushed Sam Altman toward Microsoft and what would grow into $13 billion in investment. Because Musk saw this as selling the soul of the mission, the relationship turned radioactive.

Is It Just About the Money?

There is a persistent belief that Musk is simply litigious because he missed out on the financial windfall. Let’s be clear: Musk donated approximately $44.8 million to OpenAI between 2016 and 2020. If he wanted profit, he would have structured his early contributions as equity rather than philanthropic gifts. The issue remains that a non-profit becoming a $157 billion commercial titan sets a dangerous precedent for tax-exempt innovation. (And honestly, who wouldn't be a bit annoyed if their charity donation turned into a competitor’s profit margin?) He views the current structure as a de facto subsidiary of Microsoft, which contradicts the transparency he originally championed.

The Overlooked Dimension of xAI and the Data Wars

Beyond the philosophical shouting matches, a more pragmatic war is being fought over the digital ore that fuels LLMs. Musk’s distaste for OpenAI intensified when he realized it had been using Twitter (now X) data to train its models for free. In 2023, he restricted API access to prevent what he termed data scraping by "woke" AI entities. This was not just a business move; it was a strategic decapitation. He believes OpenAI is lobotomizing its models to be politically correct, which he fears will lead to an AI that lies to satisfy its programming. So why does Elon Musk dislike OpenAI? Part of the answer lies in the divergence between his "truth-seeking" xAI and Altman’s "safety-aligned" ChatGPT.

The expert's perspective on the closed-source pivot

The most biting irony is that the "Open" in OpenAI is now purely vestigial. Experts note that GPT-4’s architecture remains a black box, a far cry from the 2015 promise to share research openly. Musk’s move to open-source the weights of Grok-1, a 314-billion-parameter model, was a direct jab at the secrecy of his former colleagues. Critics suggest his animosity is a calculated performance to position himself as the last bastion of open-access intelligence. Whether this is genuine altruism or a marketing masterstroke for xAI is a question that divides the industry. We can admit our limits in knowing his heart, but we cannot ignore the legal complaint he filed in California as evidence of his commitment to this feud.

Frequently Asked Questions

What was the specific catalyst for the 2024 lawsuit?

The lawsuit centered on the claim that OpenAI breached its charitable mission by keeping its most advanced technology, specifically GPT-4, a secret. Musk’s legal team argued that the company’s reported valuation, approaching $100 billion, was built on the backs of early non-profit donors. The filing alleges that the current board is no longer independent but is instead beholden to Microsoft’s commercial interests; Microsoft is reportedly entitled to 49 percent of the for-profit arm’s profits, an arrangement Musk calls a "closed-source" betrayal. As a result, the courts must now decide whether a non-profit mission statement is a legally enforceable contract.

How does the rivalry affect the speed of AI development?

The competition has sparked a bilateral arms race that has accelerated release cycles across the entire Silicon Valley ecosystem. While OpenAI focuses on iterative refinement and multi-modal capabilities, Musk’s xAI leverages the real-time data stream of the X platform. This pressure forced OpenAI to release features like Voice Mode and search integrations faster than originally planned to maintain market dominance. The issue remains that this "move fast" mentality might bypass the very safety guardrails both parties claim to prioritize. In short, their mutual loathing is the engine of the current AI boom.

Will Musk and Sam Altman ever reconcile?

A reconciliation seems statistically improbable given the personal nature of their public exchanges on social media. Musk has referred to the CEO as a "megalomaniac," while Altman has described Musk as a "jerk" who is nonetheless doing important work. The divergent business models of their respective companies create a zero-sum game for talent and compute power. Furthermore, Musk’s launch of Colossus, a massive 100,000 H100 GPU cluster, signals his intent to out-build OpenAI rather than rejoin them. The bridge has not just been burned; the ashes have been scattered into the vacuum of space.

The Verdict: An Existential Divorce

The friction between Elon Musk and OpenAI is not a mere footnote in tech history; it is the defining schism of the silicon age. We are witnessing a clash between proprietary safety and open-source accelerationism. Musk’s refusal to accept the for-profit pivot is a desperate, perhaps flawed, attempt to hold the industry to its primordial promises. It is easy to dismiss his critiques as noise, yet his warnings about the monopolization of intelligence resonate with anyone wary of corporate hegemony. Ultimately, OpenAI has chosen the path of the cathedral, while Musk is attempting to build a competing, more chaotic bazaar. The tragedy is that their combined genius was once the world's best hope for aligned AGI. Now, that unity is dead, and we are all forced to choose a side in their high-stakes divorce.
