
The Great Erosion: Why Is Perplexity Bad Now and How the AI Search Darling Lost Its Way

You remember the first time you used it. It felt like magic. Instead of wading through twelve pages of affiliate links for the best toaster, you got a synthesized paragraph that actually made sense. But lately, something has shifted. I find myself fact-checking the "facts" provided by the Pro models more often than I care to admit. The platform is suffering from systemic bloat that is turning a once-streamlined experience into a frustrating exercise in digital skepticism. We are witnessing the inevitable "enshittification" of AI search, and honestly, it's unclear whether there is any coming back from this specific brand of decay.

The Rise and Rapid Stagnation of the Answer Engine Dream

To understand why the consensus has flipped, we have to look at the initial promise of generative search. Perplexity launched as the antithesis of the blue-link era, using RAG (retrieval-augmented generation) to ground LLMs in real-time web data. It was hailed as the "Google Killer" because it valued your time. But as the user base ballooned toward 10 million monthly active users by early 2024, the infrastructure began to buckle under the dual pressures of scaling costs and the need for a viable business model. The core problem is that serving high-quality inference for every query is astronomically more expensive than a traditional keyword index lookup.
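
To make the pattern concrete, here is a minimal sketch of the RAG loop described above, with a toy in-memory corpus and a stubbed `fake_llm` standing in for a real search index and model endpoint. Everything here is illustrative; none of it reflects Perplexity's actual internals.

```python
# A toy sketch of the RAG pattern: retrieve documents, stuff them into the
# prompt, and ask the model to answer only from them, citing as it goes.

CORPUS = [
    {"url": "https://example.com/a", "text": "Graphene conducts heat extremely well."},
    {"url": "https://example.com/b", "text": "Toasters brown bread via radiant heat."},
]

def retrieve(query: str, k: int = 2) -> list[dict]:
    # Naive keyword overlap in place of a real search index.
    q = set(query.lower().split())
    scored = sorted(CORPUS, key=lambda d: -len(q & set(d["text"].lower().split())))
    return scored[:k]

def fake_llm(prompt: str) -> str:
    # Stand-in for a real completion endpoint.
    return "Graphene conducts heat extremely well [1]."

def answer(query: str) -> str:
    docs = retrieve(query)
    # Ground the model in retrieved text instead of its parametric memory.
    context = "\n".join(f"[{i+1}] {d['url']}: {d['text']}" for i, d in enumerate(docs))
    prompt = f"Answer ONLY from these sources, citing [n]:\n{context}\n\nQ: {query}"
    return fake_llm(prompt)

print(answer("how well does graphene conduct heat?"))
```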

From Information Utility to Content Aggregator

There was a window of time when the citations were impeccable. Now? You see a source cited for a claim that doesn't actually exist within the linked text. It's a phenomenon some researchers call "source drift," where the model generates a plausible-sounding answer and then retroactively forces a citation onto it, regardless of what the source actually says. People don't think about this enough: when an AI hallucinates with a footnote, it's arguably more dangerous than a standard chatbot hallucination, because the visual cues of authority bypass our natural skepticism. That changes everything about the trust relationship we built with these tools.
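
If you want to spot source drift yourself, a crude heuristic is to check whether the cited text shares any real substance with the claim. The sketch below uses simple word overlap with an arbitrary 0.5 threshold; a production verifier would need an entailment model, but even this catches the most blatant mismatches.

```python
# Rough heuristic for "source drift": does the cited passage actually share
# substance with the claim attached to it? Threshold and stopword list are
# arbitrary choices for illustration.

def supports(claim: str, source_text: str, threshold: float = 0.5) -> bool:
    stop = {"the", "a", "an", "of", "to", "in", "and", "is", "was"}
    claim_terms = {w for w in claim.lower().split() if w not in stop}
    source_terms = set(source_text.lower().split())
    if not claim_terms:
        return False
    overlap = len(claim_terms & source_terms) / len(claim_terms)
    return overlap >= threshold

claim = "The CEO resigned in March 2024."
cited = "The company announced record quarterly earnings under its CEO."
print(supports(claim, cited))  # False: the citation does not back the claim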

The Incentive Trap of Silicon Valley Growth

Because the venture capital world demands exponential returns, the pivot from "useful tool" to "engagement platform" was inevitable. We saw this with the introduction of Perplexity Pages and other social features that feel tacked on and desperate. Instead of refining the core search logic, the development team seems distracted by aesthetic flourishes and content-curation features that nobody asked for, which explains why core latency has increased while the actual "intelligence" of the responses feels like it has hit a ceiling. It is a classic case of a company trying to be everything to everyone and ending up mediocre at the one thing it did best.

The Technical Rot Behind the Interface

The most egregious technical decline lies in the model routing. Perplexity uses a mixture of models, including Claude 3, GPT-4o, and its own fine-tuned versions of Llama, but the way it selects which "brain" to use for a given query has become increasingly opaque and seemingly optimized for cost-cutting rather than accuracy. Here is the nuance: while the hardware is faster, the logic is lazier. If you ask a complex multi-step question about the thermal conductivity of graphene at 300 K, you are now just as likely to get a generic summary as a technical breakdown. As a result, the platform often defaults to the path of least resistance, giving you a "vibes-based" answer instead of a data-driven one.
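
We can only speculate about the actual routing logic, but a cost-first router is easy to caricature. In the hypothetical sketch below, the model names, prices, and "complexity" heuristic are all invented; the point is that a crude complexity signal quietly sends hard queries to the cheap tier.

```python
# Illustrative cost-first model routing. When the complexity signal is crude,
# technical queries that lack the right keywords get misrouted to the cheap tier.

MODELS = {
    "cheap-fast": {"cost_per_1k": 0.0005},  # made-up prices
    "frontier":   {"cost_per_1k": 0.0150},
}

TECHNICAL_HINTS = ("thermal", "conductivity", "derive", "prove", "benchmark")

def route(query: str, cost_sensitive: bool = True) -> str:
    looks_hard = len(query.split()) > 25 or any(h in query.lower() for h in TECHNICAL_HINTS)
    if cost_sensitive and not looks_hard:
        return "cheap-fast"          # default: the path of least resistance
    return "frontier"

print(route("best toaster under $50"))                     # cheap-fast
print(route("thermal conductivity of graphene at 300 K"))  # frontier
print(route("summarize this dense legal contract please")) # cheap-fast (misrouted)
```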

The Hallucination Paradox in RAG Systems

We are far from the days when RAG was seen as the silver bullet for AI accuracy. The system now frequently falls into recursive loops, citing an article that was itself written by another AI, which in turn sourced an older, incorrect Perplexity answer. This Ouroboros of misinformation is poisoning the well. And since the web is being flooded with AI-generated SEO junk, Perplexity's crawlers are feasting on digital garbage. Did the model get dumber, or did the internet just get harder to parse? Experts disagree on the ratio, but the result is the same: a noticeable drop in the "Eureka" moments that characterized the early beta days.
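
That Ouroboros is, structurally, just a cycle in the citation graph: article A cites B, B cites C, C cites A. Here is a small depth-first-search sketch over toy data that surfaces such a loop; real citation graphs are far messier, but the failure mode is the same.

```python
# Detecting a citation cycle with depth-first search over a toy graph.

def find_cycle(graph: dict[str, list[str]]) -> list[str] | None:
    """Return one citation cycle if present, else None."""
    state: dict[str, int] = {}   # 0 = currently visiting, 1 = fully explored
    stack: list[str] = []

    def dfs(node: str) -> list[str] | None:
        state[node] = 0
        stack.append(node)
        for nxt in graph.get(node, []):
            if state.get(nxt) == 0:                 # back edge: cycle found
                return stack[stack.index(nxt):] + [nxt]
            if nxt not in state:
                found = dfs(nxt)
                if found:
                    return found
        state[node] = 1
        stack.pop()
        return None

    for n in list(graph):
        if n not in state:
            cycle = dfs(n)
            if cycle:
                return cycle
    return None

citations = {
    "blog-post": ["ai-rewrite"],
    "ai-rewrite": ["old-perplexity-answer"],
    "old-perplexity-answer": ["blog-post"],   # the snake eats its tail
}
print(find_cycle(citations))
```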

The Latency-Accuracy Tradeoff

Nobody likes waiting four seconds for a search result, but I would take a five-second wait over a two-second lie. Recent updates have clearly prioritized "time to first token," meaning the response starts appearing almost instantly. Yet, this speed comes at a heavy cost. The model often commits to an answer before it has fully "read" the search results it pulled, leading to those awkward mid-sentence corrections or outright contradictions. (I once saw it claim a CEO had resigned in the first sentence, only to cite his current 2026 initiatives in the third). Is this efficiency? Hardly. It is a frantic attempt to mimic the speed of Google without having the underlying architecture to support it.
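
The tradeoff is easy to simulate. In the toy sketch below (invented sources and latencies), an "eager" answerer streams from whatever snippet arrives first and repeats a stale fact, while a "patient" one waits for every source before committing.

```python
# Toy illustration of the time-to-first-token tradeoff: committing early to
# the first snippet versus waiting for all evidence. Data is invented.

import time

SOURCES = [
    ("stale-cache", 0.1, "The CEO resigned."),
    ("live-page",   1.2, "The CEO announced the company's 2026 initiatives."),
]

def eager_answer() -> str:
    """Stream as soon as the first snippet lands: minimal TTFT, stale facts."""
    name, latency, text = min(SOURCES, key=lambda s: s[1])
    time.sleep(latency)
    return text

def patient_answer() -> str:
    """Read everything before committing: slower, but internally consistent."""
    time.sleep(max(latency for _, latency, _ in SOURCES))
    # A trivial recency rule stands in for real conflict resolution.
    return SOURCES[-1][2]

for fn in (eager_answer, patient_answer):
    start = time.perf_counter()
    print(f"{fn.__name__}: {fn()!r} in {time.perf_counter() - start:.1f}s")
```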

Degrading User Experience and the Monetization Pivot

Let's talk about the ads, or "sponsored follow-up questions" as they prefer to call them. This is where it gets really messy for the end user. When your search engine starts nudging you toward a specific brand under the guise of "helpful suggestions," the integrity of the entire system collapses. Worse, the interface has become cluttered with visual noise (trending topics, discovery feeds, social sharing buttons) that actively distracts from the mission of finding information. It feels less like a library and more like a tabloid magazine whose articles are written by a tired intern skimming Wikipedia.

The Subscription Value Proposition Gap

For $20 a month, the Pro tier used to feel like a steal: you got the best models and unlimited file uploads. Yet the gap between the free experience and the paid one is narrowing, not because the free version is getting better, but because the Pro version is stagnating. Why am I paying for a premium service that still struggles to summarize a basic PDF without missing the most important statistical outliers? The "added value" features, like image generation via DALL-E 3 or Flux, feel like parlor tricks rather than professional tools. It's a classic bait-and-switch in which utility is sacrificed for a broader, shallower feature set.

The Alternatives Gaining Ground in the Shadows

While Perplexity was busy trying to become a media company, others were quietly refining the tech. Search engines like Genspark or even the SearchGPT prototype have shown that it is possible to maintain a focus on deep research without the fluff. Even Kagi, which requires a subscription, offers a much cleaner and more reliable summarize function because it is not trying to sell your eyeballs to the highest bidder. Hence, the power users have already begun migrating, away from the flashy interface and toward tools that respect the sanctity of a raw data query. In short, the early adopters are exiting the building just as the general public is starting to arrive.

Is Google's AI Overviews Actually Winning?

This is the contrarian take that people hate: Google’s AI Overviews, despite the "glue on pizza" memes of 2024, has become surprisingly stable. Because Google has the largest index on the planet, their grounding is—dare I say—occasionally superior to Perplexity's scraped snippets. It’s a bitter pill to swallow for those of us who wanted a disruptor to win. But the massive infrastructure advantage of the Mountain View giant means they can afford to play the long game. Perplexity, by contrast, is acting like a company that knows its runway is getting shorter. And when a company gets desperate, the product is always the first thing to suffer.

Common pitfalls and the great hallucination trap

The source quality illusion

You probably think that because a tool cites its sources, the output is inherently verified. This is the first major misconception. While Perplexity AI acts like a sleek librarian, it suffers from a digital version of confirmation bias. It scrapes the web at breakneck speed, yet it cannot distinguish between a peer-reviewed paper and a disgruntled redditor's rant if the SEO signals are strong enough. The problem is that citations create a veneer of credibility that lulls you into a false sense of security. Because the interface looks like a scholarly database, we treat it like one. But let's be clear: a citation is merely a link, not a badge of truth. Some analyses suggest that up to 15% of AI-generated citations in rapid-search modes are misattributed or contextually skewed, because the model prioritizes relevance over veracity. And if the top search results are garbage, your answer will be expensive, high-tech garbage. That is why relying on AI search without clicking the actual links is a recipe for professional disaster.

The prompt engineering myth

Many power users believe they can fix the decline in quality by simply writing better prompts, and they spend hours crafting complex instructions. But no amount of prompt wizardry can overcome a degrading retrieval-augmented generation pipeline. If the underlying index is cluttered with AI-generated filler from the open web, the model is essentially eating its own tail, a failure mode the industry calls "model collapse." As a result, the more you try to steer the AI with long prompts, the more likely it is to drift into tangential nonsense or ignore your negative constraints entirely. It is a game of diminishing returns.

The hidden cost of the "Pro" transition

Architectural thinning for speed

Scaling a global search engine is absurdly expensive. To keep latency low for millions of users, a quiet, technical thinning of the models is occurring behind the scenes. Expert analysis suggests that inference costs for massive models like GPT-4o or Claude 3.5 Sonnet are unsustainable for a search use case without heavy quantization (basically the digital equivalent of watering down the soup). Why is Perplexity bad now? Partly because the balance between raw intelligence and operational speed has tipped too far toward the latter. When you ask a complex query, the system may be processing only a shredded version of the source text to save on compute tokens. The irony is that we pay for "Pro" features while receiving an optimized, lightweight version of the intelligence we were promised. If you are noticing a drop in nuance, you are not imagining it; you are witnessing the economic reality of the AI boom hitting a ceiling. My stance is firm: we are currently sacrificing depth for the sake of a three-second response time.
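
For the curious, here is what that thinning looks like at the numerical level: a minimal sketch of symmetric 8-bit quantization applied to a random weight vector. The weights are synthetic; the takeaway is the reconstruction error you accept in exchange for memory and speed.

```python
# Symmetric int8 quantization of a weight vector: the numerical version of
# "watering down the soup." Weights are random for illustration.

import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0, 0.5, size=8).astype(np.float32)

scale = np.abs(weights).max() / 127            # map the largest weight to 127
q = np.round(weights / scale).astype(np.int8)  # 8 bits per weight instead of 32
dequantized = q.astype(np.float32) * scale

print("original:   ", np.round(weights, 4))
print("dequantized:", np.round(dequantized, 4))
print("max error:  ", float(np.abs(weights - dequantized).max()))
```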

Frequently Asked Questions

Has the accuracy of Perplexity decreased in 2026?

Recent benchmarks indicate a fluctuating performance curve, with accuracy on complex multi-step reasoning tasks dropping by approximately 12% compared to early-2025 iterations. This decline is largely attributed to the "dead internet theory" becoming reality, as synthetic data now makes up an estimated 40% of the indexed web. The engine frequently gets trapped in recursive loops where it cites articles that were themselves written by AI. As a result, the factual density of the outputs feels thinner than it did eighteen months ago. You might find the surface-level answers acceptable, but the granularity of the data is objectively eroding under the weight of automated content farms.

Why does the AI ignore specific instructions in my search?

The problem is the internal conflict between the search retriever and the language model's latent biases. When you request a specific format or constraint, the system often prioritizes its "search snippet" logic over your custom instructions, which leads to generic summaries that skip over your requirements. It happens because the context window is being flooded with raw HTML from the search results, leaving less "mental space" for the model to remember your formatting rules. In short, the search results are suffocating the prompt instructions, leading to a frustratingly repetitive user experience.
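
One way engineers mitigate this is explicit context budgeting: reserve tokens for the instructions and trim retrieved pages to whatever is left. The sketch below is a toy version, counting "tokens" by whitespace split and using a made-up 120-token window; real systems use the model's own tokenizer.

```python
# Toy context budgeting: reserve room for instructions, trim retrieved pages
# to fit the rest, so the prompt never crowds out the formatting rules.

WINDOW = 120  # pretend the model only holds 120 tokens

def build_prompt(instructions: str, pages: list[str], reserve: int = 30) -> str:
    """Keep a reserved budget for instructions; trim retrieved text to fit."""
    budget = WINDOW - reserve
    kept: list[str] = []
    for page in pages:
        tokens = page.split()
        take = min(len(tokens), budget - sum(len(k.split()) for k in kept))
        if take <= 0:
            break
        kept.append(" ".join(tokens[:take]))
    return instructions + "\n\n" + "\n---\n".join(kept)

pages = ["lorem ipsum " * 100, "dolor sit " * 100]
prompt = build_prompt("Answer in exactly three bullet points.", pages)
print(len(prompt.split()), "tokens total; instructions survived intact")
```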

Is there a better alternative for academic research?

While Perplexity remains the most visible player, specialized tools like Consensus or Elicit are gaining ground by indexing only validated research papers rather than the entire chaotic web. These platforms maintain a higher "truth floor" because they do not let marketing blogs influence their logic. A general-purpose tool will always struggle with high-stakes accuracy compared to a niche vertical search engine. If you require fully verifiable data points, you must move away from generalist LLM search. Relying on a generalist AI for specialist work is like using a Swiss Army knife to perform heart surgery: it is technically a blade, but it is the wrong tool for the job.

The verdict on the search revolution

We are currently living through the awkward teenage years of AI-integrated search. The initial magic of instant synthesis has worn off, revealing a skeletal structure that is struggling to support its own weight. I believe we have reached a point where "more data" has become the enemy of "better answers." We should stop pretending that an automated summary of the internet is the same thing as knowledge. The current trajectory suggests that Perplexity AI is becoming a victim of the very speed it pioneered. If the platform does not pivot toward a more aggressive, elitist filtering of its sources, it will soon be nothing more than a high-speed echo chamber for mediocrity. Truth is expensive, and right now, the industry is trying to sell it to us at a discount. We must demand a return to depth over velocity or accept that our search results are becoming a digital hall of mirrors.
