The Exponential Expansion of Digital Information: Unpacking What Are the 12 V's of Big Data in Modern Analytics

Beyond the Hype: Why the Definition of Big Data Had to Evolve

Remember 2001? Doug Laney of META Group (later acquired by Gartner) dropped the 3 V's—Volume, Velocity, and Variety—and for a decade, that was the gold standard for defining our digital deluge. It was elegant. It was simple. But honestly, it's unclear why we clung to it for so long when the reality on the ground was getting infinitely messier. As sensors started populating every street corner in London and every industrial turbine in GE's aviation division, the old triad broke down. The issue remains that having a petabyte of data is useless if that data is lying to you, or if it's so volatile that it spoils like milk in the sun. Which explains why researchers and practitioners kept tacking on new letters like Post-it notes on a whiteboard.

The Death of the Three-Pillar Model

I find the obsession with "more" data to be one of the great fallacies of the early 2010s. We were told that volume was king, yet companies were drowning in Data Lakes that quickly turned into stagnant Data Swamps. And where it gets tricky is that the velocity of incoming streams—think 500 million tweets per day or the 4 terabytes generated by a single Boeing engine—doesn't matter if your processing architecture has the latency of a dial-up modem. The traditional model ignored the human element and the inherent messiness of the real world. We needed a more granular lens to look at the 12 V's of big data because the "V" for Veracity (truthfulness) alone can make or break a predictive maintenance algorithm. But wait, is even 12 enough, or are we just playing a linguistic game of Scrabble? Experts disagree on the final count, but the 12-factor model has emerged as the most robust way to categorize unstructured, semi-structured, and structured datasets.

Decoding the Core Infrastructure: Volume, Velocity, and Variety Revisited

To understand the 12 V's of big data, we have to start with the foundational bricks, though with a 2026 perspective that acknowledges they aren't what they used to be. Volume is the obvious one: the sheer gargantuan scale of bits and bytes, often measured in zettabytes (IDC projected the world would generate roughly 175 zettabytes annually by 2025). Yet volume is the least interesting part of the equation now. Storage is cheap. Cloud providers like AWS and Azure have made the physical act of "holding" data a commodity. The real challenge has shifted to the next two pillars. Velocity isn't just about speed; it's about the relative decay of data value. If a high-frequency trading bot at a firm like Citadel doesn't process market movements in microseconds, that data is worthless. It's an expired lottery ticket. And then we have Variety. This isn't just "SQL vs. NoSQL" anymore. We are talking about LiDAR point clouds, biometric heart-rate streams, and natural language processing of sarcastic Reddit threads. That changes everything about how we design schemas.
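The "expired lottery ticket" idea can be made concrete. Here is a minimal sketch, assuming a simple exponential half-life model of data value (the function name and the 100-microsecond half-life are illustrative assumptions, not any standard formula):

```python
import math

def data_value(initial_value: float, age_seconds: float, half_life_seconds: float) -> float:
    """Exponential decay: every half-life, the actionable value of a record halves.

    This is an illustrative model, not a market formula.
    """
    return initial_value * 0.5 ** (age_seconds / half_life_seconds)

# A market tick worth $1.00 when fresh, with an assumed 100-microsecond half-life:
# after 500 microseconds (five half-lives) only about 3% of its value remains.
stale_value = data_value(1.00, age_seconds=0.0005, half_life_seconds=0.0001)
```

The point of the sketch is the shape of the curve: under a short half-life, "slightly late" and "worthless" are almost the same thing, which is why velocity is really about value decay rather than raw throughput.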

The Friction of Disparate Data Types

People don't think about this enough: Variety creates a massive "tax" on your engineering team. When you are trying to reconcile JSON logs from a mobile app with legacy COBOL mainframe records a bank has carried since 1984, the Variety V becomes a nightmare of ETL (Extract, Transform, Load) pipelines. As a result, the complexity doesn't scale linearly; it scales exponentially. We're far from the days when "big data" just meant a really big Excel spreadsheet. Today, variety includes spatial-temporal data—knowing not just what happened, but exactly where and in what sequence. Did the customer walk past the kiosk before or after they received the push notification? That nuance is where the 12 V's of big data start to show their worth.
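To see what that reconciliation tax looks like in miniature, here is a sketch that normalizes two wildly different sources into one schema. The fixed-width column layout and the JSON field names are hypothetical, invented purely for illustration:

```python
import json

# Hypothetical fixed-width layout for a legacy mainframe record:
# cols 0-9 account id, cols 10-17 date as YYYYMMDD, cols 18-27 amount in cents.
def parse_legacy(line: str) -> dict:
    return {
        "account": line[0:10].strip(),
        "date": f"{line[10:14]}-{line[14:16]}-{line[16:18]}",
        "amount_cents": int(line[18:28]),
    }

# Hypothetical mobile-app event: JSON with an ISO timestamp and a float amount.
def parse_app_event(raw: str) -> dict:
    evt = json.loads(raw)
    return {
        "account": evt["acct"],
        "date": evt["ts"][:10],             # ISO timestamp -> date
        "amount_cents": round(evt["amount"] * 100),
    }

legacy = parse_legacy("ACC0000042198405150000012500")
modern = parse_app_event('{"acct": "ACC0000042", "ts": "2026-01-02T10:00:00Z", "amount": 125.0}')
```

Even in this toy version, note how many silent decisions get made: date formats, currency precision, field names. Multiply by dozens of sources and the exponential-complexity claim stops sounding like hyperbole.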

The Truth and the Trash: Veracity, Variability, and Validity

This is where the 12 V's of big data move from the server room to the boardroom. Veracity is the hidden killer. It refers to the quality and provenance of the data. If the sensors in a Tesla Autopilot system are miscalibrated by 1%, the resulting "big data" is actively dangerous. You can have all the volume in the world, but if the signal-to-noise ratio is skewed, your Machine Learning models will hallucinate. Then comes Variability. This is often confused with Variety, but it's actually about the inconsistency of data flow. Think about a retail website during Black Friday; the data peaks are mountains compared to the valleys of a Tuesday morning. If your system can't handle the swing, it crashes. In short, variability is about the "heartbeat" of the data—is it steady, or is it tachycardic?
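One common way to put a number on that "heartbeat" is the coefficient of variation of arrival rates. This is a minimal sketch, assuming per-minute event counts; the sample numbers are invented:

```python
from statistics import mean, pstdev

def burstiness(counts_per_minute: list[int]) -> float:
    """Coefficient of variation of the flow: 0 means a perfectly steady stream;
    values well above 1 mean the peaks dwarf the valleys."""
    mu = mean(counts_per_minute)
    return pstdev(counts_per_minute) / mu if mu else 0.0

steady = [100, 102, 98, 101, 99]      # a quiet Tuesday morning
spiky  = [100, 95, 4000, 3800, 110]   # a Black Friday-style traffic pattern
```

A system provisioned for the mean of the spiky series would fall over at the peaks; the metric makes that risk visible before the crash does.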

Does Your Data Have a Use-By Date?

Validity is the cousin of Veracity, but it focuses on the appropriateness of the data for a specific use case. It's about contextual accuracy. Is a dataset of historical weather patterns from 1950 valid for predicting 2026 urban heat islands? Maybe, but probably not. The 12 V's of big data force us to ask these uncomfortable questions before we spend $500,000 on a GPU cluster to train a model. We have to ensure the data is "clean" (a term engineers use loosely, knowing nothing is ever truly clean) and that it aligns with regulatory frameworks like GDPR or CCPA. But why do we so often prioritize gathering over validating? Likely because gathering feels like progress, whereas validating feels like a chore. Except that chores are what keep the house from falling down.
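Asking those uncomfortable questions can be mechanized as a rule set run before any training job. This is a minimal sketch; the two rules (a freshness cutoff and a plausibility range for temperature) are hypothetical examples of use-case-specific validity checks:

```python
# Hypothetical rule set: a record is valid *for this use case* only if it is
# recent enough and its fields are contextually plausible.
RULES = [
    ("fresh enough",   lambda r: r["year"] >= 2000),
    ("temp plausible", lambda r: -60.0 <= r["temp_c"] <= 60.0),
]

def validate(record: dict) -> list[str]:
    """Return the names of every rule the record fails (empty list = valid)."""
    return [name for name, check in RULES if not check(record)]

ok_record  = {"year": 2024, "temp_c": 31.5}
old_record = {"year": 1950, "temp_c": 18.0}   # veracious, but not valid for a 2026 model
```

Note the distinction the sketch encodes: the 1950 record may be perfectly truthful (Veracity), yet it still fails Validity because it is the wrong data for this question.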

The Economic Reality: Value and Viability as Competitive Moats

Let's be blunt: if you aren't getting Value out of your stack, you're just running an expensive digital museum. Value is the most important of the 12 V's of big data because it represents the Return on Investment (ROI). It’s the "so what?" at the end of the analysis. A company like Netflix uses its data to decide which shows to greenlight—that is a billion-dollar application of value. On the flip side, Viability asks if the data is actually worth the cost of its own processing. Is the 0.5% increase in predictive accuracy worth the $2 million in compute credits it took to get there? Sometimes the answer is a hard "no." This is a sharp departure from the "collect everything" mantra of the last decade. Now, the 12 V's of big data are being used to prune the garden as much as to grow it. We are seeing a shift toward Data Frugality, where the viability of a project is scrutinized under the lens of sustainability and energy consumption.
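The Viability question in that paragraph is just an inequality: marginal value versus marginal cost. Here is a minimal sketch of the check, with the dollar-per-accuracy-point figure being an assumed business input, not something the data can tell you:

```python
def is_viable(accuracy_gain_points: float, value_per_point: float, compute_cost: float) -> bool:
    """A project clears the Viability bar only if the marginal value of the
    accuracy gain exceeds the compute spent to obtain it."""
    return accuracy_gain_points * value_per_point > compute_cost

# 0.5 points of accuracy, each point assumed to be worth $1M downstream,
# bought with $2M of compute credits: a hard "no".
verdict = is_viable(accuracy_gain_points=0.5, value_per_point=1_000_000, compute_cost=2_000_000)
```

The hard part in practice is estimating `value_per_point` honestly; the arithmetic itself is trivial, which is exactly why skipping it is inexcusable.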

Strategic Selection vs. Hoarding

The issue remains that most companies are still in the hoarding phase. They see the 12 V's of big data as a checklist of things to have, rather than a set of filters to apply. But—and this is a big "but"—the most sophisticated players are looking at Volatility. How long do we need to keep this? In a high-frequency trading environment, the volatility is extreme; the data is "alive" for seconds. In genomic research, the data might need to be stored for 50 years. Understanding the 12 V's of big data means matching your storage architecture to the lifespan of the information's relevance. It’s a balancing act between the "forever" of the archive and the "now" of the stream. Which leads us to a crucial realization: the 12 V's are not just technical specs; they are economic indicators of a firm's digital maturity.
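Matching storage architecture to the lifespan of relevance can be expressed as a simple retention map. This is a sketch under assumptions: the tier names and TTL figures are illustrative, echoing the seconds-versus-decades contrast above:

```python
# Hypothetical retention policy: storage tier chosen by how long the data
# stays relevant (its Volatility), not by how big it is.
RETENTION = {
    "market_ticks":  ("in-memory stream", 10),                # alive for seconds
    "clickstream":   ("hot object store", 90 * 86400),        # ~90 days
    "genomic_reads": ("cold archive",     50 * 365 * 86400),  # ~50 years
}

def tier_for(dataset: str) -> str:
    """Look up the storage tier implied by a dataset's relevance lifespan."""
    tier, _ttl_seconds = RETENTION[dataset]
    return tier
```

Encoding the policy as data rather than tribal knowledge is the whole trick: the archive-versus-stream balancing act becomes a table anyone can audit.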

Common mistakes and misconceptions about the 12 V's of big data

Thinking that more letters in your alphabetical soup translates to better insights is a trap. Numerical bloat often masquerades as progress. Let's be clear: collecting every byte because storage is cheap does not mean the information is useful. You might think having a handle on Vulnerability and Volatility makes your infrastructure invincible. It doesn't. Many architects obsess over Volume while ignoring the rot at the core of their datasets. It is a mess. Because if the Veracity is non-existent, you are just performing high-speed calculations on digital garbage. Why do we keep building bigger silos for smaller truths?

The trap of the "Universal Metric"

Engineers often fall into the pit of treating all 12 V's as equal. They are not. Some frameworks prioritize Variability—the unpredictable peaks in data flow—above the actual Value extracted at the end of the pipeline. The problem is that a 10% increase in data ingestion speed rarely results in a 10% increase in profit. Gartner analysts once estimated that roughly 85% of big data projects fail to make it into production. That staggering figure stems from teams chasing every "V" simultaneously without a specific business objective. You cannot optimize for Viscosity and Velocity with the same hardware configuration without encountering physics-based bottlenecks. It is an expensive delusion. Yet we see companies burning through venture capital trying to achieve Visions that are mathematically impossible given their current Validity checks.

Confusing Variety with Complexity

A massive misconception involves the definition of Variety. Most assume it just means having JSON files next to SQL tables. Except that true variety includes multi-structured data like biometric sensors, LiDAR point clouds, and sentiment-heavy social streams. Complexity is a byproduct, not a goal. But many developers intentionally over-engineer systems to handle theoretical Vagueness that never actually appears in their specific industry. If you are a local retail chain, you probably do not need to worry about the Valence of interconnected global financial nodes. Focus on what moves the needle.

The hidden cost of Data Viscosity

An expert look at the friction of flow

The issue remains that Data Viscosity is the silent killer of the modern enterprise. We talk about Velocity as the speed of light, but viscosity is the thickness of the "honey" through which that light must travel. Imagine a petabyte-scale migration where every transformation layer adds 50 milliseconds of latency. In a high-frequency trading environment, that is an eternity. Experts know that reducing viscosity involves flattening the architecture of big data (a grueling process) to ensure that Validity is checked at the edge rather than the core. As a result, your insights arrive while they are still actionable. If your data is too thick to move, it is just digital sediment. (And nobody ever got rich off sediment unless they were mining for literal gold.) We must acknowledge that Venue—the physical or virtual location where processing happens—dictates the resistance of the entire 12-V stack.
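The arithmetic behind "flatten the architecture" is blunt: per-layer latencies accumulate end to end. A minimal sketch, with the eight-hop and two-hop pipelines assumed for illustration:

```python
def pipeline_latency_ms(layer_latencies_ms: list[float]) -> float:
    """Viscosity in practice: each transformation layer adds friction,
    and the delays simply add up across the pipeline."""
    return sum(layer_latencies_ms)

deep_pipeline = [50.0] * 8   # eight hops at 50 ms each -> 400 ms of friction
flat_pipeline = [50.0] * 2   # same checks pushed to the edge -> 100 ms
```

Nothing exotic, but it makes the trade-off legible: flattening from eight layers to two does not make any single hop faster; it removes hops, which is the only lever viscosity responds to.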

Frequently Asked Questions

Which of the 12 V's is the most expensive to manage?

While Volume used to take the crown, Veracity has become the primary budget consumer in the 12 V's of big data era. Cleaning "dirty" data accounts for roughly 80% of a data scientist's time, representing a massive hidden labor cost. Organizations often spend 3 to 4 times more on data governance and quality assurance than they do on the actual storage hardware. If 25% of your records are duplicated or incorrect, your Value proposition evaporates regardless of your processing speed. In short, the price of truth is significantly higher than the price of space.

How does the concept of Valence impact modern social media analytics?

Valence refers to the complexity of connections between data points, much like atoms in a molecule. In the context of the twelve dimensions of big data, it describes how one event can trigger a massive cascade across a network. For example, a single tweet can influence stock prices, which then triggers automated sell-orders, which then impacts consumer sentiment. Monitoring these dynamic interactions requires graph databases rather than traditional relational systems. The issue remains that failing to track valence leads to a total misunderstanding of how Volatility actually spreads through a system.
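The tweet-to-sell-order cascade described above is, structurally, reachability in a directed graph. Here is a minimal sketch using a plain breadth-first walk; the influence graph itself is a made-up toy, not real market structure:

```python
from collections import deque

# Hypothetical influence graph: an edge A -> B means an event at A can trigger B.
GRAPH = {
    "tweet":       ["stock_price"],
    "stock_price": ["sell_orders"],
    "sell_orders": ["sentiment"],
    "sentiment":   ["stock_price"],   # feedback loop back into prices
}

def cascade(start: str) -> set[str]:
    """Breadth-first walk of everything a single event can eventually reach."""
    seen, frontier = {start}, deque([start])
    while frontier:
        node = frontier.popleft()
        for nxt in GRAPH.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen
```

A dedicated graph database does the same traversal at scale with indexing and query languages on top, but the sketch shows why relational joins are the wrong tool: the question is "what can this node reach," not "which rows match."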

Can a small business ignore the 12 V's of big data?

Small businesses cannot afford to ignore the framework of big data, though they should prioritize different pillars. While Volume might be manageable, Validity and Value are non-negotiable for survival in a competitive market. A company with only 500 customers still deals with Variability in seasonal buying patterns and the Velocity of digital payments. Ignoring these factors leads to Vulnerability against larger, data-driven competitors. Data is the new oil, and even a small engine needs clean fuel to run efficiently.

Beyond the buzzwords: A final stance

The 12 V's of big data represent a necessary evolution of a tired concept, but they are also a warning against corporate gluttony. We have reached a point where the obsession with Volume and Velocity has outpaced our cognitive ability to actually understand the results. My position is simple: if you cannot turn your Petabytes into a single sentence of actionable advice, you have failed. The complexity of the twelve attributes of big data should serve as a diagnostic tool, not a checklist for ego-driven expansion. We are drowning in Viscosity while starving for Veracity. Stop collecting "everything" and start collecting what matters. The future belongs to those who can simplify the 12 V's into one single "R"—Results.
