Thanks to the 'trackability' of digital media and the rise of Big Data, more and more companies are hoping that decisions they once made on gut instinct or educated guesswork can now be made on hard data.
Which, in theory, is a good thing: data-driven decisions should enable businesses to understand the dynamics in their market and use that knowledge to better serve their customers.
No metrics, 'mo problems
The data behind data-driven decisions, of course, is of limited use in its raw form. You'll probably never find a company where an executive made a decision because they personally sifted through 5m tweets.
Instead, we define metrics and build software that turns data into numbers that humans can easily digest. Unfortunately, given the huge volumes of data businesses increasingly have access to, one of the biggest challenges companies face today is choosing the metrics worth adopting. In many cases, the ideal metrics simply don't exist. In the social media realm, for instance, establishing metrics has been a real source of frustration. Complicating matters: companies will in many cases have to wait on the owners of first-party data, like Facebook, to make their dream metrics a reality.
The math behind the metrics
The fact that there is a real desire for better metrics in most channels and markets doesn't mean that half-decent metrics don't exist. They do, and as imperfect as many of them are, the good news is that companies are using them to build businesses that are capable of making more informed decisions.
There's bad news, however: the math behind the metrics is often flawed. What's worse: this is true even for common metrics that are seen as being as basic as they come.
Take, for instance, customer lifetime value (CLV). This is a common metric that can be, and frequently is, used in a number of markets, but regardless of the market you choose to look at, you're bound to encounter companies calculating customer lifetime value in the most flawed way possible: taking total revenue and dividing it by the total number of customers.
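To see why this naive division misleads, consider a sketch with invented figures (the customer counts and prices below are illustrative assumptions, not data from any real business):

```python
# Hypothetical sketch of the naive CLV calculation described above.
# All figures are invented for illustration.

# Suppose half the customer base signed up last month and has paid once,
# while the other half has been paying $10/month for three years.
new_customers = 500        # one payment of $10 each
tenured_customers = 500    # 36 payments of $10 each

total_revenue = new_customers * 10 + tenured_customers * 36 * 10  # $185,000
total_customers = new_customers + tenured_customers               # 1,000

naive_clv = total_revenue / total_customers
print(naive_clv)  # 185.0
```

The resulting figure of $185 describes neither group: the tenured customers have already generated $360 each, while the new customers' lifetimes have barely begun, so the "average" is an artefact of the customer mix rather than a measure of value.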
Such a flawed approach isn't seen everywhere; fortunately many companies are more sophisticated than that. But this doesn't mean that the approaches they take are much better. A company might, for example, calculate the average revenue per user (ARPU) on a monthly basis and multiply it over a desired period (e.g. 36 months) to calculate the value of a customer over that period. But if that company has a lot of new customers, for instance, the value will naturally be skewed and therefore of limited use.
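The skew is easy to demonstrate. In this sketch (again with invented numbers: the per-customer spend and cohort sizes are assumptions), a recent influx of lower-spending new customers drags the ARPU-based projection well below what established customers are actually worth:

```python
# Hypothetical sketch of the ARPU x 36-month projection described above.
# Figures are invented; tenures and dollar amounts are assumptions.

# A mature customer spends $30/month; a brand-new customer, still on an
# introductory plan, spends $10/month.
mature = [30] * 800      # monthly revenue from 800 established customers
new = [10] * 1200        # a recent growth spurt: 1,200 new customers

monthly_arpu = (sum(mature) + sum(new)) / (len(mature) + len(new))
projected_value = monthly_arpu * 36   # naive 36-month projection

print(monthly_arpu)      # 18.0 -- dragged down by the new cohort
print(projected_value)   # 648.0 vs. ~$1,080 for a mature customer
```

Segmenting customers into cohorts before averaging (rather than blending everyone into one ARPU figure) is one way to avoid this distortion, which is precisely the skew the paragraph above warns about.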
Moving too fast
This raises the question: if lots of companies can't get metrics like CLV and ARPU right, what are the odds that they'll get more complicated metrics, both existing and not-yet-developed, right?
Arguably, part of the problem is that we place more value on numbers than on the process by which those numbers are calculated. The math is not sexy; the data to which the math is applied and the output of the math are, so that's what we focus on.
But that's dangerous given the velocity at which data is being collected today and the credibility that businesses increasingly give to metrics in the decision-making process. After all, decisions based on numbers that don't mean anything, and that are the product of a flawed calculation, could very well prove to be worse than decisions made on instinct. So as companies seek the answer to the question "What should we do?" in data and the numbers that are generated by data, they might want to slow down and make sure that the math behind the numbers is really as solid as they've come to believe.