POWER READ
In this age of e-commerce and web analytics, there is an abundance of data at your disposal to evaluate how consumers are responding to your website, marketing campaigns, or user interface. Naturally, analysing this data is an extremely powerful tool in any business’s arsenal when designing a business or marketing strategy.
However, the sheer abundance of data can be daunting. Where do you even begin to look? What types of analytics should your business be purchasing in order to fortify business performance? Is qualitative data gathered from interviews with focus groups a more reliable reflection of consumer preferences for a promotion you’re running, or should you be looking at click-through rates for the promotion on your website? In an absolute sea of every answer imaginable, it is imperative to ask the right questions.
When we leverage consumer insights, the general framework is as follows:
First, we identify the business objective we want to meet. These objectives could be specific to the goals of your team or department, or could be company-wide objectives, such as improving the e-commerce website’s performance, or improving the return on advertising spending. Having clear objectives sets a boundary for what you’ll consult the data for.
Once you have identified objectives, then you can determine your metrics for evaluating your success. These metrics will likely be based on the consumer data you can collect – an example from my work in e-commerce analytics would be the percentage of users who abandon their cart. Of course, picking the right metrics that can shed light on the objectives is key. We will discuss this in greater detail later.
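To make the cart-abandonment metric concrete, here is a minimal sketch of how it is typically computed. The session counts and variable names are purely illustrative assumptions, not figures from my actual work:

```python
# Sketch: computing a cart-abandonment rate from session counts.
# All numbers here are made up for illustration.
sessions_with_cart = 1200   # sessions where a user added an item to the cart
sessions_purchased = 300    # of those, sessions that completed checkout

abandonment_rate = (sessions_with_cart - sessions_purchased) / sessions_with_cart
print(f"Cart abandonment rate: {abandonment_rate:.0%}")  # 75%
```

The useful property of a metric like this is that it is a ratio, so it can be tracked over time and compared across segments regardless of traffic volume.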
Let’s say your metrics will show that your performance can be improved. How do you convert the data into an actionable step? This is where you analyse the data and generate a hypothesis on why the KPIs are not being met. Doing so requires looking past the superficial KPI figure and looking at the actual data. What is the problem that is hampering consumers?
The last step in this framework is devising a solution that addresses the problem proffered in the hypothesis. Typically, if you have rigorously analysed the data and have a clear and specific hypothesis, the solution to the problem will be obvious.
However, you will often have to try different solutions before you stumble upon the right one. Don’t be afraid of not getting it right the first time. This analytics process is ultimately iterative – there is rarely a point where your solution is the definitive best answer to the problem. Test, learn, iterate. If you analyse the problem using the steps in this analytical framework, you will put yourself in a position to get closer to the “correct” answer, even if it takes a few rounds of iteration.
Now that we have laid out the general flow of how analysing problems using consumer insights should occur, let’s take a deeper look at how each step of this process should be implemented. In doing so, I’ll also cover some of the common pitfalls that occur at the various stages of this process.
The most important step in analysing consumer insights is defining clear business objectives – conversely, failing to do so is the most commonly faced problem. Why is it so important? Because every subsequent action you take stems from the key objective. It is from your objectives that you identify your metrics of success, and from there you analyse the data to identify the problems you’re trying to fix. If your objective is misguided or vague, you may select the wrong metrics, leading you on a wild goose chase with no improvements to show for your analysis.
Let’s say I’m selling a product to consumers through a website. Obviously, I want more people to buy my product – I could make that my objective. Or should my objective be making people buy higher-valued products? Those are not the same, and my success metric would change entirely. If my goal is to have more people buy products, then I would be looking at the conversion rate of consumers who visit my site. If I want people to buy higher-valued products, then I would instead be looking at average order value. Naturally, the solutions to be taken to increase the performance of those metrics would differ.
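The distinction between those two objectives can be made concrete with a small sketch. The visitor, order, and revenue figures below are invented purely to illustrate how the two metrics are derived from the same underlying data:

```python
# Sketch: the two objectives imply two different metrics.
# All figures are illustrative assumptions.
visitors = 10_000
orders = 250
revenue = 31_250.00

conversion_rate = orders / visitors       # "more people buy" objective
average_order_value = revenue / orders    # "higher-valued products" objective

print(f"Conversion rate: {conversion_rate:.1%}")           # 2.5%
print(f"Average order value: ${average_order_value:.2f}")  # $125.00
```

A change that pushes discounted entry-level products might raise the first number while lowering the second – which is exactly why you must decide which metric your objective prioritises before judging success.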
Of course, every business would want both of those objectives, right? That is well and good. But when analysing consumer insights, you can’t simply be looking at whether “it is good for the business.” What exactly does good for the business mean? What metric should I be prioritising? Many times, businesses don’t identify their hypotheses clearly and are happy with an improvement in any KPI, which they can put into the business report and claim success. But if you adopt this “anything goes” mentality to consumer analytics, you could be making decisions that are detrimental to your business without even knowing, simply because you’re looking at the wrong metrics.
An example from my work with Dell is when we introduced a pop-up window on our website reminding visitors to log in if they are registered with Dell. The point of this was twofold: to gather more information on the purchasing habits of our registered users, and to prompt visitors to register if they had not done so. In this case, what metric should I use to evaluate success? If I were targeting existing registered users, I should look at the commonly used “click-through rate” (CTR) as a measure of success. But if my main goal is to increase registrations, then I should instead be looking at how many registrations actually occurred after implementing the pop-up window.
Neither is wrong – it depends on which objective is my priority. It could be both! The point is not that there is a clear answer in every instance, but that being mindful of how the specificity of your objectives changes your metrics, and thus your solutions, is very important.
So at this stage, we have clear objectives and metrics. We’re at the stage where we are looking at the data to identify points of interest to pursue to improve performance in our KPIs.
A key tip I have is to combine both quantitative and qualitative data to get meaningful insights. Many times, especially if you’re working in e-commerce, you can look at data analytics on your website and have no idea where to begin to look for areas of improvement. Quantitative user data, while useful because of its scale and manipulability, can be noisy. It doesn’t really provide a strong direction as to where to look. Sometimes, you can really drive yourself up the wall trying to identify points where your website can improve. You might end up shooting in the dark and implementing certain changes without real impetus to do so.
In these situations, having qualitative insights where the consumer tells you directly what their problems are can be very helpful. These can come in the form of site-based user feedback, or focus group surveys, or social media responses. Qualitative data from consumers tends to be very directed, which can point you in a direction when it comes to quantitative analysis, so you can generate useful insights and hypotheses on areas of improvement.
Sometimes, qualitative data can also point you in directions that the quantitative data does not reflect.
Back in my days working in marketing for a telecom company, I used data when deciding how to market packages for international long-distance calls. At the time, certain countries accounted for the majority of the calls I was managing. Based on the data, I was fairly certain that packages should focus on these countries. However, when I spoke to retailers at trade shows, they mentioned other countries that they felt consumers would like a product catered for.
Now, these other countries did not come up in the data I had access to. But when I came up with a product that included them, it performed very well, increasing revenue and improving our market share in that area, because no other company at the time was offering a package covering those countries.
Another important source of data that you should rely on is also data from other teams within your organisation. Often when we are performing consumer data analysis, we rely on the sources we have easy access to. If I’m looking at improving my company’s website, for instance, my first port of call would be the data generated from the site – click-through rates, page views, and so on.
But if my company is working in e-commerce, chances are we have a social media team, who will be receiving feedback from users directly. Maybe the company also has a product management team that’s doing their own data collection from consumers. While the data is collected by other departments, they may provide insights into what consumers are looking for when they visit our website. If I have access to this information and integrate this intelligence into the analytical framework, my findings and analyses will be far more complete and reliable.
Naturally, access to other departments depends on the organization and how siloed its teams are, so it may not necessarily be easy to obtain. But it’s something to take note of – some data points that are not immediately visible to you may be available within the same organization, and it’s worth exploring whether you can add those tools into your arsenal.
Of course, it is worth noting that when considering data points, you have to take note of the sample sizes. If you rely on a business review a team produced where 80% of the consumers surveyed said they loved the product, but the sample size was only 20 when the customer base is 1 million people, of course you should take that information with a grain of salt.
When the sample size for the data is small, it is better to look at the data from a qualitative perspective – that is to say, instead of enquiring whether the reception was positive or negative, investigate the reasons behind consumers liking or disliking the product.
For data with larger sample sizes, you have the luxury of making the typical quantitative inferences with greater certainty. Delineating the intelligence you receive from your data based on the sample sizes is a useful way to compartmentalise and get the most out of what consumers are telling you across all mediums.
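One rough way to see why the 20-person survey deserves a grain of salt is to estimate the uncertainty around the proportion. The sketch below uses a simple normal-approximation margin of error – an illustrative back-of-the-envelope calculation, not a rigorous statistical test:

```python
import math

# Sketch: rough 95% margin of error for a survey proportion,
# using the normal approximation (illustrative only).
def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

small = margin_of_error(0.80, 20)     # the 20-person survey from above
large = margin_of_error(0.80, 2000)   # the same result at a larger scale

print(f"n=20:   80% ± {small:.0%}")   # roughly ±18%
print(f"n=2000: 80% ± {large:.0%}")   # roughly ±2%
```

With 20 respondents, the “80% loved it” figure could plausibly sit anywhere between roughly 62% and 98% – far too wide to drive a quantitative decision, which is why the qualitative reading of such data is more valuable.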
At this stage, we have a clear business objective, and have identified a metric of success, derived from a wide array of data sources. Now, we have to analyse what the data is telling us about how consumers are responding to our product, in order to come up with solutions that can improve performance in our success metrics.
When we analyse data, we should always endeavour to view the data with the bigger picture in mind. A common problem that arises is called testing myopia. This is when you focus purely on the segment and the metric you have identified, without looking at the larger picture.
Let’s say we are looking at making changes to our e-commerce site, and we want to add product banners to add to the main site, where clicking on it will bring you to a specific product page. Our marketing team has designed two banners with different calls to action, and we are evaluating which banner is performing better.
When looking at the data, we find that for Banner A, 10 people clicked on the banner and 2 people bought the product. For Banner B, 3 people clicked on it and 1 person bought the product.
If we only look at the conversion rate of the banner, it appears that Banner B is more successful than Banner A, as a 33% conversion rate is better than 20%. But, as I’m sure you’d have realised, Banner B led to fewer sales than Banner A.
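The comparison above can be sketched in a few lines, making it easy to see how the “winner” flips depending on which metric you read:

```python
# Sketch: the banner comparison, using the figures from the example above.
banners = {
    "A": {"clicks": 10, "purchases": 2},
    "B": {"clicks": 3, "purchases": 1},
}

for name, b in banners.items():
    conversion = b["purchases"] / b["clicks"]
    print(f"Banner {name}: conversion {conversion:.0%}, sales {b['purchases']}")
# Banner B wins on conversion rate (33% vs 20%),
# but Banner A drives more actual sales (2 vs 1).
```

This is testing myopia in miniature: a single ratio can rank the options one way while the absolute outcome the business cares about ranks them the other.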
Conversion rate is usually a useful metric for evaluating the success of product banners and site improvements. However, in this case, it’s obvious that Banner A is more successful in terms of actually driving sales. What does this mean? Maybe you identified the wrong metric for evaluating the banner’s success. Or maybe the metric is right for the objective, but the objective itself was misidentified, pigeon-holing you into a narrow metric.
It’s common to make these types of mistakes, and you shouldn’t be afraid of them. That is why, yet again: test, learn, iterate. The lesson to take away here is, when testing and learning, be aware of the bigger picture, so you will realise when you are making a mistake and can learn from it.
When analysing data, bias is a big problem that can affect the reliability of your results. Bias typically manifests when you are inclined towards a certain hypothesis or solution, perhaps one that you’ve had familiarity with before. In that case, you only look at the data that confirms your theory, and ignore data that may contradict it. This is commonly known as confirmation bias. Obviously, this can lead to poor data analysis, and the implementation of solutions that will not work.
The challenge of dealing with bias is that it is very hard to spot, and it can plague even the most experienced of analysts. It becomes especially pernicious when an experienced director of your company, who has perhaps spent 15 years working in the field, has a very strong belief that a certain approach is the best way forward, when times have changed and that approach is no longer optimal. Because of their experience, it is easy to take their conviction as pearls of wisdom when, often, it is the work of bias.
It is important to ensure that you, as well as your team, try your collective best to stay neutral and objective, and not favour your own ideas over somebody else’s. Rely on what the data can prove, rather than on hunches or experience.
This is especially important when gathering data directly from consumers. If you’re biased towards a certain solution, you may have the tendency to couch your questions in a leading way that would give you the data that you need to prove your case. Ensure your data collection processes are such that there is a clear protocol of remaining unbiased and asking open-ended questions.
As far as your influence allows, try to implement a team culture that is data driven. If everyone is on board with remaining neutral, the inadvertent biases of several team members can be addressed by other team members who are more neutral.
Analysing consumer data is only meaningful if you know what you want to get out of it. Identify the objectives that are best for your business. De-prioritise other concerns.
Make sure that your analysis is targeting the metrics that will meet your objectives. This comes from asking the right questions with the big picture in mind.
Ensure your data from consumers will enable you to answer your questions and provide enough insights to come up with meaningful solutions. This comes from looking at both quantitative and qualitative sources, with the mindset of trying to understand what the consumer is telling you through the data.