A question that always seems to come up when working with someone new to web analytics is how you go about analysing/interrogating the data. I find it a hard question to answer, as I see web analytics as more of an art than a science: there is no distinct sequence of steps to get from an executive dashboard to insights to recommended actions. However, I do have a number of approaches I use to drill through the data to identify where the issues and opportunities are.
These approaches overlap, so in explaining one approach I will need to reference the others – hopefully it will all make sense by the end. For ease of reading, this will be split into a series of posts, and I am definitely completing the series this time. For reference, the approaches I will be discussing are:
- Comparing against reference numbers
- Trending the data and looking for patterns
- Categorising and grouping values
What are you analysing?
To make this simpler to explain, I am going to assume that you (the web analyst) have been asked whether the just-completed week was a good or bad week for performance. Your executive team has various dashboards and reports, but they don’t know how to evaluate the overall performance from them.
Comparing against reference numbers
The first step should be to compare this week’s performance against another set of numbers. If the KPIs have improved (whether that means going up or down), then it looks like it was a good week. If they have done the reverse, then the week was not so good and people will want to know why. Any significant difference in performance is an indication that something has changed. This may be due to an external factor or to a change that your company has made; whether or not you have any influence over the factor, it should at least be identified.
I have used the term “significant difference” because there is always a degree of “noise” in performance. At this stage I am not going to qualify what counts as significant beyond saying that approaches for judging this include standard deviations, percentiles, outlier detection and the old-fashioned rule of thumb.
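To make the standard-deviation approach concrete, here is a minimal sketch in Python. The weekly visit figures and the two-standard-deviation threshold are my own illustrative assumptions, not numbers from any real site:

```python
# A rough sketch of judging a "significant difference" using standard
# deviations. All weekly visit counts here are invented for illustration.
from statistics import mean, stdev

def is_significant(history, current, threshold=2.0):
    """Flag the current week if it sits more than `threshold`
    standard deviations away from the historical mean."""
    mu = mean(history)
    sigma = stdev(history)
    z = (current - mu) / sigma
    return abs(z) > threshold, z

# Twelve weeks of (hypothetical) visit counts, then the week in question.
past_weeks = [10200, 9800, 10450, 10100, 9950, 10300,
              10050, 9900, 10250, 10150, 9850, 10400]
flagged, z = is_significant(past_weeks, 11900)
print(flagged, z)
```

The threshold of two standard deviations is just the conventional rule of thumb; percentiles or a simple fixed percentage band would slot into the same structure.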
The previous week
The easiest option for comparison is generally the previous week’s performance. Is this week’s performance up or down against the previous week? A key thing to remember is that, without the impact of an external or internal factor, performance should not vary from week to week (except for random fluctuations). As such, the performance in one period is the best indicator of what performance should be in the next period.
The same period in the previous year
The comparison most commonly requested by senior executives appears to be the same period last year. It is the traditional comparison for bricks-and-mortar retail, it demonstrates whether the business is growing year on year, and it is the number that managers are used to seeing. It is also something that I have argued against using many a time, without success.
Yes, business growth year on year is important. And using a year on year comparison can reduce the impact of seasonality, especially if marketing campaigns are being run regularly at the same time each year. However, in my opinion, the online world is moving at too quick a pace for a year on year comparison to be truly useful in most cases.
There is a general growth in internet usage, websites are constantly changing, online marketing is adjusted for new technologies/opportunities, etc. So while some factors may be similar to the previous year, for most businesses, a year on year comparison is not a true or useful like for like comparison. Remember, web analytics is not about reporting, it is about insights. And you need a better method of comparison to get those insights.
Benchmarks
The second most common number I will be asked for is a benchmark. What is normal? How are our competitors performing? Which basically means, like all other comparisons: how are we performing and are we doing a good job?
Again, these are numbers that I recommend not using for a variety of reasons. Key reasons include:
- If your competitors are not performing well themselves, being the best of a bad bunch does not mean you are performing well
- It is usually incredibly difficult to get hold of these numbers
- Most websites, even in the same sector, are actually quite different, meaning you are again not making a true like for like comparison.
Having said all that, internal benchmarks are potentially more useful. If you know what the bounce rate for a particular page type is on your website, say an article page, you can use that internal benchmark to evaluate the performance of new article pages.
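As a sketch of how an internal benchmark might be applied in practice: the page paths, bounce rates and the 10% tolerance below are all hypothetical, chosen purely to illustrate the comparison.

```python
# A minimal sketch of evaluating new pages against an internal benchmark.
# The benchmark, page paths and bounce rates are hypothetical.
ARTICLE_BOUNCE_BENCHMARK = 0.55  # historical average for article pages

new_article_pages = {
    "/articles/spring-launch": 0.48,
    "/articles/how-to-guide": 0.62,
    "/articles/press-release": 0.71,
}

# Flag pages bouncing noticeably worse than the benchmark
# (here, more than 10% above it -- an arbitrary rule of thumb).
flagged_pages = {page: rate for page, rate in new_article_pages.items()
                 if rate > ARTICLE_BOUNCE_BENCHMARK * 1.10}
print(flagged_pages)
```

The same pattern works for any page-type metric where you have enough history to trust the internal average.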
Targets
I see targets as the improved version of benchmarks. They are the numbers your business should be trying to achieve given current performance, the type of website and the type of industry. Targets can be derived from benchmarks but are then modified, based on experience, to what is possible to achieve. Over time, targets may be moved as performance improves and existing targets are met. There is a risk of management creating stretch or unrealistic targets but, returning to the purpose of this exercise, targets of this kind are not useful for understanding the performance of a website.
On a quick side note, there is actually a second type of target: targets for an optimised website. These targets can deliver real business value by quantifying the gap between existing performance and what could be achieved. The potential value of business changes (to marketing spend, the website or business strategy) can then be calculated, with business cases developed from these calculations and alternative options compared on the value they would add. Through this approach, businesses can focus their attention and resources on the changes that will deliver the highest ROI, not on what seems most obvious/fun/easy. More on this sort of target and how it can be used in future posts.
Forecasts
I have saved the best set of comparison numbers till last – in my opinion, the best way to understand the performance of your company’s website is to compare the weekly performance numbers against what the numbers were forecast to be. This does of course depend on the availability of an accurate forecast, but let’s assume that this is the case (I will provide information, examples and templates for forecasting website performance in future posts).
A good forecast takes into account historical performance (improving on year on year comparisons) and current performance (improving on previous week comparisons), and adds in all known factors such as changes to marketing spend, business strategy or the website. If done properly, a significant difference between actual and forecast performance is due either to a factor not being given the appropriate weighting or to unknown factors that need to be investigated. Ideally, forecasts should cover more than just the key two or three metrics (visits, orders, revenue) and should instead be segmented and applied to all metrics that are being reported on.
This leads to explanations to the executive team such as:
- Traffic levels are 6.4% under forecast this week due to increased competition on key industry paid search terms, which led to Paid Search traffic coming in 40% under forecast. Other traffic sources were on forecast; the bid management tool has been adjusted to allow for a higher CPC and Paid Search traffic forecasts have been downgraded for the next four weeks to compensate.
- The website conversion rate improved by 8.1% last week, from 4.30% to 4.65%, attributed to a better than expected performance from the new product ranges released last week. The Product Page Success Rate for these categories is averaging 18.6% against a forecast and site average of 13.6%, while other stages of the ecommerce funnel are in line with internal benchmarks. We have increased the visibility of these products within the website, the Merchandising team is ordering additional stock based on new forecasts, the Paid Search team is adding related search terms, and we are evaluating the business impact of increasing the price point for new products within these ranges.
As an additional benefit to using forecasts for comparisons, the difference between actual and forecast performance should have a normal distribution, allowing for standard deviations to be used to judge what a “significant difference” is.
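A minimal sketch of that idea, with invented actual and forecast figures: the standard deviation of past residuals (actual minus forecast) defines how large a miss has to be before it is treated as significant.

```python
# Sketch of the forecast-comparison approach: if actual-minus-forecast
# residuals are roughly normally distributed, the standard deviation of
# past residuals gives a significance yardstick. All figures are invented.
from statistics import stdev

actuals   = [10100, 9900, 10600, 10250, 9800, 10400, 10050, 10300]
forecasts = [10000, 10050, 10500, 10200, 9950, 10300, 10100, 10250]

residuals = [a - f for a, f in zip(actuals, forecasts)]
sigma = stdev(residuals)

def significant_miss(actual, forecast, threshold=2.0):
    """True if this week's miss exceeds `threshold` residual std devs."""
    return abs(actual - forecast) > threshold * sigma

print(significant_miss(9400, 10200))
```

In practice you would want more than eight weeks of residual history, and you would check the residuals really are close to normal before leaning on the standard deviation.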
Summary of Comparing against Reference Numbers
The first step to understanding performance is to give numbers context by comparing the current performance against a set of reference numbers. The difference indicates if the performance for that week has been good or bad and can highlight which numbers should be investigated further. There are numerous sets of reference numbers that can be used, some easy to obtain and some more useful.
In an ideal world, performance would be compared against an accurate forecast. Not only does this lead to a much clearer understanding of business performance, but it enables decisions to be made that resolve or minimise issues and take advantage of opportunities – the whole purpose behind web analytics.
Links to other posts in the series