Conversion Rate Optimisation has the great benefit of being a highly measurable industry. Unlike some other marketing and digital disciplines, there is a clearer idea of whether a test is a “success” or a “failure”, as the process is designed to be as scientific as possible. A scientific approach inevitably leads to the application of cold, hard maths and an undeniable logic, as long as your test is set up correctly. I’ve written about getting started with CRO here.
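That “cold, hard maths” usually comes down to a significance test on the two conversion rates. As a minimal sketch (not any particular tool’s method), here is a two-proportion z-test using only the Python standard library; the visitor and conversion figures are purely illustrative:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of "no difference"
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative figures: 10,000 visitors per arm
z, p = two_proportion_z_test(400, 10_000, 470, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # "significant" at the usual threshold if p < 0.05
```

The point of the sketch is simply that “success” and “failure” are verdicts on a hypothesis, delivered by the maths, not judgements on the test itself.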
Due to the clear-cut nature of tests, it is easy to assume that a test hypothesis that has been disproven is, therefore, a “failure”. But should we classify a disproven hypothesis as a “failure”? In fact, I’d argue that in Conversion Rate Optimisation, there is no such thing as failure.
It’s called “testing” for a reason.
When tackling the question of failure, as optimisers often say:
If we knew all of the answers already, there’d be no need to run a test whatsoever.
This is an immutable fact. “Testing”, by its very nature, will produce outcomes that cannot be accurately predicted at the outset of the test; otherwise it wouldn’t be testing. Therefore any business that actively participates in Conversion Rate Optimisation should have already accepted that there will be unforeseen outcomes that aren’t necessarily what the company wanted. That doesn’t make it a failure, merely an unavoidable by-product of the process.
Lack of immediate ROI can mask greater value. Conversion Rate Optimisation can be a misleading term as often the “optimisation of conversion rates” is a means to an end, generating additional business value as well as revenue. And while that is certainly a broader topic for another day, it does form a core part of the “evidence” for failure existing in CRO.
The argument goes something like this:
Our goal for our CRO programme is incremental revenue gains.
This test has not shown any incremental improvement in revenue.
Therefore this test is a failure.
However, this is, of course, a very short-sighted approach. If there is statistically valid evidence that the test delivered a better user experience on non-financial metrics, there is likely a long-tail impact on lifetime value and repeat purchase rates, for example. And that’s not to mention social or word-of-mouth referrals that come simply from offering customers a great service.
Matthew Syed’s book ‘Black Box Thinking’ provides an interesting anecdote about the benefits of testing. Jamie Divine, one of Google’s top designers, had come up with a new shade of blue to use on the Google toolbar. He believed it would boost the number of click-throughs.
Google decided to test the theory. As it turned out, the new blue didn’t have any effect on the click-through rate. The test was a ‘failure’. However, Google executives realised this wasn’t conclusive. As Matthew Syed says, “who is to say that this particular shade is better than all other shades?”
Instead of viewing the single test as a failure, Google viewed it as part of a process of elimination. They tested a spectrum of different shades and, through trial and error, found the optimum shade. As of 2010 Google was carrying out 12,000 tests every year, an astonishing number, but one proven to generate huge business value: finding the optimum shade of blue, for example, generated an additional $200 million for Google.

Jamie Divine’s theory might be regarded as a failure because it didn’t deliver the expected result, but the result of the experimentation was an additional $200 million in revenue, which can only be seen as a success.
So it didn’t work for everyone, but did it work for someone?
At the outset of a Conversion Rate Optimisation programme, the majority of tests are shown to all visitors. And this is perfectly natural, as it is the beginning of developing a deeper understanding of visitor types and their various likes and dislikes.
But when a test is served to a wide variety of visitor types, sometimes a positive result for a sub-section of those visitors can be missed unless that data is properly reviewed. For example:
[Figure: test results segmented by audience, showing an uplift for New Visitors only]
A very basic example, yes, but understanding that there are sub-sets of visitors that can react very differently from one another is an essential part of ensuring that the right learnings are taken from any test. So a test that could be deemed a “failure” (no discernible uplift overall) becomes a test that has been proven to be hugely successful for New Visitors. And there can never be such a thing as “failure” when there are clear learnings to be gained.
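To make the idea concrete, here is a minimal sketch of re-reviewing a test’s raw results split by segment. The segment names and all the figures are invented for illustration; the overall uplift is modest, but the per-segment view tells a very different story:

```python
# Hypothetical A/B test results: (conversions, visitors) per arm, per segment
results = {
    "new_visitors":       {"control": (300, 10_000), "variant": (390, 10_000)},
    "returning_visitors": {"control": (500, 10_000), "variant": (460, 10_000)},
}

def conversion_rate(conversions, visitors):
    return conversions / visitors

for segment, arms in results.items():
    control_cr = conversion_rate(*arms["control"])
    variant_cr = conversion_rate(*arms["variant"])
    relative_uplift = (variant_cr - control_cr) / control_cr * 100
    print(f"{segment}: control {control_cr:.1%}, "
          f"variant {variant_cr:.1%}, uplift {relative_uplift:+.1f}%")
```

With these made-up numbers, the blended result hides a +30% uplift for new visitors behind a decline for returning visitors, which is exactly the kind of learning a “failed” test can contain.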
Conversion Rate Optimisation Is A Journey, Not A Destination.
There is an idea in some circles that Conversion Rate Optimisation can be “achieved”. Well, in the short term, possibly, but in the medium-to-long term, definitely not. And when it is considered logically, there’s no way to achieve total optimisation ad infinitum.
For example, think about the websites from the year 2000; there’s no way that in today’s economy those websites would convert at the same rate. For a start, they wouldn’t be optimised for small screens, but beyond that, visitors’ expectations of a site’s form and function have continued (and will continue) to increase. So the conversion pattern of an unchanged website looks something like this:
So if a site was “optimised” at point #1 on the graph, by the time we reach point #6 the conversion rate has already started to drop, further optimisation is required, and so on.
And that makes Conversion Rate Optimisation incompatible with “failure”. Failure is finite & absolute, whereas Conversion Rate Optimisation is infinite, continuous & above all, relative.
What If You Didn’t Test It At All?
Quite possibly the strongest refutation of them all. There may be some who dismiss individual tests as “failures”, or even brand the business’s whole Conversion Rate Optimisation programme a “failure”, but the real question here is “What if…?”.
Picture the scene: a meeting of senior managers is winding to a close and, to keep everyone engaged, the CEO suggests a radical change of direction for the company’s online sales channel. And despite most of them thinking it’s a terrible idea, the rest of the attendees are just too tired to argue and therefore acquiesce to the CEO’s demands for a “shake-up” of online. Now there are two ways that this pans out from here:
1. The Director of Online is called in and told to get his development team working on the new direction straight away. He does as he’s told and, when it launches, the company loses £5m per quarter.
2. The Director of Online is called in and told to get his development team working on the new direction straight away. But given that they are a test-and-learn focused organisation, he suggests setting up a test, and when it launches, the company loses almost £400k in two weeks. And then, funnily enough, it gets shut down. The report that is written on that test then states that:
By testing the new variation and finding the Control to be the winner, we have potentially saved £5m/quarter in revenue vs. a straight implementation of the new version.
So when faced with questions about Conversion Rate Optimisation “failures”, it’s often better to think of them as “savings”, or “balanced opportunity costs”. In the above example, £400k was the price of avoiding a potential £5m-per-quarter loss. And that’s a sum that every CEO can get on board with.
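The arithmetic behind that “savings” framing is simple enough to sketch. The figures below mirror the hypothetical scenario above, and the assumption that a bad launch would have run for one full quarter before being reverted is mine:

```python
# Back-of-envelope sketch of the "test first" saving, using the
# hypothetical figures from the scenario above.
full_rollout_loss_per_quarter = 5_000_000  # £ lost per quarter if shipped untested
test_loss = 400_000                        # £ lost during the two-week test
quarters_before_rollback = 1               # assumption: damage runs one quarter before a revert

saving = full_rollout_loss_per_quarter * quarters_before_rollback - test_loss
print(f"Net saving from testing first: £{saving:,}")  # prints £4,600,000
```

The exact numbers are invented, but the shape of the argument holds for any business where reverting a full launch is slower than switching off a test.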
If you are testing correctly, there can be no such thing as failure. If your CRO test disproved your original hypothesis, it’s worth asking yourself a few questions. Why and how did you arrive at the original hypothesis? Was the test correctly set up, with a Control in place? Did the results differ between audiences? What have you learned from the test so far? Remember, testing can save millions in lost revenue as well as generate it.