Are your attempts at conversion rate optimization failing? Find out whether you're making one of these six common statistical mistakes, and how to avoid them in the future.
Have you ever tested a landing page, rolled out the winner, and then watched your metrics fall? It happens to most of us despite our best efforts. Why?
Oftentimes it's just a fluke. Shifts in the marketplace can also trigger it. But sometimes it's something else entirely: a statistical mistake.
We've compiled a list of six statistical blunders that CROs, and marketers who do CRO, make all the time. Let's take a look at them so you can avoid them.
1. Not Understanding Statistical Significance
Not long ago, marketers would run split tests without addressing statistical significance at all. Fortunately, because most split tests are now run with tools that quantify it automatically, this mistake is much less common.
When your result reaches 95% statistical confidence, it does not mean you have a 95% probability of having picked a winner. That's a tempting conclusion, but what it actually means is that there's a 5% chance of seeing a result at least this extreme even when the true long-term difference is zero.
That means that even if none of your split tests made any real difference, about 1 in 20 would still return a "95% confident" result!
Even when you reach the 95% confidence level, a statistically significant outcome can be a fluke. Keep track of how many of your results come out positive. If only one test in ten comes back positive, chances are that around half of your positive findings are coincidental. If you have reason to doubt an earlier result, be prepared to test again.
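If you want to see how often pure chance produces a "significant" result, the sketch below simulates A/A tests, where both variants are identical, and counts how many still cross the 95% threshold. The 5% conversion rate, visitor counts, and number of runs are illustrative assumptions, not figures from any real test.

```python
# Minimal sketch: simulate A/A tests (both variants identical) and count how
# often a two-proportion z-test still reports p < 0.05. The 5% conversion
# rate, 2,000 visitors per arm, and 1,000 runs are illustrative assumptions.
import random
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

random.seed(42)
runs, visitors, rate = 1_000, 2_000, 0.05
false_positives = 0
for _ in range(runs):
    conv_a = sum(random.random() < rate for _ in range(visitors))
    conv_b = sum(random.random() < rate for _ in range(visitors))
    if two_proportion_p_value(conv_a, visitors, conv_b, visitors) < 0.05:
        false_positives += 1

print(f"'Significant' results with zero real difference: {false_positives / runs:.1%}")
# Expect roughly 5%, i.e. about 1 test in 20.
```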
2. Being Duped by Bots
Spam bots and ghost referrer spam can trigger the JavaScript in your split-testing tool or Google Analytics, skewing your test results and throwing your site's overall conversion and bounce rates out of whack, among other things.
Bots capable of executing JavaScript can account for a third of traffic on large sites, and an even larger share on smaller ones. How can you rule out bots as a factor in your split tests?
Begin by scanning your referral sources for new entries. Look up reviews of unfamiliar sources to find out whether they're legitimate. Also check the reported pages for URLs that don't exist on your site, and look for suspicious events in your analytics.
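As a rough starting point, the sketch below scans an analytics export for referrers that match the classic bot signature: an unfamiliar domain sending plenty of sessions with a near-total bounce rate and almost no time on site. The CSV layout and column names (referrer, sessions, bounce_rate, avg_duration_s) are hypothetical; adapt them to whatever your analytics tool actually exports.

```python
# Minimal sketch for flagging suspicious referrers in an analytics export.
# The CSV columns (referrer, sessions, bounce_rate, avg_duration_s) are a
# hypothetical example; rename them to match your own export.
import csv

KNOWN_GOOD = {"google.com", "facebook.com", "twitter.com"}  # extend as needed

def suspicious_referrers(path, min_sessions=50):
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["referrer"].lower()
            sessions = int(row["sessions"])
            bounce = float(row["bounce_rate"])       # e.g. 0.98 for 98%
            duration = float(row["avg_duration_s"])  # seconds on site
            # Classic bot / ghost-spam signature: an unknown domain sending
            # many sessions with near-total bounce and near-zero time on site.
            if (domain not in KNOWN_GOOD and sessions >= min_sessions
                    and bounce > 0.95 and duration < 2):
                flagged.append(domain)
    return flagged

if __name__ == "__main__":
    for domain in suspicious_referrers("referrers.csv"):
        print("Review before trusting your test data:", domain)
```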
3. Mistaking Correlation for Causation
Causation produces correlation, but correlation does not necessarily mean that one thing causes another. There may be no cause-and-effect link at all, or the relationship may run in the opposite direction. Most of us know this by now, yet it's all too easy to file it away as theory and then ignore it in practice, often without even realizing it.
You can always create a plan of action and some metrics to track. When a split test isn't possible, you can stagger your changes, roll them out in different regions, and check whether the effects follow a consistent pattern.
4. Confusing Statistical and Practical Significance
Weak statistical confidence does not mean your versions aren't very different from one another; it may just mean you haven't shown them to enough visitors. Strong statistical confidence does not necessarily mean one version is vastly superior to the other; it may just mean you ran the test for a very long time.
Small businesses with minimal traffic should concentrate on major changes that have a big impact on conversion rate, because those are the changes that can be tested quickly. If your versions are so similar that you can't reach a statistically significant result in a reasonable amount of time, you may not be testing boldly enough.
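A quick sample-size estimate makes that trade-off concrete. The sketch below uses the standard two-proportion sample-size formula; the 3% baseline conversion rate and the uplifts are illustrative assumptions, so plug in your own figures.

```python
# Minimal sketch: estimate how many visitors per variant you need before a
# test can plausibly reach significance. Baseline rate and uplifts below are
# illustrative assumptions only.
from math import sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline, uplift, alpha=0.05, power=0.8):
    """Visitors needed per arm to detect a relative uplift (two-sided test)."""
    p1 = baseline
    p2 = baseline * (1 + uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# A small change (5% relative uplift) needs far more traffic than a bold one
# (30% relative uplift) at a 3% baseline conversion rate.
print(sample_size_per_variant(0.03, 0.05))  # on the order of 200,000 per arm
print(sample_size_per_variant(0.03, 0.30))  # on the order of 6,500 per arm
```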
5. Ignoring the Traffic Source
Don't expect results from one traffic source to carry over to another. If you build a landing page for AdWords, don't expect the same results when you move it to Facebook. Don't test a new homepage on paid traffic and then assume organic traffic will respond the same way. That's an excellent way to waste money quickly. Always test a landing page against the type of traffic it will actually receive. There's not much else to say; this flawed thinking comes up all the time.
6. Neglecting Micro-conversions (Or Macro-conversions)
We've seen people test different layouts and give up when they see no difference in conversions, without realizing the change had a strong, significant effect on how far visitors made it down the funnel. We've also seen people test landing pages that increased click-through rates to the basket, only to discover that sales dropped once the new page went live.
It's critical not to let your main KPIs blind you to what's going on around them. Raising the click-through rate on one page can lower sales further on, for example by glossing over a concern customers needed addressed. Other times a change removes one bottleneck without lifting overall sales, which simply reveals where the next bottleneck in the funnel sits.
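One simple safeguard is to report every funnel stage for every variant, not just the final sale. The sketch below does exactly that; the stage names and counts are made-up illustrative numbers, not real data.

```python
# Minimal sketch: compare variants at every funnel stage, not just purchases.
# Stage names and counts are made-up illustrative numbers.
FUNNEL = ["landing_view", "add_to_basket", "checkout_start", "purchase"]

variants = {
    "control":    {"landing_view": 10_000, "add_to_basket": 900,
                   "checkout_start": 450, "purchase": 300},
    "challenger": {"landing_view": 10_000, "add_to_basket": 1_300,
                   "checkout_start": 500, "purchase": 260},
}

for name, counts in variants.items():
    print(name)
    for prev, step in zip(FUNNEL, FUNNEL[1:]):
        rate = counts[step] / counts[prev]
        print(f"  {prev} -> {step}: {rate:.1%}")

# In this made-up example the challenger lifts the micro-conversion
# (add-to-basket rate) while hurting the macro-conversion (purchases);
# watching only one of those numbers would give a misleading picture.
```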
To implement successful CRO strategies, work with a reliable CRO agency. Contact us today!