Conversion rate optimization (CRO) testing is pretty much a no-brainer for ecommerce brands. At least it should be! CRO tests are simple, affordable, and clearly show what works and what doesn’t on your website, helping to increase on-site engagement and, more importantly, sales.
Several CRO tools exist, but we’re partial to Optimizely. It’s simple to get started and requires very little technical or programming knowledge or resources. A simple line of code is all you need to add to your site. You won’t have to speak to your developers again until you have a winning design and need to make permanent changes! (But you should at least say “Howdy” in hallways, they’re good people and your ecommerce company would be nowhere without them!)
While you can get a test started in a few short hours, we recommend putting some good thought into what you want to test. Willy-nilly should not be the approach. Rather, take the time to plan your test based on real data and real problems. Follow these six steps for every CRO test you run.
1. Define Your Goals & Objectives
Most likely, your objective will be to increase conversions; that’s pretty obvious. But deeper than that, you must understand the specific goals and needs of the testing, and how they will be refined and tracked throughout the project lifecycle.
Define all of these items up front before you get started.
2. Set Your Baseline Analysis
Study current desktop and mobile site data, as well as any available user testing and profile data, to build your baseline and identify user engagement and conversion issues.
Let’s say you have a three step checkout process, and you know that there’s a high drop off after step one. What’s happening at step two that is causing folks to bail? Are you asking for too much information? Are you asking for the wrong information? Is the checkout process frustrating on mobile devices? What assumptions do other stakeholders on your team have and what do they hope to accomplish with CRO?
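As a quick sketch of that baseline work, drop-off rates can be computed directly from the step-by-step visitor counts your analytics tool reports. The step names and counts below are made up purely for illustration:

```python
# Sketch: compute drop-off between consecutive checkout funnel steps.
# The step names and visitor counts are hypothetical examples.
funnel = {
    "step 1 (cart)": 10_000,
    "step 2 (details)": 4_200,
    "step 3 (payment)": 3_900,
    "order complete": 3_700,
}

def drop_off_rates(funnel):
    """Return the fraction of visitors lost between each pair of steps."""
    names = list(funnel)
    return {
        f"{prev} -> {cur}": 1 - funnel[cur] / funnel[prev]
        for prev, cur in zip(names, names[1:])
    }

for transition, rate in drop_off_rates(funnel).items():
    print(f"{transition}: {rate:.0%} drop-off")
```

In this made-up funnel, the biggest leak (58%) is between steps one and two, which is exactly the kind of signal worth investigating before you design a test.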
3. Create a Hypothesis
Act like a scientist, minus the lab coat and beakers, and create a hypothesis for your experiment. That simply means defining what you expect to happen, so that you can determine whether or not it does happen. Your expectations should be based on your objectives.
Back to the three-step checkout example: you may believe that asking for a phone number on step two is what’s causing the drop off, since it’s not necessary to complete an order. Your hypothesis would then be that removing the phone number field will increase conversion rates.
4. Develop an Optimization Plan
Utilize insights from your baseline analysis to create A/B and/or multivariate test scenarios.
First, define approximately how long you’ll allow each test scenario to run before drawing any conclusions. It’s important not to get overzealous here. Results can be unsteady during the first week or so of any test, and tend to level out after running for some time. Consider that users behave differently during the week than on weekends, and that other marketing efforts or events may impact results. Make sure you allow enough time to amass statistically significant results. Optimizely provides a Sample Size Tool for determining the appropriate amount of traffic you need, which can help you understand how long each test must run.
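To give a feel for what a sample size tool is doing under the hood, here is a minimal sketch of the standard two-proportion power calculation, using only the Python standard library. The 3% baseline rate and 20% expected lift are assumed numbers for illustration, not yours:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Visitors needed in EACH variant to detect a change from
    conversion rate p1 to p2 at the given significance and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical: 3% baseline conversion, hoping for a 20% relative lift.
n = sample_size_per_variant(0.03, 0.036)
print(n)  # roughly 14,000 visitors per variant
```

Divide the total (both variants combined) by the daily traffic hitting the test page and you have a rough minimum run time. Note how small lifts on small baseline rates demand a lot of traffic, which is why patience matters.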
You’ll also want to determine if the tests will run on desktop, mobile devices, tablets or all devices, and what audience segment will be exposed to the tests.
5. Build Test and Launch
And now the fun begins! This is when you build the campaigns in Optimizely (or your testing tool of choice) and launch. If you’re like us, you’ll get giddy watching the results come in and seeing the different versions compete. But remember, don’t get too excited when one version pulls ahead in the first few days. Patience and pragmatism are key characteristics here. Let the tests run their course for accurate and actionable data.
6. Identify Winning Version
When you’ve reached the conclusion of your test and have statistically significant results, it’s time to crown the winner! If that version was the original, well, your hypothesis was disproved and it’s time to go back to step one. Don’t get discouraged. At least you know that, for example, the phone number field wasn’t the problem. And you get to run more tests!
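If you want a sanity check on “statistically significant” outside your testing tool, the usual yardstick for an A/B test is a two-proportion z-test. A minimal sketch, with hypothetical visitor and conversion counts:

```python
import math
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) comparing conversion rates of A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical results: original converts 300/10,000, variant 380/10,000.
z, p = two_proportion_z_test(300, 10_000, 380, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a p-value below 0.05 => significant
```

Tools like Optimizely run more sophisticated statistics than this, but the idea is the same: the observed lift has to be large relative to the noise you’d expect from chance before you crown a winner.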
If a new version was the winner, you’ll want to make that version the official one. Now’s when you need to re-engage that friendly web developer to implement the changes permanently.
The fun doesn’t have to stop here, though. There are always more tests to run and more improvements to make! Run through these steps often to continue increasing performance and driving more sales. Happy testing!