What is conversion rate optimization testing? Conversion rate optimization (known as CRO or conversion testing) is the methodology of segmenting traffic and testing different variations of the buying process to identify which presentation of content performs best. Your desired customer action, or goal, could be providing an email address, completing a purchase, creating an account, completing a survey, or following your brand on social media.
Optimization focuses on analyzing the behavior of your ecommerce site visitors and understanding what motivates a particular segment to engage in a specific way with your brand. The ultimate goal of conversion testing is to improve the user experience to persuade visitors to take the desired action on the site or within a specific email or social media campaign.
The Ideal Conversion Testing Process
The process begins with research. During this phase, you’ll dig into two types of data: quantitative and qualitative. Quantitative data are the numbers, e.g., the average number of visitors per day, how long they stay on your site, the number of pages they visit, conversion rate, bounce rate, etc. However, these numbers alone won’t tell you why your visitors behave a certain way. This is where qualitative data comes in.
Qualitative data will explain the reasoning behind specific actions. For example, say you have a high bounce rate on the checkout page. This could be due to a multitude of reasons: the shopper could be abandoning the checkout because they didn’t feel like entering all of their payment information, or they didn’t want to create an account. Information like this will help you determine what kind of changes to make to your design or conversion funnel.
Create a Conversion Roadmap
After you dive into your customer data and figure out their pain points and sources of friction, you can create your conversion roadmap. When creating a roadmap, assess which pages you want to fix and outline the problem areas on each page. Leah Ferguson, Head of Strategy and Analytics at Blue Acorn, suggests prioritizing these pages based on your business goals. For example, is the goal of your conversion testing to increase AOV, return customers, email click-throughs, or purchase completions?
Form a Hypothesis
A hypothesis is an educated guess about which changes to your pages or funnels could produce a desired result: X change will affect Y page for Z reason. For example, removing the phone number field on the checkout page will result in 5 percent more completed purchases because it takes less time to fill out the form. Creating a well-thought-out hypothesis is necessary because it sets the direction of your optimization plan. Once you create your hypotheses, you’ll need to update the design or create new pages based on those statements to run your tests.
Run Your Tests
There are three types of tests you can run: A/B testing, split testing, and multivariate testing. A/B testing is typically the most common method and is used when the page or funnel changes are relatively simple. There are two statistical approaches to A/B testing: the frequentist method and the Bayesian method. Frequentist is the more traditional approach; it’s what you likely learned in an entry-level statistics class in college. The Bayesian method is the approach behind the stats engines of many optimization platforms. In most cases, it can reach statistical significance and develop actionable results in half the time of the frequentist method. According to Jared Hellman, Director of Insights and UX at Blue Acorn, Bayesian is the go-to A/B test for brands.
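The Bayesian approach can be sketched in a few lines. This is a minimal illustration, not any platform’s actual stats engine: it models each variation’s unknown conversion rate with a Beta posterior and estimates the probability that the variation truly beats the baseline. The function name and all numbers are hypothetical.

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=100_000, seed=42):
    """Monte Carlo estimate of P(variation B's true conversion rate
    exceeds baseline A's), using Beta(1 + conversions, 1 + non-conversions)
    posteriors for each arm."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        rate_a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        if rate_b > rate_a:
            wins += 1
    return wins / draws

# Hypothetical results: baseline converts 100 of 2,000 visitors (5%),
# the variation converts 130 of 2,000 (6.5%).
print(prob_b_beats_a(100, 2000, 130, 2000))
```

A brand might declare the variation the winner once this probability clears a preset bar (say, 0.90), which is what lets Bayesian tests call results earlier on the same traffic.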
Split testing, also known as split URL testing, is ideal when the design or funnel changes are so complex that it’s easier to test the changes on two different URLs. Multivariate testing is used when there are multiple changes on a single page and you want to test different combinations of those changes. Email marketers will typically use split or multivariate testing to test subject lines, messaging, CTAs, or landing pages with the ultimate goal of increasing conversions. Platforms like Pardot, Marketo, or Salesforce Marketing Cloud make it easy to run email optimization tests and report on the results.
A/B testing, split testing, and multivariate testing can all be used for personalization campaigns, full-page redesigns, promotional content, checkout funnels, and messaging.
Analyze the Results of Conversion Testing
First, determine whether there is a winner. If there is, estimate the deployment cost and timeline, including both development and design hours. You will also want to analyze the test data to see whether there are any other optimization opportunities. If there is no clear winner, put deployment on hold and figure out whether you can refine your hypothesis for more impact.
Sub-segmenting is another approach to consider to find out more about a specific group of customers. By sub-segmenting, you can reveal customer behaviors based on the type of device, spending patterns, style, or date of last purchase. This is an effective way to deliver targeted user experiences under the logic that customers are more complex than your brand’s three or four buyer personas.
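As a rough illustration of sub-segmenting, the sketch below filters a customer list by device, spending pattern, and recency of last purchase. The records, field names, and thresholds are all hypothetical.

```python
from datetime import date, timedelta

# Hypothetical customer records; field names are illustrative.
customers = [
    {"id": 1, "device": "mobile",  "total_spend": 250.0, "last_purchase": date(2024, 11, 2)},
    {"id": 2, "device": "desktop", "total_spend": 40.0,  "last_purchase": date(2024, 5, 19)},
    {"id": 3, "device": "mobile",  "total_spend": 95.0,  "last_purchase": date(2024, 10, 28)},
]

def sub_segment(customers, device, min_spend, purchased_within_days, today):
    """Return the sub-segment matching a device type, a minimum
    spend, and a purchase within the given number of days."""
    cutoff = today - timedelta(days=purchased_within_days)
    return [c for c in customers
            if c["device"] == device
            and c["total_spend"] >= min_spend
            and c["last_purchase"] >= cutoff]

# One sub-segment: recent, higher-spending mobile shoppers.
recent_mobile_spenders = sub_segment(
    customers, device="mobile", min_spend=90.0,
    purchased_within_days=30, today=date(2024, 11, 10))
print([c["id"] for c in recent_mobile_spenders])  # [1, 3]
```

Each sub-segment defined this way can then receive its own targeted experience, rather than one of a handful of broad personas.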
What is “statistical significance” and how much time does it take to achieve a statistically significant outcome?
Optimizely defines statistical significance this way:
“The likelihood that the difference in conversion rates between a given variation and the baseline is not due to random chance…Statistical significance is a way of mathematically proving that a certain statistic is reliable. When you make decisions based on the results of experiments that you’re running, you will want to make sure that a relationship actually exists.”
Let’s say you’re conversion testing, and after the first ten visitors, the variation strongly outperforms the original. Despite appearances, ten visitors is not a broad enough representation of your overall audience. Ferguson says, “You need to reach a point where you have a large enough sample size and number of conversions before confidently declaring a winner.”
Optimization experiments typically run at 90 percent statistical significance, or “the probability at which the winner is chosen by the pure intention of the customers,” says Hellman. This means there’s only a 10 percent chance the variation outperformed the baseline by accident. Anything below 90 percent increases the risk of deploying a “loser” page or funnel.
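On the frequentist side, that 90 percent threshold corresponds to a simple two-proportion z-test. The sketch below is illustrative (the function name and inputs are assumptions, not any platform’s API): it returns the one-sided confidence that the variation’s conversion rate exceeds the baseline’s.

```python
import math

def significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: one-sided confidence that the
    variation (B) converts better than the baseline (A)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical test: 5% baseline vs. 6.5% variation, 2,000 visitors each.
print(significance(100, 2000, 130, 2000))
```

With these made-up numbers the result clears 0.90, so a brand running at a 90 percent threshold could call the variation the winner; with fewer visitors the same observed lift would not.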
Before you run a test, decide how long you need to let it run. Keep in mind, how long or short you choose to run the test could affect your statistical significance. According to Ferguson, “the amount of time it takes to reach statistical significance really depends on your traffic levels and velocity. If you’re testing something high-touch, like the navigation, you won’t see a high impact in a short timeframe. But, if you’re testing the checkout funnel, you should see a high impact on the conversion rates.”
According to Optimizely, it takes fewer visitors to detect large differences in conversion rates, and the higher your baseline conversion rate, the smaller the sample size you need to measure an improvement. The baseline conversion rate is the current conversion rate of the page you’re testing. Optimizely provides two equations to help you figure out how many days to run an experiment:
- Sample Size x Number of Variations in Your Experience = Total Number of Visitors You Need
- Total Number of Visitors You Need / Average Number of Visitors Per Day = Estimated Number of Days to Run the Experiment
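Plugging hypothetical numbers into the two equations above makes the arithmetic concrete (the traffic and sample-size figures are assumptions, not benchmarks):

```python
# Illustrative inputs for Optimizely's two equations above.
sample_size_per_variation = 10_000   # assumed output of a sample-size calculator
num_variations = 2                   # baseline plus one variation
avg_visitors_per_day = 1_500         # assumed daily traffic to the tested page

total_visitors_needed = sample_size_per_variation * num_variations
estimated_days = total_visitors_needed / avg_visitors_per_day

print(total_visitors_needed)   # 20000
print(round(estimated_days))   # 13
```

In other words, at this assumed traffic level the experiment would need roughly two weeks before a winner could be called with confidence.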
What technology is needed to perform CRO tests?
“You definitely need an optimization platform,” says Hellman. “Optimizely, Dynamic Yield, and Monetate are all options.” The purpose of most of these platforms is to build the experiences and report on the results. Ferguson adds, “They don’t typically tell you an estimation of how much traffic or how long it will take to reach statistical significance.” Many brands also use Google Analytics as a secondary tool to help with those calculations.
What are the advantages of CRO testing during the holidays?
If you want to maximize your sales during the holidays (who doesn’t?), then CRO testing is one of the best avenues you can take. You can test sub-segments surrounding gift-giving, such as the early holiday shoppers, last-minute shoppers, industry-familiar shoppers, and industry-novice shoppers.
Hellman says, “Holiday season is an opportune time for promotional and messaging testing. During this time, you’re typically moving at a faster pace, so you can quickly determine which messaging works best and push it out to the right audience.” However, he cautions, “We do not recommend any significant UX changes during the holiday season due to the higher volume of traffic. You don’t want to risk disrupting the buying process with conversion testing.”
Holiday shoppers tend to have a higher purchase intent, are likely bargain hunting, and are looking for free two-day shipping—this is the prime time to learn more about these differences, as shoppers are on an accelerated conversion path. Some of the more obvious behavioral shifts occur on major shopping days like Black Friday and Cyber Monday.
Should I be concerned about the varying number of visitors to my site?
“No,” says Hellman. “All brands have varying amounts of visitors depending on the seasonality of their products.” Ferguson says that slower periods can actually work in a brand’s favor: “Slow periods are an ideal time to test any big UX changes.”
Every brand benefits from ongoing conversion tests—there is always room for improvement. As customer behaviors and technology evolve, conversion testing takes the guesswork out of keeping up. It enables you to get more value from your visitors, lower acquisition costs, and improve your retention rate.
If you’re looking to improve your existing online shopping experience, learn more about conversion testing, or build a new ecommerce site, don’t hesitate to reach out to Blue Acorn.