In conversion optimization, two things hold true: There are no silver bullets, and there can be no significance without solid traffic.
Every ecommerce business has its own unique goals and challenges, so no tactic or strategy is one-size-fits-all. When developing test ideas for an optimization program, you need not only a custom strategy but also a steady flow of traffic to your site in order to reach statistical significance.
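To make the traffic requirement concrete, here is a rough sketch of how many visitors per variant a two-variant A/B test needs before a lift can reach significance. The numbers and the `sample_size_per_variant` helper are hypothetical illustrations (standard two-proportion normal approximation), not figures from the webinar:

```python
import math
from statistics import NormalDist


def sample_size_per_variant(baseline_rate, relative_lift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a relative lift
    in conversion rate with a two-sided, two-proportion test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = NormalDist().inv_cdf(power)           # value for the desired power
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return math.ceil(n)


# A 2% baseline conversion rate and a 10% relative lift: each variant
# needs tens of thousands of visitors, which is why low-traffic sites
# struggle to reach significance.
print(sample_size_per_variant(0.02, 0.10))
```

Note that the smaller the lift you want to detect, the more traffic you need; halving the detectable lift roughly quadruples the required sample size.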
To watch the webinar recording, complete this form on Magento’s site.
If you have enough traffic and use tailored tactics and strategies, a successful optimization program will allow you to achieve two primary goals:
- Learn more about your customers
- Drive measured KPI (key performance indicator) improvements
During our recent webinar with Magento and Signature Hardware, we took a close look at these two goals, and then supported them with more than a dozen tests.
During the webinar, Signature Hardware’s Director of Ecommerce, Sean Fisher, explained why he focuses on the first goal:
“Much of the testing we do is not only to learn about our customers, but also to understand the differences between what they say and what they do. We see testing as a way of finding out what our customers are trying to tell us but can’t.”
What We Test
According to Blue Acorn CEO Kevin Eichelberger, optimization tests come in many shapes and forms, but among the most common we run for clients are those that focus on user interactions, site interfaces, and the psychological factors that drive your prospects and customers. These tests can focus on anything from your brand voice to your site’s features and usability.
To learn more about what we test and why, see Kevin’s slides from the webinar:
Your Questions Answered
Toward the end of the webinar, we opened the mic to our attendees so that they could ask their pressing questions. Although we were not able to get to all of them during the event, there was one particular question that stood out, so we dug into it.
Vishal asked whether a separate mobile site is recommended over a responsive site, and what approach should be taken when testing a responsive one. For context, during the presentation Kevin discussed how some features on a desktop site simply don’t translate to mobile, yet on a responsive site those features typically still appear on mobile devices.
“When running A/B tests for responsive sites, you need to consider how that test will be executed across the various resolutions and devices. In addition, you need to take into account that oftentimes, the user’s intent may be different on a mobile device than on a computer. Given that, in some cases you may choose to run a single test that applies to all devices, or you may choose to isolate your test to one specific device type. We would look toward user behavioral data (for example, user testing) to help us determine which scenario makes more sense. In any case, if you run a single test across all device types, segment your resulting test data down to the device type as part of your analysis. You may find in some cases that the test performed wildly differently from one device to another.”
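The segmentation step described above can be sketched as follows. The conversion counts and the `two_proportion_z` helper are hypothetical illustrations (a standard two-proportion z-test), not data from the webinar:

```python
import math


def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing variant B's conversion rate to A's.
    Positive z means B converted better; |z| > 1.96 is significant at 95%."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se


# Hypothetical results from a single test, segmented by device type.
results = {
    "desktop": {"conv_a": 300, "n_a": 10000, "conv_b": 345, "n_b": 10000},
    "mobile":  {"conv_a": 150, "n_a": 8000,  "conv_b": 130, "n_b": 8000},
}

for device, r in results.items():
    z = two_proportion_z(**r)
    print(f"{device}: z = {z:.2f}")
```

In this made-up example the variant lifts desktop conversions while hurting mobile ones, which is exactly the kind of divergence that a blended, all-devices analysis would hide.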
Stay tuned for our next webinar on testing with AddShoppers, set for June 30. It will feature our Director of Optimization, Jay Atkinson. Want even more? Sign up for our new twice-monthly newsletter, Actionable Ecommerce.