Guidance for e-commerce, marketing and SaaS directors, managers, analysts and specialists.

  • What you know as “A/B testing” should be thought of as “website testing.”

  • Testing results only matter if the data obtained are valid and can be properly interpreted and applied.

  • Website testing takes time and tests often fail, but a failed test is as instructive as a winning one.

You’ve probably read or been told that A/B testing is the key to conversion optimization. Take one version, run it alongside another, see which one folks like better and, voilà! – you go with the winner and watch the conversion counters spin.

 

The thing is, A/B testing isn’t the end-all solution to conversion optimization. It’s also not something you can just jump right into, no matter how many blog posts you’ve read about “5 quick things to start A/B testing today.”

 

Technically, A/B testing is defined as a form of statistical hypothesis testing with two variants. As applied to websites, A/B tests are a type of usability or user acceptance testing.
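
To make the “statistical hypothesis testing” part concrete, here is a minimal sketch of the calculation behind a two-variant test: a pooled two-proportion z-test. The visitor and conversion counts are invented for illustration, and Python with SciPy is just one common way to run the numbers, not a requirement.

```python
from scipy.stats import norm

# Hypothetical counts, purely for illustration.
visitors_a, conversions_a = 10_000, 230   # version A (control)
visitors_b, conversions_b = 10_000, 275   # version B (variant)

rate_a = conversions_a / visitors_a
rate_b = conversions_b / visitors_b

# Pooled two-proportion z-test: is the observed difference larger than chance alone explains?
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
std_err = (pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
z_stat = (rate_b - rate_a) / std_err
p_value = 2 * norm.sf(abs(z_stat))        # two-sided p-value

print(f"A: {rate_a:.2%}   B: {rate_b:.2%}   z = {z_stat:.2f}   p = {p_value:.3f}")
```

A two-sided p-value below your chosen significance level (0.05 is the usual convention) is the bar for saying version B genuinely outperforms A rather than riding on random noise.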

 

In actuality, we should not even be talking simply about “A/B testing.” From a conversion optimization services standpoint, we should be using the phrase “website testing.” There are multiple forms of testing a website should undergo, and A/B testing is only one of them.

 

Here are some of the other forms of website testing:

  • User Acceptance Testing
    • Split tests
    • Multivariate tests
    • Full-factorial tests
  • Performance Testing
    • Stress tests
    • Load tests
    • Scalability tests
  • Component / Functional Testing
    • Database tests
    • Configuration tests
    • Compatibility tests
    • Flow tests

But since you know “A/B testing” and it’s one of the services we offer, you’re probably here to confirm that we excel at A/B testing. We’re tempted to tell you how we are different when it comes to website testing, why we are the best, and so on. But that doesn’t really move the ball forward.


 

Testing for Website Performance

The truest and most succinct definition of A/B website testing we can offer is this: a method of comparing two or more versions of a web page to determine which one performs better.

 

The key factor in our definition is the word performs. We can further define “perform” as producing quantifiable increases – purchase increases, revenue increases, subscription increases, more downloads, more signups … increases in conversions, i.e., whatever it is you want users to do on your website. Sometimes performance is even indicated by increases in click-throughs, though we consider click-throughs a weaker performance indicator.
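
As a quick illustration of what “performs” means in numbers, here is a hedged sketch that turns raw visit, order and revenue counts into a conversion rate and revenue per visitor, plus the relative lift between two pages. The figures are invented for the example.

```python
# Invented numbers, purely for illustration.
control = {"visitors": 8_000, "orders": 176, "revenue": 14_080.00}   # e.g. current checkout page
variant = {"visitors": 8_000, "orders": 208, "revenue": 15_600.00}   # e.g. redesigned checkout page

def metrics(page: dict) -> tuple[float, float]:
    """Conversion rate and revenue per visitor: two ways a page can 'perform'."""
    return page["orders"] / page["visitors"], page["revenue"] / page["visitors"]

cr_a, rpv_a = metrics(control)
cr_b, rpv_b = metrics(variant)

print(f"Conversion rate: {cr_a:.2%} -> {cr_b:.2%}  (lift {(cr_b - cr_a) / cr_a:+.1%})")
print(f"Revenue/visitor: ${rpv_a:.2f} -> ${rpv_b:.2f}  (lift {(rpv_b - rpv_a) / rpv_a:+.1%})")
```

Whichever metric you pick as your performance indicator, the point is that it must be quantifiable and tied directly to what you want users to do.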

 

We are frequently approached by marketing directors or others in similar positions who ask such questions about A/B testing as “what is the best way to test?” or “how can we get double-digit lifts every time we test?” or some other question that grew out of a blog post they read or a web seminar they watched.

 

The reality is, there is no good answer for any of these questions because every situation is different. There is no cookie-cutter answer for A/B testing, or any website testing, for that matter.

 

There is also a misconception that A/B testing is relatively easy and straightforward, since at its most basic it compares two iterations of the same item. But just like anything else we do in life, if we want to see good results, we have to put in the time and effort to make it great.

 

Many companies offer A/B testing as a service, and just as many others have a product that will do A/B testing for you. But at the end of the day, tools cannot and will not increase conversions for you. There are no shortcuts, especially when it comes to something as deeply rooted in statistical analysis as proper A/B testing is.

 

It’s the expertise and scientific mindset needed to read and decipher the data from multiple forms of website testing that will lead to higher conversion rates on your site.

 

A/B Testing Fallacies and Mistakes

Many marketing veterans think “A/B testing” is just a buzz term, but savvy and technically minded marketers today use testing to gain insights directly into online user behavior. Understanding what resonates with your visitors in order to create a better user experience will almost inevitably increase conversion rates in targeted campaigns.

 

However, an often overlooked secret is that anytime you create a test in which your hypothesis fails, it provides equally powerful insight about that user segment. It tells you what not to do (assuming that you have followed a proper methodology when performing your tests). What not to do can be as valuable as, if not more valuable than, a winning test.

 

The more information you can document from testing, the more insight you can obtain about a specific user segment, and the more value you can bring to the organization about your online users as a whole.

A scenario we often see is the desire to build a test ideation strategy based on what a competitor might be testing. When we hear clients talk about this, the first thing that comes to mind is wanting to introduce them to Terry Tate, the office linebacker, especially when they say things like, “Let’s test some button colors like our competitors are doing.”

 

To further the point, here are some questions to ask if someone suggests testing what a competitor might be testing:

  • How did their goals align with the test?
  • What were the results of that test?
  • What initially fueled that test?
  • How difficult was it to implement based on their back-end technology?

 

Some other very common mistakes in A/B testing that we often see include:

  • A/B tests called way too early, before enough data has accumulated (see the sample-size sketch after this list).
  • Tests that are not run for at least two business cycles (the length of a cycle varies by industry).
  • Tests based on an improper hypothesis or unrealistic expectations.
  • Failure to segment users when testing.
  • Not understanding that failed tests can be huge wins in terms of understanding user behavior.
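
The first two mistakes above come down to sample size: a test called early or cut short simply hasn’t seen enough visitors. Below is a rough sketch of the standard normal-approximation estimate for comparing two conversion rates; the 2 percent baseline and 15 percent target lift are placeholder values, not figures from any real test or tool.

```python
from scipy.stats import norm

def visitors_per_variant(baseline_rate: float, min_lift: float,
                         alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed per variant to detect a relative lift of
    `min_lift` over `baseline_rate` at the given significance level and power."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_lift)
    z_alpha = norm.ppf(1 - alpha / 2)     # critical value for the significance level
    z_beta = norm.ppf(power)              # critical value for the desired power
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# Placeholder inputs: a 2% baseline conversion rate and a hoped-for 15% relative lift.
n = visitors_per_variant(baseline_rate=0.02, min_lift=0.15)
print(f"~{n:,} visitors per variant")     # roughly 36,700 in this scenario
```

Dividing that per-variant number by the page’s typical daily traffic gives a floor on how long the test must run, which is why the math and the “two business cycles” rule usually point in the same direction.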

 

Common Myths About Testing Shortcuts

It’s natural to want to get testing out of the way and move on to publishing content that will draw more users and revenue. Unfortunately, proper testing demands time and focused attention.

 

Myth: Marketers’ instincts are better than A/B testing results.

 

Fact: Have you ever met a marketer? Ha, just kidding. Actually, any professional marketer worth their salt can tell you that, sure, they have instincts and ideas based on experience, and they can make a correct call now and then. But they’ll also tell you that they don’t move forward with anything until they’re set up to track and test how it does.

 

Everything starts with an instinct or an educated guess, also known as a “heuristic assumption.” Even if it’s an idea you swipe, you adopt it based on the belief that it will perform for you. But you don’t really know whether it’s performing until you see the numbers, and you don’t know what the numbers mean if you don’t have another set of data that provides the basis for a valid comparison.

 

Myth: A competitor did an A/B test on their “buy button” and got a 30 percent lift with the winner. We can do the same. Heck, we can just do our version of their buy button.

 

Fact: Not only will the winner more than likely fall flat on its face for you, the test that has sold you on this “winner” will probably bomb if you try to replicate it.

 

Consider that you and your competitors draw from the same customer base. Though they all come from the same pool of people, there’s some reason your customers shop with you and some prefer the other guys. Part of it is the difference in your product, and some of it has to do with marketing and your ads, website, etc. But a lot of the difference is due to the customers themselves. It’s the old “different strokes for different folks” maxim; they prefer one or the other because of who they are and/or because of any of the countless things that influence their decisions.

 

You can set up an A/B test to present Button One and Button Two for your customers to choose from, but it must do so in consideration of your customers’ influencing Factors C, D and E, on through to X, Y and Z. You need to test according to a valid customer persona to get results that are illuminating about user experience on your website. You also have to test long enough to obtain enough data to ensure statistical confidence in your results.
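
To show why segmentation and persona matter, here is a hedged sketch of a segmented read-out of a hypothetical Button One vs. Button Two test. The segments and counts are invented, and they are chosen so that the overall winner is not the winner for every group.

```python
# Invented per-segment results for one hypothetical button test.
results = {
    #  segment:            {button: (visitors, conversions)}
    "returning_customers": {"one": (4_000, 168), "two": (4_000, 140)},
    "new_visitors":        {"one": (6_000,  90), "two": (6_000, 132)},
}

def rate(counts: tuple[int, int]) -> float:
    visitors, conversions = counts
    return conversions / visitors

for segment, buttons in results.items():
    r1, r2 = rate(buttons["one"]), rate(buttons["two"])
    winner = "Button One" if r1 > r2 else "Button Two"
    print(f"{segment:20s}  One {r1:.2%}  Two {r2:.2%}  -> {winner}")

# The aggregate view hides the split above: the overall winner is not the winner everywhere.
total_one = (sum(b["one"][0] for b in results.values()),
             sum(b["one"][1] for b in results.values()))
total_two = (sum(b["two"][0] for b in results.values()),
             sum(b["two"][1] for b in results.values()))
print(f"{'overall':20s}  One {rate(total_one):.2%}  Two {rate(total_two):.2%}")
```

Read only the overall line and you would ship Button Two for everyone, including the returning customers it performs worse for; segmented results let you act on that difference instead of averaging it away.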

 

Myth: You should have winning tests at least 50 percent of the time.

 

Fact: If you have wins – tests that show a measurable preference – more than 25 percent of the time, you are a testing rock star. Agencies like to tout that they can get wins 40, even 50 percent of the time. Anyone who has worked in this industry knows that testing wins are a lot like batting averages in baseball. If you have a lifetime batting average of over .300, then you are doing a fantastic job.

 

The problem is that there are dozens, if not scores or hundreds, of factors on a website that affect conversion rates. But not all of them have that much of an effect. The better your planning and ideation sessions before testing, the more likely you are to test factors that relate to your goals and produce a statistically significant outcome. These decisions are easier when they are based on quantitative analysis of existing web analytics data and on experience in testing and website optimization across multiple e-commerce and SaaS fields.

 

A/B Testing Is Far More Complex Than Its Name Implies

 

You can’t just jump into A/B testing without a goal for your campaign and a valid customer persona, and expect to benefit from whatever your test results tell you. For that matter, you’ll need a solid understanding of user segmentation and data substantiation to know what your test results truly mean. Though it’s often described as simply an “either-or” proposition, A/B testing that actually contributes to conversion optimization requires a deeper understanding of statistical analysis than can be gleaned from a landing page, blog post or web seminar.

 

Some other areas of interest to you might be:

 

  1. Conversion Optimization: Tactics for conversion optimization must reach down to a funnel level or to individual pages to adequately focus on fulfilling your site users’ needs.
  2. Landing Page Optimization: The need for landing page optimization never ends as you seek to understand changing customer personas and how to inform, assure and guide site users toward conversions.
  3. Web Analytics: It’s foundational to conversion optimization, but too few understand how strongly the insight gained from web analytics depends on an understanding of statistics, as well as on validating the data through qualitative analysis.
