Split Testing 101 (With Bacon)

As the split-test analyst here at LeadPages®, I naturally do a lot of split testing. As a person with taste buds and a kitchen, I cook a lot of bacon. That second skill wouldn’t be relevant at most offices . . . but LeadPages® isn’t most offices. And recently, I had the chance to use both these talents at work on one tasty morning.

Let me explain. Once a month, LeadPages® hosts an event known as Breakfast Club, where one of our departments makes breakfast for the entire office up here in Minneapolis. When my department’s turn arrived, I felt it was my duty to go the extra mile and split test 3 different flavors of bacon to determine which one my fellow LeadPagers preferred: original, jalapeño, or maple.

In the process, I didn’t just learn about my coworkers’ taste in smoked meats and make the office smell delicious. The experiment also gave me the chance to think through some principles of sound split testing that you can apply anywhere, whether in your mouth or on the web. Read on for the 3 main lessons I discovered from my bacon split test—or click below to get these insights in infographic form right away.

Lesson 1: Control Your Variables

When you’re split testing, you want to answer a particular question with each test. Here, I didn’t care about how people liked their bacon cooked or what brand they preferred or anything but my three flavors. I didn’t want my results to be skewed by other factors, so I took 3 different actions to control my variables:

  • Start with a consistent brand: Using only one brand of bacon reduced any validity threats based on the quality of the bacon.
  • Make sure the finished product is consistent: Each of the 3 flavors of bacon was cooked for the same amount of time to ensure that the hungry masses were not swayed by different degrees of crispiness.
  • Ensure consistent availability: If not enough people got to sample a certain flavor, that would seriously mess with my results. So I kept equal quantities of each of the 3 flavors of bacon cooking at the same time.

The Split Test Takeaway

Each time you start a new split test, decide what you want to learn. Are you trying to improve your landing-page headline? Considering a switch to a new layout? Testing a new button color? Whatever it is, start by duplicating your original page exactly. Then, on your variant, change only the element you’re testing. (And then, make sure your site doesn’t go down while you’re running the test!)

Lesson 2: Gather Reliable Data

Another key element of split testing: make sure your data is reliable. That means ensuring your sample size is big enough to give you results that apply to your audience in general, and assessing the right kind of data once it comes in. To do that with my bacon split test, I had to:

  • Get lots of unique visitors to participate: I cooked 6 packs of bacon (2 packs of each flavor), but I didn’t serve it all at once. Instead, I set out just-big-enough batches throughout the morning so that everyone could participate but no one could take a mountain of bacon for just themselves. (Not that they would. We really are nice up here in Minnesota.)
  • Make sure your sample is representative: Not everyone works the same schedule at LeadPages®, and schedules can vary by department. To make sure I wasn’t leaving anyone out, I also took “orders” for anyone who’d be late to Breakfast Club so they got an equal chance to choose a flavor.
  • Let the empty plates speak for themselves: During the split test, many people voiced their interest in the jalapeño bacon. If I’d asked people beforehand to vote on what flavor they’d enjoy most, the results probably would have led me to cook mostly jalapeño. However, when it came time to check the data—that is, how many slices of each flavor people actually chose—the jalapeño bacon finished second. The winner: maple.

The Split Test Takeaway

First, you need to keep your split tests running long enough to get enough traffic to see valid results. A healthy benchmark varies based on your current traffic totals, but you’ll want to run your test for at least 1 week and get at least 400 unique visitors. This should give you an accurate representation of how your normal visitors would act when they see your LeadPages® and LeadBoxes™.
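If you'd like a rough sense of how much traffic "enough" is for your own test, a standard two-proportion approximation can help. This is a generic statistics rule of thumb sketched in Python, not a LeadPages® formula, and the 10% baseline opt-in rate below is purely an illustration:

```python
from math import ceil

def sample_size_per_variant(baseline_rate, min_detectable_lift,
                            z_alpha=1.96, z_beta=0.84):
    """Rough visitors needed per variant to detect an absolute lift in
    conversion rate (two-sided alpha = 0.05, 80% power).
    A standard two-proportion approximation, not LeadPages' own math."""
    p = baseline_rate
    delta = min_detectable_lift
    n = 2 * (z_alpha + z_beta) ** 2 * p * (1 - p) / delta ** 2
    return ceil(n)

# Hypothetical example: a 10% baseline opt-in rate, and you want to
# reliably detect a 5-percentage-point improvement:
print(sample_size_per_variant(0.10, 0.05))  # → 565 visitors per variant
```

Notice that detecting a smaller lift takes dramatically more traffic, which is why small tweaks often need longer test runs than big redesigns.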

While you’re conceiving and running the test, try not to be swayed too much by verbal feedback or conventional wisdom. For instance, people might tell you it makes the most sense to emphasize your company in the headline of your landing page. However, test and you might well find that more people opt in when your headline focuses on the product and doesn’t mention your company at all.

Lesson 3: Act on Your Results

Once you’ve collected your data, now you can decide what to do next with your LeadPage®, LeadBox™, or breakfast, such as:

  • Give the people what they want: Make the winning variation your new standard. If they want more maple bacon, buy more maple bacon for the next Breakfast Club!

  • Optimize further with a follow-up test: If I wanted to move closer to my goal of cooking the most crowd-pleasing bacon of all time, I could split test different brands of maple bacon to determine the champion.

  • Too close to call? Consider rerunning the test: Of course, it’s possible that these results were a fluke: there was only a 2-slice margin of victory between maple and jalapeño. If I were being truly scientific, I’d run this test again to make sure this wasn’t just a random variation.
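If you want to be that scientific without rerunning the test first, a quick significance check shows just how flimsy a 2-slice margin is. The slice counts below are made up for illustration (the post only tells us the margin was 2); the test itself is a standard exact binomial test in Python:

```python
from math import comb

def binomial_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: the probability of seeing an
    outcome at least as unlikely as k successes out of n trials,
    assuming the true success rate is p. A textbook method, not
    anything specific to LeadPages."""
    pk = comb(n, k) * p**k * (1 - p)**(n - k)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n + 1)
               if comb(n, i) * p**i * (1 - p)**(n - i) <= pk * (1 + 1e-9))

# Hypothetical counts with a 2-slice margin: maple 41, jalapeño 39.
# Question: if people truly had no preference (p = 0.5), how likely
# is a split at least this lopsided?
p_value = binomial_two_sided_p(41, 80)
# p_value comes out far above 0.05 — a 2-slice margin could easily be chance.
```

In other words, the maple "win" on its own wouldn't convince a statistician, which is exactly why a rerun is worth considering.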

The Split Test Takeaway

When you’re assessing your results, don’t just look at which variant performed better to decide which variant will be your new standard—look at how much better it performed.

The LeadPages® split-testing tool tells you each variant’s probability of outperforming the original. If it’s over 94%, turn off the test and set that page as your new standalone page. Later, you can try improving another element of the page by starting the same process over again. If the probability is 94% or below, try extending the test run or rerunning it at a different time to make sure any change you make is the right one.
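That "probability of outperforming the original" metric can be estimated with a simple Bayesian simulation. The sketch below is a generic approach using Beta posteriors in Python, not LeadPages®' actual calculation, and the opt-in numbers are invented for illustration:

```python
import random

def prob_variant_beats_original(conv_a, n_a, conv_b, n_b,
                                draws=100_000, seed=0):
    """Monte Carlo estimate of P(variant B's true conversion rate is
    higher than original A's), using Beta(1 + conversions,
    1 + non-conversions) posteriors for each page. A common Bayesian
    A/B-testing sketch, not the LeadPages tool's internal method."""
    rng = random.Random(seed)
    wins = sum(
        rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        > rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        for _ in range(draws)
    )
    return wins / draws

# Hypothetical results: original page converts 40 of 400 visitors,
# variant converts 60 of 400.
p = prob_variant_beats_original(40, 400, 60, 400)
# With these made-up numbers, p lands well above the 94% bar,
# so you'd declare the variant your new standard.
```

If p hovered near 50%, the simulation would be telling you the same thing the bacon counts did: too close to call.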

Rerunning the same test is nothing to be ashamed of. In fact, even split tests that end in a blowout win are often repeated to account for any bugs the test may have had. Whether your results are clear, a little murky, or just too delicious not to try again, keep testing to continuously improve your process.

A quick disclaimer: As you may have noticed, my bacon experiment was not a true split test. Here, “traffic” was sent to all 3 variants at once—which would never happen in a well-designed split test on the web. A statistician would probably grimace at my metrics. But using these best practices every time you run a real split test will help you learn more about your visitors and make decisions with more confidence.

So you can refer back to these tips the next time you do a split test, we’ve created a handy (and, thanks to our designer, pretty cute) infographic. It’ll also tell you how to apply these principles inside your LeadPages® account. Click the button below to download the infographic!