Designing Successful A/B Tests

As part of the Growth UX team at Shopify, our goal is to widen the company’s reach and increase traffic to our site.

We accomplish this through SEO landing pages, paid advertising campaigns, A/B tests, and experimentation. Much of my job is focused on optimizing landing pages for bots, but it’s humans who interact with our product and, ultimately, become paying customers.

Growth designers on our team move quickly and work iteratively, learning everything we can from past experiments, failures included. We use A/B testing as a tool to unlock information needed to improve conversion, deepen engagement, and maximize the positive experiences of our users.

A/B (or “variant”, or “split”) testing means testing two designs against each other and measuring which one is better at achieving a predefined goal. As an example, let’s take a look at the free trial landing page, which has undergone many iterations over the last few years at Shopify.
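To make “measuring which one is better” concrete, here is a minimal sketch of the statistics behind declaring a winner, assuming a standard two-proportion z-test. The function name and sample numbers are hypothetical, not Shopify’s actual tooling:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion counts for variants A and B.

    Returns the z-score and two-sided p-value; a small p-value
    suggests the difference in conversion rate is not just noise.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF (using math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: variant B converts at 5.6% vs. A's 4.8%
z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

At p < 0.05 you would typically call B the winner; with smaller samples the same lift might not reach significance, which is one reason high-traffic pages make better test candidates.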



Analyzing what to work on is arguably the most important step in designing successful A/B tests. The mindset I bring to most projects is that there’s always room for improvement, but we have to decide whether it’s worthwhile for our whole team to invest time and effort in a given marketing initiative. We ask questions like:

  • Where are our conversion rates faltering?

  • Where are we sending a lot of paid traffic?

  • Where are the holes in our funnel?

  • Will we see significant results (good or bad) from this experiment?

  • What data indicates that this may be impactful?

  • Is this a priority for Shopify as a whole or just for our team specifically?


Once we determine a project is worth pursuing, we prioritize our A/B tests based on traffic volume, ad spend, and overall impact. Since our team fields incoming requests from multiple teams within Shopify, it can be easy to get swept up in each team’s sense of urgency. That urgency isn’t always a clear indication of what we should actually work on, so we prioritize using hard data instead of gut feelings and assumptions.
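As an illustration only, a prioritization pass like the one described above could be sketched as a simple scoring function. The weights, names, and numbers below are hypothetical assumptions, not Shopify’s actual model:

```python
# Rank candidate experiments by traffic volume, ad spend, and
# estimated impact -- all values below are made up for illustration.
experiments = [
    {"name": "free trial page", "monthly_visitors": 400_000,
     "monthly_ad_spend": 250_000, "estimated_lift": 0.10},
    {"name": "pricing page", "monthly_visitors": 120_000,
     "monthly_ad_spend": 40_000, "estimated_lift": 0.05},
]

def priority_score(e):
    # More traffic, more spend, and a larger expected lift all raise priority.
    return (e["monthly_visitors"] + e["monthly_ad_spend"]) * e["estimated_lift"]

ranked = sorted(experiments, key=priority_score, reverse=True)
for e in ranked:
    print(e["name"], priority_score(e))
```

The point of a score like this is to let data settle disputes between competing requests, rather than whichever team argues loudest.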


Identifying the problem

What metrics are you trying to improve? Key Performance Indicators (KPIs) are simple metrics that reveal your current state versus where you want to be. A goal like “increase conversion” is too abstract for a company like Shopify; we need quantifiable measurements (numbers, percentages, or dollars). For example, the KPI in our free trial example was conversion rate percentage.

The most recent version of the free trial page was converting okay, but the page was outdated and didn’t represent the brand well. This was the first touchpoint many of our customers had with Shopify, and it was a sad face ☹️. Shopify was sending a significant amount of paid traffic to this page, so it was worth experimenting to see if we could improve it.

Create a hypothesis

We use hypotheses to make educated guesses about why our landing pages aren’t performing well. Most of the time we can propose solutions from data we’ve collected on previous versions. This is why it’s important that hypotheses are testable and measurable.

Hypotheses based on previous failures:

  • The design was providing too many exit points. We were losing traffic to other pages that were not converting.

  • Many of our landing pages had the call to action near the footer. We found that users were having trouble finding the CTA and never reached the end of the page.

The customers we send to the free trial pages are highly motivated individuals. That means they are searching for keywords on Google like “sell flowers online” or “sell toys online.” We know exactly what these users are searching for. When we made them sift through long amounts of text/imagery, we lost them fast.

We then used these insights to inform our design decisions and create a page that both looks great and serves its purpose.



We don’t test the small stuff. This probably goes against what you’ve read about A/B testing. At Shopify, we rarely run smaller-scale tests because they don’t affect our bottom line enough. We don’t care whether red or blue buttons perform better. If you test big, bold changes, you’ll get results more quickly: people either love it or hate it. This is how we reach design improvements fast. Here are some of the changes we made in the last test we ran:

  1. Give users only two options: sign up or leave.

  2. Provide a scalable design that is flexible to other verticals/sales channels.

  3. Keep the CTA above the fold.

  4. Reduce the amount of content on the page.

  5. Improve the visual design.

The result…


Gathering data

The resulting, winning design improved the conversion rate significantly. From the graph below, these changes may not seem like much until we start applying large volumes of users to them. Paid search will keep getting more competitive, and paying more per lead is not a long-term solution. If you can double your conversion rate, you essentially cut your cost per acquisition (CPA) in half.
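The CPA claim is simple arithmetic: with ad spend and traffic held constant (the numbers below are hypothetical), doubling the conversion rate halves the cost per acquisition:

```python
ad_spend = 100_000.0   # hypothetical monthly paid-traffic budget
visitors = 50_000      # hypothetical paid visitors reaching the page

def cost_per_acquisition(conversion_rate):
    signups = visitors * conversion_rate
    return ad_spend / signups

before = cost_per_acquisition(0.04)  # 4% conversion rate
after = cost_per_acquisition(0.08)   # doubled to 8%
print(before, after)  # CPA drops from $50 to $25 per signup
```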

It’s also important to ensure you’re looking at the data from every angle. In past experiments, we had more signups, which looks great on paper, but we actually had lower customer retention over time. An experiment may look successful, but if you dig deeper, the surface data can be misleading. Fortunately, our free trial experiment was very successful, and the page is now one of the best-converting pages on our site.

This slight increase in conversion rate saved Shopify millions of dollars in advertising 📈


Finding a winning landing page definitely feels great, but it doesn’t mean your work is done. Ongoing testing can enable a cycle of improvement and creative experimentation on your team.

We work very closely with other disciplines at Shopify to learn from data, better understand our marketing strategy as a whole, and plan for the long-term success of our experiments. This helps our team produce work that is not only visually pleasing, but also contributes directly to Shopify’s bottom line.

Design · Janna Hagan