CRO Minidegree Review — Week 1

Yatin Garg
Feb 14, 2021 · 6 min read

This is the first post in a series of twelve in which I will review CXL Institute's Conversion Rate Optimization Mini-degree, one post every week. In this post, I will cover CRO fundamentals and A/B testing basics.

Background:

For the last 10 years, I have been part of the growth teams at several companies — big and small. Invariably, I have seen that broadening the top of the funnel is, in a big way, a function of the capital deployed. I am certainly not discounting the hustle needed to crack and optimize various growth channels, but I can't deny that without fixing conversion, most of these efforts go to waste.

A classic mistake most of us make is to throw a bunch of stuff at the wall in the hope that something sticks. While some areas like design and UX are recognized as problems at most places, a lot of issues don't even make it onto our list. Not knowing what we don't know is usually the bigger problem.

If you are like me, you might have picked up books on copywriting, UX, analytics and design, and of course read articles and blogs in the hope of finding a solution. However, a piecemeal approach to fixing conversion hardly works. So, a structured approach to fixing conversion becomes a very important gap to fill.

I have been following Peep Laja, founder of CXL Institute, for a couple of years now. Off and on, I have picked up a few golden nuggets from his emails and articles. I applied for their CRO minidegree and recently got accepted into their Scholarship Program. It is a 12-week program and I shall be sharing my learnings here.

So why CRO?

The course starts by establishing the need for a CRO program. For me, it was like preaching to the choir, but one example really stuck with me.

We all agree that redesigning websites or apps is a mammoth effort for any organization. It costs a lot of time and money. Finish Line, a multichannel retailer of shoes and apparel, lost $3 million in sales in a quarter with its redesigned website. Ignoring Conversion Optimization is expensive!

It’s all about Ideas. But which ones?

Conversion optimization ideas can come from anywhere: your own observations, user personas, the competition, front-line people, the industry, Googling, and of course, the HiPPO (highest paid person's opinion). However, conversion research is a good place to start systematically.

No idea is a bad idea, but most ideas won’t help. Knowing what ideas to pick and test makes a real difference. So, how do we bring method to the madness?

Here’s a quick process:

  1. Maintain a list — nothing fancy, a simple spreadsheet can be a good start.
  2. Write hypotheses — make your ideas specific; that's how scientists do it
  3. Rank the ideas — consistency matters more than accuracy
  4. Testing!

While maintaining a list is self-explanatory, to make an idea specific, you write it down as a hypothesis. But what exactly is a hypothesis? Here's a definition I found on the CXL blog:

A hypothesis is a proposed statement made on the basis of limited evidence that can be proved or disproved and is used as a starting point for further investigation.

And here’s a template to write a good one:

If I _____________________________, I expect _________________________ to happen, as measured by ___________

Merely saying “We should make our headline more emotional. It’ll get more people to read” doesn’t mean much on its own. It sounds vague.

So, we write it like this — if I use X word in the headline like this, I expect to increase the number of items per order, as measured by Google Analytics average order value. This makes the idea more specific.

To prioritize which hypotheses to test, several frameworks can be used. A simple framework suggested by Conversion Sciences is as follows:

Framework for ranking hypotheses to prioritize testing

We want high-impact, low-effort ideas to be prioritized. This framework lets you rank ideas on the famous ICE score (as suggested by Sean Ellis) — Impact, Confidence (or proof) & Ease of implementation.

To reduce subjectivity in this exercise, each metric is scored on a scale of one to five. The team at Conversion Sciences writes down assumptions that define what a one on the impact scale looks like versus a five. Having done this over time, they also have assumptions and guidelines for how they score a hypothesis on the confidence scale — a one versus a five — in terms of how much proof they have for that idea.

If a hypothesis has no proof behind it, it's a one. If they've run an A/B test on something similar, it might get a confidence or proof score of five. Remember, accuracy isn't important, but applying the same scale to all competing ideas is crucial for this process to be useful.
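To make this concrete, here is a minimal sketch of how such a spreadsheet could be scored in code. The hypotheses and the 1–5 scores below are made-up illustrations, not values from the course:

```python
# A minimal sketch of ICE-style prioritization, assuming impact, confidence
# and ease have already been scored on a 1-5 scale. All data here is made up.

ideas = [
    # (hypothesis, impact, confidence, ease)
    ("Emotional headline on product page", 4, 2, 5),
    ("Shorter checkout form",              5, 4, 2),
    ("Add trust badges near the CTA",      3, 3, 4),
]

# Average the three scores; sort so high-impact, well-evidenced,
# easy-to-build ideas rise to the top of the testing queue.
ranked = sorted(ideas, key=lambda row: sum(row[1:]) / 3, reverse=True)

for hypothesis, impact, confidence, ease in ranked:
    ice = (impact + confidence + ease) / 3
    print(f"{ice:.1f}  {hypothesis}")
```

Whether you average or multiply the scores matters far less than scoring every competing idea on the same scale.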

You can also build your own framework, or use the one CXL often uses: the PXL Prioritization Framework.

Once you have zeroed in on a hypothesis, it's time to test!

But, before we start with any sort of testing, understanding the statistics needed to do testing right is extremely important. I will be sharing a detailed guide on the statistics of A/B testing in another post. For now, let us understand the basics.

Bad testing is worse than no testing at all. It wastes time & effort and is prone to false positives and imaginary lifts.

The purpose of A/B testing is not merely knowing whether A or B wins. The idea is to understand which version does better when exposed to the entire audience. However, we have seen many experiments that seemed like winners during the testing phase but failed to produce results once implemented, even after reaching statistical significance.

Two major reasons:

  1. The sample size was probably not right.
  2. Variance played a critical role.

When 1,000 A/A tests (traffic was split between two exactly identical pages) were run, here is what happened (a small simulation sketch follows the numbers below):

  • 771 experiments out of 1,000 reached 90% significance at some point.
  • 531 experiments out of 1,000 reached 95% significance at some point.
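To see why so many A/A tests "reach significance at some point", here is a small simulation sketch (not the original study). Both arms share the same true conversion rate, and we peek at a running two-proportion z-test after every batch of visitors. The conversion rate, batch size and number of simulated tests are my own assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

TRUE_RATE = 0.05        # both arms convert identically (an A/A test)
BATCH = 200             # visitors per arm between "peeks"
PEEKS = 50              # how many times we look at the running test
N_TESTS = 200           # simulated A/A tests (the study ran 1,000)

def reaches_significance(alpha):
    """Return True if the running z-test ever crosses the threshold."""
    a_conv = a_n = b_conv = b_n = 0
    for _ in range(PEEKS):
        a_conv += rng.binomial(BATCH, TRUE_RATE); a_n += BATCH
        b_conv += rng.binomial(BATCH, TRUE_RATE); b_n += BATCH
        # two-proportion z-test on the running totals
        p_pool = (a_conv + b_conv) / (a_n + b_n)
        se = np.sqrt(p_pool * (1 - p_pool) * (1 / a_n + 1 / b_n))
        if se == 0:
            continue
        z = (a_conv / a_n - b_conv / b_n) / se
        p_value = 2 * (1 - stats.norm.cdf(abs(z)))
        if p_value < alpha:
            return True     # a "winner" declared even though A == B
    return False

hits_90 = sum(reaches_significance(0.10) for _ in range(N_TESTS))
hits_95 = sum(reaches_significance(0.05) for _ in range(N_TESTS))
print(f"{hits_90}/{N_TESTS} A/A tests hit 90% significance at some point")
print(f"{hits_95}/{N_TESTS} A/A tests hit 95% significance at some point")
```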

So, the answer lies in predetermining a sample size and running the test for a sufficiently long period.

Check out CXL's awesome tool for predetermining the right sample size. There are many others as well.
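If you prefer to compute it yourself, here is a minimal sketch using statsmodels' power analysis. The baseline rate, minimum detectable lift, significance level and power are illustrative assumptions, not values prescribed by the course:

```python
# Minimal sample-size sketch using statsmodels power analysis.
# Baseline rate, expected lift, alpha and power are illustrative assumptions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.15          # current conversion rate of the control page
expected = 0.17          # smallest improved rate we care about detecting
alpha = 0.05             # significance level (95% confidence)
power = 0.80             # probability of detecting the lift if it exists

effect = abs(proportion_effectsize(baseline, expected))
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=alpha, power=power, alternative="two-sided"
)
print(f"~{int(round(n_per_variant)):,} visitors needed per variant")
```

Run the test until each variant has collected at least that many visitors, rather than stopping as soon as the result looks significant.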

To understand variance, see the pictorial representation in this article on A/B testing statistics: https://conversionsciences.com/ab-testing-statistics/

In a perfect world, a sample would be 100% representative of the overall population. However, this rarely happens in real life.

So, the margin of error & confidence interval become important concepts to understand.

In order to compare two pages against each other in an A/B test, we first need to collect data on each page individually. Let's say variant A (the original page) converts at an average of 15%, plus or minus 3%, across a sufficient number of tests. This gives us a range of 12% to 18%. This range is called the confidence interval; the plus or minus 3% is the margin of error.

We run these tests enough times to achieve a pre-decided confidence level (95% is usually taken as the standard for most CRO activities). This means that if we measured variant A's conversion rate 20 times, 19 of those times it would fall in the 12% to 18% range. The same process is repeated for variant B (the new page).
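As a quick illustration of these two concepts, here is a small sketch that roughly reproduces the 15% ± 3% example using the normal approximation; the visitor and conversion counts are made up:

```python
# Sketch of a 95% confidence interval for a measured conversion rate,
# using the normal approximation. The numbers are illustrative assumptions.
import math

conversions = 83
visitors = 550
z = 1.96                                  # z-score for 95% confidence

rate = conversions / visitors             # roughly 15%
margin = z * math.sqrt(rate * (1 - rate) / visitors)

print(f"conversion rate: {rate:.1%}")
print(f"margin of error: ±{margin:.1%}")
print(f"confidence interval: {rate - margin:.1%} to {rate + margin:.1%}")
```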

By default, we assume that variant B won't be an improvement over variant A (our original page) — this is called the null hypothesis.

Once we have our conversion rates for both the variants we are testing against each other, we use statistical hypothesis testing to compare these variants and determine whether the difference is statistically significant.

So, how do we decide?

  1. If the confidence intervals of the two variants overlap, keep testing.
  2. If variant B seems to be converting better, we need to ascertain whether the difference is big enough, statistically, for us to conclude anything.

Statistical significance is attained when the p-value is less than the significance level.
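Just to make the mechanics concrete, here is a sketch of that comparison as a two-proportion z-test; the traffic and conversion numbers are made-up assumptions, and this library call is only one of several ways to run such a test:

```python
# Sketch of comparing two variants with a two-proportion z-test.
# The traffic and conversion numbers are illustrative assumptions.
from statsmodels.stats.proportion import proportions_ztest

conversions = [83, 104]      # variant A, variant B
visitors = [550, 560]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)

alpha = 0.05                 # significance level for 95% confidence
if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: the difference is statistically significant")
else:
    print(f"p = {p_value:.3f} >= {alpha}: keep testing, no conclusion yet")
```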

Don’t worry about the meaning of these words right now. We will cover these in detail soon.

For now, all you need to remember is that A/B testing or hypothesis testing needs a rigorous process and correct measurement. It can be influenced by sample size & variance. So, following the right methodology should be the focus.

Thank you for making it to the end. I will link the rest of the posts here soon.

Happy Learning,

Yatin
