Hello and welcome to today's HubSpot Academy Master Class on demystifying conversion rate optimization. We'll be getting started in just one minute.
Hello again. My name is Joel Traugott, and I teach all things analytics and optimization here at HubSpot Academy. HubSpot Academy is a worldwide leader in inbound marketing and sales education, offering online training for the digital age. Today we're going to discuss conversion rate optimization, a relatively new field in digital marketing that many marketers are having trouble with. Conversion rate optimization is a process you can use to research, discover, and validate new ways to create great experiences for your users, and drive leads and revenue for your business. Today I'm joined by the powerful Michael Aagaard, Senior Conversion Rate Optimizer at Unbounce, who will be busting the most common myths in conversion rate optimization and telling us some stories.
Michael's been obsessed with conversion optimization since 2008, when he started as a freelance consultant working with varied industries and companies to optimize their digital marketing activities. Since joining Unbounce in 2015, Michael has been traveling the world speaking at conferences and teaching audiences what he has learned. Michael is a true leader in the CRO industry, and we are pleased to bring him to our audience here at HubSpot Academy. So without any further ado, Michael, take it away.
Thank you very much. Just grabbing the screen here. All righty, thank you very much for that very, very kind introduction. I am super happy to be here with you today, Joel. This is kind of the fun session. As you mentioned, we're going to be demystifying some of the CRO myths here, and there are quite a lot of them. As you mentioned, it's a young industry, and we're talking about the evolution of CRO. It's still a very young industry, and I know it's a little bit corny to use this slide, but we're on an evolution here, and I think we're all still trying to learn what's up and down. Nobody has the definitive answer for what CRO is. For me it's been a long journey. I've been at this since 2008, it's been a big part of my life, and I've evolved as a human being together with CRO.
So in 2008, you'll see here ... This is me, by the way, at different stages of looks as well. In 2008 I certainly wasn't a toddler, but I was absolutely a toddler in CRO terms. I call myself Homo Oblivious at this stage. I was fresh out of school without much experience, and I thought I knew everything, and I didn't understand why the people at the first agency I worked at wouldn't just listen to me, because I knew all this stuff, right? Then we got to 2010. By that time I had kind of figured out all the stuff about A/B testing, and I thought that was the solution to everything. Just test everything, any random idea. I thought I was being scientific just because I was testing stuff, which is pretty ridiculous. I can see that now.
Then I had a bunch of humbling experiences and found out that split testing wasn't the key to everything, and that I wasn't scientific just because I was testing random things. I also found out that there was a lot of background stuff I didn't understand — stats and so on. So I kind of reinvented myself. I realized that I certainly didn't know everything, and I had to really pull myself together and start learning this stuff. And then in 2017, I call myself the Cro-Magnon stage. Ha, little pun there. But the point being that I'm not a Homo sapiens in CRO terms yet. I'm not sure I ever will be, and I love that, because one of the things I think is so fantastic about conversion rate optimization is that I do believe I'll keep learning forever. And I think that's a beautiful thing.
So what I'm trying to do with a webinar like this is really to catapult you, the audience, over that huge learning curve I had. Because I did it the hard way, trial and error, and I made so many mistakes. I'm trying to help people completely avoid all those mistakes. And one way of doing that is by busting a couple of myths. So the first one here is that conversion rate optimization is all about optimizing conversion rates. That might sound like a weird point, because it sounds like the point is to optimize conversion rates. Well, it is. But it's not the only thing. A good way of putting it is that you have to be a little bit careful that you don't steer yourself blind on that conversion rate. My friend Peep Laja from ConversionXL puts it very well. A couple of times I've heard him say, "If you want higher conversion rates, just make everything free on your website." No. Conversion rates will go up, but unfortunately you won't be making any money.
So the point being that the conversion rate is not the only thing out there. Really you need to ask questions like ... [inaudible 00:06:09] We can ask the question, "Is my business doing better than it was yesterday?" And the conversion rate doesn't always tell you that. Let's dig into that a little bit more, because you kind of have to understand what a conversion rate is. For example, people will say something like, "Our conversion rate is 2.37% exactly, every day, and it never changes." Well, I would say, "Aha. Interesting. But based on what?" You can base a conversion rate on many different things, and a conversion rate can measure many different things. So if we're talking about the conversion rate for your main goal, that could be selling something on an eCommerce site, for example. I'd say, "What is that conversion rate based on? There's a big difference between sessions and users. Which are you using? Do you know?"
For example, if you're using Google Analytics for everything, you're probably basing it on sessions, because that's a session-based tool. If you're using Kissmetrics, you're probably basing it on users. And so on. But sessions are basically visits, and users are visitors. Two very different things. So obviously your conversion rate is going to be higher if you're using users, and that is also a more accurate metric, I'd say. But what about goals? Which goal is your conversion rate measuring? There's a big difference between a click-through and someone purchasing something that costs $2,000. What vertical are you looking at? [inaudible 00:07:27] Different channels will probably have different conversion rates.
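To make the sessions-versus-users distinction concrete, here's a minimal sketch with made-up numbers (the counts below are hypothetical, not from the talk):

```python
# One user can generate several sessions, so the denominator you
# choose changes the "conversion rate" for the exact same events.
conversions = 300
sessions = 20_000  # visits (what a session-based tool like Google Analytics counts)
users = 12_000     # unique visitors (what a user-based tool counts)

rate_per_session = conversions / sessions
rate_per_user = conversions / users

print(f"per session: {rate_per_session:.2%}")  # 1.50%
print(f"per user:    {rate_per_user:.2%}")     # 2.50%
```

Same site, same conversions, two different "conversion rates" — which is why "based on what?" is always the first question to ask.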
Referral sources, and it continues. Days of the week and days of the month definitely have different conversion rates. I've never, ever seen an analytics account with a uniform conversion rate across weekends and weekdays, and I've seen a lot of Google Analytics accounts. Seasonality, and so on and so on. Device has a massive impact. So sometimes people have a tendency to talk about conversion rate as being one thing, but it's many, many different things. Another example ... Yeah.
This sort of ties into one of the questions from our audience. We hear this a lot: what number should I focus on to get started? If it's not my click-through rate, where do I even begin?
So that's a big topic, I'd say. But pick a metric that's meaningful to you. You can have different KPIs, right? You can have a main KPI — for a SaaS company it would probably be trial starts, for eCommerce the sale, and so on. Then you might have some slightly softer conversion goals you can measure, and for eCommerce that might be adding to basket, for example. Here at Unbounce, one of the things I do look at is how many people visit important pages, how many of them actually start signing up and then drop out, and so on. So there are a lot of different metrics you can use. The main thing is just an exercise, I'd say, of actually going through it and asking, "What means something?"
For us, a SaaS company, I'd say lifetime value is the most important metric, so I use that a lot in split testing. I don't use it as the main KPI right now — that's something where we want to update our algorithm to be able to do, but it takes a little while. But right now I'm always calculating our estimated lifetime value for the variants in split tests. Because there's a big difference between which plan you sign up for. A $499 plan is worth more than a $79 plan, logically. And if someone signs up for an annual account, that's different from a monthly one. So this has a massive impact. For me it doesn't really make sense to put on the blinders and only look at a conversion rate, or count, "This variation got 99 signups and this one got 80," and then treat them the same. It doesn't really make sense. So for us, lifetime value is really important, and I've seen experiments we've run where, on raw numbers, counting signups, the treatment is underperforming. But as soon as we do a sanity check and see how much money it's making, it's ahead.
So obviously in that case, my opinion would be that you should go with the money, not just the raw numbers. And if you're a consultant, another thing is that you need to learn the language the client uses. It took me a while to figure out that if you just go, "Oh, a 53% lift," they don't know what that means. So I started talking about money, but I had clients who weren't used to talking about money — they were used to talking about leads. So you have to find the right language, and the right metrics.
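As a rough sketch of the point about weighting variants by value rather than raw signups — the plan names, per-signup values, and counts below are all made up for illustration, not Unbounce's actual numbers:

```python
# Assumed average revenue per signup for two hypothetical plans:
# a monthly plan kept ~4 months, versus a prepaid annual plan.
est_value = {"monthly": 79 * 4, "annual": 79 * 12}

# Made-up signup mix: the treatment gets fewer signups overall,
# but more of the high-value annual plans.
signups = {
    "control":   {"monthly": 95, "annual": 4},
    "treatment": {"monthly": 60, "annual": 20},
}

totals = {}
for variant, mix in signups.items():
    count = sum(mix.values())
    value = sum(est_value[plan] * n for plan, n in mix.items())
    totals[variant] = (count, value)
    print(f"{variant}: {count} signups, ${value} estimated value")
```

Here the control "wins" 99 signups to 80, but the treatment is worth more money — exactly the kind of result a raw conversion count would get backwards.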
So it's more of a holistic approach?
Ah, yes. Definitely. Definitely. So there are different metrics at different stages. But always keep your eye on the health indicators: are they moving in the right direction? Another one could be, for example, "Our landing page converts at 1.20%." Again I'd say, "Based on what?" What are you basing that conversion rate on? So here's an example from a landing page. We see a conversion rate of 1.20% here, but that's an average. And you'll see that there are a bunch of different channels going into it. This one is not a horrible example — it's just a simple example to show you. We have Google and Bing, and obviously we have a lot more traffic from Google, and a small sample size from Bing, so we might have to be a little bit careful with these.
But there is an indicator. You'll see that the conversion rate is actually over 3% for Bing, and 1.17% for Google. And then we have a trickle of other traffic here — I wouldn't worry too much about that. It could have been bigger numbers, but you'll see that the average conversion rate is based on all of this, right? So when you start unfolding it, that average can be covering up the truth. And furthermore, I'd say, "Well, maybe we should treat these two channels differently then," and so on. So again, the point being: just be aware of what a conversion rate is. And also be aware that they change over time. It's not like you're going to have a static conversion rate of 1.20% forever, and it doesn't mean that if you run an experiment and get a 10% lift, that lift is going to be there forever either. And here's an example.
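A quick sketch of how a blended average can hide channel differences. The session and conversion counts are invented to roughly match the shape of the example on the slide:

```python
# Hypothetical per-channel traffic: Google dominates the volume,
# so the blended average sits close to Google's rate and hides Bing's.
channels = {
    "google": {"sessions": 9500, "conversions": 111},
    "bing":   {"sessions": 450,  "conversions": 14},
}

rates = {name: c["conversions"] / c["sessions"] for name, c in channels.items()}
blended = (sum(c["conversions"] for c in channels.values())
           / sum(c["sessions"] for c in channels.values()))

for name, rate in rates.items():
    print(f"{name}: {rate:.2%}")
print(f"blended average: {blended:.2%}")
```

The blended number lands between the two channel rates but much nearer Google's, which is why "unfolding" the average matters before drawing conclusions.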
This is an example from a website where we're looking at the conversion rates. You'll see that they fluctuate like crazy. There are some patterns here that are recognizable, but the main point is that they fluctuate wildly from day to day. And that ties into the whole statistics background, too: if you're running a test, does it make sense to run one test Monday to Friday and then run another from Saturday to Sunday and compare? I would say no, because those are wildly different days. That goes back to the point of, for example, testing for four weeks, so that you know you're actually running your experiment over periods that are representative of the real world.
So I jumped ahead a little bit before, but the point here is what we talked about with metrics: is my business doing better than it was yesterday? That's the most important thing to look at. And like I said before, if you're only looking at that conversion rate, you could be trading dollars for pennies, which is obviously not a good tactic. I also jumped ahead a little on the point that all conversions are not created equal, and that's very, very important. Here's an example from Unbounce. You'll see it ranges quite a lot, from $79 a month up to $399, annual, monthly, and so on. So there are quite a few things to consider here.
And I'd say the worst thing is when CRO gets really, really stupid — that's when you start optimizing towards the wrong [inaudible 00:13:46], right? Where you think you're optimizing, and you think you're being scientific, and you think you're headed in the right direction, but you're actually doing the opposite. You're killing your business. And that's one of the reasons why understanding statistics and so on is really important, especially if you want to conduct split tests.
Which brings us to the next point, a myth that I hear a lot. People often have the impression that conversion rate optimization and A/B testing are basically the same thing — that it's basically a discipline of running as many tests as possible. I used to think that, and it's kind of a sexy thought. It's awesome: whenever you're in doubt, you split test it. Woohoo. And then you just do that all the time, and everything magically gets better. That's, in my opinion, a very limited understanding, and it's wrong. People sometimes introduce me as an A/B tester, and I actually get offended, because I'm like, "No. I'm not just an A/B tester. That's a small, tiny part of what I do. I'm into conversion optimization." I left out the rate there.
So what I like to compare CRO in 2017 to is where SEO was at in 2008. Again, it's a young industry, and we're all trying to understand this. Take SEO today. It seems, in my experience from working with a lot of different companies and speaking to marketers all the time, that people understand that SEO is complicated and entails a lot of different aspects. There's on-site and off-site stuff, technical stuff, local, international, and so on and so on. All of this goes into SEO. And if someone were to walk up and say, "Well, SEO is just link building, that's all. You know, just build some links," people would be like, "Hm. Really? You don't know what you're talking about, do you?"
But this is what happens after an industry gets well established and a lot of companies start getting into it: they become smarter, and they can actually become critical of it. We're headed there with CRO, but we're not quite there yet. The way I look at real conversion rate optimization is like this, basically. It's a very complicated field, and it involves a hell of a lot of different disciplines, and you kind of have to be multidisciplinary to do it right. I like to compare it to golf — the golf of online marketing. In sports you'll have these incredible athletes who excelled at whatever, basketball and so on, and then when they retire they start playing golf and they get hooked. They never stop, because it's so difficult, and you're also competing against yourself. That's the way I see CRO. And that's why it's beautiful.
But you have to understand that statistics is the backbone of everything. Every time you look at any form of data, there are stats involved. If you don't understand stats, it's really hard for you to be critical of the data you're looking at, and I would say it's basically impossible to do proper split testing if you don't understand stats. The scientific method is extremely important. It's ridiculously important that you understand some web analytics. It's really important to have a bit of business sense so you can actually make wise decisions. And copywriting is insanely important as well. Research, qualitative and quantitative. I'm probably going to say that all of these are insanely important, but yeah, I do mean that. UI design too.
This leads me into a question we get from agency partners a fair amount. They always want to know, "Who on my team should be doing CRO? Is it my PPC person, is it my designer, is it me?" That seems like a sticking point for some.
Yeah, and that's a good question. I think one of the big mistakes agencies make is that they underestimate CRO, and they go, "Hey, who can do this? Well, John, our SEO guy, he's smart. Yeah, he's our new CRO expert. Hey, go out and do some website reviews. We'll only charge $20,000 for them." Or it's something like, "Oh, Joe, who comes in twice a week — our student helper, our intern. He's smart, let's make him our CRO." I think that happens a lot: you try to find someone on your team and make that person magically become the CRO guy. In some cases you can do that, but in many cases I think there's this weird assumption that if you know this other field, then obviously you're going to be good at this too.
Sometimes analytics people are really good at this, because they have an analytical frame of mind. But in some cases, people who are very focused on analytics have a problem translating that into something you can create a human experience from. So I'd say look for certain qualities instead. If you're going to hire a new person, look for someone with a logical, analytical mindset — someone who approaches problems in such a way that they strategically look for a solution. Someone who's curious and passionate. Someone who always asks those extra questions: "Yeah, okay, but could you dig one layer deeper? Okay, so that's your conversion rate. Can you show me your conversion rate across devices? Aha." And so on.
The Gallup StrengthsFinder, I think, is an amazing exercise. I'd say people who have Strategic as their number one strength are great fits for the conversion rate optimization role, because a person with the Strategic theme, in Gallup terms, is someone who is good at jumping into a situation, getting an overview, analyzing it, and then, based on what's in front of them, choosing the right way forward. Basically cutting through the clutter. And that's basically what I spend most of my time doing as a CRO.
So I would say get ahold of someone who shows some of these qualities, and preferably someone who has experience with analytics — and if they also have experience with copywriting and so on, that would be really, really good. It is difficult to find, but if you have someone on your team who shows some of these qualities, that's great. I think because it's new, nobody really understands it, it shows so much potential, and it's mysterious — you know, CRO — some people think it's this quick thing you can learn and then get ahead of everyone. And I'd say if that's your approach, you're headed straight towards failure.
Like keyword stuffing was to SEO in 2005.
Makes sense. Thank you.
And so that's why, when ... The beauty of it, I think, is that it's so complicated, and when people reduce it to just one thing, it just doesn't make sense anymore. And A/B testing gets so much attention. I hosted ConversionXL Live 2017, ConversionXL's annual conference — I was the host this year. So I basically got to see all the speakers, and because I'm a speaker myself, normally I'm so preoccupied with my own talk that I don't get to see them. So I got to see every single speaker. The beautiful thing was the transition: speakers weren't really talking so much about A/B testing anymore, and the ones who were, were talking about how to do it right, and how important it is that you base your A/B testing on proper research and so on.
And that means the industry as a whole has moved in the right direction, and I'd say it's an important step forward. So that's where we're headed. But A/B testing ... Exactly. That was the myth we're on now: that CRO and A/B testing are the same thing. And this is my point: it's not mandatory. People think you have to do split testing, but absolutely not. It's just one tool out of many we have at our disposal. We use it when it's helpful, and we don't use it when it's not. And it's especially not helpful when you don't have enough traffic — when you don't have samples large enough to actually get proper data. Because the point of split testing is not to find confirmation, it's to find information. That's the important thing, and that's why you also have to be disciplined and understand stats when you do A/B testing.
So you'll hear advice like, "Everybody should be split testing," or, "It's safe to say that if you aren't A/B testing, split testing, then you're not getting the full potential out of your online marketing." That's just not true. That's insane advice. I'll show you an example of why. Here's a very common scenario, right? You have a current, baseline conversion rate of 2%, and you want to detect a desired relative lift of 10%. Your target significance level is 95%. For that — and we're talking frequentist testing here — you'd need a total sample of 156,800 users to be able to detect that lift. You get 50 users a day. How long is that test going to take? Drum roll.
About a week.
Yeah. Eight and a half years. So it's safe to say that puts a different perspective on that split test. And I would say this is just not fast enough. If it's going to take you eight and a half years to get a conclusion, that's just crazy. You do not have enough traffic to perform proper split testing. You could try to go for a bigger lift, you could try to get more traffic, whatever. But in the current situation, this is the truth. You can make a business decision and say, "I don't care. I'm going to stop it after a week, after seven days." But then you're missing over 3,000 days of data. That's fine, but then you just have to understand that there are no stats backing what you're saying, right? There's no science behind it.
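The sample-size arithmetic can be sketched like this. It uses one common normal-approximation formula for comparing two proportions, assuming 80% power (the talk doesn't state a power assumption), so it lands close to, but not exactly on, the 156,800 figure from the slide — different calculators make slightly different assumptions:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(p1, relative_lift, alpha=0.05, power=0.80):
    """Users needed per variant to detect a relative lift over baseline p1,
    via the two-proportion normal approximation."""
    p2 = p1 * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided 95% -> ~1.96
    z_power = NormalDist().inv_cdf(power)          # 80% power    -> ~0.84
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

n = sample_size_per_variant(0.02, 0.10)  # 2% baseline, 10% relative lift
total = 2 * n                            # control + treatment
days = total / 50                        # at 50 users per day
print(total, round(days / 365, 1))       # on the order of 160,000 users, 8-9 years
```

Plugging in your own baseline, desired lift, and daily traffic is the same exercise the split testing calculator walks you through.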
So doing an exercise like this and understanding the required sample size changes everything for you, and in a case like this, running the test just doesn't make sense. So I'd say a very, very important part of all this is actually being disciplined enough to do this stuff upfront. Because this is what often happens, right? In week one the test is charging ahead. You have a small sample, which means any little action, any conversion, is going to have a massive impact. And on eCommerce, for example, there's a novelty effect. People come back every week and they're like, "Oh wow, now the button is purple. I'll try clicking it."
So during the first week you might see a big lift, and you're going to start popping the champagne. If you're a consultant, you call up the client: "Dude, we got a huge lift, man. We [inaudible 00:24:04]." And then week two is not quite as impressive. Weeks three and four [inaudible 00:24:11], it's basically flattened out. And this is the entire test duration. This is why it's important to know how long to run the test before you have a sample that's really representative. In this case, it ended up being basically no difference detected. So it would obviously not have been a super smart decision to pop the champagne and stop at week one, because that would have been the opposite of the truth.
That's where it gets really dangerous, because then you start learning from this. You say, "Wow," and you're actually learning the opposite of the truth. And then maybe you start sharing it, you write a blog post, and it becomes part of the collective best practice, and so on. That's when it gets really, really dangerous. You can try the split testing calculator I built here at Unbounce, with test duration and so on. It's a really good exercise: try to play around with baseline conversion rate, desired lift, and average daily visitors [inaudible 00:25:04], and you'll get a good feel for it. Basically, as a rule of thumb: the higher the baseline conversion rate, the higher the desired lift, and the more traffic you get, the more likely you'll be able to pull off split testing — and vice versa.
A quick point of clarification for the listeners: when we're talking about a desired lift of ten, that's a 10% lift relative to the two. Correct?
Yeah, yeah. We're talking relative here, relative lift. And that's a question I get: "How do I know?" [inaudible 00:25:30] One thing is that if you know your daily amount of traffic, you can plug that in and see how big a lift you would need to be able to pull this off in a reasonable timeframe. A reasonable timeframe for me is a month. At Unbounce it doesn't make sense for us to run tests for less than two weeks. We could with our sample size, but that's the duration I need for the test to be somewhat representative. I'd rather run them for a month. So maybe you can get away with running a test with your traffic within a month, but then you might need a 50% lift, not a 10% one.
And 50% is a high lift. It means you can't just fart around with some button copy; you need to do something radical. Another horrible thing about bogus case studies, where people stop tests before they're cooked, is that a lot of them show these insane lifts that just aren't there. They're imaginary lifts. And the problem is that when people share them, the expectation gets set that to be a success with split testing you have to have something like a 100% lift. That very rarely happens as a true lift, simply because people aren't running tests long enough. So this exercise will help you understand that stuff.
Another one I see often, which I just touched on, is that people will show you something like this: 400% increase in conversion. And people go, "Oh my god, it must be true, because there's a green arrow and it's pointing up." Really, this to me is pure evil. And so many people do it. I'm shocked that people still do this, where I'm like, "Hey man, as an audience member you are giving me zero, zero data to actually be critical of what you're showing me, and that makes me feel uncomfortable." I'd say, "Increase in conversion? What conversion are you talking about? Again, is it click-through rate? What is this? What sample is this based on? How long did you run the test?"
When people just show me this, I have to stop listening, because I'm like, "You're not showing me enough here." Let me show you what could be behind these numbers. Here are the donors and donations — this is an example from a charity I was using this sample with. You have your control and your treatment, right? Let's say you have 2,000 in each sample, and you have one donation in the control versus five in the treatment. That is a 400% lift right there. But it's a lift from a conversion rate of 0.05% to 0.25%. That's not a lot more.
So if we changed this and added just a few more conversions to the control, we'd quickly be down to a lift of 30%, and very, very quickly it wouldn't be there at all. So this is not a 400% lift. This is a snapshot in time where, at that moment, there was a 400% difference. That's not the same as saying it's conclusive, right? It's basically like seeing this picture and saying, "This is proof that man can fly." Oh, really? You can only see his feet. Maybe he's jumping, maybe he's hanging onto something. This is not proof at all — it's a snapshot in time. It's the same thing. And this is one of the reasons why I think there's so much misinformation going on, because in academia, if you're doing an experiment, there's peer review. People go through it and they call BS on it if it isn't true.
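To see how fragile that kind of headline lift is, here's the charity example in code, with 2,000 visitors per variant. (The slide's "down to 30%" used slightly different numbers; this sketch shows the same collapse with one concrete choice.)

```python
# Relative lift between two variants with n visitors each.
def relative_lift(control_conversions, treatment_conversions, n=2000):
    cr_control = control_conversions / n
    cr_treatment = treatment_conversions / n
    return (cr_treatment - cr_control) / cr_control

print(f"{relative_lift(1, 5):.0%}")  # 1 vs 5 donations: 400% (0.05% -> 0.25%)
print(f"{relative_lift(4, 5):.0%}")  # a few extra control donations: down to 25%
```

Three extra conversions on the control wipe out almost the entire "400% lift" — a handful of events is nowhere near a representative sample.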
That's not the way it is in marketing. We just publish whatever the hell we want and claim whatever we want. It's a big problem. I have respect for the fact that you can't disclose everything, but when I showcase a case study, I always show the sample size, and I want to know how many conversions are in there. Because you could have a sample of 50,000 with four conversions, and that is still silly. I want to see that. I always show how long we ran the test for — what the duration was. And we show the significance level [inaudible 00:29:56]. With those three numbers, those three stats, people can actually ... I'm not showing them everything, but it's a large step towards letting them be critical of the case I'm showing.
I had a sample of, whatever, 2,000 conversions — could be new trial starts for us — we ran the test for four weeks, and we had a significance level of 99%. That's very different from: I had ten conversions, I ran it for two days, and I had a significance level of 70%. With just that, you're helping people be critical of what you're doing, and that means we can help each other, because then we'll know the truth about the case studies people are presenting. Just a quick thing about stats again. I love this quote from Deborah J. Rumsey: "Real accuracy depends on the quality of the data as well as on the sample size." It doesn't say that accuracy depends on getting a 95% significance level.
These things have to be there first. If the quality of your data is poor, then so are your insights going to be. And again: sample, sample, sample. You need a sample that's large enough to be representative, because you're running a split test to understand how this would perform in the wild. We're trying to mitigate risk, so understanding sample size is very, very important. And just to drive it home: 95% significance alone is not enough to guarantee valid test results; sample size and duration are critical. This gets back to the fact that split testing is more complicated than a lot of tools, vendors, and blog posts make it out to be. It can be very, very helpful. It can also be the opposite — absolutely detrimental to your success. And again, it is not mandatory. It's just a tool you have, and if you don't have enough traffic, then it's simply better to compare periods and not split test.
We published a cool blog post recently, a guest post by Aaron Orendorff on the Unbounce blog. I'm in there, along with Peep Laja and a handful of other CRO people. It's a pretty interesting article — Aaron asked us all to try to explain what makes a true conversion optimizer. Everybody in the post is basically getting at the same points, and I find it quite interesting. I think it would be helpful if you check it out.
So this goes back to what you were talking about: finding skeptics, people who are into process and analytics, digging and exploring.
Yes. In this post there are a lot of different angles on what to look for, and also how to call out people who are hopelessly [inaudible 00:32:43]. One of my pieces of advice there, for example, is that if I interview someone, I want them to display ... I'm trying to figure out, "Do they have a process?" Because as I say, process [inaudible 00:32:54] is so important. So I would probe into that, and I'd also ask them a question like, "So when you run a split test, how do you figure out when to stop? How do you figure out when it's cooked?" And if they go, "95% significance," then I'll say, "Oh, thank you. That will be all for today." Or if they don't mention the word research, for example, in the whole interview, then I'll also know that's a bad sign, because it sounds like they're basing important decisions on their gut feeling. Which, in my experience, is a very bad tactic.
And that brings us to the third myth: it is best to follow best practice. That was the way I worded it, but it kind of revolves around best practice, and what best practice really is. You know, a lot of best practice becomes the tactical stuff, right? You see articles like this one, for example: 50 Split Testing Ideas That You Can Run Today. And that sounds great, but really? You're giving me 50 generic things I can test? I talked about test duration before. Even at Unbounce, where we have a lot of traffic, I run a lot of tests for a month. Well, if I'm going to spend a month on each test to get proper results, does it sound good that I'm going to find a blog post and then go through 50 ideas? 50 random things that worked on eCommerce, for example?
This happens a lot. It's the same thing with button color, it's the same thing with page length and images and all this stuff. And then new trends appear. You're like, "Oh, conversational forms," or ghost buttons or whatever, and everybody just jumps on that. And a lot of this best practice just comes from people talking about it. Or from people running split tests with bad stats and drawing the wrong conclusions. So my point here is that a lot of best practices are actually just worse practice when it comes to tactical stuff like this.
So would I. So Michael, would you agree with this statement then? "Really, the best practice you need is a process that involves research and actually getting into-"
Yes. Exactly. And I'm going to get into that right now, actually, so that's perfect. To set this up, this is one quote by Einstein I love. I use it a lot: "If I had one hour to save the world, I would spend 55 minutes defining the problem and only five minutes finding the solution." To me this is beautiful and so important to CRO. This is what it's all about. Again, going back to the whole testing thing: if you have to run each test for one month, then you have 12 shots a year. So I say you better make god damn sure that every single attempt counts.
So again, it's all about finding the problem, because it's very hard to solve a problem you don't understand. And that's basically what you're doing if you're just applying the 50 best-practice tests, for example. Just randomly going through stuff, blindly hacking away, trying to hit something along the way. So let me give you an example of identifying and understanding a problem before you try to solve it. This is an example from before I joined Unbounce. I was a consultant for many years, and this is a project I had with my former business partner, Craig Sullivan. This is a SaaS company we were working with, and we were doing a website review for them.
So this is basically their sign-up funnel. It's very typical: you have a pricing page, you have a sign-up page, you have the information, you have to check your email, [inaudible 00:36:26], and so on, all the way through. Just off the top of my head I was like, "Oh, the sign-up page is horrible, and we can do so much on the pricing page. Those two. That's where we should start." And that was my gut feeling. And in some respects I've probably honed my gut feeling from having done a lot of this, but on the other [inaudible 00:36:48] as useless. So we did our analytics review, our data-driven review of the website, and we figured out that the biggest drop-off was actually between the confirmation email and people confirming it, which seems insane.
We were like, "Wow. After people have committed and basically signed up, we're losing them at the confirmation email. That's weird." So we went through it and figured out they were using the same email template for everything. So even the first email you get, the confirmation email, has a big green login button that takes a lot of attention. Then there's another link that goes to the homepage, and then there's a text link, which is actually the verification link, right? So what happened was a ton of people [inaudible 00:37:40] click the big green button to log in, because they're going to log in for the first time, and then they get an error message, and the error message says you have to check your email and click the link. And then people do that, and they loop around, and they get angry and take off and go to competitors.
So obviously that's the big problem. What's the solution? Well, I would probably just recommend having one link in there and making it very clear that's the one you have to click to verify your account. And then I would say, "Test this ..." Oh hell no. This is a no-brainer. You just fix it. You're not going to test it for a month. This is obvious. This is one of those [inaudible 00:38:14] fucking just fix it. It's the same thing as if you locked the door to your physical store and wondered why you didn't get any customers, and then you unlocked the door for a week and said, "Oh, strangely enough, I get customers," and then you locked it again.
No no no. Fix this and start working on something more important. What I'm also saying here is that we could have spent a long time testing the pricing page and actually been doing the wrong thing, because we hadn't understood where the problem was. Plus, all that testing on the pricing page would have been fundamentally, well, screwed, because we didn't know there was a hole in the funnel later on. All of our testing there would have been massively influenced by the fact that people were dropping out later, and we wouldn't have been able to control for that, so it would have been a horrible, horrible experience for everybody. So this is one of the reasons why research is so important: understanding these problems before you try to solve them. This is what real CRO is about, I would say. This is one of the most important things.
So this takes us to process. This is basically the process I use, boiled down as simply as I can. You'll often see those little diagrams with four points: research, test, do more validation, rinse and repeat. That's very, very oversimplified. I'll get into the process a little here, but one of my points is that this used to be my entire process: just run as many experiments as possible at a time. Random, ha ha ha. I've changed now. Now this is basically my process, and this is where I spend most of my time: conducting research, forming and validating hypotheses, and then analyzing the data from my experiments. That also means that I try to ... Not try. I do implement everything from [inaudible 00:39:57] into analytics so I can analyze it afterwards.
And another point I have here: when I say conduct experiments, that does not necessarily mean a split test. Because as I said before, it's not mandatory. But you're always experimenting. You're always finding ways of observing: having a hypothesis, testing it through observation, then refining your hypothesis and drawing conclusions. So experimentation is always important, but the experimentation method doesn't have to be split testing. And my point is that everything starts and ends with your users, your customers. Everything. It's digital, it's ones and zeroes, especially [inaudible 00:40:32], but it's a real person who has to ... Whatever you're doing has to have an impact on them, otherwise it just doesn't matter.
So understanding your customers, what goes on in their heads, their various motivations, all that stuff is just the number one most important thing. And you know, it's not Mad Men anymore. I don't think that marketers have to be psychics. I think this is a very interesting way to be working with data. And getting back to the point you made before: yes, I'd say the only real best practice is to have a solid process that helps you get through this. Something you can replicate. Something you can use again and again to achieve good results.
So with mine, it always starts with research. That's the bulk. Then I have a whole step about forming and validating hypotheses based on my research: which hypotheses are valid, which are reasonable, and so on. From there we start creating the treatment. That means wireframing, writing copy, creating the thing we're going to run experiments on. That could be A/B testing; it could be comparing over time. There are many different ways of doing it. Based on that, you're going to get some results back. You're going to analyze them, and you're going to learn a lot. And then from there you're probably going to do a follow-up experiment. You're going to refine what you did. And that's how the process keeps going, over and over. This is how you replicate it.
All right. That was it for my slides. And I do believe we're ready for some questions.
Absolutely, let's get some questions in here. So the first question that comes up is, "Do you have a particular process or methodology?" I guess you sort of shared it, but is there a branded process out there somewhere that you really like or recommend as a good starting place?
Well, yeah. I would say check out ConversionXL and the ConversionXL Institute. Peep Laja has some very, very solid courses. In my opinion he's probably the most legit guy in the business, and Peep and I kind of had the same mentor, Craig Sullivan, whom I mentioned before. So our methodologies are very similar, and it's all about insights. It's a very research-driven approach, so his master classes are amazing.
Okay. Here's another one. "Michael, can you share a funny experiment or a test? Something that will make us chuckle."
Yes, I can. I have many. So I once set up an experiment for an eCommerce client using VWO, and I didn't quite understand what I was doing, so I ended up tweaking the wrong element, the add-to-basket element. It was a porcelain shop, and it basically meant that for a whole weekend, all you could put in the basket was a plate. Even if you tried to put cups in there, you got plates. So yeah, a lot of angry customers and way too many plates. Another good one: I ran an experiment where I tested button copy, and I changed it to "don't click this button." A completely insane experiment. At least I was doing it on one of my own things; I wasn't doing it for a client. The only good thing about the experiment was that it ran long enough to see that there basically wasn't much difference. I was testing it on one of my own ebook landing pages.
And people know who I am, so I think they just thought it was funny, and it seemed they had already made up their minds. So in the first week it was doing better. Small sample and all that. And you know, it would have been funny if I'd stopped it there and started writing blog posts about how you should write "don't click this button." Then four weeks later, I think it ended up tanking and doing slightly worse. Anyway, that's a good example of a completely stupid split test where there's no hypothesis or any reasonable motivation to do it other than ... I don't know. Insanity.
That's fair. Thanks for sharing that. So another one ... Man, it's hard to keep up. We've got so many questions coming in here. What stats software do you use or recommend in your day to day?
I don't use a ton. I do a lot of my stuff manually, to be honest. A lot of it is pretty basic stuff. But there are many different [inaudible 00:45:09] calculators. The one we built at [inaudible 00:45:11] combines a couple of different things. It gives you the test duration, but it also helps you understand the relationship between the different numbers that go into it. So that is something I use all the time, because one of the first questions I have when I'm approaching a problem is, "What kind of tools do we have at our disposal?"
So one of the first things I'll do is try to figure out what our testing bandwidth is, right? And then pretty quickly you can figure out whether testing is even an option or not. And if it isn't, then I'll just say no, I can't, and there's no reason to even consider it. There's also ABTestGuide.com out there, a cool tool. There are some Bayesian and frequentist calculators in there for confidence, significance, [inaudible 00:45:57], and so on. ABTestGuide.com, just look for that. Some of my friends in the Netherlands at [inaudible 00:46:05] created that one. I'd actually say those two tools together are pretty awesome. And then there's a very simple, very ugly website called Math is Fun, and they have a bunch of cool percentage calculators and stuff that I use just to make life easier. It's a little secondary brain.
But most of the frequentist stuff is pretty basic stats. And that's probably my number one recommendation when people ask, "What's your best split testing advice?" I say, "Go back and learn statistics."
Okay. We've got another one coming in here. "What's an alternative to A/B testing? If you're not going to test, do you just go with your gut?" What are your thoughts on that?
Well, I would say comparing periods is the best thing then. The whole point of A/B testing is that you're trying to limit bias, because you're running both variants at the same time, on the same quality of traffic. That eliminates some seasonality. Seasonality is a whole new can of worms, even in the realm of split testing, so let's not talk about that. Basically you're doing the same thing, you'd just be comparing periods. And when comparing periods there's going to be noise, so you'll want to be careful, and you'll want to have samples. The durations you're looking at have to be representative, and they have to be pretty much equal. It doesn't make sense to look at the last six months of data, then change the page for two days, and then based on those two days go, "Oh, it's doing better or worse." It has to be representative.
And then you're also trying to eliminate noise, so you might want to make sure you're looking at the right channel, the right device, the right campaign, and so on. You're trying to eliminate as much noise as possible, and then you're comparing periods. That also means you should be aware of outside things that could have an impact. Like, you're testing Santa Claus on your buttons and wow, you're doing it in December. Maybe there's something there if you compare it to July, and so on. So, some of those reasonable things. But yeah, establish the baseline, and make sure you understand what that baseline is made up of. And then compare, for example, [inaudible 00:48:21] last month. Let's compare it to the same month last year. Let's compare it to a [inaudible 00:48:40] month and a bad month. [inaudible 00:48:42] help you understand a little bit better.
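As a rough, editor-added illustration of this period comparison (the helper and the numbers are invented, not from the talk), you can at least check whether a before/after difference is larger than the built-in statistical noise:

```python
from math import sqrt

def compare_periods(visitors_a, conversions_a, visitors_b, conversions_b):
    """Compare conversion rates of two equal-length periods and report the
    difference relative to its standard error (the unavoidable noise)."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    noise = sqrt(rate_a * (1 - rate_a) / visitors_a
                 + rate_b * (1 - rate_b) / visitors_b)
    return rate_a, rate_b, (rate_b - rate_a) / noise

# Invented numbers: last month (baseline) vs. this month, after a page change.
rate_a, rate_b, z = compare_periods(10_000, 300, 10_000, 360)
print(f"{rate_a:.1%} -> {rate_b:.1%}, difference is {z:.1f} standard errors")
```

A difference above roughly two standard errors would count as significant in a split test, but a period comparison also carries seasonality and traffic-mix changes the formula can't see, so treat it as a sanity check, not proof.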
That makes sense. So rather than running both at the same time, you take three months, then try the next three months, and you're aware there will be some noise in there.
Yes. Something like that. And obviously, if you're not split testing, then I'd say research is the most important thing. It's even more important when you're doing stuff like this. So go back to user testing, go back to interviewing your customers, and do a lot of qualitative research especially. And I'll even do that with split testing. I'm saying: don't think of split testing as a tool for conducting research. Think of it as a way to qualify your hypothesis. You spend a long time doing a lot of awesome research, you put all that together into the best possible treatment, and now you're using split testing as the final [inaudible 00:49:41]. It does its work. I think that's the right way to think of it. Split testing is not research.
That makes sense. So we got another one here. "What are some basic statistics terminology or concepts any CRO needs to know?"
Well, you need to understand the P value, you need to understand ... Well, it depends. Frequentist testing is very different from Bayesian, so let's not talk about Bayesian right now. Let's just stick to frequentist testing, because that's also the most common. So when we're talking frequentist hypothesis testing, you'll have to understand what the null hypothesis is, what the power level is, what significance actually means, what the P value is, sample size, confidence bounds, margin of error. You basically have to understand what all those things mean. And if you do, then you'll have a pretty good idea of the stats that go into A/B testing.
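To put those terms in one place, here is a small editor's sketch (the visitor and conversion counts are invented) that runs a frequentist two-proportion test and labels each concept as it appears:

```python
from math import sqrt
from statistics import NormalDist

# Null hypothesis: variant B converts at the same rate as control A.
visitors_a, conversions_a = 20_000, 600  # control
visitors_b, conversions_b = 20_000, 690  # variant

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b

# Pooled rate, computed as if the null hypothesis were true.
p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
std_err = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / std_err

# P value: the chance of a difference at least this large if the null held.
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

# Significance level (alpha): the p-value threshold committed to *before* testing.
# Power, by contrast, is set up front via sample size, not read off the result.
alpha = 0.05

# Margin of error / confidence bounds around the observed difference.
margin = NormalDist().inv_cdf(0.975) * sqrt(
    p_a * (1 - p_a) / visitors_a + p_b * (1 - p_b) / visitors_b)

print(f"p = {p_value:.4f}, difference = {p_b - p_a:+.4f} +/- {margin:.4f}")
print("reject null" if p_value < alpha else "cannot reject null")
```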
Okay. Can you elaborate a little bit on the research you do? How do you see where it breaks down? Is this primarily through Google Analytics?
Yeah. So there are different ways. The way I like to think about it: there's a problem at hand, and you're trying to understand the problem to be able to solve it. As soon as possible, I'll try to get some user-based insights, some user data. Google Analytics, wherever it's set up, is pretty handy for that. I like to think of GA, Google Analytics, as my little conversion buddy, the geek I have at hand. For example, you're looking at a landing page and go, "What the hell should we do here?" And I go, "Hm, I'll ask Google. Hey, Google, where do people go after this page?"
So then you can, for example, look at your landing pages report. You look at your entrance paths, and you can see the second page, which tells you where people go after a landing page. That's very informative, because then you're like, "Well, everybody's going to the homepage. Wow. I wonder why they're doing that. Don't they trust us? Is that just normal behavior?" Or, "Everybody's going to the customer support page. What does that mean?" Or everybody's going to pricing. "Oh, then maybe we should talk about pricing. It seems important." It tells you a little about [inaudible 00:51:56]
Or I try to break it down by device, and I say, "Hm, okay, so what's most important here?" At Unbounce, for example, we don't see very much action on mobile, because [inaudible 00:52:08] the product is basically Photoshop with built-in CSS, CMS, and A/B testing for creating landing pages. You're not going to do that on mobile. So understanding that is very important, and so on. Basically, the way I think of it, again, is trying to understand the problem. You can use a quantitative approach like analytics to understand the what and the where, what's going on where, and then you need the why. That's when you turn to your qualitative methods.
So then you would, for example, run a feedback poll, or do a session recording. Say you know people are dropping off in your sign-up form and you don't understand why. You set up session recording and start recording them, and you'll see, "Oh, okay, they actually have a problem. The formatting of the country code or whatever is limited to one specific format, and people don't know that, so they [inaudible 00:53:00] Okay, let's fix that." Or you start running feedback polls and you ask, "Why are you bouncing on the ..." Sorry, not bouncing. I'm not talking about a one-page visit. "Why are you exiting the website on the pricing page?" for example. And you could ask them, "Is it because you're just doing research? Is it too expensive for you? Do you need to talk to your team?" Something like that.
So you're constantly trying to dig deeper and understand better. [crosstalk 00:53:30]
I think we've got time for one more question, and I think this is a good one. For people who are leaving this webinar today: if they go back to their desk, close down the webinar, and do one thing to get on the path to being a better CRO, what should they do? I know, I know. Not best practices.
No, no. Yeah, that's a great question. Oh man, there are so many. I'd say actually sit down and maybe try to use the calculator I showed. If you have an assumption that you can [inaudible 00:54:06] split tests, just try to use a calculator and see what the baseline is. Plug in the number of variants, plug in how many visitors, how many users you get per day. Put in all that stuff, then just play around with it and see what it will actually take for you to run a split test. Maybe you'll be pleasantly surprised, and maybe you'll be like, "Wow. Okay. Eight years." So I think that's a very tangible thing to do. But more than that, I would say go back to the Einstein quote and just remember that. This is very broad advice, but A/B testing is a shiny object, and if you can stay a little bit away from it, that is very constructive. Focus on trying to understand the problems first.
So when you approach something, before you go, "Aha, we should test that," you go, "Hm. Should we actually?"
Okay. Okay. I like it. I like it. I think that's about all the questions we have today from the audience. So I'd like to take the opportunity to thank Michael for joining us, as well as the audience for being here live today. One thing for everyone watching to keep an eye out for: we're going to be launching a conversion rate optimization certification in the second half of the year, so people can pick up an easy-to-use, ready-to-go process to actually do some of this stuff.
Michael, do you have any last words? Any advice? Anything for our audience before we sign off?
Don't underestimate CRO. It's a very complicated area. And don't steer yourself blind on the conversion rate aspect. Make sure you're doing stuff that [inaudible 00:55:43] the health of your business. And don't think that CRO is only split testing, because then it's going to be a horrible thing for you. I'm looking forward to you guys' course. I think it's going to be awesome, and it was a pleasure being here today. Thank you very much for having me.
Michael, thank you so much, and we hope to have you back very soon.
I'd love to.
This one's for the audience. If you've enjoyed this master class, you're in luck. On Tuesday at the same time, my colleague Courtney Sembler will welcome bestselling author Whitney Johnson to a master class on leveling up your career and driving corporate innovation through personal disruption. This is going to be a great one. She's got a lot to say, so don't miss it. And I think that's it for us today. Michael, thank you again. Everybody on the line, thank you so much. I hope everyone learned something new.