By Ellie Langan
22 Oct 2025 · 8 Min Read
Introduction.
Let’s kick things off with a clear, no-fluff explanation of what Conversion Optimisation (CO) actually is. CO isn't just about A/B testing button colours or making landing pages prettier.
Yes, we do that too, but that’s just scratching the surface. It’s about understanding how people interact with your website, where they hesitate, where they drop off, and then making smart, evidence-based changes that guide them towards the actions you want them to take.
At its core, CO is the practice of increasing the percentage of users who complete a desired action on your site.
That action could be anything: buying a product, adding something to their basket, signing up for your newsletter, booking a call, downloading a resource, clicking a key link, whatever moves your business forward.
It doesn’t require more traffic; it helps you get more value from the traffic you already have. Even a small lift in conversions, say from 2% to 2.1%, can equal significant revenue growth without spending a single extra penny on ads.
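To make that concrete, here’s a quick back-of-the-envelope sketch with entirely hypothetical traffic and order-value numbers:

```typescript
// Hypothetical numbers: 50,000 monthly visitors, £80 average order value.
const monthlyVisitors = 50_000;
const avgOrderValue = 80; // in £

const monthlyRevenueAt = (conversionRate: number) =>
  monthlyVisitors * conversionRate * avgOrderValue;

const before = monthlyRevenueAt(0.02);  // £80,000 per month
const after = monthlyRevenueAt(0.021);  // £84,000 per month
console.log(`Extra revenue: £${(after - before).toLocaleString()} per month`); // £4,000
```

In this scenario, that 0.1 percentage-point lift is worth £48,000 a year with zero extra ad spend.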
In this blog post, we’ll break down how to think about CO strategically, how to identify what’s actually blocking conversions on your site, and how to make changes that are backed by insight, not guesswork.
Why research is important.
When most people think about improving their website, they jump straight into “solutions”: change the headline, test a new button, redesign a page and launch. The problem is, without research, it’s all just guesswork.
Yeah, you might stumble on a win, but you’ll probably waste time (and money) along the way.

Research and insights are what separate random tinkering from intentional optimisation. They help you actually understand what’s happening with your users, so instead of saying “we think this might work,” you can confidently say “we know this is the issue and here’s how we think we can improve it.”
What tools can you use to gather evidence?
Your users are constantly giving away clues about what is and isn’t working on your site. Every click, every hesitation, every drop-off point is feedback. There are plenty of tools that can help show you what users are doing:
Google Analytics.
The absolute essential, used by over 31 million websites worldwide to measure website traffic, user engagement and performance. It’s the definitive source for quantitative data, telling website owners “what” is happening across their entire site.
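As a taste of how it works: conversions in GA4 are just events you record. Here’s a minimal sketch using gtag.js, assuming the standard GA4 tag is already installed on the page (“sign_up” is one of GA4’s recommended event names; “newsletter_form” is a made-up value for illustration):

```typescript
// Global function provided by the standard GA4 gtag.js snippet.
declare function gtag(...args: unknown[]): void;

// Record a conversion event; "method" is an optional parameter
// describing how the user signed up.
gtag("event", "sign_up", { method: "newsletter_form" });
```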
Microsoft Clarity.
Clarity is Microsoft’s free behaviour analytics tool. It gives you heatmaps, scroll maps, and session recordings so you can see where people click, how far they scroll, and where they get stuck. The downside is that it doesn’t include surveys or polls, so you’re limited to passive observation, and you can’t ask users questions directly.
Hotjar.
Hotjar is one of the most popular tools for understanding user behaviour. Like Clarity, it offers heatmaps and recordings, but it goes a step further with on-page surveys and feedback widgets. That means you get both the what (behavioural data) and a little bit of the why (user feedback). However, Hotjar comes with a cost once you go beyond their basic free plan.
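As a rough illustration of how it connects behaviour to feedback, Hotjar exposes an Events API that can be used to trigger things like surveys at the right moment. A hedged sketch, assuming the Hotjar tracking code is installed (check Hotjar’s docs for current details):

```typescript
// Global function provided by the Hotjar tracking snippet.
declare function hj(...args: unknown[]): void;

// Fire a custom event, e.g. to trigger a "what almost stopped you?"
// survey right after checkout. The event name is our own invention.
hj("event", "checkout_completed");
```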
There are other tools out there too, like FullStory and VWO, each with their own mix of heatmaps, clicks and survey capabilities. The key is knowing which ones give you enough insight without drowning you in data you don’t need.
But these tools tell you “what” is happening, not “why”:

- Analytics platforms (e.g. Google Analytics): These are great for showing you the numbers. You can see how many people visited a page, where they came from, how long they stayed, where they dropped off and what percentage converted.
But analytics can only tell you what happened, not why. You might see that 80% of people left a page after 10 seconds, but you won’t know if it was because the page was confusing, slow, irrelevant, or because they found exactly what they wanted quickly. The “why” is missing.
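Here’s a tiny sketch of the funnel maths an analytics platform surfaces, with entirely made-up step counts for an imaginary checkout journey:

```typescript
// Hypothetical funnel: how many users reach each step.
const funnel = [
  { step: "Product page", users: 10_000 },
  { step: "Basket", users: 3_200 },
  { step: "Checkout", users: 1_400 },
  { step: "Purchase", users: 900 },
];

// Drop-off between consecutive steps: the "what", with no "why" attached.
for (let i = 1; i < funnel.length; i++) {
  const dropOff = 1 - funnel[i].users / funnel[i - 1].users;
  console.log(
    `${funnel[i - 1].step} → ${funnel[i].step}: ${(dropOff * 100).toFixed(1)}% drop off`
  );
}
// e.g. "Basket → Checkout: 56.3% drop off"
```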
- Heatmaps & scroll maps: Heatmaps show you where people spend their attention on a page. They’ll highlight which parts people scroll to, hover over, or spend the most time looking at. It’s a great way to see if your content hierarchy is working and if people are actually getting to the key bits you want them to see. It’s important to remember, though, that it’s a broad overview, not precise. Heatmaps can show you attention zones, but they won’t tell you if someone was frustrated, confused, or just skimming.
- Dead clicks and rage clicks: Clicks can be really revealing. You’ll see what people actually interact with: buttons, links, navigation menus, images and so on. This is where you start spotting the sneaky issues like:
  - Dead clicks: where users click on something that isn’t clickable. That’s a sign your design is misleading or unclear.
  - Rage clicks: where someone clicks repeatedly on the same spot, usually out of frustration. That’s a huge red flag for broken functionality or poor UX.
Clicks can help you see if your design is intuitive or if it’s sending people down dead ends. But again, you don’t know “why” they tried to click there in the first place.
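For a feel of what’s happening under the hood, here’s an illustrative heuristic (a sketch of the general idea, not how Clarity or Hotjar actually compute it): flag any element that collects three or more clicks within a second.

```typescript
type Click = { target: string; timeMs: number };

// Flag any element that collects `threshold` clicks inside `windowMs`.
function findRageClicks(clicks: Click[], threshold = 3, windowMs = 1000): string[] {
  // Group click timestamps by the element they landed on.
  const byTarget = new Map<string, number[]>();
  for (const { target, timeMs } of clicks) {
    byTarget.set(target, [...(byTarget.get(target) ?? []), timeMs]);
  }
  const flagged: string[] = [];
  for (const [target, times] of byTarget) {
    times.sort((a, b) => a - b);
    // Slide a window of `threshold` clicks and check how tightly packed it is.
    for (let i = 0; i + threshold - 1 < times.length; i++) {
      if (times[i + threshold - 1] - times[i] <= windowMs) {
        flagged.push(target);
        break;
      }
    }
  }
  return flagged;
}

// Three rapid clicks on an unresponsive button get flagged.
console.log(findRageClicks([
  { target: "#apply-button", timeMs: 100 },
  { target: "#apply-button", timeMs: 350 },
  { target: "#apply-button", timeMs: 620 },
])); // ["#apply-button"]
```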
- Session recordings: These are replays of how users move through your site, where they scroll, hover, click, pause and drop off. They can be super eye-opening because you’re watching the journey unfold in real time.
The thing is, you can end up drowning in them. Watching recordings is very time-consuming, and without a clear plan, you can waste hours trying to spot patterns that may not mean much. Plus, you’re still interpreting what people are doing rather than hearing it from them.

- Surveys & polls: Surveys are useful because you’re actually asking people what they think. You can pop up quick polls on-site (“What stopped you from buying today?”) or send follow-ups by email. But (yep, another but!) what people say and what they do don’t always line up. Users might give polite, rational answers that don’t reflect the messy reality of how they behave in the moment or vice versa! You have to take survey data with a pinch of salt.
So, yes, all of these tools are helpful - but as we said, they’re limited. They give you the 'symptoms' but rarely the cause.
That’s where User Testing comes in. It’s the difference between guessing why something happened and hearing it directly from the people experiencing it.
Davy Ros, President and Founder, CROQ la vie.
“Observing testers using our website has revealed things we just hadn’t been aware of before. We’ve been able to improve the user experience and see positive results almost immediately.”
User testing tells you why.
User testing is as close as you’ll ever get to sitting next to someone while they use your website. Instead of staring at dashboards or guessing why someone dropped off, you actually hear them say things like:
- “I don’t know what this means”
- “I was expecting the button to take me somewhere else”
- “I don’t trust this bit”
That kind of feedback is *chef’s kiss*. It shows you blind spots you might not even notice yourself, validates your ideas before you invest in big changes, and saves you from making costly mistakes.
At the end of the day, it’s not about making a “prettier” site (although you probably would get some extra stars for good aesthetics). It’s about making it easier for real humans to do what they came to do.
Getting the right testers.
One of the easiest mistakes to make with user testing is picking the wrong people. If you’re testing with people who don’t remotely resemble your target audience, you’ll end up with feedback that sounds valid, but doesn’t actually reflect how your real customers might think or behave.
Here are a few things to keep in mind:
- Match your audience. If you sell to small business owners, don’t test with students who’ve never run a business. If your product is aimed at parents, test with parents, not just anyone who's available and will sign up.
- Think about context. Are your users usually on mobile, busy parents, in a rush? Try to recreate that context when testing. Don't test on desktop if that's not where they're visiting from, and set the scenario of the test to align with them!
- Beware of “professional testers”👻. Some platforms have people who’ve done hundreds of tests and know how to “game” them. They’re not giving you genuine first impressions of your target audience, and you'll lose that 'fresh eyes' perspective.
The goal is to get as close as possible to your real users. Even if it means recruiting through your email list, customer base, or social media, it’s worth the extra effort. The closer the testers are to your actual audience, the more valuable the insights you’ll get.
Different ways to recruit testers.
You don’t need a huge budget to get started. Here are a couple of approaches:
- Coffee shop testing: This is as simple as it sounds! Take your laptop or tablet, ask people in a coffee shop (or library, or other space) to complete a task, and watch what happens. It’s quick, easy and can be surprisingly insightful.
- User testing platforms: If you want something more structured, there are platforms like Userlytics, Usertesting.com, Userbrain and so on, where you can recruit testers quickly. They’ll record themselves completing your tasks, and you’ll get videos and transcripts back to review.
The real magic of using a dedicated platform, though, is how it connects your big-picture analytics with the tiny, crucial user moments. You see those frustrating drop-off rates on your dashboard, but you have no idea why. Well, as the team at Userlytics makes clear, user testing is the tool that gives you the answer:
“User testing adds essential context to these patterns by uncovering the reasons behind the drop-offs. Are users confused by the layout? Is there a trust issue or unexpected friction? By pairing behavioural data with usability insights, teams can make targeted adjustments that improve retention and ultimately recover lost revenue.”
It’s all about turning vague numbers into clear, profitable fixes. The video feedback is the “why” that helps you nail down the “what” and actually get back the money you were losing.
Using screener questions.
If you’re using a platform, screener questions are your best friend. They help you filter out people who aren’t relevant to your audience (and hopefully some of the “professional testers” 👀).
For example, if you’re testing an online course platform, you could ask:
- “Have you ever signed up for an online course in the past 6 months?”
- “Which of these best describes you: student, freelancer, small business owner, etc.?”
You can accept, reject, or advance users depending on the answers they pick; that way, you get closer to your actual audience.
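To picture that accept/reject/advance logic, here’s a hypothetical screener sketched as data; every platform has its own format, so treat the field names and answers as made up:

```typescript
type ScreenerAction = "accept" | "reject" | "advance";
type ScreenerOption = { answer: string; action: ScreenerAction };

// Hypothetical screener for the online course example above.
const screener: { question: string; options: ScreenerOption[] } = {
  question: "Have you ever signed up for an online course in the past 6 months?",
  options: [
    { answer: "Yes, to learn something myself", action: "advance" }, // on to the next question
    { answer: "Yes, but only to build or sell courses", action: "reject" }, // wrong side of the market
    { answer: "No", action: "reject" },
  ],
};
```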
Moderated vs. unmoderated testing.
With user testing, there are two types: moderated and unmoderated. Both apply when testing through a platform, but “coffee shop” testing can only be moderated.
Moderated testing: You (or a researcher!) are live on the call with the tester, guiding them, asking questions, and probing deeper when something’s unclear.
Pros: Richer insights, you can dig into “why” in the moment, and spot things the tester might not say unprompted.
Cons: More time-consuming, harder to scale, and you risk accidentally leading the tester if you’re not careful.
Unmoderated testing: Testers complete tasks in their own time while recording their screens and narrating their thoughts.
Pros: Faster, easier to scale, no scheduling headaches.
Cons: Less depth, you can’t ask follow-up questions, and sometimes testers go off-track or misinterpret tasks.
Both methods have their place. If you’re just starting out, unmoderated testing is often the easiest way to get going.

How many users do you need?
The short answer: not a lot.
Actually, Jakob Nielsen famously championed the “rule of five”, and for good reason. He argued that after testing with just five people, you’ve probably uncovered about 85% of your core usability issues. Beyond that fifth user, the new insights start to drop off while the cost and time involved increase.
That research is a little outdated now and is disputed by some. Here’s the important context: that rule is about qualitative insight, not hard data.
- Qualitative Insight (The "Why"): If your goal is to find why people are getting stuck, confused, or frustrated, then 5 to 8 users per distinct user segment is often enough. You’re looking for patterns in their behaviour and hearing the actual words they use.
- Quantitative Data (The "What"): If you’re running a preference test or a survey and you need to state that one version is statistically better than another (e.g. "Version B converted 10% more users"), you’ll need a much larger sample, often 100 or more users to achieve statistical significance.
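If you want a feel for those quantitative numbers, here’s a rough sketch using the standard two-proportion sample-size formula at 95% confidence and 80% power; the input rates are illustrative:

```typescript
// Rough users-per-variant estimate for detecting a difference between
// two conversion (or preference) rates p1 and p2.
function sampleSizePerVariant(p1: number, p2: number): number {
  const zAlpha = 1.96; // two-sided 95% confidence
  const zBeta = 0.84;  // 80% power
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p1 - p2) ** 2);
}

// A big preference gap (40% vs 60%) needs roughly 100 people per version...
console.log(sampleSizePerVariant(0.4, 0.6)); // 95
// ...but a small conversion lift (2% vs 2.2%) needs tens of thousands.
console.log(sampleSizePerVariant(0.02, 0.022)); // ≈ 80,600
```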
The smart approach: test in segments and in rounds.
Don't test 20 people all at once. Instead, embrace small, iterative rounds. Test 5 people from one segment (e.g. mobile users starting from a certain page), pause, fix the top issues they found, and then test another 5 people on a different segment and a different journey.
Setting the right tasks.
One of the easiest ways to ruin your own user test is by setting bad tasks. You need to put people into a realistic mindset, not turn them into compliant, clicking robots. If you guide them too much, the test will only validate your own assumptions and you’ll miss the real blind spots.
To get the authentic, messy truth from your users, your task instructions should follow three golden rules 🌈:
- Realistic: The task must mirror a genuine, real-life goal of your target user. Don't ask them to just click around aimlessly. Ask them to "find the returns policy because the shoes they ordered were the wrong size."
- Open-Ended: You should never tell them how to complete the task. Let them navigate naturally and follow their instincts. If you mention the name of a button or a menu item, you’ve spoiled the test and missed the chance to see if your labels are confusing or difficult to find.
- Clear: The instructions must be unambiguous. No one likes guessing games, especially when they're talking out loud. Keep the scenario brief and the expected outcome obvious.
Example:
| Bad example | Good example |
| --- | --- |
| “Click the ‘Get Started’ button and fill out the sign-up form.” (Too specific, leading.) | “Imagine you’re trying to sign up for the free trial. Go ahead and show us how you’d do that.” (Realistic goal, user-led.) |
| “Find the section about our pricing structure.” (Too passive.) | “You’re trying to figure out which package is right for a small business of three people. Show us where you would go to find that information.” (Creates a scenario, requires a decision.) |
The key is to ask yourself, “Would a real user in the wild use my site like this?” If the answer is no, rewrite the task. You are testing your design, not the user's ability to follow directions!
Synthetic testing.
Synthetic testing sounds like something out of a sci-fi movie, but it’s really just AI-driven simulations of user behaviour. Instead of a person completing a task, you give a script or design to a platform and let an algorithm simulate thousands of potential user journeys.

At Bind, we definitely still believe that genuine human user testing is the best; you simply can’t replace the authentic human confusion, surprise, and spoken feedback. That being said, synthetic testing has found a smart place in the toolkit:
- Early-Stage Validation: You can test rough concepts, wireframes, and early prototypes fast. Before you invest time in humans, you can quickly find major structural flaws and validate your core ideas.
- Scaling and Hypotheses: It can quickly process data on a massive scale, allowing you to generate strong hypotheses about how users might behave. This is a great starting point for more focused human testing later on.
A word of caution… right now, synthetic testing lacks genuine human emotion and surprise. It’s great at telling you if a button is hard to find, but it can’t tell you why that button makes a user feel suspicious or frustrated. Use it to get a quick, scalable answer to ‘what’, and then use real human testing to nail the all-important ‘why’.
Els Aerts, Co-Founder of AGConsult & user research advocate.
“AI will replace user researchers sooner than it will replace real users.”
So, what now?
Go back to those tools we mentioned above and do a bit of evidence gathering. Try to identify a few “conversion weak spots” on your website: where in the user journey do people drop off? Which pages do they seem to hesitate on? Where do they rage click?
Once you’ve found those conversion weak spots, you have a better understanding of “what” is happening on the website. Now you need to follow it up with some user testing to understand “why”.
Then build a task around it following our golden rules, recruit five people who fit your target audience to help, and go ahead and launch a test.
If you’re still not sure, get in touch with us at Bind Media and we can give you some further guidance.