
5 A/B Testing Mistakes That Are Killing Your Conversion Rate

January 5, 2025
7 min read
By Diego Durli

You're running A/B tests. You're following best practices. You're making data-driven decisions.

And your conversion rate is still stuck.

After helping hundreds of founders optimize their landing pages, I've seen the same A/B testing mistakes over and over. These mistakes don't just waste time—they actively hurt your conversion rate by leading you to wrong conclusions.

In this post, I'll break down the 5 most damaging A/B testing mistakes and, more importantly, how to avoid them.

Mistake #1: Stopping Tests Too Early (The "Peeking Problem")

This is the #1 mistake I see, and it's understandable why people make it.

What Happens

Day 2 of your test. You check the results. Variant B is winning by 25%! You get excited. You declare a winner. You implement the changes.

Then, two weeks later, you notice conversions actually went down.

What happened?

The Problem

Early in a test, results are volatile. You might see a 25% lift with 20 conversions, but that could easily be random chance (noise, not signal).

Statistical significance isn't just a nice-to-have—it's essential.

When you "peek" at results and make decisions based on incomplete data, you're likely seeing random variance, not real improvement.

The Fix

Set decision criteria before starting:

  • Minimum runtime (usually 1-2 weeks)
  • Minimum sample size (at least 100 conversions per variant)
  • Statistical significance threshold (95% or higher)

Don't look at results until you hit these criteria. I know it's hard. Do it anyway.

Use a calculator: Tools like Tiny A/B Test automatically calculate statistical significance. If it says "not significant," the test isn't done yet, no matter how good the numbers look.
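Under the hood, these calculators typically run something like a two-proportion z-test. Here's a minimal sketch in Python (an illustration of the general method, not Tiny A/B Test's actual implementation):

```python
from math import sqrt, erf

def z_test_confidence(conv_a, visitors_a, conv_b, visitors_b):
    """Two-sided confidence that variants A and B truly differ,
    via a pooled two-proportion z-test."""
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    return erf(abs(z) / sqrt(2))  # confidence level from the normal CDF

# A "25% lift" with only ~20 conversions per variant:
conf = z_test_confidence(20, 500, 25, 500)
print(f"{conf:.0%}")  # roughly 55%: nowhere near significant
```

Even though variant B shows a 25% relative lift in this example, the confidence level is far below 95%, which is exactly why small early samples are so misleading.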

Real Example

I ran a test that showed a 35% improvement after 3 days with 90% confidence. Exciting, right? I almost stopped it.

But I waited. After two weeks, the improvement dropped to 12% with only 73% confidence. Not significant.

If I had stopped early, I would have implemented a change that didn't actually work.

Action Step

Before starting your next test, write down:

  • "I will not check results before [date]"
  • "I will not declare a winner before [X] conversions and 95% confidence"

Tape it to your monitor if you have to.

Mistake #2: Testing Too Many Things at Once

Your landing page has problems. You want to fix all of them. So you change the headline, the CTA button, the image, the form fields, and the color scheme all in one test.

Congratulations, you just ruined your test.

The Problem

When you change multiple elements simultaneously, you have no idea which change drove your results.

Example scenario:

  • You change the headline (⬇️ -5% impact)
  • You change the CTA button (⬆️ +20% impact)
  • You change the hero image (⬆️ +3% impact)

Net result: +18% improvement. You declare victory!

But wait—what if you had just changed the CTA button? You would have gotten a +20% improvement. The headline change actually hurt your results, but you don't know it because you tested everything together.

The Fix

Test one significant element at a time.

Priority order:

  1. Headlines (highest visibility, big impact)
  2. CTA buttons (where conversions happen)
  3. Value propositions (why someone should care)
  4. Form fields (friction points)
  5. Images/visuals (supporting elements)
  6. Colors/design (usually smallest impact)

Yes, this means testing takes longer. But you'll learn what actually works instead of getting muddy results.

The Exception: Multivariate Testing

If you have LOTS of traffic (10,000+ visitors per week), you can test multiple elements using multivariate testing. But for most startups and indie hackers, you don't have enough traffic for this to reach significance in a reasonable timeframe.

Real Example

A client wanted to test 5 changes at once. We broke it into 5 separate tests instead:

  • Test 1 (headline): +8% improvement ✅
  • Test 2 (CTA button): +23% improvement ✅
  • Test 3 (hero image): -2% (made it worse!) ❌
  • Test 4 (form fields): +15% improvement ✅
  • Test 5 (color scheme): +1% (not significant) ❌

If they had tested everything together, they would have implemented the image change that actually hurt conversions, negating some of their gains.

Action Step

Make a list of everything you want to test. Rank them by potential impact. Test them one at a time, starting with #1.

Mistake #3: Not Having a Real Hypothesis

Too many people treat A/B testing like throwing spaghetti at a wall to see what sticks.

"Let's try a red button and see what happens!"

That's not testing. That's guessing.

The Problem

Without a hypothesis:

  • You don't learn anything (even when you lose)
  • You can't build on past tests
  • You waste time testing random ideas
  • You can't prioritize what to test

What a Real Hypothesis Looks Like

A proper hypothesis has three parts:

Format: "I believe that [change] will [impact] because [reasoning]."

Good examples:

  • "I believe that changing the CTA from 'Submit' to 'Get Started Free' will increase sign-ups by 15% because it reduces uncertainty about cost and creates a clearer next step."
  • "I believe that adding customer logos above the fold will increase trial conversions by 10% because it builds trust with new visitors who don't know our brand."

Bad examples:

  • "Let's try a bigger button" (no reasoning or expected impact)
  • "Red converts better than blue" (no context for why)
  • "We should test this" (not even a hypothesis)

Why This Matters

When you have a real hypothesis:

If you win: You know why it worked, so you can apply that learning to other pages

If you lose: You still learn something valuable about your audience

Example: I hypothesized that "Start Free Trial" would beat "Get Started" because it emphasized no cost. It lost. But I learned my audience wasn't worried about price—they were worried about complexity. So my next test focused on ease-of-use messaging, and that won big.

The Fix

Before every test, complete this sentence:

"I believe that [changing X to Y] will increase [metric] by [Z%] because [reasoning based on data/research/user feedback]."

If you can't complete it, you're not ready to test yet.

Action Step

For your next test, write your hypothesis in a doc. After the test (win or lose), write down what you learned. Over time, you'll build a library of insights about what works for your specific audience.

Mistake #4: Ignoring Statistical Significance

"Variant B is winning by 5%! That's better than A, so let's ship it!"

Not so fast.

The Problem

A small difference could easily be random chance. Without statistical significance, you're just flipping a coin.

What statistical significance means: the observed difference is very unlikely to be random luck. Formally, if the two variants actually performed identically, you'd see a gap this large less than 5% of the time.

What it doesn't mean: Variant B is better because the number is higher.

Common Scenarios

Variant B: 3.5% conversion rate vs. Control A: 3.2% conversion rate (50 conversions each)

  • This looks like a win
  • But with only 50 conversions, this is likely noise
  • You need more data

Variant B: 3.5% conversion rate vs. Control A: 3.2% conversion rate (500 conversions each, 95% confidence)

  • Now you have statistical significance
  • This is a real difference you can trust

Why This Matters

False positives cost you revenue. When you implement a "winning" variant that wasn't actually better, you're potentially lowering your conversion rate.

I've seen companies implement changes based on weak data, only to see conversions drop. They then waste weeks running new tests to figure out what went wrong.

The Fix

Never declare a winner without 95%+ statistical significance.

Most A/B testing tools (including Tiny A/B Test) calculate this automatically. If it doesn't show 95%+, keep the test running.

Need results faster?

  • Test bigger changes (more likely to show significant differences)
  • Test higher-traffic pages
  • Drive more traffic to the test page

Sample Size Matters

You need a certain number of conversions to detect a difference. General guidelines:

  • Small improvement (5% lift): 1,000+ conversions per variant
  • Medium improvement (10% lift): 400+ conversions per variant
  • Large improvement (20%+ lift): 100+ conversions per variant
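These thresholds are rough rules of thumb; the precise number depends on your baseline conversion rate and the statistical power you want. As an illustration, here's a sketch of a standard two-proportion sample-size calculation, assuming 95% significance and 80% power (the figures it produces can be more conservative than the rules of thumb above):

```python
from math import sqrt

def visitors_per_variant(baseline_rate, relative_lift,
                         z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a relative
    lift at 95% significance and 80% power (two-proportion test)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return numerator / (p2 - p1) ** 2

# Example: 3% baseline conversion, hoping to detect a 20% relative lift
n = visitors_per_variant(0.03, 0.20)
print(round(n))  # roughly 14,000 visitors per variant at these assumptions
```

Notice how quickly the required traffic grows as the lift you're hunting for shrinks: halving the detectable lift roughly quadruples the sample you need.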

Real Example

I was running a test that showed Variant B winning by 8% after one week. 89% confidence. Close, right?

I ran it another week. The difference dropped to 3% with 67% confidence.

If I had stopped at 89%, I would have implemented a change that didn't actually improve anything.

Action Step

Check your current A/B tests. Are any running with less than 95% confidence? Keep them running (or turn them off and start fresh with bigger changes).

Mistake #5: Testing the Wrong Things

This is the most insidious mistake because you're doing everything else right—but you're optimizing the wrong elements.

The Problem

You spend a month testing button colors and end up with a 2% improvement.

Meanwhile, your headline is confusing, your value proposition is unclear, and you're losing 50% of visitors before they scroll.

Not all optimizations are created equal. Testing button colors when your headline is broken is like rearranging deck chairs on the Titanic.

The Impact Hierarchy

High Impact (test these first):

  1. Value proposition / headline
  2. Call-to-action clarity and placement
  3. Social proof and trust signals
  4. Form friction (number of fields)
  5. Page load speed

Medium Impact:

  6. Supporting copy
  7. Images and visuals
  8. Layout and whitespace
  9. Pricing presentation

Low Impact (test these last):

  10. Button colors
  11. Font choices
  12. Minor copy tweaks
  13. Small design elements

How to Prioritize

Ask yourself:

  1. Does this affect the visitor's understanding of what I offer? (High impact)
  2. Does this affect their trust in my product? (High impact)
  3. Does this reduce friction in taking action? (High impact)
  4. Is this just a visual preference? (Low impact)

The Fix

Do a conversion audit:

  1. Open your landing page
  2. Scan it for 5 seconds
  3. Look away and answer:
    • What does this product do?
    • Who is it for?
    • What's the main benefit?
    • What should I do next?

If you can't answer these clearly, those are your high-impact test opportunities.

Alternative method: Show your page to 5 people for 5 seconds each. Ask them the same questions. Their confusion tells you what to test.

Real Example

A client wanted to test button colors. I convinced them to test their headline first:

  • Control: "Advanced Analytics Platform"
  • Variant: "Know Which Marketing Channels Actually Drive Revenue"

Result: 47% increase in sign-ups.

Then we tested button colors (blue vs. green). Result: 1% improvement, not statistically significant.

They almost wasted a month testing buttons when their headline was killing conversions.

Action Step

List 10 things you want to test. Rank them by potential impact using the hierarchy above. Start with #1, ignore 8-10 for now.

Putting It All Together: Your A/B Testing Checklist

Before running your next test, make sure you can check every box:

Before Starting:

  • [ ] I have a clear hypothesis with reasoning
  • [ ] I'm testing one significant element
  • [ ] I've identified a high-impact element to test
  • [ ] I've set a minimum runtime (1-2 weeks)
  • [ ] I've set a minimum sample size (100+ conversions)
  • [ ] I've set a significance threshold (95%+)

While Running:

  • [ ] I'm not checking results before my minimum criteria are met
  • [ ] Both variants are getting equal traffic
  • [ ] I'm tracking the right goal metric
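On the "equal traffic" point: most tools handle the split for you, but if you're rolling your own, deterministic bucketing on a visitor ID is a common approach. A hypothetical sketch (the hashing scheme here is an assumption, not any specific tool's behavior):

```python
import hashlib

def assign_variant(visitor_id, test_name, variants=("control", "variant_b")):
    """Deterministically bucket a visitor: the same ID always gets
    the same variant, and buckets split roughly evenly overall."""
    digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same visitor always sees the same variant across page loads
assert assign_variant("user-123", "headline-test") == \
       assign_variant("user-123", "headline-test")
```

Because assignment is a pure function of the visitor ID, you avoid both uneven splits and the confusion of returning visitors being re-randomized into a different variant.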

Before Declaring a Winner:

  • [ ] I've hit my minimum runtime
  • [ ] I've hit my minimum sample size
  • [ ] I've reached 95%+ statistical significance
  • [ ] The results make sense (not just a fluke)

After the Test:

  • [ ] I've documented what I learned (win or lose)
  • [ ] I've implemented the winning variant (if significant)
  • [ ] I've planned the next test based on learnings

The Cost of These Mistakes

These aren't just theoretical problems. Here's what these mistakes actually cost:

Stopping tests early: You implement changes that don't work, potentially lowering your conversion rate

Testing too many things: You can't learn from your tests, so you keep making the same mistakes

No hypothesis: You waste time on random tests instead of strategic optimization

Ignoring significance: You make decisions based on noise, not signal

Testing the wrong things: You spend months optimizing button colors while your headline kills conversions

Your Next Steps

  1. Audit your current tests against the checklist above
  2. Stop any tests that don't meet the criteria
  3. Restart with proper hypotheses and decision criteria
  4. Focus on high-impact elements first

A/B testing is one of the most powerful tools for improving conversion rates—but only if you do it right.

The good news? Now that you know these mistakes, you can avoid them and start seeing real, sustainable improvements in your conversion rate.

Ready to start testing the right way? Get started with Tiny A/B Test free and access built-in significance calculators, hypothesis templates, and smart test tracking that helps you avoid these mistakes.


Have you made any of these mistakes? You're not alone—I've made every single one. The key is learning from them and improving your testing process. What's your biggest A/B testing challenge? Let me know!