A/B Testing Landing Pages: The 7 Elements That Actually Move the Needle
Most landing page tests waste traffic on low-impact changes. These seven elements drive the majority of conversion rate improvements — here's what to test and what to expect from each.
There are hundreds of things you could test on a landing page. Most of them won't move your conversion rate. A team that systematically tests the right elements — in the right order — will outlearn a team running random tests in any reasonable timeframe.
Here are the seven elements that consistently produce meaningful conversion changes, with specific test ideas for each.
1. The headline
The headline is the highest-leverage element on any landing page. It determines whether visitors engage with the rest of the page or leave immediately. It's the first thing users read and often the last if it fails to connect.
Testing headline framing is different from testing headline copy. Framing changes include:
- Outcome vs. feature: "Get your first 100 customers" vs. "Email automation for B2B SaaS"
- Specificity: "Save 4 hours per week on reporting" vs. "Work faster with automated reports"
- Audience: "For solo founders" vs. "For fast-growing teams" — the same product described for different audiences
Expected lift range from headline tests: 10–40% on conversion rate. High variance, but also the highest upside of any element. Run headline tests first.
One caution: headline and subheadline are tightly coupled. A test that changes only the headline while leaving a contradicting or redundant subheadline won't reflect the true potential of either.
2. Hero image or video
Visuals carry meaning that copy can't — they establish who the product is for, signal the quality of the experience, and either create or destroy trust within the first 200ms of a visit.
High-signal test ideas:
- Product screenshot vs. illustration: Screenshots perform better for SaaS with a clear, attractive UI. Illustrations often outperform for complex products where screenshots are confusing.
- Lifestyle image vs. product image: Lifestyle imagery typically works better for consumer products; test which interpretation of the product your audience actually brings.
- Hero video vs. static: Video consistently increases time on page but doesn't always improve conversion. The test is worth running if you have video assets — the result is often surprising in one direction or the other.
- Person facing the camera vs. looking away: In A/B tests across multiple industries, faces looking toward the page's CTA subtly guide user attention. Small effect, but consistent.
3. CTA copy
"Get started" converts worse than you think. It's the most common CTA and the most invisible one.
Effective CTA copy changes to test:
- Specificity: "Start your free trial" → "Start testing in 10 minutes"
- Value-forward: "Sign up" → "Unlock experiment results"
- First-person: "Start my free trial" consistently outperforms "Start your free trial" in most niches — a 10–15% lift is common
- Risk reduction: "Try free for 14 days" → "Try free — no credit card needed"
The most productive approach: test CTA copy as a system — the button text, the nearby supporting copy ("Join 3,000 teams already testing"), and the headline should all reinforce the same message. Testing button copy in isolation often produces underwhelming results.
4. CTA placement and size
Where the button sits matters as much as what it says. The primary CTA should be visible above the fold on the most common viewport sizes. That sounds obvious, yet many landing pages violate it on mobile.
Test ideas:
- Sticky CTA bar: Keeps the primary action visible while scrolling. Often produces a 5–15% lift on long pages.
- CTA at multiple points: Adding a second CTA at the bottom of the page for visitors who scroll through helps capture bottom-of-page intent without reducing above-fold clarity.
- Button size: Larger isn't always better, but many pages err toward buttons that are too small on mobile. Test a version where the button spans 80%+ of the mobile viewport width.
5. Social proof
Social proof is a category, not a single element. Testimonials, customer logos, review counts, case study links, and press mentions all serve different functions and test differently.
The specific form matters:
- Logo walls vs. named testimonials: Logo walls build brand credibility. Named testimonials with specific outcomes ("We cut churn by 18% in the first quarter") build functional credibility. Test which your audience needs — early-stage products usually need specifics; established brands can rely more on logos.
- Testimonial specificity: Vague testimonials ("This tool is amazing!") perform significantly worse than outcome-specific ones. Before testing placement, first test whether your testimonials have enough specificity to work at all.
- Review count and score: Pages showing a star rating and review count often outperform pages without, but only if the score is strong (4.5+). A visible 3.8 rating can actively hurt conversion.
- Placement relative to the form: Social proof placed immediately above or beside the CTA typically outperforms social proof placed at the bottom of the page.
6. Pricing display
For pages with visible pricing, how you present it is often more important than the price itself.
Effective tests:
- Highlighting a plan: The most common pricing-table test is showing vs. not showing a "Most Popular" or "Recommended" badge on a middle tier. This anchoring cue consistently reduces decision paralysis and lifts trial starts.
- Annual vs. monthly default: Showing annual pricing by default increases average revenue per trial but reduces the trial start rate. Which default maximizes LTV depends on your churn profile.
- Price framing: "$49/month" vs. "$1.60/day" — the daily equivalent framing reduces sticker shock for higher price points. Test this if your primary plan is above $30/month.
- Free tier visibility: If you have a free tier, whether to make it visually prominent or de-emphasize it in favor of paid plans is a high-impact test. The right answer varies significantly by traffic source.
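The daily-equivalent framing above is simple arithmetic, but it's worth being consistent about the conversion. A minimal sketch, assuming a 12-months-over-365-days conversion (the exact divisor is a choice; the $1.60 in the bullet above rounds slightly differently):

```python
def daily_equivalent(monthly_price: float) -> float:
    """Convert a monthly price to its per-day equivalent (12 months / 365 days)."""
    return round(monthly_price * 12 / 365, 2)

# The $49/month example above comes out close to the $1.60/day framing:
print(daily_equivalent(49))  # 1.61
```

Whichever divisor you pick, use the same one across all plans so the framings stay comparable.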
7. Form length
Longer forms produce fewer but higher-quality leads. Shorter forms produce more submissions with lower intent. Neither is universally correct — it depends on what happens after the form.
Tests worth running:
- Email-only vs. email + name: Removing the name field typically lifts form completion 10–20%. If your onboarding emails aren't personalized, you're paying a form completion penalty with no benefit.
- Progressive forms: Show two fields initially, reveal more after the first submission. Works well for complex lead gen. Adds implementation complexity.
- Social login vs. email form: "Continue with Google" reduces friction dramatically for SaaS trial signups. It also changes the data you collect and the onboarding experience. Test the conversion lift against the downstream effects.
Prioritizing what to test first
Run headline tests first. They have the highest potential upside and affect every visitor. If your headline isn't working, everything downstream is measuring performance against a broken foundation.
After the headline, use this simple framework: estimate the percentage of visitors who see each element, estimate its potential lift, and multiply the two to get a score. High-traffic elements with high potential lift go first. Marginal elements (footer redesigns, icon changes, microcopy tweaks) go last, if at all.
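The scoring framework above fits in a few lines. The element names and estimates below are hypothetical placeholders, not figures from this article:

```python
# Score each test candidate by (fraction of visitors who see it) x (estimated lift).
candidates = {
    "headline":        (1.00, 0.25),  # every visitor sees it; high potential lift
    "hero image":      (1.00, 0.10),
    "pricing display": (0.40, 0.15),  # only some visitors scroll to pricing
    "footer redesign": (0.15, 0.02),
}

ranked = sorted(candidates.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for element, (reach, lift) in ranked:
    print(f"{element}: score {reach * lift:.3f}")
```

The estimates only need to be roughly right; the point is to make the ranking explicit instead of arguing about it test by test.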
Practical takeaway: Pick one element from this list that you haven't tested yet. Write three headline variants or two CTA copy options. Calculate the sample size you need to detect a 10% lift at your current conversion rate. Launch the test before you read the next piece of content about A/B testing.
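The sample-size step in the takeaway can be done with the standard two-proportion power calculation. A stdlib-only sketch, assuming a 5% baseline conversion rate, 95% confidence, and 80% power (all hypothetical inputs — substitute your own numbers):

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline: float, lift: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed per variant to detect a relative lift in conversion rate."""
    p1, p2 = baseline, baseline * (1 + lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_b = NormalDist().inv_cdf(power)          # power threshold
    p_bar = (p1 + p2) / 2
    n = ((z_a * sqrt(2 * p_bar * (1 - p_bar)) +
          z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(n)

# A 10% relative lift on a 5% baseline needs roughly 31,000 visitors per variant.
print(sample_size_per_variant(0.05, 0.10))
```

If that number is larger than your traffic can deliver in a few weeks, test a bigger swing (a reframed headline, not a word swap) so the detectable lift is larger and the required sample smaller.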