May 25, 2025
Designing Beyond the Average: Why Your Product Shouldn't Fit No One

In a world obsessed with averages, it’s easy to assume that building for the "average user" is safe, efficient, even strategic. After all, average is where the majority lives, right?

Wrong.

Designing for that average user isn’t just a design flaw; it’s a business risk. The real kicker? The idea of the “average user” was debunked more than 70 years ago by, of all people, the U.S. Air Force. And the lessons it learned while designing fighter jet cockpits still apply; only now we’re designing digital products, not planes.

The Study That Changed Everything

Back in 1950, the U.S. Air Force was dealing with a deadly mystery: why were highly trained pilots making frequent errors in flight? The initial assumption was human error. But Lt. Gilbert S. Daniels, a physical anthropologist, had another theory.

He measured 4,063 pilots across 140 physical dimensions—height, chest circumference, limb length, etc.—to design a cockpit for the “average” pilot. What he discovered shattered decades of assumptions.

Out of those 4,063 pilots, not a single one fell within the average range on all ten of the dimensions most relevant to cockpit design. Even on just three dimensions, fewer than 3.5% made the cut.

Designing for the average, it turned out, fit almost no one.

So what happened? Cockpits became adjustable. Aircraft became safer. And the Air Force learned that if you design for the individual, not the imaginary average, everyone benefits.

From Cockpits to Clicks: Why Digital Products Are No Different

Today, we’re not designing cockpits—we’re designing checkout flows, onboarding experiences, healthcare portals, and AI interfaces. But the mistake is the same: designing for a mythical average user.

If you're in e-commerce, health tech, or building AI-driven platforms, here’s a hard truth: there’s no such thing as a typical user. You’ve got one customer on mobile in a rural town with spotty data and another on a high-end desktop in Singapore toggling between five tabs and dark mode.

Try building one experience that suits them both—and good luck hitting your retention metrics.

This is where automated user testing and personalization validation come in.

What Is Automated User Testing?

Automated user testing is the process of using intelligent tools (like CarbonCopies AI) to simulate how different users interact with your product. These simulations aren’t just bots clicking buttons—they’re AI-driven agents that behave like real people.

They catch bugs you didn’t know existed. They test long-tail user flows. They say, “Hey, this dropdown breaks if I switch languages mid-checkout” or “This chatbot loops forever when I say something weird.”

And they do it at scale—without a room full of QA engineers.
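
Here’s a minimal sketch of the idea in plain scripted form, using Playwright rather than any particular vendor tool: each persona gets its own device, locale, and color-scheme profile, and the same checkout assertion runs once per persona. The URL, persona names, and button label are illustrative placeholders.

```typescript
import { test, expect, devices } from '@playwright/test';

// Illustrative personas: a budget mobile user and a desktop dark-mode user.
const personas = [
  { name: 'rural-mobile', device: 'Pixel 5', colorScheme: 'light' },
  { name: 'desktop-dark', device: 'Desktop Chrome', colorScheme: 'dark' },
] as const;

for (const persona of personas) {
  test.describe(`checkout as ${persona.name}`, () => {
    // Emulate this persona's device and color scheme for every test below.
    test.use({ ...devices[persona.device], colorScheme: persona.colorScheme });

    test('checkout button stays visible', async ({ page }) => {
      await page.goto('https://example.com/checkout'); // placeholder URL
      // A break in dark mode or on a small viewport fails only this
      // persona's run, so the report says exactly who is affected.
      await expect(page.getByRole('button', { name: 'Place order' })).toBeVisible();
    });
  });
}
```

Tools like CarbonCopies go further than a hand-written sketch like this by driving the flows with AI agents, but the persona-per-run structure is the same idea.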

Benefits of Automated User Testing

  • Scalability: Run thousands of test cases across dozens of simulated user personas in hours—not weeks.
  • Speed: Identify bugs and UX blockers early, before they hit production and kill conversion.
  • Coverage: Test edge cases, AI-driven flows, and adaptive content that traditional testing tools miss.
  • Efficiency: Drastically reduce manual QA hours. Your PMs and devs will thank you.

What Is Personalization Validation?

Let’s say you’re investing in personalization (smart move): different content blocks for first-time vs. returning users, dynamic pricing for enterprise customers vs. startups. But how do you make sure each variant actually works?

That’s what personalization validation is for. It’s the process of ensuring that each persona is getting the right experience, without errors or misfires.

This is not just about A/B testing. This is about validating that your tailored experiences actually fit the people they’re meant for.
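
As a deliberately simplified sketch: suppose the product swaps its homepage hero based on a “returning” cookie that the personalization layer is assumed to set. The cookie name, test IDs, and URL below are hypothetical; the point is that each persona’s variant gets asserted explicitly rather than eyeballed.

```typescript
import { test, expect } from '@playwright/test';

test('first-time visitor gets the onboarding hero', async ({ page }) => {
  await page.goto('https://example.com/'); // placeholder URL, no cookies set
  await expect(page.getByTestId('hero-onboarding')).toBeVisible();
  // The returning-user variant must not leak into the first-time experience.
  await expect(page.getByTestId('hero-welcome-back')).toHaveCount(0);
});

test('returning visitor gets the welcome-back hero', async ({ page, context }) => {
  // Simulate the returning-user persona by seeding the (assumed) cookie
  // that the personalization layer reads.
  await context.addCookies([
    { name: 'returning', value: '1', domain: 'example.com', path: '/' },
  ]);
  await page.goto('https://example.com/');
  await expect(page.getByTestId('hero-welcome-back')).toBeVisible();
});
```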

Key Elements of Personalization Validation

  • Content Relevance: Is the user seeing content that makes sense to their context?
  • Journey Alignment: Are CTAs and flows adapted to their stage in the funnel?
  • Accessibility & UX: Does the experience still work for dyslexic users, people relying on screen readers, or anyone browsing in a low-light environment?
  • Language and Localization: Are translations accurate and consistent?
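
The last two checks are the easiest to automate. A minimal sketch for a single persona, assuming @axe-core/playwright is installed for the accessibility scan; the URL and the expected Indonesian label are illustrative:

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test.describe('Indonesian dark-mode persona', () => {
  test.use({ locale: 'id-ID', colorScheme: 'dark' });

  test('no automatically detectable accessibility violations', async ({ page }) => {
    await page.goto('https://example.com/approvals'); // placeholder URL
    const results = await new AxeBuilder({ page }).analyze();
    expect(results.violations).toEqual([]);
  });

  test('primary action is actually translated', async ({ page }) => {
    await page.goto('https://example.com/approvals');
    // Fails loudly if the button falls back to the English source string.
    await expect(page.getByRole('button', { name: 'Setujui' })).toBeVisible(); // assumed Indonesian copy for "Approve"
  });
});
```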

CarbonCopies, for example, automatically runs these validations using AI personas that mirror real user behavior, catching issues before they hit your real customers.

Why "Fit for Every Persona" Is the Future

Building “one version for everyone” is not just lazy—it’s unsustainable. Users expect personalized, frictionless experiences, and they’ll bounce fast if you don’t deliver.

When you design for every persona, you’re designing a product that:

  • Works across devices, languages, and modes (light/dark, mobile/desktop)
  • Understands edge cases and accommodates them
  • Builds trust with users who feel seen and supported

Let’s be real: “accessible and adaptive” is no longer a nice-to-have—it’s a conversion booster.

Real-World Use Case: Persona-Driven Testing in Action

Consider a fintech SaaS company using CarbonCopies AI for automated regression testing. They simulated three personas:

  1. Finance Manager: Needs oversight and policy validation before approving reimbursements.
  2. Department Manager: Often on mobile, needing quick approvals with no visual hiccups.
  3. Payroll Employee: Requires translated interfaces in Bahasa Indonesia.

CarbonCopies simulated these workflows and uncovered:

  • Overlapping approval criteria that needed streamlining
  • Contrast issues in dark mode on mobile
  • Broken translations and misaligned buttons in the Indonesian UI

Manual testing missed most of these. But automated user testing found them fast.

Results?

  • 50% reduction in manual testing time
  • 35% increase in test coverage
  • Fewer support tickets, better UX, and faster time to release

You’re Not Building for Pilots, But You Might Be Flying Blind

Unless you're running rigorous automated tests across all user types, you’re basically flying your product blind. You’re assuming your users are average, your journeys are smooth, and your edge cases are rare.

Spoiler: they’re not.

Your checkout flow might break for low-vision users. Your chatbot might loop for non-native speakers. Your “one-size-fits-most” UX might be quietly bleeding revenue.

And that’s the rub. When we design for the average, we alienate the many.

How to Start: Move Beyond the Average in 4 Steps

  1. Map Your Personas: Go beyond demographics—include behaviors, goals, tech stack, and accessibility needs.
  2. Simulate Their Journeys: Use tools like CarbonCopies to create AI agents that act like real users.
  3. Test Automatically and Continuously: Build persona runs into your CI/CD pipeline so every release gets checked (see the sketch after this list).
  4. Fix Fast, Learn Faster: Use feedback loops from AI simulations to prioritize bugs that actually impact users.
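
If you’re assembling this with off-the-shelf tooling, the first three steps can be as small as a test-runner config that declares one project per persona and runs on every push. A rough sketch in Playwright terms, with persona names and settings purely illustrative:

```typescript
// playwright.config.ts
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  // One project per persona; CI runs all of them on every build.
  projects: [
    { name: 'finance-manager-desktop', use: { ...devices['Desktop Chrome'], locale: 'en-US' } },
    { name: 'department-manager-mobile', use: { ...devices['Pixel 5'], colorScheme: 'dark' } },
    { name: 'payroll-employee-id', use: { ...devices['Desktop Chrome'], locale: 'id-ID' } },
  ],
  // Per-persona results show up in the CI report.
  reporter: [['list'], ['html', { open: 'never' }]],
});
```

From there, running `npx playwright test` in your pipeline gives every persona a pass or fail on each commit, and step 4 becomes a matter of triaging whichever persona broke.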

TL;DR: Average Is Overrated

If fighter pilots couldn't fit into a cockpit designed for the average, why do we still expect digital users to thrive in one-size-fits-all experiences?

Embrace the variability. Validate every journey. Automate every edge case.

With automated user testing and personalization validation, you’re not just testing your product—you’re proving that it works for real people.

Not just the mythical average.