December 18, 2024
Designing for Two Audiences: Human-Centered and Agent-Centered UX in Action

Introduction
In the early days of digital product development, “user experience” always meant the human experience. Designers and product managers championed user-centered design principles, persona-driven product development, and empathy mapping to better understand human needs. But in the world we’re rapidly entering—one with AI-driven applications, autonomous agents, and non-human decision-makers—this definition of “user” is no longer so straightforward.

Today’s UX landscape includes not just human customers, but also AI agents that interact with our platforms, recommend content, automate workflows, and even shape the user interfaces that other humans see. If you’re a startup designer or product manager, you might be wondering: Do the old rules of human-centered design still apply when the ‘user’ might be an AI agent? And if so, how do you test this new kind of user experience to ensure both humans and agents benefit?

In this post, we’ll explore how emerging approaches to UX testing, persona development, and test orchestration can help bridge this gap. We’ll look at integrating traditional user testing methods with cutting-edge AI-driven testing tools, discuss the value of shift-left and shift-right testing in a world increasingly influenced by AI, and offer practical steps to ensure your products work seamlessly for both human users and their digital counterparts.

Redefining ‘User’ in the Age of AI

For decades, user experience has been synonymous with human experience. We craft user journeys, build persona-driven product development frameworks, and iterate based on human feedback. The classic persona might be a 35-year-old working parent who checks a mobile app during a morning commute. The goal: understand human motivations, pain points, and desires.

But what happens when the “user” is an AI agent? Imagine a recommendation engine that “uses” your platform to process content feeds and algorithmically surface the most relevant items for human users. Or consider a voice assistant that interacts with your application’s backend APIs to fulfill voice commands on a smart speaker. These AI entities engage with your product, but their behaviors, constraints, and “desires” are entirely different from human-centered motivations.

In this new landscape, persona testing must extend beyond static human archetypes. Instead of focusing solely on human attributes—age, occupation, tech-savviness—we must consider agent-based user experience, where the persona might represent an AI-driven system with unique operational parameters. What does persona-based user testing look like when your “persona” is a machine learning model with specific input-output requirements?

At the same time, we must consider that human users and AI agents often co-exist within the same ecosystem. A human might rely on an AI assistant to navigate a complex interface, or an AI agent might influence what a human sees next. This interplay makes user testing more complex. It’s no longer about testing a product with a single audience in mind; it’s about ensuring a seamless synergy between human and AI interactions.

Human-Centered vs. Agent-Centered UX

Traditional UX revolves around human-centered design principles. The goal is to make interfaces intuitive, interactions frictionless, and content engaging. Humans appreciate visual clarity, simple navigation, and accessible language. Empathy mapping helps designers step into the user’s shoes, identifying pain points and opportunities for delight.

But what about agent-centered UX? When your “user” is an AI, how do you define good UX? AI agents don’t feel frustration when a button is hard to find; instead, they may struggle with ambiguous data formats or inconsistent response structures. The usability heuristics for AI might be entirely different—focusing on clean, standardized APIs, predictable output formats, or well-documented endpoints rather than visual aesthetics.

This raises an intriguing question: Where do shift-left and shift-right testing fit into this picture? Traditionally, shift-left testing involves pushing tests earlier into the development process, catching bugs before they ever reach the user. In an environment where 80% of code may be generated by AI in the near future, early testing still matters, but what are we testing for?

  • Shift-left for Human Users: Ensuring the initial UI prototypes are intuitive, forms are validated, and user journeys make sense.
  • Shift-left for AI Agents: Verifying early in development that APIs follow consistent naming conventions, data schemas are stable, and machine-readable documentation is available. Here, AI-driven testing tools and data-driven test automation frameworks come into play (see the sketch below).
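To make the agent side of shift-left concrete, here is a minimal Python sketch that lints a hypothetical schema for the kind of naming consistency an agent depends on. The models, field names, and the snake_case convention are illustrative assumptions, not a prescription for your API.

```python
# A minimal shift-left sketch: lint a (hypothetical) schema for agent-friendliness
# before any UI exists. Models and field names below are invented for illustration.
import re

SPEC = {
    "Product": {"product_id": "string", "display_name": "string", "unit_price": "number"},
    "Recommendation": {"product_id": "string", "relevanceScore": "number"},  # inconsistent casing
}

SNAKE_CASE = re.compile(r"^[a-z][a-z0-9_]*$")

def lint_field_names(spec: dict) -> list[str]:
    """Flag fields that break the naming convention an agent might rely on."""
    problems = []
    for model, fields in spec.items():
        for name in fields:
            if not SNAKE_CASE.match(name):
                problems.append(f"{model}.{name} does not follow snake_case")
    return problems

if __name__ == "__main__":
    for issue in lint_field_names(SPEC):
        print("schema lint:", issue)  # e.g. "Recommendation.relevanceScore does not follow snake_case"
```

A check like this can run in CI on every schema change, long before a human ever sees a prototype.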

On the other hand, shift-right testing focuses on post-deployment monitoring and continuous feedback. If your product interacts with AI recommendation engines, shift-right testing might involve analyzing behavioral analytics over time, ensuring that AI agents effectively utilize your platform’s features and that human satisfaction remains high. The key is treating AI agents as part of your user base, just as deserving of continuous UX evaluation as your human customers.

Testing Strategies for Agent UX

How do you test for something that isn’t human? Let’s consider a practical scenario: You’ve designed an online marketplace that humans use to shop for niche products. You also have an AI-driven aggregator that crawls the marketplace, curates top items, and feeds these recommendations into a partner platform. Your product must work flawlessly for both the human buyer and the AI aggregator.

a) AI-Driven Testing Tools and Continuous Testing:
Traditional usability tests—where a participant navigates an app while thinking aloud—won’t capture an AI’s perspective. Instead, you might create automated test suites that simulate an AI agent’s interactions. By using AI-driven testing tools, you can programmatically hit endpoints with large sets of test data, validating that responses are returned in the expected format. Continuous testing frameworks can log anomalies in real-time, alerting you when the platform becomes less “machine-friendly” due to a code change or new feature.
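As a rough illustration, the following pytest-style sketch simulates an agent's view of a listing endpoint: it requests data the way an aggregator would and validates each item against an expected JSON Schema. The URL, the "items" key, and the field names are placeholders for whatever your platform actually exposes.

```python
# A hedged sketch of an "agent's-eye" contract test. Endpoint and schema are placeholders.
import requests
from jsonschema import validate, ValidationError

ITEM_SCHEMA = {
    "type": "object",
    "required": ["id", "title", "tags"],
    "properties": {
        "id": {"type": "string"},
        "title": {"type": "string"},
        "tags": {"type": "array", "items": {"type": "string"}},
    },
}

def test_listing_endpoint_is_machine_friendly():
    # Simulate the aggregator: request a page of items the way an agent would.
    resp = requests.get("https://example.com/api/items", params={"limit": 50}, timeout=5)
    assert resp.status_code == 200
    assert resp.headers["Content-Type"].startswith("application/json")
    # Assumes the payload wraps results in an "items" list; adjust to your API.
    for item in resp.json()["items"]:
        try:
            validate(instance=item, schema=ITEM_SCHEMA)
        except ValidationError as exc:
            raise AssertionError(f"agent-unfriendly payload: {exc.message}")
```

Run continuously, a suite like this surfaces "machine-unfriendly" regressions the moment a code change introduces them.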

b) Persona-Based User Testing with a Twist:
When dealing with agent-based experiences, “personas” might represent AI behavioral patterns rather than human demographics. For instance, you could define an “AI aggregator persona” that prefers certain data structures and reacts poorly to unpredictable schema changes. Testing against this persona involves ensuring your system consistently provides structured outputs. This is similar to persona testing for humans, but it focuses on technical characteristics rather than emotional or cognitive traits.
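One lightweight way to capture such a persona is as test configuration rather than a demographic profile. The attribute names in this sketch (accepted formats, latency tolerance, required fields) are assumptions; adapt them to the agents that actually consume your product.

```python
# Encoding an "agent persona" as test configuration. Attribute names are illustrative.
from dataclasses import dataclass, field

@dataclass
class AgentPersona:
    name: str
    accepted_formats: list[str]            # payload formats the agent can parse
    max_latency_ms: int                    # slowest response the agent tolerates
    required_fields: list[str] = field(default_factory=list)
    tolerates_schema_drift: bool = False   # does it survive renamed or missing fields?

AGGREGATOR = AgentPersona(
    name="catalog_aggregator",
    accepted_formats=["application/json"],
    max_latency_ms=800,
    required_fields=["id", "title", "tags", "updated_at"],
)
```

Test suites can then be parameterized over these persona objects, just as usability studies are organized around human archetypes.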

c) Empathy Mapping for AI?
While you can’t truly empathize with an AI’s feelings, you can “empathize” with its technical requirements. This might involve mapping out the agent’s journey: it queries your API, receives structured data, and returns recommendations. Points of friction might include inconsistent formatting, delayed response times, or missing parameters. By identifying these friction points early, you can preempt issues before they cascade into a poor experience for the humans who rely on the AI’s filtered content.
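If it helps, the same empathy map can be expressed as a runnable journey check. The endpoints, required fields, and latency threshold below are assumptions for illustration only.

```python
# A rough "empathy map" for an agent, expressed as a journey check rather than a
# workshop artifact. URLs, fields, and thresholds are placeholders.
import time
import requests

FRICTION_LOG = []

def check_step(name, url, required_fields, max_latency_ms=800):
    start = time.monotonic()
    resp = requests.get(url, timeout=5)
    elapsed_ms = (time.monotonic() - start) * 1000
    if elapsed_ms > max_latency_ms:
        FRICTION_LOG.append(f"{name}: slow response ({elapsed_ms:.0f} ms)")
    payload = resp.json()
    for field_name in required_fields:
        if field_name not in payload:
            FRICTION_LOG.append(f"{name}: missing field '{field_name}'")

if __name__ == "__main__":
    # Walk the aggregator's journey: list items, then fetch one item's details.
    check_step("list items", "https://example.com/api/items", ["items"])
    check_step("item detail", "https://example.com/api/items/123", ["id", "tags"])
    print("\n".join(FRICTION_LOG) or "no friction detected")
```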


Tools and Techniques for Mixed-User Testing

Today’s product teams have a growing toolkit for incorporating both human and agent-centric testing:

  1. Behavioral Analytics in Testing:
    Collect data not just from human sessions but also from AI sessions. How often do AI agents request data? Are they getting error responses or malformed payloads? By treating AI behavior as part of your analytics pipeline, you gain insights into how well your platform supports machine users.
  2. Data-Driven Test Automation:
    Testing frameworks can simulate various “agent personas” by sending requests with different parameters or at high frequencies (see the sketch after this list). This ensures your platform’s performance and data integrity hold up under stress conditions that human users might never produce.
  3. Intelligent Test Orchestration:
    Some emerging tools use machine learning to determine which tests to run and when, optimizing coverage for both human and AI use cases. For example, if analytics show the AI agent is having trouble interpreting a new endpoint, the system might automatically trigger more granular tests around that endpoint.
  4. Voice Interface Usability and Agent Interaction:
    If your product has a voice interface, you might not only test how humans understand voice commands but also how AI systems (like speech-to-text engines) parse your voice output. Ensuring that both the spoken words and metadata are easily processed by such agents can improve overall user satisfaction, as the AI can then provide richer, more accurate results.
  5. Prototyping AI-Human Interfaces:
    When building a prototype, consider scenarios where AI agents serve as intermediaries. For example, test how easily your platform can integrate with a popular AI content summarizer. If the summarizer struggles with your HTML structure or fails to extract relevant metadata, it’s a sign your UX might need adjustments—just as a confusing button placement would be a sign of poor human UX.
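To illustrate the data-driven angle from item 2, here is a hedged sketch that replays the same request pattern under a few hypothetical agent personas using pytest’s parametrize. The persona names, request counts, and endpoint are invented for the example.

```python
# Data-driven test automation over hypothetical agent personas. Names, rates,
# and the endpoint are illustrative, not real integrations.
import pytest
import requests

PERSONAS = [
    {"name": "bulk_crawler", "page_size": 200, "requests_per_run": 20},
    {"name": "realtime_recommender", "page_size": 10, "requests_per_run": 100},
]

@pytest.mark.parametrize("persona", PERSONAS, ids=lambda p: p["name"])
def test_platform_under_agent_load(persona):
    for _ in range(persona["requests_per_run"]):
        resp = requests.get(
            "https://example.com/api/items",
            params={"limit": persona["page_size"]},
            timeout=5,
        )
        assert resp.status_code == 200
        # Assumes the payload wraps results in an "items" list; adjust to your API.
        assert len(resp.json()["items"]) <= persona["page_size"]
```

The same structure extends naturally: add a new persona dictionary and the whole suite covers the new agent without new test code.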


The Future of UX Testing in a Hybrid Landscape

As we move forward, we should expect the boundaries between human-centered and agent-centered UX to blur. In fact, you might not always know whether a request to your platform came from a human or an AI agent. That means building your product to be universally accessible—both “human-friendly” and “machine-friendly”—is not just a luxury but a necessity.

Generative AI tools for software testing will likely expand, using machine learning models to predict what human and AI agents need next. Intelligent test orchestration might become standard, automatically adjusting test strategies based on real-time feedback and usage patterns (both human and machine).

For product managers, this means embracing complexity. It’s no longer sufficient to manage a backlog that only accounts for human feature requests. You must consider AI-driven interactions as part of your product’s ecosystem. When prioritizing features, think about how they affect both human engagement and AI “understanding.” For example, a subtle change in the API structure might not affect human users directly, but it could break a crucial AI integration.

Real-Life Scenario:
Imagine a content aggregation platform that helps humans discover articles. A recommendation agent (an AI “user”) relies on your platform’s tags and metadata to classify content. If you suddenly change how tags are structured, humans might not notice—but the recommendation engine might struggle, leading to less relevant suggestions for humans. This interdependence makes it critical to test for both sides and continuously monitor for shifts in agent behavior.
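A contract test is one way to catch exactly this kind of silent breakage. The sketch below pins down the tag and metadata shape the recommendation agent is assumed to depend on, so a “harmless” refactor fails in CI instead of degrading suggestions in production. The URL and field names are placeholders.

```python
# A hedged contract test for the scenario above. URL and field names are placeholders.
import requests

def test_article_tags_keep_the_shape_the_recommender_expects():
    resp = requests.get("https://example.com/api/articles/example-slug", timeout=5)
    article = resp.json()
    assert isinstance(article.get("tags"), list), "tags must stay a flat list"
    assert all(isinstance(tag, str) for tag in article["tags"]), "tags must stay plain strings"
    assert "published_at" in article.get("metadata", {}), "recommender needs a publish date"
```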

Conclusion

In a world where AI agents increasingly interact with our products, the concept of “user experience” must evolve. Human-centered design remains essential because humans are still at the center of it all. But we must now acknowledge that not all users are human. Some are AI-driven agents whose satisfaction depends on data consistency, clear documentation, and predictable response structures rather than intuitive icons or well-chosen color palettes.

For designers and product managers at startups, this shift is both a challenge and an opportunity. By embracing agent-based user experience, you can ensure your product remains accessible and valuable to all parties—human and machine alike. Persona-driven product development now includes personas for AI agents, and continuous testing must account for both human and machine feedback loops.

What's in it for you?


As you plan your next product iteration, consider setting up a testing pipeline that simulates AI-agent interactions. Use AI-driven testing tools to catch format inconsistencies early. Analyze behavioral analytics data to understand how both humans and AI agents engage with your product. By doing so, you’ll not only future-proof your product against an increasingly automated world but also ensure a richer, more resilient user experience ecosystem. Reach out to our founding team: isabella@carboncopies.ai to chat.
