Introduction: Fast Development with High Quality Isn’t a Myth—We’re Living It
At Adobe, we know that our customers rely on Adobe Journey Optimizer (AJO) to deliver seamless, timely, and personalized customer experiences. That’s why we have made it our mission to catch issues before they reach our customers – and resolve them quickly when they do.
How? With something we call Customer Use Case (CUC) Automation.
While we continue to invest in traditional testing methods like unit tests, functional tests, and integration checks, Customer Use Case (CUC) Automation goes a step further. It tests AJO the way our customers actually use it—across journeys, segments, profiles, message authoring, delivery, and more. By mirroring real user workflows end to end, CUC ensures everything works together seamlessly, just as you’d expect in a live environment.
This means better reliability, faster issue resolution, and a smoother experience for our team and our customers.

Here’s the uncomfortable truth: if automation doesn’t reflect how our customers actually use the product, it’s not doing its job.
That was the gap we saw and filled with Customer Use Case (CUC) Automation. Instead of just testing services, we test entire user workflows: how real users build, execute, and depend on them. This includes not just our app, Adobe Journey Optimizer (AJO), but all the platform services it's built on, like Adobe Experience Platform, profile, and segmentation services.
We’ve now automated 200+ critical user workflows. These tests run hourly or every few hours across environments and regions. And they’re not flaky. We hold a 98%+ pass rate across the suite, and our smoke tests have a 100% success rate. When something breaks, it’s real and it matters. That’s why our developers trust these tests enough to let them gate production deployments.
But catching issues is only half the equation.
The other half is responding fast. When a test fails, we don’t just log it—we auto-trigger war rooms. We treat stage like prod. Our goal isn’t to be bug-free forever (that’s a fantasy). Our goal is to fail fast, resolve faster, and make sure customers never feel the pain.
In short: we’re proving that fast development with great quality is not a myth. With the right focus—CUC coverage, test health discipline, and Continuous Integration/Continuous Deployment (CI/CD) integration—we are giving our developers full autonomy, without compromising trust.
In this blog, you’ll learn how we built this system, what it looks like in action, and how it’s changed the way we ship software at scale.
Why Traditional Testing Isn’t Enough
Most teams don’t have a testing problem. They have a testing blind spot.
Unit tests catch broken logic. Feature tests validate new capabilities. Integration tests check API contracts and service responses. These layers are critical—and we use them too.
But here’s the catch: none of them tells you what the customer sees.
Traditional tests focus on systems—not scenarios. They assume that if each service behaves as expected in isolation, the customer experience will be fine. But in reality, what customers interact with is a chain of services, stitched together in workflows that span teams, platforms, and microservices.
We saw this clearly in AJO.
Adobe Journey Optimizer is built on top of public cloud platforms like Azure and AWS, as well as Adobe Experience Platform. This means that even if our own APIs are functioning correctly, the overall customer experience can still be affected by issues in underlying services—like cloud infrastructure, profile data, segmentation logic, or activation flows. Traditional test cases often miss these kinds of problems because they focus on isolated responses, not whether a customer journey actually runs end to end.
That’s the difference.
Customer Use Case (CUC) testing looks at the product the way our users do. It doesn’t ask, “Did this API respond correctly?” It asks, “Can a customer build and execute a Journey/Campaign, with all the dependencies working together, end to end?”
Once we started thinking that way, we started catching issues before customers ever saw them.
What Customer Use Case Automation Really Means
A Customer Use Case (CUC) is more than an integration test. It’s a full representation of how a user interacts with our product—from start to finish—across every system involved.
For us, that meant mapping out exactly how customers use AJO: building journeys, creating segments, sending data to platforms, triggering events, and more. And not just the “happy path”—but edge cases too.
We prioritized based on impact:
- What are the most common workflows?
- Which use cases depend on other services in the larger ecosystem the app is built on?
- Which areas do bugs impact the most when they sneak through?
Once we had the list, we built automation for each. Today, we’ve covered 200+ CUCs using a backend-focused framework powered by Cucumber and Selenium. These tests are API-first and simulate full end-to-end flows, not just isolated calls.
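To make that concrete, here is a simplified sketch of what one CUC scenario might look like as Cucumber step definitions. The class, the step wording, and the AjoApi facade are illustrative stand-ins, not our actual framework code:

```java
// Scenario: Cart-abandonment journey delivers an email end to end
//   Given a segment of profiles who abandoned their cart
//   And a published journey that emails that segment
//   When a test profile triggers a cart-abandonment event
//   Then the email is delivered to that profile

package cuc.sketch;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

import static org.junit.jupiter.api.Assertions.assertEquals;

public class JourneyDeliverySteps {

    /** Minimal, hypothetical facade over the AJO and platform APIs this scenario touches. */
    interface AjoApi {
        String createSegment(String definition);
        String createJourney(String segmentId, String messageTemplate);
        void publishJourney(String journeyId);
        void ingestProfileEvent(String profileId, String eventType);
        String deliveryStatus(String journeyId, String profileId);
    }

    private final AjoApi api;

    // In a real suite the concrete API client would be supplied by the glue code's
    // dependency-injection setup (for example cucumber-picocontainer).
    public JourneyDeliverySteps(AjoApi api) {
        this.api = api;
    }

    private String segmentId;
    private String journeyId;

    @Given("a segment of profiles who abandoned their cart")
    public void createSegment() {
        segmentId = api.createSegment("cart-abandoners-last-24h");
    }

    @Given("a published journey that emails that segment")
    public void createAndPublishJourney() {
        journeyId = api.createJourney(segmentId, "cart-reminder-email");
        api.publishJourney(journeyId);
    }

    @When("a test profile triggers a cart-abandonment event")
    public void triggerEvent() {
        api.ingestProfileEvent("cuc-test-profile-001", "cartAbandoned");
    }

    @Then("the email is delivered to that profile")
    public void assertDelivery() {
        // The end-to-end assertion: segmentation, journey execution, and delivery all worked together.
        assertEquals("DELIVERED", api.deliveryStatus(journeyId, "cuc-test-profile-001"));
    }
}
```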
Building a Suite That Engineers Trust
Automation doesn’t matter if no one trusts the results.
That’s why we invested early in test reliability. We made test health a priority, not an afterthought. We stabilized data setups, fixed flaky tests, and monitored failure patterns closely. The result?
- 98%+ pass rate across all scenarios
- 100% pass rate on smoke tests, consistently
This stability turned the test suite from a “nice to have” into a core dependency for engineers. Developers now deploy to staging—or a single production region—and wait for CUCs to run. If they pass, the rollout continues. If not, they halt or roll back.
That trust only happens when test results are clear, accurate, and fast.
Tech Stack & CI/CD Integration
Here’s how we structured our test execution:
- Smoke Tests:
Run every hour. Cover happy-path customer journeys. Execute in < 5 minutes. If the pass rate falls below 100%, we auto-trigger a war room to investigate and resolve.
- Minor Regression Suite:
Contains 95% of our CUCs. Runs every 4 hours. Takes ~1 hour to complete. If the pass rate falls below 98%, we auto-trigger a war room to investigate and resolve.
- Environment-wide Execution:
Tests run in every environment and across regions to catch region-specific issues before rollout.
All of this is deeply integrated into CI/CD pipelines. If a test fails, the pipeline stops. If smoke tests fail in stage, we treat it like a prod failure. Stage is no longer a dumping ground; it’s a production-grade environment.
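Conceptually, the gate at the end of each run is simple. The sketch below captures the decision a pipeline step makes once a suite finishes, using the thresholds quoted above; the types and names are invented for illustration, not our pipeline code:

```java
// Sketch of a CI/CD quality gate. SuiteResult and the threshold constants are
// hypothetical; the 100% / 98% bars mirror the tiers described above.
public final class CucQualityGate {

    record SuiteResult(String suiteName, int passed, int total) {
        double passRate() { return total == 0 ? 0.0 : (double) passed / total; }
    }

    // Smoke tests must be perfect; the broader regression suite must stay above 98%.
    private static final double SMOKE_THRESHOLD = 1.00;
    private static final double REGRESSION_THRESHOLD = 0.98;

    /** Returns true if the rollout may continue; otherwise the pipeline halts and escalates. */
    static boolean evaluate(SuiteResult smoke, SuiteResult regression) {
        if (smoke.passRate() < SMOKE_THRESHOLD || regression.passRate() < REGRESSION_THRESHOLD) {
            System.out.printf("CUC gate failed: smoke %.1f%%, regression %.1f%% - halting rollout%n",
                    smoke.passRate() * 100, regression.passRate() * 100);
            // In our process a failure here also opens a war room.
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Example: one regression failure out of 200 scenarios still clears the 98% bar.
        System.out.println(evaluate(new SuiteResult("smoke", 25, 25),
                                    new SuiteResult("regression", 199, 200)));
    }
}
```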
Culture Shift: From Testing to Quality Gates
The real breakthrough wasn’t just the technology—it was the cultural shift that followed.
Because engineers now trust the tests, they’ve started using them to make real decisions:
- Deploy to one region → run CUCs → promote only if green
- Pause at staging → run CUCs → promote to prod only on success
- Treat test failures as production incidents, not pipeline noise
We even auto-trigger war rooms when smoke tests fail. This ensures rapid response and coordination, long before a customer ever feels the impact.
We’ve also seen adoption across microservice teams. Many now follow the same “regional rollout + CUC validation” pattern to ship safely and confidently.
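Put together, that rollout pattern looks roughly like the loop below. The Deployer and CucRunner interfaces and the region handling are hypothetical stand-ins for real deployment and test-orchestration tooling; the point is the shape of the flow: deploy one region, validate with CUCs, and only then promote.

```java
import java.util.List;

// Illustrative regional rollout loop: deploy to one region, run the CUC suite there,
// and only promote to the next region if it is green.
public final class RegionalRollout {

    interface Deployer {
        void deploy(String region, String build);
        void rollback(String region);
    }

    interface CucRunner {
        boolean runAllGreen(String region);
    }

    static void rollOut(String build, List<String> regions, Deployer deployer, CucRunner cucs) {
        for (String region : regions) {
            deployer.deploy(region, build);
            if (!cucs.runAllGreen(region)) {
                // Treat the failure like a production incident: stop, roll back, investigate.
                deployer.rollback(region);
                throw new IllegalStateException("CUCs failed in " + region + "; rollout halted");
            }
            // Green in this region, so it is safe to promote to the next one.
        }
    }
}
```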
Quality That Our Customers Can Count On
Software will never be 100% bug-free, but we’re committed to catching issues before they reach our customers and resolving them quickly if they do.
Customer Use Case Automation lets us:
- Monitor user workflows, not just individual features
- Catch issues across integrated services and systems
- Deliver reliable updates that our customers can trust
By shifting our approach from reactive fixes to proactive quality, we’ve built a foundation that allows us to move faster, with greater confidence.
The result? Customers get the latest AJO features and innovations delivered faster, with quality they can count on.