abhishekanand_
Community Advisor
April 9, 2026

Leveraging AI to Accelerate AEM and Day-to-Day Development Work


I’d like to understand how teams working with Adobe Experience Manager (AEM) are practically using AI in day‑to‑day development work to speed up delivery: AI‑assisted coding, agentic AI, vibe‑coding approaches, and other productivity tools. Specifically, how are you applying AI to AEM development, configuration, content migration, testing, and optimization, and what real productivity gains are you seeing versus hype?

I’m also interested in perspectives on Adobe’s AI strategy (Sensei, Firefly, AI‑driven features) from a delivery standpoint: does it meaningfully reduce development effort, or mainly shift complexity elsewhere?

A growing challenge is that clients now expect 7 days of work in 2 days, assuming AI automatically compresses timelines, while being reluctant to estimate effort realistically. How are you adapting estimation models, educating stakeholders, and balancing AI‑assisted speed with quality, governance, and team sustainability?

2 replies

BrettBirschbach
Adobe Champion
April 9, 2026

We’ve recently started using AI to aid component development in AEM projects - AI creates a fully working component as a starting point (Sling Model, unit tests, dialogs, HTML, JS/CSS, content policy, and a Brand Library page for usage). We haven’t measured the productivity increase yet, but it definitely elevates the starting point and gives FE developers much more independence on content-authored components. For components with heavy logic the gains are smaller, but they’re still there thanks to a very good shell to start from.

 

Creating formal Claude (Agent) skills for the different parts of coding a component has made a dramatic difference in AI’s ability to produce code that is not only “working” but also matches the project’s expectations and patterns. Without the skills, AI output a lot of code that looked and smelled like AEM but didn’t fit project paradigms or reuse common widgets and logic, and it required a decent amount of refactoring to be useful or sometimes even to function - gains were dubious without the skills. I also found that AI does better in a clean codebase with singular patterns: the more different ways you do something in a codebase, the more AI struggles, even with formal skill instructions, since it will at times get confused when your instructions say one thing and your code does another, especially as context gets large and AI attempts to optimize its efforts.
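For anyone curious what such a skill might look like, here is a minimal, hypothetical sketch of a SKILL.md for Sling Model conventions. The file name, the frontmatter fields, and every convention listed (including the LinkBuilder helper) are illustrative placeholders, not Brett’s actual project rules:

```
---
name: aem-sling-model
description: Conventions for writing Sling Models in this project
---

# Sling Model conventions

- Annotate with @Model(adaptables = SlingHttpServletRequest.class,
  defaultInjectionStrategy = DefaultInjectionStrategy.OPTIONAL).
- Inject dialog properties with @ValueMapValue; never read the raw ValueMap.
- Expose only getters; no public fields, no business logic in constructors.
- Reuse the shared LinkBuilder helper (hypothetical example of a common
  project utility) for all link fields instead of concatenating paths.
- Every model gets an AemContext-based unit test covering empty, partial,
  and fully authored content.
```

The point of a file like this is exactly what the reply describes: it anchors the AI to one way of doing each thing, so the output fits the project instead of just “looking like AEM.”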

 

Teaching AI how to code AEM components has actually been quite fun. Using it to bang out the tedious parts of the code (e.g. creating the content policy and assigning it to a dozen page templates, creating a Brand Library page demonstrating style variations, etc.) feels great, regardless of the exact % savings. As an Architect, I appreciate not only the time it saves devs in creating the code, but the time it saves me in validating certain patterns and practices that the AI gets perfect every time (based on the supplied skills) but which developers (especially those new to the project) can occasionally overlook.
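For readers less familiar with that particular tedium: a content policy plus its per-template mapping is pure boilerplate that has to be repeated for every template. Roughly sketched (project name, component, and paths are made up; node structure shown from memory of a typical AEMaaCS editable-templates setup, so verify against your own /conf tree):

```
/conf/myproject/settings/wcm/policies/myproject/components/teaser/policy_1234
  jcr:primaryType      = nt:unstructured
  sling:resourceType   = wcm/core/components/policy/policy
  jcr:title            = "Teaser - Default"

/conf/myproject/settings/wcm/templates/landing-page/policies/jcr:content/root/container/teaser
  jcr:primaryType      = nt:unstructured
  sling:resourceType   = wcm/core/components/policies/mappings
  cq:policy            = myproject/components/teaser/policy_1234
```

Repeating that second mapping node across a dozen templates is exactly the kind of mechanical work where an AI with a good skill file saves real time.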

 

Level 1
April 10, 2026

I can share my experience regarding development using AI:

We have started using Adobe’s recently released local development AI skills for AEMaaCS (see https://experienceleague.adobe.com/en/docs/experience-manager-cloud-service/content/ai-in-aem/local-development-with-ai-tools), and it’s been a noticeable step up from generic prompts like “create a component…” or “help fix this bug”. The big difference is that with the skill, the AI is much better at following our project structure and conventions, so the output is closer to something we can actually consume without a lot of clean-up or back-and-forth.

We also created a handful of prompt templates for the repeatable parts of AEM work (component building, dialogs, Sling Models, unit tests, servlets, workflows, etc.), which reduces the errors the AI makes. On straightforward, pattern-based components we are seeing roughly 20% developer time saved (early days, still validating with better measurement). It has helped tremendously with writing unit tests, which developers hate to write.
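To illustrate the idea, a reusable template for the unit-test case might look something like the sketch below. This is an approximation of the approach rather than our exact wording, and the {{placeholders}} and paths are illustrative (it assumes the wcm.io AEM Mocks JUnit 5 support, i.e. io.wcm.testing.mock.aem.junit5.AemContext):

```
Role: You are an AEM developer on this project.
Task: Write a JUnit 5 unit test for the Sling Model {{modelClass}}.
Constraints:
- Use io.wcm.testing.mock.aem.junit5.AemContext for the test context.
- Load test content from /test-content/{{componentName}}.json.
- Cover three cases: no authored properties, partial authoring, full authoring.
- Assert every getter; do not test framework behavior.
- Name the class {{modelClass}}Test and follow the existing test package layout.
```

Filling in two placeholders per component is much cheaper than re-explaining the conventions in every chat, and it keeps the generated tests consistent across the team.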

For components with heavier logic or third-party integrations, the gains are smaller and you still need a developer to steer the AI at every stage, as @BrettBirschbach mentioned.

One thing we also learned is that AI works best when the codebase is consistent. If the repo has three different ways of doing the same thing, the assistant gets confused and you spend your time refactoring its output - for example, when to use Core Components, which design patterns to follow, and so on.

Overall, because we are still early in adoption and the gains are incremental, we’re being even more diligent with PR reviews, QA, and other checks. So we’re not seeing a big, obvious reduction in end-to-end sprint cycle time yet, but we expect this to improve over time.