It takes over an hour for the Adobe build process to build and deploy to a cloud environment. This means a simple fix takes many hours: first we have to build and test on dev, then build and test on UAT, then build on stage, then on live.
That's 4+ hours just building duplicate code.
And that is if the Adobe build system is working. We get builds failing due to environment hosting issues on a weekly basis, and Adobe take weeks or months to respond and fix them, taking our deployment process and production changes down with them.
Could we instead build on, say, dev, and deploy to UAT using a code package? Adobe say this doesn't work, but it would be much faster, and also much more reliable.
As per the governance and security process, and to adhere to the CI/CD process, we cannot deploy any changes using packages.
If the package build takes a long time, please look into your project repository structure and see how it can be optimized.
Hi @nutmix2
As per the Adobe documentation: all content and code persisted in the immutable repository must be checked into Git and deployed through Cloud Manager. In other words, unlike current AEM solutions, code is never deployed directly to a running AEM instance. This ensures that the code running for a given release in any Cloud environment is identical, which eliminates the risk of unintentional code variation on production. As an example, OSGi configuration should be committed to source control rather than managed at runtime via the AEM web console's configuration manager.
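As an illustration, an OSGi configuration that would otherwise be changed in the web console instead lives in the code repository as a file, roughly like this (the module layout, project name, PID and properties below are only placeholders, not your actual project):

ui.config/src/main/content/jcr_root/apps/myproject/osgiconfig/config/com.myproject.core.MyScheduledTask.cfg.json

{
  "scheduler.expression": "0 0 1 * * ?",
  "enabled": true
}

Cloud Manager then builds and deploys that file along with the rest of the code, so every environment runs exactly the same configuration.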
So all deployments have to go through the Cloud Manager pipeline.
Thanks!
The 1 hour build time per environment, resulting in a minimum of 3 hours to go from dev to prod in build time alone for a one-line hotfix, is not acceptable. With Episerver, we could build on stage and then push the same package from stage to prod, without a pointless rebuild for prod. Builds also took minutes rather than hours. From a governance and security point of view, it would be far better to push a tested package from, say, test to stage to prod than to build a potentially different one on each environment, with the associated unacceptable build times. We have no idea how we could optimise for build time, other than removing functionality, which is not an option.
The simple outcome of this unworkable system is that if it takes 4+ hours to put a critical hotfix live, due to the slow build times, we will have to make changes directly to the production branch and push them into production without any testing. We cannot afford to have our site down for 4 hours to make a one-line change in a P1 production emergency; 4 hours = hundreds of thousands in lost revenue. With Episerver we were able to write, build, test and deploy hotfixes within 1-2 hours, with full due diligence and due process. Luckily we only had to do this rarely (e.g. if an exploit was found, or a provider failed unpredictably).
In the more likely case, where we have to build on dev several times before we are ready to push a hotfix, it could take 6-8 hours at an hour per build. Building and testing locally is often ineffective because the local environment is so different from the cloud environment.
Add to this the problem that builds on any environment (including dev and prod) randomly fail due to Adobe-side issues (e.g. a Kubernetes issue), we have to ask Adobe to fix them, and Adobe take anywhere from one week to two months to repair broken environments, and it is a real concern.
Hi,
Yes, Adobe is right, it does not work. When your package contains /apps or /libs, the package install will fail because /apps and /libs are immutable.
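For reference, the limitation applies specifically to those immutable paths. A hypothetical package filter (META-INF/vault/filter.xml) like the sketch below would be rejected at install time because of the /apps entry, while a package filtered only to mutable paths such as /content can still be installed at runtime (the project name is just an example):

<workspaceFilter version="1.0">
    <!-- /apps is part of the immutable repository, so installing this at runtime fails -->
    <filter root="/apps/myproject"/>
    <!-- /content is mutable and can still be deployed as a content package -->
    <filter root="/content/myproject"/>
</workspaceFilter>

Anything under /apps or /libs has to go through the Cloud Manager pipeline instead.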