
Tuesday Tech Bytes: Unleashing AEM Insights Weekly


Community Advisor


Welcome to 'Tuesday Tech Bytes,' your weekly source for expert insights on AEM. Join us as we explore a range of valuable topics, from optimizing your AEM experience to best practices, integrations, success stories, and hidden gems. Tune in every Tuesday for a byte-sized dose of AEM wisdom!

 

Get ready to meet our blog authors:

Introducing Anmol Bhardwaj, an AEM Technical Lead at TA Digital, with seven years of rich experience in AEM and UI. In his spare time, Anmol is the curator of the popular blog, "5 Things," where he brilliantly simplifies intricate AEM concepts into just five key points.

 

And here's Aanchal Sikka, bringing 14 years' expertise in AEM Sites and Assets to our discussions. She's also an active blogger on https://techrevel.blog/, with a recent penchant for in-depth dives into various AEM topics.

 

Together, Anmol and Aanchal will be your guides on an exhilarating journey over the next 8 weeks, exploring themes such as:

  •  Theme 1 - AEM Tips & Tricks
  •  Theme 2 - AEM Best Practices
  •  Theme 3 - AEM Integrations & Success Stories
  •  Theme 4 - AEM Golden Nuggets.

 

We are absolutely thrilled to embark on this journey of learning and growth with all of you.

 

We warmly invite you to engage with our posts by liking, commenting, and sharing. If there are specific topics you'd like us to delve into, please don't hesitate to let us know.

Together, let's ignite discussions, share invaluable insights, and collectively ensure the resounding success of this program!

 

Quicklinks to each tech byte:

 

Kindly switch the sorting option to "Newest to Oldest" for instant access to the most recent content every Tuesday.


Aanchal Sikka

16 Replies


Community Advisor

Adaptive Image Rendering for AEM components

In the realm of Adobe Experience Manager (AEM) development, optimizing images within AEM components is crucial. One highly effective method for achieving this optimization is by using adaptive rendering.

What are Adaptive Images?

Adaptive images are a web development technique that ensures images on a website are delivered in the most suitable size and format for each user’s device and screen resolution. This optimization enhances website performance and user experience by reducing unnecessary data transfer and ensuring images look sharp and load quickly on all devices.

Example:

 
 
 

aanchalsikka_2-1696918183719.png

 

Let’s analyze the example, with a primary emphasis on the srcset attribute for the current blog:

  1. <div data-cmp-is="image" ...>: This is the HTML element generated by the Image component. It has various attributes and data that provide information about the image.
  2. data-cmp-widths="100,200,300,400,500,600,700,800,900,1000,1100,1200,1600": These are different widths at which the image is available. These widths are used to serve the most appropriate image size to the user’s device based on its screen width.
  3. data-cmp-src="/content/wknd/language-masters/en/test/_jcr_content/root/container/feature.coreimg.60{.width}.jpeg/1695628972928/mountain-range.jpeg": This attribute defines the source URL of the image. Notice the {.width} placeholder, which will be dynamically replaced with the appropriate width value based on the user’s device.
  4. data-cmp-filereference="/content/dam/core-components-examples/library/sample-assets/mountain-range.jpg": This is a reference to the actual image file stored on the server.
  5. srcset="...": This attribute specifies a list of image sources with different widths and their corresponding URLs. Browsers use this information to select the best image size to download based on the user’s screen size and resolution.
  6. loading="lazy": This attribute indicates that the image should be loaded lazily, meaning it will only be loaded when it’s about to come into the user’s viewport, improving page load performance.

How Browsers Choose Images with srcset Based on Device Characteristics:

  1. Device Pixel Ratio (DPR): The browser first checks the device’s pixel ratio or DPI (Dots Per Inch), which measures how many physical pixels are used to represent each CSS pixel. Common values are 1x (low-density screens) and 2x (high-density screens, like Retina displays). DPR influences the effective resolution of the device.
  2. Viewport Size: The browser knows the dimensions of the user’s viewport, which is the visible area of the webpage within the browser window. Both the viewport width and height are considered.
  3. Image Size Descriptors: Each source in the srcset list is associated with a width descriptor (e.g., 100w, 200w) representing the image’s width in CSS pixels. These descriptors help the browser determine the size of each image source.
  4. Network Conditions: Browsers may also consider the user’s network conditions, such as available bandwidth, to optimize image loading. Smaller images may be prioritized for faster loading on slower connections.
  5. Calculation of Effective Pixel Size: The browser calculates a “density-corrected effective pixel size” for each image source in the srcset. This calculation involves multiplying the image’s declared width descriptor by the device’s pixel ratio (DPR).
  6. Selecting the Most Appropriate Image: The browser compares the calculated effective pixel sizes to the viewport size. It chooses the image source that best matches the viewport width or height, ensuring that the image is appropriately sized for the user’s screen.
  7. Loading Only One Image: Typically, the browser loads only one image from the srcset list. It selects the image source that is the closest match to the viewport dimensions. This approach optimizes performance and reduces unnecessary data transfer.

Example:

 

 

 

 

<img srcset="image-100w.jpg 100w, image-200w.jpg 200w, image-300w.jpg 300w" alt="Example Images">

 

Suppose the browser detects a device with a DPR of 2x and a viewport width of 400 CSS pixels. In this case the image needs to cover roughly 400 x 2 = 800 device pixels, so the browser selects the candidate that comes closest to that: here image-300w.jpg, the largest source available. Only image-300w.jpg will be loaded, not all three images.

By dynamically selecting the most appropriate image source from the srcset, browsers optimize page performance and ensure that images look sharp on high-resolution screens while minimizing unnecessary data transfer on lower-resolution devices.

Implementing adaptive images for Components

In Adobe Experience Manager (AEM), both ResourceWrapper and data-sly-resource are used for including or rendering content from other resources within your AEM components. However, they serve slightly different purposes and are used in different contexts:

ResourceWrapper:

  • Purpose: ResourceWrapper is a Java class used on the server-side to manipulate and control the rendering of resources within your AEM components.
  • Usage:
    • You use ResourceWrapper when you want to programmatically control how a resource or component is rendered.
    • This approach is also used by WCM Core components like Teaser.
  • Benefit: Utilized as an abstract class, ResourceWrapper provides advantages in terms of reusability, consistency, and streamlined development. In the context of components responsible for displaying images, developers can extend this abstract class and configure a property on the custom component. This approach expedites the development process, guarantees uniformity, simplifies customization, and, in the end, elevates efficiency and ease of maintenance.

data-sly-resource:

  • Purpose: data-sly-resource is an HTL (HTML Template Language) statement used to include and render other resources or components within your HTL templates.
  • Usage:
    • You use data-sly-resource when you want to include and render other AEM components or resources within your HTL templates.
  • Example:
     <sly data-sly-resource="${resource.path @ resourceType='wknd/components/image'}"></sly>

Our focus will be on harnessing the power of the ResourceWrapper class from the Apache Sling API as a crucial tool for this purpose.

Understanding ResourceWrapper

The ResourceWrapper acts as a protective layer for any Resource, automatically forwarding all method calls to the enclosed resource by default.
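
As a tiny illustration of that idea (the resource type shown is the Core Image component's, used purely as an example), overriding a single method on the wrapper is enough to change how renderers see the wrapped resource. This is a minimal sketch, not the class used later in this article:

import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceWrapper;

public class ResourceWrapperSketch {

    // Minimal sketch: wrap an existing resource so that renderers see a different resource type.
    public static Resource asImageResource(Resource resource) {
        return new ResourceWrapper(resource) {
            @Override
            public String getResourceType() {
                // Every other method call is still forwarded to the wrapped resource unchanged.
                return "core/wcm/components/image/v3/image";
            }
        };
    }
}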

 

To put it simply, in order to render an adaptive image in our custom component:

  1. We need to integrate a fileUpload field.
  2. Wrap this component’s resource using ResourceWrapper. The resource type associated with the image component will be added to the Wrapper.
  3. As a result, this newly enveloped resource can be effortlessly presented through the Image component.

It’s a straightforward process!

Now let's get into the details.

 

Step 1: Create an Image Resource Wrapper

The first step is to create an Image Resource Wrapper. This wrapper will be responsible for encapsulating the Image Resource and appending the Image ResourceType to the corresponding value map. This wrapped instance can then be used by Sightly to render the resource using the ResourceType. Please keep in mind that the URLs generated by Sightly depend on the policy of the Image component used.

Link to complete class: ImageResourceWrapper

 

public class ImageResourceWrapper extends ResourceWrapper {
 
    private ValueMap valueMap;
    private String resourceType;
 
    // Constructor to wrap a Resource and set a custom resource type
    public ImageResourceWrapper(@NotNull Resource resource,  String resourceType) {
        super(resource);
 
        if (StringUtils.isEmpty(resourceType)) {
            // Validate that a resource type is provided
            throw new IllegalArgumentException("The " + ImageResourceWrapper.class.getName() + " needs to override the resource type of " +
                    "the wrapped resource, but the resourceType argument was null or empty.");
        }
        this.resourceType = resourceType;
 
        // Create a ValueMapDecorator to manipulate the ValueMap of the wrapped resource
        valueMap = new ValueMapDecorator(new HashMap<>(resource.getValueMap()));
    }
 
....
}

 

 

Step 2: Create an Abstract Class for Reusability

To promote code reusability across components, create an abstract class that can be extended by various Models. Additionally, add utility functions that can be used by Models that extend this abstract class.

Link to Complete class: AbstractImageDelegatingModel

 

public abstract class AbstractImageDelegatingModel extends AbstractComponentImpl {
 
    /**
     * Component property name that indicates which Image Component will perform the image rendering for composed components.
     * When rendering images, the composed components that provide this property will be able to retrieve the content policy defined for the
     * Image Component's resource type.
     */
    public static final String IMAGE_DELEGATE = "imageDelegate";
    private static final Logger LOGGER = LoggerFactory.getLogger(AbstractImageDelegatingModel.class);
 
    // Resource to be wrapped by the ImageResourceWrapper
    private Resource toBeWrappedResource;
 
    // Resource that will handle image rendering
    private Resource imageResource;
 
    /**
     * Sets the resource to be wrapped by the ImageResourceWrapper.
     *
     * @param toBeWrappedResource The resource to be wrapped.
     */
    protected void setImageResource(@NotNull Resource toBeWrappedResource) {
        this.toBeWrappedResource = toBeWrappedResource;
    }
 
    /**
     * Retrieves the resource responsible for image rendering. If not set, it creates an ImageResourceWrapper based on the
     * configured imageDelegate property.
     *
     * @return The image resource.
     */
    @JsonIgnore
    public Resource getImageResource() {
        if (imageResource == null && component != null) {
            String delegateResourceType = component.getProperties().get(IMAGE_DELEGATE, String.class);
            if (StringUtils.isEmpty(delegateResourceType)) {
                LOGGER.error("In order for image rendering delegation to work correctly, you need to set up the imageDelegate property on" +
                        " the {} component; its value has to point to the resource type of an image component.", component.getPath());
            } else {
                imageResource = new ImageResourceWrapper(toBeWrappedResource, delegateResourceType);
            }
        }
        return imageResource;
    }
 
    /**
     * Checks if the component has an image.
     *
     * The component has an image if the '{@value DownloadResource#PN_REFERENCE}' property is set and the value
     * resolves to a resource, or if the '{@value DownloadResource#NN_FILE}' child resource exists.
     *
     * @return True if the component has an image, false if it does not.
     */
 
    protected boolean hasImage() {
        return Optional.ofNullable(this.resource.getValueMap().get(DownloadResource.PN_REFERENCE, String.class))
                .map(request.getResourceResolver()::getResource)
                .orElseGet(() -> request.getResource().getChild(DownloadResource.NN_FILE)) != null;
    }
 
    /**
     * Initializes the image resource if the component has an image.
     */
    protected void initImage() {
        if (this.hasImage()) {
            this.setImageResource(request.getResource());
        }
    }
}

 

 

Step 3: Create a Model for Sightly Rendering

Next, create a Model class that will be used by Sightly to render the image. This Model should extend the abstract class created in step 2.

Link to Complete class: FeatureImpl

Link to Complete interface: Feature

 

@Model(adaptables = SlingHttpServletRequest.class, adapters = Feature.class, defaultInjectionStrategy = DefaultInjectionStrategy.OPTIONAL)
public class FeatureImpl extends AbstractImageDelegatingModel implements Feature {
 
    // Injecting the 'jcr:title' property from the ValueMap
    @ValueMapValue(name = "jcr:title")
    private String title;
 
    /**
     * Post-construct method to initialize the image resource if the component has an image.
     */
    @PostConstruct
    private void init() {
        initImage();
    }
 
    public String getTitle() {
        return title;
    }
}

 

 

Step 4: Sightly Code for Rendering

In your Sightly code, use the Image resource type and the model created in step 3 to render the image. This is where you can customize how the image is displayed based on your project’s requirements.

 

<sly data-sly-template.image="${@ feature}">
    <div class="cmp-feature__image" data-sly-test="${feature.imageResource}" data-sly-resource="${feature.imageResource =disabled}"></div>
</sly>

 

These steps provide a structured approach to implement adaptive images for custom components using the WCM Core Image component in Adobe Experience Manager. Customize the Model and Sightly code as needed to suit your specific project requirements and image rendering policies.

 

Step 5: Configure Image Resource Type

In this step, you will configure the Image Resource Type to be used in the component’s dialog. This involves setting the imageDelegate property to a specific value, such as techrevel/components/image, within the component’s configuration. This definition is crucial as it specifies which Image Resource Type will be utilized for rendering.

Link to the component in ui.apps: Feature

<?xml version="1.0" encoding="UTF-8"?>
<jcr:root xmlns:jcr="http://www.jcp.org/jcr/1.0" xmlns:cq="http://www.day.com/jcr/cq/1.0" xmlns:sling="http://sling.apache.org/jcr/sling/1.0"
    jcr:primaryType="cq:Component"
    jcr:title="Feature"
    componentGroup="Techrevel AEM Site - Content"
    sling:resourceSuperType="core/wcm/components/image/v3/image"
    imageDelegate="techrevel/components/image"/>

 

Step 6: Add Image Upload Field to the Component’s Dialog

In the next step, you’ll enhance your component’s functionality by incorporating a file upload field into its dialog. This field allows you to easily upload images either from your local system or directly from AEM’s digital assets repository.

Link to the complete code of component’s dialog

 

<file
	granite:class="cmp-image__editor-file-upload"
	jcr:primaryType="nt:unstructured"
	sling:resourceType="cq/gui/components/authoring/dialog/fileupload"
	class="cq-droptarget"
	enableNextGenDynamicMedia="{Boolean}true"
	fileNameParameter="./fileName"
	fileReferenceParameter="./fileReference"
	mimeTypes="[image/gif,image/jpeg,image/png,image/tiff,image/svg+xml]"
	name="./file"/>

 

Step 7: Verify Component Availability

Ensure the component is available for use in your templates, allowing it to be added to pages. Additionally, it's crucial to verify that you have appropriately configured the policy of the Image component. The policy of the Image component plays a pivotal role in generating the adaptive image's URL, ensuring it aligns with your project's requirements for image optimization and delivery.

At this point, with the knowledge and steps outlined in this guide, you should be fully equipped to configure images within your custom component.

 

aanchalsikka_4-1696918966624.png

 

When you inspect the URL of the image, it should now appear in a format optimized for adaptive rendering, ensuring that it seamlessly adapts to various device contexts and screen sizes.

aanchalsikka_5-1696919003469.png

 

This level of image optimization not only enhances performance but also significantly contributes to an improved user experience.

 


Aanchal Sikka


Level 9

Interesting article. Will this load an image based on the resolution? For example, I have divided my page into 4 columns and I want to add images into each column. You can imagine the size of one column; ideally it is small. Now I have an image which is 2K resolution and 15MB in size, but as per the page design it needs just 300x300. Will this approach load only a 300x300 image instead of loading the 2K-resolution image?


Community Advisor

Hello @Mario248 

 

Yes, it should. 

AEM returns all the available srcset candidates, and the browser does the work of picking the best match.


Aanchal Sikka


Level 4

I don't think it will do that; it will load the image size depending on your viewport and DPR, not on the width of the container in which the image is used.
For example, if your screen is 400px x 700px with a DPR of 2 and you have divided the screen into 4 columns of 100px each, then the image needed for each column is 100px x 2 = 200px wide and 700px x 2 = 1400px tall, which would be sufficient for a good-quality image. But the browser doesn't know this, since we control the size using CSS, so
a width of 400px x 2 = 800px and a height of 700px x 2 = 1400px will be used by the browser. So the image nearest to 800px will be used for each column.


Community Advisor

AEM Tips & Tricks

 

Hello Everyone

Excited & thrilled to embark on this journey of learning and growth with all of you.
Following this week's theme for Tuesday Tech Bytes.

In this article, we're diving into Tips & Tricks for AEM development. Whether you're an AEM veteran or just starting your digital journey, hopefully you'll come away with something new from this.
 

 
AdaptTo(): The Not-So-Smooth Path to Sling Model Instantiation
 
In AEM development, the `adaptTo()` method is a widely used and reliable method, but it is not always the best fit for every job.
 
It's commonly used for translating objects and instantiating Sling Models, but there's a twist. As it turns out, relying solely on `adaptTo()` can lead to some not-so-ideal situations.
 
Let's say, you're tasked with translating a Resource into a Node, and you reach for the familiar `adaptTo(Node.class)` call.
Simple, right? Well, not always.
 
The trouble begins when you find yourself in one of these tricky scenarios:
  • The implementation doesn't support the target type.
  • An adapter factory handling the conversion isn't active, possibly due to missing service references.
  • Internal conditions within AEM fail.
  • Required services are simply unavailable.
 
In these situations, instead of throwing exceptions, `adaptTo()` remains silent and returns `null`.
You might think, "No big deal, I'll just add some null checks and handle it gracefully."
But here's the catch: The absence of exceptions means that we often miss out on essential logs and traces that could help us diagnose why a Sling Model is failing.
 
But wait, there's a better way – one that doesn't require you to add multiple null checks everywhere.
That is SLING's ModelFactory.
 
Since Sling Models 1.2.0, there's been a game-changing alternative for instantiating models.
The OSGi service ModelFactory provides a method that throws exceptions when things go awry.
This is a departure from the Javadoc contract of `adaptTo()`, but it's a change for the better.
With ModelFactory, you can wave goodbye to those null checks and gain clear insights into why your model instantiation failed.
 
Here's how it works:
 
 
try {
    MyModel model = modelFactory.createModel(object, MyModel.class);
} catch (Exception e) {
    // Display an error message explaining why the model couldn't be instantiated.
    // The exception contains valuable information:
    // MissingElementsException - When no injector could provide required values with the given types
    // InvalidAdaptableException - If the given class can't be instantiated from the adaptable (a different adaptable than expected)
    // ModelClassException - If model instantiation failed due to missing annotations, reflection issues, lack of a valid constructor, unregistered model as an adapter factory, or issues with post-construct methods
    // PostConstructException - In case the post-construct method itself throws an exception
    // ValidationException - If validation couldn't be performed for some reason (e.g., no validation information available)
    // InvalidModelException - If the model type couldn't be validated through model validation
}
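
For completeness, here is a minimal sketch of where such a `modelFactory` reference might come from, assuming an OSGi component and reusing the hypothetical `MyModel` from the snippet above (the class name `MyModelCreator` is illustrative):

import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.models.factory.ModelFactory;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Component(service = MyModelCreator.class)
public class MyModelCreator {

    private static final Logger LOG = LoggerFactory.getLogger(MyModelCreator.class);

    // ModelFactory is an OSGi service shipped with Sling Models (API 1.2.0+)
    @Reference
    private ModelFactory modelFactory;

    public MyModel create(SlingHttpServletRequest request) {
        try {
            // Unlike adaptTo(), this throws a descriptive exception on failure instead of returning null
            return modelFactory.createModel(request, MyModel.class);
        } catch (RuntimeException e) {
            LOG.error("Could not instantiate MyModel for {}", request.getResource().getPath(), e);
            return null;
        }
    }
}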
 
But that's not all.
ModelFactory offers additional methods for checking whether a class is indeed a model (bearing the model annotation) and whether a class can be adapted from a given adaptable (check out `canCreateFromAdaptable`, `isModelClass`, `isModelAvailableForResource`, and more).
 
In a nutshell, while `adaptTo()` is still handy for object translations, when it comes to SLING Model instantiation, ModelFactory is the savvy choice.
 
Want to dive deeper?
Check out the official documentation for ModelFactory.

Reduce code duplication and complexity, and create an atomic structure, by creating an interface for similar SLING Models
 
Let's look at these 2 SLING Models:
 
@Model(adaptables = Resource.class)
public class FootballArticleModel {

    @Inject
    private String title;

    @Inject
    private Date publicationDate;

    // Getter methods for title and publicationDate
}
 
@Model(adaptables = Resource.class)
public class BasketballArticleModel {

    @Inject
    private String title;

    @Inject
    private Date publicationDate;

    // Getter methods for title and publicationDate
}
 
In this approach, you have separate Sling Models for Football and Basketball Articles, which can lead to code duplication and complexity if these articles share common properties.
 
// Common Sports Article Interface
public interface SportsArticle {
    String getTitle();
    Date getPublicationDate();
}

// Sling Model for Football Article
@Model(adaptables = Resource.class)
public class FootballArticleModel implements SportsArticle {

    @Inject
    private String title;

    @Inject
    private Date publicationDate;

    // Implementing methods from the SportsArticle interface
    public String getTitle() {
        return title;
    }

    public Date getPublicationDate() {
        return publicationDate;
    }
}
 
You can make a similar Model for Basketball Articles.
  • If you need to update the common properties' behavior (e.g., `getTitle`) for all sports articles, you can do so in one place (the interface) instead of modifying multiple Sling Models.
 
  • When working with instances of these models in your AEM components, you can adapt them to the `SportsArticle` interface, simplifying your code and promoting consistency across various sports article types.
 
  • This approach promotes code reusability, maintainability, and flexibility in your AEM project, making it easier to handle different types of sports-related content with shared properties.
 
This is just a simple and small example, but implementing this for a complex structure can help you achieve atomic design even in SLING Models.

Why use Resource Type Mapping in SLING Servlets?
 
Instead of specifying exact paths for servlet mappings, map servlets to resource types, making them more flexible and reusable (a short sketch follows the list below). But why?
 
  1. Enhanced Access Control: Servlets bound to specific paths often lack the flexibility to be effectively access-controlled using the default JCR repository Access Control Lists (ACLs). On the other hand, resource-type-bound servlets can be seamlessly integrated into your access control strategy, providing a more secure environment.

  2. Suffix Handling: Path-bound servlets are, by design, limited to a single path. In contrast, resource type-based mappings open the door to handling various resource suffixes elegantly. This versatility allows you to serve diverse content and functionalities without the need for multiple servlet registrations.

  3. Avoid Unexpected Consequences: When a path-bound servlet becomes inactive (e.g., due to a missing or non-started bundle), it can lead to unintended consequences, like POST requests creating nodes at unexpected paths. This can introduce unexpected complexities and issues in your application. Resource type mappings provide better control and predictability.

  4. Developer-Friendly Transparency: For developers working with the repository, path-bound servlet mappings may not be readily apparent. In contrast, resource type mappings provide a more transparent and intuitive way of understanding how servlets are associated with specific resources. This transparency simplifies development and troubleshooting.
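
As referenced above, here is a hedged sketch of what a resource-type-bound registration can look like. The resource type `myproject/components/page`, the selector/extension and the JSON output are illustrative assumptions, not values from this article:

import java.io.IOException;

import javax.servlet.Servlet;

import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.api.SlingHttpServletResponse;
import org.apache.sling.api.servlets.HttpConstants;
import org.apache.sling.api.servlets.SlingSafeMethodsServlet;
import org.apache.sling.servlets.annotations.SlingServletResourceTypes;
import org.osgi.service.component.annotations.Component;

@Component(service = Servlet.class)
@SlingServletResourceTypes(
        resourceTypes = "myproject/components/page",   // bound to a resource type, not a fixed path
        methods = HttpConstants.METHOD_GET,
        selectors = "data",
        extensions = "json")
public class PageDataServlet extends SlingSafeMethodsServlet {

    @Override
    protected void doGet(SlingHttpServletRequest request, SlingHttpServletResponse response) throws IOException {
        // The servlet resolves for any resource of the bound type,
        // e.g. /content/site/somepage.data.json, so JCR ACLs on the resource still apply.
        response.setContentType("application/json");
        response.getWriter().write("{\"path\":\"" + request.getResource().getPath() + "\"}");
    }
}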
 

Implement asynchronous processing for non-blocking tasks to free up server resources and improve responsiveness.
 
The concept is simple enough to understand. So let's look at how to implement it.
We will create a simple trigger and consumer component.
 
 
Job Trigger Component [ MyEventTrigger ]
@Component(service = MyEventTrigger.class)
public class MyEventTrigger {

    @Reference
    private JobManager jobManager;

    public void triggerAsyncEvent() {
        // Topic that identifies the asynchronous task
        String jobTopic = "my/async/job/topic";

        // Optional payload: create a property map and pass it to the job
        Map<String, Object> properties = new HashMap<>();
        properties.put("payload", "some value");

        // Add the job to the queue for asynchronous processing
        jobManager.addJob(jobTopic, properties);
    }
}
MyEventTrigger triggers an asynchronous event by adding a job with the topic "my/async/job/topic" to the job queue using the jobManager.addJob method.
 
 
Creating a Job Consumer Component [ MyJobConsumer ]
 
@Component(
    service = JobConsumer.class,
    property = {
        JobConsumer.PROPERTY_TOPICS + "=my/async/job/topic"
    }
)
public class MyJobConsumer implements JobConsumer {

    private final Logger logger = LoggerFactory.getLogger(getClass());

    @Override
    public JobResult process(Job job) { // The job is passed as an argument; the property map set by the trigger is available from it.
        try {
            logger.info("Processing asynchronous job...");
            // Your business logic for processing the job goes here
            return JobConsumer.JobResult.OK;
        } catch (Exception e) {
            logger.error("Error processing the job: " + e.getMessage(), e);
            return JobConsumer.JobResult.FAILED;
        }
    }
}
 
MyJobConsumer listens for jobs with the specified topic "my/async/job/topic" and processes them asynchronously.
 
Your specific business logic should be implemented in the process method.
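
Inside process(), the properties added by the trigger (here the hypothetical "payload" key from the sketch above) can be read back from the job, for example:

// Inside MyJobConsumer.process(Job job):
String payload = job.getProperty("payload", String.class);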

Use Sling request filters to perform pre-processing or post-processing tasks without modifying servlet code.
 
Let's say, you want to add a custom suffix to the URLs of specific pages for tracking purposes, but you don't want to modify the servlet code for each page.
 
Instead, you want to handle this at the request level.
 
import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

import org.apache.sling.api.SlingHttpServletRequest;
import org.osgi.service.component.annotations.Component;

// A Sling request filter is a plain javax.servlet.Filter registered with Sling-specific service properties.
@Component(
    service = Filter.class,
    property = {
        "sling.filter.scope=REQUEST",
        "sling.filter.pattern=/content/mywebsite/en.*" // Define the URL pattern for your pages
    }
)
public class CustomSuffixRequestFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        SlingHttpServletRequest slingRequest = (SlingHttpServletRequest) request;

        // Get the original request URL
        String originalURL = slingRequest.getRequestURI();

        // Add a custom suffix to the URL
        String modifiedURL = originalURL + ".customsuffix";

        // Create a new request with the modified URL.
        // ModifiedRequest is a custom wrapper (not shown here), e.g. extending
        // SlingHttpServletRequestWrapper and overriding getRequestURI().
        SlingHttpServletRequest modifiedRequest = new ModifiedRequest(slingRequest, modifiedURL);

        // Continue the request chain with the modified request
        chain.doFilter(modifiedRequest, response);
    }

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
        // Initialization code if needed
    }

    @Override
    public void destroy() {
        // Cleanup code if needed
    }
}
 
This is just a small example, but you can achieve many things with this:
  • Authentication or validation of incoming requests
  • Applying checks across all servlet requests coming into the server
  • Checking the resource type, path, or the page a request originates from, etc.

It can act as an extra layer of security or an additional layer of functionality before a request reaches a servlet.


Use Custom SLING Injectors to inject essential information into your SLING Models that is not present in the component properties
 
Imagine you need a way to effortlessly inject custom request headers into your AEM components.
These headers might carry essential information or flags that your components rely on, but manually parsing them every time is not very efficient.
 
Here we can use custom SLING injectors
 
1. Create a Custom Annotation

@Target({ElementType.FIELD})
@Retention(RetentionPolicy.RUNTIME)

// Declaration as a custom Sling Models inject annotation
@InjectAnnotation

// Identifier that must match the name returned by the injector class below
@Source("custom-header")
public @interface CustomHeaderValue {
    String property() default "";
    InjectionStrategy injectionStrategy() default InjectionStrategy.DEFAULT;
}
 
2. Create a Custom Injector for Request Headers:
 
// Implement a custom injector for request headers by implementing the `org.apache.sling.models.spi.Injector` interface.
@Component(property = Constants.SERVICE_RANKING + ":Integer=" + Integer.MAX_VALUE, service = Injector.class)
public class CustomHeaderInjector implements Injector {

    @Override
    public String getName() {
        return "custom-header"; // identifier referenced by @Source above
    }

    @Override
    public Object getValue(Object adaptable, String fieldName, Type type, AnnotatedElement element,
            DisposalCallbackRegistry callbackRegistry) {
        if (adaptable instanceof SlingHttpServletRequest) {
            SlingHttpServletRequest request = (SlingHttpServletRequest) adaptable;
            String customHeaderValue = request.getHeader("X-Custom-Header");
            if (customHeaderValue != null) {
                return customHeaderValue;
            }
        }
        return null;
    }
}
 
3. Inject the Custom Header into Your Model:
In your SLING Model, use the custom annotation to seamlessly inject the custom header value into your component.
 
@Model(adaptables = SlingHttpServletRequest.class)
public class CustomHeaderComponent {

    @CustomHeaderValue
    private String customHeaderValue; // This will be automatically populated by the custom injector

    // Your component logic here, using customHeaderValue
}
 
 
4. Use the Injected Custom Header Value:
 
Now, you can tap into the customHeaderValue within your SLING Model, accessing the custom request header without the hassle of manual parsing.
 
 
public class CustomHeaderComponent {

    @CustomHeaderValue
    private String customHeaderValue;

    public String getCustomHeaderValue() {
        return customHeaderValue;
    }

    // Other methods to work with the custom header value
}
 
In this scenario, we've conjured up a custom injector, CustomHeaderInjector, to inject custom request headers into your AEM components.
 
This kind of custom injection is most commonly used to inject cookies and headers into components, to drive different functionality based on them.
 
Official Documentation:
You can also refer to this GitHub for sample code:
 
 

There are many things to learn and explore in AEM, and many more tips and tricks beyond these. These are tips I have not seen implemented as often; they are helpful but lesser known, in my opinion.
 
So, I hope you've gathered some valuable insights to enhance your knowledge of AEM.
Next week I will be writing another part that focuses more on testing and authoring tricks, along with one or two development tips as well.
 
Thanks!

Anmol Bhardwaj

Anmol_Bhardwaj_0-1696919854786.png

 

 


Community Advisor

AEM Tips & Tricks

 

This is the second part of the tips and tricks article for AEM. As I said last time, this part will focus on authoring and testing, along with development.
To start off, the first feature is something provided by AEM OOTB, and it is a BIG help for both authors & developers.
 
 
Developer Mode in AEM:
 
  • Discover Errors with Ease, Navigate Nodes in a Click
 
Now this is a pretty common feature, known to many but used by few. I wanted to share some advantages of AEM Developer Mode.
 
Developer Perspective:
 
With Developer Mode, you gain superpowers in AEM development:
 
  • Effortless Error Detection: You can find errors in your SLING Models without leaving the page, in the Errors section of Developer Mode; it shows the entire stack trace.
     
  • Point-and-Click Node Navigation: You can easily navigate to your component node or even its scripts in CRX through the hyperlink present in Developer Mode.
     
     

    image.png

  • Policies Applied to Component: With a click of a button, you can check what policies are added to the particular component.
  • Component Details: You can even check out your component details via the hyperlink present in Developer Mode. It gives very clear and detailed information about your component (including policies, live usage & documentation).
     
    Author Perspective:
     
  • Live Usage / Component Report: You can also get all the instances and references where the selected component is being used. This is really helpful for developers, BAs & authors.
     
  • Component Documentation & Authoring Guide: The documentation tab in the component details will point to the component documentation, which can be customised to add an authoring guide to the component, helping authors author the component better.

image (1).png

 

 
Unit Tests in AEM
 
With the introduction of AEMaaCS, code coverage became part of the Adobe CI/CD pipeline:
Adobe Cloud Manager integrates unit test execution and code coverage reporting into its CI/CD pipeline to help encourage and promote the best practice of unit testing AEM code.
 
Now, you will find many articles on how to write test cases for SLING models, Configs, Util classes, etc.
So, I am not going to write about that. Instead, I want to approach this a little differently and explain where and when to use 4 of the most common frameworks used when writing test cases:
(JUnit / Mockito / Sling Mocks / AEM Mocks)
 
Mocking Sling Resources and/or JCR nodes
With the presence of AEM Mocks, there should not be any need to manually mock Sling Resources and JCR nodes.
It’s a lot of work to do that, especially if you compare it to loading a JSON structure into an in-memory repository. Same with ResourceResolvers and JCR sessions. So don’t mock Sling resources and JCR nodes! That’s the case for AemMocks!
 
Using setters in services to set references
 
When you want to test services, the AEM Mocks framework handles injections as well: you just need to use the default constructor of your service to instantiate it, and then pass it to the context.registerInjectActivate() method. If required, create the referenced services beforehand as mocks and register them as well. AEM Mocks comes with ways to test OSGi services and components in a very natural way (including activation and injection of references), so please use it.
 
Junit vs Mockito vs SLING Mocks vs AEM Mocks
 
 
@ExtendWith({MockitoExtension.class, AemContextExtension.class})
class MyComponentTest {

    private final AemContext context = new AemContext();

    @Mock
    private ExternalDataService externalDataService;

    @BeforeEach
    void setUp() {
        // Set up AEM context with necessary resources and sling models
        context.create().resource("/content/my-site");
        context.addModelsForClasses(MyContentModel.class);
        context.registerService(ExternalDataService.class, externalDataService);
    }

    @Test
    void testMyComponentWithData() {
        // Mock behavior of the external service
        when(externalDataService.getData("/content/my-site")).thenReturn("Mocked Data");

        // Create an instance of your component
        MyComponent component = context.request().adaptTo(MyComponent.class);

        // Test component logic when data is available
        String result = component.renderContent();

        // Verify interactions and assertions
        verify(externalDataService).getData("/content/my-site");
        assertEquals("Expected Result: Mocked Data", result);
    }

    @Test
    void testMyComponentWithoutData() {
        // Mock behavior of the external service when data is not available
        when(externalDataService.getData("/content/my-site")).thenReturn(null);

        // Create an instance of your component
        MyComponent component = context.request().adaptTo(MyComponent.class);

        // Test component logic when data is not available
        String result = component.renderContent();

        // Verify interactions and assertions
        verify(externalDataService).getData("/content/my-site");
        assertEquals("Fallback Result: No Data Available", result);
    }
}
In this example:
 
- We have two test cases, each focusing on a different scenario: one when data is available from the external service and one when data is not available.
- We use JUnit 5 as the test framework.
- Mockito is employed to mock the behavior of `ExternalDataService`, an external dependency.
- Apache Sling Mocks is utilized via `AemContext` to set up an AEM-like environment with a test resource and Sling models for `MyContentModel`.
- The AEM Mocks Test Framework (by io.wcm) is used to add Sling models for `MyContentModel` and simulate AEM behavior.
 
JUnit 5
  • Purpose and Use Cases: Standard unit testing framework for writing test cases and assertions.
  • When to Use: Writing unit tests for Java classes and components.
  • When to Avoid: Not suitable for simulating AEM-specific environments or mocking AEM services.

Mockito
  • Purpose and Use Cases: Mocking dependencies and simulating behavior.
  • When to Use: Mocking external dependencies (e.g., services, databases); verifying interactions between code and mocks.
  • When to Avoid: Not designed for simulating AEM-like environments or AEM-specific behaviors; cannot create AEM resources or mock AEM-specific functionalities.

Apache Sling Mocks
  • Purpose and Use Cases: Simulating AEM-like behavior and Sling-specific functionality.
  • When to Use: Simulating an AEM-like environment for testing AEM components and services; resource creation and management.
  • When to Avoid: Not suitable for pure unit testing of Java classes or external dependencies; overkill for basic unit tests without AEM-specific behavior.

AEM Mocks Test Framework
  • Purpose and Use Cases: Comprehensive AEM unit testing with Sling models.
  • When to Use: Testing AEM components, services, and behaviors in an AEM-like environment; simulating AEM-specific behaviors and resources.
  • When to Avoid: May introduce complexity for simple unit tests of Java classes or external dependencies; overhead if you don't need to simulate AEM behaviors or use Sling models.
 

 

 
Keyboard Shortcuts: Boost Your Efficiency in AEM
 
When navigating Adobe Experience Manager (AEM), mastering keyboard shortcuts can be a game-changer. These shortcuts save you time and effort, making your workflow smoother.
 
So, let's unlock the power of shortcuts by pressing '?' in the Sites console.
image (2).png
 
Location: Any edit window mode

  • Shortcut: Ctrl-Shift-m
  • Description: Toggle between Preview and the selected mode

 

Implement custom health checks as OSGi services to monitor specific aspects of your AEM application's health and performance.
 
You might have noticed that there are health checks present OOTB in AEM, but did you know that they can be imported and implemented in your custom code?
Also, you can add a new tile to that health check console through code.
 
Note: The console is available at
 
This can be done through the Apache Felix Health API.
 
Let's understand this by the following requirement:
 
You want to create a custom health check to monitor the health of AEM's indexing service, specifically checking if the AEM indexing queue is not overloaded.
 
Step 1: Add the following new dependencies to your Maven project:
<dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.apache.felix.healthcheck.api</artifactId>
    <version>2.0.4</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.apache.felix.healthcheck.annotation</artifactId>
    <version>2.0.0</version>
    <scope>provided</scope>
</dependency>
 
 
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.apache.felix.hc.generalchecks;
import static org.apache.felix.hc.api.FormattingResultLog.bytesHumanReadable;
import java.io.File;
import java.util.Arrays;
import org.apache.felix.hc.annotation.HealthCheckService;
import org.apache.felix.hc.api.FormattingResultLog;
import org.apache.felix.hc.api.HealthCheck;
import org.apache.felix.hc.api.Result;
import org.apache.felix.hc.api.ResultLog;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.ConfigurationPolicy;
import org.osgi.service.metatype.annotations.AttributeDefinition;
import org.osgi.service.metatype.annotations.Designate;
import org.osgi.service.metatype.annotations.ObjectClassDefinition;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@HealthCheckService(name = DiskSpaceCheck.HC_NAME)
@Component(configurationPolicy = ConfigurationPolicy.REQUIRE, immediate = true)
@Designate(ocd = DiskSpaceCheck.Config.class, factory = true)
public class DiskSpaceCheck implements HealthCheck {
private static final Logger LOG = LoggerFactory.getLogger(DiskSpaceCheck.class);
public static final String HC_NAME = "Disk Space";
public static final String HC_LABEL = "Health Check: " + HC_NAME;
@ObjectClassDefinition(name = HC_LABEL, description = "Checks the configured path(s) against the given thresholds")
public @interface Config {
@AttributeDefinition(name = "Name", description = "Name of this health check")
String hc_name() default HC_NAME;
@AttributeDefinition(name = "Tags", description = "List of tags for this health check, used to select subsets of health checks for execution e.g. by a composite health check.")
String[] hc_tags() default {};
@AttributeDefinition(name = "Disk used threshold for WARN", description = "in percent, if disk usage is over this limit the result is WARN")
long diskUsedThresholdWarn() default 90;
@AttributeDefinition(name = "Disk used threshold for CRITICAL", description = "in percent, if disk usage is over this limit the result is CRITICAL")
long diskUsedThresholdCritical() default 97;
@AttributeDefinition(name = "Paths to check for disk usage", description = "Paths that is checked for free space according the configured thresholds")
String[] diskPaths() default { "." };
@AttributeDefinition
String webconsole_configurationFactory_nameHint() default "{hc.name}: {diskPaths} used>{diskUsedThresholdWarn}% -> WARN used>{diskUsedThresholdCritical}% -> CRITICAL";
}
private long diskUsedThresholdWarn;
private long diskUsedThresholdCritical;
private String[] diskPaths;
@Activate
protected void activate(final Config config) {
diskUsedThresholdWarn = config.diskUsedThresholdWarn();
diskUsedThresholdCritical = config.diskUsedThresholdCritical();
diskPaths = config.diskPaths();
LOG.debug("Activated disk usage HC for path(s) {} diskUsedThresholdWarn={}% diskUsedThresholdCritical={}%", Arrays.asList(diskPaths),
diskUsedThresholdWarn, diskUsedThresholdCritical);
}
@Override
public Result execute() {
FormattingResultLog log = new FormattingResultLog();
for (String diskPath : diskPaths) {
File diskPathFile = new File(diskPath);
if (!diskPathFile.exists()) {
log.warn("Directory '{}' does not exist", diskPathFile);
continue;
} else if (!diskPathFile.isDirectory()) {
log.warn("Directory '{}' is not a directory", diskPathFile);
continue;
}
double total = diskPathFile.getTotalSpace();
double free = diskPathFile.getUsableSpace();
double usedPercentage = (total - free) / total * 100d;
String totalStr = bytesHumanReadable(total);
String freeStr = bytesHumanReadable(free);
String msg = String.format("Disk Usage %s: %.1f%% of %s used / %s free", diskPathFile.getAbsolutePath(),
usedPercentage,
totalStr, freeStr);
Result.Status status = usedPercentage > this.diskUsedThresholdCritical ? Result.Status.CRITICAL
: usedPercentage > this.diskUsedThresholdWarn ? Result.Status.WARN
: Result.Status.OK;
log.add(new ResultLog.Entry(status, msg));
}
return new Result(log);
}
}

If you look at this code, which is the OOTB Health Check for Disk Space Usage, you will see that most of the properties it requires are actually supplied through OSGi configs.
 
You can create any kind of Health Check, since we are writing it in Java; you can pull in custom libraries as well and use them in combination with each other. You can make it more flexible by letting users configure things through OSGi configs, and you can even use context-aware configuration; basically, the possibilities are endless.
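
To connect this back to the indexing-queue requirement above, a minimal custom health check skeleton might look like the sketch below. How the queue depth is actually read (JMX, metrics, etc.) is left as a stub, and the threshold is a made-up value:

import org.apache.felix.hc.annotation.HealthCheckService;
import org.apache.felix.hc.api.FormattingResultLog;
import org.apache.felix.hc.api.HealthCheck;
import org.apache.felix.hc.api.Result;
import org.osgi.service.component.annotations.Component;

@HealthCheckService(name = "Indexing Queue Check")
@Component(immediate = true)
public class IndexingQueueHealthCheck implements HealthCheck {

    // Hypothetical threshold; in a real check this would come from an OSGi config, as in the example above
    private static final long MAX_QUEUE_SIZE = 1000;

    @Override
    public Result execute() {
        FormattingResultLog log = new FormattingResultLog();

        long queueSize = readIndexingQueueSize();
        if (queueSize > MAX_QUEUE_SIZE) {
            log.warn("Indexing queue is overloaded: {} entries (threshold {})", queueSize, MAX_QUEUE_SIZE);
        } else {
            log.info("Indexing queue size is healthy: {} entries", queueSize);
        }
        return new Result(log);
    }

    private long readIndexingQueueSize() {
        // Stub: replace with your actual lookup (e.g. a JMX MBean or repository statistic)
        return 0;
    }
}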
 
 
Sample health checks using Apache Felix Health API:
 
 
Thanks!

Anmol Bhardwaj

Anmol_Bhardwaj_0-1697523578731.png

 


Community Advisor

Basic Guidelines: Content Fragment Models and GraphQL Queries for AEM Headless Implementation

Unlocking the potential of headless content delivery in Adobe Experience Manager (AEM) is a journey that begins with a solid foundation in Content Fragment Models (CFM) and GraphQL queries. In this blog, we’ll embark on this journey and explore the best practices and guidelines for designing CFMs and crafting GraphQL queries that empower your AEM headless implementation.

Guidelines for Content Fragment Models

  • Use Organism, Molecule, and Atom (OMA) model for structuring Content Fragment Models. It provides a systematic way to organize and model content for greater flexibility and reusability.
    • Organism: Organisms represent high-level content entities or content types. Each Organism corresponds to a specific type of content in your system, such as articles, products, or landing pages. Organisms have their own Content Fragment Models, defining the structure and properties of that content type. Example:
      • Organism: “Article”
      • Content Fragment Model: “Article Content Fragment Model”
    • Molecule: Molecules are reusable content components that make up Organisms. They represent smaller, self-contained pieces of content that are combined to create Organisms. Molecules have their own Content Fragment Models to define their structure. Example:
      • Molecule: “Author Block” (includes author name, bio, and profile picture)
      • Content Fragment Model: “Author Block Content Fragment Model”
    • Atom: Atoms are the smallest content elements or data types. They represent individual pieces of content that are used within Molecules and Organisms. Example:
      • Atom: “Text” (represents a single text field) in CFM
  • Relationships: Identify relationships between CFMs that reflect the relationships between different types of content on your pages. Also, GraphQL’s strength lies in its ability to navigate relationships efficiently. Ensure that your CFMs and GraphQL schema capture these relationships accurately. Use GraphQL’s nested queries to request related data when needed. For example, an “Author” CFM might have a relationship with an “Article” CFM to indicate authorship.
  • Page Components Correspondence: Identify the components or sections within your web pages. Each of these components should have a corresponding CFM. For example, if your pages consist of article content, author details, and related articles, create CFMs for “Article”, “Author” and “Related Articles” to match these page components.
  • Hierarchy and Nesting: Consider the hierarchy of content within pages. Some pages may have nested content structures, such as sections within articles or tabs within product descriptions. Create CFMs that allow for nesting of content fragments, ensuring you can represent these hierarchies accurately.
  • Manage the number of content fragment models effectively: When numerous content fragments share the same model, GraphQL list queries can become resource-intensive. This is because all fragments linked to a shared model are loaded into memory, consuming time and memory resources. Filtering can only be applied after loading the complete result set into memory, which may lead to performance issues, even with small result sets. The key is to control the number of content fragment models to minimize resource consumption and enhance query performance.
  • Multifield in Content Fragment Models: Adobe’s out-of-the-box (OOTB) offerings include multifields for fundamental data types such as text, content reference, and fragment reference. However, in cases where more intricate composite multifields are required, each set should be established as an individual content fragment. Subsequently, these content fragments can be associated with a parent fragment. GraphQL queries can then retrieve data from these nested content fragments. For an example, refer to link
  • Content hierarchy for GraphQL optimization: Establishing a path-based configuration for content fragments is essential to enhance the performance of GraphQL queries. This approach enables queries to efficiently navigate through folder and content fragment hierarchies, thereby retrieving information from smaller data sets.
  • Dedicated tenant/config folders: In the scenario of large organizations encompassing multiple business units, each with its unique content fragment models, it’s advisable to strategize the creation of content fragment models within dedicated /conf folders. These /conf folders can subsequently be customized for specific /content/dam folders. The “Allowed Content Fragment Models” property can be leveraged to restrict the usage of specific types of CFMs within a folder.
  • Field Naming: Opt for transparent and uniform field names across both CFMs and GraphQL types. Select names that provide a clear indication of the field’s function, simplifying comprehension for both developers and content authors when navigating the content structure.
  • Comments: Incorporate detailed descriptions for every field found in CFMs and GraphQL types. These comments should offer valuable context and elucidate the purpose of each field, aiding developers and content authors in comprehending how each property is intended to be utilized and its significance in the overall structure.
  • Documentation: Ensure the presence of thorough documentation for both CFMs and GraphQL schemas. This documentation should encompass the field’s purpose, the expected values it should contain, and instructions on its utilization. Additionally, provide clear guidelines regarding the appropriate circumstances and methods for using specific fields to maintain uniformity. Any data relationships or dependencies between fields should also be documented to offer guidance to developers and content authors.
  • Contemplate the option of integrating CFMs into your codebase, limiting editing access to specific administrators if necessary. This precautionary measure helps mitigate the risk of unintended modifications by unauthorized users, safeguarding your content structure from inadvertent alterations.
  • In the context of AEM Sites, it is advisable to prioritize the utilization of QueryBuilder and the Content Fragment API for rendering results (a brief sketch follows this list). This approach enables your Sling models to effectively process and transform the raw content for the user interface.
  • Consider employing Experience Fragments for content that marketers frequently edit, as they provide the convenience of a WYSIWYG (What You See Is What You Get) editor.
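
As referenced in the QueryBuilder / Content Fragment API point above, here is a small hedged sketch of reading a fragment element from a Sling model or service. The fragment path and element name are hypothetical:

import com.adobe.cq.dam.cfm.ContentElement;
import com.adobe.cq.dam.cfm.ContentFragment;

import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;

public class ArticleFragmentReader {

    // Reads the "title" element of a content fragment; path and element name are illustrative.
    public String readTitle(ResourceResolver resolver) {
        Resource fragmentResource = resolver.getResource("/content/dam/my-site/articles/my-article");
        if (fragmentResource == null) {
            return null;
        }
        ContentFragment fragment = fragmentResource.adaptTo(ContentFragment.class);
        if (fragment == null || !fragment.hasElement("title")) {
            return null;
        }
        ContentElement title = fragment.getElement("title");
        return title.getContent();
    }
}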

Guidelines for GraphQL queries

Sharing the general guidelines around creating GraphQL queries. For syntax-based suggestions, please refer to the links in the References section.

  • Query Complexity: Consider the complexity of GraphQL queries that content authors and developers will need to create. Ensure that the schema allows for efficient querying of content while avoiding overly complex queries that could impact performance.
  • Pagination: Implement offset/cursor-based pagination mechanisms within your GraphQL schema. This ensures that queries return manageable amounts of data. It's advisable to opt for cursor-based pagination when dealing with extensive datasets, as it prevents premature processing.
// Offset based pagination
query {
   articleList(offset: 5, limit: 5) {
    items {
      authorFragment {
        lastName
        firstName
      }
    }
  }
}

//cursor-based pagination
query {
    adventurePaginated(first: 5, after: "ODg1MmMyMmEtZTAzMy00MTNjLThiMzMtZGQyMzY5ZTNjN2M1") {
        edges {
          cursor
          node {
            title
          }
        }
        pageInfo {
          endCursor
          hasNextPage
        }
    }
}

  • Security and Access Control: Implement security measures to control who can access which content from GraphQL queries. Ensure that sensitive data is protected and that only authorized users can execute certain queries or mutations. 
  • Query only the data you need, where you need it: In GraphQL, clients can specify exactly which fields of a particular type they want to retrieve, eliminating over-fetching and under-fetching of data. This approach optimizes performance, reduces server load, enhances security, and ensures efficient data retrieval, making GraphQL a powerful choice for modern application development.
  • Consider utilizing persisted queries, as they offer optimization for network communication and query execution. Rather than transmitting the entire query text in every request, you can send a unique persisted-query label that corresponds to a pre-stored query on the server. This approach takes advantage of server-side caching, allowing you to make GET requests using the persisted query's label, which enhances performance and reduces data transfer (a small client-side sketch follows at the end of this list).
  • Sort on top level fields: Sorting can be optimized when it involves top-level fields exclusively. When sorting criteria include fields located within nested fragments, it necessitates loading all fragments associated with the top-level model into memory, which negatively impacts performance. It’s important to note that even sorting on top-level fields may have a minor impact on performance. To optimize GraphQL queries, use the AND operator to combine filter expressions on top-level and nested fragment fields.
  • Hybrid filtering: Explore the option of implementing hybrid filtering in GraphQL, which combines JCR filtering with AEM filtering. Hybrid filtering initially applies a JCR filter as a query constraint before the result set is loaded into memory for AEM filtering. This approach helps minimize the size of the result set loaded into memory, as the JCR filter efficiently eliminates unnecessary results beforehand. The JCR filter has a few limitations where it does not work today, such as case-insensitive matching, null checks, 'contains not', etc.
  • Use dynamic filters: Dynamic filters in GraphQL offer flexibility and performance benefits compared to variables. They allow you to construct and apply filters dynamically at runtime, tailoring queries to specific conditions without the need to define multiple query variations with variables. More details are available in the video (at 6:29).
//Query with Variables
query getArticleBySlug($slug: String!) {
  articleList(
    filter: {slug: {_expressions: [{value: $slug}]}}
   ) {
    items {
      _path
      title
      slug
    }
  }
}

//Query-Variables
{"slug": "alaskan-adventures"}
//Query with Dynamic Filter
query getArticleBySlug($filter: 
  ArticleModelFilter!){  
    articleList(filter: $filter) {
    items {
      _path
      title
      slug
    }
  }
}

//Query-variables
{
  "filter": {
    "slug": {
      "_expressions": [{"value": "alaskan-adventures"}]
    }
  }
}
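
To illustrate the persisted-query point above: AEM exposes persisted queries over GET at /graphql/execute.json/<endpoint-config>/<query-name>, with query variables appended as ;name=value pairs. A minimal client sketch, in which the host, the "wknd" endpoint configuration, and the "articles-by-slug" query name are hypothetical:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PersistedQueryClient {

    public static void main(String[] args) throws Exception {
        // Query variables are appended as ;name=value pairs to the persisted query URL
        String url = "https://publish.example.com/graphql/execute.json/wknd/articles-by-slug"
                + ";slug=alaskan-adventures";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();

        // GET requests against the persisted-query label are cacheable by the Dispatcher/CDN
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}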

Aanchal Sikka

Avatar

Level 5

Every bit in one place. Time for me to try everything you explained so nicely in an organized manner

Avatar

Community Advisor

AEM Performance Optimization: Best Practices for Speed and Scalability

 

Did you know?
There are around 11,000 repository requests made while rendering a simple WKND landing page.
Think about how many requests your own pages make during rendering.
 
Now, the sheer number of requests is not the main problem, but repository access is.
  • Some facts:
    • Creating 1 JCRNodeResource: 15-20 ns (admin user), 30-40 ns (with ACL checks, nested setups)
    • Accessing a JCRResource takes a similar amount of time.
    • Creating a ResourceResolver: ~1 ms
Now, your page may have hundreds of components; all of them may call JCR APIs that access nodes, and all of them may open resource resolvers.
Nanoseconds may look small, but multiply that by a thousand accesses and then by the number of visitors.
Suddenly it adds up to seconds, which is a BIG deal.
 
So, that is why we need to:
  • Reduce repo access
  • Only open resource resolver where necessary
  • Prefer Sling APIs over low-level JCR calls for high-level operations.
  • Don't try to resolve a resource multiple times.
    • Example: Don't convert node to resource then add path (like "/jcr:content") and then convert to node again.
  • Avoid @Optional and the generic @Inject annotation, as they try every registered injector (in order) until one succeeds; prefer injector-specific annotations, as shown in the sketch below.
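
A minimal sketch of a Sling Model that uses injector-specific annotations instead of the generic @Inject (the model name and property names are hypothetical):

import org.apache.sling.api.resource.Resource;
import org.apache.sling.models.annotations.DefaultInjectionStrategy;
import org.apache.sling.models.annotations.Model;
import org.apache.sling.models.annotations.injectorspecific.ValueMapValue;

@Model(adaptable = Resource.class, defaultInjectionStrategy = DefaultInjectionStrategy.OPTIONAL)
public class TeaserModel {

    // Injector-specific annotation: reads directly from the resource's ValueMap,
    // instead of letting the generic @Inject try every registered injector in order
    @ValueMapValue
    private String title;

    @ValueMapValue
    private String description;

    public String getTitle() {
        return title;
    }

    public String getDescription() {
        return description;
    }
}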

 

How do I check the same for my pages?
 
You can identify repository access through this simple method. (Please ONLY use this in non-prod environments; QA and local are the best options.)
 
Add 2 logs:
  1. TRACE on org.apache.sling.jcr.AccessLogger.operation: logs a stack trace each time a JCRNodeResource is created.
  2. TRACE on org.apache.sling.jcr.AccessLogger.statistics: logs the number of JCRNodeResources created via each resource resolver.
 
Although CDN and Dispatcher caching are available, it all starts with backend performance.
Avoiding unnecessary repository access is an efficient way to improve it.
 
Once the backend is tuned, we also need to leverage content caching to get the best performance.
 

Leverage Content Caching
 
Caching is a fundamental technique to boost AEM's performance. It significantly reduces server load and improves response times for users. In this section, we'll explore Dispatcher Caching and Page Level Caching.
 
Dispatcher Caching
 
Introduction
 
Dispatcher is an Apache-based caching and load-balancing tool often used with AEM. It allows you to cache and serve content without involving the AEM server for every request, resulting in faster page loads.
 
Implementation
 
1. Install and Configure Dispatcher: Begin by installing and configuring the Dispatcher module on your web server (e.g., Apache HTTP Server).
You can find detailed installation instructions in the official Adobe Dispatcher documentation.
 
2. Configure Caching Rules: Define caching rules in the Dispatcher configuration file (`dispatcher.any`). These rules specify which content the Dispatcher is allowed to cache.
For example:
/rules {
       /0000 {
           /glob "*"
           /type "allow"
       }
   }
This rule allows all content to be cached. The rule name (here `/0000`) is arbitrary; the `/glob` pattern determines which requests it matches.
 
3. Implement Cache Invalidation: To ensure that cached content is refreshed when it changes in AEM, configure cache invalidation.
AEM can send invalidation requests to the Dispatcher when content updates occur. Implement this communication between AEM and Dispatcher in the Dispatcher configuration.
You can find detailed instructions in the official Adobe documentation.
 
Scenario: Imagine you manage a news website where articles are published and updated frequently. Your goal is to improve page load times while ensuring that users always see the latest news.
 
Implementation
 
1. Configure Dispatcher for News Articles: In your Dispatcher configuration, specify rules to cache news articles for a reasonable duration (e.g., 5 minutes).
This ensures that articles load quickly for a brief period while reducing the load on your AEM server.
/rules {
       /news {
           /glob "/content/news/*"
           /type "allow"
       }
   }

 

2. Implement Cache Invalidation for Articles: Configure AEM to send cache invalidation requests to the Dispatcher whenever a news article is updated.
This ensures that users see the latest articles within a short time frame. Here's a simplified illustration of an invalidation rule for the Dispatcher Flush Agent.
<flush>
       <rules>
           <rule>
               <glob>/content/news/*</glob>
               <invalidate>true</invalidate>
           </rule>
       </rules>
   </flush>

 

Performance Impact (Before and After)
 
Before implementing Dispatcher Caching for news articles, your website may have experienced a high server load due to frequent article requests. Page load times could vary, especially during traffic spikes.
 
After implementing Dispatcher Caching, the performance impact is noticeable:
- Reduced Server Load: The AEM server handles fewer requests for articles, reducing server load.
- Faster Page Load Times: Users experience faster page load times for news articles, resulting in improved user satisfaction.
 
Page Level Caching
 
Introduction
 
Page Level Caching in AEM allows you to cache entire pages, making it an effective strategy for serving static or semi-static content quickly.
 
Implementation
 
1. Configure Page Level Caching: In AEM, you can enable Page Level Caching by navigating to the page you want to cache. Open the page properties and go to the "Advanced" tab. Enable "Cache-Control" and set the cache timeout as needed. For example, you can set a cache timeout of 3600 seconds (1 hour) for a page that doesn't change frequently.
 
 
2. Use Cache-Control Headers: AEM automatically adds the appropriate `Cache-Control` headers to responses for the cached pages. These headers instruct the browser and intermediate caches on how long to cache the page.
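
If your setup needs to add or adjust these headers programmatically, one common approach is a simple Sling filter that sets Cache-Control for selected paths. A minimal sketch, assuming hypothetical paths and timeout values:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.api.SlingHttpServletResponse;
import org.osgi.service.component.annotations.Component;

@Component(service = Filter.class, property = { "sling.filter.scope=REQUEST" })
public class CacheControlFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        SlingHttpServletRequest slingRequest = (SlingHttpServletRequest) request;
        SlingHttpServletResponse slingResponse = (SlingHttpServletResponse) response;
        // Hypothetical path: only semi-static category pages get a 1-hour cache lifetime
        if (slingRequest.getRequestPathInfo().getResourcePath().startsWith("/content/myapp/products")) {
            slingResponse.setHeader("Cache-Control", "max-age=3600");
        }
        chain.doFilter(request, response);
    }

    @Override
    public void init(FilterConfig filterConfig) {
        // no initialization required
    }

    @Override
    public void destroy() {
        // nothing to clean up
    }
}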
 
 
Scenario: Consider an e-commerce platform with product category pages. These pages contain static content that rarely changes, such as product listings and descriptions.
Your goal is to enhance the user experience by delivering these pages quickly.
 
Implementation
 
1. Configure Page Level Caching: Identify the product category pages that rarely change and are suitable for caching. In AEM, configure these pages with Page Level Caching, setting an appropriate cache timeout (e.g., 1 day).
 
 
2. Cache Dynamic Parts Separately: For pages containing both static and dynamic content (e.g., real-time pricing), cache the static parts using Page Level Caching. Implement dynamic content retrieval through AJAX or server-side calls to maintain accurate data without compromising performance.
<!-- Example AJAX request for dynamic content -->
   <script>
       $.ajax({
           url: "/get-pricing",
           method: "GET",
           success: function(data) {
               // Update pricing on the page
           }
       });
   </script>

 

Performance Impact (Before and After)
 
Before implementing Page Level Caching for product category pages, users may have experienced delays in loading these pages, especially during peak traffic times. Server resources could be under strain due to frequent requests for the same content.
 
After implementing Page Level Caching, the performance impact is evident:
- Faster Page Load Times: Product category pages load significantly faster, enhancing the user experience.
- Reduced Server Load: Server resources are freed up as cached pages are served directly, resulting in improved server performance.
 
By applying Dispatcher Caching and Page Level Caching strategically, you can achieve noticeable improvements in page load times and server performance, ultimately delivering a smoother and more efficient user experience.
 

 
 
Optimize Asset Delivery
 
Efficient asset delivery is crucial for enhancing the performance of your AEM application. In this section, we'll explore two key strategies: "Implementing Responsive Images" and "Lazy Loading."
 
Implementing Responsive Images
 
Implementation
 
Responsive images adapt to the screen size and resolution of the user's device, ensuring that images look great and load quickly on both desktops and mobile devices. Here's how to implement responsive images in your AEM application:
 
1. Define Image Variants: Create multiple image variants with different resolutions and sizes. For example, you can have a small, medium, and large version of an image.
 
2. Use the `<picture>` Element: The `<picture>` element allows you to specify multiple sources for an image and let the browser choose the most appropriate one based on the user's device. Here's an example:
<picture>
     <source srcset="/path/to/image-large.jpg" media="(min-width: 1024px)">
     <source srcset="/path/to/image-medium.jpg" media="(min-width: 768px)">
     <img src="/path/to/image-small.jpg" alt="Description">
  </picture>

 

In this example, different images are served based on the user's screen size.
 
3. Set `srcset` and `sizes` Attributes: The `srcset` attribute specifies a list of image files and their respective widths. The `sizes` attribute defines the sizes of the image in the layout. These attributes help the browser choose the appropriate image variant.
<img srcset="/path/to/image-small.jpg 320w,
                /path/to/image-medium.jpg 768w,
                /path/to/image-large.jpg 1024w"
        sizes="(max-width: 320px) 280px,
               (max-width: 768px) 680px,
               940px"
        src="/path/to/image-small.jpg"
        alt="Description">

 


 
Monitor and Optimize Database Queries
 
Efficient database queries are essential for the overall performance of your AEM application. In this section, we'll explore key strategies for monitoring and optimizing database queries in AEM.
 
Query Performance Monitoring in AEM
 
Explanation
 
Monitoring query performance in AEM is crucial to identify bottlenecks and areas for improvement. AEM provides built-in tools and capabilities for tracking query performance:
 
1. Query Profiling: AEM allows you to enable query profiling, which records the execution time and details of each query. Profiling data can be accessed through the AEM Query Performance tool. You can access the Query Performance tool by navigating to the following URL in your AEM instance: `http://localhost:4502/libs/granite/operations/content/diagnosistools/queryPerformance.html`.
 
2. Log Analysis: You can analyze the AEM logs to identify slow-running queries. Look for log entries related to query execution and examine the execution times.
 
3. Adobe Granite Query Debugger: AEM provides the Adobe Granite Query Debugger tool, accessible through the AEM Web Console. It helps you analyze and optimize queries interactively. You can access the Query Debugger by navigating to the following URL in your AEM instance: `http://localhost:4502/system/console/depfinder/querydebug.html`.
 
Writing Better Queries in AEM
 
 
Writing efficient queries in AEM is vital for query performance. Follow these best practices when crafting your queries:
 
1. Use Indexing: Ensure that the properties you frequently query are indexed. Indexing speeds up query execution significantly. To check and manage indexes, navigate to the AEM Felix Web Console (`http://localhost:4502/system/console/indexmanager`).
 
2. Select Only Necessary Properties: Fetch only the properties you need in your query's result set. Avoid using wildcard selectors like `*` if you don't require all properties.
 
 
3. Avoid Deep Node Queries: Deep node queries (`//element`) can be resource-intensive. Whenever possible, specify the exact path to the nodes you're querying.
 
4. Limit Query Results: Restrict the number of results returned, for example via `Query.setLimit()` in the JCR API or `p.limit` in a QueryBuilder query. This prevents large result sets and improves query performance (a minimal Java sketch follows).
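
A minimal Java sketch combining the points above (a scoped path and an explicit limit); the path and node type are illustrative only:

import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;
import org.apache.sling.api.resource.ResourceResolver;

public class PageQueryExample {

    public QueryResult findRecentPages(ResourceResolver resolver) throws Exception {
        Session session = resolver.adaptTo(Session.class);
        QueryManager queryManager = session.getWorkspace().getQueryManager();

        // Query a specific subtree instead of running a "deep" query over the whole repository
        String statement = "SELECT * FROM [cq:Page] AS page "
                + "WHERE ISDESCENDANTNODE(page, '/content/myapp/en') "
                + "ORDER BY page.[jcr:created] DESC";

        Query query = queryManager.createQuery(statement, Query.JCR_SQL2);
        // Restrict the result set instead of loading everything into memory
        query.setLimit(20);
        query.setOffset(0);
        return query.execute();
    }
}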

 
Indexing and Customizing Index
 
 
Indexing is critical for query performance. AEM provides default indexes, but in some cases, you may need to customize or create new ones:
 
1. Custom Indexes: To create a custom index in AEM, you define an Oak index definition, typically deployed as content under /oak:index. Here's a simplified, illustrative example of an index definition:
<?xml version="1.0" encoding="UTF-8"?>
   <index oak:indexDefinition="{Name}CustomIndex" xmlns:oak="http://jackrabbit.apache.org/oak/query/1.0">
       <indexRules>
           <include>
               <pattern>/content/myapp/.*</pattern>
           </include>
           <include>
               <pattern>/content/dam/myassets/.*</pattern>
           </include>
       </indexRules>
   </index>

 

This XML file defines a custom index named "CustomIndex" that includes paths for your custom application content and assets.
 
2. Custom Index Rules: You can define custom indexing rules in AEM to control how properties are indexed. Custom rules allow you to fine-tune the indexing process for optimal query performance.
 
3. Monitoring Index Health: Regularly monitor the health and status of your indexes. The AEM Felix Web Console provides insights into index health, and you can take actions like rebuilding or optimizing indexes as needed.
 
By monitoring query performance, writing efficient queries, and optimizing indexing in AEM, you can ensure that database queries run smoothly and contribute to the overall performance and responsiveness of your AEM application.
 

 
Efficiently Manage DAM
 
Efficient management of Digital Asset Management (DAM) assets is crucial for maintaining a well-organized and high-performing AEM application. In this section, we'll explore key strategies for optimizing DAM assets and metadata.
 
Asset Renditions
 
Asset renditions are variations of an asset optimized for different purposes, such as different screen sizes or image formats. Managing asset renditions efficiently can help reduce storage costs and improve performance:
 
1. Automatic Rendition Generation: Configure AEM to automatically generate renditions when an asset is uploaded or updated. Define rules for renditions, specifying their dimensions, quality, or formats.
 
2. Custom Rendition Profiles: Create custom rendition profiles to generate renditions tailored to your specific needs. These profiles can include renditions for web, mobile, or print usage.
<!-- Example code to define a custom rendition profile -->
   <jcr:content
       jcr:primaryType="nt:unstructured"
       sling:resourceType="dam/cfm/components/renditionprofile">
       <jcr:title>Web Renditions</jcr:title>
       <renditionDefinitions>
           <thumbnail
               jcr:primaryType="nt:unstructured"
               sling:resourceType="dam/cfm/components/renditiondefinition"
               width="{Long}150"
               height="{Long}150"
               format="jpeg"/>
       </renditionDefinitions>
   </jcr:content>

 

3. Rendition Purge Policy: Implement a policy to automatically purge older or unused renditions to save storage space. Ensure that the policy considers the access frequency and age of renditions.
 
Performance Impact
 
Efficiently managing DAM assets and optimizing metadata and renditions has several performance benefits:
 
- Faster Asset Retrieval: Well-structured metadata and organized assets make it easier to locate and retrieve the right assets quickly.
 
- Reduced Storage Costs: By purging unnecessary or outdated renditions, you can save storage space and reduce associated costs.
 
- Optimized Content Delivery: Generating appropriate renditions on-demand ensures that assets are delivered in the most suitable format and size for the user's device, improving page load times.
 
By implementing metadata optimization and efficiently managing asset renditions, you can ensure that your DAM assets are well-organized, easy to access, and contribute to an overall better-performing AEM application.

 
Sling Model Caching
 
By default, Sling Models do not do any caching of the adaptation result and every request for a model class will result in a new instance of the model class. However, there are two notable cases when the adaptation result can be cached. The first case is when the adaptable extends the SlingAdaptable base class. Most significantly, this is the case for many Resource adaptables as AbstractResource extends SlingAdaptable. SlingAdaptable implements a caching mechanism such that multiple invocations of adaptTo() will return the same object.
 
// assume that resource is an instance of some subclass of AbstractResource
ModelClass object1 = resource.adaptTo(ModelClass.class); // creates new instance of ModelClass
ModelClass object2 = resource.adaptTo(ModelClass.class); // SlingAdaptable returns the cached instance
assert object1 == object2;

 

Since API version 1.3.4, Sling Models can cache an adaptation result regardless of the adaptable, by specifying cache = true on the @Model annotation.
 
When cache = true is specified, the adaptation result is cached regardless of how the adaptation is done:
@Model(adaptable = SlingHttpServletRequest.class, cache = true)
public class ModelClass {}
...
// assume that request is some SlingHttpServletRequest object
ModelClass object1 = request.adaptTo(ModelClass.class); // creates new instance of ModelClass
ModelClass object2 = modelFactory.createModel(request, ModelClass.class); // Sling Models returns the cached instance
assert object1 == object2;

 

Performance Impact
 
Sling Model Caching significantly improves performance by reducing the need to fetch and adapt resources repeatedly. This results in faster content rendering and reduced server load.
 

Use Scheduled Jobs/CRON Jobs in AEM for tasks if possible
 
 
AEM allows you to schedule jobs or tasks using CRON expressions. This feature can automate various activities.
 
 
- Create Scheduled Jobs: Define custom Java classes for tasks and use CRON expressions for scheduling.
// Example scheduled task registered with the Sling Commons Scheduler (runs daily at 3:00 AM)
@Component(service = Runnable.class, property = {
        "scheduler.expression=0 0 3 * * ?",
        "scheduler.concurrent:Boolean=false" })
public class MyScheduledJob implements Runnable {
    public void run() {
        // ...
    }
}
 
- CRON Expressions: Specify schedules, e.g., daily at 3:00 AM, using CRON expressions.
 
- Task Automation: Automate tasks such as generating reports, archiving content, or triggering updates.
 
Scenario:
Imagine you need to generate a daily report of website traffic statistics and send it to stakeholders.
 
Implementation of Scenario with the Suggested Approach:
- Create a scheduled job that runs the traffic report generation process every day at 3:00 AM.
// Scheduled task to generate the daily traffic report at 3:00 AM
@Component(service = Runnable.class, property = { "scheduler.expression=0 0 3 * * ?" })
public class TrafficReportJob implements Runnable {
    public void run() {
        // Generate and send the report
    }
}

 

Performance Impact:
- Automation ensures tasks are executed on time without manual intervention.
- Improves consistency and timeliness in task execution.
- If you plan them accordingly, you can schedule tasks to run during idle hours, and they won't affect system performance.
 

 
Use Streams When Writing Java Code in AEM
 
Utilizing Java Streams is a lesser-known but powerful coding technique in AEM for processing data collections.
 
Utilize operations like `map`, `filter`, `reduce`, and `collect` to process data collections efficiently.
 
Scenario:
Suppose you have a large dataset of user information that needs to be filtered and transformed.
 
- Use Java Streams to filter and transform the user dataset, creating a new dataset with specific criteria.
List<User> users = // ... (populate the list)
List<String> adultUserNames = users.stream()
    .filter(user -> user.getAge() >= 18)   // filter: keep adult users only
    .map(User::getName)                    // transform: extract the field you need
    .collect(Collectors.toList());

 

Since we often deal with large datasets in AEM, for example when traversing the children of thousands of nodes, streams keep such processing readable, and lazy evaluation with short-circuiting operations like findFirst or limit avoids processing more items than necessary. Note that a stream is not inherently faster than a plain loop.
 

Now let's look at some tips that improve performance through front-end (UI) changes.
 
Minify JS & CSS
 
Minifying JavaScript and CSS files is a crucial step in optimizing the performance of your AEM application. Minification reduces the file size by removing unnecessary characters like white spaces, comments, and line breaks. Smaller files load faster, reducing the overall page load time. Here's how to minify JS and CSS for your AEM application:
 
Explanation
 
1. Identify JS and CSS Files: Start by identifying the JavaScript and CSS files used in your AEM application. These files are typically located in your project's source code.
 
2. Use a Minification Tool: There are several minification tools available that can automatically minify your JS and CSS files. Here are some popular options:
 
   - UglifyJS: UglifyJS is a widely used JavaScript minification tool. You can install it using npm and run it from the command line. Here's an example:
npm install uglify-js -g
uglifyjs input.js -o output.min.js
 
   - CSSNano: CSSNano is a CSS minification tool. You can use it to minify your CSS files. Install it using npm and run it like this:
npm install cssnano-cli -g
cssnano input.css output.min.css

 

3. Integrate Minification into Your Build Process: To automate the minification process, integrate it into your build process. For example, if you're using a build tool like Webpack, you can configure it to minify JavaScript as part of the build pipeline.
// webpack.config.js
const UglifyJsPlugin = require('uglifyjs-webpack-plugin');

module.exports = {
  // ...
  optimization: {
    minimizer: [new UglifyJsPlugin()],
  },
};
 
 
Performance Impact (Before and After)
 
Before implementing JS and CSS minification, your AEM application may have experienced slower page load times due to larger file sizes. This can lead to a suboptimal user experience, especially on slower network connections.
 
After implementing minification, the performance impact is significant:
- Faster Page Load Times: Minified JS and CSS files are smaller in size and load more quickly, resulting in faster page load times for your AEM application.
- Reduced Bandwidth Usage: Smaller file sizes mean reduced bandwidth usage for your server and users, particularly beneficial for mobile users.
 
By minifying JavaScript and CSS files, you can optimize your AEM application's performance, delivering faster and more responsive web experiences to your users. Incorporating minification into your build process ensures that your code remains streamlined as you continue to develop and enhance your AEM application.
 

 
Reduce 3rd Party Scripts and Defer the Loading of Scripts
 
Third-party scripts, such as those for analytics, ads, and social media integration, can significantly impact the performance of your AEM application. Managing and minimizing these scripts is crucial for improving website speed and user experience.
 
 
1. Audit and Analyze: Begin by auditing your website to identify all the third-party scripts currently in use. Evaluate the necessity of each script and its impact on performance.
 
2. Prioritize Scripts: Prioritize the scripts that are essential for your website's core functionality and user experience. Some scripts may be indispensable, while others can be deferred or removed.
 
3. Deferred Loading: For non-essential scripts, consider implementing deferred loading. Load these scripts after the core content of your page has loaded. This can be achieved using the `async` or `defer` attributes in the script tags.
 
<!-- Async loading -->
   <script src="third-party-script.js" async></script>

   <!-- Defer loading -->
<script src="third-party-script.js" defer></script>
   
 
   This prevents third-party scripts from blocking the initial rendering of your page.
 
4. Lazy Loading: Implement lazy loading for third-party embeds (such as iframes and widgets) that are below the fold or not immediately needed, so they load only when the user scrolls near them. Note that the native `loading="lazy"` attribute applies to images and iframes, not to `<script>` tags; non-critical scripts can instead be injected dynamically once they are actually required.
 
<iframe src="https://example.com/third-party-widget" loading="lazy"></iframe>
 
   This reduces the initial page load time.
 
5. Consolidation and Minification: If possible, consolidate multiple third-party scripts into a single file and minify it. Fewer HTTP requests and smaller file sizes lead to faster loading times.
 
6. Script Execution Order: Ensure that scripts are loaded and executed in the optimal order. Some scripts may depend on others, so sequence them accordingly.
 
Performance Impact (Before and After)
 
Reducing third-party scripts can have a substantial impact on your AEM application's performance:
 
- Faster Page Load Times: Removing or deferring non-essential scripts and optimizing their loading significantly reduces the time it takes for your website to become interactive.
 
- Improved User Experience: Faster loading times translate to a better user experience. Visitors are more likely to stay engaged on a fast-loading website.
 
- Reduced Dependency: Fewer third-party scripts mean fewer dependencies and potential points of failure. Your website becomes more robust and reliable.
 
It's essential to regularly review and assess the third-party scripts used on your AEM website. Prioritize performance and user experience when deciding which scripts to keep, defer, or eliminate.
By reducing the impact of third-party scripts, you can create a more efficient and enjoyable user experience.

Avatar

Community Advisor

@Anmol_Bhardwaj Kudos!
Very well written and organized article for best practices.

 

One thing surprised me 11K requests fact, is it really 11K requests for a WKND page?

Avatar

Community Advisor

Hi @iamnjain ,

Thanks.

 

One thing surprised me 11K requests fact, is it really 11K requests for a WKND page?

Yeah, that too for the landing page.

If you want to check how many requests your page is making to the repository, you can enable:
TRACE on org.apache.sling.jcr.AccessLogger.operation & org.apache.sling.jcr.AccessLogger.statistics

Note: The size of the log file will grow very quickly, so make sure to turn it off when done.

Avatar

Community Advisor
Integrating AEM with CIF & Developing AEM Commerce Projects on AEM
 
Adobe Commerce Integration Framework (CIF) is a crucial tool for developing AEM Commerce projects on AEM as a Cloud Service. This framework provides seamless integration with Adobe Commerce and allows developers to create robust e-commerce experiences.
 
In this article, we will explore the essential steps to set up a CIF project on AEM as a Cloud Service.
 
Introduction

Adobe CIF is a powerful framework that plays a pivotal role in bridging the gap between content and commerce.
It empowers developers and businesses to create engaging and unified experiences by seamlessly integrating Adobe Experience Manager (AEM) with commerce platforms like Adobe Commerce (formerly Magento).
This integration offers a wide array of benefits and opens up various use cases, making it a compelling solution for modern e-commerce businesses.
 
Key Benefits
 
Before diving into the technical details, it's crucial to understand the key benefits of AEM + CIF integration:
 
- Unified Content and Commerce: AEM acts as a robust content management system, while CIF connects it to Adobe Commerce, enabling businesses to deliver unified content and commerce experiences to their customers.
 
- Rich E-commerce Storefronts: AEM's content management capabilities combined with Adobe Commerce's e-commerce functionality allow businesses to build visually appealing and feature-rich e-commerce storefronts.
 
- Personalization and Customer Engagement: By harnessing AEM's personalization and targeting features, you can create personalized shopping experiences for your users, increasing customer engagement and loyalty.
 
- Headless Commerce Experiences: AEM can function as a headless front-end, providing flexibility and responsiveness to your commerce architecture, while Adobe Commerce takes care of the back-end operations.
 
- Content-Driven Commerce Campaigns: AEM's marketing capabilities seamlessly incorporate commerce data for effective campaigns.
 
- Multi-Channel Commerce: AEM ensures consistent content delivery across diverse channels, enhancing brand experience.
 
Use Cases
 
Let's delve into specific use cases to understand how AEM + CIF integration can address real-world business scenarios:
 
1. Unified Content and Commerce Experience:
  - Use Case: AEM content is seamlessly integrated with Adobe Commerce, providing a unified experience where content and commerce work harmoniously.
  - Example: An online fashion retailer combines product pages with rich lifestyle content, offering customers not only product details but also fashion inspiration and styling tips.
 
2. Rich E-commerce Storefronts:
  - Use Case: AEM's content management powers e-commerce storefronts, making them visually appealing and content-rich.
  - Example: An electronics store combines product listings with informative blogs, reviews, and videos, enhancing the shopping experience.
 
3. Personalization and Customer Engagement:
  - Use Case: AEM leverages customer data to create personalized shopping experiences.
  - Example: An online bookstore recommends books based on a user's browsing and purchase history, enhancing user engagement and sales.
 
4. Headless Commerce Experiences:
  - Use Case: AEM acts as a headless front-end while Adobe Commerce handles the back-end e-commerce operations.
  - Example: An automotive parts retailer uses AEM as a headless front-end to provide a flexible and adaptive shopping experience across various devices and platforms.
 
5. Content-Driven Commerce Campaigns:
  - Use Case: AEM's marketing capabilities seamlessly incorporate commerce data for effective campaigns.
  - Example: A holiday-themed marketing campaign combines promotions, gift guides, and easy shopping access to boost holiday sales.
 
6. Multi-Channel Commerce:
  - Use Case: AEM ensures consistent content delivery across diverse channels, enhancing brand experience.
  - Example: An international cosmetics brand maintains a consistent brand presence on its website, mobile app, social media, and in-store kiosks.
 
Integration with Adobe Commerce (Magento)
 
Adobe Commerce (formerly known as Magento) is one of the commerce platforms supported by CIF. It plays a crucial role in the integration, serving as the e-commerce back-end that manages products, inventory, orders, and secure transactions. AEM enhances this by delivering content presentation, user experience, personalization, and multi-channel content delivery.
 
Technical Details
 
Let's dive into the technical details of setting up an AEM + CIF project on AEM as a Cloud Service.
 
Accessing the CIF Add-On
 
The CIF add-on is available as a Sling Feature archive and can be obtained from the Software Distribution portal as a zip file. It's compatible with both AEM author and AEM publish instances.
Here's how to access it:
 
1. Create a directory named `crx-quickstart/install` for the AEM instance.
2. Copy the appropriate Sling Feature archive file from the CIF add-on zip file into the `crx-quickstart/install` directory, depending on whether you're using AEM Author or AEM Publish.
3. Create a local OS environment variable called `COMMERCE_ENDPOINT` to hold the Adobe Commerce GraphQL endpoint, which is used by AEM to connect to your commerce system. The CIF add-on includes a local reverse proxy to make the Commerce GraphQL endpoint available locally.
 
Project Setup
 
Setting up a CIF project for AEM as a Cloud Service can be done in two ways:
 
1. Use AEM Project Archetype
 
The AEM Project Archetype is the primary method for bootstrapping a CIF project with all the required configurations and CIF Core Components. Follow these steps:
 
- Create a new AEM Commerce project using the archetype.
- Deploy the project locally within the AEM SDK environment by running the following Maven command from the project's root directory:
 
  
mvn clean install -PautoInstallSinglePackage
  ​
 
This will give you a working AEM Commerce project that you can further customize.
 
2. Use AEM Venia Reference Store
 
An alternative approach is to clone and customize the AEM Venia Reference Store. The AEM Venia Reference Store is a sample reference storefront application demonstrating CIF Core Component usage. You can clone this repository and tailor it to your specific requirements.
 
 
These are just the first steps of your journey towards creating a seamless content and commerce experience using AEM and CIF. In the subsequent sections, we'll cover more technical aspects and customization options.
 
Install Peregrine and CIF-AEP Connector Dependencies
 
To collect and send event data from the category and product pages of your AEM Commerce site, you need to install specific npm packages into the `ui.frontend` module of your AEM Commerce project. Here's how to do it:
 
1. Navigate to the `ui.frontend` module.
2. Install the required packages using the following commands:
 
npm i --save lodash.get@^4.4.2 lodash.set@^4.3.2
npm i --save apollo-cache-persist@^0.1.1
npm i --save redux-thunk@~2.3.0
npm i --save @adobe/apollo-link-mutation-queue@~1.1.0
npm i --save @magento/peregrine@~12.5.0
npm i --save @adobe/aem-core-cif-react-components --force
npm i --save-dev @magento/babel-preset-peregrine@~1.2.1
npm i --save @adobe/aem-core-cif-experience-platform-connector --force
 
Configure Maven to Use --force Argument
 
As part of the Maven build process, you'll need to trigger the `npm clean install` using `npm ci`, which also requires the `--force` argument. To do this, update your project's root POM file (`pom.xml`) to include this argument.
 
Change Babel Configuration Format
 
Switch from the default `.babelrc` file format to `babel.config.js`. This format allows for more control over plugins and presets applied to the `node_modules`.
 
Configure Webpack to Use Babel
 
To transpile JavaScript files using Babel loader and Webpack, you'll need to modify the `webpack.common.js` file in the `ui.frontend` module.
 
Configure Apollo Client
 
The Apollo Client plays a crucial role in managing local and remote data with GraphQL. You'll need a `possibleTypes.js` file to make InMemoryCache work effectively. Ensure you generate this file following the specified guidelines.
 
Initialize Peregrine and CIF Core Components
 
To initialize the React-based Peregrine and CIF Core Components, create the necessary configuration and JavaScript files in the `ui.frontend` module.
 
Build and Deploy the Updated AEM Project
 
To ensure that the package installation, code, and configuration changes are correct, rebuild and deploy the AEM Commerce project using the following Maven command:
  
mvn clean install -PautoInstallSinglePackage
  ​
 
 
Your AEM Commerce project should now be updated and ready for further development.
 
Customize Adobe Experience Manager CIF Core Components
 
Clone the Venia Project
 
If you don't start with an existing project based on the AEM Project Archetype with CIF included, you can clone the Venia Project to begin your customization. This section explains the steps to clone the project and set up your own storefront connected to an Adobe Commerce instance.
 
1. Clone the project using the appropriate Git command.
2. Build and deploy the project to a local AEM instance.
3. Configure the necessary OSGi configurations to connect your AEM instance to an Adobe Commerce instance or add the configurations to the newly created project.
4. Check the working storefront by navigating to the homepage of your local AEM instance.
 
Author the Product Teaser
 
The Product Teaser Component is a crucial part of your storefront. To get started, add an instance of the Product Teaser to the homepage and configure it:
 
1. Navigate to the homepage of your site.
2. Insert a new Product Teaser Component into the layout container.
3. Configure the displayed product by selecting a product from the connected Adobe Commerce instance.
4. You should now see a product displayed in the Product Teaser Component.
 
Add a Custom Attribute in Adobe Commerce
 
To enhance your product data, you can add custom attributes in Adobe Commerce:
 
1. Log in to your Adobe Commerce instance.
2. Navigate to Catalog > Products.
3. Update the product you added earlier, or open a product you want to enhance.
4. Add a new attribute for "Eco Friendly" or any custom attribute you require. Set its value as "Yes" or "No."
 
Use a GraphQL IDE to Verify Attribute
 
Before integrating the attribute into your AEM code, verify it using a GraphQL IDE:
 
1. Open a GraphQL IDE and enter the GraphQL URL.
2. Create a GraphQL query to check the attribute.
 
Update the Sling Model for the Product Teaser
 
To extend the Product Teaser Component, implement a Sling Model to handle business logic. Follow these steps:
 
1. Navigate to the core module of your project.
2. Find the `MyProductTeaser` Java interface and add a method to check if the product is eco-friendly.
3. Update the `MyProductTeaserImpl` Java class to implement the added method and follow the delegation pattern for Sling Models.
4. Ensure that the new method retrieves the product's eco-friendly attribute using GraphQL.
 
Customizing the Markup of the Product Teaser
 
To customize the markup of the Product Teaser Component, override the HTL script used for rendering:
 
1. Navigate to the `ui.apps` module and locate the Product Teaser component folder.
2. Open the HTL script (e.g., `productteaser.html`) for the Product Teaser.
3. Modify the script to call the `isEcoFriendly` method to display the "Eco Friendly" text based on the attribute value.
4. Save the changes and deploy them to AEM.
 
With these steps, you've successfully integrated Adobe CIF with AEM and customized the Product Teaser Component to display custom attributes, such as "Eco Friendly."
 

Avatar

Community Advisor

Building Robust AEM Integrations

In the current digital environment, the integration of third-party applications with Adobe Experience Manager (AEM) is pivotal for building resilient digital experiences. This blog serves as a guide, highlighting crucial elements to bear in mind when undertaking AEM integrations. It is accompanied by practical examples from real-world AEM implementations, offering valuable insights. Without further ado, let’s dive in:

Generic API Framework for integrations

When it comes to integrating AEM with various APIs, having a structured approach can significantly enhance efficiency, consistency, and reliability. One powerful strategy for achieving these goals is to create a generic API framework. This framework serves as the backbone of your AEM integrations, offering a host of benefits, including reusability, consistency, streamlined maintenance, and scalability.

Why Create a Generic API Framework for AEM Integrations?

  • Reusability: A core advantage of a generic API framework is its reusability. Rather than starting from scratch with each new integration, you can leverage the framework as a proven template. This approach saves development time and effort, promoting efficiency.
  • Consistency: A standardized framework ensures that all your AEM integrations adhere to the same conventions and best practices. This consistency simplifies development, troubleshooting, and maintenance, making your AEM applications more robust and easier to manage.
  • Maintenance and Updates: With a generic API framework in place, updates and improvements can be applied to the framework itself. This benefits all integrated services simultaneously, reducing the need to address issues individually across multiple integrations. This leads to more efficient maintenance and enhanced performance.
  • Scalability: As your AEM application expands and demands more integrations, the generic framework can easily accommodate new services and endpoints. You won’t need to start from scratch or adapt to different integration methodologies each time you add a new component. The framework can seamlessly scale to meet your growing integration needs.

Key Components of an Integration Framework:

1. Best Practices:

The framework should encompass industry best practices, ensuring that integrations are built to a high standard from the start. These best practices might include data validation, error handling, and performance optimization techniques.

 

2. Retry Mechanisms:

Introduce a mechanism for retries within the framework. It’s not uncommon for API calls to experience temporary failures due to network disruptions or service unavailability. The incorporation of automatic retry mechanisms can significantly bolster the reliability of your integrations. This can be accomplished by leveraging tools such as Sling Jobs.

For instance, consider an AEM project that integrates with a payment gateway. To address temporary network issues during payment transaction processing, you can implement a retry logic. If a payment encounters a failure, the system can automatically attempt the transaction multiple times before notifying the user of an error.

For guidance on implementing Sling Jobs with built-in retry mechanisms, please refer to the resource Enhancing Efficiency and Reliability by Sling jobs
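
A rough sketch of this approach using Sling Jobs (the topic name, job properties, and gateway call are hypothetical): a consumer that returns FAILED signals Sling to retry the job according to its queue configuration.

import org.apache.sling.event.jobs.Job;
import org.apache.sling.event.jobs.consumer.JobConsumer;
import org.osgi.service.component.annotations.Component;

@Component(service = JobConsumer.class,
        property = { JobConsumer.PROPERTY_TOPICS + "=com/myproject/jobs/payment" })
public class PaymentJobConsumer implements JobConsumer {

    @Override
    public JobResult process(Job job) {
        String orderId = job.getProperty("orderId", String.class);
        // Hypothetical call to the external payment gateway
        boolean success = callPaymentGateway(orderId);
        // FAILED tells Sling to retry the job; retry count and delay come from the job queue configuration
        return success ? JobResult.OK : JobResult.FAILED;
    }

    private boolean callPaymentGateway(String orderId) {
        // ... invoke the external service here
        return true;
    }
}

The job itself would be queued from your integration code via JobManager.addJob("com/myproject/jobs/payment", properties) instead of calling the gateway inline within the request.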

 

3. Circuit Breaker Pattern:

The Circuit Breaker pattern is a design principle employed for managing network and service failures within distributed systems. In AEM, you can apply the Circuit Breaker pattern to enhance the robustness and stability of your applications when interfacing with external services.

It caters to several specific needs:

  • Trigger a circuit interruption if the failure rate exceeds 10% within a minute.
  • After the circuit breaks, the system should periodically verify the recovery of the external service API through a background process.
  • Ensure that users are shielded from experiencing sluggish response times.
  • Provide a user-friendly message in the event of any service disruptions.

Visit Latency and fault tolerance in Adobe AEM using HystriX for details
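
The linked article demonstrates this with Hystrix. As a rough, framework-free illustration of the pattern (not the referenced implementation), a hand-rolled breaker might look like this; the thresholds are illustrative:

import java.util.function.Supplier;

public class SimpleCircuitBreaker {

    private static final int FAILURE_THRESHOLD = 5;        // failures before the circuit opens
    private static final long OPEN_INTERVAL_MS = 60_000L;  // how long the circuit stays open

    private int failureCount = 0;
    private long openedAt = 0L;

    public synchronized <T> T execute(Supplier<T> call, Supplier<T> fallback) {
        if (isOpen()) {
            return fallback.get(); // shield users from a slow or unavailable backend
        }
        try {
            T result = call.get();
            failureCount = 0; // reset on success
            return result;
        } catch (RuntimeException e) {
            if (++failureCount >= FAILURE_THRESHOLD) {
                openedAt = System.currentTimeMillis(); // trip the circuit
            }
            return fallback.get();
        }
    }

    private boolean isOpen() {
        if (openedAt == 0L) {
            return false;
        }
        if (System.currentTimeMillis() - openedAt > OPEN_INTERVAL_MS) {
            // Half-open: allow the next call through to probe whether the backend has recovered
            openedAt = 0L;
            failureCount = 0;
            return false;
        }
        return true;
    }
}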

 

4. Security Measures:

Incorporate authentication and authorization features into your framework to ensure the security of your data. This may involve integration with identity providers, implementing API keys, setting up OAuth authentication, and utilizing the Cross-Origin Resource Sharing (CORS) mechanism.

 

For instance, when securing content fetched via GraphQL queries, consider token-based authentication. For further details, please refer to the resource titled  Securing content for GraphQL queries via Closed user groups (CUG)

 

5. Logging

Ensuring robust logging and effective error-handling mechanisms are fundamental for the purposes of debugging and monitoring. Implementing comprehensive logging is crucial for recording significant events and errors, making troubleshooting and maintenance more streamlined. To achieve this:

  1. Comprehensive Monitoring and Logging: Implement thorough monitoring and logging for your integration to detect issues, track performance, and simplify debugging. It is advisable to categorize logging into three logical sets.
    • AEM Logging: This pertains to logging at the AEM application level.
    • Apache HTTPD Web Server/Dispatcher Logging: This encompasses logging related to the web server and Dispatcher on the Publish tier.
    • CDN Logging: This feature, although gradually introduced, handles logging at the CDN level.
  2. Selective Logging: Log only essential information in a format that is easy to comprehend. Properly employ log levels to prevent overloading the system with errors and warnings in a production environment. When detailed logs are necessary, ensure the ability to enable lower log levels like ‘debug’ and subsequently disable them.

Utilize AEM’s built-in logging capabilities to log API requests, responses, and errors. Consider incorporating Splunk, a versatile platform for log and data analysis, to centralize log management, enable real-time monitoring, conduct advanced search and analysis, visualize data, and correlate events. Splunk’s scalability, integration capabilities, customization options, and active user community make it an invaluable tool for streamlining log management, gaining insights, and enhancing security, particularly in the context of operations and compliance.
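
A small illustration of selective, parameterized logging with SLF4J (the class and messages are hypothetical):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CommerceIntegrationClient {

    private static final Logger LOG = LoggerFactory.getLogger(CommerceIntegrationClient.class);

    public void fetchProduct(String productId) {
        // Parameterized messages avoid string concatenation when the log level is disabled
        LOG.debug("Requesting product {} from the commerce API", productId);
        try {
            // ... call the external API here
        } catch (Exception e) {
            // Log the failure once, with context, at an appropriate level
            LOG.error("Failed to fetch product {} from the commerce API", productId, e);
        }
    }
}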

 

6. Payload Flexibility:

Payload flexibility in the context of building a framework for integrations refers to the framework’s capacity to effectively manage varying types and structures of data payloads. A data payload comprises the factual information or content transmitted between systems, and it may exhibit notable differences in format and arrangement.

 

These distinctions in structure can be illustrated with two JSON examples sourced from the same data source but intended for different end systems:

Example 1: JSON Payload for System A

{
  "orderID": "12345",
  "customerName": "John Doe",
  "totalAmount": 100.00,
  "shippingAddress": "123 Main Street"
}

Example 2: JSON Payload for System B

{
  "transactionID": "54321",
  "product": "Widget",
  "quantity": 5,
  "unitPrice": 20.00,
  "customerID": "Cust123"
}

Both examples originate from the same data source but require distinct sets of information for different target systems. Payload flexibility allows the framework to adapt seamlessly, enabling efficient integration with various endpoints that necessitate dissimilar data structures.

 

7. Data Validation and Transformation:

When integrating with a third-party app in Adobe Experience Manager (AEM), it’s crucial to ensure that data validation and transformation are performed correctly. This process helps maintain data integrity and prevents errors when sending or receiving data. Let’s consider an example where we are integrating with an e-commerce platform. We’ll validate and transform product data to ensure it aligns with the expected format and data types of the platform’s API, thus mitigating data-related issues.

// Assumes the org.json library (JSONObject, JSONException) is available on the classpath
private String transformProductData(String rawData) {
    try {
        // Parse the raw JSON data
        JSONObject productData = new JSONObject(rawData);

        // Validate and transform the data
        if (isValidProductData(productData)) {
            // Extract and transform the necessary fields
            String productName = productData.getString("name");
            double productPrice = productData.getDouble("price");
            String productDescription = productData.getString("description");

            // Complex data validation and transformation
            productDescription = sanitizeDescription(productDescription);

            // Create a new JSON object with the transformed data
            JSONObject transformedData = new JSONObject();
            transformedData.put("productName", productName);
            transformedData.put("productPrice", productPrice);
            transformedData.put("productDescription", productDescription);

            // Return the transformed data as a JSON string
            return transformedData.toString();
        } else {
            // If the data is not valid, consider it an error
            return null;
        }
    } catch (JSONException e) {
        // Handle any JSON parsing errors here
        e.printStackTrace();
        return null; // Return null to indicate a transformation error
    }
}

private boolean isValidProductData(JSONObject productData) {
    // Perform more complex validation here
    return productData.has("name") &&
           productData.has("price") &&
           productData.has("description") &&
           productData.has("images") &&
           productData.getJSONArray("images").length() > 0;
}

private String sanitizeDescription(String description) {
    // Implement data sanitization logic here, e.g., remove HTML tags
    return description.replaceAll("<[^>]*>", "");
}
  • We’ve outlined the importance of data validation and transformation when dealing with a third-party e-commerce platform in AEM.
  • The code demonstrates how to parse, validate, and transform the product data from the platform.
  • It includes more complex validation, such as checking for required fields and verifying that at least one image is present.
  • Additionally, it features data transformation methods such as sanitizing product descriptions.

 

8. Client Call Improvements:

In the context of availability and performance concerns, a common challenge lies in the customer code’s interaction with third-party systems via HTTP connectivity. This challenge takes on paramount importance when these interactions are carried out synchronously within an AEM request. The direct consequence of any backend call’s latency is the immediate impact on AEM’s response time, with the potential to lead to service outages (for AEMaaCS) if these blocking outgoing requests consume the entire thread pool dedicated to handling incoming requests.

  • Reuse the HttpClient: Create a single HttpClient instance, closing it properly, to prevent connection issues and reduce latency.
  • Set Short Timeouts: Implement aggressive connection and read timeouts to optimize performance and prevent Jetty thread pool exhaustion.
  • Implement a Degraded Mode: Prepare your AEM application to gracefully handle slow or unresponsive backends, preventing application downtime and ensuring a smooth user experience.

 For details on improving HTTP client requests, refer to: 3 rules how to use an HttpClient in AEM
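
A minimal sketch of these rules using Apache HttpClient 4.x (the timeout values and pool sizes are illustrative):

import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class BackendHttpClientFactory {

    // Aggressive timeouts so a slow backend cannot exhaust AEM's request-handling threads
    private static final int CONNECT_TIMEOUT_MS = 1000;
    private static final int SOCKET_TIMEOUT_MS = 2000;

    public CloseableHttpClient createClient() {
        RequestConfig config = RequestConfig.custom()
                .setConnectTimeout(CONNECT_TIMEOUT_MS)
                .setConnectionRequestTimeout(CONNECT_TIMEOUT_MS)
                .setSocketTimeout(SOCKET_TIMEOUT_MS)
                .build();

        // Build the client once (e.g., in an OSGi @Activate method), reuse it, and close it in @Deactivate
        return HttpClients.custom()
                .setDefaultRequestConfig(config)
                .setMaxConnPerRoute(20)
                .setMaxConnTotal(50)
                .build();
    }
}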

 

9. Asynchronous Processing:

In the context of AEM projects, synchronous interactions with third-party services can lead to performance bottlenecks and delays for users. By implementing asynchronous processing, tasks like retrieving product information occur in the background without affecting the AEM server’s responsiveness. Users experience quicker responses while the heavy lifting takes place behind the scenes, ensuring a seamless user experience. Sling Jobs provide a reliable mechanism for executing tasks, even in the face of backend hiccups, preventing service outages. For details on how to implement Sling Jobs, refer to resource Enhancing Efficiency and Reliability by Sling jobs

 

10. Avoid Long-Running Sessions in AEM Repository

During integrations such as Translation imports, encountering Caused by: javax.jcr.InvalidItemStateException: OakState0001 is a common issue in the AEM repository. This problem arises due to long-running sessions and their interference with concurrent changes happening within the repository (like translation imports). When a session’s save() operation fails, temporary heap memory, where pending changes are stored, remains polluted, leading to subsequent failures. To mitigate this problem, two strategies can be employed:

 

1. Avoiding long-running sessions by using shorter-lived sessions, which is the preferable and easier approach in most cases. It also eliminates issues related to shared sessions.

2. Adding code to call session.refresh(true) before making changes. This action refreshes the session state to the HEAD state, reducing the likelihood of exceptions. If a RepositoryException occurs, explicitly clean the transient space using session.refresh(false), resulting in the loss of changes but ensuring the success of subsequent session.save() operations. This approach is suitable when creating new sessions is not feasible. A minimal sketch of this strategy follows.
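
A minimal sketch of strategy 2 (the surrounding import/translation logic is omitted):

import javax.jcr.RepositoryException;
import javax.jcr.Session;

public class ImportSaveHelper {

    void saveSafely(Session session) throws RepositoryException {
        // Refresh to the repository HEAD state before applying changes, keeping pending transient changes
        session.refresh(true);
        try {
            // ... apply changes here, e.g. node.setProperty(...)
            session.save();
        } catch (RepositoryException e) {
            // Discard the polluted transient space so later save() calls can succeed
            session.refresh(false);
            throw e;
        }
    }
}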

 

11. Error Handling and Notifications:

Implement error-handling mechanisms, including sending notifications to relevant stakeholders when critical errors occur. Example: If integration with a payment gateway fails, notify the finance team immediately to address payment processing issues promptly.

 

12. Plan for Scalability

Scaling your AEM deployment and conducting performance tests are vital steps to ensure your application can handle increased loads and provide a seamless user experience. Here’s a structured plan:

  1. Define Expectations:
    • Identify the expected user base growth and usage patterns.
    • Set clear performance objectives, such as response times, throughput, and resource utilization.
    • Determine the scalability needs and expected traffic spikes.
  2. Assess Current Architecture:
    • Examine your current AEM architecture, including hardware, software, and configurations. Estimate resource requirements (CPU, memory, storage) based on expected loads and traffic.
    • Identify potential bottlenecks and areas for improvement.
  3. Capacity planning: Consider vertical scaling (upgrading hardware) and horizontal scaling (adding more instances) options.
  4. Performance Testing:
    • Create test scenarios that simulate real-world user interactions.
    • Use load testing tools to assess how the system performs under different loads.
    • Test for scalability by gradually increasing the number of concurrent users or transactions.
    • Monitor performance metrics (response times, error rates, resource utilization) during tests.
  5. Optimization:
    • Fine-tune configurations, caches, and application code.
    • Re-run tests to validate improvements.
  6. Monitoring and Alerts:
    • Implement real-time monitoring tools and define key performance indicators (KPIs).
    • Set up alerts for abnormal behavior or performance degradation.

Adobe New Relic One Monitoring Suite:

Adobe places great emphasis on monitoring, availability, and performance. AEM as a Cloud Service includes access to a custom New Relic One monitoring suite as a standard offering. This suite provides extensive visibility into your AEM as a Cloud Service system and environment performance metrics. Leverage this resource to proactively monitor, analyze, and optimize the performance of your AEM applications in the cloud.
 

13. Disaster Recovery:

Develop a disaster recovery plan to ensure high availability and data integrity in case of failures.

 

14. Testing Strategies:

Testing strategies play a critical role in ensuring the robustness and reliability of your applications. Here’s an explanation of various testing strategies specific to AEM:

  1. Integration Tests focus on examining the interactions between your AEM instance and external systems, such as databases, APIs, or third-party services. The goal is to validate that data is flowing correctly between these systems and that responses are handled as expected.
    • Example: You might use integration tests to verify that AEM can successfully connect to an external e-commerce platform, retrieve product information, and display it on your website.
    • Resource: For more information on conducting integration tests in AEM, you can refer to this resource
  2. Unit Tests are focused on testing individual components or functions within your AEM integration. These tests verify the correctness of specific code units, ensuring that they behave as intended.
    • Example: You could use unit tests to validate the functionality of custom AEM services or servlets developed for your integration (a minimal sketch follows this list).
  3. Performance Testing evaluates the ability of your AEM application to handle various loads and traffic levels. It helps in identifying potential performance bottlenecks, ensuring that the application remains responsive and performs well under expected and unexpected loads.
  4. Penetration testing assesses the security of your AEM system by simulating potential attacks from malicious actors. This testing identifies vulnerabilities and weaknesses in the AEM deployment that could be exploited by hackers.
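
As an illustration of the unit-testing point above, here is a minimal sketch assuming the wcm.io AEM Mocks library (JUnit 5 flavor) and a hypothetical TeaserModel Sling Model that reads a title property:

import static org.junit.jupiter.api.Assertions.assertEquals;

import io.wcm.testing.mock.aem.junit5.AemContext;
import io.wcm.testing.mock.aem.junit5.AemContextExtension;
import org.apache.sling.api.resource.Resource;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

@ExtendWith(AemContextExtension.class)
class TeaserModelTest {

    private final AemContext context = new AemContext();

    @Test
    void returnsTitleFromResource() {
        // Register the hypothetical model and create an in-memory resource to adapt from
        context.addModelsForClasses(TeaserModel.class);
        Resource resource = context.create().resource("/content/test",
                "title", "Hello", "description", "World");

        TeaserModel model = resource.adaptTo(TeaserModel.class);

        assertEquals("Hello", model.getTitle());
    }
}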

 

15. Testing and Staging Environments:

Create separate development and staging environments to thoroughly test and validate your integration before deploying it to production. Example: Before integrating AEM with a new e-commerce platform, set up a staging environment to simulate real-world scenarios and uncover any issues or conflicts with your current setup.

 

16. Versioning and Documentation:

Maintain documentation of the third-party API integration, including version numbers and update procedures, to accommodate changes or updates.

 

 


Aanchal Sikka

Avatar

Community Advisor

AEM as a Cloud Service Migration Journey 

 

Migrating an existing Adobe Experience Manager (AEM) setup to AEM as a Cloud Service (AEMaaCS) demands meticulous analysis and adjustments to ensure compatibility, efficiency, and adherence to cloud-native principles.
The migration process involves several critical steps and considerations, each aimed at facilitating a smooth transition and optimizing the system's performance in the cloud environment.
 
Cloud Readiness Analysis
 
  • Running the Cloud Readiness Analyzer is an essential initial step before migrating to AEMaaCS. It evaluates various aspects of the existing AEM setup to identify potential obstacles and areas requiring modification or refactoring.
  • This comprehensive analysis encompasses the codebase, configurations, integrations, and customizations.
  • Its primary goal is to pinpoint components that might not be compatible or optimized for the cloud-native architecture.
  • Areas requiring refactoring might include restructuring code to align with cloud-native principles, adjusting configurations for optimal performance, ensuring compatibility with AEMaaCS architecture, and reviewing customizations to adhere to best practices for cloud-based deployment models.
  • This assessment aims to enhance scalability, security, and performance by addressing potential obstacles before initiating the migration process.

 

Cloud Manager Code Quality Pipeline
 
  • The Cloud Manager code quality pipeline is a crucial mechanism to assess the existing AEM source code against the modifications and deprecated features in AEMaaCS.
  • This pipeline integrates into the development workflow using Adobe's Cloud Manager, analyzing the codebase for compatibility, adherence to coding standards, identification of deprecated functionalities, and detection of potential issues that might impede a successful migration.
  • Developers gain insights into areas of the code that require modifications or updates to comply with AEMaaCS standards.
  • It enhances code quality, ensures compatibility with the cloud-native environment, and facilitates a smoother transition during the migration process.
  • This code quality pipeline provides continuous checks and feedback loops within development environments, empowering teams to address issues early in the development lifecycle.

 

Changes in AEMaaCS
 
AEM as a Cloud Service introduces various changes to accommodate a more streamlined, cloud-native architecture. The key changes are summarized below.
 
- Immutable /apps and /libs: These directories become immutable in AEMaaCS, ensuring stability in core functionalities and reducing the risk of unintended changes.
 
- Repository-based OSGi bundles: The shift towards repository-based OSGi bundles simplifies the management of configurations and aligns with the cloud-native architecture.
 
- Publish-Side Delivery: Optimization of content delivery mechanisms improves content distribution efficiency.
 
- Asset Handling and Delivery: Enhancements streamline the management of digital assets, ensuring faster load times and better performance.
 
- Replication Agents: The traditional replication agents are replaced by Sling Content Distribution mechanisms, requiring adjustments in customizations.
 
- Classic UI Deprecation: AEMaaCS deprecates Classic UI, encouraging developers to transition to touch-enabled UI.
 
- Custom Runmodes: AEMaaCS restricts the usage of custom runmodes, simplifying environment setup and maintenance.
 
- Publish Repository Changes: Direct changes to the publish repository are restricted, ensuring consistency and adherence to best practices.
 
Collectively, these changes move AEM towards a more streamlined architecture that emphasizes stability, performance, and scalability in the cloud-native environment.
 
Custom Code Quality Rules:
SonarQube
 
The SonarQube analysis in Cloud Manager enforces custom code quality rules that are crucial when developing or maintaining code for AEM as a Cloud Service. These rules include recommendations for HTTP requests, product APIs, ResourceResolver objects, Sling servlet paths, logging, exception handling, avoiding hardcoded paths (/apps and /libs), the Sling Scheduler, and deprecated AEM APIs.
 
Each rule emphasizes best practices to ensure code efficiency, stability, compatibility, and adherence to AEMaaCS standards. Proper implementation of these rules mitigates technical debt and ensures a more resilient and compatible codebase.
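 
As one concrete illustration, the ResourceResolver rule flags resolvers that are opened but never closed. The sketch below shows the commonly recommended pattern of obtaining a service resource resolver inside try-with-resources; the subservice name and service class are hypothetical and would have to be mapped to a service user via a ServiceUserMapper configuration in a real project.

```java
import java.util.Map;
import org.apache.sling.api.resource.LoginException;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ResourceResolverFactory;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// Minimal sketch of the pattern the ResourceResolver rule expects:
// obtain a service resource resolver and close it deterministically.
@Component(service = ContentReader.class)
public class ContentReader {

    // Hypothetical subservice name; must be mapped to a service user in the project.
    private static final String SUBSERVICE = "content-reader";

    @Reference
    private ResourceResolverFactory resolverFactory;

    public String readTitle(String path) throws LoginException {
        Map<String, Object> authInfo =
                Map.of(ResourceResolverFactory.SUBSERVICE, SUBSERVICE);

        // ResourceResolver is AutoCloseable, so try-with-resources guarantees
        // the resolver is closed even if an exception is thrown.
        try (ResourceResolver resolver =
                     resolverFactory.getServiceResourceResolver(authInfo)) {
            Resource resource = resolver.getResource(path);
            return resource != null
                    ? resource.getValueMap().get("jcr:title", String.class)
                    : null;
        }
    }
}
```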
 
OakPAL
 
OakPAL provides additional custom code quality rules essential for maintaining a standardized and optimized AEM environment compatible with AEM as a Cloud Service.
These rules include guidelines regarding customer packages modifying /libs, duplicate OSGi configurations, content in config and install folders, overlapping packages, default authoring mode, touch UI dialogs, mixing mutable and immutable content, and the usage of reverse replication agents.
 
Adherence to these rules ensures cleaner deployments, better manageability, and compatibility with the cloud-based environment.
 
 
Code Refactoring - Tools
 
Tools for code refactoring and migration play a pivotal role in facilitating the transition to AEM as a Cloud Service.
These include the Asset Workflow Migration Tool, the AEM Dispatcher Converter, the AEM Modernization Tools (which help convert static templates to editable templates and design configurations to policies), the Content Transfer Tool, and utilities for migrating from Foundation Components to Core Components.
Each tool aids in modernizing, optimizing, and adapting existing AEM elements to meet AEMaaCS standards and requirements.
 
Things to Avoid during Cloud Migration
 
Customer Packages Should Not Create or Modify Nodes Under /libs
 
The /libs directory in AEM is reserved for the platform's core functionality and configurations. Customizing or modifying nodes under /libs can lead to issues during upgrades or migrations. It's crucial to keep customer-specific content separate from the AEM core library content to ensure stability and maintainability.
 
Encouraging developers to create and modify nodes under /apps or a custom namespace ensures that changes are isolated from the core AEM components. This practice reduces the risk of unintentionally altering critical functionalities and maintains compatibility during AEM upgrades or changes.
 
Packages Should Not Contain Duplicate OSGi Configurations
 
OSGi configurations define the behavior of components and services within an AEM instance. Having duplicate configurations in different packages can result in conflicts, leading to unpredictable behavior or errors in the AEM environment.
 
Encourage developers to maintain a centralized approach to OSGi configurations. Each configuration should be unique and avoid repetition across different packages. This practice streamlines the management of configurations, reducing the chances of conflicts and ensuring a more predictable deployment process.
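 
One way to keep configurations centralized is to define each configuration exactly once as a typed OSGi configuration and deploy its matching config file in a single location. The sketch below is illustrative only; the service name, configuration properties, and suggested file name are hypothetical assumptions.

```java
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.metatype.annotations.AttributeDefinition;
import org.osgi.service.metatype.annotations.Designate;
import org.osgi.service.metatype.annotations.ObjectClassDefinition;

// Minimal sketch: a single, typed OSGi configuration for a hypothetical service.
// The matching config file (e.g. com.example.core.SiteSettings.cfg.json) should
// live in exactly one config folder per run mode and never be repeated across packages.
@Component(service = SiteSettings.class)
@Designate(ocd = SiteSettings.Config.class)
public class SiteSettings {

    @ObjectClassDefinition(name = "Example - Site Settings")
    public @interface Config {
        @AttributeDefinition(name = "API endpoint")
        String api_endpoint() default "https://api.example.com";
    }

    private String apiEndpoint;

    @Activate
    protected void activate(Config config) {
        this.apiEndpoint = config.api_endpoint();
    }

    public String getApiEndpoint() {
        return apiEndpoint;
    }
}
```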
 
Config and Install Folders Should Only Contain OSGi Nodes
 
The config and install folders in AEM are crucial for deploying OSGi configurations and bundles. These folders should exclusively contain OSGi-related nodes to maintain consistency and clarity in the deployment process.
 
Developers should adhere to a structured approach when organizing content within the config and install folders. This practice ensures that only OSGi-related content, such as configuration files, bundles, or service-related artifacts, is placed within these folders. It simplifies the deployment process and enhances the maintainability of the AEM instance.
 
Packages Should Not Overlap
 
Overlapping packages occur when multiple packages provide similar or conflicting functionalities or content, resulting in redundancy or conflicts within the AEM environment.
 
Encourage developers to perform thorough analysis and planning before creating packages to ensure they do not duplicate functionalities or content. This practice involves maintaining a clear distinction between the purpose of each package to avoid overlaps. It minimizes conflicts, enhances system stability, and facilitates easier troubleshooting during deployments.
 
Default Authoring Mode Should Not Be Classic UI
 
Classic UI is an older interface in AEM that is being phased out in favor of the Touch-Enabled UI. Using Classic UI as the default authoring mode can hinder the adoption of modern and more efficient authoring interfaces.
 
Developers should configure AEM instances to default to the Touch-Enabled UI for content authoring. This practice aligns with the evolving standards and user experience improvements provided by the Touch UI. It encourages the adoption of a more intuitive and responsive authoring interface, enhancing user productivity and experience.
 
Components With Dialogs Should Have Touch UI Dialogs
 
Touch UI is the preferred interface for authoring in AEM. Components designed for authoring should utilize Touch UI dialogs for consistent and user-friendly content editing.
 
Developers should create components that utilize Touch UI dialogs for content editing purposes. This practice ensures a uniform and intuitive editing experience for authors, aligning with the modern interface standards provided by the Touch UI. It enhances usability, consistency, and ease of use for content authors.
 
Packages Should Not Mix Mutable and Immutable Content
 
Mixing mutable (editable) and immutable (read-only) content within packages can lead to confusion and inconsistencies in content lifecycle management.
 
Encourage developers to maintain separation between mutable and immutable content in packages. Mutable content often includes user-generated content, while immutable content might consist of templates or configurations. This practice ensures clarity in content management, facilitating better control over content lifecycles and reducing the risk of unintended changes to critical configurations or templates.
 
Reverse Replication Agents Should Not Be Used
 
Reverse replication agents are deprecated in AEM and have been replaced by Sling Content Distribution mechanisms. Continued usage of reverse replication agents can lead to compatibility issues and deprecated functionality.
 
Developers should avoid using reverse replication agents and transition to Sling Content Distribution mechanisms for content replication needs. This practice ensures compatibility with current AEM standards, mitigates risks associated with deprecated features, and future-proofs the AEM environment.
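 
For teams replacing reverse replication, the sketch below shows one possible use of the Sling Content Distribution API to push a content path from an author instance. The agent name "publish" and the service wiring are assumptions for illustration, not a drop-in replacement for any specific reverse-replication setup.

```java
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.distribution.DistributionRequest;
import org.apache.sling.distribution.DistributionRequestType;
import org.apache.sling.distribution.DistributionResponse;
import org.apache.sling.distribution.Distributor;
import org.apache.sling.distribution.SimpleDistributionRequest;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// Minimal sketch: distributing a content path via Sling Content Distribution
// instead of a deprecated reverse replication agent.
@Component(service = ContentPublisher.class)
public class ContentPublisher {

    @Reference
    private Distributor distributor;

    public void publishPath(ResourceResolver resolver, String path) {
        DistributionRequest request =
                new SimpleDistributionRequest(DistributionRequestType.ADD, path);

        // "publish" is an assumed distribution agent name configured on the instance
        DistributionResponse response = distributor.distribute("publish", resolver, request);

        if (!response.isSuccessful()) {
            throw new IllegalStateException("Distribution failed: " + response.getMessage());
        }
    }
}
```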
 
 
 
 

Migrating to AEM as a Cloud Service demands a comprehensive understanding of the changes, adjustments, and best practices essential for compatibility, efficiency, and optimization in the cloud environment. It involves meticulous analysis, code refactoring, adherence to the custom code quality rules, use of the appropriate tools, and following best practices for deployment and Go-Live preparation.
 
 

Anmol Bhardwaj


 


Community Advisor

Tools for Streamlining Development and Maintenance

In the dynamic landscape of AEM, where constant upgrades and migrations are the norm, the need for specialized tools to streamline development and ensure optimal performance is undeniable. AEM experts have crafted a set of powerful tools to address common challenges faced by developers and administrators. These tools not only enhance efficiency but also contribute to the overall robustness and security of AEM instances.

Let's delve into some of these fabulous tools that have been developed by AEM experts:

 

1. AEM Component and Template Usage Metrics

Amidst upgrades or migrations, accurately assessing the usage of components and templates across web pages proves to be a common challenge. The lack of an efficient process for discerning component and template usage can lead to disorganized project structures, impeding effective maintenance.

 

Enter the Solution:

Kiran Sg introduces a purpose-built tool designed to identify components and templates actively referenced or utilized on web pages during these transitions. This tool conducts a comprehensive analysis of component usage, aiding in the prioritization of component upgrades. Depending on the usage data, strategic decisions can be made, such as removing unused components, merging component versions, or updating to new versions, streamlining space and reducing maintenance efforts.
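
The tool itself does the heavy lifting, but the underlying idea can be sketched with a simple JCR-SQL2 query that counts where a given sling:resourceType is used under /content. The class name and resource type below are hypothetical, and a ResourceResolver is assumed to be available (for example, from a service user).

```java
import java.util.Iterator;
import javax.jcr.query.Query;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;

// Minimal sketch of the idea behind component-usage analysis: count every node
// under /content that uses a given sling:resourceType.
public class ComponentUsageCounter {

    public long countUsages(ResourceResolver resolver, String resourceType) {
        // For illustration only; sanitize or parameterize inputs in real code
        String query = "SELECT * FROM [nt:unstructured] AS node "
                + "WHERE ISDESCENDANTNODE(node, '/content') "
                + "AND node.[sling:resourceType] = '" + resourceType + "'";

        Iterator<Resource> hits = resolver.findResources(query, Query.JCR_SQL2);

        long count = 0;
        while (hits.hasNext()) {
            hits.next();
            count++;
        }
        return count;
    }
}
```

For example, countUsages(resolver, "my-site/components/teaser") would return how often that hypothetical teaser component appears in content; running this per component and comparing the results against the component list under /apps gives the kind of usage data that drives remove/merge/upgrade decisions.
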

 

2. Dispatcher Optimizer Tool (DOT)

Enhance the cache hit ratio for your public-facing site, minimize the influence of unexpected or malicious requests, and mitigate the impact of activations on cached content. Achieving these objectives is made easier with the Dispatcher Optimizer Tool, abbreviated as DOT.

The DOT is available in two forms:

  • A Maven plugin for static configuration analysis during development
  • A code quality step in the Adobe Managed Services (AMS) Cloud Manager pipeline

A report from the Maven plugin looks like this:

aanchalsikka_0-1701141915730.png

For more information, please refer to the Dispatcher Optimizer Tool documentation.

 

 

3. Content Sync Tool for incremental sync across AEM Instances

Effortlessly synchronize AEM Author instances, ensuring seamless updates for pages, assets, experience fragments, content fragments, tags, and /conf/* data. Enjoy compatibility with AEM as a Cloud Service, preservation of binary data, version history, and node ordering, all while benefiting from incremental updates based on jcr:lastModified/cq:lastModified. Simplify your content management across instances with ACS AEM Commons' Content Sync Tool.

 

4. SecureAEM Tool

SecureAEM is a tool designed to identify prevalent security issues in your AEM instance. It runs tests against both the author and publish instances, as well as the dispatcher, since certain resources should be restricted by the dispatcher's cache configuration. The tool assesses various aspects, including:

 

- Verification of changes to default passwords

- Evaluation of enabled protocols post-publishing to ensure no unnecessary ones are active

- Confirmation of the administrator console access being disabled

- Restriction of content-grabbing selectors on the dispatcher, among other checks

 

Each test comes with a description and a 'More Info' link, providing a reference to an external site for additional information about a specific security vulnerability.


Aanchal Sikka