Abstract
We have enjoyed integrating Adobe Experience Manager (AEM) with AI tools. So far, our Director of Technology, Clemente Pereyra, has talked about the potential for using AI with AEM and showed off our prototype for creating metadata, and I shared a proof of concept I created to generate Tweets for written content.
If you’re using AEM for your website, you’re probably also using AEM Assets for your digital asset management (DAM). AI has great potential there too.
Why would I use AI with AEM Assets?
A site’s search engine optimization (SEO) is affected by several factors, including key metadata such as the alt text attributed to images. Additionally, a good description for each image is critical to providing the best experience for site visitors who may be visually impaired.
Beyond SEO, image metadata is also useful within the DAM itself: tags, for example, help create image collections and surface related items. The more images or files in the DAM, the more important this becomes for organizational purposes.
Writing metadata of any sort for images can be time-consuming, and in the case of tags, it is imperative to use them consistently. Adobe Sensei is an AI feature of AEM that powers smart tagging. But AI can be used for writing other metadata too; in this case, we’ll use it to write an image description.
Adding an image description in AEM Assets using AI
We have integrated AEM with both Adobe Sensei and OpenAI to leverage each platform's strengths.
Everything starts immediately after an image is uploaded into the AEM DAM. By configuring the Assets Cloud Post-Processing workflow, we process every image that gets uploaded. For each image, Adobe Sensei performs the analysis and assigns the corresponding smart tags. After this step, we use an OpenAI GPT model to create alt text based on the generated smart tags and store it in the image's metadata as its description.
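The tag-to-description step above can be sketched roughly as follows. This is a minimal illustration, not the actual workflow code: the function names are hypothetical, the prompt wording is ours, the OpenAI call itself is omitted, and we assume the description lands in the asset's `dc:description` metadata field.

```python
def build_alt_text_prompt(smart_tags):
    """Turn the Sensei-generated smart tags into a prompt for a GPT model.

    In the real workflow, this prompt would be sent to the OpenAI API and
    the model's reply used as the image description.
    """
    tag_list = ", ".join(smart_tags)
    return (
        "Write one concise, descriptive sentence of alt text for an image "
        f"whose content has been tagged as: {tag_list}."
    )


def store_description(metadata, description):
    """Persist the generated alt text as the asset's description field.

    dc:description is the field AEM Assets surfaces as the description;
    here `metadata` is just a plain dict standing in for the metadata node.
    """
    metadata["dc:description"] = description
    return metadata


# Example usage with hypothetical tags (the API call is elided):
tags = ["beach", "sunset", "palm tree"]
prompt = build_alt_text_prompt(tags)
metadata = store_description({}, "A sunset over a beach lined with palm trees.")
```

Because the prompt is built from the smart tags rather than the raw image, the same tags always produce the same prompt, which is what keeps the generated descriptions consistent with the tagging.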
It's worth noting that there are other ways to do this. In this case, because we use the smart tags that were already generated to create the alt text (instead of processing the image itself), we ensure consistency across the image's metadata, and the cost is significantly lower.
Read Full Blog
Q&A
Please use this thread to ask any related questions.