Author: Namit Gupta

Observability is a buzzword in the enterprise software industry, but conceptually it is not new. The concept has been borrowed from the world of engineering and control systems. Observability, by definition, is the ability to determine the internal state of a software system from the outputs it produces.
Observability demystified
Observability assumes greater significance with the adoption of decoupled system architectures such as microservices. As elaborate distributed systems evolve across the delivery pipeline and into production, it is imperative to deploy state-of-the-art instrumentation to gauge the properties and performance of an application.
Figure 1: Observability Framework
The importance of Observability in Adobe Experience Platform
Adobe Experience Platform is a powerful, flexible, and open system for building and managing complete solutions that drive customer experience. It enables organizations to centralize and standardize customer data and content from any system to drive the delivery of rich, personalized customer experiences. Being the focal point of customer data, Adobe Experience Platform facilitates a multitude of operations, such as ingesting data from different sources and unifying that data. Because it is a high-velocity system, it must be monitored effectively to ascertain overall health and to report anomalies in the case of failures.
Aggregate and Visualize Observability Insights
Adobe Experience Platform provides a rich set of Observability Insights APIs that allow a system to fetch key metrics. These metrics provide insights into Adobe Experience Platform usage statistics, health checks for its services, historical trends, and performance indicators for various functionalities.
In a nutshell, the observability insights emanating from Adobe Experience Platform provide essential, actionable intelligence about the health of the system. This data can be leveraged as an ingredient for a powerful organization-wide health dashboard, enabling the stakeholders managing the infrastructure to visualize the metrics and build effective analyses around them, so that anomalies and failures can be preempted.
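For illustration, the metrics endpoint can be called directly before any pipeline is set up. Below is a minimal sketch in Python using the requests library; the <ORG_ID>, <API_KEY>, and <ACCESS_TOKEN> placeholders stand in for credentials obtained from an Adobe I/O integration, and the two metric names are taken from the Logstash configuration later in this post.

import requests

# Observability Insights metrics endpoint (the same endpoint polled by Logstash below)
BASE_URL = "https://platform.adobe.io/data/infrastructure/observability/insights/metrics"

# Credentials from the Adobe I/O integration (placeholders)
headers = {
    "Accept": "application/json, application/problem+json",
    "x-gw-ims-org-id": "<ORG_ID>",
    "x-api-key": "<API_KEY>",
    "x-sandbox-name": "prod",
    "Authorization": "Bearer <ACCESS_TOKEN>",
}

# Request two of the ingestion metrics used later in the pipeline
params = {
    "metric": [
        "timeseries.ingestion.dataset.batchsuccess.count",
        "timeseries.ingestion.dataset.batchfailed.count",
    ]
}

response = requests.get(BASE_URL, headers=headers, params=params, timeout=60)
response.raise_for_status()

# The payload contains a "timeseries" object whose "items" array holds
# timestamped metric values -- the same structure the Logstash filter splits.
print(response.json())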
Technical Set-up
1. Prerequisites
- Adobe I/O Integration Project linked to Adobe Experience Platform
- Installation of ELK stack
2. Architecture
The Elastic Stack, also known as ELK, is a conglomerate of three open-source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch. Kibana lets users visualize Elasticsearch data with charts and graphs.
Figure 2: Solution Architecture
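Before moving to the implementation, a quick sanity check can confirm that the prerequisite ELK installation is up. A minimal sketch in Python, assuming the stack runs locally on its default ports (9200 for Elasticsearch, 5601 for Kibana):

import requests

# Elasticsearch cluster health on the default port
es_health = requests.get("http://localhost:9200/_cluster/health", timeout=10)
print("Elasticsearch status:", es_health.json().get("status"))

# Kibana status endpoint on the default port
kibana = requests.get("http://localhost:5601/api/status", timeout=10)
print("Kibana reachable:", kibana.status_code == 200)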
3. Implementation
In line with the solution architecture above, the implementation is illustrated with sample code snippets and configuration in the following subsections:
- Import Observability Insights
Observability Insights from Adobe Experience Platform are imported into Elasticsearch through a Logstash event processing pipeline, which fetches, transforms, and indexes the data into an Elasticsearch index, as shown in the configuration below.
input {
  # Poll the Observability Insights metrics endpoint on a schedule.
  http_poller {
    urls => {
      app_insight => {
        url => "https://platform.adobe.io/data/infrastructure/observability/insights/metrics?metric=timeseries.identity.dataset.recordsuccess.count&metric=timeseries.ingestion.dataset.recordsuccess.count&metric=timeseries.ingestion.dataset.dailysize&metric=timeseries.ingestion.dataset.batchsuccess.count&metric=timeseries.ingestion.dataset.batchfailed.count&metric=timeseries.identity.graph.imsorg.uniqueidentities.count"
        method => "get"
        headers => {
          Accept => "application/json, application/problem+json"
          "x-gw-ims-org-id" => "<ORG_ID>"
          "x-api-key" => "<API_KEY>"
          "x-sandbox-name" => "prod"
          "Authorization" => "Bearer <LONG_LIVED_BEARER_TOKEN>"
        }
      }
    }
    request_timeout => 60
    schedule => { cron => "<CRON_EXPRESSION>" }
    codec => "json"
    metadata_target => "http_poller_metadata"
  }
}
filter {
  # Drop fields that are not needed in the index.
  mutate {
    remove_field => ["id", "imsOrgId", "http_poller_metadata", "[timeseries][granularity]"]
  }
  # Emit one event per metric data point.
  split {
    field => "[timeseries][items]"
  }
  # Use the metric timestamp as the event timestamp.
  date {
    match => ["[timeseries][items][timestamp]", "YYYY-MM-dd HH:mm:ss.SSSZ", "ISO8601"]
    timezone => "America/Toronto"
    locale => "en"
  }
}
output {
  # Upsert each data point into the aep-metrics index, keyed by its timestamp.
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "aep-metrics"
    document_id => "%{[timeseries][items][timestamp]}"
    document_type => "metric"
    doc_as_upsert => "true"
    action => "update"
  }
}
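Once this pipeline configuration is saved and Logstash has run at least once, a quick count against the index named in the output section confirms that metric documents are arriving. A minimal sketch in Python, assuming Elasticsearch on its default local port:

import requests

# Count the documents indexed by the Logstash pipeline into aep-metrics
resp = requests.get("http://localhost:9200/aep-metrics/_count", timeout=10)
print("Documents indexed:", resp.json().get("count", 0))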
- Create Index Pattern
Typically, the indices created through the Logstash import jobs are rolled over after a fixed duration. System admins can create index patterns, which instruct Kibana to work with all Elasticsearch indices whose names match the pattern.
Figure 3: Index Pattern Creation
Once an index pattern is in place, exploratory data analysis can be done in Kibana by writing KQL queries. For example, an admin who wants to find all instances where data batches failed during ingestion into Adobe Experience Platform can use the following query:
timeseries.items.metrics.timeseries.ingestion.dataset.batchfailed.count > 0
Figure 4: Index Pattern Creation
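The same filter can also be issued outside Kibana, directly against the Elasticsearch search API. A minimal sketch in Python, assuming Elasticsearch on its default local port and the aep-metrics index created by the Logstash pipeline above:

import requests

# Range query equivalent of the KQL filter: batch-failed count greater than zero
query = {
    "query": {
        "range": {
            "timeseries.items.metrics.timeseries.ingestion.dataset.batchfailed.count": {"gt": 0}
        }
    }
}

resp = requests.post("http://localhost:9200/aep-metrics/_search", json=query, timeout=10)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_id"], hit["_source"])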
- Visualization Dashboard
Visualizations of the Observability Insights can be created in Kibana, which provides various out-of-the-box (OOTB) visualization apps such as pie charts and data tables.
These OOTB apps can be configured with the data attributes present in the input data and subsequently added to the Kibana dashboard.
Figure 5: Kibana Dashboard
- Anomaly Alerts
Watcher, the alerting and notification feature of the Elastic Stack, can be leveraged to send alerts when an anomalous event occurs; for example, a Slack notification can be sent to a Slack channel in case of batch ingestion failure.
"actions" : {
"notify-slack" : {
"transform" : { ... },
"throttle_period" : "5m",
"slack" : {
"message" : {
"to" : [ "#admins”],
"text" : "{{timeseries.items.metrics.timeseries.ingestion.dataset.batchfailed.count}} errors"
}
}
}
}
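The snippet above shows only the actions section of a watch. For completeness, here is a sketch that registers a full watch through the Watcher REST API; the watch id, schedule interval, and Slack channel are illustrative, and the slack action assumes a Slack account has already been configured for Watcher in Elasticsearch.

import requests

# Illustrative watch: poll the aep-metrics index every 10 minutes and alert
# when any document reports failed ingestion batches.
watch = {
    "trigger": {"schedule": {"interval": "10m"}},
    "input": {
        "search": {
            "request": {
                "indices": ["aep-metrics"],
                "body": {
                    "query": {
                        "range": {
                            "timeseries.items.metrics.timeseries.ingestion.dataset.batchfailed.count": {"gt": 0}
                        }
                    }
                },
            }
        }
    },
    "condition": {"compare": {"ctx.payload.hits.total": {"gt": 0}}},
    "actions": {
        "notify-slack": {
            "throttle_period": "5m",
            "slack": {
                "message": {
                    "to": ["#admins"],
                    "text": "Batch ingestion failures detected in Adobe Experience Platform",
                }
            },
        }
    },
}

# Register (or update) the watch; add authentication if security is enabled.
resp = requests.put("http://localhost:9200/_watcher/watch/aep-batch-failure-watch", json=watch, timeout=10)
print(resp.json())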
Summary
Effective monitoring of Adobe Experience Platform can minimize bad or missing data issues that degrade the quality of personalized experiences. The solution proposed here is one of many possible approaches in an ever-growing Observability space.
The author sincerely welcomes constructive feedback on the approach described above, along with possible alternatives. Please feel free to leave a comment here or reach out on his LinkedIn profile here.
Follow the Adobe Experience Platform Community Blog for more developer stories and resources, and check out Adobe Developers on Twitter for the latest news and developer products. Sign up here for future Adobe Experience Platform Meetups.
Resources
- Adobe Experience Platform Observability API: https://www.adobe.io/apis/experienceplatform/home/api-reference.html#!acpdr/swagger-specs/observabil...
- Adobe I/O Integration Documentation: https://docs.adobe.com/content/help/en/places/using/web-service-api/adobe-i-o-integration.html
- ELK Stack Overview: https://www.elastic.co/what-is/elk-stack
- Logstash Configuration: https://www.elastic.co/guide/en/logstash/current/config-setting-files.html
- Kibana Visualizations: https://www.elastic.co/guide/en/kibana/current/visualize.html
- Elasticsearch Reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html
Originally published: Jul 21, 2020