


18-08-2022
Authors: Artur Ciocanu, Jaemi Bremner
This is the second part of our series on the Adobe Target NodeJS SDK with On-Device Decisioning capabilities and how to run it in a serverless/edge compute environment. In this part, we cover AWS Lambda and, specifically, AWS Lambda@Edge.
This blog is Part 2 in a three-part series that covers how anyone can use the Adobe Target NodeJS SDK to run experimentation and personalization on an edge compute platform. The parts are:
As mentioned in our previous blog, we use Terraform heavily at Adobe Target. In this article, we will show how you can leverage Terraform and the Adobe Target NodeJS SDK to create an AWS Lambda@Edge function.
AWS Lambda@Edge is a great technology if you intend to run a piece of logic in the 200+ points of presence provided by AWS CloudFront. However, it is not trivial to set up, especially if we want to set it up securely. That's why we will be using Terraform to bootstrap all the infrastructure elements.
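One detail worth calling out up front: Lambda@Edge functions must be created in the us-east-1 (N. Virginia) region, so the snippets below assume an AWS provider configured along these lines (a minimal sketch, not the exact setup used in the original project):
# Sketch only: Lambda@Edge functions must live in us-east-1 (N. Virginia).
provider "aws" {
  region = "us-east-1"
}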
Before we begin, there are a few prerequisites:
In order to use AWS Lambda@Edge, we need to create a CloudFront distribution. At the same time, a CloudFront distribution requires an "origin". We don't really need an "origin", because we will use our own code to build an HTTP response. However, to make AWS happy, we will create a dummy S3 bucket. Here is the Terraform code to create a simple S3 bucket:
resource "aws_s3_bucket" "s3_bucket" {
bucket = var.bucket_name
}
It is recommended to always keep S3 buckets private, so to make sure CloudFront can access our S3 bucket, we need to create an Origin Access Identity. Here is the Terraform code to do it:
resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
}
Once we have the S3 bucket and Origin Access Identity we can combine the two and create the S3 bucket policy. Here is the Terraform code to do it:
data "aws_iam_policy_document" "s3_policy" {
statement {
actions = ["s3:GetObject"]
resources = ["${aws_s3_bucket.s3_bucket.arn}/*"]
principals {
type = "AWS"
identifiers = [aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn]
}
}
}
resource "aws_s3_bucket_policy" "s3_bucket_policy" {
bucket = aws_s3_bucket.s3_bucket.id
policy = data.aws_iam_policy_document.s3_policy.json
}
Note: Here we have used a Terraform data source to create the policy document. We could have also embedded a JSON document directly into the bucket policy, without a data element, as shown in the sketch below.
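For completeness, here is a sketch of that alternative using Terraform's built-in jsonencode() instead of a data source. It is functionally equivalent to the policy above:
# A sketch of the inline-JSON alternative to the aws_iam_policy_document data source.
resource "aws_s3_bucket_policy" "s3_bucket_policy" {
  bucket = aws_s3_bucket.s3_bucket.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.s3_bucket.arn}/*"
      Principal = { AWS = aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn }
    }]
  })
}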
Once we have everything in place from an "origin" perspective, the next step is to create the AWS Lambda function that will be referenced by the CloudFront distribution. Here is the Terraform code to do it:
resource "aws_lambda_function" "main" {
function_name = var.function_name
description = var.function_description
filename = var.filename
source_code_hash = filebase64sha256(var.filename)
handler = var.handler
runtime = var.runtime
role = aws_iam_role.execution_role.arn
timeout = var.timeout
memory_size = var.memory_size
publish = true
}
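The function references aws_iam_role.execution_role, which is not shown above. Here is a minimal sketch of such a role, assuming nothing beyond what Lambda@Edge requires: the role must be assumable by both lambda.amazonaws.com and edgelambda.amazonaws.com. The role name is arbitrary:
# A sketch of the execution role referenced by the Lambda function above.
# Lambda@Edge requires the edgelambda.amazonaws.com service principal in addition to lambda.amazonaws.com.
resource "aws_iam_role" "execution_role" {
  name = "lambda-edge-execution-role" # hypothetical name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "sts:AssumeRole"
      Principal = {
        Service = ["lambda.amazonaws.com", "edgelambda.amazonaws.com"]
      }
    }]
  })
}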
Note: This is a bare-bones function; for production use cases, you'll want to make sure that function errors and logs are forwarded to AWS CloudWatch.
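A minimal sketch of what that could look like, assuming the execution role from the previous sketch: attaching the AWS-managed AWSLambdaBasicExecutionRole policy, which allows the function to create log groups and write log events. Keep in mind that Lambda@Edge writes its logs to CloudWatch in the region closest to where the function executed, not only in us-east-1.
# A sketch only: grant the execution role permission to write logs to CloudWatch.
resource "aws_iam_role_policy_attachment" "lambda_basic_logging" {
  role       = aws_iam_role.execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}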
Looking at the Terraform code for the AWS Lambda function, we can see that there are filename, handler, and runtime fields. Let's see why we need these fields: filename points to the ZIP archive containing the deployment package, handler identifies the exported function that Lambda invokes (in the form "<file name>.<exported function>"), and runtime selects the Node.js runtime used to execute the code.
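For illustration, the corresponding terraform.tfvars might look like this. These are hypothetical values; adjust them to your build output and to the Node.js runtimes Lambda@Edge currently supports:
# Hypothetical terraform.tfvars values, for illustration only.
filename = "function.zip"  # the deployment package built in the packaging step below
handler  = "index.handler" # matches exports.handler in a file named index.js
runtime  = "nodejs14.x"    # a Node.js runtime supported by Lambda@Edge at the time of writing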
Having all the Terraform code related to AWS Lambda function out of the way, let's see how we can use Adobe Target NodeJS SDK to power the Lambda function.
In order to use the Adobe Target NodeJS SDK, we need to install it from NPM using the following command:
$ npm i @adobe/target-nodejs-sdk -P
Once we have the Adobe Target NodeJS SDK dependency, we need to create the AWS Lambda function handler. Here is the sample code:
const TargetClient = require("@adobe/target-nodejs-sdk");

// On-Device Decisioning artifact bundled with the function
const RULES = require("./rules.json");

// Create the Target client once per container, so it can be reused across invocations
const createTargetClient = () => {
  return new Promise(resolve => {
    const result = TargetClient.create({
      client: "<client code>",
      organizationId: "<IMS organization ID>",
      logger: console,
      decisioningMethod: "on-device",
      artifactPayload: RULES,
      events: {
        clientReady: () => resolve(result)
      }
    });
  });
};

// Extract and parse the base64-encoded body of the CloudFront viewer request
const getRequestBody = event => {
  const request = event.Records[0].cf.request;
  const body = Buffer.from(request.body.data, "base64").toString();

  return JSON.parse(body);
};

// Build a CloudFront-compatible HTTP response
const buildResponse = body => {
  return {
    status: "200",
    statusDescription: "OK",
    headers: {
      "content-type": [{
        key: "Content-Type",
        value: "application/json"
      }]
    },
    body: JSON.stringify(body)
  };
};

const buildSuccessResponse = response => {
  return buildResponse(response);
};

const buildErrorResponse = error => {
  const response = {
    message: "Something went wrong.",
    error
  };

  return buildResponse(response);
};

const targetClientPromise = createTargetClient();

exports.handler = (event, context, callback) => {
  // extremely important, otherwise execution hangs
  context.callbackWaitsForEmptyEventLoop = false;

  const request = getRequestBody(event);

  targetClientPromise
    .then(client => client.getOffers({ request }))
    .then(deliveryResponse => {
      console.log("Response", deliveryResponse);
      callback(null, buildSuccessResponse(deliveryResponse.response));
    })
    .catch(error => {
      console.log("Error", error);
      callback(null, buildErrorResponse(error));
    });
};
Note: The RULES constant references the On-Device Decisioning artifact rules.json file. This file can be downloaded from https://assets.adobetarget.com/<client code>/production/v1/rules.json. It will be available only after you have enabled On-Device Decisioning for your Adobe Target account.
One thing worth mentioning: the Adobe Target NodeJS SDK was created and tested in a server-side context, so it runs a few "background processes", such as polling for On-Device Decisioning artifact updates. To make sure the AWS Lambda function does not hang and time out, we have to use:
context.callbackWaitsForEmptyEventLoop = false;
For more details on context.callbackWaitsForEmptyEventLoop, please check the official AWS Lambda documentation.
We now have the sample AWS Lambda function handler and the On-Device Decisioning artifact, aka rules.json. To be able to deploy this code, we need to package it in a ZIP archive. On a UNIX-based system, this can be done using:
$ zip -r function.zip .
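If you prefer to keep packaging inside Terraform as well, the hashicorp/archive provider can build the ZIP during terraform apply. Here is a sketch, assuming the handler code and rules.json live in a hypothetical lambda/ folder:
# A sketch using the hashicorp/archive provider to build the deployment package.
data "archive_file" "lambda_package" {
  type        = "zip"
  source_dir  = "${path.module}/lambda" # hypothetical folder containing index.js, rules.json and node_modules
  output_path = "${path.module}/function.zip"
}
The resulting output_path (and the data source's output_base64sha256 attribute) can then be fed into the filename and source_code_hash arguments of the aws_lambda_function resource.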
To connect all the dots, we need to create the CloudFront distribution. Here is the Terraform code to do it:
resource "aws_cloudfront_distribution" "cloudfront_distribution" {
enabled = true
is_ipv6_enabled = true
origin {
s3_origin_config {
origin_access_identity = aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path
}
domain_name = aws_s3_bucket.s3_bucket.bucket_domain_name
origin_id = var.bucket_name
}
restrictions {
geo_restriction {
restriction_type = "none"
}
}
default_cache_behavior {
target_origin_id = var.bucket_name
allowed_methods = ["HEAD", "DELETE", "POST", "GET", "OPTIONS", "PUT", "PATCH"]
cached_methods = ["GET", "HEAD"]
lambda_function_association {
event_type = "viewer-request"
lambda_arn = aws_lambda_function.main.qualified_arn
include_body = true
}
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
viewer_protocol_policy = "redirect-to-https"
min_ttl = 0
default_ttl = 7200
max_ttl = 86400
}
viewer_certificate {
cloudfront_default_certificate = true
}
}
There is a lot of boilerplate, but the most interesting pieces are the lambda_function_association block, which attaches the Lambda@Edge function to the viewer-request event using its qualified (versioned) ARN, and include_body = true, which makes the POST request body available to the function.
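One small convenience worth adding, though not strictly required: a Terraform output that surfaces the distribution's domain name after terraform apply, so you don't have to look it up in the AWS console. A sketch, with an arbitrary output name:
# Print the CloudFront domain name after "terraform apply".
output "cloudfront_domain_name" {
  value = aws_cloudfront_distribution.cloudfront_distribution.domain_name
}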
If everything was set up properly, you should have a CloudFront distribution domain name. Using the domain name, you can run a simple cURL command to check that everything looks good. Here is a sample:
curl --location --request POST 'https://dpqwfa2gsmjjr.cloudfront.net/v1/personalization' \
--header 'Content-Type: application/json' \
--data-raw '{
  "execute": {
    "pageLoad": {}
  }
}'
This will simulate a “pageLoad” request aka “Target global mbox” call. The output would look something like this:
{
  "status": 200,
  "requestId": "63575665f53944a1af93337ebcd68a47",
  "id": {
    "tntId": "459b761e8c90453885ec68a845b3d0da.37_0"
  },
  "client": "targettesting",
  "execute": {
    "pageLoad": {
      "options": [
        {
          "type": "html",
          "content": "<div>Srsly, who dis?</div>"
        },
        {
          "type": "html",
          "content": "<div>mouse</div>"
        }
      ]
    }
  }
}
By looking at the sheer amount of Terraform code, one might ask whether all of this effort is worth it.
Here are a few benefits:
Follow the Adobe Tech Blog for more developer stories and resources, and check out Adobe Developers on Twitter for the latest news and developer products. Sign up for future Adobe Experience Platform Meetups.
Originally published: May 20, 2021