Cost Optimization on Adobe Experience Platform Through Code and Infrastructure

NimashaJain
Community Manager

21-10-2021

Authors: Jaemi Bremner (#Jaemi_Bremner), Nav Hothi, and Douglas Paton.


In this blog, we continue our series on reducing costs on Adobe Experience Platform with a look at cost optimization. Enterprise CIOs are asked to do more with less, and we want to share our learnings. We explore how cost optimization differs from cost-cutting, and we break down our process for finding areas where we can reduce spend without sacrificing service for our customers.

As we discussed in our post on cost management in Adobe Experience Platform, after six years of expansive growth with Adobe Experience Platform Pipeline, we needed to start finding ways to reduce our operating costs.

To do this, we used a cost optimization process. Cost optimization involves a continuous look at spending, with the goal of reducing it without sacrificing service. It differs from cost-cutting because it is a proactive, rather than reactive, approach to reducing spend. Cost-cutting is often a last resort: it happens when the only real option left is cutting costs or closing down, and it leads to decreased levels of service for customers.

Cost optimization, on the other hand, is a conscientious practice that should be a regular part of any business. If you optimize costs properly, you don't have to worry about cutting them. Instead, you see a boost in profits, you create long-term sustainability, and you reduce the amount of wasted resources in your system.

That’s exactly what we wanted. We wanted something that ran leaner, but still had the power to process up to 100 billion messages a day across 13 data centers. To achieve this we had to find the balance between quality and price.

Within Adobe Experience Platform, we focused our cost optimization efforts on two areas: infrastructure and code.

When looking at infrastructure, we need to find areas where we may be over-provisioned. Before cloud technology allowed us to easily scale servers up and down the way we can now, it was common practice to over-provision infrastructure so it could handle large spikes in usage. We had to be ready for spikes when they occurred, which meant a lot of computing resources sitting idle most of the time.

The need to over-provision can be addressed with a combination of containerization and scaling technology. At Adobe Experience Platform, we are heavy users of Kubernetes, which allows scaling both at the cluster level, by adding and removing worker nodes as needed, and at the microservice level, using either horizontal or vertical pod scaling.

By breaking monolithic applications into containerized microservices, we can split one large machine into multiple smaller ones, which opens the door to scalability. Imagine a large monolithic web application that performs a variety of tasks: serving a UI frontend for users alongside backend components that run computation tasks or make database calls. If a single component, such as the frontend UI, needs more capacity, scaling the monolith means scaling every component, including those that do not require it. By breaking the application into smaller microservices, we can scale up only the components that need it.
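To make the difference concrete, here is a toy cost comparison. The component sizes and the per-vCPU-hour price are hypothetical figures chosen for illustration, not numbers from this post: scaling a monolith replicates every component, while microservices scale only the hot path.

```python
# Illustrative comparison (hypothetical sizes and prices): scaling a monolith
# replicates every component, while microservices scale only the hot path.

CPU_PRICE_PER_HOUR = 0.04  # assumed price per vCPU-hour

# Hypothetical component sizes, in vCPUs per replica.
components = {"frontend": 2, "compute": 8, "database_layer": 4}

def monolith_cost(replicas: int) -> float:
    """Scaling the monolith duplicates all components in every replica."""
    return replicas * sum(components.values()) * CPU_PRICE_PER_HOUR

def microservice_cost(frontend_replicas: int) -> float:
    """Only the frontend scales; the other services stay at one replica each."""
    scaled = frontend_replicas * components["frontend"]
    static = components["compute"] + components["database_layer"]
    return (scaled + static) * CPU_PRICE_PER_HOUR

# A traffic spike that needs 5x frontend capacity:
print(f"monolith:      ${monolith_cost(5):.2f}/hour")
print(f"microservices: ${microservice_cost(5):.2f}/hour")
```

With these made-up numbers, the monolith costs more than three times as much to absorb the same frontend spike, because the compute and database components are duplicated along for the ride.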

Kubernetes offers quite a bit more control over your infrastructure. Its built-in scaling capabilities mean your system has the necessary elasticity to cover any spikes you experience. At the cluster level, we can use the cluster autoscaler, which adds or removes worker nodes in the Kubernetes node pool as needed. At the pod level, we can use the horizontal pod autoscaler, which adds or removes pods depending on the scaling criteria. We can also use the vertical pod autoscaler, which scales down CPU or memory requests for pods that over-request resources, or scales them up for pods that need more.
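The horizontal pod autoscaler's core decision rule is simple enough to sketch. The function below mirrors the scaling formula documented for the Kubernetes HPA (desired replicas = ceil(current replicas × current metric / target metric)); the utilization numbers in the example are illustrative.

```python
import math

def hpa_desired_replicas(current_replicas: int,
                         current_metric: float,
                         target_metric: float) -> int:
    """Core scaling rule of the Kubernetes horizontal pod autoscaler:
    desired = ceil(current * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# CPU utilization at 90% against a 60% target: scale 4 pods up to 6.
print(hpa_desired_replicas(4, 90, 60))  # → 6
# Utilization drops to 20%: scale back down to 2.
print(hpa_desired_replicas(4, 20, 60))  # → 2
```

Because the rule works in both directions, capacity tracks load instead of sitting idle at peak provisioning.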

Figure 1: Considering technologies that have scaling built in can help to further optimize costs.

Scaling technology can save you a lot of money here. If you build elasticity into your infrastructure, you’ll no longer have to worry about having too much or too little. The system will scale up and down as necessary. This is great for situations where you experience usage spikes, but can’t justify the cost of putting a static system in place to cover those spikes. Ultimately, this saves you a ton of money, while maintaining the level of service your customers expect.

Another way to optimize infrastructure is to audit it on a recurring basis: look closely at what you're using and what you're not. This is where monitoring plays a critical role. Kubernetes has built-in monitoring functionality that provides great insight into how you're using your resources, and there are many monitoring services and applications that can be incorporated into your technology stack.

Monitoring gives you the data you need to make the decisions when doing an infrastructure audit. Collecting the metrics you need to analyze usage patterns helps you identify where efficiencies can be found. The more data you have, the better your decisions will be.
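As a sketch of how such metrics feed an audit, the snippet below flags services whose average utilization falls below a threshold. The service names, samples, and threshold are all hypothetical, not data from our systems.

```python
# Hypothetical utilization samples (percent CPU) collected by a monitoring
# system; names and thresholds are illustrative only.
samples = {
    "pipeline-frontend": [62, 70, 65, 71],
    "batch-worker":      [8, 5, 11, 6],
    "cold-storage-sync": [2, 3, 1, 2],
}

THRESHOLD = 20  # flag services averaging under 20% CPU as downsizing candidates

def underused(metrics: dict, threshold: float) -> list:
    """Return services whose average utilization is below the threshold."""
    return sorted(name for name, vals in metrics.items()
                  if sum(vals) / len(vals) < threshold)

print(underused(samples, THRESHOLD))  # flags batch-worker and cold-storage-sync
```

In practice the inputs would come from your monitoring stack rather than a hard-coded dict, and the threshold would reflect your own capacity planning.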

We’ve saved millions already this year alone by looking at our infrastructure and assessing whether or not we need what we have. We found that we had a lot of storage space that simply wasn’t being used, and compute resources that were sitting idle.

The trick is to be creative when you’re looking for areas you can reduce spend. Thinking outside the box can result in unexpected savings.

With code, the question you have to ask yourself is: are you using the right code for the job? For example, if you have something compute-intensive or time-sensitive, an interpreted language like Python doesn't make a lot of sense. You'd probably be better off with a compiled language like Java. A REST API microservice written in Java will typically use fewer resources than one written in Python. A Java microservice that uses one less CPU or one less GB of RAM than a Python microservice doing the same job can yield significant cost savings when deployed at large scale.
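A rough back-of-the-envelope calculation shows why one CPU and one GB per instance matters at scale. The prices and replica count below are assumed placeholders, not actual Adobe or cloud-provider figures.

```python
# Back-of-the-envelope savings from a leaner runtime; prices are assumed,
# not actual Adobe or cloud-provider figures.
CPU_HOUR = 0.04       # assumed $ per vCPU-hour
GB_HOUR = 0.005       # assumed $ per GB-of-RAM-hour
HOURS_PER_YEAR = 24 * 365

def yearly_savings(replicas: int, cpus_saved: float, gb_saved: float) -> float:
    """Annual savings when each replica uses fewer CPUs and less memory."""
    hourly = replicas * (cpus_saved * CPU_HOUR + gb_saved * GB_HOUR)
    return hourly * HOURS_PER_YEAR

# 500 replicas, each saving 1 vCPU and 1 GB of RAM:
print(f"${yearly_savings(500, 1, 1):,.0f} per year")
```

Even at these modest assumed rates, small per-instance savings compound into six figures annually once a service runs at fleet scale.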

Figure 2: Choosing the right code for the job means evaluating usage and doing code reviews.

There's no need to reinvent the wheel. You don't need to build new tooling to optimize your code; just use the tools you have.

Evaluate your usage, do some performance testing to establish a baseline, and then tune your settings based on your findings. You want to make sure you're not running something that is unnecessarily "pigging out" on resources (and driving up costs in the process).

One thing that helps is doing code reviews. Look for unneeded libraries in your code: things that were useful at one point but are no longer needed, or libraries serving features that no longer exist. You may find that refactoring is necessary. This can be a time-consuming process, but it's worth the effort in terms of savings. Doing these reviews regularly helps you identify and eliminate unnecessary costs.
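As a minimal illustration of what part of such a review can automate, this sketch uses Python's standard `ast` module to flag top-level imports that are never referenced in a module. Real linters handle many more cases (re-exports, string references, side-effect imports), so treat this as a toy.

```python
import ast

def unused_imports(source: str) -> list:
    """Report imported names that never appear elsewhere in the module.
    A rough static check -- real linters cover many more cases."""
    tree = ast.parse(source)
    imported = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                imported.add(alias.asname or alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported.add(alias.asname or alias.name)
    used = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
    return sorted(name for name in imported if name not in used)

code = "import json\nimport os\n\nprint(os.getcwd())\n"
print(unused_imports(code))  # → ['json']
```

Running checks like this in CI keeps dead dependencies from quietly accumulating between manual reviews.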

On the other hand, if you're doing something involving machine learning, it makes more sense to go with Python over Java. While Java is not a bad language for machine learning, Python offers hundreds of machine learning libraries along with a huge machine learning community, which together can cut down overall development time. That time can be reinvested in other areas, such as code optimization, whose benefits include cost optimization.

At its core, cost optimization encourages you to constantly monitor cost and usage. You’re looking for unneeded things and eliminating them.

When you build this practice into your business as a regular thing, you not only start saving money without reducing the quality of your product, but you also make your boss happy (which makes you look good). And, when you reduce resources you’re unnecessarily consuming, you make your business greener, too.
Follow the Adobe Experience Platform Community Blog for more developer stories and resources, and check out Adobe Developers on Twitter for the latest news and developer products. Sign up here for future Adobe Experience Platform Meetups.

Meta — A look at how cost optimization on Adobe Experience Platform has been helping reduce costs without sacrificing service.

References

  1. Adobe Experience Platform — https://www.adobe.com/experience-platform.html
  2. Adobe Experience Platform Pipeline — https://theblog.adobe.com/creating-adobe-experience-platform-pipeline-with-kafka/
  3. Kubernetes — https://kubernetes.io/
  4. Azure — https://azure.microsoft.com/en-ca/
  5. Cosmos DB — https://docs.microsoft.com/en-us/azure/cosmos-db/
  6. Java — https://www.java.com/
  7. Python — https://www.python.org/

Originally published: Jun 23, 2020