

Stop Structuring Contracts Based (directly) on AA Server Call Volume


Community Advisor

7/14/18

I understand that there was once a time when the cost to supply and maintain Adobe Analytics was based directly on the number (and size) of server calls being processed and stored.  I would guess that this is no longer one of the main operating costs in providing and supporting Adobe Analytics client companies.

I'm not saying that bigger companies with higher volume shouldn't pay more than smaller ones with lower volume.  But, the strict 1:1 relationship between server calls and dollars presents a perverse economic incentive that twists implementation design and methodology in ways that don't best serve the goal of providing good information for analytics.

The primary example of this can be seen in the implementation of a product detail page on an e-commerce website.  We have a few things that we might want to track on this page:

  1. The load of the page itself (for AA traffic reporting, Pathing, etc).
  2. The product view for the primary product (or products on the page).
  3. Impressions of cross-selling products served from a 3rd party service (e.g. Rich Relevance, Certona, Adobe Target).
  4. Product Rating and review information served from a 3rd party server (e.g. Bazaarvoice, or PowerReviews)
  5. Any other latent 3rd party tag info relative to the product or customer which may be loaded asynchronously.

If cost were no object, I would argue that the better design is to send an s.t() (page view) call to AA for #1 above and separate s.tl() (custom link tracking) calls for #2-#5 above.

Since cost is a factor (driven directly by server call volume), as an implementer I am forced to suppress the page load (#1 above) and write code to wait for events #2-#5 hoping to consolidate all the information into a single server call.

  • This leads to less accurate reporting since the delayed page view tag is recorded less often.
  • It also leads to more complicated implementations which are harder to maintain.
  • It is a mindset rooted in the past, in synchronous, monolithic, server-managed websites.  It does not translate well to Single Page Apps or microservice/component-based design patterns.
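As a sketch of the consolidation pattern described above: the page view is held back while code waits for the async 3rd-party sources, then everything is merged into one beacon. This is hypothetical plain JavaScript, not Adobe's API; the names `sendConsolidatedBeacon`, `getRecs`, and `getReviews` are invented stand-ins for the recommendation and review services.

```javascript
// Stand-ins for async 3rd-party sources (e.g. Rich Relevance, Bazaarvoice).
const getRecs = () => Promise.resolve({ recImpressions: ["sku123", "sku456"] });
const getReviews = () => Promise.resolve({ avgRating: 4.5 });

// Cost-driven design: suppress the immediate page view and delay the single
// server call until every async source settles (or a timeout expires).
// This delay is why the consolidated approach under-counts page views.
async function sendConsolidatedBeacon(send) {
  const base = { pageName: "product detail", productView: "sku123" };
  const timeout = new Promise(res => setTimeout(() => res({}), 100));
  const extras = await Promise.race([
    Promise.all([getRecs(), getReviews()])
      .then(parts => Object.assign({}, ...parts)),
    timeout,
  ]);
  send(Object.assign({}, base, extras)); // one server call instead of three
}
```

If the visitor navigates away before the timeout or the slowest source resolves, the beacon never fires at all, which is the accuracy loss described in the first bullet.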
8 Comments


Employee

7/14/18

Very interesting idea/comment.

If we did away with billing by server call volume, do you have a suggestion on how we could structure an alternative pricing model that wouldn't have bad incentives?


Level 2

7/14/18

I agree. Or maybe simply database size? It's the volume of data stored and manipulated, not server calls. We can send every event separately and not increase the record sizes.



Community Advisor

7/21/18

After a bit more thought, I think that visit-based metering would be the best bet.  Since it's a server-based metric, it can't be gamed, and since it already exists, contract conversion could be automated.


Employee

8/8/18

I like your idea! Certainly not an easy change by any stretch of the imagination, but I'll take this feedback back to the team. Last question for you - if we charged not only by hit count, but by size of hit (I guess that'd be similar to volume or GB pricing) would that help ease the strange implementation incentives?

For example, would you prefer to send lots of hits that each carry little data, or fewer hits that each carry lots of data? We experimented with this idea years ago with something called "lite" server calls, but they were never widely adopted, so we killed them.
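A back-of-envelope comparison shows why size-based metering is neutral to hit granularity where hit-count metering is not. The rates below are made up purely for illustration; nothing here reflects actual Adobe pricing.

```javascript
// Assumed, illustrative rates -- not real pricing.
const perHitRate = 0.0001;   // $ per server call
const perKBRate  = 0.00005;  // $ per KB collected

// The same total data, implemented two ways:
const manySmall = { hits: 10, kbPerHit: 0.5 }; // event-per-call design
const fewLarge  = { hits: 1,  kbPerHit: 5.0 }; // consolidated design

// Under hit metering, the event-per-call design costs 10x more;
// under KB metering, both designs cost the same.
const hitCost = d => d.hits * perHitRate;
const gbCost  = d => d.hits * d.kbPerHit * perKBRate;
```

Under `gbCost`, the implementer is free to choose whichever call structure produces the best data, since granularity no longer changes the bill.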


Community Advisor

8/9/18

My guess is that if Adobe moved from hit metering to GB metering, customers would start making implementation changes to optimize for smaller beacons.  We'd see a big move toward classification usage and a resurgence in all the old tricks that were once used when browser limitations dictated that beacons be smaller than 2K.

I think that this would be unproductive time spent and would probably result in limiting a client's analysis capabilities. On the bright side, it would stop people from duplicating props and eVars... 
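The classification trick mentioned above works by sending only a short key on the hit and keeping the verbose attributes in a lookup table that is applied at report time rather than at collection time. This is a hypothetical illustration of the idea in plain JavaScript; the object shapes are invented and not any Adobe format.

```javascript
// Verbose approach: every attribute travels on every hit (bigger beacon).
const verboseHit = {
  product: "sku123",
  productName: "Deluxe Widget, 3-pack, Blue",
  category: "Widgets > Multipacks",
  supplier: "Acme",
};

// Classified approach: only the key travels on the hit...
const classifiedHit = { product: "sku123" };

// ...and the attributes live in a classification (lookup) table that is
// uploaded separately and joined to the hits at report time.
const classificationTable = {
  sku123: {
    productName: "Deluxe Widget, 3-pack, Blue",
    category: "Widgets > Multipacks",
    supplier: "Acme",
  },
};

// Rough proxy for beacon size: serialized JSON length.
const size = hit => JSON.stringify(hit).length;
```

The beacon shrinks, but at the cost of maintaining the lookup table and losing the ability to segment on attributes that were never sent, which is the analysis limitation noted above.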

I go back to my original thesis statement, "I understand that there was once a time when the cost to supply and maintain Adobe Analytics was based directly on the number (and size) of server calls being processed and stored.  I would guess that this is no longer one of the main operating costs in providing and supporting Adobe Analytics client companies."

In today's cloud-based environment, data collection and storage are cheap (and only getting cheaper).  Adobe will see plenty of competition from vendors (or roll-your-own solutions) that cost much less for collecting and storing the same amount of data. The reality is that people don't, and won't, choose Adobe for its ability to store data; the choice is made based on what Adobe enables you to do with the data: Analysis Workspace (AA), Sensei, end-to-end integration with digital asset creation and delivery (AEM), integration with digital marketing (AEM), and integrated e-comm (Magento).