
SOLVED

Adobe IO Runtime Payload and Result Limit 1 MB

Level 2

Hi community,

From the documentation https://developer.adobe.com/runtime/docs/guides/#lets-talk-numbers--understanding-the-system-setting... a limit of 1 MB is specified for both the payload and the result.

It also says that if you need more, the customer should consider an S3 bucket.

Is there another way to increase this limit? If the customer has a payload of 1.5 MB, setting up S3 just for that is too much.

 

Thanks a lot

 

4 Replies

Level 3

We are working on the same issue.

 

My thought (which might make the problem go away): is it possible to activate a gzip layer architecturally, a bit like the AEM dispatcher does with Apache when serving pages to the browser?

On the dispatcher this optimization reduces the weight of the page considerably (by 80 to 90 per cent).

I could not find anything in the documentation about it.

Level 2

Thank you for your insight. I suppose there is not much documentation on this because the 1 MB limit is tied to a limitation of Apache Kafka, which is used in the Adobe I/O Runtime architecture.

 

Via this library, https://github.com/adobe/aio-lib-files, we could overcome this limit, but I don't know whether it helps in a Server Side Rendering use case.
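
A minimal sketch of that approach, assuming the documented @adobe/aio-lib-files API (init, write, generatePresignURL) and a hypothetical action shape: instead of returning the large payload directly, the action stores it in Files storage and returns a short-lived presigned download URL.

const filesLib = require('@adobe/aio-lib-files');

async function main(params) {
  // inside Runtime, init() picks up the namespace credentials automatically
  const files = await filesLib.init();

  // hypothetical large result that would exceed the 1 MB response limit
  const bigPayload = JSON.stringify({ items: params.items || [] });

  // store the payload in Files storage instead of returning it in the response
  const path = `results/${params.requestId || Date.now()}.json`;
  await files.write(path, bigPayload);

  // hand back a short-lived presigned URL the client can download from
  const downloadUrl = await files.generatePresignURL(path, { expiryInSeconds: 300 });

  return {
    statusCode: 200,
    body: { downloadUrl },
  };
}

exports.main = main;

For SSR this is indeed awkward, since the browser expects the page in the response body itself rather than a link to it.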

 

From some discussions with our team, it seems possible to use compression (gzip or brotli) on the response within Adobe I/O. This could work around the 1 MB limit at the expense of CPU usage.
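
As a quick sanity check of how much headroom compression buys (a standalone sketch; the ratio depends entirely on how compressible the real markup is, and the repeated string below compresses unrealistically well):

const zlib = require('zlib');

// a deliberately repetitive ~1.6 MB payload standing in for a rendered page
const payload = JSON.stringify({ html: '<div class="teaser">lorem ipsum</div>'.repeat(45000) });

const gzipped = zlib.gzipSync(payload);
const brotli = zlib.brotliCompressSync(Buffer.from(payload));

console.log(`raw:    ${(payload.length / 1024 / 1024).toFixed(2)} MB`);
console.log(`gzip:   ${(gzipped.length / 1024).toFixed(0)} KB`);
console.log(`brotli: ${(brotli.length / 1024).toFixed(0)} KB`);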

 

 

Level 3

Thank you for the feedback!

 

In my case (I use Adobe IO for SSR), doing some comparison checks on similar clients, I noticed that pages can reach 2 MB (conservatively), so the general 1 MB limit is really not much.

 

For the sake of argument, I ask myself: even if we stay within 1 MB per page, does it really make sense to send 1 MB over the network rather than compress it?

Compression could then solve my issue, but it is not clear if/how to activate it in Adobe I/O SSR; after that, we would discuss the CPU implications with the customer (Adobe I/O / AEM publish). https://www.conduktor.io/kafka/kafka-message-compression

Correct answer by
Level 2

Thanks for your feedback. The only way to activate it within Adobe I/O, I think, is to handle it in the Runtime action itself, as in the example below, in which we use gzip or brotli to compress the response payload. As I said, the only implication is CPU usage.

 

/* Minimal stand-ins so the snippet is self-contained; adjust to your project.
   `Accept` is assumed to be the @hapi/accept content-negotiation library. */
const zlib = require('zlib');
const Accept = require('@hapi/accept');

const ENCODING_BR = 'br';
const ENCODING_GZIP = 'gzip';
const ENCODING_DEFLATE = 'deflate';
const ENCODING_IDENTITY = 'identity';
// most preferred encoding first
const ENCODING_PREFERENCE = [ENCODING_BR, ENCODING_GZIP, ENCODING_DEFLATE, ENCODING_IDENTITY];

// zlib/brotli options; tune compression level against CPU cost
const COMPRESSION_OPTS = {};
const JSON_RESPONSE_HEADERS = { 'Content-Type': 'application/json' };
const HttpStatus = { OK: 200 };

// serialize a JSON payload into a Buffer for compression
const json2Buffer = (json) => Buffer.from(JSON.stringify(json), 'utf8');

// shape of a raw OpenWhisk web action response
const createRawResponse = (statusCode, body, headers) => ({ statusCode, headers, body });

/**
 * Generate an `OK` response, which is potentially compressed, based on request headers.
 * @param {Object} requestHeaders The original request headers.
 * @param {Object} payload The response JSON payload.
 * @param {Object} additionalHeaders Optional additional response headers to set.
 * @returns {Object} a response object.
 */
function compressedSuccess(
  requestHeaders = {},
  payload = {},
  additionalHeaders = {},
) {
  // pick the best encoding the client accepts, see
  // https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.3
  const accept = Accept.encoding(requestHeaders['accept-encoding'], ENCODING_PREFERENCE);

  let compressionHeaders = {};
  let raw;

  switch (accept) {
    case ENCODING_BR:
      raw = zlib.brotliCompressSync(json2Buffer(payload), COMPRESSION_OPTS);
      break;
    case ENCODING_GZIP:
      raw = zlib.gzipSync(json2Buffer(payload), COMPRESSION_OPTS);
      break;
    case ENCODING_DEFLATE:
      raw = zlib.deflateSync(json2Buffer(payload), COMPRESSION_OPTS);
      break;
    default:
      // identity: send the JSON uncompressed
      raw = JSON.stringify(payload);
  }

  // response body is compressed, add relevant headers
  // also, openwhisk requires binary responses to be base64 encoded
  if (accept !== ENCODING_IDENTITY) {
    compressionHeaders = {
      'Content-Encoding': accept,
      Vary: 'Accept-Encoding',
    };
    raw = raw.toString('base64');
  }

  return createRawResponse(
    HttpStatus.OK,
    raw,
    {
      ...JSON_RESPONSE_HEADERS,
      ...compressionHeaders,
      ...additionalHeaders,
    },
  );
}
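
For completeness, a hypothetical web action wiring the helper up (the __ow_headers field is how OpenWhisk passes the original HTTP request headers to web actions):

async function main(params) {
  const requestHeaders = params.__ow_headers || {};

  // hypothetical payload; in the SSR case this would be the rendered page
  const payload = { page: 'rendered content' };

  return compressedSuccess(requestHeaders, payload);
}

exports.main = main;

One caveat worth noting: because OpenWhisk requires binary bodies to be base64 encoded, the compressed size grows by roughly a third, so the effective budget is around 750 KB of compressed data. For typical HTML or JSON that is still a large win.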