File size
Hi there,
I was wondering if I can upload very large files (up to 10 GB) to an AEM system. The files should be shared with our agencies around the world and be downloadable through any kind of API called from a Windows application. Thanks!
When uploading large amounts of assets to Adobe Experience Manager, reduce the configured maximum size of the buffered image cache to allow for unexpected spikes in memory consumption and to prevent the JVM from failing with OutOfMemoryErrors. Consider an example where you have a system with a maximum heap (-Xmx param) of 5 GB, an Oak BlobCache set at 1 GB, and the document cache set at 2 GB. In this case, the buffered cache could take a maximum of 1.25 GB of memory, which would leave only 0.75 GB for unexpected spikes.
Configure the buffered cache size in the OSGi Web Console. At http://host:port/system/console/configMgr/com.day.cq.dam.core.impl.cache.CQBufferedImageCache, set the property cq.dam.image.cache.max.memory in bytes. For example, 1073741824 is 1 GB (1024 x 1024 x 1024 = 1 GB).
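If you prefer to script this change, a minimal sketch using the Felix Web Console's configuration endpoint is shown below. The host, port, and admin credentials are assumptions for a local author instance; adjust them to your environment:

    # Set the buffered image cache limit to 1 GB (1073741824 bytes) via the Web Console config endpoint
    curl -u admin:admin -X POST \
      "http://localhost:4502/system/console/configMgr/com.day.cq.dam.core.impl.cache.CQBufferedImageCache" \
      -d "apply=true" \
      -d "propertylist=cq.dam.image.cache.max.memory" \
      -d "cq.dam.image.cache.max.memory=1073741824"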
From AEM 6.1 SP1, if you're using a sling:osgiConfig node for configuring this property, make sure to set the data type to Long. For more details, see CQBufferedImageCache consumes heap during Asset uploads.
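For illustration only, a sling:osgiConfig node for this PID could look roughly like the following .content.xml; the {Long} prefix forces the Long data type mentioned above, and the /apps/myproject path is an assumption for your project structure:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- e.g. /apps/myproject/config/com.day.cq.dam.core.impl.cache.CQBufferedImageCache/.content.xml -->
    <jcr:root xmlns:sling="http://sling.apache.org/jcr/sling/1.0"
              xmlns:jcr="http://www.jcp.org/jcr/1.0"
        jcr:primaryType="sling:OsgiConfig"
        cq.dam.image.cache.max.memory="{Long}1073741824"/>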
Also, note the following known issues if you are using cold standby or an S3 data store:
There are two major known issues related to large files in AEM. When files reach sizes greater than 2 GB, cold standby synchronization can run into an out-of-memory situation. In some cases, it prevents standby sync from running. In other cases, it causes the primary instance to crash. This scenario applies to any file in AEM that is larger than 2 GB, including content packages.
Likewise, when files reach 2 GB in size while using a shared S3 data store, it may take some time for the file to be fully persisted from the cache to the filesystem. As a result, when using binary-less replication, it is possible that the binary data may not have been persisted before replication completes. This situation can lead to issues, especially if the availability of data is important, for example in offloading scenarios.