There are CDP data ingestion guardrails that specify what size of data can be ingested versus dropped, but we can't find any document that clearly explains how to measure these sizes during the solutioning or implementation phase. Adobe Support always answers in terms of profile richness, i.e. data lake size divided by profile count, but that metric is not related to these guardrails.
1) Maximum ExperienceEvent size 10KB System-enforced guardrail The maximum size of an event is 10KB. Ingestion will continue, but any events larger than 10KB will be dropped.
2) Maximum profile record size 100KB System-enforced guardrail The maximum size of a profile record is 100KB. Ingestion will continue, but profile records larger than 100KB will be dropped.
3) Maximum profile fragment size 50MB System-enforced guardrail The maximum size of a single profile fragment is 50MB. Segmentation, exports, and lookups may fail for any profile fragment that is larger than 50MB.
4) Maximum profile storage size 50MB Performance guardrail The maximum size of a stored profile is 50MB. For example, a profile could contain a single fragment of 50MB, or multiple fragments across multiple datasets with a combined total of 50MB. Adding new fragments to a profile that already exceeds 50MB in combined size will affect system performance.
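Since Adobe doesn't publish an exact measurement method for these limits, a reasonable working assumption during solutioning is to measure the serialized JSON size of your sample payloads (the form in which events and records are ingested) and compare it against the guardrail thresholds. The sketch below is illustrative, not an official Adobe tool: the function names and the sample event are made up, and the 1024-based byte interpretation of KB/MB is an assumption.

```python
import json

# Guardrail limits in bytes, taken from the list above.
# ASSUMPTION: KB/MB are interpreted as 1024-based; Adobe does not
# document the exact byte interpretation, so treat these as approximate.
MAX_EVENT_BYTES = 10 * 1024             # 1) ExperienceEvent: 10KB
MAX_RECORD_BYTES = 100 * 1024           # 2) Profile record: 100KB
MAX_FRAGMENT_BYTES = 50 * 1024 * 1024   # 3) Profile fragment: 50MB
MAX_PROFILE_BYTES = 50 * 1024 * 1024    # 4) Stored profile: 50MB

def payload_size(payload: dict) -> int:
    """Size of a payload as compact serialized JSON, in UTF-8 bytes."""
    return len(json.dumps(payload, separators=(",", ":")).encode("utf-8"))

def check_event(event: dict) -> bool:
    """True if a sample ExperienceEvent fits under the 10KB guardrail."""
    return payload_size(event) <= MAX_EVENT_BYTES

def check_record(record: dict) -> bool:
    """True if a sample profile record fits under the 100KB guardrail."""
    return payload_size(record) <= MAX_RECORD_BYTES

def estimate_profile_size(fragments: list) -> int:
    """Rough combined size of all fragments that merge into one profile,
    for comparison against the 50MB storage guardrail."""
    return sum(payload_size(f) for f in fragments)

# Hypothetical sample event for illustration only.
sample_event = {
    "_id": "evt-001",
    "timestamp": "2024-01-15T12:00:00Z",
    "eventType": "web.webpagedetails.pageViews",
    "web": {"webPageDetails": {"name": "home", "URL": "https://example.com"}},
}

print(payload_size(sample_event), "bytes; under 10KB:", check_event(sample_event))
```

Running this over a representative sample of your planned payloads (e.g. the largest realistic event from each source) gives a concrete way to validate designs against guardrails 1 and 2 before implementation; summing per-dataset fragment sizes for your richest expected profile gives a rough check against guardrails 3 and 4.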