This conversation has been locked due to inactivity. Please create a new post.
Hi guys,
Quite often I am seeing a "Low Traffic" value ranked number 1 in the report results. I think I have read somewhere that the "Low Traffic" line item represents values with a small contribution, but most of the time the "Low Traffic" value contributes over 90% (see attached picture).
Any hint on how to get rid of it? When I break it down, it shows "NONE". It's hard to do any analysis on this data set.
Thanks,
Jakub
Hi Jakub,
The "Low Traffic" (uniques exceeded) designation for variable values is handled by an Adobe Analytics algorithm (the methodology is explained in the links below). It appears you have extremely high cardinality in this case if Low Traffic makes up 90% of the report. I would recommend reviewing the implementation to confirm whether this is necessary and by design. Anything you can do within the implementation to cut down on the number of unique value variations will help, but if the current variable value granularity is required for the business, then there may not be much you can do in Reports & Analytics.
http://blogs.adobe.com/digitalmarketing/analytics/high-cardinality-reports/
http://helpx.adobe.com/analytics/kb/uniques-exceeded.html
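To illustrate the behaviour those articles describe, here is a rough sketch of how the bucketing works. The 500,000 cap is the commonly cited default monthly unique limit, and the simplified first-come-first-served logic is only an approximation for illustration, not Adobe's exact algorithm:

```python
# Rough illustration only: values that show up after a variable passes its
# monthly unique limit roll into a single "Low Traffic" line item.
# The 500,000 cap and the simple logic below are assumptions, not the real algorithm.
from collections import Counter

UNIQUE_LIMIT = 500_000  # assumed monthly unique-value cap for one variable

def simple_report(values, unique_limit=UNIQUE_LIMIT):
    """Count occurrences per value, bucketing new values seen after the cap."""
    seen = set()
    report = Counter()
    for value in values:
        if value in seen or len(seen) < unique_limit:
            seen.add(value)
            report[value] += 1
        else:
            report["Low Traffic"] += 1
    return report

# With 1M+ unique search strings in a month, most of them land in "Low Traffic",
# which is why that one line item can dominate the report.
```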
Best,
Brian
Hi Brian,
thanks for the reply, I kinda thought this could be the reason. Yes, there is a large volume of unique instances.
The strings come from "SEARCH", so pretty much every search is different, hence the large volume. The string is constructed in processing rules (when a search is triggered, the string is built as country|state|area|pricefrom| and so on), so it is very unlikely that two search instances would be the same.
I am not sure how to change the implementation. The site has a volume of 300K+ UVs a day, so if on average a user does 3 or 4 searches, we end up with over 1 million unique strings constructed from the search (rough numbers sketched below).
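To put rough numbers on it, here is a quick sketch; the per-field cardinalities are made-up assumptions, not our actual data:

```python
# Back-of-the-envelope sketch: assumed (made-up) unique counts per search field.
# Each individual field stays well under any uniques limit, but the combined
# pipe-delimited string can take the product of all of them.
field_uniques = {"country": 50, "state": 60, "area": 2000, "price_from": 25}

combined_potential = 1
for count in field_uniques.values():
    combined_potential *= count

print(f"largest single field: {max(field_uniques.values()):,} uniques")  # 2,000
print(f"possible combined strings: {combined_potential:,}")              # 150,000,000
```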
Any hint on how to improve the implementation so it will give us better insights?
Thanks in advance,
Jakub
Hi Jakub,
Due to the high cardinality of the search variations, and because you are merging several values into one string, there isn't a workaround aside from capturing the data at a higher, summarized level or splitting the string apart into separate variables. The way the data is currently captured, as one combined string, will naturally yield a high Low Traffic percentage in Reports & Analytics, and it is only viable for analysis in your raw clickstream data as a big-data type effort.
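As a minimal sketch of that raw clickstream route, assuming the combined string lands in a data feed column such as post_evar5 (the column name, field order, and "|" delimiter here are assumptions based on your description, not your actual implementation):

```python
# Sketch: split the combined pipe-delimited search string from a raw clickstream
# / data feed export into one column per field, so each dimension can be analyzed
# on its own at much lower cardinality. Column name, field order, and delimiter
# are assumptions.
import pandas as pd

FIELDS = ["country", "state", "area", "price_from"]  # assumed field order

def explode_search_string(feed: pd.DataFrame, column: str = "post_evar5") -> pd.DataFrame:
    """Split the delimited search string into separate, low-cardinality columns."""
    parts = feed[column].str.split("|", expand=True).iloc[:, : len(FIELDS)]
    parts.columns = FIELDS
    return pd.concat([feed, parts], axis=1)

# Example usage with made-up rows:
feed = pd.DataFrame({"post_evar5": ["AU|NSW|Sydney|500000", "AU|VIC|Melbourne|350000"]})
print(explode_search_string(feed)[FIELDS])
```

The same idea applies at collection time: capturing each piece (country, state, area, price band) in its own variable keeps every individual report well under the uniques limit.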
Best,
Brian