
Ability to add metrics to the difference score


Level 2

10/26/18

The existing difference score is helpful, but it is frustrating that we are unable to add metrics of our own choosing to the data table and run the difference score on them. In the screenshot below we wanted to add the metrics in rows 4, 5, and 6. I understand rows 4 and 5 not getting a score, but row 6 should have produced a difference score.

The official help section calls out that metrics added after the difference score has run will get no results. What if we want to compare metrics that were not generated by the default table?

Official Help Page:

Segment Comparison
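To make the request concrete, here is a minimal illustrative sketch (in Python) of what "run a difference score on a metric of my choosing" could look like. Adobe's actual Segment Comparison scoring algorithm is not public; the normalized-difference formula and the sample conversion-rate numbers below are assumptions for illustration only.

```python
# Illustrative only: a toy "difference score" for a user-chosen metric across
# two segments. This is NOT Adobe's algorithm, just a simple normalized
# absolute difference to show the idea of scoring any metric on demand.

def difference_score(metric_a: float, metric_b: float) -> float:
    """Return a 0-1 score (0 = identical, 1 = maximally different)."""
    denominator = max(abs(metric_a), abs(metric_b))
    if denominator == 0:
        return 0.0
    return abs(metric_a - metric_b) / denominator

# Hypothetical conversion rates for the two segments being compared
segment_a_conversion = 0.042
segment_b_conversion = 0.031
print(difference_score(segment_a_conversion, segment_b_conversion))  # ~0.26
```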

7 Comments


Employee

10/29/18

Hi Brian,

Thanks for submitting this idea. It makes a lot of sense. Of course you have the ability to choose which metrics are included/excluded before you run a segment comparison (and can then remove metrics from the results as well), but I get the sense from your description that you don't always know which metrics you want to include or exclude, so you want to be able to make that decision on the fly, which is a bit of a different use case.

This isn't something segment comparison can do today; it runs its calculations all at once, and doesn't have an automatic refresh (which is why you won't get a score for metrics added after it has run). We'll keep an eye on this idea for votes and comments, and will consider it for a future release accordingly.

Thanks again!

Ben


Level 2

10/30/18

Hey Ben,

Thanks for the response. You mentioned something in your answer that I wanted to follow up on.

Ben said:

Of course you have the ability to choose which metrics are included/excluded before you run a segment comparison

I might be missing something obvious here, but when I tried, I could not force any metrics into the segment comparison before it ran. How are you able to “choose which metrics are included” before the segment comparison is run? Thanks in advance for your help.

Back to my original ask: yes, I think the ability for users to compare metrics on the fly would be very helpful and would fit the spirit of Analysis Workspace. Workspace, in my mind, is a great place for investigations because you can keep breaking things down by other dimensions until you get to the piece of information you are looking for.

Cheers,

Brian Mallis

Slack: @brian.mallis

Email: brian.mallis@sony.com

Phone: (949) 616 5521


Employee

10/30/18

Hi Brian,

When setting up a segment comparison, you should be able to click "Show Advanced Settings", and from there you can exclude the dimensions/metrics/segments that you do NOT want included in the analysis (so it's a blacklist, not a whitelist). This mechanism provides a way for you to get rid of anything obvious or irrelevant, and then let the algorithm find anything else interesting for you.
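For readers following along, a rough sketch of the blacklist behavior described above: components on the exclusion list are simply dropped before the comparison runs. The metric names and the `blacklist` variable are hypothetical; in the product this is configured through the "Show Advanced Settings" UI, not code.

```python
# Hypothetical sketch of the blacklist idea: excluded components are removed
# before the comparison runs, and everything else is left for the algorithm.

all_metrics = ["Visits", "Orders", "Revenue", "Internal Test Metric", "ARPU"]
blacklist = {"Internal Test Metric"}  # components you do NOT want analyzed

metrics_to_compare = [m for m in all_metrics if m not in blacklist]
print(metrics_to_compare)  # ['Visits', 'Orders', 'Revenue', 'ARPU']
```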


Level 2

10/30/18

Hey Brandon,

Thanks for that. I had no idea about the blacklist functionality. That might work for groups that have far fewer metrics, dimensions, and segments than our organization; the sheer number of items a user would have to add to get the desired effect would be a lot of work for us. Thanks for the idea though.

Cheers,

Brian Mallis

Slack: @brian.mallis

Email: brian.mallis@sony.com

Phone: (949) 616 5521


Employee

10/30/18

Brian,

Thanks for your thoughts. I'd love to hear more about your use case. Would your ideal be a whitelist instead? We actually used to have a whitelist, but we changed it to a blacklist, because it seems that most of our customers want the algorithm to find metrics/dimensions/segments that they wouldn't think of (ones that differ across the two segments being compared), and only exclude those components that really don't make sense.

Even though the blacklist might not be exactly what you're looking for, we have a "set as default" feature that might make it work for your use case. Even if you need to exclude hundreds of dimensions/metrics/segments, you can do it once, click "set as default", and it will save that blacklist for all future segment comparison runs (for your user).
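A rough sketch of the "set as default" idea: the exclusion list is saved once and reloaded for every later run. The file name and JSON structure here are made up for illustration; they are not how the product actually stores defaults.

```python
# Hypothetical sketch of "set as default": persist the exclusion list once
# and reuse it for every future comparison run by this user.
import json
from pathlib import Path

DEFAULTS_FILE = Path("segment_comparison_defaults.json")  # assumed name

def save_default_blacklist(blacklist: list[str]) -> None:
    DEFAULTS_FILE.write_text(json.dumps({"excluded_components": blacklist}))

def load_default_blacklist() -> list[str]:
    if not DEFAULTS_FILE.exists():
        return []
    return json.loads(DEFAULTS_FILE.read_text())["excluded_components"]

save_default_blacklist(["Internal Test Metric", "Legacy Page Views"])
print(load_default_blacklist())  # reused on every subsequent run
```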


Level 2

10/31/18

Hey Brandon,

I understand the blacklist use case as a “discovery” tool. Our whitelist case breaks down into two paths. Path A is about wanting some standard metrics to show whenever we run a comparison. We always want to compare our segments on ARPU or something similar. These are key metrics that the stakeholders who sponsored the analysis always want to see. The two groups might not be that different on these metrics, but it is critical that we show this information, as it will come up in discussions with higher-ups and in later meetings.

Path B is when there is a belief that the two segments differ on a specific measure. Now you could argue that “the metric did not show up in the difference section because it is not that different.” This is not lost on me, but we all know there will be doubt; someone could argue that the metric in question was simply the next one down the list, and because the list was only 5 or 6 items long it never made the cut. If we could instead just show the metric in question with a difference score of 0, or close to it, that would end the discussion.

If I were to put everything on the blacklist, it would force all of our users to know exactly which metrics they were looking for before the analysis was run, which would negate the “discovery” aspect of your tool. Granted, we have a lot of metrics and need to do some cleanup on our side, but I think a whitelist approach (run the analysis on a defined set of metrics) alongside a “discovery” list showing the top 5/10/15 (perhaps with a selector so that users who want to see many metrics can, while others see only the top 5) might solve both problems.

Basically, this would have the comparison produce an additional list of difference scores that should be fairly short, or at least specifically defined. Since, as I understand it, the difference score already runs through all of the metrics when it does the calculation, this would just be exposing some of those numbers to the user in a more helpful fashion. Please let me know if this makes sense.

Cheers,

Brian Mallis

Slack: @brian.mallis

Email: brian.mallis@sony.com

Phone: (949) 616 5521
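To summarize Brian's proposal in code form: always report a pinned whitelist of metrics regardless of their scores, plus a separate top-N “discovery” list ranked by difference score. The metric names, the scores, and the `top_n` selector below are hypothetical sample data, not product behavior.

```python
# Hypothetical sketch of the whitelist + top-N discovery proposal.
# Scores per metric are made-up sample data for illustration.

scores = {
    "ARPU": 0.02, "Visits": 0.71, "Orders": 0.55,
    "Revenue": 0.48, "Bounce Rate": 0.33, "Page Views": 0.12,
}
whitelist = ["ARPU"]   # always shown, even if the score is ~0
top_n = 3              # user-selectable size of the discovery list

pinned = {m: scores[m] for m in whitelist}
discovery = dict(sorted(
    ((m, s) for m, s in scores.items() if m not in whitelist),
    key=lambda item: item[1], reverse=True)[:top_n])

print("Pinned:", pinned)         # {'ARPU': 0.02}
print("Discovered:", discovery)  # {'Visits': 0.71, 'Orders': 0.55, 'Revenue': 0.48}
```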