

Improve Anomaly Detection UI


Level 5

3/20/15

I am loving Anomaly Detection and Contribution Analysis. As a feature this is such a helpful thing to have, and I know we'll engage with it often. However, I've noticed a few hiccups / usability issues; fixing them would improve the tool:

  1. Sometimes I can't click an anomaly in the timeline and have it show below, and it's not entirely clear why
  2. When I open the Analysis Queue (which is a little hard to see, honestly), the middle anomaly detection report is only partially responsive to that menu: although the graphs change in scale, the date-range selector gets partially covered. I think that could be simplified to a single calendar icon so it all fits on-screen (1280px width here)
  3. When there are no anomalies found, I would really like to see that metric/those metrics trended so I can validate for myself / check if I should be more sensitive with my training period. I realize that doesn't fit the current user flow (see anomaly, click, then see the metric trended with anomalies). However, when looking at a metric that gets counted millions of times a month and certainly has inorganic shifts, it's hard to believe an error message with no data. Then I have to pop over into a metric report to observe highs and lows, which ruins my flow.
  4. In Contribution Analysis, I really wish I could rename completed reports, as I have to hover and peck to find ones that have already finished (and if I change my dimension criteria but look at an anomaly on the same metric and same day, it's impossible to tell the reports apart).
  5. Sometimes I hover over an anomaly in the timeline and its caption shifts to the left or right, but the caption always looks like it's pointing to the right
  6. I would REALLY love to use calculated metrics in here! This is a great start, but calculated metrics are where you really see shifts in behavior. 


Overall, though, I am very happy! Please continue to work on tools like this, that bring faster insight out of our data!

9 Comments


Employee

3/20/15

This is great feedback, @danielle_wiley . Thanks for sharing. I'm going to keep an eye on it for votes, but I'm also going to show this to our engineering team right away. Maybe we can chip away at some of these. 


Employee

3/20/15

Also, @danielle_wiley , you WILL be able to use calculated metrics in Anomaly Detection later this spring. So I'm going to mark this as "Under Review" for now! 


Level 5

3/20/15

Woohoo, looking forward to it. Thanks again to the entire team that worked on this. My favorite thing from the Summit (besides the cookies).


Level 5

3/21/15

A few more, since I've already started:


X-axis labeling of the timeline is inconsistent, especially over a long training period (e.g., "Fri 20, Feb 4, Mon 9"), which makes it hard to tell which month an anomaly happened in. This might be better fixed with major markers/gridlines at the start of each month, or by repeating the full date in the caption when hovering over a point (as already happens in the resulting metric timeline, which I have to compare against what I just selected above).


When looking at a 60-day window, it's very difficult to select anomalies from the top chart. When looking at a 90-day window, they don't show up at all and the chart is blank, even though anomalies exist per the metric chart below. I understand the solution may have been to stop displaying them in that UI, but then the chart shouldn't be there.


It took me a while to totally "get" what was going on with the metric list below because of the arrangement. I see now that clicking a relevant anomaly rearranges the metrics' order, and it doesn't necessarily select the anomaly you just clicked, just the metric, so you must choose again. A few on my team were thrown by it too, as well as by the fact that each graph belongs to the metric listed above it, not the ones below it (they're all... "grouped" together in the UI... odd gestalt). We're also not entirely sure what happens when there are two distinct metric anomalies on the same day.


Level 5

10/25/16

@benjamingaines the new feature is great. I had ended up moving this to Report Builder to get the granularity I needed and the use of calculated metrics, so the Workspace solution saves a LOT of time in managing that.


I couldn't find updated documentation, but does the AW solution use a 30-day rolling training period and 95% confidence?
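For anyone else wondering what a 30-day rolling training period with 95% confidence would mean in practice, here's a minimal sketch of one common interpretation (my own assumption for illustration, not Adobe's documented model): flag a day as anomalous if it falls more than 1.96 standard deviations (the two-sided 95% band for a normal distribution) from the mean of the trailing 30 days.

```python
import statistics

def find_anomalies(daily_counts, training_days=30, z=1.96):
    """Flag days outside a 95% band (z = 1.96) around the trailing mean.

    Hypothetical illustration only -- the actual Anomaly Detection
    model isn't documented in this thread.
    """
    anomalies = []
    for i in range(training_days, len(daily_counts)):
        window = daily_counts[i - training_days:i]   # trailing training period
        mean = statistics.fmean(window)
        stdev = statistics.stdev(window)
        # Skip flat windows (stdev == 0) to avoid flagging every tiny wiggle.
        if stdev and abs(daily_counts[i] - mean) > z * stdev:
            anomalies.append(i)
    return anomalies

# Mildly noisy baseline for 35 days, then an obvious spike on day 35:
series = [100.0, 102.0, 98.0, 101.0, 99.0] * 7 + [500.0, 100.0]
print(find_anomalies(series))  # → [35]
```

With this scheme, a longer training period smooths the baseline, and raising `z` (say, to 2.58 for 99%) makes detection less sensitive, which matches the "be more sensitive with my training period" trade-off discussed above.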