I am loving Anomaly Detection and Contribution Analysis. As a feature, this is such a helpful thing to have, and I know we'll engage with it often. However, I've noticed a few hiccups and usability issues that, if addressed, would improve the tool:
Overall, though, I am very happy! Please continue to work on tools like this that bring faster insight out of our data!
This is great feedback, @danielle_wiley. Thanks for sharing. I'm going to keep an eye on it for votes, but I'm also going to show this to our engineering team right away. Maybe we can chip away at some of these.
Also, @danielle_wiley , you WILL be able to use calculated metrics in Anomaly Detection later this spring. So I'm going to mark this as "Under Review" for now!
Woohoo, looking forward to it. Thanks again to the entire team that worked on this. My favorite thing from the Summit (besides the cookies).
A few more, since I've already started:
X-axis labeling of the timeline is inconsistent, especially over a long training period (e.g. "Fri 20, Feb 4, Mon 9"), which makes it hard to tell which month an anomaly occurred in. This might be better fixed with major markers/gridlines at the start of each month, or by repeating the full date in the hover caption (as already happens in the resulting metric timeline, which I then have to compare against what I just selected above).
When looking at a 60-day window, it's very difficult to select anomalies from the top chart. At a 90-day window, they no longer show up at all and the chart is blank, even though anomalies exist per the metric chart below. I understand the solution may have been to stop displaying them in that UI, but in that case the chart shouldn't be there at all.
It took me a while to fully "get" what was going on with the metric list below because of the arrangement. I see now that clicking a relevant anomaly rearranges the order of the metrics, and it doesn't necessarily select the anomaly you just clicked, only the metric, so you have to choose again. A few people on my team were thrown by it too, as well as by the fact that each graph belongs to the metric listed above it, not the ones below it (they're all visually "grouped" together in the UI, which makes for an odd gestalt). We're also not entirely sure what happens when there are two distinct metric anomalies on the same day.
Calculated metrics can now be used in Anomaly Detection.
@danielle_wiley: With the introduction of Anomaly Detection into Analysis Workspace, would you consider this idea "implemented"?
@benjamingaines the new feature is great. I had ended up moving this to report builder to get the granularity I needed and the use of calculated metrics, so the workspace solution saves a LOT of time in managing that.
I couldn't find updated documentation, but does the AW solution use a 30-day rolling training period and 95% confidence?
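For anyone wondering what a setup like that would actually do, here's a rough sketch. To be clear, this is just my guess at the general approach, not Adobe's actual implementation: I'm assuming a simple normal-distribution model where each day is compared against the mean and standard deviation of the previous 30 days, with a z-score of 1.96 standing in for the 95% confidence band.

```python
import statistics

def detect_anomalies(series, training_days=30, z=1.96):
    """Flag points outside mean +/- z*stddev of the preceding training window.

    z = 1.96 corresponds to ~95% confidence under a normality assumption.
    This is an illustrative sketch, not the actual AW algorithm.
    """
    anomalies = []
    for i in range(training_days, len(series)):
        window = series[i - training_days:i]          # rolling 30-day window
        mean = statistics.mean(window)
        stdev = statistics.stdev(window)
        lower, upper = mean - z * stdev, mean + z * stdev
        if not (lower <= series[i] <= upper):
            anomalies.append((i, series[i], lower, upper))
    return anomalies

# Hypothetical data: stable daily traffic with one obvious spike on day 45.
daily_visits = [100.0 + (i % 5) for i in range(60)]
daily_visits[45] = 500.0
print([idx for idx, *_ in detect_anomalies(daily_visits)])
```

Only the spike gets flagged; note that a single outlier inflates the window's standard deviation for the next 30 days, which is one reason real implementations tend to use something more robust than a plain rolling mean.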