
Solved

Help required with efficiently calculating statistical significance in Adobe Workspace


Level 2

Our team runs A/B tests rigorously. I am on the reporting side, and for the initial report of every A/B test we have an Adobe Workspace dashboard presenting the results with KPIs such as impressions, clicks, CTR, lift, and a few downstream metrics for each version. Typically we have 2-4 versions per test: one control and up to three variants. I want to add statistical significance to each dashboard, and I currently calculate it with the logic below.

 

Imagine a test with only two recipes: one control and one variant.

CTR1 = clicks1 / impressions1

CTR2 = clicks2 / impressions2

Pooled CTR = (clicks1 + clicks2) / (impressions1 + impressions2)

Standard Error (SE) = sqrt(Pooled CTR * (1 - Pooled CTR) * ((1/impressions1) + (1/impressions2)))

Z score = |CTR1 - CTR2| / SE

Statistical significance = normal CDF of the Z score

 

I create each calculated metric separately and finally compute the statistical significance. But this is not scalable, because I have to create 7 calculated metrics per comparison. So if an A/B test has one control and three variants, I need significance calculated three times against the control, which makes it 7 * 3 = 21 calculated metrics.
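Outside of Workspace, the per-comparison logic above collapses into one small function plus a loop over the variants. A minimal Python sketch, assuming clicks and impressions per recipe have been exported (the recipe names and numbers here are made up):

```python
from math import erf, sqrt

def significance(clicks_a, imps_a, clicks_b, imps_b):
    """Two-proportion z-test, mirroring the pooled-CTR formulas
    in the post; returns the standard normal CDF of |z|."""
    ctr_a = clicks_a / imps_a
    ctr_b = clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    z = abs(ctr_a - ctr_b) / se
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF

# Hypothetical export: (clicks, impressions) per recipe
control = (1_000, 50_000)
variants = {
    "Variant A": (1_150, 50_000),
    "Variant B": (1_020, 50_000),
    "Variant C": (1_300, 50_000),
}

for name, (clicks, imps) in variants.items():
    conf = significance(*control, clicks, imps)
    print(f"{name}: significance = {conf:.4f}")
```

With this approach each new test only needs its raw clicks and impressions per recipe (e.g. from a Report Builder or API export), rather than 21 hand-built calculated metrics.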

 

Is there an easier way to do this? Any efficient way of achieving this would be a great help. Thanks!


1 Accepted Solution


Correct answer by:
Community Advisor and Adobe Champion

Unless you reuse the same naming for different test variables, I do not think there is a scalable way to do these tests in Workspace. If I were you, I would consider setting up an Excel Report Builder report that flows into a table and automating the calculations in Excel. This moves reporting outside of Workspace, which isn't great, but you can add a link in a Workspace text box to point to the external report.


2 Replies



Community Advisor and Adobe Champion

Can you not make your clicks and impressions metrics shared across all tests, and then just create a segment for each test that can be applied to the panel?

 

I'm not sure which A/B tool you are using, but when I implemented ours, I actually used s.products and merchandising eVars to capture our data...

 

So I have some standard metrics that are used for each variation's impression, and a "standard" goal metric... since I am stitching everything in my Products List, I can create breakdowns using those eVars... 

 

It's still some work setting things up, but maybe something similar could work for you?