We are considering using Spotify Confidence to report on all the experiments running on our experimentation platform. So I ran a sample of our data (see image below) through the ZTest class to see whether it could meet our need of running it simultaneously for various experiments and conversion events. My findings were as follows:
For a Single Experiment (Variation_Type, Conversion_Event_Name)
For a single experiment with multiple metrics, the methods summary(), difference(), and multiple_difference() all worked correctly.
For Multiple Experiments and Conversion_Events, using concatenation (Variation_Type, "Experiment_Key~Conversion_Event_Name")
Results were similar to the previous case, and it was satisfying to see that it works for all experiments and events at once if we concatenate the fields as "Experiment_Key~Conversion_Event_Name".
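The concatenation workaround described above can be sketched in pandas as follows. The column names and values are hypothetical, mirroring the table mentioned in the post; the resulting combined column would then be passed to the ZTest class as a single categorical group column.

```python
import pandas as pd

# Hypothetical aggregated experiment data, with column names
# mirroring the table shown in the issue.
df = pd.DataFrame({
    "VARIATION_TYPE": ["control", "treatment", "control", "treatment"],
    "EXPERIMENT_KEY": ["exp_a", "exp_a", "exp_b", "exp_b"],
    "CONVERSION_EVENT_NAME": ["purchase", "purchase", "signup", "signup"],
    "CONVERSIONS": [100, 120, 50, 60],
    "TOTAL": [1000, 1000, 500, 500],
})

# Collapse experiment key and event name into one field so that each
# (variation, key) pair is unique -- this avoids the non-unique
# multi-index problem when grouping by two columns separately.
df["EXPERIMENT_EVENT"] = (
    df["EXPERIMENT_KEY"] + "~" + df["CONVERSION_EVENT_NAME"]
)
print(df["EXPERIMENT_EVENT"].tolist())
```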
For all experiments, using the above table as-is (Experiment_Key, Variation_Type, Country, Conversion_Event_Name)
The summary() method works even if I move the conversion event from categorical_group_columns to metric_column.
The methods difference() and multiple_difference(), however, return errors regardless of the combinations I try in both the class constructor and the method call:
ztest.multiple_difference(level='control', groupby=['EXPERIMENT_KEY','CONVERSION_EVENT_NAME'], level_as_reference=True)
ValueError: cannot handle a non-unique multi-index! (for both trials)
I've been searching through the repository notebooks, but I couldn't find anything that explains or reproduces this error message.
So after this test, I wondered:
Is there any configuration between the class and the method that meets our needs?
What is the use case for the metric_column argument?
At what level is correction_method='bonferroni' applied?
Thanks, and looking forward to leveraging this package.
> Is there any configuration between the class and the method that meets our needs?

If you're using the dataframe above, you would need to add "Country" to categorical_group_columns and to the groupby argument of multiple_difference. If you don't want to split by country, you would need to sum up your df first; something like df.groupby(['VARIATION_TYPE','EXPERIMENT_KEY','CONVERSION_EVENT_NAME']).sum().reset_index() might do.
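The aggregation suggested above can be sketched as follows, with hypothetical per-country rows shaped like the table in the issue:

```python
import pandas as pd

# Hypothetical per-country rows for one experiment and event.
df = pd.DataFrame({
    "VARIATION_TYPE": ["control", "control", "treatment", "treatment"],
    "EXPERIMENT_KEY": ["exp_a"] * 4,
    "CONVERSION_EVENT_NAME": ["purchase"] * 4,
    "COUNTRY": ["SE", "US", "SE", "US"],
    "CONVERSIONS": [40, 60, 55, 65],
    "TOTAL": [400, 600, 400, 600],
})

# Sum the numeric columns across countries so that each
# (variation, experiment, event) combination appears exactly once.
agg = (
    df.groupby(["VARIATION_TYPE", "EXPERIMENT_KEY", "CONVERSION_EVENT_NAME"])
    .sum(numeric_only=True)
    .reset_index()
)
print(agg)
```

After this step each grouping key maps to a single row, which is what the difference methods need to build a unique multi-index.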
> What is the use case for the metric_column argument?

We use it for some multiple-correction variants, e.g. "spot-1-bonferroni", where we only Bonferroni-correct the "success metrics", not the "guardrail metrics". For plain Bonferroni correction it doesn't matter whether you put 'CONVERSION_EVENT_NAME' in metric_column or in categorical_group_columns.
> At what level is correction_method='bonferroni' applied?

It's applied to the total number of comparisons, so if you have 3 experiments, 5 metrics, and control + 2 treatment groups in each, you would get 3 × 5 × 2 = 30 comparisons in total.
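The comparison count above is just the product of the three dimensions; a minimal sketch of the resulting Bonferroni-corrected significance level (assuming a conventional alpha of 0.05, which the post does not specify):

```python
# Bonferroni correction is applied across the total number of comparisons:
# experiments x metrics x non-reference groups per experiment.
n_experiments = 3
n_metrics = 5
n_treatments = 2  # control compared against each of 2 treatment groups

n_comparisons = n_experiments * n_metrics * n_treatments  # 30
alpha = 0.05  # assumed overall significance level
corrected_alpha = alpha / n_comparisons

print(n_comparisons, corrected_alpha)
```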