---
title: "Introduction to Bayesian A/B testing"
author: "Johann de Boer"
date: "2022-10-22"
format:
  revealjs:
    toc: true
    toc-depth: 1
    toc-title: "Agenda"
    footer: "Johann de Boer - Sydney Measurecamp 2022"
    scrollable: false
    theme: dark
    df-print: kable
    slide-number: true
    logo: "Logo_MeasureCamp_SYD_2015-3.png"
  html:
    toc: true
    theme: darkly
    df-print: kable
    page-layout: full
  pdf:
    toc: true
    df-print: kable
    number-sections: true
    papersize: "A4"
editor: visual
---
```{r}
#| include: false
library(tidyverse)
library(glue)
library(ggdark)
library(gganimate)
theme_set(dark_theme_gray())
render_animations <- TRUE # set to FALSE to skip rendering gganimate animations
```
# Setting the scene
## Randomised Control Trials (RCTs)
A simplistic example:
- Users are assigned at **random** to two groups, A and B, with equal probability.
- Let A be our **control** group and B be our **treatment** group.
We want to know what effect our treatment has.
::: notes
Early on during an experiment, differences between these groups could simply be due to the random allocation of participants. As the groups get larger, these random differences will diminish, bringing us closer to the difference caused by the treatment.
Applying Bayesian inference effectively gives the experiment a guided head start by including more data (probabilistic data, not real data) in the form of **priors**.
:::
## Hypothetical scenario
::: columns
::: {.column width="70%"}
- A button on a landing page that takes users to a sign up form.
- At present, the button is labelled "Register your interest".
- Test whether changing it to "Get started" will result in an increased click-through rate (CTR).
- "Get started" was suggested by an experienced and skilled UX designer.
:::
::: {.column width="30%"}
![](A_B%20Button%20CTA%20test.png){fig-alt="Two button CTA being compared: \"Register your interest\" vs \"Get started\"" fig-align="center"}
:::
:::
# Priors and probability distributions
The key to speeding up your experiment
## Prior knowledge and beliefs
Before running an experiment, we form opinions and gather evidence such as:
- The baseline click-through rate of the button (with its current label) and knowledge of any outside variables that affect click-through rate, e.g. seasonality
- Effects we have seen from similar previous experiments
- Qualitative research, such as usability tests, focus groups, and surveys that are related to the test
- Opinions (including critical) from stakeholders and experts
## Priors are probability distributions {.smaller}
::: columns
::: {.column width="40%"}
Express prior beliefs about the click-through rate of the control group using a **probability distribution**.
::: {.fragment fragment-index="1"}
Here's an example of an extremely uninformative prior -- a uniform prior that says any range of click-through rates is as probable as any other equally wide range. In other words, it is naive.
:::
:::
::: {.column width="60%"}
::: {.fragment fragment-index="1"}
```{r}
plot_beta_pdf <- function(shape1, shape2, group) {
  p <- seq(0, 100, by = 0.1) / 100 # 0 to 100 percent
  df <- pmap_dfr(
    list(shape1, shape2, group),
    function(shape1, shape2, group) {
      tibble(
        p = p,
        d = dbeta(p, shape1, shape2),
        group = group
      )
    }
  )
  labels_df <- tibble(
    group = group,
    p = shape1 / (shape1 + shape2),
    d = dbeta(p, shape1, shape2) / 2,
    label = glue("Beta({shape1}, {shape2})")
  )
  ggplot(df) +
    aes(x = p, y = d, fill = group) +
    geom_area(alpha = 0.5, position = position_identity()) +
    geom_line(aes(colour = group), alpha = 0.5) +
    geom_text(aes(label = label), data = labels_df) +
    scale_x_continuous(labels = scales::percent) +
    labs(
      subtitle = "Probability density function (PDF)",
      x = "Click-through rate",
      y = "Probability density",
      colour = "Group",
      fill = "Group"
    )
}
experiment_groups <- c("control", "treatment")
plot_beta_pdf(1, 1, group = factor("control", levels = experiment_groups)) +
  labs(
    title = "Prior for click-through rate of control group"
  )
```
:::
:::
:::
::: callout-tip
The **Beta distribution** is a continuous probability distribution with two **shape parameters**, $\mathrm{Beta}(shape1, shape2)$, whose **probability density function (PDF)** is what we plot here. It's used to describe proportions, such as click-through rate.
:::
::: notes
The total area under the curve will always add to 100%. That is, the curve represents all possibilities regardless of what shape parameters are used.
:::
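As a quick check (an illustrative aside, not from the original slides), the Beta density integrates to 1 over the whole range of possible click-through rates, whatever shape parameters you choose:

```{r}
# The area under any Beta PDF over [0, 1] is 1, i.e. 100% of the probability
integrate(dbeta, lower = 0, upper = 1, shape1 = 4, shape2 = 7)$value
```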
## Something a little more informative {.smaller}
```{r}
plot_beta_pdf(4, 7, group = factor("control", levels = experiment_groups)) +
  labs(
    title = "Prior for click-through rate of control group"
  )
```
Notice that as the shape parameters of the Beta distribution increase, the curve becomes narrower.
## Something even more informative {.smaller}
```{r fig.height=3.5, fig.width=11}
plot_beta_pdf(12, 28, group = factor("control", levels = experiment_groups)) +
  labs(
    title = "Prior for click-through rate of control group"
  )
```
::: columns
::: {.column width="75%"}
The shape parameters (`shape1` and `shape2`) of the Beta distribution can be considered counts of **successes** and **failures**, respectively. The mean probability of success (i.e. average click-through rate) can be calculated by this formula:
:::
::: {.column width="25%"}
$$
\frac{shape1}{shape1 + shape2}
$$
:::
:::
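For example, a Beta(12, 28) prior implies an average click-through rate of 30% (a quick check, not part of the original slides):

```{r}
# Mean of a Beta(12, 28) prior: shape1 / (shape1 + shape2)
12 / (12 + 28)
```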
::: notes
The shape parameters are actually slightly more than the count of successes and failures, i.e. $successes = \alpha - 1$ and $failures = \beta - 1$, or $successes = \alpha - 0.5$ and $failures = \beta - 0.5$ if using Jeffreys prior.
:::
## Let's say we've settled on this: {.smaller}
```{r}
plot_beta_pdf(
  shape1 = 120, shape2 = 288,
  group = factor("control", levels = experiment_groups)
) +
  labs(
    title = "Prior for click-through rate of control group"
  )
```
The more confident we are about our beliefs, the narrower the curve.
## What about the treatment group?
- We expect that the click-through rates of the treatment and control groups will be correlated.
- We're unsure about how correlated they will be, but we're not expecting a dramatic difference.
- We're more confident than not that the treatment will be an improvement, but we're open to other possibilities.
- We don't want to bias the experiment results in favour of treatment or control, or towards a conclusion of there being a difference or no difference.
## We've settled on these priors:
```{r}
priors <- tibble(
  group = factor(c("control", "treatment"), levels = experiment_groups),
  shape1 = c(120, 27),
  shape2 = c(288, 60)
)
do.call(plot_beta_pdf, priors) +
  labs(
    title = "Prior click-through rates of control and treatment groups"
  )
```
# Running a Bayesian A/B test
## Prior agreement
- Agreement must be reached on the priors before collecting and analysing data from the experiment.
- Once the priors are agreed to and locked in, we can start the experiment.
Here's a summary of the priors we have chosen:
```{r}
priors %>%
  mutate(
    mean = (shape1 / (shape1 + shape2)) %>%
      scales::percent(),
    sd = sqrt(
      shape1 * shape2 /
        ((shape1 + shape2) ^ 2 * (shape1 + shape2 + 1))
    ) %>%
      scales::percent(suffix = " p.p.")
  )
```
::: notes
It's important not to change your original priors after seeing data collected from the experiment. Doing so is effectively double-dipping: your priors would be influenced by the very data you have collected.
:::
## Let's run an experiment
```{r}
true_ctr <- c(control = 32, treatment = 35) / 100
```
We'll generate some fake data to mimic a real experiment.
It'll be rigged though, as we'll already know the click-through rates for control and treatment, which are:
- Control: `r scales::percent(true_ctr["control"])`
- Treatment: `r scales::percent(true_ctr["treatment"])`
That's a relative uplift of `r scales::percent(true_ctr["treatment"] / true_ctr["control"] - 1, accuracy = 0.1)`.
If we're successful at applying Bayesian inference, then we should expect (though can't guarantee, due to randomness) that the results roughly match these true CTRs.
## The next day
```{r}
avg_daily_users <- 150
experiment_simulator <- function(rate_of_users, true_ctr) {
  experiment_groups <- names(true_ctr)
  n_users <- rpois(1, lambda = rate_of_users)
  group_assignment <- sample(
    seq_along(experiment_groups),
    size = n_users, replace = TRUE
  ) %>%
    factor(labels = experiment_groups)
  group_sizes <- table(group_assignment)
  clicks_by_group <- map2(group_sizes, true_ctr, rbernoulli)
  map_dfr(clicks_by_group, function(clicks) {
    list(
      click_data = list(clicks),
      clicked = sum(clicks),
      not_clicked = sum(!clicks)
    )
  }, .id = "group") %>%
    mutate(group = factor(group, levels = experiment_groups))
}
```
Let's pretend that on average `r avg_daily_users` users enter our experiment each day, and we've received the following data from day 1:
```{r}
experiment_data <- list(
  batch1 = experiment_simulator(avg_daily_users, true_ctr)
)
experiment_data[['batch1']] %>%
  mutate(
    CTR = scales::percent(clicked / (clicked + not_clicked)),
    total_users = clicked + not_clicked
  ) %>%
  select(group, total_users, clicked, not_clicked, CTR)
```
::: notes
Our experiment simulator draws the number of users arriving from a Poisson distribution, assigns each user to a group at random, and then decides which users clicked using Bernoulli trials (i.e. weighted coin flips).
:::
## Let's now incorporate our priors {.smaller}
**Posteriors** represent your updated beliefs once you've incorporated experiment data with your priors. Like priors, posteriors represent your beliefs about the metric of interest, which in our case is click-through rate.
::: columns
::: {.column width="50%"}
For each experiment group, we derive our posterior shape parameters through simple arithmetic addition:
::: {.fragment fragment-index="1"}
- Increment the first shape parameter by the count of users who **clicked**
:::
::: {.fragment fragment-index="2"}
- Increment the second shape parameter by the count of users who **didn't click**
:::
:::
::: {.column width="50%"}
```{r}
posterior_update <- function(priors, experiment_data) {
  experiment_data %>%
    left_join(priors, by = "group") %>%
    rename(prior_shape1 = shape1, prior_shape2 = shape2) %>%
    select(-click_data) %>%
    mutate(
      posterior_shape1 = prior_shape1 + clicked,
      posterior_shape2 = prior_shape2 + not_clicked
    )
}
posteriors <- posterior_update(priors, experiment_data[['batch1']])
posteriors_long <- posteriors %>%
  gather(key = "count", value = "value", -group, factor_key = TRUE) %>%
  spread(group, value) %>%
  mutate(count = count %>% fct_relevel(
    c(
      "prior_shape1", "clicked", "posterior_shape1",
      "prior_shape2", "not_clicked", "posterior_shape2"
    )
  )) %>%
  arrange(count)
```
::: {.fragment fragment-index="1"}
```{r}
posteriors_long %>%
filter(count %in% c("prior_shape1", "clicked", "posterior_shape1"))
```
:::
::: {.fragment fragment-index="2"}
```{r}
posteriors_long %>%
filter(count %in% c("prior_shape2", "not_clicked", "posterior_shape2"))
```
:::
:::
:::
::: notes
The process of incorporating data with priors is called Bayesian updating. The data generated follows a Bernoulli distribution (Binomial with 1 trial). The prior follows a Beta distribution, which is conjugate to the Binomial distribution.
:::
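In symbols, this is the standard Beta-Binomial conjugate update: if a group with prior $\mathrm{Beta}(\alpha, \beta)$ records $k$ clicks from $n$ users, its posterior is

$$
\mathrm{Beta}(\alpha + k,\ \beta + n - k)
$$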
## Posterior distribution of each group
We have now updated our beliefs. These posteriors can now be thought of as our new updated priors.
```{r}
priors_initial <- priors
priors <- posteriors %>%
  select(
    group = group,
    shape1 = posterior_shape1,
    shape2 = posterior_shape2
  )
do.call(plot_beta_pdf, priors) +
  labs(
    title = "Posterior click-through rates of control and treatment groups"
  )
```
## Another six days later...
We've collected more data, so let's again update our priors to form new posteriors for the click-through rates of each group.
```{r}
experiment_data[['batch2']] <- experiment_simulator(
  avg_daily_users * 6, true_ctr
)
posteriors <- posterior_update(priors, experiment_data[['batch2']])
```
```{r}
priors <- posteriors %>%
  select(
    group = group,
    shape1 = posterior_shape1,
    shape2 = posterior_shape2
  )
do.call(plot_beta_pdf, priors) +
  labs(
    title = "Posterior click-through rates of control and treatment groups"
  )
```
## Another three weeks later...
```{r}
experiment_data[['batch3']] <- experiment_simulator(
  avg_daily_users * 7 * 3, true_ctr
)
posteriors <- posteriors %>%
  select(
    group = group,
    shape1 = posterior_shape1,
    shape2 = posterior_shape2
  ) %>%
  posterior_update(experiment_data[['batch3']])
experiment_data_df <- experiment_data %>% map_dfr(~., .id = "batch")
```
We've now observed a total sample of `r scales::number(with(experiment_data_df, sum(clicked, not_clicked)), accuracy = 1, big.mark = ",")` users and a decision is made to end the experiment.
```{r}
posteriors <- posteriors %>%
  select(
    group = group,
    shape1 = posterior_shape1,
    shape2 = posterior_shape2
  )
do.call(plot_beta_pdf, posteriors) +
  labs(
    title = "Posterior click-through rates of control and treatment groups"
  )
```
# Posterior analysis
Statistical inferences using the posterior distributions
## Monte Carlo simulation
::: columns
::: {.column width="60%"}
Let's draw a very large number of random samples from our posterior distributions to make inferences about the experiment.
This is called Monte Carlo simulation -- named after the Monte Carlo Casino in Monaco.
:::
::: {.column width="40%"}
[![](Real_Monte_Carlo_Casino.jpg){fig-alt="Monte Carlo Casino, Monaco" fig-align="center"}](https://creativecommons.org/licenses/by/2.0)
:::
:::
::: notes
The more samples drawn, the greater the resolution of the inferences you make, but this comes at the cost of computational time and memory. Nowadays, computer processing speed and memory are more than adequate for what we need. Analytical solutions, providing the greatest level of precision, are also sometimes possible.
:::
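As an aside (a sketch, not from the original slides), one such analytical check is $P(\mathrm{CTR}_{treatment} > \mathrm{CTR}_{control}) = \int_0^1 f_T(x)\, F_C(x)\, dx$, where $f_T$ is the treatment posterior density and $F_C$ the control posterior CDF, approximated here on a fine grid:

```{r}
# Numerical check of P(treatment CTR > control CTR) without Monte Carlo,
# using the posterior shape parameters derived above
a_c <- posteriors$shape1[posteriors$group == "control"]
b_c <- posteriors$shape2[posteriors$group == "control"]
a_t <- posteriors$shape1[posteriors$group == "treatment"]
b_t <- posteriors$shape2[posteriors$group == "treatment"]
x <- seq(0, 1, by = 1e-4)
sum(dbeta(x, a_t, b_t) * pbeta(x, a_c, b_c)) * 1e-4
```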
```{r}
simulation_size <- 100
```
## `r scales::number(simulation_size)` simulations {.smaller}
Let's start slowly by drawing `r scales::number(simulation_size)` random samples from our distributions and plotting them as histograms...
::: columns
::: {.column width="55%"}
```{r}
draw_posterior_samples <- function(posteriors, simulation_size) {
  posteriors %>%
    rowwise() %>%
    mutate(
      sim_id = list(1:simulation_size),
      ctr = list(
        rbeta(
          n = simulation_size,
          shape1 = shape1,
          shape2 = shape2
        )
      )
    ) %>%
    select(group, sim_id, ctr) %>%
    unnest(cols = c(sim_id, ctr))
}
posterior_samples <- draw_posterior_samples(posteriors, simulation_size)
```
::: {.fragment .fade-up}
Here are some of our Monte Carlo samples:
```{r}
#| warning: false
#| message: false
posterior_samples %>%
  spread(group, ctr) %>%
  mutate(
    uplift = treatment / control - 1,
    beats_control = treatment > control
  ) %>%
  select(-sim_id) %>%
  head(n = 7) %>%
  knitr::kable(digits = 3)
```
:::
:::
::: {.column width="45%"}
::: fragment
```{r fig.asp=1, fig.width=5}
plot_posterior_samples <- function(
    posterior_samples, bins = 100, animate = FALSE
) {
  posterior_samples <- posterior_samples %>%
    mutate(
      frame = cut(log(sim_id), breaks = 50, labels = FALSE)
    )
  p <- ggplot(posterior_samples) +
    facet_wrap(~group, ncol = 1) +
    aes(ctr, fill = group) +
    geom_histogram(bins = bins) +
    scale_x_continuous(labels = scales::percent) +
    labs(
      title = "Posterior distributions",
      subtitle = "Click-through rate",
      x = "Click-through rate",
      fill = "Group",
      y = "Count of simulations"
    ) +
    guides(fill = guide_none())
  if (animate) {
    p <- p +
      transition_manual(frame, cumulative = TRUE) +
      view_follow()
    p <- p %>%
      animate(duration = 10, renderer = gifski_renderer(loop = FALSE))
  }
  p
}
posterior_samples %>%
  plot_posterior_samples(animate = FALSE, bins = 50)
```
:::
:::
:::
## Let's now beef it up a bit...
```{r}
simulation_size <- 1000000
```
We'll now draw `r scales::number(simulation_size, accuracy = 1, big.mark = ",")` samples...
::: fragment
```{r}
posterior_samples <- draw_posterior_samples(posteriors, simulation_size)
posterior_samples %>%
  plot_posterior_samples(animate = render_animations)
```
:::
::: notes
Notice how these histograms follow the same distribution as our posteriors. That is because these samples have been drawn at random according to those posterior distributions.
:::
## We can now make some inferences
A summary of our `r scales::number(simulation_size, accuracy = 1, big.mark = ",")` posterior samples for click-through rate:
```{r}
posterior_comparison <- posterior_samples %>%
  spread(group, ctr) %>%
  mutate(
    uplift = treatment / control - 1,
    beats_control = treatment > control
  )
posterior_comparison %>%
  select(-sim_id) %>%
  summary()
```
- How do these compare to our theoretical CTRs of `r scales::percent(true_ctr["control"])` for control and `r scales::percent(true_ctr["treatment"])` for treatment, and uplift of `r scales::percent(true_ctr["treatment"] / true_ctr["control"] - 1, accuracy = 0.1)`?
```{r}
posterior_uplift <- with(
  posterior_comparison,
  mean(beats_control)
)
```
- What is the posterior probability that the CTR of the treatment is greater than that of control? Answer: `r scales::percent(posterior_uplift, 0.01)`
::: notes
Out of our `r scales::number(simulation_size)` simulations, we can see how often the treatment beat control. This tells us the probability that treatment is the winner.
If we filter our simulations to those where control won and calculate the median CTR uplift, and do the same for cases where treatment won, we can gauge the expected loss of choosing either variant as the winner. We should prefer the variant with the lowest expected loss, or continue running the experiment to improve our confidence.
:::
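As a rough sketch of that idea (not from the original slides; it uses the mean shortfall rather than the filtered medians described above), the expected loss of declaring each variant the winner can be computed directly from the Monte Carlo draws:

```{r}
# Expected loss (in absolute CTR) of shipping each variant, i.e. the average
# amount by which the other variant would have been better, using the
# posterior draws in `posterior_comparison`
posterior_comparison %>%
  summarise(
    loss_if_choose_treatment = mean(pmax(control - treatment, 0)),
    loss_if_choose_control = mean(pmax(treatment - control, 0))
  )
```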
## Posterior distribution of the CTR uplift
```{r}
p <- ggplot(
  posterior_comparison %>%
    mutate(
      frame = cut(log(sim_id), breaks = 10, labels = FALSE)
    )
) +
  aes(
    uplift,
    fill = factor(
      if_else(beats_control, "Yes", "No"),
      levels = c("No", "Yes")
    )
  ) +
  geom_histogram(bins = 1000) +
  scale_x_continuous(labels = scales::percent) +
  scale_fill_discrete(drop = FALSE) +
  labs(
    title = "Uplift to click-through rate",
    subtitle = "Treatment relative to control",
    x = "Uplift to click-through rate",
    y = "Count of simulations",
    fill = "Does the treatment\nbeat control?"
  )
# Animation of this plot is disabled by the `&& FALSE` clause; remove it to re-enable
if (render_animations && FALSE) {
  p <- p +
    transition_manual(frame, cumulative = TRUE) +
    view_follow()
  p <- p %>%
    animate(duration = 5, renderer = gifski_renderer(loop = FALSE))
}
p
```
# Summary and some final remarks
## Before starting an experiment
Gather prior knowledge and articulate beliefs:
- Establish a baseline - what do you know about the control group?
- What do you expect the effect of the treatment to be? How sure are you?
Express those beliefs and knowledge as distributions - these are your priors for your control and treatment groups.
::: notes
Ensure that the priors encapsulate the collective knowledge and beliefs of all interested parties so that there is agreement. This helps prevent the results from being challenged later.
:::
## Running the experiment
- Start the experiment, gather data, and update your priors to form posteriors about the metric of interest
- Draw inferences by running a large number of Monte Carlo simulations using the posterior distributions
- Know when to end the experiment -- try to plan for this ahead of running the experiment
::: notes
Null-hypothesis significance testing (NHST) is not what Bayesian is for:
- Bayesian tells you the probability of some effect being within some range, given the data. I.e. Given everything we know so far, what are the risks associated with the choices we have?
- NHST tells you the probability of data at least as extreme as what has been observed, given there is no real effect. I.e. How ridiculous would this outcome be if it were due to chance alone?
NHST is often referred to as the frequentist approach, where decisions are made using p-values and some arbitrary threshold $\alpha$ (i.e. false positive rate).
Unlike NHST, Bayesian A/B testing doesn't give you a yes/no answer -- it instead informs you about the probabilities and risks associated with the choices you have.
:::
# Thank you!
Further topics that might interest you:
- **Bayesian Generalised Linear Models** to better isolate the effect of the treatment from other predictors.
- **Survival Analysis**, such as **Kaplan Meier**, to analyse lagged conversion outcomes.
::: columns
::: {.column width="80%"}
::: {.callout-tip appearance="simple"}
These slides and simulations were produced in RStudio using Quarto. Download the source code and slides at: <https://github.com/jdeboer/measurecamp2022>
:::
:::
::: {.column width="20%"}
![](qrcode.png)
:::
:::
# When to stop a Bayesian A/B test?
## If using *uninformative* priors...
If your original priors are uninformative or too weak, then you face the same risks as with frequentist experiments.
Perform **power analysis** ahead of running the experiment to determine the required sample size before any inferences are made (see the sketch after the list below).
Before commencing the experiment, decide on:
- The **minimum detectable effect** size
- The accepted **false positive rate**
- The accepted **false negative rate**
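As an illustrative sketch (not from the original slides), base R's `power.prop.test()` can estimate the required sample size per group; here using the hypothetical 32% baseline CTR and a 3 percentage-point minimum detectable effect:

```{r}
# Required users per group to detect 32% -> 35% CTR with a 5% false positive
# rate and a 20% false negative rate (i.e. 80% power)
power.prop.test(p1 = 0.32, p2 = 0.35, sig.level = 0.05, power = 0.80)
```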
## If using *informative* priors...
If your priors are relatively informative and chosen carefully, then this can reduce the chances of false positives and negatives. But:
- Be careful not to bias the results of the experiment.
- Power analysis is still recommended in order to gauge the worst-case scenario for how long the experiment might run.
Bayesian inference, with informative priors, can make it **possible to end an experiment early**.
## If deciding to end early...
Ask yourself:
- Has the experiment run for at least a couple of cycles? (e.g. at least two full weeks)
- Have the results stabilised? Is there a clear winner?
- Could it be worth running longer to learn more?
- What are the **risks** of continuing or ending now? What if the results you see are just a fluke and are therefore misguiding you? What is the impact of making the wrong choice? What are the chances?
# Extras
## Bayes theorem {.smaller}
::: columns
::: {.column width="60%"}
$$
P(B \cap A) = P(A \cap B)
$$
$$
P(B) \times P(A|B) = P(A) \times P(B|A)
$$
$$
P(A|B) = \frac{P(B|A) P(A)}{P(B)}
$$
$$
f(\theta|data) = \frac{f(data|\theta) f(\theta)}{f(data)}
$$
$$
Posterior \propto \mathcal{L}(\theta|data) \times prior
$$
:::
::: {.column width="40%"}
![Thomas Bayes - 1701 -- 1761](Thomas-Bayes.jpg)
:::
:::
## Some useful formulas {.smaller}
Let $\alpha$ and $\beta$ represent the first and second shape parameters of the Beta distribution, respectively.
The mean of this distribution is: $\mu = \frac{\alpha}{\alpha + \beta}$
The standard deviation is: $\sigma = \sqrt{\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}}$
Through substitution and rearrangement, you can determine $\alpha$ and $\beta$ from $\mu$ and $\sigma$.
$$
v = \frac{\mu (1 - \mu)}{\sigma ^ 2} - 1
$$
$$
\alpha = \mu v
$$
$$
\beta = (1 - \mu) v
$$
This way, you can determine the shape parameters based on centrality and spread.
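As a small illustrative helper (the function name is ours, not from the original slides), these formulas translate directly into R:

```{r}
# Derive Beta shape parameters from a desired mean and standard deviation
beta_shapes_from_moments <- function(mu, sigma) {
  v <- mu * (1 - mu) / sigma^2 - 1
  list(shape1 = mu * v, shape2 = (1 - mu) * v)
}
# Approximately recovers the Beta(120, 288) prior used for the control group
beta_shapes_from_moments(mu = 0.294, sigma = 0.0225)
```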
# Game of chances
## To play this game:
- There is one host and at least 2 contestants.
- The host will need a uniform random number generator.
## Instructions {.smaller}
1. The host secretly picks a number, Y, between 0% and 100% and writes it down.
2. The host will then secretly generate a random number, X, again between 0% and 100%:
- If X is less than Y then the host will mark it as a 'success', otherwise as a 'failure', using a tally board that's visible to all contestants.
3. The objective of the game is for contestants to estimate Y by asking the host to perform step 2 as many times as they need (to a reasonable limit). The closer their guess is, the better. However, each contestant can only call out their guess once.
4. Once two contestants have called out what they believe Y is, then the game ends and the host reveals the true answer for Y.
- The first contestant to call out their guess wins the game if they are within 5 points of Y. If not, then the contestant whose guess is closest to Y wins.
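For reference, a single round of this game is easy to simulate in R (an illustrative sketch, not part of the game itself):

```{r}
# One play-through: the host's secret Y, 30 rounds of step 2, and a
# contestant's Beta-posterior estimate of Y from the tally of successes
set.seed(42)
Y <- runif(1)                               # host's secret number
draws <- runif(30) < Y                      # success whenever X < Y
shape <- c(1 + sum(draws), 1 + sum(!draws)) # uniform prior plus the tally
shape[1] / sum(shape)                       # contestant's estimate of Y
```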
## Lessons of the game
- As the number of successes and failures increases, you get closer to knowing what Y is.
- The compromise each contestant makes between speed and certainty will influence who wins.
::: callout-tip
Try a modification to the game where Y is a number related to a topic that the audience will have some prior knowledge about. This way they can incorporate their prior expectations when making a guess.
:::