<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="generator" content="pandoc">
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes">
<meta name="author" content="Dr Leena Murgai">
<meta name="dcterms.date" content="2023-03-26">
<meta name="google-site-verification" content="P2QDxs8DF7oWhzhn6gwF-hjhnxJHhfG71FOX0v56hf0" />
<meta property="og:url" content="https://mitigatingbias.ml">
<meta property="og:type" content="book">
<meta property="og:title" content="Mitigating Bias in Machine Learning">
<meta property="og:description" content="Mitigating Bias in Machine Learning discusses how practicing model developers might build fairer predictive systems, and avoid causing harm. Part I offers context (philosophical, legal, technical) and practical solutions. Part II discusses how we quantify different notions of fairness, where possible making connections with ideologies from other disciplines (discussed in part I). Part III analyses methods for mitigating bias, looking at the impact on the various metrics (discussed in part II).">
<meta property="og:book:author" content="Leena Murgai">
<meta property="og:image" content="https://raw.githubusercontent.com/leenamurgai/leenamurgai.github.io/main/profile/figures/SocialPreviewLandscape.png">
<meta property="og:image:type" content="image/png">
<meta property="og:image:width" content="1280">
<meta property="og:image:height" content="640">
<title>Mitigating Bias in Machine Learning</title>
<style type="text/css">code{white-space: pre;}</style>
<link rel="stylesheet" href="tex2html/tufte/tufte.css">
<link rel="stylesheet" href="tex2html/css/pandoc.css">
<link rel="stylesheet" href="tex2html/css/navbar.css">
<link rel="stylesheet" href="tex2html/css/tweak.css">
<script src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml-full.js" type="text/javascript"></script>
<!--[if lt IE 9]>
<script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
<![endif]-->
</head>
<body>
<article>
<header>
<h1 class="title">Mitigating Bias in Machine Learning</h1>
<p class="byline"><a href="https://leenamurgai.co.uk" target="_blank" rel="noopener noreferrer">Dr Leena Murgai</a></p>
<p class="byline"><a href="https://github.com/leenamurgai/mitigatingbias.ml" target="_blank" rel="noopener noreferrer">26 March 2023</a></p>
<p class="byline"><a href="https://raw.githubusercontent.com/leenamurgai/mitigatingbias.ml/main/profile/mbml_citation.bib" target="_blank" rel="noopener noreferrer">Cite this book</a></p>
</header>
<div class="TOC">
<nav id="TOC">
<div class="shortthickbar"></div>
<div class="shortthickbar"></div>
<div class="shortthickbar"></div>
<ul>
<li><a href="#part-i-introduction" id="toc-part-i-introduction">Part I Introduction</a></li>
<li><a href="#ch_Background" id="toc-ch_Background"><span class="toc-section-number">1</span> Context</a>
<ul>
<li><a href="#bias-in-machine-learning" id="toc-bias-in-machine-learning"><span class="toc-section-number">1.1</span> Bias in Machine Learning</a></li>
<li><a href="#sec_FairnessJustice" id="toc-sec_FairnessJustice"><span class="toc-section-number">1.2</span> A Philosophical Perspective</a></li>
<li><a href="#a-legal-perspective" id="toc-a-legal-perspective"><span class="toc-section-number">1.3</span> A Legal Perspective</a></li>
<li><a href="#sec_SimpsParadox" id="toc-sec_SimpsParadox"><span class="toc-section-number">1.4</span> A Technical Perspective</a></li>
<li><a href="#sec_harms" id="toc-sec_harms"><span class="toc-section-number">1.5</span> What’s the Harm?</a></li>
<li><a href="#summary" id="toc-summary">Summary</a></li>
</ul></li>
<li><a href="#ch_EthicalDev" id="toc-ch_EthicalDev"><span class="toc-section-number">2</span> Ethical development</a>
<ul>
<li><a href="#machine-learning-cycle" id="toc-machine-learning-cycle"><span class="toc-section-number">2.1</span> Machine Learning Cycle</a></li>
<li><a href="#sec_ResponseDev" id="toc-sec_ResponseDev"><span class="toc-section-number">2.2</span> Model Development and Deployment Life Cycle</a></li>
<li><a href="#sec_ProcessPolicy" id="toc-sec_ProcessPolicy"><span class="toc-section-number">2.3</span> Responsible Model Development and Deployment</a></li>
<li><a href="#common-causes-of-harm" id="toc-common-causes-of-harm"><span class="toc-section-number">2.4</span> Common Causes of Harm</a></li>
<li><a href="#linking-common-causes-of-harm-to-the-workflow" id="toc-linking-common-causes-of-harm-to-the-workflow"><span class="toc-section-number">2.5</span> Linking Common Causes of Harm to the Workflow</a></li>
<li><a href="#summary-1" id="toc-summary-1">Summary</a></li>
</ul></li>
<li><a href="#part-ii-measuring-bias" id="toc-part-ii-measuring-bias">Part II Measuring Bias</a></li>
<li><a href="#ch_GroupFairness" id="toc-ch_GroupFairness"><span class="toc-section-number">3</span> Group Fairness</a>
<ul>
<li><a href="#sec_BalOut" id="toc-sec_BalOut"><span class="toc-section-number">3.1</span> Comparing Outcomes</a></li>
<li><a href="#sec_BalErr" id="toc-sec_BalErr"><span class="toc-section-number">3.2</span> Comparing Errors</a></li>
<li><a href="#sec_Impossible" id="toc-sec_Impossible"><span class="toc-section-number">3.3</span> Incompatibility Between Fairness Criteria</a></li>
<li><a href="#concluding-remarks" id="toc-concluding-remarks"><span class="toc-section-number">3.4</span> Concluding Remarks</a></li>
<li><a href="#summary-2" id="toc-summary-2">Summary</a></li>
</ul></li>
<li><a href="#ch_IndividualFairness" id="toc-ch_IndividualFairness"><span class="toc-section-number">4</span> Individual Fairness</a>
<ul>
<li><a href="#individual-fairness-as-continuity" id="toc-individual-fairness-as-continuity"><span class="toc-section-number">4.1</span> Individual Fairness as Continuity</a></li>
<li><a href="#individual-fairness-as-randomness" id="toc-individual-fairness-as-randomness"><span class="toc-section-number">4.2</span> Individual Fairness as Randomness</a></li>
<li><a href="#similarity-metrics" id="toc-similarity-metrics"><span class="toc-section-number">4.3</span> Similarity Metrics</a></li>
<li><a href="#measuring-individual-fairness-in-practice" id="toc-measuring-individual-fairness-in-practice"><span class="toc-section-number">4.4</span> Measuring Individual Fairness in Practice</a></li>
<li><a href="#summary-3" id="toc-summary-3">Summary</a></li>
</ul></li>
<li><a href="#ch_UtilityFairness" id="toc-ch_UtilityFairness"><span class="toc-section-number">5</span> Utility as Fairness</a>
<ul>
<li><a href="#measuring-inequality" id="toc-measuring-inequality"><span class="toc-section-number">5.1</span> Measuring Inequality</a></li>
<li><a href="#generalised-entropy-indices" id="toc-generalised-entropy-indices"><span class="toc-section-number">5.2</span> Generalised Entropy Indices</a></li>
<li><a href="#defining-a-benefit-function" id="toc-defining-a-benefit-function"><span class="toc-section-number">5.3</span> Defining a Benefit Function</a></li>
<li><a href="#fairness-as-utility" id="toc-fairness-as-utility"><span class="toc-section-number">5.4</span> Fairness as Utility</a></li>
<li><a href="#summary-4" id="toc-summary-4">Summary</a></li>
</ul></li>
<li><a href="#app_Notation" id="toc-app_Notation"><span class="toc-section-number">A</span> Notation and Conventions</a></li>
<li><a href="#app_Metrics" id="toc-app_Metrics"><span class="toc-section-number">B</span> Performance Metrics</a></li>
<li><a href="#app_ProbRules" id="toc-app_ProbRules"><span class="toc-section-number">C</span> Rules of Probability</a></li>
<li><a href="#app_Solutions" id="toc-app_Solutions"><span class="toc-section-number">D</span> Proofs and Code</a>
<ul>
<li><a href="#sec_app_GFSolutions" id="toc-sec_app_GFSolutions"><span class="toc-section-number">D.1</span> Group Fairness</a></li>
<li><a href="#sec_app_IFSolutions" id="toc-sec_app_IFSolutions"><span class="toc-section-number">D.2</span> Individual Fairness</a></li>
<li><a href="#sec_app_IISolutions" id="toc-sec_app_IISolutions"><span class="toc-section-number">D.3</span> Utility as Fairness</a></li>
</ul></li>
<li><a href="#app_AIF360" id="toc-app_AIF360"><span class="toc-section-number">E</span> AIF360</a>
<ul>
<li><a href="#app_AIF360_GF" id="toc-app_AIF360_GF"><span class="toc-section-number">E.1</span> Group Fairness</a></li>
<li><a href="#app_AIF360_IF" id="toc-app_AIF360_IF"><span class="toc-section-number">E.2</span> Individual Fairness</a></li>
<li><a href="#app_AIF360_II" id="toc-app_AIF360_II"><span class="toc-section-number">E.3</span> Utility as Fairness</a></li>
</ul></li>
<li><a href="#bibliography" id="toc-bibliography">References</a></li>
</ul>
</nav>
</div>
<div id="collapsiblemenu">
<button class="collapsible">
<div class="shortthickbar"></div>
<div class="shortthickbar"></div>
<div class="shortthickbar"></div>
</button>
<div class="content">
<ul>
<li><a href="#part-i-introduction" id="toc-part-i-introduction">Part I Introduction</a></li>
<li><a href="#ch_Background" id="toc-ch_Background"><span class="toc-section-number">1</span> Context</a>
<ul>
<li><a href="#bias-in-machine-learning" id="toc-bias-in-machine-learning"><span class="toc-section-number">1.1</span> Bias in Machine Learning</a></li>
<li><a href="#sec_FairnessJustice" id="toc-sec_FairnessJustice"><span class="toc-section-number">1.2</span> A Philosophical Perspective</a></li>
<li><a href="#a-legal-perspective" id="toc-a-legal-perspective"><span class="toc-section-number">1.3</span> A Legal Perspective</a></li>
<li><a href="#sec_SimpsParadox" id="toc-sec_SimpsParadox"><span class="toc-section-number">1.4</span> A Technical Perspective</a></li>
<li><a href="#sec_harms" id="toc-sec_harms"><span class="toc-section-number">1.5</span> What’s the Harm?</a></li>
<li><a href="#summary" id="toc-summary">Summary</a></li>
</ul></li>
<li><a href="#ch_EthicalDev" id="toc-ch_EthicalDev"><span class="toc-section-number">2</span> Ethical development</a>
<ul>
<li><a href="#machine-learning-cycle" id="toc-machine-learning-cycle"><span class="toc-section-number">2.1</span> Machine Learning Cycle</a></li>
<li><a href="#sec_ResponseDev" id="toc-sec_ResponseDev"><span class="toc-section-number">2.2</span> Model Development and Deployment Life Cycle</a></li>
<li><a href="#sec_ProcessPolicy" id="toc-sec_ProcessPolicy"><span class="toc-section-number">2.3</span> Responsible Model Development and Deployment</a></li>
<li><a href="#common-causes-of-harm" id="toc-common-causes-of-harm"><span class="toc-section-number">2.4</span> Common Causes of Harm</a></li>
<li><a href="#linking-common-causes-of-harm-to-the-workflow" id="toc-linking-common-causes-of-harm-to-the-workflow"><span class="toc-section-number">2.5</span> Linking Common Causes of Harm to the Workflow</a></li>
<li><a href="#summary-1" id="toc-summary-1">Summary</a></li>
</ul></li>
<li><a href="#part-ii-measuring-bias" id="toc-part-ii-measuring-bias">Part II Measuring Bias</a></li>
<li><a href="#ch_GroupFairness" id="toc-ch_GroupFairness"><span class="toc-section-number">3</span> Group Fairness</a>
<ul>
<li><a href="#sec_BalOut" id="toc-sec_BalOut"><span class="toc-section-number">3.1</span> Comparing Outcomes</a></li>
<li><a href="#sec_BalErr" id="toc-sec_BalErr"><span class="toc-section-number">3.2</span> Comparing Errors</a></li>
<li><a href="#sec_Impossible" id="toc-sec_Impossible"><span class="toc-section-number">3.3</span> Incompatibility Between Fairness Criteria</a></li>
<li><a href="#concluding-remarks" id="toc-concluding-remarks"><span class="toc-section-number">3.4</span> Concluding Remarks</a></li>
<li><a href="#summary-2" id="toc-summary-2">Summary</a></li>
</ul></li>
<li><a href="#ch_IndividualFairness" id="toc-ch_IndividualFairness"><span class="toc-section-number">4</span> Individual Fairness</a>
<ul>
<li><a href="#individual-fairness-as-continuity" id="toc-individual-fairness-as-continuity"><span class="toc-section-number">4.1</span> Individual Fairness as Continuity</a></li>
<li><a href="#individual-fairness-as-randomness" id="toc-individual-fairness-as-randomness"><span class="toc-section-number">4.2</span> Individual Fairness as Randomness</a></li>
<li><a href="#similarity-metrics" id="toc-similarity-metrics"><span class="toc-section-number">4.3</span> Similarity Metrics</a></li>
<li><a href="#measuring-individual-fairness-in-practice" id="toc-measuring-individual-fairness-in-practice"><span class="toc-section-number">4.4</span> Measuring Individual Fairness in Practice</a></li>
<li><a href="#summary-3" id="toc-summary-3">Summary</a></li>
</ul></li>
<li><a href="#ch_UtilityFairness" id="toc-ch_UtilityFairness"><span class="toc-section-number">5</span> Utility as Fairness</a>
<ul>
<li><a href="#measuring-inequality" id="toc-measuring-inequality"><span class="toc-section-number">5.1</span> Measuring Inequality</a></li>
<li><a href="#generalised-entropy-indices" id="toc-generalised-entropy-indices"><span class="toc-section-number">5.2</span> Generalised Entropy Indices</a></li>
<li><a href="#defining-a-benefit-function" id="toc-defining-a-benefit-function"><span class="toc-section-number">5.3</span> Defining a Benefit Function</a></li>
<li><a href="#fairness-as-utility" id="toc-fairness-as-utility"><span class="toc-section-number">5.4</span> Fairness as Utility</a></li>
<li><a href="#summary-4" id="toc-summary-4">Summary</a></li>
</ul></li>
<li><a href="#app_Notation" id="toc-app_Notation"><span class="toc-section-number">A</span> Notation and Conventions</a></li>
<li><a href="#app_Metrics" id="toc-app_Metrics"><span class="toc-section-number">B</span> Performance Metrics</a></li>
<li><a href="#app_ProbRules" id="toc-app_ProbRules"><span class="toc-section-number">C</span> Rules of Probability</a></li>
<li><a href="#app_Solutions" id="toc-app_Solutions"><span class="toc-section-number">D</span> Proofs and Code</a>
<ul>
<li><a href="#sec_app_GFSolutions" id="toc-sec_app_GFSolutions"><span class="toc-section-number">D.1</span> Group Fairness</a></li>
<li><a href="#sec_app_IFSolutions" id="toc-sec_app_IFSolutions"><span class="toc-section-number">D.2</span> Individual Fairness</a></li>
<li><a href="#sec_app_IISolutions" id="toc-sec_app_IISolutions"><span class="toc-section-number">D.3</span> Utility as Fairness</a></li>
</ul></li>
<li><a href="#app_AIF360" id="toc-app_AIF360"><span class="toc-section-number">E</span> AIF360</a>
<ul>
<li><a href="#app_AIF360_GF" id="toc-app_AIF360_GF"><span class="toc-section-number">E.1</span> Group Fairness</a></li>
<li><a href="#app_AIF360_IF" id="toc-app_AIF360_IF"><span class="toc-section-number">E.2</span> Individual Fairness</a></li>
<li><a href="#app_AIF360_II" id="toc-app_AIF360_II"><span class="toc-section-number">E.3</span> Utility as Fairness</a></li>
</ul></li>
<li><a href="#bibliography" id="toc-bibliography">References</a></li>
</ul>
</div>
</div>
<section id="part-i-introduction" class="level1 unnumbered">
<h1 class="unnumbered">Part I Introduction</h1>
<p>Welcome to Mitigating Bias in Machine Learning. If you’ve made it here, chances are you’ve worked with models and have some awareness of the problem of biased machine learning algorithms. You might be a student with a foundational course in machine learning under your belt, or a Data Scientist or Machine Learning Engineer, concerned about the impact your models might have on the world.</p>
<p>In this book we are going to learn about and analyse a whole host of techniques for measuring and mitigating bias in machine learning models. We’re going to compare them in order to understand their strengths and weaknesses. Mathematics is an important part of modelling, and we won’t shy away from it. Where possible, we will aim to take a mathematically rigorous approach to answering questions.</p>
<p>Mathematics, just like code, can contain bugs. In this book, each has been used to verify the other. The analysis in this book was completed using Python. The <a href="https://github.com/leenamurgai/mitigatingbias.ml/tree/main/code">Jupyter Notebooks</a> are available on GitHub for those who would like to view or use them. That said, this book is intended to be self-contained and does not contain code. We will focus on the concepts rather than the implementation.</p>
<p>Mitigating Bias in Machine Learning is ultimately about fairness. The goal of this book is to understand how we, as practising model developers, might build fairer predictive systems and avoid causing harm (sometimes that might mean not building something at all). There are many facets to solving a problem like this; not all of them involve equations and code. The first two chapters (part I) are dedicated to discussing these.</p>
<p>In a sense, over the course of the book, we will zoom in on the problem, or rather narrow our perspective. In chapter 1, we’ll discuss philosophical, political, legal, technical and social perspectives. In chapter 2, we take a more practical view of the problem of ethical development (how to build and organise the development of models with a view to reducing ethical risk).</p>
<p>In part II we will talk about how we quantify different notions of fairness.</p>
<p>In part III, we will look at methods for mitigating bias through model interventions and analyse their impact.</p>
<p>Let’s get started.</p>
</section>
<section id="ch_Background" class="level1" data-number="1">
<h1 data-number="1"><span class="header-section-number">1</span> Context</h1>
<div class="chapsumm">
<p><strong>This chapter at a glance</strong></p>
<ul>
<li><p>Problems with machine learning in sociopolitical domains</p></li>
<li><p>Contrasting socio-political theories of fairness in decision systems</p></li>
<li><p>The history, application and interpretation of anti-discrimination law</p></li>
<li><p>Association paradoxes and the difficulty in identifying bias</p></li>
<li><p>The different types of harm caused by biased systems</p></li>
</ul>
</div>
<p>The goal of this chapter is to shed light on the problem of bias in machine learning from a variety of different perspectives. The word <em>bias</em> can mean many things, but in this book we use it interchangeably with the term <em>unfairness</em>. We’ll talk about why later.</p>
<p>Perhaps the biggest challenge in developing <em>sociotechnical systems</em> is that it inevitably involves questions which are social, philosophical, political, and legal in nature; questions to which there is often no definitive answer but rather competing viewpoints and trade-offs to be made. As we’ll see, this does not change when we attempt to quantify the problem. There are multiple definitions of fairness that have been proven impossible to satisfy simultaneously. The problem of bias in sociotechnical systems is very much an interdisciplinary one and, in this chapter, we treat it as such. We will make connections between concepts and language from the various subjects over the course of this book.</p>
<p>In this chapter we shall discuss some philosophical theories of fairness in sociopolitical systems and consider how they might relate to model training and fairness criteria. We’ll take a legal perspective, looking at anti-discrimination laws in the US as an example. We’ll discuss some of the history behind them, their practical application, and the tensions that exist in their interpretation. Data can be misleading; correlation does not imply causation, which is why domain knowledge is imperative when building sociotechnical systems. We will discuss the technical difficulty of identifying bias in static data through illustrative examples of Simpson’s paradox. Finally, we’ll discuss why it’s important to consider the fairness of automated systems. We’ll finish the chapter by discussing some of the different types of harm caused by biased machine learning systems: not just allocative harms but also representational harms, which are currently less well defined and a potentially valuable research area.</p>
<p>Let’s start by describing the types of problems we are interested in.</p>
<section id="bias-in-machine-learning" class="level2" data-number="1.1">
<h2 data-number="1.1"><span class="header-section-number">1.1</span> Bias in Machine Learning</h2>
<p>Machine learning can be described as the study of computer algorithms that improve with (or learn from) experience. It can be broadly subdivided into the fields of supervised, unsupervised and reinforcement learning.</p>
<section id="supervised-learning" class="level5 unnumbered">
<h5 class="unnumbered">Supervised learning</h5>
<p>For supervised learning problems, the experience comes in the form of labelled training data. Given a set of features <span class="math inline">\(X\)</span> and labels (or targets) <span class="math inline">\(Y\)</span>, we want to learn a function or mapping <span class="math inline">\(f\)</span>, such that <span class="math inline">\(Y = f(X)\)</span>, where <span class="math inline">\(f\)</span> generalizes to previously unseen data.</p>
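<p>As a minimal illustration of this idea (not taken from the book’s companion notebooks; the data and model choice below are hypothetical), the following Python sketch fits a mapping from features to labels on a small synthetic dataset and uses it to predict labels for previously unseen points.</p>
<pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical labelled training data: features X and a binary target Y.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
Y = (X[:, 0] + 0.5 * X[:, 1] &gt; 0).astype(int)

# Learn an approximation f of the relationship Y = f(X).
f = LogisticRegression().fit(X, Y)

# The learned mapping should generalise to previously unseen data.
X_new = rng.normal(size=(5, 2))
print(f.predict(X_new))</code></pre>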
</section>
<section id="unsupervised-learning" class="level5 unnumbered">
<h5 class="unnumbered">Unsupervised learning</h5>
<p>For unsupervised learning problems there are no labels <span class="math inline">\(Y\)</span>, only features <span class="math inline">\(X\)</span>. Instead we are interested in looking for patterns and structure in the data. For example, we might want to subdivide the data into clusters of points with similar (previously unknown) characteristics or we might want to reduce the dimensionality of the data (to be able to visualize it or simply to make a supervised learning algorithm more efficient). In other words, we are looking for a new feature <span class="math inline">\(Y\)</span> and the mapping <span class="math inline">\(f\)</span> from <span class="math inline">\(X\)</span> to <span class="math inline">\(Y\)</span>.</p>
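<p>The sketch below (again purely illustrative, on made-up data) shows both ideas: clustering assigns each point a previously unknown group label, and dimensionality reduction maps the features to a lower-dimensional representation.</p>
<pre><code>import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Hypothetical unlabelled data: two blobs in five dimensions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 5)),
               rng.normal(3.0, 1.0, size=(100, 5))])

# Clustering: discover a new feature Y, the cluster assignment of each point.
Y = KMeans(n_clusters=2, n_init=10).fit_predict(X)

# Dimensionality reduction: map X to a two-dimensional representation.
X_2d = PCA(n_components=2).fit_transform(X)
print(Y[:10], X_2d.shape)</code></pre>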
</section>
<section id="reinforcement-learning" class="level5 unnumbered">
<h5 class="unnumbered">Reinforcement learning</h5>
<p>Reinforcement learning is concerned with the problem of optimally navigating a state space to reach a goal state. The problem is framed as an agent that takes actions, which result in rewards (or penalties). The task is then to maximize the cumulative reward. As with unsupervised learning, the agent is not given a set of examples of optimal actions in various states, but rather must learn them through trial and error. A key aspect of reinforcement learning is the existence of a trade-off between exploration (searching unexplored territory in the hope of finding a better choice) and exploitation (exploiting what has been learned so far).</p>
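<p>As a toy illustration of the exploration-exploitation trade-off (with made-up reward probabilities), the epsilon-greedy strategy for a multi-armed bandit explores a random action with probability epsilon and otherwise exploits the action with the highest estimated reward so far.</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
true_reward_prob = [0.2, 0.5, 0.8]   # hypothetical; unknown to the agent
estimates = np.zeros(3)              # running estimate of each action's value
counts = np.zeros(3)
epsilon = 0.1

for t in range(1000):
    if rng.random() &lt; epsilon:
        action = int(rng.integers(3))        # explore a random action
    else:
        action = int(np.argmax(estimates))   # exploit the best-looking action
    reward = float(rng.random() &lt; true_reward_prob[action])
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # the most-chosen action's estimate converges towards its true reward probability</code></pre>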
<p>In this book we will focus on the first two categories (essentially, algorithms that capture and/or exploit patterns in data), primarily because these are the fields in which problems related to bias in machine learning are most pertinent (automation and prediction). As one would expect then, these are also the areas in which many of the technical developments in measuring and mitigating bias have been concentrated.</p>
<p>The idea that the kinds of technologies described above are <em>learning</em> is an interesting one. The analogy is clear: learning by example is certainly a way to learn. In less modern disciplines one might simply think of <em>training</em> a model as solving an equation, interpolating data, or optimising model parameters. So where does the terminology come from? The term <em>machine learning</em> was coined by Arthur Samuel in the 1950s when, at IBM, he developed an algorithm capable of playing draughts (checkers). By the mid-1970s his algorithm was competitive at amateur level. Though it was not called reinforcement learning at the time, the algorithm was one of the earliest implementations of such ideas. Samuel used the term <em>rote learning</em> to describe a memorisation technique he implemented where the machine remembered all the states it had visited and the corresponding reward function, in order to extend the search tree.</p>
</section>
<section id="what-is-a-model" class="level3" data-number="1.1.1">
<h3 data-number="1.1.1"><span class="header-section-number">1.1.1</span> What is a Model?</h3>
<p>Underlying every machine learning algorithm is a model (often several of them) and these have been around for millennia. Based on the discovery of palaeolithic tally sticks (animal bones carved with notches) it’s believed that humans have kept numerical records for over 40,000 years. The earliest mathematical models (from around 4,000 BC) were geometric and used to advance the fields of astronomy and architecture. By 2,000 BC, mathematical models were being used in an algorithmic manner to solve specific problems by at least three civilizations (Babylon, Egypt and India).</p>
<p>A model is a simplified representation of some real world phenomena. It is an expression of the relationship between things; a function or mapping which, given a set of input variables (features), returns a decision or prediction (target). A model can be determined with the help of data, but it need not be. It can simply express an opinion as to how things should be related.</p>
<p>If we have a model which represents a theoretical understanding of the world (under a series of simplifying assumptions) we can test it by measuring and comparing the results to reality. Based on the results we can assess how accurate our understanding of the world was and update our model accordingly. In this way, making simplifying assumptions can be a means to iteratively improve our understanding of the world. Models play an incredibly important role in the pursuit of knowledge. They have provided a mechanism to understand the world around us and explain why things behave as they do: to prove that the earth could not be flat, to explain why the stars move and shift in brightness as they do or, (somewhat) more recently in the case of my PhD, to explain why supersonic flows behave uncharacteristically when a shock wave encounters a vortex.</p>
<p>As models have been adopted by industry, their purpose has increasingly been geared towards prediction and automation, as a way to monetize that knowledge. But the pursuit of profit inevitably creates conflicts of interest. If your goal is to learn, then finding out where your theory is wrong and fixing it is the point. In business, less so. I recall a joke I heard at school describing how one could tell which field of science an experiment belonged to: if it changes colour, it’s biology; if it explodes, it’s chemistry; and if it doesn’t work, it’s physics. Models of real world phenomena fail. They are, by their very nature, a reductive representation of an infinitely more complex real world system. Obtaining adequately rich and relevant data is a major limitation of machine learning models and yet they are increasingly being applied to problems where that kind of data simply doesn’t exist.</p>
</section>
<section id="sociotechnical-systems" class="level3" data-number="1.1.2">
<h3 data-number="1.1.2"><span class="header-section-number">1.1.2</span> Sociotechnical systems</h3>
<p>We use the term <em>sociotechnical systems</em> to describe systems that involve algorithms that manage people. They make efficient decisions for and about us, determine what we see, direct us and more. But managing large numbers of people inevitably exerts a level of authority and control. An extreme example is the adoption of just-in-time scheduling algorithms by large retailers in the US to manage staffing needs. To predict footfall, the algorithms take into account everything from weather forecasts to sporting events. The cost of this efficiency is passed on to employees. The number of hours allocated is optimised to fall short of qualifying for costly health insurance. Employees are subjected to haphazard schedules that prevent them from being able to prioritise anything other than work, eliminating the possibility of any opportunity that might enable them to advance beyond the low-wage work pool.</p>
<p>Progress in the field of deep learning, combined with increased availability and decreased cost of computational resources, has led to an explosion in data and model use. Automation seemingly offers a path to making our lives easier, improving the efficiency and efficacy of the many industries we transact with day to day; but there are growing and legitimate concerns over how the benefits (and costs) of these efficiencies are distributed. Machine learning is already being used to automate decisions in just about every aspect of modern life: deciding which adverts to show to whom, deciding which transactions might be fraudulent when we shop, deciding who is able to access financial services such as loans and credit cards, determining our treatment when sick, filtering candidates for education and employment opportunities, determining which neighbourhoods to police and even, in the criminal justice system, deciding what level bail should be set at or the length of a given sentence. At almost every major life event (going to university, getting a job, buying a house, getting sick) decisions are being made by machines.</p>
</section>
<section id="what-kind-of-bias" class="level3" data-number="1.1.3">
<h3 data-number="1.1.3"><span class="header-section-number">1.1.3</span> What Kind of Bias?</h3>
<p>The word <em>bias</em> is rather overloaded; it has numerous different interpretations even within the same discipline. Let’s talk about the kinds of biases that are relevant here. The word bias is used to describe systematic errors in variable estimation (predictions) from data. If the goal is to create systems that work similarly well for all types of people, we certainly want to avoid these. In a social context, bias is spoken of as prejudice or discrimination in a given context, based on characteristics that we as a society deem to be unacceptable or unfair (for example, hiring practices that systematically disadvantage women). Mitigating bias, though, is not just about avoiding discrimination; bias can also manifest when a system fails to adequately discriminate based on characteristics that are relevant to the problem (for example, systematically higher rates of error in visual recognition systems for darker skinned individuals). Systemic bias and discrimination are observed in data in numerous ways: historical decisions of course are susceptible, but perhaps more importantly so is the very definition of the categories, who is recognised and who is erased. Bias need not be conscious; in reality it starts at the very inception of technology, in deciding which problems are worth solving in the first place. Bias exists in how we measure the cost and benefit of new technologies. For sociotechnical systems, these are all deeply intertwined.</p>
<p>Ultimately, mitigating bias in our models is about fairness and in this book we shall use the terms interchangeably. Machine learning models are capable not only of proliferating existing societal biases, but of amplifying them, and are easily deployed at scale. But how do we even define fairness? And from whose perspective do we mean fair? The law can provide <em>some</em> context here. Laws, in many cases, define <em>protected</em> characteristics and domains (we’ll talk more about these later). We can potentially use these as a guide and we certainly have a responsibility to be law abiding citizens. A common approach historically has been to ignore protected characteristics. There are a few reasons for this. One reason is the false belief that an algorithm cannot discriminate based on features not included in the data. This assumption is easy to disprove with a counter example. A reasonably fool-proof way to systematically discriminate by race or rather ethnicity (without explicitly using it) is to discriminate by location/residence; that is, by another variable that’s strongly correlated and serves as a proxy. The legality of this practice depends on the domain. In truth, you don’t need a feature, or a proxy, to discriminate based on it; you just need enough data to be able to predict it. If it is predictable, the information is there and the algorithm is likely using it. Another reason for ignoring protected features is avoiding legal liability (we’ll talk more about this when we take a legal perspective later in the chapter).</p>
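<p>To make this point concrete, the following sketch (entirely synthetic; the variable names and numbers are made up for illustration) constructs a dataset in which a protected attribute is never given to the model but is strongly correlated with another feature. A simple classifier can nevertheless recover the protected attribute with high accuracy, which is all an algorithm needs in order to effectively condition on it.</p>
<pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
protected = rng.integers(0, 2, size=n)   # protected attribute, never a model input

# A proxy feature (think location/residence) strongly correlated with the
# protected attribute, plus an unrelated feature.
proxy = protected + rng.normal(0.0, 0.3, size=n)
other = rng.normal(size=n)
X = np.column_stack([proxy, other])

X_tr, X_te, a_tr, a_te = train_test_split(X, protected, random_state=0)
clf = LogisticRegression().fit(X_tr, a_tr)
print("accuracy recovering the protected attribute:", clf.score(X_te, a_te))</code></pre>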
<section id="example-amazon-prime-same-day-delivery-service" class="level4 unnumbered">
<h4 class="unnumbered">Example: Amazon Prime same day delivery service</h4>
<p>In 2016, analysis published by Bloomberg uncovered racial disparities in eligibility for Amazon’s same day delivery services for Prime customers<span class="sidenote-wrapper"><label for="sn-0" class="margin-toggle sidenote-number"></label><input type="checkbox" id="sn-0" class="margin-toggle"/><span class="sidenote">To be clear, the same day delivery was free for eligible Amazon Prime customers on sales exceeding $35. Amazon Prime members pay a fixed annual subscription fee, thus the disparity is in the level of service provided for Prime customers who are eligible versus those who are not.<br />
<br />
</span></span><span class="citation" data-cites="AmazonSameDayPrime"><a href="#ref-AmazonSameDayPrime" role="doc-biblioref">[1]</a></span><span class="marginnote"><span id="ref-AmazonSameDayPrime" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[1] </span><span class="csl-right-inline">D. Ingold and S. Soper, <span>“Amazon doesn’t consider the race of its customers. Should it?”</span> <em>Bloomberg</em>, 2016.</span>
</span>
</span>. The study used census data to identify Black and White residents and plotted the data points on city maps which simultaneously showed the areas that qualified for Prime customer same-day delivery. The disparities are glaring at a glance. In six major cities (New York, Boston, Atlanta, Chicago, Dallas, and Washington, DC) where the service did not have broad coverage, it was mainly Black neighbourhoods that were ineligible. In the latter four cities, Black residents were about half as likely as White residents to live in neighbourhoods eligible for Amazon same-day delivery.</p>
<p>At the time, Amazon’s process for determining which ZIP codes to serve was reportedly a cost-benefit calculation that did not explicitly take race into account, but for those who have seen redlining maps from the 1930s it is hard not to see the resemblance. Redlining was the (now illegal) practice of declining (or raising the price of) financial products for people based on the neighbourhood where they lived. Because neighbourhoods were racially segregated (a legacy that lives on today), public and private institutions were able to systematically exclude minority populations from the housing market and deny loans for house improvements without explicitly taking race into account. Between 1934 and 1962, the Federal Housing Administration distributed $120 billion in loans. Thanks to redlining, 98% of these went to White families.</p>
<p>Amazon is a private enterprise, and it is legally entitled to make decisions about where to offer services based on how profitable they are. Some might argue they have a right to be able to make those decisions. Amazon is not responsible for the injustices that created such racial disparities, but the reality is that such disparities in access to goods and services perpetuate them. If same-day delivery sounds like a luxury, it’s worth considering the context. The cities affected have long histories of racial segregation and economic inequality resulting from systemic racism of a kind now deemed illegal. The affected neighbourhoods are to this day underserved by brick and mortar retailers, and residents are forced to travel further and pay more for household essentials. Now we are in the midst of a pandemic: where delivery of household goods was once a luxury, with so many forced to quarantine it has suddenly become far more of a necessity. What we consider to be a necessity changes over time; it depends on where one lives, one’s circumstances and more. Finally, consider the scale of Amazon’s operations: in 2016, one third of retail e-commerce spending in the US was with Amazon (that number has since risen to almost 50%).</p>
</section>
</section>
</section>
<section id="sec_FairnessJustice" class="level2" data-number="1.2">
<h2 data-number="1.2"><span class="header-section-number">1.2</span> A Philosophical Perspective</h2>
<p>Developing a model is not an objective scientific process; it involves making a series of subjective choices. Cathy O’Neil describes models as “opinions embedded in code”. One of the most fundamental ways in which we impose our opinion on a machine learning model is in deciding how we measure success. Let’s look at the process of training a model. We start with some parametric representation (a family of models), which we hope is sufficiently complex to be able to reflect the relationships between the variables in the data. The goal in training is to determine which model (in our chosen family) is <em>best</em>, the <em>best</em> model being the one that maximises its utility (from the model developer’s perspective).</p>
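<p>A minimal, purely illustrative sketch of this point: on an imbalanced problem, which of two candidate models counts as <em>best</em> depends entirely on the measure of success the developer chooses. The predictions below are hard-coded stand-ins for two hypothetical models.</p>
<pre><code>import numpy as np
from sklearn.metrics import accuracy_score, recall_score

# Hypothetical imbalanced ground truth: 10 positive cases out of 100.
y_true = np.array([1] * 10 + [0] * 90)

# Two candidate models, represented only by their predictions.
always_negative = np.zeros(100, dtype=int)                            # never flags a positive
flags_positives = np.array([1] * 8 + [0] * 2 + [1] * 10 + [0] * 80)   # catches 8 of 10, with 10 false alarms

for name, y_pred in [("always negative", always_negative),
                     ("flags positives", flags_positives)]:
    print(name,
          "accuracy:", accuracy_score(y_true, y_pred),
          "recall:", recall_score(y_true, y_pred))
# "always negative" wins on accuracy (0.90 vs 0.88);
# "flags positives" wins on recall (0.80 vs 0.00).</code></pre>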
<p>For sociotechnical systems, our predictions don’t only impact the decision maker; they also result in a benefit (or harm) to those subjected to them. The very purpose of codifying a decision policy is often to cheaply deploy it at scale. The more people it processes, the more value there is in codifying the decision process. Another way to look at such models, then, is as systems for distributing benefits (or harms) among a population. Given this, which model is the <em>right</em> one, so to speak? In this section we briefly discuss some philosophical theories relevant to these types of problems. We start with utilitarianism, which is perhaps the easiest theory to draw parallels with in modelling.</p>
<section id="utilitarianism" class="level3" data-number="1.2.1">
<h3 data-number="1.2.1"><span class="header-section-number">1.2.1</span> Utilitarianism</h3>
<p>Utilitarianism provides a framework for moral reasoning in decision making. Under this framework, the correct course of action, when faced with a dilemma, is the one that maximises the benefit for the greatest number of people. The doctrine demands that the benefits to all people are counted equally. Variations of the theory have evolved over the years. Some differ in their notion of how benefits are understood. Others distinguish between the quality of various kinds of benefit. In a business context, one might consider it as financial benefit (and cost), although this in itself depends on one’s perspective. Some doctrines advocate that the impact of the action in isolation should be considered, while others ask what the impact would be if everyone in the population took the same actions.</p>
<p>There are some practical problems with utilitarianism as the sole guiding principle for decision making. How do we measure benefit? How do we navigate the complexities of placing a value on immeasurable and vastly different consequences? What is a life, time, money or particular emotion worth and how do we compare and aggregate them? How can one even be certain of the consequences? Longer term consequences are hard if not impossible to predict. Perhaps the most significant flaw in utilitarianism for moral reasoning, is the omission of justice as a consideration.</p>
<p>Utilitarian reasoning judges actions based solely on consequences, and aggregates them over a population. So, if an action that unjustly harms a minority group happens to be the one that maximises the aggregate benefit over a population, it is nevertheless the correct action to take. Under utilitarianism, theft or infidelity might be morally justified, if those it would harm are none the wiser. Or punishing an innocent person for a crime they did not commit could be justified, if it served to quell unrest among a population. For this reason it is widely accepted that utilitarianism is insufficient as a framework for decision making.</p>
<p>Utilitarianism is a flavour of consequentialism, a branch of ethical theory that holds that consequences are the yardstick against which we must judge the morality of our actions. In contrast, deontological ethics judges the morality of actions against a set of rules that define our duties or obligations towards others. Here it is not the consequences of our actions that matter but rather intent.</p>
<p>The conception of utilitarianism is attributed to British philosopher Jeremy Bentham, who authored the first major book on the topic, <em>An Introduction to the Principles of Morals and Legislation</em>, in 1780. In it Bentham argues that it is the pursuit of pleasure and avoidance of pain alone that motivate individuals to act. Given this, he saw utilitarianism as a principle by which to govern. Broadly speaking, the role of government, in his view, was to assign rewards or punishments to actions in proportion to the happiness or suffering they produced among the governed. At the time, the idea that the well-being of all people should be counted equally, and that the morality of actions should be judged accordingly, was revolutionary. Bentham was a progressive in his time; he advocated for women’s rights (to vote, hold office and divorce), the decriminalisation of homosexual acts, prison reform, the abolition of slavery and more. He argued many of his beliefs as a simple economic calculation of how much happiness they would produce. Importantly, he didn’t claim that all people were equal, but rather only that their happiness mattered equally.</p>
<p>Times have changed. Over the last century, as civil rights have advanced, the weaknesses of utilitarianism in practice have been exposed time and time again. Utilitarian reasoning has increasingly been seen as hindering social progress, rather than advancing it. For example, utilitarian arguments were used by Whites in apartheid South Africa, who claimed that all South Africans were better off under White rule, and that a mixed government would lead to social decline as it had in other African nations. Utilitarian reasoning has been used widely by capitalist nations in the form of trickle-down economics, the theory being that the benefits of tax breaks for the wealthy drive economic growth and ‘trickle down’ to the rest of the population. But evidence suggests that trickle-down economic policies in more recent decades have done more damage than good, increasing national debt and fuelling income inequality. Utilitarian principles have also been tested in the debate over torture, capturing a rather callous conviction, one where the ‘ends justify the means’.</p>
<p>Historian and author Yuval Noah Harari has eloquently abstracted this problem. He argues that historically, decentralization of power and efficiency have aligned; so much so that many of us cannot imagine democracy failing and giving way to more totalitarian regimes. But in this new age, data is power. We can train enormous models, which require vast amounts of data, to process people en masse, organising and sorting them. And importantly, one does not need a perfect system in order to have an impact, because of the scale on which such systems can be deployed. The question Harari poses is, <em>might the benefits of centralised data offer a great enough advantage to tip the balance of efficiency in favour of more centralised models of power?</em></p>
</section>
<section id="justice-as-fairness" class="level3" data-number="1.2.2">
<h3 data-number="1.2.2"><span class="header-section-number">1.2.2</span> Justice as Fairness</h3>
<p>In his theory Justice As Fairness<span class="citation" data-cites="JusticeFairness"><a href="#ref-JusticeFairness" role="doc-biblioref">[2]</a></span><span class="marginnote"><span id="ref-JusticeFairness" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[2] </span><span class="csl-right-inline">J. Rawls, <em>Justice as fairness: A restatement</em>. Cambridge, Mass.: Harvard University Press, 2001.</span>
</span>
</span>, John Rawls takes a different approach. He describes an idealised democratic framework, based on liberal principles, and explains how unified laws can be applied (in a free society made up of people with disparate world views) to create a stable sociopolitical system; one where citizens would not only freely co-operate, but actively advocate for it. He described a political conception of justice which would:
<ol>
<li><p>grant all citizens a set of basic rights and liberties</p></li>
<li><p>give special priority to the aforementioned rights and liberties over demands to further the general good, e.g. increasing the national wealth</p></li>
<li><p>assure all citizens sufficient means to make use of their freedoms.</p></li>
</ol>
<p>The special priority given to the basic rights and liberties in the political conception of justice contrasts with a utilitarian doctrine. Here, constraints are placed on how benefits can be distributed among the population, along with a strategy for determining some minimum.</p>
<section id="principles-of-justice-as-fairness" class="level4 unnumbered">
<h4 class="unnumbered">Principles of Justice as Fairness</h4>
<ol>
<li><p><strong>Liberty principle:</strong> Each person has the same indefeasible claim to a fully adequate scheme of equal basic liberties, which is compatible with the same scheme of liberties for all;</p></li>
<li><p><strong>Equality principle:</strong> Social and economic inequalities are to satisfy two conditions:</p>
<ol>
<li><p><strong>Fair equality of opportunity:</strong> The offices and positions to which they are attached are open to all, under conditions of fair equality of opportunity;</p></li>
<li><p><strong>Difference (maximin) principle:</strong> They must be of the greatest benefit to the least-advantaged members of society.</p></li>
</ol></li>
</ol>
<p>The principles of Justice as Fairness are ordered by priority so that fulfilment of the liberty principle takes precedence over the equality principles and fair equality of opportunity takes precedence over the difference principle.</p>
<p>The first principle grants basic rights and liberties to all citizens which are prioritised above all else and cannot be traded for other societal benefits. It’s worth spending a moment thinking about what those rights and liberties look like. They are the basic needs that are important for people to be free, to have choices and the means to pursue their aspirations. Today many of what Rawls considered to be basic rights and liberties are allocated algorithmically: education, employment, housing, healthcare, and consistent treatment under the law, to name a few.</p>
<p>The second principle requires positions to be allocated meritocratically, with all similarly talented (with respect to the skills and competencies required for the position) individuals having the same chance of attaining such positions i.e. that allocation of such positions should be independent of social class or background. We will return to the concept of <em>equality of opportunity</em> in chapter <a href="#ch_GroupFairness" data-reference-type="ref" data-reference="ch_GroupFairness">3</a> when discussing <em>Group Fairness</em>.</p>
<p>The third principle constrains the distribution of social and economic currency by requiring that inequalities be of maximal benefit to the least advantaged in a society; this is also described as the maximin principle. In this principle, Rawls does not take the simplistic view that inequality and fairness are mutually exclusive but rather concisely articulates when the existence of inequality becomes unfair. In a sense Rawls opposes utilitarian thinking (that everyone matters equally) in prioritising the least advantaged. We shall return to the maximin principle when we look at the use of <em>inequality indices</em> to measure algorithmic unfairness in a later chapter.</p>
</section>
</section>
</section>
<section id="a-legal-perspective" class="level2" data-number="1.3">
<h2 data-number="1.3"><span class="header-section-number">1.3</span> A Legal Perspective</h2>
<p>It’s important to remember that anti-discrimination laws are the result of long-standing and systemic discrimination against oppressed people. Their existence is a product of history: subjugation, genocide, civil war, mass displacement of entire communities, racial hierarchies and segregation, supremacist policies (exclusive access to publicly funded initiatives), voter suppression and more. The law provides an important historical record of what we as a society deem fair and unfair, but without history there is no context. The law does not define the benchmark for fairness. Laws vary by jurisdiction and change over time, and in particular they often do not adequately recognise or address issues related to discrimination that are known and accepted by the sciences (social, mathematical, medical, ...).</p>
<p>In this section we’ll look at the history, practical application and interpretation of the law in the US (acknowledging the narrow scope of our discussion). Finally, we’ll take a brief look at what might be on the legislative horizon for predictive algorithms, based on more recent global developments.</p>
<section id="a-brief-history-of-anti-discrimination-law-in-the-us" class="level3" data-number="1.3.1">
<h3 data-number="1.3.1"><span class="header-section-number">1.3.1</span> A Brief History of Anti-discrimination Law in the US</h3>
<p>Anti-discrimination laws in the US rest on the 14th amendment to the constitution, which grants citizens <em>equal protection of the laws</em>. The class action lawsuit Brown v Board (of Education of Topeka, Kansas) was a landmark case which, in 1954, legally ended racial segregation in US public schools. Justices ruled unanimously that racial segregation of children in public schools was unconstitutional, establishing the precedent that “separate-but-equal” was, in fact, not equal at all. Though Brown v Board did not end segregation in practice, resistance to it in the South fuelled the civil rights movement. In the years that followed, the NAACP (National Association for the Advancement of Colored People) challenged segregation laws. In 1955, Rosa Parks refusing to give up her seat on a bus in Montgomery (Alabama) led to sit-ins and boycotts, many of them led by Martin Luther King Jr. The resulting Civil Rights Act of 1964 eventually brought an end to “Jim Crow” laws which barred Blacks from sharing buses, schools and other public facilities with Whites.</p>
<p>After the violent attack by Alabama state troopers on participants in a peaceful march from Selma to Montgomery was televised, the Voting Rights Act of 1965 was passed. It overcame many barriers (including literacy tests), at state and local level, used to prevent Black people from voting. Before this, incidents of voting officials asking Black voters to “recite the entire Constitution or explain the most complex provisions of state laws”<span class="citation" data-cites="LBJ"><a href="#ref-LBJ" role="doc-biblioref">[3]</a></span><span class="marginnote"><span id="ref-LBJ" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[3] </span><span class="csl-right-inline">P. L. B. Johnson, <span>“Speech to a joint session of congress on march 15, 1965,”</span> <em>Public Papers of the Presidents of the United States</em>, vol. I, entry 107, pp. 281–287, 1965.</span>
</span>
</span> in the South were commonplace.</p>
<p>In the years following the second world war, there were many attempts to pass an Equal Pay Act. Initial efforts were led by unions who feared men’s salaries would be undercut by women who were paid less for doing their jobs during the war. By 1960, women made up 37% of the work force but earned on average 59 cents for each dollar earned by men. The Equal Pay Act was eventually passed in 1963 in a bill which endorsed “equal pay for equal work”. Laws for gender equality were strengthened the following year by the Civil Rights Act of 1964.</p>
<p>Throughout the 1800s the American federal government displaced Native American communities to facilitate White settlement. In 1830 the Indian Removal Act was passed in order to relocate hundreds of thousands of Native Americans. Over the following two decades, thousands of those forced to march hundreds of miles west on the perilous “Trail of Tears” died. By the middle of the century, the term “manifest destiny” was popularised to describe the belief that White settlement in North America was ordained by God. In 1887, the Dawes Act laid the groundwork for the seizing and redistribution of reservation lands from Native to White Americans. Between 1945 and 1968 the federal government terminated recognition of more than 100 tribal nations, placing them under state jurisdiction. Once again Native Americans were relocated, this time from reservations to urban centres.</p>
<p>In addition to displacing people of colour, the federal government also enacted policies that reduced barriers to home ownership almost exclusively for White citizens: subsidizing the development of prosperous "White Caucasian" tenant/owner-only suburbs, guaranteeing mortgages, and enabling access to job opportunities by building highway systems for White commuters, often routed through communities of colour, simultaneously devaluing the properties in them. Even government initiatives aimed at helping veterans of World War II to obtain home loans accommodated Jim Crow laws, allowing the exclusion of Black people. In the midst of the Vietnam War, just days after the assassination of Martin Luther King Jr., the Fair Housing Act of 1968 was passed, prohibiting discrimination concerning the sale, rental and financing of housing based on race, religion, national origin or sex.</p>
<p>The Civil Rights Act of 1964 acted as a catalyst for many other civil rights movements, including those protecting people with disabilities. The Rehabilitation Act (1973) removed architectural, structural and transportation barriers and set up affirmative action programs. The Individuals with Disabilities Education Act (IDEA, 1975) required free, appropriate public education in the least restrictive environment possible for children with disabilities. The Air Carrier Access Act (1986) prohibited discrimination on the basis of disability in air travel and ensured equal access to air transportation services. The Fair Housing Amendments Act (1988) prohibited discrimination in housing against people with disabilities.</p>
<p>Title IX of the Education Amendments of 1972 prohibits federally funded educational institutions from discriminating against students or employees based on sex. The law ensured that schools (elementary to university level) that were recipients of federal funding (nearly all schools) provided fair and equal treatment of the sexes in all areas, including athletics. Before this, few opportunities existed for female athletes. The National Collegiate Athletic Association (NCAA) offered no athletic scholarships for women and held no championships for women’s teams. Since then the number of female college athletes has grown fivefold. The amendment is credited with decreasing dropout rates and increasing the number of women gaining college degrees.</p>
<p>The Equal Credit Opportunity Act was passed in 1974, when discrimination against women applying for credit in the US was rife. It was common practice for mortgage lenders to discount the incomes of women of ‘child-bearing’ age or simply to deny them credit. Two years later the law was amended to prohibit lending discrimination based on race, color, religion, national origin, age, the receipt of public assistance income, or exercising one’s rights under consumer protection laws.</p>
<p>In 1978, Congress passed the Pregnancy Discrimination Act in response to two Supreme Court cases which ruled that excluding pregnancy-related disabilities from disability benefit coverage was not gender-based discrimination and did not violate the equal protection clause.</p>
<p>Table <a href="#tbl:RegDom" data-reference-type="ref" data-reference="tbl:RegDom">1.1</a> shows a (far from exhaustive) summary of regulated domains with corresponding US legislation. Note that legislation in these domains extend to marketing and advertising not just the final decision.</p>
<div id="tbl:RegDom">
<table>
<caption>Table 1.1: Regulated domains in the private sector under US federal law.</caption>
<thead>
<tr class="header">
<th style="text-align: left;">Domain</th>
<th style="text-align: left;">Legislation</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">Finance</td>
<td style="text-align: left;">Equal Credit Opportunity Act</td>
</tr>
<tr class="even">
<td rowspan="3" style="text-align: left;">Education</td>
<td style="text-align: left;">Civil Rights Act (1964)</td>
</tr>
<tr class="odd">
<td style="text-align: left;">Education Amendment (1972)</td>
</tr>
<tr class="even">
<td style="text-align: left;">IDEA (1975)</td>
</tr>
<tr class="odd">
<td rowspan="2" style="text-align: left;">Employment</td>
<td style="text-align: left;">Equal Pay Act(1963)</td>
</tr>
<tr class="even">
<td style="text-align: left;">Civil Rights Act (1964)</td>
</tr>
<tr class="odd">
<td rowspan="2" style="text-align: left;">Housing</td>
<td style="text-align: left;">Fair Housing Act (1968)</td>
</tr>
<tr class="even">
<td style="text-align: left;">Fair Housing Amendments Act (1988)</td>
</tr>
<tr class="odd">
<td rowspan="3" style="text-align: left;">Transport</td>
<td style="text-align: left;">Urban Mass Transit Act (1970)</td>
</tr>
<tr class="even">
<td style="text-align: left;">Rehabilitation Act (1973)</td>
</tr>
<tr class="odd">
<td style="text-align: left;">Air Carrier Access Act (1988)</td>
</tr>
<tr class="even">
<td style="text-align: left;">Public accommodation<sup>a</sup></td>
<td style="text-align: left;">Civil Rights Act (1964)</td>
</tr>
</tbody>
</table>
</div>
<div class="tablenotes">
<p><sup>a</sup>Prevents refusal of customers.</p>
</div>
<p>Table <a href="#tbl:ProtChar" data-reference-type="ref" data-reference="tbl:ProtChar">1.2</a> provides a list of protected characteristics under US federal law with corresponding legislation (again not exhaustive).</p>
<div id="tbl:ProtChar">
<table>
<caption>Table 1.2: Protected characteristics under US Federal Law.</caption>
<thead>
<tr class="header">
<th style="text-align: left;">Protected Characteristic</th>
<th style="text-align: left;">Legislation</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">Race</td>
<td style="text-align: left;">Civil Rights Act (1964)</td>
</tr>
<tr class="even">
<td rowspan="3" style="text-align: left;">Sex</td>
<td style="text-align: left;">Equal Pay Act (1963)</td>
</tr>
<tr class="odd">
<td style="text-align: left;">Civil Rights Act (1964)</td>
</tr>
<tr class="even">
<td style="text-align: left;">Pregnancy Discrimination Act (1978)</td>
</tr>
<tr class="odd">
<td style="text-align: left;">Religion</td>
<td style="text-align: left;">Civil Rights Act (1964)</td>
</tr>
<tr class="even">
<td style="text-align: left;">National Origin</td>
<td style="text-align: left;">Civil Rights Act (1964)</td>
</tr>
<tr class="odd">
<td style="text-align: left;">Citizenship</td>
<td style="text-align: left;">Immigration Reform & Control Act</td>
</tr>
<tr class="even">
<td style="text-align: left;">Age</td>
<td style="text-align: left;">Age Discrimination in Employment Act (1967)</td>
</tr>
<tr class="odd">
<td style="text-align: left;">Familial status</td>
<td style="text-align: left;">Civil Rights Act (1968)</td>
</tr>
<tr class="even">
<td rowspan="2" style="text-align: left;">Disability status</td>
<td style="text-align: left;">Rehabilitation Act of 1973</td>
</tr>
<tr class="odd">
<td style="text-align: left;">American with Disabilities Act of 1990</td>
</tr>
<tr class="even">
<td rowspan="2" style="text-align: left;">Veteran status</td>
<td style="text-align: left;">Veterans’ Readjustment Assistance Act 1974</td>
</tr>
<tr class="odd">
<td style="text-align: left;">Uniformed Services Employment & Reemployment Rights Act</td>
</tr>
<tr class="even">
<td style="text-align: left;">Genetic Information</td>
<td style="text-align: left;">Civil Rights Act(1964)</td>
</tr>
</tbody>
</table>
</div>
</section>
<section id="sec_AppLaw" class="level3" data-number="1.3.2">
<h3 data-number="1.3.2"><span class="header-section-number">1.3.2</span> Application and Interpretation of the Law</h3>
<p>To get an idea of how anti-discrimination laws are applied in practice and how they might translate to algorithmic decision making, we look at Title VII of the Civil Rights Act of 1964 in the context of employment discrimination<span class="citation" data-cites="BarocasSelbst"><a href="#ref-BarocasSelbst" role="doc-biblioref">[4]</a></span><span class="marginnote"><span id="ref-BarocasSelbst" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[4] </span><span class="csl-right-inline">S. Barocas and A. D. Selbst, <span>“Big data’s disparate impact,”</span> <em>Calif Law Rev.</em>, vol. 104, pp. 671–732, 2016.</span>
</span>
</span>. Legal liability for discrimination against protected classes can be established as disparate treatment and/or disparate impact. Disparate treatment (also described as direct discrimination in Europe) refers to both differing treatment of individuals based on protected characteristics, and intent to discriminate. Disparate impact (or indirect discrimination in Europe) does not consider intent but addresses policies and practices that disproportionately impact protected classes.</p>
<section id="disparate-treatment" class="level4 unnumbered">
<h4 class="unnumbered">Disparate Treatment</h4>
<p>Disparate treatment effectively prohibits rational prejudice (differing treatment backed by data showing the protected feature to be correlated with the outcome) as well as denial of opportunities based on protected characteristics. For an algorithm, it effectively prevents the use of protected characteristics as inputs. It’s noteworthy that in the case of disparate treatment, the actual impact of using the protected features on the outcome is irrelevant; even if a company could show that the target variable produced by its model had zero correlation with the protected characteristic, the company would still be liable for disparate treatment. This is somewhat bizarre given that excluding the protected feature from the algorithm provides no guarantee that the algorithm is not biased in relation to it. Indeed, an organisation could very well use its data to predict the protected characteristic.</p>
<p>In an effort to avoid disparate treatment liability, many organisations do not even collect data relating to protected characteristics, leaving them unable to accurately measure, let alone address, bias in their algorithms, even if they might want to<span class="sidenote-wrapper"><label for="sn-1" class="margin-toggle sidenote-number"></label><input type="checkbox" id="sn-1" class="margin-toggle"/><span class="sidenote">In fact, I met a data scientist at a conference, working for a financial institution, who said her team was trying to predict sensitive features such as race and gender in order to measure bias in their algorithms.<br />
<br />
</span></span>. In summary, disparate treatment as applied today does not resolve the problem of unintentional discrimination against disadvantaged classes through the use of machine learning algorithms. Further, it acts as a deterrent to ethically minded companies that might want to measure the biases in their algorithms.</p>
<div class="lookbox">
<p><strong>Disparate treatment</strong></p>
<p>Suppose a company predicts the sensitive feature and uses this as an input to its model. Should this be considered disparate treatment?</p>
</div>
<p>What about the case where the employer implements an algorithm, finds out that it has a disparate impact, and uses it anyway? Doesn’t that become disparate treatment? No, it doesn’t; in fact, somewhat surprisingly, deciding not to apply the algorithm upon noting the disparate impact could result in a disparate treatment claim in the opposite direction<span class="citation" data-cites="FireFighters"><a href="#ref-FireFighters" role="doc-biblioref">[5]</a></span><span class="marginnote"><span id="ref-FireFighters" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[5] </span><span class="csl-right-inline"><span>“<span class="nocase">Ricci v. DeStefano, 557 U.S. 557</span>.”</span> 2009.</span>
</span>
</span>. We’ll return to this later. Okay, so what about disparate impact?</p>
</section>
<section id="disparate-impact" class="level4 unnumbered">
<h4 class="unnumbered">Disparate Impact</h4>
<p>In order to establish a violation, it is not enough simply to show that there is a disparate impact; it must also be shown either that there is no business justification for it, or, if there is, that the employer refuses to use another, less discriminatory, means of achieving the desired result. So how much of an impact is enough to warrant a disparate impact claim? There are no rules here, only guidelines. The Uniform Guidelines on Employee Selection Procedures from the Equal Employment Opportunity Commission (EEOC) state that if the selection rate for one protected group is less than four fifths of that for another, this will generally be regarded as evidence of adverse impact, though they also state that the threshold depends on the circumstances.</p>
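<p>As a minimal sketch of how such a check might look in code (the function, group names and figures here are purely hypothetical, and this is an illustration of the guideline only, not a legal test), the four-fifths rule can be evaluated directly from selection counts:</p>
<pre><code># Four-fifths guideline: compare each group's selection rate against
# the most-favoured group's rate and flag ratios below 0.8.
def four_fifths_check(selected, applicants):
    """selected / applicants: dicts mapping group name to counts."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {g: (round(r, 3), round(r / best, 3), r / best &lt; 0.8)
            for g, r in rates.items()}

# Purely hypothetical hiring figures, for illustration only.
print(four_fifths_check(selected={"group_a": 48, "group_b": 24},
                        applicants={"group_a": 100, "group_b": 80}))
# {'group_a': (0.48, 1.0, False), 'group_b': (0.3, 0.625, True)}
# group_b's rate is 62.5% of group_a's, below four fifths, so it is flagged.
</code></pre>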
<p>Assuming the disparate impact is demonstrated, the issue becomes proving business justification. The requirement for business justification has softened in favour of the employer over the years; treated as “business necessity”<span class="citation" data-cites="BusinessNecessity"><a href="#ref-BusinessNecessity" role="doc-biblioref">[6]</a></span><span class="marginnote"><span id="ref-BusinessNecessity" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[6] </span><span class="csl-right-inline"><span>“<span class="nocase">Griggs v. Duke Power Co., 401 U.S. 424</span>.”</span> 1971.</span>
</span>
</span> earlier on and later interpreted as “business justification”<span class="citation" data-cites="BusinessJustification"><a href="#ref-BusinessJustification" role="doc-biblioref">[7]</a></span><span class="marginnote"><span id="ref-BusinessJustification" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[7] </span><span class="csl-right-inline"><span>“<span class="nocase">Wards Cove Packing Co. v. Atonio, 490 U.S. 642</span>.”</span> 1989.</span>
</span>
</span>. Today, it’s generally accepted that business justification lies somewhere between the extremes of “job-relatedness” and “business necessity”. As a concrete example of disparate impact, taking the extreme of job-relatedness: the EEOC, along with several federal courts, has determined discrimination on the sole basis of a criminal record to be a violation under disparate impact, unless the particular conviction is related to the role, because non-White applicants are more likely to have a criminal conviction.</p>
<p>For a machine learning algorithm, business justification boils down to the question of the job-relatedness of the target variable. If the target variable is improperly chosen, a disparate impact violation can be established. In practice, however, the courts will accept most plausible explanations of job-relatedness, since rejecting one would set a precedent that its use is discriminatory. Assuming, then, that the target variable is accepted as job-related, there is no requirement to validate the model’s ability to predict said trait; only a guideline which sets a low bar (a statistical significance test showing that the target variable correlates with the trait) and which the court is free to ignore.</p>
<p>Assuming business justification is proven by the employer, the final burden falls on the plaintiff to show that the employer refused to use a less discriminatory “alternative employment practice”. If the less discriminatory alternative would incur additional cost (as is likely), would declining to adopt it be considered refusal? Likely not.</p>
<p>While on the surface disparate impact might seem like a solution, the current framework, with its weak business justification requirement (a plausible target variable), the burden of showing that the employer refused an alternative employment practice, and no requirement to validate the model, offers little protection. Clearly there is need for reform.</p>
</section>
<section id="anti-classification-versus-anti-subordination" class="level4 unnumbered">
<h4 class="unnumbered">Anti-classification versus Anti-subordination</h4>
<p>Just as the meaning of fairness is subjective, so is the interpretation of anti-discrimination laws. At one extreme, anti-classification holds the weaker interpretation: that the law is intended to prevent classification of people based on protected characteristics. At the other extreme, anti-subordination takes the stronger stance: that anti-discrimination laws exist to prevent social hierarchies, class or caste systems based on protected features, and that the law should actively work to eliminate them where they exist. An important ideological difference between the two schools of thought is in the application of positive discrimination policies. Under anti-subordination principles, one might advocate for affirmative action as a means to bridge gaps in access to employment, housing, education and other such pursuits that are a direct result of historical systemic discrimination against particular groups. A strict interpretation of the anti-classification principle would prohibit such actions. Both anti-classification and anti-subordination ideologies have been argued and upheld in landmark cases.</p>
<p>In 2003, the Supreme Court held that a student admissions process that favours “under-represented minority groups” does not violate the Fourteenth Amendment<span class="citation" data-cites="UnderRepStudents"><a href="#ref-UnderRepStudents" role="doc-biblioref">[8]</a></span><span class="marginnote"><span id="ref-UnderRepStudents" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[8] </span><span class="csl-right-inline"><span>“<span class="nocase">Grutter v. Bollinger, 539 U.S. 306</span>.”</span> 2003.</span>
</span>
</span>, provided it evaluated applicants holistically at an individual level. The same year, the New Haven Fire Department administered a two-part test in order to fill 15 openings. Examinations were governed in part by the City of New Haven; under the city charter, civil service positions must be filled by one of the top three scoring individuals. 118 (White, Black and Hispanic) firefighters took the exams. Of the resulting 19 candidates who scored highest on the tests and could be considered for the positions, none were Black. After heated public debate and under threat of legal action either way, the city threw out the test results. This action was later determined to be a disparate treatment violation. In 2009, the court ruled that disparate treatment could not be used to avoid disparate impact without sufficient evidence of liability for the latter<span class="citation" data-cites="FireFighters"><a href="#ref-FireFighters" role="doc-biblioref">[5]</a></span>. This landmark case was the first example of conflict between the two doctrines of disparate impact and disparate treatment, or anti-classification and anti-subordination.</p>
<p>Disparate treatment seems to align well with anti-classification principles, seeking to prevent intentional discrimination based on protected characteristics. In the case of disparate impact, things are less clear. Is it a secondary ‘line of defence’ designed to weed out well masked intentional discrimination? Or is its intention to address inequity that exists as a direct result of historical injustice? One can draw parallels here with the ‘business necessity’ versus ‘business justification’ requirements discussed earlier.</p>
</section>
</section>
<section id="future-legislation" class="level3" data-number="1.3.3">
<h3 data-number="1.3.3"><span class="header-section-number">1.3.3</span> Future Legislation</h3>
<p>In May 2018, the European Union (EU) brought into force the General Data Protection Regulation (GDPR), a legal framework around the protection of personal data of EU citizens. The framework comprises binding articles and non-binding recitals. The regulation sets provisions for the processing of data in relation to decision making, described as ‘profiling’ under recital 71<span class="citation" data-cites="GDPR"><a href="#ref-GDPR" role="doc-biblioref">[9]</a></span><span class="marginnote"><span id="ref-GDPR" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[9] </span><span class="csl-right-inline"><span>“<span>General Data Protection Regulation (GDPR): (EU) 2016/679 Recital 71</span>.”</span> 2016.</span>
</span>
</span>. Though currently non-binding, it provides an indication of what’s to come. The recital describes the right not to be subject to decisions based solely on automated processing. It specifically mentions credit applications, e-recruiting and any system which analyses or predicts aspects of a person’s performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements. The recital also sets out requirements around using “appropriate mathematical or statistical procedures” to prevent “discriminatory effects on natural persons on the basis of racial or ethnic origin, political opinion, religion or beliefs, trade union membership, genetic or health status or sexual orientation”. More recently, in 2021, the EU proposed taking a risk-based approach to the question of which technologies should be regulated, dividing them into four categories: unacceptable risk, high risk, limited risk and minimal risk<span class="citation" data-cites="ECPR"><a href="#ref-ECPR" role="doc-biblioref">[10]</a></span><span class="marginnote"><span id="ref-ECPR" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[10] </span><span class="csl-right-inline"><span>“<span class="nocase">Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence</span>.”</span> 2021.</span>
</span>
</span>. While things may change as the proposed law is debated, once agreed it is likely to serve as a prototype for legislation in the U.S. (and other countries around the world), as GDPR did.</p>
<p>In April 2019, the <a href="https://www.congress.gov/bill/116th-congress/house-bill/2231">Algorithmic Accountability Act</a> was proposed to the US Senate. The bill requires specified commercial entities to conduct impact assessments of automated decision systems and specifically states that assessments must include evaluations and risk assessments in relation to “accuracy, fairness, bias, discrimination, privacy, and security”, not just for the model output but also for the training data. The bill has cosponsors in 22 states and has been referred to the Committee on Commerce, Science, and Transportation for review. These examples are clear indications that the issues of fairness and bias in automated decision making systems are on the radar of regulators.</p>
</section>
</section>
<section id="sec_SimpsParadox" class="level2" data-number="1.4">
<h2 data-number="1.4"><span class="header-section-number">1.4</span> A Technical Perspective</h2>
<p>The problem of distinguishing correlation from causation is an important one in identifying bias. Using illustrative examples of Simpson’s paradox, we demonstrate the danger of assuming causal relationships based on observational data.</p>
<section id="simpsons-paradox" class="level3" data-number="1.4.1">
<h3 data-number="1.4.1"><span class="header-section-number">1.4.1</span> Simpson’s Paradox</h3>
<p>In 1973, University of California, Berkeley received approximately 15,000 applications for the fall quarter<span class="citation" data-cites="Berkeley"><a href="#ref-Berkeley" role="doc-biblioref">[11]</a></span><span class="marginnote"><span id="ref-Berkeley" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[11] </span><span class="csl-right-inline">P. J. Bickel, E. A. Hammel, and J. W. O’Connell, <span>“Sex bias in graduate admissions: Data from berkeley,”</span> <em>Science</em>, vol. 187, Issue 4175, pp. 398–404, 1975.</span>
</span>
</span>. At the time it was made up of 101 departments. 12,763 applications reached the decision stage. Of these, 8442 were from male applicants and 4321 from female applicants. The acceptance rates were 44% and 35% respectively (see Table <a href="#tbl:BerkAdm1" data-reference-type="ref" data-reference="tbl:BerkAdm1">1.3</a>).</p>
<div id="tbl:BerkAdm1">
<table>
<caption>Table 1.3: Graduate admissions data from Berkeley (fall 1973).</caption>
<thead>
<tr class="header">
<th style="text-align: left;">Gender</th>
<th style="text-align: right;">Admitted</th>
<th style="text-align: right;">Rejected</th>
<th style="text-align: right;">Total</th>
<th style="text-align: right;">Acceptance Rate</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">Male</td>
<td style="text-align: right;">3738</td>
<td style="text-align: right;">4704</td>
<td style="text-align: right;">8442</td>
<td style="text-align: right;">44.3%</td>
</tr>
<tr class="even">
<td style="text-align: left;">Female</td>
<td style="text-align: right;">1494</td>
<td style="text-align: right;">2827</td>
<td style="text-align: right;">4321</td>
<td style="text-align: right;">34.6%</td>
</tr>
<tr class="odd">
<td style="text-align: left;">Aggregate</td>
<td style="text-align: right;">5232</td>
<td style="text-align: right;">7531</td>
<td style="text-align: right;">12763</td>
<td style="text-align: right;">41.0%</td>
</tr>
</tbody>
</table>
</div>
<p>With a whopping 10 percentage point difference in acceptance rates, it seems a likely case of discrimination against women. Indeed, a <span class="math inline">\(\chi^2\)</span> hypothesis test for independence between the variables (gender and application acceptance) reveals that the probability of observing such a result or worse, assuming they are independent, is <span class="math inline">\(6\times10^{-26}\)</span> - a strong indication that they are not independent and therefore evidence of bias in favour of male applicants. Since admissions are determined by the individual departments, it’s worth trying to understand which departments might be responsible. We focus on the data for the six largest departments, shown in Table <a href="#tbl:BerkAdm2" data-reference-type="ref" data-reference="tbl:BerkAdm2">1.4</a>. Here again we see a similar pattern. There appears to be bias in favour of male applicants, and a <span class="math inline">\(\chi^2\)</span> test shows that the probability of seeing this result under the assumption of independence is <span class="math inline">\(1\times10^{-21}\)</span>. It looks like we have quickly narrowed down our search.</p>
<div id="tbl:BerkAdm2">
<table>
<caption>Table 1.4: Graduate admissions data from Berkeley (fall 1973) for the six largest departments.</caption>
<thead>
<tr class="header">
<th style="text-align: left;">Gender</th>
<th style="text-align: right;">Admitted</th>
<th style="text-align: right;">Rejected</th>
<th style="text-align: right;">Total</th>
<th style="text-align: right;">Acceptance Rate</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">Male</td>
<td style="text-align: right;">1198</td>
<td style="text-align: right;">1493</td>
<td style="text-align: right;">2691</td>
<td style="text-align: right;">44.5%</td>
</tr>
<tr class="even">
<td style="text-align: left;">Female</td>
<td style="text-align: right;">557</td>
<td style="text-align: right;">1278</td>
<td style="text-align: right;">1835</td>
<td style="text-align: right;">30.4%</td>
</tr>
<tr class="odd">
<td style="text-align: left;">Aggregate</td>
<td style="text-align: right;">1755</td>
<td style="text-align: right;">2771</td>
<td style="text-align: right;">4526</td>
<td style="text-align: right;">38.8%</td>
</tr>
</tbody>
</table>
</div>
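<p>If you would like to verify the aggregate figures yourself, the following sketch (assuming the scipy library is available; the layout of the table is ours) runs a <span class="math inline">\(\chi^2\)</span> test of independence on the counts from Table 1.3:</p>
<pre><code># Chi-squared test of independence on the aggregate counts from Table 1.3.
from scipy.stats import chi2_contingency

#            admitted  rejected
observed = [[3738,     4704],    # male applicants
            [1494,     2827]]    # female applicants

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, p = {p_value:.1e}")
# p comes out on the order of 1e-26: under independence of gender and
# outcome, a table this skewed (or more so) would be extraordinarily unlikely.
</code></pre>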
<p>Figure <a href="#fig:SimpsParAccByDept" data-reference-type="ref" data-reference="fig:SimpsParAccByDept">1.1</a> shows the acceptance rates for each department by gender, in decreasing order of acceptance rates. Performing <span class="math inline">\(\chi^2\)</span> tests for each department reveals the only department where there is strong evidence of bias is A, but the bias is in favour of female applicants. The probability of observing the data for department A, under the assumption of independence, is <span class="math inline">\(5\times10^{-5}\)</span>.</p>
<figure>
<img src="01_Context/figures/Fig_BerkeleyAccByDept.png" id="fig:SimpsParAccByDept" style="width:85.0%" alt="Figure 1.1: Acceptance rate distributions by department for male and female applicants." />
<figcaption aria-hidden="true">Figure 1.1: Acceptance rate distributions by department for male and female applicants.</figcaption>
</figure>
<p>So what’s going on? Figure <a href="#fig:SimpsParAppByDept" data-reference-type="ref" data-reference="fig:SimpsParAppByDept">1.2</a> shows the application distributions for male and female applicants for each of the six departments. From the plots we can see a pattern: female applicants more often apply to departments with lower acceptance rates.</p>
<figure>
<img src="01_Context/figures/Fig_BerkeleyAppByDept.png" id="fig:SimpsParAppByDept" style="width:85.0%" alt="Figure 1.2: Application distributions by department for male and female applicants." />
<figcaption aria-hidden="true">Figure 1.2: Application distributions by department for male and female applicants.</figcaption>
</figure>
<p>In other words a larger proportion of the women are being filtered out overall, simply because they are applying to departments that are harder to get into.</p>
<p>This is a classic example of Simpson’s Paradox (also known as the reversal paradox and the Yule-Simpson effect). We have an observable relationship between two categorical variables (in this case gender and acceptance) which disappears, or reverses, after controlling for one or more other variables (in this case department). Simpson’s Paradox is a special case of so-called association paradoxes (where the variables are categorical, and the relationship changes qualitatively), but the same rules also apply to continuous variables. The <em>marginal</em> (unconditional) measure of association (e.g. correlation) between two variables need not be bounded by the <em>partial</em> (conditional) measures of association (after controlling for one or more variables). Although Edward Hugh Simpson famously wrote about the paradox in 1951, it was not discovered by him. In fact, it was reported by George Udny Yule as early as 1903. The association paradox for continuous variables was demonstrated by Karl Pearson in 1899.</p>
<p>Let’s discuss another quick example. A 1996 follow-up study on the effects of smoking recorded the mortality rate for the participants over a 20-year period. It found higher mortality rates among the non-smokers, 31.4% compared to 23.9%, which, in itself, might imply a considerable protective effect from smoking. Clearly there’s something fishy going on. Disaggregating the data by age group showed that the mortality rates were higher for smokers in all but one of them. Looking at the age distribution of the populations of smokers and non-smokers, it’s apparent that the age distribution of the non-smoking group is more positively skewed, and so they are older on average. This concords with the rationale that non-smokers live longer - hence the difference in age distributions of the participants.</p>
<figure>
<img src="01_Context/figures/Fig_SimpParaReg.png" id="fig:SimpsPara" style="width:98.0%" alt="Figure 1.3: Visualisation of Simpsons Paradox. Wikipedia." />
<figcaption aria-hidden="true">Figure 1.3: Visualisation of Simpsons Paradox. <a href="https://en.wikipedia.org/wiki/Simpson%27s_paradox">Wikipedia</a>.</figcaption>
</figure>
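<p>In the same spirit as Figure 1.3, a small simulation (our own, with arbitrary parameters, assuming numpy is available) shows how a positive association within each group can coexist with a negative marginal association for continuous variables:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(1)
n = 1000
g = rng.integers(0, 2, n)                        # group membership
x = rng.normal(loc=5.0 * g, scale=1.0, size=n)   # group 1 shifted to higher x
y = 2.0 * (x - 5.0 * g) - 4.0 * g + rng.normal(scale=0.5, size=n)

# Within each group y rises with x, but group 1 sits lower overall.
print(np.corrcoef(x, y)[0, 1])                   # negative marginal correlation
print(np.corrcoef(x[g == 0], y[g == 0])[0, 1],
      np.corrcoef(x[g == 1], y[g == 1])[0, 1])   # both strongly positive
</code></pre>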
</section>
<section id="causality" class="level3" data-number="1.4.2">
<h3 data-number="1.4.2"><span class="header-section-number">1.4.2</span> Causality</h3>
<p>In both the above examples, it appears that the salient information is found in the disaggregated data (we’ll come back to this later). In both cases it is the disaggregated data that enables us to understand the <em>true nature</em> of the relationship between the variables of interest. As we shall see in this section, this need not be the case. To show this, we discuss two examples. In each case, the data is identical but the variables are not. The examples are those Simpson gave in his original 1951 paper<span class="citation" data-cites="Simpson"><a href="#ref-Simpson" role="doc-biblioref">[12]</a></span><span class="marginnote"><span id="ref-Simpson" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[12] </span><span class="csl-right-inline">E. Simpson, <span>“The interpretation of interaction in contingency tables,”</span> <em>Journal of the Royal Statistical Society</em>, vol. Series B, 13, pp. 238–241, 1951.</span>
</span>
</span>.</p>
<p>Suppose we have three binary variables, <span class="math inline">\(A\)</span>, <span class="math inline">\(B\)</span> and <span class="math inline">\(C\)</span>, and we are interested in understanding the relationship between <span class="math inline">\(A\)</span> and <span class="math inline">\(B\)</span> given a set of 52 data points. A summary of the data showing the association between variables <span class="math inline">\(A\)</span> and <span class="math inline">\(B\)</span> is shown in Table <a href="#tbl:SimpPara" data-reference-type="ref" data-reference="tbl:SimpPara">1.5</a>, first for all the data points and then stratified (separated) by the value of <span class="math inline">\(C\)</span> (note the first table is the sum of the latter two). The first table indicates that <span class="math inline">\(A\)</span> and <span class="math inline">\(B\)</span> are unconditionally independent (since changing the value of one variable does not change the distribution of the other). The next two tables suggest <span class="math inline">\(A\)</span> and <span class="math inline">\(B\)</span> are conditionally dependent given <span class="math inline">\(C\)</span>.</p>
<div id="tbl:SimpPara">
<table>
<caption>Table 1.5: Data summary showing the association between variables <span class="math inline">\(A\)</span> and <span class="math inline">\(B\)</span>, first for all the data points and then stratified by the value of <span class="math inline">\(C\)</span>.</caption>
<thead>
<tr class="header">
<th colspan="5" style="text-align: center;"></th>
<th colspan="4" style="text-align: center;"><span style="color: SteelBlue">Stained?</span> / <span style="color: FireBrick">Male?</span></th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td colspan="5" style="text-align: center;"></td>
<td colspan="2" style="text-align: center;"><span class="math inline">\(C=1\)</span></td>
<td colspan="2" style="text-align: center;"><span class="math inline">\(C=0\)</span></td>
</tr>
<tr class="even">
<td rowspan="2" style="text-align: center;"><span style="color: SteelBlue">Black?</span>/ <span style="color: FireBrick">Died?</span></td>
<td colspan="2" style="text-align: center;"><span style="color: SteelBlue">Plain?</span>/ <span style="color: FireBrick">Treated?</span></td>
<td rowspan="5" style="text-align: center;"></td>
<td rowspan="2" style="text-align: center;"><span style="color: SteelBlue">Black?</span>/ <span style="color: FireBrick">Died?</span></td>
<td colspan="4" style="text-align: center;"><span style="color: SteelBlue">Plain?</span>/ <span style="color: FireBrick">Treated?</span></td>
</tr>
<tr class="odd">
<td style="text-align: center;"><span class="math inline">\(A=1\)</span></td>
<td style="text-align: center;"><span class="math inline">\(A=0\)</span></td>
<td style="text-align: center;"><span class="math inline">\(A=1\)</span></td>
<td style="text-align: center;"><span class="math inline">\(A=0\)</span></td>
<td style="text-align: center;"><span class="math inline">\(A=1\)</span></td>
<td style="text-align: center;"><span class="math inline">\(A=0\)</span></td>
</tr>
<tr class="even">
<td style="text-align: center;"><span class="math inline">\(B=1\)</span></td>
<td style="text-align: center;">20</td>
<td style="text-align: center;">6</td>
<td style="text-align: center;"><span class="math inline">\(B=1\)</span></td>
<td style="text-align: center;">5</td>
<td style="text-align: center;">3</td>
<td style="text-align: center;">15</td>
<td style="text-align: center;">3</td>
</tr>
<tr class="odd">
<td style="text-align: center;"><span class="math inline">\(B=0\)</span></td>
<td style="text-align: center;">20</td>
<td style="text-align: center;">6</td>
<td style="text-align: center;"><span class="math inline">\(B=0\)</span></td>
<td style="text-align: center;">8</td>
<td style="text-align: center;">4</td>
<td style="text-align: center;">12</td>
<td style="text-align: center;">2</td>
</tr>
<tr class="even">
<td style="text-align: center;"><span class="math inline">\(\mathbb{P}(B|A)\)</span></td>
<td style="text-align: center;">50%</td>
<td style="text-align: center;">50%</td>
<td style="text-align: center;"><span class="math inline">\(\mathbb{P}(B|A,C)\)</span></td>
<td style="text-align: center;">38%</td>
<td style="text-align: center;">43%</td>
<td style="text-align: center;">56%</td>
<td style="text-align: center;">60%</td>
</tr>
</tbody>
</table>
</div>
<div class="tablenotes">
<p><sup>a</sup>Each cell of the table shows the number of examples in the dataset satisfying the conditions given in the corresponding row and column headers.</p>
</div>
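<p>To make these numbers concrete, here is a short sketch (the array layout and variable names are ours) that recomputes the marginal and conditional proportions in Table 1.5 from the raw counts:</p>
<pre><code>import numpy as np

# counts[a, b, c]: number of examples with A=a, B=b, C=c (from Table 1.5)
counts = np.zeros((2, 2, 2))
counts[1, 1, 1], counts[0, 1, 1] = 5, 3     # B=1, stratum C=1
counts[1, 0, 1], counts[0, 0, 1] = 8, 4     # B=0, stratum C=1
counts[1, 1, 0], counts[0, 1, 0] = 15, 3    # B=1, stratum C=0
counts[1, 0, 0], counts[0, 0, 0] = 12, 2    # B=0, stratum C=0

# Marginal association: P(B=1 | A), summing over C
marginal = counts[:, 1, :].sum(axis=1) / counts.sum(axis=(1, 2))
print(marginal)                 # [0.5 0.5] -&gt; A and B look independent

# Conditional association: P(B=1 | A, C) within each stratum of C
conditional = counts[:, 1, :] / counts.sum(axis=1)
print(conditional.round(2))     # rows A=0,1; columns C=0,1 -&gt; 0.60, 0.43, 0.56, 0.38
</code></pre>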
<section id="question" class="level5 unnumbered">
<h5 class="unnumbered">Question:</h5>
<p>Which distribution gives us the most relevant understanding of the association between <span class="math inline">\(A\)</span> and <span class="math inline">\(B\)</span>: the marginal (i.e. unconditional) distribution <span class="math inline">\(\mathbb{P}(A,B)\)</span> or the conditional distribution <span class="math inline">\(\mathbb{P}(A,B|C)\)</span>? To show that causal relationships matter, we consider two different examples.</p>
</section>
<section id="example-a-pack-of-cards-colliding-variable" class="level4 unnumbered">
<h4 class="unnumbered"><span style="color: SteelBlue">Example a) Pack of Cards (Colliding Variable)</span></h4>
<p>Suppose the population is a pack of cards. It so happens that baby Milen has been messing about with the cards and made some dirty in the process. Let’s summarise our variables,</p>
<ul>
<li><p><span class="math inline">\(A\)</span> tells us the character of the card, either plain (<span class="math inline">\(A=1\)</span>) or royal (King, Queen, Jack; <span class="math inline">\(A=0\)</span>).</p></li>
<li><p><span class="math inline">\(B\)</span> tells us the colour of the card, either black (<span class="math inline">\(B=1\)</span>) or red (<span class="math inline">\(B=0\)</span>).</p></li>
<li><p><span class="math inline">\(C\)</span> tells us if the card is dirty (<span class="math inline">\(C=1\)</span>) or clean (<span class="math inline">\(C=0\)</span>).</p></li>
</ul>
<p>In this case, the aggregated data showing <span class="math inline">\(\mathbb{P}(A,B)\)</span> is relevant since the cleanliness of the cards <span class="math inline">\(C\)</span> has no bearing on the association between the character <span class="math inline">\(A\)</span> and colour <span class="math inline">\(B\)</span> of the cards.</p>
</section>
<section id="example-b-treatment-effect-on-mortality-rate-confounding-variable" class="level4 unnumbered">
<h4 class="unnumbered"><span style="color: FireBrick">Example b) Treatment Effect on Mortality Rate (Confounding Variable)</span></h4>
<p>Next, suppose that the data relates to the results of medical trials for a drug on a potentially lethal illness. This time,</p>
<ul>
<li><p><span class="math inline">\(A\)</span> tells us if the subject was treated (<span class="math inline">\(A=1\)</span>) or not (<span class="math inline">\(A=0\)</span>).</p></li>
<li><p><span class="math inline">\(B\)</span> tells us if the subject died (<span class="math inline">\(B=1\)</span>) or recovered (<span class="math inline">\(B=0\)</span>).</p></li>
<li><p><span class="math inline">\(C\)</span> tells us the gender of the subject, either male (<span class="math inline">\(C=1\)</span>) or female (<span class="math inline">\(C=0\)</span>).</p></li>
</ul>
<p>In this case the disaggregated data shows the more relevant association, <span class="math inline">\(\mathbb{P}(A,B|C)\)</span>. From it, we can see that female patients are more likely to die than males overall; 56 and 60% versus 38 and 43%, depending on whether or not they were treated. We also see that treatment with the drug (<span class="math inline">\(A\)</span>) reduces the mortality rate for both male and female participants, an effect that is obscured by aggregating the data over gender <span class="math inline">\(C\)</span>.</p>
</section>
<section id="back-to-causality" class="level4 unnumbered">
<h4 class="unnumbered">Back to Causality</h4>
<p>The key difference between these examples is the causal relationship between the variables rather than the statistical structure of the data. In the first example, with the playing cards, the variable <span class="math inline">\(C\)</span> is a <em>colliding</em> variable; in the second example, looking at patient mortality, it is a <em>confounding</em> variable. Figure <a href="#fig:CollConfProg" data-reference-type="ref" data-reference="fig:CollConfProg">1.4</a> a) and b) show the causal relationships between the variables in the two cases.</p>
<figure class="fullwidth">
<img src="01_Context/figures/Fig_CollConfProg.png" id="fig:CollConfProg" alt="Figure 1.4: Causal diagrams for A, B and C when C is a colliding, confounding and prognostic variable." />
<figcaption aria-hidden="true">Figure 1.4: Causal diagrams for <span class="math inline">\(A\)</span>, <span class="math inline">\(B\)</span> and <span class="math inline">\(C\)</span> when <span class="math inline">\(C\)</span> is a colliding, confounding and prognostic variable.</figcaption>
</figure>
<p>The causal diagram in Figure <a href="#fig:CollConfProg" data-reference-type="ref" data-reference="fig:CollConfProg">1.4</a> a) shows the variables <span class="math inline">\(A\)</span>, <span class="math inline">\(B\)</span> and <span class="math inline">\(C\)</span> for the first example. The arrows exist from both card character and colour to cleanliness because, apparently, baby Milen had a preference for royal cards over plain and for red cards over black. Conditioning on a collider <span class="math inline">\(C\)</span> generates an association (e.g. correlation) between <span class="math inline">\(A\)</span> and <span class="math inline">\(B\)</span>, even if they are unconditionally independent. Conditioning on a common effect in this way is often observed as <em>selection</em> or <em>representation bias</em>. Representation bias can induce correlation between variables, even where there is none. For decision systems, this can lead to feedback loops that increase the extremity of the representation bias in future data. We’ll come back to this in chapter <a href="#ch_EthicalDev" data-reference-type="ref" data-reference="ch_EthicalDev">2</a>, when we talk about common causes of bias.</p>
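<p>As a quick illustration of this effect, the following simulation (our own, with arbitrary parameters, assuming numpy is available) draws two independent binary variables and shows that an association appears once we condition on their common effect:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
n = 100_000
A = rng.integers(0, 2, n)                  # e.g. plain vs royal
B = rng.integers(0, 2, n)                  # e.g. black vs red
# C is a common effect of A and B: its probability rises with both.
C = rng.random(n) &lt; (0.2 + 0.3 * A + 0.3 * B)

print(np.corrcoef(A, B)[0, 1])             # ~0: unconditionally independent
print(np.corrcoef(A[C], B[C])[0, 1])       # ~-0.1: an association appears once
                                           # we condition on (select by) C
</code></pre>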
<p>The causal diagram in Figure <a href="#fig:CollConfProg" data-reference-type="ref" data-reference="fig:CollConfProg">1.4</a> b) shows the variables <span class="math inline">\(A\)</span>, <span class="math inline">\(B\)</span> and <span class="math inline">\(C\)</span> for the second example. The arrows exist from gender to treatment because men were less likely to be treated, and from gender to death because men were also less likely to die. The arrow from <span class="math inline">\(A\)</span> to <span class="math inline">\(B\)</span> represents the effect of treatment on mortality, which is observable only by conditioning on gender. Note that there are two sources of association in opposite directions between variables <span class="math inline">\(A\)</span> and <span class="math inline">\(B\)</span> (treatment and death): a negative association from the direct protective effect of the treatment, and a positive (spurious) association through gender, because men were both less likely to be treated and less likely to die. The two effects cancel each other out when the data is aggregated.</p>
<p>We see through the discussion of these two examples that statistical reasoning is not sufficient to determine which of the distributions (marginal or conditional) is relevant. Note that the above conclusions in relation to colliding and confounding variables do not generalise to complex, time-varying problems.</p>
<p>Before moving on from causality, we return to the example we discussed at the very start of this section. According to our analysis of the Berkeley admissions data, we concluded that the disaggregated data contained the <em>salient</em> information explaining the disparity in acceptance rates for male and female applicants. The problem is, we have only shown application rates to be one of many possible <em>causes</em> of the differing acceptance rates (we cannot see outside of our data). In addition, we have not proven that <em>gender discrimination</em> is not the cause. What we have evidenced is the existence of disparities in both acceptance rates and application rates across sex. One problem is that <em>gender discrimination</em> is not a measurable thing in itself. It’s complicated. It is made up of many components, most of which are not contained in the data: beliefs, personal preferences, behaviours, actions, and more. A valid question we cannot answer is, <em>why do the application rates differ by sex?</em> How do we know that this is not, in itself, a result of gender discrimination? Perhaps some departments are less welcoming of women than others, or perhaps some are just much more welcoming of men than women? So how would we know if gender discrimination is at play here? We need to ask the right questions to collect the right data.</p>
</section>
</section>
<section id="sec_collapsibility" class="level3" data-number="1.4.3">
<h3 data-number="1.4.3"><span class="header-section-number">1.4.3</span> Collapsibility</h3>
<p>We have demonstrated that correlation does not imply causation in the manifestation of Simpson’s Paradox. But there is a second factor that can have an impact: the nature of the measure of association in question.</p>
<section id="example-c-treatment-effect-on-mortality-rate-prognostic-variable" class="level4 unnumbered">
<h4 class="unnumbered"><span style="color: SeaGreen">Example c) Treatment Effect on Mortality Rate (Prognostic Variable)</span></h4>
<p>Suppose that in the study of the efficacy of the treatment (in Example b) above), we remedy the imbalance in treatment assignment so that male and female patients are equally likely to be treated. We remove the causal relationship between variables <span class="math inline">\(A\)</span> and <span class="math inline">\(C\)</span> (treatment and gender). In this case, the variable <span class="math inline">\(C\)</span> becomes <em>prognostic</em> rather than confounding; see Figure <a href="#fig:CollConfProg" data-reference-type="ref" data-reference="fig:CollConfProg">1.4</a> c). The decision as to which distribution (marginal or conditional) is most relevant would then depend only on the target population in question. In the absence of a confounding variable in our study, one might reasonably expect the marginal measure of association to be bounded by the partial measures of association. Such intuition is correct only if the measure of association is <em>collapsible</em> (that is, it can be expressed as a weighted average of the partial measures), not otherwise. Some examples of collapsible measures of association are the risk ratio and the risk difference. The odds ratio, however, is not collapsible. If you don’t know what these are, don’t worry, we’ll return to them in chapter <a href="#ch_GroupFairness" data-reference-type="ref" data-reference="ch_GroupFairness">3</a>.</p>
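<p>The following sketch (with numbers of our own choosing, not taken from the examples above) illustrates the point: with equal-sized strata and no confounding, the risk difference collapses to the weighted average of the stratum-specific values, while the marginal odds ratio differs from the value shared by both strata:</p>
<pre><code>import numpy as np

# P(B=1 | A, C): rows A=0 (untreated), A=1 (treated); columns C=0, C=1
p = np.array([[0.2, 0.5],
              [0.5, 0.8]])
w = np.array([0.5, 0.5])        # P(C): equal-sized strata, independent of A

print(p[1] - p[0])              # risk difference: [0.3 0.3] in both strata
odds = p / (1 - p)
print(odds[1] / odds[0])        # odds ratio: [4. 4.] in both strata

p_marg = p @ w                  # marginal risks: [0.35 0.65]
print(p_marg[1] - p_marg[0])    # 0.30: the risk difference collapses
print((p_marg[1] / (1 - p_marg[1])) / (p_marg[0] / (1 - p_marg[0])))
                                # ~3.45, not 4: the odds ratio does not collapse
</code></pre>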
</section>
</section>
</section>
<section id="sec_harms" class="level2" data-number="1.5">
<h2 data-number="1.5"><span class="header-section-number">1.5</span> What’s the Harm?</h2>
<p>In this section we discuss the recent and broader societal concerns related to machine learning technologies.</p>
<section id="the-illusion-of-objectivity" class="level3" data-number="1.5.1">
<h3 data-number="1.5.1"><span class="header-section-number">1.5.1</span> The Illusion of Objectivity</h3>
<p>One of the most concerning things about the machine learning revolution is the perception that these algorithms are somehow objective (unlike humans), and are therefore a better substitute for human judgement. This viewpoint is not just a belief of laymen but an idea that is also projected from within the machine learning community. There are often financial incentives to exaggerate the efficacy of such systems.</p>
<section id="automation-bias" class="level4 unnumbered">
<h4 class="unnumbered">Automation Bias</h4>
<p>The tendency for people to favour decisions made by automated systems despite contradictory information from non-automated sources, or <em>automation bias</em>, is a growing problem as we integrate more and more machines into our decision making processes, especially in infrastructure: healthcare, transportation, communication, power plants and more.</p>
<p>It is important to be clear that, in general, machine learning systems are not objective. Data is produced by a necessarily subjective set of decisions (how and whom to sample, how to group events or characteristics, which features to collect). Modelling also involves making choices about how to process the data, what class of model to use and, perhaps most importantly, how success is determined. Finally, even if our model is well calibrated to the data, that says nothing about the distribution of errors across the population. The consistency of algorithms in decision making compared to humans (who individually make decisions on a case-by-case basis) is often described as a benefit<span class="sidenote-wrapper"><label for="sn-2" class="margin-toggle sidenote-number"></label><input type="checkbox" id="sn-2" class="margin-toggle"/><span class="sidenote">One must not confuse consistency with objectivity. For algorithms, consistency also means consistently making the same errors.<br />
<br />
</span></span>, but it’s their very consistency that makes them dangerous - capable of discriminating systematically and at scale.</p>
<section id="example-compas" class="level5 unnumbered">
<h5 class="unnumbered">Example: COMPAS</h5>
<p>COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a “case management system for criminal justice practitioners”. The system produces recidivism risk scores. It has been used in New York, California and Florida, but most extensively in Wisconsin since 2012, at a variety of stages in the criminal justice process, from sentencing to parole. The <a href="https://assets.documentcloud.org/documents/2840784/Practitioner-s-Guide-to-COMPAS-Core.pdf">documentation</a> for the software describes it as an “objective statistical risk assessment tool”.</p>
<p>In 2013, Paul Zilly was convicted of stealing a push lawnmower and some tools in Barron County, Wisconsin. The prosecutor recommended a year in county jail and follow-up supervision that could help Zilly with “staying on the right path.” His lawyer agreed to a plea deal. But Judge James Babler, upon seeing Zilly’s COMPAS risk scores, overturned the plea deal that had been agreed by the prosecution and defence, and imposed two years in state prison and three years of supervision. At an appeals hearing later that year, Babler said “Had I not had the COMPAS, I believe it would likely be that I would have given one year, six months”<span class="citation" data-cites="ProPub1"><a href="#ref-ProPub1" role="doc-biblioref">[13]</a></span><span class="marginnote"><span id="ref-ProPub1" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[13] </span><span class="csl-right-inline">J. Angwin, J. Larson, S. Mattu, and L. Kirchner, <span>“Machine bias,”</span> <em>ProPublica</em>, 2016.</span>
</span>
</span>. In other words, the judge believed the risk scoring system to hold more insight than the prosecutor who had personally interacted with the defendant.</p>
</section>
</section>
<section id="the-ethics-of-classification" class="level4 unnumbered">
<h4 class="unnumbered">The Ethics of Classification</h4>
<p>The appeal of classification is clear. It creates a sense of order and understanding. It enables us to formulate problems neatly and solve them. An email is spam or it’s not; an x-ray shows tuberculosis or it doesn’t; a treatment was effective or it wasn’t. It can make finding things more efficient in a library or online. There are lots of useful applications of classification.</p>
<p>We tend to think of taxonomies as objective categorisations, but often they are not. They are snapshots in time, representative of the culture and biases of their creators. The very act of creating a taxonomy can grant recognition to some individuals while erasing others. Classifying people inevitably has the effect of reducing them to labels; labels that can result in people being treated as members of a group rather than as individuals; labels that can linger for much longer than they should (something it’s easy to forget when creating them). The Dewey Decimal System, for example, was developed in the late 1800s and widely adopted in the 1930s to classify books. Until 2015, it categorised homosexuality as a mental derangement.</p>
</section>
<section id="classification-of-people" class="level4 unnumbered">
<h4 class="unnumbered">Classification of People</h4>
<p>From the 1930s until the Second World War, machine classification systems were used by Nazi Germany to process census data in order to identify and locate Jews, determine what property and businesses they owned, find anything of value that could be seized and, finally, to send them to their deaths in concentration camps. Classification systems have often been entangled with political and social struggle across the world. In Apartheid South Africa, they were used extensively to enforce social and racial hierarchies that determined everything from where people could live and work to whom they could marry. In 2019 it was estimated that some half a million Uyghurs (and other minority Muslims) were being held in internment camps in China without charge, for the stated purposes of countering extremism and promoting social integration.</p>
<p>Recent papers on detecting criminality<span class="citation" data-cites="CriminalFace"><a href="#ref-CriminalFace" role="doc-biblioref">[14]</a></span><span class="marginnote"><span id="ref-CriminalFace" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[14] </span><span class="csl-right-inline">X. Wu and X. Zhang, <span>“Automated inference on criminality using face images.”</span> 2017.Available: <a href="https://arxiv.org/abs/1611.04135">https://arxiv.org/abs/1611.04135</a></span>
</span>
</span> and sexuality<span class="citation" data-cites="SexualityFace"><a href="#ref-SexualityFace" role="doc-biblioref">[15]</a></span><span class="marginnote"><span id="ref-SexualityFace" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[15] </span><span class="csl-right-inline">Y. Wang and M. Kosinski, <span>“Deep neural networks are more accurate than humans at detecting sexual orientation from facial images,”</span> <em>Journal of Personality and Social Psychology</em>, 2018.</span>
</span>
</span> and ethnicity<span class="citation" data-cites="EthnicityFace"><a href="#ref-EthnicityFace" role="doc-biblioref">[16]</a></span><span class="marginnote"><span id="ref-EthnicityFace" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[16] </span><span class="csl-right-inline">C. Wang, Q. Zhang, W. Liu, Y. Liu, and L. Miao, <span>“Facial feature discovery for ethnicity recognition,”</span> <em>Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery</em>, 2018.</span>
</span>
</span> from facial images have sparked controversy in the academic community. The latter in particular looks for facial features that identify, among others, Chinese Uyghurs. Physiognomy (judging character from the physical features of a person’s face) and phrenology (judging a person’s level of intelligence from the shape and dimensions of their cranium) have historically been used as pseudo-scientific tools of oppressors, to prove the inferiority of races and to justify subordination and genocide. It is not without merit, then, to ask whether some technologies should be built at all. Machine gaydar might, for some, be a fun application to mess about with among friends, but in the 70 countries where homosexuality is still illegal (some of which enforce the death penalty) it is something rather different.</p>
</section>
</section>
<section id="personalisation-and-the-filter-bubble" class="level3" data-number="1.5.2">
<h3 data-number="1.5.2"><span class="header-section-number">1.5.2</span> Personalisation and the Filter Bubble</h3>
<p>Many believed the internet would breathe new life into democracy. The decreased cost and increased accessibility of information would result in greater decentralisation of power and flatter social structures. In this new era, people would be able to connect, share ideas and organise grassroots movements at a scale that would enable a step change in the rate of social progress. Some of these ideas have been realised to an extent, but the increased ability to create and distribute content, and the corresponding volume of data, have created new problems. The amount of information available to us through the internet is overwhelming. Email, blog posts, Twitter, Facebook, Instagram, LinkedIn, WhatsApp, YouTube, Netflix, TikTok and more. Today there are seemingly endless ways and places for us to communicate and share information. This barrage of information has resulted in what has been described as the attention crash. There is simply too much information for us to attend to all of it meaningfully. The mechanisms through which new information can demand our attention have also expanded. We carry our smartphones everywhere we go and sleep beside them. There is hardly a waking moment when we are unplugged and inaccessible. The demands on our attention and focus have never been greater. Media producers themselves have adapted their content to accommodate our newly shortened attention spans.</p>
<p>With so much information available it’s easy to see the appeal of automatic filtering and curation. And of course, how good would such a system really be if it didn’t take into account our personal tastes and preferences? So what’s the problem? Over the last decade, personalisation has become entrenched in the systems we interact with day to day. Targeted advertising was just the beginning. Now it’s not just the trainers you browsed once that follow you around the web until you buy them, it’s everything. Since 2009, Google has returned personalised results every time someone queries its search engine, so two people who enter the same text won’t necessarily see the same results. In 2021 YouTube had more than two billion logged-in monthly users. Three quarters of adults in the US use it (more than Facebook and Instagram) and 80% of US parents of children under 11 say their children watch it. It is the second most visited site in the world after Google, with visitors viewing on average just under nine pages and spending 42 minutes per day there. In 2018, 70% of the videos people watched on YouTube were recommended by its algorithm. Some 40% of Americans under thirty get their news through social networking sites such as Twitter and Facebook, and the curation may be happening without them even knowing. Since 2010, it’s not the Washington Post that decides which news story you see in the prime real estate that is the top right-hand corner of its home page, it’s Facebook - the same goes for the New York Times. So the kinds of algorithms that once determined what we spent our money on now determine our very perception of the world around us. The only question is, what are they optimising for?</p>
<p>Set aside, for a moment, the fact that concentrating the power to shape people’s perception of the world in just a few hands is in itself a problem. A question worth pondering is what kind of citizens people would make if they only ever saw things they ‘like’, or feel the impulse to ‘comment’ on (or indeed any other proxy for interest, engagement or attention). As Eli Pariser put it in his book The Filter Bubble, “what one seems to like may not be what one actually wants, let alone what one needs to know to be an informed member of their community or country”. The internet has made the world smaller and with it we’ve seen great benefits. But the idea that, because anyone (regardless of their background) could be our neighbour, people would find common ground has not been realised to the extent people hoped. In some senses personalisation does the exact opposite. It risks us all living in a world full of mirrors, where we only ever hear the voices of people who see the world as we do, deprived of differing perspectives. Of course we have always lived in our own filter bubbles in some respects, but what has changed is that now we don’t make the choice and often don’t even know when we are in one. We don’t know when or how decisions are made about what we should see. We are more alone in our bubbles than we have ever been before.</p>
<p>Social capital is created by the interpersonal bonds we build through shared identity, values, trust and reciprocity. It encourages people to collaborate in order to solve common problems for the common good. There are two kinds of social capital: bonding and bridging. Bonding capital is acquired through the development of connections within groups that have high levels of similarity in demographics and attitudes - the kind you might build by, say, socialising with colleagues from work. Bridging capital is created when people from different backgrounds (race, religion, class) connect - something that might happen at a town hall meeting, say. The problem with personalisation is that, by construction, it reduces opportunities to see the world through the eyes of people who don’t necessarily look like us. It reduces bridging capital, and that is exactly the kind of social capital we need to solve wider problems that extend beyond our own narrow or short-term self-interest.</p>
</section>
<section id="disinformation" class="level3" data-number="1.5.3">
<h3 data-number="1.5.3"><span class="header-section-number">1.5.3</span> Disinformation</h3>
<p>In June 2016, it was announced that Britain would be leaving the EU. 33.5 million people voted in the referendum, of which 51.9% voted to leave. The decision, which will impact the UK not just for a parliamentary term but for generations to come, rested on a margin of less than 2% of voters. Ebbw Vale is a small town in Wales where 62% of the electorate (the largest majority in the country) voted to leave. The town has a history in steel and coal dating back to the late 1700s. By the 1930s the Ebbw Vale Steelworks was the largest in Europe by volume. In the 1960s it employed some 14,500 people. But, towards the end of the 1900s, after the collapse of the UK steel industry, the town suffered one of the highest unemployment rates in Britain. What was strange about the overwhelming support to leave was that Ebbw Vale was perhaps one of the largest recipients of EU development funding in the UK. A £350m regeneration project funded by the EU replaced the industrial wasteland left behind when the steelworks closed in 2002 with The Works (a housing, retail and office space, wetlands, a learning campus and more). A further £33.5 million in funding from the European Social Fund paid for a new college and apprenticeships, to help young people learn a trade. An additional £30 million for a new railway line, £80 million for road improvements and, shortly before the vote, a further £12.2 million for other upgrades and improvements all came from the EU.</p>
<p>When journalist Carole Cadwalladr returned to the small town where she had grown up to report on why residents had voted so overwhelmingly in favour of leaving the EU, she was confused by what she found. It was clear how much the town had benefited from being part of the EU. The new road, the train station, the college, the leisure centre and the enterprise zones (flagged as an EU Tier 1 area, eligible for the highest level of grant aid in the UK): everywhere she went she saw signs with proudly displayed EU flags saying so. So she wandered around town asking people, and was no less perplexed by their answers. Time and time again people complained about immigration and foreigners. They wanted to take back control. But the immigrants were nowhere to be found, because Ebbw Vale had one of the lowest rates of immigration in the country. So how did this happen? How did a town with hundreds of millions of pounds of EU funding vote to leave the EU because of immigrants that didn’t exist? In her emotive TED talk<span class="citation" data-cites="CarCadTED"><a href="#ref-CarCadTED" role="doc-biblioref">[17]</a></span><span class="marginnote"><span id="ref-CarCadTED" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[17] </span><span class="csl-right-inline">C. Cadwalladr, <em>Facebook’s role in <span>Brexit</span> - and the threat to democracy</em>. TED, 2019.</span>
</span>
</span>, Carole shows images of some of the adverts that people were targeted with on Facebook as part of the leave campaign (see Figure <a href="#fig:Brexit" data-reference-type="ref" data-reference="fig:Brexit">1.5</a>). They were all centred on a lie - that Turkey was joining the EU.</p>
<figure>
<img src="01_Context/figures/Fig_Brexit.png" id="fig:Brexit" alt="Figure 1.5: Targeted disinformation adverts shown on Facebook[17]." />
<figcaption aria-hidden="true">Figure 1.5: Targeted disinformation adverts shown on Facebook<sup><span class="citation" data-cites="CarCadTED"><a href="#ref-CarCadTED" role="doc-biblioref">[17]</a></span></sup>.</figcaption>
</figure>
<p>Most people in the UK saw adverts on buses and billboards with false claims, for example that the National Health Service (NHS) would have an extra £350 million a week if we left the EU. Although many believed them, those adverts circulated in the open for everyone to see, giving the mainstream media the opportunity to debunk them. The same cannot be said for the adverts in Figure <a href="#fig:Brexit" data-reference-type="ref" data-reference="fig:Brexit">1.5</a>. They were targeted towards specific individuals, as part of an evolving stream of information displayed in their Facebook ‘news’ feed. The leave campaign paid Cambridge Analytica (a company that had illegally gained access to the data of 87 million Facebook users) to identify individuals who could be manipulated into voting leave. In the UK, spending on elections is limited by law as a means of ensuring fair elections. After a nine-month investigation, the UK’s Electoral Commission confirmed these spending limits had been breached by the leave campaign. There are ongoing criminal investigations into where the funds for the campaign originated (overseas funding of election campaigns is also illegal), but evidence suggests ties with Russia. Brexit was the precursor to the Trump campaign winning the US election just a few months later that year. The same people and companies used the same strategies. It’s become clear that current legislation protecting democracy is inadequate. Facebook was able to profit from politically motivated money without recognizing any responsibility for ensuring the transactions were legal. Five years later, the full extent of the disinformation campaign on Facebook has yet to be understood: who was shown what and when, how people were targeted, what other lies were told, who paid for the adverts or where the money came from.</p>
<p>Since then, deep learning technology has advanced to the point of being able to pose as human in important ways, risking disinformation delivered not just through targeted advertising but by machines impersonating humans. GANs can fabricate facial images, video (deepfakes) and audio. Advances in language models (OpenAI’s GPT-2 and, more recently, GPT-3) are capable of creating lengthy human-like prose given just a few prompts. Deep learning now provides all the tools needed to fabricate human identities and target the dissemination of false information at scale. There are growing concerns that in the future, bots will drown out actual human voices. As for the current state of play, it’s difficult to know the exact numbers, but in 2017 researchers estimated that between 9% and 15% of all Twitter accounts were bots<span class="citation" data-cites="FakeTwitter"><a href="#ref-FakeTwitter" role="doc-biblioref">[18]</a></span><span class="marginnote"><span id="ref-FakeTwitter" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[18] </span><span class="csl-right-inline">O. Varol, E. Ferrara, C. A. Davis, F. Menczer, and A. Flammini, <span>“Online human-bot interactions: Detection, estimation, and characterization.”</span> 2017.Available: <a href="https://arxiv.org/abs/1703.03107">https://arxiv.org/abs/1703.03107</a></span>
</span>
</span>. In 2020 a study by researchers at Carnegie Mellon University reported that 45% of the 200 million tweets they analysed discussing coronavirus came from accounts that behaved like bots<span class="citation" data-cites="FakeCovid"><a href="#ref-FakeCovid" role="doc-biblioref">[19]</a></span><span class="marginnote"><span id="ref-FakeCovid" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[19] </span><span class="csl-right-inline">B. Allyn, <span>“Researchers: Nearly half of accounts tweeting about coronavirus are likely bots,”</span> <em>NPR</em>, May 2020.</span>
</span>
</span>. For Facebook, things are less clear as we must rely on their own reporting. In mid-2019, Facebook estimated that only 5% of its 2.4 billion monthly active users were fake, though its reporting raised some questions<span class="citation" data-cites="FakeFB"><a href="#ref-FakeFB" role="doc-biblioref">[20]</a></span><span class="marginnote"><span id="ref-FakeFB" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[20] </span><span class="csl-right-inline">J. Nicas, <span>“Does facebook really know how many fake accounts it has?”</span> <em>The New York Times</em>, 2019.</span>
</span>
</span>.</p>
</section>
<section id="harms-of-representation" class="level3" data-number="1.5.4">
<h3 data-number="1.5.4"><span class="header-section-number">1.5.4</span> Harms of Representation</h3>
<p>The interventions we’ll talk about in most of this book are designed to measure and mitigate harms of allocation in machine learning systems.</p>
<section id="harms-of-allocation" class="level4 unnumbered">
<h4 class="unnumbered">Harms of Allocation</h4>
<p>An allocative harm happens when a system allocates or withholds an opportunity or resource. Systems that approve or deny credit allocate financial resources; systems that decide who should and should not see adverts for high-paying jobs allocate employment opportunities; and systems that determine who will make a good tenant allocate housing resources. Harms of allocation happen as a result of discrete decisions at a given point in time, the immediate impact of which can be quantified. This makes it possible to challenge the justice and fairness of specific determinations and outcomes.</p>
<p>Increasingly, however, machine learning systems affect us not just through allocation; they shape our view of the world and of society at large by deciding what we do and don’t see. These harms are far more difficult to quantify.</p>
</section>
<section id="harms-of-representation-1" class="level4 unnumbered">
<h4 class="unnumbered">Harms of Representation</h4>
<p>Harms of representation occur when systems enforce the subordination of groups through characterizations that affect the perception of them. In contrast to harms of allocation, harms of representation have long-term effects on attitudes and beliefs. They create identities and labels for humans, societies and their cultures. Harms of representation don’t just affect our perception of each other, they affect how we see ourselves. They are difficult to formalise and in turn difficult to quantify but the effect is real.</p>
<div class="lookbox">
<p><strong>The Surgeon’s Dilemma</strong></p>
<p>A father and his son are involved in a horrific car crash and the father dies at the scene. But when the child arrives at the hospital and is rushed into the operating theatre, the surgeon pulls away and says: “I can’t operate on this boy, he’s my son”. How can this be?</p>
</div>
<p>Did you figure it out? How long did it take? There is, of course, no reason why the surgeon couldn’t be the boy’s mother. If it took you a while to figure out, or you came to a different conclusion, you’re not alone. More than half of the people presented with this riddle struggle with it, and that includes women. The point of this riddle is to demonstrate the existence of unconscious bias. Representational harms are insidious. They silently fix ideas in people’s subconscious about what people of a particular gender, nationality, faith, race, occupation and more, are like. They draw boundaries between people and affect our perception of the world. Below we describe five different harms of representation:</p>
</section>
<section id="stereotyping" class="level4 unnumbered">
<h4 class="unnumbered">Stereotyping</h4>
<p>Stereotyping occurs through excessively generalised portrayals of groups. In 2016, the Oxford English Dictionary was publicly criticised<span class="citation" data-cites="SexistOED"><a href="#ref-SexistOED" role="doc-biblioref">[21]</a></span><span class="marginnote"><span id="ref-SexistOED" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[21] </span><span class="csl-right-inline">E. O’Toole, <span>“A dictionary entry citing <span>‘rabid feminist’</span> doesn’t just reflect prejudice, it reinforces it,”</span> <em>The Guardian</em>, 2016.</span>
</span>
</span> for employing the phrase “rabid feminist” as a usage example for the word rabid. The dictionary included similarly sexist common usages for other words like shrill, nagging and bossy. But even before this, historical linguists observed that words referring to women undergo pejoration (when the meaning of a word deteriorates over time) far more often than those referring to men<span class="citation" data-cites="Pejoration"><a href="#ref-Pejoration" role="doc-biblioref">[22]</a></span><span class="marginnote"><span id="ref-Pejoration" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[22] </span><span class="csl-right-inline">D. Shariatmadari, <span>“Eight words that reveal the sexism at the heart of the english language,”</span> <em>The Guardian</em>, 2016.</span>
</span>
</span>. Consider words like mistress (once simply the female equivalent of master, now used to describe a woman in an illicit relationship with a married man); madam (once simply the female equivalent of sir, now also used to describe a woman who runs a brothel); hussy (once a neutral term for the head of a household, now used to describe an immoral or ill-behaved woman); and governess (female equivalent of governor, later used to describe a woman responsible for the care of children).</p>
<p>Unsurprisingly then, gender stereotyping is known to be a problem in natural language processing systems. In 2016 Bolukbasi et al. showed that word embeddings exhibited familiar gender biases in relation to occupations<span class="citation" data-cites="WomanHomemaker"><a href="#ref-WomanHomemaker" role="doc-biblioref">[23]</a></span><span class="marginnote"><span id="ref-WomanHomemaker" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[23] </span><span class="csl-right-inline">T. Bolukbasi, K.-W. Chang, J. Zou, V. Saligrama, and A. Kalai, <span>“Man is to computer programmer as woman is to homemaker? Debiasing word embeddings.”</span> 2016.Available: <a href="https://arxiv.org/abs/1607.06520">https://arxiv.org/abs/1607.06520</a></span>
</span>
</span>. By performing arithmetic on word vectors, they were able to uncover relationships such as <span class="math display">\[\overrightarrow{\textrm{man}} - \overrightarrow{\textrm{woman}} \approx \overrightarrow{\textrm{computer programmer}} - \overrightarrow{\textrm{homemaker}}.\]</span></p>
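<p>This kind of analogy arithmetic is straightforward to reproduce. The sketch below is illustrative only: it assumes a local copy of pretrained word2vec-style vectors (for example the Google News embeddings used by Bolukbasi et al.) loaded with the gensim library; the file name is a placeholder and the exact nearest neighbours will depend on the embedding file used.</p>
<pre><code># Minimal sketch of the word-vector analogy test described above.
# Assumes a pretrained word2vec-format file is available locally; the
# path below is a placeholder, not a file shipped with this book.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# "man is to computer programmer as woman is to x":
# x is the vocabulary word closest to computer_programmer - man + woman.
result = vectors.most_similar(
    positive=["computer_programmer", "woman"], negative=["man"], topn=5
)
for word, similarity in result:
    print(f"{word}: {similarity:.3f}")</code></pre>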
<p>In 2017 Caliskan et al. found that Google Translate contained similar gender biases.<span class="citation" data-cites="BiasSemantics"><a href="#ref-BiasSemantics" role="doc-biblioref">[24]</a></span><span class="marginnote"><span id="ref-BiasSemantics" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[24] </span><span class="csl-right-inline">A. Caliskan, J. J. Bryson, and A. Narayanan, <span>“Semantics derived automatically from language corpora contain human-like biases,”</span> <em>Science</em>, vol. 356, pp. 183–186, 2017.</span>
</span>
</span> In their research they found that “translations to English from many gender-neutral languages such as Finnish, Estonian, Hungarian, Persian, and Turkish led to gender-stereotyped sentences”. So, for example, when they translated Turkish sentences with genderless pronouns: “O bir doktor. O bir hemşire.”, the resulting English sentences were: “He is a doctor. She is a nurse.” They performed these types of tests for 50 occupations and found that the stereotypical gender association of the word almost perfectly predicted the resulting pronoun in the English translation.</p>
</section>
<section id="recognition" class="level4 unnumbered">
<h4 class="unnumbered">Recognition</h4>
<p>Harms of recognition happen when groups of people are in some senses erased by a system through a failure to recognise them. In her <a href="https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms/transcript?language=en">TED Talk</a>, Joy Buolamwini talks about how, as an undergraduate studying computer science, she worked on social robots. One of her projects involved creating a robot which could play peek-a-boo, but she found that her robot (which used third-party software for facial recognition) could not see her. She was forced to borrow her roommate’s face to complete the project. After her work auditing several popular gender classification packages from IBM, Microsoft and Face++ in the project <a href="http://gendershades.org/overview.html">Gender Shades</a><span class="citation" data-cites="GenderShades"><a href="#ref-GenderShades" role="doc-biblioref">[25]</a></span><span class="marginnote"><span id="ref-GenderShades" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[25] </span><span class="csl-right-inline">J. Buolamwini and T. Gerbru, <em>Gender shades: Intersectional accuracy disparities in commercial gender classification</em>, vol. 81. Proceedings of Machine Learning Research, 2018, pp. 1–15.</span>
</span>
</span> in 2017 and seeing the failure of these technologies on the faces of some of the most recognizable Black women of her time, including Oprah Winfrey, Michelle Obama, and Serena Williams, she was prompted to echo the words of Sojourner Truth in asking “<a href="https://medium.com/@Joy.Buolamwini/when-ai-fails-on-oprah-serena-williams-and-michelle-obama-its-time-to-face-truth-bf7c2c8a4119">Ain’t I a Woman?</a>”. Harms of recognition are failures in seeing humanity in people.</p>
</section>
<section id="denigration" class="level4 unnumbered">
<h4 class="unnumbered">Denigration</h4>
<p>In 2015, much to the horror of many people, it was reported that <a href="https://www.bbc.com/news/technology-33347866">Google Photos had labelled a photo of a Black couple as gorillas</a>. It’s hard to find the right words to describe just how offensive an error this is. It demonstrated how a machine, carrying out the seemingly benign task of labelling photos, could deliver an attack on a person’s human dignity.</p>
<p>In 2020, an ethical audit of several large computer vision datasets<span class="citation" data-cites="TinyImages"><a href="#ref-TinyImages" role="doc-biblioref">[26]</a></span><span class="marginnote"><span id="ref-TinyImages" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[26] </span><span class="csl-right-inline">V. U. Prabhu and A. Birhane, <span>“Large image datasets: A pyrrhic win for computer vision?”</span> 2020.Available: <a href="https://arxiv.org/abs/2006.16923">https://arxiv.org/abs/2006.16923</a></span>
</span>
</span>, revealed some disturbing results. TinyImages (a dataset of 79 million 32 x 32 pixel colour photos compiled in 2006 by MIT’s Computer Science and Artificial Intelligence Lab for image recognition tasks) contained racist, misogynistic and demeaning labels with corresponding images. Figure <a href="#fig:TinyImages" data-reference-type="ref" data-reference="fig:TinyImages">1.6</a> shows a subset of the data found in TinyImages.</p>
<figure>
<img src="01_Context/figures/Fig_TinyImages.png" id="fig:TinyImages" alt="Figure 1.6: Subset of data in TinyImages exemplifying toxicity in both the images and labels[26]." />
<figcaption aria-hidden="true">Figure 1.6: Subset of data in TinyImages exemplifying toxicity in both the images and labels<span class="citation" data-cites="TinyImages"><a href="#ref-TinyImages" role="doc-biblioref">[26]</a></span>.</figcaption>
</figure>
<p>The problem, unfortunately, does not end here. Many of the datasets used to train and benchmark not just computer vision but also natural language processing tasks are related. TinyImages was compiled by searching the internet for images associated with words in WordNet (a machine-readable lexical database, organised by meaning, developed at Princeton), which is where TinyImages inherited its labels from. ImageNet (widely considered to be a turning point in computer vision capabilities) is also based on WordNet, and CIFAR-10 and CIFAR-100 were derived from TinyImages.</p>
<p>Vision and language datasets are enormous. The time, effort and consideration that goes into collecting the data that forms the foundation of these technologies (compared with that which goes into advancing the models built on them) is questionable to say the least. Furthermore, a dataset can have impact beyond the applications trained on it, because datasets often don’t just die; they evolve. This calls into question the technologies in use today, capable of creating persistent representations of our world, and trained on datasets so large they are difficult and expensive to audit.</p>
<p>And there’s plenty of evidence to suggest that this is a problem. For example, in 2013, a study found that Google searches were more likely to return personalised advertisements that were suggestive of arrest records for Black names<span class="citation" data-cites="LatanyaSweeney"><a href="#ref-LatanyaSweeney" role="doc-biblioref">[27]</a></span><span class="marginnote"><span id="ref-LatanyaSweeney" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[27] </span><span class="csl-right-inline">L. Sweeney, <span>“Discrimination in online ad delivery,”</span> <em>SSRN</em>, 2013.</span>
</span>
</span> than White<span class="sidenote-wrapper"><label for="sn-3" class="margin-toggle sidenote-number"></label><input type="checkbox" id="sn-3" class="margin-toggle"/><span class="sidenote">Suggestive of an arrest record in the sense that they claim to have arrest records specifically for the name that you searched, regardless of whether they do in reality have them.<br />
<br />
</span></span>. This doesn’t just result in allocative harms (for people applying for jobs, for example); it’s denigrating. <a href="https://www.vice.com/en_us/article/j5jmj8/google-artificial-intelligence-bias">Google’s Natural Language API for sentiment analysis is also known to have problems</a>. In 2017, it was assigning negative sentiment to sentences such as “I’m a jew”, “I’m a homosexual” and “I’m black”; neutral sentiment to the phrase “white power”; and positive sentiment to the sentences “I’m christian” and “I’m sikh”.</p>
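<p>Template-based probes of this kind are easy to run against any sentiment model. The sketch below is purely illustrative: it scores templated sentences that differ only in the identity term using an off-the-shelf sentiment model from the Hugging Face transformers library (an arbitrary choice for illustration, not the Google API discussed above), so the scores it prints will depend on whichever default model is downloaded.</p>
<pre><code># Minimal sketch of a template-based probe for identity-term sentiment bias.
# Uses a generic off-the-shelf sentiment model purely for illustration;
# this is not the Google Natural Language API, and results vary by model.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default English model

identity_terms = ["Jewish", "homosexual", "Black", "white", "Christian", "Sikh"]

for term in identity_terms:
    sentence = f"I'm {term}."
    result = sentiment(sentence)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.93}
    print(f"{sentence:20s} -> {result['label']} ({result['score']:.2f})")</code></pre>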
</section>
<section id="under-representation" class="level4 unnumbered">
<h4 class="unnumbered">Under-representation</h4>
<p>In 2015, the New York Times reported that “<a href="https://www.nytimes.com/2015/03/03/upshot/fewer-women-run-big-companies-than-men-named-john.html">Fewer women run big companies than men named John</a>”. Despite this, Google’s image search still managed to under-represent women in search results for the word “CEO”. Does this really matter? What difference would an alternate set of search results make? A study the same year found that “people rate search results higher when they are consistent with stereotypes for a career, and shifting the representation of gender in image search results can shift people’s perceptions about real-world distributions.”<span class="citation" data-cites="OccupationImageSearch"><a href="#ref-OccupationImageSearch" role="doc-biblioref">[28]</a></span><span class="marginnote"><span id="ref-OccupationImageSearch" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[28] </span><span class="csl-right-inline">M. Kay, C. Matuszek, and S. A. Munson, <span>“Unequal representation and gender stereotypes in image search results for occupations,”</span> <em>ACM</em>, 2015.</span>
</span>
</span>.</p>
</section>
<section id="ex-nomination" class="level4 unnumbered">
<h4 class="unnumbered">Ex-nomination</h4>
<p>Ex-nomination occurs through invisible means and affects people’s views of the norms within societies. It tends to happen through mechanisms which amplify the presence of some groups and suppress the presence of others. The cultures, beliefs and politics of ex-nominated groups become the default over time. The most obvious example is the ex-nomination of Whiteness and White culture in western society, which might sound like a bizarre statement - what is White culture? But such is the effect of ex-nomination: you can’t describe it, because it is just the norm and everything else is not. Richard Dyer, in his book White, examines the reproduction and preservation of whiteness in visual media over five centuries, from depictions of the crucifixion to modern-day film. It should perhaps come as no surprise, then, when facial recognition software can’t see Black faces; or when gender recognition software fails more often than not for Black women; or when a generative model that improves the resolution of images converted a pixelated picture of Barack Obama into a high-resolution image of a white man.</p>
<p>The ex-nomination of White culture is evident in our language too, in terminology like whitelist and white lie. If you look up white in a dictionary or thesaurus, you’ll find words like innocent, pure, light, transparent, immaculate and neutral. Doing the same for the word black, on the other hand, returns very different associations: dirty, soiled, evil, wicked, black magic, black arts, black mark, black humour, blacklist; and black is often used as a prefix in describing disastrous events. A similar assessment can be made for gender, with women being under-represented in image data and feminine versions of words more often undergoing pejoration (when the meaning or status of a word deteriorates over time).</p>
<p>Members of ex-nominated groups experience a kind of privilege that is easy to be unaware of. It is a power that comes from being the norm. They have advantages, not earned through financial standing or effort, that the ‘equivalent’ person outside the ex-nominated group would not have. Their hair type, skin tone, accent, food preferences and more are catered to by every store, product, service and system, and it costs them less to access these; they see themselves represented in the media and are more often represented in a positive light; they are not subject to profiling or stereotypes; they are more likely to be treated as individuals rather than as representatives of (or as exceptions to) a group; they are more often humanised - more likely to be given the benefit of the doubt, treated with compassion and kindness and thus to recover from mistakes; they are less likely to be suspected of crimes; more likely to be trusted financially; they have greater access to opportunities, resources and power and are able to climb financial, social and professional ladders faster. The advantages enjoyed by ex-nominated groups accumulate over time and compound over generations.</p>
</section>
</section>
</section>
<section id="summary" class="level2 unnumbered">
<h2 class="unnumbered">Summary</h2>
<section id="bias-in-machine-learning-1" class="level3 unnumbered">
<h3 class="unnumbered">Bias in Machine learning</h3>
<ul>
<li><p>In this book we use algorithm and model interchangeably. A model can be determined using data, but it need not be. It can simply express an opinion on the relationship between variables. In practice the implementation is an algorithm either way. More precisely, a model is a function or mapping; given a set of input variables (features) it returns a decision or prediction for the target variable.</p></li>
<li><p>Obtaining adequately rich and relevant data is a major limitation of machine learning models.</p></li>
<li><p>At almost every important life event - going to university, getting a job, buying a house, getting sick - decisions are increasingly being made by machines. By construction, these models encode existing societal biases. They not only proliferate those biases but are capable of amplifying them, and are easily deployed at scale. Understanding the shortcomings of these models and ensuring such technologies are deployed responsibly are essential if we are to safeguard social progress.</p></li>
</ul>
</section>
<section id="a-philosophical-perspective" class="level3 unnumbered">
<h3 class="unnumbered">A Philosophical Perspective</h3>
<ul>
<li><p>According to utilitarian doctrine, the correct course of action (when faced with a dilemma) is the one that maximises the benefit for the greatest number of people. The doctrine demands that the benefits to all people are counted equally.</p></li>
<li><p>The standard approach to training a model (assuming errors in either direction are equally harmful and accurate predictions are equally beneficial) is loosely justified in a utilitarian sense; we optimise our decision process to maximise benefit for the greatest number of people.</p></li>
<li><p>Utilitarianism is a flavour of consequentialism, a branch of ethical theory that holds that consequences are the yardstick against which we must judge the morality of our actions. In contrast deontological ethics judges the morality of actions against a set of rules that define our duties or obligations towards others. Here it is not the consequences of our actions that matter but rather intent.</p></li>
<li><p>There are some practical problems with utilitarianism, but perhaps its most significant flaw as a basis for moral reasoning is the omission of justice as a consideration.</p></li>
<li><p>Principles of Justice as Fairness:</p>
<ol>
<li><p><strong>Liberty principle:</strong> Each person has the same indefeasible claim to a fully adequate scheme of equal basic liberties, which is compatible with the same scheme of liberties for all;</p></li>
<li><p><strong>Equality principle:</strong> Social and economic inequalities are to satisfy two conditions:</p>
<ol>
<li><p><strong>Fair equality of opportunity:</strong> The offices and positions to which they are attached are open to all under conditions of fair equality of opportunity;</p></li>
<li><p><strong>Difference principle:</strong> They must be of the greatest benefit to the least-advantaged members of society.</p></li>
</ol></li>
</ol>
<p>The principles of justice as fairness are ordered by priority so that fulfilment of the liberty principle takes precedence over the equality principles, and fair equality of opportunity takes precedence over the difference principle. In contrast to utilitarianism, justice as fairness introduces a number of constraints that must be satisfied for a decision process to be fair. Applied to machine learning, one might interpret the liberty principle as a requirement that some minimum accuracy level (maximum probability of error) be set for all members of the population, even if this means the algorithm is less accurate overall. Parallels can be drawn here with machine learning, where there is a trade-off between the fairness and the utility of an algorithm.</p></li>
</ul>
</section>
<section id="a-legal-perspective-1" class="level3 unnumbered">
<h3 class="unnumbered">A Legal Perspective</h3>
<ul>
<li><p>Anti-discrimination laws were born out of long-standing, vast and systemic discrimination against historically oppressed and disadvantaged classes. Such discrimination has contributed to disparities in all measures of prosperity (health, wealth, housing, crime, incarceration) that persist today.</p></li>
<li><p>Legal liability for discrimination against protected classes may be established through both disparate treatment and disparate impact. Disparate treatment (also described as direct discrimination in Europe) refers to both formal differences in the treatment of individuals based on protected characteristics, and the intent to discriminate. Disparate impact (also described as indirect discrimination in Europe) does not consider intent but is concerned with policies and practices that disproportionately impact protected classes.</p></li>
<li><p>Just as the meaning of fairness is subjective, so too is the interpretation of anti-discrimination laws. Two conflicting interpretations are anti-classification and anti-subordination. Anti-classification is the weaker interpretation: that the law is intended to prevent classification of people based on protected characteristics. Anti-subordination is the stronger interpretation: that anti-discrimination law exists to prevent social hierarchies, class or caste systems based on protected features and that it should actively work to eliminate them where they exist.</p></li>
</ul>
</section>
<section id="a-technical-perspective" class="level3 unnumbered">
<h3 class="unnumbered">A Technical Perspective</h3>
<ul>
<li><p>Identifying bias in data can be tricky. Data can be misleading. An association paradox is a phenomenon where an observable relationship between two variables disappears or reverses after controlling for one or more other variables (a small numerical illustration follows this list).</p></li>
<li><p>In order to know which associations (or distributions) are relevant, i.e. the marginal (unconditional) or partial associations (conditional distributions), one must understand the causal nature of the relationships.</p></li>
<li><p>Association paradoxes can also occur for non-collapsible measures of association. Collapsible measures of association are those which can be expressed as the weighted average of the partial measures.</p></li>
</ul>
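<p>The following sketch gives a small numerical illustration of such a paradox using made-up admissions-style figures (the numbers are invented purely to exhibit the reversal): overall, group B appears to have the lower acceptance rate, yet within each department its rate is higher.</p>
<pre><code># Illustration of an association paradox (Simpson's paradox) with made-up data.
import pandas as pd

df = pd.DataFrame({
    "group":      ["A", "A", "B", "B"],
    "department": ["X", "Y", "X", "Y"],
    "applicants": [800, 200, 200, 800],
    "accepted":   [480,  40, 130, 200],
})

# Marginal (unconditional) association: acceptance rate by group only.
marginal = df.groupby("group")[["applicants", "accepted"]].sum()
marginal["rate"] = marginal["accepted"] / marginal["applicants"]
print(marginal["rate"])   # A: 0.52, B: 0.33  -> A looks favoured overall

# Partial (conditional) association: acceptance rate by group within department.
partial = df.set_index(["department", "group"])
print(partial["accepted"] / partial["applicants"])
# X: A 0.60 vs B 0.65; Y: A 0.20 vs B 0.25  -> B is favoured in every department</code></pre>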
</section>
<section id="whats-the-harm" class="level3 unnumbered">
<h3 class="unnumbered">What’s the harm?</h3>
<ul>
<li><p>It is important to be clear that in general, machine learning systems are not objective. Data is produced by a necessarily subjective set of decisions. The consistency of algorithms in decision making compared to humans (who make decisions on a case by case basis) is often described as a benefit, but it’s their very consistency that makes them dangerous - capable of discriminating systematically and at scale.</p></li>
<li><p>Classification creates a sense of order and understanding. It enables us to find things more easily, formulate problems neatly and solve them. But classifying people inevitably has the effect of reducing people to labels; labels that can result in people being treated as members of a group, rather than as individuals.</p></li>
<li><p>Personalisation algorithms that shape our perception of the world in ways that covertly mirror our beliefs can have the effect of trading bridging capital for bonding capital; the former is important in solving global problems that require collective action, such as global warming.</p></li>
<li><p>Targeted political advertising and technologies that enable machines to impersonate humans are powerful tools that can be used as part of orchestrated campaigns of disinformation that manipulate perceptions at an individual level and yet at scale. They are capable of causing great harm to political and social institutions and pose a threat to security.</p></li>
<li><p>An allocative harm happens when a system allocates or withholds an opportunity or resource. Harms of representation occur when systems enforce the subordination of groups through characterizations that affect the perception of them. In contrast to harms of allocation, harms of representation have long-term effects on attitudes and beliefs. They create identities and labels for humans, societies and their cultures. Harms of representation affect our perception of each other and even of ourselves. They are difficult to quantify. Some types of harms of representation are stereotyping, (failure of) recognition, denigration, under-representation and ex-nomination.</p></li>
</ul>
</section>
</section>
</section>
<section id="ch_EthicalDev" class="level1" data-number="2">
<h1 data-number="2"><span class="header-section-number">2</span> Ethical development</h1>
<div class="chapsumm">
<p><strong>This chapter at a glance</strong></p>
<ul>
<li><p>The machine learning cycle - feedback from models to data</p></li>
<li><p>The machine learning development and deployment life cycle</p></li>
<li><p>A practical approach to ethical development and deployment</p></li>
<li><p>A taxonomy of common causes of bias</p></li>
</ul>
</div>
<p>In this chapter, we transition to a more systematic approach to understanding the problem of fairness in decision-making systems. In later chapters we will look at different measures of fairness and bias mitigation techniques, but before we discuss and analyse these methods, we review some more practical aspects of responsible model development and deployment. None of the bias mitigation techniques that we will talk about in part three of this book will rectify a poorly formulated, discriminatory machine learning problem or remedy negligent deployment of a predictive algorithm. A model in itself is not the source of unfair or illegal discrimination; models are developed and deployed by people as part of a process. In order to address the problem of unfairness we need to look at the whole system, not just the data or the model.</p>
<p>We’ll start by looking at the machine learning cycle and discuss how the way a model is used shapes the feedback effect it has on data. Where models can be harmful, we should expect to have processes in place that aim to avoid common, foreseeable or catastrophic failures. We’ll discuss how to take a proactive rather than reactive approach to managing the risks associated with models, and where in the machine learning model development cycle bias metrics and modelling interventions fit. Finally, we’ll classify the most common causes of bias, identifying the parts of the workflow to which they relate.</p>
<p>Our goal is to present problems and interventions schematically, creating a set of references for building, reviewing, deploying and monitoring machine learning solutions that aim to avoid the common pitfalls that result in unfair models. We take a high enough view that the discussion remains applicable to many machine learning applications. The specifics of the framework can be tailored to a particular use case. Indeed, the goal is for the resources in this chapter to be used as a starting point for data science teams that want to develop their own set of standards. Together we will progress towards thinking critically about the whole machine learning cycle: the development, validation, deployment and monitoring of machine learning systems. By the end of this chapter we will have a clearer picture of what due diligence in model development and deployment might look like from a practical perspective.</p>
<section id="machine-learning-cycle" class="level2" data-number="2.1">
<h2 data-number="2.1"><span class="header-section-number">2.1</span> Machine Learning Cycle</h2>
<figure>
<img src="02_EthicalDevelopment/figures/Fig_MLCycle.png" id="fig:MLCycle" style="width:65.0%" alt="Figure 2.1: The machine learning cycle" />
<figcaption aria-hidden="true">Figure 2.1: The machine learning cycle</figcaption>
</figure>
<p>Machine learning systems can have long-term and compounding effects on the world around us. In this section we analyse this impact in a variety of different examples to break down the mechanisms that determine the nature and magnitude of the effect. In Figure <a href="#fig:MLCycle" data-reference-type="ref" data-reference="fig:MLCycle">2.1</a>, we present the machine learning cycle - a high-level depiction of the interaction between a machine learning solution and the real world. A machine learning system starts with a set of objectives. These can be achieved in a myriad of different ways. The translation of these objectives into a tractable machine learning problem consists of a series of subjective decisions; what data we collect to train a model on, what events we predict, what features we use, how we clean and process the data, how we evaluate the model and the decision policy are all choices. They determine the model we create, the actions we take and, finally, the resulting cycle of feedback on the data.</p>
<p>The most familiar parts of the cycle to most developers of machine learning solutions are on the right-hand side: processing data, model selection, training and cross-validation, and prediction. Each action taken on the basis of our model predictions creates a new world state, which generates new data, which we collect and train our model on, and around it goes again. The actions we take based on our model predictions define how we use the model. The same model used in a different way can result in a very different feedback cycle.</p>
<p>Notice that the world state and data are distinct nodes in the cycle. Most machine learning models rely on the assumption that the training data is accurate, rich and representative of the population, but this is often not the case. Data is a necessarily subjective representation of the world. The sample may be biased, contain an inadequate collection of features, reflect subjective decisions about how to categorise features into groups, suffer from systematic errors or be tainted with prejudiced decisions. We may not even be able to measure the true metric we wish to impact. Data collected for one purpose is often reused for another under the assumption that it represents the ground truth when it does not.</p>
<section id="feedback-from-model-to-data" class="level3" data-number="2.1.1">
<h3 data-number="2.1.1"><span class="header-section-number">2.1.1</span> Feedback from Model to Data</h3>
<p>In cases where the ground truth assignment (target variable choice) systematically disadvantages certain classes, actions taken based on predictions from models trained on the data can reinforce the bias and even amplify it. Similarly, decisions made on the basis of results derived from machine learning algorithms, trained on data that under or over-represents disadvantaged classes, can have feedback effects that further skew the representation of those classes in future data. The cycle of training on biased data (which justifies inaccurate beliefs), taking actions in kind, and further generating data that reinforces those biases can become a kind of self-fulfilling prophecy. The good news is that just as we can create pernicious cycles that exaggerate disparities, we can create virtuous ones that have the effect of reducing them. Let’s take two illustrative examples.</p>
<section id="predictive-policing" class="level4 unnumbered">
<h4 class="unnumbered">Predictive Policing</h4>
<p>In the United States, predictive policing has been implemented by police departments in several states including California, Washington, South Carolina, Alabama, Arizona, Tennessee, New York and Illinois. Such algorithms use data on the time, location and nature of past crimes to determine how and where to patrol, and thus improve the efficiency with which policing resources are allocated. A major flaw with these algorithms pertains to the data used to train them. It is not a record of where crimes occurred, but rather of where there have been previous arrests. A proxy target variable (arrests) is used in place of the desired target variable (crime). Racial disparity in policing in the US is a well-publicised problem. Figure <a href="#fig:drugs" data-reference-type="ref" data-reference="fig:drugs">2.2</a> demonstrates this disparity for the policing of drug-related crimes. In 2015, an analysis by The Hamilton Project found that at the state level, Blacks were 6.5 times as likely as Whites to be incarcerated for drug-related crimes<span class="citation" data-cites="HamProj"><a href="#ref-HamProj" role="doc-biblioref">[29]</a></span><span class="marginnote"><span id="ref-HamProj" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[29] </span><span class="csl-right-inline"><span>“Rates of drug use and sales, by race; rates of drug related criminal justice measures, by race.”</span> The Hamilton Project, 2015.</span>
</span>
</span> despite drug-related crime being more prevalent among Whites. Taking actions based on predictions from an algorithm trained on arrest data is likely to amplify existing disparities between under- and over-policed neighbourhoods, which correlate with race.</p>
<figure>
<img src="02_EthicalDevelopment/figures/Fig_RatesDrugUseSaleRace.png" id="fig:drugs" alt="Figure 2.2: Rates of drug use and sales compared to criminal justice measures by race[29]." />
<figcaption aria-hidden="true">Figure 2.2: Rates of drug use and sales compared to criminal justice measures by race<span class="citation" data-cites="HamProj"><a href="#ref-HamProj" role="doc-biblioref">[29]</a></span>.</figcaption>
</figure>
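<p>To make this feedback mechanism concrete, the toy simulation below uses deliberately artificial assumptions: two neighbourhoods with identical true crime rates, a single patrol that is always sent to whichever neighbourhood has more recorded arrests, and crimes that only enter the arrest data where the patrol happens to be. Even a tiny initial difference in the records is amplified into a large, self-reinforcing disparity.</p>
<pre><code># Toy simulation of the arrest-data feedback loop (illustrative assumptions:
# identical true crime rates in both neighbourhoods, the patrol allocated to
# the neighbourhood with more recorded arrests, and arrests recorded only
# where the patrol is present).
import numpy as np

rng = np.random.default_rng(seed=0)

true_daily_crimes = 5            # same expected number of crimes in A and B
arrests = {"A": 11, "B": 10}     # historical records: A starts marginally ahead

for day in range(365):
    # "Predictive" allocation: patrol wherever the data shows more past arrests.
    patrolled = max(arrests, key=arrests.get)

    # Crimes occur in both neighbourhoods, but only the patrolled one
    # generates new arrest records.
    arrests[patrolled] += rng.poisson(true_daily_crimes)

print(arrests)  # roughly {'A': ~1800, 'B': 10} -- the initial gap is amplified</code></pre>
<p>A proportional allocation rule is less extreme, but the data still never gets the chance to correct the initial imbalance, because future records are only generated where patrols go.</p>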
</section>
<section id="car-insurance" class="level4 unnumbered">
<h4 class="unnumbered">Car insurance</h4>
<p>As a comparative example, let’s consider car insurance. It is well publicised that car insurance companies discriminate against young male drivers (despite age and gender being legally protected characteristics in the countries where these insurance companies operate) since, statistically, they are at higher risk of being involved in accidents. Insurance companies act on risk predictions by determining the price of insurance at an individual level - the higher the risk, the more expensive the cost of insurance. What is the feedback effect of this on the data? Of course young men are disadvantaged by having to pay more, but one can see how this pricing structure acts as an incentive to drive safely. It is in the driver’s interest to avoid having an accident that would result in an increase in their car insurance premiums. For a high-risk driver in particular, an accident could potentially make it prohibitively expensive for them to drive. The feedback effect on the data would be to reduce the disparity in incidents of road traffic accidents among high- and low-risk individuals.</p>
<p>Along with the difference in the direction of the feedback effects in the examples given above, there is another important distinction to be made in terms of the magnitude of the feedback effect. This is related to how much control the institution making decisions based on the predictions has over the data. In the predictive policing example the data is entirely controlled by the police department. They decide where to police and whom to arrest, ultimately determining the places and people that do (and don’t) end up in the data. They produce the training data, in its entirety, as a result of their actions. Consequently, we would expect the feedback effect of acting on predictions based on the data to be strong and capable of dramatically shifting the distribution of data generated over time. Insurance companies, by comparison, have far less influence over the data (consisting of the individuals involved in road traffic accidents). Though they can arguably encourage certain driving behaviours through pricing, they do not ultimately determine who is and who is not involved in a car accident. As such, the feedback effects of risk-related pricing in car insurance are likely to be weaker in comparison.</p>
<div class="lookbox">
<p><strong>Risk related pricing and discrimination</strong></p>
<p>Do you think age- and gender-based discrimination in car insurance is fair? Why?</p>
</div>
</section>
</section>
<section id="model-use" class="level3" data-number="2.1.2">
<h3 data-number="2.1.2"><span class="header-section-number">2.1.2</span> Model Use</h3>
<p>We’ve seen some examples illustrating how the strength and direction of feedback from models to (future) data can vary. In this section we’ll demonstrate how the same model can have a very different feedback cycle depending on how it is used (i.e. the actions that are taken based on its predictions). A crucial part of responsible model development and deployment, then, should be clearly defining and documenting the way in which a model is intended to be used, along with the relevant tests and checks that were performed. In addition, considering (and documenting) potential use cases for which one might be tempted to use the model but for which it is not suitable can prevent misuse. Setting out the specific use case is an important part of enabling effective and focused analysis and testing in order to understand both its strengths and weaknesses.</p>
<p>The idea that the use case for a product, tool or model should be well understood before release; that it should be validated and thoroughly tested for that use case; and further that the potential harms caused (even by unintended uses) should be mitigated is not novel. In fact, many industries have safety standards set by a regulatory body that enshrine these ideas in law. The motor vehicle industry has a rich history of regulation aimed at reducing the risk of death or serious injury from road traffic accidents that continues to evolve today. In the early days, protruding knobs and controls on the dash would impale people in collisions. It was not until the 1960s that seatbelts, collapsing steering columns and head restraints became a requirement. Safety testing and requirements have continued to expand to include rear brake lights, a variety of impact crash tests and ISOFIX child car seat anchors, among others. There are many more such examples across different industries but it is perhaps more instructive to consider an example that involves the use of models.</p>
<p>Let’s look at an example in the banking industry. Derivatives are financial products in the form of a contract that results in payments to the holder contingent on future events. The details, such as payment amounts, dates and the events that lead to them, are outlined in the contract. The simplest kinds of derivatives are called vanilla options; if, at expiry, the underlying asset is above (call option) or below (put option) a specified price, the holder receives the difference. In order to price them one must model the behaviour of the underlying asset over time. As the events which result in payments become more elaborate, so does the modelling required to price them, and the certainty with which they can be priced decreases. In derivatives markets, it is a well understood fact that valuation models are product specific. A model that is suitable for pricing a simple financial instrument will not necessarily be appropriate for pricing a more complex one. For this reason, regulated banks that trade derivatives must validate models specifically for the instruments they will be used to price and document their testing. Furthermore, they must track their product inventory (along with the models being used to price them) in order to ensure that they are not using models to price products for which they are inappropriate. Model suitability is determined via an approval process, where approved models have been validated as part of a model review process and some standard of due diligence has been carried out for the specified use case.</p>
<p>Though machine learning models are not currently regulated in this way, it’s easy to draw parallels when it comes to setting requirements around model suitability. But clear consideration of the use case for a machine learning model is not just about making sure that the model performs well for the intended use case. How a predictive model is used ultimately determines the actions that are taken in kind, and thus the nature of the feedback it has on future data. Just as household appliances come with manuals and warnings against untested, inappropriate or dangerous uses, datasets and models could be required to be properly documented with descriptions, metrics, use-case-specific performance analysis and warnings.</p>
<p>It is worth noting that COMPAS<span class="citation" data-cites="ProPub2"><a href="#ref-ProPub2" role="doc-biblioref">[30]</a></span><span class="marginnote"><span id="ref-ProPub2" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[30] </span><span class="csl-right-inline">J. Larson, S. Mattu, L. Kirchner, and J. Angwin, <span>“How we analyzed the COMPAS recidivism algorithm,”</span> <em>ProPublica</em>, 2016.</span>
</span>
</span> was not developed to be used in sentencing. Tim Brennan (the co-founder of Northpointe and co-creator of its COMPAS risk scoring system) himself stated in a court testimony that they “wanted to stay away from the courts”. Documentation<span class="citation" data-cites="COMPASguide"><a href="#ref-COMPASguide" role="doc-biblioref">[31]</a></span><span class="marginnote"><span id="ref-COMPASguide" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[31] </span><span class="csl-right-inline">Northpointe, <em>Practitioners guide to COMPAS core</em>. 2015.</span>
</span>
</span> for the software (dated 2015, two years after that testimony) describes it as a risk and needs assessment and case management system. It describes the system being used “to inform decisions regarding the placement, supervision and case management of offenders” and probation officers using the recidivism risk scales to “triage their case loads”. There is no mention of its use in sentencing. Is it reasonable to assume that a model developed as a case management tool for probation officers can be used to advise judges with regard to sentencing? Napa County, California, uses a similar risk scoring system in its courts. There, a Superior Court Judge who trains other judges in evidence-based sentencing cautions colleagues about their interpretation of the scores. He outlines a concrete example of where the model falls short: “A guy who has molested a small child every day for a year could still come out as a low risk because he probably has a job. Meanwhile, a drunk guy will look high risk because he’s homeless. These risk factors don’t tell you whether the guy ought to go to prison or not; the risk factors tell you more about what the probation conditions ought to be.”<span class="citation" data-cites="ProPub2"><a href="#ref-ProPub2" role="doc-biblioref">[30]</a></span></p>
<p>ProPublica’s review of COMPAS looked at recidivism risk for more than 10,000 criminal defendants in Broward County, Florida<span class="citation" data-cites="ProPub3"><a href="#ref-ProPub3" role="doc-biblioref">[32]</a></span><span class="marginnote"><span id="ref-ProPub3" class="csl-entry" role="doc-biblioentry">
<span class="csl-left-margin">[32] </span><span class="csl-right-inline">J. Larson, <span>“ProPublica analysis of data from broward county, fla.”</span> ProPublica, 2016.</span>
</span>
</span>. Their analysis found the distributions of risk scores for Black and White defendants to be markedly different, with White defendants more likely to be scored low-risk; see Figure <a href="#fig:COMPAS" data-reference-type="ref" data-reference="fig:COMPAS">2.3</a>.</p>
<figure>
<img src="02_EthicalDevelopment/figures/Fig_Propublica.png" id="fig:COMPAS" style="width:85.0%" alt="Figure 2.3: Comparison of recidivism risk scores for White and Black defendants[32]" />
<figcaption aria-hidden="true">Figure 2.3: Comparison of recidivism risk scores for White and Black defendants<sup><span class="citation" data-cites="ProPub3"><a href="#ref-ProPub3" role="doc-biblioref">[32]</a></span></sup></figcaption>
</figure>
<p>Comparing predicted recidivism rates for over 7,000 of the defendants with the rates that actually occurred over a two-year period, they found the overall accuracy of the algorithm in predicting recidivism to be similar for White and Black defendants (59% and 63% respectively); however, the errors revealed a different pattern. Black defendants were almost twice as likely as White defendants to be labelled higher risk yet not actually re-offend. The errors for White defendants were in the opposite direction: while more likely to be labelled low-risk, they more often went on to commit further crimes. See Table <a href="#tbl:COMPAS" data-reference-type="ref" data-reference="tbl:COMPAS">2.1</a>.</p>
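<p>The error types compared in Table 2.1 are group-wise false positive and false negative rates. A minimal sketch of how such rates could be computed from model labels and observed outcomes is shown below; the column names and toy data are hypothetical and do not reproduce ProPublica’s actual analysis.</p>
<pre><code>import pandas as pd

# Hypothetical toy data: model labels and observed two-year outcomes per defendant.
df = pd.DataFrame({
    "group":      ["White", "White", "White", "Black", "Black", "Black"],
    "high_risk":  [0, 1, 0, 1, 0, 1],   # 1 = labelled higher risk by the model
    "reoffended": [1, 0, 0, 0, 0, 1],   # 1 = re-offended within the follow-up period
})

for group, sub in df.groupby("group"):
    did_not_reoffend = sub[sub.reoffended == 0]
    did_reoffend = sub[sub.reoffended == 1]
    # False positive rate: share of non-re-offenders labelled higher risk.
    fpr = (did_not_reoffend.high_risk == 1).mean()
    # False negative rate: share of re-offenders labelled lower risk.
    fnr = (did_reoffend.high_risk == 0).mean()
    print(group, "FPR:", round(fpr, 2), "FNR:", round(fnr, 2))
</code></pre>
<p>Comparable overall accuracy can coexist with very different false positive and false negative rates across groups, which is exactly the disparity the table summarises.</p>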
<div id="tbl:COMPAS">
<table>
<caption>Table 2.1: COMPAS comparison of risk score errors for White versus Black defendants</caption>
<thead>
<tr class="header">
<th style="text-align: left;">Error type</th>
<th style="text-align: right;">White</th>
<th style="text-align: right;">Black</th>
</tr>
</thead>
<tbody>