============================
LINUX KERNEL MEMORY BARRIERS
============================
By: David Howells <[email protected]>
Paul E. McKenney <[email protected]>
Will Deacon <[email protected]>
Peter Zijlstra <[email protected]>
==========
DISCLAIMER
==========
This document is not a specification; it is intentionally (for the sake of
brevity) and unintentionally (due to being human) incomplete. This document is
meant as a guide to using the various memory barriers provided by Linux, but
in case of any doubt (and there are many) please ask. Some doubts may be
resolved by referring to the formal memory consistency model and related
documentation at tools/memory-model/. Nevertheless, even this memory
model should be viewed as the collective opinion of its maintainers rather
than as an infallible oracle.
To repeat, this document is not a specification of what Linux expects from
hardware.
The purpose of this document is twofold:
(1) to specify the minimum functionality that one can rely on for any
particular barrier, and
(2) to provide a guide as to how to use the barriers that are available.
Note that an architecture can provide more than the minimum requirement
for any particular barrier, but if the architecture provides less than
that, that architecture is incorrect.
Note also that it is possible that a barrier may be a no-op for an
architecture because the way that arch works renders an explicit barrier
unnecessary in that case.
========
CONTENTS
========
(*) Abstract memory access model.
- Device operations.
- Guarantees.
(*) What are memory barriers?
- Varieties of memory barrier.
- What may not be assumed about memory barriers?
- Address-dependency barriers (historical).
- Control dependencies.
- SMP barrier pairing.
- Examples of memory barrier sequences.
- Read memory barriers vs load speculation.
- Multicopy atomicity.
(*) Explicit kernel barriers.
- Compiler barrier.
- CPU memory barriers.
(*) Implicit kernel memory barriers.
- Lock acquisition functions.
- Interrupt disabling functions.
- Sleep and wake-up functions.
- Miscellaneous functions.
(*) Inter-CPU acquiring barrier effects.
- Acquires vs memory accesses.
(*) Where are memory barriers needed?
- Interprocessor interaction.
- Atomic operations.
- Accessing devices.
- Interrupts.
(*) Kernel I/O barrier effects.
(*) Assumed minimum execution ordering model.
(*) The effects of the cpu cache.
- Cache coherency vs DMA.
- Cache coherency vs MMIO.
(*) The things CPUs get up to.
- And then there's the Alpha.
- Virtual Machine Guests.
(*) Example uses.
- Circular buffers.
(*) References.
============================
ABSTRACT MEMORY ACCESS MODEL
============================
Consider the following abstract model of the system:
            :                :
            :                :
            :                :
+-------+   :   +--------+   :   +-------+
|       |   :   |        |   :   |       |
|       |   :   |        |   :   |       |
| CPU 1 |<----->| Memory |<----->| CPU 2 |
|       |   :   |        |   :   |       |
|       |   :   |        |   :   |       |
+-------+   :   +--------+   :   +-------+
    ^       :       ^        :       ^
    |       :       |        :       |
    |       :       |        :       |
    |       :       v        :       |
    |       :   +--------+   :       |
    |       :   |        |   :       |
    |       :   |        |   :       |
    +---------->| Device |<----------+
            :   |        |   :
            :   |        |   :
            :   +--------+   :
            :                :
Each CPU executes a program that generates memory access operations. In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained. Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.
So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and rest of the system (the dotted lines).
For example, consider the following sequence of events:
CPU 1                   CPU 2
===============         ===============
{ A == 1; B == 2 }
A = 3;                  x = B;
B = 4;                  y = A;
The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:
STORE A=3, STORE B=4, y=LOAD A->3, x=LOAD B->4
STORE A=3, STORE B=4, x=LOAD B->4, y=LOAD A->3
STORE A=3, y=LOAD A->3, STORE B=4, x=LOAD B->4
STORE A=3, y=LOAD A->3, x=LOAD B->2, STORE B=4
STORE A=3, x=LOAD B->2, STORE B=4, y=LOAD A->3
STORE A=3, x=LOAD B->2, y=LOAD A->3, STORE B=4
STORE B=4, STORE A=3, y=LOAD A->3, x=LOAD B->4
STORE B=4, ...
...
and can thus result in four different combinations of values:
x == 2, y == 1
x == 2, y == 3
x == 4, y == 1
x == 4, y == 3
Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.
As a further example, consider this sequence of events:
CPU 1                   CPU 2
===============         ===============
{ A == 1, B == 2, C == 3, P == &A, Q == &C }
B = 4;                  Q = P;
P = &B;                 D = *Q;
There is an obvious address dependency here, as the value loaded into D depends
on the address retrieved from P by CPU 2. At the end of the sequence, any of
the following results are possible:
(Q == &A) and (D == 1)
(Q == &B) and (D == 2)
(Q == &B) and (D == 4)
Note that CPU 2 will never try to load C into D because the CPU will load P
into Q before issuing the load of *Q.
DEVICE OPERATIONS
-----------------
Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important. For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D). To read internal register 5, the following code might then
be used:
*A = 5;
x = *D;
but this might show up as either of the following two sequences:
STORE *A = 5, x = LOAD *D
x = LOAD *D, STORE *A = 5
the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.
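In the Linux kernel, such device registers are normally reached through the
MMIO accessor functions rather than plain dereferences. As a minimal sketch
(assuming A and D are ioremap()'d register addresses; the names are
hypothetical, and the exact guarantees of the accessors are described in the
"KERNEL I/O BARRIER EFFECTS" section):

writel(5, A);                   /* select internal register 5 first... */
x = readl(D);                   /* ...then read its contents */

On most architectures, readl() and writel() to the same device are ordered
with respect to each other, preserving the required STORE-then-LOAD sequence.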
GUARANTEES
----------
There are some minimal guarantees that may be expected of a CPU:
(*) On any given CPU, dependent memory accesses will be issued in order, with
respect to itself. This means that for:
Q = READ_ONCE(P); D = READ_ONCE(*Q);
the CPU will issue the following memory operations:
Q = LOAD P, D = LOAD *Q
and always in that order. However, on DEC Alpha, READ_ONCE() also
emits a memory-barrier instruction, so that a DEC Alpha CPU will
instead issue the following memory operations:
Q = LOAD P, MEMORY_BARRIER, D = LOAD *Q, MEMORY_BARRIER
Whether on DEC Alpha or not, the READ_ONCE() also prevents compiler
mischief.
(*) Overlapping loads and stores within a particular CPU will appear to be
ordered within that CPU. This means that for:
a = READ_ONCE(*X); WRITE_ONCE(*X, b);
the CPU will only issue the following sequence of memory operations:
a = LOAD *X, STORE *X = b
And for:
WRITE_ONCE(*X, c); d = READ_ONCE(*X);
the CPU will only issue:
STORE *X = c, d = LOAD *X
(Loads and stores overlap if they are targeted at overlapping pieces of
memory).
And there are a number of things that _must_ or _must_not_ be assumed:
(*) It _must_not_ be assumed that the compiler will do what you want
with memory references that are not protected by READ_ONCE() and
WRITE_ONCE(). Without them, the compiler is within its rights to
do all sorts of "creative" transformations, which are covered in
the COMPILER BARRIER section; one such transformation is sketched
just after this list.
(*) It _must_not_ be assumed that independent loads and stores will be issued
in the order given. This means that for:
X = *A; Y = *B; *D = Z;
we may get any of the following sequences:
X = LOAD *A, Y = LOAD *B, STORE *D = Z
X = LOAD *A, STORE *D = Z, Y = LOAD *B
Y = LOAD *B, X = LOAD *A, STORE *D = Z
Y = LOAD *B, STORE *D = Z, X = LOAD *A
STORE *D = Z, X = LOAD *A, Y = LOAD *B
STORE *D = Z, Y = LOAD *B, X = LOAD *A
(*) It _must_ be assumed that overlapping memory accesses may be merged or
discarded. This means that for:
X = *A; Y = *(A + 4);
we may get any one of the following sequences:
X = LOAD *A; Y = LOAD *(A + 4);
Y = LOAD *(A + 4); X = LOAD *A;
{X, Y} = LOAD {*A, *(A + 4) };
And for:
*A = X; *(A + 4) = Y;
we may get any of:
STORE *A = X; STORE *(A + 4) = Y;
STORE *(A + 4) = Y; STORE *A = X;
STORE {*A, *(A + 4) } = {X, Y};
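To illustrate the first point above, here is a minimal sketch of one such
compiler transformation (the variable 'flag' and the busy-wait loop are
hypothetical):

/* BUG: the compiler may hoist the load out of the loop and read
 * 'flag' only once, so a store by another CPU is never noticed. */
while (!flag)
        cpu_relax();

/* Forcing a fresh load of 'flag' on each iteration: */
while (!READ_ONCE(flag))
        cpu_relax();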
And there are anti-guarantees:
(*) These guarantees do not apply to bitfields, because compilers often
generate code to modify these using non-atomic read-modify-write
sequences. Do not attempt to use bitfields to synchronize parallel
algorithms.
(*) Even in cases where bitfields are protected by locks, all fields
in a given bitfield must be protected by one lock. If two fields
in a given bitfield are protected by different locks, the compiler's
non-atomic read-modify-write sequences can cause an update to one
field to corrupt the value of an adjacent field; see the sketch at
the end of this section.
(*) These guarantees apply only to properly aligned and sized scalar
variables. "Properly sized" currently means variables that are
the same size as "char", "short", "int" and "long". "Properly
aligned" means the natural alignment, thus no constraints for
"char", two-byte alignment for "short", four-byte alignment for
"int", and either four-byte or eight-byte alignment for "long",
on 32-bit and 64-bit systems, respectively. Note that these
guarantees were introduced into the C11 standard, so beware when
using older pre-C11 compilers (for example, gcc 4.6). The portion
of the standard containing this guarantee is Section 3.14, which
defines "memory location" as follows:
memory location
either an object of scalar type, or a maximal sequence
of adjacent bit-fields all having nonzero width
NOTE 1: Two threads of execution can update and access
separate memory locations without interfering with
each other.
NOTE 2: A bit-field and an adjacent non-bit-field member
are in separate memory locations. The same applies
to two bit-fields, if one is declared inside a nested
structure declaration and the other is not, or if the two
are separated by a zero-length bit-field declaration,
or if they are separated by a non-bit-field member
declaration. It is not safe to concurrently update two
bit-fields in the same structure if all members declared
between them are also bit-fields, no matter what the
sizes of those intervening bit-fields happen to be.
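As a sketch of the bitfield hazard described above (all names are
hypothetical):

struct foo {
        spinlock_t lock_a;      /* protects f1 */
        spinlock_t lock_b;      /* protects f2 */
        int f1 : 4;
        int f2 : 4;             /* BUG: shares a memory location with f1 */
};

Because f1 and f2 occupy the same memory location, the compiler may implement
an update to f1 as a non-atomic read-modify-write of the word containing both
fields, so an update to f2 made under lock_b can be overwritten with a stale
value read under lock_a, despite both updates being correctly locked.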
=========================
WHAT ARE MEMORY BARRIERS?
=========================
As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.
Memory barriers are such interventions. They impose a perceived partial
ordering over the memory operations on either side of the barrier.
Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance, including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching. Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.
VARIETIES OF MEMORY BARRIER
---------------------------
Memory barriers come in four basic varieties:
(1) Write (or store) memory barriers.
A write memory barrier gives a guarantee that all the STORE operations
specified before the barrier will appear to happen before all the STORE
operations specified after the barrier with respect to the other
components of the system.
A write barrier is a partial ordering on stores only; it is not required
to have any effect on loads.
A CPU can be viewed as committing a sequence of store operations to the
memory system as time progresses. All stores _before_ a write barrier
will occur _before_ all the stores after the write barrier.
[!] Note that write barriers should normally be paired with read or
address-dependency barriers; see the "SMP barrier pairing" subsection.
(2) Address-dependency barriers (historical).
[!] This section is marked as HISTORICAL: it covers the long-obsolete
smp_read_barrier_depends() macro, the semantics of which are now
implicit in all marked accesses. For more up-to-date information,
including how compiler transformations can sometimes break address
dependencies, see Documentation/RCU/rcu_dereference.rst.
An address-dependency barrier is a weaker form of read barrier. In the
case where two loads are performed such that the second depends on the
result of the first (eg: the first load retrieves the address to which
the second load will be directed), an address-dependency barrier would
be required to make sure that the target of the second load is updated
after the address obtained by the first load is accessed.
An address-dependency barrier is a partial ordering on interdependent
loads only; it is not required to have any effect on stores, independent
loads or overlapping loads.
As mentioned in (1), the other CPUs in the system can be viewed as
committing sequences of stores to the memory system that the CPU being
considered can then perceive. An address-dependency barrier issued by
the CPU under consideration guarantees that for any load preceding it,
if that load touches one of a sequence of stores from another CPU, then
by the time the barrier completes, the effects of all the stores prior to
that touched by the load will be perceptible to any loads issued after
the address-dependency barrier.
See the "Examples of memory barrier sequences" subsection for diagrams
showing the ordering constraints.
[!] Note that the first load really has to have an _address_ dependency and
not a control dependency. If the address for the second load is dependent
on the first load, but the dependency is through a conditional rather than
actually loading the address itself, then it's a _control_ dependency and
a full read barrier or better is required. See the "Control dependencies"
subsection for more information.
[!] Note that address-dependency barriers should normally be paired with
write barriers; see the "SMP barrier pairing" subsection.
[!] Kernel release v5.9 removed kernel APIs for explicit address-
dependency barriers. Nowadays, APIs for marking loads from shared
variables such as READ_ONCE() and rcu_dereference() provide implicit
address-dependency barriers.
(3) Read (or load) memory barriers.
A read barrier is an address-dependency barrier plus a guarantee that all
the LOAD operations specified before the barrier will appear to happen
before all the LOAD operations specified after the barrier with respect to
the other components of the system.
A read barrier is a partial ordering on loads only; it is not required to
have any effect on stores.
Read memory barriers imply address-dependency barriers, and so can
substitute for them.
[!] Note that read barriers should normally be paired with write barriers;
see the "SMP barrier pairing" subsection.
(4) General memory barriers.
A general memory barrier gives a guarantee that all the LOAD and STORE
operations specified before the barrier will appear to happen before all
the LOAD and STORE operations specified after the barrier with respect to
the other components of the system.
A general memory barrier is a partial ordering over both loads and stores.
General memory barriers imply both read and write memory barriers, and so
can substitute for either.
And a couple of implicit varieties:
(5) ACQUIRE operations.
This acts as a one-way permeable barrier. It guarantees that all memory
operations after the ACQUIRE operation will appear to happen after the
ACQUIRE operation with respect to the other components of the system.
ACQUIRE operations include LOCK operations and both smp_load_acquire()
and smp_cond_load_acquire() operations.
Memory operations that occur before an ACQUIRE operation may appear to
happen after it completes.
An ACQUIRE operation should almost always be paired with a RELEASE
operation.
(6) RELEASE operations.
This also acts as a one-way permeable barrier. It guarantees that all
memory operations before the RELEASE operation will appear to happen
before the RELEASE operation with respect to the other components of the
system. RELEASE operations include UNLOCK operations and
smp_store_release() operations.
Memory operations that occur after a RELEASE operation may appear to
happen before it completes.
The use of ACQUIRE and RELEASE operations generally precludes the need
for other sorts of memory barrier. In addition, a RELEASE+ACQUIRE pair is
-not- guaranteed to act as a full memory barrier. However, after an
ACQUIRE on a given variable, all memory accesses preceding any prior
RELEASE on that same variable are guaranteed to be visible. In other
words, within a given variable's critical section, all accesses of all
previous critical sections for that variable are guaranteed to have
completed.
This means that ACQUIRE acts as a minimal "acquire" operation and
RELEASE acts as a minimal "release" operation.
A subset of the atomic operations described in atomic_t.txt have ACQUIRE and
RELEASE variants in addition to fully-ordered and relaxed (no barrier
semantics) definitions. For compound atomics performing both a load and a
store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
only to the store portion of the operation.
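For example, here is a minimal sketch of RELEASE/ACQUIRE pairing used for
message passing (the variables 'data' and 'flag' are hypothetical):

/* CPU 1 */
WRITE_ONCE(data, 42);
smp_store_release(&flag, 1);    /* RELEASE operation */

/* CPU 2 */
if (smp_load_acquire(&flag))    /* ACQUIRE operation */
        BUG_ON(READ_ONCE(data) != 42);

If CPU 2 observes flag == 1, the pairing guarantees that it also observes
CPU 1's prior store to 'data'.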
Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device. If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.
Note that these are the _minimum_ guarantees. Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.
WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------
There are certain things that the Linux kernel memory barriers do not guarantee:
(*) There is no guarantee that any of the memory accesses specified before a
memory barrier will be _complete_ by the completion of a memory barrier
instruction; the barrier can be considered to draw a line in that CPU's
access queue that accesses of the appropriate type may not cross.
(*) There is no guarantee that issuing a memory barrier on one CPU will have
any direct effect on another CPU or any other hardware in the system. The
indirect effect will be the order in which the second CPU sees the effects
of the first CPU's accesses occur, but see the next point:
(*) There is no guarantee that a CPU will see the correct order of effects
from a second CPU's accesses, even _if_ the second CPU uses a memory
barrier, unless the first CPU _also_ uses a matching memory barrier (see
the subsection on "SMP Barrier Pairing").
(*) There is no guarantee that some intervening piece of off-the-CPU
hardware[*] will not reorder the memory accesses. CPU cache coherency
mechanisms should propagate the indirect effects of a memory barrier
between CPUs, but might not do so in order.
[*] For information on bus mastering DMA and coherency please read:
Documentation/driver-api/pci/pci.rst
Documentation/core-api/dma-api-howto.rst
Documentation/core-api/dma-api.rst
ADDRESS-DEPENDENCY BARRIERS (HISTORICAL)
----------------------------------------
[!] This section is marked as HISTORICAL: it covers the long-obsolete
smp_read_barrier_depends() macro, the semantics of which are now implicit
in all marked accesses. For more up-to-date information, including
how compiler transformations can sometimes break address dependencies,
see Documentation/RCU/rcu_dereference.rst.
As of v4.15 of the Linux kernel, an smp_mb() was added to READ_ONCE() for
DEC Alpha, which means that about the only people who need to pay attention
to this section are those working on DEC Alpha architecture-specific code
and those working on READ_ONCE() itself. For those who need it, and for
those who are interested in the history, here is the story of
address-dependency barriers.
[!] While address dependencies are observed in both load-to-load and
load-to-store relations, address-dependency barriers are not necessary
for load-to-store situations.
The requirement of address-dependency barriers is a little subtle, and
it's not always obvious that they're needed. To illustrate, consider the
following sequence of events:
CPU 1                   CPU 2
===============         ===============
{ A == 1, B == 2, C == 3, P == &A, Q == &C }
B = 4;
<write barrier>
WRITE_ONCE(P, &B);
                        Q = READ_ONCE_OLD(P);
                        D = *Q;
[!] READ_ONCE_OLD() corresponds to the pre-4.15 implementation of
READ_ONCE(), which doesn't imply an address-dependency barrier.
There's a clear address dependency here, and it would seem that by the end of
the sequence, Q must be either &A or &B, and that:
(Q == &A) implies (D == 1)
(Q == &B) implies (D == 4)
But! CPU 2's perception of P may be updated _before_ its perception of B, thus
leading to the following situation:
(Q == &B) and (D == 2) ????
While this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).
To deal with this, READ_ONCE() provides an implicit address-dependency barrier
since kernel release v4.15:
CPU 1                   CPU 2
===============         ===============
{ A == 1, B == 2, C == 3, P == &A, Q == &C }
B = 4;
<write barrier>
WRITE_ONCE(P, &B);
                        Q = READ_ONCE(P);
                        <implicit address-dependency barrier>
                        D = *Q;
This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.
[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines. The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line. Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).
An address-dependency barrier is not required to order dependent writes
because the CPUs that the Linux kernel supports don't do writes until they
are certain (1) that the write will actually happen, (2) of the location of
the write, and (3) of the value to be written.
But please carefully read the "CONTROL DEPENDENCIES" section and the
Documentation/RCU/rcu_dereference.rst file: The compiler can and does break
dependencies in a great many highly creative ways.
CPU 1                   CPU 2
===============         ===============
{ A == 1, B == 2, C == 3, P == &A, Q == &C }
B = 4;
<write barrier>
WRITE_ONCE(P, &B);
                        Q = READ_ONCE_OLD(P);
                        WRITE_ONCE(*Q, 5);
Therefore, no address-dependency barrier is required to order the read into
Q with the store into *Q. In other words, this outcome is prohibited even
without the implicit address-dependency barrier of modern READ_ONCE():
(Q == &B) && (B == 4)
Please note that this pattern should be rare. After all, the whole point
of dependency ordering is to -prevent- writes to the data structure, along
with the expensive cache misses associated with those writes. This pattern
can be used to record rare error conditions and the like, and the CPUs'
naturally occurring ordering prevents such records from being lost.
Note well that the ordering provided by an address dependency is local to
the CPU containing it. See the section on "Multicopy atomicity" for
more information.
The address-dependency barrier is very important to the RCU system,
for example. See rcu_assign_pointer() and rcu_dereference() in
include/linux/rcupdate.h. This permits the current target of an RCU'd
pointer to be replaced with a new modified target, without the replacement
target appearing to be incompletely initialised.
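As a minimal sketch of this RCU publication pattern (the structure 'foo',
its field 'a', the global RCU-protected pointer 'gp' and the local variable
'r1' are all hypothetical):

/* Writer */
struct foo *p = kmalloc(sizeof(*p), GFP_KERNEL);

p->a = 1;                       /* initialise before publication */
rcu_assign_pointer(gp, p);      /* publish; provides release semantics */

/* Reader */
rcu_read_lock();
p = rcu_dereference(gp);        /* implicit address-dependency barrier */
if (p)
        r1 = p->a;              /* cannot see pre-initialisation contents */
rcu_read_unlock();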
CONTROL DEPENDENCIES
--------------------
Control dependencies can be a bit tricky because current compilers do
not understand them. The purpose of this section is to help you prevent
the compiler's ignorance from breaking your code.
A load-load control dependency requires a full read memory barrier, not
simply an (implicit) address-dependency barrier to make it work correctly.
Consider the following bit of code:
q = READ_ONCE(a);
<implicit address-dependency barrier>
if (q) {
        /* BUG: No address dependency!!! */
        p = READ_ONCE(b);
}
This will not have the desired effect because there is no actual address
dependency, but rather a control dependency that the CPU may short-circuit
by attempting to predict the outcome in advance, so that other CPUs see
the load from b as having happened before the load from a. In such a case
what's actually required is:
q = READ_ONCE(a);
if (q) {
        <read barrier>
        p = READ_ONCE(b);
}
However, stores are not speculated. This means that ordering -is- provided
for load-store control dependencies, as in the following example:
q = READ_ONCE(a);
if (q) {
        WRITE_ONCE(b, 1);
}
Control dependencies pair normally with other types of barriers.
That said, please note that neither READ_ONCE() nor WRITE_ONCE()
are optional! Without the READ_ONCE(), the compiler might combine the
load from 'a' with other loads from 'a'. Without the WRITE_ONCE(),
the compiler might combine the store to 'b' with other stores to 'b'.
Either can result in highly counterintuitive effects on ordering.
Worse yet, if the compiler is able to prove (say) that the value of
variable 'a' is always non-zero, it would be well within its rights
to optimize the original example by eliminating the "if" statement
as follows:
q = a;
b = 1; /* BUG: Compiler and CPU can both reorder!!! */
So don't leave out the READ_ONCE().
It is tempting to try to enforce ordering on identical stores on both
branches of the "if" statement as follows:
q = READ_ONCE(a);
if (q) {
        barrier();
        WRITE_ONCE(b, 1);
        do_something();
} else {
        barrier();
        WRITE_ONCE(b, 1);
        do_something_else();
}
Unfortunately, current compilers will transform this as follows at high
optimization levels:
q = READ_ONCE(a);
barrier();
WRITE_ONCE(b, 1); /* BUG: No ordering vs. load from a!!! */
if (q) {
        /* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
        do_something();
} else {
        /* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
        do_something_else();
}
Now there is no conditional between the load from 'a' and the store to
'b', which means that the CPU is within its rights to reorder them:
The conditional is absolutely required, and must be present in the
assembly code even after all compiler optimizations have been applied.
Therefore, if you need ordering in this example, you need explicit
memory barriers, for example, smp_store_release():
q = READ_ONCE(a);
if (q) {
        smp_store_release(&b, 1);
        do_something();
} else {
        smp_store_release(&b, 1);
        do_something_else();
}
In contrast, without explicit memory barriers, two-legged-if control
ordering is guaranteed only when the stores differ, for example:
q = READ_ONCE(a);
if (q) {
        WRITE_ONCE(b, 1);
        do_something();
} else {
        WRITE_ONCE(b, 2);
        do_something_else();
}
The initial READ_ONCE() is still required to prevent the compiler from
proving the value of 'a'.
In addition, you need to be careful what you do with the local variable 'q',
otherwise the compiler might be able to guess the value and again remove
the needed conditional. For example:
q = READ_ONCE(a);
if (q % MAX) {
        WRITE_ONCE(b, 1);
        do_something();
} else {
        WRITE_ONCE(b, 2);
        do_something_else();
}
If MAX is defined to be 1, then the compiler knows that (q % MAX) is
equal to zero, in which case the compiler is within its rights to
transform the above code into the following:
q = READ_ONCE(a);
WRITE_ONCE(b, 2);
do_something_else();
Given this transformation, the CPU is not required to respect the ordering
between the load from variable 'a' and the store to variable 'b'. It is
tempting to add a barrier(), but this does not help. The conditional
is gone, and the barrier won't bring it back. Therefore, if you are
relying on this ordering, you should make sure that MAX is greater than
one, perhaps as follows:
q = READ_ONCE(a);
BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
if (q % MAX) {
        WRITE_ONCE(b, 1);
        do_something();
} else {
        WRITE_ONCE(b, 2);
        do_something_else();
}
Please note once again that the stores to 'b' differ. If they were
identical, as noted earlier, the compiler could pull this store outside
of the 'if' statement.
You must also be careful not to rely too much on boolean short-circuit
evaluation. Consider this example:
q = READ_ONCE(a);
if (q || 1 > 0)
        WRITE_ONCE(b, 1);
Because the first condition cannot fault and the second condition is
always true, the compiler can transform this example as follows,
defeating the control dependency:
q = READ_ONCE(a);
WRITE_ONCE(b, 1);
This example underscores the need to ensure that the compiler cannot
out-guess your code. More generally, although READ_ONCE() does force
the compiler to actually emit code for a given load, it does not force
the compiler to use the results.
In addition, control dependencies apply only to the then-clause and
else-clause of the if-statement in question. In particular, they do
not necessarily apply to code following the if-statement:
q = READ_ONCE(a);
if (q) {
        WRITE_ONCE(b, 1);
} else {
        WRITE_ONCE(b, 2);
}
WRITE_ONCE(c, 1); /* BUG: No ordering against the read from 'a'. */
It is tempting to argue that there in fact is ordering because the
compiler cannot reorder volatile accesses and also cannot reorder
the writes to 'b' with the condition. Unfortunately for this line
of reasoning, the compiler might compile the two writes to 'b' as
conditional-move instructions, as in this fanciful pseudo-assembly
language:
ld r1,a
cmp r1,$0
cmov,ne r4,$1
cmov,eq r4,$2
st r4,b
st $1,c
A weakly ordered CPU would have no dependency of any sort between the load
from 'a' and the store to 'c'. The control dependencies would extend
only to the pair of cmov instructions and the store depending on them.
In short, control dependencies apply only to the stores in the then-clause
and else-clause of the if-statement in question (including functions
invoked by those two clauses), not to code following that if-statement.
Note well that the ordering provided by a control dependency is local
to the CPU containing it. See the section on "Multicopy atomicity"
for more information.
In summary:
(*) Control dependencies can order prior loads against later stores.
However, they do -not- guarantee any other sort of ordering:
Not prior loads against later loads, nor prior stores against
later anything. If you need these other forms of ordering,
use smp_rmb(), smp_wmb(), or, in the case of prior stores and
later loads, smp_mb().
(*) If both legs of the "if" statement begin with identical stores to
the same variable, then those stores must be ordered, either by
preceding both of them with smp_mb() or by using smp_store_release()
to carry out the stores. Please note that it is -not- sufficient
to use barrier() at beginning of each leg of the "if" statement
because, as shown by the example above, optimizing compilers can
destroy the control dependency while respecting the letter of the
barrier() law.
(*) Control dependencies require at least one run-time conditional
between the prior load and the subsequent store, and this
conditional must involve the prior load. If the compiler is able
to optimize the conditional away, it will have also optimized
away the ordering. Careful use of READ_ONCE() and WRITE_ONCE()
can help to preserve the needed conditional.
(*) Control dependencies require that the compiler avoid reordering the
dependency into nonexistence. Careful use of READ_ONCE() or
atomic{,64}_read() can help to preserve your control dependency.
Please see the COMPILER BARRIER section for more information.
(*) Control dependencies apply only to the then-clause and else-clause
of the if-statement containing the control dependency, including
any functions that these two clauses call. Control dependencies
do -not- apply to code following the if-statement containing the
control dependency.
(*) Control dependencies pair normally with other types of barriers.
(*) Control dependencies do -not- provide multicopy atomicity. If you
need all the CPUs to see a given store at the same time, use smp_mb().
(*) Compilers do not understand control dependencies. It is therefore
your job to ensure that they do not break your code.
SMP BARRIER PAIRING
-------------------
When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired. A lack of appropriate pairing is almost certainly an error.
General barriers pair with each other, though they also pair with most
other types of barriers, albeit without multicopy atomicity. An acquire
barrier pairs with a release barrier, but both may also pair with other
barriers, including of course general barriers. A write barrier pairs
with an address-dependency barrier, a control dependency, an acquire barrier,
a release barrier, a read barrier, or a general barrier. Similarly a
read barrier, control dependency, or an address-dependency barrier pairs
with a write barrier, an acquire barrier, a release barrier, or a
general barrier:
CPU 1                   CPU 2
===============         ===============
WRITE_ONCE(a, 1);
<write barrier>
WRITE_ONCE(b, 2);       x = READ_ONCE(b);
                        <read barrier>
                        y = READ_ONCE(a);
Or:
CPU 1                   CPU 2
===============         ===============================
a = 1;
<write barrier>
WRITE_ONCE(b, &a);      x = READ_ONCE(b);
                        <implicit address-dependency barrier>
                        y = *x;
Or even:
CPU 1                   CPU 2
===============         ===============================
r1 = READ_ONCE(y);
<general barrier>
WRITE_ONCE(x, 1);       if (r2 = READ_ONCE(x)) {
                        <implicit control dependency>
                        WRITE_ONCE(y, 1);
                        }

assert(r1 == 0 || r2 == 0);
Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.
[!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the address-dependency barrier, and
vice versa:
CPU 1                               CPU 2
===================                 ===================
WRITE_ONCE(a, 1);    }----   --->{  v = READ_ONCE(c);
WRITE_ONCE(b, 2);    }    \ /    {  w = READ_ONCE(d);
<write barrier>            \        <read barrier>
WRITE_ONCE(c, 3);    }    / \    {  x = READ_ONCE(a);
WRITE_ONCE(d, 4);    }----   --->{  y = READ_ONCE(b);