JIT: Add a disabled-by-default implementation of strength reduction #104243
Conversation
This adds a disabled-by-default implementation of strength reduction. At this point the implementation should be correct; however, it is currently both a size and perfscore regression when it is enabled. More work will be needed to get the heuristics right and to make it kick in for more cases.

Strength reduction replaces "expensive" operations computed on every loop iteration with cheaper ones by creating more induction variables. In C# terms it effectively transforms something like

```csharp
private struct S { public int A, B, C; }

[MethodImpl(MethodImplOptions.NoInlining)]
private static float Sum(S[] ss)
{
    int sum = 0;
    foreach (S v in ss)
    {
        sum += v.A;
        sum += v.B;
        sum += v.C;
    }

    return sum;
}
```

into an equivalent

```csharp
int sum = 0;
ref S curS = ref ss[0];
for (int i = 0; i < ss.Length; i++)
{
    sum += curS.A;
    sum += curS.B;
    sum += curS.C;
    curS = ref Unsafe.Add(ref curS, 1);
}
```

With strength reduction enabled this PR thus changes codegen of the standard `foreach` version above from

```asm
G_M63518_IG03:  ;; offset=0x0011
       lea      r10, [rdx+2*rdx]
       lea      r10, bword ptr [rcx+4*r10+0x10]
       mov      r9d, dword ptr [r10]
       mov      r11d, dword ptr [r10+0x04]
       mov      r10d, dword ptr [r10+0x08]
       add      eax, r9d
       add      eax, r11d
       add      eax, r10d
       inc      edx
       cmp      r8d, edx
       jg       SHORT G_M63518_IG03
;; size=36 bbWeight=4 PerfScore 39.00
```

to

```asm
G_M63518_IG04:  ;; offset=0x0011
       mov      r8, rcx
       mov      r10d, dword ptr [r8]
       mov      r9d, dword ptr [r8+0x04]
       mov      r8d, dword ptr [r8+0x08]
       add      eax, r10d
       add      eax, r9d
       add      eax, r8d
       add      rcx, 12
       dec      edx
       jne      SHORT G_M63518_IG04
;; size=31 bbWeight=4 PerfScore 34.00
```

on x64. Further improvements can be made to enable better address modes.

The current heuristics try to ensure that we do not actually end up with more primary induction variables. The strength reduction only kicks in when it thinks that all uses of the primary IV can be replaced by the new primary IV.
However, uses inside loop exit tests are allowed to stay unreplaced, under the assumption that the downwards loop transformation will be able to get rid of them.

Getting the cases around overflow right turned out to be hard and required reasoning about trip counts that was added in a previous PR. Generally, the issue is that we need to prove that transforming a zero extension of a 32-bit add recurrence into a 64-bit add recurrence is legal. For example, for a simple case of

```csharp
for (int i = 0; i < arr.Length; i++)
    sum += arr[i];
```

the IV analysis is eventually going to end up wanting to show that `zext<64>(int32 <L, 0, 1>) => int64 <L, 0, 1>` is a correct transformation. This requires showing that the add recurrence does not step past 2^32-1, which requires the bound on the trip count that we can now compute. The reasoning done for both the trip count and around the overflow is still very limited but can be improved incrementally.

The implementation works by considering every primary IV of the loop in turn, and by initializing 'cursors' pointing to each use of the primary IV. It then tries to repeatedly advance these cursors to the parent of the uses as long as doing so results in a new set of cursors that still compute the same (now derived) IV. If it manages to do this at least once, then replacing the cursors with a new primary IV should result in the old primary IV no longer being necessary, while having replaced some operations with cheaper ones.
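The cursor-advancing search described above can be modeled with a tiny sketch. Python is used purely for illustration here; the tuple encoding and the names `apply_op`/`advance_cursors` are hypothetical and do not correspond to the JIT's actual IR or code.

```python
# Toy model of the cursor-advancing search (illustrative only; the real JIT
# works on GenTree IR, not these tuples). A derived IV / add recurrence
# <L, start, step> is modeled as (start, step); each use of the primary IV is
# a chain of parent operations applied to it.

def apply_op(addrec, op):
    start, step = addrec
    kind, c = op
    if kind == "mul":  # i * c scales both the start and the step
        return (start * c, step * c)
    if kind == "add":  # i + c offsets only the start
        return (start + c, step)
    raise ValueError(kind)

def advance_cursors(use_chains):
    """Advance all cursors in lockstep while every cursor still computes the
    same derived IV; return the deepest common derived IV reached."""
    derived = (0, 1)  # every cursor starts at the primary IV <L, 0, 1>
    depth = 0
    while all(len(chain) > depth for chain in use_chains):
        candidates = {apply_op(derived, chain[depth]) for chain in use_chains}
        if len(candidates) != 1:
            break  # cursors diverge; stop before this level
        derived = candidates.pop()
        depth += 1
    return derived

# Three field accesses at i*12+16, i*12+20, i*12+24 (as in the Sum example)
# diverge at the adds, so the common derived IV is <L, 0, 12>: the byte
# offset that strength reduction turns into a new primary IV stepping by 12.
chains = [[("mul", 12), ("add", 16)],
          [("mul", 12), ("add", 20)],
          [("mul", 12), ("add", 24)]]
assert advance_cursors(chains) == (0, 12)
```

The remaining `+16/+20/+24` offsets then fold into address-mode displacements, which is why no multiply survives in the strength-reduced codegen.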
Tagging subscribers to this area: @JulieLeeMSFT, @jakobbotsch

/azp run runtime-coreclr jitstress, runtime-coreclr libraries-jitstress

Azure Pipelines successfully started running 2 pipeline(s).
@MichalStrehovsky Have you seen this one before during ilc? https://dev.azure.com/dnceng-public/public/_build/results?buildId=726752&view=logs&j=15d34cf8-8614-5f8a-e364-3c281b6c41dd&t=2f724b17-e574-5ab4-a401-210ce585d93f&l=767 Do I understand correctly that this is during the ilc compilation, which was jitted without my changes from this PR?
Still have some correctness issues to sort out; for example, we create an illegal byref in

```csharp
public static void Main()
{
    Test(ref Unsafe.NullRef<ushort>(), () => false);
}

[MethodImpl(MethodImplOptions.NoInlining)]
public static int Test(ref ushort b, Func<bool> sumIndex)
{
    int sum = 0;
    for (nuint i = 0; i < 10000; i++)
    {
        if (sumIndex())
            sum += Unsafe.Add(ref b, 0x100000 + i);
    }

    return sum;
}
```

```asm
G_M401_IG02:  ;; offset=0x000B
       xor      esi, esi
       lea      rdi, bword ptr [rcx+0x00200000]
       mov      ebp, 0x2710
;; size=14 bbWeight=0.25 PerfScore 0.25
G_M401_IG03:  ;; offset=0x0019
       mov      rcx, gword ptr [rbx+0x08]
       call     [rbx+0x18]System.Func`1[ubyte]:Invoke():ubyte:this
       test     eax, eax
       je       SHORT G_M401_IG05
;; size=11 bbWeight=4 PerfScore 25.00
G_M401_IG04:  ;; offset=0x0024
       movzx    rax, word ptr [rdi]
       add      esi, eax
;; size=5 bbWeight=2 PerfScore 4.50
G_M401_IG05:  ;; offset=0x0029
       add      rdi, 2
       dec      rbp
       jne      SHORT G_M401_IG03
;; size=9 bbWeight=4 PerfScore 6.00
```

We end up creating and reporting a byref.
Just going to bail indiscriminately on the GC pointers case for now. Will handle some of the cases where we can prove it is ok in a follow-up.
cc @dotnet/jit-contrib PTAL @AndyAyersMS

The jitstress/libraries-jitstress failures look like #103577, #104316, #104317, #102706.

Diffs with this enabled do not look too encouraging yet. They looked quite a bit better before disabling the GC types, so hopefully I will be able to get it into a better state once I start improving the legality checks and heuristics. I want to do that in some smaller changes separately. I am going to push a commit to disable this by default again.
Overall this looks great. I had a few questions, but they can be addressed (or ignored) in follow-ups.
```csharp
        return false;
    }

    bool isInsideExitTest =
```
Maybe we should compute the set of exit blocks during loop analysis?
Or at least introduce some helper for this... let me do it as part of a follow-up.
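For illustration, the suggested helper amounts to something like the following minimal sketch. Python and the name `exit_blocks` are hypothetical here; the JIT of course works on its `BasicBlock` flow graph, not on sets and dicts.

```python
# Sketch (illustrative only): the exit blocks of a loop are the blocks inside
# the loop that have at least one successor outside the loop.
def exit_blocks(loop_blocks, successors):
    return {b for b in loop_blocks
            if any(s not in loop_blocks for s in successors[b])}

# A loop {header, body} whose body conditionally branches to 'after':
succ = {"header": {"body"}, "body": {"header", "after"}, "after": set()}
assert exit_blocks({"header", "body"}, succ) == {"body"}
```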
```csharp
// It is not a correctness requirement that we remove all uses; if we end up
// not doing so (e.g. because a cursor was not created by this function),
// then we may just end up with extra primary IVs in the loop.
```
Would it be interesting to have a stress mode where you intentionally don't create cursors all the places you could?
Yeah, I think so. Good idea for a follow-up.
```csharp
// The algorithm here works in the following way: we process each primary
// IV in turn. For every primary IV, we create a 'cursor' pointing to every
// use of that primary IV. We then continuously advance each cursor to the
// parent node as long as all cursors represent the same derived IV. Once we
```
I'm curious about the "same" derived IV requirement... if we are, say, converting from a char array to an int array:

```csharp
int[] a = ...
char[] b = ...
for (int i = 0; ...)
{
    a[i] = (int) b[i];
}
```

If I understand correctly, the current code would not strength reduce because we'd have `<L, 0, 4>` vs `<L, 0, 2>`? Seems like we still might want to strength reduce as long as the set of cursors doesn't expand into too many different new IVs. Or say if it is relatively cheap to convert from one family to another (say multiply by 2, which can be done with a scaled address mode).
Also wonder if this profitability equation changes if we have post-increment modes (I forget where we ended up on this).
> If I understand correctly, the current code would not strength reduce because we'd have `<L, 0, 4>` vs `<L, 0, 2>`? Seems like we still might want to strength reduce as long as the set of cursors doesn't expand into too many different new IVs. Or say if it is relatively cheap to convert from one family to another (say multiply by 2, which can be done with a scaled address mode).
Yeah, that's right. I think you're right we can generalize this: we can keep going if the addrecs have the same 'start' and all step values are divisible by the minimum step value. I'll consider that as a follow-up.
> Also wonder if this profitability equation changes if we have post-increment modes (I forget where we ended up on this).
Currently there's really no heuristics here; we kind of just do strength reduction whenever we have a candidate. I definitely need to sit down and consider more carefully how the transformation affects performance and size (usually it seems to regress size because it requires computing some 'start' value in the preheader).
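The divisibility-based generalization mentioned above can be sketched as follows. This is Python for illustration only; `can_share_primary_iv` is a hypothetical name, not JIT code.

```python
# Sketch (illustrative only): a set of derived add recurrences <L, start_k,
# step_k>, modeled as (start, step) tuples, could share one new primary IV if
# they all have the same start and every step is divisible by the minimum
# step; the multiples can then be folded into scaled address modes.
def can_share_primary_iv(addrecs):
    starts = {start for (start, step) in addrecs}
    if len(starts) != 1:
        return False  # different starts would need separate base computations
    min_step = min(step for (_, step) in addrecs)
    return all(step % min_step == 0 for (_, step) in addrecs)

# The int/char example: a[i] advances by 4 bytes, b[i] by 2, and 4 % 2 == 0,
# so a single IV stepping by 2 (scaled by 2 for the int array) would suffice.
assert can_share_primary_iv([(0, 4), (0, 2)])
assert not can_share_primary_iv([(0, 4), (0, 3)])
```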
```csharp
            break;
        }

        // TODO-CQ: If this is now the source to a store, we can
```
Do you know how often you see this? I suppose for normal array uses it's rare, we may only see it for pointer math where users have CSE'd parts of the computation or something.
Fairly often:
```
Advanced cursor node type
ADD            : 26343
CAST           : 15729
LSH            : 11239
BOUNDS_CHECK   :  8355
IND            :  5714
LT             :  3143
GT             :  2536
STORE_LCL_VAR  :  1319
GE             :  1252
SUB            :  1008
CALL           :   993
STORE_BLK      :   907
HWINTRINSIC    :   892
BLK            :   841
UMOD           :   822
LE             :   437
STOREIND       :   357
EQ             :   297
MUL            :    89
AND            :    63
NE             :    61
MOD            :    38
INTRINSIC      :    16
NEG            :    14
UDIV           :    10
COMMA          :     7
BITCAST        :     2
DIV            :     2
RSH            :     2
```
There are a few diffs from some of the new sanity checks added when creating SCEVs.