
Fix parallel increments #1446

Merged: 4 commits into master, Sep 16, 2020

Conversation

FabioLuporini (Contributor)
codecov bot commented Sep 10, 2020

Codecov Report

Merging #1446 into master will decrease coverage by 1.21%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master    #1446      +/-   ##
==========================================
- Coverage   87.13%   85.93%   -1.21%     
==========================================
  Files         189      189              
  Lines       27437    27493      +56     
  Branches     3717     3722       +5     
==========================================
- Hits        23908    23625     -283     
- Misses       3104     3439     +335     
- Partials      425      429       +4     
Impacted Files Coverage Δ
devito/ir/support/basic.py 92.35% <100.00%> (+0.22%) ⬆️
devito/passes/iet/openmp.py 95.94% <100.00%> (+0.15%) ⬆️
tests/test_dle.py 97.83% <100.00%> (+0.19%) ⬆️
examples/seismic/test_seismic_utils.py 0.00% <0.00%> (-100.00%) ⬇️
examples/seismic/viscoacoustic/operators.py 0.00% <0.00%> (-100.00%) ⬇️
examples/seismic/viscoacoustic/wavesolver.py 0.00% <0.00%> (-100.00%) ⬇️
...les/seismic/viscoacoustic/viscoacoustic_example.py 0.00% <0.00%> (-72.23%) ⬇️
examples/seismic/tti/operators.py 62.50% <0.00%> (-33.34%) ⬇️
examples/seismic/elastic/elastic_example.py 47.22% <0.00%> (-30.56%) ⬇️
examples/seismic/tti/tti_example.py 35.13% <0.00%> (-21.63%) ⬇️
... and 12 more

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 6b80e6c...3cd1b2c.

@FabioLuporini force-pushed the fix-parallel-incs branch 2 times, most recently from cd2fa96 to d243b7d, on September 10, 2020 13:41
@tjb900 (Contributor) left a comment:
I confess I haven't looked at the code above, but the generated code looks good and fixes the issue. Thanks!

    for i in reduction:
        if i.is_Indexed:
            # OpenMP expects a size, not an index, as input of a reduction,
            # such as reduction(+:f[d_m:d_M+1])
Contributor:

Need to update this comment, since that's not what is generated anymore.

Contributor Author:

gotcha, fixed

@@ -194,7 +194,7 @@ def test_timeparallel_reduction(self):
         assert not tree.root.pragmas
         assert len(tree[1].pragmas) == 1
         assert tree[1].pragmas[0].value ==\
-            'omp target teams distribute parallel for collapse(3) reduction(+:f[0])'
+            'omp target teams distribute parallel for collapse(3) reduction(+:f[0:1])'
Contributor:
This should have been caught by if i.is_Indexed: and still be f[0], since it's a single index. Is it from my PR?

Contributor Author:
No, from mine; I was just handling both cases (number and symbol) homogeneously. But I've now rolled back to the previous behaviour.

@@ -523,6 +521,62 @@ def test_scheduling(self):
         assert not iterations[3].is_Affine
         assert 'schedule(dynamic,chunk_size)' in iterations[3].pragmas[0].value

+    @pytest.mark.parametrize('so', [0, 1, 2])
+    @pytest.mark.parametrize('dim', [1, 2])
Contributor:
Is the dim=0 case fixed by the other part of this PR (where the reduction wasn't picked up if it was the outermost loop)?

Contributor Author (@FabioLuporini), Sep 15, 2020:

I'm gonna check now, but I think so, since the assert 'reduction(...)' in op is now for all test cases.

Contributor Author:
yes, it looks correct:

    #pragma omp parallel num_threads(nthreads)
    {
      #pragma omp for collapse(3) schedule(static,1) reduction(+:f[0:f_vec->size[0]])
      for (int x = x_m; x <= x_M; x += 1)
      {
        for (int y = y_m; y <= y_M; y += 1)
        {
          for (int z = z_m; z <= z_M; z += 1)
          {
            f[y] += u[t0][x + 1][y + 1][z + 1] + 1;
          }
        }
      }
    }

Contributor:
That's still the y dimension, not x; you need to change to @pytest.mark.parametrize('dim', [0, 1, 2]) to get that previously problematic case.

Contributor:
Added 0 back to dims, and it seems to be working fine now.

@FabioLuporini merged commit 269fb44 into master on Sep 16, 2020
@FabioLuporini deleted the fix-parallel-incs branch on September 16, 2020 07:31
Labels: bug-C (bug in the generated code), compiler
4 participants