
Fix grid velocities in dual-time RANS simulations with deforming grids (aeroelastic and legacy FSI) #1199

Merged
merged 5 commits on Feb 17, 2021

Conversation

pcarruscag
Member

Proposed Changes

Found while debugging differences between the legacy and the new mesh deformation with @Nicola-Fonzi in #1197. Explained below.

Related Work

#1197

PR Checklist

  • I am submitting my contribution to the develop branch.
  • My contribution generates no new compiler warnings (try with the '-Wall -Wextra -Wno-unused-parameter -Wno-empty-body' compiler flags, or simply --warnlevel=2 when using meson).
  • My contribution is commented and consistent with SU2 style.
  • I have added a test case that demonstrates my contribution, if necessary.
  • I have updated appropriate documentation (Tutorials, Docs Page, config_template.cpp), if necessary.

Comment on lines 229 to 236
void CIntegration::SetDualTime_Solver(const CGeometry *geometry, CSolver *solver, const CConfig *config, unsigned short iMesh) {

SU2_OMP_PARALLEL
{
/*--- Store old solution, volumes and coordinates (in case there is grid movement). ---*/

solver->GetNodes()->Set_Solution_time_n1();
solver->GetNodes()->Set_Solution_time_n();
Member Author

This function would push the coordinates and volumes back in time, and in RANS simulations it is called twice with the same geometry (once for the flow solver and once for the turbulence solver).
Grid velocities would then be ~1.5 times what they should be.
The grid velocities were not a problem with the new mesh solver, but the volumes would still be incorrect.

Just saw I need to update the comment... FML
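
To see where the ~1.5 comes from: with a second-order backward difference, the grid velocity is v = (3*x^{n+1} - 4*x^n + x^{n-1}) / (2*dt); pushing the history back twice in the same step leaves x^{n-1} == x^n, so the formula collapses to 3*(x^{n+1} - x^n) / (2*dt), i.e. 1.5 times the true velocity. A minimal sketch of this (made-up names, not SU2 code):

#include <cassert>

// Hypothetical second-order (BDF2) grid velocity for one coordinate:
//   v = (3*x - 4*x_n + x_n1) / (2*dt)
double GridVelocityBDF2(double x, double x_n, double x_n1, double dt) {
  return (3.0 * x - 4.0 * x_n + x_n1) / (2.0 * dt);
}

int main() {
  const double dt = 0.1, d = 1.0;  // constant displacement d per time step
  // Correct history: x = 2d, x_n = d, x_n1 = 0  ->  v = d/dt
  const double v_ok = GridVelocityBDF2(2 * d, d, 0.0, dt);
  // History pushed back twice: x_n1 == x_n  ->  v = 1.5*d/dt
  const double v_bad = GridVelocityBDF2(2 * d, d, d, dt);
  assert(v_bad == 1.5 * v_ok);
  return 0;
}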

void CIntegration::SetDualTime_Solver(CGeometry *geometry, CSolver *solver, CConfig *config, unsigned short iMesh) {
void CIntegration::SetDualTime_Geometry(CGeometry *geometry, CSolver *mesh_solver, const CConfig *config, unsigned short iMesh) {
Member Author

Now there is a specific function for dual time geometry updates.
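
In other words, the intent is now (a minimal sketch with made-up types, not the actual SU2 code): the geometry history is pushed once per zone, and the per-solver routine no longer has geometry side effects.

#include <vector>

// Made-up minimal types, for illustration only.
struct Geometry {
  std::vector<double> vol, vol_n, vol_n1;  // volumes; coordinates are analogous
  void PushBackInTime() { vol_n1 = vol_n; vol_n = vol; }
};
struct Solver {
  std::vector<double> sol, sol_n, sol_n1;
  void PushBackInTime() { sol_n1 = sol_n; sol_n = sol; }
};

// The geometry history is pushed exactly once per zone and time step...
void SetDualTime_Geometry(Geometry& geometry) { geometry.PushBackInTime(); }

// ...while each solver (flow, turbulence, heat, ...) pushes only its own
// solution, no matter how many solvers share the same geometry.
void SetDualTime_Solver(Solver& solver) { solver.PushBackInTime(); }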

if (config[val_iZone]->GetDeform_Mesh()) {
solver[val_iZone][val_iInst][MESH_0][MESH_SOL]->SetDualTime_Mesh();
}
SetDualTime_Aeroelastic(config[val_iZone], iMesh);
Member Author

The aeroelastic feature was also broken, since it has its own dual-time update that was likewise repeated for RANS.
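
A sketch of that failure mode (simplified, hypothetical names): if the per-solver routine also pushes the aeroelastic history, calling it for both the flow and the turbulence solver overwrites one history level per time step.

#include <cassert>

// Hypothetical aeroelastic state with one level of history.
struct AeroelasticState {
  double q = 1.0, q_n = 0.0;  // current and previous generalized coordinate
  void PushBackInTime(double q_new) { q_n = q; q = q_new; }
};

int main() {
  AeroelasticState ae;
  // One physical time step, but the update runs once per solver in RANS:
  ae.PushBackInTime(2.0);  // flow solver call: q_n = 1 (correct)
  ae.PushBackInTime(2.0);  // turbulence solver call: q_n = 2 (history lost)
  assert(ae.q_n == 2.0);   // the buggy value; it should still be 1.0
  return 0;
}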

Comment on lines 203 to +212
}

/*--- Update dual time solver for the weakly coupled energy equation ---*/

if (config[val_iZone]->GetWeakly_Coupled_Heat()) {
integration[val_iZone][val_iInst][HEAT_SOL]->SetDualTime_Solver(geometry[val_iZone][val_iInst][MESH_0],
solver[val_iZone][val_iInst][MESH_0][HEAT_SOL],
config[val_iZone], MESH_0);
integration[val_iZone][val_iInst][HEAT_SOL]->SetConvergence(false);
}
Member Author

@TobiKattmann @oleburghardt, I think this was missing for the dual-time weakly coupled stuff, can you see if it makes sense?

Contributor

LGTM. Although I haven't used WEAKLY_COUPLED_HEAT in a long time, tbh, and for now do not intend to use it in the future.
I also checked: there is a test case, discadj_heat, using WEAKLY_COUPLED_HEAT (although it is of course not unsteady).

Thanks for actually thinking of that 👍

Member Author

Alright, thanks for having a look. I don't blame you; there is not a lot going for the weakly coupled approach if the flow solver still drags the temperature around. Otherwise, having 4 variables (in 3D) would make computers very happy.

Contributor

Ok, I don't get the last sentence :) Do you mean the unnecessary energy equation that is carried around even for cases where it is not active? Or did I miss the memo that states 4 >> 5 for the number of equations? Please elaborate if you have a few moments to spare ✍️

Member Author

Yes, that too, since it makes the linear system 1.44 times larger and slower (the cost of sparse operations is proportional to the amount of memory used).
Then 4 is a really good number because all the blocks in the matrix would be aligned and all the operations would pack perfectly into AVX registers.
The vector blocks would also be aligned, which would make prefetching (loading into cache ahead of time) effective, because it operates on cache lines (and there are 8 doubles in one of those).
Finally, the output vector could be written to with non-temporal stores (also because of alignment), which improves caching of the input vector.
So in all these ways 4 > 5 (by maybe 5-10% on top of the 1.44).
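
As a rough illustration of the alignment point (a sketch, not SU2's actual kernel; it assumes 32-byte-aligned, column-major 4x4 blocks and FMA hardware): with nVar = 4, one block column of doubles fills a 256-bit AVX register exactly, and the 4-entry output can be written with a single non-temporal store.

#include <immintrin.h>

// Hypothetical kernel: y = A * x for one 4x4 block of doubles (nVar = 4).
// With nVar = 5 the 200-byte blocks straddle cache lines and nothing
// below packs cleanly.
void Block4x4TimesVec(const double* A, const double* x, double* y) {
  __m256d acc = _mm256_setzero_pd();
  for (int j = 0; j < 4; ++j) {
    __m256d col = _mm256_load_pd(A + 4 * j);  // one column = one register
    __m256d xj = _mm256_set1_pd(x[j]);        // broadcast x[j]
    acc = _mm256_fmadd_pd(col, xj, acc);      // acc += col * x[j]
  }
  _mm256_stream_pd(y, acc);  // non-temporal store: the output bypasses the
                             // cache, leaving more room for the input vector
}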

Contributor

Oh yes, sure, I forgot that computers have this thing going with powers of 2 🤦 thanks!

@Nicola-Fonzi
Contributor

Great fix! The code is also much better organised now.

@Nicola-Fonzi
Contributor

[Screenshot: lift coefficient history comparison]

The error was spotted while updating the old Python routines to the new drivers/mesh solver. In particular, the test case in the py_wrapper folder, a flat plate plunging up and down, showed a different lift coefficient when computed with the old and the new mesh solver.

I attach here a comparison of the lift coefficient history for three cases:

  • The legacy mesh solver, after the modification in this PR
  • The new mesh solver
  • The legacy mesh solver, before the modification

The first two are completely coincident.

Contributor

@TobiKattmann left a comment

Thanks for fixing that. Good thing to split up the geometry and solver "solution pushing", making use of inheritance for the former and specialisation, if needed, for the latter.
And the constification of the code is a good thing as well 💐

