Replace dm-tree with optree #19306
Conversation
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@ Coverage Diff @@
## master #19306 +/- ##
==========================================
- Coverage 80.14% 75.75% -4.39%
==========================================
Files 341 366 +25
Lines 36163 40208 +4045
Branches 7116 7811 +695
==========================================
+ Hits 28982 30460 +1478
- Misses 5578 8062 +2484
- Partials 1603 1686 +83
Flags with carried forward coverage won't be shown. Click here to find out more.
☔ View full report in Codecov by Sentry.
Thanks for the PR. Do you observe a performance difference? How long does it take to run the unit test suite with the PyTorch backend before and after the change, for instance?
I didn't observe a performance difference in the unit tests. Env:
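As a point of reference, a direct micro-benchmark of the two tree libraries, outside the Keras test suite, could look something like the sketch below. This is purely illustrative: the nested structure and repeat count are arbitrary, and it was not run as part of this thread.

```python
# Hypothetical micro-benchmark: flatten the same nested structure with
# dm-tree and with optree, and compare wall-clock time.
import timeit

import optree
import tree  # dm-tree

# An arbitrary nested structure of dicts, lists, and tuples.
nested = {"a": [1, 2, (3, 4)], "b": {"c": [5, 6], "d": None}}

dm_time = timeit.timeit(lambda: tree.flatten(nested), number=100_000)
op_time = timeit.timeit(
    # none_is_leaf=True mirrors dm-tree's behaviour of treating None as a leaf.
    lambda: optree.tree_flatten(nested, none_is_leaf=True)[0],
    number=100_000,
)

print(f"dm-tree flatten: {dm_time:.3f}s")
print(f"optree  flatten: {op_time:.3f}s")
```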
There is no difference observed when using CUDA, but there is a slight improvement when using CPU.
Logs:
[2024-03-14 13:41:05,140] torch._dynamo.convert_frame: [WARNING] function: 'resume_in___call__' (/home/hongyu/workspace/keras/keras/layers/layer.py:695)
[2024-03-14 13:41:05,140] torch._dynamo.convert_frame: [WARNING] last reason: ___check_global_state()
[2024-03-14 13:41:05,140] torch._dynamo.convert_frame: [WARNING] To log all recompilation reasons, use TORCH_LOGS="recompiles".
[2024-03-14 13:41:05,140] torch._dynamo.convert_frame: [WARNING] To diagnose recompilation issues, see https://pytorch.org/docs/master/compile/troubleshooting.html.
[2024-03-14 13:41:05,233] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (8)
[2024-03-14 13:41:05,233] torch._dynamo.convert_frame: [WARNING] function: '__call__' (/home/hongyu/workspace/keras/keras/ops/operation.py:31)
[2024-03-14 13:41:05,233] torch._dynamo.convert_frame: [WARNING] last reason: ___check_global_state()
[2024-03-14 13:41:05,233] torch._dynamo.convert_frame: [WARNING] To log all recompilation reasons, use TORCH_LOGS="recompiles".
[2024-03-14 13:41:05,233] torch._dynamo.convert_frame: [WARNING] To diagnose recompilation issues, see https://pytorch.org/docs/master/compile/troubleshooting.html.
[2024-03-14 13:41:05,778] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (8)
[2024-03-14 13:41:05,778] torch._dynamo.convert_frame: [WARNING] function: 'fill_in' (/home/hongyu/workspace/keras/keras/ops/symbolic_arguments.py:31)
[2024-03-14 13:41:05,778] torch._dynamo.convert_frame: [WARNING] last reason: ___check_global_state()
[2024-03-14 13:41:05,778] torch._dynamo.convert_frame: [WARNING] To log all recompilation reasons, use TORCH_LOGS="recompiles".
[2024-03-14 13:41:05,778] torch._dynamo.convert_frame: [WARNING] To diagnose recompilation issues, see https://pytorch.org/docs/master/compile/troubleshooting.html.
[2024-03-14 13:41:05,778] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (8)
[2024-03-14 13:41:05,778] torch._dynamo.convert_frame: [WARNING] function: 'call' (/home/hongyu/workspace/keras/keras/models/functional.py:571)
[2024-03-14 13:41:05,778] torch._dynamo.convert_frame: [WARNING] last reason: ___check_global_state()
[2024-03-14 13:41:05,778] torch._dynamo.convert_frame: [WARNING] To log all recompilation reasons, use TORCH_LOGS="recompiles".
[2024-03-14 13:41:05,778] torch._dynamo.convert_frame: [WARNING] To diagnose recompilation issues, see https://pytorch.org/docs/master/compile/troubleshooting.html.
[2024-03-14 13:41:05,779] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (8)
[2024-03-14 13:41:05,779] torch._dynamo.convert_frame: [WARNING] function: '__call__' (/home/hongyu/workspace/keras/keras/layers/layer.py:692)
[2024-03-14 13:41:05,779] torch._dynamo.convert_frame: [WARNING] last reason: ___check_global_state()
[2024-03-14 13:41:05,779] torch._dynamo.convert_frame: [WARNING] To log all recompilation reasons, use TORCH_LOGS="recompiles".
[2024-03-14 13:41:05,779] torch._dynamo.convert_frame: [WARNING] To diagnose recompilation issues, see https://pytorch.org/docs/master/compile/troubleshooting.html.
[2024-03-14 13:41:05,780] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (8)
[2024-03-14 13:41:05,780] torch._dynamo.convert_frame: [WARNING] function: '_setattr_hook' (/home/hongyu/workspace/keras/keras/backend/torch/layer.py:28)
[2024-03-14 13:41:05,780] torch._dynamo.convert_frame: [WARNING] last reason: ___check_global_state()
[2024-03-14 13:41:05,780] torch._dynamo.convert_frame: [WARNING] To log all recompilation reasons, use TORCH_LOGS="recompiles".
[2024-03-14 13:41:05,780] torch._dynamo.convert_frame: [WARNING] To diagnose recompilation issues, see https://pytorch.org/docs/master/compile/troubleshooting.html.
[2024-03-14 13:41:06,635] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (8)
[2024-03-14 13:41:06,635] torch._dynamo.convert_frame: [WARNING] function: 'maybe_convert' (/home/hongyu/workspace/keras/keras/layers/layer.py:699)
[2024-03-14 13:41:06,635] torch._dynamo.convert_frame: [WARNING] last reason: ___check_global_state()
[2024-03-14 13:41:06,635] torch._dynamo.convert_frame: [WARNING] To log all recompilation reasons, use TORCH_LOGS="recompiles".
[2024-03-14 13:41:06,635] torch._dynamo.convert_frame: [WARNING] To diagnose recompilation issues, see https://pytorch.org/docs/master/compile/troubleshooting.html.
[2024-03-14 13:41:07,629] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (8)
[2024-03-14 13:41:07,629] torch._dynamo.convert_frame: [WARNING] function: '_get_own_losses' (/home/hongyu/workspace/keras/keras/layers/layer.py:1061)
[2024-03-14 13:41:07,629] torch._dynamo.convert_frame: [WARNING] last reason: ___check_global_state()
[2024-03-14 13:41:07,629] torch._dynamo.convert_frame: [WARNING] To log all recompilation reasons, use TORCH_LOGS="recompiles".
[2024-03-14 13:41:07,629] torch._dynamo.convert_frame: [WARNING] To diagnose recompilation issues, see https://pytorch.org/docs/master/compile/troubleshooting.html.
It seems that there are still some frames that cannot be converted successfully.
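If one wanted to chase down the remaining recompilations, a bit of extra configuration before the run can help. The snippet below is a generic TorchDynamo debugging sketch, not code from this PR; it is based on the knobs mentioned in the warnings above.

```python
# Hypothetical debugging setup for the recompilation warnings above.
import torch

# The warnings report hitting the default cache_size_limit of 8; raising it
# lets Dynamo keep more compiled variants of a frame before giving up on it.
torch._dynamo.config.cache_size_limit = 64

# Roughly equivalent to running with TORCH_LOGS="recompiles", as the log
# suggests: every recompilation is reported together with the failed guard.
torch._logging.set_logs(recompiles=True)
```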
LGTM, thank you for the contribution!
Related to #18442
Related to #18614
This PR refactors keras.utils.tree to use optree instead of dm-tree:
- Exposes the utilities (is_nested, flatten, unflatten_as, map_structure, map_structure_up_to, assert_same_structure, pack_sequence_as, lists_to_tuples) with the path keras.utils.tree.*
- Drops the previous dependencies (dm-tree and tf.nest)
- Replaces tf.nest in the codebase (excluding legacy code)

I have verified that the exported APIs meet the requirements of keras_cv and keras_nlp.
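For illustration, here is a rough sketch of how a few of the exposed utilities can be expressed on top of optree. This is an assumption about the general approach, not the actual keras.utils.tree implementation from this PR, which may differ in its details.

```python
# Hypothetical optree-backed versions of a few keras.utils.tree utilities.
# Not the code from this PR; it only illustrates the optree calls involved.
import optree


def flatten(structure):
    # dm-tree treats None as a leaf, so none_is_leaf=True preserves that behaviour.
    leaves, _ = optree.tree_flatten(structure, none_is_leaf=True)
    return leaves


def map_structure(func, *structures):
    # Apply `func` to corresponding leaves of structures sharing the same layout.
    return optree.tree_map(func, *structures, none_is_leaf=True)


def pack_sequence_as(structure, flat_sequence):
    # Rebuild the layout of `structure` from the leaves in `flat_sequence`.
    _, treespec = optree.tree_flatten(structure, none_is_leaf=True)
    return optree.tree_unflatten(treespec, flat_sequence)
```

Under this sketch, map_structure(lambda x: x * 2, {"a": (1, 2)}) returns {"a": (2, 4)}, matching the dm-tree behaviour it replaces.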