
Transform won't release GPU memory in TFX after the transform finishes and takes up nearly all of it #227

Open
axelning opened this issue Mar 8, 2021 · 5 comments

axelning commented Mar 8, 2021

If the bug is related to a specific library below, please raise an issue in the
respective repo directly:

TensorFlow Data Validation Repo

TensorFlow Model Analysis Repo

TensorFlow Transform Repo

TensorFlow Serving Repo

System information

  • Have I specified the code to reproduce the issue (Yes/No): yes
  • Environment in which the code is executed (e.g., Local (Linux/MacOS/Windows), Interactive Notebook, Google Cloud, etc.):
  • TensorFlow version (you are using): 2.3.2
  • TFX version: 0.26.1
  • Python version: 3.6.7

Describe the current behavior
In the TFX Transform module, the following is called at tensorflow_transform/beam/impl.py:1058:

schema = schema_inference.infer_feature_schema_v2(
      structured_outputs,
      metadata_fn.get_concrete_function(),
      evaluate_schema_overrides=False)

This calls infer_feature_schema_v2 in schema_inference.py:163.

In this function, tf2_utils.supply_missing_inputs(structured_inputs, batch_size=1) at line 195 converts the inputs to tensors and does not release the GPU memory when it finishes. By default this operation takes 7715 MB on my single Tesla P40.

I then run into OOM because the training step that follows also requests GPU memory. If I stop the whole process and resume, training completes successfully because the transform output has already been saved, which shows this memory does not need to stay on the GPU once the transform has finished.
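A minimal sketch of one way to reduce the up-front reservation (these are standard TensorFlow 2.x settings, not something Transform itself exposes); it asks TF to grow GPU allocations on demand instead of pre-allocating most of the device:

import os
import tensorflow as tf

# Sketch: request on-demand GPU allocation before any op touches the GPU.
# TF_FORCE_GPU_ALLOW_GROWTH and set_memory_growth are standard TF 2.x knobs.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

for gpu in tf.config.experimental.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)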

@arghyaganguly arghyaganguly self-assigned this Mar 9, 2021
@arghyaganguly

duplicate tfx#3343


arghyaganguly commented Mar 9, 2021

@zoyahav , shall we track this issue here or in tfx#3343 ?


zoyahav commented Mar 10, 2021

Let's keep it here for now.

@axelning are you able to check if the issue occurs with CPU as well?


axelning commented Apr 6, 2021

Let's keep it here for now.

@axelning are you able to check if the issue occurs with CPU as well?

By setting a GPU memory growth limit and limiting the number of workers, this issue can be circumvented; a sketch of that setup follows below.
On CPU I could not reproduce it while running, but only because I have 32 GB of memory.

Still, the GPU memory management problem keeps coming up; maybe some architectural optimization is needed.
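For illustration, a hedged sketch of what that circumvention might look like; the 2048 MB cap and the single-worker Beam setting are illustrative values, not taken from the report:

import tensorflow as tf

# 1) Cap how much GPU memory this process may reserve (TF 2.3-era API names;
#    the 2048 MB limit is an illustrative value).
gpus = tf.config.experimental.list_physical_devices("GPU")
if gpus:
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2048)])

# 2) Reduce the number of local Beam workers so fewer processes compete for
#    the same GPU, e.g. via the TFX pipeline's beam_pipeline_args.
beam_pipeline_args = ["--direct_num_workers=1"]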
