
Choosing the layer to retrieve embeddings from: allocate_tensors error, when using tensorflow 2.17.0 #531

Open
sammlapp opened this issue Dec 17, 2024 · 3 comments

Comments

@sammlapp

Describe the bug
With the tflite Interpreter object, I used to be able to retrieve both the embeddings and the class logits in the same way this code base does. With TensorFlow 2.17.0, retrieving the embedding tensor now fails with an allocate_tensors error.

To Reproduce
I have TensorFlow 2.17. The code below works fine with TensorFlow 2.15, but I need TensorFlow 2.16 or 2.17 for compatibility with CUDA 12.3, per the tested-build table at https://www.tensorflow.org/install/source#gpu

# first download the checkpoint:
# https://github.com/kahst/BirdNET-Analyzer/blob/v1.5.0/birdnet_analyzer/checkpoints/V2.4/BirdNET_GLOBAL_6K_V2.4_Model_FP16.tflite
import numpy as np
from tensorflow import lite as tflite

tf_model = tflite.Interpreter(
    model_path=model_path, num_threads=num_tflite_threads
)
input_details = tf_model.get_input_details()[0]
input_layer_idx = input_details["index"]

# choose which layer should be retrieved: the embeddings are the
# tensor directly before the final class-logits output
output_details = tf_model.get_output_details()[0]
embedding_idx = output_details["index"] - 1
class_logits_idx = output_details["index"]

# forward pass
# reshape the expected input tensor of the TF model to include the batch dimension
tf_model.resize_tensor_input(
    input_layer_idx, [len(batch_data), *batch_data[0].shape]
)
tf_model.allocate_tensors()
tf_model.set_tensor(input_layer_idx, np.float32(batch_data))
tf_model.invoke()  # forward pass

# get results
batch_logits = tf_model.get_tensor(class_logits_idx)  # no error, returns logits

# raises `ValueError: Tensor data is null. Run allocate_tensors() first`
batch_embeddings = tf_model.get_tensor(embedding_idx)

Expected behavior
The embedding (feature) values can be retrieved from the tensor at output_details['index'] - 1 using tf_model.get_tensor(), as was possible with TensorFlow 2.15.

How do I need to modify the code so that the tflite interpreter can be used with TensorFlow 2.17?
Thanks

@Josef-Haupt
Collaborator

You answered your own question: we currently only support TF 2.15.x, and there are breaking changes in TF 2.16 that affect our model loading and saving.
So yes, changes are needed to support TF 2.17. It is on our roadmap for next year, but if you implement a solution, feel free to create a PR.

@MacJudge
Collaborator

I would like to help solve this, as I'm hitting the same issue while extending the API, but downgrading my Linux development environment from Python 3.12 (which doesn't support this outdated TF version) seems quite complicated. Unfortunately, I have no idea where to start, so perhaps we should meet next year and discuss a solution, Josef.

@MacJudge
Collaborator

MacJudge commented Jan 7, 2025

Happy new year. :)

I'm back at work and have played around with TensorFlow for a while. Since we only use TensorFlow Lite, I did some research and discovered it has been renamed to AI Edge LiteRT [1].
Unfortunately, only the nightly build [2] supports Python 3.12, but I was able to get it running and basically nothing changed, i.e. the error persists.
Digging through the API documentation, I finally found a possible solution:

Set the Interpreter parameter [3] experimental_preserve_all_tensors to True, and the embeddings can be extracted as before.

Since this parameter has existed (with a default value of False) since TensorFlow 2.5 [4], I'm wondering why the old code worked until 2.15, so there may be other incompatibilities. It works for me, though, so give it a try.

[1] https://ai.google.dev/edge/litert
[2] https://pypi.org/project/ai-edge-litert-nightly/
[3] https://www.tensorflow.org/api_docs/python/tf/lite/Interpreter
[4] https://www.tensorflow.org/versions/r2.5/api_docs/python/tf/lite/Interpreter
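Applied to the snippet from the original post, the fix from [3] is a one-line change to the Interpreter constructor. A minimal sketch (the helper name and defaults are illustrative, not part of this repository):

```python
import tensorflow as tf

def make_interpreter(model_path: str, num_threads: int = 1) -> tf.lite.Interpreter:
    """Build a TFLite Interpreter whose intermediate tensors stay readable."""
    return tf.lite.Interpreter(
        model_path=model_path,
        num_threads=num_threads,
        # The fix: without this flag, get_tensor() on a non-output tensor
        # (e.g. the embedding layer at output index - 1) can raise
        # "ValueError: Tensor data is null. Run allocate_tensors() first",
        # because the runtime is free to discard intermediate buffers.
        experimental_preserve_all_tensors=True,
    )
```

With the interpreter built this way, the rest of the original snippet (resize_tensor_input, allocate_tensors, invoke, get_tensor on both indices) should work unchanged, at the cost of somewhat higher memory use since all intermediate tensors are kept.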
