
Google Meets Background Segmentation Model? #56

Closed
kirawi opened this issue Dec 23, 2020 · 9 comments

kirawi commented Dec 23, 2020

tensorflow/tfjs#4177

Would it be possible to get a .pb of the raw tflite model listed there, assuming it truly is Apache licensed?


PINTO0309 commented Dec 23, 2020

I'm just going to do a reverse conversion like the other models in MediaPipe, so I can probably convert it to .pb.
google-ai-edge/mediapipe#245

I'll try it tonight.

@simon-lanf

@PINTO0309 that would be a real Christmas miracle.

@PINTO0309

I decided to convert the model while optimizing it, so it will take some time.
[screenshot: 2020-12-24 06:49:46]

@simon-lanf

Take your time :)

@PINTO0309

I have generated and committed models in .pb, .tflite Float32/Float16, INT8, EdgeTPU, TFJS, TF-TRT, CoreML, and OpenVINO IR formats for testing. However, I was too exhausted to write a test program, so I would be very happy if you could help me test them. 😃
https://github.com/PINTO0309/PINTO_model_zoo/tree/master/082_MediaPipe_Meet_Segmentation
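For anyone picking up the testing, the post-processing step can be sketched roughly as follows. This assumes the segmentation head emits a two-channel float map (background, person) in [1, 144, 256, 2] layout; the channel order and whether the outputs are raw logits are assumptions here, so check the converted model's actual output signature first.

```python
import numpy as np

def person_mask(logits, threshold=0.5):
    """Turn assumed [1, 144, 256, 2] segmentation logits into a uint8 mask.

    Channel 1 is assumed to be "person". Applies a numerically stable
    softmax over the channel axis, then thresholds to a binary mask
    (255 where a person is detected, 0 elsewhere).
    """
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    probs = exp / exp.sum(axis=-1, keepdims=True)
    person = probs[0, :, :, 1]
    return (person > threshold).astype(np.uint8) * 255
```

The resulting 144x256 mask would still need to be resized back to the camera frame and smoothed before compositing, as the TFJS demo does.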

@kirawi kirawi closed this as completed Dec 25, 2020

ldenoue commented Jan 16, 2021

I didn’t see the CoreML model: is it somewhere?


ldenoue commented Jan 17, 2021

@PINTO0309 thanks, I saw it now! It works as is, but I am not able to convert my images to the required input.

So I've tried using coremltools to change the input from float32[1,144,256,3] to an RGB image:

import coremltools
import coremltools.proto.FeatureTypes_pb2 as ft

# Load the spec and retype the first input from a float32 [1,144,256,3]
# MultiArray to a 256x144 RGB image.
spec = coremltools.utils.load_spec("model_coreml_float32.mlmodel")
input = spec.description.input[0]
input.type.imageType.colorSpace = ft.ImageFeatureType.RGB
input.type.imageType.height = 144
input.type.imageType.width = 256
coremltools.utils.save_spec(spec, "model_coreml_float32_rgb_input.mlmodel")

Then I get a compilation error in Xcode: coremlc: error: compiler error: Espresso exception: "Invalid blob shape": Cannot broadcast blobs.

Any idea how to fix this?
Or how could I transform my input image (UIImage or CGImage) into the MLMultiArray required by the mlmodel that you generated?

Really great repo, thanks for sharing all your experiments.

Laurent
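One way to sidestep the MLMultiArray question is to do the resize and normalization outside CoreML and feed the model a float32 [1, 144, 256, 3] tensor directly. A minimal numpy sketch follows (nearest-neighbour resize; whether the model wants [0, 1]-normalized values or raw 0–255 floats is an assumption to verify against the original tflite graph):

```python
import numpy as np

def preprocess(rgb_uint8, out_h=144, out_w=256):
    """Resize an HxWx3 uint8 RGB array to the model's assumed input tensor.

    Nearest-neighbour resize via integer index maps, then scale to [0, 1]
    float32 and add a batch dimension -> [1, out_h, out_w, 3].
    """
    h, w, _ = rgb_uint8.shape
    ys = np.arange(out_h) * h // out_h   # source row for each output row
    xs = np.arange(out_w) * w // out_w   # source column for each output column
    resized = rgb_uint8[ys][:, xs]
    return (resized.astype(np.float32) / 255.0)[None, ...]
```

On iOS the same layout would have to be produced from the CGImage's pixel buffer before copying it into an MLMultiArray, but the shape and scaling logic is the same.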


ldenoue commented Feb 20, 2021

@PINTO0309 any idea why the CoreML model complains once the input is converted to RGB?

I noticed that your CoreML model finishes with AddBroadcastable followed by Deconvolution, but after I used coremltools to change the input into RGB, the exported CoreML model finishes with Deconvolution + AddBroadcastable (reversed).

Any idea how to make the CoreML model work with RGB input/output?
