
Commit f5df228

MediaPipe Team authored and camillol committed
Project import generated by Copybara.
PiperOrigin-RevId: 264105834
1 parent 71a47bb commit f5df228

28 files changed, +276 -257 lines

Diff for: mediapipe/docs/face_detection_mobile_gpu.md (+10, -21)
@@ -8,33 +8,24 @@ that performs face detection with TensorFlow Lite on GPU.
 
 ## Android
 
-Please see [Hello World! in MediaPipe on Android](hello_world_android.md) for
-general instructions to develop an Android application that uses MediaPipe.
+[Source](https://github.com/google/mediapipe/tree/master/mediapipe/examples/android/src/java/com/google/mediapipe/apps/facedetectiongpu)
 
-The graph below is used in the
-[Face Detection GPU Android example app](https://github.com/google/mediapipe/tree/master/mediapipe/examples/android/src/java/com/google/mediapipe/apps/facedetectiongpu).
-To build the app, run:
+To build and install the app:
 
 ```bash
 bazel build -c opt --config=android_arm64 mediapipe/examples/android/src/java/com/google/mediapipe/apps/facedetectiongpu
-```
-
-To further install the app on an Android device, run:
-
-```bash
 adb install bazel-bin/mediapipe/examples/android/src/java/com/google/mediapipe/apps/facedetectiongpu/facedetectiongpu.apk
 ```
 
 ## iOS
 
-Please see [Hello World! in MediaPipe on iOS](hello_world_ios.md) for general
-instructions to develop an iOS application that uses MediaPipe.
+[Source](https://github.com/google/mediapipe/tree/master/mediapipe/examples/ios/facedetectiongpu).
+
+See the general [instructions](./mediapipe_ios_setup.md) for building iOS
+examples and generating an Xcode project. This will be the FaceDetectionGpuApp
+target.
 
-The graph below is used in the
-[Face Detection GPU iOS example app](https://github.com/google/mediapipe/tree/master/mediapipe/examples/ios/facedetectiongpu).
-To build the app, please see the general
-[MediaPipe iOS app building and setup instructions](./mediapipe_ios_setup.md).
-Specific to this example, run:
+To build on the command line:
 
 ```bash
 bazel build -c opt --config=ios_arm64 mediapipe/examples/ios/facedetectiongpu:FaceDetectionGpuApp
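
A practical note on the install step above: plain `adb install` fails if an older copy of the app is already on the device. Reinstalling with `-r`, or targeting a specific device with `-s`, avoids that; these are standard adb flags, not something introduced by this commit.

```bash
# Reinstall over an existing copy of the app.
adb install -r bazel-bin/mediapipe/examples/android/src/java/com/google/mediapipe/apps/facedetectiongpu/facedetectiongpu.apk

# With several devices or emulators attached, pick one by serial number.
adb devices
adb -s <device-serial> install -r bazel-bin/mediapipe/examples/android/src/java/com/google/mediapipe/apps/facedetectiongpu/facedetectiongpu.apk
```
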
@@ -51,7 +42,7 @@ below and paste it into [MediaPipe Visualizer](https://viz.mediapipe.dev/).
 
 ```bash
 # MediaPipe graph that performs face detection with TensorFlow Lite on GPU.
-# Used in the example in
+# Used in the examples in
 # mediapipie/examples/android/src/java/com/mediapipe/apps/facedetectiongpu and
 # mediapipie/examples/ios/facedetectiongpu.
 
@@ -227,9 +218,7 @@ node {
   }
 }
 
-# Draws annotations and overlays them on top of a GPU copy of the original
-# image coming into the graph. The calculator assumes that image origin is
-# always at the top-left corner and renders text accordingly.
+# Draws annotations and overlays them on top of the input images.
 node {
   calculator: "AnnotationOverlayCalculator"
   input_stream: "INPUT_FRAME_GPU:throttled_input_video"

Diff for: mediapipe/docs/hair_segmentation_mobile_gpu.md (+16, -42)
@@ -8,20 +8,12 @@ that performs hair segmentation with TensorFlow Lite on GPU.
 
 ## Android
 
-Please see [Hello World! in MediaPipe on Android](hello_world_android.md) for
-general instructions to develop an Android application that uses MediaPipe.
+[Source](https://github.com/google/mediapipe/tree/master/mediapipe/examples/android/src/java/com/google/mediapipe/apps/hairsegmentationgpu)
 
-The graph below is used in the
-[Hair Segmentation GPU Android example app](https://github.com/google/mediapipe/tree/master/mediapipe/examples/android/src/java/com/google/mediapipe/apps/hairsegmentationgpu).
-To build the app, run:
+To build and install the app:
 
 ```bash
 bazel build -c opt --config=android_arm64 mediapipe/examples/android/src/java/com/google/mediapipe/apps/hairsegmentationgpu
-```
-
-To further install the app on an Android device, run:
-
-```bash
 adb install bazel-bin/mediapipe/examples/android/src/java/com/google/mediapipe/apps/hairsegmentationgpu/hairsegmentationgpu.apk
 ```
 
@@ -37,7 +29,7 @@ below and paste it into [MediaPipe Visualizer](https://viz.mediapipe.dev/).
 ```bash
 # MediaPipe graph that performs hair segmentation with TensorFlow Lite on GPU.
 # Used in the example in
-# mediapipie/examples/ios/hairsegmentationgpu.
+# mediapipie/examples/android/src/java/com/mediapipe/apps/hairsegmentationgpu.
 
 # Images on GPU coming into and out of the graph.
 input_stream: "input_video"
@@ -84,14 +76,11 @@ node: {
   }
 }
 
-# Waits for a mask from the previous round of hair segmentation to be fed back
-# as an input, and caches it. Upon the arrival of an input image, it checks if
-# there is a mask cached, and sends out the mask with the timestamp replaced by
-# that of the input image. This is needed so that the "current image" and the
-# "previous mask" share the same timestamp, and as a result can be synchronized
-# and combined in the subsequent calculator. Note that upon the arrival of the
-# very first input frame, an empty packet is sent out to jump start the feedback
-# loop.
+# Caches a mask fed back from the previous round of hair segmentation, and upon
+# the arrival of the next input image sends out the cached mask with the
+# timestamp replaced by that of the input image, essentially generating a packet
+# that carries the previous mask. Note that upon the arrival of the very first
+# input image, an empty packet is sent out to jump start the feedback loop.
 node {
   calculator: "PreviousLoopbackCalculator"
   input_stream: "MAIN:throttled_input_video"
@@ -114,9 +103,9 @@ node {
 
 # Converts the transformed input image on GPU into an image tensor stored in
 # tflite::gpu::GlBuffer. The zero_center option is set to false to normalize the
-# pixel values to [0.f, 1.f] as opposed to [-1.f, 1.f].
-# With the max_num_channels option set to 4, all 4 RGBA channels are contained
-# in the image tensor.
+# pixel values to [0.f, 1.f] as opposed to [-1.f, 1.f]. With the
+# max_num_channels option set to 4, all 4 RGBA channels are contained in the
+# image tensor.
 node {
   calculator: "TfLiteConverterCalculator"
   input_stream: "IMAGE_GPU:mask_embedded_input_video"
@@ -147,7 +136,7 @@ node {
 node {
   calculator: "TfLiteInferenceCalculator"
   input_stream: "TENSORS_GPU:image_tensor"
-  output_stream: "TENSORS:segmentation_tensor"
+  output_stream: "TENSORS_GPU:segmentation_tensor"
   input_side_packet: "CUSTOM_OP_RESOLVER:op_resolver"
   node_options: {
     [type.googleapis.com/mediapipe.TfLiteInferenceCalculatorOptions] {
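
The options block opened by the hunk's last line typically names the TFLite model and enables GPU inference. A hedged sketch, where the model path and the use_gpu flag are assumptions rather than lines shown in this diff:

```bash
# Sketch only: model path and use_gpu value are assumptions.
node_options: {
  [type.googleapis.com/mediapipe.TfLiteInferenceCalculatorOptions] {
    model_path: "mediapipe/models/hair_segmentation.tflite"
    use_gpu: true
  }
}
```
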
@@ -157,23 +146,15 @@ node {
   }
 }
 
-# The next step (tensors to segmentation) is not yet supported on iOS GPU.
-# Convert the previous segmentation mask to CPU for processing.
-node: {
-  calculator: "GpuBufferToImageFrameCalculator"
-  input_stream: "previous_hair_mask"
-  output_stream: "previous_hair_mask_cpu"
-}
-
 # Decodes the segmentation tensor generated by the TensorFlow Lite model into a
-# mask of values in [0.f, 1.f], stored in the R channel of a CPU buffer. It also
+# mask of values in [0.f, 1.f], stored in the R channel of a GPU buffer. It also
 # takes the mask generated previously as another input to improve the temporal
 # consistency.
 node {
   calculator: "TfLiteTensorsToSegmentationCalculator"
-  input_stream: "TENSORS:segmentation_tensor"
-  input_stream: "PREV_MASK:previous_hair_mask_cpu"
-  output_stream: "MASK:hair_mask_cpu"
+  input_stream: "TENSORS_GPU:segmentation_tensor"
+  input_stream: "PREV_MASK_GPU:previous_hair_mask"
+  output_stream: "MASK_GPU:hair_mask"
   node_options: {
     [type.googleapis.com/mediapipe.TfLiteTensorsToSegmentationCalculatorOptions] {
       tensor_width: 512
@@ -185,13 +166,6 @@ node {
   }
 }
 
-# Send the current segmentation mask to GPU for the last step, blending.
-node: {
-  calculator: "ImageFrameToGpuBufferCalculator"
-  input_stream: "hair_mask_cpu"
-  output_stream: "hair_mask"
-}
-
 # Colors the hair segmentation with the color specified in the option.
 node {
   calculator: "RecolorCalculator"

Diff for: mediapipe/docs/hand_detection_mobile_gpu.md (+28, -18)
@@ -20,33 +20,32 @@ confidence score to generate the hand rectangle, to be further utilized in the
 
 ## Android
 
-Please see [Hello World! in MediaPipe on Android](hello_world_android.md) for
-general instructions to develop an Android application that uses MediaPipe.
+[Source](https://github.com/google/mediapipe/tree/master/mediapipe/examples/android/src/java/com/google/mediapipe/apps/handdetectiongpu)
 
-The graph below is used in the
-[Hand Detection GPU Android example app](https://github.com/google/mediapipe/tree/master/mediapipe/examples/android/src/java/com/google/mediapipe/apps/handdetectiongpu).
-To build the app, run:
+An arm64 APK can be
+[downloaded here](https://drive.google.com/open?id=1qUlTtH7Ydg-wl_H6VVL8vueu2UCTu37E).
+
+To build the app yourself:
 
 ```bash
 bazel build -c opt --config=android_arm64 mediapipe/examples/android/src/java/com/google/mediapipe/apps/handdetectiongpu
 ```
 
-To further install the app on an Android device, run:
+Once the app is built, install it on Android device with:
 
 ```bash
 adb install bazel-bin/mediapipe/examples/android/src/java/com/google/mediapipe/apps/handdetectiongpu/handdetectiongpu.apk
 ```
 
 ## iOS
 
-Please see [Hello World! in MediaPipe on iOS](hello_world_ios.md) for general
-instructions to develop an iOS application that uses MediaPipe.
+[Source](https://github.com/google/mediapipe/tree/master/mediapipe/examples/ios/handdetectiongpu).
+
+See the general [instructions](./mediapipe_ios_setup.md) for building iOS
+examples and generating an Xcode project. This will be the HandDetectionGpuApp
+target.
 
-The graph below is used in the
-[Hand Detection GPU iOS example app](https://github.com/google/mediapipe/tree/master/mediapipe/examples/ios/handdetectiongpu).
-To build the app, please see the general
-[MediaPipe iOS app building and setup instructions](./mediapipe_ios_setup.md).
-Specific to this example, run:
+To build on the command line:
 
 ```bash
 bazel build -c opt --config=ios_arm64 mediapipe/examples/ios/handdetectiongpu:HandDetectionGpuApp
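
Installing the prebuilt arm64 APK linked above uses the same adb path as the source build; the download location below is only an example.

```bash
# Assumes the APK was saved to ~/Downloads/handdetectiongpu.apk.
adb install -r ~/Downloads/handdetectiongpu.apk
```
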
@@ -70,14 +69,24 @@ Visualizing Subgraphs section in the
 
 ```bash
 # MediaPipe graph that performs hand detection with TensorFlow Lite on GPU.
-# Used in the example in
-# mediapipie/examples/android/src/java/com/mediapipe/apps/handdetectiongpu.
+# Used in the examples in
+# mediapipie/examples/android/src/java/com/mediapipe/apps/handdetectiongpu and
 # mediapipie/examples/ios/handdetectiongpu.
 
 # Images coming into and out of the graph.
 input_stream: "input_video"
 output_stream: "output_video"
 
+# Throttles the images flowing downstream for flow control. It passes through
+# the very first incoming image unaltered, and waits for HandDetectionSubgraph
+# downstream in the graph to finish its tasks before it passes through another
+# image. All images that come in while waiting are dropped, limiting the number
+# of in-flight images in HandDetectionSubgraph to 1. This prevents the nodes in
+# HandDetectionSubgraph from queuing up incoming images and data excessively,
+# which leads to increased latency and memory usage, unwanted in real-time
+# mobile applications. It also eliminates unnecessary computation, e.g., the
+# output produced by a node in the subgraph may get dropped downstream if the
+# subsequent nodes are still busy processing previous inputs.
 node {
   calculator: "FlowLimiterCalculator"
   input_stream: "input_video"
@@ -89,6 +98,7 @@ node {
   output_stream: "throttled_input_video"
 }
 
+# Subgraph that detects hands (see hand_detection_gpu.pbtxt).
 node {
   calculator: "HandDetectionSubgraph"
   input_stream: "throttled_input_video"
@@ -123,7 +133,7 @@ node {
   }
 }
 
-# Draws annotations and overlays them on top of the input image into the graph.
+# Draws annotations and overlays them on top of the input images.
 node {
   calculator: "AnnotationOverlayCalculator"
   input_stream: "INPUT_FRAME_GPU:throttled_input_video"
@@ -271,8 +281,8 @@ node {
   }
 }
 
-# Maps detection label IDs to the corresponding label text. The label map is
-# provided in the label_map_path option.
+# Maps detection label IDs to the corresponding label text ("Palm"). The label
+# map is provided in the label_map_path option.
 node {
   calculator: "DetectionLabelIdToTextCalculator"
   input_stream: "filtered_detections"
