Facemesh android cpu crash #5848

Closed

yonger001 opened this issue Feb 5, 2025 · 9 comments

Comments

@yonger001

Have I written custom code (as opposed to using a stock example script provided in MediaPipe)

None

OS Platform and Distribution

Linux Ubuntu 20.04; Android 14, Samsung SM-A5360

MediaPipe Tasks SDK version

MediaPipe v0.10.20 (latest, released Dec 19, 2024)

Task name (e.g. Image classification, Gesture recognition etc.)

FaceMesh

Programming Language and version (e.g. C++, Python, Java)

Android

Describe the actual behavior

When RUN_ON_GPU = false is set, tapping START CAMERA causes an immediate crash.

Describe the expected behaviour

The FaceMesh task runs successfully in camera mode on the CPU.

Standalone code/steps you may have used to try to get what you need

Just running the FaceMesh example from MediaPipe v0.10.20.

Other info / Complete Logs

kuaashish added the os:linux-non-arm and platform::android labels Feb 5, 2025
@kuaashish (Collaborator)

Hi @yonger001,

Please provide the complete standalone code so we can understand and, if needed, reproduce the issue. Alternatively, point us to the documentation you are following. This will help us understand the issue better.

Thank you!!

kuaashish added the stat:awaiting response label Feb 5, 2025
@yonger001 (Author)

1. Full path: com/google/mediapipe/examples/facemesh/MainActivity.java
2. MainActivity.java:

```java
// Copyright 2021 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package com.google.mediapipe.examples.facemesh;

import android.content.Intent;
import android.graphics.Bitmap;
import android.graphics.Matrix;
import android.os.Bundle;
import android.provider.MediaStore;
import androidx.appcompat.app.AppCompatActivity;
import android.util.Log;
import android.view.View;
import android.widget.Button;
import android.widget.FrameLayout;
import androidx.activity.result.ActivityResultLauncher;
import androidx.activity.result.contract.ActivityResultContracts;
import androidx.exifinterface.media.ExifInterface;
// ContentResolver dependency
import com.google.mediapipe.formats.proto.LandmarkProto.NormalizedLandmark;
import com.google.mediapipe.solutioncore.CameraInput;
import com.google.mediapipe.solutioncore.SolutionGlSurfaceView;
import com.google.mediapipe.solutioncore.VideoInput;
import com.google.mediapipe.solutions.facemesh.FaceMesh;
import com.google.mediapipe.solutions.facemesh.FaceMeshOptions;
import com.google.mediapipe.solutions.facemesh.FaceMeshResult;
import java.io.IOException;
import java.io.InputStream;

/** Main activity of MediaPipe Face Mesh app. */
public class MainActivity extends AppCompatActivity {
  private static final String TAG = "MainActivity";

  // Timestamps used to measure per-frame detection latency.
  private long startTime;
  private long endTime;

  private FaceMesh facemesh;
  // Run the pipeline and the model inference on GPU or CPU.
  private static final boolean RUN_ON_GPU = false; // true: GPU, false: CPU

  private enum InputSource {
    UNKNOWN,
    IMAGE,
    VIDEO,
    CAMERA,
  }

  private InputSource inputSource = InputSource.UNKNOWN;
  // Image demo UI and image loader components.
  private ActivityResultLauncher<Intent> imageGetter;
  private FaceMeshResultImageView imageView;
  // Video demo UI and video loader components.
  private VideoInput videoInput;
  private ActivityResultLauncher<Intent> videoGetter;
  // Live camera demo UI and camera components.
  private CameraInput cameraInput;

  private SolutionGlSurfaceView<FaceMeshResult> glSurfaceView;

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    // TODO: Add a toggle to switch between the original face mesh and attention mesh.
    setupStaticImageDemoUiComponents();
    setupVideoDemoUiComponents();
    setupLiveDemoUiComponents();
  }

  @Override
  protected void onResume() {
    super.onResume();
    if (inputSource == InputSource.CAMERA) {
      // Restarts the camera and the opengl surface rendering.
      cameraInput = new CameraInput(this);
      cameraInput.setNewFrameListener(textureFrame -> facemesh.send(textureFrame));
      glSurfaceView.post(this::startCamera);
      glSurfaceView.setVisibility(View.VISIBLE);
    } else if (inputSource == InputSource.VIDEO) {
      videoInput.resume();
    }
  }

  @Override
  protected void onPause() {
    super.onPause();
    if (inputSource == InputSource.CAMERA) {
      glSurfaceView.setVisibility(View.GONE);
      cameraInput.close();
    } else if (inputSource == InputSource.VIDEO) {
      videoInput.pause();
    }
  }

  private Bitmap downscaleBitmap(Bitmap originalBitmap) {
    double aspectRatio = (double) originalBitmap.getWidth() / originalBitmap.getHeight();
    int width = imageView.getWidth();
    int height = imageView.getHeight();
    if (((double) imageView.getWidth() / imageView.getHeight()) > aspectRatio) {
      width = (int) (height * aspectRatio);
    } else {
      height = (int) (width / aspectRatio);
    }
    return Bitmap.createScaledBitmap(originalBitmap, width, height, false);
  }

  private Bitmap rotateBitmap(Bitmap inputBitmap, InputStream imageData) throws IOException {
    int orientation =
        new ExifInterface(imageData)
            .getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL);
    if (orientation == ExifInterface.ORIENTATION_NORMAL) {
      return inputBitmap;
    }
    Matrix matrix = new Matrix();
    switch (orientation) {
      case ExifInterface.ORIENTATION_ROTATE_90:
        matrix.postRotate(90);
        break;
      case ExifInterface.ORIENTATION_ROTATE_180:
        matrix.postRotate(180);
        break;
      case ExifInterface.ORIENTATION_ROTATE_270:
        matrix.postRotate(270);
        break;
      default:
        matrix.postRotate(0);
    }
    return Bitmap.createBitmap(
        inputBitmap, 0, 0, inputBitmap.getWidth(), inputBitmap.getHeight(), matrix, true);
  }

  /** Sets up the UI components for the static image demo. */
  private void setupStaticImageDemoUiComponents() {
    // The Intent to access gallery and read images as bitmap.
    imageGetter =
        registerForActivityResult(
            new ActivityResultContracts.StartActivityForResult(),
            result -> {
              Intent resultIntent = result.getData();
              if (resultIntent != null) {
                if (result.getResultCode() == RESULT_OK) {
                  Bitmap bitmap = null;
                  try {
                    bitmap =
                        downscaleBitmap(
                            MediaStore.Images.Media.getBitmap(
                                this.getContentResolver(), resultIntent.getData()));
                  } catch (IOException e) {
                    Log.e(TAG, "Bitmap reading error:" + e);
                  }
                  try {
                    InputStream imageData =
                        this.getContentResolver().openInputStream(resultIntent.getData());
                    bitmap = rotateBitmap(bitmap, imageData);
                  } catch (IOException e) {
                    Log.e(TAG, "Bitmap rotation error:" + e);
                  }
                  if (bitmap != null) {
                    facemesh.send(bitmap);
                  }
                }
              }
            });
    Button loadImageButton = findViewById(R.id.button_load_picture);
    loadImageButton.setOnClickListener(
        v -> {
          if (inputSource != InputSource.IMAGE) {
            stopCurrentPipeline();
            setupStaticImageModePipeline();
          }
          // Reads images from gallery.
          Intent pickImageIntent = new Intent(Intent.ACTION_PICK);
          pickImageIntent.setDataAndType(MediaStore.Images.Media.INTERNAL_CONTENT_URI, "image/*");
          imageGetter.launch(pickImageIntent);
        });
    imageView = new FaceMeshResultImageView(this);
  }

  /** Sets up core workflow for static image mode. */
  private void setupStaticImageModePipeline() {
    this.inputSource = InputSource.IMAGE;
    // Initializes a new MediaPipe Face Mesh solution instance in the static image mode.
    facemesh =
        new FaceMesh(
            this,
            FaceMeshOptions.builder()
                .setStaticImageMode(true)
                .setRefineLandmarks(true)
                .setRunOnGpu(RUN_ON_GPU)
                .build());

    // Connects MediaPipe Face Mesh solution to the user-defined FaceMeshResultImageView.
    facemesh.setResultListener(
        faceMeshResult -> {
          startTime = System.currentTimeMillis();

          logNoseLandmark(faceMeshResult, /*showPixelValues=*/ true);
          imageView.setFaceMeshResult(faceMeshResult);
          runOnUiThread(() -> imageView.update());

          endTime = System.currentTimeMillis();
          Log.d(
              "StaticImage FaceMeshTiming",
              "FaceMesh elapsed time: " + (endTime - startTime) + " ms");
        });
    facemesh.setErrorListener((message, e) -> Log.e(TAG, "MediaPipe Face Mesh error:" + message));

    // Updates the preview layout.
    FrameLayout frameLayout = findViewById(R.id.preview_display_layout);
    frameLayout.removeAllViewsInLayout();
    imageView.setImageDrawable(null);
    frameLayout.addView(imageView);
    imageView.setVisibility(View.VISIBLE);
  }

  /** Sets up the UI components for the video demo. */
  private void setupVideoDemoUiComponents() {
    // The Intent to access gallery and read a video file.
    videoGetter =
        registerForActivityResult(
            new ActivityResultContracts.StartActivityForResult(),
            result -> {
              Intent resultIntent = result.getData();
              if (resultIntent != null) {
                if (result.getResultCode() == RESULT_OK) {
                  glSurfaceView.post(
                      () ->
                          videoInput.start(
                              this,
                              resultIntent.getData(),
                              facemesh.getGlContext(),
                              glSurfaceView.getWidth(),
                              glSurfaceView.getHeight()));
                }
              }
            });
    Button loadVideoButton = findViewById(R.id.button_load_video);
    loadVideoButton.setOnClickListener(
        v -> {
          stopCurrentPipeline();
          setupStreamingModePipeline(InputSource.VIDEO);
          // Reads video from gallery.
          Intent pickVideoIntent = new Intent(Intent.ACTION_PICK);
          pickVideoIntent.setDataAndType(MediaStore.Video.Media.INTERNAL_CONTENT_URI, "video/*");
          videoGetter.launch(pickVideoIntent);
        });
  }

  /** Sets up the UI components for the live demo with camera input. */
  private void setupLiveDemoUiComponents() {
    Button startCameraButton = findViewById(R.id.button_start_camera);
    startCameraButton.setOnClickListener(
        v -> {
          if (inputSource == InputSource.CAMERA) {
            return;
          }
          stopCurrentPipeline();
          setupStreamingModePipeline(InputSource.CAMERA);
        });
  }

  /** Sets up core workflow for streaming mode. */
  private void setupStreamingModePipeline(InputSource inputSource) {
    this.inputSource = inputSource;
    // Initializes a new MediaPipe Face Mesh solution instance in the streaming mode.
    facemesh =
        new FaceMesh(
            this,
            FaceMeshOptions.builder()
                .setStaticImageMode(false)
                .setRefineLandmarks(true)
                .setRunOnGpu(RUN_ON_GPU)
                .build());
    facemesh.setErrorListener((message, e) -> Log.e(TAG, "MediaPipe Face Mesh error:" + message));

    if (inputSource == InputSource.CAMERA) {
      cameraInput = new CameraInput(this);
      cameraInput.setNewFrameListener(
          textureFrame -> {
            startTime = System.currentTimeMillis(); // Record start time.
            facemesh.send(textureFrame);
            endTime = System.currentTimeMillis(); // Record end time.
            Log.d(
                "FaceMeshTiming",
                "Single-face landmark detection took: " + (endTime - startTime) + " ms");
          });
    } else if (inputSource == InputSource.VIDEO) {
      videoInput = new VideoInput(this);
      videoInput.setNewFrameListener(textureFrame -> facemesh.send(textureFrame));
    }

    // Initializes a new Gl surface view with a user-defined FaceMeshResultGlRenderer.
    glSurfaceView =
        new SolutionGlSurfaceView<>(this, facemesh.getGlContext(), facemesh.getGlMajorVersion());
    glSurfaceView.setSolutionResultRenderer(new FaceMeshResultGlRenderer());
    glSurfaceView.setRenderInputImage(true);

    // ----------------------- added 2025-02-05
    // glSurfaceView.getLayoutParams().width = 640;
    // glSurfaceView.getLayoutParams().height = 360;
    // glSurfaceView.requestLayout();
    // -----------------------

    facemesh.setResultListener(
        faceMeshResult -> {
          logNoseLandmark(faceMeshResult, /*showPixelValues=*/ false);
          glSurfaceView.setRenderData(faceMeshResult);
          glSurfaceView.requestRender();
        });

    // The runnable to start camera after the gl surface view is attached.
    // For video input source, videoInput.start() will be called when the video uri is available.
    if (inputSource == InputSource.CAMERA) {
      glSurfaceView.post(this::startCamera);
    }

    // Updates the preview layout.
    FrameLayout frameLayout = findViewById(R.id.preview_display_layout);
    imageView.setVisibility(View.GONE);
    frameLayout.removeAllViewsInLayout();
    frameLayout.addView(glSurfaceView);
    glSurfaceView.setVisibility(View.VISIBLE);

    // -------------- added 2025-02-05
    // glSurfaceView.getLayoutParams().width = 640;
    // glSurfaceView.getLayoutParams().height = 360;
    // glSurfaceView.requestLayout();
    // --------------
    frameLayout.requestLayout();
  }

  private void startCamera() {
    Log.d(TAG, "Starting Camera...");
    cameraInput.start(
        this,
        facemesh.getGlContext(),
        CameraInput.CameraFacing.FRONT,
        glSurfaceView.getWidth(),
        glSurfaceView.getHeight());
    // glSurfaceView.getLayoutParams().width = 640;
    // glSurfaceView.getLayoutParams().height = 360;
    // glSurfaceView.requestLayout();
  }

  private void stopCurrentPipeline() {
    if (cameraInput != null) {
      cameraInput.setNewFrameListener(null);
      cameraInput.close();
    }
    if (videoInput != null) {
      videoInput.setNewFrameListener(null);
      videoInput.close();
    }
    if (glSurfaceView != null) {
      glSurfaceView.setVisibility(View.GONE);
    }
    if (facemesh != null) {
      facemesh.close();
    }
  }

  private void logNoseLandmark(FaceMeshResult result, boolean showPixelValues) {
    if (result == null || result.multiFaceLandmarks().isEmpty()) {
      return;
    }
    NormalizedLandmark noseLandmark = result.multiFaceLandmarks().get(0).getLandmarkList().get(1);
    // For Bitmaps, show the pixel values. For texture inputs, show the normalized coordinates.
    if (showPixelValues) {
      int width = result.inputBitmap().getWidth();
      int height = result.inputBitmap().getHeight();
      Log.i(
          TAG,
          String.format(
              "MediaPipe Face Mesh nose coordinates (pixel values): x=%f, y=%f",
              noseLandmark.getX() * width, noseLandmark.getY() * height));
    } else {
      Log.i(
          TAG,
          String.format(
              "MediaPipe Face Mesh nose normalized coordinates (value range: [0, 1]): x=%f, y=%f",
              noseLandmark.getX(), noseLandmark.getY()));
    }
  }
}
```

google-ml-butler bot removed the stat:awaiting response label Feb 5, 2025
@kuaashish (Collaborator)

Hi @yonger001,

Unfortunately, you are using the legacy Face Mesh solution for Android, which has been upgraded and is now part of the Face Landmarker Task API. You can find the overview page here and the implementation guide for Android here. Support for the legacy Face Mesh has been completely discontinued. Please implement the updated API on a physical Android device and let us know if you encounter any issues.
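For orientation, here is a minimal sketch of CPU-only, live-stream setup with the Face Landmarker Task API. It is not a drop-in replacement: it assumes the `face_landmarker.task` model file is bundled under the app's assets, and `FaceLandmarkerHelper`/`onResult` are hypothetical names used only for illustration:

```java
import android.content.Context;
import android.util.Log;
import com.google.mediapipe.framework.image.MPImage;
import com.google.mediapipe.tasks.core.BaseOptions;
import com.google.mediapipe.tasks.core.Delegate;
import com.google.mediapipe.tasks.vision.core.RunningMode;
import com.google.mediapipe.tasks.vision.facelandmarker.FaceLandmarker;
import com.google.mediapipe.tasks.vision.facelandmarker.FaceLandmarker.FaceLandmarkerOptions;
import com.google.mediapipe.tasks.vision.facelandmarker.FaceLandmarkerResult;

/** Hypothetical helper showing CPU-only, live-stream Face Landmarker setup. */
final class FaceLandmarkerHelper {
  private static final String TAG = "FaceLandmarkerHelper";

  static FaceLandmarker create(Context context) {
    FaceLandmarkerOptions options =
        FaceLandmarkerOptions.builder()
            .setBaseOptions(
                BaseOptions.builder()
                    // Model file bundled under src/main/assets/.
                    .setModelAssetPath("face_landmarker.task")
                    // CPU inference, the equivalent of RUN_ON_GPU = false.
                    .setDelegate(Delegate.CPU)
                    .build())
            // LIVE_STREAM mode: camera frames are fed through detectAsync().
            .setRunningMode(RunningMode.LIVE_STREAM)
            .setNumFaces(1)
            .setResultListener(FaceLandmarkerHelper::onResult)
            .setErrorListener(e -> Log.e(TAG, "Face Landmarker error", e))
            .build();
    return FaceLandmarker.createFromOptions(context, options);
  }

  private static void onResult(FaceLandmarkerResult result, MPImage input) {
    if (!result.faceLandmarks().isEmpty()) {
      // Normalized landmarks for each detected face.
      Log.d(TAG, "Landmarks for first face: " + result.faceLandmarks().get(0).size());
    }
  }
}
```

In LIVE_STREAM mode each camera frame is wrapped as an `MPImage` and passed to `faceLandmarker.detectAsync(mpImage, frameTimestampMs)` with a monotonically increasing timestamp; results arrive asynchronously on the result listener.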

Thank you!!

kuaashish added the legacy:face mesh, type:support, and stat:awaiting response labels Feb 5, 2025
@yonger001 (Author)

Do you mean that the FaceMesh in v0.10.20 of the MediaPipe solutions examples is the legacy version?

google-ml-butler bot removed the stat:awaiting response label Feb 5, 2025
@kuaashish (Collaborator)

Hi @yonger001,

Correct. Support for this has not been maintained since the introduction of the Face Landmarker. You can refer to the documentation for details on the upgraded Task API solution. The newer Maven package for Face Landmarker is available here, and you can find the overview page and Android details in the comment above.
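For reference, the Task API is pulled in as a single Gradle dependency in the app module's build.gradle; a minimal sketch (the version string is illustrative — use the latest published on Maven Central):

```gradle
dependencies {
    // MediaPipe Tasks vision bundle, which includes Face Landmarker.
    implementation 'com.google.mediapipe:tasks-vision:0.10.20'
}
```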

Thank you!!

kuaashish added the stat:awaiting response label Feb 5, 2025
@kuaashish (Collaborator)

Hi @yonger001,

Please review the above comment and try using the newer API. We also recommend closing this issue so we can mark it as resolved internally. If you encounter any further issues with the newer API, please raise a new issue with a complete error log and steps to reproduce. We will certainly look into it.

Thank you!!

google-ml-butler bot removed the stat:awaiting response label Feb 6, 2025
kuaashish added the stat:awaiting response label Feb 6, 2025

This issue has been marked stale because it has had no recent activity for 7 days. It will be closed if no further activity occurs. Thank you.

github-actions bot added the stale label Feb 14, 2025

This issue was closed due to lack of activity after being marked stale for the past 7 days.


kuaashish removed the stat:awaiting response and stale labels Feb 21, 2025