
@Deepchavda007 Deepchavda007 commented May 14, 2025

Summary

  1. Automatic device selection: Inference now automatically selects the best available hardware — preferring Apple Silicon (MPS), falling back to CUDA if available, and then to CPU.
  2. Modern Python environment support via uv: Added instructions and compatibility for using uv, a fast dependency manager designed for Python 3.10+, as an alternative to Conda.
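For reference, a uv-based setup can look like the following sketch (the `requirements.txt` filename is an assumption about this repo's layout, not confirmed by the PR):

```shell
# Create a virtual environment in .venv (does not install anything by itself)
uv venv

# Activate it (macOS/Linux)
source .venv/bin/activate

# Install dependencies with uv's pip-compatible interface
uv pip install -r requirements.txt
```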

  • predict.py

    • Replaced hardcoded "mps" device usage with a dynamic fallback:

      import logging

      import torch

      logger = logging.getLogger(__name__)

      # Prefer Apple Silicon (MPS), then CUDA, then CPU
      if torch.backends.mps.is_available():
          DEVICE = torch.device("mps")
          logger.info("Using MPS device")
      elif torch.cuda.is_available():
          DEVICE = torch.device("cuda")
          logger.info("Using CUDA device")
      else:
          DEVICE = torch.device("cpu")
          logger.info("Using CPU device")

Type of change

  • Improvement
  • Documentation update
  • Bug fix

Checklist

  • Inference tested on MacBook (MPS)
  • Inference tested with CUDA-enabled GPU
  • Tested with uv environment setup and pip fallback
  • Backward-compatible with conda environments

pizzato added a commit to pizzato/ml-fastvlm that referenced this pull request Aug 27, 2025
…isplay-issue-on-startup

Revert "Icons and names and other manual changes"
