v0.22.0
🚀 Added
🔥 YOLOv11 in `inference` 🔥
We're excited to announce that YOLOv11 has been added to `inference`! 🚀 You can now use both `inference` and the `inference` server to get predictions from the latest YOLOv11 model. 🔥
All thanks to @probicheaux and @SolomonLake 🏅
skateboard_yolov11.mov
Try the model in the `inference` Python package:
```python
import cv2
from inference import get_model

# load an image and the YOLOv11 nano checkpoint (640px input)
image = cv2.imread("<your-image>")
model = get_model("yolov11n-640")

# run inference and print the predictions
predictions = model.infer(image)
print(predictions)
```
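If you prefer calling the `inference` server instead of running the model in-process, a minimal sketch along these lines should work with `inference_sdk`. The hosted API URL and the `yolov11n-640` alias are taken from the snippet above; swap in the URL of your self-hosted server if you run one.

```python
from inference_sdk import InferenceHTTPClient

# point the client at the hosted API or your self-hosted inference server
client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="<YOUR-API-KEY>",
)

# request YOLOv11 predictions for an image path, URL or numpy array
predictions = client.infer("<your-image>", model_id="yolov11n-640")
print(predictions)
```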
💪 Workflows update
Google Vision OCR in workflows
Thanks to an open source contribution from @brunopicinin, Google Vision OCR is now integrated into the Workflows ecosystem. Great to see contributions from the open source community 🏅
google_vision_ocr.mp4
See the 📖 documentation of the new block to explore its capabilities.
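To give a sense of how the block slots into a Workflow, here is a minimal definition sketch. The `roboflow_core/google_vision_ocr@v1` type identifier, the `ocr_type` value and the `text` output field are assumptions based on the block's description - check the documentation linked above for the exact manifest.

```python
from inference_sdk import InferenceHTTPClient

# assumed block identifier and fields - verify against the block documentation
ocr_workflow = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/google_vision_ocr@v1",  # assumed identifier
            "name": "ocr",
            "image": "$inputs.image",
            "ocr_type": "text_detection",
            "api_key": "<YOUR-GOOGLE-VISION-API-KEY>",
        }
    ],
    "outputs": [
        {"type": "JsonField", "name": "extracted_text", "selector": "$steps.ocr.text"}
    ],
}

client = InferenceHTTPClient(api_url="https://detect.roboflow.com", api_key="<YOUR-API-KEY>")
results = client.run_workflow(specification=ocr_workflow, images={"image": "<YOUR-IMAGE>"})
```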
Image stitching Workflow block
📷 Is a single camera unable to cover the whole area you want to observe? Don't worry! @grzegorz-roboflow just added a Workflow block that combines the views of multiple cameras into a single image, which can then be processed further in your Workflow. A minimal definition sketch is shown after the table below.
| image 1 | image 2 | stitched image |
|---|---|---|
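A minimal sketch of a Workflow definition using the stitching block is shown below. The `roboflow_core/stitch_images@v1` type identifier, the `image1`/`image2` input fields and the `stitched_image` output are assumptions based on the block's description - consult the block documentation for the exact manifest.

```python
# assumed block identifier and field names - verify against the block documentation
stitch_workflow = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image_1"},
        {"type": "WorkflowImage", "name": "image_2"},
    ],
    "steps": [
        {
            "type": "roboflow_core/stitch_images@v1",  # assumed identifier
            "name": "stitch",
            "image1": "$inputs.image_1",
            "image2": "$inputs.image_2",
        }
    ],
    "outputs": [
        {"type": "JsonField", "name": "stitched_image", "selector": "$steps.stitch.stitched_image"}
    ],
}
```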
📏 Size measurement block
Thanks to @chandlersupple, we can now measure the actual size of objects with Workflows! Take a look at the 📖 documentation to discover how the block works.
Workflows profiler and Execution Engine speedup 🏇
We've added the Workflows Profiler - an ecosystem extension for profiling the execution of your Workflow. It works for `inference` server requests (both self-hosted and on the Roboflow platform) as well as for `InferencePipeline`.
The cool thing about the profiler is that it is compatible with `chrome://tracing` - so you can easily grab the profiler output and render it in the Google Chrome browser.
To profile your Workflow execution, use the following code snippet - traces are saved in the `./inference_profiling` directory by default.
```python
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="<YOUR-API-KEY>",
)

# enable_profiling=True dumps execution traces (chrome://tracing format)
# into ./inference_profiling by default
results = client.run_workflow(
    workspace_name="<your-workspace>",
    workflow_id="<your-workflow-id>",
    images={
        "image": "<YOUR-IMAGE>",
    },
    enable_profiling=True,
)
```
See the detailed report on the speed optimisations in PR #710.
❗ Important note
As part of the speed optimisation we enabled server-side caching of Workflow definitions saved on the Roboflow Platform - if you frequently change your Workflow and want to see the results immediately, pass the `use_cache=False` parameter to the `client.run_workflow(...)` method, as shown below.
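For example (a sketch built from the profiling snippet above):

```python
results = client.run_workflow(
    workspace_name="<your-workspace>",
    workflow_id="<your-workflow-id>",
    images={"image": "<YOUR-IMAGE>"},
    use_cache=False,  # skip the server-side definition cache so fresh edits take effect
)
```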
🔧 Fixed
- Fix prometheus scraping by @robiscoding in #712
- Fix the problem with VLMs on batch inference by @PawelPeczek-Roboflow in #718
🌱 Changed
- Docker auto-reload configuration by @EmilyGavrilenko in #703
- Multi-Label Classification UQL Operations by @EmilyGavrilenko in #714
- Add port forward to Notebook Landing Page message by @hansent in #711
- Add optional descriptions to dynamic blocks by @EmilyGavrilenko in #702
- Add descriptions to task types in `VLM as Detector` by @PawelPeczek-Roboflow in #704
- Add tests for Google Vision OCR by @PawelPeczek-Roboflow in #715
- Improvements regarding custom python blocks by @PawelPeczek-Roboflow in #716
🏅 New Contributors
We want to honor @brunopicinin, who made their first contribution to `inference` in #709 as part of Hacktoberfest 2024. We invite other open-source community members to contribute 😄
Full Changelog: v0.21.1...v0.22.0