
v0.9.18

@PawelPeczek-Roboflow PawelPeczek-Roboflow released this 25 Mar 12:08
2185ed7

🚀 Added

🎥 Multiple video sources 🤝 InferencePipeline

Previous versions of InferencePipeline supported only a single video source. From now on, you can pass multiple videos into a single pipeline and have all of them processed! Here is a demo:

(Demo video: demo_short.mp4)

Here's how to achieve the result:

from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes

# pass a list of references to process multiple videos in one pipeline
pipeline = InferencePipeline.init(
    video_reference=["your_video.mp4", "your_other_video.mp4"],
    model_id="yolov8n-640",
    on_prediction=render_boxes,
)
pipeline.start()
pipeline.join()
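Beyond the built-in render_boxes sink, you can plug in your own on_prediction callback. Below is a minimal sketch, assuming each video frame object exposes a source_id attribute identifying which video it came from (check the 📖 documentation for the exact callback signature in multi-source mode); the my_sink name and the stand-in frame are illustrative only:

```python
from types import SimpleNamespace


def my_sink(predictions: dict, video_frame) -> None:
    # predictions is the model output dict; video_frame tells us
    # which of the pipeline's video sources produced this frame
    detections = predictions.get("predictions", [])
    print(f"source {video_frame.source_id}: {len(detections)} detections")


# quick demonstration with a stand-in frame object
stub_frame = SimpleNamespace(source_id=0)
my_sink({"predictions": [{"class": "car"}]}, stub_frame)
```

To use it, pass on_prediction=my_sink when calling InferencePipeline.init.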

A lot of internal changes were made, but most users should not experience any breaking changes. Please visit our 📖 documentation to discover all the differences. If you are affected by the changes we needed to introduce, here is the 🔧 migration guide.

Barcode detector in workflows

Thanks to @chandlersupple, we now have the ability to detect and read barcodes in workflows.

Visit our 📖 documentation to see how to bring this step into your workflow.
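For orientation, here is a hypothetical workflow specification embedding a barcode step. The step type name ("BarcodeDetector") and field names below are assumptions for illustration and may differ from the released block schema, so consult the 📖 documentation for the authoritative shape:

```python
# Hypothetical workflow specification - step type and field names are
# illustrative assumptions; check the workflows docs for the real schema.
BARCODE_WORKFLOW = {
    "version": "1.0",
    "inputs": [{"type": "InferenceImage", "name": "image"}],
    "steps": [
        {
            "type": "BarcodeDetector",
            "name": "barcode_step",
            "image": "$inputs.image",
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "barcodes",
            "selector": "$steps.barcode_step.predictions",
        }
    ],
}
```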

🌱 Changed

Easier data collection in inference 🔥

We've introduced a new parameter handled by the inference server (including hosted inference on the Roboflow platform). This parameter, active_learning_target_dataset, can now be added to requests to specify the Roboflow project where collected data should be stored.

Thanks to this change, you can now collect datasets while using Universe models. We've also updated the Active Learning 📖 docs.

from inference_sdk import InferenceHTTPClient, InferenceConfiguration

# prepare and set configuration
configuration = InferenceConfiguration(
    active_learning_target_dataset="my_dataset",
)
client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="<YOUR_ROBOFLOW_API_KEY>",
).configure(configuration)

# run a normal request and have your data sampled 🤯
client.infer(
    "./path_to/your_image.jpg",
    model_id="yolov8n-640",
)

Other changes

🔨 Fixed

Thanks to contributions from @hvaria 🏅, two problems were solved:

  • Ensure graceful interruption of the benchmark process, fixing bug #313: in #325
  • Better error handling in the inference CLI: in #328

New Contributors

Full Changelog: v0.9.17...v0.9.18