Refactor to simplify support for additional detector types (blakeblackshear#3656)
* Refactor EdgeTPU and CPU model handling into detector submodules (see the detector interface sketch after this commit list).
* Fix selecting the correct detection device type from the config
* Remove detector type check when creating ObjectDetectProcess
* Fixes after rebasing to 0.11
* Add init file to detector folder
* Rename to detect_api
Co-authored-by: Nicolas Mowen <[email protected]>
* Add unit test for LocalObjectDetector class
* Add configuration for model inputs
Support transforming detection regions to RGB or BGR.
Support specifying the input tensor shape. The tensor shape has a standard format ("BHWC") when handed to the detector, but can be transformed in the detector to match the model's shape using the model input_tensor config (see the preprocessing sketch after this commit list).
* Add documentation for new model config parameters
* Add input tensor transpose to LocalObjectDetector
* Change the model input tensor config to use an enumeration
* Updates for model config documentation
Co-authored-by: Nicolas Mowen <[email protected]>
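
The file diff below touches only documentation; the refactor itself, per the commits above, moves EdgeTPU and CPU handling behind a shared detection API inside a detectors package. The sketch below shows one way such an interface can look. It is illustrative only: the names `DetectionApi`, `detect_raw`, and `CpuDetector` are assumptions, not necessarily the code this PR lands.

```python
# Illustrative sketch only: one way a "detect_api"-style interface can decouple
# frame handling from individual detector implementations.
# All names here (DetectionApi, detect_raw, CpuDetector) are assumptions.
from abc import ABC, abstractmethod

import numpy as np


class DetectionApi(ABC):
    """Common contract each detector submodule (EdgeTPU, CPU, ...) implements."""

    @abstractmethod
    def detect_raw(self, tensor_input: np.ndarray) -> np.ndarray:
        """Run inference on a preprocessed input tensor and return raw detection rows."""


class CpuDetector(DetectionApi):
    """Placeholder CPU implementation; a real one would wrap a TFLite interpreter."""

    def __init__(self, model_path: str):
        self.model_path = model_path  # path taken from the model config

    def detect_raw(self, tensor_input: np.ndarray) -> np.ndarray:
        # A real implementation would invoke the interpreter here and map its
        # outputs into fixed-size [class_id, score, y_min, x_min, y_max, x_max] rows.
        return np.zeros((20, 6), np.float32)
```

The point of such a split is that supporting an additional detector type means adding another subclass in the detectors package instead of special-casing detector types inside the object detection process.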
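To make the new colorspace and tensor-shape options concrete, here is a rough preprocessing sketch. It is not Frigate's implementation: the function name and the assumption that regions arrive as BGR uint8 crops are made up for illustration; only the option values ("rgb"/"bgr"/"yuv", "nhwc"/"nchw") come from the documented config in the diff below.

```python
# Rough illustration of what the new model options imply for preprocessing.
# Not Frigate's actual code: names and the BGR-crop assumption are illustrative.
import cv2
import numpy as np


def preprocess_region(
    region_bgr: np.ndarray,
    width: int,
    height: int,
    input_pixel_format: str = "rgb",  # "rgb" | "bgr" | "yuv"
    input_tensor: str = "nhwc",       # "nhwc" | "nchw"
) -> np.ndarray:
    """Resize, convert colorspace, add a batch dimension, and lay out the tensor."""
    frame = cv2.resize(region_bgr, (width, height))

    if input_pixel_format == "rgb":
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    elif input_pixel_format == "yuv":
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2YUV)
    # "bgr": pass the crop through unchanged

    tensor = np.expand_dims(frame, axis=0)  # standard NHWC ("BHWC") layout

    if input_tensor == "nchw":
        # Transpose to channels-first for models that expect NCHW input.
        tensor = np.transpose(tensor, (0, 3, 1, 2))
    return tensor
```

With the documented example config (`width: 320`, `height: 320`, `input_tensor: "nhwc"`, `input_pixel_format: "bgr"`), this sketch returns a `(1, 320, 320, 3)` uint8 array.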
docs/docs/configuration/advanced.md (+26 -1)
@@ -23,7 +23,7 @@ Examples of available modules are:
 
 - `frigate.app`
 - `frigate.mqtt`
-- `frigate.edgetpu`
+- `frigate.object_detection`
 - `frigate.zeroconf`
 - `detector.<detector_name>`
 - `watchdog.<camera_name>`
@@ -50,6 +50,30 @@ database:
 
 If using a custom model, the width and height will need to be specified.
 
+Custom models may also require different input tensor formats. The colorspace conversion supports sending RGB, BGR, or YUV frames to the object detector. The input tensor shape parameter is an enumeration that must match what is specified by the model.
+
+| Tensor Dimension | Description    |
+| :--------------: | -------------- |
+| N                | Batch Size     |
+| H                | Model Height   |
+| W                | Model Width    |
+| C                | Color Channels |
+
+| Available Input Tensor Shapes |
+| :---------------------------: |
+| "nhwc"                        |
+| "nchw"                        |
+
+```yaml
+# Optional: model config
+model:
+  path: /path/to/model
+  width: 320
+  height: 320
+  input_tensor: "nhwc"
+  input_pixel_format: "bgr"
+```
+
 The labelmap can be customized to your needs. A common reason to do this is to combine multiple object types that are easily confused when you don't need to be as granular such as car/truck. By default, truck is renamed to car because they are often confused. You cannot add new object types, but you can change the names of existing objects in the model.
 
 ```yaml
@@ -71,6 +95,7 @@ Note that if you rename objects in the labelmap, you will also need to update yo
 Included with Frigate is a build of ffmpeg that works for the vast majority of users. However, there exists some hardware setups which have incompatibilities with the included build. In this case, a docker volume mapping can be used to overwrite the included ffmpeg build with an ffmpeg build that works for your specific hardware setup.
 
 To do this:
+
 1. Download your ffmpeg build and uncompress to a folder on the host (let's use `/home/appdata/frigate/custom-ffmpeg` for this example).
 2. Update your docker-compose or docker CLI to include `'/home/appdata/frigate/custom-ffmpeg':'/usr/lib/btbn-ffmpeg':'ro'` in the volume mappings.
 3. Restart frigate and the custom version will be used if the mapping was done correctly.