Allow generic camera conf without still_image_url #62611

uvjustin merged 5 commits into home-assistant:dev
Conversation
force-pushed from ce0abd8 to cdd7a6d
I didn't test this live yet but this should avoid the sync->async->sync problem in generic. Basically if the sync

Also renamed
force-pushed from e345697 to 1d6ff21
@allenporter I realize this whole sync business is probably unnecessary. I think you are right that the wrappers are there just for backwards compatibility.
force-pushed from 5d90542 to e213220
CI seems to be broken from elsewhere again.
```python
if not self._still_image_url:
    if self.stream:
        return await self.stream.async_get_image(width, height)
    await self.async_create_stream()
```
What does this call do? It seems like it will create the stream, but not start it, is that right? It seems like the next call to async_get_image will be safe, but will just return None, unless something else starts the stream, if I understand right.
Yes, you are right, we should start the stream there too
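A minimal, self-contained sketch of that change — creating the stream on demand and starting it before grabbing the image — could look like the following. The stub classes and method bodies here are assumptions for illustration, not Home Assistant's actual `Stream` implementation:

```python
import asyncio


class StubStream:
    """Stand-in for a stream object; the real Stream API differs."""

    def __init__(self) -> None:
        self.started = False

    def start(self) -> None:
        self.started = True

    async def async_get_image(self, width=None, height=None):
        # A frame is only available once the stream is running
        return b"jpeg-bytes" if self.started else None


class GenericCameraSketch:
    def __init__(self) -> None:
        self._still_image_url = None
        self.stream = None

    async def async_create_stream(self):
        # Creating the stream does not start it
        if self.stream is None:
            self.stream = StubStream()
        return self.stream

    async def async_camera_image(self, width=None, height=None):
        if not self._still_image_url:
            if not self.stream:
                await self.async_create_stream()
            # Start the stream so the first image request doesn't return None
            self.stream.start()
            return await self.stream.async_get_image(width, height)
        return None
```

Without the `start()` call, the first request after `async_create_stream()` would return `None`, which is the gap discussed above.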
This is a fairly significant behavior change, so I think this could also come in a follow-up PR to evaluate on its own, but for now using the image if already preloaded seems like a nice win.
I already pushed the relevant change in b5b26bf
Rebased to try and fix the CI problem.
I'm curious as to what you think of the stream.start() in here. It might make more sense to move that into Stream.async_get_image
How about we give the stream a 15 second timeout when we start it this way? Since the normal cards grab images ~10 seconds apart, if we don't get 2 consecutive requests then we can stop the stream. Or maybe that's too quick, as the user might switch between views but still want to come back. Maybe a 2-3 minute timeout makes sense.
The current IdleTimer/Provider framework is based on StreamOutput. I'll have to see if we can expand it to work with KeyFrameConverter. Alternatively, maybe we actually want to start up a HLS provider, since a natural use case is to use an "auto" picture entity card, and the regular picture load will help start the HLS provider before the user clicks in to the individual camera.
I don't have any numbers, but my own instance has the rough CPU power of a RPi, and I haven't seen any issues with 10+ cameras refreshing every ~30 secs or so (I use very long keyframe intervals so 2 out of 3 requests get back the old image, so only one is actually doing work).
The CPU usage of the open stream is very low (as we know from preload stream - and if we don't use a HLS provider it's even lower), so the concern would be decoding and encoding. Although we're decoding and encoding using different codecs, encoding is usually a lot more resource intensive than decoding. On this front libjpeg-turbo seems to be very fast - better than PIL which is used in some other integrations. Also, I'm not sure how jpeg scaling works, but the naive assumption is that if any cameras are rescaling to use the new width and height parameters, they'd have to do a decode/reencode as well, so the CPU load would actually be similar to this.
- I like your 2-3 minute timeout since we expect the frontend to be polling fairly often.
- I see what you mean about IdleTimer + StreamOutput. I can see how Stream interacts w/ StreamOutput very similarly for this case (e.g. checking for idle timeout, shutting down, etc.).
- But the side where the worker pushes data to it is different. I could see how KeyframeConverter could be a StreamOutput, but then the StreamMuxer would need to know about different types of stream outputs (some that want just keyframes pushed in, some that want Segments). They could also just ignore the parts they don't care about, or an optimization could be to only do segmentation when the outputs are actually listening for segments, which could come later.
I am not sure if generalizing the idle time out side or stream output side is easier, but may be worth thinking through both to decide.
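One rough sketch of the stream-output generalization discussed above: a common base where each output overrides only the hooks it cares about, so the worker can push everything without branching on output type. All class and method names here are invented for illustration, not HA's real StreamOutput interface:

```python
class StreamOutputSketch:
    """Hypothetical common base: outputs override only the hooks they need."""

    def put_segment(self, segment) -> None:
        """HLS-style outputs consume muxed segments; default is a no-op."""

    def put_keyframe(self, packet) -> None:
        """Keyframe converters consume raw keyframes; default is a no-op."""


class KeyFrameOutput(StreamOutputSketch):
    def __init__(self) -> None:
        self.last_keyframe = None

    def put_keyframe(self, packet) -> None:
        self.last_keyframe = packet


class SegmentOutput(StreamOutputSketch):
    def __init__(self) -> None:
        self.segments = []

    def put_segment(self, segment) -> None:
        self.segments.append(segment)


def push(outputs, segment, keyframe) -> None:
    # The worker pushes everything; each output ignores what it doesn't use
    for out in outputs:
        out.put_segment(segment)
        out.put_keyframe(keyframe)
```

The later optimization mentioned above would be to skip segmentation entirely when no `SegmentOutput` is registered.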
On performance, thanks for talking it through for me 👍🏼
Did you look at the most recent push? I ended up just starting up the HLS output as that at least will start the IdleTimer to time everything out. I think it can go in as is now, and we can continue the IdleTimer/StreamOutput discussion.
I see, I missed that part. Yes, that looks like it covers it fine, handling the error conditions.
force-pushed from b5b26bf to 82239ca
```python
if not self._still_image_url:
    if self.stream:
        return await self.stream.async_get_image(width, height)
    await self.async_create_stream()
```
I have a few cases I am thinking about here:
(1) If the stream fails and is not restarted, this will show the last image. The current ffmpeg call will get a fresh frame every time.
(2) Creating the stream without starting it will just get back None, so the fallback to ffmpeg would no longer happen.
(3) Not yet sure how I feel about always starting the stream, but it would fix those issues at the cost of having the stream always on. That seems like a larger topic that may need more consideration, data, etc. (e.g. maybe we collect data about the impact of such a change to argue whether or not it's ok, or maybe show that the ffmpeg frame grab is super expensive, etc.)
Another alternative that is more incremental could be to only use the stream if it's created *and* active (or only use the stream image within X minutes before it is considered stale if a stream failed) before falling back to ffmpeg. This is extra code, but preserves status quo behavior.
Curious to hear your thoughts on this. I definitely like the direction of trying to replace ffmpeg. This also makes me think it would be really cool to be able to collect stats w.r.t. stream bandwidth usage.
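The incremental alternative mentioned above — only trust the stream image when the stream is actually active, otherwise fall back to a fresh ffmpeg frame — could be sketched like this. The `available` flag and the ffmpeg path are invented stubs, not the real generic-camera code:

```python
import asyncio


class CameraFallbackSketch:
    """Hypothetical fallback logic; names are assumptions."""

    def __init__(self, stream=None) -> None:
        self.stream = stream

    async def async_camera_image(self, width=None, height=None):
        # Only use the stream if it exists and is actively producing frames
        if self.stream is not None and self.stream.available:
            image = await self.stream.async_get_image(width, height)
            if image is not None:
                return image
        # Otherwise preserve the status quo: grab a fresh frame via ffmpeg
        return await self._async_ffmpeg_image(width, height)

    async def _async_ffmpeg_image(self, width, height):
        return b"fresh-ffmpeg-frame"  # placeholder for the real ffmpeg call
```

This avoids case (1) above (serving a stale last image from a dead stream) at the cost of keeping the ffmpeg path around.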
force-pushed from 085aa31 to 4f907a6
Overall, I think I'm good with this given you're more familiar with how this integration is used than I am. Probably worth now also referencing a documentation PR for this new feature/behavior.
force-pushed from 4f907a6 to 3c9a4af
Also make sure to send a documentation PR.
Proposed change
This PR updates generic to allow getting the still image from `Stream.get_image()`, which was added in #61918. To enable this, the config schema was updated to allow `CONF_STILL_IMAGE_URL` to be omitted when `CONF_STREAM_SOURCE` is present. `GenericCamera.camera_image()` was removed as it was unused and unnecessary.

Type of change
Additional information
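The "at least one of" constraint on the config schema described above can be illustrated with a plain validator. This is a hedged stand-in for the actual voluptuous schema in the integration; the key strings match the config option names, but the function and constants here are invented for illustration:

```python
CONF_STILL_IMAGE_URL = "still_image_url"
CONF_STREAM_SOURCE = "stream_source"


def validate_generic_config(config: dict) -> dict:
    """Require at least one of still_image_url / stream_source.

    Illustrative stand-in for the voluptuous validator in the real schema.
    """
    if CONF_STILL_IMAGE_URL not in config and CONF_STREAM_SOURCE not in config:
        raise ValueError(
            f"Expected at least one of [{CONF_STILL_IMAGE_URL}, {CONF_STREAM_SOURCE}]"
        )
    return config
```

A config with only `stream_source` now passes, which is the whole point of the change; a config with neither key is rejected.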
Checklist
The code has been formatted using Black (`black --fast homeassistant tests`)

If user exposed functionality or configuration variables are added/changed:
If the code communicates with devices, web services, or third-party tools:
Updated and included derived files by running: `python3 -m script.hassfest`.
New dependencies have been added to `requirements_all.txt`, updated by running `python3 -m script.gen_requirements_all`.
Untested files have been added to `.coveragerc`.

The integration reached or maintains the following Integration Quality Scale:
To help with the load of incoming pull requests: