Restore state of ZHA binary_sensor if available #14755

Closed

Adminiuga wants to merge 1 commit into home-assistant:dev from Adminiuga:zha-binary-sensor-restore

Conversation

@Adminiuga (Contributor) commented Jun 2, 2018

Description:

This is an enhancement to #14553
While setting ZHA binary_sensors to 'off' on initialization is more consistent with other platforms, why not try restoring the old state from the recorder database?
In my setup async_update() consistently fails to read the on_off attribute of the OnOff output cluster, so losing switch state becomes a nuisance when you have to restart HA often, as in the case of development.
This pull request implements the following logic for the "default state" upon HA initialization/restart:

  • the default state is None
  • it tries to update the state using async_update(), since devices are added with async_add_device(update_before_add=True)
  • it uses async_added_to_hass() to restore the old state if available
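The three steps above can be sketched as a small, self-contained model (plain synchronous Python, no Home Assistant imports; the class and method names are illustrative stand-ins for the real entity's async_update()/async_added_to_hass() hooks and the recorder helper):

```python
# Simplified model of the start-up flow described above. All names here
# are illustrative; the real implementation lives in a Home Assistant
# entity and uses async methods plus the recorder's last-state helper.

class RestorableBinarySensor:
    def __init__(self):
        self._state = None  # 1. default state is None

    def update(self, read_attribute):
        # 2. try to read the on_off attribute (update_before_add);
        # sleepy or unresponsive devices return None here
        value = read_attribute()
        if value is not None:
            self._state = bool(value)

    def added_to_hass(self, last_recorded_state):
        # 3. fall back to the recorder's last state only if the
        # live read produced nothing
        if self._state is None and last_recorded_state is not None:
            self._state = last_recorded_state == "on"

    @property
    def is_on(self):
        return self._state


# A remote whose OnOff output cluster rejects attribute reads:
sensor = RestorableBinarySensor()
sensor.update(lambda: None)   # read fails, state stays None
sensor.added_to_hass("on")    # restored from the recorder
print(sensor.is_on)           # True
```

Note that a successful live read takes priority: if update() produces a value, the recorded state is ignored, matching the "skip restoration if the query succeeds" behavior discussed below.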

Checklist:

  • The code change is tested and works locally.
  • Local tests pass with tox. Your PR cannot be merged unless tests pass

@dshokouhi (Member)

Hmm, should this also be done for contact sensors? I noticed that when I restart, the state defaults to off instead of the last position, which could be on.

@Adminiuga (Contributor, Author)

@dshokouhi are those IAS devices? Doesn't async_update() get the proper state?

@dshokouhi (Member)

@Adminiuga they are IAS devices (at least I think they are; Centralite contact sensors). They do not restore the state for the binary sensor, but the temperature sensor does retain its state now.

@fabaff changed the title from "zha: restore state for binary_sensor if avaiable" to "Restore state for binary_sensor if available" Jun 2, 2018
@fabaff changed the title from "Restore state for binary_sensor if available" to "Restore state of ZHA binary_sensor if available" Jun 2, 2018
@pvizeli (Member) left a comment

We don't restore states for existing devices. Only level 2 components (logic) or devices with assumed state can do this. @balloob?

@balloob (Member) commented Jun 2, 2018

Pascal is right. If we can query the device, we should not restore it from the database. If it's a sensor that only occasionally sends an update and that cannot be triggered, restoring is ok.

@Adminiuga (Contributor, Author)

> Only level 2 components (logic)

@pvizeli can you please provide an example of level 1/2/3 components, so I can better understand what you are referring to?

@balloob we can query the device by sending an attribute read request, but as was mentioned in #14553, the device may not respond to it (Xiaomi devices) or may be asleep. If the device does respond to the query, we use the reply as the initial state and skip state restoration from the DB; we revert to state restoration only if the query fails or does not produce any update.

Specific to Zigbee switches: the OnOff cluster here is an "output" cluster, i.e. it sends updates to the controller and is not intended to be queried, unlike an "input" cluster. In my setup, none of my 45857GE & 45856GE wall switches and OSRAM Lightify wireless dimmers respond to the attribute read request, so they cannot properly update their state upon HA restart. Not until the user toggles each switch back and forth does it get back in sync with HA, which becomes more annoying as the number of switches goes up.

@rcloran which switch devices do you use which respond to attribute read request on OnOff output cluster?

@dmulcahey (Contributor)

zha already caches attribute values; this is the reason there are many reports of mismatching values. The problem is that a majority of these devices are "sleepy" end devices, meaning their radios are off until they have a change that they want to report. When the safe_read operation is used in conjunction with the recent update_before_add functionality, zha pulls the previously cached value from its DB unless allow_cache is explicitly set to False on the call. @rcloran please keep me honest here in case I have something wrong...

That being said, I believe that devices with the OnOff cluster in the output_cluster collection are remotes, not switches. Switches have the OnOff cluster in the input_cluster collection. This is a very important distinction. Remotes are "sleepy" devices that are meant to be configured in Zigbee groups with other devices (e.g. light bulbs), but zha doesn't support Zigbee group commissioning yet.

> Not until user toggles each switch back and forth to get it back in sync with HA, which becomes more annoying as the number of the switches goes up.

#14795 should fix the out-of-sync issue with automations and bulbs or HA groups because it forces the events to fire on every button press. I would use the events in automations for now and completely ignore the state of the binary sensor that is created. Let the state of the device(s) you're trying to control filter the events that matter by setting conditions in the automations referencing them; meaning, only handle the OSRAM "on" event if the light you're trying to control is off, etc.

@Adminiuga (Contributor, Author)

> That being said, I believe that devices with the onOff cluster in the output_cluster collection are remotes... not switches. switches have the onOff cluster in the input_cluster collection. This is a very important distinction

Correct. But the problem I'm experiencing with remotes is that I cannot read the OnOff attribute from the OnOff output cluster, regardless of whether I do a cached or uncached read. @dmulcahey do you get successful reads of the OnOff attribute on these remotes?
forced_update=True does make things better and I think it is actually a good fix for these remotes, although in my opinion users would get a better experience with the addition of state restore for these remotes.

@dmulcahey (Contributor) commented Jun 4, 2018

zha already handles state restore so long as allow_cache is set to True on the safe_read call. Should be a simple change in the HA entity...
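A rough model of the cache-fallback read being described (the function, cache, and exception handling here are illustrative stand-ins, not zha's actual safe_read signature): a live read is attempted first, and the previously cached value is returned only when the device does not answer and allow_cache is True.

```python
# Illustrative model of a cached attribute read; not zha's real code.

_attribute_cache = {}

def cached_read(cluster_id, attribute, read_func, allow_cache=True):
    """Try a live read; on timeout, optionally fall back to the cache."""
    try:
        value = read_func()
    except TimeoutError:  # sleepy device did not answer
        if allow_cache:
            return _attribute_cache.get((cluster_id, attribute))
        return None
    # A successful live read refreshes the cache for next time.
    _attribute_cache[(cluster_id, attribute)] = value
    return value

def sleepy_device_read():
    raise TimeoutError("device radio is off")

# Populate the cache with one successful read, then let the device sleep:
cached_read(0x0006, "on_off", lambda: 1)
print(cached_read(0x0006, "on_off", sleepy_device_read))         # 1
print(cached_read(0x0006, "on_off", sleepy_device_read, False))  # None
```

Under this model, forcing allow_cache=False reproduces the "state lost on restart" symptom for any device that does not answer reads, which is why the cached path matters for sleepy remotes.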

@Adminiuga (Contributor, Author) commented Jun 4, 2018 via email

@dmulcahey (Contributor)

I’ll look into this when my son goes to bed. I thought cluster attribute updates were cached in general in Zigpy.

@Adminiuga (Contributor, Author) commented Jun 4, 2018 via email

@dmulcahey (Contributor)

Probably an oversight. I think a PR to Zigpy to change that makes sense. Thoughts?

@Adminiuga (Contributor, Author) commented Jun 4, 2018 via email

@dmulcahey (Contributor)

Are you on discord?

@Adminiuga (Contributor, Author) commented Jun 4, 2018 via email

@dmulcahey (Contributor)

The HA Discord channel.

@balloob (Member) commented Jun 7, 2018

> @Adminiuga: can you please provide an example of Level 1/2/3 components, so I could better understand what are you referring to?

Talking to devices is level 1; we get raw data. Level 2 is, for example, the filter sensor: it takes in data and then processes it (aggregate/filter/etc.).
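As a concrete illustration of that distinction (purely schematic, not Home Assistant code; both function names are made up): level 1 talks to hardware and yields raw readings, while a level 2 component is pure logic that consumes those readings and derives a value from them.

```python
# Schematic only: level 1 produces raw data, level 2 processes it.
from statistics import mean

def level1_read_sensor():
    """Level 1: talk to the device, return raw data as-is."""
    return [20.1, 20.3, 35.0, 20.2]  # one raw reading is an outlier

def level2_filter(readings, limit=5.0):
    """Level 2: pure logic; drop readings far from the overall mean."""
    baseline = mean(readings)
    kept = [r for r in readings if abs(r - baseline) < limit]
    return round(mean(kept), 2)

print(level2_filter(level1_read_sensor()))  # 20.2
```

Because a level 2 component's output is derived state rather than a device query, restoring it from the recorder on restart is safe, which is the distinction being drawn above.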

@Adminiuga (Contributor, Author)

Closing in favor of #14902.

@Adminiuga Adminiuga closed this Jun 11, 2018
@ghost ghost removed the small-pr (PRs with less than 30 lines) label Jun 11, 2018
@Adminiuga Adminiuga deleted the zha-binary-sensor-restore branch September 11, 2018 02:02
@home-assistant home-assistant locked and limited conversation to collaborators Dec 10, 2018
@ghost ghost added the integration: zha label Mar 21, 2019

6 participants