This article outlines the design of event handling in the Hot Rod protocol (JIRA).
The idea is that Hot Rod servers should be able to notify remote clients of events such as cache entry created, cache entry modified, or cache entry removed. Clients must opt in to receiving these events, so that servers do not swamp all connected clients with notifications they have not asked for. The design assumes that clients are able to maintain persistent connections to the servers.
RemoteCache:
/**
 * Add a client listener to receive events that happen in the remote cache. The listener object must be
 * annotated with the @ClientListener annotation. It returns a handler which can be used to perform further
 * operations on the client listener, such as removing it.
 */
ClientListenerHandler addClientListener(Object listener);
ClientListenerHandler:
interface ClientListenerHandler {
void remove();
}
Annotation:
@interface ClientListener {
   String filterFactory() default "";
   String converterFactory() default "";
}
A new operation called addClientListener would be added to Hot Rod clients, enabling client listeners to be registered remotely with Hot Rod server(s). If Hot Rod servers are clustered, it’s enough to register the listener in one of the nodes. Hot Rod servers will take care of making sure that the other clustered servers are aware of the listener registration (see Server section for more info).
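As a minimal usage sketch of the proposed API (assuming an already-configured RemoteCacheManager instance and the MyListener example class shown later in this document):
// remoteCacheManager is assumed to be an existing, started RemoteCacheManager
RemoteCache<String, String> cache = remoteCacheManager.getCache();
ClientListenerHandler handler = cache.addClientListener(new MyListener());
// ... events are delivered to the MyListener callbacks ...
handler.remove(); // stop receiving events for this listener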
For the Java client, these listener objects will be annotated with @ClientListener. The filterFactory is an optional annotation parameter that enables event filtering, and converterFactory is another optional parameter that customizes the content of the events received.
When the client provides a non-empty filterFactory, it is giving the name of a server-side event filter factory which creates instances of filters for the events the client should receive. When the client provides a non-empty converterFactory, it is giving the name of a server-side converter factory which creates converters that transform the contents of the events sent back. By doing the filtering and conversion server side, network traffic can be reduced by not sending events, or parts of events, that the client is not interested in. See Server/Filtering and Server/Converter sections for more info.
Hot Rod client implementations must track these listener registrations so that, in case of a disconnect, they can resend the client listener registrations.
For each registered listener, a listener ID needs to be sent to the server as part of the listener registration message. This listener ID is generated by the Hot Rod client implementation.
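For illustration only, a client implementation might derive the listener ID from a random UUID, e.g.:
// Implementation-specific sketch: derive a 16-byte listener ID from a random UUID
// (uses java.util.UUID and java.nio.ByteBuffer)
UUID uuid = UUID.randomUUID();
ByteBuffer idBuffer = ByteBuffer.allocate(16);
idBuffer.putLong(uuid.getMostSignificantBits());
idBuffer.putLong(uuid.getLeastSignificantBits());
byte[] listenerId = idBuffer.array(); // sent in the add client listener request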
When a client listener is removed, the client will inform the server through which the listener was registered that it is no longer interested in events for that particular listener. The listener ID of the object will be sent in this message. As before, servers will make sure that any other clustered Hot Rod servers are aware that the listener registration has been removed.
Upon registration, the server will loop through the cache contents and send an event for each currently cached entry. Once the cache contents have been iterated over, the client will start receiving events for on-going operations. The connection through which the listener is added will be used exclusively to send events back to the client; clients won’t be able to use this connection to send operations.
By default, if no filtering is enabled, when the client registers a client listener, it will receive notifications on the following events:
Annotations
@ClientCacheEntryCreated
@ClientCacheEntryModified
@ClientCacheEntryRemoved
Although Infinispan embedded mode supports a wide array of listener events, client listeners are limited in the types of events they can receive. These events can be further limited by enabling filtering.
When a listener is registered without a filter, or with a filter that does not transform the event payload, events received by the client will contain the following information:
interface ClientCacheEntryEvent {
   Type getType();
}
interface ClientCacheEntryCreatedEvent<K> extends ClientCacheEntryEvent {
   K getKey();
   long getVersion();
}
interface ClientCacheEntryModifiedEvent<K> extends ClientCacheEntryEvent {
   K getKey();
   long getVersion();
}
interface ClientCacheEntryRemovedEvent<K> extends ClientCacheEntryEvent {
   K getKey();
}
By default, Infinispan provides just enough information to make the event relevant while minimising the network bandwidth used. The bare minimum to send is the name of the key affected by the event. On top of that, cache entry created and updated events also provide version information, which allows clients to do modifications as long as the versions have not changed.
The following is an example of a listener that just prints the events it receives:
@ClientListener
class MyListener {
   @ClientCacheEntryCreated
   @ClientCacheEntryModified
   @ClientCacheEntryRemoved
   public void handleClientEvent(ClientCacheEntryEvent e) {
      System.out.println(e);
   }
}
When a cache entry has been created in the server, the version of that cache entry will be received by the client in the event notification. This is useful for scenarios where Hot Rod client users want to do conditional replace/remove operations using the latest version of the data that they’ve received via a client cache entry created event.
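For instance, a listener could use the version carried by a created event to attempt a conditional removal; this sketch assumes the existing removeWithVersion operation on RemoteCache:
@ClientListener
class RemoveOnCreateListener {
   private final RemoteCache<String, String> cache;

   RemoveOnCreateListener(RemoteCache<String, String> cache) {
      this.cache = cache;
   }

   @ClientCacheEntryCreated
   public void created(ClientCacheEntryCreatedEvent<String> e) {
      // Only removes the entry if it has not changed since the event was generated
      cache.removeWithVersion(e.getKey(), e.getVersion());
   }
}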
When a client cache entry modified event is received, it means that an existing cache entry has been updated (as opposed to being created for the first time or removed).
When a cache entry has been updated, the version of that cache entry will be received by the client in the event notification. This is useful for scenarios where Hot Rod client users want to do conditional replace/remove operations using the latest version of the data that they’ve received via a client cache entry updated event.
Indicates that a cache entry has been removed from the server(s). For invalidated caches, the invalidation messages sent around are treated as cache removals, and hence they emit removal events. No cache entry removed events will be sent for removals that happen as a result of the cache being cleared.
NOTE: Cache entry removals as a result of expirations are not included. It requires https://issues.jboss.org/browse/ISPN-694[ISPN-694].
If Hot Rod clients use some kind of marshalling mechanism to convert typed key objects to byte arrays, when default events are received from the server, the keys will be unmarshalled and passed to the client callback. Client implementations are not required to maintain referential equality between the keys passed in to the remote cache and the keys received in the client callbacks.
As mentioned earlier, the contents of the client events can optionally be customized in order to provide further information on top of what the default events provide, or, vice versa, to provide even less information. For example, some clients might want to receive value information as well as key information along with the events, while others might not want to receive any key information at all.
These customizations happen server side (explained in the Server/Converter section), but on the client side, callbacks must take this into account: they need to be able to extract the information in the same format in which the server-side converter generated it.
interface ClientCacheEntryCustomEvent extends ClientCacheEntryEvent {
   byte[] getEventData();
}
The following is an example of a listener that just prints the custom events it receives:
@ClientListener
class MyListener {
   @ClientCacheEntryCreated
   public void handleClientEvent(ClientCacheEntryCustomEvent e) {
      // Parse byte array provided by e.getEventData() and act accordingly
      System.out.println(e);
   }
}
For each client listener registration request, the server will register a cluster listener to track it with the given listener ID.
When a new registration occurs, the server will loop through the cache contents and send events to the client that made the registration. Once all the events have been sent, the server will send events for any on-going operations.
The cluster listener infrastructure will provide the capabilities to send events for on-going operations to all registered listeners.
Optionally, clients will configure the server’s caches with one or several named filters and converters.
TBD: How to plug Infinispan Server with custom remote filter implementations. Configuration wise, a list of filter factory class names will need to be provided, along with their filter factory names.
To implement filtering, users will provide implementations of a KeyValueFilterFactory:
interface KeyValueFilterFactory {
<K, V> KeyValueFilter<K, V> getKeyValueFilter();
}
The server will call getKeyValueFilter on the named factory when the listener is registered. Implementations are free to return the same stateless org.infinispan.notifications.KeyValueFilter instance for all invocations, or a brand new stateful instance every time getKeyValueFilter is called. Since Hot Rod caches store binary data, users should implement a KeyValueFilter<byte[], byte[]>.
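As a sketch of what such a factory might look like, the following filter only accepts events for a single hard-coded key. It assumes a KeyValueFilter callback of the form accept(key, value, metadata), which may differ from the final server-side SPI:
import java.util.Arrays;

class StaticKeyValueFilterFactory implements KeyValueFilterFactory {
   @SuppressWarnings("unchecked")
   @Override
   public <K, V> KeyValueFilter<K, V> getKeyValueFilter() {
      // Stateless, so the same instance could also be cached and reused
      return (KeyValueFilter<K, V>) new StaticKeyValueFilter();
   }

   static class StaticKeyValueFilter implements KeyValueFilter<byte[], byte[]> {
      // Hypothetical example: only send events affecting this marshalled key
      private final byte[] acceptedKey = "static-key".getBytes();

      public boolean accept(byte[] key, byte[] value, Metadata metadata) {
         return Arrays.equals(acceptedKey, key);
      }
   }
}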
By default, when no filter is given, the server will send notifications for all cache entry created, cache entry modified, and cache entry removed events, affecting all keys in the cache.
Optionally, clients can configure servers to transform the payload of the event to add extra information to the event, or remove uninteresting parts from the event.
TBD: How to plug Infinispan Server with custom remote converter implementations. Configuration wise, a list of converter factory class names will need to be provided, along with their converter factory names.
To customize the event payload, users will provide implementations of a ConverterFactory:
interface ConverterFactory {
<K, V, C> Converter<K, V, C> getConverter();
}
The server will call getConverter on the named factory when the listener is registered. Implementations are free to return the same stateless org.infinispan.notifications.Converter instance for all invocations, or a brand new stateful instance every time getConverter is called.
Converter implementations need to be written for binary key and value types, so Converter<byte[], byte[], byte[]>. Whatever the Converter’s convert callback returns, that binary payload will be shipped as-is back to the client. In such scenarios, clients will receive a ClientCacheEntryCustomEvent.
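A possible converter factory that ships both key and value back in the custom event payload is sketched below; it assumes a Converter callback of the form convert(key, value, metadata), which may differ from the final server-side SPI:
import java.nio.ByteBuffer;

class KeyValueConverterFactory implements ConverterFactory {
   @SuppressWarnings("unchecked")
   @Override
   public <K, V, C> Converter<K, V, C> getConverter() {
      return (Converter<K, V, C>) new KeyValueConverter();
   }

   static class KeyValueConverter implements Converter<byte[], byte[], byte[]> {
      public byte[] convert(byte[] key, byte[] value, Metadata metadata) {
         // Length-prefix the key so the client callback can split the payload again
         int valueLength = value == null ? 0 : value.length;
         ByteBuffer out = ByteBuffer.allocate(4 + key.length + valueLength);
         out.putInt(key.length).put(key);
         if (value != null)
            out.put(value);
         return out.array();
      }
   }
}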
This section focuses on what happens when certain components fail.
Listener registrations will survive node failures thanks to the underlying clustered listener implementation.
When a client detects that the server which was sending the events is gone, it needs to resend its registration to one of the nodes in the cluster. Whichever node receives that request will again loop through its cache contents and send an event for each entry to the client. This way the client avoids losing events. Once all entries have been iterated over, on-going events will be sent to the client.
This way of handling failure means that clients receive at-least-once delivery of cache updates. They might receive multiple events for the same cache update as a result of topology change handling.
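Because of this, client callbacks should be written to tolerate duplicates. One possible approach, sketched below, is to track the last version seen per key and skip redeliveries of created/modified events (removed events carry no version, so they cannot be deduplicated this way):
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

@ClientListener
class DeduplicatingListener {
   private final ConcurrentMap<String, Long> lastSeenVersion = new ConcurrentHashMap<>();

   @ClientCacheEntryCreated
   public void created(ClientCacheEntryCreatedEvent<String> e) {
      onWrite(e.getKey(), e.getVersion());
   }

   @ClientCacheEntryModified
   public void modified(ClientCacheEntryModifiedEvent<String> e) {
      onWrite(e.getKey(), e.getVersion());
   }

   private void onWrite(String key, long version) {
      Long previous = lastSeenVersion.put(key, version);
      if (previous != null && previous == version)
         return; // same version already processed, likely a redelivery after failover
      // ... handle the update ...
   }
}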
Add client listener request:
| Req [version=0x13, opcode=0x1B] | listener id [bytearray] | has filter [byte] | filter factory [string, if has filter] | has converter [byte] | converter factory [string, if has converter]
Remove client listener request:
| Req [version=0x13, opcode=0x1D] | listener id [bytearray]
Cache entry created event:
| Res [version=0x13, opcode=0x60, msgId={generated}] | listener id [bytearray] | key [bytearray] | entry version [8 bytes]
Cache entry modified event:
| Res [version=0x13, opcode=0x61, msgId={generated}] | listener id [bytearray] | key [bytearray] | entry version [8 bytes]
Cache entry removed event:
| Res [version=0x13, opcode=0x62, msgId={generated}] | listener id [bytearray] | key [bytearray]
Custom event:
| Res [version=0x13, opcode=0x63, msgId={generated}] | listener id [bytearray] | event data [bytearray]