
Commit 23aedbe

committed · update README files
1 parent b017e11 · commit 23aedbe

3 files changed · +90 −46 lines changed

Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
+# Kibana Telemetry Service
+
+Telemetry allows Kibana features to have usage tracked in the wild. The general term "telemetry" refers to multiple things:
+
+1. Integrating with the telemetry service to express how to collect usage data (Collecting).
+2. Sending a payload of usage data up to Elastic's telemetry cluster.
+3. Viewing usage data in the Kibana instance of the telemetry cluster (Viewing).
+
+This plugin is responsible for sending usage data to the telemetry cluster. For collecting usage data, use

Lines changed: 79 additions & 44 deletions
@@ -1,62 +1,105 @@
-# Kibana Telemetry Service
+# Kibana Usage Collection Service
 
-Telemetry allows Kibana features to have usage tracked in the wild. The general term "telemetry" refers to multiple things:
+Usage Collection allows collecting usage data for other services to consume (telemetry and monitoring).
+To integrate with the telemetry services for usage collection of your feature, there are 2 steps:
 
-1. Integrating with the telemetry service to express how to collect usage data (Collecting).
-2. Sending a payload of usage data up to Elastic's telemetry cluster, once per browser per day (Sending).
-3. Viewing usage data in the Kibana instance of the telemetry cluster (Viewing).
+1. Create a usage collector.
+2. Register the usage collector.
 
-You, the feature or plugin developer, mainly need to worry about the first meaning: collecting. To integrate with the telemetry services for usage collection of your feature, there are 2 steps:
-
-1. Create a usage collector using a factory function
-2. Register the usage collector with the Telemetry service
+## Creating and Registering Usage Collector
 
-NOTE: To a lesser extent, there's also a need to update the telemetry payload of Kibana stats and telemetry cluster field mappings to include your fields. This part is typically handled not by you, the developer, but different maintainers of the telemetry cluster. Usually, this step just means talk to the Platform team and have them approve your data model or added fields.
+All you need to provide is a `type` for organizing your fields, and a `fetch` method for returning your usage data. Then you need to make the Telemetry service aware of the collector by registering it.
 
-## Creating and Registering Usage Collector
+### New Platform:
 
-A usage collector object is an instance of a class called `UsageCollector`. A factory function on `server.usage.collectorSet` object allows you to create an instance of this class. All you need to provide is a `type` for organizing your fields, and a `fetch` method for returning your usage data. Then you need to make the Telemetry service aware of the collector by registering it.
+Make sure `usageCollection` is in your optional plugins:
 
-Example:
+```json
+// plugin/kibana.json
+{
+  "id": "...",
+  "optionalPlugins": ["usageCollection"]
+}
+```
 
-```js
-// create usage collector
-const myCollector = server.usage.collectorSet.makeUsageCollector({
-  type: MY_USAGE_TYPE,
-  fetch: async callCluster => {
+```ts
+// server/plugin.ts
+class Plugin {
+  setup(core, plugins) {
+    registerMyPluginUsageCollector(plugins.usageCollection);
+  }
+}
+
+// server/collectors/register.ts
+import { PluginSetupContract as UsageCollection } from 'src/plugins/usage_collection/server';
+import { CallCluster } from 'src/legacy/core_plugins/elasticsearch';
+
+export function registerMyPluginUsageCollector(usageCollection: UsageCollection): void {
+  // create usage collector
+  const myCollector = usageCollection.makeUsageCollector({
+    type: MY_USAGE_TYPE,
+    fetch: async (callCluster: CallCluster) => {
 
       // query ES and get some data
      // summarize the data into a model
      // return the modeled object that includes whatever you want to track
 
-    return {
-      my_objects: {
-        total: SOME_NUMBER
-      }
-    };
-  },
-});
-
-// register usage collector
-server.usage.collectorSet.register(myCollector);
+      return {
+        my_objects: {
+          total: SOME_NUMBER
+        }
+      };
+    },
+  });
+
+  // register usage collector
+  usageCollection.registerCollector(myCollector);
+}
 ```
 
 Some background: The `callCluster` that gets passed to the `fetch` method is created in a way that's a bit tricky, to support the multiple contexts in which the `fetch` method can be called. Your `fetch` method could get called as a result of an HTTP API request: in this case, the `callCluster` function wraps `callWithRequest`, and the request headers are expected to have read privilege on the entire `.kibana` index. The use case for this is stats pulled from a Kibana Metricbeat module, where the Beat calls Kibana's stats API to invoke collection.
 
-The fetch method also might be called through an internal background task on the Kibana server, which currently lives in the `kibana_monitoring` module of the X-Pack Monitoring plugin, that polls for data and uploads it to Elasticsearch through a bulk API exposed by the Monitoring plugin for Elasticsearch. In this case, the `callCluster` method will be the internal system user and will have read privilege over the entire `.kibana` index.
-
 Note: there will be many cases where you won't need to use the `callCluster` function that gets passed in to your `fetch` method at all. Your feature might have an accumulating value in server memory, or read something from the OS.
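For that in-memory case, here is a minimal sketch; the counter, `type`, and function names are hypothetical, and the `makeUsageCollector`/`registerCollector` calls mirror the New Platform example above:

```ts
// server/collectors/register_in_memory.ts (hypothetical file)
import { PluginSetupContract as UsageCollection } from 'src/plugins/usage_collection/server';

// an accumulating value held in server memory, incremented elsewhere in the plugin
let somethingHappenedCount = 0;
export const reportSomethingHappened = () => somethingHappenedCount++;

export function registerMyInMemoryCollector(usageCollection: UsageCollection): void {
  const collector = usageCollection.makeUsageCollector({
    type: 'my_in_memory_usage',
    // no callCluster needed: the data already lives in server memory
    fetch: async () => ({
      something_happened_count: somethingHappenedCount,
    }),
  });

  usageCollection.registerCollector(collector);
}
```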
 
+### Migrating to NP from Legacy Plugins:
+
+Pass `usageCollection` to your NP plugin's `setup` function under `plugins`. Inside the `setup` function, call `registerCollector` as in the New Platform example above.
+
+```js
+// index.js
+export const myPlugin = (kibana: any) => {
+  return new kibana.Plugin({
+    init: async function (server) {
+      const usageCollection = server.newPlatform.setup.plugins.usageCollection;
+      const plugins = {
+        usageCollection,
+      };
+      // `plugin`, `initializerContext`, and `core` come from your NP-style plugin
+      // definition and legacy shim (not shown here)
+      plugin(initializerContext).setup(core, plugins);
+    }
+  });
+}
+```
+
+### Legacy Plugins:
 
 Typically, a plugin will create the collector object and register it with the Telemetry service from the `init` method of the plugin definition, or a helper module called from `init`.
 
+```js
+// index.js
+export const myPlugin = (kibana: any) => {
+  return new kibana.Plugin({
+    init: async function (server) {
+      const usageCollection = server.newPlatform.setup.plugins.usageCollection;
+      registerMyPluginUsageCollector(usageCollection);
+    }
+  });
+}
+```
+
 ## Update the telemetry payload and telemetry cluster field mappings
 
 There is a module in the telemetry service that creates the payload of data that gets sent up to the telemetry cluster.
 
-As of the time of this writing (pre-6.5.0) there are a few unpleasant realities with this module. Today, this module has to be aware of all the features that have integrated with it, which it does from hard-coding. It does this because at the time of creation, the payload implemented a designed model where X-Pack plugin info went together regardless if it was ES-specific or Kibana-specific. In hindsight, all the Kibana data could just be put together, X-Pack or not, which it could do in a generic way. This is a known problem and a solution will be implemented in an upcoming refactoring phase, as this would break the contract for model of data sent in the payload.
-
-The second reality is that new fields added to the telemetry payload currently mean that telemetry cluster field mappings have to be updated, so they can be searched and aggregated in Kibana visualizations. This is also a short-term obligation. In the next refactoring phase, collectors will need to use a proscribed data model that eliminates maintenance of mappings in the telemetry cluster.
+New fields added to the telemetry payload currently mean that telemetry cluster field mappings have to be updated, so they can be searched and aggregated in Kibana visualizations. This is a short-term obligation. In the next refactoring phase, collectors will need to use a prescribed data model that eliminates maintenance of mappings in the telemetry cluster.
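As a rough illustration of the kind of mapping change this implies (the actual document structure on the telemetry cluster is not shown in this README; the field path below simply mirrors the `my_objects.total` example above):

```json
{
  "properties": {
    "my_objects": {
      "properties": {
        "total": { "type": "long" }
      }
    }
  }
}
```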
 
 ## Testing
 
@@ -65,7 +108,7 @@ There are a few ways you can test that your usage collector is working properly.
 1. The `/api/stats?extended=true` HTTP API in Kibana (added in 6.4.0) will call the fetch methods of all the registered collectors, and add them to a stats object you can see in a browser or in curl (see the example request after this list). To test that your usage collector has been registered correctly and that it has the model of data you expected it to have, call that HTTP API manually and you should see a key in the `usage` object of the response named after your usage collector's `type` field. This method tests the Metricbeat scenario described above where `callCluster` wraps `callWithRequest`.
 2. There is a dev script in x-pack that will give a sample of a payload of data that gets sent up to the telemetry cluster for the sending phase of telemetry. Collected data comes from:
    - The `.monitoring-*` indices, when Monitoring is enabled. Monitoring enhances the sent payload of telemetry by producing usage data potentially of multiple clusters that exist in the monitoring data. Monitoring data is time-based, and the time frame of collection is the last 15 minutes.
-   - Live-pulled from ES API endpoints. This will get just real-time stats without context of historical data.
+   - Live-pulled from ES API endpoints. This will get just real-time stats without context of historical data.
    - The dev script in x-pack can be run on the command-line with:
     ```
     cd x-pack
@@ -76,17 +119,9 @@ There are a few ways you can test that your usage collector is working properly.
 3. In Dev mode, Kibana will send telemetry data to a staging telemetry cluster. Assuming you have access to the staging cluster, you can log in and check the latest documents for your new fields.
 4. If you catch the network traffic coming from your browser when a telemetry payload is sent, you can examine the request payload body to see the data. This can be tricky as telemetry payloads are sent only once per day per browser. Use incognito mode or clear your localStorage data to force a telemetry payload.
 
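For the first option, against a local dev Kibana on the default port (add credentials if your setup requires authentication), the request is simply:

```
curl 'http://localhost:5601/api/stats?extended=true'
```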
-✳ At the time of this writing, there is an open issue that in the sending phase, Kibana usage collectors are not "live-pulled" from Kibana API endpoints if Monitoring is disabled. The implementation on this depends on a new secure way to live-pull the data from the end-user's browser, as it would not be appropriate to supply only partial data if the logged-in user only has partial access to `.kibana`.
-
 ## FAQ
 
-1. **Can telemetry track UI interactions, such as button click?**
-   Brief answer: no. Telemetry collection happens on the server-side so the usage data will only include information that the server-side is aware of. There is no generic way to do this today, but UI-interaction KPIs can be tracked with a custom server endpoint that gets called for tracking when the UI event happens.
-2. **Does the telemetry service have a hook that I can call whenever some event happens in my feature?**
-   Brief answer: no. Telemetry collection is a fetch model, not a push model. Telemetry fetches info from your collector.
-3. **How should I design my data model?**
+1. **How should I design my data model?**
    Keep it simple, and keep it to a model that Kibana will be able to understand. In short, that means don't rely on nested fields (arrays with objects). Flat arrays, such as arrays of strings, are fine (see the sketch after this FAQ).
-4. **Can the telemetry payload include dynamic fields?**
-   Yes. When you talk to the Platform team about new fields being added, point out specifically which properties will have dynamic inner fields.
-5. **If I accumulate an event counter in server memory, which my fetch method returns, won't it reset when the Kibana server restarts?**
+2. **If I accumulate an event counter in server memory, which my fetch method returns, won't it reset when the Kibana server restarts?**
    Yes, but that is not a major concern. A visualization on such info might be a date histogram that gets events-per-second or something, which would be impacted by server restarts, so we'll have to offset the beginning of the time range when we detect that the latest metric is smaller than the earliest metric. That would be a pretty custom visualization, but perhaps future Kibana enhancements will be able to support that.
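As a small sketch of the flat shape recommended in FAQ 1 (field names are hypothetical):

```ts
// Fine: scalars and flat arrays of strings
const goodUsage = {
  my_objects: {
    total: 42,
    enabled_types: ['type_a', 'type_b'],
  },
};

// Avoid: nested fields (arrays of objects) are hard to map and aggregate in the telemetry cluster
const badUsage = {
  my_objects: [
    { type: 'type_a', count: 40 },
    { type: 'type_b', count: 2 },
  ],
};
```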

src/plugins/usage_collection/server/plugin.ts

Lines changed: 2 additions & 2 deletions
@@ -46,10 +46,10 @@ export class Plugin {
   }
 
   public start() {
-    this.logger.info('Starting plugin');
+    this.logger.debug('Starting plugin');
   }
 
   public stop() {
-    this.logger.info('Stopping plugin');
+    this.logger.debug('Stopping plugin');
   }
 }
