-# TDM Overview
+# TDM Documentation

-- [TDM Overview](#tdm-overview)
-  - [Plan](#plan)
-  - [Problem Statement](#problem-statement)
-  - [Solution](#solution)
-  - [Elaboration](#elaboration)
-  - [Components](#components)
+TDM is documented with [VuePress](https://vuepress.vuejs.org/): the documentation is written in Markdown and converted by VuePress to HTML, which is deployed alongside TDM. This lets us write documentation in Markdown rather than hand-coding HTML pages. It does break some GitHub-style navigation, but results in a nice, self-hosted documentation site.

-* [Crash Course](Crash%20Course.md)
-* [Cool Stuff](Cool%20Stuff.md)
+```bash
+# Run dev server on localhost:8089
+./standalone.sh
+```
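For reference, here is a minimal sketch of invoking VuePress directly instead of through `standalone.sh`. It assumes VuePress 1.x is available via `npx` and that the Markdown sources live under `docs/` (see Manual Navigation below); the exact invocation used by `standalone.sh` and the deployment pipeline may differ.

```bash
# Sketch only: assumes VuePress 1.x and Markdown sources under docs/.
# Serve the docs with live reload (port chosen here to match standalone.sh).
npx vuepress dev docs --port 8089

# Build the static HTML; VuePress writes to docs/.vuepress/dist by default.
npx vuepress build docs
```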

-## Plan
-Anything besides the following must be addressed on an engagement/need basis.
+## Manual Navigation
+If you do not want to run the standalone server or production TDM, you may browse this repository's Markdown documents directly, though the links may be harder to follow.

-- [x] MVP1 - Consume existing spreadsheets of mappings and provide the same experience via UI.
-- [x] MVP2 - Provide search and mapping capability via UI.
-- [x] Exit Strategy - Open Source++
-- [ ] Extended Goals - Recurring ETL process to remove any maintenance requirement, and refactor code to be more maintainable.
-
-## Problem Statement
-In its current state, network telemetry can be accessed in many different ways that are not easily reconciled; for instance, the same information may be found in both SNMP MIBs and YANG models. There is no way to determine whether the information gathered will have the same values, or which source is more accurate than another. Further, the operational methods of deploying this monitoring vary across platforms and implementations. This makes network monitoring a fragmented ecosystem of inconsistent and unverified data. Discovering the datapoints in the first place is often tedious and somewhat arcane as well.
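As a concrete illustration of this fragmentation, the same datapoint (interface input octets) is addressed completely differently via SNMP and via a YANG model. The host, community string, and interface names below are placeholders, not TDM specifics.

```bash
# SNMP: IF-MIB::ifHCInOctets (OID 1.3.6.1.2.1.31.1.1.1.6), polled per interface index.
snmpget -v2c -c public router1.example.com IF-MIB::ifHCInOctets.1

# Model Driven Telemetry: the (arguably) equivalent OpenConfig leaf is subscribed to, e.g.
#   /interfaces/interface[name=GigabitEthernet0/0/0]/state/counters/in-octets
# Subscription mechanics vary by platform (gNMI, NETCONF, native gRPC dial-out).
```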
-
-## Solution
-TDM seeks to solve this problem by providing a simple schema that models all forms of accessing network telemetry, along with the capability to create relationships between individual data points that demonstrate consistency, validity, and interoperability.
-
-### Elaboration
-We are seeking to alleviate two major problems using TDM:
-
-1. The same data is addressed differently across platforms.
-2. Data discovery is difficult.
-
-The two problems above are especially prevalent now, with data driving business-impacting decisions and Streaming Telemetry/Model Driven Telemetry (MDT) coming to market. Our customers currently experience a difficult and uncertain upgrade path transitioning from SNMP to MDT. We hope that Telemetry Data Mapper will make it easy to map the data available from different protocols like SNMP, gRPC, NETCONF, etc., and serve as a source of truth for the transformation and exploration of data across OSes, releases, and platforms.
-
-Telemetry Data Mapper encompasses more than just Streaming Telemetry: it seeks to enable mappings between any form of telemetry on any device or platform, such as SNMP. TDM focuses on the data points gathered via MDT, SNMP, and so on. Data points are tracked on a per-device/platform basis, with relationships created between them in a database to illustrate equivalency, or whatever else we would like to track and demonstrate (see the sketch below). Thus we can begin to see which data points are equivalent across devices and platforms, and holistically collect and analyze the data.
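As a sketch of what such a relationship could look like in a graph database, an equivalence between two datapoints might be stored as an edge document. The collection names, document keys, and credentials below are hypothetical, not TDM's actual schema.

```bash
# Hypothetical: record that an SNMP datapoint and a YANG leaf are equivalent.
# Collection names ("matches", "datapoints"), keys, and credentials are illustrative only.
curl -s -u root:CHANGEME -X POST "http://localhost:8529/_api/document/matches" \
  -d '{
        "_from": "datapoints/snmp-ifHCInOctets",
        "_to":   "datapoints/oc-interfaces-in-octets",
        "relationship": "equivalent"
      }'
```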
-
-As a side effect of the necessity of these mappings, we also gain offline visibility into what data is available for collection, along with the ability to easily explore that data. Further, we can begin analyzing data coverage and, even more importantly, validating the data on a cross-platform basis! There are huge quality assurance benefits to having an offline view of our platform data.
-
-In order to solve these problems, we must first understand all of the difficulties and arcane methods involved in getting this data, and then provide easy presentation and usage. *yay*
-
-## Components
-
-TDM is made up of several different technologies. Each is described in more detail in its own code subdirectory where not covered here.
-* [ETL Code](/etl/)
-* [Web Code](/web/)
-* [NGINX / Goaccess](/nginx/)
-* [ArangoDB](https://www.arangodb.com/)
-ArangoDB was already being used elsewhere in our team, so we decided to bring it into this project as well. Being a graph database, it has some attractive features, flexibility, and a GUI that could, in theory, make it easier for relatively technical users to use the database directly instead of requiring custom web interfaces. ArangoDB contains TDM's processed source of truth. The ETL process parses all of the different data we are interested in and formalizes relationships in ArangoDB, enabling querying capabilities on the data instead of doing all of the work in code. As a graph database, ArangoDB has a nice query language (AQL) for traversing schemas, which simplifies some queries (see the sketch after this list). However, it does come with some maturity limitations, such as lacking "views" (since implemented). We are currently using it very much like a relational database and should likely transition to an RDBMS eventually; however, given that it works and that we have operationalized around it, there is no pressing need to do so.
-* [Elasticsearch](https://www.elastic.co/products/elasticsearch)
-Elasticsearch is effectively a mitigation of search difficulty in ArangoDB. To achieve reasonable search performance in ArangoDB alone, we wrote a query that resulted in monstrous RAM usage (32 GB for a search of "bgp") and could take up to a minute to return, which was unacceptable. We could have revisited aspects of TDM's design, but decided to try Elasticsearch instead. Elasticsearch receives all of the data in TDM in a denormalized form to enable fast and powerful search. It is perfectly suited to our search needs and solved a couple of search issues at once by implicitly handling fuzziness, scoring of results, and more (see the sketch after this list). Our current usage is by no means perfect and is rather naive, but it is extremely fast and sits at a near-constant 1 GB of RAM usage under search load.
-* [Kibana](https://www.elastic.co/products/kibana)
-Kibana provides a nice interface for exploring Elasticsearch and is useful for working out the Elasticsearch query structures to bring into custom interfaces. Elasticsearch's query syntax has many options, so a tool that enables easy query iteration, letting you see how a query needs to be structured, is very useful. If TDM's Web UI search does not meet your requirements, try using Kibana and open an issue for improvement.
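Referenced above under ArangoDB, here is a hedged sketch of the kind of graph traversal AQL makes convenient, run through ArangoDB's HTTP cursor API. The graph, collection, and attribute names and the credentials are hypothetical, not TDM's actual schema.

```bash
# Hypothetical AQL traversal: find datapoints related to a given starting datapoint.
# Graph/collection/attribute names ("tdm_graph", "datapoints", "human_id") and
# credentials are illustrative only.
curl -s -u root:CHANGEME -X POST "http://localhost:8529/_api/cursor" \
  -d '{
        "query": "FOR v, e IN 1..1 ANY @start GRAPH \"tdm_graph\" RETURN { match: v.human_id, via: e.relationship }",
        "bindVars": { "start": "datapoints/snmp-ifHCInOctets" }
      }'
```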
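Similarly, referenced above under Elasticsearch, a hedged sketch of the kind of fuzzy search it handles for us. The index and field names are hypothetical.

```bash
# Hypothetical search: fuzzy match "bgp" against a denormalized datapoint index.
# Index ("tdm") and field ("human_id") names are illustrative only.
curl -s -X POST "http://localhost:9200/tdm/_search" \
  -H 'Content-Type: application/json' \
  -d '{
        "query": {
          "match": {
            "human_id": { "query": "bgp", "fuzziness": "AUTO" }
          }
        }
      }'
```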
+* Documentation: `docs/*`, except `.vuepress`
+* Static Content: `docs/.vuepress/public`