
Add bounds to memory storage #845

Merged: 5 commits merged into jaegertracing:master on May 27, 2018
Conversation

@jpkrohling (Contributor) commented May 24, 2018

Resolves #842

Signed-off-by: Juraci Paixão Kröhling [email protected]

Which problem is this PR solving?

Short description of the changes

  • This PR adds a simple FIFO eviction mechanism (a rough sketch follows below)

This is WIP as it's missing:

  • Documentation changes
  • Flag to configure the queue size

If the approach from this PR is acceptable, I'll proceed with the doc/flag changes.
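
As a rough illustration of the FIFO approach (hypothetical names and types, not the PR's actual code):

```go
package memory

import "sync"

// boundedStore is a minimal sketch: it keeps at most maxTraces traces
// and evicts the oldest one (by insertion order) before admitting a new one.
type boundedStore struct {
	sync.Mutex
	maxTraces int
	ids       []string            // trace IDs in insertion (FIFO) order
	traces    map[string][]string // trace ID -> span payloads (placeholder type)
}

func newBoundedStore(maxTraces int) *boundedStore {
	return &boundedStore{maxTraces: maxTraces, traces: make(map[string][]string)}
}

func (s *boundedStore) writeSpan(traceID, span string) {
	s.Lock()
	defer s.Unlock()
	if _, ok := s.traces[traceID]; !ok {
		// new trace: evict the oldest one if the bound is reached
		if s.maxTraces > 0 && len(s.ids) >= s.maxTraces {
			delete(s.traces, s.ids[0])
			s.ids = s.ids[1:]
		}
		s.ids = append(s.ids, traceID)
	}
	s.traces[traceID] = append(s.traces[traceID], span)
}
```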

@jpkrohling (Contributor, author):

@objectiser would something like this be sufficient as an initial solution?

@black-adder (Contributor):

I'm OK with these changes, since the in-memory storage should only be used for toying around (no need to optimize for performance). That being said, how hard would it be to use a circular buffer for the bounded storage instead?

@coveralls commented May 24, 2018

Coverage remained the same at 100.0% when pulling 67d162e on jpkrohling:842-Add-Bounds-To-Memory-Storage into 037875b on jaegertracing:master.

@jpkrohling (Contributor, author):

how hard would it be to use a circular buffer for the bounded storage instead?

Probably not that hard. Do you have opinions about https://golang.org/pkg/container/ring/?

In any case, I would like to do it in a follow-up PR, probably with some heuristics to determine a good default based on the available memory, in case a limit isn't explicitly set. As this is probably causing istio/istio#5782, a "good enough" solution would be wonderful to have for now :)
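
For reference, container/ring from the standard library gives you a fixed-size circular buffer out of the box; a minimal, purely illustrative sketch with string trace IDs:

```go
package main

import (
	"container/ring"
	"fmt"
)

func main() {
	const maxTraces = 3
	r := ring.New(maxTraces) // fixed-size circular buffer

	// Writing more entries than the capacity silently overwrites the oldest.
	for i := 1; i <= 5; i++ {
		r.Value = fmt.Sprintf("trace-%d", i)
		r = r.Next()
	}

	// r now points at the oldest surviving entry; iterate in FIFO order.
	r.Do(func(v interface{}) {
		if v != nil {
			fmt.Println(v) // trace-3, trace-4, trace-5
		}
	})
}
```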

@objectiser (Contributor):

@jpkrohling Any solution that prevents the crash will be acceptable at this stage, but it would need to be configurable, as the amount of memory used by a trace can vary significantly based on the number of spans and span size.

@@ -115,6 +130,17 @@ func (m *Store) WriteSpan(span *model.Span) error {
m.traces[span.TraceID] = &model.Trace{}
}
m.traces[span.TraceID].Spans = append(m.traces[span.TraceID].Spans, span)
m.ids = append(m.ids, &span.TraceID)
Contributor:

Shouldn't the id only be appended when the trace id first occurs, i.e. in the above conditional block?

Contributor (author):

Nice catch, will add a test for this.
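
For clarity, the fix is roughly to move the append into the conditional block from the diff above, so the ID is recorded only when a trace is first seen (a sketch, not the final code):

```go
if _, ok := m.traces[span.TraceID]; !ok {
	m.traces[span.TraceID] = &model.Trace{}
	// record the trace ID only on first occurrence, so m.ids
	// stays aligned with the set of traces actually stored
	m.ids = append(m.ids, &span.TraceID)
}
m.traces[span.TraceID].Spans = append(m.traces[span.TraceID].Spans, span)
```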

@yurishkuro (Member):

  • +1 to container/ring, as the current impl seems to grow the array infinitely
  • may want to defer a memory-bound implementation until #773 (Use Protobuf and gRPC for internal communications), which will make it much easier to estimate the memory size. But we could also build a simple heuristic now that approximates the size of a trace

@objectiser (Contributor):

@jpkrohling Actually, using a simple circular buffer may be better: there could still be a large number of traces being held, so this approach could be very inefficient?

@jpkrohling (Contributor, author) commented May 25, 2018

Seems like the circular buffer is the consensus :) Will do it in this PR.

@objectiser (Contributor):

@jpkrohling FYI istio/istio#5840

Once this PR is merged, could you sort out a release and let the Istio team know - or, better still, do a PR on the release-0.8 branch to update https://github.com/istio/istio/blob/release-0.8/install/kubernetes/helm/istio/values.yaml#L401.

Might be good if the max number of traces is configurable as part of this PR, and included as a property in the values.yaml file.

// See the License for the specific language governing permissions and
// limitations under the License.

package config
Contributor (author):

I'm not sure this object is needed at all, but I'm following the pattern established by the Cassandra configuration object. Consistency wins :)

"github.com/jaegertracing/jaeger/pkg/memory/config"
)

const limit = "memory.limit"
Contributor (author):

This translates to a flag like --memory.limit=100. This is also consistent with cassandra, where the plugin name is the prefix for the option. What I don't like, though, is that "limit" is too generic and it's not clear what kind of limit this is. Passing --help will clarify it, but I'm open to suggestions for a better name.

--memory.max-traces perhaps?

Member:

Is it traces or spans?

Contributor (author):

Traces

Member:

I believe it's better and cleaner to bound on the total number of spans. The docs should also mention an average estimate of span size, so people roughly know what value to use.

I am not sure how much it would complicate your PR...

Contributor (author):

Span size estimation is hard when you don't know what people are storing within the span... Also, evicting traces is cleaner than evicting individual spans. Otherwise, a trace might show up now in the UI with 100 spans and then, after a couple of minutes, with only 10 spans.

Member:

Span size estimation is hard when you don't know what people are storing within the span...

For example, it could use an average HTTP span, which is what most people use.

Also, evicting traces is cleaner than evicting individual spans

I am not saying we should evict per span! The eviction strategy is different from the limit :)

Otherwise, a trace might show up now in the UI with 100 spans and then, after a couple of minutes, with only 10 spans.

This can happen with your impl too.

Contributor (author):

For example, it could use an average HTTP span, which is what most people use.

I think this fits the memory-bound strategy that was mentioned before. We'll wait for #773 for that.

I am not saying we should evict per span! The eviction strategy is different from the limit :)

OK, so you mean to keep a count of the number of spans and evict whole traces when that count reaches the limit? That could be a possible additional strategy and a candidate for a follow-up PR. For now, we just need a simple upper bound.
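
To make the flag concrete, here is a hypothetical registration using Go's standard flag package, following the --memory.max-traces name suggested above (the real plugin wires its flags through Jaeger's shared flag handling, so treat this as a sketch):

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// 0 keeps the storage unbounded, preserving the old default behavior
	maxTraces := flag.Int("memory.max-traces", 0,
		"The maximum number of traces to keep in memory (0 = unbounded)")
	flag.Parse() // accepts --memory.max-traces=100 on the command line
	fmt.Println("memory.max-traces:", *maxTraces)
}
```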

@jpkrohling changed the title from "WIP - Add bounds to memory storage" to "Add bounds to memory storage" on May 25, 2018
@objectiser (Contributor):

Maybe inmemory rather than memory, to make it clear this is about the storage implementation and not actual memory?

@jpkrohling (Contributor, author):

Maybe inmemory rather than memory, to make it clear this is about the storage implementation and not actual memory?

I thought about that, but "memory" is what's used as a possible value for SPAN_STORAGE_TYPE :-/

@jpkrohling (Contributor, author):

I changed this PR to have no performance impact when the --memory.max-traces option isn't used: in that case, the ids slice always stays empty.
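
A sketch of what that guard looks like (assumed shape, consistent with the diff quoted later in this thread):

```go
if _, ok := m.traces[span.TraceID]; !ok {
	m.traces[span.TraceID] = &model.Trace{}

	// only pay the ID-tracking cost when a bound is configured;
	// with MaxTraces == 0 the ids slice never grows
	if m.config.MaxTraces > 0 {
		if len(m.ids) == m.config.MaxTraces {
			delete(m.traces, *m.ids[0]) // evict the oldest trace
			m.ids = m.ids[1:]
		}
		m.ids = append(m.ids, &span.TraceID)
	}
}
```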

@jpkrohling (Contributor, author):

Created #846 as a follow-up, to have a circular list back the in-memory storage.

@ghost ghost assigned black-adder May 25, 2018

package config

// Configuration describes the configuration properties needed to connect to a Cassandra cluster
Contributor:

rm cassandra

}

// WithConfiguration creates a new in memory storage based on the given configuration
func WithConfiguration(configuration *config.Configuration) *Store {
Contributor:

I've always preferred passing around configurations as values instead of references. The caller could technically update the config, which might lead to errant behavior.

Member:

And storing it in the parent struct by value as well

@yurishkuro (Member) left a review comment:

Aside from config by value, lgtm

@@ -113,6 +123,22 @@ func (m *Store) WriteSpan(span *model.Span) error {
m.services[span.Process.ServiceName] = struct{}{}
if _, ok := m.traces[span.TraceID]; !ok {
m.traces[span.TraceID] = &model.Trace{}

// if we have a limit, let's cleanup the oldest traces
if m.config.MaxTraces > 0 {
Member:

Another reason to pass config by value: a nil config would cause an NPE here.
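
A sketch of the by-value variant the reviewers are suggesting (illustrative; the import path matches the config package shown earlier in this diff):

```go
package memory

import "github.com/jaegertracing/jaeger/pkg/memory/config"

// Store holds its configuration by value, so callers cannot mutate it
// after construction and a nil-pointer dereference is impossible.
type Store struct {
	config config.Configuration
	// ... other fields elided
}

// WithConfiguration creates a new in-memory storage from the given
// configuration, copying it into the Store.
func WithConfiguration(configuration config.Configuration) *Store {
	return &Store{config: configuration}
}
```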

Resolves #842 by adding a simple FIFO eviction mechanism.

Signed-off-by: Juraci Paixão Kröhling <[email protected]>
Signed-off-by: Juraci Paixão Kröhling <[email protected]>
Signed-off-by: Juraci Paixão Kröhling <[email protected]>
Signed-off-by: Juraci Paixão Kröhling <[email protected]>
@jpkrohling (Contributor, author):

@yurishkuro, @black-adder, the changes from the last comment are implemented. This should now be ready to be merged.

@black-adder black-adder merged commit 7796f75 into jaegertracing:master May 27, 2018
@ghost ghost removed the review label May 27, 2018
mabn pushed a commit to mabn/jaeger that referenced this pull request May 28, 2018
* master: (38 commits)
  Preparing release 1.5.0 (jaegertracing#847)
  Add bounds to memory storage (jaegertracing#845)
  Add metric for debug traces (jaegertracing#796)
  Change metrics naming scheme (jaegertracing#776)
  Bump gocql version (jaegertracing#829)
  Remove ParentSpanID from domain model (jaegertracing#831)
  Make gas run quiet (jaegertracing#838)
  Revert "Make gas run quite"
  Revert "Install gas from install-ci"
  Install gas from install-ci
  Make gas run quite
  Add 'gas' for security problems scanning (jaegertracing#830)
  Add ability to adjust static sampling probabilities per operation (jaegertracing#827)
  Support log-level flag on agent (jaegertracing#828)
  Remove unused function (jaegertracing#822)
  Add healthcheck to standalone (jaegertracing#784)
  Do not use KeyValue fields directly and use KeyValues as decorator only (jaegertracing#810)
  Add ContaAzul to the adopters list (jaegertracing#806)
  Add ISSUE_TEMPLATE and PULL_REQUEST_TEMPLATE (jaegertracing#805)
  Upgrade to  go 1.10 (jaegertracing#792)
  ...

# Conflicts:
#	cmd/agent/app/builder.go
#	cmd/collector/main.go
#	cmd/query/main.go
#	cmd/standalone/main.go
@jpkrohling jpkrohling deleted the 842-Add-Bounds-To-Memory-Storage branch July 28, 2021 19:22