Add bounds to memory storage #845
Conversation
@objectiser would something like this be sufficient as an initial solution?
I'm OK with these changes, since the in-memory storage should be used for toying around (no need to optimize for performance). That being said, how hard would it be to use a circular buffer instead for the bounded storage?
Probably not that hard. Do you have opinions about https://golang.org/pkg/container/ring/ ? In any case, I would like to do it in a follow-up PR, probably with some heuristics to determine a good default based on the available memory, in case a limit isn't explicitly set. As this is probably causing istio/istio#5782, a "good enough" solution would be wonderful to have for now :)
@jpkrohling Any solution that prevents the crash will be acceptable at this stage - but it would need to be configurable, as the amount of memory used by a trace can vary significantly based on the number of spans and span size.
plugin/storage/memory/memory.go
Outdated
```
@@ -115,6 +130,17 @@ func (m *Store) WriteSpan(span *model.Span) error {
		m.traces[span.TraceID] = &model.Trace{}
	}
	m.traces[span.TraceID].Spans = append(m.traces[span.TraceID].Spans, span)
	m.ids = append(m.ids, &span.TraceID)
```
Shouldn't the id only be appended when the trace id first occurs, i.e. in the above conditional block?
Nice catch, will add a test for this.
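For illustration only (not the PR's actual code): a minimal, self-contained sketch of the shape suggested above, where the trace ID is appended to the FIFO queue only when the trace is first seen, and the oldest trace is evicted once an optional bound is exceeded. All type and field names are simplified stand-ins.

```go
package memory

import "sync"

// TraceID, Span, and Trace are simplified stand-ins for the real model types.
type TraceID string

type Span struct {
	TraceID TraceID
	Name    string
}

type Trace struct {
	Spans []*Span
}

// Store keeps traces in memory, bounded to maxTraces (0 means unbounded).
type Store struct {
	mu        sync.Mutex
	maxTraces int
	traces    map[TraceID]*Trace
	ids       []TraceID // trace IDs in insertion order, oldest first
}

func NewStore(maxTraces int) *Store {
	return &Store{maxTraces: maxTraces, traces: map[TraceID]*Trace{}}
}

// WriteSpan appends the span to its trace. The trace ID is recorded in the
// FIFO queue only when the trace is seen for the first time, and the oldest
// trace is evicted once the configured bound would be exceeded.
func (m *Store) WriteSpan(span *Span) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	if _, ok := m.traces[span.TraceID]; !ok {
		if m.maxTraces > 0 && len(m.ids) >= m.maxTraces {
			oldest := m.ids[0]
			m.ids = m.ids[1:]
			delete(m.traces, oldest)
		}
		m.traces[span.TraceID] = &Trace{}
		m.ids = append(m.ids, span.TraceID) // only on first occurrence
	}
	m.traces[span.TraceID].Spans = append(m.traces[span.TraceID].Spans, span)
	return nil
}
```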
@jpkrohling Actually, using a simple circular buffer may be better - as there could still be a large number of traces being held, so this approach could be very inefficient?
Seems like the circular buffer is the consensus :) Will do it in this PR.
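For reference, one way the circular buffer could look using container/ring from the standard library: a fixed-size ring of trace IDs where admitting a new trace into an occupied slot first evicts that slot's trace. This is a sketch with assumed names, not the code that was eventually merged.

```go
package memory

import "container/ring"

// TraceID and Trace are simplified stand-ins for the real model types.
type TraceID string
type Trace struct{ Spans []string }

// ringStore sketches a circular buffer of trace IDs: the ring has a fixed
// number of slots, and writing into an occupied slot evicts that slot's trace.
type ringStore struct {
	traces map[TraceID]*Trace
	slots  *ring.Ring
}

func newRingStore(maxTraces int) *ringStore {
	return &ringStore{
		traces: map[TraceID]*Trace{},
		slots:  ring.New(maxTraces),
	}
}

// admit registers a newly seen trace ID. If the current slot already holds a
// trace ID (i.e. the buffer is full), that oldest trace is evicted first.
func (s *ringStore) admit(id TraceID) {
	if old, ok := s.slots.Value.(TraceID); ok {
		delete(s.traces, old)
	}
	s.slots.Value = id
	s.slots = s.slots.Next()
	s.traces[id] = &Trace{}
}
```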
@jpkrohling FYI istio/istio#5840. Once this PR is merged, if you could sort out a release and let the istio team know - or, better still, do a PR on the … Might be good if the max number of traces is configurable as part of this PR, and included as a property in the values.yaml file.
```
// See the License for the specific language governing permissions and
// limitations under the License.

package config
```
I'm not sure this object is needed at all, but I'm following the pattern established by the Cassandra configuration object. Consistency wins :)
plugin/storage/memory/options.go
Outdated
"github.com/jaegertracing/jaeger/pkg/memory/config" | ||
) | ||
|
||
const limit = "memory.limit" |
This translates to a flag like --memory.limit=100. This is also consistent with cassandra, where the plugin name is the prefix for the option. What I don't like, though, is that "limit" is too generic and it's not clear what kind of limit this is. Passing --help will clarify it, but I'm open to suggestions for a better name. --memory.max-traces, perhaps?
Is it traces or spans?
Traces
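For illustration, roughly how a --memory.max-traces option could be declared with the standard library flag package; the variable names and help text below are assumptions, and the real code follows the project's own options pattern instead.

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// A value of 0 keeps the storage unbounded, matching the current default behavior.
	maxTraces := flag.Int("memory.max-traces", 0,
		"The maximum number of traces kept by the in-memory storage; 0 means unbounded")
	flag.Parse()
	fmt.Println("max traces:", *maxTraces)
}
```

Invoking such a binary with --memory.max-traces=50000 would then bound the storage to 50000 traces.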
I believe it's better and cleaner to bound the total number of spans. The docs should also mention an estimate of the average span size, so people roughly know what value should be used. I am not sure how much it would complicate your PR...
Span size estimation is hard when you don't know what people are storing within the span... Also, evicting traces is cleaner than evicting individual spans. Otherwise, a trace might show up now in the UI with 100 spans and then, after a couple of minutes, with only 10 spans.
> Span size estimation is hard when you don't know what people are storing within the span...

For example, it could use an average HTTP span, which is what most people use.

> Also, evicting traces is cleaner than evicting individual spans

I am not saying we should evict per span! The eviction strategy is a different question from the limit :)

> Otherwise, a trace might show up now in the UI with 100 spans and then, after a couple of minutes, with only 10 spans.

This can happen with your implementation too.
> For example, it could use an average HTTP span, which is what most people use.

I think this fits the memory-bound strategy that was mentioned before. We'll wait for #773 for that.

> I am not saying we should evict per span! The eviction strategy is a different question from the limit :)

OK, so you mean keeping a count of the number of spans and evicting whole traces when the number of spans reaches the limit? That could be an additional strategy and a candidate for a follow-up PR. For now, we just need a simple upper bound.
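A rough sketch of the alternative bound discussed here: count spans, but still evict whole traces, oldest first, until the count fits under the limit again. This only illustrates the idea and is not what this PR implements; the function and parameter names are made up for the example.

```go
package memory

// evictOldestUntilUnderSpanLimit drops whole traces, oldest first, until the
// total number of stored spans fits under maxSpans (0 disables the limit).
// traces maps a trace ID to its spans; order lists trace IDs oldest to newest.
// It returns the trimmed order slice and the remaining span count.
func evictOldestUntilUnderSpanLimit(traces map[string][]string, order []string, spanCount, maxSpans int) ([]string, int) {
	for maxSpans > 0 && spanCount > maxSpans && len(order) > 0 {
		oldest := order[0]
		order = order[1:]
		spanCount -= len(traces[oldest])
		delete(traces, oldest)
	}
	return order, spanCount
}
```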
Maybe "inmemory" rather than "memory", to make it clear that this refers to the storage implementation and not to actual memory?
I thought about that, but "memory" is what's used as a possible value for …
I changed this PR to not have a performance impact when the …
Created #846 as a follow-up, to have a circular list back the in-memory storage.
pkg/memory/config/config.go
Outdated
```
package config

// Configuration describes the configuration properties needed to connect to a Cassandra cluster
```
rm cassandra
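For context, once the copied Cassandra wording is removed, the configuration object could look roughly like this; the MaxTraces field name is taken from the diff further down, while the doc comments are assumptions.

```go
package config

// Configuration describes the options for the bounded in-memory storage.
type Configuration struct {
	// MaxTraces is the maximum number of traces kept in memory.
	// A value of 0 (the default) leaves the storage unbounded.
	MaxTraces int
}
```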
plugin/storage/memory/memory.go
Outdated
```
}

// WithConfiguration creates a new in memory storage based on the given configuration
func WithConfiguration(configuration *config.Configuration) *Store {
```
I've always preferred passing configurations around as values instead of references. The caller could technically update the config, which might lead to errant behavior.
And storing it in the parent struct by value as well
Aside from config by value, lgtm
```
@@ -113,6 +123,22 @@ func (m *Store) WriteSpan(span *model.Span) error {
	m.services[span.Process.ServiceName] = struct{}{}
	if _, ok := m.traces[span.TraceID]; !ok {
		m.traces[span.TraceID] = &model.Trace{}

		// if we have a limit, let's cleanup the oldest traces
		if m.config.MaxTraces > 0 {
```
Another reason to pass config by value - NPE
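A sketch of the by-value variant the reviewers are asking for: the Store copies the configuration at construction time, so callers can neither mutate it afterwards nor hand in a nil pointer. The names mirror the diff above, but the body is illustrative only, with a local stand-in for the config type.

```go
package memory

// Configuration is a local stand-in for the storage configuration type.
type Configuration struct {
	MaxTraces int
}

// Store holds the configuration by value: callers cannot mutate it after
// construction, and there is no pointer that could be nil when fields such
// as MaxTraces are read later in WriteSpan.
type Store struct {
	config Configuration
}

// WithConfiguration also takes the configuration by value and copies it
// into the Store.
func WithConfiguration(configuration Configuration) *Store {
	return &Store{config: configuration}
}
```

With this shape, the m.config.MaxTraces > 0 check in WriteSpan can never dereference a nil configuration, which addresses the NPE concern raised above.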
Resolves #842 by adding a simple FIFO eviction mechanism. Signed-off-by: Juraci Paixão Kröhling <[email protected]>
@yurishkuro, @black-adder, the changes from the last comment are implemented. This should now be ready to be merged.
* master: (38 commits)
  Preparing release 1.5.0 (jaegertracing#847)
  Add bounds to memory storage (jaegertracing#845)
  Add metric for debug traces (jaegertracing#796)
  Change metrics naming scheme (jaegertracing#776)
  Bump gocql version (jaegertracing#829)
  Remove ParentSpanID from domain model (jaegertracing#831)
  Make gas run quiet (jaegertracing#838)
  Revert "Make gas run quite"
  Revert "Install gas from install-ci"
  Install gas from install-ci
  Make gas run quite
  Add 'gas' for security problems scanning (jaegertracing#830)
  Add ability to adjust static sampling probabilities per operation (jaegertracing#827)
  Support log-level flag on agent (jaegertracing#828)
  Remove unused function (jaegertracing#822)
  Add healthcheck to standalone (jaegertracing#784)
  Do not use KeyValue fields directly and use KeyValues as decorator only (jaegertracing#810)
  Add ContaAzul to the adopters list (jaegertracing#806)
  Add ISSUE_TEMPLATE and PULL_REQUEST_TEMPLATE (jaegertracing#805)
  Upgrade to go 1.10 (jaegertracing#792)
  ...

# Conflicts:
#	cmd/agent/app/builder.go
#	cmd/collector/main.go
#	cmd/query/main.go
#	cmd/standalone/main.go
Which problem is this PR solving?
Resolves #842

Short description of the changes
This is WIP, as it's still missing the documentation and flag changes. If the approach from this PR is acceptable, I'll proceed with the doc/flag changes.

Signed-off-by: Juraci Paixão Kröhling [email protected]