K6: Use HDR histograms for calculating percentiles in thresholds and summary stats


Currently the Trend-based threshold checks and end-of-test summary stats rely on saving all of the relevant metric values in memory. This is essentially a large memory leak that can negatively affect long-running and/or HTTP-heavy load tests, as reported by a user on Slack.

For the moment, the best solution for those issues with no loss of functionality and only a very small loss of precision appears to be HdrHistogram. Here's an excerpt from the project's description:

A Histogram that supports recording and analyzing sampled data value counts across a configurable integer value range with configurable value precision within the range. Value precision is expressed as the number of significant digits in the value recording, and provides control over value quantization behavior across the value range and the subsequent value resolution at any given level.
[ ... ]
HDR Histogram is designed for recording histograms of value measurements in latency and performance sensitive applications. [ ... ] The HDR Histogram maintains a fixed cost in both space and time. A Histogram's memory footprint is constant, with no allocation operations involved in recording data values or in iterating through them. The memory footprint is fixed regardless of the number of data value samples recorded, and depends solely on the dynamic range and precision chosen. The amount of work involved in recording a sample is constant, and directly computes storage index locations such that no iteration or searching is ever involved in recording data values.
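
The constant footprint comes from quantizing each value to a fixed number of significant digits before counting it. Here's a minimal, self-contained Go sketch of that bucketing idea — the `quantize` function below is purely illustrative; the real HdrHistogram uses a cleverer base-2 sub-bucket layout, but the principle is the same:

```go
package main

import (
	"fmt"
	"math"
)

// quantize rounds v down to sigfigs significant decimal digits. This is the
// core trick behind HdrHistogram's bounded memory: any recorded value lands
// in a bucket whose relative error is at most 10^-sigfigs, so the number of
// distinct buckets depends only on the value range and the chosen precision,
// never on the sample count.
func quantize(v int64, sigfigs int) int64 {
	if v <= 0 {
		return v
	}
	digits := int(math.Floor(math.Log10(float64(v)))) + 1
	if digits <= sigfigs {
		return v // small values are stored exactly
	}
	scale := int64(math.Pow10(digits - sigfigs))
	return (v / scale) * scale
}

func main() {
	// Two latencies that differ only in their 4th significant digit
	// share a bucket:
	fmt.Println(quantize(1234567, 3), quantize(1234999, 3)) // 1230000 1230000
	// Small values are stored exactly:
	fmt.Println(quantize(987, 3)) // 987
}
```

With 3 significant digits, any two values within roughly 0.1% of each other share a bucket, so the total bucket count is bounded by the value range and precision alone, no matter how many samples a test records.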

This is the original Java library by Gil Tene. He also has some great talks (this and its earlier version, for example) that explain some of the common pitfalls when people measure latency and why he built HdrHistogram. They're also a very strong argument for why we should prioritize the arrival-rate based VU executor... :smile:

This is an MIT-licensed Go implementation of HdrHistogram, though it seems to be dead - an archived repo with no recent commits and unresolved issues and PRs. So we may need to fork that repo and maintain it ourselves, or re-implement the algorithm.

Another thing that HdrHistogram may help with is exposing summary stats in the teardown() function, or outputting them to a JSON file at the end. This is something that a lot of users have requested - https://github.com/loadimpact/k6/issues/647, https://github.com/loadimpact/k6/issues/351, and somewhat https://github.com/loadimpact/k6/issues/355.

Most of the difficulty there lies in exposing the raw data to the JS runtime (with HdrHistogram we could simply expose its API), and especially in implementing the stats calculation in a distributed execution environment (the current Load Impact cloud or the future native k6 cluster execution). Having Trend metrics backed by HdrHistogram should allow us to avoid schlepping all of the raw metrics data between k6 instances (or requiring an external DB) at the end of a distributed test...
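
The reason a histogram-backed Trend sink avoids shipping raw samples is that bucket counts merge by plain addition. A toy sketch of that aggregation step, assuming a simple fixed-bucket layout (none of these type or method names are k6 or HdrHistogram API):

```go
package main

import "fmt"

// Histogram is a toy fixed-bucket histogram: counts[i] holds the number of
// samples that fell into bucket i.
type Histogram struct {
	counts []int64
	total  int64
}

func New(buckets int) *Histogram {
	return &Histogram{counts: make([]int64, buckets)}
}

func (h *Histogram) Record(bucket int) {
	h.counts[bucket]++
	h.total++
}

// Merge adds another histogram's counts into h. This is all a coordinator
// needs in order to aggregate results from many load-generator instances:
// no raw samples ever cross the wire.
func (h *Histogram) Merge(other *Histogram) {
	for i, c := range other.counts {
		h.counts[i] += c
	}
	h.total += other.total
}

// Quantile returns the lowest bucket index at which the running count
// exceeds fraction q of the total.
func (h *Histogram) Quantile(q float64) int {
	target := int64(q * float64(h.total))
	var running int64
	for i, c := range h.counts {
		running += c
		if running > target {
			return i
		}
	}
	return len(h.counts) - 1
}

func main() {
	a, b := New(100), New(100)
	for i := 0; i < 90; i++ {
		a.Record(10) // instance A saw 90 fast samples
	}
	for i := 0; i < 10; i++ {
		b.Record(80) // instance B saw 10 slow samples
	}
	a.Merge(b) // a now represents both instances' data
	fmt.Println(a.Quantile(0.5), a.Quantile(0.95)) // 10 80
}
```

Each instance would serialize just its counts array; the coordinator merges them and can then compute any percentile, at a cost independent of the number of recorded samples.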

na--  ·  12 Sep 2018


All comments


Just noticed that the go-metrics library proposed in https://github.com/loadimpact/k6/issues/429 is a Go port of this Java metrics library, whose original author was Coda Hale - the same person who authored the dead Go library for HdrHistogram linked above. Some further investigation of that metrics library (and the topic in general) may offer other benefits, so it may be worth it to deal with both this issue and https://github.com/loadimpact/k6/issues/429 at the same time.

na--  ·  12 Sep 2018

@MStoykov found an active fork of the original HDR histogram repo: https://github.com/kanosaki/hdrhistogram

na--  ·  8 Jul 2019

I've looked at the HDR histogram and go-metrics libraries, but it seems that their histogram implementations can only store int64 values, which doesn't fit Trend metrics, whose samples are float64.

Sirozha1337  ·  10 Jul 2019

Hm, that's a good point :confused: I think all of the internal k6 Trend metrics, like the http_req_* ones, actually start out as time.Duration values (which is basically an int64 nanosecond count), but they get converted to float64 when they are put in the stats.Sample struct.

It seems to me like the choices are:

  • convert the floats in the metric samples back to int64, losing some performance and accuracy in the process
  • refactor the k6 metrics so that the original time.Duration values aren't lost
  • figure out an alternative to HdrHistogram (previous discussion why that would be difficult here: https://github.com/loadimpact/k6/issues/1068#issuecomment-507156609)

And in all of these scenarios, but especially the first two, it's not clear what we should do with custom user-defined Trend metrics. After all, the HdrHistogram is a good fit for the normal Trend metrics that k6 emits, since they are positive numbers that are usually close to 0. But a user may track something completely different, and potentially unsuitable for an HdrHistogram to handle accurately...

na--  ·  10 Jul 2019

What if we implement the HDR histogram only for time-based trend metrics?
If the user sets an isTime parameter for their metric, then the HdrHistogram will be used; otherwise we keep the current method of collecting trend metrics.

Sirozha1337  ·  10 Jul 2019

Yeah, that would probably work in most cases, though we have no guarantee that the times users choose will be close to 0. The alternative would probably be to expose the implementation a bit and add a separate parameter that controls whether the sink for a Trend metric is backed by an HdrHistogram or not...

na--  ·  10 Jul 2019

Or we could use the histogram with exponentially decaying samples from the go-metrics package, provided all the values are int64. That type of histogram doesn't depend on what values are stored in it, and it's much easier to modify if we need to support float64.
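
For context, the go-metrics histogram bounds memory differently: instead of buckets it keeps a fixed-size sample reservoir, which is why the stored values' range and sign don't matter. A simplified uniform reservoir (Vitter's Algorithm R) shows the idea; the exponentially decaying variant used by go-metrics additionally weights the sample toward recent values:

```go
package main

import (
	"fmt"
	"math/rand"
)

// Reservoir keeps a uniform random sample of at most `size` values from a
// stream of any length (Vitter's Algorithm R). Memory stays bounded no
// matter how many values are recorded, and the values can be arbitrary
// float64s - no assumptions about range or precision.
type Reservoir struct {
	samples []float64
	seen    int64
	rng     *rand.Rand
}

func NewReservoir(size int, seed int64) *Reservoir {
	return &Reservoir{
		samples: make([]float64, 0, size),
		rng:     rand.New(rand.NewSource(seed)),
	}
}

func (r *Reservoir) Record(v float64) {
	r.seen++
	if len(r.samples) < cap(r.samples) {
		r.samples = append(r.samples, v)
		return
	}
	// Replace a random slot with probability size/seen, which keeps the
	// sample uniform over everything seen so far.
	if j := r.rng.Int63n(r.seen); j < int64(cap(r.samples)) {
		r.samples[j] = v
	}
}

func main() {
	r := NewReservoir(100, 1)
	for i := 0; i < 1_000_000; i++ {
		r.Record(float64(i))
	}
	fmt.Println(len(r.samples), r.seen) // 100 1000000
}
```

The trade-off versus a histogram: percentiles computed from a reservoir are estimates from a random sample, with no hard error bound like HdrHistogram's significant-digit guarantee.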

Sirozha1337  ·  11 Jul 2019

https://github.com/loadimpact/k6/issues/1064#issuecomment-660908647 pointed to another approach that deserves some investigation, before we start implementing things: https://github.com/tdunning/t-digest

Go versions: https://github.com/spenczar/tdigest, https://github.com/influxdata/tdigest

na--  ·  20 Jul 2020

I'm nobody in particular, but this would be really cool. I've been using HDR Histograms to provide a consistent interface over different load-testing tools' output stats/metrics, to be able to chart them together coherently (e.g. k6 + wrk2 + autocannon, etc).

You're able to get a really high degree of information density, and HDR Histogram has addWhileCorrectingForCoordinatedOmission and copyCorrectedForCoordinatedOmission, which are unique.
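
For readers unfamiliar with those methods: coordinated-omission correction works by back-filling the samples a non-stalled load generator would have recorded. A simplified sketch of that back-filling step (not the actual HdrHistogram code):

```go
package main

import "fmt"

// correctedSamples returns the recorded value plus the synthetic values an
// HdrHistogram-style coordinated-omission correction would add: if one
// request took `value` but we intended to issue a request every `interval`,
// the requests stalled behind it would have observed latencies of
// value-interval, value-2*interval, and so on.
func correctedSamples(value, interval int64) []int64 {
	out := []int64{value}
	if interval <= 0 {
		return out
	}
	for missing := value - interval; missing >= interval; missing -= interval {
		out = append(out, missing)
	}
	return out
}

func main() {
	// One 1000ms response on a 100ms issue rate hides nine other slow
	// would-be requests:
	fmt.Println(correctedSamples(1000, 100))
}
```

Without this correction, a single long stall looks like one slow request instead of a sustained period of slow service, which badly understates the high percentiles.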

Currently, to do this, I have to write the stdout JSONL logs to a file, create a line-reader, parse them, and then build up the histogram from the logs post-run:

Gist, to avoid spamming the thread with unnecessary code:
https://gist.github.com/GavinRay97/b57094686f64ad4591c55eb7b9dd5cac

GavinRay97  ·  24 Jul 2020

Any movement on this? I'm sad that we don't get a complete histogram for timings. k6 produces some random percentiles (0, 50, 90, 95, 100; why not 99?), but that's not sufficient to draw a complete latency chart.

We've been using hdrhistogram-go for a project, and it seems mature enough to use in k6.

atombender  ·  19 Dec 2020

@atombender you can specify whatever percentiles you want.
Via a command-line flag:
k6 run --summary-trend-stats="p(42),p(99),p(99.9)" script.js
or via the options object:

export let options = {
  summaryTrendStats: ['p(42)', 'p(99)', 'p(99.9)']
};
Sirozha1337  ·  19 Dec 2020

@Sirozha1337 That option causes k6 to _print_ percentiles, but they don't end up in the file specified with --summary-export:

$ k6 run k6.js --summary-export summary.json --summary-trend-stats="p(42),p(99),p(99.9)"

          /\      |‾‾| /‾‾/   /‾‾/
     /\  /  \     |  |/  /   /  /
    /  \/    \    |     (   /   ‾‾\
   /          \   |  |\  \ |  (‾)  |
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: ./tools/perfrunner/k6.js
     output: -

  scenarios: (100.00%) 1 scenario, 100 max VUs, 31s max duration (incl. graceful stop):
           * test: 200.00 iterations/s for 1s (maxVUs: 50-100, gracefulStop: 30s)


running (01.0s), 000/050 VUs, 201 complete and 0 interrupted iterations
test ✓ [======================================] 050/050 VUs  1s  200 iters/s

    ✓ status200

    checks.....................: 100.00% ✓ 201  ✗ 0
    data_received..............: 225 kB  220 kB/s
    data_sent..................: 122 kB  119 kB/s
    gradient_query_exec_time...: p(42)=1      p(99)=17      p(99.9)=18.6
    http_req_blocked...........: p(42)=4µs    p(99)=245µs   p(99.9)=568.4µs
    http_req_connecting........: p(42)=0s     p(99)=188µs   p(99.9)=213.8µs
    http_req_duration..........: p(42)=2.5ms  p(99)=17.93ms p(99.9)=20.05ms
    http_req_receiving.........: p(42)=37µs   p(99)=73µs    p(99.9)=82.8µs
    http_req_sending...........: p(42)=19µs   p(99)=72µs    p(99.9)=82.2µs
    http_req_tls_handshaking...: p(42)=0s     p(99)=0s      p(99.9)=0s
    http_req_waiting...........: p(42)=2.43ms p(99)=17.85ms p(99.9)=19.9ms
    http_reqs..................: 201     196.581631/s
    iteration_duration.........: p(42)=2.83ms p(99)=18.22ms p(99.9)=20.97ms
    iterations.................: 201     196.581631/s
    vus........................: 50      min=50 max=50
    vus_max....................: 50      min=50 max=50

$ grep "p\(" summary.json
            "p(90)": 16,
            "p(95)": 16
            "p(90)": 0.205,
            "p(95)": 0.215
            "p(90)": 0.156,
            "p(95)": 0.167
            "p(90)": 16.618,
            "p(95)": 17.174
            "p(90)": 0.055,
            "p(95)": 0.058
            "p(90)": 0.043,
            "p(95)": 0.047
            "p(90)": 0,
            "p(95)": 0
            "p(90)": 16.516,
            "p(95)": 17.128
            "p(90)": 17.058415,
            "p(95)": 17.508086
atombender  ·  20 Dec 2020

@atombender That issue is being tracked in #1611, and a fix will likely land in v0.30.0, planned for mid-January.

No updates on this issue yet, as other work has taken higher priority. Most of the team is on vacation right now, but I'll discuss making this a priority for the upcoming releases.

imiric  ·  21 Dec 2020