
src: implement FastByteLengthUtf8 with simdutf::utf8_length_from_latin1 #50840

Merged
merged 4 commits into from
Dec 19, 2023

Conversation

@lemire lemire commented Nov 21, 2023

This PR proposes to replace the conventional FastByteLengthUtf8 implementation with simdutf::utf8_length_from_latin1. Internally, the simdutf library provides SIMD-accelerated, architecture-specific implementations of this function. The results are presented in the following blog posts...

The speed can be 30x better in some instances.

This PR actually reduces the total number of lines of code.
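For context, the quantity being computed is simple to state: in UTF-8, every Latin-1 byte below 0x80 encodes to one byte, and every byte at or above 0x80 encodes to two. A scalar reference version (a sketch for illustration, not the code in this PR or in simdutf) looks roughly like:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// UTF-8 byte length of a Latin-1 (ISO-8859-1) buffer.
// Bytes < 0x80 encode to 1 UTF-8 byte; bytes >= 0x80 encode to 2.
size_t utf8_length_from_latin1_scalar(const uint8_t* data, size_t n) {
  size_t length = n;
  for (size_t i = 0; i < n; i++) {
    length += data[i] >> 7;  // adds 1 for each byte >= 0x80
  }
  return length;
}
```

simdutf vectorizes this byte-counting loop with kernels selected for the host CPU, which is where the large speedups on long inputs come from.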

@nodejs-github-bot nodejs-github-bot added buffer Issues and PRs related to the buffer subsystem. c++ Issues and PRs that require attention from people who are familiar with C++. needs-ci PRs that need a full CI run. labels Nov 21, 2023
@joyeecheung joyeecheung added the request-ci Add this label to start a Jenkins CI on a PR. label Nov 22, 2023
@github-actions github-actions bot removed the request-ci Add this label to start a Jenkins CI on a PR. label Nov 22, 2023
H4ad commented Nov 22, 2023

The Benchmark Result:

                                                                                              confidence improvement accuracy (*)    (**)   (***)
buffers/buffer-bytelength-buffer.js n=4000000 len=16                                                         -3.30 %       ±9.88% ±13.14% ±17.11%
buffers/buffer-bytelength-buffer.js n=4000000 len=2                                                           1.38 %       ±9.65% ±12.84% ±16.71%
buffers/buffer-bytelength-buffer.js n=4000000 len=256                                                         0.93 %       ±9.48% ±12.61% ±16.42%
buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding='base64' type='four_bytes'                   -1.26 %       ±4.96%  ±6.61%  ±8.60%
buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding='base64' type='one_byte'                      4.25 %       ±6.17%  ±8.21% ±10.69%
buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding='base64' type='three_bytes'                   2.37 %       ±5.89%  ±7.84% ±10.20%
buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding='base64' type='two_bytes'                    -2.50 %       ±5.70%  ±7.59%  ±9.88%
buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding='utf8' type='four_bytes'               *     -3.07 %       ±2.67%  ±3.56%  ±4.66%
buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding='utf8' type='one_byte'               ***    -12.10 %       ±5.23%  ±6.96%  ±9.06%
buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding='utf8' type='three_bytes'                    -3.55 %       ±3.56%  ±4.74%  ±6.17%
buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding='utf8' type='two_bytes'                      -2.96 %       ±3.90%  ±5.19%  ±6.77%
buffers/buffer-bytelength-string.js n=4000000 repeat=16 encoding='base64' type='four_bytes'                   3.07 %       ±5.79%  ±7.70% ±10.03%
buffers/buffer-bytelength-string.js n=4000000 repeat=16 encoding='base64' type='one_byte'                    -4.61 %       ±5.28%  ±7.04%  ±9.19%
buffers/buffer-bytelength-string.js n=4000000 repeat=16 encoding='base64' type='three_bytes'                  1.22 %       ±6.51%  ±8.66% ±11.28%
buffers/buffer-bytelength-string.js n=4000000 repeat=16 encoding='base64' type='two_bytes'                    0.83 %       ±5.08%  ±6.76%  ±8.80%
buffers/buffer-bytelength-string.js n=4000000 repeat=16 encoding='utf8' type='four_bytes'                    -0.12 %       ±0.43%  ±0.57%  ±0.74%
buffers/buffer-bytelength-string.js n=4000000 repeat=16 encoding='utf8' type='one_byte'                       0.35 %       ±3.62%  ±4.82%  ±6.28%
buffers/buffer-bytelength-string.js n=4000000 repeat=16 encoding='utf8' type='three_bytes'                    0.01 %       ±0.89%  ±1.18%  ±1.54%
buffers/buffer-bytelength-string.js n=4000000 repeat=16 encoding='utf8' type='two_bytes'                      0.02 %       ±1.13%  ±1.50%  ±1.96%
buffers/buffer-bytelength-string.js n=4000000 repeat=2 encoding='base64' type='four_bytes'                    5.14 %       ±6.61%  ±8.79% ±11.44%
buffers/buffer-bytelength-string.js n=4000000 repeat=2 encoding='base64' type='one_byte'                      2.06 %       ±5.72%  ±7.61%  ±9.91%
buffers/buffer-bytelength-string.js n=4000000 repeat=2 encoding='base64' type='three_bytes'                  -0.40 %       ±5.92%  ±7.87% ±10.24%
buffers/buffer-bytelength-string.js n=4000000 repeat=2 encoding='base64' type='two_bytes'                     1.14 %       ±6.14%  ±8.17% ±10.64%
buffers/buffer-bytelength-string.js n=4000000 repeat=2 encoding='utf8' type='four_bytes'                     -1.54 %       ±1.67%  ±2.23%  ±2.90%
buffers/buffer-bytelength-string.js n=4000000 repeat=2 encoding='utf8' type='one_byte'                       -3.39 %       ±4.73%  ±6.29%  ±8.19%
buffers/buffer-bytelength-string.js n=4000000 repeat=2 encoding='utf8' type='three_bytes'                    -1.81 %       ±2.48%  ±3.30%  ±4.31%
buffers/buffer-bytelength-string.js n=4000000 repeat=2 encoding='utf8' type='two_bytes'                      -1.67 %       ±2.34%  ±3.12%  ±4.08%
buffers/buffer-bytelength-string.js n=4000000 repeat=256 encoding='base64' type='four_bytes'                 -0.49 %       ±5.12%  ±6.81%  ±8.87%
buffers/buffer-bytelength-string.js n=4000000 repeat=256 encoding='base64' type='one_byte'                    4.71 %       ±5.45%  ±7.26%  ±9.46%
buffers/buffer-bytelength-string.js n=4000000 repeat=256 encoding='base64' type='three_bytes'                 0.73 %       ±5.41%  ±7.20%  ±9.37%
buffers/buffer-bytelength-string.js n=4000000 repeat=256 encoding='base64' type='two_bytes'                   3.78 %       ±5.52%  ±7.35%  ±9.57%
buffers/buffer-bytelength-string.js n=4000000 repeat=256 encoding='utf8' type='four_bytes'                   -0.01 %       ±0.04%  ±0.05%  ±0.07%
buffers/buffer-bytelength-string.js n=4000000 repeat=256 encoding='utf8' type='one_byte'                      0.03 %       ±0.70%  ±0.94%  ±1.22%
buffers/buffer-bytelength-string.js n=4000000 repeat=256 encoding='utf8' type='three_bytes'                  -0.02 %       ±0.09%  ±0.12%  ±0.16%
buffers/buffer-bytelength-string.js n=4000000 repeat=256 encoding='utf8' type='two_bytes'              *     -0.15 %       ±0.14%  ±0.18%  ±0.24%

Be aware that when doing many comparisons the risk of a false-positive
result increases. In this case, there are 35 comparisons, you can thus
expect the following amount of false-positive results:
  1.75 false positives, when considering a   5% risk acceptance (*, **, ***),
  0.35 false positives, when considering a   1% risk acceptance (**, ***),
  0.04 false positives, when considering a 0.1% risk acceptance (***)
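The expected counts above are just the number of comparisons multiplied by the accepted risk level, as a quick check shows:

```cpp
#include <cassert>
#include <cmath>

// Expected number of false positives under the null hypothesis:
// comparisons * significance level (e.g. 35 * 0.05 = 1.75).
double expected_false_positives(int comparisons, double alpha) {
  return comparisons * alpha;
}
```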

The speedup will probably only be noticeable when the CPU supports the SIMD instructions that simdutf uses.

Is there a way to detect when it is supported? If so, we can add a fast path for when it is.

lemire commented Nov 22, 2023

@H4ad

Thanks.

The PR only changes one function, FastByteLengthUtf8. You offer results for buffer-bytelength-string.js. Maybe it is sensitive to the performance of FastByteLengthUtf8?

Let us profile buffer-bytelength-string.js with Node 20 (so without this PR).

  92.77%  node     node                       [.] v8::String::Utf8Length
   1.23%  node     node                       [.] Builtins_LoadGlobalIC
   0.76%  node     node                       [.] Builtins_StringEqual
   0.37%  node     node                       [.] Builtins_CallApiCallback
   0.32%  node     node                       [.] node::Buffer::(anonymous namespace)::SlowByteLengthUtf8
   0.20%  node     node                       [.] Builtins_CallFunction_ReceiverIsAny
   0.18%  node     node                       [.] v8::Isolate::GetCurrentContext
   0.16%  node     node                       [.] Builtins_LoadGlobalICTrampoline
   0.10%  node     node                       [.] Builtins_Call_ReceiverIsAny
   0.05%  node     node                       [.] v8::Context::GetNumberOfEmbedderDataFields
   0.05%  node     node                       [.] v8::internal::Deserializer<v8::internal::Isolate>::ReadSingleBytecodeData<v8::internal::SlotAccessorForHeapObject>
   0.04%  node     node                       [.] v8::internal::Deserializer<v8::internal::Isolate>::ReadObject
   0.02%  node     node                       [.] Builtins_LoadIC
   0.02%  node     node                       [.] node::Buffer::(anonymous namespace)::FastByteLengthUtf8

So FastByteLengthUtf8 accounts for 0.02% of the running time, before the PR. The benchmark is entirely bounded by the performance of v8::String::Utf8Length, which this PR does not change.

I submit to you that it is not a good benchmark to examine the effect of an optimization on FastByteLengthUtf8. I suspect that the changes in performance that these numbers suggest are not significant. I don't think that this PR can affect the performance of buffer-bytelength-string.js.

Is there a way to detect when it is supported? If so, we can add a fast path for when it is.

In the worst case, on very old CPUs (e.g., 15 years old), it will fall back to something equivalent to the current code. On reasonable CPUs, it will provide an accelerated kernel. It is easy to query which CPU type it detects, but I don't think we currently expose that in Node.

The simdutf library is already successfully accelerating Node so that's not a concern...

Decoding and Encoding becomes considerably faster than in Node.js 18. With the addition of simdutf for UTF-8 parsing, the observed benchmark results improved by 364% (an extremely impressive leap) when decoding in comparison to Node.js 16. (State of Node.js Performance 2023)

H4ad commented Nov 25, 2023

The PR only changes one function, FastByteLengthUtf8. You offer results for buffer-bytelength-string.js. Maybe it is sensitive to the performance of FastByteLengthUtf8?

This function directly affects the performance of that benchmark because of:

byteLength: byteLengthUtf8,

On my machine using Ryzen 9 5950X:

                                                                                              confidence improvement accuracy (*)   (**)  (***)
buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding='base64' type='four_bytes'                   -0.75 %       ±4.16% ±5.72% ±7.82%
buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding='base64' type='one_byte'                      1.63 %       ±3.73% ±5.12% ±7.00%
buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding='base64' type='three_bytes'                  -0.65 %       ±2.96% ±4.11% ±5.73%
buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding='base64' type='two_bytes'                     0.28 %       ±2.98% ±4.11% ±5.64%
buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding='utf8' type='four_bytes'                     -1.03 %       ±2.10% ±2.89% ±3.98%
buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding='utf8' type='one_byte'               ***    -19.64 %       ±1.83% ±2.51% ±3.44%
buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding='utf8' type='three_bytes'                    -1.63 %       ±2.60% ±3.56% ±4.86%
buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding='utf8' type='two_bytes'                       1.56 %       ±3.59% ±5.01% ±7.03%
buffers/buffer-bytelength-string.js n=4000000 repeat=16 encoding='base64' type='four_bytes'                  -0.02 %       ±4.57% ±6.35% ±8.85%
buffers/buffer-bytelength-string.js n=4000000 repeat=16 encoding='base64' type='one_byte'                     0.15 %       ±4.87% ±6.73% ±9.28%
buffers/buffer-bytelength-string.js n=4000000 repeat=16 encoding='base64' type='three_bytes'                 -0.89 %       ±3.84% ±5.31% ±7.34%
buffers/buffer-bytelength-string.js n=4000000 repeat=16 encoding='base64' type='two_bytes'                    0.33 %       ±3.98% ±5.47% ±7.46%
buffers/buffer-bytelength-string.js n=4000000 repeat=16 encoding='utf8' type='four_bytes'             **     -2.22 %       ±1.24% ±1.71% ±2.33%
buffers/buffer-bytelength-string.js n=4000000 repeat=16 encoding='utf8' type='one_byte'                      -0.49 %       ±4.20% ±5.83% ±8.11%
buffers/buffer-bytelength-string.js n=4000000 repeat=16 encoding='utf8' type='three_bytes'             *      1.73 %       ±1.53% ±2.11% ±2.90%
buffers/buffer-bytelength-string.js n=4000000 repeat=16 encoding='utf8' type='two_bytes'             ***     34.50 %       ±2.82% ±3.86% ±5.27%
buffers/buffer-bytelength-string.js n=4000000 repeat=2 encoding='base64' type='four_bytes'                   -1.87 %       ±5.12% ±7.01% ±9.56%
buffers/buffer-bytelength-string.js n=4000000 repeat=2 encoding='base64' type='one_byte'                      2.07 %       ±2.88% ±3.95% ±5.38%
buffers/buffer-bytelength-string.js n=4000000 repeat=2 encoding='base64' type='three_bytes'                   3.08 %       ±4.19% ±5.84% ±8.20%
buffers/buffer-bytelength-string.js n=4000000 repeat=2 encoding='base64' type='two_bytes'                     0.32 %       ±1.71% ±2.35% ±3.21%
buffers/buffer-bytelength-string.js n=4000000 repeat=2 encoding='utf8' type='four_bytes'              **     -2.60 %       ±1.87% ±2.56% ±3.49%
buffers/buffer-bytelength-string.js n=4000000 repeat=2 encoding='utf8' type='one_byte'                       -2.73 %       ±3.25% ±4.46% ±6.08%
buffers/buffer-bytelength-string.js n=4000000 repeat=2 encoding='utf8' type='three_bytes'            ***      6.06 %       ±2.62% ±3.60% ±4.91%
buffers/buffer-bytelength-string.js n=4000000 repeat=2 encoding='utf8' type='two_bytes'              ***      8.77 %       ±3.28% ±4.51% ±6.16%
buffers/buffer-bytelength-string.js n=4000000 repeat=256 encoding='base64' type='four_bytes'                 -1.40 %       ±2.95% ±4.13% ±5.79%
buffers/buffer-bytelength-string.js n=4000000 repeat=256 encoding='base64' type='one_byte'                    3.92 %       ±4.15% ±5.84% ±8.28%
buffers/buffer-bytelength-string.js n=4000000 repeat=256 encoding='base64' type='three_bytes'          *      3.38 %       ±2.82% ±3.95% ±5.58%
buffers/buffer-bytelength-string.js n=4000000 repeat=256 encoding='base64' type='two_bytes'                   0.40 %       ±2.67% ±3.67% ±5.03%
buffers/buffer-bytelength-string.js n=4000000 repeat=256 encoding='utf8' type='four_bytes'           ***     -2.80 %       ±0.79% ±1.10% ±1.53%
buffers/buffer-bytelength-string.js n=4000000 repeat=256 encoding='utf8' type='one_byte'                      0.18 %       ±1.07% ±1.52% ±2.17%
buffers/buffer-bytelength-string.js n=4000000 repeat=256 encoding='utf8' type='three_bytes'                   0.67 %       ±0.97% ±1.34% ±1.85%
buffers/buffer-bytelength-string.js n=4000000 repeat=256 encoding='utf8' type='two_bytes'            ***     48.13 %       ±0.92% ±1.27% ±1.74%

Be aware that when doing many comparisons the risk of a false-positive
result increases. In this case, there are 32 comparisons, you can thus
expect the following amount of false-positive results:
  1.60 false positives, when considering a   5% risk acceptance (*, **, ***),
  0.32 false positives, when considering a   1% risk acceptance (**, ***),
  0.03 false positives, when considering a 0.1% risk acceptance (***)

Strings with lengths lower than 256 will probably be slower than with the old version, but larger strings will be faster.

Well, I'm not against this change since people smarter than me approved this PR; I just want to give more context.

joyeecheung commented Nov 25, 2023

@lemire how did you generate the profile? The benchmark runs on multiple different data sets, while this PR only affects the one-byte and two-byte ones (which are Latin-1 but not ASCII). If you profile all the data sets, the result is going to be dominated by data sets that don't hit the path modified here, but the perf hit on the ASCII data set looks real. To select the regressed data set, you need to pass something like "n=4000000 repeat=1 encoding='utf8' type='one_byte'" to the benchmark runner.

lemire commented Nov 26, 2023

@joyeecheung I’ll add a performance analysis in the coming days.

:-)

lemire commented Nov 26, 2023

It should not lead to a regression but issues are always possible. I will investigate.

joyeecheung commented Nov 26, 2023

Also one thing to note: in the benchmark results datasets that are not affected by this change did not show significant performance differences (when the characters are 3-4 bytes in Unicode or when they are repeated i.e. no longer flat and therefore aren’t FastOneByteStrings). Only the datasets that are affected by this change (flat strings with ASCII or Latin-1 characters) showed significant performance differences. (It could also have something to do with the reduction of the fast calls in V8 but it’s difficult for me to see how changing only what’s inside the fast call could make a difference in the optimizations in V8).

lemire commented Nov 26, 2023

@joyeecheung

in the benchmark results datasets that are not affected by this change did not show significant performance differences

Are you sure?

The two_bytes cases are not Latin1. The only test here that is Latin1 is the ASCII test 'hello brendan!!!'. And we see the largest difference in the two_bytes case.

buffers/buffer-bytelength-string.js n=4000000 repeat=256 encoding='utf8' type='two_bytes'            ***     48.13 %

This is not related to this PR for two reasons: two_bytes is not testing a Latin1 string, and we have a repeat higher than one, which will not trigger this PR (because the strings are not flat).

I think that the only case that should be affected is buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding='utf8' type='one_byte'. And there is definitely something wrong there (I am going to stop the comment here, but I see something wrong in the profiling).

lemire commented Nov 26, 2023

So I have examined the issue a bit, and we have a case where computing the UTF-8 length of a string like hello brendan!!! is very cheap. Effectively, the function-call overhead can easily trump the actual cost of the function.

For example, if I just profile buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding="utf8" type="one_byte", I get the following (this is the current code without this PR):

  13.37%  node     node                  [.] Builtins_LoadGlobalIC
   8.68%  node     node                  [.] node::Buffer::(anonymous namespace)::FastByteLengthUtf8
   3.97%  node     node                  [.] Builtins_StringEqual
   3.67%  node     node                  [.] Builtins_CallFunction_ReceiverIsAny
   2.40%  node     node                  [.] Builtins_LoadGlobalICTrampoline
   1.93%  node     node                  [.] v8::internal::Deserializer<v8::internal::Isolate>::ReadObject
   1.66%  node     [JIT] tid 3373877     [.] 0x00007f2558009aa1
   1.55%  node     node                  [.] Builtins_Call_ReceiverIsAny

You can see here that FastByteLengthUtf8 is lost in the noise. So it is easy to see that if calling simdutf has a slightly higher function-call overhead, we might end up with a net loss (say 12%, as @H4ad reported).

Let us try something more significant...

  latin1: 'Un homme sage est supérieur à toutes les insultes qui peuvent lui être adressées, et la meilleure réponse est la patience et la modération.',

Profiling this string, I get the following...

  30.12%  node     node                  [.] node::Buffer::(anonymous namespace)::FastByteLengthUtf8
   6.21%  node     node                  [.] Builtins_LoadGlobalIC
   5.68%  node     node                  [.] Builtins_StringEqual
   1.77%  node     node                  [.] Builtins_CallFunction_ReceiverIsAny
   1.52%  node     node                  [.] Builtins_LoadGlobalICTrampoline

That's more reasonable. Even so, we see that the whole program cannot be sped up by more than 30% when optimizing FastByteLengthUtf8 (i.e., imagine we made it free). The code is not instrumented so it is a bit difficult to do a more serious analysis, but it is a ballpark upper bound.
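The 30% ceiling is just Amdahl's law: if a fraction p of the running time is spent in the function being optimized and that function is sped up by a factor s, the whole program speeds up by at most 1 / ((1 - p) + p/s). A quick sketch:

```cpp
#include <cassert>
#include <cmath>

// Amdahl's law: overall speedup when a fraction p of the running time
// is accelerated by a factor s.
double overall_speedup(double p, double s) {
  return 1.0 / ((1.0 - p) + p / s);
}
```

With p = 0.30, even an effectively infinite speedup caps the program at 1/0.7, about 1.43x; with p = 0.0002 (the 0.02% from the earlier profile), no change is measurable at all.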

Anyhow, now we have a chance for this PR to be beneficial.

And, sure enough, I get that this PR is faster on this less trivial string:

PR:

buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding="utf8" type="latin1": 13,629,038.864301458

Main:

buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding="utf8" type="latin1": 16,449,335.577614883

So how do we deal with this?

I suggest doing something akin to what @H4ad proposed: filter the queries according to length. Short strings go through the old code, and long strings go through simdutf.
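A minimal sketch of that length-based dispatch follows. The 128-byte cutoff is a hypothetical value chosen for illustration (the real threshold would be tuned by benchmarking), and the scalar stand-in below plays the role of the call that would, in the actual code, be simdutf::utf8_length_from_latin1:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Stand-in for simdutf::utf8_length_from_latin1; the real call
// dispatches to a SIMD kernel chosen at runtime for the host CPU.
static size_t simd_utf8_length_from_latin1(const uint8_t* data, size_t n) {
  size_t length = n;
  for (size_t i = 0; i < n; i++) length += data[i] >> 7;
  return length;
}

// Hypothetical cutoff: below it, the call overhead of the SIMD
// routine outweighs its benefit on short inputs.
constexpr size_t kShortStringCutoff = 128;

size_t byte_length_utf8(const uint8_t* data, size_t n) {
  if (n < kShortStringCutoff) {
    // Short strings: a plain scalar loop, akin to the pre-PR code.
    size_t length = n;
    for (size_t i = 0; i < n; i++) length += data[i] >> 7;
    return length;
  }
  // Long strings: hand off to the vectorized implementation.
  return simd_utf8_length_from_latin1(data, n);
}
```

Both paths compute the same answer; the split only trades fixed call overhead against per-byte throughput.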

PR:

buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding="utf8" type="one_byte": 21,786,932.147987694
buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding="utf8" type="latin1": 16,358,409.145528518

Main:

buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding="utf8" type="one_byte": 21,426,180.9172187
buffers/buffer-bytelength-string.js n=4000000 repeat=1 encoding="utf8" type="latin1": 12,812,998.407619778

(I ran the benchmark just once, but it is a quiet machine with no more than a 2% variance.)

In the last commit, I added an extra test to the benchmark corresponding to the latin1 string in question.

My thanks to @H4ad for the benchmarks.

@H4ad H4ad left a comment

Nice work, thanks for the PR!

@joyeecheung

The two_bytes cases are not Latin1. The only test here that is Latin1 is the ASCII test 'hello brendan!!!'. And we see the largest difference in the two_bytes case.

hmm indeed, I looked at the dataset again and they are not Latin-1. I think the fix should be enough to avoid the extra overhead for short strings; here's another benchmark CI to verify it. (I also noticed that in the CI results (#50840 (comment)), two_bytes was not affected, but @H4ad's local results were; it could be machine-dependent.)

https://ci.nodejs.org/view/Node.js%20benchmark/job/benchmark-node-micro-benchmarks/1474/

@joyeecheung

The Builtins_LoadGlobalIC part in the profile also looks interesting. I am fairly certain that it comes from the Buffer global lookup, so const Buffer = globalThis.Buffer might remove that bit from the profile.

lemire commented Nov 27, 2023

@joyeecheung Perhaps we could change the benchmark further in another PR? I am happy to open a second PR if this one gets merged.

@joyeecheung

Yes, #50840 (comment) is more of a "thinking out loud" comment.

@lemire lemire requested a review from anonrig December 4, 2023 20:15
@anonrig anonrig added the request-ci Add this label to start a Jenkins CI on a PR. label Dec 4, 2023
@github-actions github-actions bot removed the request-ci Add this label to start a Jenkins CI on a PR. label Dec 4, 2023
anonrig commented Dec 4, 2023

@lemire Can you fix the linting errors?

lemire commented Dec 4, 2023

@anonrig

Can you fix the linting errors?

Sorry about that. I forgot to check after modifying the JavaScript code.

Done.

@H4ad H4ad added author ready PRs that have at least one approval, no pending requests for changes, and a CI started. request-ci Add this label to start a Jenkins CI on a PR. labels Dec 4, 2023
@github-actions github-actions bot removed the request-ci Add this label to start a Jenkins CI on a PR. label Dec 4, 2023
@H4ad H4ad added commit-queue Add this label to land a pull request using GitHub Actions. commit-queue-squash Add this label to instruct the Commit Queue to squash all the PR commits into the first one. labels Dec 19, 2023
@nodejs-github-bot nodejs-github-bot removed the commit-queue Add this label to land a pull request using GitHub Actions. label Dec 19, 2023
@nodejs-github-bot nodejs-github-bot merged commit 891bd5b into nodejs:main Dec 19, 2023
59 checks passed
@nodejs-github-bot

Landed in 891bd5b

RafaelGSS pushed a commit that referenced this pull request Jan 2, 2024
PR-URL: #50840
Reviewed-By: Yagiz Nizipli <[email protected]>
Reviewed-By: Joyee Cheung <[email protected]>
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Vinícius Lourenço Claro Cardoso <[email protected]>
@RafaelGSS RafaelGSS mentioned this pull request Jan 2, 2024
richardlau pushed a commit that referenced this pull request Mar 25, 2024
PR-URL: #50840
Reviewed-By: Yagiz Nizipli <[email protected]>
Reviewed-By: Joyee Cheung <[email protected]>
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Vinícius Lourenço Claro Cardoso <[email protected]>
@richardlau richardlau mentioned this pull request Mar 25, 2024
Labels
author ready PRs that have at least one approval, no pending requests for changes, and a CI started. buffer Issues and PRs related to the buffer subsystem. c++ Issues and PRs that require attention from people who are familiar with C++. commit-queue-squash Add this label to instruct the Commit Queue to squash all the PR commits into the first one. needs-ci PRs that need a full CI run.