
Optimize editor for input latency #161622

Closed
Tyriar opened this issue Sep 23, 2022 · 21 comments
Labels: editor-input (Editor text input), perf, plan-item (VS Code - planned item for upcoming)
Tyriar commented Sep 23, 2022

I recently did an exploration into input latency and found it can get pretty bad on slower machines. A lot of the problem is related to how our synchronous event emitters/listeners work, performing their work after the keypress but before the animation frame.

image

My proposal to improve this is:

  1. Review the most time-consuming event listeners. If they are clearly safe to run later without introducing race conditions, move them to an asynchronous event listener that runs after the animation frame, for example UI updates like the activity bar badge and the tab indicator when an editor becomes dirty
  2. Review all events and their listeners, and defer as much as possible to after the animation frame. One way of doing this is for each important event to expose both a sync and an async variant. It's unclear exactly how much we can move, but when we do this we need to be extremely careful not to introduce text buffer-related race conditions, as there are some assumptions that we may break by doing this
  3. Set up development tools and/or telemetry to easily measure latency. I created https://github.com/microsoft/vscode/tree/tyriar/measure_latency to demonstrate a technique for approximating input latency
  4. Come up with a plan for how we can prevent regressions in this critical path code
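Item 2 could look something like the sketch below: an event source that exposes both a synchronous event for critical-path listeners and a deferred variant whose listeners run in a later task. This is a hypothetical illustration, not VS Code's actual Emitter API.

```typescript
type Listener<T> = (e: T) => void;

// Hypothetical sketch (not VS Code's Emitter): one event source with a
// synchronous event for critical-path work and a deferred variant whose
// listeners run in a following task, after the frame can render.
class DualEmitter<T> {
  private readonly syncListeners: Listener<T>[] = [];
  private readonly deferredListeners: Listener<T>[] = [];

  onSync(listener: Listener<T>): void {
    this.syncListeners.push(listener);
  }

  onDeferred(listener: Listener<T>): void {
    this.deferredListeners.push(listener);
  }

  fire(e: T): void {
    // Critical-path work (e.g. applying the edit) runs immediately.
    for (const listener of this.syncListeners) {
      listener(e);
    }
    // Everything else runs in a following task, after the keypress task
    // (and, in a browser, the animation frame) has completed.
    if (this.deferredListeners.length > 0) {
      setTimeout(() => {
        for (const listener of this.deferredListeners) {
          listener(e);
        }
      }, 0);
    }
  }
}
```

The race-condition caveat in item 2 applies directly: a deferred listener sees the model state at flush time, not at fire time, which is exactly the text-buffer assumption that needs auditing.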

Tentatively assigning to October
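The approximation technique from item 3 might be sketched roughly as below; the linked measure_latency branch may differ in detail, this only shows the idea of timestamping the keydown and resolving after the next frame plus one task.

```typescript
// Rough sketch of approximating input latency: timestamp the keydown, then
// wait for the next animation frame plus one task, which approximates when
// the typed character actually hits the screen. The real measure_latency
// branch may differ; this only demonstrates the technique.
function measureInputLatency(done: (ms: number) => void): void {
  const start = Date.now(); // performance.now() in a browser for sub-ms precision

  // requestAnimationFrame only exists in the renderer; fall back to a
  // ~1 frame timeout so this sketch also runs outside a browser.
  const raf = (globalThis as any).requestAnimationFrame as
    | ((cb: () => void) => void)
    | undefined;
  const nextFrame = typeof raf === 'function'
    ? (cb: () => void) => raf(() => cb())
    : (cb: () => void) => setTimeout(cb, 16);

  nextFrame(() => {
    // One extra task approximates the frame being composited.
    setTimeout(() => done(Date.now() - start), 0);
  });
}
```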

@Tyriar Tyriar added perf editor-input Editor text input labels Sep 23, 2022
@Tyriar Tyriar added this to the October 2022 milestone Sep 23, 2022
bpasero commented Sep 24, 2022

The core listener for text editors to react to content changes drives a ton of things on top, such as:

  • dirty state indication throughout the UI (only when dirty state changes, not on every content change)
  • editor auto save
  • editor backups
  • etc (anyone using onDidChangeDirty and friends)

I wonder how an async emitter would help here: yes, it would take away lag from the first character typed when the editor transitions into being dirty, but eventually we have to pay the price, so would the lag just happen later? Or is the idea to literally delay the event to idle time?

Tyriar commented Oct 11, 2022

@bpasero oh, I missed the question there. The idea is to delay it until shortly after via setTimeout, so the text change appears as soon as possible and the dirty indicator (as an example) appears 1 or 2 frames later, i.e.:

Current:

  • keypress task:
    • handle dirty change
    • schedule various things (backup, auto save, etc.)
    • other less critical updates (bracket pair parsing?)
    • render

Desired:

  • keypress task:
    • handle dirty change
    • render
  • following task:
    • schedule various things (backup, auto save, etc.)
    • other less critical updates (bracket pair parsing?)
    • render (whatever changed in this task)
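The desired flow above can be sketched as a keypress handler that only runs render-critical work synchronously and pushes everything else into a following task. The task names here are illustrative, not VS Code's actual functions.

```typescript
// Illustrative sketch of the Current vs Desired split: the keypress task
// runs only what must be visible in the next frame, and everything else
// is moved into a following task via setTimeout.
function onKeypress(
  critical: Array<() => void>,
  deferrable: Array<() => void>
): void {
  // Keypress task: e.g. handle dirty change, render.
  for (const task of critical) {
    task();
  }
  // Following task: e.g. schedule backup, auto save, bracket pair parsing.
  setTimeout(() => {
    for (const task of deferrable) {
      task();
    }
  }, 0);
}
```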

There's some nuance here in what should be handled in the keypress task. For example, currently the suggest widget is moved and updated in the keypress task. What we probably want is for the suggest widget to move in the keypress task but defer updating it, as that can be very expensive. Things like this we'll need to experiment with, to see whether splitting them up results in a worse UX and they should stay on the critical path.

We may be able to optimize the list's splice method as well to help here. I haven't looked at the implementation yet, but it seems to do a lot of work, and it also affects the search performance/UI responsiveness issue I saw last week.

Tyriar commented Oct 11, 2022

I realized one of my laptops has a CPU similar to the average user's (though a better GPU). Here's a screenshot of typing `this` into TerminalInstance's constructor, which validates my assumption that latency is pretty terrible on lower-end hardware (up to 100ms in this case):

image

This is with an i7-8750H @ 2.2 GHz; I didn't turn off turbo boost, which can push it up to 4.10 GHz. I'm not entirely sure how that works, but I think I can disable it in the BIOS if needed.

Though I haven't tested thoroughly on the laptop, I'm quite surprised that it seems to perform much worse than a 4x CPU throttle on my primary machine (i7-12700KF @ 3.61 GHz, boost 5.00 GHz). I was expecting it to be the other way around.

Tyriar commented Oct 13, 2022

I drilled into a profile on my MacBook to understand some parts a little more. Here are the details.

TL;DR: An enormous amount of time seems to be spent just scheduling things; Event.defer will probably be an easy solution for those.

image

Latency

  • Keydown to character on screen ~30.52ms
  • Keydown to suggest on screen ~106.6ms

High level parts

  • Critical path 30.52ms
    • Key down 1.85ms (6%)
    • Key press 15.43ms (51%)
    • Render animation frame 6.91ms (23%)
    • Render to composite 4.56ms (15%)
  • Suggest
    • Waiting for suggestions from extension host 37.1ms
    • Suggest widget setup/adding 30.87ms
    • Suggest widget rendering 8.37ms

Key press

15.43ms (51% of critical path)

Most expensive bottom up parts

  • setTimeout 3.4ms (22%)
    • Async stacks are definitely disabled
    • A lot of this is due to the sheer number of timeouts; a shared timeout like in Event.defer will improve this
      image
  • Recalculate style and layout 1.8ms (12%)
    • This is done because of the text input event from the keypress; nothing we can do if we're using a `<textarea>`
  • clearTimeout 0.7ms (5%)
    • We may be able to avoid clearing some?
  • requestAnimationFrame 0.2ms (1.5%)
  • requestIdleCallback 0.2ms (1.5%)
  • setInterval 0.2ms (1.5%)
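Consolidating the many individual setTimeout calls above into one shared timer, in the spirit of Event.defer, could look like this hypothetical sketch (not VS Code's implementation):

```typescript
// Hypothetical sketch of a shared scheduler: many callers queue work, but
// only one setTimeout is armed per batch, cutting scheduling overhead.
class SharedScheduler {
  private pending: Array<() => void> = [];
  private handle: ReturnType<typeof setTimeout> | undefined;

  schedule(work: () => void): void {
    this.pending.push(work);
    if (this.handle === undefined) {
      // The first request in a batch arms the single shared timer.
      this.handle = setTimeout(() => this.flush(), 0);
    }
  }

  private flush(): void {
    this.handle = undefined;
    const batch = this.pending;
    this.pending = [];
    for (const work of batch) {
      work(); // all batched requests run in one task
    }
  }
}
```

With this shape, the clearTimeout cost above also mostly disappears, since individual callers no longer own a timer to cancel.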

What is it doing?

Legend:

💡 We can probably improve or defer this fairly easily
❓ We can maybe improve this, needs more investigation/validation

  • _type @ codeEditorWidget.ts 11.54ms
    • _executeCursorEdit 10.57ms (69%)
      • _executeEdit 2.99ms
        • pushEditOperation @ editStack.ts 0.96ms
          • 💡 parseDocumentFromTextBuffer @ bracketPairsTree.ts 0.12ms (can be merged with below occurrence)
        • endDeferredEmit
          • onDidChangeContentOrInjectedText @ viewModelImpl.ts 0.39ms
          • modelService scheduling 0.16ms
            • 💡 Consolidate scheduling?
          • scheduleBackup 0.5ms
            • 💡 Cancels and restarts a timeout
          • triggerDiff @ dirtyDiffDecorator.ts 0.13ms
            • 💡 Consolidate scheduling?
          • Schedule syncing with ext host 0.5ms
            • 💡 Stringify and multiple setTimeouts
            • This is already async, perfect for Event.defer
        • setSelections @ cursorCollection.ts 0.11ms
      • endEmitViewEvents
        • _emitMany @ viewModelEventDispatcher.ts
          • _scheduleRender @ view.ts 0.24ms
            • ❓ Mainly just requests an animation frame
          • writeScreenReaderContent @ textAreaInput.ts 1.41ms
            • This is a result of setting value on the textarea; it triggers a recalc style, layout and scroll, which is a little weird?
            • 💡 Defer to after render when accessibility mode is off?
          • onLinesChanged @ viewLayer.ts 0.13ms
          • onCursorStateChanged 0.62ms
            • ❓ Updates the cursor blinking; on input the interval gets reset
        • _emitOutgoingEvents @ viewModelEventDispatcher.ts
          • 💡 Bracket pair matching scheduling 0.11ms
          • computeLinks @ links.ts 0.35ms
            • _updateScores @ languageFeatureRegistry.ts 0.11ms
              • 💡 Schedules language detection?
            • 💡 More unknown scheduling 0.24ms
          • Schedule tokenize viewport 0.11ms
            • ❓ This is done even though tokenizeIfCheap runs?
          • 💡 Schedule multicursor selection highlighter 0.13ms
          • 💡 Schedule updateInlineValuesScheduler @ debugEditorContribution.ts 0.38ms
          • notifyNavigation @ historyService.ts 0.13ms
            • This is keeping the history stack up to date, probably needed
          • triggerFoldingModelChanged @ folding.ts 0.24ms
            • 💡 Schedules folding changes
            • 💡 This creates a new function on each trigger? Extra time is probably spent in V8 compiling/optimizing
          • beginComputeCommentingRanges @ commentsEditorContribution.ts 0.11ms
            • 💡 Just schedules
          • onDidChangeModelContent @ lightBulbWidget.ts 0.12ms
            • 💡 This just gets the editor model and hides the light bulb widget if needed, good candidate for deferral
          • unicodeHighlighter.ts 0.13ms
            • 💡 Just schedules
          • inlayHintsController.ts 0.38ms
            • 💡 Just schedules and debounces for 1250ms
          • documentSymbolsOutline.ts 0.11ms
            • 💡 Just schedules/debounces
          • _onModelChange @ codeLensController.ts 2.87ms
            • 💡 This is a lot of time devoted to code lenses, could all these be deferred?
            • changeDecorations @ codeEditorWidget.ts 2.74ms
              • Creating a changeAccessor object ~0.15ms
                – 💡 Could be optimized
            • endDeferredEmit 2.61ms
              • languageDetection.contribution.ts 0.37ms
                – 💡 Just scheduling
              • inlineCompletionsModel.ts 0.25ms
                – Update ranges, updateFilteredInlineCompletions
                – 💡 Schedule auto update
              • bracketMatching.ts 0.40ms
                – 💡 Just scheduling for 50ms
              • suggestWidgetPreviewModel.ts 0.61ms
                – ❓ Updates and schedules
              • onSelectionChange @ editorStatus.ts 0.13ms
                – ❓ Update
              • _onCursorChange @ codeActionModel.ts 0.25ms
                – 💡 Just scheduling
              • inlineCompletionsModel.ts 0.13ms
                – ❓ Updating
                – Is this doing duplicate work as above? Can this be debounced to a microtask?
              • onDidChangeCursorPosition @ documentSymbolsOutline.ts 0.11ms
                – 💡 Just scheduling
              • _onCursorChange @ suggestModel.ts 0.26ms
                – ❓ Schedules based on some state and the selection
              • mainThreadEditor.ts 0.11ms
                – ❓ Generates a delta of the editor and fires properties change (to sync with ext host?)
                – ❓ _readVisibleRangesFromCodeEditor which calls codeEditor.getVisibleRanges() is the most expensive thing here
            • Schedule asking for all references 0.13ms
              • Not sure we could defer this but we could probably optimize
    • _type @ codeEditorWidget.ts 0.97ms
      • onDidType @ parameterHintsModel.ts 0.1ms
      • handleUserInput @ inlineCompletionsModel.ts 0.14ms
        • Just hides or schedules
      • checkTriggerCharacter @ suggestModel.ts 0.73ms
        • tokenizeIfCheap 0.49ms
          • parseDocumentFromTextBuffer @ bracketPairsTree.ts 0.13ms (! This is much worse on my Windows machine?)
        • provideSuggestionItems
  • Microtasks 0.31ms
    • ? @ defaultWorkerFactory.ts 0.15ms
      • Not sure what triggers this

Tyriar commented Oct 14, 2022

The methodology of using the "Performance" tab was flawed: it exaggerates timeouts (even when async stacks are disabled) and its overhead slows things down by around 4x. A profile in the JavaScript Profiler tab is much more accurate, and you can also perform the operation many times and then look at the slow parts in terms of the percentage of time they take up. Looking at percentages instead of actual milliseconds is much more reliable for getting a good sample, as it consolidates all calls.

For example, I recorded typing a bunch of characters; in the top-down view you can now see a more accurate picture of the function. This fn is the keypress handler, and it took up 17.85% of the CPU time:

Screen Shot 2022-10-14 at 8 38 51 am

Now focusing on that function shows the breakdown of the call stack as a percentage of the total function time. One of the functions suspected of contributing to the slowdown is the bracket pair parsing, which you can see takes up 1% of the time after cheap tokenization:

Screen Shot 2022-10-14 at 8 40 50 am

And 8.39% of the time after the model content changes:

Screen Shot 2022-10-14 at 8 43 14 am

Looking into this some more now.

Tyriar commented Oct 14, 2022

Here's a breakdown of the CPU profile when typing a bunch of characters (~50, random); the suggest widget didn't show most of the time. This tree isn't complete: a lot is omitted here to remove noise versus just looking at the profile in devtools. The children of a node are not ordered and are not necessarily directly below the parent or siblings of other children.

❌ = Too risky to touch

This is a living document for now:

  • 100%: fn - keypress handler
    • 94.53%: _type
      • 11.55%: onWillType
        • 10.75%: checkTriggerCharacter - suggestModel.ts
          • 0.30%: provideSuggestionItems - suggest.ts
          • 0.68%: getWordAtPosition - textModel.ts
          • 9.30%: tokenizeIfCheap - tokenizationTextModelPart.ts
      • 82.76%: type
        • 52.68%: endEmitViewEvents - viewModelEventDispatcher.ts
          • 44.59%: emitOutgoingEvents
            • onDidChangeModelContent
              • 0.87%: editorStatus.ts
              • 27.16%: suggestModel.ts - refilters items
                • This is a lot of time devoted to suggest when it's barely shown
                • 21.12% showSuggestions - suggestWidget.ts
                  • Drill into this
                • 2.51%: cancel - suggestModel.ts
                  • 2.51% to cancel?
                • 1.33%: shouldAutoTrigger - suggestModel.ts
              • 0.23%: inlayHintsController.ts
              • 0.27%: clickLinkGesture.ts
              • 0.57%: links.ts
              • 0.34%: unicodeHighlighter.ts
              • 0.30%: commentsEditorContribution.ts
              • 1.67%: lightBulbWidget.ts
              • 1.56%: textEditor.ts
              • 0.27%: viewPortSemanticTokens.ts
              • 0.57%: folding.ts - Reschedules and makes minor changes to hiddenRangeModel
              • 0.27%: documentSymbolsOutline.ts
                • Does sharing a timeout help?
              • 8.58%: codelensController.ts
                • 8.36%: changeDecorations - textModel.ts
                  • 0.38%: emitMany - viewModelEventDispatcher.ts
                  • 7.44%: _emitOutgoingEvents - viewModelEventDispatcher.ts
                    • 0.23%: multicursor.ts
                    • 0.84%: suggestWidgetPreviewModel.ts
                    • 0.46%: inlineCompletionsModel.ts
                    • 0.61%: textEditor.ts
                    • 0.57%: historyService.ts - records navigation
                    • This happens multiple times
                    • 0.27%: suggestModel.ts
                    • 1.41%: editorStatus.ts - Updating the status bar lines/cols
                    • Defer editorStatus model content and cursor position updates #163836
                    • 0.27%: languageDetectionContribution.ts
                    • 0.15%: bracketMatching.ts
                    • 0.57%: inlineCompletionsModel.ts
                    • 1.71%: mainThreadEditor.ts
                    • ❌ Syncing with ext host? Risky to change
                    • 0.19%: codeActionModel.ts
              • 0.49%: testingDecorations.ts
          • 8.01%: emitMany
            • 6.96%: handleEvents - viewEventHandler.ts
            • 0.72%: handleEvents - view.ts
              • 0.46%: _scheduleRender - view.ts
                • Investigate
        • 30.08%: type
          • 25.29%: _executeEditOperation
            • 23.59%: pushEditOperations
              • 14.28%: _pushEditOperations
              • 8.51%: endDeferredEmit
                • 1.60%: onDidChangeContentOrInjectedText - viewModelImpl.ts
                • 6.42%: onDidChangeContent - textModel.ts
                  • 3.04%: onDidChangeContent - mainThreadDocuments.ts - Syncing model changes with ext host
                  • 0.42%: onDidChangeContent - dirtydiffDecorator.ts
                  • 2.70%: onDidChangeContent - textFileEditorModel.ts

Tyriar added a commit that referenced this issue Oct 14, 2022
Before, TextAreaHandler.onCursorStateChanged was taking approximately 4-16%
of the total keypress task's runtime. After this change it becomes 1-2ms.

Part of #161622
Tyriar added a commit that referenced this issue Oct 14, 2022
Before, TextAreaHandler.onCursorStateChanged was taking approximately 4-16%
of the total keypress task's runtime. After this change it becomes < 0.5ms.

Part of #161622
Tyriar added a commit that referenced this issue Oct 14, 2022
Previously, dealing with working copies was taking about 1.5% of the total
runtime of the keypress function; this reduces it to around 0.2% by
deferring all handling of the event until the text has rendered. Since the
onDidContentChange event is not used, this should be a low-risk change.

Part of #161622
Tyriar commented Oct 17, 2022

Profile when typing `.` after `this` and backspace repeatedly, to measure the high-level impact of the suggest widget.

  • 96.19% _type - codeEditorWidget.ts
    • 5.32% checkTriggerCharacter suggestModel.ts
    • 72.37% _insertSuggestion - suggestWidget.ts
      • 63.53% insert - snippetController2.ts
        • Snippet leading to show suggest?
        • 61.15% insert - snippetSession.ts
          • 58.85% executeEdits - codeEditorWidget.ts
        • What's being executed? Is this just in an unexpected spot in the stack trace?
        • 48.99% endEmitViewEvents
          • 43.74% _refilterCompletionItems - suggestModel.ts
            • 43.31%: _onNewContext (firing event) - suggestModel.ts
              • 42.73% showSuggestions - suggestWidget.ts
                • 37.12% _layout - suggestWidget.ts
                  • Double rendering
                  • Run suggestWidget.showSuggestions and showDetails in an animation frame #163947
                  • 30.00% getScrolledVisiblePosition codeEditorWidget.ts
                    • No scrolling was happening in either the editor or the suggest widget
                  • 29.71% _renderNow - view.ts
                  • 5.68% render - minimap.ts
                    • Minimap shouldn't block text rendering?
                  • 4.60% render - contentWidgets.ts
                    • 4.6% afterRender - suggestWidget.ts
                    • Investigate
                  • 2.01% prepareRender - viewOverlays.ts
                    • 1.44% prepareRender - indentGuides.ts
                    • Investigate
                  • 7.34% prepareRender - textAreaHandler.ts
                    • 7.34% _readPixelOffset - viewLine.ts
                    • DOM access?
                  • 5.54% renderText - viewLines.ts
                    • Double rendering of text buffer!
                  • 5.61% getDomNodePagePosition - dom.ts
                    • DOM access?
            • 4.96% splice - listWidget.ts
          • 9.50% (anonymous) - viewModelImpl.ts
            • 8.63% pushEditOperations - TextModel.ts
      • 1.08% memorize - suggestMemory.ts
      • 6.12% cancel - suggestModel.ts
        • 6.04% hideWidget - suggestWidget.ts
          • Hiding is 6% when it shouldn't be getting hidden? It does hide it briefly before showing new results from exthost? Most of the time is spent in list splice
      • 3.02% splice - listWidget.ts
    • 17.77% type - viewModelImpl.ts
      • Similar to the profile above
      • 4.46% typeWithInterceptors - cursorTypeOperations.ts
        • 4.17% _runAutoIndentType - cursorTypeOperations.ts
      • 6.04%: _executeEditOperation

Tyriar commented Oct 17, 2022

The big revelation above is that there's a full render in both the keypress task and the following animation frame, triggered by the suggest widget 🤯:

image
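A standard way to avoid this kind of double render is to coalesce render requests so at most one render runs per frame. A minimal hypothetical sketch, with setTimeout standing in for requestAnimationFrame:

```typescript
// Minimal sketch of coalescing render requests so the view renders at most
// once per frame, instead of once in the keypress task and again in the
// animation frame. setTimeout stands in for requestAnimationFrame here.
class RenderScheduler {
  private scheduled = false;

  constructor(private readonly renderNow: () => void) {}

  scheduleRender(): void {
    if (this.scheduled) {
      return; // a render is already pending for this frame
    }
    this.scheduled = true;
    setTimeout(() => {
      this.scheduled = false;
      this.renderNow();
    }, 0);
  }
}
```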

Tyriar added a commit that referenced this issue Oct 18, 2022
This prevents double rendering, both in the keypress task and the following
animation frame

Part of #161622
Tyriar commented Oct 18, 2022

Here's a breakdown of the list splice calls made when filtering suggest. This is a general area I've noticed is a little slow in other places too, like search. This particular profile ("c", backspace, repeated to refilter on both) showed ListWidget.splice taking up 11.28% of the total keypress task.

image

Tyriar commented Oct 18, 2022

After digging through how List.splice and in particular HighlightedLabel.render work, nothing in the list's splice sticks out as an obvious way to improve performance, unfortunately.

Tyriar commented Oct 18, 2022

After pulling all the proposed changes into a single branch (tyriar/all_latency_changes), this is what pressing `.` after `this` looks like. The red will mostly go away if we implement the GPU-based renderer (#162445); the blue part is minimap rendering, which could be significantly sped up by moving to webgl.

image

Current main:


image

After the completion response comes back from the exthost this happens:

image

Current main:


image

I couldn't find many more wins here; most of it is bound by list splice and by fetching the position of elements from the DOM (maybe something could improve there?). There are wins from moving to webgl as well, but it would probably be more effort than it's worth to maintain.


Here is a look at re-filtering the already visible suggest widget:

image

Current main:


image

Tyriar commented Oct 21, 2022

Just discovered this issue: #27378

Tyriar commented Oct 21, 2022

Using typometer, here are the results for xterm.js, with a few changes to get it closer to what we could expect as the upper bound of adopting the webgl renderer:

image

Those are some pretty nice numbers 🙂. Of course, the model updates/events in bare xterm.js aren't nearly as costly as in vscode's editor. The viewport is also quite small in xterm.js' demo, which affects the time it takes to fill the webgl buffers.

Interestingly, the 2 runs that are very low, with 8.2-8.3ms averages, are from when I was taking a devtools performance profile. My theory is that it's related to performance vs efficiency cores in my 12th Gen CPU. They are also far more consistent during the performance profile:

image

Compared to the other 3:

image

Another thing I noticed is the big spikes that sometimes occur; this is actually why I did the performance profile, to have a look at them. Here's an example (I edited the screenshot as the iterations part was giant and couldn't be collapsed):

image

I can't explain this, as the work appears to be done right at the start, like all the other frames. Perhaps it's another application eating more CPU time? Or Chromium doing something and deciding to drop those frames? I tried setting process affinity/priority, which didn't change the results. Also, the processes are not marked as "efficiency mode" in Task Manager, though regardless of what Task Manager says, I don't think that maps to efficiency cores anyway.

Tyriar commented Oct 21, 2022

OK, I'm pretty sure it is efficiency vs performance cores. I disabled all efficiency cores in the BIOS and I get the good numbers of about 8ms average. (When I tried again later, though, I got the different set of numbers even with efficiency cores disabled.)

max-sixty commented:
@Tyriar I don't want to add noise to this issue, but I'm excited to see you working on this. I've kept an eye on latency and thought I could give some comparisons for context:

I found VS Code quite good, with a lower min/max/mean than emacs & vim, at least on a high-spec MacBook Pro. Specifically, from a couple of years ago, typometer got 9.7/27.1/16.8 for min/max/mean, relative to 48/77/60 for vim and 29/48/38 for emacs.

I just reran it a couple of times now and got higher max & means in VS Code 1.72.2: 9.7/66.2/23.1 & 7.4/49.6/21.9. So maybe something has got slightly worse, or maybe the setups are different. Min is still impressively low!

To the extent you can save a few ms, that's appreciated. I think I can notice a difference when there's more immediate feedback; even small amounts of latency or jitter feel worse (maybe that needs a blind test though :) )

Tyriar commented Oct 21, 2022

@max-sixty good to hear. I've been trying to squeeze out ms here and there for the past month or so; a lot of the changes are still in review though 🙂

Tyriar commented Oct 21, 2022

For the unexplained xterm.js numbers, it appears to be related to extensions; incognito mode fixes the problem. Perhaps it waits on some extension APIs when not profiling?

Tyriar commented Oct 21, 2022

Looking at the spikes in the profiles again I think I can explain it now:

image

There is a task that does Compute Intersections where this happens:

image

These are for intersection observers, which xterm.js does indeed use: https://developer.chrome.com/en/blog/new-in-devtools-92/#computed-intersections

There are other places where Compute Intersections happens that don't have a spike, but all spikes contain a Compute Intersections.

I disabled the textarea syncing to avoid the intersections needing to be recomputed, and the results seem better. There are still occasional spikes:

image

Zooming into this one, it's happening because the requestAnimationFrame doesn't fire for a long time; normally these 2 tasks are very close together:

image

It's not clear why, but if Chromium doesn't fire it, there isn't much we can do about it. It does appear to be a longer interaction, but I'm guessing that's because something is blocking and it delays processing the keyup event:

image

I also tried a JavaScript profile and it didn't reveal anything for the spikes.

Tyriar commented Oct 21, 2022

The impact of disabling v-sync and unlocking the frame limit is that it's slightly slower on average but much more variable, as you would expect. It doesn't lower the minimum though.

image

Tyriar commented Oct 21, 2022

Here are some results of measuring latency in VS Code and other similar projects:

image

Some thoughts:

  • It's not a totally fair comparison: the latency tool is a bit finicky, some of these could only be measured when the app was either very wide or very narrow with certain font sizes, and how large the glyphs and the viewport are will likely have a direct relationship with render time. These aren't all called out in the image above; I think it was Sublime and all the VS Codes that needed to be narrow with a large font size. Conversely, the vim tests are in very wide terminals, so it's unfair that way too.
  • This isn't representative of real-world typing, as it's typing period (.) characters into a text file, so it ignores autocomplete, for example, and actual coding in general (varying glyphs, scrolling while typing, wrapping, files with a lot of content, etc.)
  • The numbers varied a fair bit between runs and it's not clear why. For example, I measured VS Code v1.73.0-insider fairly early on and it was 18ms average, while another Sublime Text profile just now was 14ms average. A more thorough comparison should probably have more individual runs and/or more than 200 characters.
  • The Insiders runs included my set of extensions.
  • Electron/Chromium typically had a large maximum, as spikes could occur for unknown reasons.
  • xterm.js' webgl renderer is super fast, which is evident when you compare it to conpty in the picture. This used the barebones demo connected to the "fake pty", which basically just prints what you type. So it represents the potential of where we could get to if typing did nearly nothing except render; that's unrealistic though, as a lot of things need to happen in the editor.

Hardware used:

  • CPU: 12th Gen Intel Core i7-12700KF 3.61 GHz (efficiency cores disabled in BIOS)
  • GPU: RTX 2070 Super
  • Monitor: 240Hz, 3840x2160 resolution, 150% scale

@Tyriar Tyriar modified the milestones: October 2022, November 2022 Oct 24, 2022
bpasero added a commit that referenced this issue Nov 16, 2022
* Batch/defer history service navigation events

Part of #161622

* Pass store through Event.accumulate

* Remove duplicate function

* Action feedback

* 💄

Co-authored-by: Benjamin Pasero <[email protected]>
Tyriar commented Nov 23, 2022

I'm going to call this done for now. Here are the outcomes:

  • An in-depth look at the major things that happen when typing
  • Deferred a bunch of work that happens when typing
  • Set up a telemetry event which will let us track our input latency goals over time and ensure we don't regress
  • Learned a lot along the way

@Tyriar Tyriar closed this as completed Nov 23, 2022
@github-actions github-actions bot locked and limited conversation to collaborators Jan 7, 2023