
Conversation

@Erarndt (Contributor) commented Jul 10, 2025

Fixes #

Context

The implementation of ConcurrentStack<T> creates a new wrapper node each time an item is pushed onto the stack:

public void Push(T item)
{
    Node node = new Node(item);
    node.m_next = m_head;
    if (Interlocked.CompareExchange(ref m_head, node, node.m_next) != node.m_next)
    {
        PushCore(node, node);
    }
}

This creates an appreciable amount of allocations during build:
[allocation profile screenshot]
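The per-push allocation is easy to observe directly. The following is an illustrative micro-measurement, not taken from the PR: it uses GC.GetAllocatedBytesForCurrentThread to compare pushes into a ConcurrentStack<T> (which allocates a Node wrapper on every Push) against a pre-sized Stack<T> (which writes into an existing backing array).

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

// Illustrative sketch: measure bytes allocated by pushing into each stack.
class AllocationSketch
{
    // Returns bytes allocated on this thread by n pushes into each collection.
    public static (long concurrentBytes, long plainBytes) Compare(int n)
    {
        var concurrent = new ConcurrentStack<object>();
        var plain = new Stack<object>(n); // pre-sized: no growth allocations
        object item = new object();

        long before = GC.GetAllocatedBytesForCurrentThread();
        for (int i = 0; i < n; i++) concurrent.Push(item);
        long concurrentBytes = GC.GetAllocatedBytesForCurrentThread() - before;

        before = GC.GetAllocatedBytesForCurrentThread();
        for (int i = 0; i < n; i++) plain.Push(item);
        long plainBytes = GC.GetAllocatedBytesForCurrentThread() - before;

        return (concurrentBytes, plainBytes);
    }

    static void Main()
    {
        var (c, p) = Compare(100_000);
        Console.WriteLine($"ConcurrentStack<T>: {c} bytes, Stack<T>: {p} bytes");
    }
}
```

The ConcurrentStack<T> figure grows linearly with the number of pushes, while the pre-sized Stack<T> stays near zero.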

These allocations can almost entirely be eliminated by using a vanilla generic Stack<T> with a lock.

[allocation profile screenshot after the change]
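The replacement pattern can be sketched as a small pool type. LockedPool is an illustrative name, not an actual MSBuild type: a plain Stack<T> guarded by a lock. Once the backing array has grown, a Rent/Return cycle allocates nothing, unlike ConcurrentStack<T>, which allocates a Node wrapper on every Push.

```csharp
using System.Collections.Generic;

// Minimal sketch (assumed names) of a lock-guarded Stack<T> pool.
internal sealed class LockedPool<T> where T : class
{
    private readonly Stack<T> _pool = new Stack<T>();

    // Returns a pooled instance, or null if the pool is empty.
    public T Rent()
    {
        lock (_pool)
        {
            return _pool.Count > 0 ? _pool.Pop() : null;
        }
    }

    // Both Rent and Return take the same lock, because Stack<T> itself is
    // not thread-safe.
    public void Return(T item)
    {
        lock (_pool)
        {
            _pool.Push(item);
        }
    }
}
```

Under low contention the lock is cheap (typically an uncontended Monitor acquisition), which is the trade the PR makes in exchange for eliminating the per-push allocations.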

Changes Made

Testing

Notes

Copilot AI review requested due to automatic review settings July 10, 2025 23:18
Copilot AI (Contributor) left a comment


Pull Request Overview

This PR replaces the thread-safe ConcurrentStack<GenericExpressionNode> pools with a plain Stack<GenericExpressionNode> guarded by a lock to eliminate per-push node allocations and reduce GC pressure.

  • Swaps all ConcurrentStack<GenericExpressionNode> usage to Stack<GenericExpressionNode> in the cache structure.
  • Adds a lock(expressionPool) around popping/parsing logic in EvaluateConditionCollectingConditionedProperties.
  • Updates the ExpressionTreeForCurrentOptionsWithSize constructor and GetOrAdd signatures to use Stack<GenericExpressionNode>.
Comments suppressed due to low confidence (3)

src/Build/Evaluation/ConditionEvaluator.cs:251

  • Add unit tests that simulate concurrent access to the new Stack<GenericExpressionNode> pools, ensuring that both retrieval and return of parsed expressions under lock correctly preserve pool integrity and reuse.
            lock (expressionPool)

src/Build/Evaluation/ConditionEvaluator.cs:251

  • Only the pop-and-parse logic is wrapped by this lock, but the subsequent push back into the pool (still using Stack.Push) occurs outside it. Since Stack<T> is not thread-safe, wrap both pop and push operations in the same lock to avoid race conditions.
            lock (expressionPool)

src/Build/Evaluation/ConditionEvaluator.cs:255

  • Declaring parsedExpression inside the lock block can limit its scope if you later need to use it outside. Consider moving the declaration immediately before the lock so its value is accessible throughout the remainder of the method.
                GenericExpressionNode parsedExpression;
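The two locking comments above amount to one discipline, sketched below. The identifiers (GenericExpressionNode, expressionPool, ParseCondition) stand in for the real ConditionEvaluator members and are illustrative, not MSBuild source; in the actual code the pool is keyed per condition string, while this sketch keeps a single pool for brevity. The parsed expression is declared before the lock so it stays in scope for the whole method, and both the pop (reuse) and the later push (return) take the same lock.

```csharp
using System.Collections.Generic;

// Illustrative stand-in for the real parsed-expression node type.
internal sealed class GenericExpressionNode
{
    public readonly string Condition;
    public GenericExpressionNode(string condition) => Condition = condition;
}

internal static class ConditionEvaluatorSketch
{
    private static readonly Stack<GenericExpressionNode> expressionPool =
        new Stack<GenericExpressionNode>();

    public static int PooledCount
    {
        get { lock (expressionPool) { return expressionPool.Count; } }
    }

    // Placeholder parser (assumed helper, not the real MSBuild parser).
    private static GenericExpressionNode ParseCondition(string condition) =>
        new GenericExpressionNode(condition);

    public static bool Evaluate(string condition)
    {
        // Declared before the lock so it remains usable for the whole method.
        GenericExpressionNode parsedExpression = null;
        lock (expressionPool)
        {
            if (expressionPool.Count > 0)
            {
                parsedExpression = expressionPool.Pop();
            }
        }

        if (parsedExpression == null)
        {
            parsedExpression = ParseCondition(condition); // parse outside the lock
        }

        try
        {
            return parsedExpression.Condition.Length > 0; // placeholder evaluation
        }
        finally
        {
            // The push back takes the SAME lock: Stack<T> is not thread-safe.
            lock (expressionPool)
            {
                expressionPool.Push(parsedExpression);
            }
        }
    }
}
```

Keeping the parse itself outside the lock keeps the critical sections short; only the Stack<T> mutations need mutual exclusion.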

@ghost assigned AR-May and unassigned AR-May Aug 6, 2025
@AR-May (Member) left a comment


Looks good to me. I ran exp insertion, let's see if it passes.

@JanProvaznik (Member) left a comment


Looks correct in the reasoning and code; let's see insertion.

@AR-May (Member) merged commit 857f95b into dotnet:main Aug 21, 2025
9 checks passed
JanProvaznik pushed a commit that referenced this pull request Aug 26, 2025
@Erarndt deleted the dev/erarndt/concurrentStackFix branch September 22, 2025 18:10

3 participants