116 changes: 116 additions & 0 deletions spanner/session_test.go
@@ -2221,3 +2221,119 @@ func TestSessionRecycle_AlreadyInvalidSession(t *testing.T) {
t.Errorf("Unexpected error \"Number of sessions in use is negative\" logged: %s", logOutput)
}
}

// TestSessionPool_CreateMultiplexedSession_NoGoroutineLeak tests that closing
// the session pool properly cleans up the createMultiplexedSession goroutine.
// This is a regression test for issue #13396.
//
// Before the fix, the multiplexedSessionReq channel was not closed when the
// pool was closed, causing the createMultiplexedSession goroutine to block
// forever waiting for requests.
//
// Note: This test validates the fix by checking that the multiplexedSessionReq
// channel is properly closed. We test channel closure directly rather than
// counting goroutines, as goroutine counting is unreliable in test environments.
func TestSessionPool_CreateMultiplexedSession_NoGoroutineLeak(t *testing.T) {
t.Parallel()

ctx := context.Background()

// Create a client with multiplexed sessions enabled
_, client, teardown := setupMockedTestServerWithConfig(t, ClientConfig{
DisableNativeMetrics: true,
SessionPoolConfig: SessionPoolConfig{
MinOpened: 0,
MaxOpened: 1,
enableMultiplexSession: true,
},
})
defer teardown()

pool := client.idleSessions

// Verify the channel exists and is open by sending a request
select {
case pool.multiplexedSessionReq <- muxSessionCreateRequest{force: false, ctx: ctx}:
// Successfully sent, channel is open
case <-time.After(100 * time.Millisecond):
t.Fatal("multiplexedSessionReq channel appears to be blocked or closed before pool close")
}

// Give the createMultiplexedSession goroutine time to process the request
time.Sleep(50 * time.Millisecond)

// Close the pool
client.Close()

// Give some time for cleanup
time.Sleep(100 * time.Millisecond)
Comment on lines +2263 to +2269 (Contributor, severity: medium):
These time.Sleep calls can likely be removed to make the test faster and more robust.

  1. The sleep on line 2263 (time.Sleep(50 * time.Millisecond)) seems unnecessary. The preceding select block already confirms that a message can be sent to the channel. The test doesn't depend on the request being fully processed, only that the channel was open, so waiting here isn't required.
  2. The sleep on line 2269 (time.Sleep(100 * time.Millisecond)) is redundant. The final select block (lines 2273-2281) already waits for up to 100ms for the channel to be closed. By having a sleep and a timeout, the test waits longer than necessary. You can rely solely on the select with its timeout to handle any small delay in closing the channel after client.Close() returns.

Removing these sleeps would make the test more efficient without compromising its correctness.
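The reviewer's claim that the select's own timeout absorbs any delay can be sketched with a minimal standalone program. The buffered channel below is an illustrative stand-in, not the actual pool field; the point is that no sleep is needed between closing a channel and detecting the closure in a select.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	req := make(chan struct{}, 1)

	// The send succeeds immediately on the open buffered channel,
	// confirming it is open; no follow-up sleep is required.
	select {
	case req <- struct{}{}:
		fmt.Println("channel is open")
	case <-time.After(100 * time.Millisecond):
		fmt.Println("send blocked")
	}

	close(req)
	<-req // drain the buffered value

	// The timeout branch alone handles any small delay: a receive from a
	// closed, empty channel returns immediately with ok=false.
	select {
	case _, ok := <-req:
		fmt.Println("closed =", !ok)
	case <-time.After(100 * time.Millisecond):
		fmt.Println("timed out")
	}
}
```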


// Verify the channel is properly closed
// A receive from a closed channel returns immediately with zero value and ok=false
select {
case _, ok := <-pool.multiplexedSessionReq:
if ok {
t.Fatal("multiplexedSessionReq channel is still open after pool close; this would cause a goroutine leak")
}
// Channel is properly closed, test passes
case <-time.After(100 * time.Millisecond):
t.Fatal("multiplexedSessionReq channel is not closed after pool close; createMultiplexedSession goroutine is leaked")
}
}

// TestSessionPool_MultiplexedSessionReqChannelClosed tests that the
// multiplexedSessionReq channel is properly closed when the pool is closed.
func TestSessionPool_MultiplexedSessionReqChannelClosed(t *testing.T) {
t.Parallel()

ctx := context.Background()

// Create a client with multiplexed sessions enabled
_, client, teardown := setupMockedTestServerWithConfig(t, ClientConfig{
DisableNativeMetrics: true,
SessionPoolConfig: SessionPoolConfig{
MinOpened: 0,
MaxOpened: 1,
enableMultiplexSession: true,
},
})
defer teardown()

pool := client.idleSessions

// Verify the channel exists and is open
select {
case pool.multiplexedSessionReq <- muxSessionCreateRequest{force: false, ctx: ctx}:
// Successfully sent, channel is open
case <-time.After(100 * time.Millisecond):
t.Fatal("multiplexedSessionReq channel appears to be blocked or closed before pool close")
}

// Close the pool
client.Close()

// Give some time for cleanup
time.Sleep(50 * time.Millisecond)

// Verify the channel is closed by attempting to send
// A send on a closed channel will panic, so we recover from it
func() {
defer func() {
if r := recover(); r != nil {
t.Fatal("Sending to multiplexedSessionReq panicked, which means channel might not be closed properly or send was attempted incorrectly")
}
}()

// Try to receive from the closed channel
// A receive from a closed channel returns immediately with zero value
select {
case _, ok := <-pool.multiplexedSessionReq:
if ok {
t.Fatal("multiplexedSessionReq channel is still open after pool close")
}
// Channel is properly closed
case <-time.After(100 * time.Millisecond):
t.Fatal("multiplexedSessionReq channel is not closed after pool close")
}
}()
Comment on lines +2320 to +2338 (Contributor, severity: medium):

This anonymous function with defer/recover is unnecessary and its logic doesn't align with the implementation.

  • The recover is set up to handle a panic, which typically occurs when sending to a closed channel.
  • However, the code inside the select block receives from the channel. Receiving from a channel (whether open or closed) never panics, so the recover block is unreachable.
  • The select block on its own is a correct and sufficient way to verify if the channel is closed. The surrounding func() wrapper and defer/recover logic can be removed for clarity and correctness. This would also make the comments on lines 2318-2319, which mention panics, obsolete.

I suggest replacing this entire block with just the core select logic.

	// Verify the channel is closed by attempting to receive.
	// A receive from a closed channel returns immediately with a zero value and ok=false.
	select {
	case _, ok := <-pool.multiplexedSessionReq:
		if ok {
			t.Fatal("multiplexedSessionReq channel is still open after pool close")
		}
		// Channel is properly closed.
	case <-time.After(100 * time.Millisecond):
		t.Fatal("multiplexedSessionReq channel is not closed after pool close")
	}
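As background for the two bullet points above, a minimal standalone program (separate from the Spanner code itself) demonstrates that receiving from a closed channel never panics, while sending does, which is why the `recover` wrapper around a receive is unreachable:

```go
package main

import "fmt"

func main() {
	ch := make(chan int)
	close(ch)

	// Receiving from a closed channel never panics: it returns the
	// zero value with ok=false immediately.
	v, ok := <-ch
	fmt.Println("received:", v, "ok:", ok)

	// Sending to a closed channel is what panics; recover is only
	// meaningful around a send, not a receive.
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("send panicked:", r)
		}
	}()
	ch <- 1
}
```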

}
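The goroutine-exit-on-close pattern these tests validate can be sketched in isolation. The `request` and `pool` types below are illustrative stand-ins for `muxSessionCreateRequest` and the session pool, not the actual client implementation: closing the request channel makes the worker's `range` loop terminate, so the goroutine cannot leak.

```go
package main

import (
	"fmt"
	"sync"
)

// request stands in for muxSessionCreateRequest (hypothetical type).
type request struct{ force bool }

type pool struct {
	req  chan request
	done sync.WaitGroup
}

func newPool() *pool {
	p := &pool{req: make(chan request)}
	p.done.Add(1)
	go p.worker()
	return p
}

// worker drains requests until the channel is closed; ranging over a
// channel exits cleanly on close, so the goroutine cannot block forever.
func (p *pool) worker() {
	defer p.done.Done()
	for r := range p.req {
		fmt.Println("handling request, force =", r.force)
	}
}

// close closes the request channel and waits for the worker to exit.
func (p *pool) close() {
	close(p.req)
	p.done.Wait()
}

func main() {
	p := newPool()
	p.req <- request{force: false}
	p.close()
	fmt.Println("worker exited; no goroutine leak")
}
```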