Support auditing chunked SQL Server packets #29228
Conversation
    initialPacketHeader = p.Header()
    }
    ...
    chunkData.Write(p.Data())
Could this cause a DoS if a large enough packet was crafted?
Should we add a (configurable?) max size for the chunk data?
There is no good limit value we can set. We could consider using https://learn.microsoft.com/en-us/sql/sql-server/maximum-capacity-specifications-for-sql-server?view=sql-server-ver16#:~:text=Bytes%20per%20varchar(max)%2C%20varbinary(max)%2C%20xml%2C%20text%2C%20or%20image%20column but that is a limit for a single connection; a client can establish many DB connections and try to force the Teleport DB agent to run out of memory.
I wonder if we can add a debug/info log when handling a large packet and audit those operations.
We can't set a limit that still yields a valid packet, because RPC calls can be composed of multiple parameters. Although I prefer predictable memory consumption, I don't know how we should handle those partial packets. We could stick with the current behavior of generating db.session.malformed_packet when a packet exceeds a threshold, but that could also be used to bypass audit logs.
For me, the best solution would be to parse those requests partially, so we could decide to split them into multiple audit logs. But that would require custom packet parsing (beyond what we get from the go-mssqldb driver); I think it might not be worth it.
I think it would be OK to return an error if the packet exceeds a couple of MBs.
Or, just truncate the packet (is that possible?) and add a sentinel flag indicating that the message is truncated.
For reference, AWS CloudTrail has a max of 256KB for their log message size.
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/WhatIsCloudTrail-Limits.html
marcoandredinis left a comment
Sorry for the delay
There's only one open thread wrt packet size but feel free to merge as is 👍
@gabrielcorado See the table below for backport results.
Closes #28632
Chunked packets are packets that don't fit in the negotiated network packet size. In this case, each packet carries a status flag indicating that it is not the last message of the chunk.
Besides the production changes to support this kind of packet, this PR also adds new functions for generating SQL Server packets. These functions are better documented and let tests rely on provided data instead of example packets.
Note: This PR does not cover RPC calls that use the NTEXTTYPE parameter. This is due to an error involving the driver; it will require additional debugging, and I'll work on it separately (tracked in a separate issue).
Changelog: Improved audit logging support for large SQL Server queries.