forked from snyk-labs/nodejs-goof
[Snyk] Upgrade mongodb from 3.5.9 to 6.10.0 #1366
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Open
akanchhaS wants to merge 29 commits into master from snyk-upgrade-d804d2e20fee79d61abdc80f59c9328f
Conversation
Circleci project setup
Circleci editor/master
Circleci editor/master
Snyk has created this PR to upgrade mongodb from 3.5.9 to 6.10.0. See this package in npm: mongodb. See this project in Snyk: https://app.snyk.io/org/panda-co/project/ebfb2282-581e-4b1b-afb0-8a0e07b1b540?utm_source=github&utm_medium=referral&page=upgrade-pr
Snyk has created this PR to upgrade mongodb from 3.5.9 to 6.10.0.
ℹ️ Keep your dependencies up-to-date. This makes it easier to fix existing vulnerabilities and to more quickly identify and fix newly disclosed vulnerabilities when they affect your project.
The recommended version is 226 versions ahead of your current version.
The recommended version was released 21 days ago.
Issues fixed by the recommended upgrade:
SNYK-JS-BL-608877
Release notes
Package name: mongodb
6.10.0 (2024-10-21)
The MongoDB Node.js team is pleased to announce version 6.10.0 of the mongodb package!
Release Notes
Warning
Server versions 3.6 and lower will get a compatibility error on connection and support for MONGODB-CR authentication is now also removed.
Support for new client bulkWrite API (8.0+)
A new bulk write API on the MongoClient is now supported for users on server versions 8.0 and higher. This API is meant to replace the existing bulk write API on the Collection, as it supports a bulk write across multiple databases and collections in a single call.
Usage
Users of this API call MongoClient#bulkWrite and provide a list of bulk write models and options. The models have a structure as follows:
Insert One
Note that when no _id field is provided in the document, the driver will generate a BSON ObjectId automatically.
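A minimal sketch of an insert one model, using the namespace/name/document fields shown in the schema example later in these notes (the namespace and document values are illustrative):
const insertOneModel = {
  namespace: 'db.books',   // "<database>.<collection>" targeted by this write
  name: 'insertOne',       // discriminator for the model type
  document: { title: 'Practical MongoDB Aggregations' }  // _id omitted: the driver generates an ObjectId
};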
Update One
Update Many
Note that write errors occurring with an update many model present are not retryable.
Replace One
Delete One
Delete Many
Note that write errors occurring with a delete many model present are not retryable.
Example
Below is a mixed model example of using the new API:
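A sketch of such a call, with an illustrative connection string, namespaces, and documents (the filter/update fields on the update and delete models are assumptions that mirror the collection-level bulk write models):
import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
try {
  // One call can target multiple databases and collections.
  const result = await client.bulkWrite(
    [
      { namespace: 'db.books', name: 'insertOne', document: { title: 'Practical MongoDB Aggregations' } },
      { namespace: 'db.authors', name: 'updateOne', filter: { name: 'Paul Done' }, update: { $set: { bookCount: 1 } } },
      { namespace: 'archive.books', name: 'deleteMany', filter: { outOfPrint: true } }
    ],
    { ordered: true, verboseResults: true }
  );
  console.log(result);
} finally {
  await client.close();
}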
The bulk write specific options that can be provided to the API are as follows:
- ordered: Optional boolean that indicates whether the bulk write is ordered. Defaults to true.
- verboseResults: Optional boolean to indicate whether to provide verbose results. Defaults to false.
- bypassDocumentValidation: Optional boolean to bypass document validation rules. Defaults to false.
- let: Optional document of parameter names and values that can be accessed using $$var. No default.
The object returned by the bulk write API is:
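A rough sketch of the result's shape; the field names are assumptions that mirror the write counts and the verboseResults option described above, and the driver's ClientBulkWriteResult type is the authoritative definition:
interface ClientBulkWriteResultSketch {
  acknowledged: boolean;     // false for unacknowledged (w: 0) writes
  insertedCount: number;
  upsertedCount: number;
  matchedCount: number;
  modifiedCount: number;
  deletedCount: number;
  // present only when verboseResults: true, keyed by the index of the model
  insertResults?: Map<number, { insertedId: unknown }>;
  updateResults?: Map<number, unknown>;
  deleteResults?: Map<number, unknown>;
}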
Error Handling
Server side errors encountered during a bulk write will throw a MongoClientBulkWriteError. This error has the following properties:
- writeConcernErrors: An array of documents for each write concern error that occurred.
- writeErrors: A map of index pointing at the models provided and the individual write error.
- partialResult: The client bulk write result at the point where the error was thrown.
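A hedged sketch of inspecting these properties; the helper name and the un-parameterized ClientBulkWriteModel[] typing are assumptions:
import { MongoClient, MongoClientBulkWriteError } from 'mongodb';
import type { ClientBulkWriteModel } from 'mongodb';

async function runBulkWrite(client: MongoClient, models: ClientBulkWriteModel[]) {
  try {
    return await client.bulkWrite(models);
  } catch (error) {
    if (error instanceof MongoClientBulkWriteError) {
      console.log(error.writeConcernErrors); // one document per write concern error
      console.log(error.writeErrors);        // map keyed by the index of the failing model
      console.log(error.partialResult);      // result accumulated before the error was thrown
    }
    throw error;
  }
}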
Schema assertion support
interface Book {
  name: string;
  authorName: string;
}

interface Author {
  name: string;
}

type MongoDBSchemas = {
  'db.books': Book;
  'db.authors': Author;
}

const model: ClientBulkWriteModel<MongoDBSchemas> = {
  namespace: 'db.books',
  name: 'insertOne',
  document: { title: 'Practical MongoDB Aggregations', authorName: 3 }
  // error: authorName cannot be number
};
Notice how authorName is type checked against the Book type because namespace is set to "db.books".
Allow SRV hostnames with fewer than three .-separated parts
In an effort to make internal networking solutions easier to use, such as deployments using Kubernetes, the client now accepts SRV hostname strings with one or two .-separated parts. For security reasons, the returned addresses of SRV strings with fewer than three parts must end with the entire SRV hostname and contain at least one additional domain level. This added validation ensures that the returned address(es) are from a known host. In future releases, we plan on extending this validation to SRV strings with three or more parts as well.
// Example 1: Validation fails since the returned address does not end with the entire SRV hostname
'mongodb+srv://mySite.com' => 'myEvilSite.com'
// Example 2: Validation fails since the returned address is identical to the SRV hostname
'mongodb+srv://mySite.com' => 'mySite.com'
// Example 3: Validation passes since the returned address ends with the entire SRV hostname and contains an additional domain level
'mongodb+srv://mySite.com' => 'cluster_1.mySite.com'
Explain now supports maxTimeMS
Driver CRUD commands can be explained by providing the explain option. However, if maxTimeMS was provided, the maxTimeMS value was applied to the command to explain, and consequently the server could take more than maxTimeMS to respond.
Now, maxTimeMS can be specified as a new option for explain commands:
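For example, a sketch where the filter is illustrative and the options-object form of explain() is assumed from the description above:
const explanation = await collection.find({ year: { $gte: 2020 } }).explain({
  verbosity: 'queryPlanner',
  maxTimeMS: 2000   // applies to the explain command itself
});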
If a top-level maxTimeMS option is provided in addition to the explain maxTimeMS, the explain-specific maxTimeMS is applied to the explain command, and the top-level maxTimeMS is applied to the explained command:
// a delete command with both a top-level and an explain-specific maxTimeMS
// (the collection call is illustrative; the resulting command is from the release notes):
await collection.deleteMany({}, {
  maxTimeMS: 1000,
  explain: {
    verbosity: 'queryPlanner',
    maxTimeMS: 2000
  }
});

// the actual command that gets sent to the server looks like:
{
  explain: { delete: <collection name>, ..., maxTimeMS: 1000 },
  verbosity: 'queryPlanner',
  maxTimeMS: 2000
}
Find and Aggregate Explain in Options is Deprecated
Note
Specifying explain for cursors in the operation's options is deprecated in favor of the .explain() methods on cursors:
collection.find({}, { explain: true })
// -> collection.find({}).explain()
collection.aggregate([], { explain: true })
// -> collection.aggregate([]).explain()
db.find([], { explain: true })
// -> db.find([]).explain()
Fixed writeConcern.w set to 0 unacknowledged write protocol trigger
The driver now correctly handles w=0 writes as 'fire-and-forget' messages, where the server does not reply and the driver does not wait for a response. This change eliminates the possibility of encountering certain rare protocol format, BSON type, or network errors that could previously arise during server replies. As a result, w=0 operations now involve less I/O, specifically no socket read.
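For instance, a sketch where the document is illustrative:
// With w: 0 the server does not reply and the driver does not wait for a response.
await collection.insertOne({ ping: 1 }, { writeConcern: { w: 0 } });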
In addition, when command monitoring is enabled, the reply field of a CommandSucceededEvent of an unacknowledged write will always be { ok: 1 }.
Fixed indefinite hang bug for high write load scenarios
When performing large and many write operations, the driver will likely encounter buffering at the socket layer. The logic that waited until buffered writes were complete would mistakenly drop 'data' (reading from the socket), causing the driver to hang indefinitely or until a socket timeout. Using the pausing and resuming mechanisms exposed by Node streams, we have eliminated the possibility for data events to go unhandled.
Shout out to @hunkydoryrepair for debugging and finding this issue!
Fixed change stream infinite resume
Before this fix, when change streams would fail to establish a cursor on the server, the driver would infinitely attempt to resume the change stream. Now, when the aggregate to establish the change stream fails, the driver will throw an error and close the change stream.
ClientSession.commitTransaction() no longer unconditionally overrides write concern
Prior to this change, ClientSession.commitTransaction() would always override any previously configured writeConcern on the initial attempt. This overriding behaviour now only applies to internal and user-initiated retries of ClientSession.commitTransaction() for a given transaction.
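A sketch of the behaviour under stated assumptions (the client, collection, and documents are illustrative):
const session = client.startSession();
try {
  // The transaction's configured writeConcern is no longer overridden on the first commit attempt.
  session.startTransaction({ writeConcern: { w: 'majority' } });
  await collection.insertOne({ a: 1 }, { session });
  await session.commitTransaction();
} catch (error) {
  await session.abortTransaction();
  throw error;
} finally {
  await session.endSession();
}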
Features
Bug Fixes
Documentation
We invite you to try the mongodb library immediately, and report any issues to the NODE project.
6.9.0 (2024-09-06)
The MongoDB Node.js team is pleased to announce version 6.9.0 of the mongodb package!
Release Notes
Driver support of upcoming MongoDB server release
Increased the driver's max supported Wire Protocol version and server version in preparation for the upcoming release of MongoDB 8.0.
MongoDB 3.6 server support deprecated
Warning
Support for 3.6 servers is deprecated and will be removed in a future version.
Support for explicit resource management
The driver now natively supports explicit resource management for MongoClient, ClientSession, ChangeStreams and cursors. Additionally, on compatible Node.js versions, explicit resource management can be used with cursor.stream() and the GridFSDownloadStream, since these classes inherit resource management from Node.js' readable streams.
This feature is experimental and subject to changes at any time. This feature will remain experimental until the proposal has reached stage 4 and Node.js declares its implementation of async disposable resources as stable.
To use explicit resource management with the Node driver, you must:
- include tslib polyfills for your application
- ensure Symbol.asyncDispose is defined in your environment (see the TS 5.2 release announcement for more information)
Explicit resource management is a feature that ensures that resources' disposal methods are always called when the resources' scope is exited. For driver resources, explicit resource management guarantees that the resources' corresponding close method is called when the resource goes out of scope.
closemethod is called when the resource goes out of scope.{
try {
const client = MongoClient.connect('<uri>');
try {
const session = client.startSession();
const cursor = client.db('my-db').collection("my-collection").find({}, { session });
try {
const doc = await cursor.next();
} finally {
await cursor.close();
}
} finally {
await session.endSession();
}
} finally {
await client.close();
}
}
// with explicit resource management:
{
await using client = await MongoClient.connect('<uri>');
await using session = client.startSession();
await using cursor = client.db('my-db').collection('my-collection').find({}, { session });
const doc = await cursor.next();
}
// outside of scope, the cursor, session and mongo client will be cleaned up automatically.
The full explicit resource management proposal can be found here.
Driver now supports auto selecting between IPv4 and IPv6 connections
For users on Node versions that support the autoSelectFamily and autoSelectFamilyAttemptTimeout options (Node 18.13+), they can now be provided to the MongoClient and will be passed through to socket creation. autoSelectFamily will default to true, with autoSelectFamilyAttemptTimeout not defined by default. Example:
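A minimal sketch; the URI and timeout value are illustrative:
import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017', {
  autoSelectFamily: true,              // let Node.js race IPv4/IPv6 connection attempts
  autoSelectFamilyAttemptTimeout: 300  // milliseconds allowed per attempt
});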
Allow passing through allowPartialTrustChain Node.js TLS option
This option is now exposed through the MongoClient constructor's options parameter and controls the X509_V_FLAG_PARTIAL_CHAIN OpenSSL flag.
Fixed enableUtf8Validation option
Starting in v6.8.0 we inadvertently removed the ability to disable UTF-8 validation when deserializing BSON. Validation is normally a good thing, but it was always meant to be configurable, and the recent Node.js runtime issues (v22.7.0) make this option indispensable for avoiding errors from mistakenly generated invalid UTF-8 bytes.
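A sketch; the URI is illustrative:
// UTF-8 validation stays on by default; disable it only to work around invalid bytes.
const client = new MongoClient('mongodb://localhost:27017', { enableUtf8Validation: false });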
Add duration indicating time elapsed between connection creation and when the connection is ready
ConnectionReadyEvent now has a durationMS property that represents the time between the connection creation event and when the connection ready event is fired.
Add duration indicating time elapsed between the beginning and end of a connection checkout operation
ConnectionCheckedOutEvent / ConnectionCheckFailedEvent now have a durationMS property that represents the time between checkout start and success/failure.
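A sketch of observing the new property through the client's connection pool monitoring events; the logging is illustrative:
client.on('connectionReady', event => {
  console.log(`connection ready after ${event.durationMS} ms`);
});
client.on('connectionCheckedOut', event => {
  console.log(`checkout took ${event.durationMS} ms`);
});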
Create native cryptoCallbacks 🔐
Node.js bundles OpenSSL, which means we can access the crypto APIs from C++ directly, avoiding the need to define them in JavaScript and call back into the JS engine to perform encryption. Now, when running the bindings in a version of Node.js that bundles OpenSSL 3 (should correspond to Node.js 18+), the cryptoCallbacks option will be ignored and C++ defined callbacks will be used instead. This improves the performance of encryption dramatically, as much as 5x faster. 🚀
This improvement was made to [email protected], which is available now!
Only permit mongocryptd spawn path and arguments to be own properties
We have added some defensive programming to the options that specify the spawn path and spawn arguments for mongocryptd due to the sensitivity of the system resource they control, namely, launching a process. Now, mongocryptdSpawnPath and mongocryptdSpawnArgs must be own properties of autoEncryption.extraOptions. This makes it more difficult for a global prototype pollution bug related to these options to occur.
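A sketch of where these options live; the paths, namespace, and KMS key are illustrative:
const client = new MongoClient('mongodb://localhost:27017', {
  autoEncryption: {
    keyVaultNamespace: 'encryption.__keyVault',
    kmsProviders: { local: { key: localMasterKey } },
    extraOptions: {
      // must now be own properties of extraOptions, not values inherited via the prototype chain
      mongocryptdSpawnPath: '/usr/local/bin/mongocryptd',
      mongocryptdSpawnArgs: ['--idleShutdownTimeoutSecs', '60']
    }
  }
});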
Support for range v2: Queryable Encryption supports range queries
Queryable encryption range queries are now officially supported. To use this feature, you must:
Important
Collections and documents encrypted with range queryable fields with a 7.0 server are not compatible with range queries on 8.0 servers.
Documentation for queryable encryption can be found in the MongoDB server manual.
insertMany and bulkWrite accept ReadonlyArray inputs
This improves the TypeScript developer experience: developers tend to use ReadonlyArray because it can help show where mutations are made, and when enabling noUncheckedIndexedAccess it leads to a better type narrowing experience. Please note that the array is read-only but the documents are not; the driver adds _id fields to your documents unless you request that the server generate the _id with forceServerObjectId.
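A sketch; the collection and documents are illustrative:
const docs: ReadonlyArray<{ name: string }> = [{ name: 'Ada' }, { name: 'Grace' }];
// Accepted directly now; the driver may still add an _id to each document object.
await collection.insertMany(docs);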
Fix retryability criteria for write concern errors on pre-4.4 sharded clusters
Previously, the driver would erroneously retry writes on pre-4.4 sharded clusters based on a nested code in the server response (error.result.writeConcernError.code). Per the common drivers specification, retryability should be based on the top-level code (error.code). With this fix, the driver avoids unnecessary retries.
The LocalKMSProviderConfiguration's key property accepts Binary for auto encryption
In #4160 we fixed a type issue where a local KMS provider at runtime accepted a BSON Binary instance but the TypeScript types inaccurately only permitted Buffer and string. The same change has now been applied to AutoEncryptionOptions.
BulkOperationBase (superclass of UnorderedBulkOperation and OrderedBulkOperation) now reports length property in TypeScript
The length getter for these classes was defined manually using Object.defineProperty, which hid it from TypeScript. Thanks to @sis0k0 we now have the getter defined on the class, which is functionally the same, but a greatly improved DX when working with types. 🎉
MongoWriteConcernError.code is overwritten by nested code within MongoWriteConcernError.result.writeConcernError.code
MongoWriteConcernError is now correctly formed such that:
- the original top-level code is preserved, if present
- MongoWriteConcernError.code is set to MongoWriteConcernError.result.writeConcernError.code otherwise
Optimized cursor.toArray()
Prior to this change, toArray() simply used the cursor's async iterator API, which parses BSON documents lazily (see more here). toArray(), however, eagerly fetches the entire set of results, pushing each document into the returned array. As such, toArray does not have the same benefits from lazy parsing as other parts of the cursor API.
With this change, when toArray() accumulates documents, it empties the current batch of documents into the array before calling the async iterator again, which means each iteration will fetch the next batch rather than wrap each document in a promise. This allows cursor.toArray() to avoid the delays associated with async/await execution, and allows for a performance improvement of up to 5% on average! 🎉
Note: This performance optimization does not apply if a transform has been provided to cursor.map() before toArray is called.
Fixed mixed use of cursor.next() and cursor[Symbol.asyncIterator]
In 6.8.0, we inadvertently prevented the use of cursor.next() along with using for await syntax to iterate cursors. If your code made use of the following pattern and the call to cursor.next retrieved all your documents in the first batch, then the for-await loop would never be entered. This issue is now fixed.
const doc = await cursor.next();
for await (const doc of cursor) {
  // process doc
  // ...
}
Features