
Commit 0f078ff

feat(libstore): add S3 storage class support
Add support for configuring the S3 storage class via the `storage-class` parameter for `S3BinaryCacheStore`. This allows users to optimize costs by selecting appropriate storage tiers (STANDARD, GLACIER, INTELLIGENT_TIERING, etc.) based on access patterns. The storage class is applied via the `x-amz-storage-class` header for both regular PUT uploads and multipart upload initiation.
1 parent a786c9e commit 0f078ff
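
As a quick usage sketch of the change described above (the bucket name, output path, and object key below are placeholders, not taken from this commit), the parameter can be exercised with `nix copy` and the result spot-checked with the AWS CLI, which reports the storage class of any object stored in a non-STANDARD tier:

```bash
# Upload to an S3 binary cache, requesting the STANDARD_IA tier for newly written objects
nix copy --to 's3://example-nix-cache?storage-class=STANDARD_IA' ./result

# Check one of the uploaded objects; head-object returns a StorageClass
# field for every class except STANDARD
aws s3api head-object \
  --bucket example-nix-cache \
  --key example.narinfo \
  --query StorageClass
```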

File tree

4 files changed: +65 -2 lines changed

Lines changed: 21 additions & 0 deletions
@@ -0,0 +1,21 @@
---
synopsis: "S3 binary cache stores now support storage class configuration"
prs: [14464]
issues: [7015]
---

S3 binary cache stores now support configuring the storage class for uploaded objects via the `storage-class` parameter. This allows users to optimize costs by selecting appropriate storage tiers based on access patterns.

Example usage:

```bash
# Use Glacier storage for long-term archival
nix copy --to 's3://my-bucket?storage-class=GLACIER' /nix/store/...

# Use Intelligent Tiering for automatic cost optimization
nix copy --to 's3://my-bucket?storage-class=INTELLIGENT_TIERING' /nix/store/...
```

The storage class applies to both regular uploads and multipart uploads. When not specified, objects use the bucket's default storage class.

See the [S3 storage classes documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html) for available storage classes and their characteristics.

src/libstore-tests/s3-binary-cache-store.cc

Lines changed: 18 additions & 0 deletions
```diff
@@ -122,4 +122,22 @@ TEST(S3BinaryCacheStore, parameterFiltering)
     EXPECT_EQ(ref.params["priority"], "10");
 }
 
+/**
+ * Test storage class configuration
+ */
+TEST(S3BinaryCacheStore, storageClassDefault)
+{
+    S3BinaryCacheStoreConfig config{"s3", "test-bucket", {}};
+    EXPECT_EQ(config.storageClass.get(), "");
+}
+
+TEST(S3BinaryCacheStore, storageClassConfiguration)
+{
+    StringMap params;
+    params["storage-class"] = "GLACIER";
+
+    S3BinaryCacheStoreConfig config("s3", "test-bucket", params);
+    EXPECT_EQ(config.storageClass.get(), "GLACIER");
+}
+
 } // namespace nix
```

src/libstore/include/nix/store/s3-binary-cache-store.hh

Lines changed: 20 additions & 0 deletions
```diff
@@ -93,6 +93,26 @@ struct S3BinaryCacheStoreConfig : HttpBinaryCacheStoreConfig
           Default is 100 MiB. Only takes effect when multipart-upload is enabled.
         )"};
 
+    const Setting<std::string> storageClass{
+        this,
+        "",
+        "storage-class",
+        R"(
+          The S3 storage class to use for uploaded objects. When empty (default),
+          uses the bucket's default storage class. Valid values include:
+          - STANDARD (default, frequently accessed data)
+          - REDUCED_REDUNDANCY (less frequently accessed data)
+          - STANDARD_IA (infrequent access)
+          - ONEZONE_IA (infrequent access, single AZ)
+          - INTELLIGENT_TIERING (automatic cost optimization)
+          - GLACIER (archival with retrieval times in minutes to hours)
+          - DEEP_ARCHIVE (long-term archival with 12-hour retrieval)
+          - GLACIER_IR (instant retrieval archival)
+
+          See AWS S3 documentation for detailed storage class descriptions and pricing:
+          https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html
+        )"};
+
     /**
      * Set of settings that are part of the S3 URI itself.
      * These are needed for region specification and other S3-specific settings.
```
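
Because `storage-class` is an ordinary store setting on the config above, it composes with the existing S3 parameters in a store URL. A hedged illustration (bucket name and region are placeholders; `multipart-upload` is the setting referenced in the surrounding context lines):

```bash
# Combine a region, multipart uploads for large NARs, and an archival storage class
nix copy \
  --to 's3://example-nix-cache?region=eu-west-1&multipart-upload=true&storage-class=GLACIER_IR' \
  /nix/store/...
```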

src/libstore/s3-binary-cache-store.cc

Lines changed: 6 additions & 2 deletions
```diff
@@ -134,10 +134,14 @@ void S3BinaryCacheStore::upsertFile(
     const std::string & path, RestartableSource & source, const std::string & mimeType, uint64_t sizeHint)
 {
     auto doUpload = [&](RestartableSource & src, uint64_t size, std::optional<Headers> headers) {
+        Headers uploadHeaders = headers.value_or(Headers());
+        if (std::string_view storageClass = s3Config->storageClass.get(); !storageClass.empty()) {
+            uploadHeaders.emplace_back("x-amz-storage-class", storageClass);
+        }
         if (s3Config->multipartUpload && size > s3Config->multipartThreshold) {
-            uploadMultipart(path, src, size, mimeType, std::move(headers));
+            uploadMultipart(path, src, size, mimeType, std::move(uploadHeaders));
         } else {
-            upload(path, src, size, mimeType, std::move(headers));
+            upload(path, src, size, mimeType, std::move(uploadHeaders));
         }
     };
 
```
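For context on what the injected header means on the wire, here is roughly the equivalent expressed with the AWS CLI (bucket, key, and file names are placeholders): both `put-object` and `create-multipart-upload` accept a storage class, which corresponds to the same `x-amz-storage-class` header the store now sets on regular uploads and on multipart initiation; the individual part uploads do not carry it.

```bash
# Single PUT: the storage class travels as the x-amz-storage-class header
aws s3api put-object \
  --bucket example-nix-cache \
  --key example.narinfo \
  --body ./example.narinfo \
  --storage-class GLACIER

# Multipart upload: the storage class is fixed when the upload is initiated;
# subsequent upload-part calls do not specify it
aws s3api create-multipart-upload \
  --bucket example-nix-cache \
  --key nar/example.nar.xz \
  --storage-class GLACIER
```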