A drop-in "fs" replacement for accessing Azure Storage with the Node.js "fs" API.
This package is designed to support ftpd.
`azure-storage-fs` is designed to replace the Node.js "fs" package and to integrate with the `ftpd` package.
```js
const fs = require('azure-storage-fs').blob(accountName, secret, container);

fs.readFile('helloworld.txt', (err, data) => {
  console.log(err || data);
});
```
If you want to bring your own `BlobService` and prefer a Promise interface like `mz/fs`:
```js
const { createBlobService } = require('azure-storage');
const fs = require('azure-storage-fs').blob(createBlobService(), container).promise;

// Inside an async function (or a module with top-level await support):
const data = await fs.readFile('helloworld.txt');

console.log(data);
```
`ftpd` supports custom "fs" implementations, and `azure-storage-fs` is designed to be a "fs" provider for `ftpd`.
To use a custom "fs" implementation, in your `ftpd` authorization code (the `command:pass` event), pass `require('azure-storage-fs').blob(accountName, secret, container)` to the `success` callback. For example:
```js
connection.on('command:pass', (password, success, failure) => {
  if (auth(username, password)) {
    success(username, require('azure-storage-fs').blob(accountName, secret, container));
  } else {
    failure();
  }
});
```
Some ideas for integrating with `ftpd`:

- Use a different container for each user
- Username/password can be stored as container metadata; always salt and hash the password
- Trigger a webhook when a file is uploaded to FTP
Some caveats when using it with `ftpd`:

- Azure Storage is eventually consistent; changes may not happen right away
  - After uploading a file, it may not appear in the file list immediately
- When listing files, `ftpd` will call `fs.readdir()` first, then `fs.stat()` for every file
  - Listing a folder with 10 files will result in 11 requests to Azure
  - Calling `fs.stat()` on a directory will result in 2 calls to Azure
Paths will be normalized with the following rules:
- Turn backslashes (Windows style) into slashes, then
- Remove leading slashes
For example, `\Users\Documents\HelloWorld.txt` will become `Users/Documents/HelloWorld.txt`.
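As a rough illustration of these two rules (a sketch only; `normalizePath` is a hypothetical helper, not the package's internal function):

```js
// Hypothetical helper mirroring the two normalization rules above.
function normalizePath(path) {
  return path
    .replace(/\\/g, '/')  // 1. turn backslashes (Windows style) into slashes
    .replace(/^\/+/, ''); // 2. remove leading slashes
}

console.log(normalizePath('\\Users\\Documents\\HelloWorld.txt'));
// Users/Documents/HelloWorld.txt
```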
Since the Blob service is natively flat, it does not support a directory tree. It uses the delimiter `/` to provide a hierarchical view of its flat structure. A hidden empty blob named `$$$.$$$` is used to represent an empty directory.

Only block blobs are supported; block blob is the type used when creating a new blob.
`createReadStream`

- Only default options are supported
  - `encoding` is not supported
- New `snapshot` option for specifying the ID of the snapshot to read
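For example, reading a specific snapshot might look like the sketch below. It assumes the options object is passed as the second argument, as in Node.js `fs`; `accountName`, `secret`, `container`, and `snapshotId` are placeholders.

```js
const fs = require('azure-storage-fs').blob(accountName, secret, container);

// Stream the content of a specific snapshot (snapshotId is a placeholder,
// e.g. an ID previously returned by fs.snapshot()).
fs.createReadStream('helloworld.txt', { snapshot: snapshotId })
  .pipe(process.stdout);
```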
`createWriteStream`

- Only default options are supported
  - Append is not supported
  - `encoding` is not supported
- New `metadata` option to specify blob metadata
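A sketch of writing a blob with metadata attached, again assuming the options object is passed as the second argument as in Node.js `fs`:

```js
const fs = require('azure-storage-fs').blob(accountName, secret, container);

// Create (or overwrite) a block blob and attach metadata to it.
const stream = fs.createWriteStream('helloworld.txt', {
  metadata: { category: 'greeting' }
});

stream.end('Hello, World!');
```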
`mkdir`

- Throws `EEXIST` if the directory already exists
- Will create a hidden blob under the new folder, named `$$$.$$$`
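A sketch of handling the existing-directory case, assuming the error exposes a Node.js-style `code` property:

```js
const fs = require('azure-storage-fs').blob(accountName, secret, container);

fs.mkdir('Users/Documents', err => {
  if (err && err.code === 'EEXIST') {
    // Creating the same directory twice fails; a hidden $$$.$$$ blob
    // already marks the folder.
    console.log('Directory already exists');
  }
});
```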
`open`

- Supported modes: `r`, `w`, and `wx`
- Only default options are supported
- New `snapshot` option for specifying the ID of the snapshot to open
`readdir`
`readFile`

- Implemented using `createReadStream`
- Only default options are supported
  - `encoding` is not supported
- New `snapshot` option for specifying the ID of the snapshot to read
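With the Promise interface shown earlier, reading an older snapshot of a file might look like this sketch (it assumes `snapshot` is accepted in the options object as the second argument; `snapshotId` is a placeholder):

```js
const fs = require('azure-storage-fs').blob(accountName, secret, container).promise;

(async () => {
  const current = await fs.readFile('helloworld.txt');
  const older = await fs.readFile('helloworld.txt', { snapshot: snapshotId });

  console.log(current.length, older.length);
})();
```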
`rename`

- Implemented as copy-and-delete
  - Because rename is not natively supported, snapshots will be lost after rename
  - Metadata will be retained
`rmdir`

- Will delete the hidden blob `$$$.$$$` if it exists
- Checks if the directory is empty, throws `ENOTEMPTY` if not
`sas(path, options)` (New)

- Will create a Shared Access Signature token for a blob synchronously
- Options can be passed
  - `flag` (optional)
    - Permission level of the blob: `r`, `a`, `c`, `w`, `d`
  - `start` (optional)
    - Start time of the token
  - `expiry` (optional)
    - Expiry time of the token
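For example, a read-only token valid for one hour might be created as in the sketch below (whether `start` and `expiry` accept `Date` objects is an assumption; the values are illustrative):

```js
const fs = require('azure-storage-fs').blob(accountName, secret, container);

// Create a read-only Shared Access Signature token for one blob,
// valid for the next hour. Returned synchronously.
const token = fs.sas('helloworld.txt', {
  flag  : 'r',
  start : new Date(),
  expiry: new Date(Date.now() + 3600 * 1000)
});

console.log(token);
```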
`setMetadata(metadata, options)` (New)

- Will modify metadata on an existing blob
- Options can be passed
  - `snapshot` (optional)
    - The snapshot ID to modify the metadata on
`snapshot(path, options)` (New)

- Will create a new snapshot based on an existing blob
- Will return the new snapshot ID
- Options can be passed
  - `snapshot` (optional)
    - The snapshot ID to base the new snapshot on
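A sketch of taking a snapshot and reading it back through the Promise interface (that the newer APIs are also exposed through `.promise` is an assumption):

```js
const fs = require('azure-storage-fs').blob(accountName, secret, container).promise;

(async () => {
  // Take a snapshot of the current blob and keep its ID.
  const snapshotId = await fs.snapshot('helloworld.txt');

  // Later, read the blob as it was at that snapshot.
  const data = await fs.readFile('helloworld.txt', { snapshot: snapshotId });

  console.log(snapshotId, data.length);
})();
```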
`stat(path, options)`

- Only reports the following properties
  - `isDirectory()`
  - `mode`, always equal to `R_OK | W_OK`
  - `mtime`
  - `size`
    - `0` for a directory
  - New: `url` for the actual URL (does not support Shared Access Signature yet)
- Options can be passed as the second argument
  - `metadata` (default set to `false`)
    - When set to `true`, the call will also return metadata
    - When paired with the `snapshot` option, the call will also return metadata for snapshots
  - `snapshot` (default set to `false`)
    - When set to `true`, the call will also return an array named `snapshots`, each entry with the following properties
      - `id` is the snapshot ID
      - `mtime`
      - `size`
      - `url`
    - When set to a string, the call will target the specified snapshot ID
    - Otherwise, will target the default (i.e. most recent) blob
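A sketch of requesting metadata and snapshot information in a single call via the Promise interface (the `metadata` property name on the result, and the Promise form itself, are assumptions):

```js
const fs = require('azure-storage-fs').blob(accountName, secret, container).promise;

(async () => {
  // Ask stat() to also return metadata and the list of snapshots.
  const stat = await fs.stat('helloworld.txt', { metadata: true, snapshot: true });

  console.log(stat.size, stat.mtime, stat.url);
  console.log(stat.metadata);

  // Each snapshot entry carries id, mtime, size, and url.
  stat.snapshots.forEach(({ id, mtime, size, url }) => console.log(id, mtime, size, url));
})();
```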
`unlink(pathname, options)`

- Options can be passed as the second argument
  - `snapshot` (default set to `true`)
    - `true` will delete all snapshots associated with the blob
    - Otherwise, it will be treated as a string specifying the ID of the snapshot to delete
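A sketch of removing a single snapshot versus the whole blob (`snapshotId` is a placeholder; the Promise form is an assumption):

```js
const fs = require('azure-storage-fs').blob(accountName, secret, container).promise;

(async () => {
  // Delete only one snapshot, keeping the blob and its other snapshots.
  await fs.unlink('helloworld.txt', { snapshot: snapshotId });

  // Delete the blob together with all of its snapshots (the default behavior).
  await fs.unlink('helloworld.txt');
})();
```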
`writeFile`

- Implemented using `createWriteStream`
- Only default options are supported
  - Append is not supported
  - `encoding` is not supported
- New `metadata` option to specify blob metadata
Snapshots are supported, and a snapshot ID can be specified for most read APIs. `stat` can also be set to return snapshot information. By default this is disabled to save transaction costs.
In the future, we plan to support Azure Storage File, another file storage service on Azure accessible through HTTP and SMB interfaces.
Like us? Star us.
Doesn't work as expected? File us an issue with minimal code for bug repro.