@tus/s3-store: Fix invalid chunk size for uploads with deferred length #504

Merged (5 commits, Oct 24, 2023)
demo/server.js (1 addition, 1 deletion)
@@ -29,7 +29,7 @@ const stores = {
assert.ok(process.env.AWS_REGION, 'environment variable `AWS_REGION` must be set')

return new S3Store({
-      partSize: 8 * 1024 * 1024, // each uploaded part will have ~8MB,
+      partSize: 8 * 1024 * 1024, // each uploaded part will have ~8MiB,
s3ClientConfig: {
bucket: process.env.AWS_BUCKET,
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
packages/s3-store/README.md (2 additions, 2 deletions)
@@ -33,7 +33,7 @@ const {Server} = require('@tus/server')
const {S3Store} = require('@tus/s3-store')

const s3Store = new S3Store({
-  partSize: 8 * 1024 * 1024, // Each uploaded part will have ~8MB,
+  partSize: 8 * 1024 * 1024, // Each uploaded part will have ~8MiB,
s3ClientConfig: {
bucket: process.env.AWS_BUCKET,
region: process.env.AWS_REGION,
@@ -61,7 +61,7 @@ The bucket name.

#### `options.partSize`

-The preferred part size for parts send to S3. Can not be lower than 5MB or more than 500MB.
+The preferred part size for parts sent to S3. Cannot be lower than 5MiB or more than 5GiB.
The server calculates the optimal part size, which takes this size into account,
but may increase it to not exceed the S3 10K parts limit.

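Editorial note: to make the 10K-parts constraint described above concrete, here is an illustrative calculation. It is not part of the README; the numbers simply follow from the 8MiB example configuration:

```ts
const preferredPartSize = 8 * 1024 * 1024 // 8MiB, as in the example config above
const maxMultipartParts = 10_000 // S3's hard limit on parts per multipart upload

// The largest upload that can keep 8MiB parts:
const threshold = preferredPartSize * maxMultipartParts // 83_886_080_000 bytes ≈ 78GiB

// Anything larger forces the server to pick a bigger part size,
// roughly ceil(uploadSize / maxMultipartParts).
console.log(threshold)
```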
packages/s3-store/index.ts (10 additions, 4 deletions)
@@ -18,7 +18,7 @@ function calcOffsetFromParts(parts?: Array<AWS.Part>) {
}

type Options = {
-  // The preferred part size for parts send to S3. Can not be lower than 5MB or more than 500MB.
+  // The preferred part size for parts sent to S3. Cannot be lower than 5MiB or more than 5GiB.
// The server calculates the optimal part size, which takes this size into account,
// but may increase it to not exceed the S3 10K parts limit.
partSize?: number
@@ -70,7 +70,8 @@ export class S3Store extends DataStore {
private client: S3
private preferredPartSize: number
public maxMultipartParts = 10_000 as const
-  public minPartSize = 5_242_880 as const // 5MB
+  public minPartSize = 5_242_880 as const // 5MiB
+  public maxUploadSize = 5_497_558_138_880 as const // 5TiB

constructor(options: Options) {
super()
@@ -262,7 +263,7 @@
currentPartNumber: number,
offset: number
): Promise<number> {
-    const size = metadata.file.size as number
+    const size = metadata.file.size
const promises: Promise<void>[] = []
let pendingChunkFilepath: string | null = null
let bytesUploaded = 0
@@ -422,7 +423,12 @@
this.cache.delete(id)
}

-  private calcOptimalPartSize(size: number): number {
+  private calcOptimalPartSize(size?: number): number {
+    // When upload size is not known we assume the largest possible value (`maxUploadSize`)
+    if (size === undefined) {
+      size = this.maxUploadSize
Review comment on lines +427 to +429

Member:
Suggested change:

-    // When upload size is not known we assume the largest possible value (`maxUploadSize`)
-    if (size === undefined) {
-      size = this.maxUploadSize
+    // When upload size is not known we assume the largest possible value
+    if (size === undefined) {
+      size = this.maxPartSize

Collaborator (author):
Variable `size` is the total size of the upload (`Upload-Length`), which can be deferred and thus `undefined`. In that case we cannot assume much, but the upper limit is still 5TiB (the maximum upload size). Given this size, the function calculates an appropriate part size while also respecting the maximum part size and the maximum number of parts.

This logic has some flaws, but it works for the general case and does not exceed the maximum part size.

Member:
Ah, of course. I often glance too quickly (with dyslexia) and managed to think we were ending up with a part size of `maxUploadSize`. All good here 👍
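Editorial aside: a quick check of the reasoning above, using the constants added in this diff. The snippet is illustrative; the local variable names are not from the source:

```ts
const maxUploadSize = 5_497_558_138_880 // 5TiB, S3's maximum object size
const maxMultipartParts = 10_000 // S3's maximum number of parts per multipart upload

// With a deferred Upload-Length, assuming the largest possible upload,
// each part must hold at least:
const partSize = Math.ceil(maxUploadSize / maxMultipartParts)
console.log(partSize) // 549_755_814 bytes ≈ 524MiB: finite, and well below S3's 5GiB part limit
```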

+    }

let optimalPartSize: number

// When upload is smaller or equal to PreferredPartSize, we upload in just one part.
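Editorial note: the hunks above show only fragments of `calcOptimalPartSize`. Below is a minimal standalone sketch of how the pieces plausibly fit together after this change, assuming the three-way split implied by the visible comments; it is a reconstruction, not the verbatim source:

```ts
// Sketch only: the parameters mirror the S3Store fields shown in the diff above.
function calcOptimalPartSize(
  size: number | undefined,
  preferredPartSize: number,
  maxMultipartParts = 10_000,
  maxUploadSize = 5_497_558_138_880
): number {
  // When upload size is not known we assume the largest possible value.
  if (size === undefined) {
    size = maxUploadSize
  }

  let optimalPartSize: number

  // When the upload is smaller than or equal to the preferred part size,
  // it goes up in a single part.
  if (size <= preferredPartSize) {
    optimalPartSize = size
  } else if (size <= preferredPartSize * maxMultipartParts) {
    // The upload fits within the 10K parts limit at the preferred size.
    optimalPartSize = preferredPartSize
  } else {
    // Otherwise grow the part size so the whole upload fits in maxMultipartParts parts.
    optimalPartSize = Math.ceil(size / maxMultipartParts)
  }

  return optimalPartSize
}

// For a deferred length this yields Math.ceil(5TiB / 10_000) ≈ 524MiB, never Infinity.
```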
packages/s3-store/test.ts (9 additions, 3 deletions)
@@ -21,7 +21,7 @@ describe('S3DataStore', function () {
})
beforeEach(function () {
this.datastore = new S3Store({
-      partSize: 8 * 1024 * 1024, // Each uploaded part will have ~8MB,
+      partSize: 8 * 1024 * 1024, // Each uploaded part will have ~8MiB,
s3ClientConfig: {
bucket: process.env.AWS_BUCKET as string,
credentials: {
@@ -33,6 +33,12 @@
})
})

+  it('calculated part size for deferred length should be finite', async function () {
+    const store = this.datastore
+
+    assert.strictEqual(Number.isFinite(store.calcOptimalPartSize(undefined)), true)
+  })

it('should correctly prepend a buffer to a file', async function () {
const p = path.resolve(fixturesPath, 'foo.txt')
await fs.writeFile(p, 'world!')
@@ -109,8 +115,8 @@ describe('S3DataStore', function () {

it('upload as multipart upload when incomplete part grows beyond minimal part size', async function () {
const store = this.datastore
-    const size = 10 * 1024 * 1024 // 10MB
-    const incompleteSize = 2 * 1024 * 1024 // 2MB
+    const size = 10 * 1024 * 1024 // 10MiB
+    const incompleteSize = 2 * 1024 * 1024 // 2MiB
const getIncompletePart = sinon.spy(store, 'getIncompletePart')
const uploadIncompletePart = sinon.spy(store, 'uploadIncompletePart')
const uploadPart = sinon.spy(store, 'uploadPart')
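Editorial note: beyond the finiteness check added in this PR, a hypothetical follow-up test (not part of this change) could pin down the invariants discussed in the review thread:

```ts
it('part size for deferred length should satisfy S3 limits', async function () {
  const store = this.datastore
  const partSize = store.calcOptimalPartSize(undefined)

  // The largest possible upload must fit within the 10K parts limit...
  assert.ok(partSize * store.maxMultipartParts >= store.maxUploadSize)
  // ...and each part must still meet S3's minimum part size.
  assert.ok(partSize >= store.minPartSize)
})
```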