@tus/s3-store: Issues with StreamSplitter #505
Comments
This is a valid point; I believe that tusd suffers from the same issue, see
CC: @Acconut
This could be nice, yes!
This has been implemented in #561
I think splitting into smaller chunks is required because S3 has a maximum of 10k parts and a maximum file size of 5 TB; an ideal part size would be 50 MB to be able to upload a single 5 TB file. If we start having parts of different sizes because a request was cancelled or resumed, we might end up without enough parts available to complete a 5 TB upload. Having the server split the file into smaller chunks ensures that each part size stays consistent, especially when the client also chunks its requests.
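For a sense of scale, here is a back-of-the-envelope check that is not part of the original comment; it only restates S3's documented limits (10,000 parts per multipart upload, objects up to 5 TiB) and derives the smallest uniform part size that can still reach the maximum object size:

```ts
// S3's documented multipart limits (the only inputs to this calculation).
const MAX_PARTS = 10_000
const MAX_OBJECT_SIZE = 5 * 1024 ** 4 // 5 TiB in bytes

// Smallest uniform part size that can still reach a 5 TiB object.
const minUniformPartSize = MAX_OBJECT_SIZE / MAX_PARTS
console.log(`${(minUniformPartSize / 1024 ** 2).toFixed(0)} MiB per part`) // ≈ 524 MiB
```

In other words, uniform parts need to be on the order of hundreds of megabytes to reach the full 5 TiB limit.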
I don't think that argument is true. AFAIK, UploadPart (just like PutObject) will not keep any data if the request fails before the entire data has been transferred. You cannot have unfinished objects from interrupted UploadPart or PutObject calls and there won't be any corrupt parts. Otherwise we would simply use that for resuming our uploads :)
@Acconut I think the argument here is that if we upload 10 parts in parallel, say parts 1 through 10, and only part 5 fails, we end up with part 5 missing, so when we complete the upload the final file will be corrupted. The proposal is to account only for sequential parts when listing: in this case we detect that part 5 is missing and count only parts 1-4, so that the upload can resume from there (as far as I understand).
Now it makes sense, yes. Thanks for the clarification. Keeping track of the sequence of successfully uploaded parts could help. This should also happen in tusd. To be honest, I have never seen issues with uploading parts to S3, but that doesn't mean we shouldn't be prepared.
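A minimal sketch of that idea, assuming a hypothetical `Part` shape and helper name rather than the actual @tus/s3-store types: the resumable offset is computed from the longest prefix of consecutive, acknowledged parts, so a gap (e.g. a missing part 5) stops the count at part 4.

```ts
interface Part {
  partNumber: number // 1-based, as reported by S3 ListParts
  size: number
}

// Hypothetical helper: count only the first consecutive run of parts
// (1, 2, 3, ...) toward the offset, so a missing part makes the upload
// resume from the last contiguous part instead of completing with a gap
// and producing a corrupted final object.
function resumableOffset(parts: Part[]): number {
  const sorted = [...parts].sort((a, b) => a.partNumber - b.partNumber)
  let offset = 0
  let expected = 1
  for (const part of sorted) {
    if (part.partNumber !== expected) break // gap found, stop counting
    offset += part.size
    expected++
  }
  return offset
}
```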
The current implementation of the stream splitter has some issues/drawbacks that I think should be addressed in the near future:
The parallel nature of this class can cause file corruption when the upload of any part fails (except the one that is last at that moment).
If the upload of any part fails before 5 MB has been transferred to S3, the upload will fail when finalizing the multipart upload.
We could solve this by acknowledging each part once it is uploaded and considering only those parts to be valid. When calculating the current offset (on a HEAD request), only the first consecutive run of valid parts should be counted.
No back-pressure support. When the tus server is running on a local network, uploads will be cached to disk without any consideration of the actual upload speed to S3 (a sketch of one way to add back-pressure follows after these points).
There is no need to constantly split into smaller chunks. The server should cache only the first 5 MB of data to determine whether it is safe to upload as a proper "part"; if it is, it should stream directly to S3, but only up to 5 GB for each part.
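A minimal sketch of the back-pressure idea mentioned above, assuming a hypothetical `uploadPart` function rather than the real @tus/s3-store internals: because the Writable only acknowledges a chunk after the corresponding part has been flushed to S3, `pipeline()`/`pipe()` will pause the incoming request stream instead of buffering unbounded data.

```ts
import { Writable } from 'node:stream'

// Hypothetical sketch: `uploadPart` stands in for the actual S3 UploadPart call.
function createBackpressuredPartSink(
  uploadPart: (chunk: Buffer, partNumber: number) => Promise<void>,
  partSize: number
) {
  let buffered: Buffer[] = []
  let bufferedBytes = 0
  let partNumber = 1

  return new Writable({
    highWaterMark: partSize, // signal back-pressure once roughly one part is buffered
    async write(chunk: Buffer, _enc, callback) {
      buffered.push(chunk)
      bufferedBytes += chunk.length
      try {
        if (bufferedBytes >= partSize) {
          // Flush once at least one part's worth of data has accumulated.
          await uploadPart(Buffer.concat(buffered), partNumber++)
          buffered = []
          bufferedBytes = 0
        }
        // Acknowledge only after the data has been flushed or safely buffered,
        // so the source stream is paused while S3 is slow.
        callback()
      } catch (err) {
        callback(err as Error)
      }
    },
    // A final flush of any remainder (e.g. in final()) is omitted for brevity.
  })
}
```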
A possible implementation of the stream splitter would work like this: on a PATCH request, any existing incomplete part is immediately downloaded to a temporary file, and the current PATCH request appends to that file. If the request is aborted or completed before the 5 MB threshold is reached, this data is uploaded as an incomplete part. If the 5 MB threshold is reached, we stop buffering to the file, upload the temporary file to S3, and from then on stream the PATCH request directly to S3. Once the current part size reaches 5 GB, the process is repeated without downloading an incomplete part. A sketch of this flow follows below.
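A rough sketch of that flow, with every helper operation left as a hypothetical parameter; none of these names are the actual @tus/s3-store API, and the thresholds simply mirror S3's 5 MiB minimum and 5 GiB maximum part sizes.

```ts
const MIN_PART_SIZE = 5 * 1024 ** 2 // 5 MiB: smallest part S3 accepts (except the last)
const MAX_PART_SIZE = 5 * 1024 ** 3 // 5 GiB: largest single part S3 accepts

interface PatchOps {
  // Download the previously stored incomplete part (if any) into a temp file.
  downloadIncompletePart(): Promise<string>
  // Append the request body to the temp file, stopping once `limit` bytes are
  // buffered in total; resolves with the total number of bytes in the file.
  appendRequestToFile(tempFile: string, limit: number): Promise<number>
  // Store the temp file as an "incomplete part" object for the next PATCH.
  saveIncompletePart(tempFile: string): Promise<void>
  // Upload the temp file as a regular multipart part.
  uploadPartFromFile(tempFile: string): Promise<void>
  // Stream the remaining request body straight to S3, rotating to a new part
  // whenever the current one reaches `maxPartSize`.
  streamRemainingParts(maxPartSize: number): Promise<void>
}

async function handlePatch(ops: PatchOps): Promise<void> {
  const tempFile = await ops.downloadIncompletePart()
  const buffered = await ops.appendRequestToFile(tempFile, MIN_PART_SIZE)

  if (buffered < MIN_PART_SIZE) {
    // Request finished or was aborted below the 5 MiB threshold: keep the data
    // as an incomplete part so the next PATCH can pick it up.
    await ops.saveIncompletePart(tempFile)
    return
  }

  // Threshold reached: flush the buffer as a proper part, then stop buffering
  // to disk and stream the rest of the request directly to S3.
  await ops.uploadPartFromFile(tempFile)
  await ops.streamRemainingParts(MAX_PART_SIZE)
}
```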