Use the bucket listing #49
Have you considered using the cache? Your approach sounds a bit harder to develop, but if the cache does not work for you, you can give it a try.
Well, the cache works really well in a local setup, but in a shared CI environment it's harder to set up. Parallelization would probably help to a degree, but I don't think it's a solution that scales. Here's an outline of a solution that does scale: Getting Size and File Count of a 25 Million Object S3 Bucket. Is there some reason you wouldn't like to use the LIST API calls?
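For reference, a minimal sketch of what paginating the bucket listing could look like, assuming the aws-sdk v2 S3 client; the bucket name and the `listEtags` helper are placeholders for illustration, not part of gulp-awspublish:

```js
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

// Walk the bucket 1000 keys at a time and build a key -> ETag map.
function listEtags(bucket, etags, token, cb) {
  etags = etags || {};
  s3.listObjectsV2({ Bucket: bucket, ContinuationToken: token }, function (err, data) {
    if (err) return cb(err);
    data.Contents.forEach(function (obj) {
      // ETags come back quoted, e.g. '"9a0364..."'; strip the quotes.
      etags[obj.Key] = obj.ETag.replace(/"/g, '');
    });
    if (data.IsTruncated) {
      return listEtags(bucket, etags, data.NextContinuationToken, cb);
    }
    cb(null, etags);
  });
}

listEtags('my-bucket', null, null, function (err, etags) {
  if (err) throw err;
  console.log(Object.keys(etags).length + ' ETags fetched');
});
```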
I think the current solution works well for most of the use cases. Concerning your use case, what I think would be best would be to add a populateCache method that fills the local cache from the bucket listing. To solve your problem, you could then pipe your files through populateCache before publishing;
it could look like this:

```js
var publisher = awspublish.create({ bucket: '...' });

return gulp
  .src('./public/*.js')
  .pipe(publisher.populateCache())
  .pipe(publisher.publish())
  .pipe(awspublish.reporter());
```

What do you think?
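If it helps the discussion, here is a hedged sketch of how such a populateCache stream might be implemented; `publisher._cache` and the `listEtags` helper (from the sketch above) are assumptions about internals, not the library's documented API:

```js
var through = require('through2');

// Hypothetical implementation: before the first file passes through,
// prime the publisher's in-memory cache from one pass over the bucket
// listing, so publish() can skip unchanged files without headObject calls.
function populateCache(publisher, bucket) {
  var primed = false;
  return through.obj(function (file, enc, cb) {
    var stream = this;
    if (primed) {
      stream.push(file);
      return cb();
    }
    listEtags(bucket, null, null, function (err, etags) {
      if (err) return cb(err);
      Object.keys(etags).forEach(function (key) {
        publisher._cache[key] = etags[key]; // assumed internal cache shape
      });
      primed = true;
      stream.push(file);
      cb();
    });
  });
}
```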
@dennari curious, were you able to find a suitable solution?
Is there any news on this feature request?
Hi! I've been running into some performance issues that have to do with making a separate headObject request for every file. Why not use the GET Bucket (List Objects) command and get the ETags for 1000 files at a time?
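To make the cost difference concrete, a hedged illustration of the two request patterns, assuming the aws-sdk v2 client; the bucket and key names are placeholders:

```js
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

// Current pattern: one round trip per file, just to read its ETag.
s3.headObject({ Bucket: 'my-bucket', Key: 'public/app.js' }, function (err, data) {
  if (err) throw err;
  console.log(data.ETag);
});

// Proposed pattern: one round trip returns ETags for up to 1000 objects.
s3.listObjectsV2({ Bucket: 'my-bucket', MaxKeys: 1000 }, function (err, data) {
  if (err) throw err;
  data.Contents.forEach(function (obj) {
    console.log(obj.Key, obj.ETag);
  });
});
```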