Do not migrate legacy images if snapshots are already present #1990
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@            Coverage Diff             @@
##              main    #1990      +/-   ##
==========================================
- Coverage    72.55%   72.52%   -0.04%
==========================================
  Files           76       76
  Lines         8902     8906       +4
==========================================
  Hits          6459     6459
- Misses        1910     1913       +3
- Partials       533      534       +1

☔ View full report in Codecov by Sentry.
Nice. I think this explains why I saw multiple snapshots being created at times.
Right, I just manually tested the following sequence:
@anmazzotti @fgiudici about the upgrade resources lifecycle: all three upgrades (1st, 2nd and 3rd) were done with a dedicated ManagedOSImage resource without deleting the previous one, so after all three upgrades the three different upgrade groups are still alive. Then:

After that the cluster got up and ready again and the node got the three upgrade labels, but the process turned out to be rather chaotic, with no guarantee of knowing what the end result would be (image from the 1st or from the 2nd). I bet that on a multi-node cluster we would see heterogeneous results, with some nodes on image A and others on image B.

All in all, keeping upgrade resources around seems to be a really tricky practice; however, I can't think of a meaningful criterion for deciding when an upgrade resource should be deleted (think at scale). In this particular case it would have been far saner and more reliable to just keep updating the same upgrade resource, modifying the image to upgrade to, so one could adopt the rule of thumb that a single upgrade group shall exist for a given group of clusters. I do think making upgrade groups editable makes sense.

Finally, since I was already in this weird scenario, I tried to actually re-trigger the 3rd upgrade on the cluster. I could do that by deleting the related label from the node resource, but, funny enough, I could not delete the label from the Rancher UI: deleting it there had no effect, and I had to kubectl into that cluster to do so 🤷🏽♂️
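For reference, a minimal sketch of that last re-trigger step done programmatically with client-go instead of kubectl. The node name ("example-node") and the label key are illustrative placeholders, since the actual plan label is not shown in this thread; treat this as an assumption-laden example rather than the exact procedure used.

```go
// remove-upgrade-label: hypothetical helper to re-trigger an upgrade by
// removing an upgrade plan label from a node, equivalent to doing it via kubectl.
// The node name and label key below are illustrative placeholders.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Assumed label key; "/" inside the key is escaped as "~1" per JSON Pointer rules.
	labelPath := "/metadata/labels/plan.upgrade.cattle.io~1example-plan"
	patch := []byte(fmt.Sprintf(`[{"op":"remove","path":"%s"}]`, labelPath))

	// Removing the label makes the node eligible to be picked up again for
	// the corresponding upgrade plan.
	_, err = client.CoreV1().Nodes().Patch(
		context.Background(), "example-node", types.JSONPatchType, patch, metav1.PatchOptions{},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("upgrade label removed from example-node")
}
```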
This commit prevents executing the legacy images migration logic if the snapshotter already finds available snapshots. This mostly means the migration was already executed and legacy images already had a chance to be converted into snapshots. Signed-off-by: David Cassany <[email protected]>
Force-pushed from 10b5e8d to 44fb99f
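As a rough illustration of the change described in that commit message, here is a minimal Go sketch of such a guard. The Snapshotter interface, GetSnapshots method and the migrate callback are hypothetical names used only for this example; they are not the actual project API.

```go
// Hypothetical sketch of the guard described in the commit message above:
// skip the legacy images migration when snapshots already exist.
package upgrade

import "fmt"

// Snapshot and Snapshotter are illustrative stand-ins, not the real types.
type Snapshot struct {
	ID int
}

type Snapshotter interface {
	GetSnapshots() ([]Snapshot, error)
}

// migrateLegacyImagesIfNeeded runs the legacy images migration only when the
// snapshotter reports no existing snapshots. Existing snapshots mostly mean
// the migration already ran and legacy images had a chance to be converted.
func migrateLegacyImagesIfNeeded(s Snapshotter, migrate func() error) error {
	snapshots, err := s.GetSnapshots()
	if err != nil {
		return fmt.Errorf("listing snapshots: %w", err)
	}
	if len(snapshots) > 0 {
		// Migration was most likely executed already; nothing to do.
		return nil
	}
	return migrate()
}
```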
Thanks for sharing all the tests, David, quite interesting!
Force-pushed from 67c92f9 to 44fb99f