Actual behavior
Kaniko reuses cached layers from a previous image built by the same executor.
To shed a bit more light: a Jenkins pipeline spins up a kaniko container and runs two sequential builds inside it, image_1 and image_2.
The image_1 build goes as expected.
The image_2 build appears to contain layers from image_1 until the very end of the build. At that point the image_1 layers are dropped, and image_2 is published to the registry without them.
Both image_1 and image_2 run pip install gunicorn. As a result, pip does not install gunicorn for image_2, since it is already available from the image_1 build. However, the layer containing gunicorn is not published to the registry.
Expected behavior
image_2 does NOT use layers from image_1
To Reproduce
Steps to reproduce the behavior:
1. Start a kaniko container, similar to how Jenkins does it: docker run --name kaniko -d --entrypoint sh gcr.io/kaniko-project/executor:v1.23.2-debug -c 'while true; do sleep 100 ; done'
2. Copy both Dockerfiles into the container: docker cp Dockerfile-1 kaniko:/ && docker cp Dockerfile-2 kaniko:/
3. Build image_1: docker exec -it kaniko /kaniko/executor --context /kaniko --dockerfile /Dockerfile-1 --target base --cache=false --destination debug:latest --no-push
4. Build image_2: docker exec -it kaniko /kaniko/executor --context /kaniko --dockerfile /Dockerfile-2 --target base --cache=false --destination debug:latest --no-push
The behavior is the same whether the cache is enabled or not. If you publish image_2 to a registry, the layers from image_1 are excluded from the published image.
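A quick way to confirm the leaked state between the two builds (this check is my addition, not from the original report, and assumes kaniko runs the RUN commands against the executor container's own filesystem):

# Run this between step 3 and step 4, before the image_2 build starts.
docker exec kaniko ls -l /tmp
# If the bug is present, image-1-layer-1 and image-1-layer-2 are already
# listed here, and the RUN ls -l /tmp steps of the image_2 build show them too.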
Additional Information
Dockerfile for image_1:
FROM bash:5.2 as base
RUN ls -l /tmp
RUN touch /tmp/image-1-layer-1
RUN ls -l /tmp
RUN touch /tmp/image-1-layer-2
RUN ls -l /tmp
Dockerfile for image_2:
FROM bash:5.2 as base
RUN ls -l /tmp
RUN touch /tmp/image-2-layer-1
RUN ls -l /tmp
RUN touch /tmp/image-2-layer-2
RUN ls -l /tmp
Kaniko Image: v1.23.2-debug
Triage Notes for the Maintainers
Please check if this is a new feature you are proposing - [No]
Please check if the build works in docker but not in kaniko - [Yes]
Please check if this error is seen when you use the --cache flag - [Yes]
Please check if your dockerfile is a multistage dockerfile - [No]
PS. I didn't find any limitation in the documentation saying that using the very same executor for multiple builds is unsupported. As a workaround I now use a separate container for each image build (see the sketch below). However, this either has to be stated in the documentation, or it has to be fixed.
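For completeness, a minimal sketch of that workaround, assuming the same Dockerfiles as above; the mounted working directory as build context is an illustrative detail, not part of the original setup:

# One fresh executor container per build, so no filesystem state can leak
# from the image_1 build into the image_2 build.
docker run --rm -v "$PWD":/workspace --entrypoint /kaniko/executor \
  gcr.io/kaniko-project/executor:v1.23.2-debug \
  --context /workspace --dockerfile /workspace/Dockerfile-1 --target base --no-push
docker run --rm -v "$PWD":/workspace --entrypoint /kaniko/executor \
  gcr.io/kaniko-project/executor:v1.23.2-debug \
  --context /workspace --dockerfile /workspace/Dockerfile-2 --target base --no-push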
avaika changed the title from "kaniko incorrectly reuses layers from unrelated image built by same executor" to "kaniko unexpectedly reuses layers from unrelated image built by same executor" on Sep 6, 2024.
Interesting, I too have come across some weird behaviour that seems to match your (very detailed!) explanation.
If this is the case, I wonder if adding the --cleanup flag would solve this issue.
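Untested, but that suggestion would amount to appending --cleanup (which tells kaniko to clean the filesystem at the end of the build) to the reproduction commands above:

# Same long-lived container as in the reproduction steps, with cleanup enabled.
docker exec -it kaniko /kaniko/executor --context /kaniko --dockerfile /Dockerfile-1 \
  --target base --cache=false --destination debug:latest --no-push --cleanup
docker exec -it kaniko /kaniko/executor --context /kaniko --dockerfile /Dockerfile-2 \
  --target base --cache=false --destination debug:latest --no-push --cleanup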