
About the containerd of v0.8.0 #315

Open
fffmonkeyking opened this issue Jan 15, 2024 · 9 comments
fffmonkeyking commented Jan 15, 2024

Describe the bug

  1. When I installed CoCo Operator v0.3.0, I found that the system's default containerd service had been replaced by CoCo's containerd (/opt/confidential-containers/bin/containerd), and I could see that:
     1) in the /etc/containerd/config.toml configuration, a `cri_handler = "cc"` entry was added for each CC runtime;
     2) CoCo's containerd ultimately forwards the PullImage request to the kata-agent, which is responsible for pulling and decrypting the image.

  2. However, when I installed CoCo Operator v0.8.0, I found that the containerd service was still the official containerd (/usr/bin/containerd), and I could see that:
     1) in the /etc/containerd/config.toml configuration, a `snapshotter = "nydus"` entry was added for each CC runtime (but there is no `cri_handler = "cc"` entry);
     2) the official containerd does not send the PullImage request to the kata runtime or kata-agent, so I have the following questions about CoCo Operator v0.8.0:
        a) Which component is actually responsible for pulling and decrypting the image, and what does the detailed flow look like?
        b) Is the CoCo-modified nydus-snapshotter what forwards PullImage to the kata-agent? I couldn't find the corresponding code implementation.
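
The configuration difference described above can be sketched roughly as follows. This is an illustrative fragment, not a dump from a real node: the runtime class name (kata-qemu) and the snapshotter socket path are assumptions based on common CoCo and nydus-snapshotter defaults.

```toml
# v0.3.0 (CoCo containerd fork): each CC runtime carried a cri_handler entry
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu]
  runtime_type = "io.containerd.kata-qemu.v2"
  cri_handler = "cc"

# v0.8.0 (stock containerd + nydus-snapshotter): cri_handler is gone and each
# CC runtime instead selects the nydus snapshotter...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu]
  runtime_type = "io.containerd.kata-qemu.v2"
  snapshotter = "nydus"

# ...which is registered as a proxy plugin backed by containerd-nydus-grpc
[proxy_plugins.nydus]
  type = "snapshot"
  address = "/run/containerd-nydus/containerd-nydus-grpc.sock"
```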


@fffmonkeyking fffmonkeyking changed the title ablout the containerd of v0.8.0 About the containerd of v0.8.0 Jan 15, 2024

fffmonkeyking commented Jan 15, 2024

Regarding the PR merged in v0.8.0, "Use the nydus-snapshotter by default by @fidencio in #267":
why is it necessary to use "/usr/bin/containerd + /opt/confidential-containers/bin/containerd-nydus-grpc" by default instead of "/opt/confidential-containers/bin/containerd", going from v0.7.0 to v0.8.0?

@fitzthum (Member) commented:

One of the big changes in v0.8.0 was the move away from the CoCo containerd fork (which we felt was not feasible to maintain long-term and not acceptable for production use). With the containerd fork, image-pull requests were fulfilled by the Kata Agent. In the new approach, we use the nydus snapshotter on the host to intercede in the image-pulling process. The snapshotter then communicates directly with image-rs inside the guest, which pulls the image. So I think what you are describing here is by design.

Is there a particular issue that you ran into?
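
As a quick way to see which path a node is on, one can check containerd's config for the nydus proxy-plugin registration. The sketch below builds a sample config inline so it runs anywhere; on a real v0.8.0 node you would point the greps at /etc/containerd/config.toml instead (the section names here are assumptions based on nydus-snapshotter defaults):

```shell
#!/bin/sh
# Illustrative check: build a sample config.toml inline so the script is
# self-contained; on a real node, grep /etc/containerd/config.toml instead.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[proxy_plugins.nydus]
  type = "snapshot"
  address = "/run/containerd-nydus/containerd-nydus-grpc.sock"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu]
  snapshotter = "nydus"
EOF

# Guest-pull wiring needs both pieces: a runtime class that selects the
# nydus snapshotter, and the snapshotter registered as a proxy plugin.
if grep -q 'snapshotter = "nydus"' "$cfg" && grep -q '^\[proxy_plugins.nydus\]' "$cfg"; then
  echo "nydus snapshotter configured"
fi
rm -f "$cfg"
```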

@fffmonkeyking (Author) commented:

> One of the big changes in v0.8.0 was the move away from the CoCo containerd fork (which we felt was not feasible to maintain longterm and not acceptable for production use). With the containerd fork, image pull requests would be fulfilled by the Kata Agent. In the new approach, we use the nydus snapshotter on the host to intercede in the image pulling process. The snapshotter then communicates directly with image-rs inside the guest, which pulls the image. So, I think what you are describing here is by design.
>
> Is there a particular issue that you run into?

Thanks for your reply.

However, I couldn't find the corresponding code in https://github.com/containerd/nydus-snapshotter implementing this part:
"In the new approach, we use the nydus snapshotter on the host to intercede in the image-pulling process. The snapshotter then communicates directly with image-rs inside the guest, which pulls the image."

Is the new architecture in v0.8.0 like this?
[image]

@fitzthum (Member) commented:

That diagram shows host-pulling, which is not implemented yet (we pull with image-rs inside the guest), but otherwise I think it is accurate.

@fffmonkeyking (Author) commented:

[image]

Is this correct for the current implementation of v0.8.0?


fffmonkeyking commented Jan 19, 2024

However, when I deploy the pod below on CoCo v0.8.0, I find a container rootfs directory on the host which seems to have been generated by containerd pulling the image:

1. The YAML used to deploy the pod:
[image]

2. The container rootfs directory on the host:

```
overlay on /run/containerd/io.containerd.runtime.v2.task/k8s.io/5bcfbe2124cc2e3d806c8b28edff8ba0460334f0d1564c161a7bf2757f83ea0b/rootfs type overlay (rw,relatime,lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/12811/fs:/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/12810/fs:/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/12809/fs:/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/12808/fs:/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/12807/fs:/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/12806/fs,upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/12812/fs,workdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/12812/work,uuid=on)
```

3. I can find the image on the host with the crictl command:
[image]

So, is containerd also pulling the image?
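
For reference on reading that overlay mount line: `lowerdir` lists one path per image-layer snapshot that containerd unpacked on the host, and `upperdir` is the container's writable layer. A small sketch parsing an abbreviated copy of those options (paths shortened; the snapshot IDs are taken from the output above):

```shell
#!/bin/sh
# Abbreviated copy of the overlay mount options shown above (paths shortened).
opts='rw,relatime,lowerdir=/snap/12811/fs:/snap/12810/fs:/snap/12809/fs:/snap/12808/fs:/snap/12807/fs:/snap/12806/fs,upperdir=/snap/12812/fs,workdir=/snap/12812/work'

# lowerdir holds one colon-separated path per unpacked image layer.
lower=$(printf '%s\n' "$opts" | sed -n 's/.*lowerdir=\([^,]*\).*/\1/p')
layers=$(printf '%s\n' "$lower" | tr ':' '\n' | wc -l | tr -d ' ')
echo "image layers unpacked on host: $layers"   # -> 6, i.e. the host did unpack the image
```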

@fffmonkeyking (Author) commented:

I seem to have found the reason: the version of containerd was incorrect.
The version of containerd on the host was v1.6.26. When I upgraded it to v1.7.12, the test results matched the architecture where the image is pulled inside the guest.

1. I can still find the image on the host with the crictl command, even if I delete the image before redeploying the pod:
[image]

2. But I can no longer find the container rootfs directory on the host:
[image]

3. However, I can find the pull directory for the image inside the sandbox CVM:
[image]

@fffmonkeyking (Author) commented:

Installing ccruntime v0.8.0 with the following command does not seem to automatically upgrade the host's containerd to 1.7.0 or higher:

```shell
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/default?ref=v0.8.0
```
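
Since the guest-pull flow appeared to need containerd >= 1.7.0 in the tests above, here is a small sketch of how that requirement can be checked with `sort -V`. The version strings are hardcoded for illustration; on a real node you would feed in the version reported by `containerd --version`:

```shell
#!/bin/sh
# Prints "ok" when the given version is >= 1.7.0, the minimum the
# guest-pull flow appeared to need in the tests above.
check() {
  min="1.7.0"
  v="${1#v}"                                          # strip a leading "v"
  lowest=$(printf '%s\n%s\n' "$min" "$v" | sort -V | head -n1)
  if [ "$lowest" = "$min" ]; then
    echo "ok: $1"
  else
    echo "too old: $1 (need >= $min)"
  fi
}
check v1.6.26   # -> too old: v1.6.26 (need >= 1.7.0)
check v1.7.12   # -> ok: v1.7.12
```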

@fitzthum (Member) commented:

Hm, I thought we had some kind of support for updating containerd via the operator, e.g. for Ubuntu 20.04. @wainersm might have more details.
