# Test Plan
The test cases described in this document are intended to be standalone tests. As a result, some steps are duplicated; these can be optimized when writing the actual tests in the CI pipeline.

The described tests follow the new architecture, so those changes should be merged before development of the test cases starts.

This is a living document. If you find that a test case is missing, please add it!
## Install Kubewarden

Test based on the Quick start documentation.

Prerequisites: a Kubernetes cluster up and running.
- Install and configure cert-manager:

  ```bash
  kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.3/cert-manager.yaml
  kubectl wait --for=condition=Available deployment --timeout=2m -n cert-manager --all
  ```

- Run the commands to install Kubewarden using the Helm charts:

  ```bash
  helm repo add kubewarden https://charts.kubewarden.io
  helm install --create-namespace -n kubewarden kubewarden-crds kubewarden/kubewarden-crds
  helm install --wait -n kubewarden kubewarden-controller kubewarden/kubewarden-controller
  ```
- Wait for the installation to finish.
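  A minimal way to script this wait, assuming the controller Deployment lands in the `kubewarden` namespace as in the Helm command above:

  ```bash
  # Block until every Deployment in the kubewarden namespace is available.
  kubectl wait --for=condition=Available deployment --timeout=2m -n kubewarden --all
  ```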
- Check if the Kubewarden controller pod is running:

  ```bash
  kubectl get pods -n kubewarden
  ```
- Check if the `ClusterAdmissionPolicy` custom resource definition is installed:

  ```bash
  kubectl get crds
  ```
- Check if there is a `default` PolicyServer up and running.
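  A possible check; the `policyservers` resource comes from the `kubewarden-crds` chart, while the pod label below is an assumption about how the default PolicyServer pods are labeled:

  ```bash
  kubectl get policyservers.policies.kubewarden.io default
  # Assumed label; adjust to the actual labels used by the deployment.
  kubectl get pods -n kubewarden -l app=kubewarden-policy-server-default
  ```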
Expected result: the Kubewarden stack should be installed and properly configured in the Kubernetes cluster.
## Install a policy

Prerequisites: a Kubernetes cluster with Kubewarden installed.
- Define a `ClusterAdmissionPolicy`:

  ```bash
  kubectl apply -f - <<EOF
  apiVersion: policies.kubewarden.io/v1alpha2
  kind: ClusterAdmissionPolicy
  metadata:
    name: privileged-pods
  spec:
    policyServer: default
    module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.5
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations:
      - CREATE
      - UPDATE
    mutating: false
  EOF
  ```
- Check if the policy is listed as a `ClusterAdmissionPolicy`:

  ```bash
  kubectl get clusteradmissionpolicy.policies.kubewarden.io
  ```
- Check if the policy server is up and running:

  ```bash
  kubectl get pods -n kubewarden
  ```
Expected result: the policy should be installed in the cluster and active.
## Policy rejects a violating pod

Prerequisites: a Kubernetes cluster with Kubewarden installed.
- Define a `ClusterAdmissionPolicy`:

  ```bash
  kubectl apply -f - <<EOF
  apiVersion: policies.kubewarden.io/v1alpha2
  kind: ClusterAdmissionPolicy
  metadata:
    name: privileged-pods
  spec:
    policyServer: default
    module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.5
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations:
      - CREATE
      - UPDATE
    mutating: false
  EOF
  ```
- Wait for the policy to be active.
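  One way to script this wait; a sketch that assumes the policy reports its state through `status.policyStatus`, and that the kubectl in use is recent enough (1.23+) to support `--for=jsonpath`:

  ```bash
  # Block until the policy reports itself as active (assumed status field).
  kubectl wait --for=jsonpath='{.status.policyStatus}'=active \
    clusteradmissionpolicy privileged-pods --timeout=2m
  ```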
- Try to deploy a pod which violates the policy previously defined. It should fail:

  ```bash
  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: privileged-pod
  spec:
    containers:
    - name: nginx
      image: nginx:latest
      securityContext:
        privileged: true
  EOF
  ```
Expected result: the `kubectl` command used to create the pod should fail.
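In a CI pipeline the rejection can be asserted by inverting the exit code. A minimal sketch, assuming the pod manifest from the step above was saved to a hypothetical `privileged-pod.yaml`:

```bash
# The apply must be rejected by the admission webhook;
# if it succeeds, the test itself has to fail.
if kubectl apply -f privileged-pod.yaml; then
  echo "ERROR: privileged pod was admitted" >&2
  exit 1
fi
```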
## Policy accepts a non-violating pod

Prerequisites: a Kubernetes cluster with Kubewarden installed.
- Define a `ClusterAdmissionPolicy`:

  ```bash
  kubectl apply -f - <<EOF
  apiVersion: policies.kubewarden.io/v1alpha2
  kind: ClusterAdmissionPolicy
  metadata:
    name: privileged-pods
  spec:
    policyServer: default
    module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.5
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations:
      - CREATE
      - UPDATE
    mutating: false
  EOF
  ```
- Wait for the policy to be active.
- Try to deploy a pod which does not violate the policy previously defined. It should not fail:

  ```bash
  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: unprivileged-pod
  spec:
    containers:
    - name: nginx
      image: nginx:latest
  EOF
  ```
Expected result: the `kubectl` command used to create the pod should not fail and the pod should be created.
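To make this check scriptable, the test can additionally wait until the pod is actually running; for example:

```bash
# Fails with a non-zero exit code if the pod never becomes Ready.
kubectl wait --for=condition=Ready pod/unprivileged-pod --timeout=2m
```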
## Delete a policy

Prerequisites: a Kubernetes cluster with Kubewarden installed and a policy defined.
- Delete the policy defined in the cluster:

  ```bash
  kubectl delete -f "privileged-policy.yaml"
  ```
- Check that the policy is no longer listed as a `ClusterAdmissionPolicy`:

  ```bash
  kubectl get clusteradmissionpolicy.policies.kubewarden.io
  ```
- Check if the policy server is up and running:

  ```bash
  kubectl get pods -n kubewarden
  ```
Expected result: the policy should be removed from the cluster.
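Each policy is exposed to the API server through an admission webhook, so the cleanup can also be verified by checking that no webhook configuration referencing the policy is left behind. A sketch, assuming the webhook configuration name contains the policy name:

```bash
# Should print nothing once the policy has been fully removed.
kubectl get validatingwebhookconfigurations -o name | grep privileged-pods || true
```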
## Update a policy

Prerequisites: a Kubernetes cluster with Kubewarden installed.
- Define a `ClusterAdmissionPolicy`:

  ```bash
  kubectl apply -f - <<EOF
  apiVersion: policies.kubewarden.io/v1alpha2
  kind: ClusterAdmissionPolicy
  metadata:
    name: privileged-pods
  spec:
    policyServer: default
    module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.5
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations:
      - CREATE
      - UPDATE
    mutating: false
  EOF
  ```
- Check if the policy is listed as a `ClusterAdmissionPolicy`:

  ```bash
  kubectl get clusteradmissionpolicy.policies.kubewarden.io
  ```
- Check if the policy server is up and running:

  ```bash
  kubectl get pods -n kubewarden
  ```
- Trying to edit the `spec.policyServer` field should fail, as it is immutable:

  ```bash
  kubectl patch clusteradmissionpolicy privileged-pods --type=merge -p '{"spec": {"policyServer": "my-policyserver"}}'
  ```

  ```
  The ClusterAdmissionPolicy "privileged-pods" is invalid: spec.policyServer: Forbidden: the field is immutable
  ```
- Change the rules to remove the CREATE operation:

  ```bash
  kubectl patch clusteradmissionpolicy privileged-pods --type=json -p '[
    {
      "op": "remove",
      "path": "/spec/rules/0/operations/1"
    },
    {
      "op": "replace",
      "path": "/spec/rules/0/operations/0",
      "value": "UPDATE"
    }
  ]'
  ```

  ```
  clusteradmissionpolicy.policies.kubewarden.io/privileged-pods patched
  ```
- Trying to deploy a new privileged pod should now succeed:

  ```bash
  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: privileged-pod
  spec:
    containers:
    - name: nginx
      image: nginx:latest
      securityContext:
        privileged: true
  EOF
  ```
Expected result: the policy should be installed in the cluster and active. After the update, the policy no longer validates CREATE operations, so creating privileged pods is allowed.
## Deploy a PolicyServer and scale it

Prerequisites: a Kubernetes cluster with the Kubewarden stack installed.
- Deploy a PolicyServer:

  ```bash
  kubectl apply -f - <<EOF
  apiVersion: policies.kubewarden.io/v1alpha2
  kind: PolicyServer
  metadata:
    name: reserved-instance-for-tenant-a
  spec:
    image: ghcr.io/kubewarden/policy-server:v1.0.0
    replicaSize: 2
    sources:
      insecure:
      - insecure1.registry.foo.bar
      - insecure2.registry.foo.bar
      secure:
      - name: self-signed1.registry.com
        certificate: <base64 blob>
      - name: self-signed2.registry.com
        certificateFrom:
          configMapKeyRef:
            name: name-of-the-configmap
            key: ca_key_name
    env:
    - name: KUBEWARDEN_LOG_LEVEL
      value: debug
    - name: KUBEWARDEN_LOG_FMT
      value: jaeger
    annotations:
      sidecar.jaegertracing.io/inject: default
  EOF
  ```
- Check if there are two PolicyServer instances: the one created by the user and the default one.
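  A possible check, listing the PolicyServer resources (both `default` and `reserved-instance-for-tenant-a` should appear):

  ```bash
  kubectl get policyservers.policies.kubewarden.io
  ```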
- Patch the `default` PolicyServer so it has 3 replicas:

  ```bash
  kubectl patch policyserver default --type=merge -p '{"spec": {"replicas": 3}}'
  ```
- Check that 3 pods for that PolicyServer have been created.
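  A sketch of the count check; the `app` label value is an assumption about how the PolicyServer pods are labeled:

  ```bash
  # Expect the output to be 3 once the patch has been rolled out.
  kubectl get pods -n kubewarden -l app=kubewarden-policy-server-default --no-headers | wc -l
  ```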
Expected result: the cluster should have two PolicyServers running: the default one, scaled to 3 replicas, and the one defined by the user.
## Deploy a PolicyServer

This test case is based on the latest changes in the Kubewarden architecture.
Prerequisites: a Kubernetes cluster with the Kubewarden stack installed.
- Deploy a PolicyServer:

  ```bash
  kubectl apply -f - <<EOF
  apiVersion: policies.kubewarden.io/v1alpha2
  kind: PolicyServer
  metadata:
    name: reserved-instance-for-tenant-a
  spec:
    image: ghcr.io/kubewarden/policy-server:v1.0.0
    replicaSize: 2
    sources:
      insecure:
      - insecure1.registry.foo.bar
      - insecure2.registry.foo.bar
      secure:
      - name: self-signed1.registry.com
        certificate: <base64 blob>
      - name: self-signed2.registry.com
        certificateFrom:
          configMapKeyRef:
            name: name-of-the-configmap
            key: ca_key_name
    env:
    - name: KUBEWARDEN_LOG_LEVEL
      value: debug
    - name: KUBEWARDEN_LOG_FMT
      value: jaeger
    annotations:
      sidecar.jaegertracing.io/inject: default
  EOF
  ```
- Check if there are two PolicyServer instances: the one created by the user and the default one.
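  As in the previous test case, a possible check is to list the PolicyServer resources:

  ```bash
  kubectl get policyservers.policies.kubewarden.io
  ```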
Expected result: the cluster should have two PolicyServers running: the default one and another one defined by the user.
## Mutating policy

This test case uses the psp-user-group policy as the mutating policy, but it should be applicable to any mutating policy.

Prerequisites: a Kubernetes cluster with Kubewarden installed.
- Install a mutating policy:

  ```bash
  kubectl apply -f - <<EOF
  apiVersion: policies.kubewarden.io/v1alpha2
  kind: ClusterAdmissionPolicy
  metadata:
    name: psp-user-group
  spec:
    policyServer: default
    module: registry://ghcr.io/kubewarden/policies/psp-user-group:latest
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations:
      - CREATE
      - UPDATE
    mutating: true
    settings:
      run_as_user:
        rule: "MustRunAs"
        ranges:
        - min: 1000
          max: 2000
        - min: 3000
          max: 4000
      run_as_group:
        rule: "RunAsAny"
      supplemental_groups:
        rule: "RunAsAny"
  EOF
  ```
- Wait for the policy to be active.
- Deploy a pod that needs to be mutated by the policy:

  ```bash
  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: pause-user-group
  spec:
    containers:
    - name: pause
      image: k8s.gcr.io/pause
  EOF
  ```
- Wait for the pod to be created and check if the required data is set. For the policy used in this example, the container's `securityContext` should be set with `runAsUser: 1000`:

  ```bash
  kubectl get pod pause-user-group -o json | jq ".spec.containers[].securityContext"
  ```

  ```
  {
    "runAsUser": 1000
  }
  ```
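  In a CI pipeline this check can be turned into an assertion; a minimal sketch:

  ```bash
  # Fail unless the webhook injected runAsUser: 1000 into the container.
  test "$(kubectl get pod pause-user-group \
    -o jsonpath='{.spec.containers[0].securityContext.runAsUser}')" = "1000"
  ```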
Expected result: the mutating policy should mutate the request, adding or updating data.
## Uninstall Kubewarden

Test based on the Quick start documentation.

Prerequisites: a Kubernetes cluster with Kubewarden installed.
- Uninstall the Kubewarden controller and related resources using Helm:

  ```bash
  helm uninstall -n kubewarden kubewarden-controller
  ```
- Wait for the removal of all resources associated with the Kubewarden chart, including the Kubewarden controller, policy servers, and policies.
- Uninstall the CRDs:

  ```bash
  helm uninstall -n kubewarden kubewarden-crds
  ```
Expected result: after the uninstallation process, the whole Kubewarden stack should be removed.
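A quick way to verify the cleanup is to check that no Kubewarden CRDs or webhook configurations are left behind; a sketch, assuming all Kubewarden-owned resources carry `kubewarden` in their names:

```bash
# Both commands should print nothing once the stack is fully removed.
kubectl get crds -o name | grep kubewarden || true
kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations -o name | grep kubewarden || true
```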