
Test Plan


The test cases described in this document are intended to be standalone tests. As a result, they may contain duplicate steps that can be optimized when writing the actual tests in the CI pipeline.

The described tests follow the new architecture, so those changes should be merged before development of the test cases starts.

This is a living document. If you find a test case missing, please add it!


Test cases

User should be able to install Kubewarden with the Helm charts

Test based on the Quick start documentation.

Prerequisites

Kubernetes cluster up and running

Steps

  1. Install and configure cert-manager
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.3/cert-manager.yaml
kubectl wait --for=condition=Available deployment --timeout=2m -n cert-manager --all
  2. Run the commands to install Kubewarden using the Helm charts
helm repo add kubewarden https://charts.kubewarden.io
helm install --create-namespace -n kubewarden kubewarden-crds kubewarden/kubewarden-crds
helm install --wait -n kubewarden kubewarden-controller kubewarden/kubewarden-controller
  3. Wait for the installation to finish
  4. Check if the Kubewarden controller pod is running
kubectl get pods -n kubewarden
  5. Check if the ClusterAdmissionPolicy custom resource definition is installed
kubectl get crds
  6. Check if there is a default PolicyServer up and running
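A possible way to perform this check; it assumes the PolicyServer CRD is registered under policies.kubewarden.io and that the chart creates a PolicyServer named default (naming assumed):
kubectl get policyservers.policies.kubewarden.io
kubectl get pods -n kubewarden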

Expected Results

The Kubewarden stack should be installed and properly configured in the Kubernetes cluster

User should be able to deploy a policy

Prerequisites

A Kubernetes cluster with Kubewarden installed.

Steps

  1. Define a ClusterAdmissionPolicy
kubectl apply -f - <<EOF
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: privileged-pods
spec:
  policyServer: default
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.5
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations:
    - CREATE
    - UPDATE
  mutating: false
EOF
  2. Check if the policy is listed as a ClusterAdmissionPolicy
kubectl get clusteradmissionpolicy.policies.kubewarden.io
  3. Check if the policy server is up and running
kubectl get pods -n kubewarden
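Optionally, the policy status can be inspected directly; this is a sketch that assumes the ClusterAdmissionPolicy reports its state in status.policyStatus (field name assumed), which should eventually read active:
kubectl get clusteradmissionpolicy.policies.kubewarden.io privileged-pods -o jsonpath='{.status.policyStatus}'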

Expected results

The policy should be installed in the cluster and active

Trying to create a pod violating a policy should fail

Prerequisites

A Kubernetes cluster with Kubewarden installed.

Steps

  1. Define a ClusterAdmissionPolicy
kubectl apply -f - <<EOF
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: privileged-pods
spec:
  policyServer: default
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.5
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations:
    - CREATE
    - UPDATE
  mutating: false
EOF
  2. Wait for the policy to be active
  3. Try to deploy a pod that violates the policy previously defined. It should fail.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
spec:
  containers:
    - name: nginx
      image: nginx:latest
      securityContext:
          privileged: true
EOF
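In an automated test, the rejection can be asserted through kubectl's exit code; a minimal sketch, assuming the pod manifest above has been saved as privileged-pod.yaml (hypothetical file name):
if kubectl apply -f privileged-pod.yaml; then
  echo "ERROR: the privileged pod was admitted"
  exit 1
fi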

Expected Results

The kubectl command used to create the pod should fail.

Trying to create a pod that does not violate a policy should succeed

Prerequisites

A Kubernetes cluster with Kubewarden installed.

Steps

  1. Define a ClusterAdmissionPolicy
kubectl apply -f - <<EOF
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: privileged-pods
spec:
  policyServer: default
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.5
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations:
    - CREATE
    - UPDATE
  mutating: false
EOF
  2. Wait for the policy to be active
  3. Try to deploy a pod that does not violate the policy previously defined. It should not fail.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: unprivileged-pod
spec:
  containers:
    - name: nginx
      image: nginx:latest
EOF
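To confirm that the pod was actually admitted and scheduled, the test can wait for it to become ready:
kubectl wait --for=condition=Ready pod/unprivileged-pod --timeout=2m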

Expected Results

The kubectl command used to create the pod should not fail and the pod should be created.

User should be able to delete a policy

Prerequisites

A Kubernetes cluster with Kubewarden installed and a policy defined.

Steps

  1. Delete the policy defined in the cluster
kubectl delete -f "privileged-policy.yaml"
  2. Check that the policy is no longer listed as a ClusterAdmissionPolicy
kubectl get clusteradmissionpolicy.policies.kubewarden.io
  3. Check if the policy server is up and running
kubectl get pods -n kubewarden
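Alternatively, assuming the policy from the previous test cases is named privileged-pods, it can be deleted by name instead of by file:
kubectl delete clusteradmissionpolicy.policies.kubewarden.io privileged-pods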

Expected results

The policy should be removed from the cluster

User should be able to edit a policy

Prerequisites

A Kubernetes cluster with Kubewarden installed.

Steps

  1. Define a ClusterAdmissionPolicy
kubectl apply -f - <<EOF
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: privileged-pods
spec:
  policyServer: default
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.5
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations:
    - CREATE
    - UPDATE
  mutating: false
EOF
  2. Check if the policy is listed as a ClusterAdmissionPolicy
kubectl get clusteradmissionpolicy.policies.kubewarden.io
  3. Check if the policy server is up and running
kubectl get pods -n kubewarden
  4. Trying to edit the spec.policyServer should fail, as it is immutable.
kubectl patch clusteradmissionpolicy privileged-pods --type=merge -p '{"spec": {"policyServer": "my-policyserver"}}'
The ClusterAdmissionPolicy "privileged-pods" is invalid: spec.policyServer: Forbidden: the field is immutable
  5. Change the rules to remove the CREATE operation:
kubectl patch clusteradmissionpolicy privileged-pods --type=json -p '[
 {
  "op": "remove",
  "path": "/spec/rules/0/operations/1"
 },
 {
  "op": "replace",
  "path": "/spec/rules/0/operations/0",
  "value": "UPDATE"
 }
]'
clusteradmissionpolicy.policies.kubewarden.io/privileged-pods patched
  6. Trying to deploy a new privileged pod should succeed
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
spec:
  containers:
    - name: nginx
      image: nginx:latest
      securityContext:
          privileged: true
EOF
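The effect of the patch can also be verified by inspecting the operations list of the policy, which should now contain only UPDATE (sketch):
kubectl get clusteradmissionpolicy.policies.kubewarden.io privileged-pods -o jsonpath='{.spec.rules[0].operations}'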

Expected results

The policy should be installed in the cluster and active. After the edit, the policy should no longer validate CREATE operations, allowing privileged pods to be created.

User should be able to edit a PolicyServer

Prerequisites

Kubernetes cluster with Kubewarden stack installed

Steps

  1. Deploy a PolicyServer
kubectl apply -f - <<EOF
apiVersion: policies.kubewarden.io/v1alpha2
kind: PolicyServer
metadata:
  name: reserved-instance-for-tenant-a
spec:
  image: ghcr.io/kubewarden/policy-server:v1.0.0
  replicaSize: 2
  sources:
    insecure:
    - insecure1.registry.foo.bar
    - insecure2.registry.foo.bar
    secure:
    - name: self-signed1.registry.com
      certificate: <base64 blob>
    - name: self-signed2.registry.com
      certificateFrom:
        configMapKeyRef:
          name: name-of-the-configmap
          key: ca_key_name
  env:
  - name: KUBEWARDEN_LOG_LEVEL
    value: debug
  - name: KUBEWARDEN_LOG_FMT
    value: jaeger
  annotations:
    sidecar.jaegertracing.io/inject: default
EOF
  2. Check if there are two PolicyServer instances: the one created by the user and the default one.

  3. Patch the PolicyServer so it has 3 replicas:

kubectl patch policyserver reserved-instance-for-tenant-a --type=merge -p '{"spec": {"replicas": 3}}'
  4. Check that 3 pods for that PolicyServer have been created.
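A possible way to perform these checks; the pod label used here is an assumption about how the controller labels policy-server pods:
kubectl get policyservers.policies.kubewarden.io
kubectl get pods -n kubewarden -l app=kubewarden-policy-server-reserved-instance-for-tenant-a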

Expected results

The cluster should have two PolicyServers running: the default one and the one defined by the user, the latter with 3 replicas.

User should be able to deploy a new PolicyServer

This test case is based on the latest changes in the Kubewarden architecture.

Prerequisites

Kubernetes cluster with Kubewarden stack installed

Steps

  1. Deploy a PolicyServer
kubectl apply -f - <<EOF
apiVersion: policies.kubewarden.io/v1alpha2
kind: PolicyServer
metadata:
  name: reserved-instance-for-tenant-a
spec:
  image: ghcr.io/kubewarden/policy-server:v1.0.0
  replicaSize: 2
  sources:
    insecure:
    - insecure1.registry.foo.bar
    - insecure2.registry.foo.bar
    secure:
    - name: self-signed1.registry.com
      certificate: <base64 blob>
    - name: self-signed2.registry.com
      certificateFrom:
        configMapKeyRef:
          name: name-of-the-configmap
          key: ca_key_name
  env:
  - name: KUBEWARDEN_LOG_LEVEL
    value: debug
  - name: KUBEWARDEN_LOG_FMT
    value: jaeger
  annotations:
    sidecar.jaegertracing.io/inject: default
EOF
  2. Check if there are two PolicyServer instances: the one created by the user and the default one.
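For example, listing the PolicyServer resources should show both default and reserved-instance-for-tenant-a:
kubectl get policyservers.policies.kubewarden.io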

Expected results

The cluster should have two PolicyServers running: the default one and the one defined by the user.

User request should be mutated when necessary

This test case uses psp-user-group as the mutating policy, but it should be applicable to any mutating policy.

Prerequisites

Kubernetes cluster with Kubewarden installed

Steps

  1. Install a mutating policy
kubectl apply -f - <<EOF
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: psp-user-group
spec:
  policyServer: default
  module: registry://ghcr.io/kubewarden/policies/psp-user-group:latest
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations:
    - CREATE
    - UPDATE
  mutating: true
  settings:
    run_as_user: 
      rule: "MustRunAs"
      ranges:
        - min: 1000
          max: 2000
        - min: 3000
          max: 4000
    run_as_group: 
      rule: "RunAsAny"
    supplemental_groups: 
      rule: "RunAsAny"
EOF
  2. Wait for the policy to be active
  3. Deploy a pod that needs to be mutated by the policy.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pause-user-group
spec:
  containers:
    - name: pause
      image: k8s.gcr.io/pause
EOF
  4. Wait for the pod to be created and check that the required data is set. For the policy used in this example, the container's securityContext should have runAsUser set to 1000.
kubectl get pod pause-user-group -o json | jq ".spec.containers[].securityContext"
{
  "runAsUser": 1000
}
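Before running the inspection command above, the test can wait for the pod to be scheduled and ready:
kubectl wait --for=condition=Ready pod/pause-user-group --timeout=2m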

Expected results

The mutating policy should mutate the request, adding or updating data as needed.

User should be able to uninstall Kubewarden

Test based on the Quick start documentation.

Prerequisites

A Kubernetes cluster with Kubewarden installed.

Steps

  1. Uninstall Kubewarden controller and related resources using Helm
helm uninstall -n kubewarden kubewarden-controller

Wait for the removal of all resources associated with the Kubewarden chart, including the Kubewarden controller, policy servers, and policies.
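A possible check, assuming the chart removes the controller deployment and the policy-server pods from the kubewarden namespace:
kubectl get pods -n kubewarden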

  2. Uninstall the CRDs
helm uninstall -n kubewarden kubewarden-crds
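After both charts are removed, listing the CRDs should show no Kubewarden entries:
kubectl get crds | grep kubewarden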

Expected Results

After the uninstallation process, the whole Kubewarden stack should be removed from the cluster.