Kubernetes autodiscover is a key feature of any observability solution for this orchestrator: resources change dynamically, and observers' configurations have to adapt to these changes. Discovery of resources in Kubernetes poses some challenges; there are several corner cases to handle, involving changes of state that are not always deterministic. This complicates the implementation and its testing. With many cases to cover and without good test coverage, it is easy to introduce regressions even in basic use cases. This suite covers a set of use cases that are better tested against real Kubernetes implementations.
At the topmost level, the test framework uses a BDD framework written in Go: the expected behavior of each use case is defined in a feature file using Gherkin, and the steps are implemented in Go code.
`kubectl` is used to configure resources in a Kubernetes cluster, and `kind` can be used to provide a local cluster.
The tests follow this general approach:

- Use `kubectl` to interact with a Kubernetes cluster.
- If there is no Kubernetes cluster configured in `kubectl`, deploy a new one using `kind`. If a cluster is created this way, it is also removed after the suite is executed.
- Execute the BDD steps representing each scenario. Each scenario runs in its own Kubernetes namespace, so it is easier to clean up and to avoid one scenario affecting the others. These namespaces are created before and destroyed after each scenario.
- New scenarios can be configured by providing a Gherkin definition and templatized Kubernetes manifests, as in the sketch after this list.
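As a sketch of what such a Gherkin definition can look like (the feature name, template names, and scenario below are illustrative, not taken from the suite; the available steps are described in the following sections):

```gherkin
Feature: Autodiscover of pods

  Scenario: Collect logs from a pod
    # "filebeat" and "a pod" refer to templates stored in testdata/templates
    Given "filebeat" is running
    When "a pod" is deployed
    Then "filebeat" collects events with "kubernetes.pod.name:a-pod"
```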
Scenarios defined in this suite are based on a sequence of actions and expectations defined in the feature files, and on templates of the resources to deploy in Kubernetes. Templates are stored in `testdata/templates` and must have the `.yml.tmpl` extension.
Several of the available steps can be parameterized with template names; the name is the template's file name without the extension, with spaces in the name replaced by hyphens in the file name. For example, the parameter "a service" refers to the template `a-service.yml.tmpl`.
There are steps intended to define a desired state for the resources in the template, such as the following:

- `"filebeat" is running` deploys the template `filebeat.yml.tmpl` and waits for filebeat pods to be running. This step expects some pod to be labeled with `k8s-app:filebeat`.
- `"a service" is deployed` deploys the resources in the template `a-service.yml.tmpl` and continues without expecting any state of the deployed resources.
- `"a pod" is deleted` deletes the resources defined in the template `a-pod.yml.tmpl`.
Any of these steps can be parameterized with an option that selects different configuration blocks in the template. For example, the following step would select the configuration block marked as `monitor annotations` in the template:

`"a service" is deployed with "monitor annotations"`
These option blocks can be defined in the template like this:
```yaml
metadata:
  annotations:
{{ if option "monitor annotations" }}
    co.elastic.monitor/type: tcp
    co.elastic.monitor/hosts: "${data.host}:6379"
{{ end }}
```
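Put together with the steps above, a scenario could then select that block. This is an illustrative sketch: the `"heartbeat" is running` step and the expected field follow the same patterns as the steps documented here, but are not taken verbatim from the suite:

```gherkin
Scenario: Monitor a service through autodiscover annotations
  Given "heartbeat" is running
  When "a service" is deployed with "monitor annotations"
  Then "heartbeat" collects events with "monitor.type:tcp"
```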
Steps defining expectations are mostly based on checking the events generated by the deployed observers. Available steps include the following:
"filebeat" collects events with "kubernetes.pod.name:a-pod"
checks that the filebeat pod has collected at least one event with the fieldkubernetes.pod.name
set to apod
."metricbeat" does not collect events with "kubernetes.pod.name:a-pod" during "30s"
expects to have a period of time of 30 seconds without collecting events with the given field and value.
These steps expect to find the events in the /tmp/beats-events
file in pods marked
with the label k8s-app
.
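Combined with the deployment steps, the negative expectation can be used to verify that collection stops. Again an illustrative sketch, assuming a `"metricbeat" is running` step analogous to the documented `"filebeat" is running`:

```gherkin
Scenario: Stop collecting metrics when a pod goes away
  Given "metricbeat" is running
  When "a pod" is deleted
  Then "metricbeat" does not collect events with "kubernetes.pod.name:a-pod" during "30s"
```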
There are other, more specific steps; examples of them can be found in the feature files.
- Clone this repository, say into a folder named `e2e-testing`.

  ```shell
  git clone git@github.com:elastic/e2e-testing.git
  ```
- Configure the versions of the tools you want to test (optional).
  - `KIND_VERSION`: set this environment variable to the version of kind (Kubernetes in Docker) to be used in the current execution.
  - `KUBERNETES_VERSION`: set this environment variable to the version of Kubernetes to be used in the current execution.
  This is an example of the optional configuration:

  ```shell
  # Depending on the versions used, some of these variables may be needed:
  export BEAT_VERSION=7.12.0           # version of beats to use
  export ELASTIC_AGENT_VERSION=7.12.0  # version of Elastic Agent to use
  export GITHUB_CHECK_SHA1=0123456789  # to select snapshots built by beats-ci
  export KIND_VERSION="0.20.0"         # version of kind
  export KUBERNETES_VERSION="1.30.0"   # version of the cluster to be passed to kind
  ```
- Install dependencies.
  - Install kubectl 1.18 or newer.
  - Install kind 0.14.0 or newer.
  - Install Go, using the language version defined in the `.go-version` file at the root directory. We recommend using GVM, as done in the CI, which allows you to install multiple versions of Go and set up the Go environment accordingly: `eval "$(gvm 1.15.9)"`.
  - Godog and other test-related binaries will be installed in their supported versions when the project is first built, thanks to Go modules and the Go build system.
- Run the tests.

  ```shell
  cd e2e/_suites/kubernetes-autodiscover
  OP_LOG_LEVEL=DEBUG go test -timeout 90m -v
  ```
  Optionally, you can run the scenarios for a pull request on a given commit:

  ```shell
  export GITHUB_CHECK_SHA1=0123456789 # to select snapshots built by beats-ci
  export GITHUB_CHECK_REPO=beats      # or elastic-agent, depending on whether you need a beat or the elastic-agent
  cd e2e/_suites/kubernetes-autodiscover
  OP_LOG_LEVEL=DEBUG go test -timeout 90m -v
  ```
  Optionally, you can run only one of the feature files using tags:

  ```shell
  cd e2e/_suites/kubernetes-autodiscover
  OP_LOG_LEVEL=DEBUG go test -timeout 90m -v --godog.tags='@filebeat'
  ```
  Furthermore, similarly to how the `@filebeat` tag in the `filebeat.feature` file is used to run all scenarios in a feature, you can run a single scenario by tagging only that scenario:

  ```shell
  cd e2e/_suites/kubernetes-autodiscover
  OP_LOG_LEVEL=DEBUG go test -timeout 90m -v --godog.tags='@autodiscover-redis'
  ```
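  The tags themselves are declared in the feature files. Assuming Gherkin's usual tag placement (the feature and scenario names here are illustrative), tagging looks like this:

  ```gherkin
  @filebeat
  Feature: Filebeat autodiscover

    @autodiscover-redis
    Scenario: Collect logs from a Redis pod
      Given "filebeat" is running
  ```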
The tests will take a few minutes to run, spinning up the Kubernetes cluster if needed.
If a Kubernetes cluster is pre-configured in kubectl, you can use `kubectl` directly to investigate the resources deployed in the cluster by the suite. If the cluster was deployed by the suite, it will have a randomized name, and the suite will use a temporary configuration file for kubectl.
The name of the cluster can be obtained with `kind get clusters`; clusters created by this suite follow the pattern `kind-<random uuid>`.
The path of the temporary configuration file is logged by the suite at the info level. If a cluster is created by the suite, you will see something like this:

```
INFO[0000] Kubernetes cluster not available, will start one using kind
INFO[0000] Using kind v0.14.0 go1.15.7 linux/amd64
INFO[0046] Kubeconfig in /tmp/test-252418601/kubeconfig
```
Then you could use the following command to control the resources with `kubectl`:

```shell
kubectl --kubeconfig /tmp/test-252418601/kubeconfig ...
```
Each scenario creates its own namespace; you can find them with `kubectl get ns`, and they will follow the pattern `test-<random uuid>`.
Interrupting the tests with Ctrl-C will leave all resources as they were; you can use the previous instructions to investigate problems or access the logs of the deployed pods.
Please open an issue here: https://github.com/elastic/e2e-testing/issues/new