A starter project for running a three-node Kafka cluster under Docker. The Docker Compose file contains,
- Three ZooKeeper nodes
- ZooNavigator - A UI for the ZooKeeper cluster
- Three Kafka brokers using this image
- Kafka Manager - An open source UI by Yahoo for Kafka.
- Kafka Topics UI - An open source UI for examining messages
- Kafka Rest Proxy - A RESTful interface to a Kafka cluster, making it easy to produce and consume messages, view the state of the cluster, and perform administrative actions without using the native Kafka protocol or clients.
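As a quick illustration of the REST Proxy, once the cluster is up you can list topics over plain HTTP (port 8082 is the proxy's common default and is an assumption here; check docker-compose.yml for the actual port mapping),
curl http://localhost:8082/topics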
This setup allows you to experiment with a number of scenarios you may wish to test,
- Evaluating replication by taking a broker offline, deleting that broker's data, and then bringing the broker back up (a rough sketch follows this list)
- Taking a Kafka Connect worker offline and observing the other workers pick up the orphaned partitions.
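For example, the replication scenario might look roughly like the following, assuming the broker's Compose service is named broker-2 and its data lives under ./data/broker-2 (both are assumptions; check docker-compose.yml for the real names and paths),
docker-compose stop broker-2     # take one broker offline
rm -rf ./data/broker-2           # hypothetical data path for that broker
docker-compose start broker-2    # bring it back and watch it catch up via replication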
With Docker installed, we will need an external Docker network to run the containers on. Run,
docker network create kafka-net
Now let's clone the repo and fire up our cluster,
git clone git@github.com:aedenj/kafka-cluster-starter.git ~/projects/my-project
cd ~/projects/my-project;docker-compose up
Once the cluster is up, indicated by the last line of the console output looking something like,
kafka-manager_1 | 2020-08-02 15:13:14,912 - [INFO] k.m.a.KafkaManagerActor - Updating internal state...
we can now set up the cluster in Kafka Manager. Navigate to the UI and, at a minimum, fill out the cluster name and the ZooKeeper hosts with zk1:2181,zk2:2182,zk3:2183
For ZooNavigator, navigate to its UI and enter zk1:2181,zk2:2182,zk3:2183 as the connection string.
The Kafka Topics UI requires no setup. Start browsing the data for your topics.
For common tasks with Kafka you have one of two options,
- Perform the task through the Kafka Admin UI
- Perform the task on the command line through a docker container.
If you want to perform commands via the command line, it's helpful to have this alias in your shell profile,
alias kafkad='docker run --rm -i --network kafka-net wurstmeister/kafka:latest'
Name the alias whatever you like. I prefer to add the d on the end to indicate the command is being run through Docker.
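For example, listing the topics in the cluster,
kafkad kafka-topics.sh --list --bootstrap-server broker-1:19092,broker-2:19093,broker-3:19094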
Navigate to the Kafka Manager UI and create the topic there, or execute the following, assuming the alias above,
kafkad kafka-topics.sh --create --bootstrap-server broker-1:19092,broker-2:19093,broker-3:19094 --replication-factor 3 --partitions 9 --topic messages
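You can then verify the topic was created with the expected partition count and replication factor,
kafkad kafka-topics.sh --describe --bootstrap-server broker-1:19092,broker-2:19093,broker-3:19094 --topic messages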
To make life a little easier, let's add another alias,
alias kafkacreatetopic='f() { kafkad kafka-topics.sh --create --bootstrap-server $1 --partitions $2 --replication-factor $3 --topic $4; unset -f f; }; f'
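With that alias, the create command above becomes,
kafkacreatetopic broker-1:19092,broker-2:19093,broker-3:19094 9 3 messages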
Of course we'll want to delete topics, so here's an alias for that too,
alias kafkadeletetopic='f() { kafkad kafka-topics.sh --delete --bootstrap-server $1 --topic $2; unset -f f; }; f'
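For example, deleting the messages topic,
kafkadeletetopic broker-1:19092,broker-2:19093,broker-3:19094 messages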
Assuming you set up the kafkad alias above, run,
kafkad kafka-console-producer.sh --broker-list broker-1:19092,broker-2:19093,broker-3:19094 --topic messages --property "parse.key=true" --property "key.separator=:"
At the prompt, enter each line,
1:Wash dishes
2:Clean bathroom
3:Mop living room
Assuming you set up the kafkad alias above, run,
kafkad kafka-console-consumer.sh --bootstrap-server broker-1:19092,broker-2:19093,broker-3:19094 --from-beginning --topic messages --property "print.key=true"
If you entered the messages in the previous section, you should see them in the console.
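With print.key=true the consumer prints each key and value separated by a tab, so the output should look roughly like the following (ordering may vary, since different keys can hash to different partitions),
1	Wash dishes
2	Clean bathroom
3	Mop living room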
Conduktor is a commercially licensed admin interface for Kafka and offers more features than CMAK.