Multi-Broker Setup in a Kafka Cluster

Hi Kafka learners,

This post shows you how to install a multi-broker Kafka cluster. Follow the steps below to get one up and running.

Multi-Broker Kafka Installation

1. First, download kafka_2.10-0.8.2.0 from the link below, which was the most recent version at the time of writing this blog. If you need the latest version, visit the official Kafka site.
wget http://www.webhostingreviewjam.com/mirror/apache/kafka/0.8.2.0/kafka_2.10-0.8.2.0.tgz

2. Download the JDK from Oracle. If the link you have is broken, or you want a newer version, check the Oracle website.
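
If you are not sure whether a JDK is already installed, you can check first; this assumes the java binary is on your PATH:

java -version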

3. Untar Kafka and the JDK

tar -zxvf kafka_2.10-0.8.2.0.tgz
sh jdk-6u45-linux-x64.bin

After untarring Kafka and the JDK, change into the Kafka directory:

cd kafka_2.10-0.8.2.0
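
If the JDK is not already on your PATH, point JAVA_HOME at the directory the installer extracted and add its bin folder to the PATH. The path below is an assumption (the 6u45 installer unpacks into a jdk1.6.0_45 directory wherever you ran it); adjust it to your actual location:

export JAVA_HOME=$HOME/jdk1.6.0_45   # adjust to where the JDK was extracted
export PATH=$JAVA_HOME/bin:$PATH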

4. Now we need two more brokers, so we need a configuration file for each of them. From inside the Kafka directory, run the following commands:

cp config/server.properties config/server-1.properties
cp config/server.properties config/server-2.properties

(This simply copies the default server.properties to server-1.properties and server-2.properties, which will be the configuration files for the two additional brokers.)

5. Go into the config folder and edit the server-1.properties and server-2.properties files so that each broker gets its own id, port, and log directory:

config/server-1.properties:
broker.id=1
port=9093
log.dir=/tmp/kafka-logs-1

config/server-2.properties:
broker.id=2
port=9094
log.dir=/tmp/kafka-logs-2
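
Note that broker.id must be unique for every broker in the cluster; the untouched config/server.properties keeps broker.id=0 on port 9092, so these three files together describe a three-broker cluster. If you prefer not to edit the files by hand, something like the following should apply the same changes (a sketch assuming GNU sed and the property names in the stock file, which uses log.dirs; Kafka accepts either log.dir or log.dirs):

sed -i 's/^broker\.id=0/broker.id=1/; s/^port=9092/port=9093/; s|^log\.dirs=.*|log.dirs=/tmp/kafka-logs-1|' config/server-1.properties
sed -i 's/^broker\.id=0/broker.id=2/; s/^port=9092/port=9094/; s|^log\.dirs=.*|log.dirs=/tmp/kafka-logs-2|' config/server-2.properties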

6. Start Zookeeper
Kafka ships with a script and a sample config for running a single-node ZooKeeper instance, which is all we need for this exercise. Run the command below to start it:

bin/zookeeper-server-start.sh config/zookeeper.properties
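
To confirm ZooKeeper is up and listening on its default port 2181, you can send it the "ruok" four-letter command; this assumes netcat (nc) is installed, and a healthy server answers "imok":

echo ruok | nc localhost 2181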

7. Start Kafka Brokers
Now start the brokers, each in its own terminal. Besides the two new brokers, also start the original one defined by config/server.properties (broker.id=0, port 9092): the topic we create next uses a replication factor of 3, so all three brokers must be running.

bin/kafka-server-start.sh config/server.properties
bin/kafka-server-start.sh config/server-1.properties
bin/kafka-server-start.sh config/server-2.properties
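
If you would rather not keep three terminals open, one option is to start the brokers in the background and redirect their output to log files (just a convenience, not required):

nohup bin/kafka-server-start.sh config/server.properties   > broker-0.log 2>&1 &
nohup bin/kafka-server-start.sh config/server-1.properties > broker-1.log 2>&1 &
nohup bin/kafka-server-start.sh config/server-2.properties > broker-2.log 2>&1 &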

8. Create Topic
Before sending any messages we need a topic. The command below creates a topic named my-replicated-topic with a single partition and a replication factor of 3, so every message is replicated to all three brokers:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
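
To see which broker is currently the leader for the partition and which replicas are in sync, describe the topic:

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic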

9. List Topics
After creating the topic, verify that it exists by listing all topics:

bin/kafka-topics.sh --list --zookeeper localhost:2181

10. Start Producer
Now start a console producer. Every line you type into it is sent as a message to the topic:

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic

>my test message 1
>my test message 2
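
The producer only needs one reachable broker to bootstrap its view of the cluster, but you can list several so it still starts if that one happens to be down; for this cluster that would look like:

bin/kafka-console-producer.sh --broker-list localhost:9092,localhost:9093,localhost:9094 --topic my-replicated-topic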

11. Start Consumer
Finally, start a consumer to read back the messages the producer wrote to my-replicated-topic. Run the command below in another terminal:

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic my-replicated-topic
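
Since the topic has a replication factor of 3, the cluster should survive losing a broker. As an optional check, kill one of the brokers and confirm that the topic still has a leader and the consumer still returns every message; the ps/grep pipeline below is just one way to find the broker's process id:

ps ax | grep -i 'server-1.properties' | grep java | awk '{print $1}' | xargs kill -9
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic my-replicated-topic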

———————————-

Article written by DataDotz Team

DataDotz is a Chennai-based Big Data team primarily focused on consulting and training in technologies such as Apache Hadoop, Apache Spark, NoSQL (HBase, Cassandra, MongoDB), Search, and Cloud Computing.

Note: DataDotz also provides classroom-based Apache Kafka training in Chennai. The course includes Cassandra, MongoDB, Scala, and Apache Spark training. For more details about Apache Spark training in Chennai, please visit http://datadotz.com/training/