Single-Node Kafka Installation

Hi message passers,

In this tutorial, we will set up Apache Kafka on a single node. Apache Kafka is an open-source message broker project developed by the Apache Software Foundation and written in Scala. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds.


1. First, download kafka_2.10-0.8.2.1 using the link below; this was the most recent version at the time of writing this blog. If you need the latest version, visit the official Kafka site.
wget http://www.webhostingjams.com/mirror/apache/kafka/0.8.2.1/kafka_2.10-0.8.2.1.tgz

2. Download Java (JDK). If the link below is broken, or you need the current version, please check the Oracle website.
wget http://download.java.net/jdk7u60/archive/b11/binaries/jdk-7u60-ea-bin-b11-linux-x64-19_mar_2014.tar.gz?q=download/jdk7u60/archive/b11/binaries/jdk-7u60-ea-bin-b11-linux-x64-19_mar_2014.tar.gz
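After extracting the JDK archive, point JAVA_HOME at it so Kafka's scripts can find Java. The directory below is only an example; adjust it to wherever you actually untarred the JDK.

```shell
# Example path only: replace /home/username/jdk1.7.0_60 with the
# directory your JDK archive actually extracted to.
export JAVA_HOME=/home/username/jdk1.7.0_60
export PATH="$JAVA_HOME/bin:$PATH"
# Print the configured location so you can confirm it.
echo "$JAVA_HOME"
```

Once the real JDK is in place, `java -version` should print the installed version.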

3. Untar kafka
Next, “tar -xvzf” the kafka_2.10-0.8.2.1.tgz file and move it to a destination folder (say, /home/username/), then change into the extracted directory.

> tar -xvzf kafka_2.10-0.8.2.1.tgz
> cd kafka_2.10-0.8.2.1

4. Start Zookeeper
Kafka relies on ZooKeeper for coordination and configuration information. Kafka ships with a reasonable default ZooKeeper configuration. The following command launches a local ZooKeeper instance.

> bin/zookeeper-server-start.sh config/zookeeper.properties
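Optionally, you can check that ZooKeeper is up and listening on its default port (2181) using its “ruok” health-check command; a healthy server replies “imok”. This assumes nc (netcat) is available on your machine.

> echo ruok | nc localhost 2181
imok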

5. Start Kafka Broker
A server in a Kafka cluster is called a broker; brokers hold all the messages passed through Kafka. The following command launches one broker instance (run it in a separate terminal, leaving ZooKeeper running).

> bin/kafka-server-start.sh config/server.properties

6. Create Topic
Kafka partitions incoming messages for a topic and assigns those partitions to the available Kafka brokers. The number of partitions can be set per topic, with a broker-level default. The following command creates a topic named test with one partition and a replication factor of one.

> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
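As a quick sanity check, the same kafka-topics.sh script can also describe the topic, showing its partition count, leader and replica assignment:

> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test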

7. List Topics
To verify that the topic was created successfully, we can list the topics hosted by the broker with the following command.

> bin/kafka-topics.sh --list --zookeeper localhost:2181

8. Start Producer
Producers publish messages to Kafka topics and can choose which topic, and which partition within the topic, to publish each message to. We will start a console producer now; each line typed at its prompt is sent as a message.

> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

9. Start Consumer
Consumers subscribe to topics and consume the messages. Let's consume the messages with the following command.

> bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
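With the producer and consumer running in separate terminals, lines typed into the producer should appear in the consumer. The messages below are only an illustration; type anything you like.

In the producer terminal:

> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
hello kafka
my second message

The consumer terminal then prints the same lines:

hello kafka
my second message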

———————————-

Article written by DataDotz Team

DataDotz is a Chennai-based Big Data team primarily focused on consulting and training on technologies such as Apache Hadoop, Apache Spark, NoSQL (HBase, Cassandra, MongoDB), Search and Cloud Computing.

Note: DataDotz also provides classroom-based Apache Kafka training in Chennai. The course includes Cassandra, MongoDB, Scala and Apache Spark training. For more details on Apache Spark training in Chennai, please visit http://datadotz.com/training/