Multinode Installation in Storm

Hi Stormviewers,

This post will give you an idea of how to do a multinode installation of Storm. Storm is well suited to fast, on-the-fly analysis of streaming data. Follow the installation steps below to get Storm running on your machines.


Download:

Use the links in the steps below to download Storm. If you wish to download the latest version of Storm, you can visit the official Storm website.

In the walkthrough below, the first machine is configured with ZooKeeper, Nimbus, a Supervisor and the Storm UI. The second and third machines are configured as Supervisors only, connected to the Nimbus on the first machine.
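
At a glance, the roles are spread across the three machines like this (the IPs are just the ones used in this post's hosts entries):

10.0.0.2 datadotz_nimbus -> ZooKeeper, Nimbus, Supervisor, Storm UI
10.0.0.3 datadotz_supervisor1 -> Supervisor
10.0.0.4 datadotz_supervisor2 -> Supervisor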

Machine – 1

It is best practice to add the IPs and hostnames of all cluster machines to the hosts file. If your network is backed by a DNS server, the changes below are not needed. First, let's map the hostnames that will be used for this cluster by editing /etc/hosts.

Set in /etc/hosts

$ sudo vi /etc/hosts

10.0.0.2 datadotz_nimbus
10.0.0.3 datadotz_supervisor1
10.0.0.4 datadotz_supervisor2
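
You can quickly confirm that these names resolve as expected (hostnames and IPs are just the ones assumed above):

$ ping -c 1 datadotz_nimbus
$ ping -c 1 datadotz_supervisor1
$ ping -c 1 datadotz_supervisor2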

ZooKeeper Installation
=================

Storm needs ZooKeeper, so we set it up first. Follow these steps to get ZooKeeper running on the current machine.

1) Download and Configure ZooKeeper
=================

$ wget http://www.webhostingreviewjam.com/mirror/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
$ tar -zxvf zookeeper-3.4.6.tar.gz
$ cd zookeeper-3.4.6
$ cd conf
$ vi zoo.cfg

tickTime=2000
initLimit=10
dataDir=/home/Bigdata/zookeeper-storm-Meta
syncLimit=5
clientPort=2181
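
ZooKeeper keeps its state in the dataDir configured above, so it is worth making sure that directory exists and is writable before starting the server (the path is just the one chosen in this post):

$ mkdir -p /home/Bigdata/zookeeper-storm-Meta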

2) Start ZooKeeper
============

$ bin/zkServer.sh start
$ jps

QuorumPeerMain
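
Besides jps, you can ask ZooKeeper itself whether it is serving; the second command assumes netcat is installed and should answer "imok":

$ bin/zkServer.sh status
$ echo ruok | nc datadotz_nimbus 2181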

===================================================================

Download Storm
==============

Now we can install Storm on each machine in the cluster, one by one. Carry out the steps below to get Storm installed on the current machine, and check it using jps (Java Process Status).
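
Storm's daemons run on the JVM and its control scripts use Python, so it is worth confirming both are available first (the 0.9.x releases generally expect Java 6+ and Python 2.6+):

$ java -version
$ python --version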

$ wget http://ftp.wayne.edu/apache/storm/apache-storm-0.9.4/apache-storm-0.9.4.tar.gz
$ tar -zxvf apache-storm-0.9.4.tar.gz
$ cd apache-storm-0.9.4
$ cd conf
$ vi storm.yaml

storm.zookeeper.servers:
  - "datadotz_nimbus"
storm.zookeeper.port: 2181
nimbus.host: "datadotz_nimbus"
storm.local.dir: "/home/saravanan/hadoop2/storm"
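
Optionally, you can also declare in the same storm.yaml which worker ports each supervisor should offer; if this is left out, Storm falls back to its defaults (6700-6703). A minimal example:

supervisor.slots.ports:
  - 6700
  - 6701
  - 6702
  - 6703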

Start Storm
=========

$ cd apache-storm-0.9.4
$ bin/storm nimbus
$ bin/storm supervisor
$ bin/storm ui
$ jps

core
QuorumPeerMain
nimbus
supervisor
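
Note that bin/storm nimbus, bin/storm supervisor and bin/storm ui each run in the foreground, so start them in separate terminals or push them to the background, for example (the log file names here are arbitrary):

$ nohup bin/storm nimbus > nimbus.log 2>&1 &
$ nohup bin/storm supervisor > supervisor.log 2>&1 &
$ nohup bin/storm ui > ui.log 2>&1 &
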
===================================================================

Machine – 2

It is best practice to add the IPs and hostnames of all cluster machines to the hosts file. If your network is backed by a DNS server, the changes below are not needed. Edit /etc/hosts.

Set in /etc/hosts

$ sudo vi /etc/hosts

10.0.0.2 datadotz_nimbus
10.0.0.3 datadotz_supervisor1
10.0.0.4 datadotz_supervisor2

Download Storm
==============
$ wget http://ftp.wayne.edu/apache/storm/apache-storm-0.9.4/apache-storm-0.9.4.tar.gz
$ tar -zxvf apache-storm-0.9.4.tar.gz
$ cd apache-storm-0.9.4
$ cd conf
$ vi storm.yaml

storm.zookeeper.servers:
  - "datadotz_nimbus"
storm.zookeeper.port: 2181
nimbus.host: "datadotz_nimbus"
storm.local.dir: "/home/saravanan/hadoop2/storm"
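
Before starting the supervisor, it can help to check that this machine can actually reach ZooKeeper and Nimbus on the first machine (assuming netcat is available; 6627 is Storm's default nimbus.thrift.port):

$ nc -z datadotz_nimbus 2181
$ nc -z datadotz_nimbus 6627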

Start Storm
=========

$ cd apache-storm-0.9.4
$ bin/storm supervisor
$ jps

supervisor
===================================================================

Machine – 3

It is best practice to add the IPs and hostnames of all cluster machines to the hosts file. If your network is backed by a DNS server, the changes below are not needed. Edit /etc/hosts.

Set in /etc/hosts

$ sudo vi /etc/hosts

10.0.0.2 datadotz_nimbus
10.0.0.3 datadotz_supervisor1
10.0.0.4 datadotz_supervisor2

Download Storm
==============

$ wget http://ftp.wayne.edu/apache/storm/apache-storm-0.9.4/apache-storm-0.9.4.tar.gz
$ tar -zxvf apache-storm-0.9.4.tar.gz
$ cd apache-storm-0.9.4
$ cd conf
$ vi storm.yaml

storm.zookeeper.servers:
  - "datadotz_nimbus"
storm.zookeeper.port: 2181
nimbus.host: "datadotz_nimbus"
storm.local.dir: "/home/saravanan/hadoop2/storm"

Start Storm
=========

$ cd apache-storm-0.9.4
$ bin/storm supervisor
$ jps

supervisor

Browser: http://datadotz_nimbus:8080

In the Storm UI you can see how many supervisors are running, along with other information about the current cluster. Once you start processing with Storm, you can also use this UI to see the topologies that have been created and details about each of them.
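
To verify the cluster end to end, you can submit one of the storm-starter example topologies bundled with the binary distribution (the jar path and version may differ in your download), then list and kill it:

$ bin/storm jar examples/storm-starter/storm-starter-topologies-0.9.4.jar storm.starter.WordCountTopology wordcount
$ bin/storm list
$ bin/storm kill wordcount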

===================================================================

Article written by DataDotz Team

DataDotz is a Chennai-based big data team primarily focused on consulting and training on technologies such as Apache Hadoop, Apache Spark, NoSQL (HBase, Cassandra, MongoDB), Search and Cloud Computing.

Note: DataDotz also provides classroom-based Apache Kafka training in Chennai. The course includes Cassandra, MongoDB, Scala and Apache Spark training. For more details related to Apache Spark training in Chennai, please visit http://datadotz.com/training/