Fully Distributed Multi-Node Hadoop-1.x Installation on AWS EC2

Hi Hadoop learners,

This article helps you install a Hadoop-1 multi-node cluster on AWS. Launch as many EC2 machines as your requirements demand, then follow the instructions below to bring up the Hadoop-1 cluster.

AWS EC2 Hadoop Cluster Wiki

MACHINE – 1

1. Download the Hadoop tarball from the link below. You can also take the link from the Apache Hadoop site. Below is the link for hadoop-1.2.1; if it is broken, please check the official Hadoop site.
wget http://mirror.olnevhost.net/pub/apache/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz

2. Download Java from Oracle. Please check the Oracle website for the current version, or if the link below is broken.
wget http://download.oracle.com/otn/java/jdk/6u45-b06/jdk-6u45-linux-x64.bin?AuthParam=1428646766_2456c1516fcf63e9734ff30e51667a2b

3. Unpack the downloads
$tar -zxvf hadoop-1.2.1.tar.gz
$sh jdk-6u45-linux-x64.bin

4. Set the Java path in the Linux environment. Edit .bashrc and add the two lines below.
$vi .bashrc
export JAVA_HOME=/home/ec2-user/jdk1.6.0_45
export PATH=$HOME/bin:$JAVA_HOME/bin:$PATH

Source the .bashrc file so the changes take effect immediately in the current ssh session
$source .bashrc
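
To confirm the shell now picks up the new JDK, check the Java version (a quick sanity check; the exact version string depends on the JDK build you installed):

$ which java
$ java -version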
————————————————————————————————————–

5. Modify the Hadoop configuration files. Below are the files that matter for each daemon.

NAMENODE            core-site.xml
JOBTRACKER          mapred-site.xml
SECONDARYNAMENODE   masters
DATANODE            slaves
TASKTRACKER         slaves

Ports used by Hadoop daemons
Remote Procedure Call (RPC) is a protocol that one program can use to request a service from a program on another computer in the network without having to understand network details. WEB in the table below is the port of the daemon's web UI. The NameNode and JobTracker RPC ports are the ones we bind in the configuration files below; the web UI ports are Hadoop-1 defaults.

Hadoop Daemon       RPC Port   WEB UI
NameNode            50000      50070
DataNode            50010      50075
SecondaryNameNode   -          50090
JobTracker          50001      50030
TaskTracker         50020      50060
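
On EC2, these web UI ports are reachable from your browser only if the instances' security group allows inbound traffic on them. A minimal sketch with the AWS CLI, assuming a hypothetical security group ID and that you restrict access to your own address range:

$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 50070 --cidr 203.0.113.0/24
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 50030 --cidr 203.0.113.0/24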

Set the hostnames in /etc/hosts

$sudo vi /etc/hosts

10.0.0.2 datadotz_master
10.0.0.3 datadotz_slave1
10.0.0.4 datadotz_slave2
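
To verify that the names resolve before going further, ping each entry (the private IPs above must of course match your actual EC2 instances):

$ ping -c 1 datadotz_master
$ ping -c 1 datadotz_slave1
$ ping -c 1 datadotz_slave2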

Hadoop Configuration

$cd hadoop-1.2.1
$cd conf

$vi core-site.xml
<!-- This conf sets the default filesystem: the IP/hostname and port the NameNode binds to -->
<property>
<name>fs.default.name</name>
<value>hdfs://datadotz_master:50000</value>
</property>

$vi mapred-site.xml
<!-- This conf sets the JobTracker address for MapReduce -->
<property>
<name>mapred.job.tracker</name>
<value>datadotz_master:50001</value>
</property>

$vi hdfs-site.xml
<!-- Directory where the NameNode stores its metadata -->
<property>
<name>dfs.name.dir</name>
<value>/home/ec2-user/hadoop-dir/name-dir</value>
</property>

<!-- Directory where the DataNode stores blocks and related data -->
<property>
<name>dfs.data.dir</name>
<value>/home/ec2-user/hadoop-dir/data-dir</value>
</property>

<!-- Disable HDFS permission checking (convenient for a test cluster) -->
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
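
The metadata and data directories referenced above are best created up front; Hadoop can usually create them itself, but doing it manually avoids permission surprises:

$ mkdir -p /home/ec2-user/hadoop-dir/name-dir
$ mkdir -p /home/ec2-user/hadoop-dir/data-dir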

$vi hadoop-env.sh
export JAVA_HOME=/home/ec2-user/jdk1.6.0_45

$vi masters
datadotz_slave1
(the masters file lists the host that runs the SecondaryNameNode; here that is datadotz_slave1)

$vi slaves
datadotz_master
datadotz_slave1
datadotz_slave2
(the slaves file lists the hosts that run a DataNode and TaskTracker; here the master doubles as a worker)
————————————————————————————————————–
Passwordless authentication. If you use the default scripts such as start-all.sh and stop-all.sh, they need to log in (using ssh) to the other machines from the machine where you run them; typically we run them from the NameNode machine. While logging in, every machine will ask for a password, so on a 10-node cluster you would have to enter a password at least 10 times. To avoid this, we set up passwordless authentication: first generate an ssh key, then copy the public key into the authorized_keys file of each destination machine.

Install the openssh-server (usually preinstalled on EC2 images)

$ sudo apt-get install openssh-server    (on Ubuntu; on Amazon Linux use: sudo yum install openssh-server)

Generate the ssh key

(ssh-keygen generates, manages and converts authentication keys)

$ cd
$ ssh-keygen -t rsa
$ cd .ssh
$ cat id_rsa.pub >> authorized_keys
$ chmod 600 authorized_keys    (ssh ignores a group- or world-writable authorized_keys file)

Set up passwordless ssh to localhost and to the slaves

$ ssh localhost    (or the machine's IP address)
(it should log you in without asking for a password)
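
To extend this to the slaves, append the master's public key to each slave's authorized_keys. A sketch assuming the default ec2-user account and that you can still reach the slaves with your EC2 key pair at this point:

$ cat ~/.ssh/id_rsa.pub | ssh ec2-user@datadotz_slave1 'cat >> ~/.ssh/authorized_keys'
$ cat ~/.ssh/id_rsa.pub | ssh ec2-user@datadotz_slave2 'cat >> ~/.ssh/authorized_keys'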
————————————————————————————————————–

MACHINE – 2

1. Download the Hadoop tarball from the link below. You can also take the link from the Apache Hadoop site. Below is the link for hadoop-1.2.1; if it is broken, please check the official Hadoop site.
wget http://mirror.olnevhost.net/pub/apache/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz

2. Download Java from Oracle. Please check the Oracle website for the current version, or if the link below is broken.
wget http://download.oracle.com/otn/java/jdk/6u45-b06/jdk-6u45-linux-x64.bin?AuthParam=1428646766_2456c1516fcf63e9734ff30e51667a2b

3. Unpack the downloads
$tar -zxvf hadoop-1.2.1.tar.gz
$sh jdk-6u45-linux-x64.bin

4. Set the Java path in the Linux environment. Edit .bashrc and add the two lines below.
$vi .bashrc
export JAVA_HOME=/home/ec2-user/jdk1.6.0_45
export PATH=$HOME/bin:$JAVA_HOME/bin:$PATH

Source the .bashrc file so the changes take effect immediately in the current ssh session
$source .bashrc
————————————————————————————————————–

5. Modify the Hadoop configuration files. Below are the files that matter for each daemon.

NAMENODE            core-site.xml
JOBTRACKER          mapred-site.xml
SECONDARYNAMENODE   masters
DATANODE            slaves
TASKTRACKER         slaves

Ports used by Hadoop daemons
Remote Procedure Call (RPC) is a protocol that one program can use to request a service from a program on another computer in the network without having to understand network details. WEB in the table below is the port of the daemon's web UI.

Hadoop Daemon       RPC Port   WEB UI
NameNode            50000      50070
DataNode            50010      50075
SecondaryNameNode   -          50090
JobTracker          50001      50030
TaskTracker         50020      50060

Set the hostnames in /etc/hosts

$sudo vi /etc/hosts

10.0.0.2 datadotz_master
10.0.0.3 datadotz_slave1
10.0.0.4 datadotz_slave2

Hadoop Configuration

$cd hadoop-1.2.1
$cd conf

$vi core-site.xml

<property>
<name>fs.default.name</name>
<value>hdfs://datadotz_master:50000</value>
</property>

$vi mapred-site.xml

<property>
<name>mapred.job.tracker</name>
<value>datadotz_master:50001</value>
</property>

$vi hdfs-site.xml

<!-- Directory where the DataNode stores blocks and related data -->
<property>
<name>dfs.data.dir</name>
<value>/home/ec2-user/hadoop-dir/data-dir</value>
</property>

$vi hadoop-env.sh

export JAVA_HOME=/home/ec2-user/jdk1.6.0_45
————————————————————————————————————–
Passwordless authentication. If you use the default scripts such as start-all.sh and stop-all.sh, they need to log in (using ssh) to the other machines from the machine where you run them; typically we run them from the NameNode machine. While logging in, every machine will ask for a password, so on a 10-node cluster you would have to enter a password at least 10 times. To avoid this, we set up passwordless authentication: first generate an ssh key, then copy the public key into the authorized_keys file of each destination machine.

Generate the ssh key

(ssh-keygen generates, manages and converts authentication keys)

$ cd
$ ssh-keygen -t rsa
$ cd .ssh
$ cat id_rsa.pub >> authorized_keys
$ chmod 600 authorized_keys    (ssh ignores a group- or world-writable authorized_keys file)

Set up passwordless ssh to localhost and to the slaves

$ ssh localhost    (or the machine's IP address)
(it should log you in without asking for a password)
————————————————————————————————————–
Copy the NameNode's id_rsa.pub key and append it to this machine's authorized_keys file, so the master can ssh in without a password.
————————————————————————————————————–

MACHINE – 3

1. Download the Hadoop tarball from the link below. You can also take the link from the Apache Hadoop site. Below is the link for hadoop-1.2.1; if it is broken, please check the official Hadoop site.
wget http://mirror.olnevhost.net/pub/apache/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz

2. Download Java from Oracle. Please check the Oracle website for the current version, or if the link below is broken.
wget http://download.oracle.com/otn/java/jdk/6u45-b06/jdk-6u45-linux-x64.bin?AuthParam=1428646766_2456c1516fcf63e9734ff30e51667a2b

3. Unpack the downloads
$tar -zxvf hadoop-1.2.1.tar.gz
$sh jdk-6u45-linux-x64.bin

4. Set the Java path in the Linux environment. Edit .bashrc and add the two lines below.
$vi .bashrc
export JAVA_HOME=/home/ec2-user/jdk1.6.0_45
export PATH=$HOME/bin:$JAVA_HOME/bin:$PATH

Source the .bashrc file so the changes take effect immediately in the current ssh session
$source .bashrc
————————————————————————————————————–

5. Modify the Hadoop configuration files. Below are the files that matter for each daemon.

NAMENODE            core-site.xml
JOBTRACKER          mapred-site.xml
SECONDARYNAMENODE   masters
DATANODE            slaves
TASKTRACKER         slaves

Ports used by Hadoop daemons
Remote Procedure Call (RPC) is a protocol that one program can use to request a service from a program on another computer in the network without having to understand network details. WEB in the table below is the port of the daemon's web UI.

Hadoop Daemon       RPC Port   WEB UI
NameNode            50000      50070
DataNode            50010      50075
SecondaryNameNode   -          50090
JobTracker          50001      50030
TaskTracker         50020      50060

Set the hostnames in /etc/hosts

$sudo vi /etc/hosts

10.0.0.2 datadotz_master
10.0.0.3 datadotz_slave1
10.0.0.4 datadotz_slave2

Hadoop Configuration

$cd hadoop-1.2.1
$cd conf

$vi core-site.xml

<property>
<name>fs.default.name</name>
<value>hdfs://datadotz_master:50000</value>
</property>

$vi mapred-site.xml

<property>
<name>mapred.job.tracker</name>
<value>datadotz_master:50001</value>
</property>

$vi hdfs-site.xml

<!-- Directory where the DataNode stores blocks and related data -->
<property>
<name>dfs.data.dir</name>
<value>/home/ec2-user/hadoop-dir/data-dir</value>
</property>

$vi hadoop-env.sh

export JAVA_HOME=/home/ec2-user/jdk1.6.0_45
————————————————————————————————————–
Passwordless authentication. If you use the default scripts such as start-all.sh and stop-all.sh, they need to log in (using ssh) to the other machines from the machine where you run them; typically we run them from the NameNode machine. While logging in, every machine will ask for a password, so on a 10-node cluster you would have to enter a password at least 10 times. To avoid this, we set up passwordless authentication: first generate an ssh key, then copy the public key into the authorized_keys file of each destination machine.

Generate the ssh key

(ssh-keygen generates, manages and converts authentication keys)

$ cd
$ ssh-keygen -t rsa
$ cd .ssh
$ cat id_rsa.pub >> authorized_keys
$ chmod 600 authorized_keys    (ssh ignores a group- or world-writable authorized_keys file)

Set up passwordless ssh to localhost and to the slaves

$ ssh localhost    (or the machine's IP address)

(it should log you in without asking for a password)

————————————————————————————————————–

Copy the NameNode's id_rsa.pub key and append it to this machine's authorized_keys file, so the master can ssh in without a password.
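
With the keys in place on both slaves, you can confirm from the master that passwordless logins work end to end (using the hostnames configured in /etc/hosts):

$ ssh datadotz_slave1 hostname
$ ssh datadotz_slave2 hostname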

————————————————————————————————————–

Master Node (NameNode Machine) (MACHINE – 1)

————————————————————————————————————–

Format the Hadoop NameNode

$cd
$cd hadoop-1.2.1
$bin/hadoop namenode -format
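
If the format succeeds, the NameNode metadata directory configured in hdfs-site.xml gets populated; you can eyeball it (the contents are Hadoop-1 internals such as VERSION and fsimage):

$ ls /home/ec2-user/hadoop-dir/name-dir/current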

Start all Hadoop-related services

$ bin/start-all.sh    (run this only on the NameNode machine; it starts the daemons on all nodes via ssh)
$ jps    (Java process status)

NameNode
DataNode
JobTracker
TaskTracker

MACHINE – 2

The jps command below lists the JVM processes running on the machine.
$ jps

SecondaryNameNode
DataNode
TaskTracker

MACHINE – 3

$ jps

DataNode
TaskTracker

(Browse the NameNode and JobTracker web GUIs)

NameNode : http://datadotz_master:50070
JobTracker : http://datadotz_master:50030
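
As a final smoke test, ask HDFS for a cluster report and run one of the bundled example jobs (the examples jar ships inside the hadoop-1.2.1 tarball; the pi arguments here are arbitrary small values):

$ bin/hadoop dfsadmin -report    (should show 3 live DataNodes)
$ bin/hadoop jar hadoop-examples-1.2.1.jar pi 2 10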

$ bin/stop-all.sh    (stop all Hadoop-related services)

———————————-

Article written by DataDotz Team

DataDotz is a Chennai-based Big Data team primarily focused on consulting and training in technologies such as Apache Hadoop, Apache Spark, NoSQL (HBase, Cassandra, MongoDB), Search, and Cloud Computing.

Note: DataDotz also provides classroom-based Apache Kafka training in Chennai. The course includes Cassandra, MongoDB, Scala, and Apache Spark training. For more details about Apache Spark training in Chennai, please visit http://datadotz.com/training/