Installing Apache Phoenix (SQL on HBase) with Sample Queries

Hi Phoenix Learners,

This article guides you through installing Apache Phoenix on your machine. Phoenix is an open-source SQL skin for HBase. Your machine should already have Hadoop and HBase installed before you install Phoenix. Phoenix has several versions, so check the version compatibility:

Phoenix 2.x – HBase 0.94.x
Phoenix 3.x – HBase 0.94.x
Phoenix 4.x – HBase 0.98.1+
In this article we will install Phoenix 3.x against HBase 0.94.x.


Download

Download Phoenix using the link below. Visit the official Phoenix web site for its latest version.

wget http://apache.mirrors.lucidnetworks.net/phoenix/phoenix-3.3.1/bin/phoenix-3.3.1-bin.tar.gz

Installation

Untar the downloaded Phoenix release into any path.
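For example, assuming the tarball was downloaded into the home directory used throughout this walkthrough:

```shell
# Extract the Phoenix release; /home/Datadotz is the example path used in this article
tar -xzf phoenix-3.3.1-bin.tar.gz -C /home/Datadotz/
```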

Now, get into the Phoenix Hadoop directory: /home/Datadotz/phoenix-3.3.1-bin/hadoop1/

Copy the two Hadoop jars (phoenix-3.3.1-client-hadoop1 and phoenix-core-3.3.1-tests-hadoop1) and paste them into the hadoop-1.2.1/lib folder.
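Assuming the example paths used in this article, the copy looks like this (adjust the paths to your own install locations):

```shell
# Example paths from this walkthrough; change them to match your layout
cd /home/Datadotz/phoenix-3.3.1-bin/hadoop1
cp phoenix-3.3.1-client-hadoop1.jar /home/Datadotz/hadoop-1.2.1/lib/
cp phoenix-core-3.3.1-tests-hadoop1.jar /home/Datadotz/hadoop-1.2.1/lib/
```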

Then, get into the Phoenix common lib directory: /home/Datadotz/phoenix-3.3.1-bin/common

Copy all the jars there and paste them into the hbase-0.94.15/lib folder.
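Again using the example paths from this walkthrough, this is a single copy:

```shell
# Copy every Phoenix common jar into the HBase lib directory (example paths)
cp /home/Datadotz/phoenix-3.3.1-bin/common/*.jar /home/Datadotz/hbase-0.94.15/lib/
```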

Set the following in .bashrc

export PHOENIX_HOME=/home/Datadotz/phoenix-3.3.1-bin
export HADOOP_HOME=/home/Datadotz/hadoop-1.2.1
export HBASE_HOME=/home/Datadotz/hbase-0.94.15
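After editing .bashrc, reload it in the current shell so the variables take effect:

```shell
# Reload .bashrc and confirm the variables are set
source ~/.bashrc
echo $PHOENIX_HOME $HADOOP_HOME $HBASE_HOME
```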

Starting Phoenix:

Start Hadoop and HBase cleanly before starting Phoenix, and check whether all the daemons belonging to Hadoop and HBase are running on your machine.
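One quick way to check is the jps command. On a typical single-node (pseudo-distributed) Hadoop 1.x plus HBase setup you would expect to see daemons along these lines, though the exact list depends on your configuration:

```shell
jps
# Expected daemons on a typical single-node setup (names vary by configuration):
#   NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker  (Hadoop 1.x)
#   HMaster, HRegionServer, HQuorumPeer                             (HBase)
```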

Now let's check that the Phoenix jars are visible to HBase using the command below:

bin/hbase classpath | grep 'phoenix'

This will show seven Phoenix jars highlighted in the terminal. Your machine is now ready to start Phoenix.

Command to start Phoenix:

Change into /home/Datadotz/phoenix-3.3.1-bin/hadoop1/ in your terminal and run the command below:

bin/sqlline.py Datadotz

The name "Datadotz" in the command above is the ZooKeeper host name. If ZooKeeper is running on the same machine, you can give 'localhost' instead of 'Datadotz'.
You will get a Phoenix shell as shown below.

0: jdbc:phoenix:Datadotz>
Now Phoenix is ready on your machine. Run the queries below to check how Phoenix works.

To list the tables:
0: jdbc:phoenix:Datadotz> !tables

To create a table:
create table Bigdata (ID integer not null primary key, Name varchar);

To insert data in created table:
0: jdbc:phoenix:Datadotz> upsert into Bigdata values (1,'Hadoop');
0: jdbc:phoenix:Datadotz> upsert into Bigdata values (2,'Mongodb');
0: jdbc:phoenix:Datadotz> upsert into Bigdata values (3,'Spark');

To view the table:
select * from Bigdata;

To write a 'where' condition:
0: jdbc:phoenix:Datadotz> select * from Bigdata where name='Spark';

That’s it! Phoenix is working in your machine.

Check for the data in HBase:

Enter the HBase shell:
bin/hbase shell

List the tables:
hbase(main):001:0> list
A table named 'BIGDATA' will appear in the list. (Phoenix stores unquoted identifiers in upper case, which is why the name shows as BIGDATA.)

View the table:
hbase(main):005:0> scan 'BIGDATA'

This is how Phoenix works. Now you can start querying with Phoenix.

———————————-

Article written by DataDotz Team

DataDotz is a Chennai-based Big Data team primarily focused on consulting and training in technologies such as Apache Hadoop, Apache Spark, NoSQL (HBase, Cassandra, MongoDB), Search, and Cloud Computing.

Note: DataDotz also provides classroom-based Apache Kafka training in Chennai. The course includes Cassandra, MongoDB, Scala, and Apache Spark training. For more details on Apache Spark training in Chennai, please visit http://datadotz.com/training/