Installing CDH5 on a Single Linux (Ubuntu) Node in Pseudo-distributed Mode on Amazon EC2

Cloudera has a good article here on how to install CDH5 on a Single Linux Node in Pseudo-distributed Mode:

But I have added more information here for Ubuntu that will help you get it up and running quickly.
I assume you know how to launch an instance on Amazon EC2. We are starting with Ubuntu 14.04 and Java pre-installed on this instance.
1) First, find out which version of Ubuntu you are running by using the following command:
root@ip-172-30-0-84:/home/ubuntu# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.2 LTS
Release: 14.04
Codename: trusty
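
Since the codename is what selects the right repository in the next step, note that lsb_release can also print just that field (the -s flag drops the label):

lsb_release -cs

This prints only the codename (trusty in our case).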

2) Download the CDH 5 “1-click Install” package according to your Ubuntu version: for us, it is trusty, as highlighted in step 1.
wget http://archive.cloudera.com/cdh5/one-click-install/trusty/amd64/cdh5-repository_1.0_all.deb
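
If you would rather not hard-code the codename, you can substitute it into the URL directly. This is just a convenience and assumes your release is one Cloudera publishes a one-click package for (such as precise or trusty):

wget http://archive.cloudera.com/cdh5/one-click-install/$(lsb_release -cs)/amd64/cdh5-repository_1.0_all.deb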

3) Install the package:
root@ip-172-30-0-84:/home/ubuntu# sudo dpkg -i cdh5-repository_1.0_all.deb
Selecting previously unselected package cdh5-repository.
(Reading database … 70808 files and directories currently installed.)
Preparing to unpack cdh5-repository_1.0_all.deb …
Unpacking cdh5-repository (1.0) …
Setting up cdh5-repository (1.0) …
gpg: keyring `/etc/apt/secring.gpg' created
gpg: keyring `/etc/apt/trusted.gpg.d/cloudera-cdh5.gpg' created
gpg: /etc/apt/trustdb.gpg: trustdb created
gpg: key 02A818DD: public key "Cloudera Apt Repository" imported
gpg: Total number processed: 1
gpg: imported: 1
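
As the gpg lines above show, the one-click package also imports Cloudera's repository signing key, so you do not need to add it by hand. If you want to confirm apt trusts it, you can check:

sudo apt-key list | grep Cloudera

which should print the "Cloudera Apt Repository" uid imported above.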

4) Install Hadoop in pseudo-distributed mode:
4a) root@ip-172-30-0-84:/home/ubuntu# sudo apt-get update
Ign http://us-east-1.ec2.archive.ubuntu.com trusty InRelease
Ign http://us-east-1.ec2.archive.ubuntu.com trusty-updates InRelease
Hit http://us-east-1.ec2.archive.ubuntu.com trusty Release.gpg
Get:1 http://archive.cloudera.com trusty-cdh5 InRelease [1,930 B]
Get:2 http://us-east-1.ec2.archive.ubuntu.com trusty-updates Release.gpg [933 B]
Ign http://security.ubuntu.com trusty-security InRelease
Ign http://download.draios.com stable-amd64/ InRelease
Ign http://ppa.launchpad.net trusty InRelease
Ign http://ppa.launchpad.net trusty InRelease
Hit http://us-east-1.ec2.archive.ubuntu.com trusty Release
Get:3 http://us-east-1.ec2.archive.ubuntu.com trusty-updates Release [63.5 kB]
Get:4 http://download.draios.com stable-amd64/ Release.gpg [490 B]
Get:5 http://security.ubuntu.com trusty-security Release.gpg [933 B]
Get:6 http://download.draios.com stable-amd64/ Release [753 B]
Get:7 http://security.ubuntu.com trusty-security Release [63.5 kB]
Hit http://ppa.launchpad.net trusty Release.gpg
Get:8 http://archive.cloudera.com trusty-cdh5/contrib Sources [11.0 kB]
Hit http://ppa.launchpad.net trusty Release.gpg
Hit http://us-east-1.ec2.archive.ubuntu.com trusty/main Sources
Hit http://ppa.launchpad.net trusty Release
Hit http://us-east-1.ec2.archive.ubuntu.com trusty/universe Sources
Hit http://us-east-1.ec2.archive.ubuntu.com trusty/main amd64 Packages
Hit http://us-east-1.ec2.archive.ubuntu.com trusty/universe amd64 Packages
Hit http://us-east-1.ec2.archive.ubuntu.com trusty/main Translation-en
Hit http://us-east-1.ec2.archive.ubuntu.com trusty/universe Translation-en
Get:9 http://us-east-1.ec2.archive.ubuntu.com trusty-updates/main Sources [229 kB]
Get:10 http://archive.cloudera.com trusty-cdh5/contrib amd64 Packages [26.6 kB]
Get:11 http://us-east-1.ec2.archive.ubuntu.com trusty-updates/universe Sources [133 kB]
Get:12 http://us-east-1.ec2.archive.ubuntu.com trusty-updates/main amd64 Packages [600 kB]
Get:13 http://us-east-1.ec2.archive.ubuntu.com trusty-updates/universe amd64 Packages [307 kB]
Get:14 http://us-east-1.ec2.archive.ubuntu.com trusty-updates/main Translation-en [289 kB]
Get:15 http://us-east-1.ec2.archive.ubuntu.com trusty-updates/universe Translation-en [163 kB]
Get:16 http://download.draios.com stable-amd64/ Packages [1,451 B]
Hit http://ppa.launchpad.net trusty Release
Ign http://us-east-1.ec2.archive.ubuntu.com trusty/main Translation-en_US
Ign http://us-east-1.ec2.archive.ubuntu.com trusty/universe Translation-en_US
Ign http://archive.cloudera.com trusty-cdh5/contrib Translation-en_US
Ign http://archive.cloudera.com trusty-cdh5/contrib Translation-en
Get:17 http://security.ubuntu.com trusty-security/main Sources [91.1 kB]
Get:18 http://security.ubuntu.com trusty-security/universe Sources [29.4 kB]
Hit http://ppa.launchpad.net trusty/main amd64 Packages
Get:19 http://security.ubuntu.com trusty-security/main amd64 Packages [326 kB]
Ign http://download.draios.com stable-amd64/ Translation-en_US
Ign http://download.draios.com stable-amd64/ Translation-en
Hit http://ppa.launchpad.net trusty/main Translation-en
Get:20 http://security.ubuntu.com trusty-security/universe amd64 Packages [113 kB]
Get:21 http://security.ubuntu.com trusty-security/main Translation-en [178 kB]
Hit http://ppa.launchpad.net trusty/main amd64 Packages
Get:22 http://security.ubuntu.com trusty-security/universe Translation-en [66.3 kB]
Hit http://ppa.launchpad.net trusty/main Translation-en
Fetched 2,695 kB in 2s (981 kB/s)
Reading package lists… Done
------------------------------------------------------------
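Before running the install, you can check that apt now resolves the package from the Cloudera repository; apt-cache policy shows the candidate version and where it comes from:

apt-cache policy hadoop-conf-pseudo

The candidate should be a cdh5.x version served from archive.cloudera.com.
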
4b) root@ip-172-30-0-84:/home/ubuntu# sudo apt-get install hadoop-conf-pseudo
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following extra packages will be installed:
avro-libs bigtop-jsvc bigtop-utils hadoop hadoop-hdfs hadoop-hdfs-datanode
hadoop-hdfs-namenode hadoop-hdfs-secondarynamenode hadoop-mapreduce
hadoop-mapreduce-historyserver hadoop-yarn hadoop-yarn-nodemanager
hadoop-yarn-resourcemanager parquet parquet-format zookeeper
The following NEW packages will be installed:
avro-libs bigtop-jsvc bigtop-utils hadoop hadoop-conf-pseudo hadoop-hdfs
hadoop-hdfs-datanode hadoop-hdfs-namenode hadoop-hdfs-secondarynamenode
hadoop-mapreduce hadoop-mapreduce-historyserver hadoop-yarn
hadoop-yarn-nodemanager hadoop-yarn-resourcemanager parquet parquet-format
zookeeper
0 upgraded, 17 newly installed, 0 to remove and 18 not upgraded.
Need to get 196 MB of archives.
After this operation, 238 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib avro-libs all 1.7.6+cdh5.4.4+91-1.cdh5.4.4.p0.6~trusty-cdh5.4.4 [55.9 MB]
Get:2 http://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib bigtop-utils all 0.7.0+cdh5.4.4+0-1.cdh5.4.4.p0.6~trusty-cdh5.4.4 [52.2 kB]
Get:3 http://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib bigtop-jsvc amd64 0.6.0+cdh5.4.4+680-1.cdh5.4.4.p0.6~trusty-cdh5.4.4 [72.8 kB]
Get:4 http://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib zookeeper all 3.4.5+cdh5.4.4+91-1.cdh5.4.4.p0.6~trusty-cdh5.4.4 [4,186 kB]
Get:5 http://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib parquet-format all 2.1.0+cdh5.4.4+12-1.cdh5.4.4.p0.6~trusty-cdh5.4.4 [470 kB]
Get:6 http://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib hadoop-yarn all 2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4 [15.2 MB]
Get:7 http://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib hadoop-mapreduce all 2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4 [28.5 MB]
Get:8 http://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib hadoop-hdfs all 2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4 [18.5 MB]
Get:9 http://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib parquet all 1.5.0+cdh5.4.4+96-1.cdh5.4.4.p0.6~trusty-cdh5.4.4 [28.5 MB]
Get:10 http://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib hadoop all 2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4 [43.6 MB]
Get:11 http://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib hadoop-hdfs-namenode all 2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4 [121 kB]
Get:12 http://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib hadoop-hdfs-datanode all 2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4 [121 kB]
Get:13 http://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib hadoop-hdfs-secondarynamenode all 2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4 [121 kB]
Get:14 http://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib hadoop-yarn-resourcemanager all 2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4 [121 kB]
Get:15 http://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib hadoop-yarn-nodemanager all 2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4 [121 kB]
Get:16 http://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib hadoop-mapreduce-historyserver all 2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4 [121 kB]
Get:17 http://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib hadoop-conf-pseudo all 2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4 [124 kB]
Fetched 196 MB in 5s (34.0 MB/s)
Selecting previously unselected package avro-libs.
(Reading database … 70813 files and directories currently installed.)
Preparing to unpack …/avro-libs_1.7.6+cdh5.4.4+91-1.cdh5.4.4.p0.6~trusty-cdh5.4.4_all.deb …
Unpacking avro-libs (1.7.6+cdh5.4.4+91-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Selecting previously unselected package bigtop-utils.
Preparing to unpack …/bigtop-utils_0.7.0+cdh5.4.4+0-1.cdh5.4.4.p0.6~trusty-cdh5.4.4_all.deb …
Unpacking bigtop-utils (0.7.0+cdh5.4.4+0-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Selecting previously unselected package bigtop-jsvc.
Preparing to unpack …/bigtop-jsvc_0.6.0+cdh5.4.4+680-1.cdh5.4.4.p0.6~trusty-cdh5.4.4_amd64.deb …
Unpacking bigtop-jsvc (0.6.0+cdh5.4.4+680-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Selecting previously unselected package zookeeper.
Preparing to unpack …/zookeeper_3.4.5+cdh5.4.4+91-1.cdh5.4.4.p0.6~trusty-cdh5.4.4_all.deb …
Unpacking zookeeper (3.4.5+cdh5.4.4+91-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Selecting previously unselected package parquet-format.
Preparing to unpack …/parquet-format_2.1.0+cdh5.4.4+12-1.cdh5.4.4.p0.6~trusty-cdh5.4.4_all.deb …
Unpacking parquet-format (2.1.0+cdh5.4.4+12-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Selecting previously unselected package hadoop-yarn.
Preparing to unpack …/hadoop-yarn_2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4_all.deb …
Unpacking hadoop-yarn (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Selecting previously unselected package hadoop-mapreduce.
Preparing to unpack …/hadoop-mapreduce_2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4_all.deb …
Unpacking hadoop-mapreduce (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Selecting previously unselected package hadoop-hdfs.
Preparing to unpack …/hadoop-hdfs_2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4_all.deb …
Unpacking hadoop-hdfs (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Selecting previously unselected package parquet.
Preparing to unpack …/parquet_1.5.0+cdh5.4.4+96-1.cdh5.4.4.p0.6~trusty-cdh5.4.4_all.deb …
Unpacking parquet (1.5.0+cdh5.4.4+96-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Selecting previously unselected package hadoop.
Preparing to unpack …/hadoop_2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4_all.deb …
Unpacking hadoop (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Selecting previously unselected package hadoop-hdfs-namenode.
Preparing to unpack …/hadoop-hdfs-namenode_2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4_all.deb …
Unpacking hadoop-hdfs-namenode (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Selecting previously unselected package hadoop-hdfs-datanode.
Preparing to unpack …/hadoop-hdfs-datanode_2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4_all.deb …
Unpacking hadoop-hdfs-datanode (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Selecting previously unselected package hadoop-hdfs-secondarynamenode.
Preparing to unpack …/hadoop-hdfs-secondarynamenode_2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4_all.deb …
Unpacking hadoop-hdfs-secondarynamenode (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Selecting previously unselected package hadoop-yarn-resourcemanager.
Preparing to unpack …/hadoop-yarn-resourcemanager_2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4_all.deb …
Unpacking hadoop-yarn-resourcemanager (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Selecting previously unselected package hadoop-yarn-nodemanager.
Preparing to unpack …/hadoop-yarn-nodemanager_2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4_all.deb …
Unpacking hadoop-yarn-nodemanager (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Selecting previously unselected package hadoop-mapreduce-historyserver.
Preparing to unpack …/hadoop-mapreduce-historyserver_2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4_all.deb …
Unpacking hadoop-mapreduce-historyserver (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Selecting previously unselected package hadoop-conf-pseudo.
Preparing to unpack …/hadoop-conf-pseudo_2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4_all.deb …
Unpacking hadoop-conf-pseudo (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Processing triggers for man-db (2.6.7.1-1ubuntu1) …
Processing triggers for ureadahead (0.100.0-16) …
Setting up avro-libs (1.7.6+cdh5.4.4+91-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Setting up bigtop-utils (0.7.0+cdh5.4.4+0-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Setting up bigtop-jsvc (0.6.0+cdh5.4.4+680-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Setting up zookeeper (3.4.5+cdh5.4.4+91-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
update-alternatives: using /etc/zookeeper/conf.dist to provide /etc/zookeeper/conf (zookeeper-conf) in auto mode
Setting up parquet-format (2.1.0+cdh5.4.4+12-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Setting up parquet (1.5.0+cdh5.4.4+96-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Setting up hadoop (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
update-alternatives: using /etc/hadoop/conf.empty to provide /etc/hadoop/conf (hadoop-conf) in auto mode
Setting up hadoop-yarn (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Setting up hadoop-yarn-resourcemanager (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
starting resourcemanager, logging to /var/log/hadoop-yarn/yarn-yarn-resourcemanager-ip-172-30-0-84.out
* Started Hadoop resourcemanager:
Setting up hadoop-yarn-nodemanager (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
starting nodemanager, logging to /var/log/hadoop-yarn/yarn-yarn-nodemanager-ip-172-30-0-84.out
* Started Hadoop nodemanager:
Setting up hadoop-mapreduce (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Setting up hadoop-hdfs (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
Setting up hadoop-hdfs-namenode (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
starting namenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-namenode-ip-172-30-0-84.out
* Failed to start Hadoop namenode. Return value: 1
invoke-rc.d: initscript hadoop-hdfs-namenode, action "start" failed.
Setting up hadoop-hdfs-datanode (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
starting datanode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-datanode-ip-172-30-0-84.out
* Failed to start Hadoop datanode. Return value: 1
invoke-rc.d: initscript hadoop-hdfs-datanode, action "start" failed.
Setting up hadoop-hdfs-secondarynamenode (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
starting secondarynamenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-secondarynamenode-ip-172-30-0-84.out
* Failed to start Hadoop secondarynamenode. Return value: 1
invoke-rc.d: initscript hadoop-hdfs-secondarynamenode, action "start" failed.
Setting up hadoop-mapreduce-historyserver (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
chown: changing ownership of '/var/log/hadoop-mapreduce': Operation not permitted
starting historyserver, logging to /var/log/hadoop-mapreduce/mapred-mapred-historyserver-ip-172-30-0-84.out
15/08/13 19:18:51 INFO hs.JobHistoryServer: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting JobHistoryServer
STARTUP_MSG: host = ip-172-30-0-84.ec2.internal/172.30.0.84
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.6.0-cdh5.4.4
STARTUP_MSG: classpath = /etc/hadoop/conf:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26.cloudera.4.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.5.jar:/usr/lib/hadoop/lib/curator-framework-2.7.1.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/logredactor-1.0.3.jar:/usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/aws-java-sdk-1.7.4.jar:/usr/lib/hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/httpcore-4.2.5.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/avro.jar:/usr/lib/hadoop/lib/httpclient-4.2.5.jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/hadoop/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/jersey-json-1.9.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/jetty-6.1.26.cloudera.4.jar:/usr/lib/hadoop/lib/curator-recipes-2.7.1.jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/jersey-server-1.9.jar:/usr/lib/hadoop/lib/zookeeper.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop/lib/htrace-core-3.0.4.jar:/usr/lib/hadoop/lib/junit-4.11.jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop/lib/jsr305-3.0.0.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop/lib/curator-client-2.7.1.jar:/usr/lib/hadoop/lib/slf4j-log4j12.jar:/usr/lib/hadoop/lib/gson-2.2.4.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/.//parquet-thrift.jar:/usr/lib/hadoop/.//hadoop-nfs.jar:/usr/lib/hadoop/.//parquet-column.jar:/usr/lib/hadoop/.//hadoop-common-2.6.0-cdh5.4.4-tests.jar:/usr/lib/hadoop/.//parquet-test-hadoop2.jar:/usr/lib/hadoop/.//parquet-tools.jar:/usr/lib/hadoop/.//hadoop-aws.jar:/usr/lib/hadoop/.//hadoop-aws-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop/.//parquet-hadoop-bundle.jar:/usr/lib/hadoop/.//hadoop-nfs-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop/.//parquet-encoding.jar:/usr/lib/hadoop/.//parquet-format.jar:/usr/lib/hadoop/.//parquet-scala_2.10.jar:/usr/lib/hadoop/.//parquet-avro.jar:/usr/lib/hadoop/.//parquet-cascading.jar:/usr/lib/hadoop/.//parquet-scrooge_2.10.jar:/usr/lib/hadoop/.//hadoop-annotations.jar:/usr/lib/hadoop/.//parquet-pig-b
undle.jar:/usr/lib/hadoop/.//parquet-format-javadoc.jar:/usr/lib/hadoop/.//hadoop-auth.jar:/usr/lib/hadoop/.//parquet-pig.jar:/usr/lib/hadoop/.//parquet-hadoop.jar:/usr/lib/hadoop/.//hadoop-common-tests.jar:/usr/lib/hadoop/.//hadoop-common-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop/.//parquet-generator.jar:/usr/lib/hadoop/.//hadoop-common.jar:/usr/lib/hadoop/.//hadoop-auth-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop/.//hadoop-annotations-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop/.//parquet-common.jar:/usr/lib/hadoop/.//parquet-jackson.jar:/usr/lib/hadoop/.//parquet-protobuf.jar:/usr/lib/hadoop/.//parquet-format-sources.jar:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.4.jar:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-hdfs/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.4.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/lib/hadoop-hdfs/lib/htrace-core-3.0.4.jar:/usr/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.6.0-cdh5.4.4-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-nfs-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-yarn/lib/jetty-util-6.1.26.cloudera.4.jar:/usr/lib/hadoop-yarn/lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/guice-3.0.jar:/usr/lib/hadoop-yarn/lib/xz-1.0.jar:/usr/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/lib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop-yarn/lib/jetty-6.1.26.cloudera.4.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/lib/hadoop-yarn/lib/zookeeper.jar:/usr/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/lib/hadoop-yarn/lib/jline-2.11.jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-
yarn/lib/aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/usr/lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-yarn/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-registry-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/lib/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/usr/lib/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/xz-1.0.jar:/usr/lib/hadoop-mapreduce/lib/jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/lib/avro.jar:/usr/lib/hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/lib/jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/lib/junit-4.11.jar:/usr/lib/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/lib/hadoop-mapreduce/.//hadoop-azure.jar:/usr/lib/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//jetty-util-6.1.26.cloudera.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/lib/hadoop-mapreduce/.//guava-11.0.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/lib/hadoop-mapreduce/.//curator-framework-2.7.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-d
atajoin.jar:/usr/lib/hadoop-mapreduce/.//hadoop-sls.jar:/usr/lib/hadoop-mapreduce/.//paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/.//xz-1.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/lib/hadoop-mapreduce/.//metrics-core-3.0.1.jar:/usr/lib/hadoop-mapreduce/.//jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-mapreduce/.//jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//httpcore-4.2.5.jar:/usr/lib/hadoop-mapreduce/.//commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//jackson-annotations-2.2.3.jar:/usr/lib/hadoop-mapreduce/.//jettison-1.1.jar:/usr/lib/hadoop-mapreduce/.//jasper-runtime-5.5.23.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.6.0-cdh5.4.4-tests.jar:/usr/lib/hadoop-mapreduce/.//snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/.//apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask.jar:/usr/lib/hadoop-mapreduce/.//log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/.//jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/.//commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//commons-digester-1.8.jar:/usr/lib/hadoop-mapreduce/.//xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/.//commons-el-1.0.jar:/usr/lib/hadoop-mapreduce/.//avro.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/lib/hadoop-mapreduce/.//jackson-databind-2.2.3.jar:/usr/lib/hadoop-mapreduce/.//httpclient-4.2.5.jar:/usr/lib/hadoop-mapreduce/.//jackson-core-2.2.3.jar:/usr/lib/hadoop-mapreduce/.//commons-lang-2.6.jar:/usr/lib/hadoop-mapreduce/.//hadoop-azure-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/lib/hadoop-mapreduce/.//hamcrest-core-1.3.jar:/usr/lib/hadoop-mapreduce/.//commons-beanutils-1.7.0.jar:/usr/lib/hadoop-mapreduce/.//jets3t-0.9.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-ant-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//commons-net-3.1.jar:/usr/lib/hadoop-mapreduce/.//commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/lib/hadoop-mapreduce/.//jersey-json-1.9.jar:/usr/lib/hadoop-mapreduce/.//commons-collections-3.2.1.jar:/usr/lib/hadoop-mapreduce/.//jetty-6.1.26.cloudera.4.jar:/usr/lib/hadoop-mapreduce/.//curator-recipes-2.7.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-auth.jar:/usr/lib/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/.//commons-configuration-1.6.jar:/usr/lib/hadoop-mapreduce/.//jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/.//zookeeper.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//mockito-all-1.8.5.jar:/usr/lib/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/usr/lib/hadoop-mapreduce/.//jasper-compiler-5.5.23.jar
:/usr/lib/hadoop-mapreduce/.//htrace-core-3.0.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//junit-4.11.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/lib/hadoop-mapreduce/.//commons-compress-1.4.1.jar:/usr/lib/hadoop-mapreduce/.//jackson-xc-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/lib/hadoop-mapreduce/.//apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/.//hadoop-auth-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-sls-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//jsr305-3.0.0.jar:/usr/lib/hadoop-mapreduce/.//commons-cli-1.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives.jar:/usr/lib/hadoop-mapreduce/.//servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/.//api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-ant.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/lib/hadoop-mapreduce/.//commons-math3-3.1.1.jar:/usr/lib/hadoop-mapreduce/.//asm-3.2.jar:/usr/lib/hadoop-mapreduce/.//activation-1.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//jsch-0.1.42.jar:/usr/lib/hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/lib/hadoop-mapreduce/.//curator-client-2.7.1.jar:/usr/lib/hadoop-mapreduce/.//gson-2.2.4.jar:/usr/lib/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-mapreduce/.//commons-httpclient-3.1.jar:/usr/lib/hadoop-mapreduce/.//activation-1.1.jar:/usr/lib/hadoop-mapreduce/.//apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/.//apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/.//api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/.//asm-3.2.jar:/usr/lib/hadoop-mapreduce/.//avro.jar:/usr/lib/hadoop-mapreduce/.//commons-beanutils-1.7.0.jar:/usr/lib/hadoop-mapreduce/.//commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-mapreduce/.//commons-cli-1.2.jar:/usr/lib/hadoop-mapreduce/.//commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/.//commons-collections-3.2.1.jar:/usr/lib/hadoop-mapreduce/.//commons-compress-1.4.1.jar:/usr/lib/hadoop-mapreduce/.//commons-configuration-1.6.jar:/usr/lib/hadoop-mapreduce/.//commons-digester-1.8.jar:/usr/lib/hadoop-mapreduce/.//commons-el-1.0.jar:/usr/lib/hadoop-mapreduce/.//commons-httpclient-3.1.jar:/usr/lib/hadoop-mapreduce/.//commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/.//commons-lang-2.6.jar:/usr/lib/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/.//commons-math3-3.1.1.jar:/usr/lib/hadoop-mapreduce/.//commons-net-3.1.jar:/usr/lib/hadoop-mapreduce/.//curator-client-2.7.1.jar:/usr/lib/hadoop-mapreduce/.//curator-framework-2.7.1.jar:/usr/lib/hadoop-mapreduce/.//curator-recipes-2.7.1.jar:/usr/lib/hadoop-mapreduce/.//gson-2.2.4.jar:/usr/lib/hadoop-mapreduce/.//guava-11.0.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-ant-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-ant.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives.jar:/usr/lib/hadoop-mapre
duce/.//hadoop-auth-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-auth.jar:/usr/lib/hadoop-mapreduce/.//hadoop-azure-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-azure.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.6.0-cdh5.4.4-tests.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/lib/hadoop-mapreduce/.//hadoop-sls-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-sls.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/lib/hadoop-mapreduce/.//hamcrest-core-1.3.jar:/usr/lib/hadoop-mapreduce/.//htrace-core-3.0.4.jar:/usr/lib/hadoop-mapreduce/.//httpclient-4.2.5.jar:/usr/lib/hadoop-mapreduce/.//httpcore-4.2.5.jar:/usr/lib/hadoop-mapreduce/.//jackson-annotations-2.2.3.jar:/usr/lib/hadoop-mapreduce/.//jackson-core-2.2.3.jar:/usr/lib/hadoop-mapreduce/.//jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//jackson-databind-2.2.3.jar:/usr/lib/hadoop-mapreduce/.//jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//jackson-xc-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//jasper-compiler-5.5.23.jar:/usr/lib/hadoop-mapreduce/.//jasper-runtime-5.5.23.jar:/usr/lib/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/usr/lib/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/usr/lib/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-mapreduce/.//jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/.//jersey-json-1.9.jar:/usr/lib/hadoop-mapreduce/.//jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/.//jets3t-0.9.0.jar:/usr/lib/hadoop-mapreduce/.//jettison-1.1.jar:/usr/lib/hadoop-mapreduce/.//jetty-6.1.26.cloudera.4.jar:/usr/lib
/hadoop-mapreduce/.//jetty-util-6.1.26.cloudera.4.jar:/usr/lib/hadoop-mapreduce/.//jsch-0.1.42.jar:/usr/lib/hadoop-mapreduce/.//jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/.//jsr305-3.0.0.jar:/usr/lib/hadoop-mapreduce/.//junit-4.11.jar:/usr/lib/hadoop-mapreduce/.//log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/.//metrics-core-3.0.1.jar:/usr/lib/hadoop-mapreduce/.//microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/lib/hadoop-mapreduce/.//mockito-all-1.8.5.jar:/usr/lib/hadoop-mapreduce/.//paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/.//servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/.//snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/lib/hadoop-mapreduce/.//xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/.//xz-1.0.jar:/usr/lib/hadoop-mapreduce/.//zookeeper.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-registry-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/avro.jar:/usr/lib/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/usr/lib/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-mapreduce/lib/jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-mapreduce/lib/jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/lib/junit-4.11.jar:/usr/lib/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/lib/xz-1.0.jar:/usr/lib/hadoop-mapreduce/modules/*.jar
STARTUP_MSG: build = http://github.com/cloudera/hadoop -r b739cd891f6269da5dd22766d7e75bd2c9db73b6 ; compiled by 'jenkins' on 2015-07-06T23:58Z
STARTUP_MSG: java = 1.7.0_80
************************************************************/
* Started Hadoop historyserver:
Processing triggers for ureadahead (0.100.0-16) …
Setting up hadoop-conf-pseudo (2.6.0+cdh5.4.4+597-1.cdh5.4.4.p0.6~trusty-cdh5.4.4) …
update-alternatives: using /etc/hadoop/conf.pseudo to provide /etc/hadoop/conf (hadoop-conf) in auto mode
Processing triggers for libc-bin (2.19-0ubuntu6.6) …
root@ip-172-30-0-84:/home/ubuntu#

Note that the "Failed to start" messages above for the namenode, datanode, and secondarynamenode are expected at this point: HDFS has not been formatted yet, which we take care of in step 6. For YARN, a pseudo-distributed Hadoop installation consists of one node running all five Hadoop daemons: namenode, secondarynamenode, resourcemanager, datanode, and nodemanager.
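
You can check all of them in one go by looping over the init scripts the packages installed (they all follow the hadoop-* naming, as the install output above shows). At this stage the YARN daemons and the historyserver should report as running, while the HDFS daemons will stay down until we format the NameNode in step 6:

for x in $(cd /etc/init.d ; ls hadoop-*) ; do sudo service $x status ; done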
5) To view the files installed by the package on the Ubuntu system:
ubuntu@ip-172-30-0-142:~$ dpkg -L hadoop-conf-pseudo
/.
/etc
/etc/hadoop
/etc/hadoop/conf.pseudo
/etc/hadoop/conf.pseudo/hadoop-env.sh
/etc/hadoop/conf.pseudo/hadoop-metrics.properties
/etc/hadoop/conf.pseudo/core-site.xml
/etc/hadoop/conf.pseudo/README
/etc/hadoop/conf.pseudo/hdfs-site.xml
/etc/hadoop/conf.pseudo/mapred-site.xml
/etc/hadoop/conf.pseudo/yarn-site.xml
/etc/hadoop/conf.pseudo/log4j.properties
/usr
/usr/share
/usr/share/doc
/usr/share/doc/hadoop-conf-pseudo
/usr/share/doc/hadoop-conf-pseudo/copyright
/usr/share/doc/hadoop-conf-pseudo/changelog.Debian.gz
/usr/share/lintian
/usr/share/lintian/overrides
/usr/share/lintian/overrides/hadoop-conf-pseudo
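
These config files are what wire all the daemons to a single node. For example, in CDH 5 the pseudo configuration points the default filesystem at HDFS on localhost (typically hdfs://localhost:8020); you can confirm with:

grep -A1 fs.defaultFS /etc/hadoop/conf.pseudo/core-site.xml

Note also, from the install output above, that update-alternatives has made conf.pseudo the active /etc/hadoop/conf, so this is the configuration the daemons actually read.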

6) Format the NameNode:
root@ip-172-30-0-84:/home/ubuntu# sudo -u hdfs hdfs namenode -format
15/08/13 19:35:00 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = ip-172-30-0-84.ec2.internal/172.30.0.84
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.6.0-cdh5.4.4
STARTUP_MSG: classpath = /etc/hadoop/conf:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26.cloudera.4.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.5.jar:/usr/lib/hadoop/lib/curator-framework-2.7.1.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/logredactor-1.0.3.jar:/usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/aws-java-sdk-1.7.4.jar:/usr/lib/hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/httpcore-4.2.5.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/avro.jar:/usr/lib/hadoop/lib/httpclient-4.2.5.jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/hadoop/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/jersey-json-1.9.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/jetty-6.1.26.cloudera.4.jar:/usr/lib/hadoop/lib/curator-recipes-2.7.1.jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/jersey-server-1.9.jar:/usr/lib/hadoop/lib/zookeeper.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop/lib/htrace-core-3.0.4.jar:/usr/lib/hadoop/lib/junit-4.11.jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop/lib/jsr305-3.0.0.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop/lib/curator-client-2.7.1.jar:/usr/lib/hadoop/lib/slf4j-log4j12.jar:/usr/lib/hadoop/lib/gson-2.2.4.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/.//parquet-thrift.jar:/usr/lib/hadoop/.//hadoop-nfs.jar:/usr/lib/hadoop/.//parquet-column.jar:/usr/lib/hadoop/.//hadoop-common-2.6.0-cdh5.4.4-tests.jar:/usr/lib/hadoop/.//parquet-test-hadoop2.jar:/usr/lib/hadoop/.//parquet-tools.jar:/usr/lib/hadoop/.//hadoop-aws.jar:/usr/lib/hadoop/.//hadoop-aws-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop/.//parquet-hadoop-bundle.jar:/usr/lib/hadoop/.//hadoop-nfs-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop/.//parquet-encoding.jar:/usr/lib/hadoop/.//parquet-format.jar:/usr/lib/hadoop/.//parquet-scala_2.10.jar:/usr/lib/hadoop/.//parquet-avro.jar:/usr/lib/hadoop/.//parquet-cascading.jar:/usr/lib/hadoop/.//parquet-scrooge_2.10.jar:/usr/lib/hadoop/.//hadoop-annotations.jar:/usr/lib/hadoop/.//parquet-pig-b
undle.jar:/usr/lib/hadoop/.//parquet-format-javadoc.jar:/usr/lib/hadoop/.//hadoop-auth.jar:/usr/lib/hadoop/.//parquet-pig.jar:/usr/lib/hadoop/.//parquet-hadoop.jar:/usr/lib/hadoop/.//hadoop-common-tests.jar:/usr/lib/hadoop/.//hadoop-common-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop/.//parquet-generator.jar:/usr/lib/hadoop/.//hadoop-common.jar:/usr/lib/hadoop/.//hadoop-auth-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop/.//hadoop-annotations-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop/.//parquet-common.jar:/usr/lib/hadoop/.//parquet-jackson.jar:/usr/lib/hadoop/.//parquet-protobuf.jar:/usr/lib/hadoop/.//parquet-format-sources.jar:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.4.jar:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-hdfs/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.4.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/lib/hadoop-hdfs/lib/htrace-core-3.0.4.jar:/usr/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.6.0-cdh5.4.4-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-nfs-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-yarn/lib/jetty-util-6.1.26.cloudera.4.jar:/usr/lib/hadoop-yarn/lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/guice-3.0.jar:/usr/lib/hadoop-yarn/lib/xz-1.0.jar:/usr/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/lib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop-yarn/lib/jetty-6.1.26.cloudera.4.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/lib/hadoop-yarn/lib/zookeeper.jar:/usr/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/lib/hadoop-yarn/lib/jline-2.11.jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-
yarn/lib/aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/usr/lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-yarn/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-registry-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/lib/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/usr/lib/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/xz-1.0.jar:/usr/lib/hadoop-mapreduce/lib/jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/lib/avro.jar:/usr/lib/hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/lib/jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/lib/junit-4.11.jar:/usr/lib/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/lib/hadoop-mapreduce/.//hadoop-azure.jar:/usr/lib/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//jetty-util-6.1.26.cloudera.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/lib/hadoop-mapreduce/.//guava-11.0.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/lib/hadoop-mapreduce/.//curator-framework-2.7.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-d
atajoin.jar:/usr/lib/hadoop-mapreduce/.//hadoop-sls.jar:/usr/lib/hadoop-mapreduce/.//paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/.//xz-1.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/lib/hadoop-mapreduce/.//metrics-core-3.0.1.jar:/usr/lib/hadoop-mapreduce/.//jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-mapreduce/.//jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//httpcore-4.2.5.jar:/usr/lib/hadoop-mapreduce/.//commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//jackson-annotations-2.2.3.jar:/usr/lib/hadoop-mapreduce/.//jettison-1.1.jar:/usr/lib/hadoop-mapreduce/.//jasper-runtime-5.5.23.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.6.0-cdh5.4.4-tests.jar:/usr/lib/hadoop-mapreduce/.//snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/.//apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask.jar:/usr/lib/hadoop-mapreduce/.//log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/.//jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/.//commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//commons-digester-1.8.jar:/usr/lib/hadoop-mapreduce/.//xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/.//commons-el-1.0.jar:/usr/lib/hadoop-mapreduce/.//avro.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/lib/hadoop-mapreduce/.//jackson-databind-2.2.3.jar:/usr/lib/hadoop-mapreduce/.//httpclient-4.2.5.jar:/usr/lib/hadoop-mapreduce/.//jackson-core-2.2.3.jar:/usr/lib/hadoop-mapreduce/.//commons-lang-2.6.jar:/usr/lib/hadoop-mapreduce/.//hadoop-azure-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/lib/hadoop-mapreduce/.//hamcrest-core-1.3.jar:/usr/lib/hadoop-mapreduce/.//commons-beanutils-1.7.0.jar:/usr/lib/hadoop-mapreduce/.//jets3t-0.9.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-ant-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//commons-net-3.1.jar:/usr/lib/hadoop-mapreduce/.//commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/lib/hadoop-mapreduce/.//jersey-json-1.9.jar:/usr/lib/hadoop-mapreduce/.//commons-collections-3.2.1.jar:/usr/lib/hadoop-mapreduce/.//jetty-6.1.26.cloudera.4.jar:/usr/lib/hadoop-mapreduce/.//curator-recipes-2.7.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-auth.jar:/usr/lib/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/.//commons-configuration-1.6.jar:/usr/lib/hadoop-mapreduce/.//jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/.//zookeeper.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//mockito-all-1.8.5.jar:/usr/lib/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/usr/lib/hadoop-mapreduce/.//jasper-compiler-5.5.23.jar
:/usr/lib/hadoop-mapreduce/.//htrace-core-3.0.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//junit-4.11.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/lib/hadoop-mapreduce/.//commons-compress-1.4.1.jar:/usr/lib/hadoop-mapreduce/.//jackson-xc-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/lib/hadoop-mapreduce/.//apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/.//hadoop-auth-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-sls-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//jsr305-3.0.0.jar:/usr/lib/hadoop-mapreduce/.//commons-cli-1.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives.jar:/usr/lib/hadoop-mapreduce/.//servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/.//api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-ant.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/lib/hadoop-mapreduce/.//commons-math3-3.1.1.jar:/usr/lib/hadoop-mapreduce/.//asm-3.2.jar:/usr/lib/hadoop-mapreduce/.//activation-1.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives-2.6.0-cdh5.4.4.jar:/usr/lib/hadoop-mapreduce/.//jsch-0.1.42.jar:/usr/lib/hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/lib/hadoop-mapreduce/.//curator-client-2.7.1.jar:/usr/lib/hadoop-mapreduce/.//gson-2.2.4.jar:/usr/lib/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-mapreduce/.//commons-httpclient-3.1.jar
STARTUP_MSG: build = http://github.com/cloudera/hadoop -r b739cd891f6269da5dd22766d7e75bd2c9db73b6 ; compiled by 'jenkins' on 2015-07-06T23:58Z
STARTUP_MSG: java = 1.7.0_80
************************************************************/
15/08/13 19:35:00 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/08/13 19:35:00 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-b2904cb5-935f-4a49-855f-e135a07344bf
15/08/13 19:35:01 INFO namenode.FSNamesystem: No KeyProvider found.
15/08/13 19:35:01 INFO namenode.FSNamesystem: fsLock is fair:true
15/08/13 19:35:01 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/08/13 19:35:01 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/08/13 19:35:01 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
15/08/13 19:35:01 INFO blockmanagement.BlockManager: The block deletion will start around 2015 Aug 13 19:35:01
15/08/13 19:35:01 INFO util.GSet: Computing capacity for map BlocksMap
15/08/13 19:35:01 INFO util.GSet: VM type = 64-bit
15/08/13 19:35:01 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
15/08/13 19:35:01 INFO util.GSet: capacity = 2^21 = 2097152 entries
15/08/13 19:35:01 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/08/13 19:35:01 INFO blockmanagement.BlockManager: defaultReplication = 1
15/08/13 19:35:01 INFO blockmanagement.BlockManager: maxReplication = 512
15/08/13 19:35:01 INFO blockmanagement.BlockManager: minReplication = 1
15/08/13 19:35:01 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
15/08/13 19:35:01 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
15/08/13 19:35:01 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/08/13 19:35:01 INFO blockmanagement.BlockManager: encryptDataTransfer = false
15/08/13 19:35:01 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
15/08/13 19:35:01 INFO namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
15/08/13 19:35:01 INFO namenode.FSNamesystem: supergroup = supergroup
15/08/13 19:35:01 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/08/13 19:35:01 INFO namenode.FSNamesystem: HA Enabled: false
15/08/13 19:35:01 INFO namenode.FSNamesystem: Append Enabled: true
15/08/13 19:35:01 INFO util.GSet: Computing capacity for map INodeMap
15/08/13 19:35:01 INFO util.GSet: VM type = 64-bit
15/08/13 19:35:01 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
15/08/13 19:35:01 INFO util.GSet: capacity = 2^20 = 1048576 entries
15/08/13 19:35:01 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/08/13 19:35:01 INFO util.GSet: Computing capacity for map cachedBlocks
15/08/13 19:35:01 INFO util.GSet: VM type = 64-bit
15/08/13 19:35:01 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
15/08/13 19:35:01 INFO util.GSet: capacity = 2^18 = 262144 entries
15/08/13 19:35:01 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/08/13 19:35:01 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/08/13 19:35:01 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 0
15/08/13 19:35:01 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
15/08/13 19:35:01 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
15/08/13 19:35:01 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
15/08/13 19:35:01 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/08/13 19:35:01 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/08/13 19:35:01 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/08/13 19:35:01 INFO util.GSet: VM type = 64-bit
15/08/13 19:35:01 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
15/08/13 19:35:01 INFO util.GSet: capacity = 2^15 = 32768 entries
15/08/13 19:35:01 INFO namenode.NNConf: ACLs enabled? false
15/08/13 19:35:01 INFO namenode.NNConf: XAttrs enabled? true
15/08/13 19:35:01 INFO namenode.NNConf: Maximum size of an xattr: 16384
15/08/13 19:35:01 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1416750624-172.30.0.84-1439494501674
15/08/13 19:35:01 INFO common.Storage: Storage directory /var/lib/hadoop-hdfs/cache/hdfs/dfs/name has been successfully formatted.
15/08/13 19:35:01 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/08/13 19:35:01 INFO util.ExitUtil: Exiting with status 0
15/08/13 19:35:01 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ip-172-30-0-84.ec2.internal/172.30.0.84
************************************************************/
root@ip-172-30-0-84:/home/ubuntu#

7) Start HDFS
ubuntu@ip-172-30-0-178:~$ for x in `cd /etc/init.d ; ls hadoop-hdfs-*` ; do sudo service $x start ; done
starting datanode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-datanode-ip-172-30-0-178.out
* Started Hadoop datanode (hadoop-hdfs-datanode):
starting namenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-namenode-ip-172-30-0-178.out
* Started Hadoop namenode:
starting secondarynamenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-secondarynamenode-ip-172-30-0-178.out
* Started Hadoop secondarynamenode:
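To confirm all three daemons actually came up, you can reuse the same loop with status instead of start:
ubuntu@ip-172-30-0-178:~$ for x in `cd /etc/init.d ; ls hadoop-hdfs-*` ; do sudo service $x status ; done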

8) Create directories needed for Hadoop processes
ubuntu@ip-172-30-0-142:~$ sudo /usr/lib/hadoop/libexec/init-hdfs.sh
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /tmp'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod -R 1777 /tmp'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /var'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /var/log'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod -R 1775 /var/log'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown yarn:mapred /var/log'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /tmp/hadoop-yarn'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown -R mapred:mapred /tmp/hadoop-yarn'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /tmp/hadoop-yarn/staging/history/done_intermediate'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown -R mapred:mapred /tmp/hadoop-yarn/staging'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod -R 1777 /tmp'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /var/log/hadoop-yarn/apps'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod -R 1777 /var/log/hadoop-yarn/apps'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown yarn:mapred /var/log/hadoop-yarn/apps'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /hbase'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown hbase /hbase'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /benchmarks'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod -R 777 /benchmarks'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /user'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /user/history'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown mapred /user/history'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /user/jenkins'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod -R 777 /user/jenkins'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown jenkins /user/jenkins'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /user/hive'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod -R 777 /user/hive'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown hive /user/hive'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /user/root'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod -R 777 /user/root'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown root /user/root'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /user/hue'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod -R 777 /user/hue'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown hue /user/hue'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /user/oozie'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /user/oozie/share'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /user/oozie/share/lib'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /user/oozie/share/lib/hive'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /user/oozie/share/lib/mapreduce-streaming'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /user/oozie/share/lib/distcp'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /user/oozie/share/lib/pig'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /user/oozie/share/lib/sqoop'
+ ls '/usr/lib/hive/lib/*.jar'
+ ls /usr/lib/hadoop-mapreduce/hadoop-streaming-2.6.0-cdh5.4.4.jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -put /usr/lib/hadoop-mapreduce/hadoop-streaming*.jar /user/oozie/share/lib/mapreduce-streaming'
+ ls /usr/lib/hadoop-mapreduce/hadoop-distcp-2.6.0-cdh5.4.4.jar /usr/lib/hadoop-mapreduce/hadoop-distcp.jar
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -put /usr/lib/hadoop-mapreduce/hadoop-distcp*.jar /user/oozie/share/lib/distcp'
+ ls '/usr/lib/pig/lib/*.jar' '/usr/lib/pig/*.jar'
+ ls '/usr/lib/sqoop/lib/*.jar' '/usr/lib/sqoop/*.jar'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod -R 777 /user/oozie'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown -R oozie /user/oozie'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /var/lib/hadoop-hdfs/cache/mapred/mapred/staging'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod 1777 /var/lib/hadoop-hdfs/cache/mapred/mapred/staging'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown -R mapred /var/lib/hadoop-hdfs/cache/mapred'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p /user/spark/applicationHistory'
+ su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown spark /user/spark/applicationHistory'

9) Verify the HDFS file structure:
ubuntu@ip-172-30-0-178:~$ sudo -u hdfs hadoop fs -ls -R /
drwxrwxrwx - hdfs supergroup 0 2015-08-13 20:11 /benchmarks
drwxr-xr-x - hbase supergroup 0 2015-08-13 20:11 /hbase
drwxrwxrwt - hdfs supergroup 0 2015-08-13 20:11 /tmp
drwxrwxrwt - mapred mapred 0 2015-08-13 20:11 /tmp/hadoop-yarn
drwxrwxrwt - mapred mapred 0 2015-08-13 20:11 /tmp/hadoop-yarn/staging
drwxrwxrwt - mapred mapred 0 2015-08-13 20:11 /tmp/hadoop-yarn/staging/history
drwxrwxrwt - mapred mapred 0 2015-08-13 20:11 /tmp/hadoop-yarn/staging/history/done_intermediate
drwxr-xr-x - hdfs supergroup 0 2015-08-13 20:12 /user
drwxr-xr-x - mapred supergroup 0 2015-08-13 20:11 /user/history
drwxrwxrwx - hive supergroup 0 2015-08-13 20:11 /user/hive
drwxrwxrwx - hue supergroup 0 2015-08-13 20:12 /user/hue
drwxrwxrwx - jenkins supergroup 0 2015-08-13 20:11 /user/jenkins
drwxrwxrwx - oozie supergroup 0 2015-08-13 20:12 /user/oozie
drwxrwxrwx - oozie supergroup 0 2015-08-13 20:12 /user/oozie/share
drwxrwxrwx - oozie supergroup 0 2015-08-13 20:12 /user/oozie/share/lib
drwxrwxrwx - oozie supergroup 0 2015-08-13 20:12 /user/oozie/share/lib/distcp
-rwxrwxrwx 1 oozie supergroup 98630 2015-08-13 20:12 /user/oozie/share/lib/distcp/hadoop-distcp-2.6.0-cdh5.4.4.jar
-rwxrwxrwx 1 oozie supergroup 98630 2015-08-13 20:12 /user/oozie/share/lib/distcp/hadoop-distcp.jar
drwxrwxrwx - oozie supergroup 0 2015-08-13 20:12 /user/oozie/share/lib/hive
drwxrwxrwx - oozie supergroup 0 2015-08-13 20:12 /user/oozie/share/lib/mapreduce-streaming
-rwxrwxrwx 1 oozie supergroup 109997 2015-08-13 20:12 /user/oozie/share/lib/mapreduce-streaming/hadoop-streaming-2.6.0-cdh5.4.4.jar
-rwxrwxrwx 1 oozie supergroup 109997 2015-08-13 20:12 /user/oozie/share/lib/mapreduce-streaming/hadoop-streaming.jar
drwxrwxrwx - oozie supergroup 0 2015-08-13 20:12 /user/oozie/share/lib/pig
drwxrwxrwx - oozie supergroup 0 2015-08-13 20:12 /user/oozie/share/lib/sqoop
drwxrwxrwx - root supergroup 0 2015-08-13 20:11 /user/root
drwxr-xr-x - hdfs supergroup 0 2015-08-13 20:12 /user/spark
drwxr-xr-x - spark supergroup 0 2015-08-13 20:12 /user/spark/applicationHistory
drwxr-xr-x - hdfs supergroup 0 2015-08-13 20:12 /var
drwxr-xr-x - hdfs supergroup 0 2015-08-13 20:12 /var/lib
drwxr-xr-x - hdfs supergroup 0 2015-08-13 20:12 /var/lib/hadoop-hdfs
drwxr-xr-x - hdfs supergroup 0 2015-08-13 20:12 /var/lib/hadoop-hdfs/cache
drwxr-xr-x - mapred supergroup 0 2015-08-13 20:12 /var/lib/hadoop-hdfs/cache/mapred
drwxr-xr-x - mapred supergroup 0 2015-08-13 20:12 /var/lib/hadoop-hdfs/cache/mapred/mapred
drwxrwxrwt - mapred supergroup 0 2015-08-13 20:12 /var/lib/hadoop-hdfs/cache/mapred/mapred/staging
drwxrwxr-t - yarn mapred 0 2015-08-13 20:11 /var/log
drwxr-xr-x - hdfs mapred 0 2015-08-13 20:11 /var/log/hadoop-yarn
drwxrwxrwt - yarn mapred 0 2015-08-13 20:11 /var/log/hadoop-yarn/apps

10) Run ps -ef at your command prompt to check whether all the required daemons have started.
You will see something like the following:

ubuntu@ip-172-30-0-178:~$ ps -ef
ubuntu 1484 1483 0 19:43 pts/0 00:00:00 -bash
yarn 2365 1 0 19:46 ? 00:00:17 /usr/lib/jvm/java-7-oracle/bin/java -Dproc_resourcemanager -Xmx1000m -Dhadoop.log.dir=/var/log/hadoop-yarn -Dyarn.log.dir=/var/log/hadoop-yarn -Dhadoop.lo
yarn 2453 1 0 19:46 ? 00:00:12 /usr/lib/jvm/java-7-oracle/bin/java -Dproc_nodemanager -Xmx1000m -Dhadoop.log.dir=/var/log/hadoop-yarn -Dyarn.log.dir=/var/log/hadoop-yarn -Dhadoop.log.fi
mapred 3128 1 0 19:47 ? 00:00:09 /usr/lib/jvm/java-7-oracle/bin/java -Dproc_historyserver -Xmx1000m -Dhadoop.log.dir=/var/log/hadoop-mapreduce -Dhadoop.log.file=hadoop.log -Dhadoop.home.d
root 3217 2 0 19:49 ? 00:00:00 [kworker/u30:2]
hdfs 3399 1 0 20:01 ? 00:00:09 /usr/lib/jvm/java-7-oracle/bin/java -Dproc_datanode -Xmx1000m -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop-hdfs-datanode-ip-172-30-0-178
hdfs 3497 1 0 20:01 ? 00:00:08 /usr/lib/jvm/java-7-oracle/bin/java -Dproc_namenode -Xmx1000m -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop-hdfs-namenode-ip-172-30-0-178
hdfs 3613 1 0 20:01 ? 00:00:05 /usr/lib/jvm/java-7-oracle/bin/java -Dproc_secondarynamenode -Xmx1000m -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop-hdfs-secondarynameno
ubuntu 5923 1484 0 20:32 pts/0 00:00:00 ps -ef
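If the full listing is noisy, you can filter it down to just the Hadoop daemons, for example:
ubuntu@ip-172-30-0-178:~$ ps -ef | grep -E 'namenode|datanode|resourcemanager|nodemanager|historyserver' | grep -v grep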

11) Some useful commands:
Start YARN
$ sudo service hadoop-yarn-resourcemanager start
Start Node Manager
$ sudo service hadoop-yarn-nodemanager start
Start History Server
$ sudo service hadoop-mapreduce-historyserver start
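Each of these services can be stopped the same way when you need to restart a daemon after a configuration change, e.g.:
$ sudo service hadoop-yarn-resourcemanager stop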

12) Check the Resource Manager UI at http://yourmachineDNS:8088
13) Check your security group rules: at least the following ports should have inbound access:

Type             Protocol  Port   Source
SSH              TCP       22     Anywhere
Custom TCP Rule  TCP       50070  Anywhere
Custom TCP Rule  TCP       8088   Anywhere

14) Set HADOOP_MAPRED_HOME using the following command, then check that you can run the hadoop command from anywhere on the command line.
ubuntu@ip-172-30-0-178:~$ export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
ubuntu@ip-172-30-0-178:~$ hadoop
Usage: hadoop [--config confdir] COMMAND
where COMMAND is one of:
fs                   run a generic filesystem user client
version              print the version
jar <jar>            run a jar file
checknative [-a|-h]  check native hadoop and compression libraries availability
distcp <srcurl> <desturl> copy file or directories recursively
archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
classpath            prints the class path needed to get the
                     Hadoop jar and the required libraries
credential           interact with credential providers
daemonlog            get/set the log level for each daemon
trace                view and modify Hadoop tracing settings
 or
CLASSNAME            run the class named CLASSNAME

Most commands print help when invoked w/o parameters.
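Note that export only lasts for the current shell session. To make the setting permanent, you can append it to your shell profile:
ubuntu@ip-172-30-0-178:~$ echo 'export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce' >> ~/.bashrc
ubuntu@ip-172-30-0-178:~$ source ~/.bashrc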

15) check hadoop version
ubuntu@ip-172-30-0-178:~$ hadoop version
Hadoop 2.6.0-cdh5.4.4
Subversion http://github.com/cloudera/hadoop -r b739cd891f6269da5dd22766d7e75bd2c9db73b6
Compiled by jenkins on 2015-07-06T23:58Z
Compiled with protoc 2.5.0
From source with checksum 4acea6ac185376e0b48b33695e88e7a7
This command was run using /usr/lib/hadoop/hadoop-common-2.6.0-cdh5.4.4.jar

Running an example application with YARN
1) Add user Pankaj
ubuntu@ip-172-30-0-178:~$ sudo useradd -d /home/pankaj -m pankaj
ubuntu@ip-172-30-0-178:~$ sudo passwd pankaj
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
ubuntu@ip-172-30-0-178:~$ ls -la /home/pankaj
total 20
drwxr-xr-x 2 pankaj pankaj 4096 Aug 14 02:56 .
drwxr-xr-x 4 root root 4096 Aug 14 02:56 ..
-rw-r--r-- 1 pankaj pankaj 220 Apr 9 2014 .bash_logout
-rw-r--r-- 1 pankaj pankaj 3637 Apr 9 2014 .bashrc
-rw-r--r-- 1 pankaj pankaj 675 Apr 9 2014 .profile

2) create a home directory on HDFS for the user who will be running the job
root@ip-172-30-0-142:/home/ubuntu# sudo -u hdfs hadoop fs -mkdir /user/pankaj
root@ip-172-30-0-142:/home/ubuntu# sudo -u hdfs hadoop fs -chown pankaj /user/pankaj
root@ip-172-30-0-142:/home/ubuntu# sudo -u hdfs hadoop fs -ls /user/
Found 9 items
drwxr-xr-x - mapred supergroup 0 2015-08-14 14:41 /user/history
drwxrwxrwx - hive supergroup 0 2015-08-14 14:41 /user/hive
drwxrwxrwx - hue supergroup 0 2015-08-14 14:41 /user/hue
drwxrwxrwx - jenkins supergroup 0 2015-08-14 14:41 /user/jenkins
drwxrwxrwx - oozie supergroup 0 2015-08-14 14:41 /user/oozie
drwxr-xr-x - pankaj supergroup 0 2015-08-14 16:25 /user/pankaj
drwxrwxrwx - root supergroup 0 2015-08-14 14:41 /user/root
drwxr-xr-x - hdfs supergroup 0 2015-08-14 14:42 /user/spark

3) Change to the newly created user
ubuntu@ip-172-30-0-142:/home/ubuntu# su - pankaj
Password: …

4) Make a directory in HDFS for input file
pankaj@ip-172-30-0-142:~$ hadoop fs -mkdir input

pankaj@ip-172-30-0-142:~$ hadoop fs -ls
Found 1 items
drwxr-xr-x - pankaj supergroup 0 2015-08-14 16:26 input

5) Get your input file (this is just an example; you can get a file from other sources)
pankaj@ip-172-30-0-142:~$ wget -P ~/ http://pthakkar.com/plaintext.txt
pankaj@ip-172-30-0-142:~$ ls
plaintext.txt

6) copy from local directory to HDFS
pankaj@ip-172-30-0-142:~$ hadoop fs -copyFromLocal ~/plaintext.txt input
pankaj@ip-172-30-0-142:~$ hadoop fs -ls
Found 1 items
drwxr-xr-x - pankaj supergroup 0 2015-08-14 16:33 input
pankaj@ip-172-30-0-142:~$ hadoop fs -ls input
Found 1 items
-rw-r--r-- 1 pankaj supergroup 4538523 2015-08-14 16:33 input/plaintext.txt

You can do the same thing with the put command, e.g. hdfs dfs -put localfile /user/hadoop/hadoopfile
pankaj@ip-172-30-0-142:~$ hadoop fs -put ~/plaintext.txt input

7) Just a note of caution: do not create the output directory yourself; the job creates it and will fail if it already exists.
8) Run the MapReduce example job that comes with the distribution. It takes three arguments: first, the name of the example job, which is wordcount in our case; second, the input directory; and third, the name of the output directory, which will be created by the example job.
pankaj@ip-172-30-0-142:~$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar wordcount input output
15/08/14 16:37:15 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/08/14 16:37:15 INFO input.FileInputFormat: Total input paths to process : 1
15/08/14 16:37:15 INFO mapreduce.JobSubmitter: number of splits:1
15/08/14 16:37:16 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1439567484890_0002
15/08/14 16:37:16 INFO impl.YarnClientImpl: Submitted application application_1439567484890_0002
15/08/14 16:37:16 INFO mapreduce.Job: The url to track the job: http://ip-172-30-0-142:8088/proxy/application_1439567484890_0002/
15/08/14 16:37:16 INFO mapreduce.Job: Running job: job_1439567484890_0002
15/08/14 16:37:21 INFO mapreduce.Job: Job job_1439567484890_0002 running in uber mode : false
15/08/14 16:37:21 INFO mapreduce.Job: map 0% reduce 0%
15/08/14 16:37:27 INFO mapreduce.Job: map 100% reduce 0%
15/08/14 16:37:33 INFO mapreduce.Job: map 100% reduce 100%
15/08/14 16:37:33 INFO mapreduce.Job: Job job_1439567484890_0002 completed successfully
15/08/14 16:37:33 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=483860
FILE: Number of bytes written=1189105
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=4538643
HDFS: Number of bytes written=356409
HDFS: Number of read operations=6
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters
Launched map tasks=1
Launched reduce tasks=1
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=4141
Total time spent by all reduces in occupied slots (ms)=2994
Total time spent by all map tasks (ms)=4141
Total time spent by all reduce tasks (ms)=2994
Total vcore-seconds taken by all map tasks=4141
Total vcore-seconds taken by all reduce tasks=2994
Total megabyte-seconds taken by all map tasks=4240384
Total megabyte-seconds taken by all reduce tasks=3065856
Map-Reduce Framework
Map input records=129107
Map output records=980637
Map output bytes=8406347
Map output materialized bytes=483860
Input split bytes=120
Combine input records=980637
Combine output records=33505
Reduce input groups=33505
Reduce shuffle bytes=483860
Reduce input records=33505
Reduce output records=33505
Spilled Records=67010
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=54
CPU time spent (ms)=5270
Physical memory (bytes) snapshot=524533760
Virtual memory (bytes) snapshot=2764136448
Total committed heap usage (bytes)=570425344
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=4538523
File Output Format Counters
Bytes Written=356409

9) Check the output in the output directory:
pankaj@ip-172-30-0-142:~$ hadoop fs -ls output
Found 2 items
-rw-r--r-- 1 pankaj supergroup 0 2015-08-14 16:37 output/_SUCCESS
-rw-r--r-- 1 pankaj supergroup 356409 2015-08-14 16:37 output/part-r-00000
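You can also inspect the results directly on HDFS; for example, this one-liner shows the ten most frequent words (assuming the standard wordcount output of one word and count per line):
pankaj@ip-172-30-0-142:~$ hadoop fs -cat output/part-r-00000 | sort -k2 -nr | head -10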

10) Copy the output files from the HDFS output directory to a local directory so you can open them with your favourite editor.
pankaj@ip-172-30-0-142:~$ hadoop fs -copyToLocal output/* ~/
pankaj@ip-172-30-0-142:~$ ls
part-r-00000 plaintext.txt _SUCCESS
pankaj@ip-172-30-0-142:~$ less part-r-00000

Installing Hadoop Using Apache Ambari and Amazon EC2 – Part 1

In order to run Hadoop on Amazon EC2, I used the Apache Ambari installation wizard to install Hadoop. According to its documentation, Apache Ambari provides an end-to-end management and monitoring application for Apache Hadoop. Ambari also provides a graphical user interface (GUI) to deploy and operate a complete Hadoop stack, manage configuration changes, monitor services, and create alerts for all the nodes in your cluster from a central point.
My configuration for Apache Ambari uses six 64-bit Amazon AMI instances, as below:
– m1.medium ambarimaster, which we will call p1_mar24
– m1.large hdpmaster1, which we will call p2_mar24
– m1.large hdpmaster2, which we will call p3_mar24
– m1.medium hdpslave1, which we will call p4_mar24
– m1.medium hdpslave2, which we will call p5_mar24
– m1.medium hdpslave3, which we will call p6_mar24
Here is how it looks:

Now let’s configure everything step-by-step:
1) Connect to your first EC2 instance using ssh. I used the Cygwin terminal client on my Windows 7 machine.
First I downloaded my key file from the Amazon EC2 console as pankaj_east_hadoop_20130324.pem. Then I copied that key file into the Cygwin home directory I wanted to work in, which is /home/pankaj/20130324.

$ pwd
/home/pankaj/20130324
$ ls -ltra
total 4
-r-------- 1 pankaj mkgroup 1696 Mar 24 14:19 pankaj_east_hadoop_20130324.pem
drwxr-xr-x+ 1 pankaj mkgroup 0 Mar 24 14:41 ..
drwx------+ 1 pankaj mkgroup 0 Mar 24 14:56 .ssh
drwxr-xr-x+ 1 pankaj mkgroup 0 Mar 24 15:04 .

Once this is done, open a new Cygwin terminal window. Go to the directory /home/pankaj/20130324 and launch ssh using the following command.
ssh -i pankaj_east_hadoop_20130324.pem ec2-user@ec2-174-129-88-73.compute-1.amazonaws.com
This command will fail initially as below.

$ ssh -i pankaj_east_hadoop_20130324.pem ec2-user@ec2-174-129-88-73.compute-1.amazonaws.com
The authenticity of host 'ec2-174-129-88-73.compute-1.amazonaws.com (174.129.88.73)' can't be established.
RSA key fingerprint is eb:e8:f8:35:23:f1:31:cf:29:82:82:fa:eb:4a:3d:b3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-174-129-88-73.compute-1.amazonaws.com,174.129.88.73' (RSA) to the list of known hosts.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: UNPROTECTED PRIVATE KEY FILE! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0755 for 'pankaj_east_hadoop_20130324.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: pankaj_east_hadoop_20130324.pem
Permission denied (publickey).

The reason is a permissions issue on your .pem file, so run the following command:

$chmod 400 pankaj_east_hadoop_20130324.pem

This will fix the permissions and allow you to connect to the remote Linux instance.

$ ssh -i pankaj_east_hadoop_20130324.pem ec2-user@ec2-174-129-88-73.compute-1.amazonaws.com

__| __|_ )
_| ( / Amazon Linux AMI
___|\___|___|

https://aws.amazon.com/amazon-linux-ami/2012.09-release-notes/

There are 13 security update(s) out of 24 total update(s) available
Run "sudo yum update" to apply all updates.
[ec2-user@ip-10-62-97-105 ~]$

Now you are connected to the EC2 instance as ec2-user. You can use the "sudo su" command to run any command as the root user if needed.
This instance will be used as our Ambari server host.
Now generate public and private SSH keys on this Ambari server host as below:

[ec2-user@ip-10-62-97-105 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ec2-user/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/ec2-user/.ssh/id_rsa.
Your public key has been saved in /home/ec2-user/.ssh/id_rsa.pub.
The key fingerprint is:
2e:14:e4:2d:d2:5b:2f:fa:0b:9d:f1:47:71:bf:00:47 ec2-user@ip-10-62-97-105
The key's randomart image is:
+--[ RSA 2048]----+
| . E |
| + . . |
| . = o .... |
| . = . oo . |
| o S . .. .|
| . + = . . .|
| + + . . . |
| + . |
| o. |
+-----------------+
[ec2-user@ip-10-62-97-105 ~]$ ls -ltra
total 24
-rw-r--r-- 1 ec2-user ec2-user 124 May 22 2012 .bashrc
-rw-r--r-- 1 ec2-user ec2-user 176 May 22 2012 .bash_profile
-rw-r--r-- 1 ec2-user ec2-user 18 May 22 2012 .bash_logout
drwxr-xr-x 3 root root 4096 Feb 15 23:51 ..
drwx------ 3 ec2-user ec2-user 4096 Mar 24 18:28 .
drwx------ 2 ec2-user ec2-user 4096 Mar 24 18:47 .ssh
[ec2-user@ip-10-62-97-105 ~]$ cd .ssh
[ec2-user@ip-10-62-97-105 .ssh]$ ls -ltra
total 20
-rw------- 1 ec2-user ec2-user 409 Mar 24 18:28 authorized_keys
drwx------ 3 ec2-user ec2-user 4096 Mar 24 18:28 ..
-rw-r--r-- 1 ec2-user ec2-user 406 Mar 24 18:47 id_rsa.pub
-rw------- 1 ec2-user ec2-user 1671 Mar 24 18:47 id_rsa
drwx------ 2 ec2-user ec2-user 4096 Mar 24 18:47 .
[ec2-user@ip-10-62-97-105 .ssh]$

Now download the id_rsa.pub file from the .ssh directory of your home folder to your laptop or desktop using the following command in a separate Cygwin terminal window.

$pwd
/home/pankaj/20130324
$scp -i pankaj_east_hadoop_20130324.pem ec2-user@ec2-174-129-88-73.compute-1.amazonaws.com:/home/ec2-user/.ssh/id_rsa.pub .
id_rsa.pub 100% 406 0.4KB/s 00:00
$ ls
id_rsa.pub pankaj_east_hadoop_20130324.pem

Now we will upload the downloaded public key file id_rsa.pub to our remaining instances one by one using the following command (changing the instance name each time; a loop that covers all hosts at once is sketched below).

$ scp -i pankaj_east_hadoop_20130324.pem ./id_rsa.pub ec2-user@ec2-204-236-208-203.compute-1.amazonaws.com:/home/ec2-user/.ssh/
id_rsa.pub 100% 406 0.4KB/s 00:00
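If you have several instances, you can push the key to all of them in one shot instead of repeating the scp. A minimal sketch; the second hostname is a placeholder you replace with your own instance DNS names:

$ for h in ec2-204-236-208-203.compute-1.amazonaws.com ec2-XX-XX-XX-XX.compute-1.amazonaws.com; do scp -i pankaj_east_hadoop_20130324.pem ./id_rsa.pub ec2-user@$h:/home/ec2-user/.ssh/; done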

———————————————————————————————————————————–
Now let's go to our second instance using ssh in a new Cygwin terminal window and check the .ssh directory as below:

$ ssh -i pankaj_east_hadoop_20130324.pem ec2-user@ec2-204-236-208-203.compute-1.amazonaws.com
The authenticity of host 'ec2-204-236-208-203.compute-1.amazonaws.com (204.236.208.203)' can't be established.
RSA key fingerprint is 16:cf:09:4f:f2:0b:d2:8c:76:66:5c:76:33:eb:d0:df.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-204-236-208-203.compute-1.amazonaws.com,204.236.208.203' (RSA) to the list of known hosts.

__| __|_ )
_| ( / Amazon Linux AMI
___|\___|___|

https://aws.amazon.com/amazon-linux-ami/2012.09-release-notes/

There are 13 security update(s) out of 24 total update(s) available
Run "sudo yum update" to apply all updates.
[ec2-user@ip-10-118-74-223 ~]$ pwd
/home/ec2-user
[ec2-user@ip-10-118-74-223 ~]$ cd .ssh
[ec2-user@ip-10-118-74-223 .ssh]$ ls -ltra
total 16
drwx------ 3 ec2-user ec2-user 4096 Mar 24 18:28 ..
drwx------ 2 ec2-user ec2-user 4096 Mar 24 19:13 .
-rw-r--r-- 1 ec2-user ec2-user 406 Mar 24 19:13 id_rsa.pub
[ec2-user@ip-10-118-74-223 .ssh]$ cat id_rsa.pub >> authorized_keys
[ec2-user@ip-10-118-74-223 .ssh]$ chmod 640 authorized_keys
[ec2-user@ip-10-118-74-223 .ssh]$ chmod 640 id_rsa.pub

——————————————————————————————————————————————
Once the above configuration is done on the second instance, go to the Cygwin terminal window for the first instance.

[ec2-user@ip-10-62-97-105 .ssh]$ cd ..
[ec2-user@ip-10-62-97-105 ~]$ chmod 700 .ssh
[ec2-user@ip-10-62-97-105 ~]$ cd .ssh
[ec2-user@ip-10-62-97-105 .ssh]$ ls -lta
total 24
drwx------ 2 ec2-user ec2-user 4096 Mar 24 19:15 .
-rw-r--r-- 1 ec2-user ec2-user 884 Mar 24 19:15 known_hosts
-rw------- 1 ec2-user ec2-user 815 Mar 24 18:50 authorized_keys
-rw------- 1 ec2-user ec2-user 1671 Mar 24 18:47 id_rsa
-rw-r--r-- 1 ec2-user ec2-user 406 Mar 24 18:47 id_rsa.pub
drwx------ 3 ec2-user ec2-user 4096 Mar 24 18:28 ..
[ec2-user@ip-10-62-97-105 .ssh]$ chmod 640 id_rsa.pub
[ec2-user@ip-10-62-97-105 .ssh]$ chmod 640 authorized_keys
[ec2-user@ip-10-62-97-105 .ssh]$ ls -ltra
total 24
drwx------ 3 ec2-user ec2-user 4096 Mar 24 18:28 ..
-rw-r----- 1 ec2-user ec2-user 406 Mar 24 18:47 id_rsa.pub
-rw------- 1 ec2-user ec2-user 1671 Mar 24 18:47 id_rsa
-rw-r----- 1 ec2-user ec2-user 815 Mar 24 18:50 authorized_keys
-rw-r--r-- 1 ec2-user ec2-user 884 Mar 24 19:15 known_hosts
drwx------ 2 ec2-user ec2-user 4096 Mar 24 19:15 .
[ec2-user@ip-10-62-97-105 ~]$ ssh ec2-user@ec2-204-236-208-203.compute-1.amazonaws.com
Last login: Sun Mar 24 19:11:27 2013 from bas1-malton23-2925222199.dsl.bell.ca

__| __|_ )
_| ( / Amazon Linux AMI
___|\___|___|

https://aws.amazon.com/amazon-linux-ami/2012.09-release-notes/

There are 13 security update(s) out of 24 total update(s) available
Run "sudo yum update" to apply all updates.
[ec2-user@ip-10-118-74-223 ~]$

Now you are connected to the second instance from the first instance without providing a password.
Similar configuration can be done for all other instances.

I will add more detail on further installation in part 2.
Credits: Adam Muise from the Toronto Hadoop User Group. He works as a Solutions Engineer at Hortonworks.

Linux: color of files on ls command

Today, I just want to put down some basic information regarding Linux files, namely the color coding of files when a user runs the ls command.
When you run the ls command, you see executable files in one color, image files in another, and the same goes for directories. The color coding of different file types is defined in the configuration file /etc/DIR_COLORS on Linux. A side note: this is on my CentOS server.
Colors on my CentOS server are as below

  • Executable files: Green
  • Directory: Blue
  • Image files(jpg, gif, bmp, png, tif): Magenta
  • Symbolic links: Cyan
  • Pipe: Yellow
  • Socket: Magenta
  • Orphaned symbolic links: Blinking Bold white with red background
  • Block device driver: Bold yellow foreground, with black background
  • Missing links along with files they point to: Blinking Bold white with red background
  • Archives or compressed files(like tar,gz,zip,rpm): Red

You can change them if you understand the /etc/DIR_COLORS file. Entries are in the format (file type attribute code;text color code;background color code), with the codes separated by semicolons.
File type attribute codes are as below:
00=none
01=bold
04=underscore
05=blink
07=reverse
08=concealed
Text color codes are as below:
30=black
31=red
32=green
33=yellow
34=blue
35=magenta
36=cyan
37=white
And finally, background color codes are:
40=black (default)
41=red
42=green
43=yellow
44=blue
45=magenta
46=cyan
47=white

For example, the definition for the DIR file type, which is bold blue, is the entry: DIR 01;34
If you change the file, then you have to log out and log back in to see the change.
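As a sketch, making directories bold cyan instead would mean editing that entry as below (36 is the cyan text color code from the table above), then logging out and back in:
DIR 01;36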

Step-by-Step guide to WordPress Optimization using W3 Total Cache

This article about WordPress optimization is mostly written for developers who know how to create a website with WordPress but are not very well-versed in other tricks to optimize WordPress for speedy delivery of their website's content. But anyone can use some of these tricks to improve their non-WordPress websites. I am using my own website pthakkar.com for this blog, as I am going through the same pain of optimizing my own website. So let's start.
If you profile a web page with Google Page Speed, it calculates the page's performance using a number of different rules. These rules are general front-end best practices we can apply at any stage of web development. If you are interested in understanding these rules, you can find them at https://developers.google.com/speed/docs/insights/rules; you can refer to those pages at any time. They give us specific tips and suggestions for how we can best implement the rules and incorporate them into our development process.

When I created my website pthakkar.com with WordPress, it was not optimized at all, as you can see from my test run of Google Page Speed. It got an overall PageSpeed Score of 54 (out of 100), which was way too low.

Google Page Speed Overview of PThakkar.com

So I started a step-by-step process to take the Page Speed score above 90. Page Speed evaluates performance from the client's point of view, typically measured as the page load time. The best practices involve many steps that affect page load time, including resolving DNS names, setting up TCP connections, transmitting HTTP requests, downloading resources, fetching resources from cache, parsing and executing scripts, and rendering objects on the page. Google groups them into six categories:
Optimizing caching - keeping your application’s data and logic off the network altogether
Minimizing round-trip times - reducing the number of serial request-response cycles
Minimizing request overhead - reducing upload size
Minimizing payload size - reducing the size of responses, downloads, and cached pages
Optimizing browser rendering - improving the browser’s layout of a page
Optimizing for mobile - tuning a site for the mobile networks and mobile devices

All of the above rules are a bit complex for normal WordPress users who do not have experience in performance optimization. So many developers from the WordPress community have created plugins that perform various optimizations through WordPress interfaces. I will list these plugins one by one with the advantages and disadvantages of each, and we will use the best one to optimize my website pthakkar.com. After applying all the optimization techniques, we will compare Google Page Speed scores to see how much improvement each plugin made. Some of these optimization tricks require some PHP programming knowledge.
Optimization by tweaking the Configuration File
My WordPress website had to make database calls just to locate the site URL. We can greatly reduce those database calls by defining the site URL in the WordPress configuration file, wp-config.php, as below:
define('WP_HOME', 'http://pthakkar.com');
define('WP_SITEURL', 'http://pthakkar.com');

After tweaking the configuration file, I checked the Google Page Speed score again, but it still stayed the same. So I decided to find some plug-ins which might help me make my website faster. I searched through the available WordPress plug-ins and found two that were the most recommended and the most downloaded: W3 Total Cache and WP Super Cache. I set out to understand the differences between the two and found that W3 Total Cache is far more powerful, although tougher to configure. If you need more information, please read a blog from tentblogger.

After understanding what the W3 Total Cache plugin can do, I installed it and started configuring it. It provides various options for caching, and I found two of them very compelling: APC (Alternative PHP Cache) and Memcache. After reading the pros and cons of both, I thought APC would be the best option for me, as I am running a single blog on a virtual server. Installing APC for PHP 5.3 was not a simple job, though, as my server was a bare-bones Linux distro. It had neither APC nor any development tools like a C compiler. So I decided to install all development tools on my CentOS server. Using the following command you can find out what PHP packages are installed on your server.

#yum list installed | grep php
php53.i386            5.3.3-13.el5_8                   installed
php53-cli.i386        5.3.3-13.el5_8                   installed
php53-common.i386     5.3.3-13.el5_8                   installed
php53-devel.i386      5.3.3-13.el5_8                   installed
php53-gd.i386         5.3.3-13.el5_8                   installed
php53-mysql.i386      5.3.3-13.el5_8                   installed
php53-pdo.i386        5.3.3-13.el5_8                   installed

I needed php53-devel and pcre-devel, as the phpize command I would use later requires php53-devel. I used the following command to install both of them.

#yum install php-devel pcre-devel

Then, to compile the package, I needed a C compiler (like gcc) and make. The easiest way to install all the needed development tools was using groupinstall.

#yum groupinstall "Development Tools"

Now I needed to download the APC package, so I went to my home directory, then downloaded and expanded the package using the following commands.

#cd ~
#wget http://pecl.php.net/get/APC-3.1.9.tgz
#tar -zxvf APC-3.1.9.tgz

To set up APC, we needed to go to the expanded directory.

#cd APC-3.1.9

Once in the directory, running the phpize command gives output like below:

#phpize
Configuring for:
PHP Api Version:         20090626
Zend Module Api No:      20090626
Zend Extension Api No:   220090626

Once we had this, we needed to find the php-config file using the following command:

#whereis php-config
php-config: /usr/bin/php-config /usr/share/man/man1/php-config.1.gz

In my case I found the php-config file in /usr/bin and used it for the next command, configure:

./configure --enable-apc --enable-mmap --with-apxs --with-php-config=/usr/bin/php-config

Basically, this command tells the build to compile APC as a dynamically loadable module, or DSO. You pass the --with-apxs option to the configure script. If you know the location of the Apache apxs file, you can supply that to the switch with --with-apxs=/usr/local/apache/apxs.
Click on apxs – APache eXtenSion tool if you need to know more about apxs. Darrell Brogdon has given a good explanation of the --with-apxs switch.
Once you have run the configure command, run the following commands.

#make
#make install

Now, if you have an /etc/php.d/ directory on your CentOS server, create a file called apc.ini in /etc/php.d to hold all your APC configuration settings; if it already exists, you can just edit the existing one. If your CentOS server doesn't have an /etc/php.d directory, find your php.ini file and add these configuration settings to it. Normally the php.ini file is located in /etc.
You can use the following code as an example apc.ini file. It has lots of comments to give you a better understanding.

; Enable the extension module
extension = apc.so

; To get better explanation of options for the APC module, (at the time of writing version = 3.1.9)
; See http://www.php.net/manual/en/apc.configuration.php

; Set apc.enabled to 0 to disable APC.
apc.enabled=1
; The number of shared memory segments to allocate for the compiler cache.
apc.shm_segments=1
; The size of each shared memory segment, in MB. You can use G for GB.
apc.shm_size=64M
; A "hint" about the number of distinct source files that will be included or
; requested on your web server. Set to zero or omit if you're not sure;
apc.num_files_hint=1024
; Just like num_files_hint, a "hint" about the number of distinct user cache
; variables to store.  Set to zero or omit if you're not sure;
apc.user_entries_hint=4096
; The number of seconds a cache entry is allowed to idle in a slot in case this
; cache entry slot is needed by another entry.
apc.ttl=7200
; use the SAPI request start time for TTL
apc.use_request_time=1
; The number of seconds a user cache entry is allowed to idle in a slot in case
; this cache entry slot is needed by another entry.
apc.user_ttl=7200
; The number of seconds that a cache entry may remain on the garbage-collection list.
apc.gc_ttl=3600
; On by default, but can be set to off and used in conjunction with positive
; apc.filters so that files are only cached if matched by a positive filter.
apc.cache_by_default=1
; A comma-separated list of POSIX extended regular expressions.
apc.filters
; If compiled with MMAP support by using --enable-mmap this is the mktemp-style file_mask to pass to the mmap module for determining whether your 
; mmap'ed memory region is going to be file-backed or shared memory backed.
apc.mmap_file_mask=/tmp/apc.XXXXXX
; file_update_protection setting delays caching brand new files, value in second(s).
apc.file_update_protection=2
; This setting enables APC for the CLI version of PHP (Mostly for testing and debugging). Possible values; 1 for ON, 0 for OFF.
apc.enable_cli=0
; Prevent files larger than this value from getting cached
apc.max_file_size=1M
; Whether to stat the main script file and the fullpath includes. If this setting is off, APC will not check, which usually means that 
; to force APC to recheck files, the web server will have to be restarted or the cache will have to be manually cleared. 
; On a production server where the script files rarely change, a significant performance boost can be achieved by changing its value to 0.
apc.stat=1
; Verification with ctime will avoid problems caused by programs such as svn or rsync by making
; sure inodes haven't changed since the last stat. APC will normally only check mtime.
apc.stat_ctime=0
; Whether to canonicalize paths in stat=0 mode or fall back to stat behaviour
apc.canonicalize=0
; With write_lock enabled, only one process at a time will try to compile an
; uncached script while the other processes will run uncached
apc.write_lock=1
; Logs any scripts that were automatically excluded from being cached due to early/late binding issues.
apc.report_autofilter=0
;This setting is deprecated, and replaced with apc.write_lock, so it is set to zero.
apc.slam_defense=0

Once this is done, restart your apache server.

#service httpd restart

For more information about APC, you can copy the ~/APC-3.1.9/apc.php file into your web root and then open it in a browser. If you can see server stats on that page, you have successfully installed APC on CentOS.
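For example, assuming your web root is /var/www/html (the Apache default on CentOS; adjust to your setup):

#cp ~/APC-3.1.9/apc.php /var/www/html/

You can also quickly confirm from the command line that the extension is loaded:

#php -m | grep -i apc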
Once the APC cache was installed, I needed to configure W3 Total Cache to use APC.
(Screenshots here showed my W3 Total Cache settings: the Page Cache, Minify, Object Cache, and Browser Cache settings; the CDN setting, which is empty since I haven't used a Content Delivery Network; the Varnish and Miscellaneous settings; the Minify tab's General, HTML-XML, JS, and CSS settings; and the Browser Cache tab's HTML-XML, CSS-JS, and Media-Other settings.)
Once I applied the above settings in W3 Total Cache, the Google Page Speed score for my blog pthakkar.com increased to 97 as of my last check.
(Screenshot: Google Page Speed score after the settings.)

How to upgrade php from php5.1 to php 5.3 on CentOS

Today I am starting my first blog on “How to upgrade php from php5.1 to php 5.3 on my CentOS server”.
In order to upgrade PHP from 5.1 to 5.3, do the following:
1) # php -v
PHP 5.1.6 (cli) (built: Jun 27 20XX 12:25:37)
Copyright (c) 1997-2010 The PHP Group
Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

2) run following command:
# yum list installed | grep php | cut -d' ' -f1
php.i386
php-cli.i386
php-common.i386

3) Now run following command:
#yum remove php php-cli php-common

This will ask you to remove the above-named packages, which are the ones we got in step 2.
Press 'y' when prompted. This will remove the old packages.

4) Now run php -v again
# php -v

Nothing will come up.

5) Now run following command:

#yum install php53 php53-cli php53-common php53-devel php53-gd
This will prompt you to install the following 5 packages:

php53.i386
php53-cli.i386
php53-common.i386
php53-devel.i386
php53-gd.i386

Press 'y' when prompted, which will install the above packages.

6) run php -v to see what version you have
# php -v
PHP 5.3.3 (cli) (built: Jun 27 2012 12:25:37)
Copyright (c) 1997-2010 The PHP Group
Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

7) now restart your Apache server:
#service httpd restart

8) Now open your preferred browser and point it to your website.
You will see the following message:
Your PHP installation appears to be missing the MySQL extension which is required by WordPress.

9) This means that you either do not have MySQL installed or have not started the mysqld daemon.
If you don't have the MySQL server installed, do the following:
#yum install -y mysql mysql-server
Now ensure that MySQL and the MySQL server are installed.
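If the server package is installed but the daemon is not running, you can start it and enable it at boot with the standard CentOS service commands:

#service mysqld start
#chkconfig mysqld on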

10) Now install php-mysql
#yum install php53-mysql
This prompts you to install the php53-mysql package and its necessary dependencies, which in my case was php53-pdo.i386.
It will prompt you to press 'y' or 'N'.
Press 'y' when prompted, which will install the above packages.

11) Now ensure that PHP and the PHP MySQL components are installed using the following command:
# yum list installed | grep php | cut -d' ' -f1
php53.i386
php53-cli.i386
php53-common.i386
php53-devel.i386
php53-gd.i386
php53-mysql.i386
php53-pdo.i386

12) Restart Apache by
#service httpd restart

You will be good to go.
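As an optional sanity check, you can confirm that PHP now loads the MySQL extension:

# php -m | grep -i mysql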

There are a few good resources on this:

Upgrade PHP 5.1/5.2 to 5.3 on CentOS By Chris Jean

Step by step WordPress installation video on CentOS 5.6 by Joseph Palumbo