INSTALLATION DOCUMENTS BY RAVI

Sunday, October 15, 2017

Installing HBase

Prerequisites:


1. Java
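
The verification screenshot from the original post is missing here; a minimal check, assuming a JDK 1.7+ is already installed and on the PATH:

# Verify that a JDK is installed and visible on the PATH
java -version
echo $JAVA_HOME
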
2. Hadoop
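
Likewise, a minimal check that Hadoop is installed and its daemons are running (the /u01/hadoop/hadoop-2.7.2 location is taken from later sections of this post):

# Verify the Hadoop installation and that HDFS/YARN daemons are up
hadoop version
jps    # should list NameNode, DataNode, ResourceManager, NodeManager
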
Installing HBase in Standalone Mode
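
The download and extraction steps were screenshots in the original post; a sketch, assuming HBase 1.2.6 is unpacked under /u01/hadoop as the paths below indicate (the mirror URL is an assumption):

cd /u01/hadoop
wget https://archive.apache.org/dist/hbase/1.2.6/hbase-1.2.6-bin.tar.gz
tar -xzf hbase-1.2.6-bin.tar.gz
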
Configuring HBase in Standalone Mode

Modify hbase-env.sh file under /u01/hadoop/hbase-1.2.6/conf as below
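
The original listing is missing; the essential change is pointing HBase at the JDK. A sketch, where the JAVA_HOME value is an assumption to be adjusted to your installation:

# /u01/hadoop/hbase-1.2.6/conf/hbase-env.sh
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
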
Save and close the file.

Modify hbase-site.xml under /u01/hadoop/hbase-1.2.6/conf as below
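
The XML listing is missing; in standalone mode HBase only needs a local root directory and, optionally, a local ZooKeeper data directory. A sketch, where the data directory paths are assumptions:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///u01/hadoop/hbase-1.2.6/hbasedata</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/u01/hadoop/hbase-1.2.6/zookeeperdata</value>
  </property>
</configuration>
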
Starting HBase
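
A sketch of the start command and a quick check that the master came up:

/u01/hadoop/hbase-1.2.6/bin/start-hbase.sh
jps    # an HMaster process should now be listed
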
Installing HBase in Pseudo-Distributed Mode

CONFIGURING HBASE

Note: Stop HBase if it is running.
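
A sketch of the stop command:

/u01/hadoop/hbase-1.2.6/bin/stop-hbase.sh
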
Modify hbase-site.xml under /u01/hadoop/hbase-1.2.6/conf as below
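
The listing is missing; for pseudo-distributed mode HBase must be marked distributed and its root directory moved onto HDFS. A sketch, where the NameNode URI hdfs://localhost:9000 is an assumption that must match fs.defaultFS in your core-site.xml:

<configuration>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
</configuration>
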
Starting HBase
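
Hadoop must already be running before HBase starts in this mode; a sketch:

/u01/hadoop/hbase-1.2.6/bin/start-hbase.sh
jps    # expect HMaster, HRegionServer and HQuorumPeer alongside the Hadoop daemons
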
Checking the HBase Directory in HDFS
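
A sketch; once started in pseudo-distributed mode, HBase creates its root directory in HDFS automatically:

hadoop fs -ls /hbase
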
Starting and Stopping a Master

Using the “local-master-backup.sh” script you can start up to 10 backup masters. Open the HBase home folder and execute the following command to start them.
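
A sketch; each number is a port offset added to the default master ports, so the values 2, 3 and 5 are illustrative:

cd /u01/hadoop/hbase-1.2.6
bin/local-master-backup.sh start 2 3 5
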
Stopping Master
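
A backup master is stopped by killing its process via the PID file written at start time; a sketch for offset 1, where the OS user hadoop is an assumption:

cat /tmp/hbase-hadoop-1-master.pid | xargs kill -9
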
Starting and Stopping RegionServers

We can run multiple region servers from a single system using the following command.

Starting RegionServers
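
For example, using illustrative port offsets:

cd /u01/hadoop/hbase-1.2.6
bin/local-regionservers.sh start 2 3
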
Stopping RegionServers
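
For example, to stop the region server started with offset 3:

bin/local-regionservers.sh stop 3
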
Starting HBase shell
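
A sketch of launching the interactive shell:

/u01/hadoop/hbase-1.2.6/bin/hbase shell
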
HBase shell general commands

List:

Lists all the tables in HBase.
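
For example, in the HBase shell:

hbase> list
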
Status:

This command returns the status of the system, including the details of the servers running on it.
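
For example:

hbase> status
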
Version:

This command returns the version of HBase used in your system.
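
For example:

hbase> version
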
table_help:

This command provides help on how to use the table-referenced commands.
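
For example:

hbase> table_help
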
whoami:

This command returns the current user details of HBase.
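
For example:

hbase> whoami
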
Creating a Table using HBase Shell

We can create a table using the create command; here you must specify the table name and the column family name.

Syntax: create '<table name>', '<column family>'
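
A hedged example; the table and column family names are illustrative:

hbase> create 'emp', 'personal_data'
hbase> list
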
HBase Web Interface

http://localhost:16010

This interface lists your currently running Region servers, backup masters and HBase tables.

Setting Environment variables

Add the HBase home and bin path to the .bashrc file under /home/hadoop as below
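
The listing is missing; a sketch consistent with the install location used above:

export HBASE_HOME=/u01/hadoop/hbase-1.2.6
export PATH=$PATH:$HBASE_HOME/bin
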
Source the .bashrc file as below so the changes take effect
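
For example:

source ~/.bashrc
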
Spark Streaming and Kafka

Prerequisites:

1. Java

2. Start the ZooKeeper server
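
A sketch using the ZooKeeper scripts bundled with Kafka (the Kafka install path is an assumption):

cd /u01/hadoop/kafka
bin/zookeeper-server-start.sh config/zookeeper.properties
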
3. Start Kafka and create a new topic
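
A sketch of starting the broker from the Kafka folder:

bin/kafka-server-start.sh config/server.properties
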
Creating a new topic
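
A sketch; the topic name test, partition count and replication factor are illustrative:

bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic test
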
Test communication between Kafka and Spark

Run Kafka producer in a new screen
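
A sketch; messages typed at this prompt are published to the topic:

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
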
Run Spark’s KafkaWordCount in a new screen
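
A sketch using the example bundled with Spark (the Spark install path is an assumption); its arguments are the ZooKeeper quorum, a consumer group, the topic list and the number of threads. Words typed into the producer should appear counted here:

cd /u01/hadoop/spark
bin/run-example streaming.KafkaWordCount localhost:2181 test-group test 1
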
Step by step: Installing Apache Oozie on a single-node Hadoop cluster

Prerequisites:


1. Java JDK 1.6+

2. Maven 3.3.9
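
A minimal check that Maven is installed and on the PATH:

mvn -version
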
3. Hadoop 2.x

Installing Oozie 4.3.0
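
The download and extraction steps were screenshots; a sketch (the mirror URL is an assumption):

cd /u01/hadoop
wget https://archive.apache.org/dist/oozie/4.3.0/oozie-4.3.0.tar.gz
tar -xzf oozie-4.3.0.tar.gz
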
Setting the environment variables:
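
The listing is missing; at this stage the build mainly needs Maven on the PATH, so a sketch of lines appended to ~/.bashrc (the Maven install path is an assumption):

export M2_HOME=/u01/hadoop/apache-maven-3.3.9
export PATH=$PATH:$M2_HOME/bin
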
Save and close the file

Running Oozie Build
Modify the pom.xml under /u01/hadoop/oozie-4.3.0 as below
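
The original diff is missing; builds against Hadoop 2.x typically update the Hadoop version property in the pom. A sketch consistent with the Hadoop used in this post (treat the exact property name as an assumption to verify against your pom):

<hadoop.version>2.7.2</hadoop.version>
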
Now run the below command
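
A sketch of the standard distro build, skipping tests; the hadoop.version value mirrors the pom change above:

cd /u01/hadoop/oozie-4.3.0
bin/mkdistro.sh -DskipTests -Dhadoop.version=2.7.2
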
Oozie server setup
Add or modify hadoop core-site.xml under /u01/hadoop/hadoop-2.7.2/etc/hadoop as below
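
The listing is missing; the required entries are the proxy-user properties that allow the Oozie server to impersonate users (the user name hadoop is an assumption based on this post's paths):

<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
</property>
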
Create a libext folder under the oozie directory
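
A sketch; the built distribution lives under distro/target:

cd /u01/hadoop/oozie-4.3.0/distro/target/oozie-4.3.0-distro/oozie-4.3.0
mkdir libext
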
Move downloaded ext-2.2.zip to libext folder
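
A sketch; adjust the source path to wherever ext-2.2.zip was downloaded (it enables the Oozie web console):

mv /path/to/ext-2.2.zip libext/
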
Copy Hadoop libraries into libext folder of oozie
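
A sketch; the globs copy the Hadoop client jars and their dependencies:

cp /u01/hadoop/hadoop-2.7.2/share/hadoop/*/*.jar libext/
cp /u01/hadoop/hadoop-2.7.2/share/hadoop/*/lib/*.jar libext/
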
Preparing war file
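
A sketch; this bundles the jars and ExtJS from libext into the Oozie web application:

bin/oozie-setup.sh prepare-war
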
Creating Oozie Sharelib

Copy the properties from the Hadoop core-site.xml file into the core-site.xml file in /u01/hadoop/oozie-4.3.0/distro/target/oozie-4.3.0-distro/oozie-4.3.0/conf/hadoop-conf
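
A sketch that simply copies the whole file (copying just the needed properties also works):

cp /u01/hadoop/hadoop-2.7.2/etc/hadoop/core-site.xml \
   /u01/hadoop/oozie-4.3.0/distro/target/oozie-4.3.0-distro/oozie-4.3.0/conf/hadoop-conf/
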
Copy mapred-site.xml and yarn-site.xml files to /u01/hadoop/oozie-4.3.0/distro/target/oozie-4.3.0-distro/oozie-4.3.0/conf/hadoop-conf
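
A sketch:

cp /u01/hadoop/hadoop-2.7.2/etc/hadoop/mapred-site.xml \
   /u01/hadoop/hadoop-2.7.2/etc/hadoop/yarn-site.xml \
   /u01/hadoop/oozie-4.3.0/distro/target/oozie-4.3.0-distro/oozie-4.3.0/conf/hadoop-conf/
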
Add or modify oozie-site.xml file under /u01/hadoop/oozie-4.3.0/distro/target/oozie-4.3.0-distro/oozie-4.3.0/conf as below
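
The listing is missing; the property usually set here points Oozie at the Hadoop configuration directory. A sketch:

<property>
  <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
  <value>*=/u01/hadoop/hadoop-2.7.2/etc/hadoop</value>
</property>
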
Now run the below command
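
A sketch of creating and uploading the sharelib; the NameNode URI is an assumption that must match fs.defaultFS:

bin/oozie-setup.sh sharelib create -fs hdfs://localhost:9000
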
Creating Oozie database
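
A sketch using the embedded Derby database:

bin/ooziedb.sh create -sqlfile oozie.sql -run
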
Setting environment variables for Oozie
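
The listing is missing; a sketch of lines appended to ~/.bashrc, pointing at the built distribution (paths follow this post's layout):

export OOZIE_HOME=/u01/hadoop/oozie-4.3.0/distro/target/oozie-4.3.0-distro/oozie-4.3.0
export PATH=$PATH:$OOZIE_HOME/bin
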
Starting Oozie daemon
Run the below command to start the Oozie daemon
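
For example:

bin/oozied.sh start
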
Run the below command to run Oozie as a foreground process instead
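
For example:

bin/oozied.sh run
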
Setting up client node for Oozie
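
These steps were screenshots in the original post; a sketch, assuming the client tarball produced by the build is unpacked on the client node (its exact location under distro/target may differ):

tar -xzf /u01/hadoop/oozie-4.3.0/distro/target/oozie-client-4.3.0.tar.gz -C /u01/hadoop
export PATH=$PATH:/u01/hadoop/oozie-client-4.3.0/bin
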
Starting Oozie client node
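
A sketch; exporting OOZIE_URL lets the client omit the -oozie flag on every command:

export OOZIE_URL=http://localhost:11000/oozie
oozie admin -version    # verifies the client can reach the server
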
Oozie web console URL
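
With the default configuration the embedded server listens on port 11000, so the console is at:

http://localhost:11000/oozie
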
Checking the status of the Oozie process
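
A quick check; a healthy server reports the system mode as NORMAL:

oozie admin -oozie http://localhost:11000/oozie -status
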
Checking which sharelib is being used by Oozie while the daemon is running
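
A sketch:

oozie admin -oozie http://localhost:11000/oozie -shareliblist
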
Running the example Oozie jobs and testing the installation
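
A sketch; the examples tarball ships inside the built distribution:

cd $OOZIE_HOME
tar -xzf oozie-examples.tar.gz
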
We see different example folders such as pig, hive, map-reduce, etc.
The map-reduce example is used to explain the steps below.
In the job.properties file of each folder, change the NameNode port and JobTracker port as below
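
The listing is missing; a sketch of the relevant lines (the ports are assumptions: 9000 for the NameNode and 8032 for the YARN ResourceManager, which must match your Hadoop configuration):

nameNode=hdfs://localhost:9000
jobTracker=localhost:8032
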
Save and close the file.
Copying the examples folder to HDFS
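
A sketch; the examples are expected under the user's HDFS home directory, after which the map-reduce workflow can be submitted:

hadoop fs -put examples examples
oozie job -oozie http://localhost:11000/oozie \
  -config examples/apps/map-reduce/job.properties -run
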
Run the below command to check the status of the job
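
A sketch; use the job id printed by the -run command:

oozie job -oozie http://localhost:11000/oozie -info <job-id>
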
To check the log, run the below command
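
A sketch:

oozie job -oozie http://localhost:11000/oozie -log <job-id>
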
Informatica client tool certification matrix

Please find the Informatica client tool certification matrix below:

Informatica supported browsers list

Please find the list of browsers supported by Informatica below:
