INSTALLATION DOCUMENTS BY RAVI

Thursday, August 17, 2017

Steps to avoid entering a password while starting Apache Hadoop

When we start Hadoop, it prompts for a password for each daemon it launches over SSH.

To avoid entering the password, follow the steps below:

1. Generate an SSH key without a passphrase
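For example, a key pair with an empty passphrase can be generated as below (assuming a standard OpenSSH installation):

```shell
# -P "" sets an empty passphrase; -f writes the key to the default location
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
```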

2. Copy id_rsa.pub to authorized_keys with the below command
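A minimal sketch of this step, assuming the key was generated at the default location:

```shell
# Append the public key to authorized_keys and restrict its permissions,
# since sshd rejects overly permissive authorized_keys files
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```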

3. Now SSH to localhost; it will not ask for a password now
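For example:

```shell
# The first connection may still ask to confirm the host fingerprint,
# but no password prompt should appear
ssh localhost
```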




4. Start Hadoop now
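Assuming HADOOP_HOME/sbin is on the PATH, the daemons can be started as below (script names may vary slightly between Hadoop versions):

```shell
# Start the HDFS daemons (NameNode, DataNode, SecondaryNameNode)
start-dfs.sh
# Start the YARN daemons (ResourceManager, NodeManager)
start-yarn.sh
```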

It will not ask for a password now.

5. Check with jps
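On a single-node setup, jps should list the running daemons:

```shell
# Typical output lists NameNode, DataNode, SecondaryNameNode,
# ResourceManager, NodeManager and Jps itself
jps
```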

Starting and stopping Cloudera Manager

Starting Cloudera Manager Server

To start the Cloudera Manager Server:

Syntax:


#service cloudera-scm-server start




It will take some time to start all the services so please be patient.

We can check the startup log at /var/log/cloudera-scm-server/cloudera-scm-server.log




After all the services have started, we can log in to the Cloudera Manager console with the below URL

http://localhost:7180

The default username and password are admin/admin.

Stopping Cloudera Manager Server

To stop the Cloudera Manager Server:

Syntax:


#service cloudera-scm-server stop 



To check the status of the Cloudera Manager Server:

Syntax:

#service cloudera-scm-server status





To restart the Cloudera Manager Server:

Syntax:

#service cloudera-scm-server restart



Starting Cloudera Manager Agent

To start the Cloudera Manager Agent:

Syntax:

#service cloudera-scm-agent start




Stopping Cloudera Manager Agent

To stop the Cloudera Manager Agent:

Syntax:

#service cloudera-scm-agent stop




To check the status of the Cloudera Manager Agent:

Syntax:

#service cloudera-scm-agent status





To restart the Cloudera Manager Agent:

Syntax:

#service cloudera-scm-agent restart





Wednesday, August 16, 2017

Starting and stopping HBase

Starting HBase


Starting HBase server

Navigate to HBASE_HOME/bin and issue the below command
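On a standard Apache HBase installation the command is:

```shell
# Starts HBase (and, in standalone/pseudo-distributed mode, a bundled ZooKeeper)
./start-hbase.sh
```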

Starting Master

Navigate to HBASE_HOME/bin and issue the below command
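A typical invocation, assuming a standard HBase install:

```shell
# Start only the HBase master daemon
./hbase-daemon.sh start master
```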

Starting Region server

Navigate to HBASE_HOME/bin and issue the below command
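A typical invocation, assuming a standard HBase install:

```shell
# Start only a region server daemon
./hbase-daemon.sh start regionserver
```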

Checking by using jps command
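For example:

```shell
# Typical output includes HMaster and HRegionServer
# (and HQuorumPeer when HBase manages ZooKeeper)
jps
```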

Starting HBase shell

Navigate to HBASE_HOME/bin and issue the below command
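For example:

```shell
# Opens the interactive HBase shell (a JRuby REPL)
./hbase shell
```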

Stopping HBase 

Stopping HBase Shell:

To come out of the HBase shell, simply type "quit" and press Enter

Stopping Region Servers:

Navigate to HBASE_HOME/bin and issue the below command
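A typical invocation, assuming a standard HBase install:

```shell
# Stop the region server daemon
./hbase-daemon.sh stop regionserver
```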





Stopping master server:

Navigate to HBASE_HOME/bin and issue the below command
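A typical invocation, assuming a standard HBase install:

```shell
# Stop the HBase master daemon
./hbase-daemon.sh stop master
```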





Stopping HBase:

Navigate to HBASE_HOME/bin and issue the below command
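A typical invocation, assuming a standard HBase install:

```shell
# Stops the whole HBase cluster; it may take a few moments to shut down
./stop-hbase.sh
```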

Starting and stopping Hive

Starting Hive

Starting Hive Metastore

Navigate to HIVE_HOME/bin and issue the below command
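A typical invocation, assuming a standard Hive install:

```shell
# Run the metastore service; & keeps it running in the background
./hive --service metastore &
```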

Run jps; if we find "RunJar" in the output, it confirms that the metastore has started.

Starting Hiveserver2

Navigate to HIVE_HOME/bin and issue the below command
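A typical invocation, assuming a standard Hive install:

```shell
# Run HiveServer2 in the background
./hive --service hiveserver2 &
```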

Starting Hive cli

Navigate to HIVE_HOME/bin and issue the below command
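For example:

```shell
# Launches the interactive Hive CLI
./hive
```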


Stopping Hive

Stopping Hive cli:


To come out of the Hive CLI, simply type "exit" and press Enter

Stopping Hiveserver2:
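There is no dedicated stop script in this setup; one way is to find the HiveServer2 process with jps and kill it (the PID you see will differ):

```shell
# HiveServer2 also appears as a RunJar process; -m shows the main-class
# arguments so it can be told apart from the metastore
jps -m | grep -i hiveserver2
# kill <pid>   -- replace <pid> with the process id printed above
```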

Stopping Hive metastore:

Manually kill the process running as "RunJar" (find its PID with jps).

Saturday, August 12, 2017

Starting and stopping Kafka

Starting Kafka server:

Navigate to KAFKA_HOME/bin directory and issue the below command
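A typical invocation, assuming a standard Kafka install:

```shell
# server.properties lives in KAFKA_HOME/config; ZooKeeper must already be running
./kafka-server-start.sh ../config/server.properties
```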

Stopping Kafka server:

Navigate to KAFKA_HOME/bin directory and issue the below command
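For example:

```shell
# Signals the broker to shut down cleanly
./kafka-server-stop.sh
```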




Starting and stopping Oozie

Starting Oozie daemon

To start the Oozie daemon, open a terminal, navigate to OOZIE_HOME/bin and issue the command "oozied.sh start"
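For example:

```shell
# Starts the Oozie server daemon
./oozied.sh start
```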

Oozie web console URL (by default http://localhost:11000/oozie)

Stopping Oozie daemon

To stop the Oozie daemon, open a terminal, navigate to OOZIE_HOME/bin and issue the command "oozied.sh stop"
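For example:

```shell
# Stops the Oozie server daemon
./oozied.sh stop
```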

Safe mode in hadoop

1. Safe Mode is a state in Hadoop where the HDFS cluster goes into read-only mode, i.e. no data can be written to the blocks and no deletion or replication of blocks can happen.

2. During this state, the Namenode goes into maintenance mode.

3. The Namenode implicitly goes into this mode at the startup of the HDFS cluster because at startup, the Namenode gives some time to the data nodes to account for their Data Blocks so that it does not start the replication process without knowing whether there are sufficient replicas already present or not.

4. Once all the validations are done by the Namenode, safe mode is implicitly disabled.

5. Sometimes it so happens that the Namenode is not able to come out of safe mode.

Example:
NameNode allocated a block and then was killed before the HDFS client got the addBlock response. After NameNode restarted, it couldn't get out of Safe Mode waiting for the block which was never created. In this case, we are not able to write data to the HDFS as it is still in safe mode which is read-only.

6. To resolve this, we need to manually exit safe mode by running the following command: hdfs dfsadmin -safemode leave

To know the status of the safe mode:
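A typical invocation:

```shell
# Prints "Safe mode is ON" or "Safe mode is OFF"
hdfs dfsadmin -safemode get
```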

To come out of the safe mode:
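A typical invocation:

```shell
# Forces the Namenode to leave safe mode
hdfs dfsadmin -safemode leave
```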

To enter into the safe mode:
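A typical invocation:

```shell
# Manually puts the Namenode into safe mode
hdfs dfsadmin -safemode enter
```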

7. On startup, the Namenode goes into safe mode.

8. In safe mode, the Namenode collects block reports from the data nodes, and once the data nodes confirm that sufficient block replicas are available, the Namenode comes out of safe mode.
