INSTALLATION DOCUMENTS BY RAVI

Saturday, August 12, 2017

Safe mode in Hadoop

1. Safe Mode is a state in Hadoop in which the HDFS cluster becomes read-only, i.e. no data can be written to the blocks and no deletion or replication of blocks can happen.
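
For instance, trying to write a file while the cluster is in safe mode fails. A minimal sketch, where the file and path names are just placeholders and the exact message can vary by Hadoop version:

# any write is rejected while safe mode is ON
hdfs dfs -put sample.txt /user/ravi/sample.txt
# fails with something like:
# put: Cannot create file /user/ravi/sample.txt. Name node is in safe mode.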

2. In this state, the Namenode is effectively in maintenance mode.

3. The Namenode implicitly goes into this mode at the startup of the HDFS cluster. At startup, the Namenode gives the Datanodes some time to account for their data blocks, so that it does not start the replication process without knowing whether sufficient replicas are already present.
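
These startup checks are driven by properties in hdfs-site.xml. A quick way to see the values in effect is hdfs getconf; the property names below are as in Hadoop 2.x, and the defaults noted in the comments may differ in other versions:

# fraction of blocks that must meet minimal replication before safe mode can end
hdfs getconf -confKey dfs.namenode.safemode.threshold-pct    # default 0.999f
# extra time (ms) the Namenode stays in safe mode after the threshold is reached
hdfs getconf -confKey dfs.namenode.safemode.extension        # default 30000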

4. Once all these validations are done by the Namenode, safe mode is implicitly disabled.

5. Sometimes the Namenode is not able to come out of safe mode on its own.

Example:
The Namenode allocated a block and then was killed before the HDFS client got the addBlock response. After the Namenode restarted, it could not get out of safe mode because it kept waiting for a block that was never created. In this case we cannot write data to HDFS, as the cluster is still in safe mode, which is read-only.

6. To resolve this, we need to manually exit safe mode by running the following command: hdfs dfsadmin -safemode leave
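
Forcing the Namenode out of safe mode should be a last resort, used only when it is genuinely stuck as in the example above. In scripts it is usually safer to wait for safe mode to end on its own; a minimal sketch:

# block until the Namenode leaves safe mode by itself, then continue
hdfs dfsadmin -safemode wait
# ... HDFS writes are safe from this point on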

To know the status of the safe mode:

hdfs dfsadmin -safemode get

To come out of the safe mode:

hdfs dfsadmin -safemode leave

To enter into the safe mode:

hdfs dfsadmin -safemode enter

7. On startup, the Namenode goes into safe mode.

8. In safe mode, the Namenode collects block reports from the Datanodes, and once enough of the blocks are confirmed as available, the Namenode comes out of safe mode.
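
To check whether the Datanodes have actually reported in, the standard dfsadmin report can be used; the exact layout of its output varies by version:

# prints a cluster summary plus one section per live Datanode,
# including the blocks each one has reported
hdfs dfsadmin -report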
