INSTALLATION DOCUMENTS BY RAVI


Friday, April 27, 2018

Unauthorized connection for super-user: root from IP 127.0.0.1

When trying to upload files to HDFS through the Ambari console, the following error appears:

Unauthorized connection for super-user: root from IP 127.0.0.1

This usually means the client IP address has not been added to the property: hadoop.proxyuser.root.hosts

If the ambari-server daemon is running as root, we need to set up a proxy user for root in core-site by adding or changing the following properties in HDFS > Configs > Custom core-site:
  1. hadoop.proxyuser.root.groups=*
  2. hadoop.proxyuser.root.hosts=*
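For reference, these are the equivalent core-site.xml entries (a minimal sketch; the wildcard values let root proxy from any host and for any group, so narrow them in security-sensitive clusters):

<!-- core-site.xml: allow the root user to impersonate other users -->
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>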
To apply the change from the Ambari console:
  1. Open the Ambari console.
  2. Go to HDFS -> Configs.
  3. Click on Custom core-site.
  4. Modify the parameters listed above.
  5. Save the configuration and click OK.
  6. Restart the affected components.
  7. After a successful restart, try to upload files again.

Thursday, April 26, 2018

ERROR: Attempting to operate on hdfs datanode as root

The following error appears when trying to start Hadoop as the root user:

ERROR: Attempting to operate on hdfs datanode as root

To resolve this, navigate to /usr/hdf/3.1.1.0-35/hadoop/etc/hadoop, open the hadoop-env.sh file, and add the lines below:
# Allow the HDFS and YARN daemons to be run as the root user
# (the Hadoop 3.x start scripts refuse to start them otherwise)
export HDFS_NAMENODE_USER="root"
export HDFS_DATANODE_USER="root"
export HDFS_SECONDARYNAMENODE_USER="root"
export YARN_RESOURCEMANAGER_USER="root"
export YARN_NODEMANAGER_USER="root"

Start Hadoop now.
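A minimal sketch of the start commands, assuming $HADOOP_HOME points at this installation:

cd $HADOOP_HOME/sbin
./start-dfs.sh     # starts the NameNode, DataNode and SecondaryNameNode
./start-yarn.sh    # starts the ResourceManager and NodeManager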

Sunday, March 25, 2018

Installing NiFi

Downloading the software
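A representative download command; the version and mirror URL are assumptions, so pick the release you need from the Apache NiFi download page:

wget https://archive.apache.org/dist/nifi/1.5.0/nifi-1.5.0-bin.tar.gz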

Extracting the software
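For example, assuming the tarball downloaded above:

tar -xzf nifi-1.5.0-bin.tar.gz
cd nifi-1.5.0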

Configuring NiFi


Open nifi.properties to update the configuration.
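For example, to serve the UI on port 8090 (the port used below), update the web HTTP port; nifi.web.http.port is the stock property name:

# conf/nifi.properties
nifi.web.http.port=8090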

Save and close the file

Starting the NiFi server
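Assuming the standard scripts shipped with NiFi:

bin/nifi.sh start     # start NiFi in the background
bin/nifi.sh status    # check that it is running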

Connect to the NiFi data flow management URL in your browser: http://localhost:8090/nifi/

java.lang.OutOfMemoryError: Java heap space, Running OOM killer script for process 11420 for Solr on port 8983, hortonworks

In a Hortonworks cluster environment, the Ambari Infra Solr service failed to start with the below error:


# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="/usr/lib/ambari-infra-solr/bin/oom_solr.sh 8983 /var/log/ambari-infra-solr"
#   Executing /bin/sh -c "/usr/lib/ambari-infra-solr/bin/oom_solr.sh 8983 /var/log/ambari-infra-solr"...
Running OOM killer script for process 11420 for Solr on port 8983
Killed process 11420



Resolution:

To resolve this:

1. Log in to the Ambari management console.
2. Go to Ambari Infra.
3. Click on Configs.
4. Under the Settings tab, increase the Infra Solr Minimum and Maximum heap sizes and click Save (see the note below).
5. Start the service again.
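Under the hood, these heap values become the Solr JVM options. A representative line from the Solr environment script; the variable name is standard solr.in.sh, but the exact file and values in an Ambari-managed stack are assumptions to verify:

SOLR_JAVA_MEM="-Xms2048m -Xmx2048m"    # minimum and maximum heap for the Infra Solr JVM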

Sunday, March 11, 2018

Error: Could not contact Elasticsearch at http://localhost:9200. Please ensure that Elasticsearch is reachable from your system

Error: Could not contact Elasticsearch at http://172.16.10.53:9200. Please ensure that Elasticsearch is reachable from your system

The above error appears when opening the Kibana console.

Resolution:

Navigate to $ELASTICSEARCH_HOME/config, open elasticsearch.yml, and add the lines below:

script.disable_dynamic: true
http.cors.enabled: true
http.cors.allow-origin: "/.*/"

Save and close the file.

Restart Elasticsearch and reopen the Kibana console.
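If the error persists, confirm Elasticsearch is reachable from the machine running Kibana (adjust the host and port to your setup):

curl http://localhost:9200    # should return a JSON banner with the cluster name and version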

Saturday, February 17, 2018

yarn application listing and killing commands

Listing yarn applications:

From the terminal, we can do that via the yarn application -list command.
This will give you a list of all SUBMITTED, ACCEPTED, or RUNNING applications.
From this you can filter applications of the default queue by running yarn application -list | grep default
The same list is also visible from the web UI.

Killing a yarn application:

To kill an application you can run yarn application -kill <Application ID>, as in the example below.
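Putting the listing and kill commands together (the application ID below is illustrative):

yarn application -list                                    # all SUBMITTED / ACCEPTED / RUNNING apps
yarn application -list | grep default                     # only apps in the default queue
yarn application -kill application_1518858140487_0001     # kill one app by its ID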

Resolution for the error (ERROR: java.io.IOException: Table Namespace Manager not fully initialized, try again later)

In the HBase shell, when trying to create a table, the below error appears:

ERROR: java.io.IOException: Table Namespace Manager not fully initialized, try again later

To resolve this error, one solution is to clean the HBase-related data from ZooKeeper and HDFS.
Steps for cleaning HBase-related data from ZooKeeper and HDFS (a sketch of the typical commands follows):
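A minimal sketch, assuming an HDP-style install; check zookeeper.znode.parent in hbase-site.xml before running:

# stop HBase first, then clear its state
hbase clean --cleanZk      # remove HBase data from ZooKeeper
hbase clean --cleanHdfs    # remove HBase data from HDFS
# hbase clean --cleanAll does both in one step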

While cleaning, if we get the below error:

WatchedEvent state:SyncConnected type:None path:null Node does not exist: /HBase-unsecure

create the node by connecting to ZooKeeper, as shown below.
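A sketch using the ZooKeeper CLI; the command name is zookeeper-client on HDP (zkCli.sh on a plain ZooKeeper install), and the znode name is taken from the error above:

zookeeper-client
create /HBase-unsecure ""
quit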

Run the HBase clean command again.

Now try to create the table in the HBase shell again, for example:
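An illustrative create; the table and column family names are examples:

hbase shell
create 'test_table', 'cf1'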

The table is created successfully.

Check for the table in the HBase Master UI.


Friday, December 1, 2017

Step by Step Apache Ambari installation and configuration on Linux

Downloading Apache Ambari:

Connect to the Linux machine using PuTTY and issue the below command to download the Apache Ambari repository file:

wget -nv http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.5.2.0/ambari.repo -O /etc/yum.repos.d/ambari.repo

Installing Apache Ambari:

Install the Ambari server using yum as below:
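For example, as the root user:

yum install ambari-server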

Configuring Apache Ambari:

Issue the below command to start configuring Apache Ambari. It will ask for options to choose; pick the options you want and press Enter.
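The command that drives these prompts:

ambari-server setup    # prompts for JDK, database, and service account options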

Starting the Ambari server:

Issue the below command to start the Ambari server:
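For example:

ambari-server start     # start the server
ambari-server status    # confirm it is running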

Accessing the Ambari server admin console using the web UI:

We can access the Ambari server admin console using the below URL:

http://localhost:8080

The default user name and password are admin / admin.