Error on starting HDFS daemons on Hadoop multi-node cluster


I am having an issue with my Hadoop multi-node set-up when starting the HDFS daemons on the master with bin/start-dfs.sh.

I got the logs below on the master:

starting namenode, logging to /home/hduser/hadoop/libexec/../logs/hadoop-hduser-namenode-localhost.localdomain.out
slave: Warning: $HADOOP_HOME is deprecated.
slave:
slave: starting datanode, logging to /home/hduser/hadoop/libexec/../logs/hadoop-hduser-datanode-localhost.localdomain.out
master: Warning: $HADOOP_HOME is deprecated.
master:
master: starting datanode, logging to /home/hduser/hadoop/libexec/../logs/hadoop-hduser-datanode-localhost.localdomain.out
master: Warning: $HADOOP_HOME is deprecated.
master:
master: starting secondarynamenode, logging to /home/hduser/hadoop/libexec/../logs/hadoop-hduser-secondarynamenode-localhost.localdomain.out

I got the logs below on the slave, in the

hadoop-hduser-datanode-localhost.localdomain.log file.

Can anyone advise me what is wrong with the set-up?

2013-07-24 12:10:59,373 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.0.1:54310. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-07-24 12:11:00,374 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.0.1:54310. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-07-24 12:11:00,377 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to master/192.168.0.1:54310 failed on local exception: java.net.NoRouteToHostException: No route to host
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1144)
        at org.apache.hadoop.ipc.Client.call(Client.java:1112)

Make sure the namenode is running fine. If it is running, see whether there is a problem with the connection: the datanode is not able to talk to the namenode. Make sure you have added the IP and hostname of the master machine to the /etc/hosts file of the slave. Try telnet to 192.168.0.1 54310 and see whether you are able to connect or not.
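A quick sketch of those checks (assuming the master's IP really is 192.168.0.1 and the hostnames master/slave match the logs above):

    # On the master: confirm the NameNode process is actually running
    jps                        # output should include "NameNode"

    # On the slave: make sure the master's hostname resolves to its LAN address
    cat /etc/hosts             # expect a line like: 192.168.0.1   master

    # On the slave: check that the NameNode IPC port is reachable
    telnet 192.168.0.1 54310

If telnet cannot connect while the namenode is up, the problem is in the network path (firewall, routing, wrong bind address) rather than in Hadoop itself.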

Showing the namenode (NN) logs would also be helpful.

Edit:

See what the Hadoop wiki says about this problem: a TCP "No Route To Host" error, often wrapped in a Java IOException, occurs when one machine on the network does not know how to send TCP packets to the machine specified.

Some possible causes (not an exclusive list):

  • The hostname of the remote machine is wrong in the configuration files.
  • The client's host table /etc/hosts has an invalid IP address for the target host.
  • The DNS server's host table has an invalid IP address for the target host.
  • The client's routing tables (in Linux, iptables) are wrong.
  • The DHCP server is publishing bad routing information.
  • Client and server are on different subnets and are not set up to talk to each other. This may be an accident, or it may be done deliberately to lock down the Hadoop cluster.
  • The machines are trying to communicate using IPv6. Hadoop does not support IPv6 (one way to force IPv4 is sketched after this list).
  • The host's IP address has changed but a long-lived JVM is caching the old value. This is a known problem with JVMs (search for "java negative DNS caching" for the details and solutions).
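If IPv6 is the suspect, a commonly used workaround (a sketch, not part of the original answer) is to tell the Hadoop JVMs on every node to prefer IPv4, for example in conf/hadoop-env.sh:

    # conf/hadoop-env.sh: force Hadoop's JVMs to use the IPv4 stack
    export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true

Restart the daemons after the change so the new option takes effect.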

The quick solution: restart the JVMs.

These are all network configuration/router issues. As it is your network, only you can find out and track down the problem.
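One extra check worth doing (my own suggestion, not from the wiki text): the master logs above show everything under localhost.localdomain, so it is worth confirming which address the namenode is actually bound to.

    # On the master: see which address port 54310 is listening on
    sudo netstat -tlnp | grep 54310

If it shows 127.0.0.1:54310, the namenode is bound to loopback only and remote datanodes will never reach it; fs.default.name in core-site.xml should use a hostname that resolves to the master's LAN address (e.g. hdfs://master:54310) on every node.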

