
Fixes for everyday Hadoop errors, compiled


==== Description

Running show tables; in Hive fails with:

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.unset(Ljava/lang/String;)V

Fix

The Hive version (0.13.0) is too new for the Hadoop in use; that Hadoop release has no Configuration.unset method. Downgrade Hive to 0.11.0.
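A quick way to confirm the mismatch is to check whether the Configuration class shipped with your Hadoop has the method (the hadoop-core jar name below assumes a 1.x layout):

    javap -classpath $HADOOP_HOME/hadoop-core-*.jar org.apache.hadoop.conf.Configuration | grep unset   # no output means the method is missing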

A Hive query fails with:

Expression not in GROUP BY key

Cause

The statement has a GROUP BY but selects a column that is neither grouped nor aggregated, so its value is ambiguous.

Rewrite the SQL, as in the sketch below.
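For a hypothetical table t(id, name), the first query fails and the aggregated rewrite works:

    hive -e "SELECT id, name FROM t GROUP BY id"        # fails: name is not in the GROUP BY
    hive -e "SELECT id, max(name) FROM t GROUP BY id"   # works: name is aggregated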

Installing the VirtualBox Guest Additions fails with:

Makefile.include.header:97: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR=<directory> and run Make again

Install the compiler and kernel headers, then reboot:

    yum install gcc kernel kernel-devel

If the installer afterwards stalls on "Building the OpenGL support module", make the build ignore errors and re-run it:

    export MAKE='/usr/bin/gmake -i'
    ./VBoxLinuxAdditions.run
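The error message itself also names the direct fix: point the installer at the kernel sources (the path below assumes a RHEL/CentOS layout):

    export KERN_DIR=/usr/src/kernels/$(uname -r)
    ./VBoxLinuxAdditions.run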

HiveServer fails while executing a query.

The JDBC client reports:

Query returned non-zero code: 2, cause: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask

The HiveServer log shows:

org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row

The task log shows:

UDFArgumentException: The UDF implementation class 'com.udf.converter_long2str' is not present in the class path

The UDF jar had been copied into Hive's lib directory, but the running HiveServer never loaded it.

Restart HiveServer so it reloads the jars.
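A minimal sketch of the fix, assuming HiveServer1 launched via the hive CLI (the jar name is hypothetical):

    cp converter-udf.jar $HIVE_HOME/lib/   # jar name hypothetical
    # kill the running hiveserver process, then start it again so lib/ is rescanned
    hive --service hiveserver &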

Hadoop commands print warnings like:

WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

OpenJDK 64-Bit Server VM warning: You have loaded library /home/soulmachine/local/opt/hadoop-2.2.0/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.

It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
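Following the warning's own advice, clear the executable-stack flag on the library (execstack ships in the prelink package on RHEL/CentOS):

    execstack -c /home/soulmachine/local/opt/hadoop-2.2.0/lib/native/libhadoop.so   # path taken from the warning above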

httpd: Could not reliably determine the server's fully qualified domain name, using 192.168.12.210 for ServerName

DNS cannot resolve the machine's hostname. Set a name explicitly in the Apache config file:

    ServerName localhost

java.io.IOException: File <filename> could only be replicated to 0 nodes, instead of 1

The DataNode did not start successfully.

Restart the DataNode, or reformat the NameNode.
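A sketch using the Hadoop 1.x-style control scripts (script locations assumed):

    $HADOOP_HOME/bin/hadoop-daemon.sh start datanode   # restart just the DataNode
    # last resort: reformatting the NameNode destroys all HDFS metadata
    $HADOOP_HOME/bin/hadoop namenode -format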

==== Description

jianghehui@yunwei-jumper:~/softs$ mysql -h xxxx -P 3306 -uroot -p
jianghehui@yunwei-jumper:~/softs$ mysql -h
jianghehui@yunwei-jumper:~/softs$ mysql -V
jianghehui@yunwei-jumper:~/softs$ mysql
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
jianghehui@yunwei-jumper:~/softs$

Cause

At build time the client's bin directory was configured as an absolute path, so the bare mysql picked up from PATH is not the intended binary.

Fix

Invoke mysql by its absolute path, or add the right bin directory to PATH.
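A minimal sketch, assuming the client lives under /usr/local/mysql (path hypothetical):

    /usr/local/mysql/bin/mysql -h xxxx -P 3306 -uroot -p   # call it by absolute path
    export PATH=/usr/local/mysql/bin:$PATH                 # or put its bin directory on PATH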

You don't have permission to access /index.html on this server

index.html was created by the root user, and Apache lacks permission to access it. Open the Apache config file httpd.conf and find this block:

<Directory />
    Options FollowSymLinks
    AllowOverride None
    Order deny,allow
    Deny from all
    Satisfy all
</Directory>

Change "Deny from all" to "Allow from all", save, and restart Apache; the page then loads normally.
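The same edit and restart as shell commands (the httpd.conf path is the CentOS default; review the file first, since this rewrites every "Deny from all" in it):

    sed -i 's/Deny from all/Allow from all/' /etc/httpd/conf/httpd.conf
    service httpd restart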

MySQL: RESET SLAVE still leaves replication information behind.

xxx

Use RESET SLAVE ALL instead.
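RESET SLAVE ALL (MySQL 5.5.16 and later) also clears the connection parameters, such as the master host and user, that plain RESET SLAVE keeps:

    mysql -uroot -p -e "STOP SLAVE; RESET SLAVE ALL;"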

No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).

Package the classes into a jar and run the job from it.
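A sketch, with class and jar names hypothetical:

    jar cf myjob.jar -C classes/ .                        # bundle the compiled classes
    hadoop jar myjob.jar com.example.MyJob input output   # run the job from the jar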

Not a SequenceFile

The input was declared as a SequenceFile but is not one; create an actual SequenceFile.
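A genuine SequenceFile starts with the three magic bytes 'SEQ', which makes a quick check possible:

    hadoop fs -cat /path/to/file | head -c 3   # prints SEQ for a real SequenceFile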

Job submission failed with exception 'java.io.IOException(The ownership/permissions on the staging directory /tmp/hadoop-hadoop-user1/mapred/staging/hadoop-user1/.staging is not as expected. It is owned by hadoop-user1 and permissions are rwxrwxrwx. The directory must be owned by the submitter hadoop-user1 or by hadoop-user1 and permissions must be rwx------)'

hadoop fs -chmod -R 700 /tmp/hadoop-hadoop-user1/mapred/staging/hadoop-user1/.staging

Permission denied: user=xxj, access=WRITE, inode="user":hadoop:supergroup:rwxr-xr-x

Disable HDFS permission checking by adding this to hdfs-site.xml (restart the NameNode afterwards):

<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
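A less drastic alternative is to grant the user access instead of disabling permission checks globally (paths assumed):

    hadoop fs -chown xxj /user/xxj   # give the user their own HDFS home
    hadoop fs -chmod 777 /user       # or open up the parent directory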

WritableName cannot load class

The custom Writable class is not on the classpath.
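One way to ship it with the job, assuming the driver goes through ToolRunner/GenericOptionsParser (jar names hypothetical):

    hadoop jar myjob.jar com.example.MyJob -libjars my-writables.jar input output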

Type mismatch in key from map: expected org.apache.hadoop.io.BytesWritable, recieved org.apache.hadoop.io.LongWritable

The key type the Mapper emits does not match the map output key class declared on the job; make them agree (e.g., via JobConf.setMapOutputKeyClass or by fixing the Mapper's output type).

Cleaning up the staging area hdfs://192.168.12.200:9000/tmp/hadoop-root/mapred/staging/jianghehui/.staging/job_201307172232_0004

The SQL statement is faulty, for example using a bare table name without specifying the database.

PHP Startup: Unable to load dynamic library './php_mysql.dll' - The specified module could not be found

Call to undefined function mysql_connect()

In summary:

extension_dir must be set correctly.

Add PHP's install directory to %PATH%.

Copy the DLLs it depends on into %windir%\system32.
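The relevant php.ini lines (the paths are stock Windows defaults; adjust to your install):

    extension_dir = "C:\php\ext"
    extension=php_mysql.dll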

device "eth0" does not seem to be present, delaying initialization

When a Linux VM is cloned from a template, the NIC configuration (notably the MAC address) is copied along, but the hypervisor assigns the new VM a different MAC, so the interface fails to come up.

1. Open /etc/sysconfig/network-scripts/ifcfg-eth0 and make sure ONBOOT is yes.

2. Check whether the MAC in ifcfg-eth0 matches the one ifconfig reports, and correct ifcfg-eth0 if not (see the sketch after this list).

3. Restart the services: service NetworkManager restart; service network restart.

4. The system then recognizes the NIC again.
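A sketch of step 2, with the new MAC value hypothetical:

    sed -i 's/^HWADDR=.*/HWADDR=00:0c:29:aa:bb:cc/' /etc/sysconfig/network-scripts/ifcfg-eth0
    service network restart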

A keepalived test fails; /var/log/messages shows:

Keepalived_healthcheckers: IPVS: Can't initialize ipvs: Protocol not available

Check whether the LVS kernel module failed to load: lsmod | grep ip_vs indeed showed no ip_vs module, which should normally be present.

Load the ip_vs modules manually:

modprobe ip_vs

modprobe ip_vs_wrr

Add the modprobe lines to /etc/rc.local so they load automatically at boot.
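A minimal sketch of making the modules persistent:

    echo "modprobe ip_vs" >> /etc/rc.local
    echo "modprobe ip_vs_wrr" >> /etc/rc.local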

hive> show tables;

FAILED: Error in metadata: javax.jdo.JDOFatalInternalException: Unexpected exception caught.

NestedThrowables:

java.lang.reflect.InvocationTargetException

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask

Root cause unknown; deleting the build directory fixed it:

    rm -rf $HADOOP_HOME/build

WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.


A deprecated class is referenced. Replace org.apache.hadoop.metrics.jvm.EventCounter with org.apache.hadoop.log.metrics.EventCounter in every log4j.properties, including the copy inside lib/hive-common-0.10.0.jar!/hive-log4j.properties.
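A sketch of the edit; the jar-embedded copy has to be updated inside the jar itself (paths assumed):

    sed -i 's/metrics\.jvm\.EventCounter/log.metrics.EventCounter/' conf/log4j.properties
    # update the copy packed inside the hive-common jar
    unzip hive-common-0.10.0.jar hive-log4j.properties
    sed -i 's/metrics\.jvm\.EventCounter/log.metrics.EventCounter/' hive-log4j.properties
    jar uf hive-common-0.10.0.jar hive-log4j.properties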

Hadoop fails to start with: JAVA_HOME is not set and could not be found.

Set JAVA_HOME manually in libexec/hadoop-config.sh (or another startup script).
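For example (the JDK path is hypothetical):

    # near the top of libexec/hadoop-config.sh
    export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk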

hive> show tables; 

FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient

Either the JDBC driver failed to load, or the metastore database does not exist and the connection URL lacks createDatabaseIfNotExist=true.

Put the MySQL (or Derby) driver jar on Hive's classpath.
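A sketch of both fixes; the connector jar name and metastore URL are assumptions:

    cp mysql-connector-java-5.1.x.jar $HIVE_HOME/lib/

and in hive-site.xml:

<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
</property>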

Eclipse CDT fails at startup with:

Failed to load the JNI shared library

The JDK is 64-bit while Eclipse is 32-bit; the bitness does not match.

Install a JDK and an Eclipse of the same bitness.
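To check which kind of JDK is on the PATH:

    java -version   # a 64-bit JDK prints '64-Bit Server VM' in the last line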

With a MySQL metastore, show tables in Hive fails with: Index column size too large. The maximum column size is 767 bytes.

Change the metastore database's character set to latin1.
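A sketch, assuming the metastore database is named hive:

    mysql -uroot -p -e "ALTER DATABASE hive CHARACTER SET latin1;"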

A Hive query reports Table not found even though the table exists.

Qualify the table with its database name (db.table).
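For example, for a hypothetical table t in database mydb:

    hive -e "SELECT * FROM mydb.t"        # qualify directly
    hive -e "USE mydb; SELECT * FROM t"   # or switch databases first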