
Fixing the Permission Error When Hadoop Runs Hive Jobs

Today, Xiao Qiao ran into the following problem while running a Hive job:

org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:251)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:232)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:176)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5490)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5472)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:5446)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3600)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3570)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3544)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:739)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:558)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1986)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1982)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1980)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2549)
	at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2518)
	at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:827)
	at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:823)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:823)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:816)
	at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:125)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:348)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
	at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
	at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
	at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
	at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
	at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:425)
	at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136)
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1485)
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1263)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1091)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:921)
	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
	at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:790)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:623)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
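The message itself points at the cause: the Hive job is submitted as user root, but the MapReduce staging directory is created under /user, which is owned by hdfs:supergroup with mode drwxr-xr-x, so root has no write permission there. A quick way to confirm this on your own cluster (a minimal sketch; owner and mode may differ):

# show the owner and mode of /user (here: hdfs supergroup, drwxr-xr-x)
hadoop fs -ls -d /user
# show which OS user the Hive CLI is submitting the job as
whoami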

There are plenty of solutions online, and they are almost all the same: fix it by changing the configuration. On a CDH cluster the same change can also be made from the web UI.

In conf/hdfs-site.xml, find the dfs.permissions property and set its value to false (on Hadoop 2.x the property is officially named dfs.permissions.enabled, but the deprecated name dfs.permissions still works):

<property>
  <name>dfs.permissions</name>
  <value>false</value>
  <description>
    If "true", enable permission checking in HDFS.
    If "false", permission checking is turned off,
    but all other behavior is unchanged.
    Switching from one parameter value to the other does not change the mode,
    owner or group of files or directories.
  </description>
</property>
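A change to hdfs-site.xml only takes effect after the NameNode is restarted (on CDH, redeploy the configuration and restart HDFS from the web UI). A minimal sketch of applying and verifying the change, assuming a plain Apache Hadoop 2.x layout rather than CDH packaging:

# restart the NameNode so the new dfs.permissions value is picked up
$HADOOP_HOME/sbin/hadoop-daemon.sh stop namenode
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode
# with permission checking disabled, the mkdir that failed above should now succeed
hadoop fs -mkdir -p /user/root

Disabling permission checking for the whole cluster is a fairly blunt fix; the steps below reach the same goal by moving Hive's scratch directory and adjusting its ownership instead.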

The solution we used:

1) Modify the Hive configuration file so that Hive writes its intermediate data to a different directory.

cd /opt/hive-0.9.0/conf

vi hive-site.xml  # edit as follows:

<property>
  <name>hive.exec.scratchdir</name>
  <!-- use hdfs:///hive_tmp as the directory for Hive's intermediate data -->
  <value>/hive_tmp/hive-${user.name}</value>
  <description>Scratch space for Hive jobs</description>
</property>
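The scratch directory has to exist in HDFS before its owner and permissions can be changed in step 2. A minimal sketch, assuming the HDFS superuser hdfs creates it:

# create the scratch directory as the HDFS superuser
sudo -u hdfs hadoop fs -mkdir -p /hive_tmp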

2) Change the owner and group of the hdfs:///hive_tmp directory.

hadoop fs -chown -R common_user:common_group /hive_tmp

# adjust the permissions of hdfs:///hive_tmp so that every user in group common_group can access it (rwx)
hadoop fs -chmod -R g+w /hive_tmp
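To double-check before moving on (the output format varies slightly between Hadoop versions):

# the directory should now show common_user common_group and mode drwxrwxr-x
hadoop fs -ls -d /hive_tmp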

3) Add the ordinary users user1, user2, ... to group common_group.

usermod -a -G common_group user1
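Note that HDFS resolves group membership on the NameNode host (with the default shell-based group mapping), so the group and its members must exist there, not only on the client machine. A minimal sketch, assuming common_group does not exist yet:

# create the group first if needed (on the NameNode host as well)
groupadd common_group
usermod -a -G common_group user2   # repeat for each user
# verify the membership
id user1
# if the NameNode has already cached the old group membership, refresh it
hdfs dfsadmin -refreshUserToGroupsMappings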

4) Switch to user1 and test a Hive query.

su user1

hive -e 'select * from taxi where speed > 150;'

# the query succeeds
