
Configuring Sentry on CDH and Testing Permissions

After adding the Sentry service in CDH, a few Hive configuration changes are needed before beeline can connect.

First, modify the HiveServer2 configuration under the Hive service, as shown below:

In the HiveServer2 Load Balancer field, remove the admin entry, and uncheck HiveServer2 Enable Impersonation.

The admin entry is a default left over from the earlier configuration; failing to remove it was exactly what kept beeline from connecting. After the change it looks like this:

Second, set the Hive service's Sentry Service option to Sentry (the default is none).

In YARN's NodeManager configuration, check the "Allowed System Users" property; if it already looks like the figure below (the default), nothing needs to change. The sketch below shows the underlying setting.
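This CM property maps to the allowed.system.users entry in the container-executor.cfg file that Cloudera Manager deploys to each NodeManager; what matters for Sentry is that hive is in the list. A minimal sketch, assuming the stock CDH default list (the exact users may differ by version):

# container-executor.cfg (deployed by Cloudera Manager); the list below is an assumed default
allowed.system.users=nobody,impala,hive,llama,hdfs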

Restart Hive to apply the stale configuration.

Next, enable ACLs for the HDFS service. This is what controls which HDFS directories a user can reach: in production a user is typically granted specific directories and locked out of everything else. See the sketch below.
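In Cloudera Manager this is the "Enable Access Control Lists" checkbox on the HDFS service (used together with "Enable Sentry Synchronization"). A minimal sketch of the equivalent raw hdfs-site.xml property, if you were configuring it by hand:

 <property>
   <name>dfs.namenode.acls.enabled</name>
   <value>true</value>
 </property>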

Restart HDFS to apply the stale configuration:

Then open the Sentry service's "Admin Groups" setting and check that hive, impala, hbase, and hue are listed; add any that are missing (by default they are already there).
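Outside of the CM UI, this list corresponds to the sentry.service.admin.group property in sentry-site.xml; a sketch assuming the default group list:

 <property>
   <name>sentry.service.admin.group</name>
   <value>hive,impala,hbase,hue</value>
 </property>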

Next, generate the hive.keytab file.

The cluster here has 20 nodes in total: wlint01, wlnamenode01, and wldatanode001 through wldatanode018. A hive principal must be created for each node, and all of their keys end up merged into a single hive.keytab file.

First, create the principals in the Kerberos database (replace <REALM> in the commands below with your cluster's Kerberos realm):

[[email protected] ~]$ sudo kadmin.local -q "addprinc -randkey hive/[email protected]"

[[email protected] ~]$ sudo kadmin.local -q "addprinc -randkey hive/[email protected]"

[[email protected] ~]$ for i in {1..9}; do sudo kadmin.local -q "addprinc -randkey hive/[email protected]"; done

[[email protected] ~]$ for i in {0..8}; do sudo kadmin.local -q "addprinc -randkey hive/[email protected]"; done

sudo is used here because in this production environment only the wlbd account is available, and kadmin.local will only run under sudo. If you are root, the sudo prefix is unnecessary.
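To confirm the principals were created, you can list them with a glob expression (kadmin's listprincs accepts one):

[wlbd@wlint01 ~]$ sudo kadmin.local -q "listprincs hive/*"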

Then generate the hive.keytab file in the current directory.

[[email protected] ~]$ sudo kadmin.local -q "xst -norandkey -k hive.keytab hive/[email protected]"

[[email protected] ~]$ sudo kadmin.local -q "xst -norandkey -k hive.keytab hive/[email protected]"

[[email protected] ~]$ for i in {1..9}; do sudo kadmin.local -q "xst -norandkey -k hive.keytab hive/[email protected]"; done

[[email protected] ~]$ for i in {0..8}; do sudo kadmin.local -q "xst -norandkey -k hive.keytab hive/[email protected]"; done

Modify the hive-site.xml file

[root@wlint01 ~]# vi /etc/hive/conf/hive-site.xml

Note that this step is done as root.

Append the following properties to hive-site.xml. The principal values use Hadoop's _HOST placeholder, which resolves to each node's own hostname at runtime; this is why a hive principal was generated for every node above:

 <property>
   <name>hive.server2.authentication</name>
   <value>kerberos</value>
 </property>
 <property>
   <name>hive.metastore.kerberos.principal</name>
   <value>hive/_HOST@<REALM></value>
 </property>
 <property>
   <name>hive.server2.authentication.kerberos.principal</name>
   <value>hive/_HOST@<REALM></value>
 </property>
 <property>
   <name>hive.metastore.kerberos.keytab.file</name>
   <value>/etc/hive/conf/hive.keytab</value>
 </property>

The last property is the path where hive.keytab is about to be placed. Be aware that when Hive is restarted, its configuration files may be regenerated back to the defaults, and hive.keytab under /etc/hive/conf/ may be deleted as well, forcing you to copy the keytab and edit hive-site.xml all over again. That is why we edit the configuration file first and copy hive.keytab afterwards.

scp the modified hive-site.xml to every node:

[root@wlint01 wlbd]# for i in {10..28}; do scp /etc/hive/conf/hive-site.xml 192.168.32.$i:/etc/hive/conf; done

The IP addresses here correspond to the hostnames listed above.

Likewise, copy hive.keytab into /etc/hive/conf on every host.

[root@wlint01 wlbd]# for i in {9..28}; do scp hive.keytab 192.168.32.$i:/etc/hive/conf; done

Fix the keytab permissions on every node:

[root@wlint01 wlbd]# for i in {9..28}; do ssh 192.168.32.$i "cd /etc/hive/conf; chmod 400 hive.keytab; chown hive:hadoop hive.keytab"; done
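A quick sanity check that every node ended up with the correct mode and owner:

[root@wlint01 wlbd]# for i in {9..28}; do ssh 192.168.32.$i "ls -l /etc/hive/conf/hive.keytab"; done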

2. Permission Testing

Create two OS users, user1 and user2:

[root@wlint01 wlbd]# useradd user1
[root@wlint01 wlbd]# passwd user1
[root@wlint01 wlbd]# useradd user2
[root@wlint01 wlbd]# passwd user2

Create the matching Kerberos principals:

[[email protected] ~]$ sudo kadmin.local -q "addprinc user1"

[[email protected] ~]$ sudo kadmin.local -q "addprinc user2"

Create the databases and tables. This requires a kinit with the hive.keytab generated earlier; the databases are then created from the Hive CLI, and the roles are created afterwards through beeline.

The current directory contains an events.csv file:

[wlbd@wlint01 ~]$ cat events.csv

10.1.2.3,US,android,createNote

10.200.88.99,FR,windows,updateNote

10.1.2.3,US,android,updateNote

10.200.88.77,FR,ios,createNote

10.1.4.5,US,windows,updateTag

[wlbd@wlint01 ~]$ kinit -kt hive.keytab hive/wlint01

Create two databases:

create database db1;

create database db2;

Create tables in the databases.

Create table1 in db1, and table1 and table2 in db2:

create table db1.table1 (

ip STRING, country STRING, client STRING, action STRING

) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

create table db2.table1 (

ip STRING, country STRING, client STRING, action STRING

) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

create table db2.table2 (

ip STRING, country STRING, client STRING, action STRING

) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

load data local inpath '/home/wlbd/events.csv' overwrite into table db1.table1;

load data local inpath '/home/wlbd/events.csv' overwrite into table db2.table1;

load data local inpath '/home/wlbd/events.csv' overwrite into table db2.table2;

Grant privileges to the users

These operations are performed over a beeline connection.

Grant user1 all privileges on db1:

 beeline -u "jdbc:hive2://wlint01:10000/;principal=hive/[email protected]"

create role user1_role;

GRANT ALL ON DATABASE db1 TO ROLE user1_role;

GRANT ROLE user1_role TO GROUP user1;

Grant user2 all privileges on db2:

create role user2_role;

GRANT ALL ON DATABASE db2 TO ROLE user2_role;

GRANT ROLE user2_role TO GROUP user2;
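ALL on a database is the coarsest grant; Sentry also supports table-level privileges. As an illustrative sketch (analyst_role and the analysts group are made-up names for this example), a read-only role on a single table would be:

create role analyst_role;
GRANT SELECT ON TABLE db2.table1 TO ROLE analyst_role;
GRANT ROLE analyst_role TO GROUP analysts;
SHOW GRANT ROLE analyst_role;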

Test the user privileges

user1 should have access only to db1 and default:

[wlbd@wlint01 ~]$ kinit user1

Password for user1@<REALM>: 

[wlbd@wlint01 ~]$ beeline -u "jdbc:hive2://wlint01:10000/;principal=hive/wlint01@<REALM>"

scan complete in 2ms

Connecting to jdbc:hive2://wlint01:10000/;principal=hive/wlint01@<REALM>

Connected to: Apache Hive (version 1.1.0-cdh5.14.2)

Driver: Hive JDBC (version 1.1.0-cdh5.14.2)

Transaction isolation: TRANSACTION_REPEATABLE_READ

Beeline version 1.1.0-cdh5.14.2 by Apache Hive

0: jdbc:hive2://wlint01:10000/> show databases;

INFO  : Compiling command(queryId=hive_20180618150404_d26fd5a2-8c54-44d5-9df8-38f362535491): show databases

INFO  : Semantic Analysis Completed

INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:database_name, type:string, comment:from deserializer)], properties:null)

INFO  : Completed compiling command(queryId=hive_20180618150404_d26fd5a2-8c54-44d5-9df8-38f362535491); Time taken: 0.38 seconds

INFO  : Executing command(queryId=hive_20180618150404_d26fd5a2-8c54-44d5-9df8-38f362535491): show databases

INFO  : Starting task [Stage-0:DDL] in serial mode

INFO  : Completed executing command(queryId=hive_20180618150404_d26fd5a2-8c54-44d5-9df8-38f362535491); Time taken: 0.281 seconds

INFO  : OK

+----------------+--+

| database_name  |

+----------------+--+

| db1            |

| default        |

+----------------+--+

2 rows selected (0.884 seconds)

0: jdbc:hive2://wlint01:10000/> 

user2 should have access only to db2 and default:

[wlbd@wlint01 ~]$ kinit user2

Password for user2@<REALM>: 

[wlbd@wlint01 ~]$ beeline -u "jdbc:hive2://wlint01:10000/;principal=hive/wlint01@<REALM>"

scan complete in 2ms

Connecting to jdbc:hive2://wlint01:10000/;principal=hive/wlint01@<REALM>

Connected to: Apache Hive (version 1.1.0-cdh5.14.2)

Driver: Hive JDBC (version 1.1.0-cdh5.14.2)

Transaction isolation: TRANSACTION_REPEATABLE_READ

Beeline version 1.1.0-cdh5.14.2 by Apache Hive

0: jdbc:hive2://wlint01:10000/> show databases;

INFO  : Compiling command(queryId=hive_20180618151010_397e7de5-2bd7-4bd7-90d7-bbfabcab48e8): show databases

INFO  : Semantic Analysis Completed

INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:database_name, type:string, comment:from deserializer)], properties:null)

INFO  : Completed compiling command(queryId=hive_20180618151010_397e7de5-2bd7-4bd7-90d7-bbfabcab48e8); Time taken: 0.104 seconds

INFO  : Executing command(queryId=hive_20180618151010_397e7de5-2bd7-4bd7-90d7-bbfabcab48e8): show databases

INFO  : Starting task [Stage-0:DDL] in serial mode

INFO  : Completed executing command(queryId=hive_20180618151010_397e7de5-2bd7-4bd7-90d7-bbfabcab48e8); Time taken: 0.176 seconds

INFO  : OK

+----------------+--+

| database_name  |

+----------------+--+

| db2            |

| default        |

+----------------+--+

2 rows selected (0.418 seconds)

Disabling the Hive CLI

The list shown means that hive, hue, hdfs, and sentry can use the Hive CLI, and users outside these groups cannot. So if you create a user such as user1, which is not in the list, user1 simply cannot access the Hive CLI. The raw property behind this is sketched below.
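On CDH this restriction is usually applied through the Hive Metastore proxy-user groups override; a sketch of the equivalent raw core-site.xml property, assuming the group list described above:

 <property>
   <name>hadoop.proxyuser.hive.groups</name>
   <value>hive,hue,hdfs,sentry</value>
 </property>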

HDFS test

With HDFS ACLs synchronized with Sentry, HDFS permissions track the permissions of the directory tree Sentry manages (/user/hive/warehouse):

[root@hxmaster ~]# kinit -kt hive.keytab hive/hxmaster

[root@hxmaster ~]# hadoop fs -getfacl -R /user/hive/warehouse/

# file: /user/hive/warehouse

# owner: hive

# group: hive

user::rwx

group::---

user:hive:rwx

group:hive:rwx

mask::rwx

other::--x

# file: /user/hive/warehouse/db1.db

# owner: hive

# group: hive

user::rwx

group::---

user:hive:rwx

group:user1:rwx

group:hive:rwx

mask::rwx

other::--x

# file: /user/hive/warehouse/db1.db/table1

# owner: hive

# group: hive

user::rwx

group::---

user:hive:rwx

group:user1:rwx

group:hive:rwx

mask::rwx

other::--x

# file: /user/hive/warehouse/db1.db/table1/events.csv

# owner: hive

# group: hive

user::rwx

group::---

user:hive:rwx

group:user1:rwx

group:hive:rwx

mask::rwx

other::--x

# file: /user/hive/warehouse/db2.db

# owner: hive

# group: hive

user::rwx

group::---

user:hive:rwx

group:user2:rwx

group:hive:rwx

mask::rwx

other::--x

# file: /user/hive/warehouse/db2.db/table1

# owner: hive

# group: hive

user::rwx

group::---

user:hive:rwx

group:user2:rwx

group:hive:rwx

mask::rwx

other::--x

# file: /user/hive/warehouse/db2.db/table1/events.csv

# owner: hive

# group: hive

user::rwx

group::---

user:hive:rwx

group:user2:rwx

group:hive:rwx

mask::rwx

other::--x

# file: /user/hive/warehouse/db2.db/table2

# owner: hive

# group: hive

user::rwx

group::---

user:hive:rwx

group:user2:rwx

group:hive:rwx

mask::rwx

other::--x

# file: /user/hive/warehouse/db2.db/table2/events.csv

# owner: hive

# group: hive

user::rwx

group::---

user:hive:rwx

group:user2:rwx

group:hive:rwx

mask::rwx

other::--x

# file: /user/hive/warehouse/test_table

# owner: hive

# group: hive

user::rwx

group::---

user:hive:rwx

group:hive:rwx

mask::rwx

other::--x

# file: /user/hive/warehouse/test_table/events.csv

# owner: hive

# group: hive

user::rwx

group::---

user:hive:rwx

group:hive:rwx

mask::rwx

other::--x

Switch to user1 and inspect the HDFS files:

[root@hxmaster ~]# kinit user1

Password for user1@<REALM>: 

You have mail in /var/spool/mail/root

[root@hxmaster ~]# hadoop fs -ls /user/hive/warehouse/db1.db

Found 1 items

drwxrwx--x+  - hive hive          0 2018-06-10 20:08 /user/hive/warehouse/db1.db/table1

[root@hxmaster ~]# hadoop fs -cat /user/hive/warehouse/db1.db/table1/events.csv

10.1.2.3,US,android,createNote

10.200.88.99,FR,windows,updateNote

10.1.2.3,US,android,updateNote

10.200.88.77,FR,ios,createNote

10.1.4.5,US,windows,updateTag

You have mail in /var/spool/mail/root

[root@hxmaster ~]# hadoop fs -ls /user/hive/warehouse/db2.db

ls: Permission denied: user=user1, access=READ_EXECUTE, inode="/user/hive/warehouse/db2.db":hive:hive:drwxrwx--x

[root@hxmaster ~]# hadoop fs -cat /user/hive/warehouse/db2.db/table1/events.csv

cat: Permission denied: user=user1, access=READ, inode="/user/hive/warehouse/db2.db/table1/events.csv":hive:hive:-rwxrwx--x

Switch to user2 and inspect the HDFS files:

[root@hxmaster ~]# kinit user2

Password for user2@<REALM>: 

[root@hxmaster ~]# hadoop fs -ls /user/hive/warehouse/db1.db

ls: Permission denied: user=user2, access=READ_EXECUTE, inode="/user/hive/warehouse/db1.db":hive:hive:drwxrwx--x

[root@hxmaster ~]# hadoop fs -cat /user/hive/warehouse/db1.db/table1/events.csv

cat: Permission denied: user=user2, access=READ, inode="/user/hive/warehouse/db1.db/table1/events.csv":hive:hive:-rwxrwx--x

[root@hxmaster ~]# hadoop fs -ls /user/hive/warehouse/db2.db

Found 2 items

drwxrwx--x+  - hive hive          0 2018-06-10 20:08 /user/hive/warehouse/db2.db/table1

drwxrwx--x+  - hive hive          0 2018-06-10 20:08 /user/hive/warehouse/db2.db/table2

[root@hxmaster ~]# hadoop fs -cat /user/hive/warehouse/db2.db/table1/events.csv

10.1.2.3,US,android,createNote

10.200.88.99,FR,windows,updateNote

10.1.2.3,US,android,updateNote

10.200.88.77,FR,ios,createNote

10.1.4.5,US,windows,updateTag

[root@hxmaster ~]# hadoop fs -cat /user/hive/warehouse/db2.db/table2/events.csv

10.1.2.3,US,android,createNote

10.200.88.99,FR,windows,updateNote

10.1.2.3,US,android,updateNote

10.200.88.77,FR,ios,createNote

10.1.4.5,US,windows,updateTag

---------------------
Author: AndrewTeng
Source: CSDN
Original: https://blog.csdn.net/qq_30982323/article/details/80704720
Copyright notice: This is the blogger's original article; please include a link to the original post when reposting.