
Ceph Cluster Documentation

Cluster architecture:

[cluster architecture diagram]

Environment:

10.200.51.4   admin, osd, mon (management and monitor node)

10.200.51.9   osd, mds

10.200.51.10  osd, mds

10.200.51.113 client node

ceph1 acts as the admin, osd and mon node. A new disk is added to each of the first three machines.

[root@ceph1 ~]# mkfs.xfs /dev/sdb
meta-data=/dev/sdb               isize=512    agcount=4, agsize=1310720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@ceph1 ~]# mkdir /var/local/osd{0,1,2}
[root@ceph1 ~]# mount /dev/sdb /var/local/osd0/           


[root@ceph2 ~]# mkfs.xfs /dev/sdb
meta-data=/dev/sdb               isize=512    agcount=4, agsize=1310720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@ceph2 ~]# mkdir /var/local/osd{0,1,2}
[root@ceph2 ~]# mount /dev/sdb /var/local/osd1/           


[root@ceph3 ~]# mkfs.xfs /dev/sdb
meta-data=/dev/sdb               isize=512    agcount=4, agsize=1310720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@ceph3 ~]# mkdir /var/local/osd{0,1,2}
[root@ceph3 ~]# mount /dev/sdb /var/local/osd2/           

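The mounts above do not survive a reboot. A minimal sketch of the matching /etc/fstab entry on ceph1 (assuming the device is /dev/sdb as above; use osd1/osd2 on ceph2/ceph3):

# append to /etc/fstab on ceph1 (adjust the mount point per node)
/dev/sdb    /var/local/osd0    xfs    defaults    0 0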

  • Edit the hosts file (on every node)
10.200.51.4 ceph1
10.200.51.9 ceph2
10.200.51.10 ceph3
10.200.51.113 ceph4           


echo -e "10.200.51.4 ceph1\n10.200.51.9 ceph2\n10.200.51.10 ceph3\n10.200.51.113 ceph4" >> /etc/hosts


  • Set up passwordless SSH login (on every node)
ssh-keygen
ssh-copy-id ceph1
ssh-copy-id ceph2
ssh-copy-id ceph3
ssh-copy-id ceph4           

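An optional quick check from ceph1 that the key distribution worked (hostnames as defined in /etc/hosts above):

# each hostname should be printed without a password prompt
for h in ceph1 ceph2 ceph3 ceph4; do ssh $h hostname; done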

  • Synchronize time

    Install the ntp service on ceph1 so that ceph2 and ceph3 synchronize their clocks from it.

[root@ceph1 ~]# yum install ntp -y
[root@ceph1 ~]# systemctl start ntpd
[root@ceph2 ~]# ntpdate 10.200.51.4
[root@ceph3 ~]# ntpdate 10.200.51.4           


Alternatively, point all nodes at the same public NTP server:

ntpdate ntp1.aliyun.com           

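ntpdate is a one-shot sync, so clocks will drift again over time. One possible way to keep ceph2 and ceph3 in step, assuming crond is running, is an hourly re-sync against ceph1:

# on ceph2 and ceph3: re-sync from ceph1 every hour
(crontab -l 2>/dev/null; echo "0 * * * * /usr/sbin/ntpdate 10.200.51.4 >/dev/null 2>&1") | crontab -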

Installation on the admin node

Add the yum repository

Repository file (the Aliyun mirror of the Jewel release); save it as /etc/yum.repos.d/ceph.repo:

[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/ 
gpgcheck=0
priority=1

[ceph-noarch] 
name=ceph noarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/ 
gpgcheck=0
priority=1

[ceph-source]
name=Ceph source packages 
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS 
gpgcheck=0
priority=1           


yum clean all
yum makecache            


  • Refresh the package cache and install ceph-deploy
yum -y install ceph-deploy            


  • Create the monitor service
mkdir /etc/ceph && cd /etc/ceph
ceph-deploy new ceph1 
[root@ceph1 ceph]# ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring           


This generates a Ceph configuration file, a monitor keyring and a log file.

Adjust the replica count

# Change the default replica count in the configuration file from 3 to 2, so that the cluster can reach the active+clean state with only two OSDs. Add the line below to the [global] section (optional).

[root@ceph1 ceph]# vim ceph.conf
[global]
fsid = 94b37984-4bf3-44f4-ac08-3bb1a638c771
mon_initial_members = ceph1
mon_host = 10.200.51.4
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 2           

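If ceph.conf is edited again after the packages are installed, the change has to reach the other nodes as well; a sketch using ceph-deploy from /etc/ceph on ceph1:

# push the updated ceph.conf to every node, overwriting their copies
ceph-deploy --overwrite-conf config push ceph1 ceph2 ceph3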

  • Install ceph (on all nodes)
[root@ceph1 ceph]# ceph-deploy install ceph1 ceph2 ceph3 ceph4            


When it finishes:

[root@ceph1 ceph]#  ceph --version
ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)           


  • Create the Ceph monitor (run from the /etc/ceph/ directory)
[root@ceph1 ceph]# ceph-deploy mon create ceph1           


[root@ceph1 ceph]# ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok mon_status
{
    "name": "ceph1",
    "rank": 0,
    "state": "leader",
    "election_epoch": 3,
    "quorum": [
        0
    ],
    "outside_quorum": [],
    "extra_probe_peers": [],
    "sync_provider": [],
    "monmap": {
        "epoch": 1,
        "fsid": "0fe8ad12-6f71-4a94-80cc-2d19a9217b4b",
        "modified": "2019-10-31 01:50:50.330750",
        "created": "2019-10-31 01:50:50.330750",
        "mons": [
            {
                "rank": 0,
                "name": "ceph1",
                "addr": "10.200.51.4:6789\/0"
            }
        ]
    }
}           


  • Gather the keys:
[root@ceph1 ceph]# ceph-deploy gatherkeys ceph1           

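This pulls the admin keyring and the bootstrap keyrings for the other daemon types into the working directory; a quick way to confirm they arrived:

ls /etc/ceph/*.keyring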

Deploy the OSD service

Add the OSD nodes

These use the /var/local/osd{id} directories created earlier.

  • Prepare the OSDs
[root@ceph1 ceph]# ceph-deploy osd prepare ceph1:/var/local/osd0 ceph2:/var/local/osd1 ceph3:/var/local/osd2           


  • Activate the OSDs:

    First grant sufficient permissions (on each node, to its own OSD directory: /var/local/osd0/ on ceph1, /var/local/osd1/ on ceph2 and /var/local/osd2/ on ceph3):

chmod 777 -R /var/local/osd0/
chmod 777 -R /var/local/osd1/
chmod 777 -R /var/local/osd2/           


[root@ceph1 ceph]# ceph-deploy osd activate ceph1:/var/local/osd0 ceph2:/var/local/osd1 ceph3:/var/local/osd2

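Before going further it is worth confirming that all three OSDs joined the cluster and are up, for example with:

# expect osd.0, osd.1 and osd.2 listed as "up" under their hosts
ceph osd tree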

  • 檢視狀态:
[root@ceph1 ceph]#  ceph-deploy osd list ceph1 ceph2 ceph3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy osd list ceph1 ceph2 ceph3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f0d07370290>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f0d073be050>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('ceph1', None, None), ('ceph2', None, None), ('ceph3', None, None)]
[ceph1][DEBUG ] connected to host: ceph1 
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] find the location of an executable
[ceph1][DEBUG ] find the location of an executable
[ceph1][INFO  ] Running command: /bin/ceph --cluster=ceph osd tree --format=json
[ceph1][DEBUG ] connected to host: ceph1 
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] find the location of an executable
[ceph1][INFO  ] Running command: /usr/sbin/ceph-disk list
[ceph1][INFO  ] ----------------------------------------
[ceph1][INFO  ] ceph-0
[ceph1][INFO  ] ----------------------------------------
[ceph1][INFO  ] Path           /var/lib/ceph/osd/ceph-0
[ceph1][INFO  ] ID             0
[ceph1][INFO  ] Name           osd.0
[ceph1][INFO  ] Status         up
[ceph1][INFO  ] Reweight       1.0
[ceph1][INFO  ] Active         ok
[ceph1][INFO  ] Magic          ceph osd volume v026
[ceph1][INFO  ] Whoami         0
[ceph1][INFO  ] Journal path   /var/local/osd0/journal
[ceph1][INFO  ] ----------------------------------------
[ceph2][DEBUG ] connected to host: ceph2 
[ceph2][DEBUG ] detect platform information from remote host
[ceph2][DEBUG ] detect machine type
[ceph2][DEBUG ] find the location of an executable
[ceph2][INFO  ] Running command: /usr/sbin/ceph-disk list
[ceph2][INFO  ] ----------------------------------------
[ceph2][INFO  ] ceph-1
[ceph2][INFO  ] ----------------------------------------
[ceph2][INFO  ] Path           /var/lib/ceph/osd/ceph-1
[ceph2][INFO  ] ID             1
[ceph2][INFO  ] Name           osd.1
[ceph2][INFO  ] Status         up
[ceph2][INFO  ] Reweight       1.0
[ceph2][INFO  ] Active         ok
[ceph2][INFO  ] Magic          ceph osd volume v026
[ceph2][INFO  ] Whoami         1
[ceph2][INFO  ] Journal path   /var/local/osd1/journal
[ceph2][INFO  ] ----------------------------------------
[ceph3][DEBUG ] connected to host: ceph3 
[ceph3][DEBUG ] detect platform information from remote host
[ceph3][DEBUG ] detect machine type
[ceph3][DEBUG ] find the location of an executable
[ceph3][INFO  ] Running command: /usr/sbin/ceph-disk list
[ceph3][INFO  ] ----------------------------------------
[ceph3][INFO  ] ceph-2
[ceph3][INFO  ] ----------------------------------------
[ceph3][INFO  ] Path           /var/lib/ceph/osd/ceph-2
[ceph3][INFO  ] ID             2
[ceph3][INFO  ] Name           osd.2
[ceph3][INFO  ] Status         up
[ceph3][INFO  ] Reweight       1.0
[ceph3][INFO  ] Active         ok
[ceph3][INFO  ] Magic          ceph osd volume v026
[ceph3][INFO  ] Whoami         2
[ceph3][INFO  ] Journal path   /var/local/osd2/journal
[ceph3][INFO  ] ----------------------------------------           


  • Distribute the configuration

    (Use ceph-deploy to copy the configuration file and the admin key to all nodes, so that Ceph commands can be run without specifying the monitor address and ceph.client.admin.keyring each time.)

[root@ceph1 ceph]# ceph-deploy admin ceph1 ceph2 ceph3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy admin ceph1 ceph2 ceph3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7feff2220cb0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph1', 'ceph2', 'ceph3']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7feff2f39a28>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph1
[ceph1][DEBUG ] connected to host: ceph1 
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph2
[ceph2][DEBUG ] connected to host: ceph2 
[ceph2][DEBUG ] detect platform information from remote host
[ceph2][DEBUG ] detect machine type
[ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph3
[ceph3][DEBUG ] connected to host: ceph3 
[ceph3][DEBUG ] detect platform information from remote host
[ceph3][DEBUG ] detect machine type
[ceph3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf           


On each node, adjust the permissions of ceph.client.admin.keyring:

chmod +r /etc/ceph/ceph.client.admin.keyring           


  • 檢視 OSD 狀态:
[root@ceph1 ceph]# ceph health
HEALTH_OK           


Deploy the MDS service

[root@ceph1 ceph]# ceph-deploy mds create ceph2 ceph3             


檢視叢集狀态:

[root@ceph1 ceph]# ceph mds stat  
e4:, 3 up:standby
[root@ceph1 ceph]# ceph -s
    cluster 0fe8ad12-6f71-4a94-80cc-2d19a9217b4b
     health HEALTH_OK
     monmap e1: 1 mons at {ceph1=10.200.51.4:6789/0}
            election epoch 3, quorum 0 ceph1
     osdmap e15: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v25: 64 pgs, 1 pools, 0 bytes data, 0 objects
            15681 MB used, 45728 MB / 61410 MB avail
                  64 active+clean           


Create the storage system

First, check that no filesystem exists yet:

[root@ceph1 ceph]# ceph fs ls 
No filesystems enabled           


  • Create the storage pools
[root@ceph1 ceph]# ceph osd pool create cephfs_data 128 
pool 'cephfs_data' created
[root@ceph1 ceph]# ceph osd pool create cephfs_metadata 128 
pool 'cephfs_metadata' created           

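The 128 here is the placement-group count per pool; the traditional guidance for small clusters (fewer than five OSDs) is 128 PGs. The value can be read back afterwards:

# inspect the placement-group count of a pool
ceph osd pool get cephfs_data pg_num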

  • Create the filesystem

    With the pools in place, create the filesystem with ceph fs new <fs_name> <metadata_pool> <data_pool>. (In the run below the literal "128" is used as the filesystem name; normally you would pick something like cephfs.)

[root@ceph1 ceph]#  ceph fs new 128 cephfs_metadata cephfs_data
new fs with metadata pool 2 and data pool 1
[root@ceph1 ceph]# ceph fs ls
name: 128, metadata pool: cephfs_metadata, data pools: [cephfs_data ]           


Once the filesystem has been created, the MDS servers can reach the active state, for example in a single-MDS system:

[root@ceph1 ceph]#  ceph mds stat
e7: 1/1/1 up {0=ceph3=up:active}, 2 up:standby           


One MDS is active and the other two are in hot standby.

Mounting the Ceph filesystem

There are two ways to mount it:

  • Mount CephFS with the kernel driver
  • Mount CephFS as a user-space (FUSE) filesystem

Mounting the Ceph filesystem with the kernel driver

Client-side configuration:

# Create the mount point and store the secret key (needed if the Ceph configuration files were not copied to the client with ceph-deploy from the admin node)

[root@ceph3 ~]#  cat /etc/ceph/ceph.client.admin.keyring 
[client.admin]
        key = AQA6drpdei+IORAA7Ogqh9Wn0NbdGI/juTXnqw==           


将如上 key 對應的值儲存到用戶端(/etc/ceph/admin.secret)

[root@ceph4 ceph]# cat /etc/ceph/admin.secret 
AQA6drpdei+IORAA7Ogqh9Wn0NbdGI/juTXnqw==           

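Instead of copying the key by hand, it can also be extracted on any node that has admin access and then transferred to the client; a sketch:

# print only the bare key for client.admin and save it as the secret file
# (run on a node with the admin keyring, e.g. ceph1, then scp the file to ceph4)
ceph auth get-key client.admin > /etc/ceph/admin.secret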

To mount a Ceph filesystem with cephx authentication enabled, you must supply the user name and secret key.

[root@ceph4 ~]# mount -t ceph 10.200.51.4:6789:/ /opt/ -o name=admin,secretfile=/etc/ceph/admin.secret             


Check the mount:

[root@ceph4 ~]# df -h 
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 475M     0  475M   0% /dev
tmpfs                    487M     0  487M   0% /dev/shm
tmpfs                    487M  7.6M  479M   2% /run
tmpfs                    487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos-root   27G  1.6G   26G   6% /
/dev/sda1               1014M  136M  879M  14% /boot
tmpfs                     98M     0   98M   0% /run/user/0
10.200.51.4:6789:/        60G   16G   45G  26% /opt           

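To have the client remount CephFS automatically at boot, the equivalent /etc/fstab line would look roughly like this (the _netdev option is assumed so the mount waits for the network):

# /etc/fstab on ceph4
10.200.51.4:6789:/    /opt    ceph    name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev    0 0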

Unmount

[root@ceph4 ~]# umount /opt/
[root@ceph4 ~]# df -h       
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 475M     0  475M   0% /dev
tmpfs                    487M     0  487M   0% /dev/shm
tmpfs                    487M  7.6M  479M   2% /run
tmpfs                    487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos-root   27G  1.6G   26G   6% /
/dev/sda1               1014M  136M  879M  14% /boot
tmpfs                     98M     0   98M   0% /run/user/0           


Mounting the Ceph filesystem from user space (FUSE)
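
A minimal sketch of the FUSE approach, assuming the ceph-fuse package is available from the same repository and the Ceph configuration plus admin keyring have already been copied to the client:

yum install -y ceph-fuse
mkdir -p /mnt/cephfs
# mount CephFS via FUSE, authenticating with the admin keyring
ceph-fuse -k /etc/ceph/ceph.client.admin.keyring -m 10.200.51.4:6789 /mnt/cephfs
# unmount later with: fusermount -u /mnt/cephfs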

Title: Building a Ceph cluster

Author: cuijianzhe

URL: https://solo.cjzshilong.cn/articles/2019/10/30/1572426357896.html