Deploying a Ceph Cluster from Scratch

For a deeper treatment of Ceph concepts and usage, see the book 《ceph分布式存储学习指南》 (the PDF is distributed via the author's WeChat public account; send #ceph_book_01 to receive it). Note that the book was published some time ago and the deployment method it describes is outdated; for deployment, follow this article instead.

1. Prepare the following three machines and configure the epel repository on all three:

Hostname     IP           Role                        OS
ceph-node1   10.0.1.201   ceph-deploy, monitor, osd   CentOS 7.8
ceph-node2   10.0.1.202   monitor, osd                CentOS 7.8
ceph-node3   10.0.1.203   monitor, osd                CentOS 7.8

Attach one extra disk device to each host to be used as an OSD.

2. Configure time synchronization on all three hosts:

$ crontab -l
*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com &> /dev/null
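
The cron job above depends on the ntpdate package. A daemon-based alternative is chronyd, which ships in the CentOS 7 base repo; the sketch below (assuming the chrony package is installed) points it at the same Aliyun NTP server:

```shell
# sketch: chronyd instead of a cron'd ntpdate; assumes the chrony package
# from the CentOS base repo is installed
cat << EOF > /etc/chrony.conf
server ntp1.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
EOF
systemctl enable --now chronyd
```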

3. Configure the hosts mapping on all three hosts:

$ cat << EOF >> /etc/hosts
10.0.1.201      ceph-node1
10.0.1.202      ceph-node2
10.0.1.203      ceph-node3
EOF
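
Since the heredoc appends unconditionally, running it twice duplicates the entries. An idempotent variant is sketched below; the temp-file default only exists so the demo is safe to run anywhere, so set hosts_file=/etc/hosts for real use:

```shell
# append the cluster entries only if they are not already present
hosts_file=${hosts_file:-$(mktemp)}   # default to a temp file for a safe demo
grep -q 'ceph-node1' "$hosts_file" || cat << EOF >> "$hosts_file"
10.0.1.201      ceph-node1
10.0.1.202      ceph-node2
10.0.1.203      ceph-node3
EOF
grep -c 'ceph-node' "$hosts_file"     # prints 3 once the entries are in place
```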

4. Set up passwordless SSH among the three hosts; run the following on ceph-node1:

$ ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
$ mv /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
$ chmod 600 /root/.ssh/authorized_keys
$ scp -rp /root/.ssh ceph-node2:/root/
$ scp -rp /root/.ssh ceph-node3:/root/
$ echo -e '\tStrictHostKeyChecking no' >> /etc/ssh/ssh_config
$ ssh ceph-node2 "echo -e '\tStrictHostKeyChecking no' >> /etc/ssh/ssh_config"
$ ssh ceph-node3 "echo -e '\tStrictHostKeyChecking no' >> /etc/ssh/ssh_config"
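
Appending `StrictHostKeyChecking no` to the system-wide ssh_config disables host-key verification for every outgoing connection from these hosts. A narrower alternative (a sketch, scoped to this cluster's host-name pattern) is a per-user entry in ~/.ssh/config on each node:

```
Host ceph-node*
    StrictHostKeyChecking no
```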

5. Add the ceph yum repository on ceph-node1:

$ cat << EOF > /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
EOF
$ yum clean all && yum makecache

6. Install the ceph-deploy tool on ceph-node1:

$ yum -y install ceph-deploy

7. Initialize the cluster from ceph-node1:

$ mkdir /etc/ceph && cd /etc/ceph
$ ceph-deploy new ceph-node1 ceph-node2 ceph-node3
...
[ceph_deploy.new][DEBUG ] Resolving host ceph-node3
[ceph_deploy.new][DEBUG ] Monitor ceph-node3 at 10.0.1.203
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-node1', 'ceph-node2', 'ceph-node3']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.0.1.201', '10.0.1.202', '10.0.1.203']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf....

8. Use ceph-deploy to install the Ceph binary packages on all nodes; run the following on ceph-node1:

$ ceph-deploy install ceph-node1 ceph-node2 ceph-node3 --repo-url=http://mirrors.aliyun.com/ceph/rpm-octopus/el7/
# check the installed ceph version
$ ceph -v
ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)

To deploy a different release, simply change the rpm-* segment of the URL; all available releases are listed at http://mirrors.aliyun.com/ceph/.
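
The same release switch can be applied to the repo file from step 5 with a single sed. The sketch below edits a throwaway copy so it is safe to run anywhere; on the real node, point it at /etc/yum.repos.d/ceph.repo and rerun yum makecache afterwards:

```shell
# demo copy standing in for /etc/yum.repos.d/ceph.repo
repo=$(mktemp)
printf 'baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/\n' > "$repo"
# rewrite the release segment of the URL, e.g. jewel -> octopus
sed -i 's#/rpm-[a-z]*/#/rpm-octopus/#' "$repo"
cat "$repo"
```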

9. Install the Python dependencies on ceph-node1 (the octopus mgr daemon needs pecan and werkzeug, so you will likely need to repeat this on every node that runs a mgr):

# use a pip mirror inside China
$ mkdir ~/.pip && cat << EOF > ~/.pip/pip.conf
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple

[install]
trusted-host=pypi.tuna.tsinghua.edu.cn
EOF
# install the dependencies
$ pip3 install pecan werkzeug

10. Edit the configuration file on ceph-node1:

$ cat << EOF > /etc/ceph/ceph.conf 
[global]
fsid = 670d637d-f95c-4caf-9aaf-b7289d0b3e2d
mon_initial_members = ceph-node1
mon_host = 10.0.1.201
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

osd pool default size = 2     # default replica count (the default is 3)
rbd_default_features = 1      # permanently change the default rbd feature set
osd journal size = 2000       # journal size (MB)
public network = 10.0.1.0/24  # required when the hosts have more than one NIC
# if the OSD filesystem is ext4, the following two settings are also needed
# (check the filesystem type with: df -T -h)
#osd max object name len = 256
#osd max object namespace len = 64
EOF
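
Note that the fsid here must match the one `ceph-deploy new` generated in step 7 (it is recorded in the ceph.conf that command wrote). A quick way to pull it out for comparison is sketched below; the heredoc stands in for /etc/ceph/ceph.conf so the example is self-contained:

```shell
# extract the fsid field from a ceph.conf-style [global] section
fsid=$(awk -F' = ' '$1 == "fsid" {print $2}' << 'EOF'
fsid = 670d637d-f95c-4caf-9aaf-b7289d0b3e2d
mon_initial_members = ceph-node1
EOF
)
echo "$fsid"   # -> 670d637d-f95c-4caf-9aaf-b7289d0b3e2d
```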

11. Create your first monitor on ceph-node1:

$ ceph-deploy mon create-initial
# deploy monitors on ceph-node2 and ceph-node3
$ ceph-deploy mon create ceph-node2 ceph-node3

12. From ceph-node1, create the object storage devices (OSDs) and add them to the cluster:

# list the disks
$ ceph-deploy disk list ceph-node1 ceph-node2 ceph-node3
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][INFO  ] Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
[ceph-node1][INFO  ] Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
[ceph-node2][DEBUG ] connected to host: ceph-node2 
[ceph-node2][INFO  ] Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
[ceph-node2][INFO  ] Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
[ceph-node3][DEBUG ] connected to host: ceph-node3 
[ceph-node3][INFO  ] Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
[ceph-node3][INFO  ] Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
# wipe any existing partition tables and data from the disks:
$ ceph-deploy disk zap ceph-node1 /dev/sdb
$ ceph-deploy disk zap ceph-node2 /dev/sdb
$ ceph-deploy disk zap ceph-node3 /dev/sdb
# create the mgr daemons
$ ceph-deploy mgr create ceph-node1 ceph-node2 ceph-node3
# create the osds
$ ceph-deploy osd create ceph-node1 --data /dev/sdb
$ ceph-deploy osd create ceph-node2 --data /dev/sdb
$ ceph-deploy osd create ceph-node3 --data /dev/sdb

13. Check the cluster status from ceph-node1:

$ ceph status
  cluster:
    id:     670d637d-f95c-4caf-9aaf-b7289d0b3e2d
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3 (age 17m)
    mgr: ceph-node1(active, since 4m), standbys: ceph-node2, ceph-node3
    osd: 3 osds: 3 up (since 6s), 3 in (since 6s)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:     1 active+clean
 
  progress:
    Rebalancing after osd.2 marked in (1s)
      [............................] 

If the total capacity reported equals the sum of the three hosts' sdb disks (here 3 × 10 GiB = 30 GiB), the cluster is up and working.
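
The 30 GiB figure is raw capacity, the plain sum of the three sdb devices; with `osd pool default size = 2` from step 10, usable capacity is roughly half that, since every object is stored twice. The arithmetic as a trivial sketch:

```shell
osds=3
per_disk_gib=10                       # each sdb is a 10 GiB device
replicas=2                            # osd pool default size from step 10
raw=$((osds * per_disk_gib))
echo "raw: ${raw} GiB"                # -> raw: 30 GiB
echo "usable (approx): $((raw / replicas)) GiB"
```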

Copyright: licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) license

Links: https://www.zze.xyz/archives/ceph-deploy.html
