Setting Up a Ceph Cluster on CentOS 7
1. Server Planning

| Hostname | Host IP | Disks | Roles |
| --- | --- | --- | --- |
| node1 | public-ip: 10.0.0.130 | sda, sdb, sdc (sda is the system disk, the other two are data disks) | ceph-deploy, monitor, mgr, osd |
| node2 | public-ip: 10.0.0.131 | sda, sdb, sdc (sda is the system disk, the other two are data disks) | monitor, mgr, osd |
| node3 | public-ip: 10.0.0.132 | sda, sdb, sdc (sda is the system disk, the other two are data disks) | monitor, mgr, osd |
2. Set the Hostnames
Set the hostname on all three hosts; each host runs its own command.
node1
[root@localhost ~]# hostnamectl set-hostname node1
[root@localhost ~]# hostname node1
node2
[root@localhost ~]# hostnamectl set-hostname node2
[root@localhost ~]# hostname node2
node3
[root@localhost ~]# hostnamectl set-hostname node3
[root@localhost ~]# hostname node3
After running the commands, close the current terminal window and open a new one to see the new hostname take effect.
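Alternatively, you can confirm the new hostname without reopening the terminal (a quick check, not part of the original steps):

hostnamectl status    # the "Static hostname" field should show node1/node2/node3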
3. Configure the hosts File
Run the following commands on all three machines to add the name mappings:
echo "10.0.0.130 node1 " >> /etc/hosts echo "10.0.0.131 node2 " >> /etc/hosts echo "10.0.0.132 node3 " >> /etc/hosts
4. Create a User and Set Up Passwordless Login
Create the user (run on all three machines):
useradd -d /home/admin -m admin
echo "123456" | passwd admin --stdin
# grant sudo privileges
echo "admin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/admin
sudo chmod 0440 /etc/sudoers.d/admin
Set up passwordless login (run only on node1):
[root@node1 ~]# su - admin
[admin@node1 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/admin/.ssh/id_rsa):
Created directory '/home/admin/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/admin/.ssh/id_rsa.
Your public key has been saved in /home/admin/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:qfWhuboKeoHQOOMLOIB5tjK1RPjgw/Csl4r6A1FiJYA admin@admin.ops5.bbdops.com
The key's randomart image is:
+---[RSA 2048]----+
|+o..             |
|E.+              |
|*%               |
|X+X .            |
|=@.+ S .         |
|X.* o + .        |
|oBo. . o .       |
|ooo. .           |
|+o....oo.        |
+----[SHA256]-----+
[admin@node1 ~]$ ssh-copy-id admin@node1
[admin@node1 ~]$ ssh-copy-id admin@node2
[admin@node1 ~]$ ssh-copy-id admin@node3
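Before continuing, you can verify that passwordless login works from node1; assuming the keys were copied correctly, each command should print the remote hostname without asking for a password:

[admin@node1 ~]$ ssh admin@node2 hostname
node2
[admin@node1 ~]$ ssh admin@node3 hostname
node3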
5. Configure Time Synchronization
Run on all three nodes:
yum -y install ntpdate
ntpdate -u cn.ntp.org.cn
crontab -e
# add the following line in the crontab editor:
*/20 * * * * ntpdate -u cn.ntp.org.cn > /dev/null 2>&1
systemctl reload crond.service
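To confirm the cron entry was saved and the NTP server is reachable, you can run (a simple check, not in the original text):

crontab -l
ntpdate -q cn.ntp.org.cn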
6. Install ceph-deploy and the Ceph Packages
Configure the Tsinghua (TUNA) mirror as the Ceph yum source:
cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=https://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
EOF
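This repo file needs to be present on all three nodes, since the later `ceph-deploy install --no-adjust-repos` step uses whatever repos are already configured locally. After writing it, refreshing the yum cache helps ensure the new repository is picked up (a routine step, not in the original text):

yum clean all
yum makecache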
Install ceph-deploy:
[root@node1 ~]# sudo yum install ceph-deploy
Initialize the mon nodes
Ceph needs packages from the EPEL repository, so every node that will have Ceph installed needs `yum install epel-release`.
[admin@node1 ~]$ mkdir my-cluster
[admin@node1 ~]$ cd my-cluster
# new
[admin@node1 my-cluster]$ ceph-deploy new node1 node2 node3
Traceback (most recent call last):
  File "/bin/ceph-deploy", line 18, in <module>
    from ceph_deploy.cli import main
  File "/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 1, in <module>
    import pkg_resources
ImportError: No module named pkg_resources
# The error above appears because pip is missing; install pip
[admin@node1 my-cluster]$ sudo yum install epel-release
[admin@node1 my-cluster]$ sudo yum install python-pip
# Re-run the initialization
[admin@node1 my-cluster]$ ceph-deploy new node1 node2 node3
[admin@node1 my-cluster]$ ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
[admin@node1 my-cluster]$ cat ceph.conf
[global]
fsid = a1132f78-cdc5-43d0-9ead-5b590c60c53d
mon_initial_members = node1, node2, node3
mon_host = 10.28.103.211,10.28.103.212,10.28.103.213
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
Edit ceph.conf and add the following configuration (the sample output above was captured from a 10.28.103.0/24 environment; adjust public network and cluster network to match your own subnets):
public network = 10.28.103.0/24
cluster network = 172.30.103.0/24
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 128
osd pool default pgp num = 128
osd pool default crush rule = 0
osd crush chooseleaf type = 1
max open files = 131072
ms bind ipv6 = false

[mon]
mon clock drift allowed = 10
mon clock drift warn backoff = 30
mon osd full ratio = .95
mon osd nearfull ratio = .85
mon osd down out interval = 600
mon osd report timeout = 300
mon allow pool delete = true

[osd]
osd recovery max active = 3
osd max backfills = 5
osd max scrubs = 2
osd mkfs type = xfs
osd mkfs options xfs = -f -i size=1024
osd mount options xfs = rw,noatime,inode64,logbsize=256k,delaylog
filestore max sync interval = 5
osd op threads = 2
Install the Ceph packages on the specified nodes
[admin@node1 my-cluster]$ ceph-deploy install --no-adjust-repos node1 node2 node3
`--no-adjust-repos` makes ceph-deploy use the repos already configured on each node instead of generating the official upstream repo files.
Deploy the initial monitors and gather the keys
[admin@node1 my-cluster]$ ceph-deploy mon create-initial
After this step, you will see the following keyrings in the current directory:
[admin@node1 my-cluster]$ ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph-deploy-ceph.log
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.conf                  ceph.mon.keyring
Copy the configuration file and key to each cluster node
The configuration file is the generated ceph.conf, and the key is ceph.client.admin.keyring, the default key the ceph client uses when connecting to the Ceph cluster. Here we copy them to all nodes with the following command.
[admin@node1 my-cluster]$ ceph-deploy admin node1 node2 node3
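You can verify that the files arrived on the other nodes, for example on node2 (a quick check; the exact file list may vary slightly):

[admin@node2 ~]$ ls /etc/ceph/
ceph.client.admin.keyring  ceph.conf  rbdmap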
7. Deploy ceph-mgr
# The L release of `Ceph` introduced the `manager daemon`; the following command deploys one `Manager` daemon
[admin@node1 my-cluster]$ ceph-deploy mgr create node1
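Optionally, and not part of the original steps, standby managers can also be created on the other nodes for high availability:

[admin@node1 my-cluster]$ ceph-deploy mgr create node2 node3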
8. Create the OSDs
Run the following commands on node1:
# Usage: ceph-deploy osd create --data {device} {ceph-node}
ceph-deploy osd create --data /dev/sdb node1
ceph-deploy osd create --data /dev/sdb node2
ceph-deploy osd create --data /dev/sdb node3
ceph-deploy osd create --data /dev/sdc node1
ceph-deploy osd create --data /dev/sdc node2
ceph-deploy osd create --data /dev/sdc node3
If this reports an error, remember to run it as root.
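If a data disk was used before (leftover partitions or LVM metadata), OSD creation may fail even as root; in that case the disk can be wiped first with ceph-deploy (shown here for /dev/sdb on node1 as an example; this destroys all data on that disk):

ceph-deploy disk zap node1 /dev/sdb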
Check the OSD status
[admin@node1 ~]$ sudo ceph health
HEALTH_OK
[admin@node1 ~]$ sudo ceph -s
  cluster:
    id:     af6bf549-45be-419c-92a4-8797c9a36ee8
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node1(active)
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   6.0 GiB used, 108 GiB / 114 GiB avail
    pgs:
By default, the ceph.client.admin.keyring file has mode 600, with owner and group root. If the admin user on a cluster node runs the ceph command directly, it will report that /etc/ceph/ceph.client.admin.keyring cannot be found, because the permissions are insufficient.
This problem does not occur with `sudo ceph`. To conveniently use the ceph command directly, the permissions can be set to 644. Run the following commands on node1 as the admin user.
[admin@node1 my-cluster]$ ceph -s
2020-03-08 07:59:36.062 7f52d08e0700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2020-03-08 07:59:36.062 7f52d08e0700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
[errno 2] error connecting to the cluster
[admin@node1 my-cluster]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
[admin@node1 my-cluster]$ ceph -s
  cluster:
    id:     af6bf549-45be-419c-92a4-8797c9a36ee8
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node1(active)
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   6.1 GiB used, 108 GiB / 114 GiB avail
    pgs:

[admin@node1 my-cluster]$
View the OSDs
[admin@node1 ~]$ sudo ceph osd tree
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       0.11151 root default
-3       0.03717     host node1
 0   hdd 0.01859         osd.0      up  1.00000 1.00000
 3   hdd 0.01859         osd.3      up  1.00000 1.00000
-5       0.03717     host node2
 1   hdd 0.01859         osd.1      up  1.00000 1.00000
 4   hdd 0.01859         osd.4      up  1.00000 1.00000
-7       0.03717     host node3
 2   hdd 0.01859         osd.2      up  1.00000 1.00000
 5   hdd 0.01859         osd.5      up  1.00000 1.00000
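For a per-OSD capacity and usage breakdown, `ceph osd df` can also be run (an extra check, not in the original text):

[admin@node1 ~]$ sudo ceph osd df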
9. Enable the MGR Dashboard Module
Method 1: command line
ceph mgr module enable dashboard
If the command above fails with the following error:
Error ENOENT: all mgr daemons do not support module 'dashboard', pass --force to force enablement
it means ceph-mgr-dashboard is not installed; install it on the mgr node:
yum install ceph-mgr-dashboard
Method 2: configuration file
# Edit the ceph.conf file
vi ceph.conf
[mon]
mgr initial modules = dashboard
# Push the configuration
[admin@node1 my-cluster]$ ceph-deploy --overwrite-conf config push node1 node2 node3
# Restart mgr
sudo systemctl restart ceph-mgr@node1
Web login configuration
By default, all HTTP connections to the dashboard are secured with SSL/TLS.
# To get the dashboard up and running quickly, generate and install a self-signed certificate with the built-in command:
[root@node1 my-cluster]# ceph dashboard create-self-signed-cert
Self-signed certificate created
# Create a user with the administrator role:
[root@node1 my-cluster]# ceph dashboard set-login-credentials admin admin
Username and password updated
# Check the ceph-mgr services:
[root@node1 my-cluster]# ceph mgr services
{
    "dashboard": "https://node1:8443/"
}
After completing the configuration above, open https://node1:8443 in a browser and log in with username admin and password admin to view the dashboard.
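If the dashboard is not reachable from the browser, the node's firewall may be blocking port 8443; assuming firewalld is running on node1 (not covered in the original steps), the port can be opened like this:

sudo firewall-cmd --add-port=8443/tcp --permanent
sudo firewall-cmd --reload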
References:
https://www.sysit.cn/blog/post/sysit/Ceph%E5%AE%89%E8%A3%85%E9%85%8D%E7%BD%AE%E6%89%8B%E5%86%8C
https://boke.wsfnk.com/archives/1163.html
https://www.linux-note.cn/?p=85