# Ceph Deployment

* References
  1. [Preflight Checklist](http://docs.ceph.com/docs/master/start/quick-start-preflight/#ceph-deploy-setup)
  2. [Storage Cluster Quick Start](http://docs.ceph.com/docs/master/start/quick-ceph-deploy/)

* Deployment diagram

* Nodes

```
192.168.122.196 deploy
192.168.122.149 node1
192.168.122.18  node2
192.168.122.35  node3
```
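
ceph-deploy and the SSH steps below address the machines by hostname, so every node must be able to resolve these names. The simplest approach is to put the table above into `/etc/hosts` on each of the four machines:

```
192.168.122.196 deploy
192.168.122.149 node1
192.168.122.18  node2
192.168.122.35  node3
```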

*All nodes run CentOS 7.4 minimal.*

## Preparation

### Ceph yum repository setup

```
vim /etc/yum.repos.d/ceph.repo
```

```
[Ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
```
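
Since every node needs the same repo file, one option is to generate it once on the deploy node and push it out. A minimal sketch (only the noarch section is shown; the temp path is for harmless illustration — on the real nodes the target is `/etc/yum.repos.d/ceph.repo`):

```shell
# Generate the repo file once; a temp path keeps this sketch safe to run anywhere
repo=$(mktemp)
cat > "$repo" <<'EOF'
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
EOF

# Once the SSH trust from the next section is in place, the same file can be
# pushed to every node, e.g.:
#   for h in node1 node2 node3; do scp "$repo" "$h":/etc/yum.repos.d/ceph.repo; done
grep -c '^baseurl=' "$repo"   # prints 1
```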

### Passwordless SSH login

Set up SSH key authentication from the deploy node to node1, node2, and node3:

```
ssh-keygen
ssh-copy-id node1
ssh-copy-id node2
ssh-copy-id node3
```
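
Optionally, pinning the node addresses in the deploy node's `~/.ssh/config` makes the hostname-to-IP mapping explicit for `ssh`, `scp`, and ceph-deploy (addresses are from the node table above; this document runs everything as root):

```
Host node1
    Hostname 192.168.122.149
    User root
Host node2
    Hostname 192.168.122.18
    User root
Host node3
    Hostname 192.168.122.35
    User root
```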

## Firewall and SELinux settings

### Firewall

```
systemctl stop firewalld
systemctl disable firewalld
```

*(Disabling the firewall keeps this test setup simple; on a production cluster you would instead open the monitor port 6789/tcp and the OSD port range 6800-7300/tcp.)*

### SELinux

```
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
reboot
```
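
The change only takes effect after the reboot (running `setenforce 0` first avoids needing an immediate one). A harmless demonstration of exactly what the `sed` rewrites, applied to a scratch copy of the file:

```shell
# Work on a scratch copy so this demo never touches the real /etc/selinux/config
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"

# The same substitution the deployment step applies
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' "$cfg"

grep '^SELINUX=' "$cfg"   # prints SELINUX=disabled
```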

### Configure NTP

**Note:** the deploy node acts as the NTP server, and the other nodes synchronize their clocks against it.

* On the deploy node:

```
yum install ntp -y
vim /etc/ntp.conf
```
```
server 0.cn.pool.ntp.org
server 1.cn.pool.ntp.org
server 2.cn.pool.ntp.org
server 3.cn.pool.ntp.org

restrict 0.cn.pool.ntp.org nomodify notrap noquery
restrict 1.cn.pool.ntp.org nomodify notrap noquery
restrict 2.cn.pool.ntp.org nomodify notrap noquery
restrict 3.cn.pool.ntp.org nomodify notrap noquery

server 127.0.0.1 # local clock
fudge 127.0.0.1 stratum 10
```

After saving the file, start the service and enable it at boot: `systemctl start ntpd && systemctl enable ntpd`.

* On the node hosts:

```
yum install ntp -y
vim /etc/ntp.conf
```

>server deploy

Replace the default `server` entries with the single entry above so each node syncs to the deploy node, then start and enable ntpd on each node as well.

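For reference, a sketch of what the relevant part of a node's `/etc/ntp.conf` ends up looking like (the commented lines are the stock CentOS defaults being disabled; `iburst` merely speeds up the initial sync and is optional):

```
# Stock CentOS pool servers, disabled in favour of the local NTP server:
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst

server deploy iburst
```
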
## Deploying Ceph

*The following commands are run on the deploy node.*

```
yum install ceph-deploy -y
mkdir cluster
cd cluster
ceph-deploy new node1
vim ceph.conf
```

>public network = 192.168.122.0/24

Add this line to the `[global]` section of the generated `ceph.conf`.

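For reference, a sketch of what `ceph.conf` looks like after the edit (the `fsid` is generated by `ceph-deploy new`, so the value below is only a placeholder):

```
[global]
fsid = <generated-by-ceph-deploy>
mon_initial_members = node1
mon_host = 192.168.122.149
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.122.0/24
```
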
```
ceph-deploy install node1 node2 node3                  # install Ceph packages on all nodes
ceph-deploy mon create-initial                         # deploy the initial monitor and gather keys
ceph-deploy admin node1 node2 node3                    # push ceph.conf and the admin keyring
ceph-deploy osd create node1:vdb node2:vdb node3:vdb   # create one OSD per node on disk vdb
```

## Testing

*The following commands are run on node1.*

### Check the cluster status

```
ceph -s
```

### Set the replica count to 2

Replace `poolname` with the pool you want to change (for example the default `rbd` pool). `size` is the number of replicas; `min_size` is the minimum number of replicas that must be available for I/O to continue.

```
ceph osd pool set poolname size 2
ceph osd pool set poolname min_size 1
```
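
To make 2 replicas the cluster-wide default for newly created pools (rather than changing one existing pool), the equivalent settings can go into the `[global]` section of `ceph.conf` before pools are created; a sketch:

```
[global]
osd pool default size = 2
osd pool default min size = 1
```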

### Ceph performance tests

#### rados benchmark

```
rados bench -p rbd 10 write --no-cleanup
```

```
[root@node1 ~]# rados bench -p rbd 10 write --no-cleanup
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_node1_6143
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -          0
    1      16        27        11   43.9897        44    0.158622   0.132078
    2      16        27        11   21.9946         0           -   0.132078
    3      16        27        11   14.6632         0           -   0.132078
    4      16        27        11   10.9974         0           -   0.132078
    5      16        29        13   10.3975         2     4.48653   0.801245
    6      16        33        17   11.3307        16     5.83552    1.98999
    7      16        33        17   9.71202         0           -    1.98999
    8      16        33        17     8.498         0           -    1.98999
    9      16        43        27   11.9972   13.3333     8.38729    4.36487
   10      16        43        27   10.7975         0           -    4.36487
   11      16        43        27   9.81593         0           -    4.36487
   12      16        43        27   8.99793         0           -    4.36487
   13      16        43        27   8.30579         0           -    4.36487
   14      16        43        27   7.71251         0           -    4.36487
   15      16        44        28   7.46495  0.666667     10.2601    4.57541
   16      16        44        28   6.99839         0           -    4.57541
   17      15        44        29   6.82196         2      10.148    4.76757
   18      15        44        29   6.44294         0           -    4.76757
   19      15        44        29   6.10384         0           -    4.76757
2017-12-05 16:21:42.700892 min lat: 0.102313 max lat: 10.2601 avg lat: 4.76757
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
   20      15        44        29   5.79865         0           -    4.76757
Total time run:         20.331496
Total writes made:      44
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     8.65652
Stddev Bandwidth:       10.4479
Max bandwidth (MB/sec): 44
Min bandwidth (MB/sec): 0
Average IOPS:           2
Stddev IOPS:            2
Max IOPS:               11
Min IOPS:               0
Average Latency(s):     7.29053
Stddev Latency(s):      4.9346
Max latency(s):         15.843
Min latency(s):         0.102313
```
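
The summary numbers are internally consistent: 44 writes of 4 MB objects over the 20.331496 s run work out to the reported bandwidth. A quick check:

```shell
# Bandwidth (MB/sec) = (total writes * 4 MB object size) / total run time
awk 'BEGIN { printf "%.5f\n", 44 * 4 / 20.331496 }'   # prints 8.65652, matching the summary
```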

#### rbd block device performance test

```
rbd create bd0 --size 10G --image-format 2 --image-feature layering
rbd map bd0
rbd showmapped
mkfs.xfs /dev/rbd0
mkdir -p /mnt/ceph-bd0
mount /dev/rbd0 /mnt/ceph-bd0/
rbd bench-write bd2 --io-total 171997300
```

*Note:* the image created and mounted here is `bd0`, while the benchmark runs against `bd2`, a separate image. `bench-write` writes directly into an image via librbd, so running it against the mounted `bd0` would corrupt the freshly created filesystem.

```
[root@node1 ~]# rbd bench-write bd2 --io-total 171997300
bench-write  io_size 4096 io_threads 16 bytes 171997300 pattern sequential
  SEC       OPS   OPS/SEC   BYTES/SEC
    3     16403   4565.67  18700991.63
    5     16732   3025.63  12392968.92
    6     16780   2506.66  10267282.10
    7     17316   2464.75  10095624.00
    9     20506   2275.66   9321092.65
   10     21019    705.50   2889722.18
   12     32463   2183.37   8943067.81
   13     33340   2525.23  10343326.56
   16     39702   2340.36   9586124.18
   19     39889   1934.44   7923454.07
   26     40046   1193.27   4887622.33
   28     40393    496.41   2033296.68
   29     40953    483.60   1980809.31
elapsed:    29  ops:    41992  ops/sec:  1443.13  bytes/sec:  5911064.21
```
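
Since the benchmark uses `io_size 4096`, the reported bytes/sec is simply ops/sec times the 4 KiB I/O size; a quick sanity check (the small difference from the reported value is rounding in the printed ops/sec):

```shell
# bytes/sec is derived from ops/sec and the 4 KiB io_size
awk 'BEGIN { printf "%.2f\n", 1443.13 * 4096 }'   # prints 5911060.48 (reported: 5911064.21)
```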