Installing Ceph Octopus with cephadm (the new deployment method)

Posted 2021-7-19 10:56:03

Ceph cluster base package installation

Steps:
2-1. Install and configure the remaining base packages: pip, deltarpm, ceph-common
2-2. Add firewall rules

2-1 Install and configure the remaining base packages: pip, deltarpm, ceph-common

1. sed -i 's/^SELINUX\=.*/SELINUX=disabled/g' /etc/selinux/config
2. yum install -y python3 epel-release ceph-mgr-dashboard ceph-common
   pip3 install --upgrade pip
3. yum install -y snappy leveldb gdisk python3-ceph-argparse python3-flask gperftools-libs
4. yum install -y ceph

2-2 Add firewall rules

bash> firewall-cmd --zone=public --add-service=ceph-mon --permanent
bash> firewall-cmd --zone=public --add-service=ceph --permanent
bash> firewall-cmd --zone=public --add-service=ntp --permanent
bash> firewall-cmd --reload

Appendix: ports used by Ceph
  Ceph Monitor (ceph-mon): 3300, 6789 (TCP)
  Ceph Manager (ceph-mgr): 6800, 6801, plus a configurable web port (TCP)
  Ceph OSD (ceph-osd): 6800-7300 (TCP)
At this point, all of the base dependencies for the Ceph cluster are installed.
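To confirm the rules took effect, you can list the active firewalld services and check that the monitor ports are listening (a minimal sketch; the ceph/ceph-mon service definitions and the ss output format may vary slightly by distribution):

bash> firewall-cmd --zone=public --list-services
bash> ss -tlnp | grep -E '3300|6789'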

Cephadm installs and manages a distributed Ceph cluster using podman containers and systemd, and integrates tightly with the CLI and the dashboard GUI.
• cephadm only supports Octopus v15.2.0 or newer.
• cephadm is fully integrated with the new orchestration API and fully supports the new CLI and dashboard features for managing cluster deployment.
• cephadm requires container support (podman or docker) and Python 3.
• Time must be synchronized across nodes.

CentOS 8 is used here to install Ceph.
  t- r5 I9 ]8 f8 v配置hosts解析cat >> /etc/hosts <<EOF192.168.8.65 node1192.168.8.66 node2192.168.8.67 node3EOF关闭selinux
6 J- y- G1 u1 z( p9 j, U# tsetenforce 0 && sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config设置主机名hostnamectl set-hostname node1安装podmandnf install -y podman9 t4 @& {! D) b) X( q9 Q" L# h
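The same host preparation has to be done on every node. A hedged sketch of pushing it out from node1, assuming root SSH access to node2 and node3 is already in place (adjust hostnames to your environment):

for h in node2 node3; do
  scp /etc/hosts root@$h:/etc/hosts
  ssh root@$h "setenforce 0; sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config; dnf install -y podman"
done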
Install cephadm
The cephadm command can:
1. Bootstrap a new cluster
2. Launch a containerized shell with a working Ceph CLI
3. Help debug containerized Ceph daemons

The following steps only need to be run on one node.
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x cephadm

Install cephadm:
./cephadm add-repo --release octopus
./cephadm install
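As a quick sanity check (a minimal sketch), confirm that cephadm is now on the PATH and can report its version:

which cephadm
cephadm version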
To bootstrap the cluster, first create the /etc/ceph directory:
mkdir -p /etc/ceph

Then run the cephadm bootstrap command:
cephadm bootstrap --mon-ip 192.168.8.65
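cephadm bootstrap accepts additional flags that are often useful; a hedged sketch (the dashboard user and password values are only illustrative, and flag availability depends on your cephadm version):

cephadm bootstrap --mon-ip 192.168.8.65 \
    --initial-dashboard-user admin \
    --initial-dashboard-password 'ChangeMe123'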
This command will:
Create a monitor and manager daemon for the new cluster on the local host.
Generate a new SSH key for the Ceph cluster and add it to the root user's /root/.ssh/authorized_keys file.
Save a minimal configuration file needed to communicate with the new cluster to /etc/ceph/ceph.conf.
Write a copy of the client.admin administrative (privileged!) key to /etc/ceph/ceph.client.admin.keyring.
Write a copy of the public key to /etc/ceph/ceph.pub.
After the installation finishes, a dashboard UI is available.

After the command completes, you can check that ceph.conf has been written.
Enable the Ceph CLI
The cephadm shell command launches a bash shell in a container with all the Ceph packages installed. By default, if configuration and keyring files are found in /etc/ceph on the host, they are passed into the container environment so that the shell is fully functional.
cephadm shell
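You can also run a single command without entering the interactive shell, for example:

cephadm shell -- ceph -s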
You can also install the package containing all the ceph commands on the node, including ceph, rbd and mount.ceph (for mounting CephFS file systems):
cephadm add-repo --release octopus
cephadm install ceph-common

The installation can be slow; you can manually switch the repo to a local mirror:
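A hedged sketch, assuming the repo file written by cephadm add-repo is /etc/yum.repos.d/ceph.repo and you want to point it at the Aliyun mirror used later in this thread:

sed -i 's#download.ceph.com#mirrors.aliyun.com/ceph#g' /etc/yum.repos.d/ceph.repo
yum clean all && yum makecache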
Add hosts to the cluster
Add the cluster's public key to the new hosts:
ssh-copy-id -f -i /etc/ceph/ceph.pub node2
ssh-copy-id -f -i /etc/ceph/ceph.pub node3
Tell Ceph that the new nodes are part of the cluster:
[root@localhost ~]# ceph orch host add node2
Added host 'node2'
[root@localhost ~]# ceph orch host add node3
Added host 'node3'

Adding hosts automatically scales out the mon and mgr daemons.
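To watch the mon and mgr daemons spread to the new hosts, you can list the orchestrator services and daemons:

ceph orch ls
ceph orch ps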


Deploy additional monitors (optional)
A typical Ceph cluster has three or five mon daemons spread across different hosts. If there are five or more nodes in the cluster, deploying five mons is recommended.
When Ceph knows which IP subnet the mons should use, it can automatically deploy and scale mons as the cluster grows (or shrinks). By default, Ceph assumes the other mons use the same subnet as the first mon's IP.
In the single-subnet case, adding hosts to the cluster will add at most 5 mons by default. If there is a specific IP subnet for the mons, you can configure it in CIDR format:

ceph config set mon public_network 10.1.2.0/24

cephadm will only deploy mon daemons on hosts with an IP in the configured subnet. To change the default number of mons for that subnet, run:
ceph orch apply mon *<number-of-monitors>*
To deploy mons on a specific set of hosts, run:
ceph orch apply mon *<host1,host2,host3,...>*
To see the current hosts and labels, run:

[root@node1 ~]# ceph orch host ls
HOST   ADDR   LABELS  STATUS
node1  node1
node2  node2
node3  node3
9 v/ Z  v" u2 X( C, m: p! \如果要禁用自动mon部署,执行以下命令:
6 G& g0 v' _' o# i' N: D# l, Kceph orch apply mon --unmanaged/ K+ d5 ~* i4 `* o
要在不同网络中添加mon执行以下命令:7 \: w2 s2 Y* j: _% i
ceph orch apply mon --unmanaged
+ ~6 R! G/ k4 ?9 F. t; Z! h! Rceph orch daemon add mon node2:192.168.8.66
! N! `; _! ~4 }! y% xceph orch daemon add mon node2:192.168.17.0/24/ c. t& |! q: e; f  i, H2 g

2 B+ j1 G; h- r9 s! X$ n如果要添加mon到多个主机,也可以用以下命令:% `: p& W/ t; B
ceph orch apply mon "host1,host2,host3"3 E9 Y: S6 W: \  o4 q& l1 B
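A count and a host list can also be combined into one placement, the same way the MDS example later in this post does; a minimal sketch:

ceph orch apply mon --placement="3 node1 node2 node3"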
Deploy OSDs
You can list the storage devices in the cluster with:
ceph orch device ls

A storage device is considered available if all of the following conditions are met:
The device must have no partitions.
The device must not have any LVM state.
The device must not be mounted.
The device must not contain a file system.
The device must not contain a Ceph BlueStore OSD.
The device must be larger than 5 GB.
(To reclaim a disk that fails these checks because of previous use, see the zap sketch right after this list.)
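If a disk is rejected only because it was used before (old partitions, LVM metadata or a previous OSD), it can be wiped so that it becomes available again. A hedged sketch; this destroys everything on the device:

ceph orch device zap node2 /dev/sdb --force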
Ceph refuses to provision an OSD on a device that is not available. To make sure the OSDs could be added, I attached a new disk to each node beforehand. There are several ways to create new OSDs:

Automatically create OSDs on all unused devices:
[root@node1 ~]# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...
You can see that OSDs have been created on the three disks.

Create an OSD from a specific device on a specific host:
ceph orch daemon add osd host1:/dev/sdb
1 S; k' Z" K/ ]部署MDS
5 ]) U1 |% r8 `使用 CephFS 文件系统需要一个或多个 MDS 守护程序。如果使用新的ceph fs卷接口来创建新文件系统,则会自动创建这些文件 部署元数据服务器:
6 R+ R$ E1 F0 ^! f" U# w  @+ [ceph orch apply mds *<fs-name>* --placement="*<num-daemons>* [*<host1>* ...]"
1 v" R+ Q* u5 q1 {" ]5 SCephFS 需要两个 Pools,cephfs-data 和 cephfs-metadata,分别存储文件数据和文件元数据1 `% _, W3 a0 ^' |2 L1 k
[root@node1 ~]# ceph osd pool create cephfs_data 64 64# z0 _# G3 c# [# l, S
[root@node1 ~]# ceph osd pool create cephfs_metadata 64 64" v% T( x: [' r9 n9 M5 n7 d
创建一个 CephFS, 名字为 cephfs- ~" K0 U) [9 _( v. Z
[root@node1 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
* d+ n5 ~" x/ g! w[root@node1 ~]# ceph orch apply mds cephfs --placement="3 node1 node2 node3"
+ U8 q: B7 J  R7 j, M- U% kScheduled mds.cephfs update...
1 K5 g* t3 Z+ S; d( T9 w8 [: S3 C4 B# \/ o% ?
验证至少有一个MDS已经进入active状态,默认情况下,ceph只支持一个活跃的MDS,其他的作为备用MDS
5 b$ t$ F: D$ ~+ i' E* rceph fs status cephfs% Z. X- W" X; R$ Z, ]. l& _
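Once an MDS is active, the file system can be mounted with the kernel client. A minimal sketch, assuming the admin key has been extracted into the hypothetical file /etc/ceph/admin.secret:

mkdir -p /mnt/cephfs
mount -t ceph 192.168.8.65:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret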
Deploy RGW
Cephadm deploys radosgw as a collection of daemons that manage a particular realm and zone. RGW is short for the RADOS Gateway, Ceph's object storage gateway service: a FastCGI service built on top of the librados interface that provides a RESTful API for object storage data access and management.

With cephadm, the radosgw daemons are configured via the mon configuration database rather than via ceph.conf or the command line. If that configuration is not yet in place, the radosgw daemons start with default settings (binding to port 80 by default). To deploy 3 rgw daemons serving the myorg realm and the cn-east-1 zone on node1, node2 and node3 (the realm and zone are created automatically before the rgw daemons are deployed if they do not already exist):
ceph orch apply rgw myorg cn-east-1 --placement="3 node1 node2 node3"

Alternatively, you can create the realm, zonegroup and zone manually with radosgw-admin:
radosgw-admin realm create --rgw-realm=myorg --default
radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=cn-east-1 --master --default
radosgw-admin period update --rgw-realm=myorg --commit
You can see that the RGW has been created.
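A quick way to confirm the daemons are running (a minimal sketch; daemon names and the node serving port 80 will differ in your cluster):

ceph orch ps | grep rgw
curl http://node1:80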
cephadm also installs Prometheus, Grafana and the other monitoring components automatically. The default Grafana username/password is admin/admin, and the Prometheus dashboards for monitoring Ceph are already imported.
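To find the dashboard, Prometheus and Grafana endpoints that cephadm configured, you can ask the mgr (a minimal sketch):

ceph mgr services
ceph dashboard get-grafana-api-url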

OP | Posted 2021-7-19 10:56:11
Install the Ceph repo

[root@controller ~]# yum install centos-release-ceph-octopus.noarch -y
[root@computer ~]# yum install centos-release-ceph-octopus.noarch -y
Install the Ceph components

[root@controller ~]# yum install cephadm -y
[root@computer ~]# yum install ceph -y
Install libvirt on the computer node

[root@computer ~]# yum install libvirt -y
Deploy the Ceph cluster
Create the cluster

[root@controller ~]# mkdir -p /etc/ceph
[root@controller ~]# cd /etc/ceph/
[root@controller ceph]# cephadm bootstrap --mon-ip 192.168.29.148
[root@controller ceph]# ceph status
[root@controller ceph]# cephadm install ceph-common
[root@controller ceph]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@computer
Modify the configuration

[root@controller ceph]# ceph config set mon public_network 192.168.29.0/24
Add hosts

[root@controller ceph]# ceph orch host add computer
[root@controller ceph]# ceph orch host ls
Set up the cluster monitors

[root@controller ceph]# ceph orch host label add controller mon
[root@controller ceph]# ceph orch host label add computer mon
[root@controller ceph]# ceph orch apply mon label:mon
[root@controller ceph]# ceph orch daemon add mon computer:192.168.29.149
Create OSDs

[root@controller ceph]# ceph orch daemon add osd controller:/dev/nvme0n2
[root@controller ceph]# ceph orch daemon add osd computer:/dev/nvme0n3
Check the cluster status

[root@controller ceph]# ceph -s
Check the cluster capacity

[root@controller ceph]# ceph df
Create pools

[root@controller ceph]# ceph osd pool create volumes 64
[root@controller ceph]# ceph osd pool create vms 64
# Enable an application tag on the pools
[root@controller ceph]# ceph osd pool application enable vms mon
[root@controller ceph]# ceph osd pool application enable volumes mon
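The volumes and vms pools are typically consumed by OpenStack (Cinder/Nova) over RBD. A hedged sketch of creating a client key for them (the client name and caps are illustrative; the OpenStack-side configuration is out of scope here):

ceph auth get-or-create client.cinder \
    mon 'profile rbd' \
    osd 'profile rbd pool=volumes, profile rbd pool=vms'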
Check mon, osd and pool status

[root@controller ceph]# ceph mon stat
[root@controller ceph]# ceph osd status
[root@controller ceph]# ceph osd lspools
Check the pools' contents

[root@controller ~]# rbd ls vms
[root@controller ~]# rbd ls volumes
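Both pools are empty at this point, so rbd ls prints nothing. A minimal sketch to confirm RBD works end to end (the image name is arbitrary):

rbd create volumes/test-img --size 1G
rbd info volumes/test-img
rbd rm volumes/test-img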
OP | Posted 2021-7-19 11:34:30
RUNNING THE BOOTSTRAP COMMAND
Run the ceph bootstrap command:

cephadm bootstrap --mon-ip *<mon-ip>*

This command will:

Create a monitor and manager daemon for the new cluster on the local host.

Generate a new SSH key for the Ceph cluster and add it to the root user's /root/.ssh/authorized_keys file.

Write a copy of the public key to /etc/ceph/ceph.pub.

Write a minimal configuration file to /etc/ceph/ceph.conf. This file is needed to communicate with the new cluster.

Write a copy of the client.admin administrative (privileged!) secret key to /etc/ceph/ceph.client.admin.keyring.

Add the _admin label to the bootstrap host. By default, any host with this label will (also) get a copy of /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring.

FURTHER INFORMATION ABOUT CEPHADM BOOTSTRAP
The default bootstrap behavior will work for most users. But if you'd like immediately to know more about cephadm bootstrap, read the list below.

Also, you can run cephadm bootstrap -h to see all of cephadm's available options.

By default, Ceph daemons send their log output to stdout/stderr, which is picked up by the container runtime (docker or podman) and (on most systems) sent to journald. If you want Ceph to write traditional log files to /var/log/ceph/$fsid, use the --log-to-file option during bootstrap.

Larger Ceph clusters perform better when (external to the Ceph cluster) public network traffic is separated from (internal to the Ceph cluster) cluster traffic. The internal cluster traffic handles replication, recovery, and heartbeats between OSD daemons. You can define the cluster network by supplying the --cluster-network option to the bootstrap subcommand. This parameter must define a subnet in CIDR notation (for example 10.90.90.0/24 or fe80::/64).
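For example, a minimal sketch of bootstrapping with a separate cluster network (addresses are illustrative):

cephadm bootstrap --mon-ip 10.1.2.10 --cluster-network 10.90.90.0/24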
cephadm bootstrap writes to /etc/ceph the files needed to access the new cluster. This central location makes it possible for Ceph packages installed on the host (e.g., packages that give access to the cephadm command line interface) to find these files.

Daemon containers deployed with cephadm, however, do not need /etc/ceph at all. Use the --output-dir *<directory>* option to put them in a different directory (for example, .). This may help avoid conflicts with an existing Ceph configuration (cephadm or otherwise) on the same host.

You can pass any initial Ceph configuration options to the new cluster by putting them in a standard ini-style configuration file and using the --config *<config-file>* option. For example:

cat <<EOF > initial-ceph.conf
[global]
osd crush chooseleaf type = 0
EOF
./cephadm bootstrap --config initial-ceph.conf ...

The --ssh-user *<user>* option makes it possible to choose which ssh user cephadm will use to connect to hosts. The associated ssh key will be added to /home/*<user>*/.ssh/authorized_keys. The user that you designate with this option must have passwordless sudo access.
If you are using a container on an authenticated registry that requires login, you may add the three arguments:

--registry-url <url of registry>

--registry-username <username of account on registry>

--registry-password <password of account on registry>

OR

--registry-json <json file with login info>

Cephadm will attempt to log in to this registry so it can pull your container and then store the login info in its config database. Other hosts added to the cluster will then also be able to make use of the authenticated registry.
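A hedged sketch of the --registry-json variant (file name and credentials are illustrative):

cat <<EOF > registry.json
{"url": "registry.example.com", "username": "myuser", "password": "mypass"}
EOF
cephadm bootstrap --mon-ip *<mon-ip>* --registry-json registry.json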
ENABLE CEPH CLI
Cephadm does not require any Ceph packages to be installed on the host. However, we recommend enabling easy access to the ceph command. There are several ways to do this:

The cephadm shell command launches a bash shell in a container with all of the Ceph packages installed. By default, if configuration and keyring files are found in /etc/ceph on the host, they are passed into the container environment so that the shell is fully functional. Note that when executed on a MON host, cephadm shell will infer the config from the MON container instead of using the default configuration. If --mount <path> is given, then the host <path> (file or directory) will appear under /mnt inside the container:

cephadm shell

To execute ceph commands, you can also run commands like this:

cephadm shell -- ceph -s

You can install the ceph-common package, which contains all of the ceph commands, including ceph, rbd, mount.ceph (for mounting CephFS file systems), etc.:

cephadm add-repo --release pacific
cephadm install ceph-common

Confirm that the ceph command is accessible with:

ceph -v

Confirm that the ceph command can connect to the cluster and also its status with:

ceph status

ADDING HOSTS
Next, add all hosts to the cluster by following Adding Hosts.

By default, a ceph.conf file and a copy of the client.admin keyring are maintained in /etc/ceph on all hosts with the _admin label, which is initially applied only to the bootstrap host. We usually recommend that one or more other hosts be given the _admin label so that the Ceph CLI (e.g., via cephadm shell) is easily accessible on multiple hosts. To add the _admin label to additional host(s):

ceph orch host label add *<host>* _admin

ADDING ADDITIONAL MONS
A typical Ceph cluster has three or five monitor daemons spread across different hosts. We recommend deploying five monitors if there are five or more nodes in your cluster.

Please follow Deploying additional monitors to deploy additional MONs.

ADDING STORAGE
To add storage to the cluster, either tell Ceph to consume any available and unused device:

ceph orch apply osd --all-available-devices

Or see Deploy OSDs for more detailed instructions.
OP | Posted 2021-7-19 11:37:35
cat <<EOF > initial-ceph.conf
[global]
osd crush chooseleaf type = 0
EOF
./cephadm bootstrap --config initial-ceph.conf --mon-ip *<mon-ip>*

cephadm shell
OP | Posted 2021-7-19 11:38:48
To deploy an iSCSI gateway, create a yaml file containing a service specification for iscsi:

service_type: iscsi
service_id: iscsi
placement:
  hosts:
    - host1
    - host2
spec:
  pool: mypool  # RADOS pool where ceph-iscsi config data is stored.
  trusted_ip_list: "IP_ADDRESS_1,IP_ADDRESS_2"
  api_port: ... # optional
  api_user: ... # optional
  api_password: ... # optional
  api_secure: true/false # optional
  ssl_cert: | # optional
    ...
  ssl_key: | # optional
    ...

For example:

service_type: iscsi
service_id: iscsi
placement:
  hosts:
  - [...]
spec:
  pool: iscsi_pool
  trusted_ip_list: "IP_ADDRESS_1,IP_ADDRESS_2,IP_ADDRESS_3,..."
  api_user: API_USERNAME
  api_password: API_PASSWORD
  api_secure: true
  ssl_cert: |
    -----BEGIN CERTIFICATE-----
    MIIDtTCCAp2gAwIBAgIYMC4xNzc1NDQxNjEzMzc2MjMyXzxvQ7EcMA0GCSqGSIb3
    DQEBCwUAMG0xCzAJBgNVBAYTAlVTMQ0wCwYDVQQIDARVdGFoMRcwFQYDVQQHDA5T
    [...]
    -----END CERTIFICATE-----
  ssl_key: |
    -----BEGIN PRIVATE KEY-----
    MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC5jdYbjtNTAKW4
    /CwQr/7wOiLGzVxChn3mmCIF3DwbL/qvTFTX2d8bDf6LjGwLYloXHscRfxszX/4h
    [...]
    -----END PRIVATE KEY-----

The specification can then be applied using:

ceph orch apply -i iscsi.yaml
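Note that the RADOS pool named in the spec has to exist before the service is applied. A hedged sketch of creating it and then checking the result (the pool name and PG count are illustrative):

ceph osd pool create iscsi_pool 32
ceph osd pool application enable iscsi_pool rbd
# after ceph orch apply -i iscsi.yaml:
ceph orch ls | grep iscsi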
OP | Posted 2021-7-19 11:44:03
ceph orch apply mon --unmanaged

ceph orch daemon add mon newhost1:10.1.2.123

ceph orch daemon add mon newhost2:10.1.2.0/24

ceph orch apply mon "host1,host2,host3"
OP | Posted 2021-7-19 13:50:11
Environment overview
IP address      Spec      Hostname     Ceph version
10.15.253.161   c2m8h300  cephnode01   Octopus 15.2.4
10.15.253.193   c2m8h300  cephnode02   Octopus 15.2.4
10.15.253.225   c2m8h300  cephnode03   Octopus 15.2.4

# Linux distribution
[root@cephnode01 ~]# cat /etc/redhat-release
CentOS Linux release 8.2.2004 (Core)
[root@cephnode01 ~]# uname -r
4.18.0-193.14.2.el8_2.x86_64
# Network design: separating the networks is recommended
10.15.253.0/24   # Public Network
172.31.253.0/24  # Cluster Network
# Besides the system disk, each Ceph node should have at least two identical large-capacity disks attached; they must not be partitioned
[root@cephnode01 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   20G  0 disk
├─sda1   8:1    0  200M  0 part /boot
├─sda2   8:2    0    1G  0 part [SWAP]
└─sda3   8:3    0 18.8G  0 part /
sdb      8:16   0   20G  0 disk
2.1.1 Ceph installation and version selection
https://docs.ceph.com/docs/master/install/
ceph-deploy is a tool for quickly deploying a cluster; the community no longer actively maintains it. It only supports Ceph releases before Nautilus and does not support RHEL 8, CentOS 8 or newer operating systems.
The environment here is CentOS 8, so the octopus release of Ceph has to be deployed with the cephadm tool.
2.1.2 Base environment preparation
Run this on all Ceph nodes; cephnode01 is shown as the example.
#(1) Stop the firewall:
systemctl stop firewalld
systemctl disable firewalld
#(2) Disable selinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
#(3) On cephnode01, set up passwordless SSH to cephnode02 and cephnode03
dnf install sshpass -y
ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
for ip in 161 193 225 ;do sshpass -pZxzn@2020 ssh-copy-id -o StrictHostKeyChecking=no 10.15.253.$ip ;done
#(4) Add the hostnames on cephnode01 (skip if already configured)
cat >>/etc/hosts <<EOF
10.15.253.161 cephnode01
10.15.253.193 cephnode02
10.15.253.225 cephnode03
EOF
for ip in 193 225 ;do scp -rp /etc/hosts root@10.15.253.$ip:/etc/hosts ;done
#(5) Raise the maximum number of open files
echo "ulimit -SHn 102400" >> /etc/rc.local
cat >> /etc/security/limits.conf << EOF
* soft nofile 65535
* hard nofile 65535
EOF
#(6) Kernel parameter tuning
echo 'net.ipv4.ip_forward = 1' >>/etc/sysctl.conf
echo 'kernel.pid_max = 4194303' >>/etc/sysctl.conf
# Use swap only when free memory drops below this threshold
echo "vm.swappiness = 0" >>/etc/sysctl.conf
sysctl -p
#(7) Synchronize time and set the time zone (skip if already configured)
Install chrony for time synchronization and sync against the cephnode01 node:
yum install chrony -y
vim /etc/chrony.conf
server cephnode01 iburst
---
systemctl restart chronyd.service
systemctl enable chronyd.service
chronyc sources
#(8) read_ahead: improve disk reads by prefetching data into RAM
echo "8192" > /sys/block/sda/queue/read_ahead_kb
#(9) I/O scheduler: use noop for SSDs and deadline for SATA/SAS disks
# (on EL8 blk-mq kernels the equivalent scheduler names are "none" and "mq-deadline")
# https://blog.csdn.net/shipeng1022/article/details/78604910
echo "deadline" >/sys/block/sda/queue/scheduler
echo "deadline" >/sys/block/sdb/queue/scheduler
#echo "noop" >/sys/block/sd[x]/queue/scheduler
3. Add the Octopus yum repo

cat >>/etc/yum.repos.d/ceph.repo <<EOF
[Ceph]
name=Ceph packages for \$basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-octopus/el8/\$basearch
enabled=1
gpgcheck=0
type=rpm-md
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-octopus/el8/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-octopus/el8/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
EOF
yum clean all && yum makecache
# Install base utilities
yum install net-tools wget vim bash-completion lrzsz unzip zip -y
4. Deploying with the cephadm tool
https://docs.ceph.com/docs/master/cephadm/install/
Deployment with the cephadm tool is supported from version 15 on; ceph-deploy is supported up to version 14.
4.1 Fetch the latest cephadm and make it executable
Do this on the cephnode01 node:

[root@cephnode01 ~]# curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
[root@cephnode01 ~]# chmod +x cephadm
[root@cephnode01 ~]# ll
-rwxr-xr-x. 1 root root 184653 Sep 10 12:01 cephadm
4.2 Use cephadm to fetch and install the latest octopus release
The yum repo has already been pointed at a domestic mirror, so there is no need to add the repo again as in the official documentation.

# Install on all ceph nodes
[root@cephnode01 ~]# dnf install python3 podman -y
[root@cephnode01 ~]# ./cephadm install
...
[root@cephnode01 ~]# which cephadm
/usr/sbin/cephadm
5. Create a new Ceph cluster
5.1 Designate the admin node
Bootstrap on a network reachable by any host that will access the Ceph cluster, specify the mon IP, and have the generated configuration files written into /etc/ceph:

[root@cephnode01 ~]# mkdir -p /etc/ceph
[root@cephnode01 ~]# cephadm bootstrap --mon-ip 10.15.253.161
...
        URL: https://cephnode01:8443/
        User: admin
    Password: 6v7xazcbwk
...
You can log in at https://cephnode01:8443/ to verify; the password must be changed on first login.
5.2 Make the ceph command available locally
Cephadm does not require any Ceph packages to be installed on the host, but enabling easy access to the ceph command is recommended.
The cephadm shell command launches a bash shell in a container with all the Ceph packages installed. By default, if configuration and keyring files are found in /etc/ceph on the host, they are passed into the container environment so that everything works normally.

[root@cephnode01 ~]# cephadm shell
[root@cephnode01 ~]# alias ceph='cephadm shell -- ceph'
[root@cephnode01 ~]# exit
# Install the ceph-common package, which includes the ceph, rbd and mount.ceph commands
[root@cephnode01 ~]# cephadm install ceph-common
# Check the version
[root@cephnode01 ~]# ceph -v
ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)
Check the status:

[root@cephnode01 ~]# ceph status
  cluster:
    id:     8a4fdb4e-f31c-11ea-be33-000c29358c7a
    health: HEALTH_WARN
            Reduced data availability: 1 pg inactive
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph135 (age 14m)
    mgr: ceph03.oesega(active, since 10m)
    osd: 0 osds: 0 up (since 31m), 0 in

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             1 unknown
5.3 Add new nodes to the Ceph cluster

[root@cephnode01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@cephnode02
[root@cephnode01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@cephnode03
[root@cephnode01 ~]# ceph orch host add cephnode02
Added host 'cephnode02'
[root@cephnode01 ~]# ceph orch host add cephnode03
Added host 'cephnode03'
5.4 Deploy additional monitors
Select the nodes that should run a mon; here all of them are selected.

[root@cephnode01 ~]# ceph orch host label add cephnode01 mon
Added label mon to host cephnode01
[root@cephnode01 ~]# ceph orch host label add cephnode02 mon
Added label mon to host cephnode02
[root@cephnode01 ~]# ceph orch host label add cephnode03 mon
Added label mon to host cephnode03
[root@cephnode01 ~]# ceph orch host ls
HOST        ADDR        LABELS  STATUS
cephnode01  cephnode01  mon
cephnode02  cephnode02  mon
cephnode03  cephnode03  mon
Tell cephadm to deploy mons according to the label; this step waits while each node pulls the image and starts the container.

[root@cephnode01 ~]# ceph orch apply mon label:mon
To verify that the deployment finished, check the other two nodes:

[root@cephnode02 ~]# podman ps -a
...
[root@cephnode02 ~]# podman images
REPOSITORY                     TAG       IMAGE ID       CREATED         SIZE
docker.io/ceph/ceph            v15       852b28cb10de   3 weeks ago     1 GB
docker.io/prom/node-exporter   v0.18.1   e5a616e4b9cf   15 months ago   24.3 MB
6. Deploy OSDs
6.1 List the usable disks

[root@cephnode01 ~]# ceph orch device ls
HOST    PATH      TYPE   SIZE  DEVICE  AVAIL  REJECT REASONS
ceph01  /dev/sda  hdd   20.0G          False  locked, Insufficient space (<5GB) on vgs, LVM detected
ceph01  /dev/sdb  hdd   20.0G          True
ceph02  /dev/sda  hdd   20.0G          False  Insufficient space (<5GB) on vgs, LVM detected, locked
ceph02  /dev/sdb  hdd   20.0G          True
ceph03  /dev/sda  hdd   20.0G          False  locked, Insufficient space (<5GB) on vgs, LVM detected
ceph03  /dev/sdb  hdd   20.0G          True
6.2 Use all available disks

[root@cephnode01 ~]# ceph orch apply osd --all-available-devices
To add a single disk instead:

[root@cephnode01 ~]# ceph orch daemon add osd cephnode02:/dev/sdc
6.3 Verify the deployment

[root@cephnode01 ~]# ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE    RAW USE  DATA     OMAP     META      AVAIL   %USE  VAR   PGS  STATUS
 0    hdd  0.01949   1.00000  20 GiB  1.0 GiB  3.8 MiB    1 KiB  1024 MiB  19 GiB  5.02  1.00    1      up
 1    hdd  0.01949   1.00000  20 GiB  1.0 GiB  3.8 MiB    1 KiB  1024 MiB  19 GiB  5.02  1.00    1      up
 2    hdd  0.01949   1.00000  20 GiB  1.0 GiB  3.8 MiB    1 KiB  1024 MiB  19 GiB  5.02  1.00    1      up
                       TOTAL  60 GiB  3.0 GiB   11 MiB  4.2 KiB   3.0 GiB  57 GiB  5.02
MIN/MAX VAR: 1.00/1.00  STDDEV: 0
7. Storage deployment
7.1 Deploy CephFS
Deploy the mds service for cephfs, specifying the cluster name and the number of MDS daemons:

[root@cephnode01 ~]# ceph orch apply mds fs-cluster --placement=3

[root@cephnode01 ~]# ceph -s
  cluster:
    id:     8a4fdb4e-f31c-11ea-be33-000c29358c7a
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cephnode01,cephnode02,cephnode03 (age 1m)
    mgr: cephnode01.oesega(active, since 49m), standbys: cephnode02.lphrtb, cephnode03.wkthtb
    mds:  3 up:standby
    osd: 3 osds: 3 up (since 51m), 3 in (since 30m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:     1 active+clean
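The MDS daemons all report up:standby here because no file system exists yet. A hedged sketch of creating one with the newer volumes interface, which creates the data and metadata pools automatically (the name fs-cluster matches the mds service deployed above):

ceph fs volume create fs-cluster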
7.2 Deploy RGW
Create a realm:

[root@cephnode01 ~]# radosgw-admin realm create --rgw-realm=rgw-org --default
{
    "id": "43dc34c0-6b5b-411c-9e23-687a29c8bd00",
    "name": "rgw-org",
    "current_period": "ea3dd54c-2dfe-4180-bf11-4415be6ccafd",
    "epoch": 1
}
1 b& Q: L1 m: v1 ^创建一个zonegroup区域组
7 M; R( i7 P. [, M) T0 Q6 q* V% F1 q7 }$ F
[root@cephnode01 ~]# radosgw-admin zonegroup create --rgw-zonegroup=rgwgroup --master --default
  v) {8 g  h% Z! E) i% U: f{) I: r9 R' J4 i/ M+ h
    "id": "1878ecaa-216b-4c99-ad4e-b72f4fa9193f",
- D* ?+ y# d7 t9 O# ]# {9 x    "name": "rgwgroup",8 D  Y+ F( B1 S7 ?5 z2 k
    "api_name": "rgwgroup",9 t- ^6 M) O: k: R8 v- m) F  d) q
    "is_master": "true",
5 h8 E  _. y- T2 j    "endpoints": [],
/ x, b9 E4 s- `/ u9 }  r    "hostnames": [],6 L  C  ?( e& }( f0 D5 l
    "hostnames_s3website": [],
/ H& L  ^2 a0 @& b5 O" Q    "master_zone": "",
& q& o! W, c( v; |9 I    "zones": [],
( ^; H" g: h% N3 H! m$ X    "placement_targets": [],
/ A" x1 v( P& g3 i' d5 y    "default_placement": "",( u4 B+ [- \6 ~! v5 b3 J! f
    "realm_id": "43dc34c0-6b5b-411c-9e23-687a29c8bd00",0 {' z0 l. n% B- h7 L
    "sync_policy": {
, I. x$ C. C/ \/ S% g        "groups": []
. I4 ?: [( X7 F; S( I' P) H% O0 {+ q    }2 U1 G- V& B- k, T" q/ \# V
}- m. @! o* L7 n3 j
Create a zone:

[root@cephnode01 ~]# radosgw-admin zone create --rgw-zonegroup=rgwgroup --rgw-zone=zone-dc1 --master --default
{
    "id": "fbdc5f83-9022-4675-b98e-39738920bb57",
    "name": "zone-dc1",
    "domain_root": "zone-dc1.rgw.meta:root",
    "control_pool": "zone-dc1.rgw.control",
    "gc_pool": "zone-dc1.rgw.log:gc",
    "lc_pool": "zone-dc1.rgw.log:lc",
    "log_pool": "zone-dc1.rgw.log",
    "intent_log_pool": "zone-dc1.rgw.log:intent",
    "usage_log_pool": "zone-dc1.rgw.log:usage",
    "roles_pool": "zone-dc1.rgw.meta:roles",
    "reshard_pool": "zone-dc1.rgw.log:reshard",
    "user_keys_pool": "zone-dc1.rgw.meta:users.keys",
    "user_email_pool": "zone-dc1.rgw.meta:users.email",
    "user_swift_pool": "zone-dc1.rgw.meta:users.swift",
    "user_uid_pool": "zone-dc1.rgw.meta:users.uid",
    "otp_pool": "zone-dc1.rgw.otp",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "zone-dc1.rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "zone-dc1.rgw.buckets.data"
                    }
                },
                "data_extra_pool": "zone-dc1.rgw.buckets.non-ec",
                "index_type": 0
            }
        }
    ],
    "realm_id": "43dc34c0-6b5b-411c-9e23-687a29c8bd00"
}
Deploy a group of radosgw daemons for the specific realm and zone; here rgw is enabled on only two nodes:

[root@cephnode01 ~]# ceph orch apply rgw rgw-org zone-dc1 --placement="2 cephnode02 cephnode03"
Verify:

[root@cephnode01 ~]# ceph -s
  cluster:
    id:     8a4fdb4e-f31c-11ea-be33-000c29358c7a
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cephnode01,cephnode02,cephnode03 (age 1m)
    mgr: cephnode01.oesega(active, since 49m), standbys: cephnode02.lphrtb, cephnode03.wkthtb
    mds:  3 up:standby
    osd: 3 osds: 3 up (since 51m), 3 in (since 30m)
    rgw: 2 daemons active (rgw-org.zone-dc1.cephnode02.cdgjsi, rgw-org.zone-dc1.cephnode03.nmbbsz)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:     1 active+clean
Enable the dashboard for RGW

# Create an RGW admin user
[root@cephnode01 ~]# radosgw-admin user create --uid=admin --display-name=admin --system
{
    "user_id": "admin",
    "display_name": "admin",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "admin",
            "access_key": "WG9W5O9O11TGGOLU6OD2",
            "secret_key": "h2DfrWvlS4NMkdgGin4g6OB6Z50F1VNmhRCRQo3W"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "system": "true",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
Set the dashboard credentials:

[root@cephnode01 ~]# ceph dashboard set-rgw-api-access-key WG9W5O9O11TGGOLU6OD2
Option RGW_API_ACCESS_KEY updated
[root@cephnode01 ~]# ceph dashboard set-rgw-api-secret-key h2DfrWvlS4NMkdgGin4g6OB6Z50F1VNmhRCRQo3W
Option RGW_API_SECRET_KEY updated
Disable certificate verification, use HTTP access, and point the dashboard at the admin account:

ceph dashboard set-rgw-api-ssl-verify False
ceph dashboard set-rgw-api-scheme http
ceph dashboard set-rgw-api-host 10.15.253.225
ceph dashboard set-rgw-api-port 80
ceph dashboard set-rgw-api-user-id admin
Restart the RGW:

ceph orch restart rgw