In the previous post, the package installation and network configuration needed for the Ceph deployment were completed.
- https://greencloud33.tistory.com/44?category=950924
Now we create the Ceph cluster and proceed with the deployment.
Of course, as prepared in the previous post, SSH access from the deploy server to each of the Ceph servers must already be working.
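Before continuing, the SSH connectivity can be spot-checked from the deploy node. A minimal sketch, using the host names from this series:
root@deploy:~# for host in wglee-ceph-001 wglee-ceph-002 wglee-ceph-003; do ssh "$host" hostname; done
Each host name should be printed back without a password prompt.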
Creating a directory for the Ceph deployment
On the deploy node, create a directory to hold the files needed to deploy the Ceph cluster. The various keyrings and conf files will live here; when deploying from the deploy server, all work is done in this directory.
root@deploy:~# mkdir /home/ceph-cluster
Creating the cluster
From the deploy server, create the cluster using the Ceph host names.
root@deploy:/home/ceph-cluster# ceph-deploy new wglee-ceph-001 wglee-ceph-002 wglee-ceph-003
Back up the ceph.conf that the command above generates automatically, then edit it.
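A minimal sketch of that backup step (the .bak name matches the ceph.conf.bak that shows up in the directory listing later):
root@deploy:/home/ceph-cluster# cp ceph.conf ceph.conf.bak
After the copy, the public_network and cluster_network lines are added under [global].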
public_network carries client <-> OSD traffic, while cluster_network carries PG replication traffic between OSDs.
To separate the traffic by purpose, public_network is set to the Tenant (& Data) network range of the Ceph servers,
and cluster_network is set to their Storage network range.
root@deploy:/home/ceph-cluster# cat ceph.conf
[global]
fsid = 4ec23dde-416c-4a0b-8c6d-6d10a960b090
mon_initial_members = wglee-ceph-001, wglee-ceph-002, wglee-ceph-003
mon_host = 20.20.10.50,20.20.10.51,20.20.10.52
public_network = 20.20.10.0/24
cluster_network = 20.20.20.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
Install the Ceph packages on the target Ceph nodes with ceph-deploy.
root@deploy:/home/ceph-cluster# ceph-deploy install wglee-ceph-001 wglee-ceph-002 wglee-ceph-003
Deploy the monitor daemons.
root@deploy:/home/ceph-cluster# ceph-deploy mon create-initial
You can now see that the various keyrings needed for authentication have been generated.
root@deploy:/home/ceph-cluster# ls
ceph-deploy-ceph.log ceph.bootstrap-osd.keyring ceph.conf
ceph.bootstrap-mds.keyring ceph.bootstrap-rgw.keyring ceph.conf.bak
ceph.bootstrap-mgr.keyring ceph.client.admin.keyring ceph.mon.keyring
Copy the admin keyring and conf file to the Ceph servers so that the ceph CLI can be used on them regardless of the working directory.
root@deploy:/home/ceph-cluster# ceph-deploy admin wglee-ceph-001 wglee-ceph-002 wglee-ceph-003
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy admin wglee-ceph-001 wglee-ceph-002 wglee-ceph-003
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['wglee-ceph-001', 'wglee-ceph-002', 'wglee-ceph-003']
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf object at 0x7f95d74ee9a0>
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] func : <function admin at 0x7f95d7f02040>
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to wglee-ceph-001
[wglee-ceph-001][DEBUG ] connection detected need for sudo
[wglee-ceph-001][DEBUG ] connected to host: wglee-ceph-001
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to wglee-ceph-002
[wglee-ceph-002][DEBUG ] connection detected need for sudo
[wglee-ceph-002][DEBUG ] connected to host: wglee-ceph-002
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to wglee-ceph-003
[wglee-ceph-003][DEBUG ] connection detected need for sudo
[wglee-ceph-003][DEBUG ] connected to host: wglee-ceph-003
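To confirm the push, one can list /etc/ceph on any of the nodes; it should now hold ceph.conf and ceph.client.admin.keyring. A sketch:
root@deploy:/home/ceph-cluster# ssh wglee-ceph-001 ls /etc/ceph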
Deploy the manager daemon to wglee-ceph-001.
This step applies to Luminous and later releases; I am running Nautilus, so I proceed.
root@deploy:/home/ceph-cluster# ceph-deploy mgr create wglee-ceph-001
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create wglee-ceph-001
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf object at 0x7f111771c970>
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] func : <function mgr at 0x7f1117763ca0>
[ceph_deploy.cli][INFO ] mgr : [('wglee-ceph-001', 'wglee-ceph-001')]
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts wglee-ceph-001:wglee-ceph-001
[wglee-ceph-001][DEBUG ] connection detected need for sudo
[wglee-ceph-001][DEBUG ] connected to host: wglee-ceph-001
[ceph_deploy.mgr][INFO ] Distro info: ubuntu 20.04 focal
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to wglee-ceph-001
[wglee-ceph-001][WARNIN] mgr keyring does not exist yet, creating one
[wglee-ceph-001][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.wglee-ceph-001 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-wglee-ceph-001/keyring
[wglee-ceph-001][INFO ] Running command: sudo systemctl enable ceph-mgr@wglee-ceph-001
[wglee-ceph-001][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@wglee-ceph-001.service → /lib/systemd/system/ceph-mgr@.service.
[wglee-ceph-001][INFO ] Running command: sudo systemctl start ceph-mgr@wglee-ceph-001
[wglee-ceph-001][INFO ] Running command: sudo systemctl enable ceph.target
At this point, running ceph -s shows that the Ceph cluster has been created.
No OSDs have been added yet.
root@wglee-ceph-001:/etc/ceph# ceph -s
cluster:
id: 4ec23dde-416c-4a0b-8c6d-6d10a960b090
health: HEALTH_WARN
mons are allowing insecure global_id reclaim
OSD count 0 < osd_pool_default_size 3
services:
mon: 3 daemons, quorum wglee-ceph-001,wglee-ceph-002,wglee-ceph-003 (age 23m)
mgr: wglee-ceph-001(active, since 9m)
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
Adding OSDs
Add the /dev/vdb, /dev/vdc, and /dev/vdd disks of each Ceph server as individual OSDs.
root@deploy:/home/ceph-cluster# ceph-deploy osd create --data /dev/vdb wglee-ceph-001
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create --data /dev/vdb wglee-ceph-001
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf object at 0x7f54ad9d0880>
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] func : <function osd at 0x7f54ada6eaf0>
[ceph_deploy.cli][INFO ] data : /dev/vdb
[ceph_deploy.cli][INFO ] journal : None
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] filestore : None
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] block_db : None
[ceph_deploy.cli][INFO ] block_wal : None
[ceph_deploy.cli][INFO ] host : wglee-ceph-001
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdb
[wglee-ceph-001][DEBUG ] connection detected need for sudo
[wglee-ceph-001][DEBUG ] connected to host: wglee-ceph-001
[ceph_deploy.osd][INFO ] Distro info: ubuntu 20.04 focal
[ceph_deploy.osd][DEBUG ] Deploying osd to wglee-ceph-001
[wglee-ceph-001][WARNIN] osd keyring does not exist yet, creating one
[wglee-ceph-001][INFO ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[wglee-ceph-001][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[wglee-ceph-001][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 6b67b2d6-841f-4f46-953e-d159ef6852cb
[wglee-ceph-001][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-0dac5fa9-3772-4b44-823d-8a35e5e200b5 /dev/vdb
[wglee-ceph-001][WARNIN] stdout: Physical volume "/dev/vdb" successfully created.
[wglee-ceph-001][WARNIN] stdout: Volume group "ceph-0dac5fa9-3772-4b44-823d-8a35e5e200b5" successfully created
[wglee-ceph-001][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 23841 -n osd-block-6b67b2d6-841f-4f46-953e-d159ef6852cb ceph-0dac5fa9-3772-4b44-823d-8a35e5e200b5
[wglee-ceph-001][WARNIN] stdout: Logical volume "osd-block-6b67b2d6-841f-4f46-953e-d159ef6852cb" created.
[wglee-ceph-001][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[wglee-ceph-001][WARNIN] Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[wglee-ceph-001][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[wglee-ceph-001][WARNIN] Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-0dac5fa9-3772-4b44-823d-8a35e5e200b5/osd-block-6b67b2d6-841f-4f46-953e-d159ef6852cb
[wglee-ceph-001][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
[wglee-ceph-001][WARNIN] Running command: /usr/bin/ln -s /dev/ceph-0dac5fa9-3772-4b44-823d-8a35e5e200b5/osd-block-6b67b2d6-841f-4f46-953e-d159ef6852cb /var/lib/ceph/osd/ceph-0/block
[wglee-ceph-001][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[wglee-ceph-001][WARNIN] stderr: 2022-02-25T15:32:37.265+0900 7f08a8c6b700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[wglee-ceph-001][WARNIN] 2022-02-25T15:32:37.265+0900 7f08a8c6b700 -1 AuthRegistry(0x7f08a40592a0) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[wglee-ceph-001][WARNIN] stderr: got monmap epoch 2
[wglee-ceph-001][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQADeBhiIGFXOBAAYsImY5hMRW6DIbtEXh9lwg==
[wglee-ceph-001][WARNIN] stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[wglee-ceph-001][WARNIN] stdout: added entity osd.0 auth(key=AQADeBhiIGFXOBAAYsImY5hMRW6DIbtEXh9lwg==)
[wglee-ceph-001][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[wglee-ceph-001][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[wglee-ceph-001][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 6b67b2d6-841f-4f46-953e-d159ef6852cb --setuser ceph --setgroup ceph
[wglee-ceph-001][WARNIN] stderr: 2022-02-25T15:32:37.505+0900 7f730fa3ad80 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
[wglee-ceph-001][WARNIN] stderr: 2022-02-25T15:32:37.561+0900 7f730fa3ad80 -1 freelist read_size_meta_from_db missing size meta in DB
[wglee-ceph-001][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdb
[wglee-ceph-001][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[wglee-ceph-001][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-0dac5fa9-3772-4b44-823d-8a35e5e200b5/osd-block-6b67b2d6-841f-4f46-953e-d159ef6852cb --path /var/lib/ceph/osd/ceph-0 --no-mon-config
[wglee-ceph-001][WARNIN] Running command: /usr/bin/ln -snf /dev/ceph-0dac5fa9-3772-4b44-823d-8a35e5e200b5/osd-block-6b67b2d6-841f-4f46-953e-d159ef6852cb /var/lib/ceph/osd/ceph-0/block
[wglee-ceph-001][WARNIN] Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[wglee-ceph-001][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
[wglee-ceph-001][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[wglee-ceph-001][WARNIN] Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-6b67b2d6-841f-4f46-953e-d159ef6852cb
[wglee-ceph-001][WARNIN] stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-6b67b2d6-841f-4f46-953e-d159ef6852cb.service → /lib/systemd/system/ceph-volume@.service.
[wglee-ceph-001][WARNIN] Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
[wglee-ceph-001][WARNIN] stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /lib/systemd/system/ceph-osd@.service.
[wglee-ceph-001][WARNIN] Running command: /usr/bin/systemctl start ceph-osd@0
[wglee-ceph-001][WARNIN] --> ceph-volume lvm activate successful for osd ID: 0
[wglee-ceph-001][WARNIN] --> ceph-volume lvm create successful for: /dev/vdb
[wglee-ceph-001][INFO ] checking OSD status...
[wglee-ceph-001][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host wglee-ceph-001 is now ready for osd use.
While the OSDs are being added, watch the cluster state with watch ceph -s.
root@wglee-ceph-001:/etc/ceph# ceph -s
cluster:
id: 4ec23dde-416c-4a0b-8c6d-6d10a960b090
health: HEALTH_WARN
mons are allowing insecure global_id reclaim
OSD count 1 < osd_pool_default_size 3
services:
mon: 3 daemons, quorum wglee-ceph-001,wglee-ceph-002,wglee-ceph-003 (age 24m)
mgr: wglee-ceph-001(active, since 10m)
osd: 1 osds: 1 up (since 17s), 1 in (since 17s)
data:
pools: 1 pools, 1 pgs
objects: 0 objects, 0 B
usage: 1.0 GiB used, 92 GiB / 93 GiB avail
pgs: 100.000% pgs not active
1 undersized+peered
# You can see the OSD count increase and the new OSDs come up.
Every 2.0s: ceph -s wglee-ceph-001: Fri Feb 25 15:39:29 2022
cluster:
id: 4ec23dde-416c-4a0b-8c6d-6d10a960b090
health: HEALTH_WARN
mons are allowing insecure global_id reclaim
Degraded data redundancy: 1 pg undersized
OSD count 2 < osd_pool_default_size 3
services:
mon: 3 daemons, quorum wglee-ceph-001,wglee-ceph-002,wglee-ceph-003 (age 31m)
mgr: wglee-ceph-001(active, since 17m)
osd: 2 osds: 2 up (since 71s), 2 in (since 71s); 1 remapped pgs
data:
pools: 1 pools, 1 pgs
objects: 0 objects, 0 B
usage: 2.0 GiB used, 184 GiB / 186 GiB avail
pgs: 1 active+undersized+remapped
progress:
Rebalancing after osd.1 marked in (68s)
[............................]
After that, add the remaining OSDs as shown below (taken from the shell history; a scripted version follows the list).
1122 ceph-deploy osd create --data /dev/vdb wglee-ceph-001
1131 ceph-deploy osd create --data /dev/vdc wglee-ceph-001
1132 ceph-deploy osd create --data /dev/vdd wglee-ceph-001
1133 ceph-deploy osd create --data /dev/vdb wglee-ceph-002
1134 ceph-deploy osd create --data /dev/vdb wglee-ceph-003
1135 ceph-deploy osd create --data /dev/vdc wglee-ceph-002
1136 ceph-deploy osd create --data /dev/vdc wglee-ceph-003
1137 ceph-deploy osd create --data /dev/vdd wglee-ceph-002
1138 ceph-deploy osd create --data /dev/vdd wglee-ceph-003
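The same nine commands could also be run as a nested loop from /home/ceph-cluster on the deploy server. A sketch, assuming every node exposes the same three devices:
for host in wglee-ceph-001 wglee-ceph-002 wglee-ceph-003; do
  for dev in /dev/vdb /dev/vdc /dev/vdd; do
    ceph-deploy osd create --data "$dev" "$host"   # one OSD per device per host
  done
done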
root@wglee-ceph-001:/var/log/ceph# ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 0.81807 - 838 GiB 9.1 GiB 60 MiB 0 B 9 GiB 829 GiB 1.08 1.00 - root default
-3 0.27269 - 279 GiB 3.0 GiB 20 MiB 0 B 3 GiB 276 GiB 1.08 1.00 - host wglee-ceph-001
0 hdd 0.09090 1.00000 93 GiB 1.0 GiB 6.6 MiB 0 B 1 GiB 92 GiB 1.08 1.00 0 up osd.0
1 hdd 0.09090 1.00000 93 GiB 1.0 GiB 6.6 MiB 0 B 1 GiB 92 GiB 1.08 1.00 0 up osd.1
2 hdd 0.09090 1.00000 93 GiB 1.0 GiB 6.6 MiB 0 B 1 GiB 92 GiB 1.08 1.00 1 up osd.2
-5 0.27269 - 279 GiB 3.0 GiB 20 MiB 0 B 3 GiB 276 GiB 1.08 1.00 - host wglee-ceph-002
3 hdd 0.09090 1.00000 93 GiB 1.0 GiB 6.6 MiB 0 B 1 GiB 92 GiB 1.08 1.00 0 up osd.3
5 hdd 0.09090 1.00000 93 GiB 1.0 GiB 6.6 MiB 0 B 1 GiB 92 GiB 1.08 1.00 0 up osd.5
7 hdd 0.09090 1.00000 93 GiB 1.0 GiB 6.6 MiB 0 B 1 GiB 92 GiB 1.08 1.00 1 up osd.7
-7 0.27269 - 279 GiB 3.0 GiB 20 MiB 0 B 3 GiB 276 GiB 1.08 1.00 - host wglee-ceph-003
4 hdd 0.09090 1.00000 93 GiB 1.0 GiB 6.6 MiB 0 B 1 GiB 92 GiB 1.08 1.00 0 up osd.4
6 hdd 0.09090 1.00000 93 GiB 1.0 GiB 6.6 MiB 0 B 1 GiB 92 GiB 1.08 1.00 0 up osd.6
8 hdd 0.09090 1.00000 93 GiB 1.0 GiB 6.6 MiB 0 B 1 GiB 92 GiB 1.08 1.00 1 up osd.8
TOTAL 838 GiB 9.1 GiB 60 MiB 0 B 9 GiB 829 GiB 1.08
MIN/MAX VAR: 1.00/1.00 STDDEV: 0
All of the OSDs were added, but the cluster status was still HEALTH_WARN:
"mons are allowing insecure global_id reclaim"
-> Clients and daemons are each assigned a global_id that is unique across the entire Ceph cluster.
If a connection drops, for example because of a network interruption, and re-authentication is needed, the client presents the global_id it was using before to prove that it is the same authenticated user.
This warning is raised because Ceph cannot verify that the old global_id presented by a reconnecting client is actually valid.
https://docs.ceph.com/en/latest/security/CVE-2021-20288/
root@wglee-ceph-001:/var/log/ceph# ceph -s
cluster:
id: 4ec23dde-416c-4a0b-8c6d-6d10a960b090
health: HEALTH_WARN
mons are allowing insecure global_id reclaim
services:
mon: 3 daemons, quorum wglee-ceph-001,wglee-ceph-002,wglee-ceph-003 (age 67m)
mgr: wglee-ceph-001(active, since 53m)
osd: 9 osds: 9 up (since 30m), 9 in (since 30m)
task status:
data:
pools: 1 pools, 1 pgs
objects: 0 objects, 0 B
usage: 9.1 GiB used, 829 GiB / 838 GiB avail
pgs: 1 active+clean
root@wglee-ceph-001:/var/log/ceph# ceph health detail
HEALTH_WARN mons are allowing insecure global_id reclaim
[WRN] AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED: mons are allowing insecure global_id reclaim
mon.wglee-ceph-001 has auth_allow_insecure_global_id_reclaim set to true
mon.wglee-ceph-002 has auth_allow_insecure_global_id_reclaim set to true
Set the auth_allow_insecure_global_id_reclaim option to false so that only clients that can prove their old global_id are allowed to reconnect,
and anything that cannot be validated is rejected outright.
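The current value can be read back before and after the change with ceph config get; a sketch (this output was not captured here):
root@wglee-ceph-001:~# ceph config get mon auth_allow_insecure_global_id_reclaim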
After applying the setting below, the cluster status returned to HEALTH_OK.
root@wglee-ceph-001:/var/log/ceph# ceph config set mon auth_allow_insecure_global_id_reclaim false
root@wglee-ceph-001:~# ceph health detail
HEALTH_OK
root@wglee-ceph-001:~# ceph -s
cluster:
id: 4ec23dde-416c-4a0b-8c6d-6d10a960b090
health: HEALTH_OK
services:
mon: 3 daemons, quorum wglee-ceph-001,wglee-ceph-002,wglee-ceph-003 (age 71m)
mgr: wglee-ceph-001(active, since 57m)
osd: 9 osds: 9 up (since 34m), 9 in (since 34m)
data:
pools: 1 pools, 1 pgs
objects: 0 objects, 0 B
usage: 9.1 GiB used, 829 GiB / 838 GiB avail
pgs: 1 active+clean
With this, the cluster has been created and the OSD, monitor, and manager daemons have been deployed.
Now that the basic setup is in place, for high availability the manager daemon is also deployed to wglee-ceph-002 and wglee-ceph-003 so that it runs active/standby.
root@deploy:/home/ceph-cluster# ceph-deploy mgr create wglee-ceph-002
root@deploy:/home/ceph-cluster# ceph-deploy mgr create wglee-ceph-003
Check the Ceph cluster state with watch.
Every 2.0s: ceph -s wglee-ceph-002: Tue Mar 1 17:23:21 2022
cluster:
id: 4ec23dde-416c-4a0b-8c6d-6d10a960b090
health: HEALTH_OK
services:
mon: 3 daemons, quorum wglee-ceph-001,wglee-ceph-002,wglee-ceph-003 (age 4d)
mgr: wglee-ceph-001(active, since 4d), standbys: wglee-ceph-002, wglee-ceph-003
osd: 9 osds: 9 up (since 4d), 9 in (since 4d)
data:
pools: 1 pools, 1 pgs
objects: 0 objects, 0 B
usage: 9.1 GiB used, 829 GiB / 838 GiB avail
pgs: 1 active+clean
=== This completes the deployment of the OSDs and the other daemons. ===
On top of this, I also want to use Ceph as object storage, so I went ahead and installed the RGW.
root@deploy:/home/ceph-cluster# ceph-deploy rgw create wglee-ceph-001
root@wglee-ceph-002:~# ceph osd pool ls
device_health_metrics
.rgw.root
default.rgw.log
default.rgw.control
default.rgw.meta
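The RGW frontend listens on port 7480 by default, so a quick check that it is answering (a sketch; the endpoint is assumed from the defaults) is to curl it and look for the S3 ListAllMyBucketsResult XML:
root@deploy:~# curl http://wglee-ceph-001:7480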