[ ceph-deploy ] 02. Deploying a ceph cluster


In the previous post, the package installation and network configuration needed for the ceph deployment were completed.

  • https://greencloud33.tistory.com/44?category=950924
    ์ด์ œ ceph cluster๋ฅผ ์ƒ์„ฑํ•˜์—ฌ ๋ฐฐํฌ๋ฅผ ์ง„ํ–‰ํ•œ๋‹ค.
    ๋ฌผ๋ก  ์ง€๋‚œ ์ž‘์—…์„ ํ†ตํ•ด์„œ deploy ์„œ๋ฒ„์—์„œ ๊ฐ๊ฐ์˜ ceph ์„œ๋ฒ„๋กœ ssh ์ ‘์†์ด ๋˜๋Š” ์ƒํ™ฉ์ด์–ด์•ผ ํ•œ๋‹ค.
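
A minimal connectivity check from the deploy node, assuming the hostnames and SSH setup from the previous post; each host should simply print its own hostname:

root@deploy:~# for host in wglee-ceph-001 wglee-ceph-002 wglee-ceph-003; do ssh "$host" hostname; done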

 

Creating a directory for the ceph deployment

Create a directory on the deploy node to hold the files needed for deploying the ceph cluster.
The various keyrings and conf files belong here.
When deploying from the deploy server, work from this directory.

root@deploy:~# mkdir /home/ceph-cluster

 

Creating the cluster

From the deploy server, create the cluster using the ceph host names.

root@deploy:/home/ceph-cluster# ceph-deploy new wglee-ceph-001 wglee-ceph-002 wglee-ceph-003

Back up the ceph.conf that is generated automatically by the command above, then edit it (the backup and edit are sketched right below).
public_network is for client <-> OSD traffic,
while cluster_network is for PG replication between OSDs.
To separate traffic by purpose, public_network is set to the ceph servers' Tenant (& Data) network range,
and cluster_network to the ceph servers' Storage network range.
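
For reference, the backup and the two added lines might look like this; the CIDRs are the ones used in this post:

root@deploy:/home/ceph-cluster# cp ceph.conf ceph.conf.bak
# then add the two network lines under [global] in ceph.conf:
public_network = 20.20.10.0/24
cluster_network = 20.20.20.0/24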

root@deploy:/home/ceph-cluster# cat ceph.conf
[global]
fsid = 4ec23dde-416c-4a0b-8c6d-6d10a960b090
mon_initial_members = wglee-ceph-001, wglee-ceph-002, wglee-ceph-003
mon_host = 20.20.10.50,20.20.10.51,20.20.10.52
public_network = 20.20.10.0/24
cluster_network = 20.20.20.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Install the ceph-related packages on the target ceph nodes.

root@deploy:/home/ceph-cluster# ceph-deploy install wglee-ceph-001 wglee-ceph-002 wglee-ceph-003
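
If you want to pin the exact release instead of relying on the repositories configured earlier, ceph-deploy install also accepts a --release option; a possible form for the Nautilus release used in this series:

root@deploy:/home/ceph-cluster# ceph-deploy install --release nautilus wglee-ceph-001 wglee-ceph-002 wglee-ceph-003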

Deploy the monitor daemons.

root@deploy:/home/ceph-cluster# ceph-deploy mon create-initial

You can now see that the keyrings needed for authentication have been generated.

root@deploy:/home/ceph-cluster# ls
ceph-deploy-ceph.log        ceph.bootstrap-osd.keyring  ceph.conf
ceph.bootstrap-mds.keyring  ceph.bootstrap-rgw.keyring  ceph.conf.bak
ceph.bootstrap-mgr.keyring  ceph.client.admin.keyring   ceph.mon.keyring

Push the admin keyring and conf files to the ceph servers so that the ceph CLI can be used on them regardless of the working directory.

root@deploy:/home/ceph-cluster# ceph-deploy admin wglee-ceph-001 wglee-ceph-002 wglee-ceph-003
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy admin wglee-ceph-001 wglee-ceph-002 wglee-ceph-003
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['wglee-ceph-001', 'wglee-ceph-002', 'wglee-ceph-003']
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf object at 0x7f95d74ee9a0>
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7f95d7f02040>
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to wglee-ceph-001
[wglee-ceph-001][DEBUG ] connection detected need for sudo
[wglee-ceph-001][DEBUG ] connected to host: wglee-ceph-001
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to wglee-ceph-002
[wglee-ceph-002][DEBUG ] connection detected need for sudo
[wglee-ceph-002][DEBUG ] connected to host: wglee-ceph-002
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to wglee-ceph-003
[wglee-ceph-003][DEBUG ] connection detected need for sudo
[wglee-ceph-003][DEBUG ] connected to host: wglee-ceph-003
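
On each ceph node, the pushed files should now be present under /etc/ceph; a quick check (output omitted):

root@wglee-ceph-001:~# ls /etc/ceph
# expect ceph.conf and ceph.client.admin.keyring here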

Deploy the manager daemon to wglee-ceph-001.
This only needs to be done on Luminous and later releases; I am running Nautilus, so I went ahead with it.

root@deploy:/home/ceph-cluster# ceph-deploy mgr create wglee-ceph-001
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create wglee-ceph-001
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf object at 0x7f111771c970>
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0x7f1117763ca0>
[ceph_deploy.cli][INFO  ]  mgr                           : [('wglee-ceph-001', 'wglee-ceph-001')]
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts wglee-ceph-001:wglee-ceph-001
[wglee-ceph-001][DEBUG ] connection detected need for sudo
[wglee-ceph-001][DEBUG ] connected to host: wglee-ceph-001
[ceph_deploy.mgr][INFO  ] Distro info: ubuntu 20.04 focal
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to wglee-ceph-001
[wglee-ceph-001][WARNIN] mgr keyring does not exist yet, creating one
[wglee-ceph-001][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.wglee-ceph-001 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-wglee-ceph-001/keyring
[wglee-ceph-001][INFO  ] Running command: sudo systemctl enable ceph-mgr@wglee-ceph-001
[wglee-ceph-001][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@wglee-ceph-001.service → /lib/systemd/system/ceph-mgr@.service.
[wglee-ceph-001][INFO  ] Running command: sudo systemctl start ceph-mgr@wglee-ceph-001
[wglee-ceph-001][INFO  ] Running command: sudo systemctl enable ceph.target

At this point, running ceph -s shows that the ceph cluster has been created.
No OSDs have been added yet.

root@wglee-ceph-001:/etc/ceph# ceph -s
  cluster:
    id:     4ec23dde-416c-4a0b-8c6d-6d10a960b090
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 3 daemons, quorum wglee-ceph-001,wglee-ceph-002,wglee-ceph-003 (age 23m)
    mgr: wglee-ceph-001(active, since 9m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

 

Adding OSDs

๊ฐ ceph ์„œ๋ฒ„์˜ /dev/vdb, /dev/vdc, /dev/vdd disk๋ฅผ ๊ฐ๊ฐ์˜ OSD๋กœ ์ถ”๊ฐ€ํ•œ๋‹ค.

root@deploy:/home/ceph-cluster# ceph-deploy osd create --data /dev/vdb wglee-ceph-001
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create --data /dev/vdb wglee-ceph-001
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf object at 0x7f54ad9d0880>
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f54ada6eaf0>
[ceph_deploy.cli][INFO  ]  data                          : /dev/vdb
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  host                          : wglee-ceph-001
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdb
[wglee-ceph-001][DEBUG ] connection detected need for sudo
[wglee-ceph-001][DEBUG ] connected to host: wglee-ceph-001
[ceph_deploy.osd][INFO  ] Distro info: ubuntu 20.04 focal
[ceph_deploy.osd][DEBUG ] Deploying osd to wglee-ceph-001
[wglee-ceph-001][WARNIN] osd keyring does not exist yet, creating one
[wglee-ceph-001][INFO  ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[wglee-ceph-001][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[wglee-ceph-001][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 6b67b2d6-841f-4f46-953e-d159ef6852cb
[wglee-ceph-001][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-0dac5fa9-3772-4b44-823d-8a35e5e200b5 /dev/vdb
[wglee-ceph-001][WARNIN]  stdout: Physical volume "/dev/vdb" successfully created.
[wglee-ceph-001][WARNIN]  stdout: Volume group "ceph-0dac5fa9-3772-4b44-823d-8a35e5e200b5" successfully created
[wglee-ceph-001][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 23841 -n osd-block-6b67b2d6-841f-4f46-953e-d159ef6852cb ceph-0dac5fa9-3772-4b44-823d-8a35e5e200b5
[wglee-ceph-001][WARNIN]  stdout: Logical volume "osd-block-6b67b2d6-841f-4f46-953e-d159ef6852cb" created.
[wglee-ceph-001][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[wglee-ceph-001][WARNIN] Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[wglee-ceph-001][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[wglee-ceph-001][WARNIN] Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-0dac5fa9-3772-4b44-823d-8a35e5e200b5/osd-block-6b67b2d6-841f-4f46-953e-d159ef6852cb
[wglee-ceph-001][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
[wglee-ceph-001][WARNIN] Running command: /usr/bin/ln -s /dev/ceph-0dac5fa9-3772-4b44-823d-8a35e5e200b5/osd-block-6b67b2d6-841f-4f46-953e-d159ef6852cb /var/lib/ceph/osd/ceph-0/block
[wglee-ceph-001][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[wglee-ceph-001][WARNIN]  stderr: 2022-02-25T15:32:37.265+0900 7f08a8c6b700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[wglee-ceph-001][WARNIN] 2022-02-25T15:32:37.265+0900 7f08a8c6b700 -1 AuthRegistry(0x7f08a40592a0) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[wglee-ceph-001][WARNIN]  stderr: got monmap epoch 2
[wglee-ceph-001][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQADeBhiIGFXOBAAYsImY5hMRW6DIbtEXh9lwg==
[wglee-ceph-001][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[wglee-ceph-001][WARNIN]  stdout: added entity osd.0 auth(key=AQADeBhiIGFXOBAAYsImY5hMRW6DIbtEXh9lwg==)
[wglee-ceph-001][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[wglee-ceph-001][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[wglee-ceph-001][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 6b67b2d6-841f-4f46-953e-d159ef6852cb --setuser ceph --setgroup ceph
[wglee-ceph-001][WARNIN]  stderr: 2022-02-25T15:32:37.505+0900 7f730fa3ad80 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
[wglee-ceph-001][WARNIN]  stderr: 2022-02-25T15:32:37.561+0900 7f730fa3ad80 -1 freelist read_size_meta_from_db missing size meta in DB
[wglee-ceph-001][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdb
[wglee-ceph-001][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[wglee-ceph-001][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-0dac5fa9-3772-4b44-823d-8a35e5e200b5/osd-block-6b67b2d6-841f-4f46-953e-d159ef6852cb --path /var/lib/ceph/osd/ceph-0 --no-mon-config
[wglee-ceph-001][WARNIN] Running command: /usr/bin/ln -snf /dev/ceph-0dac5fa9-3772-4b44-823d-8a35e5e200b5/osd-block-6b67b2d6-841f-4f46-953e-d159ef6852cb /var/lib/ceph/osd/ceph-0/block
[wglee-ceph-001][WARNIN] Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[wglee-ceph-001][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
[wglee-ceph-001][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[wglee-ceph-001][WARNIN] Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-6b67b2d6-841f-4f46-953e-d159ef6852cb
[wglee-ceph-001][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-6b67b2d6-841f-4f46-953e-d159ef6852cb.service → /lib/systemd/system/ceph-volume@.service.
[wglee-ceph-001][WARNIN] Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
[wglee-ceph-001][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /lib/systemd/system/ceph-osd@.service.
[wglee-ceph-001][WARNIN] Running command: /usr/bin/systemctl start ceph-osd@0
[wglee-ceph-001][WARNIN] --> ceph-volume lvm activate successful for osd ID: 0
[wglee-ceph-001][WARNIN] --> ceph-volume lvm create successful for: /dev/vdb
[wglee-ceph-001][INFO  ] checking OSD status...
[wglee-ceph-001][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host wglee-ceph-001 is now ready for osd use.

While adding the OSDs, check the cluster state with watch ceph -s.

root@wglee-ceph-001:/etc/ceph# ceph -s
  cluster:
    id:     4ec23dde-416c-4a0b-8c6d-6d10a960b090
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim
            OSD count 1 < osd_pool_default_size 3

  services:
    mon: 3 daemons, quorum wglee-ceph-001,wglee-ceph-002,wglee-ceph-003 (age 24m)
    mgr: wglee-ceph-001(active, since 10m)
    osd: 1 osds: 1 up (since 17s), 1 in (since 17s)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   1.0 GiB used, 92 GiB / 93 GiB avail
    pgs:     100.000% pgs not active
             1 undersized+peered


# The OSD count increases and the new OSDs come up.
Every 2.0s: ceph -s                                wglee-ceph-001: Fri Feb 25 15:39:29 2022

  cluster:
    id:     4ec23dde-416c-4a0b-8c6d-6d10a960b090
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim
            Degraded data redundancy: 1 pg undersized
            OSD count 2 < osd_pool_default_size 3

  services:
    mon: 3 daemons, quorum wglee-ceph-001,wglee-ceph-002,wglee-ceph-003 (age 31m)
    mgr: wglee-ceph-001(active, since 17m)
    osd: 2 osds: 2 up (since 71s), 2 in (since 71s); 1 remapped pgs

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   2.0 GiB used, 184 GiB / 186 GiB avail
    pgs:     1 active+undersized+remapped

  progress:
    Rebalancing after osd.1 marked in (68s)
      [............................]

After that, I added the remaining OSDs as shown in the shell history below (a loop version is sketched after the list).

1122  ceph-deploy osd create --data /dev/vdb wglee-ceph-001
1131  ceph-deploy osd create --data /dev/vdc wglee-ceph-001
1132  ceph-deploy osd create --data /dev/vdd wglee-ceph-001
1133  ceph-deploy osd create --data /dev/vdb wglee-ceph-002
1134  ceph-deploy osd create --data /dev/vdb wglee-ceph-003
1135  ceph-deploy osd create --data /dev/vdc wglee-ceph-002
1136  ceph-deploy osd create --data /dev/vdc wglee-ceph-003
1137  ceph-deploy osd create --data /dev/vdd wglee-ceph-002
1138  ceph-deploy osd create --data /dev/vdd wglee-ceph-003
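
The same nine commands can also be run as a loop from the deploy node; a small sketch, assuming the same three hosts and three devices:

root@deploy:/home/ceph-cluster# for host in wglee-ceph-001 wglee-ceph-002 wglee-ceph-003; do
>   for dev in /dev/vdb /dev/vdc /dev/vdd; do
>     ceph-deploy osd create --data "$dev" "$host"
>   done
> done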

root@wglee-ceph-001:/var/log/ceph# ceph osd df tree
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP  META   AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME
-1         0.81807         -  838 GiB  9.1 GiB   60 MiB   0 B  9 GiB  829 GiB  1.08  1.00    -          root default
-3         0.27269         -  279 GiB  3.0 GiB   20 MiB   0 B  3 GiB  276 GiB  1.08  1.00    -              host wglee-ceph-001
 0    hdd  0.09090   1.00000   93 GiB  1.0 GiB  6.6 MiB   0 B  1 GiB   92 GiB  1.08  1.00    0      up          osd.0
 1    hdd  0.09090   1.00000   93 GiB  1.0 GiB  6.6 MiB   0 B  1 GiB   92 GiB  1.08  1.00    0      up          osd.1
 2    hdd  0.09090   1.00000   93 GiB  1.0 GiB  6.6 MiB   0 B  1 GiB   92 GiB  1.08  1.00    1      up          osd.2
-5         0.27269         -  279 GiB  3.0 GiB   20 MiB   0 B  3 GiB  276 GiB  1.08  1.00    -              host wglee-ceph-002
 3    hdd  0.09090   1.00000   93 GiB  1.0 GiB  6.6 MiB   0 B  1 GiB   92 GiB  1.08  1.00    0      up          osd.3
 5    hdd  0.09090   1.00000   93 GiB  1.0 GiB  6.6 MiB   0 B  1 GiB   92 GiB  1.08  1.00    0      up          osd.5
 7    hdd  0.09090   1.00000   93 GiB  1.0 GiB  6.6 MiB   0 B  1 GiB   92 GiB  1.08  1.00    1      up          osd.7
-7         0.27269         -  279 GiB  3.0 GiB   20 MiB   0 B  3 GiB  276 GiB  1.08  1.00    -              host wglee-ceph-003
 4    hdd  0.09090   1.00000   93 GiB  1.0 GiB  6.6 MiB   0 B  1 GiB   92 GiB  1.08  1.00    0      up          osd.4
 6    hdd  0.09090   1.00000   93 GiB  1.0 GiB  6.6 MiB   0 B  1 GiB   92 GiB  1.08  1.00    0      up          osd.6
 8    hdd  0.09090   1.00000   93 GiB  1.0 GiB  6.6 MiB   0 B  1 GiB   92 GiB  1.08  1.00    1      up          osd.8
                       TOTAL  838 GiB  9.1 GiB   60 MiB   0 B  9 GiB  829 GiB  1.08
MIN/MAX VAR: 1.00/1.00  STDDEV: 0

 

All of the OSDs were added, but the cluster state was HEALTH_WARN:
"mons are allowing insecure global_id reclaim"
-> Clients and daemons are each assigned a global_id that is unique across the whole ceph cluster.
If a connection is dropped, for example by a network interruption, and re-authentication is required, the client proves it is the same authenticated entity by presenting the global_id it was using before.
This warning occurs because ceph cannot confirm that the old global_id presented by the client is valid.
https://docs.ceph.com/en/latest/security/CVE-2021-20288/

root@wglee-ceph-001:/var/log/ceph# ceph -s
  cluster:
    id:     4ec23dde-416c-4a0b-8c6d-6d10a960b090
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim

  services:
    mon: 3 daemons, quorum wglee-ceph-001,wglee-ceph-002,wglee-ceph-003 (age 67m)
    mgr: wglee-ceph-001(active, since 53m)
    osd: 9 osds: 9 up (since 30m), 9 in (since 30m)

  task status:

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   9.1 GiB used, 829 GiB / 838 GiB avail
    pgs:     1 active+clean

root@wglee-ceph-001:/var/log/ceph# ceph health detail
HEALTH_WARN mons are allowing insecure global_id reclaim
[WRN] AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED: mons are allowing insecure global_id reclaim
    mon.wglee-ceph-001 has auth_allow_insecure_global_id_reclaim set to true
    mon.wglee-ceph-002 has auth_allow_insecure_global_id_reclaim set to true

Set the auth_allow_insecure_global_id_reclaim option to false so that only clients presenting a valid old global_id can reconnect, and invalid ones are rejected without raising a separate warning.
The cluster state is now HEALTH_OK.

root@wglee-ceph-001:/var/log/ceph# ceph config set mon auth_allow_insecure_global_id_reclaim false

root@wglee-ceph-001:~# ceph health detail
HEALTH_OK


root@wglee-ceph-001:~# ceph -s
  cluster:
    id:     4ec23dde-416c-4a0b-8c6d-6d10a960b090
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum wglee-ceph-001,wglee-ceph-002,wglee-ceph-003 (age 71m)
    mgr: wglee-ceph-001(active, since 57m)
    osd: 9 osds: 9 up (since 34m), 9 in (since 34m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   9.1 GiB used, 829 GiB / 838 GiB avail
    pgs:     1 active+clean

Cluster creation and deployment of the osd, monitor, and manager daemons are now complete.
Now that the basic setup works, I will also deploy manager daemons to wglee-ceph-002 and wglee-ceph-003 for high availability, so that they run in active/standby mode.

root@deploy:/home/ceph-cluster# ceph-deploy mgr create wglee-ceph-002
root@deploy:/home/ceph-cluster# ceph-deploy mgr create wglee-ceph-003

Check the ceph cluster state with watch.

Every 2.0s: ceph -s                                               wglee-ceph-002: Tue Mar  1 17:23:21 2022
 
  cluster:
    id:     4ec23dde-416c-4a0b-8c6d-6d10a960b090
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum wglee-ceph-001,wglee-ceph-002,wglee-ceph-003 (age 4d)
    mgr: wglee-ceph-001(active, since 4d), standbys: wglee-ceph-002, wglee-ceph-003
    osd: 9 osds: 9 up (since 4d), 9 in (since 4d)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   9.1 GiB used, 829 GiB / 838 GiB avail
    pgs:     1 active+clean
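
To confirm that failover actually works, the active manager can be failed on purpose with ceph mgr fail; one of the standbys should then show up as active in ceph -s (the mgr name below follows this post's naming):

root@wglee-ceph-001:~# ceph mgr fail wglee-ceph-001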

=== OSD and daemon deployment is now complete. ===

In addition, I also want to use ceph as object storage, so I went ahead and installed rgw.

root@deploy:/home/ceph-cluster# ceph-deploy rgw create wglee-ceph-001

root@wglee-ceph-002:~# ceph osd pool ls
device_health_metrics
.rgw.root
default.rgw.log
default.rgw.control
default.rgw.meta
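
The RGW frontend listens on port 7480 by default, so a simple way to confirm the object gateway is responding is to curl it from any node that can reach wglee-ceph-001 (assuming the default port was not changed); an anonymous S3 ListAllMyBucketsResult XML response is expected:

root@deploy:/home/ceph-cluster# curl http://wglee-ceph-001:7480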
