[ ceph ] Installing the Rados Block Device (RBD)


ceph์„ openstack์˜ cinder์™€ ๊ฐ™์€ block stroage ์„œ๋น„์Šค์˜ ๋ฐฑ์—”๋“œ๋กœ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” RBD๋ฅผ ๊ตฌ์„ฑํ•ด์•ผ ํ•œ๋‹ค.
RBD(Rados Block Device)๋Š” Ceph Block Device๋ผ๊ณ ๋„ ์•Œ๋Ÿฌ์ ธ ์žˆ๋‹ค.
https://docs.ceph.com/en/latest/rbd/rbd-openstack/

Before starting, confirm that the Ceph cluster built earlier is in the HEALTH_OK state.
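
As a quick check before going further, the health state by itself can be printed like this (ceph health detail explains the reason whenever it is not OK):

root@deploy:/home/ceph-cluster# ceph health
HEALTH_OK
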
After the work in https://greencloud33.tistory.com/45, the pool and disk status looks like this:

root@deploy:/home/ceph-cluster# ceph osd lspools
1 device_health_metrics
2 .rgw.root
3 default.rgw.log
4 default.rgw.control
5 default.rgw.meta


root@deploy:/home/ceph-cluster# ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    838 GiB  828 GiB  956 MiB   9.9 GiB       1.19
TOTAL  838 GiB  828 GiB  956 MiB   9.9 GiB       1.19

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics   1    1      0 B        0      0 B      0    262 GiB
.rgw.root               2   32  1.3 KiB        4  768 KiB      0    262 GiB
default.rgw.log         3   32  3.4 KiB      207    6 MiB      0    262 GiB
default.rgw.control     4   32      0 B        8      0 B      0    262 GiB
default.rgw.meta        5    8      0 B        0      0 B      0    262 GiB

 

deploy ์„œ๋ฒ„์—์„œ ceph cli ์‚ฌ์šฉํ•˜๊ธฐ

์‚ฌ์‹ค ์ด ๊ณผ์ •์€ RBD ์„ค์น˜์™€ ๋ฌด๊ด€ํ•˜๊ธฐ๋Š” ํ•œ๋ฐ ํŽธ์˜์„ฑ์„ ์œ„ํ•ด์„œ ์ง„ํ–‰ํ•˜์˜€๋‹ค.
deploy ์„œ๋ฒ„์— ceph client๋ฅผ ์„ค์น˜ํ•˜์—ฌ ceph ํด๋Ÿฌ์Šคํ„ฐ์˜ ๋‚ด์šฉ์„ cli๋กœ ์ œ์–ด, ์กฐํšŒํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•œ๋‹ค.

root@deploy:/home/ceph-cluster# ceph-deploy install ceph-client

With only the package installed, running ceph -s on the deploy server produces an error.
This is because the client cannot authenticate to the Ceph cluster as the admin user, so copy
ceph.client.admin.keyring into the /etc/ceph path listed in the error.

root@deploy:/home/ceph-cluster# ceph -s
2022-03-01T17:55:30.576+0900 7fa8edfb6700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-03-01T17:55:30.576+0900 7fa8edfb6700 -1 AuthRegistry(0x7fa8e80590e0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
2022-03-01T17:55:30.576+0900 7fa8ecd54700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-03-01T17:55:30.576+0900 7fa8ecd54700 -1 AuthRegistry(0x7fa8e805b698) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
2022-03-01T17:55:30.576+0900 7fa8ecd54700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-03-01T17:55:30.576+0900 7fa8ecd54700 -1 AuthRegistry(0x7fa8ecd53130) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[errno 2] RADOS object not found (error connecting to the cluster)

root@deploy:/home/ceph-cluster# cp -a ceph.client.admin.keyring /etc/ceph

Now the Ceph cluster can be queried from the deploy server as well.

root@deploy:/home/ceph-cluster# ceph -s
  cluster:
    id:     4ec23dde-416c-4a0b-8c6d-6d10a960b090
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum wglee-ceph-001,wglee-ceph-002,wglee-ceph-003 (age 4d)
    mgr: wglee-ceph-001(active, since 4d), standbys: wglee-ceph-002, wglee-ceph-003
    osd: 9 osds: 9 up (since 4d), 9 in (since 4d)
    rgw: 1 daemon active (wglee-ceph-001)

  task status:

  data:
    pools:   5 pools, 105 pgs
    objects: 187 objects, 4.7 KiB
    usage:   9.2 GiB used, 829 GiB / 838 GiB avail
    pgs:     105 active+clean

 

Configuring ceph.conf

pool ์ƒ์„ฑ ์ „์— ceph.conf์— osd default ๊ฐ’ ๋ฐ cache ์„ค์ •์„ ํ•˜์˜€๋‹ค.
ceph.conf ์ƒ˜ํ”Œ์— ์žˆ๋Š” ์ฃผ์„์„ ์ฝ์–ด๋ณด๋‹ˆ osd pool default pg num ๊ฐ™์€ ๊ฒฝ์šฐ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๊ณ„์‚ฐํ•˜๋Š”๊ฑธ ๊ถŒ์žฅํ•œ๋‹ค๊ณ  ํ•œ๋‹ค.

(number of OSDs * 100) / number of replicas

I did not follow that calculation exactly, but settled on roughly 320.
320 is the value that came out of the PG calculator.
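
As a rough sanity check of that formula against this cluster (9 OSDs and a replica size of 3, as configured below):

(9 * 100) / 3 = 300, which is in the same ballpark as the 320 used here.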

###config for ceph osds
osd pool default size = 3
osd pool default min size  = 2
osd pool default pg num = 320
osd pool default pgp num = 320

[client]
rbd cache = true
rbd cache size = 33554432 # (32MiB)
rbd cache max dirty = 20971520 # (20MiB)
rbd cache target dirty = 10485760 # (10MiB)
rbd cache max dirty age = 1.0
rbd cache writethrough until flush = true
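
These defaults are read from ceph.conf by the daemons and clients, so the updated file has to reach the other nodes. With ceph-deploy that is typically done by pushing it out, roughly like this (a sketch using this cluster's hostnames):

root@deploy:/home/ceph-cluster# ceph-deploy --overwrite-conf config push wglee-ceph-001 wglee-ceph-002 wglee-ceph-003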

 

Disabling the PG autoscaler

PG๋ฅผ ์ž๋™์œผ๋กœ ๊ณ„์‚ฐํ•˜์—ฌ ๋ถ„๋ฐฐํ•˜๋Š” ์˜ต์…˜์ด ์ผœ์ ธ ์žˆ์—ˆ๋‹ค.
๋‚˜๋Š” pool ๋ณ„๋กœ ์‚ฌ์šฉ๋ฅ ์„ ๊ณ ๋ คํ•ด์„œ ๋ถ„๋ฐฐํ•˜๊ณ  ์‹ถ์–ด์„œ ์ด๋ฅผ disable ํ–ˆ๋‹ค.

root@deploy:/home/ceph-cluster# ceph osd pool set device_health_metrics pg_autoscale_mode off
set pool 1 pg_autoscale_mode to off
root@deploy:/home/ceph-cluster# ceph osd pool set  .rgw.root  pg_autoscale_mode off
set pool 2 pg_autoscale_mode to off
root@deploy:/home/ceph-cluster# ceph osd pool set  default.rgw.log  pg_autoscale_mode off
set pool 3 pg_autoscale_mode to off
root@deploy:/home/ceph-cluster# ceph osd pool set  default.rgw.control  pg_autoscale_mode off
set pool 4 pg_autoscale_mode to off
root@deploy:/home/ceph-cluster# ceph osd pool set  default.rgw.meta  pg_autoscale_mode off
set pool 5 pg_autoscale_mode to off

์•ž์œผ๋กœ ์ƒ์„ฑ๋  pool ์— ๋Œ€ํ•ด์„œ๋„ ํ•ด๋‹น ์˜ต์…˜์„ ๋„๊ณ  ์‹ถ์œผ๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ•˜๋ฉด ๋œ๋‹ค.

root@deploy:/home/ceph-cluster# ceph config set global osd_pool_default_pg_autoscale_mode off

root@deploy:/home/ceph-cluster# ceph config dump
WHO     MASK  LEVEL     OPTION                                 VALUE  RO
global        advanced  osd_pool_default_pg_autoscale_mode     off
  mon         advanced  auth_allow_insecure_global_id_reclaim  false
  mon         advanced  mon_allow_pool_delete                  false

๋ชจ๋‘ off ๋œ ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ๋‹ค.

root@deploy:/home/ceph-cluster# ceph osd pool autoscale-status
POOL                     SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
device_health_metrics      0                 3.0        838.1G  0.0000                                  1.0       1              off
.rgw.root               1289                 3.0        838.1G  0.0000                                  1.0      16              off
default.rgw.log         3520                 3.0        838.1G  0.0000                                  1.0      16              off
default.rgw.control        0                 3.0        838.1G  0.0000                                  1.0      16              off
default.rgw.meta           0                 3.0        838.1G  0.0000                                  4.0      16              off

 

RBD pool ์ƒ์„ฑํ•˜๊ธฐ

https://docs.ceph.com/en/nautilus/rados/operations/pools/#create-a-pool
์ด์ œ ๋ณธ๊ฒฉ์ ์œผ๋กœ RBD ์„ค์น˜๋ฅผ ์‹œ์ž‘ํ•œ๋‹ค. ๋‹ค์Œ pool์„ ์ƒ์„ฑํ•˜๊ณ , pg_num์„ ์ˆ˜๋™์œผ๋กœ ์ง€์ •ํ•ด ์ฃผ๋ ค๊ณ  ํ•œ๋‹ค.
๊ฐ pool์ด ํ•˜๋Š” ์—ญํ• ์€ ๋‹ค์Œ๊ณผ ๊ฐ™๋‹ค
- volume pool : openstack cinder์™€ ์—ฐ๋™. ๋ธ”๋ก์Šคํ† ๋ฆฌ์ง€ ๋ฐ ๋ธ”๋ก์Šคํ† ๋ฆฌ์ง€ ์Šค๋ƒ…์ƒท์„ ์ €์žฅ
- images pool : openstack glance์™€ ์—ฐ๋™. VM ์ƒ์„ฑ์„ ์œ„ํ•œ OS image์— ํ•ด๋‹นํ•œ๋‹ค.
   !! ์ฃผ์˜์‚ฌํ•ญ ) ceph ์ด์šฉํ•ด์„œ VM์˜ OS image ๋งŒ๋“ค๋•Œ๋Š” QCOW2 ๋ณด๋‹ค๋Š” raw ํƒ€์ž…์„ ์‚ฌ์šฉ์ด ๊ถŒ์žฅ๋œ๋‹ค.
- vms pool : openstack VM์˜ ๋ถ€ํŒ… ๋””์Šคํฌ๋ฅผ ์ €์žฅํ•œ๋‹ค. Havana ๋ฒ„์ „๋ถ€ํ„ฐ๋Š” ๊ฐ€์ƒ์„œ๋ฒ„ ๋ถ€ํŒ…๋””์Šคํฌ๊ฐ€ cinder๋ฅผ ํƒ€์ง€ ์•Š๊ณ     ๋ฐ”๋กœ ceph๊ณผ ํ†ต์‹ ํ•˜์—ฌ ์ƒ์„ฑ๋œ๋‹ค. ์œ ์ง€๋ณด์ˆ˜ ์ž‘์—…์—์„œ ๋ฒˆ๊ฑฐ๋กœ์›€์„ ๋œ๊ณ  live-migration ์—๋„ ์ด์ ์ด ์žˆ๋‹ค๊ณ  ํ•œ๋‹ค.
- backups pool : ๋ธ”๋ก ์Šคํ† ๋ฆฌ์ง€๋ฅผ ๋ฐฑ์—…ํ•  ์ˆ˜ ์žˆ๋Š” cinder์˜ ๊ธฐ๋Šฅ์ด ์žˆ๊ณ , ๋ฐฑ์—… ๋‚ด์šฉ์„ ์ €์žฅํ•˜๋Š” pool๋กœ ๋ณด์ธ๋‹ค.
  ๋‚˜๋Š” ์•„์ง clnder backup ๊ตฌํ˜„๊นŒ์ง€๋Š” ์ƒ๊ฐ์„ ์•ˆํ•˜๊ณ  ์žˆ์ง€๋งŒ ์ผ๋‹จ pool์€ ์ƒ์„ฑํ–ˆ๋‹ค.
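
On the raw-versus-QCOW2 note above: an existing QCOW2 image can be converted to raw before uploading it to Glance, for example with qemu-img (the file names here are just placeholders):

qemu-img convert -f qcow2 -O raw my-image.qcow2 my-image.raw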

root@deploy:/home/ceph-cluster# ceph osd pool create volumes
pool 'volumes' created
root@deploy:/home/ceph-cluster# ceph osd pool create images
pool 'images' created
root@deploy:/home/ceph-cluster# ceph osd pool create backups
pool 'backups' created
root@deploy:/home/ceph-cluster# ceph osd pool create vms
pool 'vms' created
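
As an aside, ceph osd pool create also accepts pg_num and pgp_num directly, so the per-pool values applied in the next step could have been set at creation time instead, for example:

ceph osd pool create volumes 128 128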

 

pg_num ์ ์šฉ

Honestly, I am not sure I got this exactly right.
At https://access.redhat.com/labs/cephpgc/ I entered 10 OSDs, replica 3, and the pools I need, and let it calculate.

I applied the PG counts from that calculation.

root@deploy:/home/ceph-cluster# ceph osd pool set device_health_metrics pg_num 16
root@deploy:/home/ceph-cluster# ceph osd pool set device_health_metrics pgp_num 16
set pool 1 pgp_num to 16
root@deploy:/home/ceph-cluster# ceph osd pool get .rgw.root pgp_num
pgp_num: 16
root@deploy:/home/ceph-cluster# ceph osd pool get .rgw.root pg_num
pg_num: 16
root@deploy:/home/ceph-cluster# ceph osd pool set .rgw.root pg_num 16
root@deploy:/home/ceph-cluster# ceph osd pool set .rgw.root pgp_num 16
set pool 2 pgp_num to 16
root@deploy:/home/ceph-cluster# ceph osd pool set default.rgw.log pg_num 16
root@deploy:/home/ceph-cluster# ceph osd pool set default.rgw.log pgp_num 16
set pool 3 pgp_num to 16
root@deploy:/home/ceph-cluster# ceph osd pool set default.rgw.control pg_num 16
root@deploy:/home/ceph-cluster# ceph osd pool set default.rgw.control pgp_num 16
set pool 4 pgp_num to 16
root@deploy:/home/ceph-cluster# ceph osd pool set default.rgw.meta pg_num 16
root@deploy:/home/ceph-cluster# ceph osd pool set default.rgw.meta pgp_num 16
set pool 5 pgp_num to 16
root@deploy:/home/ceph-cluster# ceph osd pool set backups pg_num 32
root@deploy:/home/ceph-cluster# ceph osd pool set backups pgp_num 32
set pool 12 pgp_num to 32
root@deploy:/home/ceph-cluster# ceph osd pool set volumes pg_num 128
set pool 10 pg_num to 128
root@deploy:/home/ceph-cluster# ceph osd pool set volumes pgp_num 128
set pool 10 pgp_num to 128
root@deploy:/home/ceph-cluster# ceph osd pool set images pg_num 16
set pool 11 pg_num to 16
root@deploy:/home/ceph-cluster# ceph osd pool set images pgp_num 16
set pool 11 pgp_num to 16
root@deploy:/home/ceph-cluster# ceph osd pool set vms pg_num 64
set pool 13 pg_num to 64
root@deploy:/home/ceph-cluster# ceph osd pool set vms pgp_num 64
set pool 13 pgp_num to 64

Keep an eye on the ceph status with the watch command and wait until every PG is active+clean.
All 320 PGs are now active+clean.

Every 2.0s: ceph -s                                                                                    deploy: Sat May 21 17:36:31 2022

  cluster:
    id:     4ec23dde-416c-4a0b-8c6d-6d10a960b090
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum wglee-ceph-001,wglee-ceph-002,wglee-ceph-003 (age 3h)
    mgr: wglee-ceph-002(active, since 3h), standbys: wglee-ceph-001, wglee-ceph-003
    osd: 9 osds: 9 up (since 3h), 9 in (since 2M)
    rgw: 1 daemon active (wglee-ceph-001)

  task status:

  data:
    pools:   9 pools, 320 pgs
    objects: 219 objects, 4.7 KiB
    usage:   10 GiB used, 828 GiB / 838 GiB avail
    pgs:     320 active+clean

The ceph df output: you can see the PGs distributed exactly as I configured them.


root@deploy:/home/ceph-cluster# ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    838 GiB  828 GiB  1.1 GiB    10 GiB       1.20
TOTAL  838 GiB  828 GiB  1.1 GiB    10 GiB       1.20

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics   1   16      0 B        0      0 B      0    262 GiB
.rgw.root               2   16  1.3 KiB        4  768 KiB      0    262 GiB
default.rgw.log         3   16  3.4 KiB      207    6 MiB      0    262 GiB
default.rgw.control     4   16      0 B        8      0 B      0    262 GiB
default.rgw.meta        5   16      0 B        0      0 B      0    262 GiB
volumes                10  128      0 B        0      0 B      0    262 GiB
images                 11   16      0 B        0      0 B      0    262 GiB
backups                12   32      0 B        0      0 B      0    262 GiB
vms                    13   64      0 B        0      0 B      0    262 GiB

 

RBD init

pool์„ ์ƒ์„ฑํ•˜๋ฉด ์–ด๋–ค application์—์„œ ์‚ฌ์šฉํ•  ๊ฒƒ์ธ์ง€๋ฅผ ์ง€์ •ํ•ด์•ผ ํ•œ๋‹ค. rbd block device ๊ฐ™์€ ๊ฒฝ์šฐ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด init ํ•˜์—ฌ ์ด๋ฅผ ์ˆ˜ํ–‰ํ•œ๋‹ค.

root@deploy:/home/ceph-cluster# rbd pool init volumes
root@deploy:/home/ceph-cluster# rbd pool init images
root@deploy:/home/ceph-cluster# rbd pool init backups
root@deploy:/home/ceph-cluster# rbd pool init vms
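
rbd pool init associates the pool with the rbd application; the generic way to associate any pool with an application, shown here only for reference, is:

ceph osd pool application enable <pool-name> rbd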

 

RBD object ์ƒ์„ฑ ํ…Œ์ŠคํŠธ

Now that the pools are set up, let's test that resources are created properly.
The value passed to --size is in MiB; I created a 1 MiB image.

root@deploy:/home/ceph-cluster# rbd create --size 1 volumes/wglee-test

๋‹ค์Œ๊ณผ ๊ฐ™์ด wglee-test ๋ผ๋Š” object๊ฐ€ volumes pool์— ์ƒ์„ฑ๋˜์—ˆ๋‹ค.
volumes pool์˜ STORED ๋ฐ์ดํ„ฐ๋„ 70B๋กœ ์ฆ๊ฐ€ํ–ˆ๋‹ค.

root@deploy:/home/ceph-cluster# rbd ls volumes
wglee-test

root@deploy:/home/ceph-cluster# rbd info volumes/wglee-test
rbd image 'wglee-test':
        size 1 MiB in 1 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 6a5831cfe8f3
        block_name_prefix: rbd_data.6a5831cfe8f3
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Sat May 21 17:50:32 2022
        access_timestamp: Sat May 21 17:50:32 2022
        modify_timestamp: Sat May 21 17:50:32 2022

root@deploy:/home/ceph-cluster# ceph df
...
--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
...
volumes                10  128     70 B        5  576 KiB      0    262 GiB
images                 11   16     19 B        1  192 KiB      0    262 GiB
backups                12   32     19 B        1  192 KiB      0    262 GiB
vms                    13   64     19 B        1  192 KiB      0    262 GiB
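
Under the hood the image data is striped across RADOS objects whose names start with the block_name_prefix shown by rbd info above; a command like the following lists them (it may print nothing for this image yet, since RBD images are thin-provisioned and no data has been written):

root@deploy:/home/ceph-cluster# rados -p volumes ls | grep rbd_data.6a5831cfe8f3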

Let's change the image size with rbd resize.

root@deploy:/home/ceph-cluster# rbd resize --size 5 volumes/wglee-test
Resizing image: 100% complete...done.

root@deploy:/home/ceph-cluster# rbd info volumes/wglee-test
rbd image 'wglee-test':
        size 5 MiB in 2 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 6a5831cfe8f3
        block_name_prefix: rbd_data.6a5831cfe8f3
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Sat May 21 17:50:32 2022
        access_timestamp: Sat May 21 17:50:32 2022
        modify_timestamp: Sat May 21 17:50:32 2022

Finally, delete the test image.
After the deletion, ceph df shows the volumes pool usage dropping back down. Everything seems to be working! ๐Ÿ˜‰๐Ÿ˜ƒ

root@deploy:/home/ceph-cluster# rbd rm volumes/wglee-test
Removing image: 100% complete...done.

root@deploy:/home/ceph-cluster# ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    838 GiB  828 GiB  1.1 GiB    10 GiB       1.21
TOTAL  838 GiB  828 GiB  1.1 GiB    10 GiB       1.21

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
...
volumes                10  128     19 B        3  192 KiB      0    262 GiB
...
