How to extend a logical volume on LVM RAID 5

Sorry for my broken/bad English...

I have four 3 TB disks on lvm2, and I created an LV using RAID 5.

I added 2 new disks to my volume group and tried to extend the LV, but it doesn't work.
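
For reference, each new disk was added to the group roughly like this (a sketch; the pvdisplay output further down shows the actual layout):

# pvcreate /dev/sdb1        # initialize the new partition as an LVM PV
# vgextend vg2 /dev/sdb1    # add it to the vg2 volume group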

Here is my VG status:


# vgdisplay 
  --- Volume group ---
  VG Name               vg2
  System ID             
  Format                lvm2
  Metadata Areas        5
  Metadata Sequence No  43
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                5
  Act PV                5
  VG Size               13.65 TiB
  PE Size               4.00 MiB
  Total PE              3576980
  Alloc PE / Size       2861584 / 10.92 TiB
  Free  PE / Size       715396 / 2.73 TiB
  VG UUID               h5w1kW-pdym-Na7U-dRHf-9Xk5-NX3F-GA19Uf

Here is my LV status:


# lvs
  LV   VG   Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv2  vg2  rwi-a-r--- 8.19t  
# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/vg2/lv2
  LV Name                lv2
  VG Name                vg2
  LV UUID                aaC9Qc-1Yev-rfyh-fzZh-K32v-nRsj-Bf3msZ
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 0
  LV Size                8.19 TiB
  Current LE             2146185
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:8

Here is the lvextend output:


# lvextend -v  -l +100%FREE /dev/vg2/lv2
    Converted 100%FREE into at most 715396 physical extents.
  Using stripesize of last segment 64.00 KiB
    Archiving volume group "vg2" metadata (seqno 43).
    Extending logical volume vg2/lv2 to up to 10.92 TiB
    Found fewer allocatable extents for logical volume lv2 than requested: using 2146185 extents (reduced by 715395).
  Size of logical volume vg2/lv2 unchanged from 8.19 TiB (2146185 extents).
    Loading vg2-lv2_rimage_3 table (253:7)
    Suppressed vg2-lv2_rimage_3 (253:7) identical table reload.
    Loading vg2-lv2_rmeta_3 table (253:6)
    Suppressed vg2-lv2_rmeta_3 (253:6) identical table reload.
    Loading vg2-lv2_rimage_2 table (253:5)
    Suppressed vg2-lv2_rimage_2 (253:5) identical table reload.
    Loading vg2-lv2_rmeta_2 table (253:4)
    Suppressed vg2-lv2_rmeta_2 (253:4) identical table reload.
    Loading vg2-lv2_rimage_1 table (253:3)
    Suppressed vg2-lv2_rimage_1 (253:3) identical table reload.
    Loading vg2-lv2_rmeta_1 table (253:2)
    Suppressed vg2-lv2_rmeta_1 (253:2) identical table reload.
    Loading vg2-lv2_rimage_0 table (253:1)
    Suppressed vg2-lv2_rimage_0 (253:1) identical table reload.
    Loading vg2-lv2_rmeta_0 table (253:0)
    Suppressed vg2-lv2_rmeta_0 (253:0) identical table reload.
    Loading vg2-lv2 table (253:8)
    Suppressed vg2-lv2 (253:8) identical table reload.
    Not monitoring vg2/lv2
    Suspending vg2-lv2 (253:8) with device flush
    Suspending vg2-lv2_rimage_3 (253:7) with device flush
    Suspending vg2-lv2_rmeta_3 (253:6) with device flush
    Suspending vg2-lv2_rimage_2 (253:5) with device flush
    Suspending vg2-lv2_rmeta_2 (253:4) with device flush
    Suspending vg2-lv2_rimage_1 (253:3) with device flush
    Suspending vg2-lv2_rmeta_1 (253:2) with device flush
    Suspending vg2-lv2_rimage_0 (253:1) with device flush
    Suspending vg2-lv2_rmeta_0 (253:0) with device flush
    Loading vg2-lv2_rimage_3 table (253:7)
    Suppressed vg2-lv2_rimage_3 (253:7) identical table reload.
    Loading vg2-lv2_rmeta_3 table (253:6)
    Suppressed vg2-lv2_rmeta_3 (253:6) identical table reload.
    Loading vg2-lv2_rimage_2 table (253:5)
    Suppressed vg2-lv2_rimage_2 (253:5) identical table reload.
    Loading vg2-lv2_rmeta_2 table (253:4)
    Suppressed vg2-lv2_rmeta_2 (253:4) identical table reload.
    Loading vg2-lv2_rimage_1 table (253:3)
    Suppressed vg2-lv2_rimage_1 (253:3) identical table reload.
    Loading vg2-lv2_rmeta_1 table (253:2)
    Suppressed vg2-lv2_rmeta_1 (253:2) identical table reload.
    Loading vg2-lv2_rimage_0 table (253:1)
    Suppressed vg2-lv2_rimage_0 (253:1) identical table reload.
    Loading vg2-lv2_rmeta_0 table (253:0)
    Suppressed vg2-lv2_rmeta_0 (253:0) identical table reload.
    Loading vg2-lv2 table (253:8)
    Suppressed vg2-lv2 (253:8) identical table reload.
    Resuming vg2-lv2_rimage_3 (253:7)
    Resuming vg2-lv2_rmeta_3 (253:6)
    Resuming vg2-lv2_rimage_2 (253:5)
    Resuming vg2-lv2_rmeta_2 (253:4)
    Resuming vg2-lv2_rimage_1 (253:3)
    Resuming vg2-lv2_rmeta_1 (253:2)
    Resuming vg2-lv2_rimage_0 (253:1)
    Resuming vg2-lv2_rmeta_0 (253:0)
    Resuming vg2-lv2 (253:8)
    Monitoring vg2/lv2
    Creating volume group backup "/etc/lvm/backup/vg2" (seqno 44).
  Logical volume lv2 successfully resized.

How can I extend my LV?


Update

I tried overriding the allocation policy, but the result is the same:


# lvextend -v --alloc normal  -l +100%FREE /dev/vg2/lv2
    Converted 100%FREE into at most 715392 physical extents.
  Using stripesize of last segment 64.00 KiB
    Archiving volume group "vg2" metadata (seqno 52).
    Extending logical volume vg2/lv2 to up to 10.92 TiB
    Found fewer allocatable extents for logical volume lv2 than requested: using 2146188 extents (reduced by 715392).
  Size of logical volume vg2/lv2 unchanged from 8.19 TiB (2146188 extents).
    Loading vg2-lv2_rimage_3 table (253:7)
    Suppressed vg2-lv2_rimage_3 (253:7) identical table reload.
    Loading vg2-lv2_rmeta_3 table (253:6)
    Suppressed vg2-lv2_rmeta_3 (253:6) identical table reload.
    Loading vg2-lv2_rimage_2 table (253:5)
    Suppressed vg2-lv2_rimage_2 (253:5) identical table reload.
    Loading vg2-lv2_rmeta_2 table (253:4)
    Suppressed vg2-lv2_rmeta_2 (253:4) identical table reload.
    Loading vg2-lv2_rimage_1 table (253:3)
    Suppressed vg2-lv2_rimage_1 (253:3) identical table reload.
    Loading vg2-lv2_rmeta_1 table (253:2)
    Suppressed vg2-lv2_rmeta_1 (253:2) identical table reload.
    Loading vg2-lv2_rimage_0 table (253:1)
    Suppressed vg2-lv2_rimage_0 (253:1) identical table reload.
    Loading vg2-lv2_rmeta_0 table (253:0)
    Suppressed vg2-lv2_rmeta_0 (253:0) identical table reload.
    Loading vg2-lv2 table (253:8)
    Suppressed vg2-lv2 (253:8) identical table reload.
    Not monitoring vg2/lv2
    Suspending vg2-lv2 (253:8) with device flush
    Suspending vg2-lv2_rimage_3 (253:7) with device flush
    Suspending vg2-lv2_rmeta_3 (253:6) with device flush
    Suspending vg2-lv2_rimage_2 (253:5) with device flush
    Suspending vg2-lv2_rmeta_2 (253:4) with device flush
    Suspending vg2-lv2_rimage_1 (253:3) with device flush
    Suspending vg2-lv2_rmeta_1 (253:2) with device flush
    Suspending vg2-lv2_rimage_0 (253:1) with device flush
    Suspending vg2-lv2_rmeta_0 (253:0) with device flush
    Loading vg2-lv2_rimage_3 table (253:7)
    Suppressed vg2-lv2_rimage_3 (253:7) identical table reload.
    Loading vg2-lv2_rmeta_3 table (253:6)
    Suppressed vg2-lv2_rmeta_3 (253:6) identical table reload.
    Loading vg2-lv2_rimage_2 table (253:5)
    Suppressed vg2-lv2_rimage_2 (253:5) identical table reload.
    Loading vg2-lv2_rmeta_2 table (253:4)
    Suppressed vg2-lv2_rmeta_2 (253:4) identical table reload.
    Loading vg2-lv2_rimage_1 table (253:3)
    Suppressed vg2-lv2_rimage_1 (253:3) identical table reload.
    Loading vg2-lv2_rmeta_1 table (253:2)
    Suppressed vg2-lv2_rmeta_1 (253:2) identical table reload.
    Loading vg2-lv2_rimage_0 table (253:1)
    Suppressed vg2-lv2_rimage_0 (253:1) identical table reload.
    Loading vg2-lv2_rmeta_0 table (253:0)
    Suppressed vg2-lv2_rmeta_0 (253:0) identical table reload.
    Loading vg2-lv2 table (253:8)
    Suppressed vg2-lv2 (253:8) identical table reload.
    Resuming vg2-lv2_rimage_3 (253:7)
    Resuming vg2-lv2_rmeta_3 (253:6)
    Resuming vg2-lv2_rimage_2 (253:5)
    Resuming vg2-lv2_rmeta_2 (253:4)
    Resuming vg2-lv2_rimage_1 (253:3)
    Resuming vg2-lv2_rmeta_1 (253:2)
    Resuming vg2-lv2_rimage_0 (253:1)
    Resuming vg2-lv2_rmeta_0 (253:0)
    Resuming vg2-lv2 (253:8)
    Monitoring vg2/lv2
    Creating volume group backup "/etc/lvm/backup/vg2" (seqno 53).
  Logical volume lv2 successfully resized.


# vgs -oname,vg_attr,extendable
  VG   Attr   Extendable
  vg2  wz--n- extendable

I changed the LV's allocation policy, but lvextend still says there are not enough extents:


# lvchange --alloc normal vg2/lv2
  Logical volume "lv2" changed.

# lvs
  LV   VG   Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv2  vg2  rwn-a-r--- 8.19t                                    100.00   
# lvextend -v --alloc normal  -l +100%FREE /dev/vg2/lv2
    Converted 100%FREE into at most 715392 physical extents.
  Using stripesize of last segment 64.00 KiB
    Archiving volume group "vg2" metadata (seqno 52).
    Extending logical volume vg2/lv2 to up to 10.92 TiB
    Found fewer allocatable extents for logical volume lv2 than requested: using 2146188 extents (reduced by 715392).
  Size of logical volume vg2/lv2 unchanged from 8.19 TiB (2146188 extents).
    Loading vg2-lv2_rimage_3 table (253:7)
    Suppressed vg2-lv2_rimage_3 (253:7) identical table reload.
    Loading vg2-lv2_rmeta_3 table (253:6)
    Suppressed vg2-lv2_rmeta_3 (253:6) identical table reload.
    Loading vg2-lv2_rimage_2 table (253:5)
    Suppressed vg2-lv2_rimage_2 (253:5) identical table reload.
    Loading vg2-lv2_rmeta_2 table (253:4)
    Suppressed vg2-lv2_rmeta_2 (253:4) identical table reload.
    Loading vg2-lv2_rimage_1 table (253:3)
    Suppressed vg2-lv2_rimage_1 (253:3) identical table reload.
    Loading vg2-lv2_rmeta_1 table (253:2)
    Suppressed vg2-lv2_rmeta_1 (253:2) identical table reload.
    Loading vg2-lv2_rimage_0 table (253:1)
    Suppressed vg2-lv2_rimage_0 (253:1) identical table reload.
    Loading vg2-lv2_rmeta_0 table (253:0)
    Suppressed vg2-lv2_rmeta_0 (253:0) identical table reload.
    Loading vg2-lv2 table (253:8)
    Suppressed vg2-lv2 (253:8) identical table reload.
    Not monitoring vg2/lv2
    Suspending vg2-lv2 (253:8) with device flush
    Suspending vg2-lv2_rimage_3 (253:7) with device flush
    Suspending vg2-lv2_rmeta_3 (253:6) with device flush
    Suspending vg2-lv2_rimage_2 (253:5) with device flush
    Suspending vg2-lv2_rmeta_2 (253:4) with device flush
    Suspending vg2-lv2_rimage_1 (253:3) with device flush
    Suspending vg2-lv2_rmeta_1 (253:2) with device flush
    Suspending vg2-lv2_rimage_0 (253:1) with device flush
    Suspending vg2-lv2_rmeta_0 (253:0) with device flush
    Loading vg2-lv2_rimage_3 table (253:7)
    Suppressed vg2-lv2_rimage_3 (253:7) identical table reload.
    Loading vg2-lv2_rmeta_3 table (253:6)
    Suppressed vg2-lv2_rmeta_3 (253:6) identical table reload.
    Loading vg2-lv2_rimage_2 table (253:5)
    Suppressed vg2-lv2_rimage_2 (253:5) identical table reload.
    Loading vg2-lv2_rmeta_2 table (253:4)
    Suppressed vg2-lv2_rmeta_2 (253:4) identical table reload.
    Loading vg2-lv2_rimage_1 table (253:3)
    Suppressed vg2-lv2_rimage_1 (253:3) identical table reload.
    Loading vg2-lv2_rmeta_1 table (253:2)
    Suppressed vg2-lv2_rmeta_1 (253:2) identical table reload.
    Loading vg2-lv2_rimage_0 table (253:1)
    Suppressed vg2-lv2_rimage_0 (253:1) identical table reload.
    Loading vg2-lv2_rmeta_0 table (253:0)
    Suppressed vg2-lv2_rmeta_0 (253:0) identical table reload.
    Loading vg2-lv2 table (253:8)
    Suppressed vg2-lv2 (253:8) identical table reload.
    Resuming vg2-lv2_rimage_3 (253:7)
    Resuming vg2-lv2_rmeta_3 (253:6)
    Resuming vg2-lv2_rimage_2 (253:5)
    Resuming vg2-lv2_rmeta_2 (253:4)
    Resuming vg2-lv2_rimage_1 (253:3)
    Resuming vg2-lv2_rmeta_1 (253:2)
    Resuming vg2-lv2_rimage_0 (253:1)
    Resuming vg2-lv2_rmeta_0 (253:0)
    Resuming vg2-lv2 (253:8)
    Monitoring vg2/lv2
    Creating volume group backup "/etc/lvm/backup/vg2" (seqno 53).
  Logical volume lv2 successfully resized.

EDIT

Here is the pvdisplay output. /dev/sdb1 is allocatable:


# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/sdg1
  VG Name               vg2
  PV Size               2.73 TiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              715396
  Free PE               0
  Allocated PE          715396
  PV UUID               QzjE6n-FRSj-NloW-ejFv-B0i0-lfqn-1O03Vu

  --- Physical volume ---
  PV Name               /dev/sdc1
  VG Name               vg2
  PV Size               2.73 TiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              715396
  Free PE               0
  Allocated PE          715396
  PV UUID               bwFwkf-d2zz-1TQR-PR11-IsgN-0P2n-BYMhfW

  --- Physical volume ---
  PV Name               /dev/sde1
  VG Name               vg2
  PV Size               2.73 TiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              715396
  Free PE               0
  Allocated PE          715396
  PV UUID               fWnIz6-Jgf3-QpPW-VKvr-Od1H-cFAp-UrQe6E

  --- Physical volume ---
  PV Name               /dev/sdf1
  VG Name               vg2
  PV Size               2.73 TiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              715396
  Free PE               0
  Allocated PE          715396
  PV UUID               e5rd2D-Xsh8-HD93-KVDs-TtPC-2sM1-i1AROl

  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               vg2
  PV Size               2.73 TiB / not usable 3.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              715396
  Free PE               715392
  Allocated PE          4
  PV UUID               SrIKSJ-RzON-Kelu-rC0O-8rLd-rIpI-Fkd1BW

I understand that LVM does not support changing the RAID type from RAID 5 to RAID 6.
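
For context, the conversion I had in mind was something like the command below; my version of LVM rejects it for a RAID 5 volume:

# lvconvert --type raid6 vg2/lv2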

The plan was to convert RAID 5 to RAID 6 after adding the 2 disks.

Now I have changed my plan:

  1. Delete the data on the LV that I can restore from other storage.
  2. Create a degraded mdadm RAID 6 array with 3 or 4 disks (see the sketch after this list).
  3. Move all the data from the LVM RAID 5 onto the degraded RAID 6 array.
  4. Destroy the LVM RAID and add all of its disks to the degraded RAID 6.
  5. Let the RAID 6 array rebuild, then restore the deleted data.
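
A minimal sketch of steps 2 and 4, with hypothetical device names (the "missing" keyword tells mdadm to create the array degraded, with empty slots to be filled in later):

# mdadm --create /dev/md0 --level=6 --raid-devices=6 \
      /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 missing missing

Then, once the data has been moved off the LVM RAID (step 4):

# lvremove vg2/lv2                            # drop the RAID 5 LV
# vgremove vg2                                # drop the volume group
# pvremove /dev/sdg1 /dev/sdc1                # wipe the LVM labels
# mdadm --add /dev/md0 /dev/sdg1 /dev/sdc1    # fill the two missing slots; rebuild starts

Any remaining freed disks could be added as hot spares with mdadm --add, or used to grow the array with mdadm --grow --raid-devices.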

I'm not sure whether this plan is good or not; either way, I now know that LVM RAID is not the best choice for managing storage.

I'll close my post; my struggle is over. ;)