I am recovering a Dell PowerEdge R510 server running Scientific Linux 5.5 after an unexpected power outage. The server was set up by our previous system administrator (I am a graduate student). After rebooting, I see the message:
fsck.ext3: Device or resource busy when trying to open /dev/sdb1
File system mounted or opened exclusively by another program?
/dev/sdb1 holds the /home directory and sits on a MegaRAID-managed RAID5 array of 9x600 GB SAS disks.
# megacli -AdpAllInfo -aALL
Adapter #0
==============================================================================
Versions
================
Product Name : PERC H700 Integrated
Serial No : 06C006X
FW Package Build: 12.3.0-0032
Mfg. Data
================
Mfg. Date : 06/19/10
Rework Date : 06/19/10
Revision No : A00
Battery FRU : N/A
Image Versions in Flash:
================
BIOS Version : 3.09.00
FW Version : 2.30.03-0872
Preboot CLI Version: 04.02-004:#%00008
Ctrl-R Version : 2.02-0009
NVDATA Version : 2.03.0053
Boot Block Version : 2.02.00.00-0000
BOOT Version : 01.250.04.219
Pending Images in Flash
================
None
PCI Info
================
Vendor Id : 1000
Device Id : 0079
SubVendorId : 1028
SubDeviceId : 1f17
Host Interface : PCIE
Number of Frontend Port: 0
Device Interface : PCIE
Number of Backend Port: 8
Port : Address
0 500065b36789abff
1 0000000000000000
2 0000000000000000
3 0000000000000000
4 0000000000000000
5 0000000000000000
6 0000000000000000
7 0000000000000000
HW Configuration
================
SAS Address : 5842b2b01789d900
BBU : Present
Alarm : Absent
NVRAM : Present
Serial Debugger : Present
Memory : Present
Flash : Present
Memory Size : 1024MB
TPM : Absent
On board Expander: Absent
Upgrade Key : Absent
Settings
================
Current Time : 3:20:56 1/13, 2013
Predictive Fail Poll Interval : 300sec
Interrupt Throttle Active Count : 16
Interrupt Throttle Completion : 50us
Rebuild Rate : 30%
PR Rate : 30%
BGI Rate : 30%
Check Consistency Rate : 30%
Reconstruction Rate : 30%
Cache Flush Interval : 4s
Max Drives to Spinup at One Time : 4
Delay Among Spinup Groups : 12s
Physical Drive Coercion Mode : 128MB
Cluster Mode : Disabled
Alarm : Disabled
Auto Rebuild : Enabled
Battery Warning : Enabled
Ecc Bucket Size : 15
Ecc Bucket Leak Rate : 1440 Minutes
Restore HotSpare on Insertion : Disabled
Expose Enclosure Devices : Disabled
Maintain PD Fail History : Disabled
Host Request Reordering : Enabled
Auto Detect BackPlane Enabled : SGPIO/i2c SEP
Load Balance Mode : Auto
Use FDE Only : Yes
Security Key Assigned : No
Security Key Failed : No
Security Key Not Backedup : No
Any Offline VD Cache Preserved : No
Allow Boot with Preserved Cache : No
Disable Online Controller Reset : No
PFK in NVRAM : No
Use disk activity for locate : No
Capabilities
================
RAID Level Supported : RAID0, RAID1, RAID5, RAID6, RAID00, RAID10, RAID50, RAID60, PRL 11, PRL 11 with spanning, PRL11-RLQ0 DDF layout with no span, PRL11-RLQ0 DDF layout with span
Supported Drives : SAS, SATA
Allowed Mixing:
Mix in Enclosure Allowed
Status
================
ECC Bucket Count : 0
Limitations
================
Max Arms Per VD : 32
Max Spans Per VD : 8
Max Arrays : 128
Max Number of VDs : 64
Max Parallel Commands : 1008
Max SGE Count : 60
Max Data Transfer Size : 8192 sectors
Max Strips PerIO : 42
Min Strip Size : 8 KB
Max Strip Size : 1.0 MB
Max Configurable CacheCade Size: 0 GB
Current Size of CacheCade : 0 GB
Current Size of FW Cache : 0 MB
Device Present
================
Virtual Drives : 2
Degraded : 0
Offline : 0
Physical Devices : 14
Disks : 12
Critical Disks : 0
Failed Disks : 0
Supported Adapter Operations
================
Rebuild Rate : Yes
CC Rate : Yes
BGI Rate : Yes
Reconstruct Rate : Yes
Patrol Read Rate : Yes
Alarm Control : Yes
Cluster Support : No
BBU : Yes
Spanning : Yes
Dedicated Hot Spare : Yes
Revertible Hot Spares : Yes
Foreign Config Import : Yes
Self Diagnostic : Yes
Allow Mixed Redundancy on Array : No
Global Hot Spares : Yes
Deny SCSI Passthrough : No
Deny SMP Passthrough : No
Deny STP Passthrough : No
Support Security : Yes
Snapshot Enabled : No
Support the OCE without adding drives : Yes
Support PFK : No
Supported VD Operations
================
Read Policy : Yes
Write Policy : Yes
IO Policy : Yes
Access Policy : Yes
Disk Cache Policy : Yes
Reconstruction : Yes
Deny Locate : No
Deny CC : No
Allow Ctrl Encryption: No
Enable LDBBM : Yes
Supported PD Operations
================
Force Online : Yes
Force Offline : Yes
Force Rebuild : Yes
Deny Force Failed : No
Deny Force Good/Bad : No
Deny Missing Replace : No
Deny Clear : No
Deny Locate : No
Disable Copyback : No
Enable JBOD : No
Enable Copyback on SMART : No
Enable Copyback to SSD on SMART Error : No
Enable SSD Patrol Read : No
PR Correct Unconfigured Areas : Yes
Enable Spin Down of UnConfigured Drives : No
Disable Spin Down of hot spares : Yes
Spin Down time : 30
Error Counters
================
Memory Correctable Errors : 0
Memory Uncorrectable Errors : 0
Cluster Information
================
Cluster Permitted : No
Cluster Active : No
Default Settings
================
Phy Polarity : 0
Phy PolaritySplit : 0
Background Rate : 30
Strip Size : 64kB
Flush Time : 4 seconds
Write Policy : WB
Read Policy : Adaptive
Cache When BBU Bad : Disabled
Cached IO : No
SMART Mode : Mode 6
Alarm Disable : Yes
Coercion Mode : 128MB
ZCR Config : Unknown
Dirty LED Shows Drive Activity : No
BIOS Continue on Error : No
Spin Down Mode : None
Allowed Device Type : SAS/SATA Mix
Allow Mix in Enclosure : Yes
Allow HDD SAS/SATA Mix in VD : No
Allow SSD SAS/SATA Mix in VD : No
Allow HDD/SSD Mix in VD : No
Allow SATA in Cluster : No
Max Chained Enclosures : 1
Disable Ctrl-R : No
Enable Web BIOS : No
Direct PD Mapping : Yes
BIOS Enumerate VDs : Yes
Restore Hot Spare on Insertion : No
Expose Enclosure Devices : No
Maintain PD Fail History : No
Disable Puncturing : No
Zero Based Enclosure Enumeration : Yes
PreBoot CLI Enabled : No
LED Show Drive Activity : Yes
Cluster Disable : Yes
SAS Disable : No
Auto Detect BackPlane Enable : SGPIO/i2c SEP
Use FDE Only : Yes
Enable Led Header : No
Delay during POST : 0
EnableCrashDump : No
Disable Online Controller Reset : No
EnableLDBBM : Yes
Un-Certified Hard Disk Drives : Allow
Treat Single span R1E as R10 : Yes
Max LD per array : 16
Power Saving option : Disable all power saving options
Default spin down time in minutes: 30
Enable JBOD : No
Exit Code: 0x00
The dmesg output immediately before the error appears:
device-mapper: multipath: version 1.0.5 loaded
device-mapper: multipath round-robin: version 1.0.0
device-mapper: table 253:0: multipath: error getting device
device-mapper: ioctl: error: adding target to table
device-mapper: table 253:0: multipath: error getting device
device-mapper: ioctl: error: adding target to table
If I comment out the corresponding entry in /etc/fstab and reboot:
LABEL=/home /home ext3 defaults 1 2
the system boots normally (but without the drive). However, I still cannot mount the drive. A little further investigation yields the following:
# mount /dev/sdb1 /home
mount: /dev/sdb1 already mounted or /home busy
# lsof /dev/sdb
COMMAND PID USER FD TYPE DEVICE SIZE MODE NAME
multipath 3864 root 5r BLK 8,16 2582 /dev/sdb
# fuser /dev/sdb
3864
# ps -ef | grep 3864
3864 1 0 19:22 ? 00:00:00 /sbin/multipathd
Apparently multipath is preventing the drive from being mounted manually. Would it be safe or correct for me to kill the multipath daemon? The paths and configuration for multipathd are as follows:
multipathd> show paths
hcil dev dev_t pri dm_st chk_st next_check
0:2:0:0 sda 8:0 1 [undef] [ready] [orphan]
0:2:1:0 sdb 8:16 1 [active][ready] XXXXXX.... 13/20
multipathd> show config
defaults {
verbosity 2
user_friendly_names yes
}
blacklist {
devnode ^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*
devnode ^hd[a-z]
device {
vendor DGC
product LUNZ
}
device {
vendor EMC
product LUNZ
}
device {
vendor IBM
product S/390.*
}
device {
vendor IBM
product S/390.*
}
device {
vendor STK
product Universal Xport
}
}
blacklist_exceptions {
}
devices {
device {
vendor NETAPP
product LUN
path_grouping_policy multibus
path_checker directio
features 1 queue_if_no_path
prio_callout /sbin/mpath_prio_ontap /dev/%n
failback immediate
flush_on_last_del yes
}
device {
vendor APPLE*
product Xserve RAID
path_grouping_policy multibus
}
device {
vendor 3PARdata
product VV
path_grouping_policy multibus
}
device {
vendor DEC
product HSG80
path_grouping_policy group_by_prio
path_checker hp_sw
features 1 queue_if_no_path
hardware_handler 1 hp-sw
prio_callout /sbin/mpath_prio_hp_sw /dev/%n
}
device {
vendor COMPAQ
product (MSA|HSV)1.0.*
path_grouping_policy group_by_prio
path_checker hp_sw
features 1 queue_if_no_path
hardware_handler 1 hp-sw
prio_callout /sbin/mpath_prio_hp_sw /dev/%n
no_path_retry 12
rr_min_io 100
}
device {
vendor (COMPAQ|HP)
product HSV1[01]1|HSV2[01]0|HSV300|HSV4[05]|HSV4[05]0
path_grouping_policy group_by_prio
path_checker tur
prio_callout /sbin/mpath_prio_alua /dev/%n
failback immediate
no_path_retry 12
rr_min_io 100
}
device {
vendor (COMPAQ|HP)
product MSA VOLUME
path_grouping_policy group_by_prio
path_checker tur
prio_callout /sbin/mpath_prio_alua /dev/%n
failback immediate
no_path_retry 12
rr_min_io 100
}
device {
vendor HP
product MSA2[02]12fc|MSA2012i
path_grouping_policy multibus
path_checker tur
prio_callout /bin/true
failback immediate
no_path_retry 18
rr_min_io 100
}
device {
vendor HP
product MSA2012sa|MSA23(12|24)(fc|i|sa)|MSA2000s VOLUME
path_grouping_policy group_by_prio
path_checker tur
prio_callout /sbin/mpath_prio_alua /dev/%n
failback immediate
no_path_retry 18
rr_min_io 100
}
device {
vendor HP
product HSVX700
path_grouping_policy group_by_prio
path_checker tur
hardware_handler 1 alua
prio_callout /sbin/mpath_prio_alua /dev/%n
failback immediate
no_path_retry 12
rr_min_io 100
}
device {
vendor HP
product A6189A
path_grouping_policy multibus
}
device {
vendor DDN
product SAN DataDirector
path_grouping_policy multibus
}
device {
vendor EMC
product SYMMETRIX
path_grouping_policy multibus
getuid_callout /sbin/scsi_id -g -u -ppre-spc3-83 -s /block/%n
}
device {
vendor DGC
product .*
product_blacklist LUNZ
path_grouping_policy group_by_prio
path_checker emc_clariion
features 1 queue_if_no_path
hardware_handler 1 emc
prio_callout /sbin/mpath_prio_emc /dev/%n
failback immediate
no_path_retry 60
}
device {
vendor FSC
product CentricStor
path_grouping_policy group_by_serial
}
device {
vendor (HITACHI|HP)
product OPEN-.*
path_grouping_policy multibus
path_checker tur
failback immediate
no_path_retry 12
}
device {
vendor HITACHI
product DF.*
path_grouping_policy group_by_prio
prio_callout /sbin/mpath_prio_hds_modular %d
failback immediate
}
device {
vendor EMC
product Invista
product_blacklist LUNZ
path_grouping_policy multibus
path_checker tur
no_path_retry 5
}
device {
vendor IBM
product ProFibre 4000R
path_grouping_policy multibus
}
device {
vendor IBM
product 1722-600
path_grouping_policy group_by_prio
path_checker rdac
features 1 queue_if_no_path
hardware_handler 1 rdac
prio_callout /sbin/mpath_prio_rdac /dev/%n
failback immediate
no_path_retry 300
}
device {
vendor IBM
product 1724
path_grouping_policy group_by_prio
path_checker rdac
features 1 queue_if_no_path
hardware_handler 1 rdac
prio_callout /sbin/mpath_prio_rdac /dev/%n
failback immediate
no_path_retry 300
}
device {
vendor IBM
product 1726
path_grouping_policy group_by_prio
path_checker rdac
features 1 queue_if_no_path
hardware_handler 1 rdac
prio_callout /sbin/mpath_prio_rdac /dev/%n
failback immediate
no_path_retry 300
}
device {
vendor IBM
product 1742
path_grouping_policy group_by_prio
path_checker rdac
hardware_handler 1 rdac
prio_callout /sbin/mpath_prio_rdac /dev/%n
failback immediate
}
device {
vendor IBM
product 1814
path_grouping_policy group_by_prio
path_checker rdac
hardware_handler 1 rdac
prio_callout /sbin/mpath_prio_rdac /dev/%n
failback immediate
no_path_retry queue
}
device {
vendor IBM
product 1745|1746
path_grouping_policy group_by_prio
path_checker rdac
features 2 pg_init_retries 50
hardware_handler 1 rdac
prio_callout /sbin/mpath_prio_rdac /dev/%n
failback immediate
no_path_retry 15
}
device {
vendor IBM
product 1815
path_grouping_policy group_by_prio
path_checker rdac
hardware_handler 1 rdac
prio_callout /sbin/mpath_prio_rdac /dev/%n
failback immediate
no_path_retry queue
}
device {
vendor IBM
product 1818
path_grouping_policy group_by_prio
path_checker rdac
hardware_handler 1 rdac
prio_callout /sbin/mpath_prio_rdac /dev/%n
failback immediate
no_path_retry queue
}
device {
vendor IBM
product 3526
path_grouping_policy group_by_prio
path_checker rdac
hardware_handler 1 rdac
prio_callout /sbin/mpath_prio_rdac /dev/%n
failback immediate
}
device {
vendor IBM
product 3542
path_grouping_policy group_by_serial
path_checker tur
}
device {
vendor IBM
product 2105(800|F20)
path_grouping_policy group_by_serial
path_checker tur
features 1 queue_if_no_path
}
device {
vendor IBM
product 1750500
path_grouping_policy group_by_prio
path_checker tur
features 1 queue_if_no_path
prio_callout /sbin/mpath_prio_alua /dev/%n
failback immediate
}
device {
vendor IBM
product 2107900
path_grouping_policy multibus
path_checker tur
features 1 queue_if_no_path
}
device {
vendor IBM
product 2145
path_grouping_policy group_by_prio
path_checker tur
features 1 queue_if_no_path
prio_callout /sbin/mpath_prio_alua /dev/%n
failback immediate
}
device {
vendor IBM
product S/390 DASD ECKD
product_blacklist S/390.*
path_grouping_policy multibus
getuid_callout /sbin/dasd_id /dev/%n
path_checker directio
features 1 queue_if_no_path
}
device {
vendor IBM
product S/390 DASD FBA
product_blacklist S/390.*
path_grouping_policy multibus
getuid_callout /sbin/dasd_id /dev/%n
path_checker directio
}
device {
vendor NETAPP
product LUN.*
path_grouping_policy group_by_prio
path_checker directio
features 1 queue_if_no_path
prio_callout /sbin/mpath_prio_ontap /dev/%n
failback immediate
rr_min_io 128
}
device {
vendor IBM
product Nseries.*
path_grouping_policy group_by_prio
features 1 queue_if_no_path
prio_callout /sbin/mpath_prio_ontap /dev/%n
failback immediate
rr_min_io 128
}
device {
vendor Pillar
product Axiom [35]00
path_grouping_policy group_by_prio
path_checker tur
prio_callout /sbin/mpath_prio_alua %d
}
device {
vendor AIX
product VDASD
path_grouping_policy multibus
path_checker directio
failback immediate
no_path_retry 60
}
device {
vendor SGI
product TP9[13]00
path_grouping_policy multibus
}
device {
vendor SGI
product TP9[45]00
path_grouping_policy group_by_prio
path_checker rdac
hardware_handler 1 rdac
prio_callout /sbin/mpath_prio_rdac /dev/%n
failback immediate
}
device {
vendor SGI
product IS.*
path_grouping_policy group_by_prio
path_checker rdac
hardware_handler 1 rdac
prio_callout /sbin/mpath_prio_rdac /dev/%n
failback immediate
no_path_retry queue
}
device {
vendor STK
product OPENstorage D280
path_grouping_policy group_by_prio
path_checker tur
hardware_handler 1 rdac
prio_callout /sbin/mpath_prio_rdac /dev/%n
failback immediate
}
device {
vendor STK
product FLEXLINE 380
product_blacklist Universal Xport
path_grouping_policy group_by_prio
path_checker rdac
hardware_handler 1 rdac
prio_callout /sbin/mpath_prio_rdac /dev/%n
failback immediate
no_path_retry queue
}
device {
vendor SUN
product (StorEdge 3510|T4)
path_grouping_policy multibus
}
device {
vendor PIVOT3
product RAIGE VOLUME
path_grouping_policy multibus
getuid_callout /sbin/scsi_id -p 0x80 -g -u -d /dev/%n
path_checker tur
features 1 queue_if_no_path
rr_min_io 100
}
device {
vendor SUN
product CSM200_R
path_grouping_policy group_by_prio
path_checker rdac
hardware_handler 1 rdac
prio_callout /sbin/mpath_prio_rdac /dev/%n
failback immediate
no_path_retry queue
}
device {
vendor SUN
product LCSM100_F
path_grouping_policy group_by_prio
path_checker rdac
hardware_handler 1 rdac
prio_callout /sbin/mpath_prio_rdac /dev/%n
failback immediate
no_path_retry queue
}
device {
vendor (LSI|ENGENIO)
product INF.*
path_grouping_policy group_by_prio
path_checker rdac
features 2 pg_init_retries 50
hardware_handler 1 rdac
prio_callout /sbin/mpath_prio_rdac /dev/%n
failback immediate
no_path_retry 15
}
device {
vendor DELL
product MD3000|MD3000i
path_grouping_policy group_by_prio
path_checker rdac
features 2 pg_init_retries 50
hardware_handler 1 rdac
prio_callout /sbin/mpath_prio_rdac /dev/%n
failback immediate
no_path_retry 15
}
device {
vendor DELL
product MD32xx|MD32xxi
path_grouping_policy group_by_prio
path_checker rdac
features 2 pg_init_retries 50
hardware_handler 1 rdac
prio_callout /sbin/mpath_prio_rdac /dev/%n
failback immediate
no_path_retry 15
}
device {
vendor COMPELNT
product Compellent Vol
path_grouping_policy multibus
path_checker tur
failback immediate
no_path_retry queue
}
device {
vendor GNBD
product GNBD
path_grouping_policy multibus
getuid_callout /sbin/gnbd_import -q -U /block/%n
path_checker directio
}
}
multipaths {
}
The contents of my /etc/multipath.conf file:
# Blacklist all local devices
devnode_blacklist {
devnode "sd[a-b]$"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z]"
devnode "^cciss!c[0-9]d[0-9]*"
}
## Use user friendly names, instead of using WWIDs as names.
defaults {
user_friendly_names yes
}
devices {
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy multibus
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
prio_callout "/sbin/mpath_prio_ontap /dev/%n"
features "1 queue_if_no_path"
path_checker directio
failback immediate
flush_on_last_del yes
}
}
The drive itself is in good shape. If I boot from a USB LiveCD .iso, I can mount /dev/sdb1 without any problems. All files appear to be present. Running fsck -fyv on the partition (which shows up as /dev/sdc1 when booted from the LiveCD), the disk looks fine. The output is shown below:
# fsck.ext3 -fyv /dev/sdc1
e2fsck 1.39 (29-May-2006)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
1387340 inodes used (0.12%)
219427 non-contiguous inodes (15.8%)
# of inodes with ind/dind/tind blocks: 343585/63330/32
869857188 blocks used (74.28%)
0 bad blocks
96 large files
1310760 regular files
71629 directories
0 character device files
0 block device files
0 fifos
1 link
4942 symbolic links (4497 fast symbolic links)
0 sockets
--------
1387332 files
For completeness, the output of fdisk -l:
WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.
WARNING: The size of this disk is 4.8 TB (4796404727808 bytes).
DOS partition table format can not be used on drives for volumes
larger than 2.2 TB (2199023255040 bytes). Use parted(1) and GUID
partition table format (GPT).
WARNING: GPT (GUID Partition Table) detected on '/dev/dm-0'! The util fdisk doesn't support GPT. Use GNU Parted.
WARNING: The size of this disk is 4.8 TB (4796404727808 bytes).
DOS partition table format can not be used on drives for volumes
larger than 2.2 TB (2199023255040 bytes). Use parted(1) and GUID
partition table format (GPT).
Disk /dev/dm-1 doesn't contain a valid partition table
Disk /dev/sda: 599.5 GB, 599550590976 bytes
255 heads, 63 sectors/track, 72891 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 5099 40957686 83 Linux
/dev/sda2 5100 69194 514843087+ 83 Linux
/dev/sda3 69195 71744 20482875 82 Linux swap / Solaris
/dev/sda4 71745 72891 9213277+ 5 Extended
/dev/sda5 71745 72381 5116671 83 Linux
/dev/sda6 72382 72891 4096543+ 83 Linux
Disk /dev/sdb: 4796.4 GB, 4796404727808 bytes
255 heads, 63 sectors/track, 583129 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 267350 2147483647+ ee EFI GPT
Disk /dev/dm-0: 4796.4 GB, 4796404727808 bytes
255 heads, 63 sectors/track, 583129 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/dm-0p1 1 267350 2147483647+ ee EFI GPT
Disk /dev/dm-1: 4796.4 GB, 4796404693504 bytes
255 heads, 63 sectors/track, 583129 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
I suspect that changing some setting in a configuration file, whether for multipath or for some other utility, will solve my problem. However, I do not know how to proceed.
You can blacklist the drive, and multipath will skip it. Put:
blacklist {
devnode "sd[a-b]"
}
defaults {
user_friendly_names yes
}
in /etc/multipath.conf and reboot. The file system looks fine, so don't worry about that. When you run lsof, point it at the partition rather than the whole device (lsof /dev/sdb1, not lsof /dev/sdb). The same goes for fuser. Try the blacklist first, though, as it may be exactly what you need.
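As a sketch, the blacklist can also be applied without waiting for a full reboot. This assumes the blacklist above has already been saved to /etc/multipath.conf; the commands are standard RHEL 5-era tools, not taken from your output, so treat this as a suggestion rather than a verified procedure:

```shell
# Flush the existing multipath maps so /dev/sdb is released
multipath -F

# Restart the daemon so it re-reads /etc/multipath.conf and skips sdb
service multipathd restart

# The manual mount (or the fstab entry) should now succeed
mount /dev/sdb1 /home
```

If `multipath -F` refuses to flush a map because it is in use, rebooting remains the safe fallback.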
Maybe this helps. In my case I also had problems with multipath and "Device or resource busy". I used multipath -l
to list all the mappings. Then I removed the mappings one by one with multipath -f <MAPPING NAME>
. You could probably use multipath -F
to remove them all at once. After that I was able to create the RAID.
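The one-by-one removal can be scripted. This is a sketch: the awk pattern assumes user_friendly_names is enabled (as in your config), so every map name begins with "mpath":

```shell
# Remove each multipath map reported by `multipath -l`.
# Assumes user_friendly_names, so map names look like mpath0, mpath1, ...
for map in $(multipath -l | awk '/^mpath/ {print $1}'); do
    multipath -f "$map"
done
```
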
I should note that my hard drives had been zeroed out, and I simply created one large GPT partition on each of them. Blacklisting in /etc/multipath.conf
also seems plausible. You don't want multipath touching your hard drives while you are putting them into an array.
In your /etc/multipath.conf
file, change:
devnode_blacklist {
devnode "sd[a-b]$"
...
}
to:
devnode_blacklist {
devnode "sd[a-b]*"
...
}
This will blacklist /dev/sdb1, whereas your current configuration does not blacklist /dev/sdb1.
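The difference can be illustrated with grep, since devnode entries are unanchored regular expressions: "sd[a-b]$" only matches names that end right after the letter, while dropping the $ anchor lets the pattern match sdb1 as well. A quick demonstration on plain device names (no real devices involved):

```shell
# The anchored pattern matches only the bare disk names -- sdb1 is missed
printf 'sda\nsdb\nsdb1\n' | grep -E 'sd[a-b]$'
# prints:
# sda
# sdb

# Without the $ anchor, the pattern also matches sdb1
printf 'sda\nsdb\nsdb1\n' | grep -E 'sd[a-b]'
# prints:
# sda
# sdb
# sdb1
```

Note that "sd[a-b]*" allows zero letters after "sd", so it would also match an sdc device if one existed; on this machine only sda and sdb are present, so that is harmless here.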