Cache pool SSD disk replaced by M.2 disk - no activity



Today I added two NVMe M.2 disks to my server via two PCIe adapter cards. With the array stopped, I removed one SSD from the cache pool and added one of the NVMe disks in its place. Everything looked successful.

 

After restarting the array there was no activity on the new NVMe disk. I waited a couple of minutes - nothing. After 10 minutes or so I clicked the "Balance" button on the first cache device. Now I see heavy activity on the old SSD in the cache pool, but the new NVMe disk still shows no activity.
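To check whether the pool actually took the new device, btrfs can be queried directly. A minimal sketch, assuming the cache pool is mounted at /mnt/cache as in stock Unraid:

# Is a balance actually running on the pool?
btrfs balance status /mnt/cache

# Per-device allocation - a device that really joined the pool shows
# up here and accumulates used space while the balance runs:
btrfs device usage /mnt/cache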

 

PCIe adapters are 2x "Lycom DT-120 M.2":

http://www.lycom.com.tw/DT-120.htm

 

NVMe disks are 2x "Samsung 970 EVO 250GB":

https://www.samsung.com/de/memory-storage/970-evo-nvme-m-2-ssd/MZ-V7E250BW/

 

Motherboard is "Supermicro X9DR3-F" (in fact it's an X9DRi-F, at least that is what's printed on the board):

https://www.supermicro.com/products/motherboard/Xeon/C600/X9DRi-F.cfm

 

Both CPUs are installed and running, so all PCIe slots are usable. The Lycom adapters sit in slots 2 (CPU1, x8) and 3 (CPU1, x16), counting from the CPU. An additional LSI 9300-8i occupies slot 5 (CPU2, x8).
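As a sanity check, it's worth confirming that the kernel sees both NVMe drives on the adapters at all. A quick sketch; device names depend on the system:

# Both Samsung controllers should show up as PCI devices:
lspci | grep -i 'non-volatile'

# Device nodes appear once the nvme driver has bound:
ls -l /dev/nvme*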

 

Any help is highly appreciated.

root@Tower:~# btrfs dev stats -c /mnt/cache
[/dev/sdb1].write_io_errs    0
[/dev/sdb1].read_io_errs     0
[/dev/sdb1].flush_io_errs    0
[/dev/sdb1].corruption_errs  0
[/dev/sdb1].generation_errs  0
[/dev/sdc1].write_io_errs    0
[/dev/sdc1].read_io_errs     0
[/dev/sdc1].flush_io_errs    0
[/dev/sdc1].corruption_errs  0
[/dev/sdc1].generation_errs  0

 

***EDIT***: "btrfs filesystem show" still lists the old members of the cache pool (sdb1, sdc1), but sdc1 is no longer assigned to the pool. The main page and the btrfs command line disagree. Is it possible that replacing a 256GB SSD with a 250GB NVMe disk was accepted by Unraid but not by btrfs?

root@Tower:~# btrfs filesystem show
Label: none  uuid: 5a54f36e-e516-4c7e-9b68-8bae98a6d227
        Total devices 2 FS bytes used 68.09GiB
        devid    1 size 238.47GiB used 69.03GiB path /dev/sdb1
        devid    2 size 238.47GiB used 69.03GiB path /dev/sdc1
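If the size theory holds, a manual replace should fail loudly: btrfs refuses to replace a device with a smaller target unless the source filesystem is shrunk first. A sketch with example device names - substitute the real ones:

# Compare the raw capacities of the old SSD and the new NVMe disk:
lsblk -b -o NAME,SIZE /dev/sdc /dev/nvme0n1

# A manual replace makes the mismatch explicit; btrfs should refuse
# with an error that the target device is smaller than the source:
btrfs replace start /dev/sdc1 /dev/nvme0n1p1 /mnt/cache

# Shrinking devid 2 below the NVMe capacity first would allow the
# replace to proceed (230g is just an example value under 250GB):
btrfs filesystem resize 2:230g /mnt/cache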

 

tower-diagnostics-20190115-1243.zip

Balance.jpg

Cache pool.jpg


It seems somebody should add a check in Unraid for smaller disks added to the cache pool. I bet this is the problem.

 

In the meantime I'm working around it the other way:

 

- Stopped all dockers

- Set docker=off in Settings

- Stopped the array

- Removed the remaining old SSD from the cache pool

- Added the second new NVMe disk to the cache pool

- Restarted the array

- Formatted the unmountable disk (the new NVMe disk 1 in the cache pool)

- Mounted the old SSD with Unassigned Devices

- Copied the cache content from the old SSD to the new cache pool (both NVMe disks show activity; see the sketch after this list)

- Fingers crossed that I can start all dockers afterwards.
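A sketch of the copy step, assuming Unassigned Devices mounts the old SSD under /mnt/disks/ - the mount name below is an example:

# Copy everything from the old SSD to the new cache pool, preserving
# permissions, ownership and timestamps:
rsync -avh --progress /mnt/disks/old_cache/ /mnt/cache/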

 

