hawihoney · Posted January 15, 2019 (edited)

Today I added two NVMe M.2 disks to my server via two PCIe adapter cards. With the array stopped, I removed one SSD from the cache pool and added one of the NVMe disks in its place. Everything looked fine. After restarting the array there was no activity on the new NVMe disk. I waited a couple of minutes - nothing. After 10 minutes or so I clicked the "Balance" button on the first cache device. I now see massive activity on the old SSD in the cache pool, but the new NVMe disk still shows no activity.

PCIe adapters are 2x "Lycom DT-120 M.2": http://www.lycom.com.tw/DT-120.htm
NVMe disks are 2x "Samsung 970 EVO 250GB": https://www.samsung.com/de/memory-storage/970-evo-nvme-m-2-ssd/MZ-V7E250BW/
Motherboard is "Supermicro X9DR3-F" (in fact it's an X9DRi-F, at least that is what's printed on the board): https://www.supermicro.com/products/motherboard/Xeon/C600/X9DRi-F.cfm

Both CPUs are populated, so all PCIe slots are working. The Lycom PCIe adapters are in slots 2 (CPU1, x8) and 3 (CPU1, x16), counting from the CPU. There's an additional LSI 9300-8i in slot 5 (CPU2, x8).

Any help is highly appreciated.

    root@Tower:~# btrfs dev stats -c /mnt/cache
    [/dev/sdb1].write_io_errs    0
    [/dev/sdb1].read_io_errs     0
    [/dev/sdb1].flush_io_errs    0
    [/dev/sdb1].corruption_errs  0
    [/dev/sdb1].generation_errs  0
    [/dev/sdc1].write_io_errs    0
    [/dev/sdc1].read_io_errs     0
    [/dev/sdc1].flush_io_errs    0
    [/dev/sdc1].corruption_errs  0
    [/dev/sdc1].generation_errs  0

***EDIT***: "btrfs filesystem show" still lists the old members of the cache pool (sdb1, sdc1), but sdc1 should no longer be part of the pool. The Main page and the btrfs command line show different facts. Is it possible that replacing a 256GB SSD with a 250GB NVMe was accepted by Unraid but not by btrfs?
    root@Tower:~# btrfs filesystem show
    Label: none  uuid: 5a54f36e-e516-4c7e-9b68-8bae98a6d227
            Total devices 2  FS bytes used 68.09GiB
            devid    1 size 238.47GiB used 69.03GiB path /dev/sdb1
            devid    2 size 238.47GiB used 69.03GiB path /dev/sdc1

tower-diagnostics-20190115-1243.zip

Edited January 15, 2019 by hawihoney
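[Editor's note] The size mismatch raised in the edit can be checked with quick arithmetic: "btrfs filesystem show" reports each devid as 238.47GiB (a 256 GB SSD partition), while a 250 GB (decimal) NVMe works out to fewer binary GiB. A minimal sketch of that conversion, using only the figures above (the 238.47GiB comes from the output; 250 GB is the drive's marketed capacity):

```shell
# Compare the pool member's reported size against what a 250 GB (decimal)
# NVMe provides once converted to GiB (binary).
awk 'BEGIN {
    old_gib = 238.47                 # devid size from "btrfs filesystem show"
    new_gib = 250e9 / (1024 ^ 3)     # 250 GB NVMe converted to GiB
    printf "new device: %.2f GiB -> %s\n", new_gib,
           (new_gib < old_gib ? "too small to replace" : "large enough")
}'
# -> new device: 232.83 GiB -> too small to replace
```

So the new NVMe is roughly 5.6 GiB short of the partition it was meant to replace.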
JorgeB · Posted January 15, 2019

Strange - I see the replace command in the log, but it appears it didn't start. Try it manually to see if you get an error. Without stopping the array, type on the console:

    btrfs replace start /dev/sdc1 /dev/nvme0n1p1 /mnt/cache
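[Editor's note] If the manual replace does start, it runs in the background and its progress can be watched with "btrfs replace status -1 /mnt/cache", which prints one status line and exits ("finished on ..." once done). A minimal polling sketch - the status_line function below is a stand-in for the real command, returning a sample finished line, since the real call needs the actual pool:

```shell
# Stand-in for: btrfs replace status -1 /mnt/cache
# (sample line below; a running replace prints a "% done" line instead)
status_line() {
    echo "Started on 15.Jan 12:43:01, finished on 15.Jan 12:58:10, 0 write errs, 0 uncorr. read errs"
}

# Poll until the replace reports completion.
until status_line | grep -q 'finished on'; do
    sleep 10
done
echo "replace completed"
```

On the real server, replace the stub with the actual btrfs call and run as root.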
hawihoney (Author) · Posted January 15, 2019 (edited)

Seems somebody should add a check in Unraid for smaller disks being added to the cache pool. I bet this is the problem.

In the meantime I'm working around it the other way:

- Stopped all Dockers
- Set Docker = off in Settings
- Stopped the array
- Removed the remaining old SSD from the cache pool
- Added the second new NVMe to the cache pool
- Restarted
- Formatted the unmountable disk (new NVMe disk 1 in the cache pool)
- Mounted the old SSD with Unassigned Devices
- Copying the cache content from the old SSD to the new cache pool (both NVMes show activity)
- Fingers crossed that I can start all Dockers afterwards.

Edited January 15, 2019 by hawihoney
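[Editor's note] The "copy cache content" step above is usually done with "rsync -a" so that permissions, ownership, and timestamps survive (Docker appdata breaks otherwise). A self-contained sketch of that step - temporary directories and cp -a stand in for the real mount points (something like /mnt/disks/<old-ssd> and /mnt/cache on the server) and for rsync:

```shell
#!/bin/sh
# Temp dirs stand in for the old SSD mount and the new cache pool.
SRC=$(mktemp -d)
DST=$(mktemp -d)
mkdir -p "$SRC/appdata"
echo "plex config" > "$SRC/appdata/config.xml"

# -a = archive mode: preserves permissions, ownership, and timestamps,
# same intent as "rsync -a src/ dst/" on the real server.
cp -a "$SRC/." "$DST/"

cat "$DST/appdata/config.xml"
# -> plex config
rm -rf "$SRC" "$DST"
```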
JorgeB · Posted January 15, 2019

47 minutes ago, hawihoney said:
> I bet this is the problem

You're right, I didn't even notice that. I already reported that bug, and it's also in the FAQ: you can't replace a cache pool member with a smaller device.
hawihoney (Author) · Posted January 15, 2019

Copying 849,000 files (70GB - thanks, Plex) from the SATA3 SSD to the PCIe M.2 has been running for over an hour, and the current speed is down to 9MB/s. Temperature is OK. These SSDs/M.2s are not that good for huge transfers. Still keeping my fingers crossed.
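[Editor's note] For scale, the figures in the post give a rough ETA: 70 GB at a sustained 9 MB/s is over two hours of copying (sustained throughput this low is common when moving hundreds of thousands of small files, and consumer NVMe drives also slow down once their write cache fills). Quick shell arithmetic with the numbers from the post:

```shell
# ETA for 70 GB at a sustained 9 MB/s (figures from the post).
TOTAL_MB=$((70 * 1024))
RATE_MB_S=9
SECS=$((TOTAL_MB / RATE_MB_S))
printf '%dh %dm remaining\n' $((SECS / 3600)) $(((SECS % 3600) / 60))
# -> 2h 12m remaining
```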