jafi Posted November 17, 2020

I use Unraid version 6.9.0-beta35 and have two different pool devices, but they show the same info. That seems strange. I'm trying to use the 120GB SSD for only one VM image, but currently I can't because of this strange behavior. It's very possible that I have done something wrong.
trurl Posted November 17, 2020

You should always go to Tools - Diagnostics and attach the complete diagnostics ZIP file to your next post in this thread.
jafi Posted November 17, 2020

mastermind-diagnostics-20201118-0144.zip
trurl Posted November 18, 2020

system/btrfs-usage.txt seems to indicate both devices are in both pools. system/df.txt seems to indicate the cache pool isn't mounted but the ssd pool is. And the shares show several that are on cache, disk1, and ssd. So, pretty mixed up.

28 minutes ago, jafi said:
I have done something wrong.

Can you tell us more about how you got to this point? It's probably going to require a do-over. What do you get from the command line with this?

ls -lah /mnt/cache

and this?

ls -lah /mnt/ssd
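(The pool membership that btrfs-usage.txt reports can also be checked directly from the console. A minimal sketch, assuming the default Unraid mount points; a pool that has absorbed both drives will list two devices under a single filesystem UUID:

# List every btrfs filesystem with its member devices.
btrfs filesystem show

# Or per pool, if both paths are mounted:
btrfs filesystem show /mnt/cache
btrfs filesystem show /mnt/ssd
)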
jafi Posted November 18, 2020

8 hours ago, trurl said:
So, pretty mixed up. Can you tell us more about how you got to this point? Probably going to require a do over.

Sure. First I created disk 1 and parity. Second, I added a cache, an NVMe drive. For weeks I did not use the SSD; it was just powered on, but not active in Unraid (pool or array). Yesterday I added a second pool for that SSD, because I need it for my Windows VM.

What do you mean by "require a do over"? Back up my VM, reset Unraid, and start from a clean slate?

ls -lah /mnt/cache
total 16K
drwxrwxrwx 1 nobody users 48 Nov 18 01:56 ./
drwxr-xr-x 7 root root 140 Nov 18 01:46 ../
drwxrwxrwx 1 nobody users 90 Nov 16 16:29 appdata/
drwxrwxrwx 1 nobody users 62 Nov 16 16:37 domains/
drwxrwxrwx 1 nobody users 212 Nov 17 22:50 isos/
drwxrwxrwx 1 nobody users 26 Aug 26 22:58 system/

ls -lah /mnt/ssd
total 16K
drwxrwxrwx 1 nobody users 48 Nov 18 01:56 ./
drwxr-xr-x 7 root root 140 Nov 18 01:46 ../
drwxrwxrwx 1 nobody users 90 Nov 16 16:29 appdata/
drwxrwxrwx 1 nobody users 62 Nov 16 16:37 domains/
drwxrwxrwx 1 nobody users 212 Nov 17 22:50 isos/
drwxrwxrwx 1 nobody users 26 Aug 26 22:58 system/

Thank you for your help!
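(The identical listings and timestamps suggest both paths are backed by the same filesystem. One way to confirm, assuming both paths are mounted; findmnt ships with util-linux:

# If both lines report the same SOURCE and UUID, then /mnt/cache
# and /mnt/ssd are the same pool mounted twice.
findmnt -n -o SOURCE,UUID /mnt/cache
findmnt -n -o SOURCE,UUID /mnt/ssd
)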
JorgeB Posted November 18, 2020

You'll need to remove one of the devices from the pool. There are many ways to do it; the easiest is probably to assign both to the same pool and then remove one of them. Note that you need to reset the pool config first, or it will be wiped. This should be straightforward, but make sure anything important is backed up before starting.

1. Stop the array; if Docker/VM services are using the cache pool, disable them.
2. Unassign the cache devices from both pools.
3. Start the array to make Unraid "forget" the current cache config.
4. Stop the array.
5. Reassign both cache devices to the same pool (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any cache device).
6. Re-enable Docker/VMs if needed, then start the array.
7. Now do a standard pool device removal, and when done, assign that device to the new pool.
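(For reference, the "standard pool device removal" in the last step corresponds to a btrfs device removal under the hood. A rough sketch only; the device path here is a hypothetical placeholder, not taken from the diagnostics, and the GUI route JorgeB describes is the safer way to do it:

# Migrate data off one member and drop it from the pool.
# /dev/nvme0n1p1 is an example device path only.
btrfs device remove /dev/nvme0n1p1 /mnt/cache

# Verify a single device remains before assigning the freed
# drive to the new pool.
btrfs filesystem show /mnt/cache
)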
jafi Posted November 21, 2020

Thank you!