(SOLVED) Two different pools, same info


jafi


I use Unraid version 6.9.0-beta35 and have two different pool devices, but they show the same info. That seems strange. I'm trying to use a 120GB SSD for only one VM image, but currently can't do that because of this strange behavior.

 

Very possible that I have done something wrong.

Screenshot_20201118_012335.png


system/btrfs-usage.txt seems to indicate both devices are in both pools.

system/df.txt seems to indicate the cache pool isn't mounted but the ssd pool is.

And the shares listing shows several shares that are on cache, disk1, and ssd.

 

So, pretty mixed up.
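One quick way to confirm this cross-assignment from the command line is to list the btrfs filesystems and their member devices (the mount points below are the pool names from this thread; everything else is generic):

```shell
# List every btrfs filesystem and the devices backing it.
# If both the NVMe and the SSD appear under a single filesystem
# UUID, the two "pools" are really one btrfs volume mounted twice.
btrfs filesystem show

# Cross-check which device each mount point actually resolves to.
findmnt -o TARGET,SOURCE,FSTYPE /mnt/cache
findmnt -o TARGET,SOURCE,FSTYPE /mnt/ssd
```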

28 minutes ago, jafi said:

I have done something wrong.

Can you tell us more about how you got to this point?

 

Probably going to require a do over.

 

What do you get from the command line with this?

ls -lah /mnt/cache

and this?

ls -lah /mnt/ssd

 

8 hours ago, trurl said:

So, pretty mixed up.

Can you tell us more about how you got to this point?

 

Probably going to require a do over.

 

 

Sure.

 

First I created disk 1 & parity.

Second I added a cache, nvme drive.

For weeks I did not use the SSD; it was just powered on, but not active in Unraid (pool or array).

Yesterday I added a second pool for that ssd, because I need it for my Windows VM.

 

What do you mean by "require a do over"?

Back up my VM, reset Unraid, and start from a clean slate?

ls -lah /mnt/cache

ls -lah /mnt/cache
total 16K
drwxrwxrwx 1 nobody users  48 Nov 18 01:56 ./
drwxr-xr-x 7 root   root  140 Nov 18 01:46 ../
drwxrwxrwx 1 nobody users  90 Nov 16 16:29 appdata/
drwxrwxrwx 1 nobody users  62 Nov 16 16:37 domains/
drwxrwxrwx 1 nobody users 212 Nov 17 22:50 isos/
drwxrwxrwx 1 nobody users  26 Aug 26 22:58 system/

 

ls -lah /mnt/ssd

ls -lah /mnt/ssd
total 16K
drwxrwxrwx 1 nobody users  48 Nov 18 01:56 ./
drwxr-xr-x 7 root   root  140 Nov 18 01:46 ../
drwxrwxrwx 1 nobody users  90 Nov 16 16:29 appdata/
drwxrwxrwx 1 nobody users  62 Nov 16 16:37 domains/
drwxrwxrwx 1 nobody users 212 Nov 17 22:50 isos/
drwxrwxrwx 1 nobody users  26 Aug 26 22:58 system/

 

 

Thank you for your help!


You'll need to remove one of the devices from the pool. There are several ways to do it; the easiest is probably to assign both to the same pool and then remove one of them. Note that you need to reset the pool config first, or the device will be wiped. This should be straightforward, but make sure anything important is backed up before starting.

 

Stop the array; if the Docker/VM services are using the cache pool, disable them. Unassign the cache devices from both pools, then start the array to make Unraid "forget" the current cache config. Stop the array again and reassign both cache devices to the same pool (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any cache device). Re-enable Docker/VMs if needed and start the array. Now do a standard pool device removal, and when it's done, assign that device to the new pool.
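For anyone who wants to double-check the result from a shell, the pool membership can be inspected before and after the removal. This is only a verification sketch, not a replacement for the GUI steps, and the device path is a placeholder:

```shell
# Before starting: note which devices belong to the pool.
btrfs filesystem show /mnt/cache

# The GUI removal is the supported route; under the hood it roughly
# corresponds to a btrfs device removal, e.g. (placeholder device):
#   btrfs device remove /dev/nvme0n1p1 /mnt/cache

# Afterwards, confirm only one device remains in the pool:
btrfs filesystem show /mnt/cache
```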

 

 

