[SOLVED] Synchronized pool devices



Hi,

 

I used to run a pool of 2 SSDs for my docker apps and VMs, with one mirroring the other as a backup. I finally decided to split the pool so that I get 2 pools with one SSD each, to run docker and VMs separately (no more backup, of course). However, once created, they look synchronized: when I change the content of one (/mnt/cache_docker), for instance by creating/removing a share or downloading a file, the exact same change appears on the other one (/mnt/cache_vm). Is there a way to break this link?

 

Many thanks,

Link to post

It might be necessary to give more detail on the steps you took to split them into 2 pools. It sounds as if they may still be running as a single pool?

 

Attaching your system's diagnostics zip file (obtained via Tools > Diagnostics) to your next post may allow for more informed feedback.

Link to post

So, I

- stopped the array

- disabled docker and vms (via settings menus)

- went to my unique pool and set the second slot to "no device" 

- set Slots to 1 instead of 2

- clicked on "ADD POOL", set cache_vm as the name, and 1 as the slots

- selected the remaining SSD (the one removed earlier) instead of "no device" in the second pool combo

- restarted the pool

- changed in shares the "select cache pool" value by pointing the docker related shares to the cache_docker, and the vm ones to cache_vm

 

Many operations, maybe in the wrong order?

 

 

Link to post

Indeed, they look to have the same UUID: 8d14eb99-e521-4c8b-9e04-7a448ccd6f30

What's the way to change this, please?

 

 

root@HORUS:~# blkid
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" BLOCK_SIZE="512" TYPE="vfat"
/dev/nvme1n1p1: UUID="8d14eb99-e521-4c8b-9e04-7a448ccd6f30" UUID_SUB="f0ef586f-bf35-4655-aeaa-5f6ac9208262" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/nvme0n1p1: UUID="8d14eb99-e521-4c8b-9e04-7a448ccd6f30" UUID_SUB="45556f71-e386-417a-bc2b-09439fda7c0e" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/sde1: UUID="92b8f2be-7eac-49e6-bac5-48f9da265c07" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="ec60a12d-4cc8-4146-b863-edf1755c1b59"
/dev/sdc1: UUID="29d7fb75-7e4c-4a0d-99b7-82b1a6c12d9a" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="48c31838-659b-4f63-b868-b013ca28663e"
/dev/sdd1: UUID="1c3c2a87-61c6-4609-a974-82e4cfed151a" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="fbd77100-0ab9-4154-a780-70974f0ade93"
/dev/sdb1: UUID="a753234c-6126-45e2-8a06-48acb30a6487" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="a6f418bb-c19e-4216-881e-a6b6bb094e1e"
/dev/md1: UUID="29d7fb75-7e4c-4a0d-99b7-82b1a6c12d9a" BLOCK_SIZE="512" TYPE="xfs"
/dev/md2: UUID="1c3c2a87-61c6-4609-a974-82e4cfed151a" BLOCK_SIZE="512" TYPE="xfs"
/dev/md3: UUID="92b8f2be-7eac-49e6-bac5-48f9da265c07" BLOCK_SIZE="512" TYPE="xfs"
/dev/loop2: UUID="ae7bf697-26fe-4346-b42f-23027d10b0a5" UUID_SUB="5a792d8f-358b-440b-b30a-d1fbb89c4e17" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/loop3: UUID="f17a1a2b-7ac8-4080-95d2-7200eea5d82d" UUID_SUB="87f2d071-fdc3-42f0-a06d-a8c1e04cf6a9" BLOCK_SIZE="4096" TYPE="btrfs"
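As an aside, shared filesystem UUIDs can be spotted mechanically in output like the above: extract the `UUID=` field and look for repeats. A minimal sketch, using a few sample lines copied from the output above (on a live system you would pipe `blkid` itself into the `grep` stage instead):

```shell
# Write sample blkid lines (taken from the output above) to a temp file.
cat <<'EOF' > /tmp/blkid_sample.txt
/dev/nvme1n1p1: UUID="8d14eb99-e521-4c8b-9e04-7a448ccd6f30" UUID_SUB="f0ef586f-bf35-4655-aeaa-5f6ac9208262" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/nvme0n1p1: UUID="8d14eb99-e521-4c8b-9e04-7a448ccd6f30" UUID_SUB="45556f71-e386-417a-bc2b-09439fda7c0e" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/sde1: UUID="92b8f2be-7eac-49e6-bac5-48f9da265c07" BLOCK_SIZE="512" TYPE="xfs"
EOF
# Pull out just the UUID= value (\b avoids matching UUID_SUB/PARTUUID),
# then print any value that appears on more than one device.
grep -o '\bUUID="[^"]*"' /tmp/blkid_sample.txt | sort | uniq -d
```

A UUID printed by the last command belongs to more than one device, which for btrfs means those devices are members of a single multi-device filesystem.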

 

Link to post
Posted (edited)

Are both problem disks NVMe?

Assuming a UUID change would solve the problem: please make a backup first, then change the UUID of either one of the devices:

 

btrfstune -u /dev/nvme0n1p1 (or /dev/nvme1n1p1)

 

then reboot
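Spelled out, and including the read-only check that btrfstune itself recommends before this operation, the sequence might look like the sketch below. The device path is the one from the blkid output above; the whole thing is guarded so it does nothing on a machine where that device does not exist, and the filesystem must be unmounted first:

```shell
DEV=/dev/nvme0n1p1   # device path taken from the blkid output above

# Guard: only act if the device actually exists on this machine.
if [ -b "$DEV" ]; then
    # The filesystem must be unmounted before its UUID can be changed.
    umount "$DEV" 2>/dev/null || true
    # btrfstune's own warning recommends a read-only check first.
    btrfs check --readonly "$DEV"
    # -u generates and writes a new random filesystem UUID (fsid).
    btrfstune -u "$DEV"
fi
```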

 

Yes, you should delete the partition when a disk is released from a pool, and clear it before reuse.

Edited by Vr2Io
Link to post

Yes both are NVMe

I've removed nvme0n1p1 from the pools so that it now shows under "Unassigned Devices", and the "MOUNT" button is available

 

however btrfstune -u /dev/nvme0n1p1

produces ERROR: /dev/nvme0n1p1 is mounted 

 

Link to post

Stopping the array allowed me to run the command successfully, but after reboot a new UUID was assigned to both NVMe devices!

 

root@HORUS:~# blkid
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" BLOCK_SIZE="512" TYPE="vfat"
/dev/nvme1n1p1: UUID="cf2aac75-db53-47b1-931a-ae65b2eff77b" UUID_SUB="f0ef586f-bf35-4655-aeaa-5f6ac9208262" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/nvme0n1p1: UUID="cf2aac75-db53-47b1-931a-ae65b2eff77b" UUID_SUB="45556f71-e386-417a-bc2b-09439fda7c0e" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/sdb1: UUID="a753234c-6126-45e2-8a06-48acb30a6487" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="a6f418bb-c19e-4216-881e-a6b6bb094e1e"
/dev/sdc1: UUID="29d7fb75-7e4c-4a0d-99b7-82b1a6c12d9a" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="48c31838-659b-4f63-b868-b013ca28663e"
/dev/sdd1: UUID="1c3c2a87-61c6-4609-a974-82e4cfed151a" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="fbd77100-0ab9-4154-a780-70974f0ade93"
/dev/sde1: UUID="92b8f2be-7eac-49e6-bac5-48f9da265c07" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="ec60a12d-4cc8-4146-b863-edf1755c1b59"
/dev/md1: UUID="29d7fb75-7e4c-4a0d-99b7-82b1a6c12d9a" BLOCK_SIZE="512" TYPE="xfs"
/dev/md2: UUID="1c3c2a87-61c6-4609-a974-82e4cfed151a" BLOCK_SIZE="512" TYPE="xfs"
/dev/md3: UUID="92b8f2be-7eac-49e6-bac5-48f9da265c07" BLOCK_SIZE="512" TYPE="xfs"
/dev/loop2: UUID="ae7bf697-26fe-4346-b42f-23027d10b0a5" UUID_SUB="5a792d8f-358b-440b-b30a-d1fbb89c4e17" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/loop3: UUID="f17a1a2b-7ac8-4080-95d2-7200eea5d82d" UUID_SUB="87f2d071-fdc3-42f0-a06d-a8c1e04cf6a9" BLOCK_SIZE="4096" TYPE="btrfs"
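Both partitions picking up the same new UUID is consistent with them still being a single multi-device btrfs filesystem. A quick way to confirm that, where the btrfs tools are available, is `btrfs filesystem show`, which prints one entry per filesystem with every member device listed under its fsid (guarded here so it is a no-op if the tool is missing; it may need root to see the devices):

```shell
SHOW=""
if command -v btrfs >/dev/null 2>&1; then
    # One entry per btrfs filesystem, member devices grouped under its
    # fsid; both NVMe partitions under one fsid means one pool.
    SHOW="$(btrfs filesystem show 2>/dev/null || true)"
fi
printf '%s\n' "$SHOW"
```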

Link to post

Sounds like you should run some command to convert the RAID 1 pool into a single-disk pool, but I am not familiar with that.

 

I usually delete the RAID pool's partitions in UD and create them again when the disks have been rearranged.

Link to post

Rerunning the UUID-change steps, both disks were changed simultaneously; they really seem entangled!

 

root@HORUS:~# btrfstune -u /dev/nvme0n1p1
WARNING: it's recommended to run 'btrfs check --readonly' before this operation.
        The whole operation must finish before the filesystem can be mounted again.
        If cancelled or interrupted, run 'btrfstune -u' to restart.
We are going to change UUID, are your sure? [y/N]: y
Current fsid: cf2aac75-db53-47b1-931a-ae65b2eff77b
New fsid: b62f4e84-a8d6-449d-8bc4-c77f1e9c2c6b
Set superblock flag CHANGING_FSID
Change fsid in extents
Change fsid on devices
Clear superblock flag CHANGING_FSID
Fsid change finished
root@HORUS:~# blkid
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" BLOCK_SIZE="512" TYPE="vfat"
/dev/nvme1n1p1: UUID="b62f4e84-a8d6-449d-8bc4-c77f1e9c2c6b" UUID_SUB="f0ef586f-bf35-4655-aeaa-5f6ac9208262" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/nvme0n1p1: UUID="b62f4e84-a8d6-449d-8bc4-c77f1e9c2c6b" UUID_SUB="45556f71-e386-417a-bc2b-09439fda7c0e" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/sdb1: UUID="a753234c-6126-45e2-8a06-48acb30a6487" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="a6f418bb-c19e-4216-881e-a6b6bb094e1e"
/dev/sdc1: UUID="29d7fb75-7e4c-4a0d-99b7-82b1a6c12d9a" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="48c31838-659b-4f63-b868-b013ca28663e"
/dev/sdd1: UUID="1c3c2a87-61c6-4609-a974-82e4cfed151a" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="fbd77100-0ab9-4154-a780-70974f0ade93"
/dev/sde1: UUID="92b8f2be-7eac-49e6-bac5-48f9da265c07" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="ec60a12d-4cc8-4146-b863-edf1755c1b59"

 

I am quite lost actually :(

 

Quote

I usually delete the RAID pool's partitions in UD and create them again when the disks have been rearranged.

 

I don't see any particular command/button in the interface

 

Link to post

You can't change the UUID because both devices are still part of the same pool, do this:

 

1- Stop the array; if the Docker/VM services are using the cache pool, disable them.

2- Unassign all pool devices (from both pools).

3- Start the array to make Unraid "forget" the current pool config.

4- Stop the array.

5- Reassign both devices to the same pool (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any pool device).

6- Start the array; the pool should mount normally, and you can start the Docker/VM services if they were stopped before.

7- Now see here to remove one of the devices from the pool; once that's done you can re-add it to a different pool (it will need to be formatted).
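For reference, the device-removal step at the end boils down, at the btrfs level, to a balance-then-remove sequence. The Unraid GUI drives this for you; the commands below are only an illustration, with a hypothetical mount point /mnt/cache, guarded so they do nothing unless that pool is actually mounted:

```shell
POOL=/mnt/cache          # hypothetical mount point of the two-device pool
DEV=/dev/nvme0n1p1       # device to remove, per the blkid output above

# Guard: only act if that pool is actually mounted on this machine.
if mountpoint -q "$POOL" 2>/dev/null; then
    # Convert data and metadata from RAID1 to single-device profiles
    # (-f is required because this reduces metadata redundancy).
    btrfs balance start -f -dconvert=single -mconvert=single "$POOL"
    # Remove the device; btrfs migrates its chunks to the remaining disk.
    btrfs device remove "$DEV" "$POOL"
    # Wipe the old filesystem signatures so the disk can join a new pool.
    wipefs -a "$DEV"
fi
```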

 

 

Link to post

I've performed the first part, up until the removal procedure.

 

But then, I can do

1- stop the array

2- unassign pool disk to remove

 

but not 

3- start the array

 

as the START button is greyed out, and a "Missing Cache disk" warning is displayed

(pics attached)

 

thanks !

Image 2.png

Image 1.png

Link to post
  • JorgeB changed the title to [SOLVED] Synchronized pool devices
