valasg

Members
  • Posts: 12

  1. My bad... everything works fine now, thank you all for your help!
  2. I've performed the first part up until the removal procedure. But then, I can do:
     1. stop the array
     2. unassign the pool disk to remove
     but not:
     3. start the array, as the START button is greyed out and a "Missing Cache disk" warning is displayed (pics attached).
     Thanks!
  3. And couldn't it be something to be tuned at the BIOS level?
  4. Rerunning the change-UUID steps, both disks were changed simultaneously; they look entangled! (A membership check is sketched after the output below.)

     root@HORUS:~# btrfstune -u /dev/nvme0n1p1
     WARNING: it's recommended to run 'btrfs check --readonly' before this operation.
     The whole operation must finish before the filesystem can be mounted again.
     If cancelled or interrupted, run 'btrfstune -u' to restart.
     We are going to change UUID, are your sure? [y/N]: y
     Current fsid: cf2aac75-db53-47b1-931a-ae65b2eff77b
     New fsid: b62f4e84-a8d6-449d-8bc4-c77f1e9c2c6b
     Set superblock flag CHANGING_FSID
     Change fsid in extents
     Change fsid on devices
     Clear superblock flag CHANGING_FSID
     Fsid change finished
     root@HORUS:~# blkid
     /dev/loop0: TYPE="squashfs"
     /dev/loop1: TYPE="squashfs"
     /dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" BLOCK_SIZE="512" TYPE="vfat"
     /dev/nvme1n1p1: UUID="b62f4e84-a8d6-449d-8bc4-c77f1e9c2c6b" UUID_SUB="f0ef586f-bf35-4655-aeaa-5f6ac9208262" BLOCK_SIZE="4096" TYPE="btrfs"
     /dev/nvme0n1p1: UUID="b62f4e84-a8d6-449d-8bc4-c77f1e9c2c6b" UUID_SUB="45556f71-e386-417a-bc2b-09439fda7c0e" BLOCK_SIZE="4096" TYPE="btrfs"
     /dev/sdb1: UUID="a753234c-6126-45e2-8a06-48acb30a6487" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="a6f418bb-c19e-4216-881e-a6b6bb094e1e"
     /dev/sdc1: UUID="29d7fb75-7e4c-4a0d-99b7-82b1a6c12d9a" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="48c31838-659b-4f63-b868-b013ca28663e"
     /dev/sdd1: UUID="1c3c2a87-61c6-4609-a974-82e4cfed151a" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="fbd77100-0ab9-4154-a780-70974f0ade93"
     /dev/sde1: UUID="92b8f2be-7eac-49e6-bac5-48f9da265c07" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="ec60a12d-4cc8-4146-b863-edf1755c1b59"

     I am quite lost, actually; I don't see any particular command/button in the interface.
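     For reference, a minimal sketch (not from the original post; device path taken from the blkid output above) of how to confirm why both changed at once:

     # If both NVMe partitions are members of one multi-device btrfs
     # filesystem they share a single fsid, and btrfstune -u rewrites
     # that fsid on every member device at once.
     btrfs filesystem show /dev/nvme0n1p1
     # If /dev/nvme1n1p1 is listed among the devids here too, the pool
     # was never really split into two independent filesystems.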
  5. Stopping the array allowed me to run the command successfully, but after a reboot a new ID was assigned to both NVMe drives!

     root@HORUS:~# blkid
     /dev/loop0: TYPE="squashfs"
     /dev/loop1: TYPE="squashfs"
     /dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" BLOCK_SIZE="512" TYPE="vfat"
     /dev/nvme1n1p1: UUID="cf2aac75-db53-47b1-931a-ae65b2eff77b" UUID_SUB="f0ef586f-bf35-4655-aeaa-5f6ac9208262" BLOCK_SIZE="4096" TYPE="btrfs"
     /dev/nvme0n1p1: UUID="cf2aac75-db53-47b1-931a-ae65b2eff77b" UUID_SUB="45556f71-e386-417a-bc2b-09439fda7c0e" BLOCK_SIZE="4096" TYPE="btrfs"
     /dev/sdb1: UUID="a753234c-6126-45e2-8a06-48acb30a6487" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="a6f418bb-c19e-4216-881e-a6b6bb094e1e"
     /dev/sdc1: UUID="29d7fb75-7e4c-4a0d-99b7-82b1a6c12d9a" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="48c31838-659b-4f63-b868-b013ca28663e"
     /dev/sdd1: UUID="1c3c2a87-61c6-4609-a974-82e4cfed151a" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="fbd77100-0ab9-4154-a780-70974f0ade93"
     /dev/sde1: UUID="92b8f2be-7eac-49e6-bac5-48f9da265c07" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="ec60a12d-4cc8-4146-b863-edf1755c1b59"
     /dev/md1: UUID="29d7fb75-7e4c-4a0d-99b7-82b1a6c12d9a" BLOCK_SIZE="512" TYPE="xfs"
     /dev/md2: UUID="1c3c2a87-61c6-4609-a974-82e4cfed151a" BLOCK_SIZE="512" TYPE="xfs"
     /dev/md3: UUID="92b8f2be-7eac-49e6-bac5-48f9da265c07" BLOCK_SIZE="512" TYPE="xfs"
     /dev/loop2: UUID="ae7bf697-26fe-4346-b42f-23027d10b0a5" UUID_SUB="5a792d8f-358b-440b-b30a-d1fbb89c4e17" BLOCK_SIZE="4096" TYPE="btrfs"
     /dev/loop3: UUID="f17a1a2b-7ac8-4080-95d2-7200eea5d82d" UUID_SUB="87f2d071-fdc3-42f0-a06d-a8c1e04cf6a9" BLOCK_SIZE="4096" TYPE="btrfs"
  6. Yes, both are NVMe. I've removed nvme0n1p1 from any pool, so it is under "Unassigned Devices" now and the "MOUNT" button is available. However,

     btrfstune -u /dev/nvme0n1p1

     produces:

     ERROR: /dev/nvme0n1p1 is mounted
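     A minimal sketch (an editorial assumption, not from the thread) of getting past that error: btrfstune refuses to operate on a mounted filesystem, so it has to be unmounted first.

     # unmount the Unassigned Devices mount, then retry the UUID change
     umount /dev/nvme0n1p1
     btrfstune -u /dev/nvme0n1p1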
  7. Indeed, they look to have the same UUID: 8d14eb99-e521-4c8b-9e04-7a448ccd6f30. What's the way to change this, please?

     root@HORUS:~# blkid
     /dev/loop0: TYPE="squashfs"
     /dev/loop1: TYPE="squashfs"
     /dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" BLOCK_SIZE="512" TYPE="vfat"
     /dev/nvme1n1p1: UUID="8d14eb99-e521-4c8b-9e04-7a448ccd6f30" UUID_SUB="f0ef586f-bf35-4655-aeaa-5f6ac9208262" BLOCK_SIZE="4096" TYPE="btrfs"
     /dev/nvme0n1p1: UUID="8d14eb99-e521-4c8b-9e04-7a448ccd6f30" UUID_SUB="45556f71-e386-417a-bc2b-09439fda7c0e" BLOCK_SIZE="4096" TYPE="btrfs"
     /dev/sde1: UUID="92b8f2be-7eac-49e6-bac5-48f9da265c07" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="ec60a12d-4cc8-4146-b863-edf1755c1b59"
     /dev/sdc1: UUID="29d7fb75-7e4c-4a0d-99b7-82b1a6c12d9a" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="48c31838-659b-4f63-b868-b013ca28663e"
     /dev/sdd1: UUID="1c3c2a87-61c6-4609-a974-82e4cfed151a" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="fbd77100-0ab9-4154-a780-70974f0ade93"
     /dev/sdb1: UUID="a753234c-6126-45e2-8a06-48acb30a6487" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="a6f418bb-c19e-4216-881e-a6b6bb094e1e"
     /dev/md1: UUID="29d7fb75-7e4c-4a0d-99b7-82b1a6c12d9a" BLOCK_SIZE="512" TYPE="xfs"
     /dev/md2: UUID="1c3c2a87-61c6-4609-a974-82e4cfed151a" BLOCK_SIZE="512" TYPE="xfs"
     /dev/md3: UUID="92b8f2be-7eac-49e6-bac5-48f9da265c07" BLOCK_SIZE="512" TYPE="xfs"
     /dev/loop2: UUID="ae7bf697-26fe-4346-b42f-23027d10b0a5" UUID_SUB="5a792d8f-358b-440b-b30a-d1fbb89c4e17" BLOCK_SIZE="4096" TYPE="btrfs"
     /dev/loop3: UUID="f17a1a2b-7ac8-4080-95d2-7200eea5d82d" UUID_SUB="87f2d071-fdc3-42f0-a06d-a8c1e04cf6a9" BLOCK_SIZE="4096" TYPE="btrfs"
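     A quick sketch (not from the post) for comparing just the filesystem UUIDs of the two NVMe partitions, rather than scanning the full blkid listing:

     # print only the UUID tag for each partition; identical values mean
     # the two devices still form one multi-device btrfs filesystem
     blkid -s UUID -o value /dev/nvme0n1p1 /dev/nvme1n1p1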
  8. So, I:
     - stopped the array
     - disabled Docker and VMs (via the Settings menus)
     - went to my unique pool and set the second slot to "no device"
     - set Slots to 1 instead of 2
     - clicked on "ADD POOL", set cache_vm as the name and 1 as the slots
     - selected the remaining SSD (the one removed earlier) instead of "no device" in the second pool combo
     - restarted the pool
     - changed the "select cache pool" value in Shares, pointing the Docker-related shares to cache_docker and the VM ones to cache_vm
     Many operations, maybe in the wrong order? (A verification sketch follows this list.)
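     A sketch (editorial; the mount points are the ones named in the posts) of verifying whether these steps actually produced two independent filesystems:

     # each pool should report its own UUID; if both commands print the
     # same one, the two mounts are still backed by a single filesystem
     findmnt -no SOURCE,UUID /mnt/cache_docker
     findmnt -no SOURCE,UUID /mnt/cache_vm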
  9. Thanks for your quick reply. "Sounds as if they may still be running as a single pool" -> absolutely! I am attaching the diagnostics. Best regards, horus-diagnostics-20210501-1907.zip
  10. Hi, I used to use a pool of 2 SSDs to run and back up my Docker apps and VMs. I finally decided to split the pool so that I get 2 pools with one SSD each, to run Docker and the VMs separately (no more backup, of course). However, once created, they look synchronized: when I change the content of one (/mnt/cache_docker), for instance creating/removing a share or downloading a file, the exact same action happens on the other one (/mnt/cache_vm). Is there a way to break this link? Many thanks. (The symptom can be reproduced from the shell, as sketched below.)
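      A sketch (editorial; the marker filename is made up) of reproducing that symptom from the shell:

      # create a marker file on one pool and look for it on the other;
      # if it appears on both, the mounts share one underlying filesystem
      touch /mnt/cache_docker/uuid_test_marker
      ls -l /mnt/cache_vm/uuid_test_marker
      rm /mnt/cache_docker/uuid_test_marker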
  11. Hello, I was configuring my NextCloud container via the Docker UI, but I might have set an invalid container path value, and the container moved to the "orphan image" state, with no way to use the UI again. Is there a way to edit the container config directly somewhere? Many thanks. (One possible place is sketched below.)
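      One possible recovery path, sketched as an assumption about Unraid's layout (the thread does not confirm it, and the template filename is hypothetical):

      # Unraid's dockerMan keeps a per-container XML template on the flash
      # drive; correcting the bad path there, then re-adding the container
      # from the saved user template, sidesteps the broken UI state
      nano /boot/config/plugins/dockerMan/templates-user/my-NextCloud.xml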