Removing a cache drive to go from 2 to 1 - now says no file system


Solved by JorgeB.


I followed the steps in the manual for removing a drive.

First, perform a balance to convert to a single-drive profile.

It confirmed it was in single disk mode.

I thought that meant it was then ok to stop the array and remove the drive.
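(For reference, the balance I ran converts the pool's btrfs profile to single; from memory it was along these lines, with /mnt/cache being my pool's mount point:)

# convert both data and metadata to the single profile
btrfs balance start -dconvert=single -mconvert=single /mnt/cache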

When I restarted the array it said there is no file system and it cannot read data on either disk.

 

While there is no critical data on it, what steps should I have taken?

 


[Screenshots attached: 2023-01-12 16:30:14 and 16:28:58]

 

https://wiki.unraid.net/Manual/Storage_Management#Removing_disks_from_a_multi-device_pool

 

I think I've probably killed the data, which is OK as all the appdata is backed up to the array.

 

How is one supposed to remove a drive from a 2-drive cache pool?

 

Pool TWO has 2 drives in raid1. I want to eliminate the mirror and run in single mode.

 

Is it better to copy the data to another drive and wipe both drives?

 

The system did hang during one of the BTRFS balance checks, but this was actually when it was checking another pool, not this one.

 

Thanks for your help 🙂

root@Moulin-Rouge:~# blkid
/dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" BLOCK_SIZE="512" TYPE="vfat"
/dev/loop1: TYPE="squashfs"
/dev/sdf1: UUID="72c4582b-ce97-49cb-b904-4e2c05073dda" UUID_SUB="30473b54-e841-4935-9880-ea16cc627683" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="506db2dc-01"
/dev/nvme0n1p1: UUID="24e861c6-e86f-4b55-b120-6ad8847c8d97" UUID_SUB="2f33bc93-9362-490f-a71e-04c42a03d74b" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="506db2db-01"
/dev/sdd1: UUID="aaa46ac2-3000-4696-b0eb-cea90fb5ea17" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="998d4295-3b12-4efe-a724-bd27751fe6f4"
/dev/sdb1: UUID="6c8c21d2-3659-46bd-b2bc-7a32eca49098" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="0ec34a92-1dfd-48a3-9dbe-adf6bd457e17"
/dev/sdg1: UUID="72c4582b-ce97-49cb-b904-4e2c05073dda" UUID_SUB="ca0c75b3-f370-4caf-988c-b6001af533be" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="506db2d2-01"
/dev/loop0: TYPE="squashfs"
/dev/sde1: UUID="39879032-0914-4994-b0cb-53c9fd59f0b1" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="32000865-aad5-4bfe-ba13-02573c2916c0"
/dev/sdc1: UUID="ffafdb22-0f4d-49bf-b29c-e7521e488a3e" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="2d2ef932-f44d-37fc-375d-6a90c02b990f"
/dev/md2: UUID="aaa46ac2-3000-4696-b0eb-cea90fb5ea17" BLOCK_SIZE="512" TYPE="xfs"
/dev/md3: UUID="39879032-0914-4994-b0cb-53c9fd59f0b1" BLOCK_SIZE="512" TYPE="xfs"
/dev/md1: UUID="ffafdb22-0f4d-49bf-b29c-e7521e488a3e" BLOCK_SIZE="512" TYPE="xfs"
/dev/nvme1n1p1: PARTUUID="506db2d0-01"
root@Moulin-Rouge:~# 

 

4 minutes ago, JorgeB said:

If it's a raid1 pool just stop the array, unassign the device you want to remove, and start the array.

OK thanks - it said disk missing and wouldn't start the array. I must have missed a step, or just messed it up with the balance.

 

Thanks - I suppose I don't really know what the highlighted bit in yellow means. Could we word it with an example in the manual?

 

Diags attached.

moulin-rouge-diagnostics-20230112-1654.zip

19 minutes ago, dopeytree said:

I suppose I don't really know what the highlighted bit in yellow means. Could we word it with an example in the manual?

It just means the pool needs to be redundant, for example raid1; it won't work for a single-profile or raid0 pool.
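If you want to confirm the current profile before removing anything, something like this from the console should show it (assuming the pool is mounted at /mnt/cache; adjust for your pool name):

# the Data/Metadata lines report the active profile, e.g. RAID1 or single
btrfs filesystem df /mnt/cache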

 

 

The diags are from after rebooting, so assuming nvme1n1 was the other pool member, with the array stopped type:

 

btrfs-select-super -s 1 /dev/nvme1n1p1

 

Then, without starting the array, post the output of:

 

btrfs fi show

 

P.S. btrfs is detecting data corruption on multiple devices; unless these are old errors that were never reset, you likely have a RAM problem.
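The counters can be checked (and, once the cause is fixed, reset) per device; roughly like this, assuming the pool mounts at /mnt/cache:

# show read/write/corruption error counters for each pool member
btrfs device stats /mnt/cache
# -z clears the counters so any new errors stand out
btrfs device stats -z /mnt/cache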

 

 

8 hours ago, dopeytree said:

I followed the steps in the manual for removing a drive.

First, perform a balance to convert to a single-drive profile.

It confirmed it was in single disk mode.

I thought that meant it was then ok to stop the array and remove the drive.

When I restarted the array it said there is no file system and it cannot read data on either disk.

While there is no critical data on it, what steps should I have taken?

 

The instructions say to make sure the pool is in a redundant mode BEFORE you remove the drive. From your description, this is not what you did?


I don't really understand... it says to scroll down to balance. It should then say to scroll back up and just stop the array and remove the drive.

I'm sure I did try this and it wouldn't let me restart the array, but perhaps that was after I did something else.

 

"BTRFS can add and remove devices online, and freely convert between RAID levels after the file system has been created." - I guess this doesn't mean single disk mode?
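From what I can tell, the online removal the manual is talking about maps to something like this at the command line (just a sketch - the device and mount point here are examples, not my actual pool):

# btrfs migrates the device's data to the remaining members, then drops it
btrfs device remove /dev/nvme1n1p1 /mnt/cache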

 

In what use case would someone use the single-drive balance? Is that what you'd do after removing the mirrored cache?

 

Anyway, thanks for the guidance, guys.

I'm running memtest. You can't easily make the UEFI image on a Mac anymore. I probably could have done it on the Steam Deck in desktop mode. Anyway, my brother imaged the USB stick on Windows.

 

I will run those commands once memtest finishes.

Edited by dopeytree
12 hours ago, dopeytree said:

In what use case would someone use the single-drive balance? Is that what you'd do after removing the mirrored cache?

No, that is done automatically after you remove a drive (if only one remains). You might want to do that manually if, for example, you have two devices and want to use their full space for storage instead of having a mirror.
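For that two-devices-full-space case, the conversion would be roughly this (mount point assumed; keeping metadata mirrored is optional but common):

# spread data across both devices without mirroring, keep metadata raid1
btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache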


Something strange is happening.

The server name has changed to the default Tower when it should be moulin-rouge.

What causes this?

 

Ran memtest overnight.

I've read some people run the memory test for 24 hours or beyond, but the free UEFI one stopped after doing the 4 test types.

So I'm not sure if there's a difference between the memtest website version compared to the Unraid-bundled version.

 

The result is that Memtest said it passed, but I will try some other RAM to be safe.

 

Here's the output of the commands requested above:

 

root@Tower:~# btrfs-select-super -s 1 /dev/nvme1n1p1
using SB copy 1, bytenr 67108864
root@Tower:~# btrfs fi show
Label: none  uuid: 72c4582b-ce97-49cb-b904-4e2c05073dda
        Total devices 2 FS bytes used 889.57GiB
        devid    1 size 1.86TiB used 869.03GiB path /dev/sdg1
        devid    2 size 1.86TiB used 869.03GiB path /dev/sdf1

Label: none  uuid: 24e861c6-e86f-4b55-b120-6ad8847c8d97
        Total devices 2 FS bytes used 1.11TiB
        devid    2 size 1.86TiB used 516.03GiB path /dev/nvme1n1p1
        devid    3 size 1.82TiB used 630.03GiB path /dev/nvme0n1p1

root@Tower:~# 

 

I think this indicates I made the NVMe drives unsynced?

 

The other pool (sdg1 & sdf1) has disconnected, but I am able to mount both of the drives with the Unassigned Devices plugin.

 

The NVMe drives are not mountable with the Unassigned Devices plugin.

Edited by dopeytree
4 minutes ago, dopeytree said:

The server name has changed to the default Tower when it should be moulin-rouge.

What causes this?

This will happen if Unraid cannot read the configuration information off the flash drive.

 

You are likely to get better-informed feedback if you attach your system's diagnostics zip file, taken while the problem is being encountered, to your next post in this thread.


I see it has the FSCK0000.REC files, etc.

Hmm, OK, so we are back to a bad USB stick...

I had that a few months ago when initialising the server.

We ended up replacing the USB3 Samsung drive with a USB2 drive, and everything was good until about last week.

 

If I download the USB stick backup from my-servers, it would be cool to be able to also jump back a week. The current one is from yesterday, 24 hours ago, and I think the stick corruption happened today, as I was still logging in with moulin-rouge.local yesterday, so that backup is actually all OK.

 

Anyway, diags attached. Many thanks.

 

Also, there's a small bug in the code on the forum here to do with colour. Sometimes if you copy and paste it chooses the wrong colour. It needs something that says if the background is light use a dark colour, & vice versa for a dark background. Or it could be a Safari bug.

 

tower-diagnostics-20230113-1502.zip

Edited by dopeytree
50 minutes ago, dopeytree said:
Label: none  uuid: 24e861c6-e86f-4b55-b120-6ad8847c8d97
        Total devices 2 FS bytes used 1.11TiB
        devid    2 size 1.86TiB used 516.03GiB path /dev/nvme1n1p1
        devid    3 size 1.82TiB used 630.03GiB path /dev/nvme0n1p1

The pool is back. To use it you first need to reset it: unassign all members from this pool, start the array, stop the array, re-assign both pool devices, and start the array.


Ace thanks @JorgeB

 

Both pools are back up. Panic over.

Although I'm still going to try different RAM & a different USB stick for stability.

 

Please could you just talk me through my original intention, which was to remove the mirror on the NVMe pool & instead have 2 NVMe pools.

3 pools total:

1x for each NVMe drive (2)

Then the existing pool two, which is mirrored and stays mirrored.

 

Do I need to do anything to check the NVMe drives are in sync, as it sounds like one has more data than the other NVMe drive?

2 size 1.86TiB used 516.03GiB path /dev/nvme1n1p1
3 size 1.82TiB used 630.03GiB path /dev/nvme0n1p1

Before then stopping the array, removing the NVMe disk from the cache pool, and restarting the array with a single NVMe.
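Would a scrub be the right way to verify they match? Something like this, assuming the pool mounts at /mnt/cache:

# verify checksums and repair bad copies from the mirror
btrfs scrub start /mnt/cache
# check progress and the error summary
btrfs scrub status /mnt/cache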

 

Thanks


If you still want to remove a device from that pool, first balance the pool to raid1. When that is done, stop the array, unassign the device you want to remove (leave it unassigned for now), and start the array. Wait for the balance to finish; when done, you can stop the array and create a new pool with the old device.
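(The raid1 conversion step is the usual btrfs profile balance - roughly this, with the mount point assumed:)

# make sure both data and metadata are fully mirrored before unassigning
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache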


 

I can't get it to let me start the array with 1 drive removed from the pool.

 

[Screenshots attached: 2023-01-14 04:07:34 and 04:07:44]

 

root@Tower:~# btrfs fi show
Label: none  uuid: 24e861c6-e86f-4b55-b120-6ad8847c8d97
        Total devices 2 FS bytes used 1.12TiB
        devid    2 size 1.86TiB used 1.13TiB path /dev/nvme1n1p1
        devid    3 size 1.82TiB used 1.13TiB path /dev/nvme0n1p1

Label: none  uuid: 72c4582b-ce97-49cb-b904-4e2c05073dda
        Total devices 2 FS bytes used 898.84GiB
        devid    1 size 1.86TiB used 910.03GiB path /dev/sdg1
        devid    2 size 1.86TiB used 910.03GiB path /dev/sdf1

Label: none  uuid: 5179e4f0-91cc-4afb-a038-692b2e3afe1c
        Total devices 1 FS bytes used 468.00KiB
        devid    1 size 10.00GiB used 126.38MiB path /dev/loop2

root@Tower:~# 


 

tower-diagnostics-20230114-0419.zip

Edited by dopeytree