Removing a cache pool


Solved by JorgeB.


I don't really know where to start here. When I first set up my Unraid server I remember getting confused while setting up the cache, but I ended up with something that worked and moved on.

The time has come to fix that mess, as I need the SATA ports.

I have 3 SSDs in my cache pool: 1x 2TB and 2x 120GB. I plan to get rid of the two 120GB drives, leaving the 2TB one in the cache pool on its own, which frees up two SATA ports for other things.

I'm not entirely sure how to do it, though. From the screenshots below it looks to me like I've somehow created a hybrid pool.

 

[screenshots of the current pool configuration attached]

 

Is there a way, without destroying anything, to remove the cache completely and recreate it with just the one drive?

 


Make sure you don't have an SSH window open inside a mount point.

 

To fix the pool issue, the easiest way is to re-import it; it should then use the correct profile. To re-import: stop the array, unassign all pool devices, start the array, stop the array again, re-assign all pool devices, and start the array.
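
If you want to double-check the result from the command line, commands along these lines show which profiles the pool is using (assuming the pool is named "cache" and mounted at /mnt/cache, Unraid's default):

# Data/metadata/system profiles currently in use on the pool
btrfs filesystem df /mnt/cache

# Per-device breakdown, similar to the usage table posted later in this thread
btrfs filesystem usage -T /mnt/cache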


I have repeated the removal of all drives, start array, stop array. I figured out that if I reassign all the drives in the same places, then before I start the array I can click the "cache" drive and change the file system type from "Auto" to "btrfs" and then choose "raid1". Should I apply this, or will I lose my data?

 

[screenshot of the pool device settings attached]


 

             Data      Data     Metadata  System                              
Id Path      single    RAID1    RAID1     RAID1    Unallocated Total     Slack
-- --------- --------- -------- --------- -------- ----------- --------- -----
 1 /dev/sdf1   5.00GiB 33.00GiB   2.00GiB 32.00MiB    71.76GiB 111.79GiB     -
 2 /dev/sdq1 807.44MiB 38.00GiB   2.00GiB        -    71.00GiB 111.79GiB     -
 3 /dev/sde1 244.95GiB 71.00GiB   2.00GiB 32.00MiB     1.51TiB   1.82TiB     -
-- --------- --------- -------- --------- -------- ----------- --------- -----
   Total     250.74GiB 71.00GiB   3.00GiB 32.00MiB     1.65TiB   2.04TiB 0.00B
   Used       34.12GiB 68.45GiB 154.41MiB 64.00KiB     

 

The conversion to raid1 didn't finish, or it aborted; the pool is using both single and raid1 profiles for data. Run the conversion to raid1 again and post new diags once it's done.
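
For reference, running the conversion from the command line instead of the GUI would look roughly like this (again assuming the pool is mounted at /mnt/cache):

# Rewrite existing data and metadata chunks to the raid1 profile;
# this can take a while on a large pool
# (appending ",soft", i.e. -dconvert=raid1,soft, skips chunks already in the target profile)
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

# Check progress from another shell
btrfs balance status /mnt/cache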


Balance is aborting because btrfs is detecting data corruption:

 

Feb 21 14:07:29 SAG-A-STAR kernel: BTRFS info (device sdf1): relocating block group 8046271987712 flags data
Feb 21 14:07:30 SAG-A-STAR kernel: BTRFS warning (device sdf1): csum failed root -9 ino 257 off 433868800 csum 0x9b7bca66 expected csum 0x9ad9809d mirror 1
Feb 21 14:07:30 SAG-A-STAR kernel: BTRFS error (device sdf1): bdev /dev/sde1 errs: wr 0, rd 0, flush 0, corrupt 772, gen 0
Feb 21 14:07:30 SAG-A-STAR kernel: BTRFS warning (device sdf1): csum failed root -9 ino 257 off 433868800 csum 0x9b7bca66 expected csum 0x9ad9809d mirror 1
Feb 21 14:07:30 SAG-A-STAR kernel: BTRFS error (device sdf1): bdev /dev/sde1 errs: wr 0, rd 0, flush 0, corrupt 773, gen 0
Feb 21 14:07:32 SAG-A-STAR kernel: BTRFS info (device sdf1): balance: ended with status: -5

 

Run a scrub; it should list the corrupt files in the syslog. Delete or restore those files from a backup and then try again. Alternatively, back up what you can from the pool and reformat with the single device.
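
A rough command-line sketch of that, again assuming the pool is mounted at /mnt/cache:

# Start a scrub and wait for it to finish (-B keeps it in the foreground)
btrfs scrub start -B /mnt/cache

# Summary of any errors found
btrfs scrub status /mnt/cache

# The kernel logs the corruption details; filter the syslog for them
grep -i btrfs /var/log/syslog | grep -iE "csum|corrupt|unable to fixup"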

 

It would also be a good idea to run memtest, since data corruption can be the result of bad RAM.
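
It may also be worth looking at the per-device error counters btrfs keeps (the "corrupt 772" figure in the log above comes from these):

# Cumulative write/read/flush/corruption/generation errors per pool member
btrfs device stats /mnt/cache

# Once the cause is fixed, reset the counters so new errors stand out
btrfs device stats -z /mnt/cache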

2 minutes ago, ApriliaEdd said:

pool? Do I just share everything and back it up to another PC?

That is one option.  
 

Alternatively, you can move it to the array using the process documented here in the online documentation, accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
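
If you end up copying the data off by hand instead, a minimal rsync sketch might look like this; the destination path is just an example, so point it at an array disk or share with enough free space:

# Copy the pool contents to a folder on an array disk,
# preserving permissions and extended attributes
rsync -avX /mnt/cache/ /mnt/disk1/cache_backup/

# Dry run afterwards; anything still listed was missed or has since changed
rsync -avXn /mnt/cache/ /mnt/disk1/cache_backup/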


@JorgeB

Morning,

New day, same problem. I stopped the array, removed one of the 120GB disks, and hit start 🤞... same thing as yesterday: "Wrong Pool State".

 

I ran a scrub: no errors. I ran a balance, which took 20 minutes but completed without error, and tried again. Same story.

 

But I had the logs open when I deselected the 120GB drive and noticed this:

 

[screenshot of the log attached]

 

Does it think I have 4 drives in the cache?

 

(sdy) is in another pool and has never been part of the cache.

[screenshot attached]

 

So I guess the next question is: how do I correct this? That pool is used for a specific share for a Proxmox Backup Server VM that didn't like being on the array.
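
One way to check which devices btrfs itself considers part of each pool is something like:

# List every btrfs filesystem the system can see, with its member devices
btrfs filesystem show

# Cross-check against block devices and their filesystem labels/UUIDs
lsblk -o NAME,FSTYPE,LABEL,UUID,SIZE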

