ApriliaEdd Posted February 20
I don't really know where to start here. When I first set up my Unraid I remember getting confused setting up the cache, but I got something that worked and moved on. The time has come to fix that mess, as I need the SATA ports. I have 3 SSDs in my cache pool: 1x 2 TB and 2x 120 GB. I plan to get rid of the two 120 GB ones, leaving the 2 TB drive in the cache pool on its own, to free up two ports for other things. I'm not entirely sure how to do it, though. From this it looks to me like I've somehow created a hybrid pool. Is there a way, without destroying anything, to remove the cache completely and recreate it with the one drive only?
JorgeB Posted February 20
First you need to convert the pool to raid1, then you can remove one device at a time; see here: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=480418
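For reference, the conversion JorgeB links to can also be sketched from a console. This is a minimal, hedged example, assuming the pool is mounted at /mnt/cache and the two 120 GB SSDs are /dev/sdq1 and /dev/sdf1 — those paths are assumptions, so always confirm the real device names with `btrfs filesystem show` first, and prefer the GUI procedure in the linked FAQ on Unraid itself:

```shell
# Convert both data and metadata chunks to the raid1 profile.
# This runs a full balance and can take a while on a large pool.
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

# Check that every chunk type now reports RAID1 before touching devices.
btrfs filesystem usage -T /mnt/cache

# Remove the small devices one at a time; btrfs migrates each device's
# chunks to the remaining devices before releasing it.
btrfs device remove /dev/sdq1 /mnt/cache

# raid1 needs at least two devices, so before dropping to a single
# drive, convert back to the single profile, then remove the last one.
btrfs balance start -dconvert=single -mconvert=dup /mnt/cache
btrfs device remove /dev/sdf1 /mnt/cache
```

These commands modify a live filesystem and need root on a machine with a mounted btrfs pool; they are shown here as a sketch of what the GUI steps do underneath, not as a tested script.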
ApriliaEdd Posted February 20
So I just select "convert to raid1 mode" in the GUI and let it do its thing, then remove the two small drives one by one?
ApriliaEdd Posted February 20
Thanks, I'll let you know how I go. It's doing a preclear of a disk at the moment.
ApriliaEdd Posted February 20
OK, I converted to raid1. Current status: stopping the array and selecting "no device" for one of the 120 GB drives gives this when starting the array. I've stopped anything I can think of that will use the cache for now, while it's so small.
JorgeB Posted February 21
You may need to change the pool profile manually; post the diagnostics.
ApriliaEdd Posted February 21
I saw this this morning and was thinking the same. The problem at the moment is that it won't stop the array from the GUI; I've tried from two different PCs. sag-a-star-diagnostics-20240221-0915.zip
JorgeB Posted February 21
Make sure you don't have an SSH window open to a mount point. To fix the pool issue, the easiest way is to re-import it; it should then use the correct profile. To re-import: stop the array, unassign all pool devices, start the array, stop the array, re-assign all pool devices, start the array.
ApriliaEdd Posted February 21
No SSH windows open that I'm aware of. Rebooting worked and allowed me to stop the array. Before I do this, should I set all shares to array primary and run mover to move all data off the cache pool, or will the data survive this?
ApriliaEdd Posted February 21
Still the same situation: I try to remove a disk and then it won't start the array.
ApriliaEdd Posted February 21
I have repeated the removal of all drives, start array, stop array. I figured out that if I reassign all the drives in the same places, then before starting the array I click the "cache" drive, I can change the file system type from "Auto" to "btrfs" and then choose "raid1". Should I apply this, or will I lose my data?
JorgeB Posted February 21
Leave it on auto and post new diags after array start.
ApriliaEdd Posted February 21
@JorgeB Diagnostics attached as requested. sag-a-star-diagnostics-20240221-1359.zip
JorgeB Posted February 21 Share Posted February 21 Data Data Metadata System Id Path single RAID1 RAID1 RAID1 Unallocated Total Slack -- --------- --------- -------- --------- -------- ----------- --------- ----- 1 /dev/sdf1 5.00GiB 33.00GiB 2.00GiB 32.00MiB 71.76GiB 111.79GiB - 2 /dev/sdq1 807.44MiB 38.00GiB 2.00GiB - 71.00GiB 111.79GiB - 3 /dev/sde1 244.95GiB 71.00GiB 2.00GiB 32.00MiB 1.51TiB 1.82TiB - -- --------- --------- -------- --------- -------- ----------- --------- ----- Total 250.74GiB 71.00GiB 3.00GiB 32.00MiB 1.65TiB 2.04TiB 0.00B Used 34.12GiB 68.45GiB 154.41MiB 64.00KiB Conversation to raid1 didn't finish, or it aborted, it's using single and raid1 profiles for data, run the conversion to raid1 again and post new diags once it's done. Quote Link to comment
ApriliaEdd Posted February 21
@JorgeB The balance for converting runs for about a second then stops. Hope the video works. convert to raid 1.mp4 sag-a-star-diagnostics-20240221-1407.zip
JorgeB Posted February 21
The balance is aborting because btrfs is detecting data corruption:

Feb 21 14:07:29 SAG-A-STAR kernel: BTRFS info (device sdf1): relocating block group 8046271987712 flags data
Feb 21 14:07:30 SAG-A-STAR kernel: BTRFS warning (device sdf1): csum failed root -9 ino 257 off 433868800 csum 0x9b7bca66 expected csum 0x9ad9809d mirror 1
Feb 21 14:07:30 SAG-A-STAR kernel: BTRFS error (device sdf1): bdev /dev/sde1 errs: wr 0, rd 0, flush 0, corrupt 772, gen 0
Feb 21 14:07:30 SAG-A-STAR kernel: BTRFS warning (device sdf1): csum failed root -9 ino 257 off 433868800 csum 0x9b7bca66 expected csum 0x9ad9809d mirror 1
Feb 21 14:07:30 SAG-A-STAR kernel: BTRFS error (device sdf1): bdev /dev/sde1 errs: wr 0, rd 0, flush 0, corrupt 773, gen 0
Feb 21 14:07:32 SAG-A-STAR kernel: BTRFS info (device sdf1): balance: ended with status: -5

Run a scrub; it should list the corrupt files in the syslog. Delete or restore those files from a backup and then try again. Alternatively, back up what you can from the pool and reformat with the single device. It would also be a good idea to run memtest, since data corruption can be the result of bad RAM.
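For anyone reading along: the key field in those csum warnings is the inode number ("ino 257"), which identifies the corrupt file. A small sketch of pulling it out of a logged line with sed (the log line is copied from the syslog above; the `inode-resolve` call at the end assumes a pool mounted at /mnt/cache, so it is left commented out):

```shell
# One of the csum-failure lines from the kernel log above.
line='BTRFS warning (device sdf1): csum failed root -9 ino 257 off 433868800 csum 0x9b7bca66 expected csum 0x9ad9809d mirror 1'

# Extract the inode number of the corrupt file.
ino=$(printf '%s\n' "$line" | sed -n 's/.* ino \([0-9]*\) .*/\1/p')
echo "$ino"   # prints 257

# On the live pool the inode can be mapped back to a path
# (needs a mounted btrfs filesystem, so commented out here):
# btrfs inspect-internal inode-resolve "$ino" /mnt/cache
```

A scrub usually logs the offending path directly as well, so the manual extraction is only needed when the syslog gives you a bare inode number.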
ApriliaEdd Posted February 21
Scrub finished. Syslog. Memtest is a problem: the server is in the loft without a screen, as I lent it to a mate, so I'll have to run that at a later date. How do I go about backing up the pool? Do I just share everything and back it up to another PC?
itimpi Posted February 21
2 minutes ago, ApriliaEdd said: "pool? do i just share everything and back it up to another PC?"
That is one option. Alternatively, you can move it to the array using the process documented in the online documentation, accessible via the 'Manual' link at the bottom of the GUI or the DOCS link at the top of each forum page.
JorgeB Posted February 21
Looks like there's only one corrupt file; you can also just delete it and try the conversion again.
ApriliaEdd Posted February 21
I think you've cracked it! Thanks for your help today. I'll try to remove one of the smaller drives tomorrow.
ApriliaEdd Posted February 22
@JorgeB Morning, new day, same problem. Stopped the array, removed one of the 120 GB disks and hit start 🤞... same thing as yesterday: "Wrong Pool State". Ran a scrub: no errors. Ran a balance, which took 20 minutes but completed without error, and tried again: same story. But I had the logs open when I deselected the 120 GB drive and noticed this. Does it think I have 4 drives in the cache? (sdy) is in another pool and has never been part of the cache. So I guess the next question is: how do I correct this? That pool is used for a specific share for a Proxmox Backup Server VM that didn't like being on the array.
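A quick way to double-check which devices btrfs actually considers part of each pool is to list the filesystems directly. A minimal sketch (run as root on the server; it needs the live system, so it isn't something that can be tested here):

```shell
# Lists every btrfs filesystem by UUID together with its member devices.
# The cache pool's UUID should show only its own devices; if sdy appears
# under the cache UUID, btrfs really does believe it belongs to that pool.
btrfs filesystem show
```

Comparing that output against the device assignments in the GUI is usually the fastest way to spot a drive that was accidentally added to the wrong pool.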