BigRed8150 (July 12, 2023):
I am pretty sure I did this to myself, and I need help fixing it. Last night I ran a balance and a scrub at the same time, and since then the cache drive has been throwing errors and is stuck read-only. I tried stopping the VMs and Docker, setting the cache option on my shares to Yes, and invoking the mover to get the data off the cache drive and onto the array (not sure this is right), but that did not work. I do have new memory coming, and maybe that will help. If not, I am at a loss for how to fix this and keep my data. Any guidance would be appreciated; I am still pretty new to the inner workings of Unraid.
JorgeB (July 12, 2023):
Please post the diagnostics.
BigRed8150 (July 12, 2023):
OK, here you go. masterblaster-diagnostics-20230712-1717.zip
Squid (July 12, 2023):
Jul 12 08:09:24 MasterBlaster kernel: BTRFS info (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 1, gen 0
The first thing you'd want to do is run memtest from the boot menu (or, if you're booting via UEFI, set up a new stick from https://www.memtest86.com/). Corruption is usually caused by bad memory.
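The corruption counter in the log line Squid quotes can also be read directly. A minimal sketch, assuming the pool is mounted at /mnt/cache (the Unraid default used in this thread); the guard keeps it safe to run when no pool is mounted:

```shell
#!/bin/sh
# Sketch: print the per-device BTRFS error counters (wr/rd/flush/corrupt/gen)
# that the quoted kernel log line reports. /mnt/cache is an assumption based
# on this thread's setup; pass a different mount point as the first argument.
POOL="${1:-/mnt/cache}"

if mountpoint -q "$POOL"; then
    # One line of counters per member device of the pool
    btrfs device stats "$POOL"
else
    # Nothing mounted there, so there are no counters to read
    echo "pool not mounted: $POOL"
fi
```

A non-zero corruption count with zero read and write errors, as seen here, often points at bad RAM rather than a failing drive, which is why memtest is the first step.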
BigRed8150 (July 12, 2023):
OK, let me give that a try. Thank you.
JorgeB (July 13, 2023):
After that, you should back up and re-create the pool, since the filesystem is now corrupt and will likely keep going read-only.
BigRed8150 (July 13, 2023):
What is the best way to do that without losing data?
JorgeB (July 13, 2023):
Just copy anything important somewhere else, like the array. You can use your favorite tool, for example the Dynamix File Manager plugin.
BigRed8150 (July 13, 2023):
How do I re-create the pool?
BigRed8150 (July 13, 2023; edited July 14, 2023, more info):
So I changed out my RAM and I still cannot get the cache drive to write to the array; attached is the new diagnostics file. I feel lost. I disabled Docker and the VMs, changed the cache setting to Prefer: Yes, and invoked the mover, but it's not doing anything. And if I use the file explorer on my PC and select the folders to move from the cache to a share on the array, I get a permissions error. So what do I do to move the files, and then re-create the pool? masterblaster-diagnostics-20230713-1954.zip
JorgeB (July 14, 2023):
Reboot, then right after array start type:
btrfs balance cancel /mnt/cache
and post new diagnostics.
BigRed8150 (July 14, 2023):
OK, here it is. masterblaster-diagnostics-20230714-0618.zip
JorgeB (July 14, 2023):
That wasn't fast enough; it must be done right after array start, to see if the cache stops going read-only. You have less than one minute. Try again: disable array auto-start, reboot, open a terminal window and paste the command before array start, then run it right after the pool mounts.
BigRed8150 (July 14, 2023):
Here is the new diagnostics file. masterblaster-diagnostics-20230714-0708.zip
JorgeB (July 14, 2023):
Still not fast enough. Let's try another way. Stop the array and type:
mkdir /x
mount -t btrfs -o skip_balance dev/nvme0n1p1 /x
btrfs balance cancel /x
umount /x
then start the array and post new diagnostics.
BigRed8150 (July 14, 2023; edited July 14, 2023, misspell):
When I run the mount command I get:
root@MasterBlaster:~# mount -t btrfs -o skip_balance dev/nvme0n1p1 /x
mount: /x: special device dev/nvme0n1p1 does not exist.
root@MasterBlaster:~#
I also tried it without the p1 (mount -t btrfs -o skip_balance dev/nvme0n1 /x), same result.
JorgeB (July 14, 2023):
Sorry, typo. It should be:
mount -t btrfs -o skip_balance /dev/nvme0n1p1 /x
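Putting JorgeB's corrected sequence together, a sketch with a device-existence guard; the device path /dev/nvme0n1p1 is specific to this machine, so substitute your own pool device:

```shell
#!/bin/sh
# Sketch of the manual recovery sequence from this thread, with the corrected
# (absolute) device path. DEV's default is this thread's device, an assumption.
DEV="${1:-/dev/nvme0n1p1}"
MNT=/x

if [ -b "$DEV" ]; then
    # -p so re-running the script doesn't fail if /x already exists
    mkdir -p "$MNT"
    # skip_balance stops the interrupted balance from resuming on mount,
    # which is what kept flipping the pool read-only before the command
    # could be typed in time
    mount -t btrfs -o skip_balance "$DEV" "$MNT"
    # Permanently cancel the paused balance, then detach the pool so the
    # array can be started normally
    btrfs balance cancel "$MNT"
    umount "$MNT"
else
    echo "device not found: $DEV"
fi
```

The missing leading slash in the earlier post is exactly why mount reported "special device dev/nvme0n1p1 does not exist": without it, the path is interpreted relative to the current directory.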
dhenke1690 (July 14, 2023):
I am having the same issue and am definitely following this thread. I ran Memtest86 on all four RAM sticks together and got errors; then I ran the test on each stick individually and they passed. My RAM sticks are mismatched (one set is 8 GB sticks at 3200 MHz and the other 16 GB sticks at 3000 MHz), but I run them at the default 2133 MHz. It's been running this way for over a year without a problem. I guess I'm going to back up the cache drive, move all the data to the array, and format the drive. During this process I learned that the backup plugin was deprecated, so the last backup I have is from March. *facepalm*
BigRed8150 (July 14, 2023):
OK, I'm back and ran the commands. Here is the diagnostics file. masterblaster-diagnostics-20230714-1709.zip
BigRed8150 (July 14, 2023):
Running that command and then a scrub brought everything back online. So is the NVMe drive bad and in need of replacement, or did replacing the RAM do the trick?
JorgeB (July 15, 2023):
Due to the crash during the balance, I would still recommend backing up and re-creating the pool, but now at least you should be able to use the mover to move the data to the array.
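The backup JorgeB describes can also be done from the command line. A minimal sketch, assuming a hypothetical array share named "backup" (the Dynamix File Manager mentioned earlier, or any tool that preserves ownership, works equally well):

```shell
#!/bin/sh
# Sketch: copy the cache pool's contents to an array share before
# re-creating the pool. SRC and DST defaults are assumptions for this
# thread's layout, not Unraid requirements; the guard makes the script
# a harmless no-op if the source path is absent.
SRC="${1:-/mnt/cache}"
DST="${2:-/mnt/user/backup/cache-$(date +%Y%m%d)}"

if [ -d "$SRC" ]; then
    mkdir -p "$DST"
    # cp -a preserves permissions and ownership, so Docker appdata and VM
    # images restore cleanly when copied back to the re-created pool
    cp -a "$SRC"/. "$DST"/
    echo "backed up $SRC to $DST"
else
    echo "source not found: $SRC"
fi
```

Copying with a tool that keeps ownership intact also avoids the kind of permissions error the thread author hit when moving folders through a PC's file explorer over the network.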
BigRed8150 (July 15, 2023):
So I did a move with unBALANCE, then copied the files back to run Docker and the VMs. After I got everything up and running, I ran a scrub with "fix errors" and it finished. I am now doing a move, and the cache drive is growing in used space...
BigRed8150 (July 15, 2023):
Here is my current diagnostics file. masterblaster-diagnostics-20230715-1215.zip
BigRed8150 (July 15, 2023):
With all of this mess, my Plex Docker container is operational but has no access to the media. This is such a complete mess. What do I do to get back to stable?
JorgeB (July 16, 2023):
Sorry, I can't help with Plex as I've never used it. Best to ask in the container's support thread or Discord.