rclifton's Posts
  1. From Unraid 6.9 onward, btrfs volumes are supposed to be mounted with the discard=async mount option, which should mean that trim isn't needed. However, I have found that if I stop using trim, the performance of my NVMe cache drive drops dramatically after a few days, so I have left it installed. It could just be my use case, I don't know. My cache drive sees several hundred GB of writes daily, with the files then being moved from the cache onto either my array or sent to my TrueNAS server running on another machine. I still see the details of how much space was recovered every time trim executes, and I have no conflicts between it and V-Rising or any other games I host. How is your drive connected to your system? Is it NVMe? Connected to an onboard SATA port? Or is it connected to an HBA or some other way?
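For anyone wanting to verify this on their own system, here is a minimal shell sketch. The /mnt/cache path and the use of findmnt are assumptions about a typical Unraid setup, not taken from the post:

```shell
# Hypothetical sketch: check whether a mount's options include
# discard=async (cache path /mnt/cache is an assumption).
has_async_discard() {
  case ",$1," in
    *,discard=async,*) return 0 ;;
    *) return 1 ;;
  esac
}
if command -v findmnt >/dev/null 2>&1 && [ -d /mnt/cache ]; then
  opts=$(findmnt -no OPTIONS /mnt/cache)
  if has_async_discard "$opts"; then
    echo "async discard enabled"
  else
    echo "async discard not enabled; a scheduled fstrim may still help"
  fi
fi
# Manual trim, printing how much space was recovered:
#   fstrim -v /mnt/cache
```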
  2. I run trim hourly on my cache drive and have no issues with stuck threads or anything else. The world has been up 12 days since the last reboot and everything is fine. The one difference I do notice between us, though, is that my cache is formatted btrfs, not xfs. I'm not sure why it would matter, but perhaps that's the issue if it is somehow related to trim.
  3. I've been using this container for years and it's been great! First of all, thanks for all of them, as I use quite a few! Something has broken, though, and I'm not sure where or what it is. I was on the server yesterday messing with some commands for the chia container I had just installed and saw that SAB had just downloaded something, so I know it was working as late as yesterday afternoon. This morning when I logged in I noticed the container was stopped, which I thought was odd. Trying to restart it resulted in an immediate "execution error" popup. I tried looking at the logs, but it looked like the container was dying before it even started. Not sure what else to do, I deleted the container, thinking I would just download it again and everything would be fine, except when I try to pull the container down I get the following message and the pull fails:

Pulling image: binhex/arch-sabnzbd:latest
IMAGE ID [960334309]: Pulling from binhex/arch-sabnzbd.
IMAGE ID [701d67ccb854]: Already exists.
IMAGE ID [ebf9b61b3eda]: Already exists.
IMAGE ID [c295ce7a4387]: Already exists.
IMAGE ID [6295d38d4fc8]: Already exists.
IMAGE ID [61bd528f9496]: Already exists.
IMAGE ID [5ec61f70bff6]: Pulling fs layer.
IMAGE ID [02f939f4c3b7]: Pulling fs layer.
IMAGE ID [312948fc17be]: Pulling fs layer.
TOTAL DATA PULLED: 0 B
Error: open /var/lib/docker/tmp/GetImageBlob914982683: no such file or directory

**EDIT** Never mind; slowly over the course of today most of my containers stopped and would not restart. It appears something was corrupted somehow. I deleted the docker directory and was able to pull down all my containers again. Everything is back to 100% now...
  4. Because I will do a parity sync once everything is back in place, but mainly because copying 4TB+ of data without parity runs at about 160 MB/s, while with parity it's more like 70-90 MB/s. I've got backups of everything, so I really don't see the need to write parity for all that data when I'm going to sync it all after the copy anyway.
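A rough back-of-the-envelope check of the time difference, using the speeds quoted above (decimal units and a flat transfer rate assumed):

```shell
# Estimate copy time for 4 TB at the two observed speeds.
# 4 TB = 4,000,000 MB (decimal units assumed); integer math truncates.
tb=4
for speed in 160 80; do
  secs=$(( tb * 1000 * 1000 / speed ))
  echo "${speed} MB/s -> about $(( secs / 3600 )) hours"
done
```

So writing parity along the way roughly doubles a copy that already takes the better part of a day.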
  5. Thanks! And now that I've done this and can see what has happened, I'll mention it in case someone finds this post in the future: reformatting the drive essentially zeros out parity for that drive, as Unraid will not attempt to rebuild it after you format, so make sure you have a backup of what's on that specific drive and not just a full backup. I have now disabled my parity drive and am copying over the contents of what was on that specific drive. Once that is finished I will put the parity drive back in place and run a parity sync. Thank you for all the help!
  6. Yes, you did. And I thank you for the help, but at the risk of sounding like an idiot, basically I'm asking what I do after I format it. Do I recopy the data back to the same drive, put it back, and Unraid thinks everything's a-ok now? Do I put the drive I copied everything to into the old drive's spot and Unraid will figure it out? Or something entirely different that I'm not seeing? I guess I'm confused because telling me to format the drive doesn't help when you also said parity can't help me; if I format the drive, what fixes the problem of getting the data back safely? I hope all of this actually makes sense, thanks...
  7. Hi. On Sunday I had a freakout moment and noticed some data was missing (thread here). I followed the link posted in one of the responses, ran the restore command, and copied the data to a spare drive. I then went one step further and copied all the data off the array using Krusader to some USB drives. After doing all that I ran btrfs check --repair /dev/md4, and it found and corrected errors. The problem is that after rebooting, the system is still reporting the same errors on drive 4. I'm not really sure what to do at this point. I know I could just blow it up and restore from my backups, but I don't really want to go that route before exhausting the other avenues. Could I run btrfs check --repair again, power down the system, remove the drive, format a spare, copy the contents from the old drive (which I already backed up) onto the new drive, and then restart? Or something else? I'm stuck, as I'm not really sure where to go from here. I pasted the output of the btrfs check --repair command below:

The operation will start in 10 seconds. Use Ctrl-C to stop it.
10 9 8 7 6 5 4 3 2 1
Starting repair.
Opening filesystem to check...
Checking filesystem on /dev/md4
UUID: f10d28a7-144b-4b86-8489-c8be6efcc9f8
[1/7] checking root items
Fixed 0 roots.
[2/7] checking extents
parent transid verify failed on 87638016 wanted 25935 found 25928
parent transid verify failed on 87638016 wanted 25935 found 25928
Ignoring transid failure
bad block 87638016
ERROR: errors found in extent allocation tree or chunk allocation
[3/7] checking free space cache
[4/7] checking fs roots
parent transid verify failed on 87638016 wanted 25935 found 25928
Ignoring transid failure
(the above line repeated several hundred times and was removed)
Ignoring transid failure
Wrong key of child node/leaf, wanted: (65781, 1, 0), have: (256, 1, 0)
Wrong generation of child node/leaf, wanted: 25928, have: 25935
Deleting bad dir index [6934,96,5] root 5
Deleting bad dir index [63235,96,3] root 5
Deleting bad dir index [63235,96,4] root 5
Deleting bad dir index [63235,96,5] root 5
Deleting bad dir index [1264,96,18] root 5
Deleting bad dir index [1254,96,5] root 5
Deleting bad dir index [6934,96,6] root 5
ERROR: errors found in fs roots
found 4022054645760 bytes used, error(s) found
total csum bytes: 0
total tree bytes: 30851072
total fs tree bytes: 2523136
total extent tree bytes: 26148864
btree space waste bytes: 7056862
file data blocks allocated: 545990963200 referenced 544234590208

Thanks for any help!
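As a general note for readers who hit the same errors: btrfs check --repair is documented as a dangerous, last-resort operation. A safer order of operations is sketched below (the device /dev/md4 and target /mnt/spare paths are assumptions, and the filesystem should be unmounted):

```shell
# Hypothetical sketch: safer order of operations for a damaged btrfs
# filesystem (paths /dev/md4 and /mnt/spare are assumptions).
recovery_steps() {
  cat <<'EOF'
btrfs check --readonly /dev/md4        # 1) read-only check, no writes
btrfs restore -v /dev/md4 /mnt/spare   # 2) copy data off the damaged fs
btrfs check --repair /dev/md4          # 3) last resort: in-place repair
EOF
}
recovery_steps
```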
  8. I have not yet; most of them, if I'm reading right, simply copy the data to another drive. I plan to do that later this afternoon and then run the check --repair command; if it fails I'll nuke it all and copy everything back over. Either way it looks like I'll be copying all the data. I just don't have enough USB drives for a complete backup, so I plan to pick some up later this afternoon.
  9. I'm not sure at this point whether it would be easier to copy as much of the array as possible onto some USB drives and then just nuke it and start over, or nuke it and reload my backup, which is on a system I literally just moved to my sister's house a few weeks ago and will probably take at least a week to download... Sometimes this is all a little too much like actual work, lol...
  10. I've got a spare drive; can I just pull this drive, replace it with the spare, and then rebuild from parity? I recently moved my server into a new case and I'm wondering if something happened during that move (I very briefly attempted to use a different controller card and think that might have caused this). If I can just remove and replace the drive, I'll reformat it on another system, add it back to the server, and run preclear on it to see if there is actually a real issue with the drive or if I caused it.
  11. Attached is a copy of the diagnostics. I ran a btrfs scrub on drive 4 with the "repair corrupted blocks" option checked. It found 1 error and said it was uncorrectable. I'm still seeing:

Nov 9 00:00:42 Tower kernel: BTRFS error (device md4): parent transid verify failed on 87638016 wanted 25935 found 25928
Nov 9 00:00:42 Tower kernel: BTRFS error (device md4): parent transid verify failed on 87638016 wanted 25935 found 25928

spamming the log, and at this point I'm hoping someone else has dealt with this in the past and has some suggestions. Thanks. tower-diagnostics-20201108-2357.zip
  12. Today being Sunday, I was doing weekly housekeeping on my server and noticed that 2 directories on my server were suddenly empty!! The directories /downloads and TV/TV are empty; every file, subfolder, etc. is GONE! I have no drives in a degraded state and no errors showing on any of the drives. I'm currently running a parity check to be sure, but I highly doubt it will find anything, as one just ran on Nov 1st and everything was fine. I have only 3 user accounts on this server: the root account, a backup account, and a generic account. I control the passwords to all 3, no one else knows them, and they are 16-character, randomly generated passwords, so again, not something someone could get easily. No other data is missing or affected, just those 2 directories. The only apps on the server that access them are SABnzbd, Radarr, Sonarr, and DelugeVPN (I use the binhex containers for all of the above), but they also have access to a number of other directories that are fine. I access /TV/TV from my PC as well as 3 MiTV boxes in my house; they all use the generic account with a password to access it, and I rarely access the /downloads directory, and only from my PC. I'm not sure what to make of it at this point. It's VERY odd that everything inside these 2 directories is gone while the directories themselves are still there and just fine, and the free/used space remains exactly the same as if the files were still there. I was watching TV last night until after midnight, so I KNOW at least some of it was still there at that point. Is there a log or some other journal that tracks file deletions that I am unaware of that might show what happened? Thanks, and sorry for the long rambling post. I'm really scratching my head at this point as to what the heck happened, and frankly I'm slightly nervous about starting anything back up again until I figure out exactly what happened!

**EDIT** I can see that at least some, and hopefully all, of the files are actually still there if I use Krusader and look at the individual disks. I'm just not sure why the directory within the share shows empty, or how to fix it so that the files that are on the individual disk start showing within that directory on the actual share.

**EDIT 2** I see the log is being spammed with:

Nov 8 17:13:48 Tower kernel: BTRFS error (device md4): parent transid verify failed on 87638016 wanted 25935 found 25928
Nov 8 17:13:48 Tower kernel: BTRFS error (device md4): parent transid verify failed on 87638016 wanted 25935 found 25928
Nov 8 17:13:48 Tower kernel: BTRFS error (device md4): parent transid verify failed on 87638016 wanted 25935 found 25928

I assume this means something happened to device md4. After I figure out which device md4 is, would a btrfs scrub command fix the issue, if anyone knows? I'm a bit hesitant to just start trying stuff. While I do have backups of everything, it would be a REAL pain to restore 12TB+ of data...
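On the "which device is md4" question: on Unraid, /dev/md4 corresponds to array Disk 4, mounted at /mnt/disk4. A hedged sketch for tallying these errors per device from the syslog (the /var/log/syslog path is an assumption):

```shell
# Hypothetical helper: count btrfs errors per md device in a syslog
# file (default log path is an assumption about the system).
count_transid_errors() {
  grep -o 'BTRFS error (device md[0-9]*)' "$1" | sort | uniq -c | sort -rn
}
[ -r /var/log/syslog ] && count_transid_errors /var/log/syslog || true
# On Unraid, /dev/md4 is array Disk 4 (mounted at /mnt/disk4); a
# read-only scrub of that disk would be:  btrfs scrub start -Br /mnt/disk4
```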
  13. Is anyone else having an issue with the container suddenly not letting their drives spin down anymore? Sometime between now and the last update I noticed that my drives were no longer spinning down. After spending the better part of this weekend trying to track it down, I've discovered that it is this container. If I spin my drives down manually, after about 35 to 50 seconds the container makes a write to the array, in a specific order and size every time. A few seconds later the remaining drives spin up and it makes another write. If I spin them down again, the exact same thing happens. This is a fairly new issue, as in the past I have always been able to spin the array down without trouble. Did something change? Is anyone else seeing the same behavior?
  14. The RTL8117 on that board is for management only, using ASUS's Control Center software. You should be using the Intel NIC as the default network connection (eth0) on your setup.
  15. Nevermind, I figured out what the issue was. User error =(