
guyonphone

Members
  • Content Count: 238
  • Joined

  • Last visited

Community Reputation: 2 (Neutral)

About guyonphone

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed


  1. Just want to give an update. After putting the 6TB back in and rebuilding parity, I booted the array into maintenance mode and ran 'xfs_repair -n'. It reported that the drive had valuable journal logs that needed to be written back and that I should try to mount the drive first. I tried, but the drive would not mount, so I ran 'xfs_repair -L' and zeroed the log. I was then able to mount the drive, and it doesn't look like I suffered much, if any, corruption. Once my array was healthy I replaced the 6TB drive with the 14TB drive, and it is rebuilding now.
     As for my CPU temps: I took out the old air compressor with a moisture trap and hit the fins with 120 PSI. Honestly, the heatsink wasn't that dusty (no buildup between the fan and the fins), so I think my heatsink just isn't up to the task of properly cooling this CPU (it's a low-profile Noctua heatsink/fan). I will look into getting a beefier one. Thanks for your help, Johnnie Black and turl!
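     For reference, the sequence I ran from maintenance mode was roughly the following (md19 is my disk number; anyone else should substitute their own, and only zero the log as a last resort):
        # read-only pass first: report problems without writing anything
        xfs_repair -n /dev/md19
        # it warned the journal needed to be replayed by mounting the disk first;
        # the disk would not mount, so -L zeroes the log and forces the repair
        xfs_repair -L /dev/md19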
  2. OK, my problem has come back, so let me try to explain it as clearly as possible.
     Originally: I had some issue that corrupted the filesystem on drive md19. I hadn't fully realized this, and I pulled the drive to replace it with a larger one. After that my array showed I had lost a bunch of files. I stopped the rebuild, pulled the new drive (14TB), put the old drive (6TB) back in, did a New Config, and started the rebuild again. All the files that had been lost on that drive suddenly returned.
     Now: The array is rebuilding using the original 6TB drive that was md19. I checked my system and I see that I have lost a bunch of files again. It looks like this is the culprit:
     May 22 06:39:46 Unraid kernel: XFS (md19): xfs_do_force_shutdown(0x8) called from line 439 of file fs/xfs/libxfs/xfs_defer.c. Return address = 00000000e39b5244
     May 22 06:39:46 Unraid kernel: XFS (md19): Corruption of in-memory data detected. Shutting down filesystem
     May 22 06:39:46 Unraid kernel: XFS (md19): Please umount the filesystem and rectify the problem(s)
     My array is currently rebuilding parity. What is the best way to continue forward without losing my data?
     1. Should I allow the array time to finish rebuilding and then try an XFS repair? Currently md19 shows as empty (no files), but the array sees that it is full of data.
     2. Should I stop the rebuild and try to run an XFS repair on the emulated disk?
     3. Should I stop the rebuild, pull the drive, and see if I can copy the data off it using some third-party tool?
     I know Tee-Tee Jorge said I should back up the data and format the drive, and I want to do what he says, but what's complicating things for me is the rebuild that is already running.
     Other questions: Why is "Corruption of in-memory data" happening? It looks like the answer is just XFS corruption, not RAM. Will the XFS corruption cause issues with the array rebuild? My guess is no, since parity is built from raw values.
     unraid-diagnostics-20200522-1305.zip
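     For option 2, I assume the read-only check on the emulated disk would be run from maintenance mode with something like this (md19 is my disk number; -n only reports and changes nothing):
        xfs_repair -n /dev/md19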
  3. Ha ha! I've been around on the forums for a while, just not necessarily advanced in the Unraid technical sense. Right now I have a minimum free setting of 15GB, usually 20GB. What can I say, I trim it down when I'm running out of space to squeeze as much out of things as I can; drives are expensive. However, 20GB or even 50GB free on a 14TB drive is still going to show as 100% full once you've filled the drive to that level. Currently the disk I have with the least amount of free space has 6GB free.
  4. Hello Constructor, what is the correct amount of capacity to keep free so that writes to my array are safe? Thanks
  5. Hi Tee-Tee Jorge, I'm NOT having corruption over and over on the same drive; it has been a different drive every time. First it was my docker image on my cache drive, then it was the cache drive itself running XFS, and now it's md19 showing this issue, which is emulated and in the process of being rebuilt as we speak. I am receiving the following in my logs:
     May 21 17:35:52 Unraid emhttpd: error: get_fs_sizes, 6412: Input/output error (5): statfs: /mnt/user/lost+found
     I have noticed I now have data missing from my array. You helped another user with these issues, and I appear to be having exactly the same ones. I am going to put the old disk back in and perform a New Config. I have attached my current logs; if you could look them over I would appreciate it, as I don't know how to determine what is happening.
     unraid-diagnostics-20200521-1739.zip
  6. RAM tested good. My array is rebuilding right now; what steps should I take next?
  7. My array is in a degraded state right now because I was upgrading a drive from 6TB to 14TB, and this error occurred during the rebuild process. I will do as you ask; I appreciate the help.
  8. I have launched Memtest on the machine from a USB drive with just Memtest86+ on it. I'll report back on the memory test status. If anyone is kind enough to interpret my logs and see what might be wrong, I would truly appreciate it.
  9. Hello, I have had some on-and-off issues recently with XFS corruption on my cache drive. I repair the corruption with 'xfs_repair -v' on the drive, and then the issue comes back a week later. I just replaced a drive and now I am having some strange issues I don't know how to solve, and they are not XFS corruption on the cache drive: it's showing a problem with md19, which is the drive I'm trying to replace. Please see my attached diagnostics. Previous post of issues: unraid-diagnostics-20200520-1051.zip
     I decided it would be a good idea to run Memtest. However, when I choose it from the Unraid boot menu, it immediately reboots my server, so I am unable to load Memtest.
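     For context, the repair I keep re-running on the cache is essentially this, with dockers/VMs stopped so the cache can be unmounted first (the NVMe device path here is just illustrative; yours will differ):
        umount /mnt/cache              # cache must not be mounted during the repair
        xfs_repair -v /dev/nvme0n1p1   # -v = verbose; substitute your own cache device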
  10. It looks like I am still having problems; going to try the suggestion found here: https://forums.unraid.net/topic/66749-xfs-file-system-corruption-safest-way-to-fix/
      Actually, I don't know which device is having the XFS problem. md5 is a reiserfs disk, and I unfortunately have a mix of XFS and reiserfs, so I need help on what to do next. Going to attach diagnostics.
      Update: I think it's my NVMe cache drive that is having issues, which is a bummer. Going to see if I can repair it, but I think the drive itself might be having problems; I think it overheated.
      After running a repair it appears as though my issues have been solved.
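      For anyone wondering how to narrow down which device is throwing the errors, grepping the live syslog for XFS messages is one quick way; roughly:
         # pull the most recent XFS-related messages out of the syslog
         grep -i xfs /var/log/syslog | tail -n 50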
  11. Hello, my server froze on me. I was able to pull the syslog; see attached. I rebooted the server, but Unraid wouldn't load the GUI, so I ended up pulling the USB drive and putting it into a Windows computer to repair it. After that I was able to boot the server, and it appears to be running fine now. The logs say that device md5 has corruption; I'm interested in knowing whether I should do anything to check the health of the server.
      Syslog: https://www.dropbox.com/s/mmmnz32pmqhzl9y/syslog.txt?dl=0
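      Since md5 is a reiserfs disk, my understanding is that the read-only health check for it would be something like the following from maintenance mode; I have not run the fix step:
         # read-only consistency check of the reiserfs disk
         reiserfsck --check /dev/md5
         # only if --check reports fixable problems:
         # reiserfsck --fix-fixable /dev/md5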
  12. I had the same issue as you; I also had to update my .ovpn file from the console. Something is clearly wrong with the Docker container.
  13. Ah! Shoot, yes, you are right: I did change my MTU setting to jumbo frames a while back. I am using 10 gigabit, so it has really been helping my speeds, and so far there hadn't been any negative consequences. I take it that's the cause? The thing that gets me here is that MTU is layer 2, and I have a firewall. When unRAID sends packets to my firewall, the frame size is 9000, but the firewall should be tearing down those Ethernet frames and rebuilding them at 1500 MTU with its own MAC address before sending them on to the cable modem, and on the return path the MTU should be 1500 from the modem to the firewall to unRAID. I guess I don't exactly get why this is hanging up the plugin; unRAID otherwise has perfectly fine access to the internet. Thanks!
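      If jumbo frames do turn out to be the cause, checking the current setting and temporarily dropping it back to 1500 to test should be something like this (br0 is my bridge interface; names vary by setup, and the change is not persisted across reboots):
         ip link show br0               # confirm the current mtu value
         ip link set dev br0 mtu 1500   # temporary test only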
  14. Not that I can tell or am aware of. I can browse to https://lsio.ams3.digitaloceanspaces.com/ in a browser and I get an XML page, so I don't believe I am being blocked.
  15. I do have a Pi-hole, and I did see some others having issues with that, but... I checked the query log, refreshed it after clicking on the plugin, and don't see the Pi-hole blocking anything from *.digitaloceanspaces.com. My unRAID network settings are not set to use the Pi-hole for DNS; I have unRAID set to use 1.1.1.1. I also disabled the Pi-hole for 5 minutes and tried again, with the same issue.
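      If it helps, similar checks can be run straight from the unRAID console to rule out DNS or blocking, roughly like this (1.1.1.1 is just the upstream resolver I use):
         # resolve the plugin host directly against 1.1.1.1, bypassing the Pi-hole
         nslookup lsio.ams3.digitaloceanspaces.com 1.1.1.1
         # confirm the server itself can reach the host over HTTPS
         curl -I https://lsio.ams3.digitaloceanspaces.com/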