Varean


  1. I have a remote syslog server set up currently. I took a copy of the log covering the window from shortly before I connected to the Wireguard VPN until after the crash, named it 'rsyslog', and put it in the same directory as the actual syslog from the startup afterwards. I can post the entire remote syslog file if you'd like.
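     (A minimal sketch, not from the original post, of one way to slice a classic-format syslog down to such a window; the year and the start/end times below are assumptions, since syslog timestamps omit the year.)

        from datetime import datetime

        # Keep only the window between connecting to the VPN and the crash.
        # The start/end times here are made-up examples.
        START = datetime(2024, 4, 9, 17, 30)
        END = datetime(2024, 4, 9, 18, 10)

        with open("rsyslog", encoding="utf-8", errors="replace") as src, \
                open("rsyslog.window", "w", encoding="utf-8") as dst:
            for line in src:
                try:
                    # Classic syslog lines start with e.g. "Apr  9 18:07:01"
                    stamp = datetime.strptime(line[:15], "%b %d %H:%M:%S").replace(year=2024)
                except ValueError:
                    continue  # skip lines without a leading timestamp
                if START <= stamp <= END:
                    dst.write(line)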
  2. Hi @JorgeB, sorry to necro this thread, but the issue happened again and I noticed the syslog shows the same general protection fault. I checked your link, but unfortunately I am running an Intel system, so it may not apply. I do think it may have failed shortly after I connected to the server over the Wireguard VPN tunnel I had configured for my phone, to check the uptime/status of the server and of a pre-clear I was running on an unassigned device. I saw an error in my remote syslog similar to this thread. I've provided my diagnostics and included a copy of my remote syslog file named 'rsyslog'. In the meantime I've disabled Wireguard on the server to avoid connecting to it. alexandria-diagnostics-20240409-1807.zip
  3. I think all of these issues came about after the upgrade from 6.9.2 to 6.12.8. I upgraded because some plugins couldn't update until I was on a newer version of Unraid, and also because my Plex database got corrupted after a failed appdata backup. Since reformatting my cache drive it hasn't been forced into read-only mode, but it hadn't been running long before I started getting other errors, which were fixed by JorgeB suggesting I switch my Docker network from macvlan to ipvlan. So far 3 1/2 days of uptime, so I am hopeful.
  4. My Unraid has now been stable for the past 3 days after making the change you suggested, which is longer than it normally manages. I appreciate it, and sent some beer money your way for all the help.
  5. I just recently upgraded from 6.9.2 to 6.12.8 and started running into this same issue. The symptoms are basically that the system becomes unresponsive, and I can't even SSH into it or ping it. Today, after letting it sit for about 3 hours while it was doing a parity check, I tried to load the GUI and got a 404 nginx error; restarting the syslog service allowed me to load the page correctly. Once parity is finished I was going to downgrade back to 6.9.2, since it was much more stable, and see if the issue persists. Is there a way to monitor your log folder size? I am never able to see whether it's getting too full before I just have an issue - and what script did you use to increase its size?
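     (The script asked about above isn't quoted in the thread; purely as a rough illustration, a check like this could watch the log folder, assuming Unraid's default /var/log tmpfs mount and an arbitrary 80% warning threshold.)

        import shutil

        # Unraid keeps /var/log on a small tmpfs, so a chatty service can
        # fill it and break the GUI; this reports how close it is to full.
        LOG_PATH = "/var/log"   # assumed default mount point
        WARN_FRACTION = 0.8     # arbitrary example threshold

        usage = shutil.disk_usage(LOG_PATH)
        fraction = usage.used / usage.total
        print(f"{LOG_PATH}: {usage.used / 2**20:.1f} MiB of "
              f"{usage.total / 2**20:.1f} MiB used ({fraction:.0%})")
        if fraction >= WARN_FRACTION:
            print("WARNING: log filesystem is nearly full")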
  6. For the last few weeks I've been running into issue after issue: first my cache drive kept being forced into read-only mode, and now my Unraid server can't seem to go more than a day or two before it just freezes up and requires an unclean shutdown. I may have fixed my cache drive issue, but because of this new symptom the system will not run long enough to tell before I have to physically reboot it. Steps performed:
     - I am unable to ping the server, and cannot get it to respond to SSH
     - I have run MEMTEST 5 times and have not received any errors
     - Nothing is displayed on the external monitor hooked up to it
     I have downloaded my diagnostics file, and as I have a remote syslog server set up I have included a copy of the syslog from that server, since it has the data up until the failure. It's in the same directory as the syslog file, but it is named 'rsyslog.log'. alexandria-diagnostics-20240405-2011.zip
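     (As an aside, not part of the original post: one way to pin down the moment of a freeze like this is a ping loop run from another machine, so the last "up" line brackets when the server stopped responding. The host address below is a placeholder.)

        import subprocess
        import time
        from datetime import datetime

        HOST = "192.168.1.10"   # placeholder address for the Unraid box

        # Log reachability once a minute; run this from a second machine.
        while True:
            result = subprocess.run(
                ["ping", "-c", "1", "-W", "2", HOST],
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
            )
            status = "up" if result.returncode == 0 else "DOWN"
            print(f"{datetime.now().isoformat(timespec='seconds')} {status}", flush=True)
            time.sleep(60)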
  7. Kind of a necro, but is anyone seeing this issue with drives bigger than 10TB? I'm on 6.12.8 and have a 16TB IronWolf I am pre-clearing, and I'm wondering if I'll need to run this process on that disk as well. I had to do it on my 8TB drives and it worked.
  8. I'm going to try two different things today; worst case, they don't work. I've moved everything off of my cache drive onto my array, then reformatted the drive from btrfs to xfs, and now I am moving everything back onto it. Even after formatting, Unraid was still reporting about 3GB of usage on it? Not sure if that indicates an issue. Then, once I get the chance this evening, I'll power it down and pull one of the RAM sticks to see. A friend suggested re-seating the CPU, but I don't think that could really be the issue. In the morning I'll see if the Docker containers remain functional or if the cache drive goes read-only again.
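     (A hedged sketch of how one might verify the copy back to the pool; both paths below are hypothetical. Comparing checksums between the staging copy on the array and the reformatted cache would catch anything that didn't transfer cleanly.)

        import hashlib
        from pathlib import Path

        SRC = Path("/mnt/user/cache_staging")  # hypothetical copy on the array
        DST = Path("/mnt/cache")               # freshly formatted pool

        def sha256_of(path, chunk=1 << 20):
            """Hash a file in chunks so large files don't exhaust memory."""
            h = hashlib.sha256()
            with open(path, "rb") as f:
                while block := f.read(chunk):
                    h.update(block)
            return h.hexdigest()

        for src_file in SRC.rglob("*"):
            if not src_file.is_file():
                continue
            dst_file = DST / src_file.relative_to(SRC)
            if not dst_file.is_file():
                print(f"MISSING  {dst_file}")
            elif sha256_of(src_file) != sha256_of(dst_file):
                print(f"MISMATCH {dst_file}")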
  9. I agree; I spent about a week trying to find a solution on my own before posting. Luckily the hardware was all brand new when I bought it (the mobo is an ASUS Prime Z690). The order of operations was: backup of my appdata failed and Plex told me I had a corrupted database > recreated my Plex database > updated my Unraid version > had to force-update each Docker container as they wouldn't launch > after about a week and a half I started getting errors and was unable to play back media in Plex. Restarting the containers then gave an execution error because my cache drive was in read-only mode.
  10. Understood, and to provide additional context: this system will have been running for two years in the next two months, so I would presume that if there were an issue with the memory controller it would have reared its ugly head sooner rather than later. Part of me thinks that even though the cache drive passed its SMART test, it might still be related? I know it all started when my CA Backup/Restore Appdata plugin ran overnight and I woke up to a bunch of errors about the cache drive being full, and then about a week or two later these issues started popping up.
  11. I rebooted and performed a memtest, which passed. I also verified that the cache drive cable is seated correctly at both the drive and the motherboard. (Attached new diagnostics.) alexandria-diagnostics-20240326-1823.zip
  12. For the past week, my cache drive has been forced into read-only mode about 3 times. Looking at the logs, I'm wondering if it's an issue with my RAM based on an error I'm seeing, but I can't be quite sure, as it only appears after the cache drive has been forced into read-only mode. I thought maybe the SSD had failed, but the SMART tests are showing accurate data. The time in between it being forced into read-only mode also varies; it mostly appears to happen overnight. alexandria-diagnostics-20240325-1729.zip
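     (For what it's worth - a sketch, assuming the saved syslog sits in the current directory - a quick filter like this could pull out the timestamps of the read-only/BTRFS events so they can be compared against whatever runs overnight, such as the mover or scheduled backups.)

        import re

        # Print the lines around the pool going read-only; their timestamps
        # can then be compared against overnight schedules.
        PATTERN = re.compile(r"read-only|BTRFS|I/O error", re.IGNORECASE)

        with open("syslog", encoding="utf-8", errors="replace") as f:
            for line in f:
                if PATTERN.search(line):
                    print(line.rstrip())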
  13. Attached is the diagnostics file. I don't believe the system has gone down since the error, but it wasn't due to power loss. SkyNet-diagnostics-20221002-1509.zip
  14. I have an 8TB Seagate IronWolf drive, and Unraid is showing notifications of read errors on this disk, but I ran the extended SMART test and it reported that it passed. I've had the drive powered on for about 5 months and ran it through 3 pre-clear cycles, all of which passed. It doesn't seem to have gotten worse over the 3-4 days since it was first reported. I'm not sure how to proceed - should I RMA the drive to Seagate under warranty and swap in my backup, or would it be safe to keep using it with precautions? ST8000VN004-2M2101_WSD4EWR1-20221002-1318.txt syslog.txt
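     (A side note, not from the original post: a passing SMART self-test can coexist with worrying raw counters, so a common sanity check is to look at the reallocated/pending sector attributes directly. A sketch using smartctl, with /dev/sdX as a placeholder for the real device.)

        import subprocess

        # A passing self-test says little about sectors the drive has already
        # remapped or is waiting to remap, so inspect the raw counters too.
        ATTRS = ("Reallocated_Sector_Ct", "Current_Pending_Sector",
                 "Offline_Uncorrectable", "UDMA_CRC_Error_Count")

        out = subprocess.run(
            ["smartctl", "-A", "/dev/sdX"],   # substitute the real device node
            capture_output=True, text=True,
        ).stdout
        for line in out.splitlines():
            if any(attr in line for attr in ATTRS):
                print(line)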