noja's Achievements


Community Answers

  1. Thanks for the insights @trurl. I had restarted Docker prior to downloading the diagnostics to see if things responded better after shutting it down. I have no idea why the docker.img file was on the array, as the system share is set to Prefer:Cache. In any case, I shut down Docker again, moved it back to the SSD, and things quickly returned to normal. So thank you for that heads up! One thing that keeps spamming my logs, though:

     May 24 12:20:09 AVASARALA kernel: docker0: port 25(veth203c48e) entered blocking state
     May 24 12:20:09 AVASARALA kernel: docker0: port 25(veth203c48e) entered forwarding state
     May 24 12:20:09 AVASARALA kernel: docker0: port 25(veth203c48e) entered disabled state

     It's more than just port 25, too. Should I try to track down why that's happening?
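For anyone else chasing a docker.img that wandered onto the array: a quick shell scan like the one below can confirm where copies of the image actually live. This is a sketch, not from the original thread; it assumes Unraid's standard /mnt/cache and /mnt/disk* layout, and ROOT is overridable so it can be tried safely elsewhere.

```shell
#!/bin/sh
# Sketch (assumed paths): look for copies of docker.img on the cache pool and
# on every array disk. On a stock Unraid install the system share appears
# under /mnt/cache and /mnt/disk*; override ROOT to scan a different tree.
ROOT="${ROOT:-/mnt}"
for d in "$ROOT"/cache "$ROOT"/disk*; do
  if [ -e "$d/system/docker/docker.img" ]; then
    echo "found: $d/system/docker/docker.img"
  fi
done
```

With the Docker service stopped, mover (or a manual copy) can then put the image back on the cache, as described above.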
  2. Docker is essentially refusing to work right now. All of a sudden today, containers started crashing one by one and the whole GUI got bogged down. I turned off the Docker service and life got a lot better. Before I swap it out, can someone help me understand whether my SSD is failing and might be the culprit? Thanks for any help!
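For readers hitting the same wall: smartmontools is the usual first triage on a suspect SSD. A guarded sketch follows; /dev/sdX is a placeholder device node (find the real one with lsblk or on the Unraid Main page), and the script exits quietly on machines where it can't run.

```shell
#!/bin/sh
# Hedged sketch: DEV is a placeholder; substitute your cache drive's node.
DEV="${DEV:-/dev/sdX}"
command -v smartctl >/dev/null 2>&1 || { echo "install smartmontools first"; exit 0; }
[ -e "$DEV" ] || { echo "device $DEV not found"; exit 0; }
smartctl -H "$DEV"   # overall health verdict (PASSED/FAILED)
smartctl -A "$DEV"   # attribute table: watch reallocated sectors and wear counters
```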
  3. Fix Common Problems identified that I have an MCE error. I took a look at my logs and found this:

     1 CE memory scrubbing error on CPU_SrcID#1_Ha#0_Chan#1_DIMM#0 or CPU_SrcID#1_Ha#0_Chan#1_DIMM#1 or CPU_SrcID#1_Ha#0_Chan#1_DIMM#2 (channel:1 page:0xd7237f offset:0x0 grain:32 syndrome:0x0 - area:DRAM err_code:0008:00c1 socket:1 ha:0 channel_mask:2 rank:255)

     First: is that going to be the source of the error? I didn't have mcelog installed from NerdPack, so I'm not sure how to check. Second, assuming it is the source: would the highlighted slots be the ones to pull in order to find the culprit?
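For anyone without mcelog handy, the kernel's EDAC subsystem exposes per-DIMM corrected-error counters in sysfs, which can show which channel/DIMM is accumulating errors before you start pulling modules. A hedged sketch: it requires an EDAC driver to be loaded, and older kernels report per-csrow rather than per-dimm.

```shell
#!/bin/sh
# Hedged sketch: print corrected-error counts from the EDAC sysfs tree.
# Exits quietly on hosts without an EDAC driver loaded.
EDAC=/sys/devices/system/edac/mc
[ -d "$EDAC" ] || { echo "no EDAC sysfs on this host"; exit 0; }
for f in "$EDAC"/mc*/dimm*/dimm_ce_count "$EDAC"/mc*/csrow*/ce_count; do
  [ -f "$f" ] && printf '%s: %s\n' "$f" "$(cat "$f")"
done
exit 0
```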
  4. Just ran into this problem as well. I believe my culprit was Ferdi, which I added Unraid to a little while ago. Never thought that having it open in Ferdi for a week straight would cause me issues. However, a "quick" reboot and I'm back to normal. I have since removed the Unraid service from Ferdi, and the log file is back to a manageable level.
  5. To anyone looking for this in the future, I never did sort out why the networking wouldn't work. My solution was to create a new VLAN for my servers like I had been planning for a long time. Setting the static IPs over to the new VLAN flushed whatever issues arose and everything works fine. I had to connect directly to the servers and reboot in GUI mode and I set the IPs that way. I have to assume that if I switched the IPs back to the original VLAN, life would work again, but I'm not going to test that now that I've set things up the way they should have been done in the first place.
  6. Thanks for the help! So I've set the VLAN to 1, but how do I change the "main" interface to something else? My "main" interface isn't a VLAN.
  7. I got a new pfSense firewall installed today and had to change the previous LAN into a VLAN with a tag. The new LAN is exactly the same so that all my static IPs could stick around, but now it has a VLAN tag where it didn't before. Both of my Unraid servers are fully accessible via SSH, and the NFS/SMB connections are working fine, but the docker containers and GUI are inaccessible. Is there a config file I missed that I can edit through SSH? The logs say something like:

     Dec 30 13:56:09 SERVER-NAME nginx: 2021/12/30 13:56:09 [error] 32152#32152: *19991992 auth request unexpected status: 502 while sending to client, client: 172.16.XXX.XX, server: , request: "GET /Dashboard HTTP/2.0", host: "LOTSOFLETTERS&"
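For anyone else locked out of the GUI after a network change: Unraid is Slackware-based and persists its network settings on the flash drive, so they can be reviewed and edited over SSH. A hedged sketch; the file location is standard Unraid, but the exact keys in your file may differ, so verify before editing.

```shell
#!/bin/sh
# Hedged sketch: inspect the persisted network settings over SSH.
CFG=/boot/config/network.cfg
[ -f "$CFG" ] || { echo "no $CFG here (not an Unraid box?)"; exit 0; }
cat "$CFG"              # look for IPADDR, GATEWAY, and VLAN-related entries
# After editing with nano/vi, apply by restarting networking or rebooting:
# /etc/rc.d/rc.inet1 restart
```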
  8. I had this issue too for a while. It turned out my Unassigned Devices NFS connections were the issue. When I disconnected them before the reboot, life was good. Not sure what the permanent fix was, but eventually reboots began happening as normal.
  9. Ahh, I didn't know that. Thank you for the direction! Parity-sync is going and I'll update if any issues arise.
  10. Hi - I finally got around to adding a second parity drive for the first time over the weekend. Parity-sync completed successfully and I didn't have any issues. I took a look at the server this morning and noticed that happy red X next to my brand new parity drive. Oddly, I can still run SMART tests, which suggests its SMART report is kosher. I then stopped the array and reseated my cables, but that did not help. More searching through the forum suggested there might be an issue with my brand new Ironwolf 8TB drive in this post; however, the applied fix and subsequent reboot haven't fixed the issue. The drive is connected to my X8DTL-F through an LSI HBA, which connects to an HP SAS expander and then into the drive. Any direction towards a fix would be amazing. Thank you all!
  11. Hello! I just added a second parity for the first time ever, and of course I think I ended up with this problem. It's my only ST8000VN004 drive (lots of ST4000 drives with no issues otherwise). It appears that Seagate has changed their folder structure a bit, and I'm unsure which route to take. Right now the folder structure in the zip is:

      Linux -> RAID or Non-RAID -> centos-7-x86_64 or centos-7_aarch64

      Not sure which one I should be grabbing to start this process. Thanks for any help!
  12. Ran into an issue today where I had a Synology share mounted under UD. I had manually shut down the Synology, but forgot to unmount the shares from Unraid first. UD would not allow me to unmount the share until I turned the Synology back on. Additionally, I got impatient and decided to just reboot the server; however, a graceful shutdown and reboot was blocked by that unmount. Turning the Synology back on finally allowed the server to reboot. Weird stuff.
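When the server that exported the share is already off, a lazy unmount (or, failing that, a forced one) usually releases the hang so the shutdown can proceed. A sketch under stated assumptions; the mount point below is a placeholder.

```shell
#!/bin/sh
# Hedged sketch: MNT is a placeholder mount point for the unreachable NFS share.
MNT="${MNT:-/mnt/remotes/SYNOLOGY_share}"
mountpoint -q "$MNT" || { echo "$MNT is not mounted"; exit 0; }
umount -l "$MNT" || umount -f "$MNT"   # lazy detach first, force as fallback
```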
  13. Don't think so. I think I understand what trapexit is arguing, but there has to be a better solution. I've been using autofs on an Ubuntu setup and it still ends up with stale file handles all the time. I should note that I have hard links off and no cache for the share. The only other option I can see is the Tunable (fuse_remember) setting under Settings -> NFS, but the warning about out-of-memory errors has me a little skittish about setting it to -1.
  14. Hey! So I've basically decided to give up on cloning the contents of the SSD. What I've done is remap the folder where Plex itself stores its automated backups to an NFS share from my backup location. Essentially, I added an NFS share to the container called "/databasebackups". Then, in Plex itself, I told it to back up to that folder. I now have a backup of the database including watch stats, libraries, etc. - but if that SSD ever takes a dive, I'll have to re-download all the images, posters, and metadata. Does that make sense? Given that I'm kinda comfortable with that, I've also started using the same method for all my programs that have an integrated backup feature - Sonarr, Radarr, etc. While those are also covered by CA Appdata Backup, I like the idea of being able to restore one item at a time, rather than restoring the entire appdata folder, which CA forces you to do.
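The extra mapping described above amounts to a second bind mount on the container. A hypothetical docker-compose fragment sketches the idea; the image name, host paths, and share names are illustrative assumptions, not from the original post.

```yaml
# Hedged sketch: a second volume, /databasebackups, backed by NFS-mounted
# storage on the backup NAS; Plex's scheduled-backup setting is then pointed
# at /databasebackups inside the container.
services:
  plex:
    image: plexinc/pms-docker          # illustrative image name
    volumes:
      - /mnt/user/appdata/plex:/config
      - /mnt/remotes/backup_nas/plex:/databasebackups
```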