noja

Members · 124 posts
Everything posted by noja

  1. I just had something similar start happening for me as well. I'd get as far as entering the credentials for the user I created on Unraid, but then it would ask me to select a certificate to use for the share. Since I didn't have any certs set up, I used the address that the My Servers plugin defaults to, since it has a LetsEncrypt cert running. Connecting to that ij5lk2j4j52j34k2k3l4jl23.unraid.net address now lets me log in with no issues under the un/pw I expected.
  2. Hey! I'm having a hell of a time trying to figure out how to import my LastFM csv. The instructions on GitHub give an import command to run, but I have no idea how to start the container in "shell mode" on Unraid. I've tried taking my CSV from https://benjaminbenben.com/lastfm-to-csv/ and importing it while Maloja is running, but it errored out on every single line. I'm definitely feeling a little stuck on how to get all my history imported. (If it helps, I do have Multi-Scrobbler up and running well and connecting to my Maloja instance - but I don't see a way from that app to import LastFM history either.) Thanks!
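For anyone who lands here later: as far as I can tell, "shell mode" just means opening an interactive shell inside the running container with `docker exec`. A rough sketch, assuming the container is named `maloja` and the CSV has been copied into a folder that's mapped into the container (both the container name and the path here are assumptions - check yours with `docker ps` and your template's path mappings):

```shell
# Open an interactive shell inside the running container
# (container name "maloja" is an assumption; verify with `docker ps`)
docker exec -it maloja sh

# Inside the container, point the importer at the CSV
# (adjust the path to wherever your file is actually mapped)
maloja import /mlj/import/lastfm.csv
```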
  3. Of course life is back to normal now - thank you! That being said, starting the array seemed like a bit of a chore, and Firefox asked me if I wanted to "reconfirm my form submission", meaning something timed out while the array was starting. I think I'll go through and just remove any older plugins that I haven't used in a while and see what's up. During boot, though, I am getting this log. Any chance that nvidia driver is causing problems? Thanks again!
  4. Ok! Safe mode has fixed the issue where the array wouldn't appear to start. Thank you! What's the best method to track down which plugin might have been acting up? manky-dreadful-diagnostics-20220802-0744.zip
  5. Found my server this morning with the array offline after running a parity check last night (FWIW this is the first parity check since I upgraded to 6.10.3). I didn't see anything missing or otherwise concerning, other than a log about disk 8 having some errors that it's had for like forever. The really weird thing is that when I tried to start up the array again, docker started up and seems to work fine, but the array still won't start and there are no disks reported missing. It's still giving me Stale Configuration though. Should I just set a new configuration? Thanks for any help! manky-dreadful-diagnostics-20220801-1426.zip
  6. Thanks for the insights @trurl. I had restarted docker prior to downloading the diagnostics to see if things responded better after shutting it down. I have no idea why the docker.img file was on the array, as the system share is set to Prefer:Cache. In any case, I shut down docker again, moved it back to the ssd, and things quickly went back to normal. So thank you for that heads up! One thing that keeps spamming my logs, though, is:
     May 24 12:20:09 AVASARALA kernel: docker0: port 25(veth203c48e) entered blocking state
     May 24 12:20:09 AVASARALA kernel: docker0: port 25(veth203c48e) entered forwarding state
     May 24 12:20:09 AVASARALA kernel: docker0: port 25(veth203c48e) entered disabled state
     It's more than just port 25 too. Should I try and track down why that's happening?
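Those messages come from the docker0 bridge as container veth interfaces go up and down. If anyone wants to figure out which container owns a given veth, you can pull the interface name out of the log line and then match it against the containers; a sketch (the log line is the one from my post, and the container-matching part is left as comments since it assumes a running Docker host):

```shell
# Grab the veth name out of one of the repeated kernel log lines
line='May 24 12:20:09 AVASARALA kernel: docker0: port 25(veth203c48e) entered blocking state'
veth=$(echo "$line" | grep -o 'veth[0-9a-f]*')
echo "$veth"

# To map it to a container on a live Docker host (sketch):
#   cat /sys/class/net/$veth/iflink        # host-side veth's peer index
# then find the container whose eth0 ifindex matches, e.g.:
#   docker exec <container> cat /sys/class/net/eth0/ifindex
```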
  7. Docker is essentially refusing to work right now. All of a sudden today my containers just started crashing one by one and the whole GUI started getting bogged down. I turned off the docker service and life got a lot better. Before I swap it out, is someone able to help me understand if my ssd is failing and might be the culprit? Thanks for any help! avasarala-diagnostics-20220523-2010.zip
  8. Fix Common Problems identified that I have an MCE error. I took a look at my logs and found this: 1 CE memory scrubbing error on CPU_SrcID#1_Ha#0_Chan#1_DIMM#0 or CPU_SrcID#1_Ha#0_Chan#1_DIMM#1 or CPU_SrcID#1_Ha#0_Chan#1_DIMM#2 (channel:1 page:0xd7237f offset:0x0 grain:32 syndrome:0x0 - area:DRAM err_code:0008:00c1 socket:1 ha:0 channel_mask:2 rank:255) First - is that going to be the source of the error? I didn't have the MCE log installed in NerdPack so I'm not sure how to check that. Second - assuming that would be the source of the error - would the highlighted slots be what I'm looking to pull in order to find the culprit?
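For what it's worth, the fields in that EDAC line that narrow things down are the socket and channel numbers, which point at one bank of DIMM slots (the "or DIMM#0/1/2" list means the controller can't tell which stick on that channel it was). Pulling them out of the line from the log:

```shell
# The EDAC line copied from the log above
err='1 CE memory scrubbing error on CPU_SrcID#1_Ha#0_Chan#1_DIMM#0 or CPU_SrcID#1_Ha#0_Chan#1_DIMM#1 or CPU_SrcID#1_Ha#0_Chan#1_DIMM#2 (channel:1 page:0xd7237f offset:0x0 grain:32 syndrome:0x0 - area:DRAM err_code:0008:00c1 socket:1 ha:0 channel_mask:2 rank:255)'

# Extract the socket and channel identifying the suspect bank of slots
socket=$(echo "$err" | grep -o 'socket:[0-9]*' | cut -d: -f2)
channel=$(echo "$err" | grep -o 'channel:[0-9]*' | cut -d: -f2)
echo "CPU socket $socket, memory channel $channel"
```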
  9. Just ran into this problem as well. I believe my culprit was Ferdi, which I added Unraid to a little while ago. Never thought that having it open in Ferdi for a week straight would cause me some issues. However, a "quick" reboot and I'm back to normal. I have since removed the Unraid service from Ferdi, and the log file is back to a manageable level.
  10. To anyone looking for this in the future, I never did sort out why the networking wouldn't work. My solution was to create a new VLAN for my servers like I had been planning for a long time. Setting the static IPs over to the new VLAN flushed whatever issues arose and everything works fine. I had to connect directly to the servers and reboot in GUI mode and I set the IPs that way. I have to assume that if I switched the IPs back to the original VLAN, life would work again, but I'm not going to test that now that I've set things up the way they should have been done in the first place.
  11. Thanks for the help! So I've set the VLAN to 1, but how do I change the "main" interface to something else? My "main" interface isn't a VLAN.
  12. I got a new pfSense firewall installed today and had to change the previous LAN into a VLAN with a tag. The new LAN is exactly the same, so all my static IPs could stick around, but now it has a VLAN tag whereas before it didn't. Both of my Unraid servers are fully accessible via SSH and the NFS/SMB connections are working fine, but docker containers and the GUI are inaccessible. Is there config that I missed that I can edit through SSH? The logs say something like:
      Dec 30 13:56:09 SERVER-NAME nginx: 2021/12/30 13:56:09 [error] 32152#32152: *19991992 auth request unexpected status: 502 while sending to client, client: 172.16.XXX.XX, server: , request: "GET /Dashboard HTTP/2.0", host: "LOTSOFLETTERS&NUMBERS.unraid.net"
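For anyone else stuck here: Unraid keeps its network settings on the flash drive at /boot/config/network.cfg, which you can edit over SSH and then reboot. The fragment below is only a sketch from memory of that file's shape - every key name and value here is an assumption that may differ by Unraid version, so compare it against your own existing file rather than copying it:

```
# /boot/config/network.cfg (fragment) - key names/values are assumptions,
# verify against your own file before editing
IFNAME[0]="br0"
VLANS[0]="2"            # number of interfaces (incl. VLANs) on br0
VLANID[0,1]="10"        # the tag pfSense now puts on the old LAN
PROTOCOL[0,1]="ipv4"
USE_DHCP[0,1]="no"
IPADDR[0,1]="192.168.1.10"   # your server's static IP (example value)
```

After saving, a reboot picks up the change.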
  13. I had this issue too for a while. It turned out it was my unassigned devices NFS connections that were at issue. When I disconnected them before the reboot, life was good. Not sure what the permanent fix was, but eventually reboots began to happen as normal.
  14. Ahh, I didn't know that. Thank you for the direction! Parity-sync is going and I'll update if any issues arise.
  15. Hi - I finally got around to adding a second parity drive for the first time over the weekend. Parity-sync completed successfully and I didn't have any issues. I took a look at the server this morning and noticed that there is that happy red X next to my brand new parity drive. Oddly, I can still run SMART tests, which suggests that its SMART report is kosher. I then stopped the array and reseated my cables, but that did not help. More searching through the forum suggested that there might be an issue with my brand new Ironwolf 8TB drive in this post; however, the applied fix and subsequent reboot haven't fixed the issue. The drive is connected to my X8DTL-F through an LSI HBA, which then connects to an HP SAS expander and then into the drive. Any direction towards a fix would be amazing. Thank you all! manky-dreadful-diagnostics-20211027-1533.zip
  16. Hello! I just added a second parity for the first time ever and of course I ended up with this problem I think. It's my only ST8000VN004 drive (lots of ST4000 drives with no issues otherwise). It appears that Seagate has changed their folder structure a bit in the SeaChestUtilities.zip and I'm unsure which route to take. Right now the folder structure in the zip is: Linux->RAID or Non-RAID->centos-7-x86_64 or centos-7_aarch64 Not sure which one I should be grabbing to start this process. Thanks for any help!
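In case it helps anyone who finds this later: behind an LSI HBA the drive usually shows up as a SCSI generic device, and the commonly posted Ironwolf fix is to disable EPC (and often low-current spinup) with the SeaChest tools. A sketch of the commands as I understand them - the device path is an example, and it's worth double-checking the flags against the `--help` output of whichever binary from the zip you end up using:

```shell
# List attached drives to find the right /dev/sgX handle
./SeaChest_PowerControl --scan

# Disable the EPC power-management feature on the ST8000VN004
# (replace /dev/sg5 with your device)
./SeaChest_PowerControl -d /dev/sg5 --EPCfeature disable

# Often paired with disabling low-current spinup
./SeaChest_Configure -d /dev/sg5 --lowCurrentSpinup disable
```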
  17. Ran into an issue today where I had a Synology share mounted under UD. I had manually shut down the Synology, but forgot to unmount the shares from Unraid first. UD would not allow me to unmount the share until I turned the Synology back on. Additionally, I got impatient and decided to just reboot the server. However, a graceful shutdown and reboot was blocked by that unmount. Turning the Synology back on finally allowed the server to reboot. Weird stuff.
  18. Don't think so. I think I understand what trapexit is arguing, but there has to be a better solution. I've been using autofs on an ubuntu setup and it still ends up with stale file handles all the time. I should note that I have hard links off and no cache for the share. The only other option I can see is the Tunable (fuse_remember) setting under Settings->NFS, but the warning about out-of-memory errors has me a little skittish about setting that to -1.
  19. Hey! So I've basically decided to give up on cloning the contents of the ssd. What I've done is remapped the folder where Plex itself stores its automated backups to an NFS share from my backup location. Essentially, I added an NFS share to the container called "/databasebackups", and then in Plex itself I told it to back up to that folder. I now have a backup of the database including watch stats, libraries, etc. - but if that SSD ever takes a dive, I'll have to re-download all the images, posters, and metadata. Does that make sense? Given that I'm kinda comfortable with that, I've also started using that same method for all my programs that have an integrated backup feature - Sonarr, Radarr, etc. While those are also covered by CA Appdata Backup, I like the idea of being able to restore one item at a time, rather than the entire appdata folder restore that CA forces you to do.
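The container-side change is just one extra path mapping. In plain docker terms it would look something like the line below - the host path is an example (mine points at an NFS mount from the backup box), and /databasebackups matches what I described above:

```
# Extra volume mapping added to the Plex container template:
#   host NFS mount  ->  path visible inside the container
-v /mnt/remotes/backupserver_plex:/databasebackups
```

Then point Plex's backup directory setting at /databasebackups so its scheduled database backups land on the remote share.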
  20. Yep - I'll add another one to the list. Was timing out on the web ui and enabling privoxy solved that. After getting it up and running, I tried the old "turn it off and on again" on privoxy, and having it off results in no web ui. Funky.
  21. Happy to do that. However, looking at it currently, I've had zero errors overnight. Thinking about this, the errors do seem to coincide with a massive file transfer that I was doing over the last 4-5 days. I moved all my movies to a second server (9ish TB), and since the transfer finished, there have been zero errors. During that time, I was using Krusader and DoubleCommander for the transfers. Could there potentially be a larger networking issue on my router's side, where it couldn't handle the load? pfSense logs aren't showing me anything, and I have an SG-3100, so it should be good, but I guess that doesn't mean anything.
  22. So I keep getting a ton of errors spamming the log, mostly based on disk IO I think. Specifically, I keep seeing these three:
      kernel: traps: lsof[4210] general protection...
      unassigned.devices: Error: shell_exec(/usr/bin/lsof '/mnt/disks/plexappdata' 2>/dev/null | /bin/sort -k8 | /bin/uniq -f7 | /bin/grep -c -e REG) took longer than 5s!
      nginx: 2020/10/08 14:52:15 [error] 9129#9129: *311962 upstream timed out (110: Connection timed out) while reading response header from upstream... from the main server dashboard.
      Finally, the server GUI itself is incredibly slow to complete any tasks. Just downloading the diagnostics for this post timed out and failed twice. A weird note on that: I've been using Chromium Edge mostly, and the diagnostics only downloaded once I used Firefox. I have no idea what's killing me on all this. manky-dreadful-diagnostics-20201008-1510.zip
  23. I'm currently also getting a ton of these errors.