Everything posted by relink

  1. I'm thinking this was the issue: g2g, xTeVe, and Plex were all updating around midnight, so if g2g or xTeVe wasn't done yet when Plex did its import, that would explain why my guide was empty. So what I did was change cron.txt so that g2g updates at 9:00 PM, set xTeVe to update at 10:00 PM, and left Plex at its default of between 11:00 PM and midnight. This way there is zero chance any of them will ever overlap.
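For anyone wanting to replicate the staggering above, it can be expressed in the container's cron.txt roughly like this (a sketch only — the guide2go command form and config path are assumptions, so adjust them to match how your container actually runs its update):

```shell
# cron.txt — staggered so guide2go finishes before anything reads the XML.
# Command and path below are assumptions; match them to your own setup.
# 9:00 PM: guide2go refreshes the EPG data
0 21 * * * guide2go -config /config/lineup.yaml
# 10:00 PM: xTeVe re-reads the guide (set in xTeVe's own update scheduler)
# 11:00 PM-midnight: Plex keeps its default guide refresh, so nothing overlaps.
```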
  2. I can certainly try it if nothing else works, but I've had bigger lineups than that working for almost a year on the same system with 32GB of RAM. I'll check into this first, as the g2g cron job, xTeVe, and Plex all update at the exact same time; I'm going to try staggering them. However, I'm going on 2 days now and I still have guide data so far...
  3. Yup, I ran it manually and everything worked fine. I'm not sure I follow. I have 3 lineups, all very small (79 channels, 19 channels, and 9 channels), and each one has its own YAML file. All 3 are set to download 14 days of guide data, and the cron schedule is whatever the default in the container is. Part of the issue seems to be that already-downloaded data is being replaced by nothing. I ran the cron job manually today and currently have data, but there is a good chance I will wake up tomorrow with no guide data again.
  4. My guide ends up blank almost every morning. I've been running your xTeVe/guide2go container for a couple of years now. A few weeks ago I started losing guide data almost every morning. I changed IPTV providers last week, so I took the opportunity to start over from scratch with new YAML files; everything should be new. I thought I had fixed it, but I woke up today to no guide data again. Not sure what else to check?
  5. I've been using the same router for years now; I'm not sure what exactly I'd be looking for.
  6. I know this post is about 2 weeks old, but I am having nearly the same issue with my "Dashboard" and "Main" pages. Except I can confirm that the same issue happens in Firefox and Chrome (Brave), as well as in any browser on my iPhone, so this doesn't seem isolated to one particular browser or device.
     - All graphs are missing (always, except immediately following a reboot)
     - All storage devices are missing (visually) (always, except immediately following a reboot)
     - Sometimes Disk Location is missing, but not always
     - Sometimes Docker containers are missing (visually), but not always
     - Sometimes only the top bar loads and none of the dashboard page loads at all until I refresh the page a few times
     I'm not sure at what point this issue starts; it'll usually go several days before it happens. So far the only way I have been able to fix it is to reboot. Screenshots below, and diagnostics attached. EDIT: I just rebooted so I could have my dash back, and even after a fresh reboot it takes 10-15 seconds for the missing parts to show up.
  7. Wow, I’m starting to think the syslog server built into my Synology isn’t very good…I searched for “macvlan” and it found nothing.
  8. When I went through the previous logs I noticed something mentioned about Plex. Just recently my Plex server went offline, the container says it’s running and I can still access the unRaid UI but Plex is not loading. I attached a current diagnostics to see if it helps out at all.
  9. Good to know, I didn't know that. Diag is attached.
  10. I have been having issues with kernel panics ever since updating to 6.9.x; however, in my last post about a month ago I seemed to have figured it out, and my server had been up since that post. Until 3 days ago... Out of the blue it locked up again, can't-even-SSH kind of locked up, so I unfortunately had to do a hard reboot. This was in the middle of the night, so I didn't troubleshoot; I just went back to bed, and everything was fine in the AM. Yesterday it happened again around 2:08 PM, and once again no SSH or anything, so I had to hard reboot. This time I made sure to pull the logs from the syslog server on my Synology; I do apologize that it only exports as a .csv or .html, but it is attached. I also tried to grab a new diagnostics file, but despite the fact that everything seems to be running fine, I can't get the diagnostics to download. All_2021-5-26-15_32_30.csv
  11. Thank you for laying that out for me; I had no idea those steps were required to make this plugin work. But it is definitely working now. OOM errors are rare, but it's nice to have a safety net now.
  12. @primeval_god thanks for the tip. I tried running the command and got the following output, and the swap is still not enabled:
      swapon: /mnt/cache-nvme/swap/swapfile: insecure permissions 0666, 0600 suggested.
      swapon: /mnt/cache-nvme/swap/swapfile: read swap header failed
      After some googling I see people on other Linux distros running "mkswap", but they are all referring to a swap partition. I'm not sure if that's the correct fix for Unraid.
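For what it's worth, `mkswap` works on a regular file too, not just a partition, and those two errors map to two missing steps: the file needs 0600 permissions and a swap signature before `swapon` will accept it. A sketch of the idea, demonstrated here on a small temp file (on the server the path would be the swapfile from the post, and the `swapon` step needs root):

```shell
# Demonstrated on a 1 MiB temp file; substitute /mnt/cache-nvme/swap/swapfile
# on the actual server and run the swapon step as root.
SWAPFILE=$(mktemp)
truncate -s 1M "$SWAPFILE"

chmod 600 "$SWAPFILE"   # swapon refuses world-readable (0666) swap files
mkswap "$SWAPFILE"      # writes the swap signature, which fixes
                        # "read swap header failed"; works on files too
# swapon "$SWAPFILE"    # run this step as root on the server
```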
  13. Hmmm, yeah, it looks like "/dev/sda" doesn't even exist in the container. I would need to use "shfs", which apparently doesn't work with this option, as it expects a physical device... crap. Well, the other ones work, and if the writes become too much then I'll just set the containers to write to the cache and run the mover at a lower IO priority.
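If it helps, the "lower IO priority" part can be done with `ionice` from util-linux: wrap the heavy writer in the "idle" scheduling class so it only gets disk time when nothing else is waiting. A sketch (the actual mover invocation is left out, since how you hook into Unraid's mover is setup-specific):

```shell
# Show this shell's current IO scheduling class, then run a command in the
# "idle" class (-c 3), which yields disk time to all other IO.
ionice -p $$
ionice -c 3 sh -c 'echo "heavy IO job (e.g. the mover) would run here"'
```

You can also re-prioritize an already-running process with `ionice -c 3 -p <pid>`.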
  14. This is what every example I could find said to use. It seemed a little off to me too, so I'm still looking into it, but I haven't found any different info yet.
  15. I was beginning to think the same thing. Specifically Sonarr, Radarr, Lidarr, and their accompanying apps, as they can all pull in large amounts of data, and there's nothing stopping the three of them from all trying to do so at the same time. So I applied the solution below to all of them, and so far it's been almost 24 hours stable. First, I pinned them all to only 4 of my 12 threads. Next, I added the options below to "Extra Parameters" on each container to throttle them back:
      --memory=2G --cpu-shares=100 --device-write-bps /dev/sda:10MB
      Only time will tell if the issue is solved.
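For anyone copying this: Unraid's "Extra Parameters" field is passed straight through to `docker run`, so the line above is equivalent to the annotated fragment below (the numbers are just what I picked, and `/dev/sda` has to be a real block device, which is why this flag won't work against a FUSE path like shfs):

```shell
# Unraid "Extra Parameters" (appended to the container's docker run command):
#   --memory=2G                        hard cap on the container's RAM
#   --cpu-shares=100                   lower CPU weight vs. the default of 1024
#   --device-write-bps /dev/sda:10MB   throttle writes to that block device
--memory=2G --cpu-shares=100 --device-write-bps /dev/sda:10MB
```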
  16. I'm having the same issue. Mine is set to "/mnt/cache-nvme/swap/", so I ran the commands below:
      - cd /mnt/cache-nvme/swap/
      - truncate -s 0 ./swapfile
      - chattr +C ./swapfile
      - btrfs property set ./swapfile compression none
      Everything looks fine and I don't get any errors. I see "Swap file exists: ✔", but when I click "start" nothing happens and it still shows "Swap file in use: ✖".
  17. So, my server made it through the night and it's still running right now. I only have one error in my syslog this morning:
      blk_update_request: critical target error, dev sda, sector 76048 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
      I'm not sure how or what changed, but let me try to lay out the current status and my thoughts. Since yesterday morning the following Docker containers have been disabled:
      - Sabnzbd
      - Sonarr
      - Radarr
      - Lidarr
      - AMD
      Even with those disabled, that still left a lot running, such as Plex, Nextcloud, MariaDB, NPM, PiHole, etc. I am actually watching Plex while typing this and it's working fine. Yesterday I had a friend remotely play a 4K HDR movie and transcode it to 1080p with tone-mapping, all while two 4K movies direct-played in my house. While that was going on, I started a transfer of about 600GB to a non-cache-enabled share, and I also started a transfer from my Unraid server of around another 600GB; this all ran fine. So I decided to double down and started a non-correcting parity check on top of all of that.
      The parity check started out kind of slow, and after a couple of minutes made it up to about 120MB/s, while the writes to the non-cache share slowed proportionately. Once the parity check hit 120MB/s the writes essentially stopped (showed 0KB/s); when I stopped the parity check, the transfers picked right back up.
      I did all that because the most prevalent error in my syslog pertained to "aacraid", which is my RAID controller, so I stressed it to the best of my ability and could not make it crash again. I did get one of those macvlan call traces yesterday morning, but it didn't seem to cause any issues when it happened. I didn't really understand what the fix for that was, either. I'm not sure what else to check now; I find it hard to believe that any of my containers were capable of causing hardware errors that I couldn't reproduce myself with the transfers I did.
I've had the custom IPs for several years and had no problems, but I'm still willing to try the fix, I just don't know how.
  18. Well, to be fair, the writes didn't fully crash; they are just terribly slow. The writes are running around 50KB/s while the parity check is running around 120MB/s, so I'm assuming the parity check has a higher priority than the SMB transfer... this is all still running, by the way.
  19. Ok, so I have several hundred GB going into a no-cache share, several hundred GB being read from the array, and I'm running a parity check. The writes have slowed to about 1MB/s or less, the reads are between 34-40MB/s, and the parity check is running at about 45MB/s. Also, CPU usage is around 50% on average, and RAM is around 50%. Ok, things changed before I even posted this: the parity check is now running around 80MB/s and the writes have practically stalled.
  20. Just to spice things up, since just writing data doesn't seem to be enough, I decided to start a non-correcting parity check while it's still transferring. I'll probably start a large read transfer now too.
  21. No, parity checks run just fine, over 100MB/s the entire time. I'm also currently running a simple test: I'm transferring several hundred GB of data from one of my computers to a no-cache share on my Unraid server... and so far I'm transferring at about 36MB/s with not a single error in my log, and everything is still working... I don't get it...
  22. What would I need to do to ensure that I will be able to restore my current config when I'm done? Also, I'm not sure anything would happen; the troubleshooting I have done so far seems to point to issues when there is a high number of writes going onto the array. I kept my server online all day yesterday until I ran the memtest, and it ran fine. What I did differently was disable things like Radarr and Sabnzbd, anything that would do heavy writes. In fact, I'm about to run a test and see if I can make it crash by doing a large transfer from my desktop to Unraid.
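The large-transfer test above is easy to script so it's repeatable between crashes — a sketch, sized way down here (the destination path is a placeholder; on the real test, point DEST at a no-cache share and bump the dd count, e.g. count=102400 for roughly 100GB):

```shell
# Generate incompressible test data and push it at the destination,
# then verify it arrived intact. Sizes/paths here are placeholders.
SRC=$(mktemp)
DEST=$(mktemp -d)   # e.g. /mnt/user/SomeNoCacheShare on the server
dd if=/dev/urandom of="$SRC" bs=1M count=8 status=none
cp "$SRC" "$DEST/testfile"
cmp -s "$SRC" "$DEST/testfile" && echo "transfer OK"
rm -rf "$SRC" "$DEST"
```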
  23. Appdata, System, and Domains are all on a NVME cache pool. Also I have the Maxview utility installed as a container. I don't know if you're familiar but it's a tool to monitor and manage Adaptec cards. The screenshot below is my HBAs status, which all looks good to me.
  24. @Vr2Io Alright the memtest ran over night and passed. MemTest86-Report-20210428-211336.html
  25. I read over that thread numerous times. It really only seems to pertain to 1st-gen Ryzen, but I tried it anyway, and it's made no difference.