Everything posted by dlchamp

  1. I was using WireGuard with PIA, but as of yesterday morning it's not working.
  2. I don't recall having this issue 10 days ago. Maybe I just wasn't paying attention, but I know it's happening to me now. Is there any word on whether it's PIA, or should I start looking at my network config? I was playing with pfBlockerNG, but I did load the QBit webUI last night, so I don't think that did anything to block me.
  3. Figured it out: if you started a game on your PC like I did and then want to move into this docker container without losing progress, it's simple.
     Create the container and make sure the World Name is the same as your original world (THIS IS IMPORTANT). Server name, password, etc. don't matter, but that world name has to match.
     Stop the container.
     Then copy the files from C:\Users\%username%\AppData\LocalLow\IronGate\Valheim\worlds to /mnt/user/appdata/valheim/.config/unity3d/IronGate/Valheim/worlds.
     Start the container and you SHOULD be able to connect and be exactly where you logged out, with all inventory items, gear, and buildings right where you left them.
     ____
     For Valheim, I was playing with one friend, and I just had the server running using the in-game process. We have a couple more people joining us over the weekend, and possibly more soon after, so I want to move our current progress to the docker container. Is there a way to do this? I want to keep the same map seed, our gathered resources, tools, armor, and buildings.
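The world-file copy described above can be sketched in Python. This is a minimal sketch, not an official tool: the source/destination paths and the world name are placeholders you would adjust for your own machine. Valheim stores each world as a `.db` and a matching `.fwl` file, which is what the glob below picks up.

```python
import shutil
from pathlib import Path

# Placeholder paths -- adjust these to your own Windows user
# and your unRAID appdata share before running.
SRC = Path("C:/Users/me/AppData/LocalLow/IronGate/Valheim/worlds")
DST = Path("/mnt/user/appdata/valheim/.config/unity3d/IronGate/Valheim/worlds")

def copy_world(src: Path, dst: Path, world_name: str) -> list:
    """Copy one world's files (e.g. MyWorld.db, MyWorld.fwl) to the
    container's worlds directory, creating it if needed."""
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in src.glob(world_name + ".*"):   # matches .db and .fwl
        shutil.copy2(f, dst / f.name)       # copy2 preserves timestamps
        copied.append(f.name)
    return copied
```

Run it with the container stopped (as the post says), e.g. `copy_world(SRC, DST, "MyWorld")`, then start the container.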
  4. You say that, but with NordVPN and no port forwarding, I've been able to download at 60MB/s pretty regularly. Of course, this varies by indexer and seed count. Granted, with PIA properly set up I can get over 100MB/s, but 60MB/s is still pretty damn fast.
  5. I'm aware of that... I'm talking about a total, like Tdarr.
  6. After realizing that my mistakes were why I was having problems, it's been working flawlessly. I've converted almost 3000 episodes over the last couple of days using my P400 set to 6 workers. I do have a feature request: the conversion to HEVC is mostly to save space, so are there any plans to add stats showing how much space was saved, the average conversion time, or the average size reduction?
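The stats asked for in that feature request are easy to compute yourself in the meantime. A minimal sketch, assuming you can record each file's size before and after conversion (the `(before, after)` pairs here are hypothetical inputs, not anything the app exports):

```python
def conversion_stats(sizes):
    """Summarize space savings from a list of (before_bytes, after_bytes)
    pairs, one pair per converted file."""
    before = sum(b for b, _ in sizes)
    after = sum(a for _, a in sizes)
    saved = before - after
    return {
        "total_saved_bytes": saved,
        # Percentage of the original footprint that was reclaimed.
        "avg_reduction_pct": 100.0 * saved / before if before else 0.0,
    }
```

For example, two files shrinking from 100 and 200 bytes to 50 and 100 bytes report 150 bytes saved, a 50% average reduction.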
  7. Changed my encoder to hevc_nvenc and it started using the GPU. Got 6 streams going on my P400 now.
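For reference, the switch described above corresponds to telling FFmpeg to use its NVIDIA hardware encoder, `hevc_nvenc`, instead of the CPU-bound `libx265`. A hedged sketch that only builds the command line (the filenames and preset are placeholders; the container assembles its own arguments internally):

```python
def nvenc_cmd(src, dst, preset="medium"):
    """Build an ffmpeg command that re-encodes video on an NVIDIA GPU
    while passing the audio through untouched."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "hevc_nvenc",   # NVIDIA hardware HEVC encoder
        "-preset", preset,
        "-c:a", "copy",         # don't re-encode audio
        dst,
    ]
```

Running several of these at once is what shows up as multiple sessions on the P400.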
  8. I've been using it for a bit and it's been working fine. I saw that GPU support had been added, so I updated the container config to use my GPU, but now the queue is empty and it doesn't seem to actually be looking for anything to re-encode. Even though it is set to scan on start, I had to change the scan timer to 1 minute; then it would start filling the queue after a minute. It's not using the GPU, though: 4 cores/8 threads are maxed out and 0 sessions are loaded on the GPU. It's set to use HEVC and libx265. I'm also not sure why the LSIO tag is there; I don't remember it being there before.
     [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
     [s6-init] ensuring user provided files have correct perms...exited 0.
     [fix-attrs.d] applying ownership & permissions fixes...
     [fix-attrs.d] done.
     [cont-init.d] executing container initialization scripts...
     [cont-init.d] 01-envfile: executing...
     [cont-init.d] 01-envfile: exited 0.
     [cont-init.d] 10-adduser: executing...
     [linuxserver.io ASCII banner]
     Brought to you by linuxserver.io
     To support LSIO projects visit: https://www.linuxserver.io/donate/
     User uid: 99
     User gid: 100
     [cont-init.d] 10-adduser: exited 0.
     [cont-init.d] 20-config: executing...
     **** (permissions_config) Settings permissions ****
     [cont-init.d] 20-config: exited 0.
     [cont-init.d] 99-custom-scripts: executing...
     [custom-init] no custom files found exiting...
     [cont-init.d] 99-custom-scripts: exited 0.
     [cont-init.d] done.
     [services.d] starting services
     Running Unmanic from installed module
     [services.d] done.
     Starting migrations
     There is nothing to migrate
  9. I saw your post here, and I just saw the same thing. I was spammed with 39 emails within the last hour. Then the server crashed, but I'm not home to be able to take a look at the logs.
     /bin/sh: line 1: 15204 Bus error /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
     /bin/sh: line 1: 17382 Bus error /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
     /bin/sh: line 1: 18658 Bus error /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
     /bin/sh: line 1: 19841 Bus error /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
     /bin/sh: line 1: 20412 Bus error /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
     ...
     This is definitely a first.
     Edit: Server has been down for roughly 45 minutes, and I'm still getting these emails.
  10. I had another crash early Saturday night. I set the RAM back to auto and disabled XMP. It's back up and running, but here is the log that was written right before the crash. I've noticed the AMD-Vi error I was getting before hasn't appeared since that one day when it was spammed for 2 minutes. Pastebin - Log from 1/18, right before the crash happened.
  11. Well, that is a big difference here. My Windows machine gets shut down every night when I start getting ready for bed. This machine stays on 24/7, or as close to that as possible. As I told the others, I will definitely remove the XMP profile if it decides to crash again.
  12. For sure. I removed the undervolt, but XMP is still enabled. I will set it to auto if another crash happens. I ran a parity check after the migration to the new hardware, so the undervolt and RAM overclock were applied, but of course, that doesn't mean it's not a problem.
  13. Thing is, though, I have IOMMU disabled in the BIOS. I read that it was potentially a problem having it enabled with certain NVMe controllers, so I disabled it, but unRAID still shows it as enabled. Meanwhile, HVM shows as disabled even though it isn't disabled in the BIOS.
  14. Is that a Linux thing? Running XMP profiles is pretty common. That very RAM came out of my gaming system, where I ran it at 3200 for the last year or so. I'll set it to auto if I get another crash. As of now, it's stable.
  15. Thanks for getting back to me! Since my final crash, reported on Wednesday, the system has been stable and running. I did update to 6.8.1 this morning, but it's back up and running normally. I'm continuing to monitor, but I'm going to start the rclone script I've been using for backups to see if it somehow forces a crash, as that is the only thing I haven't run since the crash on Wednesday.
  16. I removed the undervolt. The BIOS was set back to defaults after my initial post, which was followed by two crashes. After removing the undervolt, I set the C-states to disabled, set Power Supply Idle Control to Typical... (whatever it is), then manually set the SOC voltage to 1.0 and set the RAM back to 3200 by enabling the XMP profile. I did see it crash a couple more times after doing this, but it's now been up without issue since Wednesday afternoon. However, I did update to 6.8.1 this morning, so it has been rebooted.
  17. The motherboard has a built-in heatsink for the NVMe, which is installed.
  18. Another crash this morning. Nothing in the logs, but I did grab a picture of the monitor when it happened. This morning, after I made the changes and rebooted, I got this error on screen during the reboot process. After rebooting, it started up normally, but then crashed maybe 30 minutes later. When I went home for lunch to check it out, I saw this error. Rebooted, and it appears to be running, but I'm VPN'd in and monitoring it as much as I can.
  19. I verified last night that C-states were disabled, but the Power Supply Idle Control was left to auto. I went ahead and set that as well. I also loaded BIOS defaults before setting these to remove my undervolt to rule out that as an issue. I'll update here if/when there is another crash.
  20. I verified last night that C-states were disabled, but the Power Supply Idle Control was left to auto. I went ahead and set that as well. I also loaded BIOS defaults before setting these to remove my undervolt to rule out that as an issue.
  21. I did read this. I did disable C-states in the BIOS, and I do recall seeing another setting, so I'll give it a try. I just don't fully understand why I had a week of stability, then swapped to NVMe, and now the issues begin.
  22. Because the server is in a closet with less than stellar airflow, and my dual Xeons would make the room the closet is in extra toasty. I have a lot more experience with Ryzen CPUs, and making this chip run cooler overall was a simple task. It's not the cause of the instability, that I can guarantee. I can reset everything to default just to rule it out as a possibility, but I really think the issue is elsewhere.
  23. My unRAID server originally ran dual Xeon E5-2660s, but after fighting with its random freezes and crashes over a 2-3 month period, I decided it was enough and wanted to go with something a little newer and more efficient. I got a great deal on a BNIB Ryzen 7 1800X + MSI X370 Pro Carbon mobo. I did undervolt this chip. I also ran Prime95 and Realbench in Windows for a few days, then P95 again in a separate unRAID install for 2 days straight to make 100% certain this configuration was stable. And it was. I moved it into production and it ran for a bit over a week.
     I decided that, since I now had the option, I wanted to free up a SATA slot for another drive and move my cache to a large NVMe drive. This change was also meant to let me send Sab and QBit downloads to the cache, freeing up array IO during those download and unpacking sessions. This seemed to work fine for another couple of days, but then I restarted my Plex container and it wouldn't start back up. Its logs were spamming "Starting Plex Media Server", and my unRAID logs were spamming something about issues with my NVMe drive. I should have saved those, but I didn't have my logs mirrored or writing to another location, so a reboot lost them. Some searching suggested the issue potentially came from downloading torrents to a BTRFS-formatted drive. I'm not familiar with BTRFS or why it's really a problem, but I backed up what I could, formatted the cache drive to XFS, and restored everything. This issue did force me to remove all Plex appdata and completely reinstall that container, because many of the .db files were corrupted and I kept getting the "Starting Plex Media Server" spam until I did so. Fine, no big deal. But after this happened, I started having my log file written to my appdata folder so that it would persist through a reboot if something else started happening. Which it did.
     A few days later, I noticed my log was being spammed with:
     Jan 12 06:45:02 Anton kernel: nvme 0000:01:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0000 address=0x00000000da1b0000 flags=0x0000]
     Jan 12 06:45:34 Anton kernel: AMD-Vi: Event logged [IO_PAGE_FAULT device=01:00.0 domain=0x0000 address=0x00000000fad4e000 flags=0x0000]
     Jan 12 06:45:43 Anton kernel: amd_iommu_report_page_fault: 194 callbacks suppressed
     There were multiple of each, alternating for about 2 minutes straight. As you can see, this happened on the 12th, and there wasn't a crash or any noticeable issue. Yesterday, the 13th, the server crashed. Nothing appears to have been written to the log about said crash, but on screen it said something about "Kernel panic - Shutting down CPUs with NMI". This crash happened 3 times over the next few hours, but it's now been up for the last 12 hours. I read something about IOMMU issues with Ryzen and NVMe drives, so I disabled it in the BIOS, but unRAID still shows it as enabled in the WebGUI. I do not use VMs. I am running the Nvidia build of 6.8 by LS.io, as I have a P400 being used with my Plex container (patiently waiting for their 6.8.1 release). BIOS is the latest non-beta. Diagnostics only show my logs since the last reboot, so I am attaching the logs I have been mirroring since Jan 6th. Aside: I've seen a "clock unsynchronized" error a few times, but I don't know how to handle that. Time and date are correct in both the BIOS and unRAID. anton-diagnostics-20200114-0824.zip syslog-
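Since those AMD-Vi events are being mirrored to a persistent syslog, counting how often they recur is a quick way to correlate them with crashes. A small parsing sketch; the regex is my own assumption based on the lines quoted in the post above, not an official unRAID tool:

```python
import re

# Matches the "AMD-Vi: Event logged [IO_PAGE_FAULT ... address=0x...]"
# lines quoted above and captures the faulting address.
FAULT_RE = re.compile(
    r"AMD-Vi: Event logged \[IO_PAGE_FAULT.*?address=(0x[0-9a-f]+)",
    re.IGNORECASE,
)

def count_page_faults(lines):
    """Return (count, addresses) for IO_PAGE_FAULT events in syslog lines."""
    addrs = []
    for line in lines:
        m = FAULT_RE.search(line)
        if m:
            addrs.append(m.group(1))
    return len(addrs), addrs
```

Feeding it the mirrored syslog (`count_page_faults(open("/mnt/user/appdata/syslog").readlines())`, path hypothetical) gives a fault count and the addresses involved, which is handy when comparing the 2-minute spam window against the crash times.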