jaylo123 (Members · 69 posts)
Everything posted by jaylo123

  1. Yea, I think you're better suited for running two systems - one for UnRAID and a 2nd system for gaming. You can run Windows 10 in a VM in UnRAID and pass through the GPU, but then you have a new challenge: how to properly stream the game from the VM to another computer. The advantage is that you can play any game you want from any computer you want. I do this. The disadvantage is that you now have an extra layer of complexity that can fail, and if it fails you'll probably spend more time trying to fix it than you'd like. And trust me, as a father and husband, an angry wife and kids are no way to spend a weekend. Edit: Another disadvantage is that video streaming will be horrible. Watching an h.264 or h.265 video through a remote desktop viewer, no matter the tech, is very problematic and introduces a lot of jitter.
  2. What do you mean "Windows 10 on top of UnRAID"? I don't follow that.
  3. Yea, keep the VPN on UnRAID IMO. I personally use this container and it works just fine - I just point my other containers at it for VPN access: https://hub.docker.com/r/binhex/arch-delugevpn/ Video if you're uncomfortable or unfamiliar with the UnRAID GUI or infrastructure: https://www.youtube.com/watch?v=5AEzm5y2EvM For remote VPN connectivity when you're not at home, use an ASUS router with the Asuswrt-Merlin firmware: https://www.asuswrt-merlin.net/ I use it and it is a lifesaver, and it works just fine with Windows' default VPN client (if you use Windows), or your OpenVPN client of choice. Hopefully WireGuard support is added soon, to either Merlin's or Asus' builds.
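     In rough terms, the "point my containers at it" setup looks like this on the command line (a sketch only - the environment variables come from binhex's container docs, the credentials are placeholders, and `some/other-container` is a hypothetical second app; on Unraid you'd set the equivalent fields in the container template):

     ```shell
     # Start the VPN container; NET_ADMIN is needed so it can create the tunnel.
     docker run -d --name=delugevpn --cap-add=NET_ADMIN \
       -e VPN_ENABLED=yes -e VPN_PROV=pia \
       -e VPN_USER=changeme -e VPN_PASS=changeme \
       binhex/arch-delugevpn

     # Any container launched with --net=container:delugevpn shares the VPN
     # container's network namespace, so all of its traffic rides the tunnel.
     docker run -d --name=otherapp --net=container:delugevpn some/other-container
     ```

     The nice property of the shared-namespace approach is that if the VPN drops, the attached containers lose connectivity too, rather than leaking traffic out the WAN.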
  4. VPN Provider?

    I use https://www.privateinternetaccess.com/ and they work fine. They also support port forwarding, but only from some of their gateways, and they have docs to guide you. Works great for me. Edit: https://www.privateinternetaccess.com/helpdesk/kb/articles/how-do-i-enable-port-forwarding-on-my-vpn That will tell you which gateways to use for whatever VPN solution you have in your setup. You will need to follow the setup guide for whatever container you're using to configure the VPN client on your end. This usually means hacking on it at the command line, or editing the config file beforehand (using Notepad or similar on Windows), overwriting the file of the same name in your container's VPN setup, and restarting the container so it loads the new configuration. Check the support threads for your VPN container of choice for further instructions.
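    The "edit the config file beforehand" step can also be scripted instead of done in Notepad. A minimal sketch (the gateway hostnames here are illustrative stand-ins - use the ones PIA's port-forwarding article actually lists):

    ```shell
    # Work in a scratch directory; in practice this file would come from
    # your VPN container's config share.
    cd "$(mktemp -d)"
    cat > openvpn.ovpn <<'EOF'
    remote us-east.privateinternetaccess.com 1198
    proto udp
    EOF

    # Repoint the 'remote' line at a port-forwarding-capable gateway.
    # Afterwards you'd overwrite the container's copy and restart it.
    sed -i 's/^remote .*/remote ca-toronto.privateinternetaccess.com 1198/' openvpn.ovpn
    grep '^remote' openvpn.ovpn
    ```
    
    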
  5. My new setup!

    Mostly for Plex streaming, and I have an Nvidia GRID M60 to install so I can also use VMs for some remote Steam gaming, but I'm happy with it! Sorry, didn't know where else to post this, and no one else I know really understands what this means, so I wanted to share here.

    Definitely overkill, but I wanted to plan ahead and have this last for at least 7+ years. I'm toying with adding another 16TB Seagate Exos drive, but for now it will hold up. Currently doing an rsync from my main system before I transfer the license, which is why the HD temps are a bit higher than normal. It will take a few days, as my 'old' (but still current) server is running off a mix of USB2/USB3 drives and internal storage. The USB drives used to be my backup targets, but I ran out of space 2 years ago. I know, horrible, horrible setup, but hey, had to do what I had to do!

    I plan on installing the Nvidia-friendly UnRAID build after I transfer the license so I can use one of the GRID GPUs for Plex encoding, and plan on passing the 2nd GPU on the GRID card to a VM for gaming.

    Anyway, no real point to this post, just wanted to share. The only thing I would suggest to the dev team is to dynamically generate root's SSH keys on a 'firstboot', seeded from /dev/random or the epoch time or something unique to each key generation. I was honestly quite surprised that I could SSH between boxes as root, passwordlessly and without any security checks - and it's not like those private keys are exactly a secret under /root/.ssh. Maybe that's by design.
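    To be concrete about the firstboot suggestion, it would amount to something like this (a sketch only - the firstboot hook itself and the paths are hypothetical, and I'm using a throwaway directory rather than /root/.ssh):

    ```shell
    # Generate a fresh, per-install key pair instead of shipping a shared one.
    # ssh-keygen already pulls from the kernel's entropy pool, so every run
    # produces a unique key.
    KEYDIR="$(mktemp -d)"
    ssh-keygen -q -t ed25519 -N "" -f "$KEYDIR/id_ed25519"
    ls "$KEYDIR"
    ```

    Two installs would then have distinct private keys, and passwordless root SSH between boxes would only work if you deliberately exchanged the public keys.
    
    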
  6. A Roku, Apple TV, Firestick, etc, won't have access to NVDEC. Sorry if I misunderstood something?
  7. I didn't do any of that; it works fine for me. Like I said, the video is accurate - just stop when it tells you to open a terminal to the container. I actually followed your post and the video to get it working for me (Nvidia GRID M60 GPU), but stopped when it told me to crack open a terminal window. If you have to open a terminal to the container, that's when you feed the issue upstream to this thread so the container maintainers can see if it's something they want to support and/or manage. The reason you don't want to muck with it is that when an updated version of the container is released, it will likely blow away any customizations or modifications you've made inside the container. And that's by design.
  8. Hi all - I am looking to build a new UnRAID Plex server soon. The purpose of it is to host a Plex media server. Easy enough. The storage disks will be 5400 RPM. I am wondering if I should spring for my parity drives to be a bit faster in the event of a parity rebuild. Would it help with the speed of the rebuild effort, or does it just not matter?
  9. Is there any real benefit to this? These are just media files, not an always-on, mission-critical financial database. You are doing mostly reads and minimal writes. Your protection comes from the parity drive and any backups you keep external to the server; ECC provides none of that value. The only group I know of that really touts ECC as a requirement is the FreeNAS crowd, and it just isn't necessary for home consumer use - skipping it will save you a lot of money. Edit: Unless you're hosting the storage on a filesystem like ZFS that would benefit from ECC. But stock XFS or even EXT4 are just fine for your use case. No need to overengineer it (and TRUST me, I have a habit of doing that myself).
  10. That video is accurate up until it tells you to go into a terminal and start mucking around on the command line. NVDEC must also be enabled in Plex itself. Good luck!
  11. I can verify that my Plex SQLite DB has not had an issue since I moved it to my SSD cache drive and off of the spinning disks in the main array. I did this before I even knew about this thread / the Reddit posts about this issue. I rebuilt and rebuilt, but every day or two the database would become malformed. It's been up and stable for weeks now. Shame, I lost a lot of user data, and since the backups were stored on the spinning disks they, too, were toast. My fault, I should have kept better backups, but it is what it is.
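     For anyone wanting to do the same move, the mechanics are just a stop-copy-repoint (a sketch with stand-in paths - on Unraid the real locations would be the array and cache copies of your appdata share, and you'd stop the Plex container first):

     ```shell
     # Stand-ins for /mnt/user0/appdata (array) and /mnt/cache/appdata (SSD).
     ARRAY="$(mktemp -d)"; CACHE="$(mktemp -d)"
     mkdir -p "$ARRAY/appdata/plex" "$CACHE/appdata"
     : > "$ARRAY/appdata/plex/com.plexapp.plugins.library.db"

     # Copy preserving permissions/timestamps, then repoint the container's
     # appdata mapping at the cache copy before starting it again.
     rsync -a "$ARRAY/appdata/plex" "$CACHE/appdata/"
     ls "$CACHE/appdata/plex"
     ```

     Keeping the share cache-only afterwards stops the mover from shuffling the live database back onto spinning disks.
    
    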
  12. Hello, I'm seeing a ton of errors around IOMMU, VFIO and my onboard Intel GPU. I don't necessarily want to disable it in case I need console access to my Unraid server. The errors are PTE read faults:

     Dec 26 20:48:43 unraid kernel: DMAR: [DMA Read] Request device [00:02.0] fault addr 6fae4000 [fault reason 06] PTE Read access is not set
     Dec 26 20:48:43 unraid kernel: DMAR: DRHD: handling fault status reg 2
     Dec 26 20:48:43 unraid kernel: DMAR: [DMA Read] Request device [00:02.0] fault addr 6fae5000 [fault reason 06] PTE Read access is not set
     (the previous line repeats several more times)
     Dec 26 20:48:43 unraid kernel: DMAR: DRHD: handling fault status reg 2
     Dec 26 20:48:48 unraid kernel: dmar_fault: 5102159 callbacks suppressed

     Rinse, repeat. Now, the "correct" fix is to disable the onboard GPU in my BIOS. However, I would lose all local console access if I did that, as my actual GPU is an Nvidia GRID M60, and it's working just fine. So I need the onboard GPU to work during POST, but after the OS takes over I couldn't care less whether it is functional. Instead, I would like to just blacklist the 00:02.0 device entirely at boot time. What can I pass on the kernel line in syslinux.cfg that will keep this device out of any IOMMU/VFIO group, or just shut it down completely?
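     One thing I've thought about trying, but haven't tested yet (so treat it as a sketch): leave the iGPU enabled in the BIOS for POST, but keep Linux from ever driving it by blacklisting the i915 module on the append line in syslinux.cfg:

     ```
     label Unraid OS
       menu default
       kernel /bzimage
       append modprobe.blacklist=i915 initrd=/bzroot
     ```

     modprobe.blacklist= is standard kernel command-line syntax; whether it actually silences the DMAR faults for 00:02.0 is exactly the open question here.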
  13. I have this problem for an onboard Intel GPU. I have no intention of passing it through. It is in its own IOMMU group. Would there be any way to just not use this IOMMU group entirely? As others have said, this is the top result when looking for this issue so any help would be appreciated.