About jonp

  • Rank: Advanced Member
  • Location: Chicago, IL
  • Community Reputation: 301 (Very Good)
  • jonp last won the day on July 15 and had the most liked content!


  1. Hi there, I can't say we've tested passthrough on the embedded Radeon graphics included on some Ryzen builds, as the use-cases for it would be pretty narrow; I don't believe we even have a test bench with Radeon included. That said, passing through an integrated graphics device can be more difficult than passing through a discrete one. Intel iGPU passthrough took a while to get supported, and even then it's not 100% flawless. Unfortunately, unless someone else in the community has done this with the same or similar hardware, we can't really give you any advice beyond what we've said in other threads and in the FAQ. If you really want help, it would also be useful to share more details about what you've tried so far.
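If it helps when posting those details, here's a minimal sketch (assuming a Linux console on the server; on Unraid the same information appears under Tools > System Devices) that lists each PCI device by its IOMMU group, which is the first thing to check for any passthrough attempt:

```shell
#!/bin/sh
# List each PCI device by its IOMMU group -- useful detail to include in a
# support post. Prints a hint if the kernel exposes no IOMMU groups.
if [ -d /sys/kernel/iommu_groups ] && \
   ls /sys/kernel/iommu_groups/*/devices/* >/dev/null 2>&1; then
    for d in /sys/kernel/iommu_groups/*/devices/*; do
        g=${d#/sys/kernel/iommu_groups/}   # strip the fixed prefix
        g=${g%%/*}                         # keep only the group number
        echo "IOMMU group $g: ${d##*/}"
    done
else
    echo "No IOMMU groups found: enable AMD-Vi (SVM/IOMMU) in the BIOS"
fi
```

If the GPU shares an IOMMU group with other devices, that alone can block passthrough, so include this output when asking for help.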
  2. Hi there, A basic Google search for setting up a Steam cache will show you how to do this. As far as where that cache "lives" in Unraid, I would recommend just letting it live on the array. With respect to your original post, I'm not sure why you're fixated on RAID 0 if all you're looking to do is set up a basic media server. First and foremost, your max speed for accessing data on the NAS is capped by your network speed. Most networks are still 1gbps, so unless you're packing 10gbps networking (which I doubt), your max speed for accessing data on the system will be ~100MB/s. That's slower than the average modern HDD anyway. So all RAID 0 gets you is increased capacity, at the cost of a higher probability of critical failure and data loss (since RAID 0 has no redundancy). My recommendation is to not worry about hardware RAID or RAID 0 at all. Just set your server up with HDDs for the array, and if you want a little more performance, add an SSD or a dedicated HDD for the cache. To maximize performance, go to the Disk Settings page (Settings > Disk Settings) and toggle the write method to reconstruct write. This will increase general write speed to the array (at the cost of keeping all your disks spinning during a write operation).
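The ~100MB/s figure is just arithmetic on the link speed; a quick sketch of the math (raw line rate divided by 8 bits per byte, before any protocol overhead):

```shell
# 1 gigabit/s network link expressed in megabytes per second. This is the
# theoretical ceiling; SMB/NFS protocol overhead typically brings real
# transfers down to roughly 100-115 MB/s.
awk 'BEGIN { printf "%.0f MB/s theoretical max on 1gbps\n", 1e9 / 8 / 1e6 }'
```

Since a single modern HDD can already saturate that, striping two drives with RAID 0 buys nothing for network clients.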
  3. Hi everyone, Long story short: we haven't updated this container in a while, primarily for three reasons: 1) we have been hard at work on a SQLite corruption issue related to Plex; 2) we have been working on 6.8; 3) Plex now maintains an official docker container themselves, available in Community Apps. So in short, we will be asking everyone to migrate to the official Plex docker container. We originally created the docker for Plex in the early 6.0 days when there wasn't an alternative at all, but with Plex officially supporting docker themselves, it seems redundant for us to devote developer resources to maintaining our own. I'm sorry we weren't more proactive in letting the community know our intentions here.
  4. Why not use NFS for the Kodi/FireTVs and SMB for Windows access? You can configure shares to export to both. Getting NFS to work on Windows isn't something we're really familiar with. SMB is the standard for Windows based access to shares.
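For illustration, here's roughly what exporting one share both ways looks like on a generic Linux box (Unraid manages all of this for you from the share's SMB/NFS Security Settings pages; the share name, path, and subnet below are assumptions, not anything from your setup):

```
# /etc/exports -- NFS export, read-only for the Kodi/FireTV boxes:
/mnt/user/media 192.168.1.0/24(ro,all_squash)

# /etc/samba/smb.conf -- the same share over SMB for Windows clients:
[media]
    path = /mnt/user/media
    browseable = yes
    read only = no
```

The same directory is served over both protocols at once; each client just mounts it with whichever protocol suits it best.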
  5. Ok, so the image I'm looking at is of a crashed server? I don't see any errors in the log to indicate a crash or anything. If that's the case, then it's most likely not a software bug causing the crashing. If the server worked fine for years and this just happened all of a sudden, maybe a little PC maintenance is in order: dust it out, check cable connections, and ensure good airflow.
  6. In the instance of a hard crash like this, the best thing to do is to hook up a monitor and keyboard to the system and boot into non-GUI mode so you can tail the log while the server is running. This will print every log event directly to the monitor and when it crashes, you can take a picture of the last messages in the log so we can see what is causing the crash. See this FAQ in the wiki for instructions: https://wiki.unraid.net/Unraid_6/Frequently_Asked_Questions#My_system_is_crashing_but_my_logs_don.27t_contain_the_event._What_do_I_do_to_obtain_support.3F
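In practice that means logging in at the console and running `tail -f /var/log/syslog` (the path Unraid uses). Here's a self-contained sketch of the same idea against a scratch file, so you can see what tail does before trying it on the server:

```shell
# On the server console you would run:
#   tail -f /var/log/syslog
# -f follows the file, printing each new line as it is written, so the last
# output on screen when the box dies is the last thing that was logged.
#
# Portable demonstration of tail against a throwaway file:
log=$(mktemp)
printf 'event 1\nevent 2\nevent 3\n' > "$log"
tail -n 2 "$log"    # shows the two most recent lines
rm -f "$log"
```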
  7. Hi there, To be perfectly honest, I don't think you're going to see any meaningful performance difference between an x8 and an x16 slot. Reference: https://www.gamersnexus.net/guides/2488-pci-e-3-x8-vs-x16-performance-impact-on-gpus That said, if you're dead set on this, the best thing to do is to purchase hardware that has a built-in graphics device (integrated GPU). That will act as the primary graphics device for Unraid OS, allowing your primary add-on GPU to be passed through to a VM without any issues. If you want this to work on the hardware you already have, we at Lime Tech can't really provide direct support for that; every motherboard/cpu/gpu combination can have unique quirks that we can't reproduce unless we have the same gear. The best thing you could do is file a bug report with the Linux kernel: https://www.kernel.org/doc/html/latest/admin-guide/reporting-bugs.html.
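If you want to confirm what link width a card actually negotiated, `lspci` can show it; a sketch (needs root to see the capability fields, and it assumes the standard `LnkCap`/`LnkSta` labels lspci prints):

```shell
# Show PCIe link capability vs. negotiated status for all devices.
# LnkCap = what the slot/card can do; LnkSta = what was actually negotiated.
if command -v lspci >/dev/null 2>&1; then
    lspci -vv 2>/dev/null | grep -E 'LnkCap:|LnkSta:' \
        || echo "No link info visible; re-run as root"
else
    echo "lspci not found; install the pciutils package"
fi
```

If LnkSta reports x8 while LnkCap reports x16, the card is running at half width, which, per the reference above, rarely matters for gaming workloads anyway.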
  8. Hi there, Apologies for the delayed reply on our part. First and foremost, I'm assuming from your mention of Let's Encrypt that you are using HTTPS to access your webGui. If you disable that, does access from other PCs work again? What about going to the certificate page and updating DNS? I'm also curious about the timing of this issue. Did it start right when you enabled HTTPS, or had that been configured for a while? Any recent reboots of the server? Is the server set to use a static IP address?
  9. Ok, I see lots of CSRF token errors in your logs, which indicates a browser was left open to Unraid across a reboot. Let's try this: 1) Close all browser sessions connected to the Unraid webGui. 2) Restart your server. 3) Attempt to adjust the CPU pinning again. 4) Verify it didn't take. 5) Collect diagnostics and repost them here. I want to see what the logs say right after you attempt to change the pinning and it fails. Also, can you confirm which browser you're using and try a different one, just to rule out a weird browser issue?
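For anyone curious what those errors look like, here's a sketch of pulling just the CSRF lines out of a syslog before posting (the sample lines are made up, and a scratch file stands in for /var/log/syslog so the example runs anywhere, but Unraid's failures do mention `csrf_token`):

```shell
# Count and show csrf_token errors in a log file.
log=$(mktemp)
printf '%s\n' \
    'Apr  1 10:00:01 Tower root: error: missing csrf_token' \
    'Apr  1 10:00:02 Tower root: error: wrong csrf_token' \
    'Apr  1 10:00:03 Tower kernel: eth0: link up' > "$log"
grep -c 'csrf_token' "$log"    # number of matching lines
grep 'csrf_token' "$log"       # the lines themselves
rm -f "$log"
```

A burst of these timestamped right after a reboot is the fingerprint of a stale browser tab posting an old token.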
  10. I'm specifically looking to see the volume mappings.
  11. Can you post a screenshot of your Docker page on the webGui so we can see what containers you have running and what their configuration settings are?
  12. Please be more descriptive about what is slowing down. Does it take longer to navigate around the webGui? Try disabling Plex and see if the same behavior occurs. The first step is to isolate the issue.
  13. What hostility? I haven't seen any from anyone but yourself. We are hard at work trying to reproduce and resolve this issue, but you seem to think that because we haven't yet, we're sitting here just twiddling our thumbs. We are not. We have multiple test servers constantly running Plex and injecting new data to try to force corruption. It hasn't happened to us once. That leads us to believe this may be specific to individual setups/hardware, but we haven't figured out why just yet. You have a completely valid way to get back to a working state: roll back to the 6.6.7 release. Otherwise, we are continuing to test and will post more for folks to try in this bug report thread as we come up with ideas to narrow this down. Clearly this issue isn't as widespread as some may think; otherwise we'd have an outpouring of users and this thread would be a lot longer than 4 pages at this point. That said, it is a VERY valid concern that we are focused on resolving, but sometimes things take longer to fix.
  14. This confuses me. Why did you need to move cables around? Shouldn't you have just needed to add one for the new 8TB drive? Also, even if you did move the cables around, if everything was attached properly, then shouldn't you have only had to add ONE more disk to the array? I think you may have made a big boo boo as you only added one new disk physically, but you indicate that you had to choose new slots for TWO. You may have inadvertently purged some data. The logs themselves don't reveal much because everything you did with respect to array operations was done before the reboot, so I can't see what actions actually took place there.