snazz's Achievements


Newbie (1/14)
  1. @dboris Are you passing through a hardware NIC? I'm getting an error "Generating NiceHash Miner rig identifier failed!" followed by it not connecting on the next screen. My network config for the VM: Edit: Never mind, got it working. Not sure what it was exactly. I created a new VM with a passed-through USB NIC and it worked. I removed the NIC passthrough and it still worked, so I guess something got messed up with the original VM.
  2. Just wanted to +1 this recent development and add a bit of insight that may help others. I have a Plex setup that doesn't have surround-sound capability but can handle H.265 video. When I stream an H.265 video without a stereo track to that device, Plex transcodes the video stream to H.264 while it downmixes the multi-channel audio to stereo, and the transcoded stream then requires more bandwidth simply because it's H.264. This change to Unmanic is awesome: it has automatically gone through my entire library adding a stereo audio track to videos that didn't already have one, and now that client direct-plays the H.265 videos because of the presence of the 2-channel audio track. Thanks @Josh.5 !
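For anyone curious what "adding a stereo track alongside the original audio" looks like in practice, here is a minimal sketch of building an ffmpeg command that does it. This is an illustration of the general technique, not Unmanic's actual flags; the filenames, bitrate-free AAC settings, and the assumption that the input has exactly one (multi-channel) audio stream are all mine.

```python
def build_stereo_cmd(src: str, dst: str) -> list[str]:
    """Build an ffmpeg argv that stream-copies everything in src and
    appends one extra audio track: the first audio stream downmixed
    to 2-channel AAC. Assumes src has a single audio stream, so the
    appended copy lands at output audio index 1 (hence the :a:1 specifiers).
    """
    return [
        "ffmpeg", "-i", src,
        "-map", "0",          # keep every original stream...
        "-c", "copy",         # ...untouched, via stream copy
        "-map", "0:a:0",      # map the first audio stream a second time
        "-c:a:1", "aac",      # encode that extra copy as AAC
        "-ac:a:1", "2",       # downmix it to 2 channels (stereo)
        dst,
    ]

cmd = build_stereo_cmd("movie.hevc.mkv", "movie.2ch.mkv")
```

Because every original stream is copied rather than re-encoded, the pass is fast and lossless for the existing video/audio; only the small stereo track is newly encoded.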
  3. Unraid 6.8.3
     SickChill Info:
       Branch: master
       Commit: 99336df163c7e776c7931d864e84d1f0f83afe4d
       Database Version: 44.2
       Python Version: 3.8.5 (default, Jul 27 2020, 08:42:51) [GCC 10.1.0]
       SSL Version: OpenSSL 1.1.1g 21 Apr 2020
       OS: Linux-4.19.107-Unraid-x86_64-with-glibc2.2.5
       Locale: en_GB.UTF-8
     SickChill hasn't been able to update for the past several days. I've tried removing the container & image and pulling the latest, but the update fails in the same spot even before I've restored a config from backup. Any help offered is appreciated. Thanks! Here's the latest related entry in the log w/ debugging enabled:
       2020-09-10 07:21:13 DEBUG :: WEBSERVER-HOME_0 :: cur_commit = 99336df163c7e776c7931d864e84d1f0f83afe4d, newest_commit = 7435bdf348cf507cc0d026769ea2dc49869667ce, num_commits_behind = 305
       2020-09-10 07:21:13 INFO :: WEBSERVER-HOME_0 :: Clearing out update folder /config/sr-update before extracting
       2020-09-10 07:21:15 INFO :: WEBSERVER-HOME_0 :: Creating update folder /config/sr-update before extracting
       2020-09-10 07:21:15 INFO :: WEBSERVER-HOME_0 :: Downloading update from
       2020-09-10 07:21:19 INFO :: WEBSERVER-HOME_0 :: Extracting file /config/sr-update/sr-update.tar
       2020-09-10 07:21:28 INFO :: WEBSERVER-HOME_0 :: Deleting file /config/sr-update/sr-update.tar
       2020-09-10 07:21:28 INFO :: WEBSERVER-HOME_0 :: Moving files from /config/sr-update/SickChill-SickChill-7435bdf to /opt/sickchill
       2020-09-10 07:21:28 DEBUG :: WEBSERVER-HOME_0 :: Traceback: Traceback (most recent call last):
         File "/opt/sickchill/sickchill/update_manager/", line 190, in update
           os.renames(old_path, new_path)
         File "/usr/lib/python3.8/", line 270, in renames
           rename(old, new)
       OSError: [Errno 18] Invalid cross-device link: '/config/sr-update/SickChill-SickChill-7435bdf/.appveyor.yml' -> '/opt/sickchill/.appveyor.yml'
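The error at the bottom of that traceback explains the whole failure: errno 18 (EXDEV, "Invalid cross-device link") is what os.rename(), and therefore os.renames(), raises when the source and destination live on different filesystems, which /config and /opt typically do inside a Docker container. shutil.move() handles exactly this case by falling back to copy-then-delete. A minimal sketch of that workaround pattern (the function name and paths are illustrative, not SickChill's actual fix):

```python
import os
import shutil

def move_tree_contents(src_dir: str, dst_dir: str) -> None:
    """Move every entry from src_dir into dst_dir, surviving cross-device moves.

    os.rename()/os.renames() fail with OSError(EXDEV) across filesystems;
    shutil.move() detects that failure and copies the data then removes
    the source, so it works whether or not the two dirs share a mount.
    """
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        shutil.move(os.path.join(src_dir, name),
                    os.path.join(dst_dir, name))
```

Within the container there is no direct user fix for this; it needed a change in the updater itself (or pulling a rebuilt image once one is published).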
  4. The feature I like most is how easy it is to recover from a drive failure. I've seen my fair share of failed drives, so I opted for dual parity as soon as it was added. I'd like to see native support for Nvidia GPUs in Docker added in 2020. I use hardware transcoding with my Plex docker to keep my server's ageing CPU less busy.
  5. The update resolved the issue for me. Thanks!
  6. Awesome! Are you gaming with these VMs by chance? If so, how's the performance?
  7. Haven't tried a 2nd GPU yet, but I've been thinking about trying to pass through a 2nd GPU to the existing VM I have. I would guess that if I'm able to do that, you should be able to pass a 2nd GPU to a 2nd VM successfully. I'll update this thread with results if/when I give it a try.
  8. Very similar build here! I have the same mobo, but with a 1700X and 16GB of DDR4-3000 (Corsair Vengeance LPX). It's nice to see someone in the forums running similar hardware to potentially bounce issues/ideas off of. I had been running the RCs from about 11 or 12 onward without issues, and I'm currently on 6.4 official. I have about a dozen Dockers running, and a simple Windows 10 VM with a GTX 1080 passed through for NiceHash. I haven't had a single lockup ever. The BIOS version has been 3.2 since I bought the mobo a couple of months ago. I echo your sentiments about Unraid and the community here. A few friends of mine are now running Unraid because of my raving about it, too. Cheers!
  9. @Eagle470 I might have figured it out. See if you have a VM defined that is set to use a .vdisk file on a share which includes the drive that won't spin down. I found one such VM that I wasn't using on my system, deleted it and the associated .vdisk, and so far all of my drives are staying spun down.
  10. I'm running into the same exact issue. I've disabled cache dirs plugin and disconnected all mapped network drives from my windows machines. Using the Open Files plugin, I can't seem to pin it down either.
  11. Does the "assigning cores in pairs due to hyperthreading" recommendation apply to AMD Ryzen CPUs which don't have "Hyperthreading", per se? I have a Windows 10 VM assigned 2 "Hyperthreaded" / SMT cores, and about a dozen docker containers spread out and pinned (in pairs) to various other cores, and I'm seeing a high context switching value (12000 - 15000) as measured by the Glances docker. In general performance seems to be decent in the VM and the docker apps (Plex, Ombi, Sab, Sick, Couch, PlexPy...), but I'm new to AMD CPUs and I'm not sure if this is the optimal approach for my setup. Any ideas on how to minimize context switching, or should I not worry about it? Thx!
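On Ryzen the "pairs" recommendation still applies: SMT siblings share a physical core just like Intel hyperthreads, and the kernel reports which logical CPUs are siblings under /sys/devices/system/cpu/cpuN/topology/thread_siblings_list. Rather than assuming a numbering scheme, you can parse those files; a sketch of the parser (the sysfs path is real, but here it's fed sample strings instead of being read live):

```python
def parse_siblings(text: str) -> tuple[int, ...]:
    """Parse a thread_siblings_list value into a tuple of logical CPU ids.

    The kernel writes either a comma list ("0,8") or a range ("0-1"),
    depending on how logical CPUs were enumerated at boot, so both
    forms are handled.
    """
    cpus: list[int] = []
    for part in text.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return tuple(cpus)

# e.g. pin a 2-vCPU VM to one physical core's sibling pair:
pair = parse_siblings("0,8")
```

Pinning a VM's vCPUs to one such pair (and keeping other workloads off it with isolation) is what keeps the two threads of a core from being scheduled against unrelated processes, which is one common source of high context-switch counts.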
  12. Quote: "Try updating the BIOS in the machine. VT-d works fine on the three systems I've tried b18 on. That or try setting iommu=nopt in your syslinux.cfg file."
      Thanks for the suggestions. The BIOS is already at the latest revision, and iommu=nopt didn't make a difference. I'm still trying to track down what is causing this issue, as 6.2 won't even boot on my older system (Intel DX58SO with i7 980x) with VT-d enabled. As I noted, I found a workaround, but it comes at a price... not being able to pass through hardware to a VM. 6.1.9 boots fine on the system in question and allows the GPU to be passed through. I realize there were a lot of changes in 6.2, especially with regard to virtualization, so figuring out which change specifically caused this likely isn't simple. I'm willing to test just about any idea... hopefully it will result in finding a solution before others who have yet to experience this are affected. I've attached diagnostics from booting a fresh 6.1.9 install. VT-d was enabled in the BIOS and IOMMU shows "enabled" in the web GUI. With the same BIOS configuration, 6.2 beta19 (or 18) boots this far and gets stuck (see screenshot). I've tried appending each of the following (one line at a time) to syslinux.cfg, or pressing Tab at the boot menu and adding them manually, but all yield the same behavior as in the screenshot:
        intel_iommu=pt
        iommu=on
        iommu=on intel_iommu=on
        iommu=nopt
        iommu=nopt intel_iommu=nopt
      Any ideas?
      Update: I managed to get it to boot with VT-d enabled. I had to disable the onboard sound in the BIOS. Sorta strange, but getting closer to a fully working system on 6.2 and a 6-year-old motherboard.
  13. At the shell prompt, try entering:
        fs0:   (if that doesn't work, try fs1:)
        cd efi
        cd boot
        bootx64
      I'm going off memory, so the above might not be 100% correct. There are a few posts about this issue elsewhere on the forum with the correct commands if the above don't work. If your VM still doesn't boot, try building one with SeaBIOS instead and seeing if that one will boot. There is a known issue with the path to the .vdisk getting lost when you edit the VM. Until the issue is resolved, make a habit of flipping it from none to custom and verifying the file path every time you edit the VM.
  14. I had the same issue in b18, but in b19 it works for me without having to do anything beyond what you described.