zer0zer0

Members
  • Content Count: 25
  • Joined
  • Last visited

Community Reputation: 9 Neutral

About zer0zer0
  • Rank: Newbie


  1. I made things as simple as possible and I'm just using /torrents as my path mapping, so I don't think that's the issue. Yes, the file gets downloaded; I can see it sitting in the /torrents directory, and it starts seeding. I've also run DockerSafeNewPerms a few times, but that hasn't changed anything. I can work around it with nzbToMedia scripts for the time being, but it's odd that I can't get it working natively.
  2. Is anyone else having issues getting torrents to complete with Sonarr? I have paths mapped properly (see the path-mapping sketch after this list) and have tried qBittorrent, rTorrent, Deluge, etc. without any luck. Sonarr sends torrents perfectly, and the torrent clients download them perfectly; then completion just never happens, and there are no logs.
  3. Are you sure you're forcing it to transcode? Can you see the Nvidia card if you run nvidia-smi from inside the Docker container's terminal? (See the nvidia-smi sketch after this list.)
  4. I've seen times when nvidia-smi doesn't show anything but it is actually transcoding. You should make sure you're forcing a transcode, then check your Plex dashboard for the (hw) label, like this...
  5. The binhex plexpass container is working well for me.
  6. I saw the boot freeze with just hypervisor.cpuid.v0 = FALSE in the .vmx file and no hardware passthrough at all, so it should be independent of video cards, etc.
  7. I rolled back to UnRAID version 6.9.0-beta35, and this issue is resolved. I'm not sure what changed exactly, but it might be something to do with CPU microcode.
  8. I talked to @StevenD and dropped back to UnRAID version 6.9.0-beta35, which he's running, and this issue is resolved.
  9. @jwiener3 - This issue is resolved if you drop back to UnRAID version 6.9.0-beta35.
  10. I'm seeing an issue when using ESXi and setting hypervisor.cpuid.v0 = FALSE in the .vmx file: the system will not boot if you assign more than one CPU to the virtual machine (a minimal .vmx sketch follows this list). I'm using ESXi 6.7 with the latest patches and UnRAID 6.9.0-rc2. At first I thought it was an issue with passing through my Nvidia P400, but it happens even without any hardware passed through. There is a thread from another user discussing the same issue. My diagnostics are attached: darkstor-diagnostics-20210111-1047.zip
  11. To follow up on my issue: I can boot if I choose just one CPU, so it's probably not an issue with this plugin specifically; it's more likely UnRAID itself. If I set one CPU and also hypervisor.cpuid.v0 = FALSE in the .vmx file, I can boot and the plugin appears to be working as expected. @ich777 - if you can think of any logs, etc. that would help get this resolved, let me know.
  12. I have pretty much the exact same issue! If I add hypervisor.cpuid.v0 = FALSE, booting freezes right after loading bzroot. If I set just one CPU, I can boot just fine. I'm not sure it's Nvidia-specific, as I still get the error even without my Nvidia card passed through to the virtual machine, and it won't boot even with no hardware passed through at all. ESXi 6.7 with the latest patches, UnRAID 6.9.0-rc2, Nvidia Quadro P400. A plain Ubuntu 20.10 instance with hypervisor.cpuid.v0 = FALSE set works as expected, no problems at all. Mine was working great with the linuxserver.io Nvidia builds.
  13. That's weird that yours is working and mine isn't. I also just tested a plain Ubuntu 20.10 instance with hypervisor.cpuid.v0 = FALSE set, and it works as expected, no problems at all. Mine was working great with the linuxserver.io Nvidia builds, so it's definitely not a hardware or ESXi issue. I have the following setup: ESXi 6.7 with the latest patches, UnRAID 6.9.0-rc2, Nvidia Quadro P400, passing through both the Nvidia card and its audio device. Other passed-through devices like the SAS HBA and NVMe drives work perfectly. Without any f
  14. Just wanted to let you know that the ESXi logo is a bit messed up.
  15. I tried all of the different advanced variables without any luck, so I gave up and did a bare-metal UnRAID with a nested ESXi instead.
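
A minimal path-mapping sketch for posts 1-2, assuming linuxserver.io images and a host share at /mnt/user/torrents; the container names, image tags, and host path are placeholders, not the exact setup from the posts. The point is that Sonarr and the download client must see completed downloads at the same container-side path:

    # Map the same host directory to the same container path in both containers
    docker run -d --name qbittorrent \
      -v /mnt/user/torrents:/torrents \
      lscr.io/linuxserver/qbittorrent

    docker run -d --name sonarr \
      -v /mnt/user/torrents:/torrents \
      lscr.io/linuxserver/sonarr

With qBittorrent's save path set to /torrents, Sonarr sees the finished file at the same /torrents path, so no Remote Path Mapping entry should be needed.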
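A quick GPU check for posts 3-4, assuming a container named plex (substitute your own container name):

    # Is the Nvidia card visible inside the container at all?
    docker exec -it plex nvidia-smi

    # While forcing a transcode (e.g. playing a 4K file at 720p), poll
    # utilization every 2 seconds; sustained non-zero values suggest the
    # encoder is in use even when the process list looks empty
    docker exec -it plex nvidia-smi \
      --query-gpu=utilization.gpu,utilization.memory --format=csv -l 2

The (hw) label next to the stream in the Plex dashboard remains the authoritative confirmation.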
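For posts 10-12, a minimal .vmx excerpt for the workaround described there. numvcpus and cpuid.coresPerSocket are standard ESXi configuration keys, but exact CPU-topology settings vary by ESXi version, so treat the values as the workaround, not a recommendation:

    hypervisor.cpuid.v0 = "FALSE"
    numvcpus = "1"
    cpuid.coresPerSocket = "1"

With more than one vCPU assigned alongside hypervisor.cpuid.v0 = FALSE, boot froze right after bzroot loaded; dropping to a single vCPU boots, and the Nvidia plugin then works as expected.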