zer0zer0

Everything posted by zer0zer0

  1. I'm not sure what's up, but I'm on 6.10.0-rc8 and I'm not seeing it, no matter what I search for in Community Apps (version 2022.05.15). Searching for "docker folder" comes up with totally irrelevant results, and searching for your repo only shows your other awesome work!!
  2. I can confirm that it's there on my 6.10.0-rc2 server with a Mellanox CX3 card as well, and I have used it to change the interface mapping.
  3. I'm curious if this is the intended behavior when using the web GUI on 6.10.0-rc2? If you use something like http://unraidserver.local, it converts it to a really ugly URL like this: https://a9673c7c842a20425e043d8330e376af784c757b2.unraid.net/ And at the same time you can't get to it, because it doesn't resolve nicely with pihole/unbound running on Unraid itself, so I had to go in and create a new local DNS entry.
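For anyone hitting the same resolution problem: a sketch of what that local DNS entry can look like, assuming Pi-hole v5+ where "Local DNS Records" are stored in hosts-file format. The IP address and hostname below are placeholders, not the poster's actual values.

```
# /etc/pihole/custom.list - Pi-hole "Local DNS Records" (hosts-file format)
# Placeholder values: substitute your server's LAN IP and hostname
192.168.1.50 unraidserver.local
```

The same record can be added from the Pi-hole web UI under Local DNS > DNS Records, which writes to this file.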
  4. Hmm, apt installing python3 and then running those whitelist scripts works perfectly for me...
     sudo apt update
     sudo apt install python3
  5. I made things as simple as possible and I'm just using /torrents as my path mapping, so I don't think that's the issue. Yes, the file gets downloaded; I can see it sitting in the /torrents directory and it starts seeding. I've also run DockerSafeNewPerms a few times, but that hasn't changed anything. I can get around it by using nzbToMedia scripts for the time being, but it's weird that I can't get it working natively.
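For context on the path-mapping point above: a common cause of Sonarr failing to import completed torrents is the download client and Sonarr seeing the finished files at different container paths. A minimal docker-compose sketch of a consistent mapping (illustrative only, not the poster's actual setup; the host path is a placeholder):

```yaml
# Both containers must see completed downloads at the SAME container path
# (here /torrents), or Sonarr's import step can't find the finished files.
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr
    volumes:
      - /mnt/user/torrents:/torrents   # host path is a placeholder
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    volumes:
      - /mnt/user/torrents:/torrents   # identical mapping on the client side
```

The download client must also be configured to save into /torrents, so the path it reports back to Sonarr is valid inside Sonarr's container too.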
  6. Anyone else having issues getting torrents to complete with Sonarr? I have paths mapped properly, and have tried qbittorrent, rtorrent, deluge, etc. without any luck. Sonarr sends torrents perfectly, and the torrent clients download them perfectly. Then the completion just doesn't happen, and there are no logs.
  7. Are you sure you're forcing it to transcode? Can you see the nvidia card if you run nvidia-smi from inside the docker terminal?
  8. I've seen times when nvidia-smi doesn't show anything but it is actually transcoding. You should make sure you are forcing a transcode and then check your Plex dashboard for the (hw) text, like this...
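The check described above can be run from the host as well. A sketch, where the container name "plex" is a placeholder for your own, and the docker call is only attempted if docker is actually present:

```shell
# Check whether the NVIDIA GPU is usable from inside the Plex container.
# "plex" is a placeholder container name - substitute your own.
if command -v docker >/dev/null 2>&1; then
  gpu_status=$(docker exec plex nvidia-smi 2>&1) \
    || gpu_status="GPU not visible inside container"
else
  gpu_status="docker is not available on this host"
fi
echo "$gpu_status"
```

While a transcode is forced, the Plex dashboard should show "(hw)" next to the stream if hardware transcoding is really being used.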
  9. binhex plexpass container is working well for me
  10. Hmmm, seems it might even be a SuperMicro X10 and/or Xeon E5 v3 specific issue. No issues on my E3-1265L v3 running on an ASRock Rack E3C224D2I. @StevenD can run 6.9rc2 with hypervisor.cpuid.v0 = FALSE on his setup with ESXi 7, a SuperMicro X9, and E5-2680v2 CPUs. Thanks to @ich777, I can confirm that you can run the latest 6.9rc2 with a modified bzroot that has the 6.9.0-beta35 microcode.
  11. I saw the boot freeze with just hypervisor.cpuid.v0 = FALSE in the vmx file and no hardware passthrough at all, so it should be independent of video cards etc.
  12. I rolled back to UnRAID version 6.9.0-beta35, and this issue is resolved. I'm not sure what changed exactly, but it might be something to do with CPU microcode
  13. I talked to @StevenD and dropped back to UnRAID version 6.9.0-beta35 like he's running, and this issue is resolved
  14. @jwiener3 - This issue is resolved if you drop back to UnRAID version 6.9.0-beta35
  15. I'm seeing an issue when using ESXi and setting hypervisor.cpuid.v0 = FALSE in the vmx file. The system will not boot if you assign more than one cpu to the virtual machine. I'm using ESXi 6.7 with the latest patches and UnRAID 6.9.0-rc2. At first I thought it was an issue with passing through my Nvidia P400, but it happens even without any hardware passed through. My diagnostic zip file is attached. Here is a thread from another user discussing the same issue - darkstor-diagnostics-20210111-1047.zip
  16. To follow up on my issues: I can boot if I choose just one cpu, so it's probably not an issue with this plugin specifically; it is more likely to be UnRAID itself. If I set one cpu and also hypervisor.cpuid.v0 = FALSE in the vmx file, I can get it to boot and the plugin appears to be working as expected. @ich777 - if you can think of any logs etc. to get this resolved, let me know.
  17. I have pretty much the exact same issue! If I add hypervisor.cpuid.v0 = FALSE, booting freezes right after loading bzroot. If I set just one cpu I can boot just fine. I'm not sure it's Nvidia specific, as I still get the error even without my Nvidia card passed through to the virtual machine, and it won't boot even without any hardware passed through at all.
      ESXi 6.7 with the latest patches
      UnRAID 6.9.0-rc2
      Nvidia Quadro P400
      A plain Ubuntu 20.10 instance with hypervisor.cpuid.v0 = FALSE set works as expected - no problems at all. Mine was also working great with linuxserver.io nvidia builds. So it's definitely not a hardware or ESXi issue.
  18. That's weird that yours is working and mine isn't. I also just tested that it works as expected with a plain Ubuntu 20.10 instance with hypervisor.cpuid.v0 = FALSE set - no problems at all. Mine was working great with linuxserver.io nvidia builds, so it's definitely not a hardware or ESXi issue. I have the following setup:
      ESXi 6.7 with the latest patches
      UnRAID 6.9.0-rc2
      Nvidia Quadro P400
      Passing through both the nvidia card and audio device. Other passed-through devices like the SAS HBA and NVMe drives work perfectly. Without any flags in the vmx file I can boot the UnRAID virtual machine and can see the Nvidia card:
      root@XXXX:~# lspci | grep NV
      03:00.0 VGA compatible controller: NVIDIA Corporation GP107GL [Quadro P400] (rev a1)
      03:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)
      But anything else has errors like:
      root@XXXX:~# nvidia-smi
      Unable to determine the device handle for GPU 0000:03:00.0: Unknown Error
      The Nvidia Driver settings page gives me this error. If I add the hypervisor.cpuid.v0 = FALSE variable to the vmx file, it freezes after bzroot and won't boot.
  19. Just wanted to let you know that the ESXi logo is a bit messed up
  20. Tried all of the different advanced variables without any luck, so I gave up and did a bare-metal UnRAID with a nested ESXi instead.
  21. Anyone have any luck getting this working with an UnRAID host running on ESXi 7 with passthrough? Mine either freezes at boot time, or, if I change some advanced variables around, I can get it to boot but get a message on the plugin page saying "unable to determine the device handle". I have tried the normal advanced settings you need for ESXi and nvidia passthrough, like setting hypervisor.cpuid.v0 = "FALSE" and pciHole.start = "2048", but I'm not having much luck so far.
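For readers following along: the two advanced settings quoted in this thread, as they would appear in the virtual machine's .vmx file (vmx files use key = "value" pairs; the values are the ones from the post, not recommendations):

```
hypervisor.cpuid.v0 = "FALSE"
pciHole.start = "2048"
```

hypervisor.cpuid.v0 = "FALSE" hides the hypervisor CPUID bit from the guest, which older NVIDIA drivers check before allowing the GPU to work in a VM; pciHole.start adjusts where the guest's PCI memory hole begins, which can matter for passthrough devices with large BARs.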
  22. Same sort of symptoms for me as well on a virtualized Unraid on ESXi 7.0 running on a SuperMicro X10SRH-CF and Xeon 2680v4. The Fix Common Problems plugin is telling me I don't have a CPU scaling driver installed, although I can see my CPU is not at 100% and it looks pretty normal to me?
  23. I'd like to request that the included and excluded disks are added to the shares overview page. I find it cumbersome to have to click into each share individually to see which disks have been assigned. It seems like it would be a really simple addition to either include a column or have an element you hover over or click to drop down. The hover or drop-down might be better, so it can accommodate systems with lots of disks. Please forgive me if there is some other way to get an overview showing this for all shares. A quick and dirty example of how it might look:
  24. Did iSCSI make it into the new 6.9 beta release, or will it hopefully make its way into the RC?
  25. I might have missed it, but is there any chance we'll see iSCSI support in this beta, or the 6.9 RC? I know a lot of people would like to see it.