zer0zer0

Posts posted by zer0zer0

  1. 9 hours ago, JorgeB said:

    That start sector is wrong; it should be 64. Something messed with your partition/disk. You can try running testdisk to see if it finds the old partition.

     

     

     

    Hmm, all of the other disks also start at sector 8?

     

    Device     Start        End    Sectors  Size Type
    /dev/sdb1      8 2441609210 2441609203  9.1T Linux filesystem
    /dev/sdc1      8 2441609210 2441609203  9.1T Linux filesystem
    /dev/sdd1      8 2441609210 2441609203  9.1T Linux filesystem
    /dev/sde1      8 2441609210 2441609203  9.1T Linux filesystem
    /dev/sdf1      8 2441609210 2441609203  9.1T Linux filesystem

     

  2. 5 hours ago, JorgeB said:

    Are you sure that this was ever formatted? Kind of strange that the filesystem is set to auto. Assuming it's still sde, please post the output of:

    fdisk -l /dev/sde

     

     

    It was definitely formatted with xfs and then all of a sudden just threw that error

     

    root@DARKSTOR:~# fdisk -l /dev/sde
    Disk /dev/sde: 9.1 TiB, 10000831348736 bytes, 2441609216 sectors
    Disk model: HUH721010AL4204
    Units: sectors of 1 * 4096 = 4096 bytes
    Sector size (logical/physical): 4096 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: gpt
    Disk identifier: 14CEF3CF-1F72-48D2-8C97-83C61932AE02
    
    Device     Start        End    Sectors  Size Type
    /dev/sde1      8 2441609210 2441609203  9.1T Linux filesystem
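Worth a sanity check on the numbers: fdisk reports 4096-byte logical sectors for these 4Kn drives, so a start sector of 8 lands at the same byte offset as start sector 64 does on a 512-byte-sector disk. If that's what's going on, the alignment itself may be fine. A quick sketch of the arithmetic:

```shell
# Byte offset of the partition start under each sector size
echo $(( 8 * 4096 ))    # 4Kn disk, start sector 8
echo $(( 64 * 512 ))    # 512-byte-sector disk, start sector 64
# Both come out to 32768 bytes, i.e. the same 32 KiB alignment
```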


     

  3. All of a sudden one of my disks gave me the dreaded "unmountable, no supported file system" error :(

     

    Disk 4 - HUH721010AL4204_7PH0NGHC (sde)

     

    The xfs "Check filesystem" option is totally missing from the GUI for this disk, but it's there for all the other array disks?

    Running a check from the CLI comes back with "could not find a valid secondary superblock"
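    For the record, the sequence I understand you'd normally try here (assuming the device is still sde, and running against the partition rather than the whole disk) is read-only first:

```shell
# Dry run: report filesystem problems without writing anything
xfs_repair -n /dev/sde1

# If the superblocks really are gone, testdisk can scan the disk
# for the old partition layout
testdisk /dev/sde

# Only once the data is safe elsewhere: an actual repair, which will
# search for and restore from a secondary superblock
xfs_repair /dev/sde1
```

    These need a real block device, so treat the device names as placeholders for your own setup.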

     

    Where do I go from here apart from just replacing the drive?

     

    Diagnostics zip attached

    darkstor-diagnostics-20230810-2128.zip

  4. As far as I can tell all of the drives are operating as expected and all connections are fine.

    It might start out at ~28 MB/sec and then it will drop under 10 🙃

     

    I do get a weird result from the SMART test on the parity drive - Background short  Failed in segment -->       3 

    But it reports as healthy at the same time?
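    In case it helps anyone reading later, the full self-test history can be pulled with smartctl (the device name is a placeholder); on SAS drives like these HGSTs the "Background short" results come from the SCSI log pages:

```shell
smartctl -l selftest /dev/sdX   # self-test log, including any failed segments
smartctl -x /dev/sdX            # everything: health, error counters, all logs
```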

     

    Diagnostics file attached


    Diskspeed tests are also good when benchmarking the drives


     

    But the parity drive is only being written to really slowly


    unraid-diagnostics-20230522-1659.zip

  5. 5 hours ago, Squid said:

    It's in CA.  The original won't show because it's now marked as incompatible, and the fork shows (GuildNet/Squid as author) when running 6.10+

     

    I'm not sure what's up, but I'm on 6.10.0-rc8 and I'm not seeing it no matter what I search for in community apps (version 2022.05.15) :(

    Searching for "docker folder" comes up with totally irrelevant results and searching for your repo only shows your other awesome work!!



     

  6. On 9/18/2021 at 4:14 PM, sergio.calheno said:

    Hey,

    First of all, thanks for the great docker!

    However, I'm trying to add this whitelist, but for that I need python3 inside the docker container. Any idea where I can get a walkthrough on how to do it? I've tried searching for it, but I don't get a concrete answer on how to install python3 inside a docker container (or if it's even possible). I tried entering "sudo apt-get python3" in the console, but to no avail...

    Any help would be greatly appreciated.

     

    Thanks!

     

    Hmm, apt installing python3 and then running those whitelist scripts works perfectly for me...

     

    sudo apt update

    sudo apt install python3
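    For anyone doing this from the Unraid host rather than the container's console, something like this should work (the container name here is just a placeholder); note that anything installed this way disappears when the container is rebuilt or updated:

```shell
# Open a shell inside the running container (substitute your container's name)
docker exec -it my-container bash

# Then, inside the container:
apt update && apt install -y python3
```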

     

     

  7. 1 hour ago, ich777 said:

    What is failing exactly?

     

    Is the file in the download folder and Sonarr doesn't move it to the destination?

     

    I made things as simple as possible and I'm just using /torrents as my path mapping, so I don't think that's the issue.

    Yes the file gets downloaded and I can see it sitting in the /torrents directory and it starts seeding

     

    I've also run DockerSafeNewPerms a few times, but that hasn't changed anything.

    I can get around it by using nzbToMedia scripts for the time being, but it's weird I can't get it working natively :(

  8. Anyone else having issues getting torrents to complete with Sonarr?

     

    I have paths mapped properly, and have tried qbittorrent, rtorrent, deluge, etc. without any luck :(
    Sonarr sends torrents perfectly, and the torrent clients download them perfectly

    Then the completion step just doesn't happen and there are no logs
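    In case it helps anyone hitting the same thing: the usual culprit is the two containers seeing different paths for the same share. A sketch of what I mean by identical mappings (the host path and other flags are just examples, trimmed down):

```shell
# Same host directory mounted at the same container path in BOTH containers,
# so the path the download client reports back to Sonarr resolves for Sonarr too
docker run -d --name qbittorrent -v /mnt/user/torrents:/torrents lscr.io/linuxserver/qbittorrent
docker run -d --name sonarr      -v /mnt/user/torrents:/torrents lscr.io/linuxserver/sonarr
```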

  9. 17 minutes ago, Spucoly2 said:

    There are no running processes against my GPU when running a 1080p movie.


     

    I've seen times when nvidia-smi doesn't show anything but it is actually transcoding.

    You should be sure you are forcing a transcode and then check in your Plex dashboard for the (hw) text.
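    A couple of standard ways to double-check from the CLI while a forced transcode is playing (nothing Unraid-specific here):

```shell
# Refresh the process list every second; a Plex transcoder PID should appear
watch -n 1 nvidia-smi

# Or stream per-GPU utilisation, including the encode/decode engines
nvidia-smi dmon -s u
```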
     

  10. 3 hours ago, ich777 said:

    This was discussed a few times here in the thread; the solution was actually to switch to the plexinc container or the linuxserver container.

    You can also ping @binhex with a short message or a post in his thread, I think he runs his container just fine with the plugin.

     

    binhex plexpass container is working well for me :)

  11. hmmm, seems it might even be a SuperMicro X10 and/or Xeon E5 v3 specific issue.

    No issues on my E3-1265L v3 running on an AsRock Rack E3C224D2I

     

    @StevenD can run 6.9rc2 with hypervisor.cpuid.v0 = FALSE on his setup with ESXi 7, a SuperMicro X9, and E5-2680 v2 CPUs

     

    Thanks to @ich777, I can confirm that you can run the latest 6.9rc2 with a modified bzroot that has the 6.9.0-beta35 microcode :)

  12. 5 hours ago, ich777 said:

    Sorry, I really want to help but I don't know much about ESXi. Maybe @StevenD can help; isn't there also a subforum here in the forums about virtualizing Unraid?

     

    I talked to @StevenD and dropped back to UnRAID version 6.9.0-beta35 like he's running, and this issue is resolved :)


     

  13. To follow up on my issues, I can boot if I choose just one cpu, so it's probably not an issue with this plugin specifically.

    It is more likely to be UnRAID itself.

     

    If I set one cpu and also hypervisor.cpuid.v0 = FALSE in the vmx file I can get it to boot and the plugin appears to be working as expected.

    @ich777- if you can think of any logs etc. to get this resolved let me know

     


  14. I have pretty much the exact same issue!
    If I add hypervisor.cpuid.v0 = FALSE, booting freezes right after loading bzroot.

    If I set just one cpu I can boot just fine
     

    I'm not sure it's Nvidia specific as I still get the error even without my Nvidia card passed through to the virtual machine.

    And it won't boot even without any hardware at all passed through :(


    ESXi 6.7 with the latest patches

    UnRAID 6.9.0-rc2

    Nvidia Quadro P400

     

    A plain Ubuntu 20.10 instance with hypervisor.cpuid.v0 = FALSE set works as expected. No problems at all. 

    Mine was also working great with linuxserver.io nvidia builds.

    So it’s definitely not a hardware or ESXi issue. 

     


     

  15. On 1/6/2021 at 6:17 AM, StevenD said:

    I'm running unRAID virtualized under ESXi 7 with an RTX 4000, using @ich777's plugin. I don't recall doing anything special to get it to work. I do have these settings:

    
    pciHole.dynStart 3072
    
    hypervisor.cpuid.v0 FALSE

     

    but I don't recall specifically setting them.

     

    That's weird that yours is working and mine isn't.

    I also just tested that it works as expected with a plain Ubuntu 20.10 instance with hypervisor.cpuid.v0 = FALSE set. No problems at all. 

    Mine was working great with linuxserver.io nvidia builds.

    So it’s definitely not a hardware or ESXi issue. 

    I have the following setup:

    • ESXi 6.7 with the latest patches
    • UnRAID 6.9.0-rc2
    • Nvidia Quadro P400


    Passing through both the nvidia card and audio device. 
    Other passed through devices like sas hba, and nvme drives work perfectly. 
     

    Without any flags in the vmx file I can boot the UnRAID virtual machine and can see the Nvidia card

     

    root@XXXX:~# lspci | grep NV
    03:00.0 VGA compatible controller: NVIDIA Corporation GP107GL [Quadro P400] (rev a1)
    03:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)

     

    But anything else has errors like

    root@XXXX:~# nvidia-smi 
    Unable to determine the device handle for GPU 0000:03:00.0: Unknown Error
     

    The Nvidia Driver Settings page gives me an error too


     

    If I add the hypervisor.cpuid.v0 = FALSE variable to the vmx file it freezes after bzroot and won't boot :(

     

  16. 14 hours ago, ich777 said:

     

    Are you using 6.8.3 or 6.9.0-RC2?

    I know very little about ESXi 7 so I really can't help here.

    Maybe this can help: Click

     

    Tried all of the different advanced variables, without any luck, so I gave up and did a bare metal UnRAID with a nested ESXi instead :D

  17. Anyone have any luck getting this working with an UnRAID host running on ESXi 7 with pass through?

    Mine either freezes at boot time, or if I change some advanced variables around I can get it to boot, but get a message on the plugin page saying "unable to determine the device handle"

     

    I have tried the normal advanced settings you need for ESXi and Nvidia passthrough, like setting hypervisor.cpuid.v0 = "FALSE" and pciHole.start = "2048", but I'm not having much luck so far :(
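    Pulling together the settings mentioned in this thread, the relevant vmx lines would look like this (the pciHole values are the ones reported elsewhere in the thread; they may need tuning for your card):

```
hypervisor.cpuid.v0 = "FALSE"
pciHole.start = "2048"
pciHole.dynStart = "3072"
```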