Posts posted by jaybee

  1. 19 hours ago, jaybee said:

    Same for me. I've been using Dynamix S3 Sleep for a long time, and since updating the other day it has stopped shutting down my server at night like it used to. Going to have to debug this more, I think.

     

    Just realised that since the last update, when I went into the Dynamix S3 Sleep plugin page on my server, the status in the top right hand corner said the plugin was "STOPPED" (90% sure it said STOPPED off the top of my head, but it may have been something else; it was orange in colour). I don't know why this was. I changed one of the options, clicked Apply, then changed it back and clicked Apply again, and the status changed to "STARTED". My server then shut down last night as it used to, and normal service resumed.

    So it seems for me the latest update did something where the plugin was left in a stopped state for whatever reason. Might be worth checking this.
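
    For anyone who wants to check the same thing from the command line, something like the below should show whether the plugin's monitor script is actually running (a sketch; I'm assuming the script is called s3_sleep, as it is in the Dynamix plugin):

    # check whether the Dynamix S3 Sleep monitor script is running (assumes the script name s3_sleep)
    ps aux | grep '[s]3_sleep'
    # no output here matches the "STOPPED" state; toggling any option and clicking Apply restarts it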

  2. On 3/15/2023 at 10:08 AM, Kasjo said:

    On my end, sleep is not working anymore. It doesn't recognise my network anymore and I still can't select a cache drive outside of the array. Is it possible to get an older version somewhere? Currently using the darkside40 fork.

    Same for me. I've been using Dynamix S3 Sleep for a long time, and since updating the other day it has stopped shutting down my server at night like it used to. Going to have to debug this more, I think.

  3. 45 minutes ago, Acidcliff said:

    Yes - Measured with an "Innr SP120" Zigbee Wall Plug

     

    Fractal Design Ion+ 660p

     

    Yes indeed - no GPU or PCI-E Cards. Just the M2 1TB mentioned


     

    This was measured on a fresh "native" (no vm) Win11 install (no unraid) - so the values came from the Win11-Thread director

     

    I have done some new measurements using a pretty barebones (no Docker Containers or VMs running) Unraid (system mentioned above but now with 2x 12TB Western Digital HDDs):
    - ~55 W with HDDs spinning idle / 45 W with HDDs spun down (no CPU pinning - I think Unraid defaults to the first cores, which are P-cores)

    - ~45 W with HDDs spinning idle / 36 W with HDDs spun down (exclusive use of the E-cores through CPU isolation)

    Adding an MSI Aero GTX 1050 2GB results in ~15-20 W of additional idle power draw.
    All are measurements of the complete system measured at the wall outlet.

     

    The ~10 W was for the idling CPU only - not the whole system. I don't remember which source I got that figure from.

     

     

    Thanks very much for the info. So it appears possible to pin the Unraid OS operations to the E-cores and get 36 W idle. That is quite good. My current 5900X system idles much higher, at around 100 W. That is with two GPUs idling, an HBA card, 2 x NVMe drives, 8 fans and a fan controller, which probably accounts for an extra 40 W. So perhaps my 5900X, whilst idling, would consume maybe 60 W at the wall if it were bare bones. Ways to improve consumption would obviously be to remove the GPU(s) and move to a CPU with Quick Sync to help with transcoding, and to move to fewer disks all running from the onboard SATA instead of an HBA card.
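
    For reference, a sketch of how the core isolation mentioned above is generally set up on Unraid (the core numbering is an assumption - on a 12600K the P-core threads usually enumerate first and the four E-cores last, so check the mapping before committing):

    # show which logical CPUs map to which physical cores (P-cores vs E-cores)
    lscpu --all --extended
    # isolate the chosen cores from the Unraid scheduler via /boot/syslinux/syslinux.cfg, e.g.
    #   append isolcpus=12-15 initrd=/bzroot
    # Unraid 6.9+ exposes the same option under Settings > CPU Pinning > CPU Isolation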

     

  4. On 1/10/2022 at 11:17 AM, Acidcliff said:

    In the end I chose the 12600k with a z690 (MSI PRO A).

     

    In case it's of interest to somebody: I measured the idle power draw of the pretty bare-bones system* with a freshly installed Win11. It was about 32-38 W (+/- 5).

     

    * 12600k, MSI Z690 PRO A, PNY XLR8 CS3030 1TB M.2, Arctic Liquid Freeze 420, 3x 140mm fans (all stopped for the test), no OC, XMP on

     

    I have yet to test it with Unraid installed, but I have to overcome some hurdles that came with the platform (e.g. onboard ethernet with the Intel I225-V not working, the need for a dedicated GPU since the UHD 770 has no supported method for sharing between VMs...). Pains of early adopting (or my personal incompetence/ignorance, since this is my first build in ~7 years :D)

    Very interesting to me. Is that measured at the wall using a power meter? So total power consumption of the system? What power supply? Can you confirm that is with no gpu or any pci-e cards? I see you say it was barebones with just the M2 1TB in. 

    Is that with Unraid running and assigned to the E-cores? How many? I'd be interested to see if you are still able to get those numbers when running a Windows VM also assigned to the E-cores, and then again with one assigned to the P-cores, to see how it varies.

     

    I was looking at the 12th gen i7 12700 series, which seems to offer variants like the 12700T (there are also E, H and F variants), and I think one of them may be an economy, OEM-spec type with a TDP of around 35 W. It seems a bit confusing right now and I can't find any good performance indication outside of the K series, which is reviewed the most as it is the most popular.

     

    Where did you read about the 12600 series going "well below" 10 W idle? That can't be accurate, can it? Do you mean just the CPU rather than the total system at the wall? Even if it's just the CPU, that still seems very low.

     

  5. What about gaming on Windows 11 VMs with the AMD performance hit? A user already found they were not able to install the chipset driver fix to prevent the performance issue. Am I understanding correctly that this cannot be resolved because the emulation causes the chipset driver not to recognize the hardware as legitimate?
    Or is this all resolved now that Microsoft fixed the issue with a Windows hotfix?

  6. I just had a similar issue and thought I would post here for anyone else who encounters similar behaviour, with a black screen (or no output) from the GPU while everything else looks like it should work. My Nvidia 3060 Ti also got detected as an "NVIDIA Corporation Device 2489", but this is not an indicator of any problem in itself. Some Nvidia GPUs do get detected under strange names, especially the newer RTX 30xx series.

     

    Symptoms I got:

    When the VM is started with the GPU passed through (correctly configured in Unraid), it goes green in the UI and everything looks as though it should work fine, but no output appears on a monitor over HDMI or DisplayPort. When the VM uses plain VNC as its graphics card it functions fine. When the server boots, display output can be seen during POST and BIOS, so you know the card itself works normally.

     

    What I changed which caused this issue to occur:

    Upgraded the GPU in my system from an Nvidia 3060 to an Nvidia 3060ti

     

    Actual issue:

    The Nvidia drivers in the Windows VM itself. I believe that because I did not uninstall the old Nvidia driver before the upgrade, Windows kept trying to use the existing driver with the new card. The problem, I think, was that the driver I had installed was a specific one which only works with non-LHR 3060 cards, as it was the leaked dev driver which unlocks 3060s for mining. I should have uninstalled this driver first, whilst the old card was still in the server and from inside the VM, and used a generic Windows auto-detected one, or just installed the latest standard Nvidia driver for the 3060 first. That would probably be similar enough to the 3060 Ti that it might have initialised it and worked.

     

    I don't think you can simply use a VNC connection to go in and remove the driver, because then the GPU is not listed in Device Manager to be uninstalled, if you see what I mean. Although...

     

    How you might be able to fix it:

    A. It may be possible to use free software like DDU (Display Driver Uninstaller) to force-remove all traces of the Nvidia drivers. Then it may be possible to start the VM with the GPU passed through and have Windows auto-detect it and initialise it with a basic driver.

    B. Creating a new VM from scratch would probably solve the issue and initialise the GPU fresh.

     

    How I fixed it:

    In my case since I already had the new GPU installed in the server and didn't want to put the old one back in or try either of the above two things yet, I decided to:

     

    (If remote desktop is already possible on your VM then skip to step 6)

    1. Configure the VM to only use VNC as GPU.

    2. Start the VM in unraid GUI

    3. Using VNC, access the VM and go to remote desktop settings inside windows. Ensure remote desktop connections are allowed and that you can connect with a valid user.

    4. Exit remote desktop and VNC.

    5. Stop/shutdown the VM from unraid GUI

    6. Configure the VM to now use the GPU passed through as per other instructions in various other threads/youtube videos (out of scope for this explanation as too long)

    7. Start the VM from the unraid GUI

    8. Remote desktop into the VM

    9. In Device Manager you should now see the Nvidia GPU as a device which can be right-clicked and uninstalled. This removes the problematic, incompatible Nvidia driver which cannot work with the currently installed and passed-through GPU. In my case the new 3060 Ti was the installed GPU, and the driver uninstalled was the old 3060 dev mining driver.

    10. Go to the Nvidia website and download and install the latest driver for your GPU as you normally would. With Nvidia I would recommend selecting a "Custom" install and then a "Clean" install to properly clear out any remnants of older drivers. The VM will reboot to complete the installation. Just remote desktop back in once it's done to verify it completed, then exit the installer.

    11. Disconnect remote desktop and test direct GPU connectivity to a monitor now. You should find you have output as normal on your screen.
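
    As an alternative to step 9, if the old driver package refuses to uninstall cleanly from Device Manager, it can also be removed from an elevated command prompt inside the VM. A rough sketch using pnputil (which ships with Windows 10/11); the oemNN.inf name below is a placeholder you would read from the enumeration output:

    rem list installed third-party driver packages and note the Published Name (oemNN.inf) of the old Nvidia one
    pnputil /enum-drivers
    rem remove that package (oem42.inf is a placeholder)
    pnputil /delete-driver oem42.inf /uninstall /force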

     

  7. On 6/5/2021 at 8:05 PM, MarKol4 said:

    The same here. I have an array purely on mediocre HDD drives. I'm doing some testing now and for that purpose there is also no SSD Cache Pool in my unRAID.

     

    There is a VM with a virtual disk (raw, VirtIO) attached. I prepared the virtual disk by running SDelete -c in the VM in order to get the virtual disk fully initialized (just to make sure that the entire space of the disk had been written with some data).

     

    After that (and a restart of the unRAID server) I tested the virtual disk performance by running CrystalDiskMark in the VM. These are my example results:

     

    ------------------------------------------------------------------------------
    CrystalDiskMark 8.0.1 x64
    ------------------------------------------------------------------------------
    * MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
    * KB = 1000 bytes, KiB = 1024 bytes

    [Read]
      SEQ    1MiB (Q=  8, T= 1):   791.518 MB/s [    754.9 IOPS] < 10448.04 us>
      SEQ    1MiB (Q=  1, T= 1):   885.947 MB/s [    844.9 IOPS] <  1180.72 us>
      RND    4KiB (Q= 32, T= 1):    18.143 MB/s [   4429.4 IOPS] <  7121.53 us>
      RND    4KiB (Q=  1, T= 1):    19.230 MB/s [   4694.8 IOPS] <   212.33 us>

    [Write]
      SEQ    1MiB (Q=  8, T= 1):   221.328 MB/s [    211.1 IOPS] < 37301.93 us>
      SEQ    1MiB (Q=  1, T= 1):   195.541 MB/s [    186.5 IOPS] <  5350.16 us>
      RND    4KiB (Q= 32, T= 1):   168.338 MB/s [  41098.1 IOPS] <   763.88 us>
      RND    4KiB (Q=  1, T= 1):    14.531 MB/s [   3547.6 IOPS] <   280.21 us>

    Profile: Default
       Test: 16 GiB (x5) [D: 0% (0/80GiB)]
       Mode: [Admin]
       Time: Measure 10 sec / Interval 0 sec
       Date: 2021/06/05 20:14:42
         OS: Windows 10 Professional [10.0 Build 19041] (x64)

     

    The write results are way beyond what that HDD hardware can do, especially the random 4K writes.

     

    I can see no other explanation than that unRAID is doing a lot of write-back caching when the VM writes to a virtual disk.

    Is our data safe when unRAID is doing that kind of caching? Any machine crash or power outage can corrupt a lot of data if write-back caching is in use. That is why hardware RAID controllers have a BBU, which makes write-back caching safe.

     

    Let's imagine that we have a database running in a VM which stores its data files on a virtual disk. Should it not be the case that once the database engine commits a transaction and the user code is informed of a successful commit, all the data is already persisted and safe? Is that even possible when unRAID seems to be doing write-back caching?

     

    Can you explain what is going on here and how safe the data already written to the virtual disk is?

     

     

     

    Did you ever get to the bottom of this? Sometimes testing with a larger file size in CrystalDiskMark, like 4 GB, gives results showing sequential writes of only around 1000 MB/s, which may be more realistic given the bare-metal performance should be around 3000 MB/s. It definitely seems like a caching issue, which makes it hard to truly compare passed-through NVMe performance with a vdisk. A vdisk has the nice portability aspect and ease of backup, but NVMe has the advantage of supposedly supporting proper TRIM and wear levelling when passed through to Windows properly, I have read.
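
    For anyone comparing numbers, a sketch of how to check which cache mode a vdisk is using from the Unraid console (the VM name is a placeholder; the cache= attribute is what governs the host-side write-back behaviour):

    # dump the VM definition and look at the vdisk driver line ("Windows 10" is a placeholder VM name)
    virsh dumpxml "Windows 10" | grep -A3 "<disk"
    # a line like <driver name='qemu' type='raw' cache='writeback'/> means host RAM is caching writes;
    # editing the VM's XML to cache='none' uses direct I/O and gives figures much closer to the real device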

  8. Hey guys,

     

    I want to set up an NVMe drive as passthrough in my VM to get close to bare-metal performance for gaming. Before doing this, I took some benchmarks of the existing vdisk performance for comparison. I noticed that the numbers from the usual benchmarking tools claim very unrealistic performance, stating figures that are simply too fast to be possible. I gather this is due to the way the vdisk works with caching? Is there a way to turn this off and test more easily to get a baseline? I tried IOzone but could not work out how to use it easily, as it is command-line based and does not give easy-to-read results. I tried IOmeter and the control panel for the software looks ancient, and again I was not sure how to use it.

     

    The results I got on a vdisk running on a SATA cache disk (which would normally have a read speed of approx 500 MB/s and a write speed of around 350 MB/s) are below:

     

    Crystal disk mark:

    Sequential - Read = 10,803MB/s, Write = 4,812MB/s

    4KiB Q8T8 - Read = 192MB/s, Write = 112MB/s

    4KiB Q32T1 - Read = 396MB/s, Write = 94MB/s

    4KiB Q1T1 - Read = 46MB/s, Write = 36MB/s

     

    AS SSD:

    Sequential - Read = 6,582MB/s, Write = 2,142MB/s

    4K - Read = 47MB/s, Write = 33MB/s

    4K 64 Thrd - Read = 292MB/s, Write = 72MB/s

    Acc. Time - Read = 0.193ms, Write = 0.106ms 
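
    One way to get a believable baseline is to take the VM out of the equation and test the underlying cache device from the Unraid console with direct I/O. A sketch using fio, assuming it is installed (it is not part of stock Unraid); the target path is an example and the test writes a temporary file, so point it somewhere safe:

    # sequential 1M writes against the cache pool with the page cache bypassed (direct=1); writes a 4G temp file
    fio --name=seqwrite --filename=/mnt/cache/fio-test.bin --size=4G --bs=1M --rw=write --direct=1 --ioengine=libaio --runtime=30 --time_based
    # random 4K writes at queue depth 1 - closest to the CrystalDiskMark Q1T1 figure
    fio --name=randwrite --filename=/mnt/cache/fio-test.bin --size=4G --bs=4k --rw=randwrite --direct=1 --ioengine=libaio --iodepth=1 --runtime=30 --time_based
    rm /mnt/cache/fio-test.bin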

  9. On 3/18/2021 at 2:37 PM, PTRFRLL said:

     

    I believe you can view the min/max power limits of your card with the following command:

     

    
    nvidia-smi -q -d POWER

     

    Otherwise, you might play around with the memory tweak (--mt) setting for T-rex

     

    Thanks for the support with this.

     

    I've had a play around with my 3070 and got it passed through to a VM, where it does indeed hash at 61 MH/s with the memory overclocked. In this docker it sits at 51-52 MH/s. I have been successful in adjusting the power limit using nvidia-smi as you proposed above. This worked perfectly and halved my power usage. I was also able to set the GPU core clock using nvidia-smi with no issues. I did have to turn persistence mode on.
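
    For reference, the commands I mean were along these lines (a sketch - the power limit and clock values are just examples and need to sit within the range reported by nvidia-smi -q -d POWER):

    # enable persistence mode, then cap the power limit (value in watts)
    nvidia-smi -pm 1
    nvidia-smi -pl 130
    # lock the GPU core clock to a min,max range in MHz (supported on recent drivers/cards)
    nvidia-smi -lgc 210,1005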

     

    For the memory however, the only command I can see mentioned is something like this:

     

    nvidia-smi -i 0 -ac 2000,1000

     

    But when I do that I get the following error:

     

    Setting applications clocks is not supported for GPU 00000000:06:00.0.
    Treating as warning and moving on.
    All done.

     

    So it seems adjusting memory clocks is not possible? I googled it and also saw that if you have "N/A" in a lot of places in the output of "nvidia-smi -q", it means your GPU does not support that feature. I can see N/A against the applications clocks section, as per the output below. Maybe this is why the above error occurs.

     

     

     Clocks
            Graphics                          : 1005 MHz
            SM                                : 1005 MHz
            Memory                            : 6800 MHz
            Video                             : 900 MHz
        Applications Clocks
            Graphics                          : N/A
            Memory                            : N/A
        Default Applications Clocks
            Graphics                          : N/A
            Memory                            : N/A
        Max Clocks
            Graphics                          : 2100 MHz
            SM                                : 2100 MHz
            Memory                            : 7001 MHz
            Video                             : 1950 MHz
        Max Customer Boost Clocks

     

     

    I can see mention of using "nvidia-settings" or nvidia-xconfig or PowerMizer to adjust things like memory clocks, but I think this is possibly only doable on more feature-rich Linux distros like Ubuntu. Have you any other ideas how we could overclock the memory? This would be a killer feature to have set up and running inside a docker as you say, since then it is not bound to a VM and is accessible as a shared resource, both for mining at its full potential and for Plex transcoding duties.

  10. The below shows the IOMMU groupings for the Asus B550-E ROG Strix Gaming Motherboard.

    IOMMU is enabled in the BIOS and the ACS override setting in the Unraid VM settings is disabled. Blank lines separate each group. Out of the box, this presents the following possible issues:

     

    1: The disks (bolded below) all come under the same IOMMU group. I was expecting the onboard SATA disks (in my case the Samsung SSDs below) to be separate from the HBA card's mechanical spinners. I assume this means the VM cannot have an individual SSD passed through to it as an entire drive.

     

    2: The USB 3.0 and 2.0 controllers (bolded below) both appear under the same IOMMU group. I assume this means I cannot separate the Unraid flash drive from the USB ports I would want to pass through to the VM.

     

     

    IOMMU group 0:[1022:1482] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

     

    IOMMU group 1:[1022:1483] 00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge

     

    IOMMU group 2:[1022:1483] 00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge

     

    IOMMU group 3:[1022:1482] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

     

    IOMMU group 4:[1022:1482] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

     

    IOMMU group 5:[1022:1483] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge

     

    IOMMU group 6:[1022:1483] 00:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge

     

    IOMMU group 7:[1022:1482] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

     

    IOMMU group 8:[1022:1482] 00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

     

    IOMMU group 9:[1022:1482] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

     

    IOMMU group 10:[1022:1484] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]

     

    IOMMU group 11:[1022:1482] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

     

    IOMMU group 12:[1022:1484] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]

     

    IOMMU group 13:[1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)

    [1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)

     

    IOMMU group 14:[1022:1440] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0

    [1022:1441] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1

    [1022:1442] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2

    [1022:1443] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3

    [1022:1444] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4

    [1022:1445] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5

    [1022:1446] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6

    [1022:1447] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7

     

    IOMMU group 15:[1987:5012] 01:00.0 Non-Volatile memory controller: Phison Electronics Corporation E12 NVMe Controller (rev 01)

    [N:0:1:1] disk Sabrent__1 /dev/nvme0n1 1.02TB

     

    IOMMU group 16:[1022:43ee] 02:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] Device 43ee

    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    Bus 001 Device 002: ID 8087:0029 Intel Corp.

    Bus 001 Device 003: ID 0b05:18f3 ASUSTek Computer, Inc. AURA LED Controller

    Bus 001 Device 004: ID 05e3:0610 Genesys Logic, Inc. 4-port hub

    Bus 001 Device 005: ID 05e3:0610 Genesys Logic, Inc. 4-port hub

    Bus 001 Device 006: ID 0781:5580 SanDisk Corp. SDCZ80 Flash Drive

    Bus 001 Device 007: ID 0409:005a NEC Corp. HighSpeed Hub

    Bus 001 Device 008: ID 051d:0002 American Power Conversion Uninterruptible Power Supply

    Bus 001 Device 009: ID 1a40:0101 Terminus Technology Inc. Hub

    Bus 001 Device 010: ID 0557:8021 ATEN International Co., Ltd Hub

    Bus 001 Device 011: ID 04b4:0101 Cypress Semiconductor Corp. Keyboard/Hub

    Bus 001 Device 013: ID 045e:0040 Microsoft Corp. Wheel Mouse Optical

    Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub

    [1022:43eb] 02:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] Device 43eb

    [1:0:0:0] disk ATA SAMSUNG SSD 830 3B1Q /dev/sdb 128GB

    [2:0:0:0] disk ATA SAMSUNG SSD 830 3B1Q /dev/sdc 128GB

    [1022:43e9] 02:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43e9

    [1022:43ea] 03:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea

    [1022:43ea] 03:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea

    [1000:0072] 04:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)

    [7:0:0:0] disk ATA SAMSUNG HD203WI 0002 /dev/sdd 2.00TB

    [7:0:1:0] disk ATA Hitachi HDS5C302 A580 /dev/sde 2.00TB

    [7:0:2:0] disk ATA Hitachi HDS5C302 A580 /dev/sdf 2.00TB

    [7:0:3:0] disk ATA ST2000DL003-9VT1 CC32 /dev/sdg 2.00TB

    [7:0:4:0] disk ATA ST4000DM000-1F21 CC54 /dev/sdh 4.00TB

    [7:0:5:0] disk ATA WDC WD100EZAZ-11 0A83 /dev/sdi 10.0TB

    [7:0:6:0] disk ATA WDC WD100EZAZ-11 0A83 /dev/sdj 10.0TB

    [8086:15f3] 05:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V (rev 02)

     

    IOMMU group 17:[10de:2484] 06:00.0 VGA compatible controller: NVIDIA Corporation GA104 [GeForce RTX 3070] (rev a1)

    [10de:228b] 06:00.1 Audio device: NVIDIA Corporation GA104 High Definition Audio Controller (rev a1)

     

    IOMMU group 18:[10de:1c82] 07:00.0 VGA compatible controller: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] (rev a1)

    [10de:0fb9] 07:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)

     

    IOMMU group 19:[1022:148a] 08:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function

     

    IOMMU group 20:[1022:1485] 09:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP

     

    IOMMU group 21:[1022:1486] 09:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP

     

    IOMMU group 22:[1022:149c] 09:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller

    Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub

     

    IOMMU group 23:[1022:1487] 09:00.4 Audio device: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller
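
    For reference, the grouping above (minus the disk and USB detail that Unraid adds) can be reproduced from the console with a small loop like this (a sketch):

    # list every PCI device grouped by IOMMU group
    for g in /sys/kernel/iommu_groups/*; do
      echo "IOMMU group ${g##*/}:"
      for d in "$g"/devices/*; do
        lspci -nns "${d##*/}"
      done
    done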

  11.  

     

     

    On 3/11/2021 at 7:05 AM, Dodisbeaver said:

    I am running a Windows VM with a cache pool of two SSDs. I am passing through an extra SSD with games on it via unassigned drives.

     

    When you say you are passing it through... how? If it appears in unassigned drives, does that not mean that it is NOT passed through? What happens to the drive in the unassigned drives list when you start the VM? Does it disappear, as at that point it gets passed through?

    When I checked the IOMMU groups on my B550 Asus ROG Strix-E board, they do not look good. I will explain more below.

     

    On 3/11/2021 at 7:05 AM, Dodisbeaver said:

    I am also passing through one of the two USB controllers on the motherboard.

     

    I checked my IOMMU groups and the two controllers I think you speak of are listed like so for me. This is with the latest BIOS from February and with ACS patch under VM manager settings set to disabled:

     

    IOMMU group 22:[1022:149c] 09:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller

    Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub

     

    You can see that they both fall under the same IOMMU group. I have seen countless threads/posts/videos all saying that if things appear in the same IOMMU group, you have to pass the whole thing through. I don't fully understand what people mean here, but I think it is this: under the VM settings, when you select something to pass through from that group, then if the one thing you want is, say, a soundcard but the group also contains, say, two GPUs, you must also select those GPUs under the GPU selection part. I saw one of SpaceInvader One's videos where he was passing through a GPU, and he said something like "you must pass through the HDMI sound as well for this GPU, as you can see it is in the same IOMMU group". So he went to the soundcard part of the VM settings and passed through the sound from the GPU as well, then just disabled it inside Device Manager in the Windows VM and left only the actual soundcard he wanted enabled in Device Manager.

    Are you suggesting that the above is not required and that you can selectively just pick what you want? Have I misunderstood?

     

    On 3/11/2021 at 7:05 AM, Dodisbeaver said:

    I have not tested passing through the NVMe yet, but I don't think that will be a problem.

     

    The top NVMe slot in my Asus ROG Strix-E board is in its own IOMMU group, so it should not be an issue.

     

    On 3/11/2021 at 7:05 AM, Dodisbeaver said:

    Unraid is running headless with efifb=off in the system config. IOMMU is enabled in the BIOS, and ACS is enabled in Unraid. I am actually surprised that this worked flawlessly. But it was not without problems; I had a hard time getting the PCI USB controller working in the VM.

     

    I am still reading up on how to pass a GPU through. Seems complex with many different settings and tweaks required. When you say you had ACS set to enabled...which setting? There are 4:

    PCIe ACS override: disabled / downstream / multifunction / both

     

    When I tried each one it separated things out a little more, but even on "both" I think it was limited, because some IOMMU groups were still shared. I will post all the groupings below.

     

    On 3/11/2021 at 5:07 PM, giganode said:

    Also, I don't recommend binding USB controllers to vfio at boot. Just leave them as they are and attach your USB devices as you want.

     

    As above, do you not need to pass through the entire IOMMU group?

     

    On 3/11/2021 at 5:07 PM, giganode said:

    Then take a look at the System Devices page and make sure that all devices needed for VMs are separated from the controller handling the Unraid stick. You may need to switch certain devices to different ports. Finally, add the controller(s) to your VMs and enjoy.

    There should be no problem, as the controllers do support FLR. :)

     

     

    Did you not just contradict yourself? You say not to bind them, but then to bind the entire controller? Which is it? And what is FLR?

     

     

    On 3/12/2021 at 9:27 AM, giganode said:

    Another aspect: if you bind a PCI device at boot it becomes completely unavailable to the system. So whenever a situation occurs where you need it, you have to unbind it and reboot the server.

     

    On boot of what? The VM or the physical bare-metal Unraid server? I'm slightly confused by the above statement. I think you mean that if you bind it to the VM as a passthrough device, it becomes unavailable to Unraid once the VM has booted up. Well... yes, but why would you pass through anything to the VM that you might later want back?

     

    On 3/12/2021 at 9:27 AM, giganode said:

    Biggest difference:

     

    Binding a device at boot binds whatever is in the IOMMU group.

     

    A PCIe passthrough only passes through the chosen device, not the whole IOMMU group.

     

     

    How does one choose to do either one? I see no method to differentiate between them in the VM manager settings or the VM template settings. I just get a drop-down list of different devices I can select, so does this mean I am only doing PCIe passthrough?
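
    To make the question concrete: my understanding is that the boot-time binding described is done outside the VM template entirely, e.g. via syslinux. A sketch using my 3070's vendor:device IDs from the listing I posted as the example (treat the exact IDs as placeholders for whatever you want to bind):

    # /boot/syslinux/syslinux.cfg - stub the RTX 3070 and its HDMI audio function to vfio-pci at boot
    append vfio-pci.ids=10de:2484,10de:228b initrd=/bzroot
    # newer Unraid versions (6.9+) can do the same per device from Tools > System Devices instead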

     

     

     

  12. On 2/9/2021 at 11:04 AM, Dodisbeaver said:

    I would also like to know this. Rocking a B550 with a 3600.


    Update:

    I have now set up an Unraid server with an Asus B550-F Gaming (non-WiFi),

    a Ryzen 3600 (non-X),

    an MSI GTX 1080 Ti,

    16 GB RAM.

    Running beautifully. Got a Windows VM up and running. There are two USB controllers, so it is possible to use VR as well.

     

    I followed the remote gaming on Unraid guide, and everything just worked.

     

     

    When you say it's running beautifully... have you been able to separate the IOMMU groups out so you can selectively pass through what you want individually?

    I.e. what are the chances of running a VM with USB, GPU 1, a SATA SSD off the main board and an M.2 off the main board all passed through, whilst Unraid uses GPU 2, plus SATA HDDs off the main board as well as off an HBA card in a PCIe slot?

  13. 6 hours ago, letrain said:

    My P2000 does it flawlessly. Mining doesn't touch the encode/decode at all. This docker is great for something like the P2000 just sitting there waiting for streams... or Unmanic conversions. So your 3070 shouldn't even break a sweat. The only thing I wish for is, like you said, the ability to adjust clocks. My 2070 would be great, but without underclocking it gets too damn hot, so... a VM it is. It would be nice to use it for transcoding and drop the P2000. Too bad Unraid doesn't have an amdgpu plugin. My RX 580 runs cold no matter what I throw at it.

    Interesting; on the contrary, I have an AMD RX 590 which is a toasty, loud beast in another PC I have. When mining it gets up to 80-85°C on stock clocks. I understand the 590 is actually somewhat different from the 580, despite them seeming very similar; I believe there was an architecture change between them. Here in the UK it mines at approximately £2.50 profit a day at the time of writing.

    When I have monitored my 3070 in the T-Rex web GUI during mining, it never seems to go above 33% fan speed and a temperature of 61°C. It is currently running in my loft though, which is cold. Perhaps I'll try Plex streams at the same time to see what happens.

  14. On 2/19/2021 at 5:18 PM, PTRFRLL said:

     

    Can you try updating the docker to pull the cuda11 tag like so:

    
    
    ptrfrll/nv-docker-trex:cuda11

     

     


     

     

    Thanks! Tried that and all working now. I take it there is no option to adjust clocks or power anyway?

    I find that the hash rate for me is 52 MH/s when mining Ethereum on a 3070. Apparently my card should do 60 MH/s.

     

    Is using it in this way via a docker pretty much the same as bare metal performance? I take it there would be no benefit of running this inside a Windows VM with the GPU passed through instead? I know this defeats the purpose of the docker but I'm just hypothesizing purely on performance hash rate. :)

     

    Nice work.

  15. Well, I simply downloaded the latest T-Rex version listed in CA, and the Docker status claims it is running the "latest" version. See the logs below. Should I be trying to source a newer Docker version of T-Rex from somewhere else then?

     

    My unraid server is running version 6.9.0rc2 and nvidia-smi shows this:

    NVIDIA-SMI 455.45.01    Driver Version: 455.45.01    CUDA Version: 11.1

     

    But when I start trex it shows:

    20210218 23:58:10 T-Rex NVIDIA GPU miner v0.19.11 - [CUDA v10.0 | Linux]
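
    For what it's worth, a quick way to see which driver/CUDA combination the container itself sees (a sketch - "trex" is a placeholder for whatever the container is named in the Docker tab):

    # run nvidia-smi inside the T-Rex container to confirm the driver/CUDA version it is started with
    docker exec trex nvidia-smi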

     

  16. On 2/5/2021 at 9:10 AM, glompy said:

    How do you configure it to leave some GPU for Plex to transcode?

    What did you find happened when Plex tried to use the card for hardware transcoding while it was already in use for mining? Did it refuse to use the GPU at all, or did it crash or lag? What about multiple simultaneous streams?

  17. On 2/5/2021 at 9:25 PM, hk21x said:

    Hey! I'm mining using a GTX 1050 Ti, however the hash rate is incredibly low. Is there a way to check it is being utilised to its full capacity, other than nvidia-smi, as that isn't showing anything specific to T-Rex?

    Did you make any progress with this? I also have a 1050 Ti that I just tried this T-Rex docker out on to mine Ethereum, and found the hash rate to be less than 1 MH/s. This could be normal now, though. If you google "ethereum DAG 4GB mining" you will see that the amount of memory required to mine Ethereum efficiently (or even at all?) has now surpassed 4 GB, so it seems such cards may be relegated to mining altcoins like Ravencoin or Veil whilst still being just about profitable.

  18. I had been using the Dynamix S3 Sleep plugin for a while and it all worked well for me, but recently, having upgraded Unraid, I am finding that when I'm watching Plex the plugin does not detect this and still shuts down. I have now tried turning on waiting for network inactivity as well as array inactivity to make it stricter, but before I did not need this and just had it set to check for array inactivity. Has anything changed in recent Unraid versions that affects this functionality? I basically want it to shut down only from midnight onwards, and only if it detects nothing is being watched.

     

    EDIT: Adjusted the settings and it seems to be working OK now. I turned on the option to wait for network inactivity, but had to set the threshold to more than zero for it to work, i.e. I think it was around 10 KB (the smallest non-zero amount), and then things worked OK.

  19. Just thought I would respond to this topic to clarify some things after I found exactly the same issues as the OP. Many people will want to automatically shut down their server late at night, and then have it power on early in the morning to save power, noise and light. The shutdown part is easy using Unraid plugins and is out of scope for discussion here. The startup can really only be done in two main ways that cover unattended power-on: Wake-on-LAN (WOL) and power-on by Real Time Clock (RTC - the computer's BIOS clock on the motherboard). For most people in home server environments, WOL is not an option since they will not have any other network equipment online that could send the magic packet. This is possible to do over WAN, but let's not get into that discussion here, since this is about using the RTC to power on from a full shutdown.

     

    Let it be understood firstly that this is not a motherboard or BIOS issue. It is expected behaviour for Linux operating systems to set the BIOS Real Time Clock (RTC) back to UTC standard time during shutdown.

     

    Assuming the OP is trying to shut down his server at midnight Australian Eastern Time (AET), this is 14:00 hrs UTC. What he is describing is that when the server shuts down at midnight AET, it sets the BIOS RTC back to UTC, and at the same time the date shifts back a day, since the reset takes it back across midnight. It then comes to 07:00 hrs AET and the server does not start up, because the RTC is now at 21:00 hrs with the date of the day before (since UTC has not yet passed midnight for that day). As a result, to get back in sync the server needs both the time and the date resetting. Depending on the time of day at which the OP performs any shutdown or reboot, this could happen again where the date also shifts, i.e. if it occurs between 00:00:00 AET and 09:59:59 AET.

     

    In any case, even if the OP sets the shutdown to be at 23:59:59 and avoids the date-shifting issue, the BIOS RTC will still be set back to UTC, which will throw off the RTC power-on timing.

     

    So how do we fix the issue? As already suggested above, you simply set your BIOS RTC to the exact current time in UTC. Then you set the startup time in the power-on-by-RTC options to 21:00 hrs. The only "issue" as such is that, yes, your BIOS clock and date will sometimes read as an alien time/date to you. But really this is a set-and-forget option. Once you have this set, Unraid will always show your correct local time for everything, including logs.

    We face a similar issue in the UK, since for six months of every year we have daylight saving time: we are either on BST (British Summer Time), which is UTC+1, or GMT (Greenwich Mean Time), which is essentially the same as UTC (UTC+0). So right now I have to set my server to turn on at 06:00 hrs for it to come on at 07:00 hrs local time. When the clocks change again in October, I will have to set it back to 07:00 hrs.
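
    If you want to sync the BIOS RTC to UTC without going into the BIOS itself, it can also be done from the Unraid console (a sketch; hwclock writes the current system time, which Unraid keeps correct via NTP, straight into the RTC as UTC):

    # compare the hardware clock with the current system time in UTC
    hwclock --show
    date -u
    # write the current system time into the BIOS RTC, stored as UTC
    hwclock --systohc --utc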

     

    I hope this clarifies the situation.

  20. Hi all,

     

    I recently installed LSIO Plex and Nvidia Unraid, both on the latest versions. I've been reading up as best I can on all the info and was wondering if someone can confirm that I'm correct on the points below, as things stand at the time of writing:

     

    1: The official Plex apps/dockers did not specifically support Nvidia hardware decode (hardware encode was enabled and had been working for some time) until it was officially added around June 2019.

     

    2: Before the above happened, there was a workaround which mainly involved a "hack" or wrapper script of some kind that altered the ffmpeg bundled with Plex to flag that it could do Nvidia hardware decoding.

    (I have yet to find the separate thread where this is supposedly discussed and which is often referred to in this very thread, so I have only been able to take snippets from various places.)

     

    3: Soon after (1) above happened, the Plex docker images (including this one) were updated to also support Nvidia hardware decode as well as encode.

     

    4: Therefore, simply installing the latest version of LSIO Plex does work out of the box with Nvidia hardware decoding and encoding, so long as it is combined with Nvidia Unraid to allow passthrough of the Nvidia GPU to the Plex container (I'm assuming this part is required and that it would not work with official Unraid) - see the sketch at the end of this post.

     

    5: Nvidia hardware decode and encode have worked for ages on Windows, hence people have previously worked around the Linux issues by using Windows VMs.

     

    6: Updating to the latest official Unraid releases could break Nvidia hardware decoding, since Nvidia Unraid could break underneath, so it is best to wait for that to be updated first.

     

    Sorry if I got any of the above wrong. 
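
    On point 4, the container settings I mean are roughly the following (a sketch based on the guides I've read - the GPU UUID is a placeholder you would take from the Unraid Nvidia settings page or from nvidia-smi -L):

    # Extra Parameters on the LSIO Plex container template
    --runtime=nvidia
    # container variables added to the template
    NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    NVIDIA_DRIVER_CAPABILITIES=all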

     

     

     
