billington.mark


Posts posted by billington.mark

  1. Hi Guys, I'm late to the SAS party and recently snagged two ST6000NM0014 drives for a very small sum. 
    I noticed the drives wouldn't spin down, so I found this plugin. The drives do now spin down, but the error count in the WebUI for those disks has slowly started to creep up. 
    Manually spinning the drives down in the WebUI also swamps the syslog with IO errors, which ultimately led to me having to rebuild the array from parity. 

    Removing the plugin leaves the drives spun up, but obviously it would be better if these played nice!

     

    System details:

    • Unraid 6.9.1 (downgraded from 6.9.2 due to the other spin down issue)
    • LSI SAS2008 controller, with 2x SAS ST6000NM0014's, and the rest are SATA (which spin down fine)

    The OP mentions that the issue can be caused by a particular combination of controller/disk, rather than by the non-standard implementation of power management across different brands, but the thread seems to lean heavily towards the latter? 
     

    I guess my main questions are:

    • Should I hang onto my ST6000NM0014s?
    • Is there a reason the Constellation ES.3 is currently commented out (#'d) of the exclusion list if it's still misbehaving?
    • Would things be better with a different SAS controller?
    • Can the OP be updated with a list of SAS drives that are known to play nice with this plugin?
    • Is this being addressed at a core Unraid OS level for 6.10?

    Apologies if these have been answered already, but the last few pages of this thread are hard to follow with the 6.9.2 issue added to the mix!

  2. I upgraded to this mATX X570 motherboard, which has 8x SATA ports and 2x NVMe slots. I had the H310 to make up the shortfall on a B450 board I had previously, and after the upgrade it was no longer needed. I run it with a Ryzen APU and it happily does everything I need it to, without the need for any expansion cards or a separate GPU:

     

    X570M Pro4

  3. As it says on the tin. Also includes the cables to connect 8 HDDs.

    Recently upgraded and no longer have a use for this card. It was flashed a while ago to be used as an HBA and has had 8 drives running flawlessly.

     

    £25 inc Postage

    Payment via PayPal

     

    Will be putting this on eBay in a week or so if there's no interest on here. 

     

    Sold

  4. Stuff to try:
     

    • Set up the VM using Q35 instead of i440fx. The virtual PCIe lanes are set up quite differently, which could yield better results (this will most likely need an OS reinstall inside the VM). See the sketch after this list.
    • Windows not booting/installing: set the VM to have only 1 CPU, install/boot as normal, do a graceful shutdown, then update the VM config to the desired CPU setup. This also fixes things if the VM misbehaves after Windows feature updates. 
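
    As a rough sketch, assuming the VM is defined via libvirt XML (as Unraid VMs are), the machine type lives in the <os> block of the VM's XML; the exact machine version string will depend on the QEMU build your Unraid release ships, and in the VM GUI this corresponds to the Machine dropdown:

    <os>
        <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
    </os>

    The i440fx equivalent would be something like machine='pc-i440fx-3.1'.
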
  5. Sorry, I've not been online here in a while and have only just seen that I've been tagged in a few posts. 

    I no longer use a VM for my workstation for various reasons which are off topic!

    (Also, I can't remember all the commands, nor am I fully up to speed with best practice for stubbing and passing devices through in 2020, so I'm happy to be corrected if any of this advice is out of date!)
    I know there's a new way of passing devices to VMs nowadays, but the premise that you have to pass all devices in an IOMMU group will still apply. 

     

    You need to stub all the devices associated with the card, so 2c:00.0 (GPU), 2c:00.0 (Audio), 2c:00.0 (USB), 2c:00.0 (Serial).

    The GPU and Audio are probably stubbed by Unraid in the background automagically (so they'll not have a driver assigned on boot), which is why they're selectable. The Serial and USB won't have been, so Unraid will be assigning a driver and grabbing the device as its own. 

    Stubbing then rebooting should be enough for the USB and Serial devices to show up in the VM config GUI for you to select. After that, they should pass through and just work. 

     

    If they're not selectable, you need to check whether a driver has been applied to the device (lspci should tell you). I think the vfio-pci driver is bound to stubbed devices. If anything else has grabbed it, you've not stubbed it properly.

    If they do have the vfio-pci driver bound, it's probably just a GUI limitation. You'll need to go into the XML, find where the GPU and Audio devices are assigned, and duplicate what's been set up automatically for the GPU and Audio for the USB and Serial devices.
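
    A rough sketch of what those duplicated entries look like. The function numbers below (0x2 for USB, 0x3 for Serial) are illustrative only, so substitute the addresses lspci reports for your card:

    <hostdev mode='subsystem' type='pci' managed='yes'>
        <source>
            <!-- USB function of the card (hypothetical address) -->
            <address domain='0x0000' bus='0x2c' slot='0x00' function='0x2'/>
        </source>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
        <source>
            <!-- Serial function of the card (hypothetical address) -->
            <address domain='0x0000' bus='0x2c' slot='0x00' function='0x3'/>
        </source>
    </hostdev>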

     

    Hopefully that's enough to work with. 

  6. I had this a while ago and couldn't get anywhere near bare-metal performance. 

     

    You can get quite complex with this and start looking into IOthread pinning to help, but there's only so much you can do with a virtual disk controller vs a hardware one. 

    You'll probably notice a bit of a difference if you use the emulatorpin option to take that workload off CPU 0 (which will be competing with Unraid housekeeping). If you have a few cores to spare, give it a hyperthreaded pair (one that you're not already using in the VM). 
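
    A minimal sketch of the relevant XML, assuming cores 2 and 14 are a spare hyperthreaded pair on your system (pick a pair that suits your own topology); emulatorpin sits in the same <cputune> block as any vcpupin entries:

    <cputune>
        <!-- pin the QEMU emulator threads away from CPU 0 and the VM's own vCPUs -->
        <emulatorpin cpuset='2,14'/>
    </cputune>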


    In the end I got a PCIe riser card for an NVMe drive and passed that through to the VM. You get about 90% of the way there performance-wise compared to bare metal, as the controller is part of the NVMe drive itself.

     

    Good luck. 

  7. I have read reports in the Plex forums that the Vega iGPU can be used for transcoding, but it's quite a niche situation to have a Plex server AND one of the very few CPUs with a Vega iGPU, so reports are few and far between. 

    However, to be in a position to test, and to be able to talk with people in the Plex forum about how to get it working, I need to be able to expose it to my Docker container!

    Also, like you've said, it's not just Plex which could make use of the iGPU.

  8. Please could the drivers for the Ryzen APUs be added? I believe the prerequisite kernel version is 4.15, and we're on 4.19.33 in the latest RC.

     

    It was mentioned here by @eschultz, but I've never seen any mention of it being implemented, or, if it has been, how to enable it:

     

    I'd like to use the GPU to aid transcoding in my Plex Docker (which, while undocumented on the Plex side, does apparently work).

     

    Even if it wasn't enabled by default and required adding boot code(s) in syslinux, or a modprobe command in the go file, I'd be happy!

    Or even if there were documentation somewhere on creating a custom kernel with the driver enabled? 

     

    The 2400G is a little workhorse, and adding GPU transcoding would make it pretty amazing!

  9. Having an upgrade so I'm selling my dual Xeon setup...

    All items are working great and haven't been overclocked.

     

    I will post internationally, but please bear in mind that international postage from the UK is expensive!

     

    Motherboard and CPUs (sold as a bundle as I don't have the CPU socket protectors):

    ASRock Rack EP2C602-4L/D16.

    https://www.asrockrack.com/general/productdetail.asp?Model=EP2C602-4L/D16#Specifications

    2x Xeon E5-2670

    £340 posted

    SOLD

     

    Memory (new prices as of 09/05/19):

    32GB DDR3 PC3-12800R

    8x M393B5170GB0-CK0 4GB PC3-12800R

    £60

     

    32GB DDR3 PC3-8500R

    8X M393B5170EH1-CF8 4GB PC3-8500R

    £75

     

    All 64GB for £130

     

    These are also for sale on eBay, listed here for slightly cheaper though. 

    All payments through PayPal.

  10. QEMU 4.0 RC0 has been released - https://www.qemu.org/download/#source

    And a nice specific mention in the changelog to things discussed in this thread (https://wiki.qemu.org/ChangeLog/4.0):

     

    Quote

    Generic PCIe root port link speed and width enhancements: Starting with the Q35 QEMU 4.0 machine type, generic pcie-root-port will default to the maximum PCIe link speed (16GT/s) and width (x32) provided by the PCIe 4.0 specification. Experimental options x-speed= and x-width= are provided for custom tuning, but it is expected that the default over-provisioning of bandwidth is optimal for the vast majority of use cases. Previous machine versions and ioh3420 root ports will continue to default to 2.5GT/x1 links.

     

    Now that these changes are standard with the Q35 machine type in 4.0, I think this could also be an additional argument against potentially forcing Windows-based VMs to the i440fx machine type, if this brings things into performance parity?
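
    For anyone who wants to experiment once a build with QEMU 4.0 lands, a sketch of how those experimental options could be passed via a <qemu:commandline> block. The x-speed/x-width property names are taken from the changelog quote above and the values are illustrative; as with any qemu:commandline block, the qemu XML namespace needs to be declared on the <domain> element:

    <qemu:commandline>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.x-speed=16'/>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.x-width=32'/>
    </qemu:commandline>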

     

    If @limetech could throw this into the next RC for people to test out, that would be much appreciated!

  11. On 2/28/2019 at 9:57 AM, bastl said:

    I saw a post from a Unraid user over in the original forum with the dev for this patch asking why to use Q35 or why not (and no it wasn't me)

    It was me :)

     

    I think the current behaviour in the UI is perfect: pick an OS, and the sensible, least-hassle settings are there for you to use. I don't think the options to change the machine type should be removed. At worst, they could be hidden behind an "advanced" switch (which I think currently flips between the form and the XML), with another tab to view the XML instead?...

    I know there's a balance to be found to accommodate all levels of Unraid users here, and I don't envy the UI decisions needed to try and keep everyone happy!

     

    It's worth pointing out that it's documented that the drivers DO behave differently based on what PCIe link speed they detect, and personally I get better performance numbers and prefer running a Q35-based VM...

     

    I think the long-term fix for this is either to allow the option to run modules such as QEMU, libvirt and Docker from the master branch, and allow them to be updated independently of the OS, or to have "bleeding edge" builds where these modules are compiled from master. Easier for me to say than it is to implement, though. 

     

  12. @jonp

    I've been under the impression for a long time that you needed the Q35 machine type to take advantage of the latency and performance improvements in QEMU. 

    All the development I've seen, and the tips to improve performance, seem to centre around using the Q35 machine type. 

    At the end of the day, I want to get as close to bare-metal performance as possible; that's my aim. I'm in no way preaching that we should all move to Q35. 

    Now I have my own performance numbers pre- and post-patch, I'll happily test the i440fx machine type too. 

     

    I've also posted this over in the Level1Techs forum to ask them the same question, seeing as it's them who pushed for the development on the Q35 machine type to get these PCIe fixes in the first place. 

     

    As for removing the option in the GUI for Q35 for Windows... I think it would be more appropriate to show a warning if Q35 was selected, as opposed to removing the ability to choose it altogether. 

  13. Yep, it looks like it's fixed the driver crippling memory scaling (in Windows, anyway). I'm seeing a 5-10% increase in GPU benchmarks after updating to RC4. :) 

    I was hoping for more, but it looks like my bottleneck is my aging CPUs now (2x E5-2670)! 

     

    I've been meaning to put my hand in my pocket and upgrade to a Threadripper build for a while now... I'm very interested to see what performance gains you guys are getting after this patch...

     

    Thank you @limetech :)

  14. The original topic of this post was to highlight a particular problem I was having (and still am), but the main underlying point here is that over the last couple of years, development on QEMU, the introduction of new hardware from AMD, and the general love for virtualisation on workstation hardware have meant development in this space is moving at quite a pace. 

    Short term, a build which includes virtualisation modules from master would make a lot of people happy, but the same demand is inevitably going to come up when 3rd-gen Ryzen, 3rd-gen Threadripper, PCIe 4.0, PCIe 5.0, etc., drop in the coming months. 

     

    Personally, I think the long-term holy grail here is the ability to choose which branch we run key modules like QEMU, libvirt and Docker from... then be able to update and get the latest patches/performance improvements independently of an Unraid release. 

     

    Short term though... a build to keep us all quiet would be lovely :)

  15. 3 hours ago, Tritech said:

    Yea, I didn't grasp the concept that the initial post was making about creating a pci root bus and assigning it vs a card. The more recent activity there does seem like that the bulk of improvements should come with QEMU updates...whenever we get those.

     

    The guy I got it from said that the last lines in his xml we for a patched QEMU.

     

    I was also recommend "hugepages", but after a cursory search it seems that unraid enabled that by default. Couldn't get a vm to load with it enabled.

    
    <qemu:commandline>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.speed=8'/>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.width=16'/>
    </qemu:commandline>

     

     

    I've been pushing for the changes detailed in that Level1Techs forum post for a while... 

    https://forums.unraid.net/topic/77499-qemu-pcie-root-port-patch/

     

    Feel free to post in there to push the issue... The next stable release of QEMU doesn't look like it's coming until April/May: https://wiki.qemu.org/Planning/4.0. So fingers crossed there's an Unraid release offering it soon after.

     

    The alternative is for the @limetech guys to be nice to us and include QEMU from the master branch, rather than from a stable release, in the next RC...

    Considering how many issues it would fix around Threadripper, as well as the PCIe passthrough performance increases, it would make A LOT of people happy...

     

     
