testdasi

Members
  • Content Count: 2218
  • Joined
  • Last visited
  • Days Won: 8

testdasi last won the day on March 27

testdasi had the most liked content!

Community Reputation: 311 (Very Good)

4 Followers

About testdasi
  • Gender: Undisclosed

Recent Profile Visitors: 2132 profile views
  1. Did you install the driver using the download from the Nvidia website, or did you use the default Windows driver? I think it's the default Windows driver, which might be why you can get a display without the sound device. It's still rather peculiar that it would even work, though. Anyway, with regards to your xml, you will probably need a vbios to get both the GPU and HDMI audio to work (with the proper Nvidia driver), since it's the only GPU in your system. SpaceInvader One has a tutorial on how to get a vbios, so perhaps watch that.
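     For reference, once you have a vbios file, it is attached via a rom entry inside the GPU's hostdev block in the VM xml. A minimal sketch, assuming a hypothetical dump saved at /mnt/user/isos/vbios/gpu.rom and a GPU at 01:00.0 (adjust both to your system):

         <hostdev mode='subsystem' type='pci' managed='yes'>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
           </source>
           <rom file='/mnt/user/isos/vbios/gpu.rom'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
         </hostdev>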
  2. With 6 drives, dual parity is overkill based on Backblaze HDD failure stats. You would have to be super risk-averse and/or store super important data on the array to require that level of protection. I haven't heard of an RC 580 graphics card; I think you meant the AMD RX 580? If that's the case, the very first thing you need to do is try booting Unraid with the mobo's integrated ASPEED GPU. The only success stories of passing through an RX 580 on here involve booting Unraid with another GPU. Since both SSDs are SATA, using either of them exclusively for a VM (i.e. passing it through using the ata-id method) will not yield much (if any) perceivable difference compared to putting a vdisk on the drive. With a vdisk (and the appropriate config), Windows will detect the vdisk correctly as thin-provisioned and thus enable trim. An ata-id passed-through device doesn't support trim, at least as far as I managed to get it working in the past. There is also the scsi-bus pass-through method, which may (or may not) enable trim, but I never got around to trying it while I still had a spare SATA SSD. So if you really want to give the VM exclusive use of an SSD, you are better off with Option 1 - that is, assuming that, being an enterprise MLC SSD, the S3500 runs fine without trim. See the sketch below for what the scsi-bus method would look like.
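     A hedged sketch of that scsi-bus method: the drive is handed to the VM as a raw block device on a scsi bus with discard='unmap', which is the part that would let Windows pass trim through to the SSD. The drive id below is a placeholder, and since I haven't tested this myself, treat it as a sketch rather than a confirmed recipe:

         <disk type='block' device='disk'>
           <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
           <source dev='/dev/disk/by-id/ata-INTEL_SSDSC2BB480G4_XXXXXXXX'/>
           <target dev='sdb' bus='scsi'/>
         </disk>

     The VM also needs a virtio-scsi controller for the scsi bus to exist.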
  3. Create a Windows VM and mount those files in the VM. I am not aware of any way to directly mount a vhdx in Unraid.
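     If you'd rather end up with a native vdisk, qemu-img (already on Unraid) can convert a vhdx into a raw image that the VM can then use directly. A sketch with hypothetical paths:

         qemu-img convert -p -f vhdx -O raw /mnt/user/isos/old-disk.vhdx /mnt/user/domains/old-disk.img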
  4. As a starting point, attach your Diagnostics (Tools -> Diagnostics -> attach zip file). Then copy-paste the PCI Devices section from Tools -> System Devices + attach your current xml. When copy-pasting from Unraid, use the forum code functionality (the </> button next to the smiley button) so the code is formatted correctly. There's no need to manually section things out like in your previous post, since the code blocks will make it obvious. To be honest, this is a rather peculiar case. I have a GT710 and it would not work without the HDMI Audio device passed through. In the VM that works with the GPU but without HDMI Audio, what driver do you use?
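     If you prefer the command line, the same PCI list can be pulled from the Unraid terminal; this standard command lists every PCI device along with the kernel driver bound to it:

         lspci -nnk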
  5. Spontaneous downgrade is NOT possible. You might want to think very carefully about what happened because if you can eliminate all possibilities of user error then there is only one possibility left - you were hacked.
  6. One caveat I forgot to mention: I was talking in terms of the Unraid core experience. If you plan to ever use ZFS (e.g. via the ZFS plugin "app", which is not officially supported by Unraid, i.e. outside of the Unraid core experience), then ECC is considered a must-have requirement. I do agree with your point above. When picking between spending on a backup vs ECC RAM (for Unraid uses), I would pick the backup every time.
  7. The point about ECC RAM being slower than non-ECC RAM is not relevant, since you shouldn't be overclocking RAM on Unraid anyway. Those advertised high speeds and fast timings are certified overclocks, but overclocks nonetheless. ECC won't prevent system crashes or data corruption in the sense that, if you use it, those things will never happen. It only protects you against crashes / corruption in one very specific case: a single-bit error. Whether that matters or not (vs the cost of ECC RAM) is entirely personal preference. I have run both ECC and non-ECC, and my personal anecdotal experience is that it makes no perceivable difference in terms of stability or data corruption.
  8. Removing it from syslinux will only stop it from appearing in the Other PCI Devices section of the VM template. Your existing VM config would still have it, and thus when you start the VM, it would still be passed through. This is apparent in your latest xml diff, which still contains the config to pass through 03:00.0. If you are comfortable with deleting that xml section, then do it manually via xml edit (the block to remove looks like the sketch below). Otherwise, I would suggest starting a new template from scratch (and perhaps copy-paste the xml here as well, so everyone is on the same page with regards to which VM you are testing).
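     For reference, the section to delete is the hostdev block whose source address points at 03:00.0; it should look roughly like this (the second, guest-side address line will differ on your system):

         <hostdev mode='subsystem' type='pci' managed='yes'>
           <source>
             <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
         </hostdev>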
  9. Unraid freezes when you pass through 28:00.3 probably because the Unraid USB stick is on it. I provided instructions on how to identify which port is on which controller, which I don't think you have followed (see the sketch below for one way to do it). Your problem is kinda unusual, so logically the cause should be something uncommon. And the one thing that is uncommon about your config is that you passed through the USB 3.1 controller, which nobody has ever successfully done - at least none that I have seen on here. 100% load on 1 core and being stuck on the Tianocore screen CAN be symptoms of the controller failing to handshake with the USB devices, and a non-resetting / problematic controller will fail to handshake with USB devices. So 03:00.0 is the primary candidate for your issue, but your refusal to remove it from the config is not helping at all. Remove it from your config and see if the problem still occurs.
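     A sketch of one way to trace the flash drive back to its controller from the Unraid terminal (device names will differ on your system):

         # find which device node the Unraid flash drive is
         ls -l /dev/disk/by-label/UNRAID
         # print its sysfs path - the controller's pci address (e.g. 0000:28:00.3) appears near the start
         udevadm info -q path -n /dev/sda
         # overview of what hangs off each usb bus / port
         lsusb -t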
  10. Welcome to Unraid. Good job with the cable there - very neat. A few unsolicited tips for you:
      • Ask yourself whether the data in the cache pool actually needs RAID-1. If you are not storing any important data on there, you can make your SSDs last even longer by separating write-heavy and read-heavy data onto 2 SSDs instead. The write-heavy SSD (e.g. for download temps) should be almost empty most of the time, except when data is being written to it. That will minimise write amplification and wear, and thus extend your SSD's lifespan. Alternatively, the next time you get a new SSD (which presumably will be larger than 128GB - they don't make that size anymore), add it as a single-drive cache pool instead and just copy the data over from the old cache pool (see the sketch below). Then use the old 128GB for write-heavy stuff. With the right treatment, SSDs last forever (I still have a 128GB SATA II (yes, TWO) drive that does not have even a single reallocated sector despite going way past its rated endurance).
      • Aim to get fewer large-capacity drives instead of more smaller ones. The more drives you have, the more likely you are to have a failed drive. And make sure to preclear each drive before adding it to the array, to weed out the "infant mortality".
      • Are you running Unraid Nvidia? If you want to use the 1660 (or 970) outside of a VM for Plex, you will need the Unraid Nvidia build. Read up on it (there's a forum topic) before jumping in. Alternatively, you can pass the GPU through to a VM (with an OS that supports Nvidia, e.g. Windows) and run Plex in there. There's a lot of overhead with this + passing through a GPU isn't the easiest thing to do, but it's one alternative to consider.
      • Change your CPU cooler fan config. It's better to set it to push air onto the cooler and then (pulled) out through the rear exhaust fan. That way you create a push-pull config that works a bit better than the double pull you appear to have right now.
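      On copying the data over to a new single-drive pool, a minimal sketch (the pool name "newcache" is hypothetical - check your actual mount points under /mnt, and stop Docker / VMs first so nothing is writing to the pools):

          rsync -avhP /mnt/cache/ /mnt/newcache/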
  11. What do you expect to be your parity drive? At boot, it looks like only 3 HDDs were loaded: 2x 4TB Seagate and 1x 1TB WD.
  12. (Assuming both have DRAM and are 3D TLC,) an M.2 NVMe SSD is ALWAYS faster than a SATA SSD. It's less about advantage and more about noticeability. You will tend to notice it more often if your workload is one where NVMe matters, e.g.:
      • High IO load (especially simultaneous / parallel IO)
      • Large sequential transfers
      • Random reads
      • Random writes in rapid succession (infrequent random writes are cached in RAM and thus are always super fast)
      One thing nobody seems to mention when talking about SATA vs NVMe is that NVMe is built around parallelism. The biggest implication is that under heavy IO, an NVMe drive is less likely to freeze your system (due to high IO wait) - the sketch below shows how to test that for yourself. In terms of compatibility, those mobo compatibility lists are never updated, and thus will never include any device that came out after the list was published. Theoretically, any normal NVMe M.2 drive will be compatible with any PCIe M.2 slot. What would be an "abnormal" M.2? The Intel H10, for example, requires special bifurcation of an x4 link into x2/x2, which basically nobody supports, not even most of Intel's own chipsets out there. The only compatibility point you really have to pay attention to is when you want to pass the NVMe drive through to a VM as a PCIe device. You then need to pay attention to the controller, since some just outright won't work (e.g. Intel 660p) and some require a special workaround with limitations (e.g. the SM2263 controller will not work with more than 15 cores). Note that even in those cases, you can still use the NVMe drive as a storage device (e.g. put a vdisk on it) and it will still perform better than a SATA SSD.
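      If you want to see the parallelism point for yourself, a quick fio sketch (run it against a scratch file on the SSD, not against data you care about; the path is hypothetical):

          # 4k random reads at queue depth 32 - the kind of parallel load where NVMe pulls ahead of SATA
          fio --name=randread --filename=/mnt/cache/fio.test --size=1G \
              --rw=randread --bs=4k --iodepth=32 --ioengine=libaio \
              --direct=1 --runtime=30 --time_based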
  13. No, it doesn't - sort of. You are trying to generalise, which is highly imprecise when it comes to GPU pass-through. Both brands have their problems:
      • All Nvidia GTX / RTX graphics cards can show error code 43 if the Nvidia driver detects that it is being used in a VM (that's how Nvidia tries to force users to fork out more money for a Quadro). Error code 43 is a generic "it's not working" error, so it muddles the situation: you don't know whether you configured things incorrectly, or the card has failed, or it's the aforementioned artificial error, etc.
      • All AMD graphics cards can have reset issues, which make them impossible to pass through and/or require the whole server to be rebooted in order to reboot the VM. This is particularly prevalent with the RX 500 series, Vega and Navi (i.e. all the recent AMD GPUs).
      Resolutions and workarounds centre around these common fixes:
      • Pass through all the devices of the graphics card together. E.g. an RTX card has FOUR devices (GPU + HDMI Audio + 2 USB devices); a typical other graphics card will have TWO devices (GPU + HDMI Audio). Missing one of these is one of the most frequently made new-user errors.
      • Have a GPU for Unraid to boot with - it ALWAYS HELPS, even more so than a vbios! This is why I previously recommended you consider an Intel offering with an iGPU, especially for an ITX build, for the reason I already mentioned. The only success stories with the AMD RX 500 series on here involved booting Unraid with another GPU.
      • Use a vbios. It can help both AMD and Nvidia, but it's easy for new users to download / dump / edit one incorrectly.
      • For AMD Vega and Navi, pass-through is basically impossible without the vega / navi reset patches. These patches are not included in Unraid by default because they mess up other, otherwise working cards, so the only way to get them is to compile a custom kernel. There's a forum member who compiled 6.8.3 with all the patches if you don't know how to compile them yourself.
      • Miscellaneous fixes, e.g. the Hyper-V and KVM xml tags to deal with error code 43 (see the sketch below), or booting Unraid in legacy mode.
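      For reference, those error code 43 xml tags live in the <features> block of the VM xml; a minimal sketch (the vendor_id value is an arbitrary string of up to 12 characters):

          <features>
            <hyperv>
              <vendor_id state='on' value='0123456789ab'/>
            </hyperv>
            <kvm>
              <hidden state='on'/>
            </kvm>
          </features>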
  14. @MMChris: re-read my post from Monday about separating write-heavy and read-heavy data with a single-drive cache + an unassigned SSD. It is particularly useful if you are going to use the server for torrents / any other write-heavy activities. Then read up on the below about the still-ongoing performance issue with btrfs multi-drive cache pools, and ask yourself how important mirror protection is (as opposed to having a backup instead).