Everything posted by testdasi

  1. I posted a correction to the other topic too. Thanks for flagging it.
  2. The misinformation in this topic has apparently led to misunderstandings more than a year later, so let me post a correction. PSA: the presence of a bridge in the same IOMMU group as the GPU (e.g. what the OP has) can be ignored for the purpose of PCIe pass-through.
  3. What kind of video editing are you doing? 4K? 1080p? H.264 / H.265? ProRes? How are your games stored? Steam library? Battle.net library? No library? What kind of streaming are you doing? Streaming your gaming on Twitch or streaming media to the rest of the household? What kind of "image automation"? I have not heard of this use case before. What needs to be "watched" in the "watch folders"?
  4. The 4x2TB HDD should be in the array (1 parity + 3 data) to give you 6TB of general-purpose shared network storage. With regards to the SSD's, there are many ways to set them up depending on what exactly you are after (e.g. capacity vs speed vs failure protection vs VM requirement etc.). You mentioned needing 10+ VM's, but few users really need that many VM's (e.g. there are few reasons why a video editing VM can't also be used for playing games). So I would suggest you write down each VM and its use case + hardware needs (e.g. storage, GPU etc.) + any shared hardware. That will give us a better idea as to what kind of storage config is optimal (+ chances are things will need to be simplified).
  5. For PCIe pass-through purposes, you can ignore the bridge's existence. So in your case, group 1 effectively only has your GPU (and the associated HDMI audio), so it can be passed through fine.
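     If you want to double-check what actually sits in each group, a minimal sketch using the standard sysfs layout (nothing Unraid-specific assumed; run it from the server terminal):

     ```bash
     #!/bin/bash
     # List every IOMMU group and the PCI devices it contains.
     # Bridge entries can be ignored for pass-through purposes.
     for group in /sys/kernel/iommu_groups/*; do
         echo "IOMMU group ${group##*/}:"
         for dev in "$group"/devices/*; do
             lspci -nns "${dev##*/}"
         done
     done
     ```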
  6. Short answer is "highly likely NOT", but first I need to correct a misconception found in the replies here. Nvidia does not "actively disable VM passthrough" (here I'm talking about GTX / RTX cards). The right word is "discourage": if the Nvidia driver detects that it is running in a virtualised environment, it will error out with code 43. Preferring AMD over Nvidia because of error code 43 is like preferring to stand under a coconut tree instead of a durian tree for fear of having something drop on your head. AMD cards are not problem free - they are notorious for the reset issue. The bottom line is you shouldn't pick a brand but rather a specific model. There are many users on here who have successfully passed through Nvidia cards and AMD cards, so you might want to look around the forum for success stories and use the exact same model. To reduce the chance of getting error 43 with Nvidia cards, you can follow these 3 tips:
     • Boot Unraid in legacy mode (i.e. do NOT boot Unraid in UEFI).
     • Do not use the to-be-passed-through GPU as your primary (i.e. what Unraid boots with). For example, you can use the onboard GPU as primary or buy a cheapo one to use as primary (assuming your mobo BIOS allows you to pick which slot to boot with, e.g. Gigabyte X399 mobos).
     • Dump your own vbios specific to your actual GPU (see the SpaceInvaderOne guide on Youtube for how to do it).
     You don't have to follow all 3 right off the bat, but chances are if you hit error code 43 you will end up having to do them anyway. Now as to why I say "highly likely NOT" for your specific questions: you didn't say what your exact spec is (and whether it supports IOMMU, virtualisation etc.). Onboard GPU passthrough varies from hard to impossible; there's no easy way. So at the minimum, you are likely to need a 3rd GPU, even a cheapo one, for the audio editing workstation - assuming you want to run 3 VMs simultaneously. Last but not least, mixing brands is fine. If a card can be passed through in its current slot, it doesn't care what other cards are in the other slots on the mobo.
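     On the first tip, a quick way to confirm which mode the server actually booted in (a standard Linux check, not an Unraid-specific tool):

     ```bash
     # If /sys/firmware/efi exists, the kernel was booted via UEFI;
     # otherwise it was a legacy (CSM) boot.
     if [ -d /sys/firmware/efi ]; then
         echo "Booted in UEFI mode"
     else
         echo "Booted in legacy mode"
     fi
     ```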
  7. Perhaps my British sarcasm didn't come through as well as I thought. You asked a generic question with little detail and hence I replied with a generic answer with little detail. What model is your mobo? How old is it? Did you turn on IOMMU in the BIOS? What CPU are you looking to use? What is your budget? What other hardware do you have?
  8. Define "graphics performance". Did you run a benchmark? For the audio issue - search for msi_util on the forum and find the SpaceInvaderOne post in which he attached the tool.
  9. Typically the primary GPU slot is in its own IOMMU group. An ITX mobo has so many unused lanes that it would take effort to have something share the same group as the primary GPU slot, which is the only slot anyway.
  10. What about just plain old USB? A Thunderbolt external enclosure is a pretty niche (and expensive) solution. Few devices, especially external storage devices, can saturate 10Gbps (the current USB top speed), let alone TB3's 40Gbps. TB3 also cuts off AMD users. USB 4 - coming in the next few years - will incorporate the Thunderbolt 3 spec, so any effort to incorporate TB3 will be immediately wasted once a USB 4 driver is out for Linux - and given the prevalence of USB devices, that would be the top priority.
  11. I think you have a misunderstanding of the role of TRIM. TRIM allows the OS to tell the SSD which blocks are effectively empty, i.e. the data on them can be ignored. This allows the SSD, when writing to such blocks, to write straight away instead of needing to read what's on them first. This is how TRIM helps maintain the SSD's performance over time. On its own, the TRIM command does nothing to affect wear levelling, endurance or parity calculation.
     What affects parity calculation is garbage collection, which is essentially SSD defrag. Theoretically, the OS isn't aware of how data is moved during garbage collection, so any data that is moved would render the parity invalid (for those blocks). As per what johnnie observed, it strangely only seems to affect a small number of SSD's in practice (while theoretically it should affect all SSD's). That can be explained away if physical and logical block addresses are maintained independently. Simplistic example: John, Jack and Jim live in house number 1, 2, 3 respectively. They are forced to change house (change of physical address) so now live in house number 2, 3, 1 respectively. However, if you don't look for house numbers but rather the signs in front of the houses to find John's house (same logical address), then it doesn't matter. It could also be that the SSD actually tells the OS what data has been moved where, and thus parity was recalculated in the background, and few SSD's actually do this silently. Or it could be that GC just doesn't run on the majority of johnnie's SSD's - kinda unlikely, I would assume.
     With TRIM, the SSD knows which blocks are empty and thus doesn't do garbage collection on those blocks. That helps with wear levelling (and thus improves endurance). Using the simplistic example above, it is like being told John's house is empty (or house number 1 is empty), so you don't have to go look for John to move house.
     So no, SSD's do NOT need to be TRIM'ed periodically, and TRIM, on its own, does not invalidate parity. Depending on the exact code, it could be that TRIM is used to trigger GC (a kinda unlikely scenario in my mind, but possible), in which case, under the silent-GC case, TRIM does invalidate parity - but that's a very specific case, not a generalised statement.
     I believe the reason why Unraid has never officially supported an SSD array is because LT has not done any major testing to assess the scale of any (potential) issue. It would be highly irresponsible for LT to rely only on a single user's anecdotal story. "Trust, but verify" so to speak. However, because of the above points, I would expect an SSD array to always come with a caveat that some SSD's can cause parity errors.
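     For reference, issuing TRIM manually against a mounted filesystem on Linux is a one-liner (the /mnt/cache mount point is just an assumed example; the filesystem and controller must support discard):

     ```bash
     # -v reports how many bytes were discarded on that filesystem
     fstrim -v /mnt/cache
     ```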
  12. As always, please attach diagnostics (Tools -> Diagnostics -> attach zip file). In your case, also attach the xml of the VM (if copy-pasting to post, please use the code function - the </> button next to the smiley button).
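     If it's easier from the command line, the VM xml can also be dumped with virsh (the VM name and output path below are placeholders, not anything from the OP's setup):

     ```bash
     # Print the current libvirt definition of the VM and save it to the flash drive
     virsh dumpxml "Windows 10" > /boot/myvm.xml
     ```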
  13. Can you redo the VM speedtest on speedtest.net instead? Different sites are not apples to apples. Also, are you running any proxy / VPN?
  14. It depends on how the files are moved:
     • If done via the network (e.g. what I would assume you meant by "SMB GUI") then the file will be moved from disk1 to disk2. This has nothing to do with how smart Unraid is (see below); rather, moving a file through the network between shares creates a new copy of the file and then deletes the old copy, so it will inevitably move the file from disk1 to disk2.
     • If done via the command line then you will end up with a Share1 folder on disk2.
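     To illustrate the command-line case with made-up paths (Share1 here is just a hypothetical share name):

     ```bash
     # Moving directly between disk mounts bypasses the user share logic,
     # so the folder is relocated as-is and Share1 now exists on disk2.
     mv /mnt/disk1/Share1 /mnt/disk2/
     ```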
  15. Yes, as long as you still use the rclone plugin. Because Unraid is loaded into RAM, anything that is not built into Unraid will need to be reinstalled (and if required re-downloaded) at boot. I think there's a folder in which any downloaded package is installed at boot (look on Youtube for the lstopo video by SpaceInvaderOne), so theoretically you can download rclone (+ any prerequisites), put it in there, uninstall the rclone plugin and your rclone should still work from boot without needing the download. Advanced users only, I guess.
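     A rough sketch of one way to persist the binary yourself via the go script on the flash drive (the folder name and install path are assumptions - check where your plugin actually put the binary, and note this is not how the plugin itself does it):

     ```bash
     # Keep a copy of the currently installed rclone binary on the flash drive
     mkdir -p /boot/extra-bin
     cp "$(which rclone)" /boot/extra-bin/rclone

     # Restore it on every boot by appending a line to the Unraid go script
     echo 'install -m 755 /boot/extra-bin/rclone /usr/sbin/rclone' >> /boot/config/go
     ```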
  16. While I get you and would like the same, I don't think it will ever happen (without LT hiring the LS folks on a perm basis). There just aren't enough people.
  17. Can you install the Speedtest plugin and test the Unraid server itself? And then immediately test the VM too (using the speedtest website)? Need to isolate if this is a VM issue or if it's a server issue.
  18. Yes please. In case you haven't heard, there's a bug in the GUI that prevents changing the HyperV setting, so you might want to create a new template.
  19. +1 on this. It's pretty easy to bring down an Unraid server DDoS-style by spamming SSH attempts. I accidentally discovered this when testing out some Putty scripts.
  20. Try using the full (absolute) path instead of a relative path. e.g. if the file is /mnt/cache/something/file.ext then you should copy /mnt/cache/something/file.ext instead of mnt/cache/something/file.ext, even if your current directory is /. Also, quote the paths if there are spaces.
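     A quick illustration with made-up file and destination names:

     ```bash
     # Absolute path - resolves the same regardless of the current directory
     cp /mnt/cache/something/file.ext /mnt/disk1/backup/

     # Relative path - only resolves correctly if you happen to be sitting in /
     cp mnt/cache/something/file.ext /mnt/disk1/backup/

     # Quote any path that contains spaces
     cp "/mnt/cache/some folder/file.ext" "/mnt/disk1/backup folder/"
     ```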
  21. Copying directly to the mountpoint does not necessarily mean the file will show up on the GUI, i.e. if there's an error uploading it. I would suggest you have a look at the topic below on how to set it up with scripts. It splits the upload folder from the mountpoint and combines them using unionfs. That way the upload is done separately via a script, and if it fails, it gets retried on the next run. I found that to be more reliable than copying directly to the mountpoint.
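     Very roughly, the merge step in that kind of setup looks something like this (the folder names are placeholders; the actual scripts in the linked topic are more involved):

     ```bash
     # Merge a local upload folder (writable) with the rclone mount (read-only)
     # into a single view that shares and dockers can point at.
     unionfs-fuse -o cow,allow_other \
         /mnt/user/rclone_upload=RW:/mnt/user/mount_rclone=RO \
         /mnt/user/mount_unionfs
     ```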
  22. Yes, still on RC7 - I had to start new templates for my testing. Also, it has been a bug for as long as I can remember, so it must trace back to even before 6.8.
  23. I think that would only affect disk devices. My 970 is passed through via PCIe (like a GPU), so it theoretically shouldn't be affected by that.
  24. Also, if you leave a lot of empty space on the SSD then theoretically you can live without TRIM with little to no performance hit.
  25. "Checked all plugins". What are your plugins? What you are seeing sounds like there's something running on a cron so perhaps start with booting in Safe mode (to ensure all plugins are disabled) to see if you have the problem. Also, "only runs on SSD" - are you sure? Did you map the docker paths to /mnt/user or /mnt/cache?