Everything posted by testdasi

  1. No. Game storage would be fine on a network drive. A VM on the same server accesses the network over a virtual 100Gbps adapter, so there's no bottleneck. Latency is a bit higher, but for loading games it rarely matters.
  2. Are you writing temp data to the array? Torrents (e.g. Transmission) do not like to write to the array due to their IO patterns; they should be on cache or an unassigned device. Is a parity build / check running? A btrfs scrub? Those would also cause slowdowns. When the array is busy (e.g. parity build, torrents, etc.), any attempt to access it will load up the CPU cores due to high IO wait. The other typical cause of 100% load on a few cores is if you isolcpus them and have dockers using those cores. That's why I said you need to check your core pinning; a quick way to see what is going on is sketched below.
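     A rough sketch, from the Unraid terminal, for telling IO wait apart from real compute and spotting isolation (not a definitive procedure, just where I would look first):

         # Per-core usage; a high value in the "wa" (IO wait) column points at disk contention
         top            # press 1 to expand the per-core stats

         # Check whether any cores are isolated via the isolcpus kernel parameter
         grep -o 'isolcpus=[^ ]*' /proc/cmdline

         # List which CPUs each running docker container is pinned to (blank = no pinning)
         docker ps -q | xargs docker inspect --format '{{.Name}}: {{.HostConfig.CpusetCpus}}'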
  3. High Water (in fact I prefer Most Free). A major reason for an Unraid array in consumer use cases (as opposed to a RAID solution such as ZFS) is that having more failed drives than parity drives will not cause you to lose all your data (except when all data drives fail, which I would consider catastrophic and thus something that can only be mitigated with a backup). If you use Fill Up and, let's say, you have only 4TB of data, all of it would be on 1 drive. That means if your parity and that drive die, you lose everything (because the other 4TB drive is blank). With High Water, at least you still have some data left (as it should be split across both drives). Of course, you could be lucky and lose parity + the empty drive, but if you plan on being lucky, why not just run without parity to begin with, right? The only reason to use Fill Up, in my mind, is if all the data on the drives is easily replaceable and/or does not cost much if lost. Archival storage is a good example of something that can probably use Fill Up.
  4. I would completely disagree with your statement, in the sense that "get by" means "manage with difficulty to live or accomplish something". There is no difficulty involved in using non-ECC RAM. It's not like running non-ECC will suddenly cause the server to become unstable and you will "get by" because it only crashes once a week. No such thing. ECC RAM corrects a very specific scenario, that is a single-bit error. It has somewhat of a halo effect because all the enterprise hardware supports (or even requires) it, but there's nothing magical about it at all. If you have RAM measured in the TB range then yes, having ECC is rather important just due to probability. And then you have to consider the impact of a memory error in enterprise settings - it may mean a cell tower going out of service and affecting thousands of people, for example. Most importantly though, those crashes, no matter how rare, tend to cost WAY more than the cost of ECC RAM sticks. It's generally better to spend $5k on ECC RAM than to expose yourself to $5m of litigation cost. In the consumer space (i.e. what Unraid targets), users have 16GB, 32GB or even 64GB of RAM, way lower than those enterprise servers. And a crash doesn't carry as much of a financial impact - if any. And let me re-emphasize that if your RAM is stable (and not overclocked), the chance of a single-bit error is extremely low, the inverse of Donald Trump's ego low. So the cost of ECC RAM doesn't quite justify the minute benefits. So I would say "many save money without it" would probably be a more fitting statement.
  5. I would suggest you check and redo all the core pinning (including isolcpus, docker core pinning, VM core pinning, etc.) - see the sketch below for where each one lives.
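     For reference, the three places pinning typically lives - a sketch with placeholder core numbers, adjust to your own CPU layout:

         # 1. isolcpus: the append line in /boot/syslinux/syslinux.cfg (Main -> Flash -> Syslinux Configuration)
         append isolcpus=4-7 initrd=/bzroot

         # 2. Docker: per-container CPU pinning, equivalent to adding this to the container's Extra Parameters
         --cpuset-cpus=0-3

         # 3. VM: the <cputune> block in the VM XML
         <cputune>
           <vcpupin vcpu='0' cpuset='4'/>
           <vcpupin vcpu='1' cpuset='5'/>
         </cputune>

     Isolated cores should only be handed to VMs; if a docker container is pinned to (or free to roam onto) the same cores, you get exactly the 100% load symptom described above.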
  6. A manufacturer-rated overclock is still an overclock. Running "great" in terms of performance doesn't mean the same as stability (usually the opposite). Also, RAM speed usually should not have a severe impact on your graphics performance. Most games don't even use that much RAM to begin with. What games are you playing?
  7. As a starting point, you need to attach your diagnostics zip: Tools -> Diagnostics -> attach the zip file to your post.
  8. The 1600 should be able to comfortably handle 2x 1080p streams, so for it to be "not cutting it", you must be doing 2x 4k streams, are you? If you are struggling with even 2x 1080p streams then something is wrong with the config and/or hardware, and even 2x 4k streams should still be passable with the right config and reasonable bit-rates. How are your core assignments? --cpu-shares (see the sketch below)? You are not going to find anything cheap when it comes to hardware transcoding, and then you add the complication of a micro ATX build, so I'd say your first step should be trying to optimise your server and see if it works before trying something drastic.
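     To illustrate --cpu-shares: it is a relative weight (docker's default is 1024), not a hard limit, and it only matters when the cores are actually saturated. A hypothetical setup, with the values going into each container's Extra Parameters field:

         # Plex: higher weight, wins contention for CPU time
         --cpu-shares=2048
         # Background container (e.g. a downloader): lower weight
         --cpu-shares=256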
  9. On top of what Squid said, if possible you should use Remote Desktop Protocol (which is built into Windows).
  10. The answer is unfortunately "it depends". What sort of things are you looking to move? Generally anything that does a lot of random IO will benefit more. But then it depends on how much IO is done too so the benefit may not be perceptible.
  11. No, RGB RAM requires a specific driver and it's not on the PCI device list as far as I know.
  12. The ACS Override setting is under Settings -> VM Manager. If I had to guess, 09:00.0 is on the 2280 slot, 41:00.0 is on the top 22110 slot and 42:00.0 is on the bottom 22110 slot.
  13. No prob. I would suggest you edit the original post title to add [Solved] to it. That can help others with the same problem if they stumble upon this post.
  14. The license is tied to your USB stick, so it is unlikely that you will be able to use a separate stick to test. What you can do is clone your main stick to the secondary stick to serve as a backup. Then, if your testing on the main stick is not what you expect, you can just copy the secondary stick over, overwriting everything on the old stick. I can't remember which version implemented it, but nowadays, when updating to a new version, the last version is automatically backed up (to the same stick) so you can revert with a few clicks from within the Unraid GUI. Worth reading up on the release notes for that. (I vaguely remember 6.5.3 already had it, but then it was so long ago.)
  15. You just need to move <boot order='1'/> from the disk device to the hostdev device corresponding to your NVMe, which is 03:00.0.

      Step 1: Change this

          <disk type='block' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source dev='/dev/disk/by-id/ata-Samsung_SSD_850_EVO_250GB_S21NNSAG732394B' index='1'/>
            <backingStore/>
            <target dev='hdc' bus='sata'/>
            <boot order='1'/>
            <alias name='sata0-0-2'/>
            <address type='drive' controller='0' bus='0' target='0' unit='2'/>
          </disk>

      ... to this

          <disk type='block' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source dev='/dev/disk/by-id/ata-Samsung_SSD_850_EVO_250GB_S21NNSAG732394B' index='1'/>
            <backingStore/>
            <target dev='hdc' bus='sata'/>
            <alias name='sata0-0-2'/>
            <address type='drive' controller='0' bus='0' target='0' unit='2'/>
          </disk>

      Step 2: Change this

          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
            </source>
            <alias name='hostdev3'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
          </hostdev>

      ... to this

          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
            </source>
            <boot order='1'/>
            <alias name='hostdev3'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
          </hostdev>
  16. Backing up the flash is simple. Just go to Main -> Flash and there's a button that downloads a zip file. Restoring is a matter of overwriting everything on the stick with what's inside the zip, and everything will be back to what it was (see the sketch below). Before upgrading, read up on the release notes for the newer versions - quite a number of things have changed since 6.5.3.
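     A minimal restore sketch, assuming the stick is mounted at /mnt/usb on another Linux machine and the backup is named flashbackup.zip (both names are placeholders, yours will differ):

         unzip -o flashbackup.zip -d /mnt/usb   # -o overwrites the existing files without prompting
         sync

     If you restore onto the same stick it booted from before, nothing else is needed - the license is tied to the stick itself, not to the files on it.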
  17. A +1 on here is not going to do anything. You need to post in the Unassigned Devices support topic.
  18. Post your XML and the PCI Devices section of Tools -> System Devices. When copy-pasting from Unraid, please use the forum code functionality (the </> button next to the smiley button) so the text is sectioned and formatted correctly.
  19. Your testing method is flawed. The data has almost certainly been cached in RAM for your read test. There's no way a "normal SSD" (by which I think you mean a SATA SSD) can get 3045.652 MB/s (that is 3 gigabytes per second) - max throughput for SATA isn't anywhere near that speed. Similarly, your vdisk's 13 GB/s is faster than any PCIe x4 NVMe drive can get, even PCIe 4.0. Write speed is probably a better indication: 482 MB/s for SATA and 1.2 GB/s for NVMe sound about right, and the 3 GB/s vdisk figure is probably the VM itself caching writes in RAM. So TL;DR: nothing to remedy, though see the sketch below if you want cache-free numbers.
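     A rough sketch using fio with direct IO so the page cache is bypassed (assuming fio is available on your system; the path and sizes are placeholders):

         # Sequential read, bypassing the RAM cache
         fio --name=seqread --filename=/mnt/disks/test/testfile --rw=read --bs=1M --size=4G --direct=1 --ioengine=libaio
         # Sequential write, same idea
         fio --name=seqwrite --filename=/mnt/disks/test/testfile --rw=write --bs=1M --size=4G --direct=1 --ioengine=libaio

     The same applies inside the VM; otherwise the guest OS will happily serve reads from its own RAM.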
  20. Yes. It still is. USB 3.x still introduces latency. The headline throughput is nice, but latency is what kills real-life performance. USB 4.0 is the same, with the exception of a PCIe / Thunderbolt connection, provided it's PCIe-based storage (e.g. NVMe) with motherboard support.
  21. Nobody knows for sure since there hasn't been any scientific research into it, so take what I'm saying with a grain of salt. USB 3.0 is faster, and with speed comes heat. A lot of USB sticks are poorly designed in terms of thermals - especially the micro tiny style - and trapped heat is problematic:
        • Metal expands, which may loosen pin contact.
        • Plastic and metal expand at different rates, so the plastic casing may deform the metal parts, causing short circuits or loose contact.
        • Heat may actually damage the storage cells.
      USB 3.0 sticks also tend to be larger capacity, which is achieved by packing more data into practically the same amount of space, so when there is data corruption, more data is corrupted. And the heat contribution may come from the surrounding ports as well, not just the stick itself. This explains why plugging USB 3.0 sticks into USB 2.0 ports helps: running at a slower speed generates less heat. This is just anecdotal again, but that's how I have been running my micro stick with limited issues for years now. Having said those famous last words, it probably is gonna die soon LOL 🤣
  22. Yes and no. In theory, yes, everything you did is correct. However, heavy disk activity still has a chance of using the isolated cores. There are mitigations that help (see the bug report below), but they don't resolve the issue. I'm starting to think that it may not be an Unraid issue but a Linux issue, as btrfs scrub shouldn't be Unraid-specific. Bug Report:
  23. Just want to add that the issue itself is not fixed. The mitigation appears to help through sheer performance improvement, but it is not a resolution, unfortunately.
  24. No. What display to boot with (or none at all) is a function of the motherboard BIOS. There are 3 different "Direct CU" vbios files on the website. Are you sure you got the right one? If you can dump your own vbios, that would be best (see the sketch below); otherwise, there's no way to know for sure your symptom isn't due to a wrong vbios. The GTX 580 is also a very old card - it could simply not be happy with being passed through.
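     One common way to dump a vbios on Linux, via sysfs - a sketch only: it tends to work only when the card is not the GPU the host booted with and is not currently in use, and the PCI address and output file name below are placeholders (get your real address from Tools -> System Devices):

         cd /sys/bus/pci/devices/0000:03:00.0
         echo 1 > rom                       # enable reading the ROM
         cat rom > /boot/gtx580_dump.rom    # save the dump to the flash drive
         echo 0 > rom                       # disable again

     GPU-Z on a bare-metal Windows boot is another common way to get a dump if the sysfs route refuses to cooperate.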