testdasi

Everything posted by testdasi

  1. No. Also, your issue could just as well be a bad motherboard, so don't throw away the RAM sticks just yet.
  2. Some pointers: Your P2000 will have to be in the 1st slot, since I don't think Asus motherboards let you pick the 2nd PCIe slot to boot from, so it will be GPU1 = P2000 and GPU2 = RTX 2080. Make sure to save several versions of the BIOS (including old ones); some BIOS versions are known to cause trouble with passthrough. Don't overclock your RAM - run it at stock speed i.e. 2133MHz (the 3200MHz "official" speed is still an overclock, and overclocked RAM has been known to cause weird and wonderful issues even when you think it's rock-solid stable). Start with using the SATA 860 Evo for the Unraid cache and try to pass through the 2 NVMe M.2 drives to your VM as PCIe devices (i.e. stub them and tick them in the "Other PCI Devices" section of the VM template) - that will give you the best performance, since it looks like gaming is a very important use case for you. If you don't understand stubbing, watch Spaceinvader One's tutorials on YouTube about PCIe passthrough; a minimal sketch is below. Use the 500GB for the Windows OS and the 1TB for the Steam library, with overflow to a network-mapped drive backed by your array. You might also want to consider Intel offerings: if gaming is very important, the CCX/CCD design of Ryzen will catch a lot of new users off-guard when it comes to optimisation.
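    A minimal sketch of what stubbing looks like (the vendor:device IDs below are examples only - look yours up under Tools > System Devices; on newer Unraid versions you can simply tick the device there instead of editing files):
      # /boot/syslinux/syslinux.cfg (Main > Flash > Syslinux Configuration)
      # Add vfio-pci.ids=<your NVMe controller IDs> to the append line:
      append vfio-pci.ids=144d:a808,144d:a804 initrd=/bzroot
      # After a reboot, the stubbed NVMe drives show up under
      # "Other PCI Devices" in the VM template, ready to tick.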
  3. Don't use 9P. Performance is terrible. You are better off in every way by using network shares.
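    For example, instead of a 9p mount point in the VM template, a Linux VM can just mount the share over the virtual NIC (sketch; hostname, share and credentials are placeholders, and cifs-utils must be installed in the VM):
      # Inside the Linux VM:
      sudo mkdir -p /mnt/unraid
      sudo mount -t cifs //tower/yourshare /mnt/unraid \
          -o username=youruser,password=yourpass,uid=1000,gid=1000
      # For a Windows VM, just map \\tower\yourshare as a network drive.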
  4. Back up your data to the other spare machine. Double-check that everything is accounted for and copied correctly (doing checksums is recommended; see the sketch below). Boot your T20 into Unraid, set it up, create your share, then mount the share on the spare machine and copy the data over. Alternatively, if the spare machine's HDD uses a file system supported by the Unassigned Devices plugin, you can connect the HDD to the T20 (assuming there are spare SATA ports) and copy directly from the console. It's actually very simple to migrate if you have space for the data. Been there, done that.
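    A simple way to do the checksum step (sketch; paths are placeholders):
      # On the spare machine, from the top of the data:
      cd /path/to/data
      find . -type f ! -name manifest.md5 -exec md5sum {} + > manifest.md5
      # After copying (the manifest travels with the data), on the T20:
      cd /mnt/user/yourshare
      md5sum -c manifest.md5 | grep -v ': OK$'
      # No output = every file verified intact.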
  5. Your i5 6400 has Quick Sync, I believe, so if you are going to do Plex hardware transcoding anyway, you might as well give that a try before upgrading (see the sketch below) - it can offload some of the CPU requirement. I think the only thing that won't work is HEVC 10-bit, which needs Kaby Lake. If you need a Mac, buy a Mac. Considering you just need it for web development, it shouldn't be too expensive. Other than being a pretty novelty, I have never found the effort required to make a Mac OS VM work worth it. I also strongly dislike the idea that Apple can just pull the plug on all Hackintoshes by suing everyone into the ground, which I'm sure they will do once they can't charge double the market price for shit components that customers can't repair.
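    To try Quick Sync with a Plex docker, the usual recipe is roughly this (sketch; requires Plex Pass, and the exact template fields depend on which Plex container you use):
      # On the Unraid host, load the Intel iGPU driver (e.g. from the go file):
      modprobe i915
      # In the Plex docker template, add an extra parameter:
      --device=/dev/dri
      # Then tick "Use hardware acceleration when available" under
      # Plex Settings > Transcoder.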
  6. I have edited video remotely via RDP (the client is a Surface) and the experience is not too bad, but I do have a GPU passed through. Having a GPU is pretty important for video editing (or at least for Adobe Premiere); I remember reading somewhere that just having a GPU at all (even a low-end one) improves performance tremendously. You also want more cores up to a certain point, beyond which core clock is more important than core count. My experience is that 1080p needs around 6 cores and 4K needs no more than 16. The exception is warp stabilisation - that needs as many cores as you can throw at it, especially if you need to stabilise multiple clips simultaneously. A fast storage medium is also recommended. For 1080p, you can probably get away with storing source files on the array (i.e. HDD). My experience with 4K says HDD is just not fast enough for smooth scrubbing, so you probably want at least a SATA SSD. 8K needs NVMe - scrubbing 8K on HDD is a laughable experience.
  7. Note though that building in a NAS case is extremely challenging due to the limited space available. You are better off with something a little bigger like the Fractal Design Node 304. Otherwise, be prepared to buy additional short-run cables, low-profile connectors, cable ties etc.
  8. It will work, with speed limited to x8. Since no GPU can saturate PCIe 3.0 x16 anyway, there is no real-life difference. Currently, true x16 only matters for storage applications.
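    Back-of-the-envelope bandwidth numbers (standard PCIe 3.0 figures, my arithmetic, not from this thread):
      \text{per lane: } 8\,\text{GT/s} \times \tfrac{128}{130} \approx 0.985\,\text{GB/s}
      \text{x8: } 8 \times 0.985 \approx 7.9\,\text{GB/s} \qquad \text{x16: } 16 \times 0.985 \approx 15.8\,\text{GB/s}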
  9. 10 SSDs would probably require an HBA, which will occupy at least an x8 slot. That's enough for an Optane, and with a mobo supporting bifurcation, 2 Optanes 😉
  10. Adding SSDs to the array will only speed up whatever is written to those SSDs. A RAID-10 pool of SATA SSDs should be faster than a single SATA SSD, but it obviously will be independent of your array. I wouldn't play around with 10 SSDs though; I would rather get an Optane or some fast NVMe. They are more fun.
  11. Don't forget the major compromise in getting the fastest possible NAS, i.e. RAID vs Unraid. With RAID:
    RAID 0: you will lose all your data with a single failed drive.
    RAID 1: you will lose all your data with 2 failed drives.
    RAID 5/6: you will lose all your data if you have more failed drives than parity drives.
    RAID 10: best case, you will lose all your data when half your drives + 1 fail; worst case, you will lose all your data with just 2 failed drives.
    With Unraid: you will lose all your data only if all your data drives fail. There are other compromises (e.g. RAM requirements, same-size drives etc.) but the above is THE reason I don't use RAID.
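    To make the RAID 10 best/worst case concrete, a worked 8-drive example (my numbers, not from the post):
      8 \text{ drives} = 4 \text{ mirrored pairs} \Rightarrow \text{best case: } \tfrac{8}{2} = 4 \text{ failures survivable (one per pair)}; \quad \text{worst case: } 2 \text{ failures fatal (both halves of one pair)}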
  12. Yes indeed. I was about to propose the same thing johnnie just proposed above i.e. retest on a disk share to bypass shfs.
  13. To build the fastest possible NAS, I would not go with Unraid but FreeNAS, to use native ZFS. I reckon the Unraid ZFS plugin would work, but if I'm just gonna do ZFS for the fastest possible storage, why jump through hoops. If there's no need for the "NA" in "NAS", I would just run Windows bare metal with Windows Storage Spaces + regular backups.
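    For reference, the "fastest possible" ZFS layout is usually striped mirrors, the ZFS equivalent of RAID 10 (sketch; pool name and device paths are placeholders):
      # Create a striped-mirror pool from 4 drives:
      zpool create -o ashift=12 fastpool \
          mirror /dev/sda /dev/sdb \
          mirror /dev/sdc /dev/sdd
      # zpool status fastpool  ->  shows the two mirror vdevs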
  14. I would always preclear before shucking. That leaves the seller zero excuse not to provide a refund / replacement in case of DOA.
  15. For (1), lower the speed to 2133MHz (or, simpler, just turn off XMP or whatever AMD calls it). 2133MHz is the standard stock DDR4 speed.
  16. High-level idea off the top of my head, if you want to control it from your main server (assuming your secondary server also runs Unraid): use the User Scripts plugin to schedule a script that sends a WOL signal (there's a plugin that does that too, but I'm not sure about scheduling the plugin - you could just schedule the command - see the quoted post at the end below). Mount the SMB share via Unassigned Devices. Use the UD script to trigger the backup activities if the SMB share is mounted. Once the backup is done, use the same UD script to open an SSH connection to the secondary NAS and send the command "powerdown" to shut it down. Source re sending WOL
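    A rough sketch of such a script (MAC address, hostname and paths are all placeholders; assumes a WOL tool like etherwake is available and SSH keys are set up between the two servers):
      #!/bin/bash
      # Scheduled via the User Scripts plugin (e.g. nightly custom cron)
      BACKUP_MAC="aa:bb:cc:dd:ee:ff"     # secondary server's NIC
      BACKUP_HOST="backup-tower"         # secondary server's hostname
      MOUNT="/mnt/disks/backup-share"    # UD mount point of its SMB share
      etherwake "$BACKUP_MAC"            # send the WOL magic packet
      sleep 120                          # give it time to boot and UD to mount
      if mountpoint -q "$MOUNT"; then
          rsync -a --delete /mnt/user/important/ "$MOUNT/important/"
          ssh root@"$BACKUP_HOST" powerdown   # shut it back down when done
      fi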
  17. Are you accessing the VM through VNC via the web interface? You probably need to install a VNC client like VNC Viewer. For a Windows VM, don't use VNC; Windows' built-in RDP will do that for you.
  18. Based on the OP's blog, his command is:
      diskspd -w50 -b512K -F2 -r -o8 -W60 -d120 -Srw -Rtext \\storage\testcache\testfile64g.dat > d:\diskspd_unraid_cache.txt
    Translation:
      -w50: 50% write + 50% read mixed IO (the OP then also ran just-read and just-write)
      -b512K: 512K block size (the OP ran multiple times with different block sizes)
      -F2: 2 concurrent threads
      -r: random IO
      -o8: 8 concurrent IO requests per thread (so 16 total)
      -W60: 60-second warm-up
      -d120: run the test for 120 seconds
      -Srw: controls caching and write-through (if I read the diskspd docs right, r disables local caching for remote file systems and w enables write-through)
      -Rtext: show the result as text
    I believe his storage assignments are: "Cache" is on a share with cache = only, "Mount" is on the array (cache = no), and W2K19 is on a vdisk on a cache = only share. So essentially the OP is stress-testing shfs's ability to handle 16 concurrent random IOs. What he found is that shfs doesn't handle random write quite as well as random read. That kinda makes sense to me to some extent, since a read comes straight from a single device while a write requires shfs to first determine which device to write to, adding latency. Latency is always more detrimental to random IO than to sequential (and the majority of Unraid use cases are probably sequential-based).
  19. Need more details please. An example would be great cuz I can't figure out what you are trying to do with Emby at all.
  20. Updates: Found a great deal on a brand new Intel Optane 905p 960GB 2.5" U.2 NVMe SSD and made the jump. It's a great drive for boot + app + scratch disk due to its extremely low latency (and therefore fast random IO). Sequential is actually slower than all my Samsung NVMe drives, but that's not a big concern for its intended use.
    Stubbed and passed through the 905p to the workstation VM and decided to reinstall Windows to re-optimise app + boot (the boot drive used to be the SM951 AHCI M.2). Moved all the "scratch" stuff off the 970 Evo to the Optane. Ideally I would want a separate Optane for this purpose, but life is not perfect, I guess. The Optane being boot + app + scratch is still faster than the 970 Evo being scratch-dedicated.
    The SM951 is mounted via UD to take some write-heavy duty off cache. This little old M.2 has been rambo-ing through all my abuse over the years and spitting it back in my face, so we'll see how much Sylvester Stallone it still has in its tank. Removed the Toshiba 80GB (again) to make room for the 905p.
    My workstation workflow is now kinda optimised as: data ingest onto 2x Samsung PM983 NVMe; actively-being-worked-on content on the 970 Evo (for best speed, as the PM983 is relatively slower with random IO) - and by having scratch stuff on the Optane 905p, the 970 Evo actually gets faster as mixed read-write is reduced; finished content goes back to the 2x Samsung PM983 NVMe for storage.
    I completely forgot that the Gigabyte X399 Designare comes with an M.2 -> U.2 adapter and almost pulled the trigger on a new one. Fortunately, I got my senses back in time and checked the box. The U.2 connector blocks the Zotac 1070 Mini GPU heatpipe, so it can't be used in the 1st M.2 slot; it works perfectly in the 2nd M.2 slot. My Asus PCIe bifurcation card's heat sink just touches the U.2 connector.
    I'm now keeping an eye out for another good deal on an Optane. The view is to replace the 970 Evo with an Optane, move the 970 Evo to cache and decommission either the SM951 or one of the two i750s.
  21. You mentioned you had a corrupted docker image. Have you resolved the cause of that? And are you sure your ROM file isn't also corrupted, given it should be on the cache, which had the corrupted docker image?
  22. The page has 2 sections: NVENC = encode, NVDEC = decode; transcode = NVENC + NVDEC. Plex does not encode to HEVC, so B-frame support is not required. You still need to check the NVDEC section to see if you have any content that is outside GTX 1070 support, e.g. HEVC 4:4:4 (rare, but not impossible to find); a quick way to scan for it is sketched below.
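    Scanning sketch (the path is a placeholder and assumes ffprobe is available, e.g. inside one of your docker containers):
      # Flag HEVC files using 4:2:2 / 4:4:4 chroma (outside GTX 1070 NVDEC support):
      find /mnt/user/media -name '*.mkv' | while IFS= read -r f; do
          info=$(ffprobe -v error -select_streams v:0 \
              -show_entries stream=codec_name,pix_fmt -of csv=p=0 "$f")
          case "$info" in hevc,*444*|hevc,*422*) echo "$f ($info)";; esac
      done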
  23. If you are not sure what to do, the Byte My Bits YouTube channel has a tutorial on how to cover the right 3.3V pin.