testdasi last won the day on September 5 and had the most liked content!

Community Reputation: 93 (Good), 1 Follower

About testdasi
  • Rank: Advanced Member
  1. BR0 and br0 are different: Linux-based OSes are case-sensitive. If you set it to br0 (note: lower case) and it still doesn't work, then you need to post in the VM section. It's certainly not hardware-related.
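The case-sensitivity point is easy to verify from any shell; the `ip` command at the end (left commented out) assumes you run it on the Unraid box itself:

```shell
# To Linux, "BR0" and "br0" are simply two different strings, so an
# interface named br0 will never match a template that says BR0.
[ "BR0" = "br0" ] && echo "same" || echo "different"   # prints "different"

# On the Unraid server, list the actual bridge names so you can copy the
# exact, case-sensitive name into the VM template:
# ip -o link show type bridge | awk -F': ' '{print $2}'
```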
  2. You need to stuff your ignorance back up your backside. This ain't Trump's States. It has nothing to do with the config. The issue is how that would interact with parity calculation, and there have already been reports of a certain SSD causing parity errors when in the array - and that is without trim complicating the matter. And for a double dose of stuffing ignorance back up your backside, I had already raised a feature request to enable trim in the array before you threw your tantrum. So I am minding my business.
  3. It's a design choice for market segmentation. E.g. EPYC will have 4 chiplets with full memory controllers, i.e. similar to the good old quad-CPU design, targeted specifically at enterprise use (with the compromise being lower core clocks - a poor choice for a gaming VM). TR is unlikely to ever have that since it targets the enthusiast market. To be honest, I think you are a bit spoiled. 2 years ago, nobody but Linus Sebastian could afford to have 3 VMs with 8 physical cores per VM in the same PC. Now, people are already complaining that being unable to do that needs to be "rectified". For your use case, I would say having 2 PCs will probably be a better choice than trying to force 3 gaming VMs into the same case.
  4. 8 physical cores or 8 logical cores? If the former, TR isn't a good choice because the design has 4 chiplets with only 2 of them having memory controllers. That means 1 of your 3 gaming VMs will not have direct access to RAM (and PCIe slots) and thus will likely have unreliable performance. There may also be a problem with IOMMU grouping without ACS Override, which may or may not cause you lag issues. In fact, I don't think there's any reasonably affordable CPU out there that will give you at least 24 physical cores with direct RAM access.
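The physical-vs-logical distinction can be made concrete with a small calculation, using a Threadripper 1950X (16 cores / 32 threads with SMT, per AMD's spec) as the worked example; on a live system, `lscpu` reports the same socket / cores-per-socket / threads-per-core fields:

```shell
# Physical cores = sockets x cores per socket; SMT then doubles the
# logical core count. 1950X figures used as the example.
sockets=1
cores_per_socket=16
threads_per_core=2   # SMT

physical=$((sockets * cores_per_socket))
logical=$((physical * threads_per_core))
echo "$physical physical cores, $logical logical cores"
# prints: 16 physical cores, 32 logical cores
```

Pinning 8 logical cores to a VM therefore only gives it 4 physical cores' worth of execution resources on an SMT CPU.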
  5. Answers below each question.

1. Do you have an easier solution for my requirements?
There are many solutions out there. ESXi would be a popular alternative given you are not after Unraid's NAS capability. Even a popular Linux server distro, e.g. Ubuntu Server, can do it with the right packages. Unraid has a 30-day trial of full functionality, unlike ESXi, so perhaps that may entice you to give it a try and see if it works.

2. Will Unraid let me easily assign different resources to a VM before it is started?
Yes. Most things can be configured in the GUI.

3. Will Linux / Windows complain if they get assigned different GPUs (or no GPU at all) every time they are started? Let's assume I only have Nvidia GPUs.
Not at all, with caveats. Caveat 1: I can only speak from my own experience with the GPUs I have. Caveat 2: the Nvidia driver, under the right circumstances, can detect that you are running a GTX card in a VM. It will then refuse to load (in the hope that it would force users to buy an expensive Quadro card). To reduce this risk:
  • Boot Unraid in legacy mode to prevent UEFI messing about with PCIe devices.
  • Have a dedicated GPU for Unraid (a cheap one will suffice) and a motherboard that lets you pick which slot is primary (i.e. used by Unraid to boot).
  • Turn off Hyper-V.

4. Is it possible to save the data for each VM on a large (and redundant) HDD and then load it onto an SSD once it is started? In other words, basically use the SSD as a cache?
If you are after a smart read cache, then no. If you are after a manual scripted approach, then sort of yes, but it's pointless: the wait time to start a VM would be prohibitive. If you are just after a way to back up your VM vdisk to the array, then it can be done using a bash script (and the CA User Scripts plugin).

5. How good is remote desktop performance between two VMs? In case it is bad, is there any other easy way to use multiple VMs with the same peripherals at the same time?
It depends on what you mean by "good". I have used RDP, VNC Viewer and NoMachine and have never found any of them to be limiting. As for "any other easy way to use multiple VMs with the same peripherals at the same time": Synergy.
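The scripted vdisk backup mentioned in answer 4 could be sketched roughly as below for the CA User Scripts plugin. The VM name "win10" and all paths are examples of mine, not anything from the original post; adjust them to your own layout:

```shell
#!/bin/bash
# Example: copy a VM's vdisk from the cache SSD to the parity-protected
# array, but only while the VM is shut off so the image is consistent.
VM="win10"                                    # example VM name
SRC="/mnt/cache/domains/${VM}/vdisk1.img"     # example source path
DST="/mnt/user/backups/domains/${VM}/"        # example destination share

if ! command -v virsh >/dev/null 2>&1; then
  echo "virsh not found; run this on the Unraid host"
elif virsh domstate "$VM" | grep -q "shut off"; then
  mkdir -p "$DST"
  rsync -a --sparse "$SRC" "$DST"   # --sparse keeps thin images from ballooning
  echo "backed up $VM"
else
  echo "$VM is running; skipping backup"
fi
```

Schedule it in CA User Scripts (e.g. nightly) and it only ever copies a cold image.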
  6. Ryzen and TR did have severe problems, but all the severe issues have been ironed out; 1st / 2nd gen are basically rock solid. The recent hoo-hah was due to AMD releasing BIOS updates for 2nd-gen motherboards to support 3rd-gen Ryzen (probably in a rush). If you run 1st / 2nd gen, you can simply downgrade to the last stable BIOS and everything is back to normal.
  7. You can check the Unraid Nvidia topic for the Unraid (community) build with support for Nvidia transcoding. There's a link somewhere on the first page to the official Nvidia website that states how many streams each GPU can support. (Note: you will need Plex Pass for hardware transcoding.)

Unless you can get the E5 for cheap, a Threadripper 1950X / 2950X is probably better value. 1x E5-2680 v3 giving you 18000 doesn't mean 2x will give you 36000 - things just don't scale that way. My estimate, based on the typical diminishing-return curve from my testing, is something closer to 24000 (which happens to agree with the Interweb: https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E5-2680+v3+%40+2.50GHz&id=2390&cpuCount=2). That is lower than the 2950X and, I'm fairly certain, more expensive.

With regards to ESXi, I have no experience with it so can't comment. My general stance is that running a VM under a VM under a VM is rarely the best idea.

And yes, the Intel 750 is an NVMe SSD. Side note: I actually have not thrown away any of my SSDs, tracing back to the days I was rocking Windows on an SSD when TRIM was considered bleeding-edge tech. Then I was on an M.2 SSD before M.2 NVMe existed (the Samsung SM951 originally came out in AHCI form (i.e. a glorified SATA controller) before Samsung confusingly released the same model in NVMe form). Then the Intel 750 came out as one of the first ever consumer-level NVMe SSDs. [end of history lesson] 😅
  8. Guys, I don't think you need Waseh to update the script to get the latest version of rclone. Having a look at the plugin code, I believe he set it up to download whatever the latest version is at the time (of boot), so a restart will get you the latest version. For example, below is my current version:

:~# rclone --version
rclone v1.49.0-007-g16e7da2c-beta

And below is from the rclone beta website (https://beta.rclone.org/):

v1.49.0-007-g16e7da2c-beta (29/08/2019, 11:08:23)
v1.49.0-008-ge2b5ed6c-beta (02/09/2019, 06:04:57)

My last reboot happened to be on 01 Sep 2019, which is why my rclone version is v1.49.0-007 (the latest on that day).
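If you want to check from the shell whether your installed beta build is behind the newest one, `sort -V` can do a version-aware comparison; the version strings below are the two quoted in this post:

```shell
# Version-aware comparison of the two rclone beta builds quoted above.
installed="v1.49.0-007-g16e7da2c-beta"   # what `rclone --version` reported
latest="v1.49.0-008-ge2b5ed6c-beta"      # newest build listed on beta.rclone.org
newest=$(printf '%s\n%s\n' "$installed" "$latest" | sort -V | tail -n1)
if [ "$newest" = "$installed" ]; then
  echo "up to date"
else
  echo "reboot to pick up $latest"       # the plugin re-downloads at boot
fi
```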
  9. Yeah, Unraid is a poor solution when speed is the main concern. Theoretically, it is possible to set up a 2-in-1 server with Unraid which has the array for slow backup data and the cache pool running BTRFS RAID for a fast (software) RAID. However, I believe BTRFS RAID 5/6 is buggy, which makes it not ideal for your case.
  10. The part you quoted has nothing to do with IOMMU groups. Those are USB buses. I would suggest you spend some time watching SpaceInvaderOne's tutorials on YouTube. IOMMU only refers to PCIe devices (and thus things like your USB card, which contains 4 separate controllers, hence 4 PCIe devices which, if separable into individual IOMMU groups, can be passed through to the VM). Also, it has nothing to do with being "not compatible with Unraid": if it can't be passed through with Unraid, it's highly unlikely to be passed through with any other Linux-based OS. Error code 10 (which presumably is what you are referring to) basically says the device can't be started - a rather vague error for identifying where the issue is. You might want to attempt the following and keep your fingers crossed. I'm not saying it will fix your issue with the USB card, but it tends to resolve unexpected problems with PCIe passthrough.
  • Boot Unraid in legacy mode (to stop UEFI messing about with devices).
  • Start a new Windows template <-- this is critical because of the next step.
  • Pick Q35 as the machine type and OVMF instead of SeaBIOS <-- this step enhances PCIe compatibility.
  • Make other necessary changes to the template, then save it.
  • Reopen the template in XML mode and add the code below at the end of the template, just before </domain>.
  • Save the template, start the VM and keep your fingers crossed.

Code:

<qemu:commandline>
  <qemu:arg value='-global'/>
  <qemu:arg value='pcie-root-port.speed=8'/>
  <qemu:arg value='-global'/>
  <qemu:arg value='pcie-root-port.width=16'/>
</qemu:commandline>
  11. Data is not striped so read speed for any given file is limited to the disk that file resides on.
  12. CA User Scripts' schedule has an option to "Run at first array start". Schedule the script with that and it should be fine. 4GB is more than enough for most users, so it's a good starting point. If you start getting issues, increase it, but that's unlikely: it should be enough for at least 5 1080p streams (a 4K stream is roughly 4x a 1080p stream).