Everything posted by testdasi

  1. BR0 and br0 are different. Linux-based OSes are case-sensitive. If you set it to br0 (note: lower case) and it still doesn't work, then you need to post in the VM section. It's certainly not hardware related.
  2. You need to stuff your ignorance back up your backside. This ain't Trump's states. It has nothing to do with the config. The issue is how that would interact with parity calculation, and there have already been reports of a certain SSD causing parity errors when in the array - and that is without trim complicating the matter. And for a double dose of stuffing ignorance back up your backside, I already raised a feature request to enable trim in the array before your tantrum-throwing. So I am minding my business.
  3. It's a design choice for market segmentation. E.g. EPYC will have 4 chiplets with full memory controllers, i.e. similar to the good old quad-CPU design, targeted specifically at enterprise use (with the compromise being lower core clocks - a poor choice for a gaming VM). TR is unlikely to ever have that since it targets the enthusiast market. To be honest, I think you are a bit spoiled. Two years ago, nobody but Linus Sebastian could afford 3 VMs with 8 physical cores per VM in the same PC. Now people are already complaining that being unable to do that needs to be "rectified". For your use case, I would say having 2 PCs will probably be a better choice than trying to force 3 gaming VMs into the same case.
  4. 8 physical cores or 8 logical cores? If the former, TR isn't a good choice because of its design: 4 chiplets, with only 2 having memory controllers. That means 1 of your 3 gaming PCs will not have direct access to RAM (and PCIe slots) and thus will likely have unreliable performance. There may also be a problem with IOMMU grouping without ACS Override, which may or may not cause you lag issues. In fact, I don't think there's any reasonably affordable CPU out there that will give you at least 24 physical cores with direct RAM access.
  5. Answers below each question.
     1. Do you have an easier solution for my requirements?
     There are many solutions out there. ESXi would be a popular alternative given you are not after Unraid's NAS capability. Even a popular Linux server distro, e.g. Ubuntu Server, can do it with the right packages. Unraid has a 30-day trial so you can test out the full functionality (unlike ESXi), so perhaps that may entice you to give it a try and see if it works.
     2. Will Unraid let me easily assign different resources to a VM before it is started?
     Yes. Most things can be configured in the GUI.
     3. Will Linux / Windows complain if they get assigned different GPUs (or no GPU at all) every time they are started? Let's assume I only have Nvidia GPUs.
     Not at all, with caveats. Caveat 1: I can only speak from my own experience with the GPUs I have. Caveat 2: the Nvidia driver, under the right circumstances, can detect that you are running a GTX card in a VM. It will then refuse to load (in the hope that this forces users to buy an expensive Quadro card). To reduce this risk:
     - Boot Unraid in legacy mode to stop UEFI messing about with PCIe devices.
     - Have a dedicated GPU for Unraid (a cheap one will suffice) and a motherboard that lets you pick which slot is primary (i.e. the one Unraid boots with).
     - Turn off Hyper-V.
     4. Is it possible to save the data for each VM on a large (and redundant) HDD and then load it onto an SSD once it is started? In other words, basically use the SSD as a cache?
     If you are after a smart read cache, then no. If you are after a manual scripted approach, then sort of yes, but it's pointless - the wait time to start a VM would be prohibitive. If you are just after a way to back up your VM vdisk to the array, it can be done with a bash script (and the CA User Scripts plugin).
     5. How good is remote desktop performance between two VMs? In case it is bad, is there any other easy way to use multiple VMs with the same peripherals at the same time?
     It depends on what you mean by "good". I have used RDP, VNC Viewer and NoMachine and have never found any of them limiting. For "any other easy way to use multiple VMs with the same peripherals at the same time": Synergy.
  6. Ryzen and TR did have severe problems early on, but all the severe issues have been ironed out. 1st / 2nd gen are basically rock solid. The recent hoo-hah was due to AMD releasing BIOS updates for 2nd gen mobos to support 3rd gen Ryzen (probably in a rush). If you run 1st / 2nd gen, you can simply downgrade to the last stable BIOS and everything is back to normal.
  7. You can check the Unraid Nvidia topic for the Unraid (community) build with support for Nvidia transcoding. There's a link somewhere on the first page to the official Nvidia website that states how many streams each GPU can support. (Note: you will need Plex Pass for hardware transcoding.)
     Unless you can get the E5 for cheap, a Threadripper 1950X / 2950X is probably better value. 1x E5-2680 v3 giving you 18000 doesn't mean 2x will give you 36000 - things just don't scale that way. My estimate, based on a typical diminishing-return curve from my testing, is something closer to 24000 (which happens to agree with the Interweb: https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E5-2680+v3+%40+2.50GHz&id=2390&cpuCount=2). That is lower than the 2950X and, I'm fairly certain, more expensive.
     With regards to ESXi, I have no experience with it so can't comment. My general stance is that running a VM under a VM under a VM is rarely the best idea.
     And yes, the Intel 750 is an NVMe SSD. Side note: I actually have not thrown away any of my SSDs, going back to the days when I was rocking Windows on an SSD and TRIM was considered bleeding-edge tech. Then I was on an M.2 SSD before M.2 NVMe existed (the Samsung SM951 originally came out in AHCI form (i.e. a glorified SATA controller) before Samsung confusingly released the same model in NVMe form). Then the Intel 750 came out as one of the first ever consumer-level NVMe SSDs. [end of history lesson] 😅
  8. Guys, I don't think you need Waseh to update the script to get the latest version of rclone. Having a look at the plugin code, I believe he set it up to download whatever the latest version is at the time (of boot). So a restart will get you the latest version. For example, below is my current version:
     :~# rclone --version
     rclone v1.49.0-007-g16e7da2c-beta
     And below is from the rclone beta website (https://beta.rclone.org/):
     v1.49.0-007-g16e7da2c-beta - 29/08/2019, 11:08:23
     v1.49.0-008-ge2b5ed6c-beta - 02/09/2019, 06:04:57
     My last reboot happened to be on 01 Sep 2019, which is why my rclone version is v1.49.0-007 (the latest on that day).
  9. Yeah, Unraid is a poor solution when speed is the main concern. Theoretically, it is possible to set up a 2-in-1 server with Unraid: the array for slow backup data and the cache pool running BTRFS RAID as a fast (software) RAID. However, I believe BTRFS RAID 5/6 is buggy, which makes it unsuitable for your case.
  10. The part you quoted has nothing to do with IOMMU groups. Those are USB buses. I would suggest you spend some time watching SpaceInvaderOne's tutorials on Youtube. IOMMU only refers to PCIe devices (and thus to things like your USB card, which contains 4 separate controllers, i.e. 4 PCIe devices which, if separable into individual IOMMU groups, can be passed through to the VM). Also, it has nothing to do with being "not compatible with Unraid". If it can't be passed through with Unraid, it's highly unlikely to be passed through with any other Linux-based OS. Error code 10 (which presumably is what you are referring to) basically says the device can't be started - a rather vague error for identifying where the issue is. You might want to attempt the following and keep your fingers crossed. I'm not saying it will fix your issue with the USB card, but it tends to resolve unexpected problems with PCIe passthrough.
      1. Boot Unraid in legacy mode (to stop UEFI messing about with devices).
      2. Start a new Windows template <-- this is critical because of the next step.
      3. Pick Q35 as machine type and OVMF instead of SeaBIOS <-- this step will enhance PCIe compatibility.
      4. Make other necessary changes to the template, then save it.
      5. Reopen the template in XML mode and add this bit of code at the end of the template, just before </domain>:
         <qemu:commandline>
           <qemu:arg value='-global'/>
           <qemu:arg value='pcie-root-port.speed=8'/>
           <qemu:arg value='-global'/>
           <qemu:arg value='pcie-root-port.width=16'/>
         </qemu:commandline>
      6. Save the template, start the VM and keep your fingers crossed.
  11. Data is not striped so read speed for any given file is limited to the disk that file resides on.
  12. The CA User Scripts schedule has an option to "Run at first array start". Schedule the script with that and it should be fine. 4GB is more than enough for most users, so it's a good starting point. If you start getting issues, increase it, but that's unlikely. It should be enough for at least 5 simultaneous 1080p streams (a 4K stream counts as roughly 4x 1080p streams).
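For illustration, a minimal "run at first array start" script of the sort described - here a hypothetical one that prepares a RAM-backed transcode directory (the path is my own example, not a Plex default) - could look like:

```shell
#!/bin/bash
# Hypothetical CA User Scripts script, scheduled to run at first array start.
# /tmp on Unraid lives in RAM, so a transcode directory here uses RAM rather
# than wearing out an SSD. Map this path into the Plex container as its
# transcode directory.
mkdir -p /tmp/PlexRamScratch
chmod 777 /tmp/PlexRamScratch
```

The directory disappears on reboot, which is exactly why the script is tied to array start rather than run once by hand.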
  13. It's in the GUI. You might want to watch SpaceInvaderOne basic setup vid.
  14. That's where the cache pool / array comes into play. You create a share - call it "iso" or something - and put the iso image there (meaning the iso is on the cache pool (cache = Only / Prefer) or on the array). An alternative, assuming your USB stick is large enough (or you have another USB stick), is to save the iso to the USB stick.
  15. Given "a little gaming" is in the pipeline, the 2700 may still be overkill. You need a GPU for the VM, and to save yourself headaches you might want 2 GPUs: 1 cheap one for Unraid and 1 to pass through to the VM. So a more economical option in my mind may be a B450 motherboard + Ryzen 2400G. X470 does not offer any significant improvement for Unraid-based uses over B450 (e.g. no need for extreme VRMs for overclocking, no need for SLI, etc.), so B450 will save you a bit. The 2400G has a built-in GPU that can be used for Unraid (see the point above about saving yourself headaches). It's also cheaper than the 2700. If you just need a VM for "a little gaming", you can assign 2 cores to the gaming VM, 1 core to the utility VM and 1 core to Unraid (half of which is cpu 0, the other half to pin the VM emulator to). That should be sufficient for "a little gaming". The cost saving can be spent on a dedicated GPU for the gaming VM. Not necessarily the best option, but something for you to consider.
  16. That may be due to writes being cached in RAM first.
  17. Unraid prefers cpu 0. That's the only (logical) core you absolutely should leave out. Its Hyperthreading sister (i.e. cpu 6) should also be left out, but that's not strictly necessary - for instance, you can pin the VM emulator to that core (it's in the SpaceInvaderOne advanced VM config video). For your 2 gaming VMs, it depends on what sort of things you are running. Perhaps the 1080 Ti VM gets 1,2,3,7,8,9 and the GTX 680 VM gets 4,5,10,11. If the GTX 680 VM is used mainly for web browsing and simple games, you can give 1-4 + 7-10 to the 1080 Ti VM and 5 + 11 to the GTX 680 VM. You could even give 1,2,4,5,7,8,10,11 to the 1080 Ti VM and 3,9 to the GTX 680 VM. Unlike Zen-based CPUs, the 8086K doesn't really need to care about CCD and CCX. The only big no-no is to split a Hyperthreading pair across different VMs, e.g. cpu 1 to one VM and cpu 7 to a different VM.
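As a sketch, the 1,2,3,7,8,9 layout for the 1080 Ti VM would look like this in the VM's XML (a hypothetical <cputune> fragment; vcpu numbers are the guest's cores, cpuset is the host cpu, and the guest pairs are kept on matching host Hyperthreading pairs):

```xml
<cputune>
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='7'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='8'/>
  <vcpupin vcpu='4' cpuset='3'/>
  <vcpupin vcpu='5' cpuset='9'/>
  <!-- pin the emulator to cpu 6, the Hyperthreading sister of cpu 0 -->
  <emulatorpin cpuset='6'/>
</cputune>
```

The Unraid GUI generates the vcpupin lines when you tick cores; the emulatorpin line is the part you add by hand in XML view.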
  18. They don't show up as "970" exactly. You need to look for something like this:
      Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981
      For vfio stubbing purposes, I suspect your ID is 144d:a808 (because that's what my 970 and PM983 show up as).
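For reference, stubbing by ID on Unraid goes on the append line of syslinux.cfg on the flash drive (the 144d:a808 ID here is from my drives above - verify yours with `lspci -nn` before copying it):

```
label Unraid OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=144d:a808 initrd=/bzroot
```

After a reboot the controller is bound to vfio-pci instead of the nvme driver and can be passed through to a VM. Note this stubs every controller with that ID, so don't do it if your cache drive shares the same ID.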
  19. Sorry, but I disagree. LT has too often been too nice, bowing down to pressure from the minority that happens to scream really loudly about niche issues. E.g. the new GUI was too big / small for certain people who refused to use their browsers' native zoom functionality and demanded very loudly that LT change everything back to the way they were familiar with - and guess what, LT spent time and resources responding to these GUI supremacists while ignoring the fact that Gigabyte X399 users (e.g. me) have severe lags that to date still have not been resolved. So it's rather refreshing to hear LT dev(s) have the balls to respond to loudmouth screamers in a different way. Donald Trump and Boris Johnson came to power because people were too nice to scream back at them.
  20. Hard links can't span physical volumes. Potentially you have a on one disk and b on another disk.
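You can see the restriction by trying it: within one filesystem `ln` succeeds, while across mount points (e.g. /mnt/disk1 to /mnt/disk2 - example paths) it fails with "Invalid cross-device link". A minimal sketch:

```shell
#!/bin/sh
# Hard links only work within a single filesystem.
dir=$(mktemp -d)
cd "$dir"
echo data > a
ln a b                       # same filesystem: succeeds
[ a -ef b ] && echo "a and b share one inode"
# Across disks this would fail, e.g. (hypothetical paths):
#   ln /mnt/disk1/share/a /mnt/disk2/share/b
#   -> ln: failed to create hard link: Invalid cross-device link
```

This is also why hard links inside a /mnt/user share only work when both paths land on the same underlying disk.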
  21. You seem to be pushing hard to justify the X470 - in which case, why ask questions? To use the 2nd M.2 slot, the bottom PCIe slot must be disabled, so you are back to the same problem, i.e. no expansion. So is the X470 "enough"? Certainly not with your requirement for further future expansion. Also, the 2nd M.2 slot is usually connected to the chipset, which means it can't be separated from the other devices without ACS Override - which may or may not cause problems. So even the X570 may not allow you to pass through both NVMes to VMs. There's no way to ascertain the exact layout and IOMMU grouping without owning the actual board.
  22. Yes, it means something. The 1TB and the 840 have not been formatted. I remember there's a box somewhere to format unmountable drives, but I can't remember where - I have not had to format anything for quite some time now (since I use the Preclear plugin). You might want to post a question in the General Support forum (alternatively, watch SpaceInvaderOne's videos - I'm pretty sure it's in the basic setup guide). Btw, if you have any data you still want to recover from the 1TB and the 840, now is the time.