testdasi

Members
  • Posts: 2,812
  • Joined
  • Last visited
  • Days Won: 17

Everything posted by testdasi

  1. Start a new config. Turning Hyper-V on / off doesn't work reliably from the GUI.
  2. You missed my point. It's not about whether I can see your config or not. It's about HAVING your exact hardware to try it out in order to tell you for sure that it would work 100%. Also, in your case you can simply switch an HDD from SATA3 to SATA2 and use the SSD on the SATA3 port. Your HDD is unlikely to saturate SATA2 to begin with.
  3. You have to start a new template to change Hyper-V state.
  4. I would say you should look for an alternative instead of virtualising Unraid under VirtualBox. Last time I tried, network and IO speeds under VB were unusable. Add all the restrictions (e.g. vdisk sizes) on top of that and it's just not worth the effort.
  5. Going for a Xeon does not mean stability and no headaches etc. The only thing guaranteed by going Xeon is $$$. If you want an Intel build (particularly for the iGPU), just go for an i3 / i5 etc. There is really no point paying over your head for a Xeon. Intel purposely prices Xeons ridiculously higher than the equivalent consumer CPUs because of market segmentation (i.e. you don't have as much money as a company). It's a similar situation with Supermicro. Just get a consumer-grade mobo and you will be fine.

     With regards to the RAID cache pool, as a matter of principle I would discourage the use of RAID-0. If you don't need protection for the cache pool (i.e. don't need RAID-1) then mount the other NVMe as an Unassigned Device and separate write-heavy and read-heavy data. That will prolong the lifespan of your SSDs, which is way more beneficial than any real-life speed difference an NVMe RAID-0 pool could ever offer.

     You should not be running a single stick of RAM. That is not controversial. What would be somewhat controversial is that I do not advocate paying extra for ECC RAM. Some will disagree with that to some extent. However, I ran ECC for quite some time in the past and then switched to non-ECC. There was ZERO difference in stability. If your RAM is good, the chance of a single-bit corruption (which ECC is meant to fix) on a consumer server is rather remote. And I'm running 96GB of RAM. If there's no stability difference in my server, the chance of it mattering on your 32GB server is even more remote.

     For Unraid and the number of drives you have, there's no benefit to getting IronWolf Pro.
  6. Each populated slot will run at x4. The Asus one will not merge slots to run at x8. (PS: I am running the Asus card in my server.)
  7. Go into your BIOS and actually double-check. Theoretically, as long as your mobo supports bifurcation, it should work, so there is reason for optimism. But again, without having your exact hardware, it's impossible to tell for sure.
  8. We seem to have a pretty strong new Chinese user base LOL. This is the 2nd 1-post Chinese user I have seen this morning. On a serious note, it would be rather useful to have multi-language support and then enlist user contributions to translate Unraid into other languages.
  9. Because your question is rather niche. Does Unraid have an NVMe driver? It has had one for quite a long while now. Does Unraid support NVMe over a PCIe x1 adapter? That's not a question for Unraid. Theoretically, you can plug any device into an x1 slot as long as it physically fits; it will just run at x1 speed. That is common knowledge. So theoretically your NVMe should work via the PCIe x1 adapter. But without having the exact hardware that you have, there's really absolutely no way for anyone to answer for sure. (For a quick sanity check once it's installed, see the sketch below.)
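      Once the drive is in, you can check from the Unraid shell what link width it actually negotiated. A minimal sketch, assuming the NVMe shows up at PCI address 01:00.0 (an example only - take your actual address from lspci):

        # List NVMe controllers and note the PCI address (e.g. 01:00.0)
        lspci | grep -i "non-volatile"

        # Compare what the device supports (LnkCap) vs what it negotiated (LnkSta).
        # Behind an x1 adapter you would expect LnkSta to report "Width x1".
        lspci -s 01:00.0 -vv | grep -E "LnkCap|LnkSta"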
  10. No it won't. To use multiple M.2 drives on an x16 / x8 slot, you need PCIe bifurcation, which is a relatively new tech. Given your mobo came out in 2016, I doubt it supports it. (Hint: go into your BIOS and look for any option that splits x16 into x4/x4/x4/x4 or something to that effect - if it's there, the sketch below shows how to confirm it took effect.) The other alternative is an NVMe RAID card (yes, they exist) but those tend to be on the expensive end and they generally use U.2 connectors, not M.2. You will have no choice but to waste an x16 slot.
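      If you do find such a BIOS option, a quick way to verify it is working is to check that each M.2 drive on the carrier card enumerates as its own PCIe device. A minimal sketch (addresses and device names are examples only):

        # With bifurcation active, each NVMe drive behind the x16 slot should
        # show up as a separate controller, e.g. 01:00.0, 02:00.0, 03:00.0, 04:00.0
        lspci -nn | grep -i "non-volatile"

        # Confirm the corresponding block devices exist
        ls /dev/nvme*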
  11. I use ln. Let's say you want to point /mnt/cache/personal/doc to /mnt/disks/ssd/documents, then do:

        ln -sv /mnt/disks/ssd/documents /mnt/cache/personal/doc

      Unlike mount, it's important that you don't already have a folder called "doc" at /mnt/cache/personal, or the command will create a "documents" symlink under /mnt/cache/personal/doc (i.e. it would become /mnt/cache/personal/doc/documents). I like symlinks for simple stuff since I just have to do it once and it persists across reboots; I think a bind mount needs to be redone after every reboot (see the sketch below for both). But then the Plex db has a lot of links too (if I remember correctly) so you might not be able to use a symlink with it. With dockers though, I just point the path directly to the location in the docker config. I find that even more straightforward.
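      To make that concrete, here's a minimal sketch using the same hypothetical paths as above - pick one of the two options; the bind mount is shown only for comparison and would need to be re-applied after every reboot (e.g. from the go file or a user script):

        # Option A - symlink: persists across reboots because it lives on the cache drive.
        # Make sure /mnt/cache/personal/doc does not already exist as a folder,
        # otherwise the link ends up inside it as .../doc/documents
        ln -sv /mnt/disks/ssd/documents /mnt/cache/personal/doc
        ls -l /mnt/cache/personal/doc   # verify where the link points

        # Option B - bind mount: needs an existing (empty) folder and does not
        # survive a reboot, so it has to be re-run each time
        mkdir -p /mnt/cache/personal/doc
        mount --bind /mnt/disks/ssd/documents /mnt/cache/personal/doc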
  12. The BX500 is DRAM-less. It has been widely noted by tech reviewers and YouTubers that DRAM-less SSDs should be avoided.
  13. Silly question, but have you passed through the HDMI Audio? Is your monitor using HDMI input with audio and the volume turned up? Check Device Manager to see if you have an NVIDIA High Definition Audio device. I once spent 30 minutes trying to fix an audio issue and then realised my monitor had the volume at 0. Then see the SpaceInvader One tutorial below, get the msi_util (MSI interrupt fix) from the vid description and tick your GPU and Audio device. In terms of drivers, google "nvidia patch windows" to find the github site and use the latest supported driver from there (the "Driver Link" entries are direct links to the Nvidia website so they are genuine). I usually pick the latest Studio Driver but you may pick whatever is latest. Then read the github page to understand what the patch does and apply it if you want to. It's like Fight Club. I'm not supposed to talk about Fight Club. 😉
  14. Update: I found good deals on 2 additional 4TB Samsung 860 Evos. My array now consists of 3x 4TB SSD and the 10TB HDD.

      I originally wanted to use the 10TB as parity but that would waste 60% of the drive capacity (which I can use for other purposes) while reducing write speed (which would defeat the purpose of getting SSDs). Then I realised I don't need parity protection for all the data on those SSDs (just my main backup, which doesn't change that often, less than daily) and 12TB is kinda close to 10TB anyway. So instead of having a 10TB parity drive, I set up a scheduled script to create a copy of each of the 3x 4TB SSDs on the 10TB (a sketch of the idea is below). This protects me from a single drive failure, just like parity, without compromising speed or available capacity. It only works because the 3x 4TB SSDs don't contain regularly updated data (which would require live parity).

      Then I found another good deal on a 960GB Intel Optane 905P, so I pulled (another) trigger on that and use it as the working drive for my VM. The non-boot 905P drops latency by about 4us vs the boot drive, about 7% faster. The old working drive (2TB Samsung 970 Evo) becomes the temp drive. The old temp drive (1.2TB Intel 750) is removed from the VM and mounted unassigned. I moved all the static data out of cache and onto this UD drive. With the help of symlinks, I can still use the Unraid share feature with the data stored on UD. My cache now has 98% of its space available for write-heavy activities (which is what I am using it for).

      The 2x 2TB SATA SSDs (Samsung 850 Evo and Crucial MX300) are pooled using mergerfs into a 2nd array (that supports trim). This is used for data that doesn't need protection. They will also get written to more frequently than the drives in the actual array. I also set up another mergerfs pool that pools the available space of (in order) the HDD, MX300, 850 Evo and 3x 860 Evo to create a 3rd, space-priority array. This will be used when I need to dump a very large (>10TB) amount of data to the server with speed being less of a concern. This is a pretty niche use case but I'm glad mergerfs helps with it. It would help if Limetech could implement a multiple-array feature asap (preferably with trim!) but mergerfs is good enough for now.

      Today is Friday the 13th. COVID-19 is a pandemic. Prayers to the human race.
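      For anyone curious, neither part is anything fancy. A minimal sketch of the idea, with hypothetical mount points (/mnt/disk1-3 for the array SSDs, /mnt/disks/hdd10tb for the 10TB UD drive, /mnt/disks/ssd850 and /mnt/disks/mx300 for the SATA SSDs, /mnt/pool_ssd for the pool) - the real thing just runs on a schedule via the User Scripts plugin:

        #!/bin/bash
        # Mirror each array SSD onto its own folder on the 10TB drive.
        # Paths are examples only - adjust to your own disk layout.
        for disk in disk1 disk2 disk3; do
            rsync -a --delete "/mnt/${disk}/" "/mnt/disks/hdd10tb/backup_${disk}/"
        done

        # A mergerfs pool in the same spirit as the 2-SSD "2nd array":
        # branches are colon-separated, category.create=mfs writes new files
        # to the branch with the most free space.
        mkdir -p /mnt/pool_ssd
        mergerfs -o defaults,allow_other,category.create=mfs \
            /mnt/disks/ssd850:/mnt/disks/mx300 /mnt/pool_ssd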
  15. I use privoxyvpn by binhex - he's an old-timer on the app store. There's also OVPN_Privoxy by alturismo - he wrote the guide on how to use the new --net feature.
  16. Please start a new topic. You will end up confusing people trying to help because your issue is not even remotely similar to the OP's!
  17. Nobody can help you with that little info! At a minimum, please attach your diagnostics (Tools -> Diagnostics -> attach the zip file). Then provide details on what you were trying to do and what you actually did. The more details, the better!
  18. How did you change the capacity? Perhaps you will have to create a new VM with a new vdisk, install Windows, add the old expanded vdisk to the new VM and use a partition tool to see what's happened to it. (A quick check of the image itself is sketched below.)
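      Before going that far, it may be worth confirming what size the image actually is now. A minimal sketch, assuming a hypothetical vdisk path and with the VM shut down first:

        # Show the current virtual size and format of the vdisk
        qemu-img info /mnt/user/domains/Windows10/vdisk1.img

        # If the image itself was never actually grown, this is the usual
        # way to add space (VM must be shut down)
        qemu-img resize /mnt/user/domains/Windows10/vdisk1.img +50G

      Windows will still need the partition extended afterwards (e.g. in Disk Management), which is where the partition tool comes in.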
  19. That's not an Unraid problem. That has been a Chrome issue (sort of) for years. Basically it doesn't distinguish between password boxes on the same site: if it's a password box, it will autofill.
  20. @bubbaQ: you might want to change the title to [6.8.x] since this also affects subsequent versions.
  21. +1 to what itimpi said. Having the smart switch actually makes it very confusing to help with those "why is write so slow" questions. I also don't consider the smart switch that smart, because I have found switching to r/m/w actually makes things worse than staying in reconstruct write when there's only a small number of simultaneous activities.
  22. What's your CPU? If your CPU has an iGPU then there's no need to buy another low-end GPU for Unraid, especially with a Gigabyte mobo. In the BIOS, look for "Initial Display Output" and pick the iGPU. Unraid will then boot with the iGPU (make sure to connect the display to the mobo HDMI / DP output to check that it is indeed booting on the iGPU). If your CPU doesn't have an iGPU then any cheap card will do (e.g. the GT710 is a popular choice on here). With a Gigabyte mobo, keep your 1080 in the 1st PCIe slot, plug the low-end card into one of the slower slots and then pick that slot for Initial Display Output in the BIOS. In terms of effectiveness in resolving error code 43, iGPU and low-end GPU are equivalent; whether your CPU has an iGPU decides which option to go for.

      Now you said you have dumped your own vbios for the 1080. Did you dump it while Unraid was NOT booting with the 1080? Did you dump it with the 1080 in the 1st PCIe slot? Did you perform the hex edit on the header (i.e. following the SpaceInvader One tutorial on Youtube)? If you answer no to any of the above questions, redump your vbios.

      As for the "odds" of it resolving the error, you will understand it better if you know the background. Nvidia wants you to buy an expensive Quadro to use in a virtualised environment, so the Nvidia driver will refuse to load if it detects a consumer card (e.g. GTX) being used in a VM. Error code 43 is actually a generic code that says "it doesn't work" (i.e. a wonky GPU / incomplete pass-through / not enough power etc. will all produce the exact same error, so you have to be 100% sure that "it works" before even considering any resolution). One of the detection methods is to check whether the GPU has been initialised before being used in the current machine. A vbios is therefore critical because it makes the GPU act as if it has been freshly powered on (as well as helping the GPU initialise properly in the VM - hence it's better NOT to use a vbios than to use the wrong one!). Having a GPU for Unraid to boot with (either iGPU or low-end dedicated) ensures the to-be-passed-through GPU is not initialised prior to being passed through (because Unraid doesn't boot with it), so it generally helps with resolving error code 43. Another thing you can try is to boot Unraid in legacy mode (i.e. NOT UEFI). Some cards, for whatever reason, don't like booting in UEFI before being passed to a VM. So as you can see, booting Unraid with a non-1080 will help your case but it is no guarantee (e.g. a wrong vbios won't work regardless of any other resolution, hence my comment above to redump the vbios).

      Another detection method is via Hyper-V. The old advice (circa 2018) was to disable Hyper-V in the VM template. The new advice is to keep it on and make these xml edits. Add a dummy vendor ID tag in the Hyper-V section:

        <hyperv>
          ...
          <vendor_id state='on' value='0123456789ab'/>
          ...
        </hyperv>

      and add this kvm tag:

        <kvm>
          <hidden state='on'/>
        </kvm>

      From my personal experience, what I did (actually just a few months ago, due to wanting to factory reset my workstation VM) was:
      • Boot Unraid in legacy mode with my GT710 in the 3rd PCIe slot and my 1070 in the 1st PCIe slot. Remember that Initial Display Output setting! That's the whole point of getting a Gigabyte mobo for Unraid, in my opinion.
      • Dump my 1070 vbios properly (with the header edit etc., i.e. the SpaceInvader One tutorial).
      • Start a VM template (Q35 + OVMF is my preferred choice) with Hyper-V + VNC display.
      • Install Windows + turn on RDP (so I could remote in to check for issues if the GPU didn't work, although I actually didn't need it).
      • Edit my VM template to remove VNC and pick the 1070 GPU with vbios and HDMI Audio + the aforementioned hyperv / kvm xml edits. I didn't put the GPU + Audio on the same bus with multifunction but I would still recommend everyone do it regardless.
      • Boot my VM (which booted successfully with display from the 1070). Install Nvidia drivers, reboot a few times (including a full shutdown + start) to be 100% sure there's no error.
      • Reboot Unraid to the BIOS. Change Initial Display Output to the 1st slot (the 1070), reboot to Unraid (which now uses the 1070 to boot!) and turn on the VM to re-verify that it works without the need for the GT710.
      • Shut down Unraid, remove the GT710 and run Unraid with only 1 GPU.

      In other words, the GT710 was mainly there to dump the vbios (a sketch of that step is below). PS: next time you have a problem, it's generally a good idea to attach the diagnostics zip (Tools -> Diagnostics).
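      On the vbios step, this is roughly what a dump from sysfs looks like. A minimal sketch, assuming the card sits at PCI address 01:00.0 (an example - take yours from Tools -> System Devices) and that Unraid did NOT boot with that card; the resulting file may still need the Nvidia header stripped as per the SpaceInvader One tutorial:

        # Enable reading of the ROM, copy it out, then disable again.
        # 01:00.0 and the output path are examples only.
        cd /sys/bus/pci/devices/0000:01:00.0
        echo 1 > rom
        cat rom > /mnt/user/isos/vbios/gtx1080.rom
        echo 0 > rom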
  23. I'm running a mixed SSD + HDD array and there's no issue to speak of. (I do know array SSDs can't be trimmed, but then I don't do aggressive write activities on array drives. I have dedicated unassigned devices for the write-heavy stuff.)
  24. If you are after an additional user on top of root to log in to the webGUI, then it's not possible. Only 1 user can log in to the GUI. If you are after ways to access your server remotely, then a VPN is the way to go. There's WireGuard integrated into Unraid, or you can install the OpenVPN plugin. You should not expose your server directly to the Internet - Unraid is not hardened for that.
  25. This is your NVMe: IOMMU group 14: [15b7:5009] 01:00.0 Non-Volatile memory controller: Sandisk Corp Device 5009 (rev 01)
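      In case it helps anyone reading along, that IOMMU group line comes from the kind of listing you can generate yourself (the GUI shows the same information under Tools -> System Devices). A commonly used sketch:

        #!/bin/bash
        # Print every IOMMU group and the PCI devices it contains
        for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}:"
            for d in "$g"/devices/*; do
                echo -e "\t$(lspci -nns "${d##*/}")"
            done
        done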