testdasi

Members · 2,812 posts · 17 days won
Everything posted by testdasi

  1. Problem is vfio-pci.ios <-- typo? It should be vfio-pci.ids. You are confusing vfio stubbing with vfio allow-unsafe-interrupts. They do different things, so don't mix them up. Also, please kindly use the forum code functionality (the </> button next to the smiley button) when pasting stuff from the Unraid GUI. It keeps the formatting intact and makes things a lot easier to read.
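     For reference, stubbing via vfio-pci.ids goes on the append line in syslinux.cfg - a sketch with placeholder IDs (substitute the vendor:device IDs of your own card from Tools -> System Devices):

         append vfio-pci.ids=10de:1b80,10de:10f0 initrd=/bzroot

     Allowing unsafe interrupts is a completely separate parameter (vfio_iommu_type1.allow_unsafe_interrupts=1) and has nothing to do with stubbing.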
  2. You can go to pcpartpicker website and add the i3-8100 to check the compatibility (and availability if you are in the US / UK - not sure about EU) and filter stuff by features too. It's a very useful resource for new builders.
  3. While the CPU is maxing out: Tools -> Diagnostics -> attach the zip file to your next post.
  4. Both cases are already among the quieter options, so I would prioritise other needs over noise level. If, for example, the best mobo you can find is an ATX, then you would not pick the 804. Once you have enough HDDs (and fans to cool those HDDs), it won't be whisper quiet anyway.

     The CPU is ok for a budget build and what you described as its use case.

     I would prefer a Gigabyte motherboard because their BIOS has a lot of flexibility with regards to Initial Display Output, so, for example, you can select the iGPU or any of the other PCIe x16 slots as primary (what Unraid boots with). If you ever need to pass through a powerful GPU on the 1st PCIe slot, a Gigabyte motherboard will make your life a lot easier. I personally would not pick the Asus B360M because it has only 1 PCIe x16 slot. Again, flexibility is paramount and I would value an extra PCIe x16 slot (albeit running at x4 speed) over any bells and whistles.

     A server board is not a requirement for Unraid. It's a needs thing, e.g. if there's a certain feature that only server boards have. You may even have to consider the overall cost of the alternative. For example, I had an Asrock C236 WSI build because that board was the only ITX motherboard at the time with 8 SATA ports - a perfect fit for a NAS build with the 304 case (2xSSD, 6xHDD). It was expensive, but a cheapo motherboard + an LSI controller would have cost at least 50% more.

     6 SATA ports are typically plenty for home use. You don't "NEED" 8. You should aim for fewer high-capacity drives instead of a big load of low-capacity drives. In terms of price/GB, 8TB and 4TB HDDs don't differ by much (if at all), so instead of 8x4TB, you should be getting 4x8TB. Because HDDs fail in a probabilistic manner, doubling the number of drives almost doubles your chance of having a failure (see the rough math below).

     If you still want extra ports, you can go on Ebay, search for "LSI IT Mode" and pick (quoting johnnie.black) "any LSI with a SAS2008/2308/3008/3408 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, 9400-8i". Given your admitted skill level, I would not recommend clones of the LSI boards, e.g. Dell H200/H310 and IBM M1015, just to keep your life a tiny bit simpler. It is still best if you keep to the number of SATA ports on your chipset (see the point above).

     For the PSU, go to pcpartpicker, select all your parts and see if the wattage estimate matches. I personally add about 20% to the pcpartpicker estimate for extra safety margin and future expandability (e.g. a 400W estimate would mean getting at least a 480W unit).

     In terms of fans, I'm a Noctua fan 😅. They are fugly but they are dang good.

     You are likely to need additional HDDs anyway: it is highly unlikely you will be able to just move your HDDs from the Synology (RAID) NAS to Unraid, because you can't just remove an arbitrary number of drives from a RAID. What you have probably forgotten is a migration plan, i.e. how you plan to migrate your data from the Synology NAS over to the new Unraid server. Trust me, once you put it down in a plan (and double-check on here whether it makes sense), you will probably discover a few not-nice surprises.
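     The rough math on failure rates (illustrative numbers, assuming independent failures at an annual rate p per drive):

         P(at least one failure in a year) = 1 - (1 - p)^n

     With p = 5%: 4 drives give 1 - 0.95^4 ≈ 19%, while 8 drives give 1 - 0.95^8 ≈ 34%.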
  5. All I can tell is that the problem lies with nginx. I don't use it, so I don't know what's wrong with it. Try turning nginx off and restarting your server to see if the problem reappears.
  6. Tools -> Diagnostics -> attach zip file.
  7. You will never get any help with that kind of terse post. We need details! What happened? What did you do exactly (step by step)? Any diagnostics (Tools -> Diagnostics)? What do you expect to be on the drive? Rebuilding absolutely cannot cause what you have described.
  8. You can also create diagnostics from the console if you have access to it (either directly or via SSH). Just type diagnostics. It's really important to capture the diagnostics while you are having the problem.
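     A minimal example over SSH (the hostname is a placeholder; the output location is typical but may vary by version):

         ssh root@tower
         diagnostics
         # the zip is written to the flash drive, e.g. /boot/logs/tower-diagnostics-YYYYMMDD-HHMM.zip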
  9. Why? Instability is very hard to troubleshoot when the CPU is running outside its default range of parameters.
  10. First and foremost, do NOT disable Global C-State Control. With the latest BIOS, Global C-State Control no longer needs to be disabled; in fact, it improves performance when enabled on my 2990WX. I don't remember ever needing to disable it for stability - I believe that's a Ryzen problem and not a TR4 thing.

      Next, have you checked your CPU frequency while running? Is it thermal throttling? Are you using water cooling? I have seen several recent posts in various places of people complaining about poor performance on TR4 that turned out to be a gunked-up water-cooling pump (especially the AIO kind).

      Lastly, a docker constantly and inexplicably loading a single core at 100% is a symptom of pinning an isolated core to that docker. Isolation = the core can ONLY be used by VMs. Pinning ANY isolated core to a docker will eventually load that core to 100% as the docker gets into a loop trying to use a forbidden core. (See the sketch below for how isolation is set.)
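      For reference, core isolation on Unraid is done with the isolcpus kernel parameter on the append line in syslinux.cfg (newer versions also expose this in the CPU Pinning GUI) - a sketch with placeholder core numbers; none of the listed cores should then appear in any docker's CPU pinning:

          append isolcpus=4,5,12,13 initrd=/bzroot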
  11. "Great" is subjective. There are uses for everything. RAID-1 will only save you if 1 of your 2 NVMe fails cleanly (i.e. without actual data corruption). It won't save you if, let's say, your VM is infected with crypto virus. A VM backup on the array will restore you quickly from a crypto virus infection and a NVMe failure (even partial corruption). It won't save you if, like you said, your server goes boom boom. An external drive backup of your VM vdisk (that is only plugged in while doing the backing up) will save you from NVMe failure, crypto virus infection and your server going boom boom. It won't save you if, god forbids, your house is on fire. A cloud backup will save you from NVMe failure, cryptovirus infection, your server going boom boom and your house on fire. It won't save you if the data center worker is dumped by his wife, gets angry and decides to set the entire data center on fire, killing off the storage unit that has your backup. (2), (3), (4) are examples of online, offline and offsite backups. There are always some scenarios that you can't recover your data regardless of what (and how many) backups you have. However, having a backup is still better than having no backup. That's why I'm running single-drive cache and focus on making sure everything important to me have the "4 O's" copies - original, online, offline, offsite.
  12. As always: Tools -> Diagnostics -> attach zip file. Preferably with the 3 drives plugged in, detected in BIOS but not in Unraid.
  13. Should be ok. If you want to pass one of the 2 Kingston A2000 SSDs (which use the SM2263 controller) through to a VM via PCIe, you might run into some issues. The workaround can be found here (also note the vcpu limitation): https://bugzilla.kernel.org/show_bug.cgi?id=202055#c42
  14. It's because of the "2nd Law of Entitled Asses Dynamics" which states: The total demand from entitled asses for an isolated software can never decrease over time. So in layman's terms, one cannot decrease the demand of entitled asses. At "best", it can only be shifted from LSIO to LT.
  15. Typically fans are controlled in the BIOS and, failing that, with an external fan controller. Automatic fan control in Unraid has never worked for me across multiple servers.
  16. Because you still only have a single drive in the cache. Please kindly watch SpaceInvader One's basic guides on YouTube (or read the Unraid wiki if text is more your thing). Single-drive cache = "not protected" = any data on the cache is not protected, including anything still waiting to be moved to the array.
  17. Overclocking is not recommended on any 24/7 server, particularly a storage server. It's not a matter of can vs cannot; it's a risk vs reward assessment.
  18. vfio-pci.ids has been working and still works with the current version. The new BIND method is simply a different stubbing method. Whatever happened to you, I don't think it is related to the stubbing itself, since both methods should achieve the same result.
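      For reference, the newer method stubs devices through a file on the flash drive instead of the append line - a sketch, assuming the /boot/config/vfio-pci.cfg format with a placeholder PCI address (the exact format varies between Unraid versions):

          BIND=0000:03:00.0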
  19. The fire as reported was due to the SATA end and not the MOLEX end. The MOLEX (or more correctly, to the older folks who still remember, AMP) connector is not janky! It's bullet-proof compared to SATA due to its simplicity, and it has a 5-decade tried-and-tested history. Arcing, while of course possible, is way less likely than with the tightly-packed SATA power connector - you really have to try to make a bad MOLEX connector.

      A cheap SATA extender is WORSE! Instead of spreading the load over multiple cables to the PSU, you now load more stuff onto the same cable -> a bigger fire hazard under load, and HDDs spinning up can draw quite a bit of current, which may even MELT the CABLE itself. SATA connectors are more complex, and cost-cutting on complex stuff is never wise.

      I think what happened here is that paranoia has caused you to overthink. Get a good cable, following the advice of those with the experience. You can only minimise the risk, never eliminate it.
  20. Other things you can try:
      • Update the motherboard BIOS.
      • Boot Unraid in Legacy mode (i.e. disable UEFI boot) - note: this is an Unraid host setting, not a VM setting.
      • VFIO-stub the GPU (watch SpaceInvader One's guide on YouTube for instructions).
      • Obtain a vbios for the P4000 (again, watch SIO's guide on YouTube) - see the sketch below for where it goes.
      You can also try with the GT 710 instead of the P4000 on the i5 650 system.
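      For the vbios step, once you have a matching rom file, it is referenced inside the GPU's hostdev block in the VM xml - a sketch with placeholder path and PCI address:

          <hostdev mode='subsystem' type='pci' managed='yes'>
            <source>
              <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
            </source>
            <rom file='/mnt/user/isos/vbios/P4000.rom'/>
          </hostdev>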
  21. A few things:
      • Set split level to level 1. That means each sub-folder of the TV share will stay on the same disk. Changing the split level only affects fresh data, so if you still have a series that is split across drives, things won't behave as you would expect.
      • Don't use the Fill-up allocation method unless you really intend that to be the case. Very few people will ever use it.
      • Do not use both include and exclude. Use EITHER one.
  22. So many question marks:
      • You have 2 xmls, Windows 10 and W10 Test2. The former is on VNC graphics with OVMF and the latter has the GPU passed through with SeaBIOS. Each VM has its own different vdisk. Were they intentionally set up like that (e.g. did you install Windows twice)?
      • No ping + no RDP = your VM didn't boot -> wrong vdisk?
      • Your test system is an i5 650 on a P7H55-M motherboard with 4GB RAM. I understand you want to test things out first, but that is way too far from a realistic test given what you are aiming to achieve. The P4000 requires PCIe 3.0 and your motherboard has PCIe 2.0. Yes, it is supposed to be backward compatible, but it just demonstrates how old your base system is - it's very hard to prove anything on that.
      • You set your VM up with 2GB RAM. That is the minimum system requirement for the P4000. My Win10 VM would not even boot with 2GB RAM + GPU (albeit not a Quadro).
      • You mentioned trying both SeaBIOS and OVMF. You can't casually switch between them; you have to set one thing up and stick with it. Also, use Q35-4.1 for better PCIe support (start a new template and pick Q35 + OVMF) - see the sketch below for where that ends up in the xml.
      Last but not least, you should make sure your VM works first (in terms of xml), e.g. that it can boot into the Windows installer WITH THE GPU PASSED THROUGH, and then install things from there.
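      For reference, the machine type and firmware choices end up in the VM xml roughly like this - a sketch (the OVMF paths are typical for Unraid but may differ between versions; the nvram filename is a placeholder):

          <os>
            <type arch='x86_64' machine='pc-q35-4.1'>hvm</type>
            <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
            <nvram>/etc/libvirt/qemu/nvram/your-vm-uuid_VARS-pure-efi.fd</nvram>
          </os>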
  23. The vbios has to be specific to your exact specimen (brand + model + revision), so it's very likely that you used something that doesn't match your card. It is not uncommon for some models to not have a vbios on TPU. I have even seen a vbios dumped from the 2nd slot not working when the card is in the 1st slot, but that seems rare (I have only seen 1 report).

      So given you don't have a 2nd GPU, the only thing you can do is try to get the right vbios from TPU (if it's available). I believe SIO has a guide on how to dump it with GPU-Z (which runs from Windows), so if you can somehow get a Windows installation up and running, you may be able to follow that guide. Alternatively, you can dump the vbios yourself if you have access to another computer.

      Passing through the GTX 1080 as the only GPU without a vbios is unlikely to work due to error code 43. Even with the right vbios, you might still need some workarounds in the xml, but we'll deal with that when we get there.
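      If you do end up dumping it from Linux, the classic sysfs method looks like this - a sketch with a placeholder PCI address and output path (the card must not be in use, which is why a 2nd GPU or another computer helps):

          cd /sys/bus/pci/devices/0000:03:00.0/
          echo 1 > rom                                  # enable reading the ROM
          cat rom > /mnt/user/isos/vbios/gtx1080.rom    # dump it
          echo 0 > rom                                  # disable again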