Everything posted by testdasi

  1. I protest this decision! It is unacceptable that I can't use my niche password " ". 😂
  2. NoFear

    You made a typical new-user mistake. You split 16GB of RAM per VM and forgot that Unraid itself needs memory to run. Cut it to 12GB per VM and then see if you can run both at the same time. Then slowly increase the RAM until things crash, and dial it back a little from there. (A rough sketch of the relevant XML lines is below.)
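    Purely for illustration (not part of the original post), the RAM allocation lives in these two lines of the VM's XML; 12582912 KiB is 12GB and is just an example value:

      <!-- sketch: allocate 12GB to the VM, leaving headroom for Unraid itself -->
      <memory unit='KiB'>12582912</memory>
      <currentMemory unit='KiB'>12582912</currentMemory>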
  3. Only partially true. Direct Stream certainly uses less CPU, but the difference in load between downscaling 1080p to 720p vs 480p (or even some odd resolution) is rather negligible (except perhaps if you have unusually old hardware and/or stream to an unusually high number of simultaneous clients).
  4. "perfectly safe"? definitely not. just "safe"? probably. Anyone having problems passing through PCIe to a VM should be booting in legacy mode. This has proven to help for both error 43 (Nvidia driver detecting non-Quadro card being used in a VM) and reset issues (e.g. my old R9 390 a long time ago). Note: "help", not "solve". The only exception is of course if your hardware for whatever reasons requires UEFI to boot. However, it usually is because of a certain specific card requiring UEFI (i.e. a rather unusual circumstance). So if your hardware is in the nothing-special category but you can't boot in legacy (or suddenly unable to boot in legacy), it's more likely than not that you just need to rebuild your USB stick. There is really no benefit of booting UEFI over Legacy (or vice versa). So if you have switched to UEFI and nothing seems to stop working then just stick with UEFI. Just keep it in mind that IF you have issues with passing through PCIe devices in the future, start your troubleshooting with switching back to booting in Legacy Mode.
  5. This forum only focuses on Unraid-related queries, so it's unlikely you will get any answer relevant to Fedora 31. They don't even use the same kernel.
  6. I think it's a feature request and not a bug. It takes effort to make Linux case-insensitive (in the same way it would take effort to make Windows case-sensitive).
  7. Tools -> System Devices -> wait a few seconds -> copy-paste what you see in the "PCI Devices and IOMMU Groups" section here. (Alternatively Tools -> Diagnostics and attach the zip file, but that's TMI for what you are asking.)
  8. Your statement is a little confusing. It can mean 2 things:
    (1) 2x data drives + 2x parity drives = the 4x 10TB that you have.
    (2) 4x 10TB data drives + 2x parity drives (to be purchased).

    If it's (1) then you are better off with single parity, using the other drive as a backup for your most critical data. If you already have a backup solution in place, then 3x array + 1x parity should be sufficient.

    If it's (2) then it's less clear-cut. Based on Backblaze HDD failure stats, I estimated that the "reasonable break-even point" (defined by me as the point at which the expected number of failures is no higher than the number of parity drives) for dual parity is still higher than 6 drives (i.e. 4 data, 2 parity). It is even higher if we only consider the stats for 8TB+ drives, which even Backblaze observed don't seem to fail as often. However, I can see the merit of dual parity for the very risk-averse; after all, HDD failure is a probability thing.
  9. Hmm... that's strange. Maybe try unmounting everything and redoing the permissions on all the local folders. Reboot and then rerun the script.

    A few points:
    • Unless you are doing a full Plex library scan from scratch (i.e. a blank database), it should NOT get you an API ban. I do it all the time and have never run into any issue. So I think you have something else happening. The usual suspect is any kind of subtitle-finder docker; those were known to break the limit.
    • Avoiding an accidental ban like yours is why it's recommended to use a Team Drive (now aka "Shared Drive" - new name, same features) instead. That would allow you to have a "plan B".
    • In terms of mapping, you can decide for yourself:
      mount_rclone = excluding things not yet uploaded to gdrive (i.e. ONLY files that are actually currently on the gdrive)
      mount_unionfs = gdrive + things not yet uploaded to gdrive
  10. It sounds like you mapped Sonarr to the upload folder. Sonarr should be mapped to the unionfs folder (and let unionfs control the use of the upload folder).
  11. Question 1: did you enable Precision Boost in BIOS?
    Question 2: is there actual load on the cores? The scaling only kicks in when there's load (you can pick "Performance" which will kick it up at really low load).
    Question 3: how did you check CPU frequency?
  12. A single xfs disk (my cache disk, to be exact). The only thing I would imagine to be unusual about my config is that I point my images to /mnt/cache (instead of the default /mnt/user). I can't remember anything else. I even ran a feature update on an outdated Windows VM to 1903 (which used 26/32GB of vdisk space with lots of IO during installation) and still got no corruption.
  13. I would recommend getting a new router instead of trying to make it work with the ISP router. If it's just a router, chances are you can plug the new router into the same hole in the wall and it will work out of the box. If it has an integrated modem (or if it doesn't work out of the box), then you can just plug the new router into the ISP router. All you have to do next is disable Wifi on your ISP router (to reduce interference with your new router); most ISPs have an option for that. ISP routers are notorious for having security holes. For example, my ISP had to mass-recall their routers because of concerns over Chinese espionage. And then their replacement router had the admin password printed on the router itself (and hard-coded, i.e. it can't be changed). Yikes!
  14. I run qcow2 (uncompressed) on xfs with 6.8.0-rc4 and don't have this issue. Tested with both Mac and Windows installations. I wonder what the difference is between my VM and the ones that failed here.
  15. +1 for RDP. I was even able to do light gaming remotely via RDP.
  16. Are you connecting to the VM via RDP? If so, I think it's the WDDM bug on Windows 1903. See below for fix suggestion.
  17. You need to rewatch the SpaceInvaderOne videos, starting with the one about VM basics and progressing from there. (If the video you are watching goes straight to XML then you need to find an earlier video.) Also, turn off any ad-blocker which may potentially interfere with the GUI. XML mode is a button in the upper right corner when you edit a VM.

    PS: I realise this might sound condescending, but the Unraid learning curve is incredibly steep if you jump in midway. All the guides and help have to assume a minimum level of familiarity, and your question(s) suggest you are highly unfamiliar. That means you will benefit a lot from starting from the basics, and SIO's early videos are incredibly good at covering the basics, e.g. the Unraid GUI.
  18. Don't worry about the Forced Stop vs Stop thing. It's not related. While you are at it with BIOS, also try to boot Unraid in Legacy mode. That helps in some cases.
  19. There is an "XML View" button in the upper right corner.

    With regards to the infamous error 43: it's not that the GPU was turned off, as you said. It's that the Nvidia driver refuses to load once it detects you are in a VM environment (in the hope that it would force you to buy a more expensive Quadro). So do yourself a favour and do these first:
    (1) If you selected Hyper-V = "Yes" when building your VM template, then START A BRAND NEW TEMPLATE and choose Hyper-V = "No". (A rough sketch of what that looks like in the XML is below.)
    (2) Boot Unraid in Legacy mode.
    (3) If your motherboard BIOS allows you to pick a different slot as the primary GPU to boot with, pick a different slot from the one your 1030 is in (even if you don't have a 2nd GPU) and see if it boots. Without a 2nd GPU you won't see any boot prompts etc., but if you can access the server through the network a few minutes after boot, then it has booted successfully.

    Any help you receive here is predicated on (1) above, without which there is no hope for a card that already throws error 43. (2) and (3) are my own personal recommendations, as I have observed them to help in some cases. They are attempts to stop the to-be-passed-through GPU from initialising and thus make it less likely to detect that it is in a VM.
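    Purely as an illustration (not from the original post): with Hyper-V = "No", the <features> block of the Unraid VM XML ends up without a <hyperv> section, roughly like this (a sketch; your template may contain more):

      <features>
        <acpi/>
        <apic/>
        <!-- note: no <hyperv> block here; picking Hyper-V = "Yes" in the template adds one -->
      </features>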
  20. That data corruption is definitely a bug report to raise. Also, in my particular case, my VM would still boot if the vdisk format was wrong; it just wouldn't boot into Windows but dropped into the UEFI shell instead, with no error.
  21. You should do the fix (also known as the root port patch) for all Q35-based VMs (and only Q35) under v4.0; it's the <qemu:commandline> snippet quoted in item 24 below. You have a Gigabyte motherboard, so first check your BIOS to see if you can pick which slot is primary. (My Gigabyte mobo has that option so I reckon it's a Gigabyte thing.) Then you can use the 770 as primary without having to swap slots. With regards to x8 vs x16, I don't think your 770 / 620 will ever come close to even requiring x4, so there's no issue there. Your reset issue is card-based, not slot-based. Yes, Unraid will do it correctly as long as the card doesn't resist. I restart my 1070 VM all the time. I used to have a MacOS VM on the 710 and that also had no problem.
  22. +1 on what bastl asked, i.e. what format are you using for your vdisk: raw, vhd, qcow2? Under some conditions, the GUI may reconfigure the XML incorrectly for non-raw formats. (See the sketch below for where the format shows up in the XML.)
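    For illustration only (the file path and cache mode here are made-up examples), this is roughly where the vdisk format lives in the VM XML; the driver type has to match how the vdisk file was actually created:

      <disk type='file' device='disk'>
        <!-- type='raw' vs type='qcow2' must match the actual vdisk format -->
        <driver name='qemu' type='qcow2' cache='writeback'/>
        <source file='/mnt/user/domains/Windows10/vdisk1.img'/>
        <target dev='hdc' bus='virtio'/>
      </disk>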
  23. A few pointers:
    • Threadripper is quad-channel. You should have it in your budget to run 4 sticks of RAM. If having to pick between 4x16GB at standard speed vs 2x16GB at high speed for a similar price, always pick the former (unless the achievable high speed is close to double the standard speed).
    • You should wait to see whether 3rd-gen TR will be backwards compatible before making that assumption. Cuz it may not be.
    • When I was considering a mobo for my build, I settled on the Gigabyte X399 for one single feature: the ability to pick which PCIe slot is primary for Unraid to boot with. That means I can use the middle slow PCIe 2.0 slot for a cheapo GPU dedicated to Unraid and avoid having to deal with passing through the primary GPU (e.g. error code 43, reset issues etc.) or wasting a PCIe x16 slot.
    • The gotcha is that, for whatever reason (probably kernel related), ACS Override causes severe lag in my VMs on any Unraid version after 6.5.3. Hence, I had to reconfigure a few things to avoid needing ACS Override. I can't tell if the lag is Gigabyte-specific, X399-specific or X399 + 2990WX-specific.
    • In case you wonder, the non-ACS IOMMU grouping looks like this (same bullet = same group). Note: it's very likely that all TR motherboards will have similar if not exactly the same grouping.
      - SATA, USB 3.1, LAN, Wifi, 3rd PCIe slot (2.0), 5th PCIe slot (3.0 x8), bottom-right (2280) M.2 slot
      - 4th PCIe slot (x16)
      - 1st USB 3.0 controller
      - SATA (I'm guessing for the M.2 SATA), mobo sound card
      - Both 22110 M.2 slots, 2nd PCIe slot (x8)
      - 1st PCIe slot (x16)
      - 2nd USB 3.0 controller
  24. Firstly, you are using Q35 machine type v3.1 (presumably, since you are on stable 6.7.2). In that case, you need to add this bit of code just before </domain>:

      <qemu:commandline>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.speed=8'/>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.width=16'/>
      </qemu:commandline>

    Alternatively, update to the latest 6.8.0-rc and choose Q35-4.1. Note: that's not a fix for your issue; it should be done regardless so that your PCIe slot runs at the right speed.

    Now, I'm pretty sure the GT 610 has reset issues, so the GT 620 probably does too. So in your case, perhaps we need to think outside the box. Have you tried flipping the card slots, i.e. 770 in primary and 620 in secondary?

    Even if the above is successful, you are also likely to run into another problem: you will lose graphics on the GT 620 if the VM restarts (i.e. the reset issue manifesting in a different way). So you are probably going to need a new GPU anyway.
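    For reference (not part of the original post): if you go the 6.8.0-rc / Q35-4.1 route, the machine type shows up in the <os> section of the VM XML roughly like this (only the relevant line shown; loader/nvram lines omitted):

      <os>
        <type arch='x86_64' machine='pc-q35-4.1'>hvm</type>
      </os>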