Everything posted by testdasi

  1. You can use USB 3 sticks. There is nothing inherently wrong with them on paper. The speed advantage of USB 3 over USB 2 is virtually irrelevant to Unraid, with the only benefit happening at boot (less than 10s faster boot based on my own testing) and ONLY at boot.

     What we have noticed over the years is that USB 3 sticks seem to have a higher tendency to have issues; the anecdotal failure rate over time seems to be higher. It could very well be that USB 3 sticks are more common and thus people tend to report them failing more often. Another hypothesis is that USB 3 is faster but generates more heat, and heat is bad for electronics (e.g. causing data corruption, random disconnects etc.). That's why Jon recommended a big stick with a metal shell to help with heat dissipation. If you can't get a USB 2 stick then at least use the USB 2 port on your motherboard. Slower = less heat = better.

     Some motherboards / chipsets don't like booting from USB 3 ports. It could very well be an early adopters' issue from a long time ago and may not be an issue anymore. USB 3 also has more connection points than USB 2, and the more connections, the higher the chance of something failing. So just by probability alone, sticking (pun not intended) to a simpler design reduces the chance of failure. Anecdotally though, there are people running mini USB 3 sticks for years with no issue. It's a probability and luck thing.

     The only USB stick that you absolutely CANNOT use with Unraid is one without a GUID (most cheapo / unbranded sticks you get out there are likely to NOT have a GUID).
  2. A NIC problem is hard to diagnose because there can be many failure points, so you might want to remove the variables one by one.

     Do you have a spare router from a friend? See if you have the same issue using a different router. Do you have a different device (e.g. another PC or laptop)? Does it work with the same port on the switch? Do you have a different NIC (e.g. the LAN port on your motherboard or a different NIC card)? Maybe test that to see if perhaps the 10Gb NIC is not working properly. Can you plug the eth1 port into a non-10Gb port on the router? Does it work? Have you tried a different cable?

     To remove Unraid as a variable (e.g. perhaps a driver issue), you can create a Linux install from USB (stick with something popular like Ubuntu) to see if you have the same problem. Those are some basic tests you can do to try to isolate where the issue lies. At least you are not doing anything super funky.
  3. Oh dear, don't use Syncthing directly on the mount. It's a rather long explanation as to why it doesn't work but just trust me, it won't work.
  4. Also there's no need to be too concerned. These errors are harmless.
  5. Doesn't mean your USB stick can't be broken.
  6. Start a new template. The change from i440fx to Q35 is too complicated for the GUI to adjust the xml. Also, remember to add this bit of code at the bottom of your xml, just before </domain>. This makes your emulated PCIe run at x16 (instead of the default x1).

     <qemu:commandline>
       <qemu:arg value='-global'/>
       <qemu:arg value='pcie-root-port.speed=8'/>
       <qemu:arg value='-global'/>
       <qemu:arg value='pcie-root-port.width=16'/>
     </qemu:commandline>
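One thing to note: libvirt only honours a <qemu:commandline> block when the qemu XML namespace is declared on the root <domain> element. A minimal sketch of how the pieces fit together (the domain type and everything between the tags are placeholders for your own VM definition):

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- ... the rest of your existing VM definition stays as-is ... -->
  <qemu:commandline>
    <qemu:arg value='-global'/>
    <qemu:arg value='pcie-root-port.speed=8'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='pcie-root-port.width=16'/>
  </qemu:commandline>
</domain>
```

If the xmlns:qemu attribute is missing, libvirt will typically strip the qemu:commandline block when you save the xml.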
  7. The cache pool is separate from the array. To simplify things, usually people have the HDD in the array and the SSD in a cache pool (which may have 1 or multiple SSDs).
  8. Also forgot to mention. You can test out with a trial license. It doesn't take much effort to have an empty disk + a USB + install Plex docker + copy 5 different media files over and do a 5-stream test.
  9. That's why it's better to upgrade what you have instead of buying new. Case reviews are plentiful on the web, but unless you actually have the case, there's no way to know its quirks and nuances.
  10. The Pro is more expensive with better performance (7200rpm vs 5400rpm), but if you don't require that then there's no point getting it. That's why I specifically caveated it as long as you are after quiet and not performance. My old Seagate Archive (5900rpm) can handle 5 simultaneous 1080p streams easily, though I understand it does depend on the actual bitrates. You can use the cache for anything you want really, but you probably prefer prioritising it for things that actually require better performance. You CAN use the cache for what you said; it's just that IMO the array is probably sufficient.
  11. It is definitely NOT hopeless. That's why I recommended you test it out first (otherwise I would have told you to just upgrade).
  12. So what is your question? It's not too clear what you are after. If you are asking whether you should get either of the 2 cases you looked into vs the old Antec 1200, then I say stick with your Antec 1200 and change the fans. That would be cheaper, and there's really no reason to replace a case unless you actually have a problem with it.
  13. The latter i.e. it maxes out your 512MB limit.
  14. If I understand it correctly, you are asking about why you can't control the VM through NoMachine after changing the GPU. That's a NoMachine issue (likely with the NoMachine keyboard / mouse driver).
  15. Can't help without more details.
  16. Does downgrading to 6.6.7 resolve your lag? A lagging GUI has many causes.
  17. Perhaps install a fresh Ubuntu server on a different vdisk and see if you can mount the broken Ubuntu vdisk as a secondary disk?
  18. I assume you mean SSD "array" (SSD cache is a pool). SSD arrays have worked for quite some time; you just don't get TRIM support and, depending on your SSD, parity support. Jonnie has reported using his SSDs fine in the array with parity (except for 1 SSD that always gave him 1 or 2 parity errors IIRC). An SSD cache pool is just normal RAID, like FreeNAS (albeit I don't think RAID 5 works, but then I don't do RAID 5).
  19. The log you quoted is the FCP plugin telling you that there WAS an OOM error on your server. It's not the timestamp of when the error happened. You had OOM errors at these timestamps:

     Aug 12 09:44:31
     Aug 12 09:44:31
     Aug 12 13:18:52
     Aug 12 13:21:22
     Aug 12 13:24:13
     Aug 12 13:27:43
     Aug 12 13:33:33
     Aug 12 13:39:09
     Aug 12 13:46:12
     Aug 12 13:53:01
     Aug 12 14:01:44
     Aug 12 14:10:04
     Aug 12 14:18:17
     Aug 12 14:28:35
     Aug 12 14:39:17
     Aug 12 14:52:36

     It looks to be all by docker a42f60d82242a9af576f67954002ad97b0c1f2e70e603a58ac3e7cb02cfb2e83

     Go to the Docker tab and click the toggle next to "BASIC VIEW" to enable "ADVANCED VIEW". Under each docker you will see a Container ID. The Container ID only shows the first 12 characters, but that should be enough for you to match it to the problem ID above.
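To see how the matching works: the short Container ID shown in the Docker tab is simply the first 12 characters of the full 64-character ID from the log. A minimal sketch (matches_short_id is a hypothetical helper; the full ID is the one from the log above):

```python
# The 64-char container ID reported in the syslog OOM lines.
full_id = "a42f60d82242a9af576f67954002ad97b0c1f2e70e603a58ac3e7cb02cfb2e83"


def matches_short_id(full_container_id, short_container_id):
    """A docker 'short' ID is just a prefix (first 12 chars) of the full ID."""
    return full_container_id.startswith(short_container_id)


# What you'd compare against the Container ID column in ADVANCED VIEW.
short_id = full_id[:12]
print(short_id)                            # a42f60d82242
print(matches_short_id(full_id, short_id)) # True
```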
  20. I believe the general consensus is 2000 PassMark per 1080p stream, so your CPU at 9279 is in the gray area for 5 streams. Note that the consensus is for H.264 (aka AVC). For H.265 (aka HEVC), your CPU almost certainly won't be able to handle 3 streams. Given your CPU is in the gray area, I would say just set up a test server and see if it's ok for you and your content before going the upgrade path.
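The rule of thumb above makes for a quick back-of-envelope check. A minimal sketch (the 2000-per-stream figure is the community rule of thumb quoted above, not a guarantee; max_streams is a hypothetical helper):

```python
# Community rule of thumb: ~2000 PassMark per simultaneous 1080p H.264 stream.
PASSMARK_PER_STREAM = 2000


def max_streams(cpu_passmark, per_stream=PASSMARK_PER_STREAM):
    """Rough estimate of how many 1080p H.264 transcodes a CPU can sustain."""
    return cpu_passmark // per_stream


cpu_score = 9279            # the CPU discussed in this thread
print(max_streams(cpu_score))  # 4 -> borderline for the 5 streams asked about
```

H.265 needs considerably more CPU per stream, which is why the same score falls well short for HEVC content.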
  21. My 2p: If you are after silence (and not performance), use WD Red for the array (i.e. don't get the Pro). There's quite a cult following of WD Red for Unraid. Brand is practically irrelevant in recent years when it comes to failure rate, so I would say just get whichever one you are most comfortable with and within your budget. It's more important that you:

     • Buy from a reliable source (so you can claim warranty).
     • Preclear (i.e. stress test) each disk before adding it to the array - the "infant mortality" rate of HDDs is not insignificant.
     • If you intend to pass through an NVMe SSD to a VM using the PCIe method, make sure you double check that it can be passed through (e.g. Intel 660p and 760p don't work). The most ideal case is if there's another forum user who has had success passing through the same SSD you are after.
     • Do NOT get an NVMe SSD unless you intend to pass it through via the PCIe method. Any other method has a negative impact on performance (if not immediately then down the line). Most of the time the impact isn't that perceptible, but then if you aren't after maximum performance, you may as well get a SATA SSD for cheaper.
     • Don't get a QLC SSD (e.g. Samsung QVO, Intel 660p etc.) for VM boot and/or Unraid cache. QLC is meant for large-capacity storage (think SMR for HDD) and not for RW IO. Go for 3D TLC or "V-NAND" instead.
     • A Steam library is generally better in the array for games that you don't play and in the cache for games that you play often. I vaguely remember there's a guide post / video for it.
     • Have at least a 500GB SSD for your cache. The cache has evolved into being essential for full utilisation of Unraid. Having a small cache pool just causes headaches down the line.
  22. From my experience, the vbios only improves stability but not performance, but then I don't exactly have a top-range GPU like yours. The general consensus seems to consider the vbios a fix and not a necessity. I recommend that if you can do it, you should dump your own vbios and use it regardless of whether there's any issue. It's better to remove a variable than to constantly wonder whether the variable affects you or not. However, I do NOT recommend downloading the vbios from the web and editing it, unless you know exactly what you are doing (no, following SpaceInvaderOne's guide is not knowing exactly what you are doing). I have noticed people causing their own pass-through problems due to incorrect application of the SIO method.
  23. You can run an unassigned cache pool but I can't remember if btrfs supports RAID5 or not. Jonnie may know, he's the expert in btrfs stuff here.
  24. Hyper-threading is essentially just glorified smart queue management to increase the chance that once the (physical) core is done with its current task, another task is already primed up and available to work on. It does NOT mean both tasks are done in parallel.

     The automobile analogy may not be immediately clear to non-petrolheads, so maybe it's easier to imagine your CPU as 8 workers, each having 2 apprentices. Each apprentice collects the necessary materials and puts them in a basket for the worker to assemble. The assembling takes more time than the collection. A worker immediately works on assembling if the materials are readily collected but has to wait if the apprentice is still collecting material (or because there's nothing to collect). It's probably obvious why having 4 workers with 8 apprentices is slower than 8 workers with 8 apprentices.

     With regards to isolation, it does not mean 0% load all of the time (but rather 0% load most of the time). Back to the apprentice analogy: you isolate the odd apprentice just to deal with VM work, but the apprentice is dumb - he doesn't know if the materials handed to him are "VM" or not. So if the odd apprentice is handed some non-VM materials, he still hands them over to the worker, who then looks at them and says "yo odd apprentice, you aren't supposed to deal with this, hand it back to someone else". (Digression: I have to do this with my interns all the time.) Now if the worker is overloaded because the even apprentice keeps on giving him work to do, then the odd apprentice will have to wait for his turn, which will be recorded in the CPU usage measurement as "load".

     The above is the reason why, if you assign an isolated core to a docker, you will get 100% load on only that single core and nothing on the other cores. It's because the apprentice keeps on handing non-VM stuff to the worker, who keeps on having to tell the apprentice to hand it to someone else.

     Of course, the above assumes you have assigned and isolated cores correctly.
  25. You misunderstood physical vs logical cores. All of your cores 0 to 15 as you see them in Unraid are logical cores i.e. with hyper-threading. The physical cores are not numbered; they are inferred based on the logical core pairing e.g. 0 + 1 = 1 physical core.

     Imagine your CPU as 8 cars chained together in a train. They can be front wheel drive (FWD), rear wheel drive (RWD) or all wheel drive (AWD). Each car is a physical core. Each pair of wheels is a logical core.

     So in your low-performance config, it's the equivalent of having only 4 cars driving the train, but each car runs in AWD mode. So you deliver a lot of power per car but only half of the overall maximum theoretical power, since only half of the cars run.

     In the better config, all 8 cars run but in FWD mode. Naturally you get more power. However, the front wheels of the 1st car are also used for steering, so you lose a bit of power.

     In the best config, all 8 cars run in RWD mode. Same amount of power as the better config, but because the rear wheels are not used for steering, you use the maximum power.

     Of course, bare metal is all 8 cars running in AWD mode.
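The pairing described above can be sketched in code. A minimal illustration (assuming the common Unraid layout where logical cores pair up as 0+1, 2+3, etc.; on some CPUs the pairing differs, and the function names here are hypothetical):

```python
# Assumed layout: adjacent logical cores share one physical core,
# i.e. logical 0+1 = physical 0, logical 2+3 = physical 1, and so on.


def logical_pair(physical_core):
    """Return the two hyper-threaded logical cores of one physical core."""
    return (2 * physical_core, 2 * physical_core + 1)


def physical_of(logical_core):
    """Return which physical core a given logical core belongs to."""
    return logical_core // 2


# An 8-core / 16-thread CPU: physical cores 0-7, logical cores 0-15.
pairs = [logical_pair(p) for p in range(8)]
print(pairs[0])         # (0, 1) -> the two threads of physical core 0
print(physical_of(15))  # 7      -> logical core 15 lives on physical core 7
```

On a live system you can confirm the actual pairing by reading /sys/devices/system/cpu/cpu*/topology/thread_siblings_list, since not every CPU numbers its threads this way.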