testdasi
Members · Posts: 2,812 · Days Won: 17
Everything posted by testdasi

  1. Based on the post I cited, that (i.e. adding an old 2nd GPU for Unraid) was the solution. I suggest you message the original poster and double-check with him/her.

If the SATA ports are important to you then of course, stick with the Asus ROG Crosshair. Just note the compromise: your cheap 2nd GPU will have to be in the 1st PCIe slot (with the RX 580 in the 2nd PCIe slot).

You can boot the VM from a vdisk file saved on the SSD as part of the cache pool. That is fine.

For value options, you might want to consider something like the ADATA XPG SX8200 Pro. It is at a similar price point to the 660p but uses Micron 3D TLC NAND (and has a DRAM buffer) - that is actual value for money. Intel has the anti-consumer practice of locking the entire SSD in read-only mode after all the reserve has been used, under the pretext of data-loss protection. That doesn't go well with the lower endurance of QLC cells. Then factor in the lower performance of QLC and you are basically paying a brand premium for a low-end budget product (that sucks).

What's wrong with the Toshiba HDD? Or to put it differently, what makes you pick the WD Red NAS over the Toshiba X300 Performance? Toshiba is one of only three HDD manufacturers in the world (together with Seagate and WD), so it's not like they are a value brand.
  2. You need to check the spec sheet of the HDD / SSD model because they do have a large variation. Generally though:
HDD: sleeps (spin down) at about 0.25W, idles at about 3-5W, operates at 4-8W and uses up to 20W while spinning up.
SSD: sleeps at about 0.002W, idles at about 0.50W, operates at 2-6W.
So if your HDDs are in spin down most of the time then the power usage difference would be rather insignificant.
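To put those figures in perspective, here is a rough back-of-the-envelope comparison (a sketch only: the wattages are the mid-range estimates quoted above, and the electricity price and spin-down ratio are assumed example values):

```python
# Rough annual energy cost: an HDD idling 24/7 vs one spun down 90% of the time.
# Wattages are the mid-range figures quoted above; price per kWh is an
# assumed example value - plug in your own tariff.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.20  # assumed example rate, in your local currency

def annual_cost(watts, hours=HOURS_PER_YEAR, price=PRICE_PER_KWH):
    """Energy cost per year for a constant draw of `watts`."""
    return watts / 1000 * hours * price

idle_always = annual_cost(4.0)  # HDD idling at ~4W around the clock
mostly_asleep = 0.9 * annual_cost(0.25) + 0.1 * annual_cost(4.0)  # 90% spun down

print(f"always idle:   {idle_always:.2f} per year")
print(f"mostly asleep: {mostly_asleep:.2f} per year")
```

The gap per drive is a handful of currency units a year, which is why the difference only matters if your disks rarely spin down.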
  3. What browser did you use? Any ad blocker? I tested on Chrome and Firefox and have no problem at all.
  4. There are many topics on the forum with issues passing through the RX 580 as the primary / only GPU. In the only [Solved] topic I can find (quoted at the end of this post), the OP had to add a 2nd GPU. So I would expect you to have a hard time with your suggested config and use case. There is a success story with Gigabyte X570 + 3900X + Nvidia GTX 1070, so you might want to consider that config instead for an easier time.

In particular:

I generally recommend someone new to Unraid pick a Gigabyte motherboard because it has the Initial Display Output option in BIOS that allows you to pick any PCIe x16 slot as the first GPU (what Unraid boots with). This allows you, IF THE NEED ARISES, to plug a 2nd GPU into the slowest PCIe x16 slot (usually only running at x4) for Unraid to boot with, without wasting your fast 1st PCIe slot.

Mitigation for error code 43 with Nvidia GPUs is generally more reliable than mitigation for the reset issue with AMD GPUs. Usually all it takes is a 2nd GPU (a cheap single-slot card for about €30-40) for Unraid to boot with (hence my point above about the Gigabyte motherboard) to resolve error code 43. The 1070 is a bit more expensive than the RX 580 but the 1060 should be similarly priced.

Stay away from the Intel 660p (and QLC SSDs in general), especially if you want to use it in the cache pool. It's cheap but there's a reason why it's so cheap (it's rubbish - real-life performance is comparable to a good SATA SSD, and sometimes worse). You want 3D TLC (aka "V-NAND").

Any particular reason why you picked the WD Red NAS? Unraid isn't RAID so there isn't really a need to pick a "NAS" (or "Enterprise") model. Usually the cheapest you can find from a reputable dealer is good enough. Many on here even shuck cheap external HDDs.

The only generally available (and updated) patched Unraid is Unraid Nvidia, but that is only for those who want to do hardware transcoding in Plex using a supported Nvidia GPU (e.g. the Quadro P2000 is a popular choice). If you don't have that need (and from what you said, you don't) then there's no need to run any patched build of Unraid. A user posted a patched version that has the VEGA and NAVI reset patches included, but it has not been updated since November. The RX 580 is Polaris though, so I don't think that will help if you decide to go ahead with the RX 580.

Referenced posts:
[SOLVED] Can't pass my RX 580 through
Success story with Gigabyte X570 + 3900X + Nvidia GTX 1070
Custom kernel
  5. A few things:

Unraid will boot headless (if the motherboard allows it) but you will certainly need a display to change the BIOS config.

2GB RAM is not enough for what you are trying to do. The CPU itself might only be good enough if there's no transcoding. With transcoding, you need to rely on hardware acceleration (i.e. Intel Quick Sync), but the CPU doesn't support HEVC 10-bit (i.e. HDR) as far as I know.

Upgrading to the 4GB memory model puts it in the £500 range. At that price point, you might as well save a bit more to build a proper box (e.g. Node 304 case + low-power mobo + CPU) + an Nvidia P2000 and run Unraid Nvidia.
  6. There is a ClamAV docker that you can use to scan the array. Personally I prefer client-side AV software though.
  7. Updated from rc7 to stable. Was a bit apprehensive with the kernel downgrade but so far so good.
  8. As always: Tools -> Diagnostics -> attach zip file in your next post. Also, since it involves a VM, share your xml. If you copy-paste, please use the forum code functionality (the </> button next to the smiley button).

On a side note, please kindly use the forum code functionality when copy-pasting stuff from Unraid (e.g. your IOMMU section and CPU thread pairing section in your original post). Not everyone on this forum is young with good eyesight (myself included), so it gets painful to read when everything is exactly the same size, font and colour.
  9. I don't think the feature, as you described it, will ever be implemented by Unraid. It sounds simple in your head but it is an extremely complicated change:

(1) On-demand CPU core isolation needs to be implemented by Linux at the kernel level, e.g. creating a communication mechanism to lock and release core isolation (and that is assuming core isolation is something that can be done after boot).

(2) On-demand core usage change for dockers needs to be implemented by Docker (remember, currently if you pin an isolated core to a docker, it will cripple the docker).

(3) The IFTTT condition + GUI would be implemented by Limetech.

(1) and (2) are not controlled by LT (nor is it within LT's capabilities, to be honest). So unless and until they are implemented by those entities, nobody (LT included) will be able to implement what you are asking.
  10. Please attach the xml of your 2 VM's. If copy-paste, please use the forum code functionality (the </> button next to the smiley button) separately for each xml (i.e. split them out).
  11. Plex doesn't support AMD hardware acceleration - only Intel Quick Sync and Nvidia NVENC. Source: https://support.plex.tv/articles/115002178853-using-hardware-accelerated-streaming/
  12. Your questions suggest you are very confused about the array vs the cache pool. Let me clarify.

Parity is used for the array. Typically parity is a HDD (since the array typically only contains HDDs). An SSD is almost never used as parity.

The Unraid array is NOT RAID. Each data disk has an independent file system and there is no striping. You can use mixed-size HDDs in the array, as long as parity is the disk with the largest capacity (i.e. greater than or equal to the largest data disk). This is one of the main advantages of Unraid over a typical RAID NAS.

Adding disk(s) to the array, by itself, will never result in data loss (because there is no striping, there is zero impact on the data on the existing disks). This assumes there is no user error. If you, in panic, format your existing data disk or, in haste, put an existing data disk into the parity slot or, in ignorance, don't follow the correct procedure, then there isn't really a product out there that can guarantee protection against that.

The cache pool is typically where SSDs are used. SSDs can run single or in (btrfs) RAID. There is no minimum size requirement based on the array size - it depends more on what you use the cache pool for and your usage pattern.

With a single SSD, you can pick the xfs or btrfs file system, but if you pick xfs, you will not be able to add more disks to the cache pool. With the btrfs file system, you can add disks to the cache pool without losing data, as long as you follow the right procedure. It carries more risk than adding disks to the array because the cache pool runs in a RAID config. By default, the cache pool runs in RAID-1 mode with 2+ SSDs. While you can use mixed-size SSDs in the cache pool, your available storage from the pool depends on the exact RAID config.
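The parity-size rule above can be expressed as a quick sanity check (a sketch; the disk sizes are hypothetical example values, not from the original post):

```python
def parity_is_valid(parity_tb, data_disks_tb):
    """Unraid rule: parity must be >= the largest data disk in the array."""
    return parity_tb >= max(data_disks_tb)

# Mixed-size array is fine, as long as parity is the biggest disk.
print(parity_is_valid(8, [8, 4, 3, 2]))  # OK: parity matches the largest data disk
print(parity_is_valid(4, [8, 4, 3, 2]))  # not OK: an 8TB data disk exceeds 4TB parity
```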
  13. "Is it viable using the minix with the necessary docker containers to use it as a download station or is it going to struggle?"
Not enough RAM, based on my own experience. The dockers you are after use significantly more than 2GB RAM on my server. That is even before considering the low processing power of the CPU.

"Is the PCIe 2.0 x1 going to give me problems playing files?"
Marvell controller = forget it. PCIe 2.0 x1 = also forget it.

"Is it possible having the hot swap bay in my pc and booting up windows when I need it?"
Yes, but I don't recommend it (see below).

While it may be possible to make it work by turning your main computer into a hybrid PC + NAS in 1 box, it isn't something I would feel comfortable recommending to someone new to Unraid. This particularly takes into account your comment "I can’t afford to lose it as a working computer and I am not sold on having it on 24/7 in my bedroom". You may also need to buy additional hardware to make it work. In that case, you might as well save up and build a proper NAS server.
  14. Try these, hope it helps with the sluggishness.

Enable Global C-State Control in the BIOS (Enable, NOT Auto).

Create a new template with Hyper-V = Yes, Machine Type = Q35-4.0.1 and the rest of the config the same. Untick Start VM and Save. Then edit in xml mode.

Change the <hyperv> ... </hyperv> block of code to this:

<hyperv>
  <relaxed state='on'/>
  <vapic state='on'/>
  <spinlocks state='on' retries='8191'/>
  <vpindex state='on'/>
  <synic state='on'/>
  <stimer state='on'/>
  <reset state='on'/>
  <vendor_id state='on' value='0123456789ab'/>
  <frequencies state='on'/>
</hyperv>

Add this bit above </features>:

<kvm>
  <hidden state='on'/>
</kvm>
<ioapic driver='kvm'/>

Change the <clock> ... </clock> block of code to this:

<clock offset='localtime'>
  <timer name='hypervclock' present='yes'/>
  <timer name='hpet' present='yes'/>
</clock>

You will also need to watch SpaceInvaderOne's vid on lstopo so you assign the right cores to the right PCIe slot. Always leave core 0 free.
  15. Also, check out this recent post about more realistic expectation of what can be achieved over 10GbE.
  16. A typical SATA SSD can reach 550MB/s (or so) but only sequentially, on a barebone system, during a benchmark. 400MB/s is a more realistic real-life estimate (again, sequential). For random IO, network latency is the main bottleneck.

Your read speed will not get worse over time (in the sense of with vs without trim).

What is your use case? As I said, most home uses tend to be read-heavy. You seem to be heavily interested in theoretically maxing out your 10GbE LAN, but that is rather misguided in my opinion, especially as you mentioned "point-to-point 10GbE between desktop and unraid server". For maximum speed, you want your SSD to be in your desktop, with the Unraid server as a pure backup of the desktop.

As someone who has seen and personally experienced data loss, I don't recommend RAID-0 as a matter of principle, regardless of backup strategy. Period.
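To illustrate why latency, not link speed, dominates random IO over the network, here is a simplified calculation (a sketch: the block size and round-trip time are assumed example values, and it deliberately ignores queueing and parallel requests):

```python
# Effective throughput of small random reads over a network is bounded by
# round-trip latency, not by the 10GbE line rate.
# Assumed example numbers: 4KiB reads, 0.2ms LAN round trip, one request
# outstanding at a time (no queue depth).
BLOCK_SIZE = 4096      # bytes per random read
RTT_SECONDS = 0.0002   # 0.2 ms round trip (assumed)

throughput_mb_s = BLOCK_SIZE / RTT_SECONDS / 1_000_000
print(f"~{throughput_mb_s:.1f} MB/s")  # ~20.5 MB/s
```

That is nowhere near 10GbE's ~1250 MB/s theoretical line rate, which is why random workloads barely benefit from the faster link.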
  17. You need to clarify what you meant by "couldn't get it to boot". It's ambiguous. Does the VM errors out when you click Start? Does it show the Tianocore screen but get stuck there? Does it boot into a command line interface? Does it get in a boot loop? You can change machine type with no issue (but you have to create a new template - the GUI can't handle changing machine type on an existing template). The exception to that is if you were on SeaBIOS (default for typical guide with i440fx) and switching to OVMF (required for Q35). From my experience, that won't work out of the box. Also make sure your vdisk settings is of the right format (check your xml), especially if you were using qcow2 in the old template.
  18. Besides what johnnie already mentioned, you can opt for a very large HDD (compared to your SSD size). That way when you are writing, you would be using the fastest section of the HDD, which (from my experience) is near 200 MB/s. So your array will read at perhaps 400MB/s and write at about half of that (assuming turbo write is on - and it should be turned on for an SSD-based array). Pretty decent, I would say. The impact from the lack of trim is a tiny bit overblown: most home uses tend to be read-heavy, which isn't affected by the lack of trim.

Also note that having the SSDs in the cache = RAID-1. It's fine now with 2x2TB, but remember RAID-1 will only protect you against 1 drive failure, even with 2x2TB + 2x4TB (same situation with RAID-10). Would you then still be happy sacrificing 6TB of available storage for exactly the same benefit?

TL;DR: with the plan you mentioned, you should put the SSDs in the array.
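The RAID-1 trade-off above can be checked with a quick calculation (a sketch; the usable-space formula below is the commonly cited approximation for btrfs RAID-1, which keeps two copies of every chunk):

```python
def btrfs_raid1_usable(disks_tb):
    """Approximate usable space of a btrfs RAID-1 pool (two copies of every chunk)."""
    total = sum(disks_tb)
    largest = max(disks_tb)
    # If one disk dwarfs the rest, its extra space has nowhere to be mirrored.
    return min(total / 2, total - largest)

pool = [2, 2, 4, 4]  # 2x2TB + 2x4TB, as in the example above
usable = btrfs_raid1_usable(pool)
print(f"usable: {usable}TB, sacrificed: {sum(pool) - usable}TB")  # 6TB each way
```

So the 12TB of raw SSD would yield only 6TB usable, while still tolerating just a single drive failure.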
  19. So the key differences are:
(1) Yours is i440fx SeaBIOS and mine is Q35 OVMF.
(2) You don't use <numatune> to fix RAM allocation across host NUMA nodes.
(3) You have <timer name='hpet' present='no'/> and mine is <timer name='hpet' present='yes'/>.
(4) You don't spread load evenly across CCX and you also use core 0.
Not sure which one(s) caused the diff in our test results though. My hunch is probably (2) or (4).
  20. I think you are confused. /mnt/user is where the Unraid shares are mounted, i.e. what is actually saved on your storage drives. (Note: you shouldn't be using /mnt/user0 unless you know what you are doing - it just adds unnecessary complication.)

The "/media" map you are referring to sounds like what you map from within a docker. Those paths are independent across dockers, i.e. Radarr / Sonarr doesn't see what you map /media to for Plex (and vice versa) - that's the whole point of dockers.

If you somehow managed to map a /media in the Unraid file structure itself (e.g. a symlink from the console as root) then it will certainly disappear after a reboot, because all folders except /mnt and /boot are stored in RAM.
  21. There isn't really a progress checker. The issue lies with AMD and the fix is in the kernel, so whenever AMD fixes it and/or someone fixes the kernel and/or someone has a patch available, then we'll see how things progress. Given how long the issue has been around, I am not optimistic.

Nvidia isn't pain-free either. Passing through a primary / only Nvidia GPU (GTX) is likely to give you error 43. Having a second GPU for Unraid to boot with is one of the potential solutions (and even so it is not a guarantee that you won't get error 43).

With regards to the U.2: the Asus manual is very ambiguous, because the spec in the same manual says "U.2 connector (supports U.2 NVMe device)". Given the manual says "such as" and "or", it can be interpreted as just 2 examples of the Mini SAS connector, and the spec clearly says it only supports 1 of the 2 examples. I don't have the board so of course take it with a grain of salt. However, I am comfortable that my conclusion is correct, since even the Anandtech review specifically mentions only U.2 NVMe, with no mention of an additional 4 SATA ports.

1. Make sure you test your NIC passthrough with Unraid before assuming it will work. I have seen several posts reporting problems with Pfsense VMs lately on the forum.
2. The pattern I have seen with Ryzen is that there is usually at least 1 USB 3.0 controller that can be passed through (2 controllers for Threadripper). Based on this post on the forum, I don't see why the 06:00 device can't be passed through - it is in its own IOMMU group. You should always try the motherboard controller first. Only buy what you need if the motherboard controller doesn't work.
3. Expect strain on your relationship. Girls who game are already rare. A girl who games and stays nice when she wants to game while you are also gaming and can't give her the PC back is on the endangered list.
4. You can pass through the NVMe (as a PCIe device) to the VM as its main storage. There is no need for a vdisk in that case. For the cache, you don't need NVMe performance. A SATA SSD is more than good enough for docker uses.
  22. Well, I can't speak for the Pfsense driver. However, I have noticed more reports of issues with pfsense VMs on here lately, so I would assume it has something to do with 6.8.0. If 6.8.0 (stable) doesn't work for you, you can try a few things:

Wait for 6.9.0-rc1 (coming out imminently) - it has the 5.x kernel, which has better hardware support in general.
Create a new template and use the Q35-4.0.1 machine type.
Downgrade to an older version, e.g. 6.7.2.

Note: the Pfsense driver has nothing to do with Unraid. Unraid can have a driver for the card, but if Pfsense doesn't, or the VM itself doesn't quite work, then it won't work. You need to first check with Pfsense that your card is supported.
  23. It's like the "Tools -> Diagnostics -> attach zip file" that I just copy-paste all the time. 😅
  24. The BIOS issue that I remember had to do with an X470 update to support next-gen Ryzen that broke things. I don't think it is related to the X570 chipset. Nevertheless, it is still a good idea to pre-download all the BIOS's and keep them on a handy USB stick in case you need to downgrade.

Your path with the 5700 XT will be fraught with difficulties (if it is even possible to pass through Navi right now). The Navi reset patch proved unreliable and was removed from 6.8.0-rc, so the issue is still there.

Your mobo only has 4 SATA ports. The U.2 is for a PCIe NVMe SSD and is not a SAS breakout. The maximum available SATA on an X570 motherboard without a separate controller is 6.

Now questions for you:
Why do you need a separate quad-gigabit card? Are you using your server as a router too?
Why do you need a separate USB-C card? What's wrong with your motherboard controller?
You had 2 gaming desktops but replaced them with 1 gaming desktop? Do you really need 2 NVMe drives?
Do you actually need all that SATA for the array? Generally with HDDs, it is better to have fewer large-capacity drives.

Just use 1 SATA port for the cache, i.e. a SATA SSD, and your problem with PCIe lanes is resolved. From personal experience, an NVMe cache doesn't offer any perceivable speed difference - it's more a "I have it so I use it".

Also, in my opinion, the first choice of motherboard for Unraid beginners should be Gigabyte due to one single feature: "Initial Display Output". The Gigabyte BIOS allows you to pick any of the x16 slots (including the bottom x16 slot that only runs at x4 speed - check the owner's manual pdf from the Gigabyte website to confirm) as the initial display, i.e. what Unraid boots with. That gives you flexibility in case you actually need a 2nd cheapo GPU for Unraid (e.g. when you have problems passing through the primary GPU). Otherwise, you will have to waste a fast PCIe slot - of course, if you are running Unraid Nvidia then it doesn't matter. All the motherboard brands are more or less similar in terms of features and gimmicks, so why not consider something that MAY make your life easier?