Everything posted by testdasi

  1. LT has officially mentioned that the 5.x kernel won't be in 6.8.x due to this bug: So you will have to wait for 6.9.0 at the earliest (coming soon™). As to why there's limited interest, I can venture to speculate. Support for the tech itself is still limited. AFAIK, it takes a few hoops to get working on Ubuntu 18.04 (LTS) and needs at least 19.04 for full out-of-the-box support. AMD has been kicking Intel in the groin lately, so asking for an Intel-only feature probably won't get much buzz. Nvidia support is limited to Quadro RTX (and Tesla - who has the dough for that?), so again, probably not much buzz. TL;DR: a very niche bleeding-edge feature that LT will probably only include out of convenience.
  2. You can read the wiki for more info re TB vs TiB (quick illustration below). https://en.wikipedia.org/wiki/Binary_prefix#Consumer_confusion
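
To illustrate the gist of that wiki article: drive makers count in decimal units while the OS usually counts in binary units. A minimal sketch (the 4 TB figure is just an arbitrary example, not from this thread):

```python
# Why a "4 TB" drive shows up as ~3.64 TiB: manufacturers count in
# decimal (1 TB = 10^12 bytes), while the OS counts in binary (1 TiB = 2^40 bytes).
advertised_tb = 4                       # arbitrary example size
size_bytes = advertised_tb * 10**12     # bytes as advertised
size_tib = size_bytes / 2**40           # the same bytes expressed in TiB

print(f"{advertised_tb} TB = {size_bytes:,} bytes = {size_tib:.2f} TiB")
# -> 4 TB = 4,000,000,000,000 bytes = 3.64 TiB
```
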
  3. "Can't get them to work" is rather vague. Need more details. Both P4000 and GT710 are not known to be problematic so it's likely something you did wrong. Note though: 4xVMs with 4xGPUs pass-through, while theoretically possible with consumer hardware (e.g. TR4), is something probably reserved for enterprise hardware (e.g. Epyc). It would also require a decent level of skills. You are approaching Linus level of complexity and he has a team of writers to help him do stuff (with personal help from Wendell - you can't get that being a regular Joe). While you are at it: Tools -> Diagnostics -> attach zip file in your next post with details.
  4. That may be due to slightly higher latency from having to go through the chipset - considering you benched with 5x1GB files. At 3.x GB/s, each 1GB file takes less than 1/3 s, almost stepping a toe into random IO territory (rough arithmetic below). 😅 I typically bench CDM on NVMe with an 8GB file.
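
The back-of-the-envelope arithmetic behind that 1/3 s figure, assuming a ~3.5 GB/s sequential rate (an illustrative number, not your actual result):

```python
# Transfer time per pass = file size / sequential throughput.
# The 3.5 GB/s rate is only an assumed ballpark for a fast NVMe drive.
seq_speed_gb_s = 3.5

for file_gb in (1, 8):
    seconds = file_gb / seq_speed_gb_s
    print(f"{file_gb} GB at {seq_speed_gb_s} GB/s takes about {seconds:.2f} s per pass")
# A 1GB file finishes in under 0.3 s, which is why a larger test file
# (e.g. 8GB) gives a more representative sequential result.
```
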
  5. Error 127 means your GPU isn't resetting itself properly. The motherboard BIOS could be a problem, but I would say the most likely culprit is your vbios file. How did you obtain the vbios?
  6. HDDs go in the array, SSDs (SATA and/or NVMe, depending on use case) in the cache pool. Depending on the exact use case, any remaining SATA SSDs are typically mounted as Unassigned Devices. Remaining NVMe SSDs can be mounted as UD or passed through to a VM using the PCIe method (because an NVMe drive is essentially just a PCIe device, like a GPU).
  7. 1. USB "GPU" won't work 2. No need for VNC display. 3. No. Each GPU can only be passed through to 1 VM at any one time. 4. Support in guest OS is assumed before passing through is even considered. The point from itimpi is the HOST hardware has got to be supportive. For example, a motherboard can have multiple USB controllers, some are happy to pass through, some are not. The 5700 XT has reset issue that is still not fixed by AMD. Some NVMe controllers (e.g. Intel 660p) just can't be passed through due to Linux kernel conflicts. 5. USB devices can be attached to VM and should run at pretty much full speed. Passing through a USB CONTROLLER is more for hot plugging convenience.
  8. There is certainly theoretical merit to it, but I don't think it really has that big of an impact in real life. A higher load on the same 2 out of 16 cores over time may create a hot spot, but then the temperature should be evened out by the CPU heat spreader. 2/16 is only 12.5% load max; it should not cause temperatures to skyrocket to the extent that it damages the CPU.
  9. While you can run Unraid headless, you might still need a GPU to set things up, e.g. in the BIOS. I don't think IPMI supports that and I don't think the ASPEED VGA chip works either, due to lacking drivers. 8GB is enough for a pure NAS. Even 4GB is enough. 16GB is better if you want to do stuff like dockers (trust me, once you set the NAS up, you will be very itchy to do more with it).
  10. You need to enable the syslog server (in Settings) to save the log to your flash drive. That will allow you to obtain logs across reboots. Your post-reboot log is pretty useless, as any issue was logged pre-reboot.
  11. Completely forgot about the GUI! 🤣
  12. For a simple NAS, the Xeon 1230 V3 should be more than sufficient. Power consumption of the NAS will be driven mainly by your HDDs, not the CPU. Intel SpeedStep is pretty good at reducing power consumption to an acceptable level. You are unlikely to break even on a 10-20W difference in power consumption in any reasonable amount of time with more expensive hardware (rough break-even sketch below). Also note: add 20-30 Euros to cover a cheapo GPU, since I don't think the 1230 V3 has integrated graphics.
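
To put a rough number on that break-even point, a minimal sketch - the extra cost, wattage saving and electricity price below are all assumed figures for illustration, not from this thread:

```python
# Back-of-the-envelope payback time for spending more on lower-power hardware.
# All inputs below are assumptions, not figures from the thread.
extra_cost_eur = 150.0       # assumed extra spend on newer/more efficient hardware
power_saving_w = 15.0        # assumed average saving (middle of the 10-20W range)
price_per_kwh_eur = 0.30     # assumed electricity price

kwh_saved_per_year = power_saving_w / 1000 * 24 * 365
eur_saved_per_year = kwh_saved_per_year * price_per_kwh_eur
years_to_break_even = extra_cost_eur / eur_saved_per_year

print(f"~{eur_saved_per_year:.0f} EUR saved per year, "
      f"break-even after ~{years_to_break_even:.1f} years")
# With these assumptions: ~39 EUR/year, break-even after ~3.8 years.
```
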
  13. Did you attempt to PCIe-stub any device in the past? It could be that you used the file method and the controller is in the same IOMMU group as the device you intentionally stubbed (probably your GPU). Unraid will try to stub everything in the same IOMMU group, which affected your controller without you realising it (you can double-check the groups with the sketch below). If you used the syslinux method, then follow Jonnie's advice (click on Flash, then scroll down to the syslinux section).
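
If you want to see exactly which devices share a group, here's a minimal sketch that reads the same information the GUI shows, assuming the standard Linux sysfs layout (run it on the Unraid host itself):

```python
# List IOMMU groups and their member PCI devices from sysfs.
# Any device sharing a group with a stubbed device gets stubbed along with it.
import os

IOMMU_ROOT = "/sys/kernel/iommu_groups"   # standard kernel path; empty if IOMMU is off

for group in sorted(os.listdir(IOMMU_ROOT), key=int):
    devices = os.listdir(os.path.join(IOMMU_ROOT, group, "devices"))
    print(f"IOMMU group {group}: {', '.join(sorted(devices))}")
```
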
  14. It depends on how you set up your shares' "Cache" settings. You can SSH into the server and look under /mnt/cache to see what is currently being saved there; that will give you an idea of which share to fix (a quick tally script is sketched below). 399GB is a tiny bit on the excessive side unless you are running some high-capacity vdisks.
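
If you'd rather not eyeball it, something along these lines tallies the top-level folders on the cache drive - just a sketch, running du -sh /mnt/cache/* over SSH gets you roughly the same answer:

```python
# Sum up how much space each top-level folder on the cache drive uses,
# so you can see which share is eating the 399GB.
import os

CACHE = "/mnt/cache"

def folder_size(path):
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # skip files that vanish or can't be read
    return total

for entry in sorted(os.listdir(CACHE)):
    full = os.path.join(CACHE, entry)
    if os.path.isdir(full):
        print(f"{entry}: {folder_size(full) / 2**30:.1f} GiB")
```
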
  15. @cleibig PCIe 4.0 x8 has the same bandwidth as PCIe 3.0 x16 (rough numbers below). You would need 2 PCIe 4.0 M.2 SSDs running a pretty intense simultaneous workload to saturate that. None of the currently ubiquitous PCIe 3.0 M.2 drives will saturate that link.
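
Rough numbers behind that claim - the per-lane rates are the published PCIe figures after encoding overhead, while the ~3.5 GB/s drive speed is just an assumed ballpark:

```python
# Approximate usable bandwidth per lane (GB/s), after 128b/130b encoding overhead.
per_lane_gb_s = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}

print(f"PCIe 4.0 x8  = ~{per_lane_gb_s['PCIe 4.0'] * 8:.1f} GB/s")
print(f"PCIe 3.0 x16 = ~{per_lane_gb_s['PCIe 3.0'] * 16:.1f} GB/s")
# Both come out to roughly 15.8 GB/s, while a typical PCIe 3.0 M.2 drive
# tops out around 3.5 GB/s (assumed ballpark) - nowhere near the link limit.
```
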
  16. I wonder if we can add a feature to the GUI so that a user with root access can put up an obvious, loud, big alert that lasts at least 24 hours. It would normally be utterly useless, but it would provide a means to alert the user in scenarios such as this.
  17. There is nothing you can do if migrating drive-by-drive. If you are really OCD, you will have to manually move things around AFTER the migration.
  18. What do you mean by "cannot"? What errors do you get? I would suggest that a good starting point is to watch the SpaceInvaderOne guides on Youtube. If you follow the guides and still have issues, then come back with specific questions. Emphasis on "specific"! Also, with all queries about issues: Tools -> Diagnostics -> attach the zip file to your post.
  19. What are you struggling with, e.g. an issue, an error, etc.?
  20. Theoretically, you can unbind and rescan devices (roughly as sketched below). However, I have never managed to make it work in practice. It would still require Unraid to reload its interface (with display drivers) after a successful rescan, and I don't think Unraid has a hook for that.
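
For reference, the unbind/rescan dance I mean goes roughly like this - standard sysfs paths, run as root, with a made-up PCI address as a placeholder (and again, I've never had it bring a GPU back cleanly):

```python
# Remove a PCI device and ask the kernel to rescan the bus - the standard
# sysfs mechanism. The device address below is a placeholder, not from the thread.
import time

DEVICE = "0000:01:00.0"  # hypothetical GPU address - replace with your own

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

write(f"/sys/bus/pci/devices/{DEVICE}/remove", "1")  # detach the device
time.sleep(1)
write("/sys/bus/pci/rescan", "1")                    # rediscover everything on the bus
# Even when this succeeds, the host still needs to reload its own display
# stack to use the device again, which is the part Unraid has no hook for.
```
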
  21. Let's start with the good news. You have a Gigabyte mobo, which has a BIOS option for you to pick which PCIe slot is the "Initial Display Output", i.e. the one for the Unraid host. That saves you from needing to plug the GT520 into the fast 1st PCIe slot. IIRC the 520 is a single-slot GPU, so you may even be able to use the bottom PCIe slot for it. It also allows your main GPU to be the secondary GPU (despite being plugged into the fast 1st PCIe slot), which tends to make life a lot easier with PCIe passthrough.

      Now the not-so-good news. The RX 580 is notoriously unhappy about being passed through - I have seen many posts with issues on here. There is a "Solved" post which reported success with having the RX 580 as the secondary GPU + vfio-stubbing it, so you might still be ok (there's a quick driver-binding check sketched below). I would highly recommend watching the SpaceInvaderOne guides on Youtube, particularly the ones about passing through a GPU (I think he has 3 vids varying from basic to advanced tweaks; you should watch them all).

      Now with regards to storage. It is impossible to pass through the 660p via the PCIe method to a VM on a Linux host (e.g. Unraid) because the Linux kernel doesn't like its controller. So the best you can do is mount it as an Unassigned Device and put a vdisk on it (or pass it through via the ata-id method). The 660p's real-life performance varies from "as good as" to "worse than" a good SATA SSD. Keep that in mind if you are considering purchasing another SSD. QLC for SSDs is like SMR for HDDs, i.e. cheap but not cheerful. I personally prefer a vdisk over ata-id passthrough since I can somewhat benefit more from TRIM with a qcow2 vdisk + scsi device (see johnnie's guide in the VM FAQ of the VM forum).

      You do NOT need cache for the array, but you should still have at least one cache drive. The original intent for the cache pool to serve as a write cache came about (a) before reconstruct write aka turbo write was implemented and (b) when HDDs were still ultra slow. Nowadays, with turbo write on, you can write at pretty close to the max speed of your slowest HDD. The cache drive is now used for the docker image, libvirt, appdata etc. These are pretty important for a smooth experience with Unraid if you are using it beyond a simple NAS.

      A mirrored cache pool is only required if you have critical data to protect against drive failure. I used to run a pool but now go for a single-drive cache and instead focus on making sure my backup strategy is up to par. One of the main reasons is that I have found SSDs (that are not Intel-based) incredibly resilient to catastrophic failure. They tend to fail very gracefully: dead cells are gradually replaced with reserve cells and eventually you just lose capacity as the reserve runs out. Intel, in contrast, engages in the anti-consumer practice of locking your SSD in read-only mode once all reserves are used, under the pretext of data protection. While it does take a relatively long time to use up all the reserve, it is still anti-consumer, and the practice is even more concerning with QLC.
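
If you go the vfio-stub route for the RX 580, here's a minimal sketch to confirm which driver each GPU function is actually bound to after a reboot - the PCI addresses are placeholders, check your own under Tools -> System Devices:

```python
# Show which kernel driver currently owns each listed PCI device.
# A successfully stubbed GPU should report "vfio-pci" for both its
# video and audio functions. The addresses below are placeholders.
import os

DEVICES = ["0000:03:00.0", "0000:03:00.1"]  # hypothetical GPU + its HDMI audio

for dev in DEVICES:
    link = f"/sys/bus/pci/devices/{dev}/driver"
    driver = os.path.basename(os.readlink(link)) if os.path.islink(link) else "none"
    print(f"{dev}: {driver}")
```
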
  22. Also wondering if you can test it with an xfs drive to see if it's btrfs-specific? What you described sounds a bit like COW (Copy-on-Write) gone wild.
  23. Are you sure you didn't deactivate the cores in the BIOS? It's highly unusual for the OS not to report the full CPU core count like that. I believe there are other members on the forum with the same CPU and I don't think anyone has reported strange behaviour like yours. Perhaps also try booting in Safe Mode, in case some plugins are interfering with it.
  24. No. "NAS" designation is not required. When it comes to Unraid, you just buy the cheapest available HDD from a reputable dealer who is familiar with shipping HDD. e.g. Amazon is reputable but they have recently sent me a HDD in an unpadded cardboard envelope so they are certainly NOT familiar with shipping HDD. All those "NAS" and "Enterprise" blabla are mostly irrelevant. So your question is really just 5900rpm vs 7200rpm e.g. noise level, power consumption, speed and most importantly price.