dcoulson

Members
  • Posts: 22

Everything posted by dcoulson

  1. Now that we're getting native ZFS in Unraid, is there a roadmap for multipath support for ZFS pools? I understand multipath isn't supported in Unraid today and probably isn't desirable for use as part of the Unraid array, however it would be super helpful for folks running ZFS pools on SAS shelves with multiple controllers. Is there a mechanism for users to sponsor features like this?
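     For context, a minimal sketch of what this looks like on a general-purpose Linux box today, assuming the dm-multipath tooling is installed and using a placeholder pool name of 'tank' - not something Unraid currently exposes:

         # with multipath-tools installed and multipathd running,
         # each dual-ported SAS disk shows up once under /dev/mapper
         multipath -ll

         # build the pool on the multipath devices instead of the raw /dev/sdX paths
         zpool create tank raidz /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc
         zpool status tank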
  2. I've had this Unraid system forever, but have done a lot of hardware swapping over the years. The last hardware changes were over the holidays, and things ran stable for 4-6 weeks before breaking in the last 10-14 days.
     Current HW:
       • Gigabyte TRX40 board w/ 3970X CPU
       • 3070 GPU for the OS
       • 4090 GPU for VFIO
       • NVMe boot drive for the VM, passed via VFIO
     Currently, if I boot my Windows 11 VM with the 4090 attached to it, it will run fine until I put any load on the GPU, then it will crash in 3-5 minutes. Load can be just GPU acceleration for remote access from Parsec, running Kombustor, or running the Valley benchmark. Windows will reboot and Unraid will log "vfio-pci 0000:4a:00.0: vfio_bar_restore: reset recovery - restoring BARs". Sometimes the Unraid system will lock up shortly afterwards; other times it will keep on running.
     So far I have tried the following changes to Unraid/HW with no difference:
       • Enabled/disabled CSM
       • Enabled/disabled Above 4G Decoding and Resizable BAR
       • Removed the memory overclock/XMP
       • Deleted the VM and re-created it from scratch - both Q35 and i440fx
       • Fresh Windows 11 install w/ current NVIDIA drivers
       • Reseated the CPU/RAM/GPUs/etc.
     Booting the Windows 11 NVMe drive works fine and the GPU can run Kombustor for hours without any issues. I've also pulled the GPU and tried it in a different box, where it runs stable. I have a 'new to me' TRX40 board coming in a couple of days, so I'm hoping it's a weird HW issue, but I can't figure out what the heck is going on. Any ideas, or do I just need to keep replacing hardware until the problem goes away?
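     For anyone debugging something similar, these are the checks I'd run to confirm the 4090 is cleanly bound to vfio-pci and isn't sharing an IOMMU group with something else that resets along with it (PCI address taken from the log line above - adjust for your system):

         # driver binding - "Kernel driver in use" should say vfio-pci
         lspci -nnk -s 4a:00.0

         # list every device with its IOMMU group number
         for d in /sys/kernel/iommu_groups/*/devices/*; do
             g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
             echo "IOMMU group $g: $(basename "$d")"
         done | sort -V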
  3. I have a bunch of drives in a ZFS pool that I have flagged as 'passed through'. I added two SSDs to my system to integrate into the ZFS pool, however the 'passed through' option isn't available for them on the edit page. The SSDs have a different icon, so I'm not sure if UD identified them as something that can't be passed through? No idea what debug info is needed/available.
  4. I'm running 6.11.5 - I am trying to make sure my docker containers do not utilize the isolated cores that I want to dedicate to VMs. I realize everyone says that docker won't use isolated cores, but that doesn't appear to be my experience right now - for example, I am isolating cores 8-15, but I'm seeing containers use them (tdarr_node for example), and the containers themselves have no pinning configured at all. So it could be a bug, I suppose?
  5. Even though I have 8 of my cores isolated (8 cores, 16 threads), I am still seeing docker container processes use them - the only way around that seems to be to pin the docker containers to other cores. Unless there is another fix for it?
  6. Hi- I recently upgraded from a 24C/48T CPU to a 32C/64T CPU and now need to rearrange all my docker and VM pinning. I only have 10 VMs, so that part was super easy, but I have over 70 docker containers, and 99% of them use the exact same pinning config. Is there a way from the CLI or API to modify the pinning config for a container in Unraid, vs. having to wear my mouse out clicking on the CPU pinning settings page? Also, is there a way to set a 'default' pinning config for new containers that are created, or do I need to adjust it every time I add something?
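     The closest thing I've found to a bulk change from the CLI is plain docker rather than anything Unraid-specific - a rough sketch (note this only touches the running containers; Unraid re-applies whatever is in each container's template when the container is recreated, so the templates still need updating for the change to stick):

         # re-pin every running container to cores 0-7 and 16-23 (adjust to taste)
         for c in $(docker ps -q); do
             docker update --cpuset-cpus "0-7,16-23" "$c"
         done

         # confirm the new cpuset for one container (name is a placeholder)
         docker inspect --format '{{.HostConfig.CpusetCpus}}' my_container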
  7. I have a 24C/48T system with "isolcpus=6-11,30-35" set at boot time to isolate 6C/12T for a VM. I am still seeing docker containers running on these cores when I run "ps -eo pid,psr,comm", and I can see their CPU utilization on the dashboard with the VM shut down. Do I have to explicitly pin the containers to other cores/threads, or is this functionality broken? I don't see a way to set the cpuset globally for the docker cgroup, and since I have 60+ containers I was hoping not to have to manually set the cpuset on each one - that's a lot of clicking. Is there a better approach for this, or am I misunderstanding how isolcpus works?
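     For reference, this is roughly how I've been checking what is landing on the isolated cores (psr is the CPU the task last ran on; the 6-11 / 30-35 ranges match the isolcpus setting above):

         # list processes whose last-run CPU is one of the isolated cores
         ps -eo pid,psr,comm --no-headers | awk '($2>=6 && $2<=11) || ($2>=30 && $2<=35)'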
  8. What's the approach to add more app icons to this plugin? My Unraid box has both Shinobi and t-rex miner dockers utilizing my GPU - the GPU stats plugin obviously shows there are two apps using it, however there isn't a little icon like we have for Plex transcoding. It would be great to get a list together of the popular apps that use the GPU and get icons for them into the plugin.
  9. What GPU are you using and what algorithm for mining? Can you post your logs?
  10. Since updating to the current trex container I am no longer able to access the WebUI - I get a connection refused error. Based on the logs trex is running and it is listening, although I'm not sure why on 127.0.0.1 instead of 0.0.0.0:4067 - 0.0.0.0 is in the config file:
      20210216 16:17:42 ApiServer: HTTP server started on 127.0.0.1:4067
      20210216 16:17:42 ---------------------------------------------------
      20210216 16:17:42 For control navigate to: http://127.0.0.1:4067/trex
      20210216 16:17:42 ---------------------------------------------------
      20210216 16:17:42 ApiServer: Telnet server started on 127.0.0.1:4068
      Any ideas? It seems to be running just fine, but I can't access the WebUI.
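      Since the API is binding to 127.0.0.1 inside the container, the published port has nothing to reach, which would explain the connection refused. A couple of quick checks from the host (assumes the container is named trex and has ss/netstat and wget available - adjust to match your setup):

          # see what address the API actually bound to inside the container
          docker exec trex sh -c 'ss -tln 2>/dev/null || netstat -tln' | grep 406

          # the web UI should still answer on loopback from inside the container
          docker exec trex wget -qO- http://127.0.0.1:4067/trex | head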
  11. Replaced it with what, specifically? I have tried the openvpn-nextgen and openvpn-strong-nextgen files and can't get past the cipher issues.
  12. I found a couple of other posts that suggested the same. I was running 6.8.0-rc7 for a while, which has the 5.4 kernel, and also tried 6.9.0-beta with 5.5 - both had the same issue. I also did a BIOS update without any change.
  13. Did it complete successfully? I tried the beta and had the same issue. Trying with IOMMU disabled in the BIOS now...
  14. I don't have a separate HBA - All the drives are directly connected to the motherboard.
  15. 1950X TR
      Gigabyte X399 Aorus Xtreme
      3x 10TB connected to the motherboard SATA controller - 1 parity
      My Unraid box has been running for about a year without any issues. Last week it reported errors on all 3 drives - I rebooted and it had disk2 marked as disabled. I removed then re-added the drive to the array, however it would not complete the rebuild due to errors on all 3 drives again. Since then I have:
       • Updated the BIOS
       • Updated to 6.8.3 (also tried 6.9.0-beta)
       • Set iommu=pt in the kernel boot flags
       • Replaced all SATA cables
       • Replaced 2 of the 10TB drives with new 12TB drives and copied the parity disk to one of the 10TB drives (the copy was successful, but the SATA ports reset multiple times for both drives)
       • Moved the drives to different SATA ports on the motherboard
      Each rebuild runs anywhere from 2hrs to 20hrs before failing with identical failures across all drives. I'm running a rebuild now with IOMMU disabled in the BIOS. Next steps will be to replace the PSU and potentially the MB/CPU. Anything else I am missing here?
      unraid-diagnostics-20200405-0730.zip
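      For anyone else trying the same thing, the iommu=pt flag goes on the append line in /boot/syslinux/syslinux.cfg on the flash drive - something like the following (the other arguments shown are the stock defaults; yours may differ):

          # /boot/syslinux/syslinux.cfg - relevant section
          label Unraid OS
            menu default
            kernel /bzimage
            append iommu=pt initrd=/bzroot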
  16. I'm having similar issues on an X399 TR board - did the rebuild complete with the 6.9-beta release?
  17. Yep - I just updated to 'latest' and it's still broken.
  18. Change your container repository to binhex/arch-jackett:0.11.409-1-01
  19. So far so good - Updated about 3hrs ago. No more emails. Did we fix something or just suppress the output?
  20. unraid-diagnostics-20190527-1718.zip
  21. I just updated to 2019.05.26 and still have the same issue. Any way to get debug/verbose logs out of the script to see what specifically is generating it?
  22. Hi- Every hour I get an email from my Unraid box, which I determined was due to an hourly cron job:
      root@Unraid:~# /usr/local/emhttp/plugins/fix.common.problems/scripts/scan.php
      HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
      Any ideas on how to suppress this? Thanks!
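      In case anyone else hits this: the message looks like hdparm being pointed at a device that doesn't answer an ATA identify (a USB bridge, NVMe drive, etc.), so something like the loop below might help narrow down which device is triggering it (run manually, outside the cron job):

          # run an identify against each disk and see which one complains
          for d in /dev/sd[a-z]; do
              echo "== $d =="
              hdparm -i "$d" >/dev/null
          done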