mishmash-

Members
  • Content Count

    27
  • Joined

  • Last visited

Community Reputation

0 Neutral

About mishmash-

  • Rank
    Newbie


  1. My unraid box has a bit of a custom arrangement: the GPU has 2x oversized fans - one blows directly on the GPU, and the other on both the array drives and the GPU. The GPU controls one of the fans, and unraid controls the array fan. I am trying to set up a way to calculate a required PWM value for the array drives and for the GPU, decide which is higher, and apply it to the array fan. In my Windows VM I use nvidia-smi in a timeout -t 5 loop running on a schedule to constantly update a txt file on the cache drive which unraid can read. This part
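The "calculate both demands and take the higher" logic described in the post above can be sketched in a few lines of Python. The temperature-to-PWM curves, thresholds, and example temperatures below are illustrative assumptions, not the poster's actual configuration; in a real setup the result would be written to the appropriate hwmon pwm file.

```python
# Sketch of the fan-selection logic described above: compute a required PWM
# for the array drives and for the GPU, then apply whichever is higher.
# The curves and thresholds here are assumptions for illustration only.

def pwm_from_temp(temp_c, low=(30, 80), high=(60, 255)):
    """Linear ramp: at/below low[0] C return low[1] PWM, at/above high[0] C return high[1]."""
    t_lo, p_lo = low
    t_hi, p_hi = high
    if temp_c <= t_lo:
        return p_lo
    if temp_c >= t_hi:
        return p_hi
    frac = (temp_c - t_lo) / (t_hi - t_lo)
    return round(p_lo + frac * (p_hi - p_lo))

def array_fan_pwm(drive_temps, gpu_temp):
    """The shared fan must satisfy the hotter demand: drives or GPU."""
    drive_pwm = max(pwm_from_temp(t) for t in drive_temps)
    # GPU uses its own (assumed) curve: ramp between 40 C and 75 C.
    gpu_pwm = pwm_from_temp(gpu_temp, low=(40, 80), high=(75, 255))
    return max(drive_pwm, gpu_pwm)

# Example: cool drives but a warm GPU -> the GPU demand wins.
print(array_fan_pwm([34, 36, 38], 70))  # -> 230
```

In practice the GPU temperature would come from the txt file the Windows VM writes (via nvidia-smi), and the result would be echoed into the array fan's PWM control by a scheduled user script.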
  2. So everything works well with Authelia and password-based redirection. The only issue is that when I try to log in via the web GUI it just redirects me to my domain. I don't get the chance to set up other methods as per the tutorial. I have TOTP enabled in the config file and still nothing... I'm pulling my hair out trying to get TOTP to work! Any suggestions?
  3. I'm stuck on the first setup - mainly logging in via the local network to set up 2FA. I can get to the Authelia login page locally, but when I try to log in nothing happens. The docker log says "validation attempt made, credentials OK" but then nothing else. Anyone seen this issue before? Edit: all good - I changed the config to one factor, and it redirects properly after auth. Now to work out how to get two-factor to work...
  4. I found some replies here from users with Cloudflare domains - and they are unproxied. Is this less secure than other methods? I guess the only thing happening is exposing your public IP via a subdomain. Are there other ways to get WireGuard to work with a Cloudflare proxy or otherwise? Apologies for my ignorance - I'm not super well versed in the world of networking.
  5. Another grave-dig: I haven't done any consistent temperature or AIDA64 testing - that will come in a couple of months after work and holidays calm down - but I have a -100mV undervolt set just to see what happens. No crashes so far. There appears to be an overall temperature reduction of 2C, but it is hard to tell. So, further testing for later: boot unraid with all dockers stopped; find an AIDA64 tester docker; benchmark with a stable ambient temperature and a 0mV offset; begin applying offsets in BIOS (if possible, haven't checked), if not, back
  6. Thread gravedigging... Managed to get into an interface of some sort using CLI and python3 for undervolting. I haven't tested it yet though. I used this python script, pulled it with git (pip3 did not work for me) and then ran it with python3. https://github.com/georgewhewell/undervolt

     root@sorrentoshare:~/undervolt# python3 undervolt.py --read
     temperature target: -0 (100C)
     core: 0.0 mV
     gpu: 0.0 mV
     cache: 0.0 mV
     uncore: 0.0 mV
     analogio: 0.0 mV
     powerlimit: 105.0W (short: 0.00244140625s - enabled) / 84.0W (long: 8.0s - enabled)
  7. Disclaimer: I'm certainly not an expert on this. Support is experimental. TRIM is not available when using SSDs in the array. My basic understanding is that the TRIM command is passed to the SSD controller, and there are different methods of performing TRIM, some of which will invalidate parity. It's speculated that if it were ever implemented, the TRIM method would be DRZAT (deterministic read zeroes after TRIM), as this would in theory not break parity... or something. DRZAT is also needed for some HBA cards when using them with SSDs in cache with TRIM.
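The parity argument above can be illustrated with a toy model - a deliberate simplification, not how Unraid actually computes parity. If parity is the XOR of the data blocks, a TRIM whose subsequent reads deterministically return zeros behaves exactly like writing zeros, so parity can be updated consistently; a TRIM whose subsequent reads are undefined leaves parity unverifiable.

```python
# Toy model of why deterministic read-zeros-after-TRIM preserves parity.
# Simplified illustration only; not Unraid's actual parity implementation.

def xor_parity(blocks):
    """Parity stream: byte-wise XOR of all data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

blocks = [b"\x0f\x0f", b"\xf0\xf0", b"\xaa\xaa"]
parity = xor_parity(blocks)

# With DRZAT, post-TRIM reads of block 1 are guaranteed to be zeros,
# so the TRIM is equivalent to an ordinary write of zeros and parity
# can be updated the usual way: parity ^= old_data ^ new_data.
old = blocks[1]
blocks[1] = b"\x00\x00"
parity = bytes(p ^ o ^ n for p, o, n in zip(parity, old, blocks[1]))

# A parity check over the post-TRIM reads still passes.
print(xor_parity(blocks) == parity)  # True
```

If reads after TRIM were non-deterministic instead, there would be no known "new data" value to fold into parity, which is the speculated reason array TRIM would break a parity check.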
  8. Note that, partly due to legacy issues from upgrading from an HDD to an SSD array, I still use a cache drive SSD in BTRFS raid1. But I think I actually like having a cache SSD alongside an array SSD, as the cache can be trimmed and is constantly seeing tiny writes etc. I might upgrade it to an NVMe next time. In reality, though, I don't think it matters at all whether you have cache+array SSD or just a full SSD array with no cache.
  9. I have an array of 4x 1.92TB Samsung Enterprise SSDs (PM863a). See hdparm output below. They have DZAT. Note that as they are in the array they do not TRIM; I have minimal drive writes and rely on garbage collection. No parity errors, running for 4 months now. Maybe one day I'll play with ZFS on unraid, but that's for another time long in the future.

     /dev/sdb:
     ATA device, with non-removable media
         Model Number:       SAMSUNG MZ7LM1T9HMJP-00005
         Serial Number:      <redacted>
         Firmware Revision:  GXT5404Q
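Whether a drive advertises this can be checked from `hdparm -I` output (e.g. `hdparm -I /dev/sdb | grep -i trim`). A minimal parser sketch follows; the sample text is an assumption modeled on typical `hdparm -I` feature lines, not the poster's full listing.

```python
# Sketch: check hdparm -I output for TRIM plus deterministic read-zeros support.
# The sample below is an assumed excerpt modeled on typical hdparm -I output.

def supports_dzat(hdparm_output):
    """True if the drive advertises both TRIM and deterministic read-zeros after TRIM."""
    lines = [l.strip().lstrip("*").strip() for l in hdparm_output.splitlines()]
    has_trim = any(l.startswith("Data Set Management TRIM supported") for l in lines)
    has_zeros = any("Deterministic read ZEROs after TRIM" in l for l in lines)
    return has_trim and has_zeros

sample = """\
Commands/features:
   *    Data Set Management TRIM supported (limit 8 blocks)
   *    Deterministic read ZEROs after TRIM
"""
print(supports_dzat(sample))  # True
```

In real use the string would come from `subprocess.run(["hdparm", "-I", "/dev/sdb"], capture_output=True, text=True).stdout` on the unraid host.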
  10. Ah right, makes sense. I've got 16GB RAM, but I tracked down the issue to a docker that had gobbled up 14GB of RAM. Put a resource limit on it and now cache dirs works. Cheers!
  11. I'm using the cache-dirs plugin. Using the open files tool I can see that a "find" service is constantly running on one of my disks, stopping it from being able to spin down. How long does it normally take for cache dirs to finish this "find" process? It's been the better part of 3 weeks for me so far...
  12. Update for anyone experiencing similar issues in the future - one of the fixes above has helped (I'm guessing the IOMMU error related fix, which is the Marvell 9230 firmware). I've been running a VM with constant processing and disk load and multiple dockers with no issues. This is with 1x SSD cache in XFS and the other SSD as an unassigned device in XFS for the VM.
  13. Made some changes to the system settings and config; hopefully one of these would have fixed the problem:
     • Updated to Unraid 6.5.2 from 6.4.1 (cache drive became visible again)
     • Updated Marvell 9230 firmware as per this thread here and here
     • Updated ASUS BIOS to latest version (from 2014 to 2018!)
     • Didn't disable VT-d as per the Marvell thread (apparently disabling VT-d helps)
     • Moved VM disk image off cache and onto the unassigned-device SSD
     Will report back if problems persist further.
  14. I've been having issues with either my cache pool or my docker setup. One or both of them have corrupted twice in the last week; the first time I had a BTRFS cache pool set up, and now I have a single cache drive (XFS). The end result is what appears to be an unmountable cache drive and the inability to start the docker service (both times, with BTRFS and XFS). I can't seem to trace the cause of this problem - has anyone experienced this before, or is anyone able to assist me with diagnostics? The 2x SSDs are mounted on a StarTech PCI-e card, and have been giving me trouble
  15. I'm looking to convert my existing unraid box to host a gaming VM (just one, for myself). My daily-driver laptop is a lightweight unit, and I'm also weighing up whether it would be cheaper or better to go with an eGPU. My question is: will a gaming VM using an i7 4770 and a GTX 1060 mini with passthrough support modern-ish titles? How steep do the hardware requirements get when running games through a VM? Are there any general rules of thumb (e.g. need 0.5 GHz higher and one level above recommended requirements, etc.)?