boomam's Achievements
  1. Since the last rebuild of the cache, I've had my second pool corrupt too. These issues did not exist before and are highly coincidental with the script being implemented. I'll look into it properly at the weekend, but for now I've turned it off. Let me know if you think some diagnostic logs would help. On the upside, I finally found an excuse to convert from a Docker vdisk to a directory. 😛
  2. To clarify for people - you do not need multiple tunnels, nor a large config, to proxy several subdomains through an Argo tunnel. You can literally just have the config point at the IP/port of your proxy manager (NPM, SWAG, etc.) and add records for each subdomain in Cloudflare DNS as needed. The key with the current Argo version, however, is to turn TLS verification off in the config and set the SSL/TLS mode in Cloudflare to Full, otherwise there will be redirect issues.
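As a sketch of what that looks like in practice: a single cloudflared config along these lines covers every subdomain. The tunnel name, credentials path, proxy IP/port, and hostnames below are placeholders, not values from this thread.

```yaml
# Hypothetical cloudflared config.yml - one tunnel, one ingress rule
# for all subdomains, pointed at the local reverse proxy.
tunnel: home
credentials-file: /etc/cloudflared/home.json

ingress:
  # All subdomains go to the proxy manager, which routes by Host header.
  - hostname: "*.example.com"
    service: https://192.168.1.10:443
    originRequest:
      noTLSVerify: true   # turn TLS verification off, as noted above
  # Required catch-all for anything that doesn't match.
  - service: http_status:404
```

Each subdomain then just needs a DNS record in Cloudflare pointing at the tunnel.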
  3. Well that certainly saves me some research at the weekend. 😛
  4. I'd have to look into it closer, but if they've set it to that instead of using industry-standard AHCI commands, then it's a bit of a weird decision. We'll see, I guess. 😛
  5. I've not said that it does, but if it increases the chances of BTRFS corruption by allowing sleep, then as a by-product it should be listed as a note in the documentation, directly caused or not.

Actually, yes - Unraid should have warnings about this (and other issues) in their install FAQ for new users. There are a lot of gotchas with Unraid that, using this as an example, can cause either data risk or hardware failure. Even where Unraid is not necessarily the direct cause, the fact that it uses a component - in this case Docker - that has the bug should be noted, so users of the platform are correctly informed.

Using the same logic for your great diagnostic/workaround of the docker/loop2 issue: any perceived issues, small or large, should be noted so people can understand the risk. Otherwise many will put issues down to other causes, and there will be no consistent thread to tie together a root cause, because the shared knowledge of an inherent risk never accumulates. This isn't a comment on the intent of a given app/script/platform, nor the people creating it - to be clear, the work is appreciated - but noting down issues as they occur, and discounting them afterwards where warranted, ensures that issues are captured, analyzed, and prioritized for further remediation if needed. Dismissing them with 'it's just you/your hardware' when an issue never existed previously is not great, as it ignores potential side-effect issues until they grow much larger in scope. It's Dev/QA 101, especially in the open-source community. If it's not going to be added to the docs/notes, then since this isn't too far into the thread, others will see it with a quick scroll down, so it's arguably moot - they'll get the info if they read a little further.

RE: Drive spin-downs
The spin-down delay variable dictates when the AHCI command to spin down is sent to the drive. The command for this, I think, is the same between SSDs and HDDs, just acted on in different ways - and often differently depending on the SSD manufacturer and firmware. In this case, it's running on Crucial drives; from what I read a while ago, I think the command actually does give them a nudge in the sleep direction, so setting it to 'never' should help. It just warrants a little more testing to make sure it works as intended. Failing that, some modifications elsewhere should achieve the same result too.
  6. You literally just clarified the opposite of that by saying that the script affects the drives' sleep mode. Whilst your script doesn't directly affect that, it does dramatically increase the likelihood of it, and that warrants a warning/note in the documentation.
  7. Can you provide clarity on what we should be seeing from the usage command, please? You've given an example of what it shouldn't look like, but not what it should look like - that would be useful for everyone's understanding.
  8. SATA drives. And yes, BTRFS did mess up; luckily I was able to recover. No log errors though, other than being unable to mount. I'll set the drives to never sleep, which should work around it. For others, it could be worth updating the original post (here and on Reddit) to list that caveat and its workaround, otherwise this fix will become better known as the cache killer instead of the cache saver.
  9. I've had to revert this change as it breaks both NextCloud & Matomo, causing their config/log areas to become read-only. ## edit ## ...maybe. Just noticed both my cache drives have gone offline at the same time... investigating...
  10. Cool - just implemented now. Will monitor and see how it goes! 🙂
  11. Great work! Will this work if the docker.img is on a different drive? For myself, it's "/mnt/cache-b/docker/docker.img" instead of "/mnt/cache/docker/docker.img".
  12. Mapping /tmp from the host to wherever in the container places that mapping into RAM. That's what I did to fix the Ghost container, and it's common practice for media containers too.
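As a minimal sketch of that mapping in docker-compose form - the service name, image, and in-container path here are placeholder examples, not the actual Ghost setup from this thread:

```yaml
# Hypothetical compose fragment: bind a directory under the host's /tmp
# (RAM-backed tmpfs on many systems, including Unraid) into the container,
# so the container's temp writes stay in memory instead of hitting the cache.
services:
  ghost:
    image: ghost:latest
    volumes:
      - /tmp/ghost:/tmp   # host tmpfs path -> container /tmp
```

The equivalent on `docker run` is a `-v /tmp/ghost:/tmp` flag; Docker also has a native `--tmpfs` option that achieves the same effect without touching the host's /tmp.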
  13. Not sure that that's accurate - most documentation on the topic lists /tmp as being a default ramdisk for the system.
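Worth noting that whether /tmp is RAM-backed varies by distro (`findmnt /tmp` shows the current filesystem type). For systems where it isn't already, a conventional fstab entry along these lines mounts it as tmpfs - the size cap below is an arbitrary example value:

```
# /etc/fstab - mount /tmp as a RAM-backed tmpfs (size is an example)
tmpfs  /tmp  tmpfs  defaults,noatime,size=2G  0  0
```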
  14. Portainer is free.