CS01-HS

Everything posted by CS01-HS

  1. And you're sure they sleep with the container stopped? Maybe you have USE_HDDTEMP set to yes? If not, I don't know what could be causing it.
  2. Apple began deprecating AFP in 2013 and still hasn't removed it, I assume because they realize their SMB implementation is lacking. I wish unRAID would re-add AFP support, but admittedly I don't know how much work would be required to maintain it.
  3. Did you restart the container afterwards (and verify that commenting out the block persisted through the restart)? I can't imagine how it would keep the drives awake with no calls to them.
  4. It's probably the smartctl call in telegraf. Check your config:
     vi /mnt/user/appdata/Grafana-Unraid-Stack/telegraf/telegraf.conf
     Do you have nocheck = "standby" in inputs.smart?
     [[inputs.smart]]
     #  ## Optionally specify the path to the smartctl executable
       path = "/usr/sbin/smartctl"
     #
     #  ## On most platforms smartctl requires root access.
     #  ## Setting 'use_sudo' to true will make use of sudo to run smartctl.
     #  ## Sudo must be configured to allow the telegraf user to run smartctl
     #  ## without a password.
     #  # use_sudo = false
     #
     #  ## Skip checking disks in this power mode. Defaults to
     #  ## "standby" to not wake up disks that have stopped rotating.
     #  ## See --nocheck in the man pages for smartctl.
     #  ## smartctl version 5.41 and 5.42 have faulty detection of
     #  ## power mode and might require changing this value to
     #  ## "never" depending on your disks.
       nocheck = "standby"
     Otherwise, I don't know SAS, but I remember some forum discussions of SAS and spindown; maybe you have to customize the call. Worst case, you can comment out the whole block, which should resolve spindown but will disable SMART stats in Grafana. (A quick way to confirm standby behavior from the command line is sketched after this list.)
  5. Darn. Maybe this is the push I need to learn docker/container distribution.
  6. I love the Grafana Unraid Stack - clean and simple. Any plans to update it with a recent version of Grafana? They've added some handy features since v7.3.7: https://grafana.com/categories/release/
  7. First, thanks for this container, very handy. One suggestion: I was running batch conversions in HandBrake and couldn't figure out why my iGPU wasn't fully utilized. It turns out it was, but it was maxing out 3D render load (95%) while the reporting script (get_intel_gpu_status.sh) grabs video load (9%). So I tweaked the script to grab whichever load is highest:
     #!/bin/bash
     #This is so messy...
     #Beat intel_gpu_top into submission
     JSON=$(/usr/bin/timeout -k 3 3 /usr/bin/intel_gpu_top -J)
     VIDEO_UTIL=$(echo "$JSON"|grep "busy"|sort|tail -1|cut -d ":" -f2|cut -d "," -f1|cut -d " " -f2)
     #Spit out something telegraf can work with
     echo "[{\"time\": `date +%s`, \"intel_gpu_util\": "$VIDEO_UTIL"}]"
     #Exit cleanly
     exit 0
     I overwrite the container's version with the following Post Argument, where /utils is a new mapped path to the folder containing my tweaked version:
     && docker exec intel-gpu-telegraf sh -c '/usr/bin/cp -f /utils/get_intel_gpu_status.sh /opt/intel-gpu-telegraf/; chmod a+x /opt/intel-gpu-telegraf/get_intel_gpu_status.sh'
     (The full path to cp is necessary because cp is aliased to cp -i.)
     Now the display reflects full utilization.
  8. One advantage I noticed is that it exposed unraid-autostart, which I've added to my backup sets since reinstallation through Previous Apps didn't restore it (though it worked perfectly otherwise).
  9. I have appdata on a single XFS-formatted SSD. Recently a container would occasionally disappear on restart, necessitating a reinstall. I thought it might be docker image corruption, so I decided to recreate the image. In the process I saw three options for the docker root: BTRFS image, XFS image, and Directory. Directory looked interesting so I thought I'd try it. Have I chosen poorly? Are there benefits I can take advantage of?
  10. I don't know if either of these applies, but since RC2, VNC with Safari doesn't work for me.
  11. Sorry for the delay, just saw this. Actually I meant Duplicacy's full appdata directory. On unRAID that's typically: /mnt/cache/appdata/Duplicacy I run the Backup/Restore Appdata plugin weekly which backs up all my containers' appdata directories (and my flash drive) to the array, so for simple corruption I'd just restore from that. I'm talking about catastrophic failure, your server's struck by lightning or stolen, etc. I believe everything necessary to recreate a container is either on flash or in appdata. So I take those two backups, created by the backup plugin, and save them elsewhere – an offline flash drive, remote server, etc.
  12. Do you want versioned backup (I used Duplicacy's docker) or a simple copy, in which case a User Script with a few calls to rsync would do (see the sketch after this list)? Whichever route you go, as long as your backup drive is part of your unraid server, anything that damages your main drives (e.g. a power surge) will likely damage your backup drive too. Same goes for an encrypting virus. Really you're only protecting against accidental deletion, but that's better than nothing.
  13. Could be a freak occurrence but unraid suddenly stopped tracking reads/writes to my cache pool. It was being read from but according to the webgui and telegraf there was no activity. A reboot fixed it. Very strange.
  14. Strange bug since I updated to rc2: occasionally when I delete a file, say test.txt, what appears in RecycleBin is a folder named test.txt where the file should be, with the file test.txt inside it. I rearranged my cache recently, which involved manually moving files. I thought that might be the cause, so I uninstalled the plugin, deleted all .Recycle.Bin folders in /mnt/user/ shares, and reinstalled, but the problem persists. Anyone else seeing this?
  15. Isn't the easier procedure adding a second disk to the cache pool, letting it balance, then removing the old one? Though maybe that leaves room for bigger errors.
  16. Over SMB? It should be faster. I get about double that with a theoretically slower setup (odroid-HC2 over WebDAV). You probably want to set this up as a copy job (initialize Backblaze as copy-compatible and consider bit-identical); a rough CLI sketch is after this list. Check out the forum: https://forum.duplicacy.com One piece of advice: keep a copy of Duplicacy's appdata outside of CA AppData backups and Duplicacy backups. In case of catastrophic failure you'll need a working Duplicacy to restore, and you don't want anything preventing that, e.g. missing encryption keys. (Seems like this would apply to backup software generally.)
  17. Ha, you're right, that wasn't very clear. I'll rephrase: are the buttons circled in my picture present in a clean install, or did I change a setting (or maybe add a plugin) to get them?
  18. I'm almost sure I enabled this option (somehow) when I first installed unraid. I'd like to disable it but I can't find the setting. Am I misremembering and it isn't optional?
  19. On the off chance anyone else has a Silverstone CS01 here's my minimalist version. Light and dark.
  20. My CyberPower (685AVR) worked fine with the built-in management, with one exception: setting Turn off UPS after shutdown to Yes (which is necessary if you want the machine to boot automatically when power's restored) would seem to work, but then, as the server was booting, it would cut power, causing a dirty shutdown. Apparently there's an incompatibility with CyberPower's implementation; with NUT you can work around it (see my post linked above).
  21. Just a heads up: I saw repeated errors in the log file after updating to 2021.09.10:
      Sep 10 19:14:02 NAS crond[1762]: exit status 255 from user root /usr/local/emhttp/plugins/parity.check.tuning/parity.check.tuning.php "monitor" &>/dev/null
      Sep 10 19:21:01 NAS crond[1762]: exit status 255 from user root /usr/local/emhttp/plugins/parity.check.tuning/parity.check.tuning.php "monitor" &>/dev/null
      Sep 10 19:28:01 NAS crond[1762]: exit status 255 from user root /usr/local/emhttp/plugins/parity.check.tuning/parity.check.tuning.php "monitor" &>/dev/null
      Sep 10 19:35:01 NAS crond[1762]: exit status 255 from user root /usr/local/emhttp/plugins/parity.check.tuning/parity.check.tuning.php "monitor" &>/dev/null
      Sep 10 19:42:01 NAS crond[1762]: exit status 255 from user root /usr/local/emhttp/plugins/parity.check.tuning/parity.check.tuning.php "monitor" &>/dev/null
      Sep 10 19:49:01 NAS crond[1762]: exit status 255 from user root /usr/local/emhttp/plugins/parity.check.tuning/parity.check.tuning.php "monitor" &>/dev/null
      Sep 10 19:56:01 NAS crond[1762]: exit status 255 from user root /usr/local/emhttp/plugins/parity.check.tuning/parity.check.tuning.php "monitor" &>/dev/null
      Sep 10 20:00:01 NAS crond[1762]: exit status 255 from user root /usr/local/emhttp/plugins/parity.check.tuning/parity.check.tuning.php "monitor" &>/dev/null
      Running it manually, I got this error for every function defined in Legacy.php:
      # /usr/local/emhttp/plugins/parity.check.tuning/parity.check.tuning.php "monitor"
      <script>if (typeof _ != 'function') function _(t) {return t;}</script>
      Fatal error: Cannot redeclare parse_lang_file() (previously declared in /usr/local/emhttp/plugins/parity.check.tuning/Legacy.php:6) in /usr/local/emhttp/plugins/dynamix/include/Translations.php on line 46
      I fixed it (?) by removing the functions from Legacy.php.
      NOTE: This is with v6.10.0-rc1
  22. Click "Connect" at the top right of finder window and you should be able to log in. If that doesn't work then in finder hit [COMMAND + K] and enter the following (with appropriate substitutions) smb://<username>@<unraid IP address> If neither of those work, right click Finder in the dock while holding OPTION, select "Relaunch" and try them again.
  23. Rebuild DNDC does what you want for one container. Maybe it supports two, or you could install two instances?
  24. After deleting an empty pool and then recreating it, I was presented with a misleading warning about parity, which made me hesitate. I had to search the forum to confirm parity would not be affected. (I don't know if this is new to 6.10; this is the first time I've done it.)
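
(Sketch referenced in post 4.) A quick command-line sanity check that smartctl itself can be told not to wake a sleeping drive. This is only a sketch under assumptions: /dev/sdX is a placeholder for one of your drives, and per my reading of the smartctl man page an exit status with bit 1 set (i.e. 2) means the query was skipped because the drive was in a low-power state.

    #!/bin/bash
    # Ask for SMART attributes, but skip the query if the drive is in standby
    # so the check itself never spins the disk up. /dev/sdX is a placeholder.
    smartctl --nocheck=standby -A /dev/sdX
    echo "smartctl exit status: $?"   # 2 suggests the drive was asleep and left alone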
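
(Sketch referenced in post 12.) A minimal example of the simple-copy route as a User Script, assuming example share names (Documents, Photos) and an example backup mount at /mnt/disks/backup; adjust both for your setup.

    #!/bin/bash
    # Simple-copy backup for the User Scripts plugin (paths and shares are examples).
    DEST="/mnt/disks/backup"        # assumed Unassigned Devices mount for the backup drive
    SHARES=("Documents" "Photos")   # example shares to mirror

    for share in "${SHARES[@]}"; do
      # -a preserves attributes; --delete mirrors deletions, so this is a plain copy, not versioned backup
      rsync -a --delete "/mnt/user/${share}/" "${DEST}/${share}/"
    done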
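
(Sketch referenced in post 16.) Roughly what the copy-compatible Backblaze setup looks like with the Duplicacy CLI, as I understand it; the storage name, snapshot ID, and bucket below are placeholders, and the web UI exposes the same options.

    # Inside a repository whose "default" storage is already initialized:
    # add B2 as a second storage that is copy-compatible and bit-identical with "default".
    duplicacy add -copy default -bit-identical b2 my-snapshot-id b2://my-bucket

    # Copy existing revisions from the default storage to B2.
    duplicacy copy -from default -to b2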