CS01-HS

Everything posted by CS01-HS

  1. It runs once and exits. I run it on a different system. On unraid you'd add it as a User Script and set it to run weekly or daily. No, it doesn't. For that you'd use the commands I mentioned above, either a notice or an error:
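     For reference, a rough sketch of what those calls could look like, assuming the standard unraid notify script (the event/subject/description text here is made up; -i sets the importance):

       # informational notice
       /usr/local/emhttp/webGui/scripts/notify -e "My Script" -s "Run complete" -d "Weekly run finished" -i "normal"

       # error / alert
       /usr/local/emhttp/webGui/scripts/notify -e "My Script" -s "Run failed" -d "Check the log" -i "alert"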
  2. Maybe there's a better solution, but this is what I use for Emby, which exhibits the same behavior:
  3. I have emby transcode to RAM (/dev/shm/) which works well except for garbage collection – temporary transcode files accumulate. To solve that I wrote a script (run by the container) to delete temporary files and a user-script to launch it when the container's restarted or if it's not running for any other reason.

     Assumptions:
       • /transcode is the container's path to /dev/shm/ (or wherever you're transcoding to)
       • /transcode/transcoding-temp is the container's path to the directory holding temp transcoding files (emby creates this subdirectory)
       • /system-share/transcoding-temp-fix.sh is the container's path to the following script (make the script executable)

     transcoding-temp-fix.sh

       #!/bin/sh
       TRANSCODE_DIR="/transcode/transcoding-temp"
       # Delete old files when used space is above this %
       PERCENT_LIMIT=50
       # Delete this many files at a time
       BATCH_SIZE=10

       if [ -d "${TRANSCODE_DIR}" ]; then
         percent_full=$(df "${TRANSCODE_DIR}" | awk '{print $5}' | tail -1 | tr -dc '0-9')
         printf "Directory size: \t %3s%%\n" ${percent_full}
         printf "Directory limit:\t %3s%%\n" ${PERCENT_LIMIT}
         echo ""
         while [ $percent_full -gt $PERCENT_LIMIT ]; do
           if [ $(find ${TRANSCODE_DIR} -type f -name "*.ts" | wc -l) -gt 0 ]; then
             echo "(${percent_full}%) exceeds limit (${PERCENT_LIMIT}%), deleting oldest (${BATCH_SIZE}) files"
             find ${TRANSCODE_DIR} -type f -name "*.ts" -exec ls -1t "{}" + | tail -${BATCH_SIZE} | xargs rm
           else
             echo "*WARNING* (${percent_full}%) exceeds limit (${PERCENT_LIMIT}%) but files are not transcoding fragments"
             exit 1
           fi
           percent_full=$(df "${TRANSCODE_DIR}" | awk '{print $5}' | tail -1 | tr -dc '0-9')
         done
       else
         echo "${TRANSCODE_DIR} (TRANSCODE_DIR): directory doesn't exist"
       fi

     Now the user script to launch it, set to run every 10 minutes (*/10 * * * *).
     NOTE: Update EmbyServer name and system-share (if necessary) to match your system.

       #!/bin/bash
       #arrayStarted=true
       #clearLog=true

       # Verify EmbyServer's running
       running=$(docker container ls | grep EmbyServer | wc -l)
       if [ "${running}" != "0" ]; then
         # verify watch command that calls clearing script is running
         watch_running=$(docker exec -i EmbyServer ps | grep 'watch ' | wc -l)
         # make sure the detection command ran properly otherwise
         # we might end up running multiple instances of the script
         if [ $? -eq 0 ]; then
           if [ "${watch_running}" == "0" ]; then
             echo "Clearing script is not running. Re-starting..."
             docker exec EmbyServer sh -c 'watch -n30 "/system-share/transcoding-temp-fix.sh 2>&1" > /transcode/transcoding-temp-fix.log &'
           fi
         else
           echo "ERROR: Command to detect script run status failed"
           /usr/local/emhttp/webGui/scripts/notify -e "emby-ClearTranscodingTmp" -s "Command to detect script status failed" -d "" -i "alert"
         fi
       fi

     Monitor the script's activity:

       tail -f /dev/shm/transcoding-temp-fix.log

     Sample output:

       Every 30.0s: /system-share/transcoding-temp-fix.sh 2>&1        2022-10-10 14:45:19

       Directory size:    5%
       Directory limit:  50%

     NOTES:
       • Script can probably be tweaked to work for Plex
       • If a better solution exists let me know, this was quick and dirty.
  4. I think there's a minor bug in the abort logic. With 2 scripts running in the background, myTest and myTestAlternate, aborting myTest aborts both (even though the display shows the latter as still running). I figure the pattern-matching catches both. Not a big deal but FYI. I was trying to figure out why a script I wrote wasn't working; turns out it wasn't running because of the above.
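     A guess at what's going on, assuming the plugin aborts by matching against the full command line (the script names are just the ones above):

       # pgrep -af shows what a full-command-line match sees; the pattern
       # "myTest" lists both scripts because it's also a prefix of
       # "myTestAlternate"
       pgrep -af "myTest"

       # so an abort done with something like this would kill both
       pkill -f "myTest"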
  5. Okay, but keep in mind that if it ever fills up (100%), especially if you're running dockers and VMs, it'll cause all kinds of problems. So make sure that never happens.
  6. Could be your cache went from 30 to 70 between mover runs, triggering "move all."
  7. Huh, seems to work for me. Are you sure you have this: [screenshot] set higher than this: [screenshot] and mover set to run frequently enough that it triggers at the lower %?
  8. You can minimize it by:
       • caching .DS_Store (which you want to do anyway to avoid waking parity)
       • excluding each share from spotlight
       • disabling calculate all sizes and show icon preview in folders and subfolders (probably easiest to set it as default then clear out .DS_Stores; one-liner for that below)
     And after all that Finder will still occasionally wake the drives. I decided it wasn't worth it.
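     If it helps, clearing out existing .DS_Stores can be done with something like this (the mount point is just an example; adjust to wherever the share is mounted):

       # remove Finder metadata files from a mounted share
       find "/Volumes/ShareName" -type f -name ".DS_Store" -delete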
  9. Huh, I didn't realize the built-in apcupsd was compatible with a NUT master. I have a similar setup and use the NUT plugin instead. (Detailed description of my setup here.)
  10. Thanks, I should have searched. Looks like it goes all the way back to 6.8.3. I triggered it over an SMB connection. This particular share doesn't have NFS enabled, unless you meant system-wide. I can imagine SMB bugs triggering the underlying fuse "bug" (which is marked as won't fix) so I'll wait for a version with SMB fixes before digging deeper. I think in my particular case a stale directory listing resulted in the attempted move of a non-existent file.
  11. The SMB bugs? Sure, but it shouldn't break shfs.
  12. I was moving a file from one folder to another within a share (on Mac) when an error occurred and the share disconnected. Checking the terminal I saw /mnt/user/ was inaccessible. Sorry I don't have a full syslog or diagnostics to share, but this (coincident with the SMB disconnection) seemed relevant:

        Dec 4 09:05:21 NAS shfs: shfs: ../lib/fuse.c:1451: unlink_node: Assertion `node->nlookup > 1' failed.
  13. And you're sure they sleep with the container stopped? Maybe you have USE_HDDTEMP set to yes? If not I don't know what could be causing it.
  14. Apple began deprecating AFP in 2013 and still hasn't removed it, I assume because they realize their SMB implementation's lacking. I wish unRAID would re-add AFP support, but admittedly I don't know how much work's required to maintain it.
  15. Did you restart the container afterwards (and verify that commenting out the block persisted through the restart)? With no calls to the drives, I can't imagine how it would keep them awake.
  16. It's probably the smartctl call in telegraf. Check your config:

        vi /mnt/user/appdata/Grafana-Unraid-Stack/telegraf/telegraf.conf

      Do you have nocheck = "standby" in inputs.smart?

        [[inputs.smart]]
        #  ## Optionally specify the path to the smartctl executable
          path = "/usr/sbin/smartctl"
        #
        #  ## On most platforms smartctl requires root access.
        #  ## Setting 'use_sudo' to true will make use of sudo to run smartctl.
        #  ## Sudo must be configured to allow the telegraf user to run smartctl
        #  ## without a password.
        #  # use_sudo = false
        #
        #  ## Skip checking disks in this power mode. Defaults to
        #  ## "standby" to not wake up disks that have stopped rotating.
        #  ## See --nocheck in the man pages for smartctl.
        #  ## smartctl version 5.41 and 5.42 have faulty detection of
        #  ## power mode and might require changing this value to
        #  ## "never" depending on your disks.
          nocheck = "standby"
        #

      Otherwise I don't know SAS, but I remember some forum discussions of SAS and spindown; maybe you have to customize the call. Worst case you can comment out the whole block, which should resolve spindown but will disable SMART stats in grafana.
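      A quick way to sanity-check that behavior from the unraid terminal, assuming sdX is one of your spun-down disks: smartctl's -n standby flag (the same thing nocheck = "standby" relies on) skips the query instead of waking the drive.

        # should report the drive is in standby and exit without spinning it up
        smartctl -n standby -i /dev/sdX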
  17. Darn. Maybe this is the push I need to learn docker/container distribution.
  18. I love the Grafana Unraid Stack - clean and simple. Any plans to update it with a recent version of Grafana? They've added some handy features since v7.3.7: https://grafana.com/categories/release/
  19. First, thanks for this container, very handy. One suggestion: I was running batch conversions in HandBrake and couldn't figure out why my iGPU wasn't fully utilized. It turns out it was, but it was maxing out 3D render load (95%) while the reporting script (get_intel_gpu_status.sh) grabs video load (9%). So I tweaked the script to grab whatever's highest:

        #!/bin/bash
        #This is so messy...
        #Beat intel_gpu_top into submission
        JSON=$(/usr/bin/timeout -k 3 3 /usr/bin/intel_gpu_top -J)
        VIDEO_UTIL=$(echo "$JSON"|grep "busy"|sort|tail -1|cut -d ":" -f2|cut -d "," -f1|cut -d " " -f2)
        #Spit out something telegraf can work with
        echo "[{\"time\": `date +%s`, \"intel_gpu_util\": "$VIDEO_UTIL"}]"
        #Exit cleanly
        exit 0

      I overwrite the container's version with the following Post Argument, where utils is a new mapped path to the folder containing my tweaked version:

        && docker exec intel-gpu-telegraf sh -c '/usr/bin/cp -f /utils/get_intel_gpu_status.sh /opt/intel-gpu-telegraf/; chmod a+x /opt/intel-gpu-telegraf/get_intel_gpu_status.sh'

      (Full path to cp is necessary because cp is aliased to cp -i.)

      Now the display reflects full utilization.
  20. One advantage I noticed was it exposed unraid-autostart which I've added to my backup sets since reinstallation through Previous Apps didn't restore it (though it worked perfectly otherwise.)
  21. I have appdata on a single XFS-formatted SSD. Recently, on occasion, a container would disappear on restart, necessitating a reinstall. I thought it might be docker image corruption, so I decided to recreate the image. In the process I saw 3 options for docker root:
        • BTRFS image
        • XFS image
        • Directory
      Directory looked interesting so I thought I'd try it. Have I chosen poorly? Are there benefits I can take advantage of?
  22. I don't know if either of these applies, but since RC2 VNC with Safari doesn't work for me.
  23. Sorry for the delay, just saw this. Actually I meant Duplicacy's full appdata directory. On unRAID that's typically:

        /mnt/cache/appdata/Duplicacy

      I run the Backup/Restore Appdata plugin weekly which backs up all my containers' appdata directories (and my flash drive) to the array, so for simple corruption I'd just restore from that. I'm talking about catastrophic failure, your server's struck by lightning or stolen, etc. I believe everything necessary to recreate a container is either on flash or in appdata. So I take those two backups, created by the backup plugin, and save them elsewhere – an offline flash drive, remote server, etc.
  24. Do you want versioned backup (I used Duplicacy's docker) or a simple copy, in which case a User Script with a few calls to rsync would do? Whichever route you go, as long as your backup drive's part of your unraid server, if e.g. a power surge damages your main drives it'll also likely damage your backup drive. The same goes for an encrypting virus. Really you're only protecting against accidental deletion, but that's better than nothing.
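      For the simple-copy route, a rough sketch of the kind of User Script I mean (the share names and backup mount point are just examples; adjust to your system):

        #!/bin/bash
        # mirror a few shares to a backup drive mounted at /mnt/disks/backup
        DEST="/mnt/disks/backup"

        for share in documents photos music; do
          # -a preserves attributes, --delete mirrors deletions
          rsync -a --delete "/mnt/user/${share}" "${DEST}/"
        done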
  25. Could be a freak occurrence but unraid suddenly stopped tracking reads/writes to my cache pool. It was being read from but according to the webgui and telegraf there was no activity. A reboot fixed it. Very strange.