Everything posted by CS01-HS

  1. I filed this report for a pre-release of 6.9.2 (I believe), thread below, but it's still broken in 6.10 stable.
  2. +1. Performance was much better with AFP before it was dropped (in 6.9, I think). Finder sometimes takes 10-20 seconds to list a directory with a dozen files on a cache-only share. I assumed it was generally poor SMB performance in macOS, but maybe not. I've tried all the recommended tweaks, client and server, but it's still pretty poor. Relatedly, has search from macOS clients (which hasn't worked since last November) been fixed in the recent betas? I'd think that would be a pretty major issue, but I don't see many complaints.
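The client-side tweaks I mean are along these lines (a sketch, not a guaranteed fix; signing_required is the one option here I've seen Apple document for SMB performance, and you can delete the file to revert):

```
# /etc/nsmb.conf (macOS client-side SMB options; example values)
[default]
signing_required=no
```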
  3. The interface recommends a monthly balance but my schedules are based on day-of-week. Adding "First Monday", "First Tuesday", etc. as an option would be handy.
  4. With 6.9 I had to tweak unRAID's sdspin to make USB spindown work. I don't know whether this solution is generic, and I haven't tested it with 6.10. sdspin is a critical script, so if you tweak it, make a backup and be careful. Hope it helps. Edit: this one may also be relevant. Good luck.
  5. Now I wonder if Speed "Unavailable" for recent (and historical) runs is due to the plugin. Very minor issue either way.
  6. Ha! Well that explains it, thanks.
  7. Unraid seems to be logging scheduled checks as manual (I'm running 6.10.0-rc4). I noticed it because my scheduled check, which should have paused in the morning, was still running. (I don't have it set to pause manual checks.) Debug log from a scheduled check:

     Apr 7 00:05:01 NAS Parity Check Tuning: DEBUG: Manual Non-Correcting Parity Check running
     Apr 7 00:05:01 NAS Parity Check Tuning: DEBUG: Resume request
     Apr 7 00:05:01 NAS Parity Check Tuning: DEBUG: ... Manual Non-Correcting Parity Check already running
     Apr 7 00:07:01 NAS Parity Check Tuning: DEBUG: Manual Non-Correcting Parity Check running
     Apr 7 00:07:01 NAS Parity Check Tuning: DEBUG: array drives=4, hot=0, warm=0, cool=4
     Apr 7 00:07:01 NAS Parity Check Tuning: DEBUG: All array drives below temperature threshold for a Pause

     The plugin's reporting matches Unraid's history, so I doubt it's a plugin problem, but I'm posting here because unless you're using the plugin you probably won't notice. Anyone else seeing this?
  8. Many updates, so you might experience some issues, but it's available. If you'd like to try it, back up your flash, then go to Tools -> Update OS and change the branch from stable to next. Note: even with this version, if I have a folder open in Finder and another process or user moves those files, they often still appear in the original folder in my Finder window until I force a refresh by navigating to a different folder and back (as I described in my first reply).
  9. I've had similar issues on macOS. Try this:
     1. Create a folder and drag and drop a file into it
     2. Navigate to the folder's parent directory, then back to the folder
     You should see the new file. macOS doesn't work well with the version of samba in 6.9.2. Another issue: search doesn't work. 6.10.0-rc4, which includes a later samba version, seems better, but I haven't had much time to test.
  10. Ah, I assumed shfs intercepted and translated the command. I can see how that wouldn't bite me if I were only dealing with a single cache pool. Thanks, looking forward to the plugin!
  11. Running 6.10.0-rc2. I have two shares, call them A and B, and two cache pools, call them A' and B'.
      Share A is set to Cache: Yes, with Cache Pool A'
      Share B is set to Cache: Only, with Cache Pool B'
      I wanted to move files from Share A to Share B. All files resided in Share A's cache. I opened Krusader (Host Path: /mnt -> Container Path: /unraid) with two panels:
      1. /unraid/user/A/x/
      2. /unraid/user/B/y/
      and moved files from Panel 1 to Panel 2 (Share A to Share B). But instead of placing them in Share B's cache, unRAID created /B/y/ in Share A's cache and placed the files there. Is that a bug, a misconfiguration, or user error?
  12. I wish I'd investigated more before "fixing" it, but I noticed my cache was much fuller than it should be. Recycle Bin reported 11GB used on my (cache-enabled) Download share, but according to Krusader it was actually 260GB. Emptying the share's Recycle Bin from the settings page got it down to 0. Anyone else seen that? (I have Mover Tuning set up to exclude .Recycle.Bin dirs, but I don't think that would affect it.)
  13. I've been running v1.23.2 because H265 QSV encoding fails in the later versions. Comparing the conversion logs, I see the later versions attempt to use the (newly-added) LowPower option:

      v1.23.2:
      [11:17:13] hb_display_init: using VA driver 'iHD'
      libva info: VA-API version 1.10.0
      libva info: User environment variable requested driver 'iHD'
      libva info: Trying to open /opt/intel/mediasdk/lib64/iHD_drv_video.so
      libva info: Found init function __vaDriverInit_1_10
      libva info: va_openDriver() returns 0
      [11:17:13] encqsvInit: using encode-only path

      latest:
      [11:07:46] hb_display_init: using VA driver 'iHD'
      libva info: VA-API version 1.12.0
      libva info: User environment variable requested driver 'iHD'
      libva info: Trying to open /opt/intel/mediasdk/lib64/iHD_drv_video.so
      libva info: Found init function __vaDriverInit_1_12
      libva info: va_openDriver() returns 0
      [11:07:46] encqsvInit: MFXVideoENCODE_Init failed (-3)
      [11:07:46] encqsvInit: using encode-only (LowPower) path
      ...
      [11:07:46] Failure to initialise thread 'Quick Sync Video encoder (Intel Media SDK)'

      I saw a suggestion here to disable it: https://github.com/HandBrake/HandBrake/issues/3270#issuecomment-744087448 which I believe I should be able to do by passing -x lowpower=0 in the container template variable AUTOMATED_CONVERSION_HANDBRAKE_CUSTOM_ARGS. But it doesn't seem to have any effect; I see the same reference to the encode-only (LowPower) path in the conversion log. Any suggestions? Is there a way to get the CLI equivalent of the GUI command so I could run it manually and experiment?
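For manual experimenting, the CLI equivalent would be roughly this (a sketch; the input/output paths are examples, and qsv_h265 assumes the same encoder the container preset selects):

```shell
HandBrakeCLI -i /watch/input.mkv -o /output/output.mkv \
    --encoder qsv_h265 \
    --encopts lowpower=0
```

--encopts is the long spelling of the -x advanced-options string, so running this directly should show whether lowpower=0 is reaching the encoder at all.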
  14. My unraid server uses shucked Seagate 2.5" drives (ST5000LM000). Next to it I have another device with a 3.5" Seagate IronWolf. When the 3.5" is spinning (and especially when it's reading or writing) I can hear it. With the 2.5" drives I can't tell; they're silent.
  15. It runs once and exits; I run it on a different system. On unraid you'd add it as a User Script and set it to run weekly or daily. No, it doesn't. For that you'd use the commands I mentioned above, either notice or error:
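A sketch of the notice/error idea (the notify path is Unraid's stock webGUI helper; the echo fallback is only so the sketch runs on other systems):

```shell
#!/bin/bash
# Unraid's stock notification helper (exists on the server, not elsewhere)
NOTIFY=/usr/local/emhttp/webGui/scripts/notify

# send_notice LEVEL SUBJECT MESSAGE, where LEVEL is normal, warning, or alert
send_notice() {
    if [ -x "$NOTIFY" ]; then
        "$NOTIFY" -i "$1" -s "$2" -d "$3"
    else
        echo "[$1] $2: $3"    # fallback when the helper isn't present
    fi
}

send_notice normal "Backup" "Completed without errors"
send_notice alert "Backup" "Completed with errors"
```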
  16. Maybe there's a better solution, but this is what I use for Emby, which exhibits the same behavior:
  17. I have emby transcode to RAM (/dev/shm/), which works well except for garbage collection: temporary transcode files accumulate. To solve that I wrote a script (run by the container) to delete temporary files, and a user script to launch it when the container's restarted or if it's not running for any other reason.

      Assumptions:
      - /transcode is the container's path to /dev/shm/ (or wherever you're transcoding to)
      - /transcode/transcoding-temp is the container's path to the directory holding temp transcoding files (emby creates this subdirectory)
      - /system-share/transcoding-temp-fix.sh is the container's path to the following script (make the script executable)

      transcoding-temp-fix.sh:

      #!/bin/sh
      TRANSCODE_DIR="/transcode/transcoding-temp"
      # Delete old files when used space is above this %
      PERCENT_LIMIT=50
      # Delete this many files at a time
      BATCH_SIZE=10

      if [ -d "${TRANSCODE_DIR}" ]; then
          percent_full=`df "${TRANSCODE_DIR}" | awk '{print $5}' | tr -dc '0-9'`
          echo "Directory limit: ${PERCENT_LIMIT}%"
          echo "Directory utilization: ${percent_full}%"
          while [ $percent_full -gt $PERCENT_LIMIT ]; do
              echo "(${percent_full}%) exceeds limit (${PERCENT_LIMIT}%), deleting oldest (${BATCH_SIZE}) files"
              # DEBUG: Uncomment to print list of deleted files
              #ls ${TRANSCODE_DIR}/*.ts -1t | tail -${BATCH_SIZE} | xargs ls -lh
              ls ${TRANSCODE_DIR}/*.ts -1t | tail -${BATCH_SIZE} | xargs rm
              percent_full=`df "${TRANSCODE_DIR}" | awk '{print $5}' | tr -dc '0-9'`
          done
      else
          echo "${TRANSCODE_DIR} (TRANSCODE_DIR): directory doesn't exist"
      fi

      Now the user script to launch it, set to run every 10 minutes (*/10 * * * *). NOTE: update the container name and paths to match your setup.

      #!/bin/bash
      #arrayStarted=true

      # Verify EmbyServer's running
      running=`docker container ls | grep EmbyServer | wc -l`
      if [ "${running}" != "0" ]; then
          # verify watch command that calls clearing script is running
          watch_running=`docker exec -i EmbyServer ps | grep 'watch ' | wc -l`
          # verify detection command ran properly
          if [ $? -eq 0 ]; then
              if [ "${watch_running}" == "0" ]; then
                  echo "Clearing script is not running. Re-starting..."
                  docker exec EmbyServer sh -c 'watch -n30 "/system-share/transcoding-temp-fix.sh 2>&1" > /transcode/transcoding-temp-fix.log &'
              fi
          else
              echo "ERROR: Command to detect script run status failed"
          fi
      fi

      Monitor the script's activity:

      tail -f /dev/shm/transcoding-temp-fix.log

      Sample output:

      Every 30s: /system-share/transcoding-temp-fix.sh 2>&1    2022-01-30 14:40:44

      Directory limit: 50%
      Directory utilization: 2%

      NOTES:
      - The script can probably be tweaked to work for Plex
      - If a better solution exists let me know; this was quick and somewhat dirty.
  18. I think there's a minor bug in the abort logic. With two scripts running in the background, myTest and myTestAlternate, aborting myTest aborts both (even though the display shows the latter as still running). I figure the pattern matching catches both. Not a big deal, but FYI. I was trying to figure out why a script I wrote wasn't working; turns out it wasn't running because of the above.
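A sketch of why a bare substring match would catch both (the process list is fabricated for illustration):

```shell
#!/bin/sh
# Fabricated process list for illustration; real user.scripts paths differ
ps_output="1234 /tmp/user.scripts/myTest/script
5678 /tmp/user.scripts/myTestAlternate/script"

# A bare substring match on the script name catches both entries...
echo "$ps_output" | grep -c "myTest"    # prints 2

# ...while anchoring the name (here with its trailing slash) catches one
echo "$ps_output" | grep -c "myTest/"   # prints 1
```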
  19. Okay, but keep in mind that if it ever fills up (100%), especially if you're running dockers and VMs, it'll cause all kinds of problems. So make sure that never happens.
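One way to make sure (a sketch; pointing it at /mnt/cache and swapping the echo for the webGUI notify helper are assumptions about your setup):

```shell
#!/bin/bash
# check_usage PATH LIMIT: warn when a filesystem's used % exceeds LIMIT.
# On Unraid you'd call: check_usage /mnt/cache 90, and replace the echo
# with /usr/local/emhttp/webGui/scripts/notify -i warning -s ... -d ...
check_usage() {
    local used
    used=$(df --output=pcent "$1" | tr -dc '0-9') || return 1
    if [ "$used" -gt "$2" ]; then
        echo "WARNING: $1 at ${used}% (limit $2%)"
    else
        echo "OK: $1 at ${used}%"
    fi
}

# Example against the root filesystem
check_usage / 90
```

Run it as a User Script on a cron schedule so a filling cache never goes unnoticed.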
  20. Could be your cache went from 30% to 70% between mover runs, triggering "move all."
  21. Huh, seems to work for me. Are you sure you have this: set higher than this: and mover set to run frequently enough that it triggers at the lower %?
  22. You can minimize it by:
      - caching .DS_Store files (which you want to do anyway, to avoid waking parity)
      - excluding each share from Spotlight
      - disabling "calculate all sizes" and "show icon preview" in folders and subfolders (probably easiest to set it as the default, then clear out existing .DS_Store files)
      And after all that, Finder will still occasionally wake the drives. I decided it wasn't worth it.
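For reference, the Spotlight exclusion can also be done from the client, and an alternative to caching .DS_Store is stopping Finder from writing them to network volumes at all (macOS-only commands; the share path is an example):

```shell
# Stop Finder writing .DS_Store files to network volumes (log out/in to apply)
defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool true

# Exclude a mounted share from Spotlight indexing
sudo mdutil -i off "/Volumes/MyShare"
```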
  23. Huh, I didn't realize the built-in apcupsd was compatible with a NUT master. I have a similar setup and use the NUT plugin instead. (Detailed description of my setup here.)
  24. Thanks, I should have searched. Looks like it goes all the way back to 6.8.3. I triggered it over an SMB connection. This particular share doesn't have NFS enabled, unless you meant system-wide. I can imagine SMB bugs triggering the underlying fuse "bug" (which is marked as won't-fix), so I'll wait for a version with SMB fixes before digging deeper. I think in my particular case a stale directory listing resulted in the attempted move of a non-existent file.
  25. The SMB bugs? Sure, but it shouldn't break shfs.