CS01-HS


Posts posted by CS01-HS

  1. 20 minutes ago, talard said:

    Hope 6.10.0-rc4 will be launched soon!

     

    It's available, though there have been many updates so you might experience some issues.

    If you'd like to try it, back up your flash, then go to Tools -> Update OS and change the branch from stable to next.

     

    Note: Even with this version, if I have a folder open in Finder and another process or user moves those files, they often still appear in the original folder in my Finder window until I force a refresh by navigating to a different folder and back (as I described in my first reply).
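
    Incidentally, if the click-around refresh gets tedious, relaunching Finder from Terminal also forces one (note it closes your open Finder windows):

    killall Finder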

  2. I've had similar issues on MacOS. Try this:

    1. Create a folder and drag and drop a file in it
    2. Navigate to the folder's parent directory, then back to the folder
    3. You should see the new file.

    MacOS doesn't work well with the version of samba in 6.9.2. Another issue: search doesn't work.

    6.10.0-rc4, which includes a later samba version, seems better but I haven't had much time to test.

  3. Running 6.10.0-rc2

     

    I have two shares, call them A and B

    and two cache pools, call them A' and B'

     

    Share A is set to Cache: Yes, with Cache Pool A'

    Share B is set to Cache: Only, with Cache Pool B'

     

    I wanted to move files from Share A to Share B

    All files resided in Share A's cache.

     

    I opened Krusader (Host Path: /mnt -> Container Path: /unraid)

     

    With two panels:

    1. /unraid/user/A/x/

    2. /unraid/user/B/y/

     

    and moved files from Panel 1 to Panel 2 (Share A to Share B)

     

    But instead of placing them in share B's cache,

    unRAID created /B/y/ in share A's cache and placed them there.

     

    Is that a bug, a misconfiguration, or user error?
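
    In case it's useful for anyone reproducing this, you can see which pool actually holds the files by checking the pool mounts directly rather than /mnt/user (poolA and poolB stand in for my real pool names A' and B'):

    ls /mnt/poolA/B/y/   # the files unexpectedly ended up here, on share A's pool
    ls /mnt/poolB/B/y/   # where I expected them, on share B's pool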

  4. I wish I'd investigated more before "fixing" it, but I noticed my cache was much fuller than it should be. Recycle Bin reported 11GB used on my (cache-enabled) Download share but according to Krusader it was actually 260GB:

     

    [screenshot]

     

    Emptying the share's Recycle Bin from the settings page got it down to 0.

    Anyone else seen that?

     

    (I have mover tuning set up to exclude .Recycle.Bin dirs but I don't think that would affect it.)
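
    If anyone wants to double-check the real size themselves, du against the pool works; the path assumes a pool named cache and my Download share, so adjust to yours:

    du -sh /mnt/cache/Download/.Recycle.Bin/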

  5. I've been running v1.23.2 because H265 QSV encoding fails in the later versions.

     

    Comparing the conversion logs I see the later versions attempt to use the (newly-added) LowPower option:

     

    v1.23.2

    [11:17:13] hb_display_init: using VA driver 'iHD'
    libva info: VA-API version 1.10.0
    libva info: User environment variable requested driver 'iHD'
    libva info: Trying to open /opt/intel/mediasdk/lib64/iHD_drv_video.so
    libva info: Found init function __vaDriverInit_1_10
    libva info: va_openDriver() returns 0
    [11:17:13] encqsvInit: using encode-only path

     

    latest

    [11:07:46] hb_display_init: using VA driver 'iHD'
    libva info: VA-API version 1.12.0
    libva info: User environment variable requested driver 'iHD'
    libva info: Trying to open /opt/intel/mediasdk/lib64/iHD_drv_video.so
    libva info: Found init function __vaDriverInit_1_12
    libva info: va_openDriver() returns 0
    [11:07:46] encqsvInit: MFXVideoENCODE_Init failed (-3)
    [11:07:46] encqsvInit: using encode-only (LowPower) path
    ...
    [11:07:46] Failure to initialise thread 'Quick Sync Video encoder (Intel Media SDK)'

     

    I saw a suggestion here to disable it:

    https://github.com/HandBrake/HandBrake/issues/3270#issuecomment-744087448

     

    which I believe I should be able to do by passing

    -x lowpower=0

     

    to the container template variable

    AUTOMATED_CONVERSION_HANDBRAKE_CUSTOM_ARGS


    But it doesn't seem to have any effect; I still see the same reference to the encode-only (LowPower) path in the conversion log.

     

    Any suggestions?

    Is there a way to get the CLI equivalent of the GUI command so I could run it manually and experiment?
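
    In the meantime, something along these lines should let me experiment from a shell in the container; the input/output paths and preset name are only examples, not taken from my setup:

    HandBrakeCLI --input /watch/sample.mkv --output /output/sample.mkv \
        --preset "H.265 QSV 1080p" --encopts "lowpower=0"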

  6. 4 hours ago, Hawkins12 said:

    And based on the backup settings, I assume it'll continue to run indefinitely.  How do you end this?

     

    It runs once and exits.

     

    4 hours ago, Hawkins12 said:

    So I assume you run this in the "Console" of Unraid to make it work.

     

    I run it on a different system. On unraid you'd add it as a User Script and set it to run weekly or daily.

     

    4 hours ago, Hawkins12 said:

    Also, on the echo commands, that's the message displaying Successful or Failed -- does that come through as an Unraid notification?

     

    No, it doesn't. For that you'd use the commands I mentioned above, either notice or error:

     

    On 8/13/2021 at 3:28 PM, CS01-HS said:

    You can even include unraid notifications (in addition to the "echo" printouts).

     

    Notices ("normal") appear in green:

    /usr/local/emhttp/webGui/scripts/notify -e "Unraid Server Notice" -s "Backup Critical Files" -d "Backup complete" -i "normal"

     

    and alerts in red:

    /usr/local/emhttp/webGui/scripts/notify -e "Unraid Server Error" -s "Backup Critical Files" -d "Backup error" -i "alert"
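
    Putting those together, a minimal sketch of a backup script that notifies either way (the rsync line is a placeholder; substitute your actual backup command):

    #!/bin/bash
    # placeholder backup command, replace with your own
    if rsync -a /boot/ /mnt/user/backups/flash/; then
        /usr/local/emhttp/webGui/scripts/notify -e "Unraid Server Notice" -s "Backup Critical Files" -d "Backup complete" -i "normal"
    else
        /usr/local/emhttp/webGui/scripts/notify -e "Unraid Server Error" -s "Backup Critical Files" -d "Backup error" -i "alert"
    fi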

     

     

  7. 16 hours ago, Squid said:

    More than a few. It's the cause. RootFS gets mounted as 50% of available RAM and basically it's all Plex. Personally, I've noticed that Plex, when transcoding, never actually deletes anything until playback is stopped, and depending upon the number of users, the remote quality, etc., the files keep adding up.

     

    There may be a better solution, but this is what I use for Emby, which exhibits the same behavior:

     

  8. I have emby transcode to RAM (/dev/shm/) which works well except for garbage collection – temporary transcode files accumulate.

     

    To solve that, I wrote a script (run by the container) to delete temporary files, and a user script to launch it when the container's restarted or if it's not running for any other reason.

     

    Assumptions:

     

    /transcode

    is the container's path to /dev/shm/ (or wherever you're transcoding to)

     

    /transcode/transcoding-temp

    is the container's path to the directory holding temp transcoding files (emby creates this subdirectory)

     

    /system-share/transcoding-temp-fix.sh

    is the container's path to the following script (make the script executable)

     

    transcoding-temp-fix.sh

    #!/bin/sh
    
    TRANSCODE_DIR="/transcode/transcoding-temp"
    # Delete old files when used space is above this %
    PERCENT_LIMIT=50
    # Delete this many files at a time
    BATCH_SIZE=10
    
    if [ -d "${TRANSCODE_DIR}" ]; then
        percent_full=$(df "${TRANSCODE_DIR}" | awk '{print $5}' | tail -1 | tr -dc '0-9')
        printf "Directory size: \t %3s%%\n" ${percent_full}
        printf "Directory limit:\t %3s%%\n" ${PERCENT_LIMIT}
        echo ""
        while [ $percent_full -gt $PERCENT_LIMIT ]; do
            if [ $(find ${TRANSCODE_DIR} -type f -name "*.ts" | wc -l) -gt 0 ]; then
                echo "(${percent_full}%) exceeds limit (${PERCENT_LIMIT}%), deleting oldest (${BATCH_SIZE}) files"
                find ${TRANSCODE_DIR} -type f -name "*.ts" -exec ls -1t "{}" + | tail -${BATCH_SIZE} | xargs rm
            else
                echo "*WARNING* (${percent_full}%) exceeds limit (${PERCENT_LIMIT}%) but files are not transcoding fragments"
                exit 1
            fi
            percent_full=$(df "${TRANSCODE_DIR}" | awk '{print $5}' | tail -1 | tr -dc '0-9')
        done
    else
        echo "${TRANSCODE_DIR} (TRANSCODE_DIR): directory doesn't exist"
    fi

     

    Now the user script to launch it, set to run every 10 minutes (*/10 * * * *):

    NOTE: Update EmbyServer name and system-share (if necessary) to match your system

    #!/bin/bash
    #arrayStarted=true
    #clearLog=true
    
    # Verify EmbyServer's running
    running=$(docker container ls | grep EmbyServer | wc -l)
    if [ "${running}" != "0" ]; then
      # verify the watch command that calls the clearing script is running;
      # capture docker exec's output separately so the exit status checked
      # below is docker's, not grep's or wc's
      ps_output=$(docker exec -i EmbyServer ps)

      # make sure the detection command ran properly otherwise
      # we might end up running multiple instances of the script
      if [ $? -eq 0 ]; then
        watch_running=$(echo "${ps_output}" | grep 'watch ' | wc -l)
        if [ "${watch_running}" == "0" ]; then
          echo "Clearing script is not running. Re-starting..."
          docker exec EmbyServer sh -c 'watch -n30 "/system-share/transcoding-temp-fix.sh 2>&1" > /transcode/transcoding-temp-fix.log &'
        fi
      else
        echo "ERROR: Command to detect script run status failed"
        /usr/local/emhttp/webGui/scripts/notify -e "emby-ClearTranscodingTmp" -s "Command to detect script status failed" -d "" -i "alert"
      fi
    fi
    

     

    Monitor the script's activity:

    tail -f /dev/shm/transcoding-temp-fix.log

     

    Sample output:

    Every 30.0s: /system-share/transcoding-temp-fix.sh 2>&1     2022-10-10 14:45:19
    
    Directory size: 	   5%
    Directory limit:	   50%

     

    NOTES:

    • The script can probably be tweaked to work for Plex
    • If a better solution exists let me know; this was quick and dirty.
  9. I think there's a minor bug in the abort logic.

    With 2 scripts running in the background: myTest and myTestAlternate

    Aborting myTest aborts both (even though the display shows the latter as still running).

    I figure the pattern-matching catches both.
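
    My guess at the mechanism, purely for illustration (I haven't read the plugin's abort code): an unanchored match on the script name, like this, would catch both since "myTest" is a prefix of "myTestAlternate":

    pkill -f "myTest"   # would also match and kill myTestAlternate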

     

    Not a big deal but FYI.

    I was trying to figure out why a script I wrote wasn't working; it turns out it wasn't running because of the above.

     

  10. 1 minute ago, Sanches said:

    I've disabled that option now... That was probably the issue.

     

    Thank you!

     

    Okay, but keep in mind that if it ever fills up (100%), especially if you're running dockers and VMs, it'll cause all kinds of problems. So make sure that never happens.

  11. 2 hours ago, Sanches said:

    Actually my mover runs once a week with the following config:

     

    [screenshots of mover config]

    I've had this config for months... but only in the last 3-4 weeks have I been having problems. When it reaches 15 days, it moves all files.

     

    Could be your cache went from 30% to 70% between mover runs, triggering "move all."

  12. 5 minutes ago, Sanches said:

    Same here! For some weeks now.

    Any fix?

    I tried changing all settings and saving again... but when files reach the specified days old, it moves all files regardless of age.

     

    Huh, seems to work for me.

     

    Are you sure you have this:

    [screenshot of setting]

    set higher than this:

    [screenshot of setting]

    and mover set to run frequently enough that it triggers at the lower %?

    [screenshot of mover schedule]

     

  13. On 4/30/2021 at 2:09 PM, acosmichippo said:

    I took another approach with this and tried moving the files via unraid's CLI and they seem to be moving consistently without waking up the array.  I guess there is something in MacOS's finder that is waking up the array when I use it to move cached files around.  Keeping .DS_Store files on the cache did not solve the issue like I hoped it would (although maybe it helps a little, not sure).

     

    So for now I have just set up an hourly user script to move files between cached directories.

     

    You can minimize it by:

    • caching .DS_Store (which you want to do anyway to avoid waking parity)
    • excluding each share from spotlight
    • disabling calculate all sizes and show icon preview in folders and subfolders (probably easiest to set it as the default, then clear out existing .DS_Stores; see the sketch below)

    And after all that Finder will still occasionally wake the drives. I decided it wasn't worth it.
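
    For clearing out the .DS_Stores, something like this removes the existing files so the new defaults take effect (the share path is a placeholder; repeat for each share):

    find /mnt/user/YourShare -name ".DS_Store" -delete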

  14. Thanks, I should have searched. Looks like it goes all the way back to 6.8.3.

    I triggered it over an SMB connection. This particular share doesn't have NFS enabled unless you meant system-wide.

     

    I can imagine SMB bugs triggering the underlying fuse "bug" (which is marked as won't fix) so I'll wait for a version with SMB fixes before digging deeper.

     

    I think in my particular case a stale directory listing resulted in the attempted move of a non-existent file.

  15. I was moving a file from one folder to another within a share (on Mac) when an error occurred and the share disconnected. Checking the terminal I saw /mnt/user/ was inaccessible.

     

    Sorry, I don't have a full syslog or diagnostics to share, but this (coincident with the SMB disconnection) seemed relevant:

     

    Dec  4 09:05:21 NAS shfs: shfs: ../lib/fuse.c:1451: unlink_node: Assertion `node->nlookup > 1' failed.

     

  16. 12 hours ago, Kloudz said:

    Yep, restarted the container and verified that the inputs.smart block was commented out

     

    And you're sure they sleep with the container stopped?

    Maybe you have USE_HDDTEMP set to yes?

    [screenshot of the USE_HDDTEMP setting]

     

    If not, I don't know what could be causing it.
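
    One way to check the drives' actual power state from the console, independent of what the dashboard shows (sdX is a placeholder for one of the drives):

    hdparm -C /dev/sdX   # reports "drive state is: standby" when spun down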

  17. 3 minutes ago, Kloudz said:

     

    Yep, I have nocheck = "standby". This is installed fresh from the App Store so I'm not sure it would be different.

    That said, I commented out the whole block and nope, it did not resolve the issue.

     

     

    Did you restart the container afterward (and verify that commenting out the block persisted through the restart)?

     

    With no calls to the drives, I can't imagine how it would keep them awake.

  18. 59 minutes ago, Kloudz said:

    In regards to the Grafana Unraid Stack. After install, it seems to keep all my disks (SATA) active.

    If you notice the last 5 are not doing anything. The last 5 drives are SAS drives. It's weird that the app is keeping the other drives busy.

     

    Is there a way to fix this?

     

    It's probably the smartctl call in telegraf.

     

    Check your config:

    vi /mnt/user/appdata/Grafana-Unraid-Stack/telegraf/telegraf.conf

     

    Do you have nocheck = "standby" in inputs.smart?

     

    [[inputs.smart]]
    #   ## Optionally specify the path to the smartctl executable
       path = "/usr/sbin/smartctl"
    #
    #   ## On most platforms smartctl requires root access.
    #   ## Setting 'use_sudo' to true will make use of sudo to run smartctl.
    #   ## Sudo must be configured to to allow the telegraf user to run smartctl
    #   ## without a password.
    #   # use_sudo = false
    #
    #   ## Skip checking disks in this power mode. Defaults to
    #   ## "standby" to not wake up disks that have stoped rotating.
    #   ## See --nocheck in the man pages for smartctl.
    #   ## smartctl version 5.41 and 5.42 have faulty detection of
    #   ## power mode and might require changing this value to
    #   ## "never" depending on your disks.
       nocheck = "standby"
    #

              

    Otherwise, I don't know SAS, but I remember some forum discussions of SAS and spindown; maybe you have to customize the call. Worst case, you can comment out the whole block, which should fix spindown but will disable SMART stats in Grafana.
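
    You can also test whether smartctl respects standby by running the same kind of call by hand against a spun-down drive (sdX is a placeholder):

    smartctl --nocheck=standby -A /dev/sdX
    echo $?   # exit status 2 means the drive was in standby and the check was skipped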

  19. 3 hours ago, Hoopster said:

    Since the author of this docker container used to be quite active in the forums but has not shown up in over 13 months, I would say updates are unlikely.

     

    Darn. Maybe this is the push I need to learn docker/container distribution.