kizer

Moderators
  • Posts: 3742
  • Days Won: 4

Everything posted by kizer

  1. Have you looked at Tdarr? It does all of that automatically. It will convert right where your files are and require literally no input from you after everything is set up. It's pretty neat, actually. As far as FileBot goes, you can create custom naming conventions and it will stick to them (see the sketch just below for the sort of thing I mean). That's why I said I can share my config file with you, because I have my files set up a specific way too. Easy peasy.
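
     For example, a FileBot command-line rename with a custom format expression looks something like this (just a sketch of the idea, not my actual config; the paths and the format string here are made up):

# Example only: rename movies into "Movie Name (Year)/Movie Name (Year).ext"
filebot -rename "/mnt/user/Downloads/movies" \
    --db TheMovieDB \
    --action move \
    --output "/mnt/user/Media/Movies" \
    --format "{n} ({y})/{n} ({y})" \
    -non-strict

     The --format expression is where the naming convention lives; that's the part my config file changes.
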
  2. So basically you just want folders created from files with spaces, or rather, you just want each file put into its own folder, named as it is. I'll take a look at that; something like the sketch below should do it. At the time I created this script I was having problems with spaces and needed a utility. Otherwise, if you want some crazy naming convention you could give FileBot a try, but you would have to modify a file to your liking, which can be kind of a pain in the butt. I could share my config file if you really wanted the year and other info in it. https://www.filebot.net/ There is a command-line version of it that just runs when you drop files into it.
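
     Something along these lines is what I have in mind (the path and extension are just examples):

# Sketch: give each video file its own folder, named exactly like the file
# (extension dropped), spaces and all, then move the file into it.
SRC="/mnt/user/Downloads"    # example path
find "$SRC" -maxdepth 1 -type f -name "*.mkv" -print0 | while IFS= read -r -d '' f; do
    base="$(basename "$f")"
    dir="$SRC/${base%.*}"    # folder name = filename without the extension
    mkdir -p "$dir"
    mv -n "$f" "$dir/"       # -n: don't overwrite anything already there
done
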
  3. So you're wanting to prune the title, or something different? Can you give me some exact examples? I'm confused by what you're showing me up above.
  4. Personally I always use . instead of spaces. Just keep in mind that in the path you must type it exactly as it is. You could always rename your path from Output files to Output.files and then change it back after you run this. Normally you wrap a path with spaces in quotes, for example "Output files", so Linux knows there is a space (see the examples just below). I don't know if this script will support that. I'll give it a try later when I can.
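
     To illustrate what I mean about the quoting (example path only):

SHARE="/mnt/user/Output files"

ls $SHARE                      # breaks: bash sees /mnt/user/Output and files as two arguments
ls "$SHARE"                    # works: the quotes keep the space as part of the path
ls /mnt/user/Output\ files     # also works: the backslash escapes the space

# Or the rename-it-first trick:
mv "/mnt/user/Output files" "/mnt/user/Output.files"
# ...run the script against /mnt/user/Output.files...
mv "/mnt/user/Output.files" "/mnt/user/Output files"
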
  5. @Beat-O You need to make sure you have Community Applications installed, which will give you an app store. From there you want to install a plugin called User Scripts. Then you simply paste in the code I gave you, make your subtle changes, and click the run button. Community Applications is a must-have plugin because it gives you a really cool store to install programs from. Don't worry, just do your thing and I'll answer any questions as you come up with them. 😃 The Pastebin link I posted above is a modified version of the code to help simplify things. Just fire over questions as you need to and I'll do my best to keep answering them.
  6. @Beat-O Oh, sorry. It's just using some code that I posted up above and the User Scripts plugin, so you can either schedule the script to run or click on it to run it manually. You'll find the plugin in Community Applications. You can simply take the text of the above code and make a bash script; just make sure you make it executable so it runs on your system. I just find running things via User Scripts the easiest. I also made a few changes so you just have to plug in a few values instead of all of your paths, and I threw in another option to move everything to a final location, just in case where you originally had the files is not where they should end up. I also added some chmod magic in case you don't want your files owned by root, which this script would otherwise make them (a rough sketch of those pieces is below). https://pastebin.com/XaWP98tM
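
     Roughly, the extra pieces I'm describing look like this. It's only a sketch with made-up paths, not the actual Pastebin script, so grab the link above for the real thing:

# Values you plug in (examples only):
WORK_DIR="/mnt/user/Downloads/incoming"    # where the files start out
FINAL_DIR="/mnt/user/Media/Movies"         # optional final location

# The "chmod magic": stop the results from being owned by root.
# nobody:users is the usual ownership for Unraid shares.
chown -R nobody:users "$WORK_DIR"
chmod -R u=rwX,g=rwX,o=rX "$WORK_DIR"

# Optional: move everything to its final home afterwards.
mv -n "$WORK_DIR"/* "$FINAL_DIR"/

     If you save it as a standalone file instead of using User Scripts, make it executable first with chmod +x before running it.
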
  7. https://unraid.net/blog/6-12-0-rc4
  8. I know I'm late to the party here. My son has that very water cooler on his gaming PC with a 5600X, and it works very well.
  9. Is he one of those guys that jumps out of a helicopter with the night vision goggles and flippers to swim uphill in the sand? Welcome aboard, too.
  10. Cracked me up when I opened it up and noticed everything was upside down.
  11. Totally agree. Unbalance is perfect for this application.
  12. Until you make an accidental change in your settings, realize you're moving 2TB of data, and get upset that your machine is taking forever to move said files and you can't stop it. I know I exaggerated some there, but things happen, and keeping things simple has, in my humble opinion, always been a strong suit of Unraid's design.
  13. Lol, I don't think I've ever had to use a 12-digit password before.
  14. This is the latest bit of code I have from the guy who is keeping it up to date. I didn't write any of this, so please don't consider me an expert on the topic. Lol. It just works for me.

#!/bin/bash
# This script gets active torrents list from Transmission, playing sessions from Plex,
# and tells rsync to copy them from array to cache drive.
# It also cleans oldest modified files by rsyncing them back to array (in case of modification).
# Hardlinks are preserved
#
# By Reynald - 06 may 2020 - mailto:[email protected]
# modified by quinto to work with plex v1.29 or later
# v.0.5.19

# settings
{
    #Plex
    PLEX_TOKEN="xxxxxxxxxxxxx"
    PLEX_HOST="192.168.7.127:32400"
    PLEX_MAX_CACHED_SESSIONS=5
    PLEX_CONTAINER_PATH="Media"
    PLEX_DOCKER="plex"    # name of your Plex docker container; not in the original paste, but sys_checks below needs it

    #Rsync path
    STORAGE_PATH="/mnt/user0/Media/"
    CACHE_PATH="/mnt/cache/Media/"
    CACHE_MIN_FREE_SPACE_PCT="90"
    CACHE_MAX_FREE_SPACE_PCT="70"

    #Parameters
    LOG_MAX_SIZE=5000000
    NOISY_HOUR_START=9
    NOISY_HOUR_STOP=24
    #if you're not using Youtube-dl agent for plex don't touch this line,
    #it must only be changed when you're using yt-dl custom agent
    YOUTUBE_DL_AGENT="com.plexapp.agents.youtube-dl"

    #Options (set to true or false)
    DRY_RUN=false
    WRITE_ON_STORAGE=false

    #number of episodes to cache
    PLEX_CACHE_NB_EPISODES=6
    PLEX_CACHE_SEASON_TILL_END=false

    VERBOSE=1    #0=Error; 1=Info; 2=More_Info; 3=Debug
}

##### No modification below this line #####

CACHE_DISK=$(df -lah "$CACHE_PATH" | awk 'FNR == 2 {print $1}' | tr -d '\n')

sys_checks() {
    # check if plex docker is running
    plex_running=`docker inspect -f '{{.State.Running}}' $PLEX_DOCKER`
    if [[ ! $plex_running ]]; then
        echo "Error: Plex docker is not running"
        exit 1
    fi

    # lock
    if [[ -f /var/lock/smart-cache_plex_transmission ]]; then
        echo "Error: Script already running"
        exit 1
    else
        touch /var/lock/smart-cache_plex_transmission
        [[ $VERBOSE -ge 2 ]] && echo "Welcome to $0"
    fi

    # check that path are mounted
    if [[ ! -d $STORAGE_PATH ]] || [[ ! -d $CACHE_PATH ]]; then
        echo "Error: Paths are not accessibles"
        rm /var/lock/smart-cache_plex_transmission
        exit 1
    fi

    # cut log
    LOG_FILE=$(echo $0 | sed 's|\/script|\/log.txt|')
    echo "log= $LOG_FILE"
    LOG_SIZE=$(stat -c %s "$LOG_FILE")
    [[ $VERBOSE -ge 1 ]] && echo "Info: Log size is $LOG_SIZE"
    if [[ $LOG_SIZE -ge $LOG_MAX_SIZE ]]; then
        [[ $VERBOSE -ge 1 ]] && echo "Info: Emptying log file"
        echo "" > "$LOG_FILE"
    fi
    [[ $VERBOSE -ge 1 ]] && echo ""
}

#######################
# Transfers functions #
#######################

noisy_hours() {
    # return 0 if time in noisy hour range
    if [[ $(date '+%-H') -ge $NOISY_HOUR_START ]] && [[ $(date +%-H) -le $NOISY_HOUR_STOP ]]; then
        return 0
    else
        return 1
    fi
}

rsync_transfer() {
    # get files and path
    SOURCE_FILE=$1
    DEST_FILE=$2
    SOURCE_PATH=$3
    DEST_PATH=$4
    RS_OPTIONS=$5
    [[ $VERBOSE -ge 3 ]] && echo " --- Debug:Rsync_transfer function parameters:"
    [[ $VERBOSE -ge 3 ]] && echo " ---- Debug: Source file: $SOURCE_FILE"
    [[ $VERBOSE -ge 3 ]] && echo " ---- Debug: Dest. file: $DEST_FILE"
    [[ $VERBOSE -ge 3 ]] && echo " ---- Debug: Source path: $SOURCE_PATH"
    [[ $VERBOSE -ge 3 ]] && echo " ---- Debug: Dest. path: $DEST_PATH"
    [[ $VERBOSE -ge 3 ]] && echo " ---- Debug: Options : $RS_OPTIONS"

    # check if original file exist
    if [[ ! -f "${SOURCE_FILE}" ]] && [[ ! -f "${DEST_FILE}" ]]; then
        echo " --- Error: Files:"
        echo " ${SOURCE_FILE}"
        echo " ${DEST_FILE}"
        echo " does not exist"
        return 1
    elif [[ "${DEST_FILE}" = "${DEST_PATH}" ]] || [[ "${SOURCE_FILE}" = "${SOURCE_PATH}" ]]; then
        echo " --- Error: Cannot sync root path!"
        return 1
    elif [[ ! -f "${SOURCE_FILE}" ]] && [[ "${DEST_PATH}" = "${CACHE_PATH}" ]] && [[ -f "${DEST_FILE}" ]]; then
        if noisy_hours; then
            if ! $WRITE_ON_STORAGE; then
                [[ $VERBOSE -ge 2 ]] && echo " --- Info: File is on cache only. Inside noisy hours, sending to storage"
                rsync_transfer "${DEST_FILE}" "${SOURCE_FILE}" "${DEST_PATH}" "${SOURCE_PATH}"
            fi
        else
            [[ $VERBOSE -ge 2 ]] && echo " --- Warning: File is on cache only. Outside of noisy hours, doing nothing"
        fi
        return
    elif [[ -f "${DEST_FILE}" ]] && [[ "${DEST_PATH}" = "${CACHE_PATH}" ]]; then
        [[ $VERBOSE -ge 2 ]] && echo " --- Info: File already cached"
        return
    fi

    # get dir
    SOURCE_DIR=$(dirname "${SOURCE_FILE}")
    DEST_DIR=$(dirname "${DEST_FILE}")

    if ! $DRY_RUN; then
        # sync file
        mkdir -p "${DEST_DIR}"
        [[ $VERBOSE -ge 1 ]] && echo " --- Info: Syncing ${SOURCE_FILE} to ${DEST_FILE}"
        nice -n 13 ionice -c 3 /usr/bin/rsync --bwlimit=190000 -aHq "${SOURCE_FILE}" "${DEST_FILE}"
        if [[ ! $? -eq 0 ]]; then
            echo " --- Error: cannot rsync ${SOURCE_FILE}"
            echo " to ${DEST_FILE}"
            return 1
        fi
        # remove original file if requested
        if [[ "${RS_OPTIONS}" = "--remove-source-files" ]] && $RSYNC_RESULT; then
            [[ $VERBOSE -ge 2 ]] && echo " --- Info: Delete ${SOURCE_FILE}"
            rm "${SOURCE_FILE}"
        fi
    else
        echo " --- Running in DRY RUN mode : Files aren't really copied, doing nothing"
    fi
}

########
# Plex #
########

plex_cache() {
    # get Plex sessions
    STATUS_SESSIONS=$(nice -n 13 ionice -c 3 curl --limit-rate 200k --silent http://${PLEX_HOST}/status/sessions -H "X-Plex-Token: $PLEX_TOKEN")
    if [[ -z $STATUS_SESSIONS ]]; then
        echo "Error: Cannot connect to plex"
        return 1
    fi
    NB_SESSIONS=$(echo $STATUS_SESSIONS | xmllint --xpath 'string(//MediaContainer/@size)' -)
    echo "----------------------------"
    echo "$NB_SESSIONS active(s) plex session(s):"
    echo "----------------------------"

    # for each session
    if [[ $NB_SESSIONS -gt $PLEX_MAX_CACHED_SESSIONS ]]; then
        NB_SESSIONS=$PLEX_MAX_CACHED_SESSIONS
        echo "Warning: Caching is limited to $PLEX_MAX_CACHED_SESSIONS plex sessions (user setting)"
    fi
    for i in `seq $NB_SESSIONS`; do
        # get title
        ID=$(echo $STATUS_SESSIONS | xmllint --xpath 'string(//MediaContainer/Video['$i']/@ratingKey)' -)
        TYPE=$(echo $STATUS_SESSIONS | xmllint --xpath 'string(//MediaContainer/Video['$i']/@type)' -)
        MKEY=$(echo $STATUS_SESSIONS | xmllint --xpath 'string(//MediaContainer/Video['$i']/@key)' -)

        # eventually get serie info
        if [[ $TYPE = "episode" ]]; then
            TYPE="Serie"
            GRANDPARENTTITLE=$(echo $STATUS_SESSIONS | xmllint --xpath 'string(//MediaContainer/Video['$i']/@grandparentTitle)' -)
            SEASON=$(echo $STATUS_SESSIONS | xmllint --xpath 'string(//MediaContainer/Video['$i']/@parentIndex)' -)
            TITLE=$(echo $STATUS_SESSIONS | xmllint --xpath 'string(//MediaContainer/Video['$i']/@title)' -)
            EPISODE=$(echo $STATUS_SESSIONS | xmllint --xpath 'string(//MediaContainer/Video['$i']/@index)' -)
            PARENT_ID=$(echo $STATUS_SESSIONS | xmllint --xpath 'string(//MediaContainer/Video['$i']/@parentRatingKey)' -)
            PARENT_SESS=$(nice -n 13 ionice -c 3 curl --limit-rate 200k --silent http://${PLEX_HOST}/library/metadata/$PARENT_ID/children)
            PARENT_NB_EPISODES=$(echo $PARENT_SESS | xmllint --xpath 'string(//MediaContainer/@size)' -)
            PARENT_START_EPISODE=$(echo $PARENT_SESS | xmllint --xpath 'string(//MediaContainer/Video[1]/@index)' -)
            SECTION_ID=$(nice -n 13 ionice -c 3 curl --limit-rate 200k --silent http://${PLEX_HOST}/library/metadata/$PARENT_ID | xmllint --xpath 'string(//MediaContainer/@librarySectionID)' -)
            SECTION_AGENT=$(nice -n 13 ionice -c 3 curl --limit-rate 200k --silent http://${PLEX_HOST}/library/sections?X-Plex-Token= | xmllint --xpath 'string(//MediaContainer/Directory[@key='$SECTION_ID']/@agent)' -)
            if [[ ${SECTION_AGENT} = "com.plexapp.agents.youtube-dl" ]]; then TYPE="Youtube"; fi
            TITLE="$TYPE: ${GRANDPARENTTITLE} Season ${SEASON} - Episode ${EPISODE}/${PARENT_NB_EPISODES}: $TITLE"
            # update nb file to cache
            START_FILE="$EPISODE"
            if [[ ${SECTION_AGENT} == ${YOUTUBE_DL_AGENT} ]]; then
                NB_FILES=$(( $EPISODE ))
            else
                if [[ $PLEX_CACHE_NB_EPISODES -gt 0 ]]; then NB_FILES=$(( ($EPISODE + $PLEX_CACHE_NB_EPISODES) < $PARENT_NB_EPISODES ? ($EPISODE + $PLEX_CACHE_NB_EPISODES) : $PARENT_NB_EPISODES )); fi
                $PLEX_CACHE_SEASON_TILL_END && NB_FILES=$(( $PARENT_NB_EPISODES ))
            fi
        elif [[ $TYPE = "movie" ]]; then
            NB_SESS=$(nice -n 13 ionice -c 3 curl --limit-rate 200k --silent http://${PLEX_HOST}${MKEY}?checkFiles=1?includeChildren=1?X-Plex-Token=)
            NB_MEDIAS=$(echo $NB_SESS | xmllint --xpath 'count(//Media)' -)
            NB_FILES=$(echo $NB_SESS | xmllint --xpath 'count(//Part)' -)
            TYPE="Movie"
            TITLE="$(echo $STATUS_SESSIONS | xmllint --xpath 'string(//MediaContainer/Video['$i']/@title)' -)"
            TITLE="$TYPE: $TITLE"
            START_FILE=1
        else
            TYPE="Audio"
            TITLE="track caching not implemented"
            TITLE="$TYPE: $TITLE"
            START_FILE=0
            NB_FILES=0
        fi
        echo " - $i / $NB_SESSIONS: $TITLE"

        PLEX_FILE_SESS_MOVIE=$(nice -n 13 ionice -c 3 curl --limit-rate 200k --silent http://${PLEX_HOST}/library/metadata/$PARENT_ID/children)
        PLEX_FILE_SESS_TVSHOW=$(nice -n 13 ionice -c 3 curl --limit-rate 200k --silent http://${PLEX_HOST}/library/metadata/$ID)

        for j in `seq $START_FILE $NB_FILES`; do
            # get file path
            if [[ $TYPE = "Audio" ]]; then
                [[ $VERBOSE -ge 2 ]] && echo " -- Info: Skipping"
            else
                if [[ $TYPE != "Movie" ]]; then
                    if [[ $TYPE = "Youtube" ]]; then
                        # media is a youtube episode
                        PLEX_FILE=$(echo "${PLEX_FILE_SESS_MOVIE}" | xmllint --xpath 'string(//MediaContainer/Video['$(($PARENT_START_EPISODE - $j + 1))']/Media/Part/@file)' -)
                    else
                        # media is a tv show
                        PLEX_FILE=$(echo "${PLEX_FILE_SESS_MOVIE}" | xmllint --xpath 'string(//MediaContainer/Video['$(($j - $PARENT_START_EPISODE + 1))']/Media/Part/@file)' -)
                    fi
                    m=$PARENT_NB_EPISODES
                else
                    # media is a movie
                    if [[ $NB_FILES -gt 1 ]]; then
                        if [[ $NB_FILES -gt $NB_MEDIAS ]]; then
                            # multiple parts movie (part1, part2, etc)
                            PLEX_FILE=$(echo "${PLEX_FILE_SESS_TVSHOW}" | xmllint --xpath 'string(//MediaContainer/Video/Media/Part['$j']/@file)' -)
                        else
                            # multiple medias movie (4k, 1080p, 720p, etc)
                            PLEX_FILE=$(echo "${PLEX_FILE_SESS_TVSHOW}" | xmllint --xpath 'string(//MediaContainer/Video/Media['$j']/Part/@file)' -)
                        fi
                    else
                        # movie in single part / format
                        PLEX_FILE=$(echo "${PLEX_FILE_SESS_TVSHOW}" | xmllint --xpath 'string(//MediaContainer/Video/Media/Part/@file)' -)
                    fi
                    m=$NB_FILES
                fi
                FILE_TO_CACHE="$(sed -e 's|\"\"|\/|g' -e 's/%/\%/g' <<<${PLEX_FILE} | sed 's|\"||g' | sed 's|\/'${PLEX_CONTAINER_PATH}'\/||')"
                [[ $VERBOSE -ge 2 ]] && echo " -- Info: Streaming File $j/$m: ${FILE_TO_CACHE}"
                STORAGE_FILE="${STORAGE_PATH}${FILE_TO_CACHE}"
                CACHE_FILE="${CACHE_PATH}${FILE_TO_CACHE}"
                # and send to rsync
                rsync_transfer "${STORAGE_FILE}" "${CACHE_FILE}" "${STORAGE_PATH}" "${CACHE_PATH}"
            fi
            sleep .8
        done
        sleep .8
    done
    sleep .8
    # [[ $NB_SESSIONS != 0 ]] && echo ""
    [[ $VERBOSE -ge 1 ]] && echo ""
}

####################
# Delete old files #
####################

cleanup() {
    # get free space
    a=$(df -h | grep $CACHE_DISK | awk '{ printf "%d", $5 }')
    b=$CACHE_MIN_FREE_SPACE_PCT
    echo "---------------------"
    echo "Cache disk usage: ${a}%"
    echo "---------------------"
    if [[ "$a" -ge "$b" ]]; then
        echo "$a% space used, quota is $b%, cleaning"
        [[ $VERBOSE -ge 1 ]] && echo "Info: Scanning files..."
        # get oldest accessed files
        find "${CACHE_PATH}" -type f -printf "%C@ %p\n" | sort -n | sed "s|`echo ${CACHE_PATH}`|%|g" | cut -d'%' -f2 | while read FILE_TO_CLEAN; do
            # loop start: get free space again
            a=$(df -h | grep $CACHE_DISK | awk '{ printf "%d", $5 }')
            b=$CACHE_MAX_FREE_SPACE_PCT
            # if free space not enough
            if [[ "$a" -ge "$b" ]]; then
                [[ $VERBOSE -ge 1 ]] && echo " - Info: $a% space used, target $b%, uncaching $FILE_TO_CLEAN"
                STORAGE_FILE="${STORAGE_PATH}${FILE_TO_CLEAN}"
                CACHE_FILE="${CACHE_PATH}${FILE_TO_CLEAN}"
                # sync back cache to storage
                rsync_transfer "${CACHE_FILE}" "${STORAGE_FILE}" "${CACHE_PATH}" "${STORAGE_PATH}" "--remove-source-files"
            fi
            # loop
        done
    fi
    a=$(df -h | grep $CACHE_DISK | awk '{ printf "%d", $5 }')
    b=$CACHE_MIN_FREE_SPACE_PCT
    [[ $VERBOSE -ge 1 ]] && echo " - Info: $a% space used, quota is $b%, nothing to do"

    # prune empty directories from source dir
    [[ $VERBOSE -ge 2 ]] && echo " -- Info: Cleaning empty directories..."
    find "${CACHE_PATH}" -type d -not -path '*/\.*' -empty -prune -exec rmdir --ignore-fail-on-non-empty {} \;
    [[ $VERBOSE -ge 1 ]] && echo ""
}

sys_checks
plex_cache
cleanup

echo ""
rm /var/lock/smart-cache_plex_transmission
exit 0

  15. Well, things I'd wonder about: what time of day is your mover set to run? Are you using Plex, and do you have its daily scheduled tasks set for that same time period?
  16. I personally use the Mover Tuning plugin and have it set to move files after a week. That way I can access the files from the cache and the array, but they are moved automatically after a period of time instead of daily. The other upside is I can be sure the files aren't open or being used, unless one of my kids is up watching a video at 5AM. 😃
  17. Same! Lol, literally everything you said is the same for me. I was outside working with the younger crowd and I about died. Lol
  18. Yep, old Hoopster. I've been beaten to countless posts by this guy. Sometimes I wonder, does he sleep with a laptop on his chest? 😃
  19. I don't know of a solution for your sound, but you can schedule your conversions for times when you're not using your machine, like when you're sleeping, or you can pin which cores are doing the conversion to leave some breathing room for your system (a couple of examples are sketched below).
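
     A couple of illustrative ways to do that (the container image, file names, and core numbers here are just examples; use whatever you actually run for conversions):

# 1) Pin a conversion docker to cores 4-7 so the rest of the CPU stays free:
docker run -d --name handbrake --cpuset-cpus="4-7" jlesage/handbrake

# 2) Or schedule a CLI encode for 2 AM via cron and pin it with taskset:
# 0 2 * * * taskset -c 4-7 HandBrakeCLI -i /mnt/user/in.mkv -o /mnt/user/out.mkv --preset "Fast 1080p30"
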
  20. My 2GB SanDisk Cruzer has been error-free since 2009. The server only boots from it, and then it just sits idle for days on end.
  21. This is exactly what I did. I ran it as a NAS for a few days over the weekend while we were gone, then went back to normal and turned on one thing at a time. We use Plex a lot, so I turned on only the Plex docker, then each day I turned on another and another. Come to find out, one of my dockers wasn't configured right and was eating up resources and crashing. Sometimes the simplest things are the key to fixing things.
  22. Same. I got lucky that my motherboard has an onboard USB port; just plug it in and forget about it. I know it's no help, but every time I make any major changes to my machine, like adding a new drive or upgrading the OS, I always store a backup of the flash drive on my Windows PC. I would store it on Unraid.net, but it appears I only have a 2GB stick and I don't have enough room to make a backup. Lol
  23. Everything is tied to your USB stick, including My Servers.