cmarshall85

Members
  • Content Count: 6
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About cmarshall85

  • Rank: Newbie
  1. @Derek_ did you ever figure this out? I am having the same issue. When I run lftp, any folders/files it creates are root:root, and I need them to be nobody:users. Thanks!
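
     One possible workaround, sketched under the assumption that the sync is launched from root's cron (the usual Unraid setup): launch it as the nobody user instead, so everything lftp creates comes out owned nobody:users. The script path below is only a placeholder.

# Untested sketch: run the sync script as "nobody" so files it creates are
# owned nobody:users instead of root:root. Replace the path with your own.
su -s /bin/bash -c '/mnt/user/Downloads/Seedbox/Scripts/seedbox_sync.sh' nobody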
  2. Thanks for the plugin! I have a question about permissions. I am in the process of migrating my Plex/NAS server from a Mac. On my Mac I have a script that syncs new files every 5 minutes from a remote seedbox. I have altered this script for UnRAID to download to my "Downloads" share. It appears to work fine as far as the syncing goes, but the files on the share are read-only and I can't do anything about that. What should I do so that I have read/write permissions from my other devices when I access the share? Thanks! Here's the script:

#!/bin/bash
#### Seedbox Sync
#
# Sync various directories between home and seedbox, do some other things also.

SCRIPT_NAME="$(basename "$0")"
LOG_DIR="/mnt/user/Downloads/Seedbox/Scripts/Logs"
SYNC_LOG="$SCRIPT_NAME.log"
LOG_FILE="$LOG_DIR/$SYNC_LOG"

USERNAME='***'
PASS='***'
HOST='***'
PORT='***'

# Number of files to download simultaneously
# Number of segments/parts to split the downloads into
# Minimum size each chunk (part) should be
nfile='2'
nsegment='10'
minchunk='1'

# Location of remote files ready for pickup, no trailing slash
REMOTE_MEDIA="***/downloads/Sync/"

# Destination for remote media to be stored for further processing
LOCAL_MEDIA="/mnt/user/Downloads/Seedbox/Sync"

# Local processing directory files get moved to after sync
PROC_MEDIA="/mnt/user/Downloads/Seedbox/Processing"

# Test for local lockfile, exit if it exists
BASE_NAME="$(basename "$0")"
LOCK_FILE="/mnt/user/Downloads/Seedbox/Scripts/$BASE_NAME.lock"

# Test for remote lockfile, exit if exists
REMOTE_LOCK_FILE="***/scripts/post_process.sh.lock"

# Fix permissions issue
umask 002

# If a log file exists, rename it
if [ -f "$LOG_FILE" ]; then
    mv "$LOG_FILE" "$LOG_FILE.last"
fi

echo "${0} Starting at $(date)"

trap "rm -f ${LOCK_FILE}" SIGINT SIGTERM

if [ -e "${LOCK_FILE}" ]
then
    echo "${BASE_NAME} is running already."
    exit
else
    ssh "$USERNAME@$HOST" "test -e $REMOTE_LOCK_FILE"
    if [ "$?" -eq "0" ]; then
        echo "Post Process is running on remote server, exiting..."
        exit
    fi

    touch "$LOCK_FILE"

    lftp -p "${PORT}" -u "${USERNAME},${PASS}" sftp://"${HOST}" << EOF
set ftp:list-options -a
set sftp:auto-confirm yes
set pget:min-chunk-size ${minchunk}
set pget:default-n ${nsegment}
set mirror:use-pget-n ${nsegment}
set mirror:parallel-transfer-count ${nfile}
set mirror:parallel-directories yes
set xfer:use-temp-file yes
set xfer:temp-file-name *.lftp
mirror -c -v --loop --Remove-source-dirs "${REMOTE_MEDIA}" "${LOCAL_MEDIA}"
quit
EOF

    echo "${0} Remote sync finished at $(date)"

    # Move sync'd files to processing directory
    rsync -av --progress --ignore-existing --remove-source-files --prune-empty-dirs -O \
        --log-file="$LOG_FILE" \
        --log-file-format="%f - %n" \
        "${LOCAL_MEDIA}/" \
        "${PROC_MEDIA}/"

    # Clear Sync directory of empty folders and .DS_Store files
    find "$LOCAL_MEDIA" -name '.DS_Store' -type f -delete
    find "$LOCAL_MEDIA" -depth -type d -empty -exec rmdir "{}" \;
    mkdir -p "$LOCAL_MEDIA"

    rm -f "$LOCK_FILE"
    trap - SIGINT SIGTERM
    exit
fi
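
     A hedged guess at the read-only symptom: umask only shapes permission bits, not ownership, so a script run from root's cron still leaves everything root-owned, and SMB then shows those files as read-only to ordinary users. One possible addition after the rsync step (nobody:users is the usual Unraid share owner; adjust if yours differs):

# Untested sketch: hand the processed files to the standard Unraid share
# owner and open group write so other devices can modify them over SMB.
chown -R nobody:users "$PROC_MEDIA"
chmod -R u+rw,g+rw "$PROC_MEDIA"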
  3. First off, thank you for this great plugin! I'd like to suggest a feature. Not sure if this would be possible to do, but figure it would be worth throwing it out there. On Carbon Copy Cloner (CCC) for Mac, it has a feature called Safety Net. Basically, the backups that program does are differential and then it throws all the deleted/removed files into a SafetyNet folder, similar to this Recycle Bin plugin. On CCC you can set the max size of the SafetyNet. Once it reaches the max size, it will automatically delete the oldest files that were copied to the SafetyNet until it has enough room for the new files. It would be great if Recycle Bin could delete old files like that. Clearing the Recycle Bin on a schedule is nice, but from my understanding that would just get rid of everything. It would be nice to have the option to always keep files in the Recycle Bin while only deleting what's necessary to make space for newly deleted files.
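
     In the meantime, the requested behavior could be approximated with a small script; the bin path and size cap below are made-up placeholders, and the loop only illustrates the oldest-first trimming described above.

#!/bin/bash
# Hypothetical sketch of SafetyNet-style trimming: delete the oldest files
# in the recycle bin until total usage fits under a size cap.
BIN="/mnt/user/.Recycle.Bin"      # assumed recycle-bin location
MAX_KB=$((100 * 1024 * 1024))     # assumed cap: 100 GB, expressed in KB
while [ "$(du -sk "$BIN" | cut -f1)" -gt "$MAX_KB" ]; do
    # Find the oldest regular file by modification time (GNU find).
    oldest="$(find "$BIN" -type f -printf '%T@ %p\n' | sort -n | head -n 1 | cut -d' ' -f2-)"
    [ -n "$oldest" ] || break
    rm -f "$oldest"
done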
  4. Never mind, I figured it all out. Thanks again for the help!
  5. Thank you for the reply. So the way to go about this would be to stop the array, remove the 2TB from the list, and then add it as an unassigned device and restart the array? I have the plugins installed. And one more issue I’m facing is that my cache drive won’t mount. It came formatted as NTFS. When I stop the array, change the file system to XFS, and start the array again, I don’t get prompted to format it. I read that was the process for reformatting.
  6. Hi guys, I have built my first Unraid server. I currently have a 12TB disk for parity and a 1TB SSD for cache, and I am waiting for a sale on WD drives to buy some more for the array. While putting the server together, I remembered I have a 2TB drive lying around that hasn't been in use for a while, so I figured I might as well throw it in the server as Disk 1 to get it up and running and add more drives later.
     So I get Unraid running from the flash drive, and as it's doing the first parity build, I remember that there might be a few things on that 2TB drive that I had deleted by mistake from the other copy of the data. The drive was formatted HFS+, and my original plan was just to reformat it once I stuck it in the server, but thankfully I haven't done that yet since I didn't see any possible way to do it. (See attached screenshot for current setup)
     So basically I have this unmountable HFS+ drive in the array. I'm wondering if there's any way for me to get data off of it by sharing it through Unraid, or do I have to stop the array, pull the drive, put it in an external enclosure to copy off the data I need, and rebuild the Unraid array from scratch? It's not the end of the world if I have to do that; I was just wondering what's the best path to take from here. Thanks in advance!
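
     If the drive can stay where it is, one possible route is a manual read-only mount from the Unraid console, assuming the kernel ships the hfsplus module and the HFS+ data partition is /dev/sdX2 (substitute the real device, with the disk not part of a started array):

# Untested sketch: mount the HFS+ partition read-only and copy data off.
mkdir -p /mnt/hfs
mount -t hfsplus -o ro /dev/sdX2 /mnt/hfs
rsync -av /mnt/hfs/ /mnt/user/Downloads/Recovered/   # example destination
umount /mnt/hfs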