Posts posted by flyize
-
Just wanted to come back and say thank you for this. It's working perfectly!
-
This was a self-inflicted error. I was connected to the wrong Wi-Fi network. Ugh.
-
Hmmm, I just saw this email from 5 hours ago. According to Unraid Connect, all my containers are stopped (which isn't true because Plex is working).
* **disk3 (ST8000DM004-2CX188_WCT0EE4D) has read errors**
-
So I'm having a weird problem. I can't access the UI for any of my Docker containers or Unraid itself. I also can't ping the server. However, Plex is fully working and Unraid Connect shows the server as online. What in the heck am I missing here?
-
Yeah, I knew I had forgotten something.
-
Actually, not sure what I've done, but the mover now moves everything off.
-
On 3/13/2024 at 7:41 PM, ronia said:
Hey thanks for this! This was pretty much exactly what I needed the mover to do with some slight modifications. I've attached my version below in case this is useful for anyone else:
#!/bin/bash
START_TIME=`date +%s`
DATE=`date`

# Define variables
TARGET_DIRS=("/mnt/cache/plex_lib")
OUTPUT_DIR="/dev/shm/cache_mover"
OUTPUT_FILE="$OUTPUT_DIR/ignore.txt"
MOVE_FILE="$OUTPUT_DIR/moved.log"
LOG_FILE="$OUTPUT_DIR/verbose.log"
MAX_SIZE="500000000000" # 500 gigabytes in bytes
#EXTENSIONS=("mkv" "srt" "mp4" "avi" "rar")
VERBOSE=false

# Ensure the output directory exists
mkdir -p "$OUTPUT_DIR"

# Ensure the moved log exists
touch "$MOVE_FILE"

# Cleanup previous temporary files
rm -f "$OUTPUT_DIR/temp_metadata.txt" "$OUTPUT_DIR/temp_all_files.txt"
rm -f "$OUTPUT_FILE"
# MOVE_FILE and LOG_FILE intentionally kept persistent

for target_dir in "${TARGET_DIRS[@]}"; do
    # Step 1: Change directory to the target directory
    cd "$target_dir" || exit

    # Step 2: Find files with specified extensions and obtain metadata (loop through extensions)
    #for ext in "${EXTENSIONS[@]}"; do
    #    find "$(pwd)" -type f -iname "*.$ext" -exec stat --printf="%i %Z %n\0" {} + >> "$OUTPUT_DIR/temp_metadata.txt"
    #done

    # Step 2(alt): Find all files. No filter.
    find "$(pwd)" -type f -iname "*" -exec stat --printf="%i %Z %n\0" {} + >> "$OUTPUT_DIR/temp_metadata.txt"
done

# Step 3: Sort metadata by ctime (second column) in descending order
sort -z -k 2,2nr -o "$OUTPUT_DIR/temp_metadata.txt" "$OUTPUT_DIR/temp_metadata.txt"

# Step 4: Get the newest files up to the specified size limit
total_size=0
move_size=0
processed_inodes=()

while IFS= read -r -d $'\0' line; do
    read -r inode ctime path <<< "$line"

    # Keep track of all files
    echo "$path" >> "$OUTPUT_DIR/temp_all_files.txt"

    # Skip if the inode has already been processed
    # (whole-word match so inode 123 doesn't also match 1234)
    if [[ " ${processed_inodes[*]} " == *" $inode "* ]]; then
        continue
    fi

    size=$(stat --printf="%s" "$path")

    if ((total_size + size <= MAX_SIZE)); then
        if $VERBOSE; then
            echo "$DATE: Processing file: $total_size $path" >> "$LOG_FILE" # Debug information to log
        fi
        #echo "$path" >> "$OUTPUT_FILE" # Appending only path and filename to the file
        total_size=$((total_size + size))

        # Mark the current inode as processed
        processed_inodes+=("$inode")

        # Step 4a: List hardlinks for the current file
        #hard_links=$(find "$TARGET_DIR" -type f -samefile "$path")
        #if [ -n "$hard_links" ]; then
        #    echo "$hard_links" >> "$OUTPUT_FILE"
        #else
        #    echo "$path" >> "$OUTPUT_FILE"
        #fi

        # Step 4a(alt): Script does not support hardlinks, but is significantly faster and supports multiple TARGET_DIR
        echo "$path" >> "$OUTPUT_FILE"
    else
        if $VERBOSE; then
            echo "$DATE: Moving file: $move_size $path" >> "$LOG_FILE" # Debug information to log
        fi
        move_size=$((move_size + size))

        # Do not add to the move file log if previously added (-F: match the path literally, not as a regex)
        if ! grep -qF -- "$path" "$MOVE_FILE"; then
            echo "$DATE: $path" >> "$MOVE_FILE"
        fi
        continue
        #break
    fi
done < "$OUTPUT_DIR/temp_metadata.txt"

# Step 5: Cleanup temporary files
rm "$OUTPUT_DIR/temp_metadata.txt"

END_TIME=`date +%s`
if $VERBOSE; then
    echo "$DATE: File list generated and saved to: $OUTPUT_FILE" >> "$LOG_FILE"
    echo "$DATE: Execution time: $(($END_TIME - $START_TIME)) seconds." >> "$LOG_FILE"
fi
Summary:
- Removed hardlinks from step 4a. I won't have hardlinks in my share, and dropping them significantly improves execution time
- Removed looping over an extension list. I didn't feel there was anything in my share that couldn't be moved between cache and array
- Added execution time logging
- Added a 'temp_all_files.txt' file. This was useful in convincing myself that ignore.txt was capturing everything on the share and nothing was missed. When the cache is below MAX_SIZE you can just diff the two files and they should be identical (see the sketch after this list)
- Added a verbose logging mode
- Added a 'moved.log' for tracking what has been moved between cache/array
- Changed TARGET_DIR to TARGET_DIRS to loop over all potential shares. At the moment this is just my Plex library, but I have some ideas about where else this could be used
- Moved the output directory to /dev/shm. My mover is set up to run on the hour, so this reduces the number of reads/writes on my cache drive
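A minimal sketch of that diff sanity check, assuming the paths from the script above and no hardlinks (temp_all_files.txt persists until the next run, so run this between mover invocations):
# Both files should list the same paths when nothing is over the size cap;
# sort first, since the two files are written in different orders.
diff <(sort /dev/shm/cache_mover/ignore.txt) \
     <(sort /dev/shm/cache_mover/temp_all_files.txt)
# No output from diff means the lists are identical.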
I finally got around to adding this. I used this version. I have 1.3TB on my 'cache_media' drive and set the script's MAX_SIZE to 1000000000000 (1TB). Right now it's at 600GB used. It made a list of files to ignore, but then started moving files anyway. What am I missing here?
edit: Oops, I think it was self-inflicted. I see that by default the Move Now button does not use the Mover Tuning settings.
-
I wish I knew the answer to that question as well. I have a similar issue with Chrome, but Firefox works fine.
-
1 hour ago, JorgeB said:
It's not logged as a disk problem and SMART looks fine, but there are a lot of UDMA CRC errors, so the SATA cable would be the main suspect.
P.S. Lots of OOM errors logged; they appear to be caused by Frigate.
Someday, I'd love to not have to bother you guys, and maybe be able to help. Where do I go to see those CRC errors in the diagnostics?
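edit: answering my own question for future searchers — the per-drive SMART reports are inside the diagnostics zip (under the smart folder, if I'm reading mine right), or you can pull one live from the console; sdX below stands in for the real device:
# UDMA CRC errors are SMART attribute 199; a raw value that keeps climbing
# usually points at the cable or connector rather than the disk itself
smartctl -a /dev/sdX | grep -i crc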
-
You're an angel! That's wonderful news, as I'm about to be travelling for a week and didn't want this thing to die while I was gone! I'll get the cable swapped out.
Yeah, the OOM errors are annoying. It just started happening with the latest version of Frigate. I've notified the devs, but it's pretty much just me with the problem. For some reason, ffmpeg occasionally tries to grab 80GB+ of RAM. It was bringing down the whole server before I limited the RAM available to the container.
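In case it helps anyone, that limit is just Docker's standard memory cap, which on Unraid goes in the container's Extra Parameters; the 8g below is only an example value, size it to your box:
# Hard-cap the container so a runaway ffmpeg gets killed inside the
# container instead of OOMing the whole server
--memory=8g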
-
Any suggestions on if/how I can fix this?
-
Just so that I can keep track of all my scripts in one place, can I use User Scripts? It puts scripts in /boot/config/plugins/user.scripts/scripts/. I'd still call it from the Mover Tuner, but at least all my scripts would be in one place then.
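edit: if I'm reading the plugin's layout right, each User Scripts script lives in its own folder in a file literally named 'script', so the path I'd point the Mover Tuner at would look like this ('cache_mover' is just a placeholder name):
/boot/config/plugins/user.scripts/scripts/cache_mover/script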
-
Okay, got it!
-
So in your example, if it gets over 750GB, the script will make a list of files from newest to oldest, then drop the oldest file from the list and recalculate total space used, over and over until it gets under 750GB. Then it passes all this off to the Mover as a list of files it can't touch. Am I understanding correctly?
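Writing it out helped me follow the logic; here's a stripped-down sketch of what I think the core loop does (illustrative only, not the actual script — made-up input of 'size path' lines sorted newest first):
MAX_SIZE=750000000000                      # the 750GB cap, in bytes
total=0
while read -r size path; do                # input is 'size path', newest first
    if (( total + size <= MAX_SIZE )); then
        total=$(( total + size ))          # newest files accumulate under the cap
        echo "$path" >> ignore.txt         # mover is told to leave these alone
    fi                                     # anything past the cap stays movable
done < newest_first_list.txt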
-
So wait, is MAX_SIZE how much free space to keep on the disk? And then it adds older files until it hits that number? Sorry, I'm terrible with bash scripts.
-
Thanks @JorgeB. All fixed up.
-
4 hours ago, flyize said:
Okay. So unassign the disks, mount the array. Then navigate through the emulated disks (via the Unraid GUI I guess?) and make sure I see data. Then reassign the disks and rebuild (in-place)?
Well hopefully that's correct, cause I'm in the middle of the rebuild!
-
Okay. So unassign the disks, mount the array. Then navigate through the emulated disks (via the Unraid GUI I guess?) and make sure I see data. Then reassign the disks and rebuild (in-place)?
-
1 minute ago, trurl said:
How? More details please.
mount /dev/sdk /tempdrive
-
7 minutes ago, trurl said:
Of course, because they are being emulated by parity.
That does at least answer this question though:
But probably better if you post new diagnostics after you get hardware fixes done.
I went in and mounted /dev/sdk. It's actually emulating the whole device? If so, how will I know when I get it fixed?
edit: Added diagnostics
-
Any reason I can't just bring them back online and pretend nothing happened (or maybe do a parity check)?
edit: FWIW, if I look at the drives, all the data seems to be there.
-
Thanks! Looks like those two drives are on the same Molex-to-SATA power splitter. If I order another one and things spin up, do I have to rebuild?
-
Oh crap, I powered off the server without recording which drives were actually disabled. Where do I see that in the diags?
-
8 minutes ago, JorgeB said:
A reboot will never help with disabled devices.
That is just the syslog, please also post the complete diags from now.
Well that was a bonehead move on my part. Ugh
Yikes. Share with 14TB of stuff shows empty. When I try to browse via console, I get "Structure needs cleaning"
in General Support
What have I done to get here and how can I fix it?
edit: Looks like I needed to run a repair on one of the drives.
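edit 2: for anyone landing here with the same error — "Structure needs cleaning" is XFS reporting corruption, and the repair is the standard filesystem check. A rough sketch of the console route, assuming disk1 is the affected drive and the array is started in maintenance mode (the md device name varies by Unraid version, and the GUI's Check Filesystem option on the drive's page does the same thing):
# Dry run first: -n reports problems without changing anything
xfs_repair -n /dev/md1
# Then the actual repair (it may tell you to add -L if the log is damaged)
xfs_repair /dev/md1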