Terebi

Members
  • Posts

    124
  • Joined

  • Last visited

Everything posted by Terebi

  1. This is the (empty for now) support thread for the asustor platform drivers. This allows for fan and LED control of ASUStor NASes under unraid. The plugin was kindly built by @ich777 but is supported by @Terebi. Users of this plugin may also be interested in this guide:
  2. 

```shell
#!/bin/bash

# Define variables
TARGET_DIR="/mnt/cache/data"
OUTPUT_DIR="/mnt/user/appdata"
OUTPUT_FILE="$OUTPUT_DIR/moverignore.txt"
MAX_SIZE="500000000000" # 500 gigabytes in bytes
EXTENSIONS=("mkv" "srt" "mp4" "avi" "rar")

# Ensure the output directory exists
mkdir -p "$OUTPUT_DIR"

# Cleanup previous temporary files
rm -f "$OUTPUT_DIR/temp_metadata.txt" "$OUTPUT_DIR/temp_filtered_metadata.txt"
rm -f "$OUTPUT_FILE"

# Step 1: Change directory to the target directory
cd "$TARGET_DIR" || exit

# Step 2: Find files with specified extensions and obtain metadata (loop through extensions)
for ext in "${EXTENSIONS[@]}"; do
    find "$(pwd)" -type f -iname "*.$ext" -exec stat --printf="%i %Z %n\0" {} + >> "$OUTPUT_DIR/temp_metadata.txt"
done

# Step 3: Sort metadata by ctime (second column) in descending order
sort -z -k 2,2nr -o "$OUTPUT_DIR/temp_metadata.txt" "$OUTPUT_DIR/temp_metadata.txt"

# Step 4: Keep the newest files up to the specified size limit
total_size=0
processed_inodes=()
while IFS= read -r -d $'\0' line; do
    read -r inode ctime path <<< "$line"

    # Skip if the inode has already been processed (hardlink already counted)
    if [[ "${processed_inodes[*]}" =~ $inode ]]; then
        continue
    fi

    size=$(stat --printf="%s" "$path")
    if ((total_size + size <= MAX_SIZE)); then
        echo "Processing file: $total_size $path" # Debug information to screen
        #echo "$path" >> "$OUTPUT_FILE" # Appending only path and filename to the file
        total_size=$((total_size + size))

        # Mark the current inode as processed
        processed_inodes+=("$inode")

        # Step 4a: List hardlinks for the current file
        hard_links=$(find "$TARGET_DIR" -type f -samefile "$path")
        if [ -n "$hard_links" ]; then
            echo "$hard_links" >> "$OUTPUT_FILE"
        else
            echo "$path" >> "$OUTPUT_FILE"
        fi
    else
        break
    fi
done < "$OUTPUT_DIR/temp_metadata.txt"

# Step 5: Cleanup temporary files
rm "$OUTPUT_DIR/temp_metadata.txt"

echo "File list generated and saved to: $OUTPUT_FILE"
```

* 1.0 Initial public post
* 1.1 Fix not clearing main output file; skip file if same hardlink already processed.
* 1.2 Fix hardlinks not outputting correctly.
* 1.3 Sort explicitly by date, in case inodes get out of order.
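One caveat in the script above: the `[[ "${processed_inodes[*]}" =~ $inode ]]` check is a substring match against the whole joined array, so inode `123` would also match inode `1234`. A minimal sketch of a stricter variant using a bash associative array (function and variable names here are illustrative, not part of the original script):

```shell
#!/bin/bash
# Sketch: dedupe files by inode with an associative array, avoiding the
# substring false-positives of matching against "${array[*]}".
declare -A seen_inodes

mark_if_new() {
    local inode="$1"
    if [[ -n "${seen_inodes[$inode]}" ]]; then
        return 1   # already processed this inode (a hardlink)
    fi
    seen_inodes[$inode]=1
    return 0
}

# Example with the script's "inode ctime path" record layout;
# note inode 123 is correctly deduped while 1234 is not confused with it.
while read -r inode ctime path; do
    if mark_if_new "$inode"; then
        echo "NEW $inode $path"
    else
        echo "DUP $inode $path"
    fi
done <<'EOF'
123 1700000000 /mnt/cache/data/a.mkv
1234 1700000001 /mnt/cache/data/b.mkv
123 1700000002 /mnt/cache/data/a-hardlink.mkv
EOF
```

Exact-key lookup in an associative array is O(1) per file and cannot match a partial inode number.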
  3. I have the desire to keep my cache drive full-ish of new media, so that:
* Newly downloaded torrents seed from NVME rather than disk
* Users watching media get better performance (and more recent downloads are more likely to be watched)
* My disks stay spun down more, for power consumption and noise
* In-flight torrent/usenet downloads that are happening when mover is triggered do not get moved mid-download
* When new media is downloaded that would make the cache too full, the mover moves off the oldest files to stay under the threshold (keep the newest files on cache, up to the threshold)

The advantage of my script over what you can do with mover tuning natively: the settings on age and size will delay moves, but once you reach those triggers, it will move everything that matches. This could result in completely emptying your cache, or moving more than it needed to. For example, let's say you start with an empty cache, with moves set for content older than 90 days, at 50% full. Then you download enough content, all on the same day, to reach 49% of your cache space. Nothing happens. 90 days go by. Nothing happens. You download 2% additional content. All 49% of the 90-day-old content will be moved at once, leaving you at 2% cache. My script will only move the oldest content, just enough to drop your usage below the desired threshold. What's oldest could be from yesterday, or from a decade ago.

WARNING: ONLY USE THIS SCRIPT IF YOU HAVE MIRRORED CACHE, OR REALLY DON'T CARE IF YOU LOSE ITEMS HELD IN CACHE. (Without mirrored cache, items kept in cache are not protected from drive failure.)

The Mover Tuning plugin supports two features we can use to make this happen:
1) Run a script before move
2) Don't move any files which are listed in a given text file.

So, all we have to do is make a script that puts the newest files into a text file.

Setup

Copy the script into your appdata or somewhere (preferably someplace that won't get moved by mover).
You may need to chmod the file to make it executable. I recommend appdata, because if you put it in data it may either get moved by the mover, or reading the file will spin up a drive when maybe there is nothing to actually move.

Modify the variables at the top of the file as needed. Add the script to mover tuner to run before moves. Set mover tuner to ignore files contained in the output filename.

If you run the script by hand (bash moverignore.sh) you can see the files that it will keep on the cache. You can then run the following command to test that mover will not move the files in question (files in the ignore file should not appear in the output of this command):

find "/mnt/cache/data" -depth | grep -vFf '/mnt/user/appdata/moverignore.txt'

You can also run mover on the command line, and verify that it does not move any of the listed files (but continues to move unlisted files).

Additionally, if you want to manually move a directory off of cache you can run the following command:

find /mnt/user/cache/DirectoryNameGoesHere* -type f | /usr/local/sbin/move &
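Another quick sanity check: total up the size of everything the ignore file names and confirm it stays under the MAX_SIZE cap from the script. A minimal sketch (the helper function name is mine; the path matches OUTPUT_FILE in the script):

```shell
#!/bin/bash
# Sum the sizes (in bytes) of every existing file listed in the
# mover-ignore file, one path per line. Stale entries are skipped.
sum_listed_sizes() {
    local list="$1" total=0 f size
    while IFS= read -r f; do
        [ -f "$f" ] || continue            # skip entries that no longer exist
        size=$(stat --printf="%s" "$f")
        total=$((total + size))
    done < "$list"
    echo "$total"
}

IGNORE_FILE="/mnt/user/appdata/moverignore.txt"   # OUTPUT_FILE from the script
if [ -f "$IGNORE_FILE" ]; then
    echo "Total bytes kept on cache: $(sum_listed_sizes "$IGNORE_FILE")"
fi
```

If the reported total ever exceeds MAX_SIZE, the list was generated against files that grew afterwards (e.g. in-flight downloads), which is worth knowing before the next mover run.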
  4. Thread necromancy here, but I'd just add that while what you say about torrents is true, there are benefits for usenet, as having instant imports from the arr apps into media shares is quite nice. But the tradeoffs you mention are indeed important to realize.
  5. Well, you were right. I think the problem was actually ram. Not bad ram, but I had (foolishly, I know) mixed the original 4gb chip with a 16gb addon. In retrospect I know ram should be matched; I was just being dumb. With either chip (but not both) the system is fine. The weird part is still the pattern of instability though: only that overnight crash. If the ram was causing problems, I would expect the instability to come at times of heaviest usage.
  6. Hrm, I'm not sure what the plugin is doing today, but if anything I came up with is useful to you, please do incorporate it. This is just sorting all of the files in the relevant dir by date, then listing the files up until the quota to keep on cache is filled. Step 4's first pass keeps track of hardlinks so they aren't double counted against the quota. Then step 4a lists all the files, including all of the hardlinks, to be ignored by the mover. I believe the mover is breaking the hardlinks, moving all the files (twice for hardlinks?), then repairing the hardlinks. It doesn't seem like what I came up with helps with that, but I'm just dipping my toes in.
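The hardlink accounting above relies on the fact that hardlinks share a single inode, and that `find -samefile` lists every path pointing at it. A quick self-contained demo (uses a temp dir, safe to run anywhere):

```shell
#!/bin/bash
# Demo: a hardlink shares its inode with the original file, so counting
# by inode avoids charging the same data against the quota twice.
dir=$(mktemp -d)
echo "payload" > "$dir/original"
ln "$dir/original" "$dir/hardlink"

inode_a=$(stat --printf="%i" "$dir/original")
inode_b=$(stat --printf="%i" "$dir/hardlink")
echo "inodes: $inode_a $inode_b"   # same number twice

# find -samefile prints every path that shares the inode,
# which is exactly what step 4a appends to the ignore file.
find "$dir" -samefile "$dir/original"

rm -r "$dir"
```

This is why the ignore list must contain every hardlinked path, not just the one that was counted: the mover matches on paths, not inodes.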
  7. I believe I have solved the hardlinks problem. Their space will not be double counted, and all copies will be listed. Pinging people from this thread that expressed interest in this idea previously: @NGHTCRWLR @Andiroo2 @dopeytree See two posts up for more details. (Script moved to standalone guide post.)
  8. The point of my script is that it will dynamically exclude the newest files. To get similar behavior out of the other script, you would have to constantly edit it to exclude the new files.
  9. Like many others, I have wanted to use mover tuner to keep my cache full, so that more recent media is served from cache. This can give better performance and reduce disk spin-ups. @hugenbdd said he was going to work on a more fully featured script to do this, here. But I am impatient. Mover tuner already supports ignoring files which are listed in a config file. Tuner also already supports running a script before running the mover. So, all we need is a script which produces a list of files we wish to not move. Details of how to do this moved to the below link for better visibility and maintenance. [GUIDE] HOW TO KEEP CACHE DRIVE FULL OF MEDIA
  10. I'd just do a nightly backup to array and skip the backup nvme, or use it in a mirror. The appdata backups can be done with the CA Backup plugin. Note that even though you think you copied it to the array, it may still be on the cache due to mover shenanigans, so time the backup for before the mover runs.
  11. This works with nothing special. Put a fresh copy of unraid on a usb key and boot. Assign the drives and everything will work without a rebuild. If you use parity2, make sure you put all the drives in the same slots as they were before (so take a screenshot first). As JorgB said, if you copy the super.dat file, it will preserve your disk setup automatically.
  12. Will do. The content on the server is replaceable, so I'm comfortable running without parity for a while. I'm at 48 hours of uptime now; I'll let it run through at least the weekend before I touch it again. But the crashes prior were VERY reliable.
  13. I had a similar issue, but just clicking a bunch at various places in the text box eventually let me type into it.
  14. I've made a new post, in support rather than bug, as I have identified the root cause as my parity drive.
  15. I have been having nightly crashes for months now, only at night. I have done all the standard things: turning off all dockers and plugins, reinstalling from scratch, trying old versions, 24-hour memtests, etc. Nothing worked. I started pulling drives, and BOOM, back to stability.

The drive causing the problems is a WD 16TB HDD, Ultrastar DC HC550 SATA 7200RPM. But I think lots of people use this drive or something similar. It's being used in an Asustor Lockerstor 4 Gen2 AS6704T.

The instability ONLY happens at night, generally between 11pm and 3am my time. It does NOT consistently line up with mover or plex jobs, etc. It feels like some sort of unraid task kicking off, or some kind of hibernate in the drive after NAS usage drops off for the night. It NEVER happens during the day, even if I'm heavily streaming via plex, downloading data, or manually kicking off mover.

What I have observed:
* Pulling the drive completely gives stability.
* Having the drive in but not mounted at all gives stability (I need to double confirm this with a longer test, but I got to 48 hours last time before I mounted it in unassigned drives, see next item).
* Having the drive mounted in unassigned devices or as a data disk gives the nightly crash, even if it's not being used (no data on drive).
* Having the drive mounted as a parity drive gives the nightly crash UNLESS a parity check is being performed. (This imo is more evidence towards some sort of hibernate.) (Need to double confirm this too.)
* Turning off spin-down does NOT resolve the issue.
* I have c-states disabled, and am not using s3 sleep or anything.

If it's a bad disk, while I'll be disappointed in the wasted $, it's not the end of the world. But because of the pattern of instability, I'm concerned it's not a bad disk but some kind of incompatibility or configuration issue. That makes me concerned about other disks I may purchase that may also show issues.

Other drives I have in the system that are not causing issues:
* Sandisk 3.2Gen1 30gb (unraid flash)
* 2x Samsung 970 EVO Plus NVME (btrfs cache)
* MaxDigitalData 14tb drive (xfs data)
* Seagate Ironwolf Pro 16tb (xfs data)

Currently running without parity, as I have no disks big enough other than the problem child.

Instability comes in a variety of symptoms; there does not seem to be a strong pattern:
* Sometimes I get a runaway CPU. When this happens it is often aligned with X:47, which is the hourly cron jobs, but it happens even with no plugins/dockers, so I don't think the cron jobs are really doing anything other than "waking up" and then noticing a problem. Via netdata/newrelic, the problem processes are "system", often shfs. It's never in docker or my containers.
* Sometimes OOM errors in syslog.
* Sometimes just random kernel dumps.
* Often (but not always) when it crashes, the system becomes unavailable to new network connections (confirmed via inbound and outbound uptime monitors). But during the pattern of instability, quite often docker continues to run for some time (sometimes hours): newrelic/netdata continue to collect metrics, and sometimes I can see the arr ecosystem has continued to process feeds and downloads. But other times it crashes hard and there is an immediate cutoff of everything at the same time.

Attached diagnostics are without the drive, as I'm double confirming that I get many-multiple days of stability without it. tower-diagnostics-20240126-0953.zip
  16. The push method of uptime to StatusCake will not need a maintenance window, as cron keeps running even if the dockers are off. But that will not correctly notify you if your inbound networking dies. You could also reduce the likelihood that StatusCake reports an outage by using an "alert delay" longer than CA Backup takes to back up that site. A delay of 5 min or whatever of downtime is not going to really hurt you for unraid; this isn't mission-critical stuff.
  17. I do something similar to ConnerVT, except I use StatusCake. I have a check coming in to hit my Plex docker, and also an outbound heartbeat that I run from UserScripts.
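An outbound heartbeat like that is just a scheduled curl against the push URL StatusCake generates for the check. A minimal sketch (the URL below is a placeholder, not a real key; the function name is mine):

```shell
#!/bin/bash
# Minimal outbound heartbeat for a StatusCake push check.
# PUSH_URL is a placeholder; substitute the URL from your own check's settings.
send_heartbeat() {
    # -f: treat HTTP errors as failures, -sS: quiet but still show errors,
    # --max-time 10: a hung request never blocks the next scheduled run.
    curl -fsS --max-time 10 "$1" > /dev/null
}

PUSH_URL="https://push.statuscake.com/?PK=YOUR_KEY&TestID=YOUR_TEST&time=0"
# Uncomment to actually send (e.g. as an hourly or per-minute User Scripts entry):
# send_heartbeat "$PUSH_URL" || logger "statuscake heartbeat failed"
```

Because the script runs from cron via User Scripts, it keeps reporting even when every docker is stopped, which is what makes the push method maintenance-window-free.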
  18. @space192 if the temps show up in sensors, or some other command line tool, and the fans are pwm controllable, you could write a script that runs on a schedule (1/x min), and adjusts the fan speed. Here is a similar script that I wrote for cpu temps that should be somewhat easily tweakable.
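The general shape of such a scheduled fan script: read a hwmon temperature, map it through a fan curve, write the pwm value. A hedged sketch; the sysfs paths and the curve endpoints below are illustrative (find your own device's paths via `sensors` and /sys/class/hwmon/), not taken from my cpu script:

```shell
#!/bin/bash
# Sketch of a temperature-driven fan curve for a pwm-controllable fan.
# Both paths are assumptions; adjust to your hardware.
TEMP_FILE="/sys/class/hwmon/hwmon0/temp1_input"   # millidegrees C (assumed)
PWM_FILE="/sys/class/hwmon/hwmon1/pwm1"           # accepts 0-255 (assumed)

# Map degrees C to a pwm value: quiet floor below 35C, full speed at 65C,
# linear ramp in between (integer arithmetic).
temp_to_pwm() {
    local temp=$1
    if   (( temp <= 35 )); then echo 80
    elif (( temp >= 65 )); then echo 255
    else echo $(( 80 + (temp - 35) * (255 - 80) / (65 - 35) ))
    fi
}

# Only touch sysfs if both files are actually present and writable.
if [ -r "$TEMP_FILE" ] && [ -w "$PWM_FILE" ]; then
    temp=$(( $(cat "$TEMP_FILE") / 1000 ))
    temp_to_pwm "$temp" > "$PWM_FILE"
fi
```

Run it from User Scripts on a custom cron schedule (e.g. every minute) so the fan tracks temperature without a daemon.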
  19. Unfortunately it's an asustor prebuilt nas, so my ability to diagnose/fix hardware is a bit limited. I will turn off each of the NVMEs in turn to see if that fixes it, and if neither does, try without cache/parity for a day. I thought the 3 hour memtest would be enough, but I can run it for longer. I had previously (months ago) been playing with enabling c-states, and did notice that when they were enabled, stability was VERY bad (multiple crashes per day). I thought I had disabled all of that, but I will take a second look. The part that is really confusing to me is why it only crashes at night. If it was hardware, I would expect it to happen randomly at any time of day, or especially during high usage.
  20. The cpu increase started at 11:47, and that's right when "hourly" tasks are started per cron. But it's interesting to me that this problem does not happen every hour. I do have the userscripts plugin, and there is one hourly task, but it is a single-line script that runs curl to update duckdns. The /etc/cron.hourly dir has a single script called user.script.start.hourly.sh, which was pre-existing. It calls /usr/local/emhttp/plugins/user.scripts/startSchedule.php. From what I can read, that just calls the hourly user scripts. And as I said, I only have the one.
  21. I have a media center NAS. It works flawlessly all day long for plex, transfers, general utilities, etc. Even under quite heavy load (4-5 streams) it chugs along. But it "crashes" every night. I've done extensive diagnosis and experimentation, and eliminated almost everything, including rebuilding the config from scratch multiple times. I just installed newrelic for additional monitoring, and I believe it finally found the culprit. (Unfortunately netdata does not do a good job of tracking arbitrary processes that aren't in its list, otherwise this could have been a much quicker journey.)

The crashes generally occur between 11pm and 3am my time, though a few have happened as late as 5-6am. Notably this is when the server is most idle, so it may have something to do with either sleep kicking in, or some sort of scheduled task that waits until the system is unused. They are not always actually "crashes". In the example from last night, the server became unresponsive at 11:52pm (per a user script that pings StatusCake once a minute), but as you can see in the netdata and newrelic graphs, both dockers continued to collect metrics until I rebooted the server in the morning. This does not line up with the mover or plex scheduled task windows (1am and 2am respectively).

startBackground, monitor, smartctl_type, php, startSchedule.p and samba-dcerpcd all kick in at roughly the same time and eat all the CPU.

This morning I ran smart self-tests on all the drives in the system (array and cache) and all completed without error. And as I said, even under heavy disk usage everything works fine. I have run a memtest for 3 hours, which was clean. I've also tried running the server with only the original 4gb that the machine came with. tower-diagnostics-20240113-1300.zip syslog-192.168.50.106.log
  22. I noticed today that my docker network was set to macvlan. When I stop the service and open that drop-down list, ipvlan is disabled. I'm not sure when the change happened, what triggered it, or why I can't change back. One possibility is that I installed the "RTL8125 Drivers". So far I have not seen any problems that I'm aware of, but as I previously had stability issues, I'd like to make sure I'm in as stable a configuration as possible, to eliminate any possibilities or hypotheticals in case I do have issues in the future.