
  1. I tried this tonight and it appears to work, no errors. The backup drive did spin up, and the run took about 5 minutes or more (I didn't time it). During most of that time, the drives containing the original data stayed spun down; somewhere near the end, I'm not exactly sure where, they spun up. Is this because Cache Dirs is turned on, so the script was scanning the backup location against the cache and only spun the drives up when it actually found something to copy over? If not, why? Is there any way to be sure it's running properly? No errors were returned. This is all I got at the end of the 5+ minutes, the same two lines I get from the old script too: "Script location: /tmp/user.scripts/tmpScripts/New RClone/script" and "Note that closing this window will abort the execution of this script." Let me know what you think. Any way to be certain it's working correctly? Thanks for the help, it's looking good. I know it took a little time to figure all that out.
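One hedged way to confirm a quiet run really did the right thing (assuming the script is plain rclone sync lines, as in the earlier posts): rclone's `--dry-run` and `-v` flags make it report every file it would copy or delete without changing anything. An empty report means the two locations already match, which is why a healthy run can look "silent". A minimal sketch, guarded so it is a no-op on a machine without rclone installed:

```shell
#!/bin/bash
# Sketch: preview one of the nightly sync lines without touching any data.
# --dry-run makes rclone only REPORT what it would transfer or delete;
# -v prints one line per affected file.
if command -v rclone >/dev/null 2>&1; then
    rclone sync -v --dry-run "/mnt/user/Backup Items" "/mnt/disks/Data_Backup_1/Backup"
else
    echo "rclone is not installed on this machine" >&2
fi
```

Run with the backup drive mounted: lots of output means the locations had drifted apart; no file lines means they already match.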
  2. I will try this out tonight and report back, thanks!
  3. Trying to drum up some help with this again. Can you please see my response above? Thanks in advance.
  4. Here is what I got: I don't know anything about the added "mountpoint" language, so I'm not going to attempt to work on this one alone.
  5. Thank you for that. So that script runs the same backup/sync and will do nothing if it can't find the backup location?
  6. Sorry I've been inaccessible during the past couple of days. I have User Scripts run these four nearly identical rclone lines daily. They are "sync'd", so the backup location both receives copies and has deletions applied, keeping it identical to the original location:
     #!/bin/bash
     rclone sync "/mnt/user/Backup Items" "/mnt/disks/Data_Backup_1/Backup"
     rclone sync "/mnt/user/Media/Music" "/mnt/disks/Data_Backup_1/Music"
     rclone sync "/mnt/user/Media/Music (uncat)" "/mnt/disks/Data_Backup_1/Music (uncat)"
     rclone sync "/mnt/user/SACD" "/mnt/disks/Data_Backup_1/SACD"
     The script you have written above mine is a little above my pay grade. It looks like you are setting a location to be known as "mountpoint" and then putting conditions on it, allowing the sync to run only when that location actually exists and otherwise killing the script. I don't know the command line well enough to fully follow it. Can you show me how to apply it to the four lines above? Also, what is the purpose of the "ping" line for cnn.com? Is that actually supposed to be there? I know what ping does, but I have no idea what it could have to do with rclone. I should be more accessible, with limited interruption, the rest of tonight and tomorrow. Thanks again for the help.
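In case it helps to see the idea applied directly, here is a hedged sketch of those same four sync lines wrapped in a mount check. `mountpoint -q` is a standard Linux (util-linux) command that exits successfully only when the path is a real mount point, so if the backup drive is absent the script prints a message and syncs nothing, instead of writing into an empty folder on rootfs:

```shell
#!/bin/bash
# Sketch: only run the four nightly syncs when the backup drive is mounted.
BACKUP="/mnt/disks/Data_Backup_1"

backup_mounted() {
    # Exits 0 only if the given path is an actual mount point,
    # i.e. the external drive is really there, not just an empty folder.
    mountpoint -q "$1"
}

if backup_mounted "$BACKUP"; then
    rclone sync "/mnt/user/Backup Items"        "$BACKUP/Backup"
    rclone sync "/mnt/user/Media/Music"         "$BACKUP/Music"
    rclone sync "/mnt/user/Media/Music (uncat)" "$BACKUP/Music (uncat)"
    rclone sync "/mnt/user/SACD"                "$BACKUP/SACD"
else
    echo "Backup drive not mounted at $BACKUP - nothing synced." >&2
fi
```

This is a sketch of the guard technique being discussed in the thread, not the exact script the other poster wrote; the "ping" line in their version is presumably a separate connectivity check and is not needed for a local-drive backup like this.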
  7. I just now looked, and it did NOT error out this morning. The preclear is nearly complete and still running, and there are no errors on the page request with rclone disabled. Thank you for helping to spot this issue. Obviously I can manage this around preclears, but to avoid the odd occurrence, is there a way to set up rclone so it just stops or errors out instead of filling up rootfs? Please let me know, but I won't be able to respond or work on anything with this until tonight. Thanks again!
  8. How much load should pre-clearing two drives at once put on my processor?
  9. Well, the first round of testing says rclone is the issue. It was left on last night, with no preclear/SMART test running and no backup drive present. This morning there was no GUI response and rootfs was full. I'll do the reverse tonight, turning off rclone and trying to get a preclear to complete, but this appears to be the issue. Assuming that's true, what can be done with rclone to get it not to write to rootfs? Is there something that can be put on the command line, or is it in the software?
  10. I think my plan for testing is this: 1. Tonight I'll leave rclone engaged with no backup drive present, but without either a preclear or the SMART test running. If rclone is the problem, I should still have an issue with the GUI page request tomorrow and should see rootfs being full. The reason for this step is my partial belief that this condition has already occurred just by happenstance during all the recent testing scenarios. It wasn't an official test and I don't recall getting errors, I just can't say for 100%. 2. Assuming I DO get an error tomorrow morning when accessing the GUI, and see the memory full, then tomorrow night I will run the SMART test with rclone turned off. If that produces no error, then I think we have a definitive answer. Any thoughts?
  11. Well, that sounds like a promising source of the problem then. I have disabled rclone for now and will run a test of this soon (tonight or tomorrow night). Is there any way to set up rclone so it won't attempt to write a backup to memory in the event of a failed drive?
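One belt-and-braces idea (my own sketch, not something rclone provides as a built-in): when the external drive is missing, /mnt/disks/Data_Backup_1 is just an ordinary folder on rootfs, so it reports the same filesystem device number as "/". Checking that with `stat` before syncing catches the absent- or failed-drive case before any data lands in RAM:

```shell
#!/bin/bash
# Sketch: refuse to back up when the destination sits on rootfs.
on_rootfs() {
    local dev
    # A missing path is treated as unsafe (return 0 = "on rootfs / unsafe").
    dev=$(stat -c %d "$1" 2>/dev/null) || return 0
    # Same device number as "/" means the path lives on rootfs.
    [ "$dev" = "$(stat -c %d /)" ]
}

DEST="/mnt/disks/Data_Backup_1"
if on_rootfs "$DEST"; then
    echo "$DEST is missing or on rootfs (drive absent?) - backup aborted." >&2
else
    rclone sync "/mnt/user/Backup Items" "$DEST/Backup"
fi
```

The path names here are the ones from the nightly script; the `on_rootfs` helper is a hypothetical name for illustration.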
  12. ahhhh... RClone runs once a day to backup a handful of folders to an external hard drive and is executed via User Scripts. Yes it probably runs sometime at night, and that drive was disconnected during the pre-clears. I haven't seen any errors due to RClone so it never hit my radar as a cause. Do you think this would fill up the temp file location? Is it just because the drive is disconnected or should I suspend the backup during any preclear/extended smart test? If you think so, I will suspend the backup script and try again tonight.
  13. Did you see the screenshot attached to this post? https://forums.unraid.net/topic/78318-drives-dropping-out-of-array-into-ud-split-from-preclear-results/?do=findComment&comment=726573 I missed it before, but it does show rootfs going from 10% full to 100% full. The question is: what about running a preclear or an extended SMART scan is causing rootfs to fill up? One thing I can try is to run a SMART scan and watch how fast it is actually filling. All I can tell right now is that I initiate one of those processes before going to sleep, and sometime before 4:40 in the morning (which is when unRAID tries to write to that file) it fills up and the errors commence.
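To time the overnight fill-up, a small sampler can log rootfs usage once a minute while the preclear or SMART test runs. A hedged sketch; the log path below is hypothetical, so point it at any array share so the log itself doesn't land on rootfs:

```shell
#!/bin/bash
# Sketch: record rootfs usage with a timestamp so the fill-up can be timed.
log_rootfs_usage() {
    # Appends a line like "2019-06-04 23:10:00 12%" (rootfs Use%) to the file.
    printf '%s %s\n' "$(date '+%F %T')" "$(df -P / | awk 'NR==2 {print $5}')" >> "$1"
}

# Example: one sample per minute for 8 hours, started before bed
# (the log path is an assumed location - use any share on the array):
#   for i in $(seq 1 480); do log_rootfs_usage "/mnt/user/rootfs_usage.log"; sleep 60; done
```

Comparing the timestamps in the log against the preclear/SMART start time should show whether rootfs fills gradually all night or jumps at one moment.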
  14. OK, will do, Tuesday night. It's a slow process only being able to truly test once a day at 4:40 in the morning. I thought I was going to be able to pull diagnostics; it actually started the process but timed out, and wouldn't even start on a second attempt. The server did respond to a "reboot" command in the GUI this time, so I didn't have to use the power button to reboot.