Everything posted by BLKMGK

  1. I can test on Chrome but it'll use the same engine, I believe. I don't know how to access the debug console on a tablet browser. Honestly it doesn't feel like a crash, just an inability to display the dialog. I'll be home tonight and can check and troubleshoot it further. My tablet is an iOS update behind, my phone is current, so I can check both for you. Once I kick it off from my desktop I can access it from anywhere; it's starting it that's the issue. When I refresh I do see the following message: {"message":"Not Found"} But removing the /transfer from the URL brings the page up fine. I'll post an update tonight with what I see. I have a big move from this morning running for another hour or so as I clean drives for crypto. Your tool is far superior to trying to do this with Krusader as that container tends to freeze and move data slowly, rsync rocks! I look forward to using Gather when this is done, too...
  2. Not much to send honestly. I select a drive, it expands, I select a folder (expands fine), select a target drive, Plan is lit, I press Plan and it goes dark while the colored bars in the top right begin “playing”. I never see a dark box displaying progress with the planning as before, the Plan button stays dim, and the page is unresponsive until I refresh which starts it all over again. Does that help? Certainly willing to test if you need the help! I’ll check it on my desktop when I get home too.
  3. Did something change with this release regarding how the Plan pops up? I used to be able to use this from my iPad but after the last release hitting the Plan button starts the churning in the upper right but I never see the plan and cannot advance to begin the move. I am sad! Lol
  4. Thanks for updating the plugin to the latest Rclone! Still trying to exclude a dir - argh!
  5. Been beating my head against the wall for a while on a config issue. I've been using this plugin and rclone for months now with great success, first on ACD and now on GSuite. My command line is as follows:

     rm /mnt/rclone-bak.log                  # remove the old backup
     mv /mnt/rclone.log /mnt/rclone-bak.log  # create the new backup
     rclone sync --log-file /mnt/rclone.log --max-size=40G --copy-links --transfers=4 --exclude '/FOO/*.*' --bwlimit=10M --verbose /mnt/user/ google_crypt:/ --checkers=10 -v

     This removes the logfile from two runs prior, rotates the current logfile into its place, then kicks off rclone in verbose mode with output piped to a file. I can tail these files to monitor status or just open them (hint: do not place them in a directory being backed up). The --exclude portion, however, is a real problem... I have recently moved data to a new share we'll call FOO. FOO is about 60 gig of 50KB files; it's not something I require to be backed up, and the tiny file sizes push rclone into spasms so my backup never completes! I'm in a 403 tarpit... How the bloody hell do I skip the FOO directory? I have tried each of the following to no result:

     --exclude '/FOO/*.*'
     --exclude "/FOO/*.*"
     --exclude='/FOO/*.*'
     --exclude="/FOO/*.*"
     --exclude "FOO/*.*"
     --exclude="FOO/*.*"
     --exclude "/mnt/user/FOO/*.*"

     I have probably tried a few other combinations too! I have been told that the path should be relative to the one given to rclone for backup, so I think I've got this right by not giving it the full path. I have tried with and without a leading slash, and I have tried double and single quotes - I'm pretty sure single is correct. For bonus points, there's another share I'd ALSO like to skip, but it's much less critical as it's simply a working directory.
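     For anyone else fighting this, a sketch of what should work here, assuming rclone's documented filter rules: patterns are matched relative to the source path (/mnt/user/), a single * stops at directory separators, and ** crosses them, so '/FOO/*.*' never matches files in subfolders of FOO or files without a dot in the name.

     # Sketch of the same sync with the whole FOO share skipped; assumes
     # rclone's standard filter rules where "/FOO/**" is anchored at the
     # source root (/mnt/user/) and matches everything beneath FOO.
     rclone sync /mnt/user/ google_crypt:/ \
         --log-file /mnt/rclone.log \
         --max-size=40G --copy-links \
         --transfers=4 --checkers=10 \
         --bwlimit=10M --verbose \
         --exclude "/FOO/**"
     # Adding --dry-run first shows what would transfer or be deleted
     # without touching the remote, which makes testing exclude patterns cheap.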
  6. Yay! It's been a year or two or four but this is terrific news! I look forward to seeing how this goes. I suspect it won't satisfy everyone no matter how it's done but this will be great. :)
  7. So far I've yet to get my firewall working right and just haven't had time for it, but NextCloud was pretty easy overall. I'll likely not go so far as to use LetsEncrypt with it though. Perhaps I'll try after the darn firewall is solved - a PITA being double-NATed for sure!
  8. I'll look into changing the port; I tinkered around with an NGINX config file and made it angry, so obviously that wasn't right lol. I'll see what the docs say, thanks! Otherwise, my Docker is running fine near as I can tell. Having issues getting my pfSense firewall to pass HTTPS to it, which is why I wanted to swap ports; I think it's a config issue with the firewall though. Inside my network it works fine - including the mount point! Go to the Docker tab in unRAID and click on the link for the NextCloud Docker; this gives you a config tab. Appdata is where NextCloud stores data, and NextCloud data is where a mountpoint can be defined - it's possible to browse your datastore to choose where data comes from - mine is within an unRAID share. Is that what you were looking for? You can also define the port the Docker is allowed to use; any network config change inside NextCloud will likely require a change here too in order for it to network. That help any? Edit: Ah wait, you're suggesting something called Cozy for personal cloud? That's one I hadn't heard of. I'm trying to get NextCloud working; so far the Docker has worked great, my firewall not so much lol
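     For anyone following along, here's roughly what that template boils down to as a plain docker run line - the image, host port, and share paths are illustrative assumptions (this sketch assumes the linuxserver/nextcloud image), not the exact unRAID template values:

     # Hypothetical equivalent of the unRAID template: web UI remapped to
     # host port 8443, config kept in appdata, data pointed at an unRAID share.
     docker run -d --name=nextcloud \
         -p 8443:443 \
         -v /mnt/user/appdata/nextcloud:/config \
         -v /mnt/user/nextcloud_data:/data \
         linuxserver/nextcloud

     Changing the -p mapping is only the Docker-side half of a port swap; NextCloud's own config (and the firewall rule) has to agree with it, which matches what I ran into above.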
  9. Letsencrypt is just for the SSL certs, right? The biggest hassle I see with that is that they constantly expire and must be renewed; is that what the hassle was of figuring them out? Any tips for Nextcloud? How secure is it, since I'll likely have to poke a hole in the firewall to expose the service? Last thing I want is someone roaming my network lol! I have DynDNS working for VPN services so that part isn't bad and I assume is required? Any issues upgrading from one release to another? Are you on 11 now?
  10. I see threads about Owncloud, NextCloud, and I've seen Seafile mentioned - likely others out there. I'd like to run a cloud of my own such that when I take pictures or receive files on my iOS devices they end up stored in MY cloud, not DropBox or Amazon or Apple's data center. I back up my server with rclone to ACD now with everything encrypted; I just want to create my own space. Which package is working best for this? Security issues? I can run it as a Docker on unRAID or as a VM in ESX, or heck, even a Docker in a VM on ESX lol. I'm interested in hearing what people have found works well, something that supports iOS for sure and also supports Android too. Thoughts? Reading the various threads I wasn't really getting a clear sense of feature differences or of issues, so if someone could break it down some I'd appreciate it!
  11. Updated from 6.2.4 to 6.3.1 and found just one issue so far - my Cache SSD got dropped and was being picked up by Unassigned Devices. Spotted it pretty quickly, stopped the server, selected it, and all seems well after starting the server service back up. One thing for folks to do after updating is run a "check for updates" on Plugins as they don't have status on first boot, no biggie. So far so good here!
  12. Currently working very well for me nightly with User Scripts; I managed to clear the long filenames and it was a PITA, but it's now done. This runs maybe an hour or three a night and updates Amazon fine. The first time it ran it deleted a good TB and a half off of Amazon from all of my previous attempts and files moving around. It's now run multiple nights in a row without issue; it starts up at 4:40am and I'm not sure how to change it, but it's working lol. For the logfile, I simply keep it at a higher level than I'm backing up - at the /mnt level. I did modify the calling script to rename rclone-bak.log, then move rclone.log to rclone-bak.log, so I have two nights' worth of logs I can look at if something weird happens. Currently sitting at about 27.1TB backed up and about 520K files, give or take a few K.
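      If it helps, my understanding is that the User Scripts plugin's "Custom" schedule takes a standard cron expression, which would be the way to move that 4:40am start - the alternative time below is just an example:

      # cron fields: minute hour day-of-month month day-of-week
      40 4 * * *     # the current 4:40am nightly start
      15 2 * * *     # e.g. 2:15am nightly instead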
  13. Alright, after having to severely slash some filenames and move the logfile out of the range that's being backed up I'm finally able to have target files deleted that are no longer needed. I'd really appreciate some help from anyone that has exclusions working. Next up is scheduling this to run regularly. Hopefully User Scripts handles it - fingers crossed!
  14. Yup, Amazon! I've just done a bit of hack and slash with a bulk renamer, so we'll see what fails this round. If all of the lengths check out it will still fail, however, since it will be unable to avoid trying to back up the changing log file. Has anyone got excludes working properly? I've got a work directory or two I'd like to skip and that log file for sure! Deleting the extra crap would sure be nice, and that won't occur without a full run free of I/O errors, it says... Edit: 4 more and counting! Some of these I've edited two or three times now and they have names shorter than others. The crypto must occasionally get a wild card and screw them up!
  15. Here's an example of one file that's just failed. I'm still getting over 100 failures despite having modified every file that was claimed to fail on the last run!

      BlackHat/Defcon/DEF CON 9/DEF CON 9 audio/DEF CON 9 Hacking Conference Presentation By Daniel Burroughs - Applying Information Warfare Theory to Generate a Higher Level of Knowledge from Current IDS - Audio.m4b

      Another:

      BlackHat/Defcon/DEF CON 19/DEF CON 19 slides/DEF CON 19 Hacking Conference Presentation By - Kenneth Geers - Strategic Cyber Security An Evaluation of Nation-State Cyber Attack Mitigation Strategies - Slides.m4v

      These don't show up with a find as short as 225 chars, so I'm a bit frustrated and am wondering if there's something in the name of the file itself that's tripping these up, except I see little in common. The error is:

      u0027 failed to satisfy constraint: Member must have length less than or equal to 280
  16. Finally have my entire backup nearly done! However, while the backup has been occurring I've been adding files, moving files, and doing what gets done with a NAS like unRAID. I've also run into issues with files having pathnames too long. For folks who may run into this, the following command will find files that are too long so you can rename them. Run it from a terminal or SSH session in the /mnt/user directory and it'll return the names of files whose paths exceed 230 characters:

      find -regextype posix-extended -regex '.{230,}'

      Note that the max allowed length is 280, but for some reason quite a few extra characters seem to get tacked on and I've had to search for lengths longer than 230. I had over 130 instances of this, no fun to clean up! I have one last issue - exclusions! I run this using User Scripts with the following custom script:

      rclone --log-file /mnt/user/work/rclone.log --max-size=40G --transfers=2 --bwlimit=8.8M sync /mnt/user/ crypt: --exclude="work/HB" --exclude="work/rclone.log" --exclude="work" --checkers=10

      "work" is a shared mount under /mnt/user and I'd like to exclude it. When I'm compressing video or doing random downloads this is where that winds up - it's also where I have a running log for rclone being created. Because the rclone log is constantly being updated, when rclone tries to upload it the file changes and I get an error. If rclone finds an I/O error like this it refuses to do deletions on the target. Since I've had some pretty hefty files get uploaded from my work directory that I don't want hanging around, I need that full sync to occur. So, how best to exclude "work"? I've tried "/mnt/work/" and I've tried what's above. I'm going to try "/mnt/work/" again as this seems like it ought to work and rclone has revved a few times since I last tried it. I've got it running now with just one error found so far - the darned log file! All of my name changes seem to have taken - whew! So close to having a good baseline but so far - I'd appreciate a pointer from anyone who's gotten exclusions to work. Edit: Ugh, even 230 wasn't short enough - I've still got at least 20 files with issues!
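      A hunch on why even 230 wasn't short enough: the crypt remote appears to encrypt each path component separately, and the encrypted name comes out longer than the original, so the 280-character limit effectively bites per file or directory name rather than on the whole path. A sketch that checks individual components instead - the 150-character threshold is an assumed safety margin, not a documented limit:

      #!/bin/bash
      # List path components (single file or directory names) longer than a
      # threshold, since each name grows when the crypt remote encodes it.
      LIMIT=150
      find /mnt/user -depth -print0 | while IFS= read -r -d '' p; do
          name=$(basename "$p")
          if [ "${#name}" -gt "$LIMIT" ]; then
              printf '%d\t%s\n' "${#name}" "$p"
          fi
      done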
  17. Yeah, I can see where that could be an issue I hadn't thought of. I currently transfer 9 files concurrently and have 15 threads checking files, as I understand it. I do have lots of files now, but once the initial upload is done I think it'll be mostly large files from media and backups. It'll be 20 or 30 days before I get there, but I'll try to report any bizarre behavior when I do!
  18. Your stop script looks better than what I'm doing. User Scripts terminates the top-level script but not the rest of the rclone scripts, which I have to kill by hand. I expect I'll be using yours soon lol. I'm still doing my initial upload and am at 10TB, but when complete I'd like to schedule it too. I like what you've done!
  19. An observation: Amazon supposedly has a 50-gig per-file limit. Some of my backups were far larger, so I set them to split at 49 gig and rclone to stop at 50 gig. I observed 3 of these files proceed to 100%, hang, and then start over again at 0%! I've now reduced the rclone max to 40 gig and will try lowering the split size further to, say, 10 gig, as it takes ages at the higher numbers and failures kill throughput.
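      For anyone splitting the same way, a sketch of producing the smaller pieces before rclone ever sees them - the tar | split pipeline, the paths, and the 10G size are all illustrative assumptions (a backup tool with its own volume-size setting would replace this):

      # Archive a share into 10GiB pieces so no single object nears the
      # per-file limit; a failed upload then only costs one piece, not 50 gig.
      tar -cf - /mnt/user/BigShare | split -b 10G - /mnt/user/backups/BigShare.tar.part.
      # Restore by concatenating the pieces back into tar:
      #   cat /mnt/user/backups/BigShare.tar.part.* | tar -xf -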
  20. There's a new version out but I've not been able to upgrade yet - anything significant? I've got a couple of 50-gig files that get as far as 70% complete and then something like my router falls over and breaks the connection, or I need to stop things to put in a new drive. Picked up a ton of drives on BF and I'm slowly putting them in; at this rate these files are going to take forever lol
  21. I track progress by knowing roughly how much I need to back up and visiting the ACD web page to see how much has been pushed up (6TB of 32). To monitor what it's up to, I output to a log and then SSH in to tail the log. Using this I've noted errors with file lengths, and issues when the case of file names changes, which is pointed out but not uploaded. I've had issues killing the process occasionally, but pulling up htop I can kill it once I identify the main process. Oh, and if you have the log file in the path it's backing up, it'll error.
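      A minimal version of that monitoring, assuming the log lands at /mnt/rclone.log as in my later script; the grep pattern is just one way to surface the failures:

      # Follow the running log and show only errors/failed transfers:
      tail -f /mnt/rclone.log | grep -iE 'error|failed'
      # If it won't stop cleanly, killing by process name also works:
      #   pkill rclone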
  22. Was this ever taken further? Using rclone right now but wondering if Duplicati might be more user friendly to control and monitor.
  23. Yup, sure is. That's what I suspect may be adding characters actually. Looked at Duplicati? Seems to do the same things as rclone except for mounting and might be more user friendly overall. I say that 4TB into an rclone backup lol.
  24. I understand the character limit is an Amazon limit; however, I *think* my files aren't as long as that to begin with. Is it possible that rclone changes the filename length? I'll try to track down the files when the run is done, but I'm up to over 95 files now, which seems a bit extreme. What's the filename limit in unRAID? EXT4 ends at 255 and Amazon is claiming 280, hence my question...
  25. Seeing some errors from rclone; from the looks of it, files with long paths trip up ACD? The limit of a file path appears to be 270? I cannot tell if rclone makes longer paths when it encrypts, but I think that might be the case? I'm going to wait until my entire job is done before investigating, but so far I appear to have run into this about 75+ times. Anyone else seeing similar?

      Failed to copy: HTTP code 400: "400 Bad Request": response body: "{\"logref\":\"8051344f-b3fa-11e6-9046-fd50d35e6b20\",\"message\":\"1 validation error detected: Value \\u0027te0v5i0g8ro7ft8uqhor5072ft4ei4bi7543loomvfpc5v76s0uaeoo563ev87chlhua9f0q5fo5d53vp6p5l5u91athf57motjfm01upet75li1vhc67v93ahli3p95jfrgvevon6lhabn4jcuukb7ti78hd7qrvj5cjb00bjd6h88pad79rdmmjtltnj2tv236sttpdjkfntmdk1piubcud9qd290dn9bvmuf4mf2gkv0dmh249qon08nm2v8t9gq5ojjls5r0urmau7j5os9p2s1r0ffklonjerih6hjve1cr5ir0\\u0027 at \\u0027name\\u0027 failed to satisfy constraint: Member must have length less than or equal to 280\",\"code\":\"\"}"
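      In case it helps anyone hitting the same wall: my reading of the rclone crypt docs (hedged - I may be off on the details) is that standard filename encryption pads each path segment to a 16-byte boundary and base32-encodes the result, so every individual name grows by roughly 60% plus padding before Amazon ever sees it. A back-of-the-envelope estimator, purely illustrative:

      #!/bin/bash
      # Rough estimate of the encrypted length of one file or directory name
      # under crypt's standard name encryption (pad to 16 bytes, then base32).
      # An approximation for planning renames, not an official formula.
      name="DEF CON 19 Hacking Conference Presentation By - Kenneth Geers - Strategic Cyber Security An Evaluation of Nation-State Cyber Attack Mitigation Strategies - Slides.m4v"
      len=${#name}
      padded=$(( (len / 16 + 1) * 16 ))     # pad to the next 16-byte block
      encrypted=$(( (padded * 8 + 4) / 5 )) # base32: 8 bits in, 5 bits per char
      echo "$len chars -> roughly $encrypted encrypted chars (ACD rejects > 280)"

      By that math a single name only needs to be around 160 characters before its encrypted form can pass 280, which would explain why trimming whole paths to 230 still wasn't enough.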