Goldmaster

Members
  • Posts: 71
  • Joined: Converted
  • URL: https://Goldmaster.site

Recent Profile Visitors

1035 profile views

Goldmaster's Achievements

Rookie (2/14)

Reputation: 6

  1. Can we have a Matrix server as well, please? It can be bridged using https://www.t2host.io/discord/ so that those on Matrix and those on Discord can still talk to each other. Please?
  2. @binhex I have noticed I get this error as well. Any chance of a fix? Or could I just run sudo apt-get install atomicparsley inside the container? (A sketch of what that could look like is below this list.)
  3. I'm not sure what's going on with the S3 Sleep plugin. I have my settings set like this. Yet, when I press Sleep on the main page, it says the system is in sleep mode, then the page refreshes and is back to normal; it's not actually going to sleep. System specs are in my signature below. Unless I haven't set something correctly above, but I want the system to go to sleep at 10pm every day automatically, no matter what. Any ideas, please?
  4. It would be good to edit this post to also consider installing Tailscale, which is built on WireGuard and requires NO port forwarding. @Sycotix has done a video guide on setting it up here: https://youtu.be/nzBQTJ2isOI
  5. One thought I had: something similar to the Unraid GUI mode, where, just like this, the user could be presented with an option to run with light features, or to download the extra features (temporarily or permanently) that enable browsing files and flash features.
  6. Thank you @alturismo, so it looks like I need to adjust my rclone config. I wonder if it's related to this issue: https://github.com/rclone/rclone/issues/3186
  7. Tried that and left it to run while at work, and it's still pretty much the same thing: nothing is getting backed up offsite. Is there any way I could make luckyBackup work with Google Drive directly, maybe an option in the remote options or something? I really wish this could work.
  8. It is set to true, and has been from the start, yet I still get the same issue; this also includes having / at the start of the folder path. Local backup to an external hard drive works fine, and copying from an Unraid share to Google Drive in Krusader works fine.
  9. Thank you @ich777, so in my case I should try running luckyBackup as root, and then maybe try some of these suggestions? https://itectec.com/unixlinux/why-does-rsync-fail-with-broken-pipe-32-error-in-socket-io-code-10-at-io-c820/ I wasn't sure if I needed to set the upload rate limit or something along those lines, but I thought you could only upload 750 GB a day, which is near impossible for me to reach. This is what my rclone drives are set to, unless something there is causing the issue (a sketch of the full mount command is below this list): --allow-other --async-read=true --dir-cache-time=5000h --drive-use-trash --poll-interval=15s --vfs-cache-max-age=504h --vfs-cache-poll-interval=30s --vfs-read-chunk-size=32M --vfs-read-chunk-size-limit=2G --vfs-cache-mode=full --vfs-cache-max-size=50G --vfs-cache-poll-interval=30s --buffer-size=32M. I might try and see if running as root works; I've just done a local backup and some items were not transferred, as I'm not running as privileged.
  10. I was not sure if you can use rclone to mirror to and from; I just thought it was for accessing files from the cloud. But the simple reason is there's no GUI, unless I could download a GUI that can integrate with Waseh's rclone plugin, then I could set up a mirror option (with delete on the other end); see the sketch below this list. Privileged is turned off.
  11. Hi there, I have got luckyBackup running fine. Local hard drive backups work fine. However, backing up to a Google Drive folder mounted in rclone starts OK, but then produces a load of red text and, at the end, the following error: rsync: [sender] write error: Broken pipe (32) rsync error: error in socket IO (code 10) at io.c(823) [sender=3.2.3]. I don't know why this happens, as I can access the Google Drive folders and files mounted in rclone fine in Krusader without issue. I can post logs if needed. I have also noticed errors such as failed: Input/output error (5) next to each file that tries to get copied. I'm wondering if it's something to do with how rclone is set up (some rsync options worth trying are sketched below this list). Any thoughts?
  12. I had /Destination instead of /destination, so when the folder was not found, it would have been recreated inside the docker. Not sure how to turn that off in luckyBackup.
  13. @alturismo just sorted it. What it was: I had a folder called /Destination, which was the old mount point for my external drives before moving to /destination, and luckyBackup will create the folder if it doesn't exist, so luckyBackup had backed up some stuff to the docker image and not the hard drive (a sketch of how to find and clear that is below this list). So now my docker image has gone from 98% full to 80% full.
  14. @alturismo the paths are correct, and they don't go into the docker or anything like that; the files do go to an external drive or to the Google Drive folder mounted in rclone. I can only assume the 3 GB is temp files, which I can't clean up. This is what I have set up for the Google Drive backup.
  15. @alturismo this is what I have set in the docker folder.
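
A minimal sketch of what installing AtomicParsley inside a running container could look like, as mentioned in item 2. The container name binhex-container is a placeholder, the right package manager depends on the image's base distribution, and anything installed this way is lost when the container is recreated:

# Placeholder name; check `docker ps` for the real container name
docker exec -it binhex-container bash

# Inside a Debian/Ubuntu-based image:
apt-get update && apt-get install -y atomicparsley

# Inside an Arch-based image (package name assumed to be the same):
pacman -Sy --noconfirm atomicparsley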
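
For the flags quoted in item 9, here is a hedged sketch of how they might look assembled into a single rclone mount command. The remote name gdrive: and mount point /mnt/disks/gdrive are placeholders, and the optional --bwlimit at the end is only an illustration of capping transfer speed, not a confirmed fix for the broken-pipe error:

# Placeholder remote and mount point; adjust to the actual rclone config
rclone mount gdrive: /mnt/disks/gdrive \
  --allow-other --async-read=true \
  --dir-cache-time=5000h --drive-use-trash --poll-interval=15s \
  --vfs-cache-mode=full --vfs-cache-max-age=504h --vfs-cache-max-size=50G \
  --vfs-cache-poll-interval=30s \
  --vfs-read-chunk-size=32M --vfs-read-chunk-size-limit=2G \
  --buffer-size=32M \
  --bwlimit=8M   # example transfer cap; value is arbitrary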
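
On the mirroring question in item 10: rclone itself can mirror without a GUI, since rclone sync makes the destination identical to the source and deletes destination files that no longer exist locally. A minimal sketch, with placeholder paths and remote name:

# Preview the changes first, then run for real; -P shows progress
rclone sync /mnt/user/backups gdrive:backups --dry-run -P
rclone sync /mnt/user/backups gdrive:backups -P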
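
For the broken-pipe error in item 11, these rsync options are commonly suggested when the destination is an rclone mount; they are things worth trying rather than a confirmed fix, and the paths are placeholders. --inplace writes files directly instead of using a temporary file plus rename, and --whole-file skips the delta-transfer algorithm:

# /source is the share being backed up, /destination the rclone-mounted folder
rsync -av --progress --inplace --whole-file /source/ /destination/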
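
For the stray /Destination folder described in items 12 and 13, a sketch of how it could be located and removed inside the container; the container name luckybackup is an assumption:

# Assumed container name; check `docker ps` for the real one
docker exec -it luckybackup du -sh /Destination   # confirm the leftover data is here
docker exec -it luckybackup rm -rf /Destination   # remove it to reclaim docker image space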