Everything posted by Goldmaster

  1. Thank you, but a warning or alert for what? I get nervous if I see red text for mission-critical files.
  2. @ich777 What I don't quite get is why I get text in red when backing up. When I check the external drive, all the files are fine and can be opened without issue. I have the external drive mounted as read/write slave. So why the red text, which according to here means errors, if there isn't actually an error? I have my /mnt as read only.
  3. Can we have a Matrix server as well please? It can be bridged so that those on Matrix can still talk to Discord users, and Discord users can still talk to each other. Please?
  4. @binhex I have noticed I get this error as well. Any chance of a fix? Or could I just run sudo apt-get install atomicparsley inside the container?
  5. I'm not sure what's going on with the S3 Sleep plugin. I have my settings set like this, yet when I press Sleep on the main page, it says the system is in sleep mode, then the page refreshes and it's back to normal; it's not actually going to sleep. System specs are in my signature below. Unless I haven't set something correctly above, but I want the system to go to sleep at 10pm every day automatically, no matter what. Any ideas please?
  6. It would be good to edit this post to also consider installing Tailscale, which is built on WireGuard and requires NO port forwarding. @Sycotix has done a video guide on setting it up here.
  7. One thought I had: something similar to the Unraid GUI mode, where, just like this, the user could be presented with an option to run with light features, or to download the extra features (temporarily or permanently) that enable browsing files and the flash features.
  8. Thank you @alturismo so it looks like i need to adjust my rclone config. wonder if its related to this issue
  9. Tried that and left it to run while at work, and it's still pretty much the same thing: nothing is getting backed up offsite. Is there any way I could make luckybackup work with Google Drive directly, maybe via an option in the remote options or something? I really wish this could work.
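When luckybackup through an rclone FUSE mount keeps failing, one alternative worth sketching is cutting luckybackup out and letting rclone do the offsite mirror itself. This is a hedged sketch, not the poster's setup: the remote name gdrive: and both paths are placeholders for illustration.

```shell
#!/bin/bash
# Hedged sketch: mirror an Unraid share straight to Google Drive with
# "rclone sync", bypassing the FUSE mount that rsync trips over.
# "gdrive:", the share path, and the log path are placeholders.
rclone sync /mnt/user/Hoard gdrive:unraid-backup \
  --transfers 4 \
  --checkers 8 \
  --log-file /mnt/user/appdata/rclone/sync.log \
  --log-level INFO
```

Run from a scheduled User Script, this gives a one-way mirror (sync deletes remote files that no longer exist locally, so test with `--dry-run` first).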
  10. It is set to true, from the start, yet I still get the same issue; this also includes having / at the start of the folder path. Local backup to an external hard drive works fine, and copying from an Unraid share to Google Drive in Krusader works fine.
  11. Thank you @ich777. So in my case, try running luckybackup as root, then maybe try some of these suggestions? I wasn't sure if I needed to set an upload rate limit or something along those lines, but I thought you could only upload 750GB a day, which is near impossible for me to reach. This is what my rclone drives are set to, unless something there is causing the issue? --allow-other --async-read=true --dir-cache-time=5000h --drive-use-trash --poll-interval=15s --vfs-cache-max-age=504h --vfs-cache-poll-interval=30s --vfs-read-chunk-size=32M --vfs-read-chunk-size-limit=2G --vfs-cache-mode=full --vfs-cache-max-size=50G --vfs-cache-poll-interval=30s --buffer-size=32M. I might try and see if running as root works; I've just done a local backup and some items were not transferred as I'm not running as privileged.
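For context, flags like those would normally sit on the mount command itself. A hedged sketch of how they attach, assuming a remote named gdrive: and a mount point under /mnt/disks (both placeholders, not taken from the post):

```shell
# Hedged sketch: the flags quoted above, attached to an rclone mount.
# "gdrive:" and "/mnt/disks/gdrive" are assumed names for illustration.
rclone mount gdrive: /mnt/disks/gdrive \
  --allow-other \
  --dir-cache-time 5000h \
  --poll-interval 15s \
  --drive-use-trash \
  --vfs-cache-mode full \
  --vfs-cache-max-size 50G \
  --vfs-cache-max-age 504h \
  --vfs-cache-poll-interval 30s \
  --vfs-read-chunk-size 32M \
  --vfs-read-chunk-size-limit 2G \
  --buffer-size 32M \
  --daemon
```

With `--vfs-cache-mode full`, writes land in the local VFS cache first and are uploaded asynchronously, which is why a mount can look fine in a file manager while bulk rsync traffic still surfaces errors.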
  12. I was not sure if you can use rclone to mirror to and from; I just thought it was for accessing files from the cloud. But the simple reason is there's no GUI. Unless I could download a GUI that integrates with Waseh's rclone plug-in, then I could set up a mirror option (with delete on the other end). Privileged is turned off.
  13. Hi there, I have got luckybackup running fine. Local hard drive backups work fine. However, backing up to a Google Drive folder mounted in rclone starts OK, but then produces a load of red text and, at the end, the following error: rsync: [sender] write error: Broken pipe (32) rsync error: error in socket IO (code 10) at io.c(823) [sender=3.2.3]. I don't know why this happens, as I can access the Google Drive folders and files mounted in rclone fine in Krusader without issue. I can post logs if needed. I have also noticed errors such as failed: Input/output error (5) next to each file it tries to copy. I'm wondering if it's something to do with how rclone is set up. Any thoughts?
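Broken-pipe and Input/output errors like these often come from rsync's default write pattern (write a temp file, set permissions and times, then rename) colliding with a cloud-backed FUSE mount. A hedged sketch of rsync flags commonly suggested for such mounts — the paths are placeholders, and whether it helps depends on the mount's VFS settings:

```shell
# Hedged sketch: rsync flags sometimes suggested for cloud-backed FUSE
# mounts. --inplace skips the temp-file-and-rename dance, --size-only
# avoids mtime comparisons the backend may not honour, and the --no-*
# flags skip chmod/chown calls a FUSE mount can reject with EIO.
# Source and destination paths are placeholders.
rsync -rv --inplace --size-only --no-perms --no-owner --no-group \
  /mnt/user/Hoard/ /mnt/disks/gdrive/backup/
```

In luckybackup these could go in the task's extra rsync options, since luckybackup is an rsync front-end.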
  14. I had /Destination instead of /destination, so when the folder was not found, it was recreated inside the Docker container. Not sure how to turn that off in luckybackup.
  15. @alturismo just sorted it. What it was: I had a folder called /Destination, which was the old mount point for my external drives before moving to /destination, and luckybackup creates the folder if it doesn't exist, so luckybackup had backed up some stuff into the Docker image and not to the hard drive. Now my Docker image has gone from 98% full to 80% full.
  16. @alturismo the paths are correct, and they don't go into the Docker image or anything like that; the files do go to an external drive or to the Google Drive folder mounted in rclone. I can only assume the 3GB is temp files, which I can't clean up. This is what I have set up for the Google Drive backup.
  17. @alturismo this is what I have set in the docker folder.
  18. Is there also a way to get the /plexmediaserver/cache folder to go into the RAM as well? Please?
  19. How come luckybackup is using 3GB of data in the docker.img? Not sure how or why. I have set luckybackup to back up to a Google Drive folder that is mounted using the rclone plug-in and to a local drive that is mounted using Unassigned Devices, but that is it. Any thoughts on what's using the 3GB of space? If it's cache or temp files, is there a way to have cache and temp files stored in RAM?
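If the space does turn out to be temp files inside the container, Docker can back that directory with RAM via a tmpfs mount. A hedged sketch for the container template's Extra Parameters field — the in-container path /luckybackup/.tmp is an assumption, not taken from the post:

```shell
# Hedged sketch: back a container directory with RAM so temp files never
# land in docker.img. "/luckybackup/.tmp" is a placeholder path; point
# "destination" at wherever the container actually writes its temp data.
--mount type=tmpfs,destination=/luckybackup/.tmp,tmpfs-size=1g
```

tmpfs contents vanish on container restart, which is exactly what you want for scratch data but wrong for anything that must persist.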
  20. Hi there, I had Wallabag installed and couldn't remember the password I had set (in hindsight I realised the default username and password were in the Docker template). I uninstalled Wallabag and reinstalled, but for some reason the CSS and everything else isn't being loaded correctly. This is also happening in other browsers, and it only came about after reinstalling. I did try searching to see if anyone else has had the same issue, but couldn't find anything of note. I do see there is an option to import from Pocket, but is there any way to auto-sync from Pocket? I take it articles are saved, so that if in a year's time the web page disappears or changes, the article is still preserved?
  21. Ok, so I have got everything set up with test. I did get an i/o read/write error in the logs; I did see a solution was to add --def minLengthMS=0 to the advanced parameters. But it seems that FileBot isn't moving TV shows even though they are there and it is set correctly. I just get:
[amc] Invoking AMC script...
[amc] Run script [fn:amc] at [Mon Oct 11 11:39:15 BST 2021]
[amc] Parameter: artwork = n
[amc] Parameter: music = n
[amc] Parameter: clean = y
[amc] Parameter: excludeList = /config/amc-exlude-list.txt
[amc] Parameter: movieFormat = /mnt/user/Hoard/video/movies/{plex}
[amc] Parameter: musicFormat = {plex}
[amc] Parameter: seriesFormat = /mnt/user/Hoard/video/tv-shows/{plex}
[amc] Parameter: animeFormat = /mnt/user/Hoard/video/tv-anime/{plex}
[amc] Parameter: minLengthMS = 0
[amc] Argument[0]: /watch
[amc] Use excludes: /config/amc-exlude-list.txt (15)
[amc] No files selected for processing
[amc] Done ¯_(ツ)_/¯
But there are files, so it should move them. I have tried restarting and setting the wait time from 1800 seconds to 60, and that made no difference. @alturismo
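For debugging a "No files selected for processing" like the one above, it can help to run the AMC script by hand inside the container with the same parameters the log shows. A hedged sketch, mirroring the logged paths; --action test only prints what would happen:

```shell
# Hedged sketch: manual FileBot AMC run mirroring the parameters in the
# log above, so the selection step can be watched interactively.
# --action test is a dry run; switch to "move" once output looks right.
filebot -script fn:amc \
  --output /mnt/user/Hoard/video \
  --action test -non-strict \
  --def minLengthMS=0 \
  --def excludeList=/config/amc-exlude-list.txt \
  --def seriesFormat="/mnt/user/Hoard/video/tv-shows/{plex}" \
  --def movieFormat="/mnt/user/Hoard/video/movies/{plex}" \
  /watch
```

One thing worth noting from the log itself: the exclude list already holds 15 entries, and AMC records every file it has processed (even in test mode) there and skips it on later runs, so clearing or renaming that file is a common first check.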
  22. Ok, thank you @alturismo. I think I have it all set up and working. I guess now I just need to get a licence, then check the logs that the test option works, then change the test variable to move. Then I'm sorted.
  23. AHA! So for the monitoring, I would just paste in the watch folder and leave it how it's set (so stick with the Plex options). But how do I then run the AMC script? Do I just add, after {plex}, where to send the files, such as /mnt/user/Hoard/video/tv-anime/{plex}, in the fields there?
  24. Thank you, it does seem to be using the RAM:
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop2       20G   17G  3.4G  83% /
tmpfs            64M     0   64M   0% /dev
tmpfs            63G     0   63G   0% /sys/fs/cgroup
shm              64M  4.0K   64M   1% /dev/shm
rootfs           63G   21G   43G  33% /transcode
shfs             17T   36G   17T   1% /media
tmpfs           3.8G  182M  3.6G   5% /tmp
/dev/loop2       20G   17G  3.4G  83% /etc/hosts
tmpfs            63G     0   63G   0% /proc/acpi
tmpfs            63G     0   63G   0% /sys/firmware
However, the tmpfs is only 5% full; does that matter? Also, the cache folder has appeared in the Plex appdata when it should appear in RAM. Or is that normal? Is there any way to make the cache folder appear in tmp as well? About --no-healthcheck: does that mean it blocks using the SSD for transcoding, or what? Can you elaborate on your point, "If you dislike permanent writes to your SSD"? Thank you for your help.
  25. Hi there @binhex, I'm trying to do what others have done with transcoding to RAM (which is located at /tmp). So for the Container Variable: TRANS_DIR, do I just type /tmp and make sure there is nothing in the transcode directory under Plex advanced settings? I still seem to get quite a few reads on the cache drive, and there is no Plex transcode folder in the tmp folder. Any ideas on how I can verify things are working correctly?
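A hedged sketch of how the verification in that last question could be done from the host while something is playing. The Transcode/Sessions subfolder name is an assumption about where Plex typically puts transcode sessions, and it only appears while a transcode is actually running:

```shell
# Hedged sketch: check whether Plex transcodes are really landing in RAM.
# Run on the Unraid host while playing something that forces a transcode
# (e.g. lower the quality in the player). Paths are assumptions: the
# transcode dir inside /tmp typically shows up as a session subfolder.
df -h /tmp                 # /tmp should be a tmpfs line; Use% should rise
ls -lah /tmp               # look for a transcode/session folder appearing
watch -n 2 'du -sh /tmp'   # usage should grow during playback, then clear
```

If /tmp stays empty while the cache drive shows writes, the container is likely still pointing its transcode path at appdata rather than at the mapped /tmp.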