martikainen


  1. Thanks, it was a typo on the --vv, and the mount name was gdrive:encrypted. Now it works, thanks!
  2. Thanks to you both! But I guess I'm missing something important; I keep getting this error: "
     How exactly is the mount point supposed to look? I've tried two forms (rows below cut down after --vv...):

         # sync files
         rclone move /mnt/user/mount_mergerfs/encrypted/ /mnt/user/local --vv

     and

         rclone move encrypted: /mnt/user/local --vv
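     For reference, combining this with the follow-up in item 1: the flag should be -vv (rclone accepts -v and -vv for verbosity; --vv is not a valid flag), and the crypt remote has to be referenced as gdrive:encrypted rather than encrypted:. A minimal sketch of the corrected form:

         # remote referenced as gdrive:encrypted, -vv instead of --vv
         rclone move gdrive:encrypted /mnt/user/local -vv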
  3. With these new limitations I've tried to change the upload script so that it downloads instead, using service accounts to avoid the download limit, but I haven't succeeded. I looked at the command being sent to move data from the local share to the remote, and thought it would be as easy as just swapping the two, so it moves from the remote share to the local one, but nope... Is there anyone who's been able to update the script to download data instead? Having 100TB at 750GB/day is about 4-5 months...
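     For what it's worth, the reverse direction really is just source and destination swapped; the extra piece is pointing rclone at a service-account credential. A minimal sketch, assuming a crypt remote named gdrive:encrypted and a service-account JSON file (the file path here is a made-up example):

         # download from the remote to the local share, authenticating with
         # a service account (rotate the JSON file to spread quota usage)
         rclone move gdrive:encrypted /mnt/user/local \
             --drive-service-account-file=/mnt/user/appdata/rclone/sa/sa1.json \
             --transfers=4 -vv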
  4. Thanks, much appreciated feedback! Just tried using drive for everything (before reading your answer :)) and noticed the API ban pretty quickly.
  5. I've been trying to find an answer to this, but the unRAID forum search engine isn't treating me well. What folder should I point my downloads to in Deluge if I want them uploaded to gdrive and kept in sync there? Or is that not an option? /mnt/user/local gets uploaded to the cloud. /mnt/user/mount_rclone is the cloud files mounted locally. /mnt/user/mount_mergerfs is a merge of both, mapped in Plex/Sonarr/etc. But I can't download to mount_mergerfs, right? Then the files won't be uploaded? Or?
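     As far as I understand this setup, downloading to mount_mergerfs is exactly the intended option: the mergerfs mount lists the local branch first with a "first found" create policy, so new writes land on /mnt/user/local, and the upload script moves them to the cloud later while the path seen through mount_mergerfs stays valid. A rough sketch of the mount (options trimmed; the real mount script sets more of them):

         # local branch first + category.create=ff => new files are written locally
         mergerfs /mnt/user/local/gdrive:/mnt/user/mount_rclone/gdrive \
             /mnt/user/mount_mergerfs/gdrive \
             -o rw,use_ino,allow_other,func.getattr=newest,category.create=ff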
  6. Just wanted to add to the thread that I experienced the same issue and spent a lot of hours trying to solve this... hope this thread ranks higher on Google now.
  7. Thanks! Forgot to add an example: try removing - detect from a camera and restart the container.

         Error parsing config: The detect role is required for dictionary value @ data['cameras']['test']['ffmpeg']['inputs']
         e":"No such container: ddf6dc9c7a50"}

     At startup, when checking the logs, it will throw the above error, and then the container becomes an orphan. And as @mathgoy seems to have found, this issue is in the original Frigate template as well, so maybe you won't find it that easy =/
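     For context, Frigate insists on at least one input with the detect role per camera, which is why deleting that line breaks config parsing. A minimal sketch of a valid camera block, written via a heredoc (the camera name, RTSP URL and config path are all placeholders):

         # restore the required "detect" role on the camera's input
         cat > /path/to/frigate/config.yml <<'EOF'
         cameras:
           test:
             ffmpeg:
               inputs:
                 - path: rtsp://user:pass@192.168.1.10:554/stream
                   roles:
                     - detect
         EOF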
  8. Thanks a lot for the nvidia version! Gave it a go yesterday and I really love it. One thing I noticed is that if you mess up the config, or if a process crashes in Frigate, the whole Docker container is marked as an orphan in unRAID and you need to redeploy it (using Previous Apps, for example). I'm guessing this is an unRAID template error and not related to blakeblackshear's image.
  9. During the past 2 years (I believe) I've been using this Docker image on different OSes running on the same hardware (XPEnology, Ubuntu and now unRAID), and during that time I've experienced the same kind of issue: rTorrent crashes when doing "Force recheck". Most often it works, but in some cases, totally at random, I get this error in the browser and rTorrent restarts. The rTorrent logs don't show anything (haven't enabled verbose logging though...). Looking at the logfile for my cache drive, I see these errors at the timeframe of the recheck.
     I just removed 2 memory sticks from my server after running memtest86 and finding out that I did have an issue with my memory; there could be another stick that's broken (2x16GB installed atm). I should probably run memtest for a few hours (will probably run that tonight; right now I need the server online since it's my home automation server).
     Has anyone else experienced rTorrent crashing when doing "Force recheck"? And were you able to find the root issue? If you want me to get a specific log, like verbose logging from rTorrent or similar, please let me know! And btw, thanks a lot for all your great containers binhex! Using a bunch of them. tower-diagnostics-20201121-1623.zip
  10. Been trying out the sync command now. Have I understood it correctly that I'll need to run the sync command on a schedule to download/upload files? It cannot do that automatically when a file is placed either locally or in my OneDrive? I chose to run the script "in background" thinking it would always be active, but no file changes are being made. So to make it always check for new files I should use a "Custom" schedule and type in */2 * * * *, which would run the script every second minute. What happens if there's a huge file that hasn't finished in 2 minutes? Will it most likely crash the cron schedule, or just wait for the next run?
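     From what I've read since, cron itself won't wait: with */2 * * * * it starts a new instance every two minutes whether or not the previous one finished, so the script has to protect itself against overlap (the "Script not running - proceeding." line the BinsonBuzz script logs is exactly such a check). A minimal sketch of the same guard using flock (the lock path is arbitrary):

         #!/bin/bash
         # exit immediately if a previous run still holds the lock, so
         # overlapping cron invocations are harmless
         exec 9>/tmp/onedrive_sync.lock
         flock -n 9 || { echo "sync already running - skipping"; exit 0; }

         rclone sync OneDrive: /mnt/user/NAS/OneDrive -v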
  11. EDIT: I think I solved it. I found the video from spaceinvaderone, which used a much simpler script, and ended up with this. Gonna try syncing the share and doing a reboot to see if it's persistent and new files are uploaded/downloaded correctly:

         #!/bin/bash
         #----------------------------------------------------------------------------
         # first section makes the folders for the mount in the /mnt/disks folder so docker containers can have access
         # there are 4 entries below as in the video i had 4 remotes amazon, dropbox, google and secure
         # you only need as many as what you need to mount for dockers or a network share
         mkdir -p /mnt/user/NAS/OneDrive

         # This section mounts the various cloud storage into the folders that were created above.
         rclone sync OneDrive: /mnt/user/NAS/OneDrive

     -----------------------------------------------------------

     Sorry for the late reply, didn't have time to focus on my private stuff because of work =/
     I now tried using the upload script: https://github.com/BinsonBuzz/unraid_rclone_mount/blob/latest---mergerfs-support/rclone_upload
     Changed the required settings to my paths, and to sync:

         # REQUIRED SETTINGS
         RcloneCommand="sync" # choose your rclone command e.g. move, copy, sync
         RcloneRemoteName="OneDrive" # Name of rclone remote mount WITHOUT ':'.
         RcloneUploadRemoteName="OneDrive" # If you have a second remote created for uploads put it here. Otherwise use the same remote as RcloneRemoteName.
         LocalFilesShare="/mnt/user/NAS/OneDrive" # location of the local files without trailing slash you want rclone to use
         RcloneMountShare="/mnt/user/NAS/OneDrive" # where your rclone mount is located without trailing slash e.g. /mnt/user/mount_rclone

     Executing this script gives me the error that rclone is not installed:

         Script location: /tmp/user.scripts/tmpScripts/rclone_Sync/script
         Note that closing this window will abort the execution of this script
         04.10.2020 08:58:15 INFO: *** Rclone move selected. Files will be moved from /mnt/user/NAS/OneDrive/OneDrive for OneDrive ***
         04.10.2020 08:58:15 INFO: *** Starting rclone_upload script for OneDrive ***
         04.10.2020 08:58:15 INFO: Script not running - proceeding.
         04.10.2020 08:58:15 INFO: Checking if rclone installed successfully.
         04.10.2020 08:58:15 INFO: rclone not installed - will try again later.

     Executing "rclone listremotes" over SSH to my unRAID server works as I'd expect:

         root@Unraid:/mnt/user/NAS# rclone listremotes
         GoogleDrive:
         OneDrive:

     Any pointers? I don't have any other rclone user script running at the moment, and the paths above are empty, just to try to verify functionality. Don't wanna mess up my OneDrive!
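     If memory serves, the "Checking if rclone installed" step in that upload script never looks for the rclone binary at all: it checks for a mountcheck file inside RcloneMountShare, which the companion rclone_mount script creates when the remote is actually mounted. Pointing RcloneMountShare at a plain, unmounted folder therefore trips it. A simplified sketch of that pattern (logic reconstructed, not copied from the script):

         # the mount script drops a "mountcheck" file inside the live rclone
         # mount; the upload script treats its presence as proof of a mount
         if [[ -f "/mnt/user/NAS/OneDrive/mountcheck" ]]; then
             echo "INFO: rclone installed successfully - proceeding."
         else
             echo "INFO: rclone not installed - will try again later."
             exit
         fi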
  12. Wanna start by saying thanks to all the contributors to this thread! I've tried searching this topic and going through a bunch of pages, but I'm starting to get sloppy when reading since it's 82 pages atm. If I wanna use the rclone scripts just for syncing my entire OneDrive/gdrive to my share on my unRAID server, which of the settings in the mount script should I use? I want to have a local copy of everything on my drive, have changes done locally update my drive, and have updates done in my drive from another source update my local copy. I tried using only "RcloneMountShare" from the mount script provided by "BinsonBuzz" (https://github.com/BinsonBuzz/unraid_rclone_mount/blob/latest---mergerfs-support/rclone_mount), but it doesn't seem to keep it updated, and neither does it seem to actually download the copy; it just creates a browsable drive locally, and as soon as I access a file it's downloaded into cache. Or it might just be that I'm too impatient. I can see that I'm using a lot of bandwidth on my unRAID server, but I haven't figured out how to check which process is using the bandwidth. Thanks in advance!
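     That on-access behaviour is what a mount is supposed to do: it only fetches files when they're read, so it will never give you a full offline copy. Keeping a real local mirror takes a scheduled transfer command instead, and note that rclone sync is strictly one-way (it makes the destination match the source, deletions included). A cautious sketch for both directions, assuming a remote named OneDrive; copy never deletes anything, so it's safer than two opposing syncs, and newer rclone releases also ship a bisync command for true two-way sync:

         # pull new/changed files down from the cloud
         rclone copy OneDrive: /mnt/user/NAS/OneDrive -v

         # push local additions/changes back up
         rclone copy /mnt/user/NAS/OneDrive OneDrive: -v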
  13. I tried disabling AppArmor on my Synology and that fixes the issue, but I'm not sure what the risks are of not running AppArmor, so I'll continue the search for the right way to run this 😃
  14. Need some help running this on Synology. I've had it running on an Ubuntu VM for a long time, but want to move it to my NAS. I'm stuck with the error that's been discussed before, but I couldn't find any solution. I've tried running it with different PUID/PGID values.
     First try was with my regular account "administrator": PUID = 1026, PGID = 101

         administrator@NAS:/volume2/RAID/Docker/Torrent$ id
         uid=1026(administrator) gid=100(users) groups=100(users),101(administrators),65536(docker)

     Second try was with root: PUID = 0, PGID = 0

         administrator@NAS:/volume2/RAID/Docker/Torrent$ id root
         uid=0(root) gid=0(root) groups=0(root),2(daemon),19(log)

     I have checked "Execute container using high privilege" in the General Settings tab of the container in the Synology Docker GUI. I've tried to change permissions on the host's /etc/nginx folder because I thought that's where the permissions were failing, but it seems like it's inside the container(?). What else can I try?
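     For reference, a minimal sketch of how these IDs are usually handed to a binhex container from the command line; the container name, image and volume path below are illustrative, not from my actual setup. Note the id output above shows gid=100(users), so PGID=100 rather than 101 would match the account's primary group:

         # run with the Synology account's uid/gid so files created by the
         # container match permissions on the mounted folders
         docker run -d --name=rtorrentvpn \
             -e PUID=1026 -e PGID=100 \
             -v /volume2/RAID/Docker/Torrent:/config \
             --privileged=true \
             binhex/arch-rtorrentvpn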
  15. Simpsons was just an example; each series is placed in its own folder. I have no interest in renaming, unpacking or moving files. I'm running Plex and rar2fs, so it wouldn't be a problem downloading straight to a "completed" folder. That's why I was asking if it was possible to skip the usage of a temp folder for downloads. It's not a big problem to use a download folder, but I've noticed that sometimes rTorrent seems to fail with something while moving data from the temp folder to completed downloads: the data is moved, but rTorrent still thinks the data is in the temp folder, so I need to manually point rTorrent to the new location and do a "Force recheck" before it will seed. But thanks for the quick answer Binhex! And thanks a lot for this awesome container! If you accept suggestions, would it be possible to add mail notifications? https://pmourlanne.wordpress.com/2013/04/27/rtorrent-receiving-an-email-when-a-download-is-complete/
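     On the mail idea: the approach in that link boils down to an rtorrent.rc event hook that runs a script when a download finishes. A rough sketch of the idea under that assumption (hook syntax varies between rtorrent versions, and the mail command depends on what's available in the container):

         # in rtorrent.rc, roughly:
         #   method.set_key = event.download.finished, notify_me, "execute=/config/mail_notify.sh,$d.name="

         #!/bin/bash
         # /config/mail_notify.sh - mails the finished torrent's name ($1)
         echo "Download complete: $1" | mail -s "rTorrent: $1 finished" you@example.com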