martikainen

Members
  • Content Count

    11
  • Joined

  • Last visited

Community Reputation

0 Neutral

About martikainen

  • Rank
    Newbie
  1. Thanks! Forgot to add an example: try removing "- detect" from a camera and restart the container. Error parsing config: The detect role is required for dictionary value @ data['cameras']['test']['ffmpeg']['inputs'] e":"No such container: ddf6dc9c7a50"} At startup, checking the logs will show the above error, and then the container becomes an orphan. And as @mathgoy seems to have found, this issue is in the original Frigate template as well, so maybe you won't find it that easy =/
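     For reference, the part of the config that error is about: every camera needs at least one ffmpeg input whose roles list includes detect. A minimal sketch of the relevant fragment (camera name and RTSP URL are only placeholders):

     cameras:
       test:
         ffmpeg:
           inputs:
             # at least one input per camera must carry the detect role
             - path: rtsp://192.168.1.10:554/live   # placeholder URL
               roles:
                 - detect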
  2. Thanks a lot for the Nvidia version! Gave it a go yesterday and I really love it. One thing I noticed is that if you mess up the config, or if a process crashes in Frigate, the whole Docker container is marked as orphan in unRAID and you need to redeploy it (using Previous Apps, for example). I'm guessing this is an unRAID template error and not related to blakeblackshear's image.
  3. During the past 2 years (I believe) I've been using this Docker image on different OSes running on the same hardware (XPEnology, Ubuntu and now unRAID), and during that time I've experienced the same kind of issue: rtorrent crashes when doing "Force recheck". Most often it works, but in some cases, totally at random, it crashes. I get this error in the browser, and rTorrent restarts; the rtorrent logs don't show anything (haven't enabled verbose logging though...). Looking at the logfile for my cache drive, I see these errors at the timeframe of the recheck. I just removed 2 me…
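     In case it helps with the verbose logging part: rtorrent can be told to write its own log file by adding something along these lines to rtorrent.rc (a sketch only; the log name and path are examples):

     # open a named log file and send informational messages to it
     log.open_file = "rtorrent", /config/rtorrent/logs/rtorrent.log
     log.add_output = "info", "rtorrent"
     # "debug" instead of "info" is much noisier but may catch the crash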
  4. Been trying out the sync command now. Have I understood it correctly that I'll need to run the sync command on a schedule to download/upload files? It cannot do that automatically when a file is placed either locally or in my OneDrive? I chose to run the script "in background", thinking it would always be active, but no file changes are being made. So to make it always check for new files I should use a "Custom" schedule and type in */2 * * * *, which would run the script every second minute. What happens if there's a huge file that hasn't finished in 2 minutes, will it most likely cra…
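     Regarding a huge transfer not finishing within the interval: one way to keep two runs from stepping on each other is to wrap the sync in a lock, so a new run simply skips if the previous one is still going. A sketch (remote name and paths are examples):

     #!/bin/bash
     # scheduled e.g. with a */2 * * * * custom schedule;
     # flock -n exits immediately if the previous sync still holds the lock
     exec flock -n /tmp/rclone_sync.lock \
         rclone sync onedrive: /mnt/user/onedrive --verbose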
  5. EDIT: I think I solved it, found the video from spaceinvaderone which used a much simpler script, and ended up with this. Gonna try syncing the share and doing a reboot to see if it's persistent and new files are uploaded/downloaded correctly:

     #!/bin/bash
     #----------------------------------------------------------------------------
     #first section makes the folders for the mount in the /mnt/disks folder so docker containers can have access
     #there are 4 entries below as in the video I had 4 remotes: amazon, dropbox, google and secure
     #you only need as many as what you need to mount
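     For anyone who just wants the short version, this is roughly what that kind of script boils down to for a single remote (remote and folder names are only examples):

     #!/bin/bash
     # create the mount point under /mnt/disks so docker containers can reach it
     mkdir -p /mnt/disks/onedrive

     # mount the remote only if it isn't mounted already
     if ! mountpoint -q /mnt/disks/onedrive; then
         rclone mount onedrive: /mnt/disks/onedrive --allow-other --dir-cache-time 72h &
     fi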
  6. Wanna start with saying thanks to all the contributors to this thread! I've tried searching this topic and going through a bunch of pages, but I'm starting to get sloppy when reading since it's 82 pages atm. If I wanna use the rclone scripts just for syncing my entire OneDrive/Gdrive to my share on my unRAID server, which of the settings in the mount script should I use? I want to have a local copy of everything on my drive, have changes done locally update my drive, and have updates done in my drive from another source update my local copy. I tried using only "rcloneMount…
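     A sketch of the two-way idea with plain rclone, assuming a remote called onedrive and a local share at /mnt/user/onedrive (newer rclone builds have an experimental bisync command for exactly this, so test with --dry-run first):

     # first run establishes the baseline listings on both sides; check the dry run before doing it for real
     rclone bisync onedrive: /mnt/user/onedrive --resync --dry-run

     # after that, schedule plain "rclone bisync onedrive: /mnt/user/onedrive" for ongoing two-way syncs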
  7. I tried disabling AppArmor on my Synology and that fixes the issue, but I'm not sure what the risks are of not running AppArmor, so I'll continue the search for the right way to run this 😃
  8. Need some help running this on Synology. I've had it running on an Ubuntu VM for a long time but want to move it to my NAS. I'm stuck with the error that's been discussed before, but I couldn't find any solution. I've tried running it with different PUID/GUID values.

     First try was with my regular account "administrator":
     PUID = 1026
     GUID = 101

     administrator@NAS:/volume2/RAID/Docker/Torrent$ id
     uid=1026(administrator) gid=100(users) groups=100(users),101(administrators),65536(docker)

     Second try was with root:
     PUID = 0
     GUID = 0

     administrator@NAS:/volume2/RAID…
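     In case it helps to compare, this is roughly how I'd wire the IDs up by hand; most of these containers expect the group variable to be called PGID, and the image name, paths and values below are only examples:

     # check which numeric IDs the account actually has
     id administrator
     # uid=1026(administrator) gid=100(users) ...

     # pass them in when creating the container (VPN settings and port mappings
     # left out to keep the sketch short)
     docker run -d --name rtorrentvpn \
         --cap-add=NET_ADMIN \
         -e PUID=1026 \
         -e PGID=100 \
         -v /volume2/RAID/Docker/Torrent:/config \
         -v /volume2/RAID/Data:/data \
         binhex/arch-rtorrentvpn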
  9. Simpsons was just an example; each series is placed in its own folder. I have no interest in renaming, unpacking or moving files. I'm running Plex and rar2fs, so it wouldn't be a problem downloading to a "completed" folder. That's why I was asking if it was possible to skip the usage of a temp folder for downloads. It's not a big problem to use a download folder, but I've noticed that sometimes rtorrent seems to fail with something while moving data from the temp folder to completed downloads: the data is moved but rtorrent still thinks the data is in the temp folder, so I need to manually…
  10. Is there any good way to skip the usage of a download folder? I'm using the auto-tools plugin and have a directory listing as below, and together with that I'm using the filters to place the torrent files in the specific Movie/Series folder they should end up in when downloaded:

     Data
       - Movies
       - Series
         - Simpsons
         - Better call saul

     So I'm placing the file in "Simpsons" and it starts the download in /Data/Incomplete/Series/Simpsons. I don't have the need to use a download folder, I would just want it to download directly into /Data/Series/Simpsons. I t…
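     One way this could look in plain rtorrent.rc if the temp-folder step were dropped: load each watch folder straight into its final location. This is a sketch only, the paths are examples and the container's own autotools configuration may override parts of it:

     # default download location is the final location, no separate incomplete dir
     directory.default.set = /Data/Series

     # watch folder per show; torrents placed here download straight into the show folder
     schedule2 = watch_simpsons,10,10,"load.start=/Data/watch/Simpsons/*.torrent,d.directory.set=/Data/Series/Simpsons"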
  11. Hi! Just wanna start off with saying thank you for an awesome container! I've been running separate solutions for a long while trying to find that one container which has all the components in it. Unfortunately I'm facing some issues with the rtorrent process stopping; VPN, autodl etc. are still working, but not rtorrent. Checking the supervisord log I can't find anything obvious, are there any other logs I should check? This happens once or several times per day. I'm guessing it occurs when it loads torrents automatically, but I can't reproduce the error myself by adding torrents in my watch…
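     A couple of things that might be worth watching live while waiting for the next stop (the container name is just an example, and the rtorrent log only exists if logging has been switched on in rtorrent.rc as sketched further up):

     # follow the supervisor log from the host
     docker exec -it rtorrentvpn tail -f /config/supervisord.log

     # if rtorrent logging has been enabled in rtorrent.rc, follow that too
     docker exec -it rtorrentvpn tail -f /config/rtorrent/logs/rtorrent.log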