rix

Community Developer
Everything posted by rix

  1. Ensure you do the initial setup exactly as described here: https://github.com/rix1337/docker-gphotos-sync Your error messages indicate that you either did not pass through the config directory as described or did not provide the required secrets. Also make sure you run the command with the correct options.
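For reference, the setup the post describes boils down to something like the following sketch; the host paths are placeholders and the exact mounts are assumptions to be checked against the linked README, not a definitive command:

```shell
# Hypothetical invocation: mount a persistent config dir (where the
# OAuth client secret and tokens live) and a storage dir for the photos.
docker run -d \
  --name=GooglePhotosSync \
  -v /mnt/user/appdata/gphotos-sync:/config \
  -v /mnt/user/photos:/storage \
  rix1337/docker-gphotos-sync
```

If either mount is missing, the container cannot find its secrets, which matches the error described above.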
  2. If there is an official way, it should be mentioned on the project page: https://github.com/gilesknap/gphotos-sync If not, you need to run two separate instances of this container with different names and config folders. This should essentially result in two different containers, letting you log into each of your accounts separately.
  3. Afaik this is something only ripit can fix. If there is another tool that could be included instead of ripit, I will consider migrating over. What I'd need is the tool's name and how it needs to be invoked.
  4. For me too, have just tested it. Enjoy the container!
  5. Sure:
     #!/bin/bash
     docker exec GooglePhotosSync gphotos-sync /storage
     ...but it does not seem to work (running the command outside user scripts does!). I will try to add a cron entry through container variables, so everyone can set this up by themselves.
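Until the container-variable approach lands, a host-side crontab entry is one possible interim workaround; the schedule below is just an example and this is not a documented feature of the container:

```shell
# Assumed host crontab entry: trigger a sync every night at 03:00
# by exec-ing into the running container.
0 3 * * * docker exec GooglePhotosSync gphotos-sync /storage
```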
  6. Run makemkvcon -r --cache=1 info disc:9999 | grep DRV:0 inside the container. This will help you modify ripper.sh accordingly.
  7. Due to inactivity on my side and no further time to support this I have now removed Synclounge from my repos.
  8. Have you tried just passing through /dev/sr1 as /dev/sr1 and modifying the ripper.sh of your second container to point to /dev/sr1? That should work. I have just one optical drive, hence I cannot test this.
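To make that concrete, the second instance could be started along these lines; the container name and config path are illustrative, and as noted above this is untested with real hardware:

```shell
# Hypothetical second instance: pass the second optical drive through
# unchanged, with its own name and its own config folder.
docker run -d \
  --name=Ripper2 \
  --device=/dev/sr1:/dev/sr1 \
  -v /path/to/config2/:/config:rw \
  rix1337/docker-ripper
```

Then edit the ripper.sh inside that second config folder so it points at /dev/sr1 instead of /dev/sr0.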
  9. Hey guys. I am doing fine and am hereby back again. The last few weeks have made me aware of what matters most. I will stop supporting tools that I do not use myself and have since removed the following repos, images, and their respective templates. From this day forth I will only be able to help you with:
     RSScrawler
     Ripper
     GoogleMusicManager
     MyJDApi
  10. Nice attitude. Apart from that, I was abroad and am now managing the COVID-19 situation at work, so expect a lack of responsiveness for another few weeks. I will take a look at your questions if there is time.
  11. I have tried to get this to reappear with 6.8.1 for a couple of days - it hasn't happened since. Will close for now.
  12. Thanks for the input. I stopped them all, waited some time, yet the high load persists.
  13. Haven't tried that, but if you can reproduce this issue, it sounds like it's on Google's end.
  14. Hi, for the past few weeks I have noticed odd behaviour on my server on UNRAID 6.8.0:
      After a couple of hours of uptime, one shfs process stays at 100% CPU utilization.
      Network shares become extremely unresponsive after that (unusably unresponsive).
      Stopping/starting the array resolves the high utilization for a few hours, but results in a parity check.
      Disabling folder caching (the plugin) and even uninstalling it does not resolve the issue.
      Please help; this eats a lot of power and makes my server unusable as a NAS. This is my current htop: Diagnostics are attached. diagnostics-20200110-2225.zip
  15. It's working completely fine on my end.
  16. Dang, I forgot to update the template when removing the Credentials variable. This is fixed now.
  17. Hm, that would have been my first question also. Thanks, Squid! When passing Variables through to docker, those quotes should be optional. Please try setting this up as specified on GitHub:
      docker run -d \
        --name="MyJD-API" \
        -p port:8080 \
        -v /path/to/config/:/config:rw \
        -e USER=USERNAME \
        -e PASS=PASSWORD \
        -e DEVICE=DEVICENAME \
        rix1337/docker-myjd-api
      The backslashes are optional if you use just one line for the command. Also, DEVICE is optional if you only run one JDownloader. Please do the following:
      1. Run JDownloader.
      2. Ensure the JDownloader is actively connected to My JDownloader (verify in the settings).
      3. Run the container using this command:
      docker run -d --name="MyJD-API" -p port:8080 -v /path/to/config/:/config:rw -e [email protected] -e PASS=AsD1345QWio8 rix1337/docker-myjd-api
      This should work. If it does not, please post the output of your container's log (there might be something wrong with my code after all).
  18. It's hidden away in their forum somewhere as a workaround. I'll report back if I find it again! EDIT: https://forums.plex.tv/t/hardware-transcoding-broken-when-burning-subtitles-apollolake-based-synology-nases/482428/33
  19. For those of you interested in how to fix this: The PMS 1.18.1 branch includes a new driver that breaks Hardware Transcoding for Apollo Lake Intel CPUs. In Linuxserver.io's docker image this is located at /usr/lib/plexmediaserver/lib/dri/iHD_drv_video.so Just delete iHD_drv_video.so and transcoding will work again.
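Inside the Linuxserver.io container, the fix described above amounts to a single command; the path is the one quoted in the post:

```shell
# Remove the new iHD driver shipped with PMS 1.18.1; per the post,
# hardware transcoding on Apollo Lake works again without it.
rm /usr/lib/plexmediaserver/lib/dri/iHD_drv_video.so
```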
  20. Thanks for helping me debug this - I pushed these changes: https://github.com/rix1337/docker-ripper/commit/a3470835533b126c305e4765edb879b2669aec7d
  21. That explains why the files are still there. Please replace the line seen above with these lines in your ripper.sh, then report back:
      # delete MakeMKV temp files
      cwd=$(pwd)
      cd /tmp
      rm -r *.tmp
      cd "$cwd"
  22. This looks as expected. I have added this command, which should delete all the .tmp files in the folder:
      find /tmp/ -name "*.tmp" -type f -delete
      Could you please enter that one in the container shell and post the results? It might show an error, or it might work perfectly; either would help me get to the core of this.
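For anyone wary of running a delete against /tmp directly, the pattern can be tried safely in a throwaway directory first; this sketch only demonstrates the behaviour of the find command above:

```shell
# Safe dry run of the cleanup pattern in a scratch directory.
scratch=$(mktemp -d)
touch "$scratch/a.tmp" "$scratch/b.tmp" "$scratch/keep.mkv"
# Delete only regular files matching *.tmp, leave everything else.
find "$scratch" -name "*.tmp" -type f -delete
ls "$scratch"    # keep.mkv is the only file left
rm -r "$scratch"
```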