ryryonline


Everything posted by ryryonline

  1. I've been experiencing the same thing lately. This is what the log file has been saying:

     s6-svwait: fatal: timed out
     s6-svwait: fatal: timed out
     s6-svwait: fatal: timed out
     s6-svwait: fatal: timed out
     s6-svwait: fatal: supervisor died
     rdpmouseControl: what 2
     rdpmouseDeviceOff:
     rdpkeybControl: what 2
     rdpkeybDeviceOff:
     rdpSaveScreen:
     [cont-finish.d] executing container finish scripts...
     XIO: fatal IO error 11 (Resource temporarily unavailable) on X server ":1" after 3791 requests (3538 known processed) with 0 events remaining.
     [cont-finish.d] done.
     [s6-finish] waiting for services. ch
  2. I've been having some issues with some of my Kodi installs and TVH since the latest update. I was going to check to see if going to an earlier version would fix the issue since I've ruled out many other items. TVH was next on my list.
  3. @saarg I use the tag linuxserver/tvheadend:release-4.2 for my docker. Is there a way to pick an older release of this branch? I've tried linuxserver/tvheadend:release-4.2:152 but that failed.
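A Docker image reference allows only one tag after the colon, which is why a literal reference like `release-4.2:152` fails. A small sketch of that syntax rule (the image names are from the post; the check itself is purely illustrative):

```shell
# A tag is everything after the single colon; two colons are invalid.
GOOD="linuxserver/tvheadend:release-4.2"
BAD="linuxserver/tvheadend:release-4.2:152"   # rejected by docker pull

# Quick sanity check: a valid reference has at most one ":" after the last "/".
check_tag() {
  local ref="${1##*/}"            # strip the repository path
  local colons="${ref//[^:]/}"    # keep only the colon characters
  if [ "${#colons}" -le 1 ]; then
    echo "valid: $1"
  else
    echo "invalid: $1"
  fi
}

check_tag "$GOOD"   # valid
check_tag "$BAD"    # invalid
```

Whether an older build of that branch exists under a separate tag depends on what the maintainers actually publish, so the tag name itself would need to be confirmed against the repository's tag list.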
  4. I tried with a different browser, Firefox, and it worked! This isn't the first time Chrome has led me astray. Thanks!
  5. I've been using the Calibre-RDP plugin. I recently upgraded to UnRaid 6.5 but downgraded back to 6.4.1 because of some issues. Now, when I try to use the Calibre-RDP plugin, nothing shows in the browser. The logs show this error:

     Note: further occurrences of HTTP header parsing errors will be logged at DEBUG level.
     Mar 26, 2018 3:06:03 PM org.apache.coyote.http11.AbstractHttp11Processor process
     INFO: Error parsing HTTP request header

     Any ideas on a fix? So far I've tried deleting the Docker and reinstalling, but have the same issues.
  6. I'm looking at running a couple different dockers through a 3rd-party VPN service. I can't find a solution that I'm 100% comfortable with. There is a lot of old and confusing information out there. Would it be possible to set up a VLAN on UnRaid, connect that VLAN to the VPN service, and then use that VLAN on specific dockers? Just a thought.
  7. I just installed this docker and admin:admin isn't working. I tried uninstalling, clearing the appdata folder, and reinstalling, but I'm having the same issue. Does anyone know the correct credentials for this? Thanks
  8. Checked this morning. Everything is working so much better now! I really appreciate it!
  9. The image is refreshed once a week on Fridays. Any chance of an update this weekend? There are some bug fixes in the latest that will help me out. Thanks so much!
  10. I'm just double-checking that this docker is on the most recent unstable version. According to TVHeadend I'm on version 4.1-2370~g0c506b4. When I look at https://bintray.com/tvheadend it says there was a version 4.1-2372~gfca44ba released. If I'm not just seeing things, is it possible to update the docker for the new version? Or, is there a way I can update the software without waiting for the docker update? Thanks!
  11. This is working well for the most part. However, I can't get a few platforms working correctly. I'm currently focusing on HomeAssistant and the plugin crashes when it tries to load. I have quadruple checked everything and my json file validates. I did notice that this docker (https://github.com/ckuburlis/homebridge-docker) is a few commits behind. Would that have something to do with this? Best
  12. Was your issue with Transmission or a completely different Docker? If it was Transmission, which repo did you use? Mine is through linuxserver.io. EDIT: I just realized that in the Transmission docker, I forgot to set the Network Type to NONE. I had it set to Bridge. Once I made those changes, everything seems to be up and running.
  13. I tried this on my brother's server (remotely over his VPN). The Pipework docker seems fine, but when I start the Transmission docker to change the IP address I can no longer access his server remotely. I have to have my brother reboot from within his network. Back to the drawing board.
  14. That would be wonderful! Thanks!
  15. Thanks! I just tried and, yes, it works! I didn't even see the repository option in the advanced settings. Very cool way to "rollback" just in case. Best
  16. I was playing around with this and got something up and running. I logged into the terminal of UnRaid as root and ran the following command:

      docker run -d --name="pipework_old" --net="host" --privileged="true" \
        -e TZ="America/New_York" \
        -v "/var/run/docker.sock":"/docker.sock":rw \
        --pid=host \
        -e run_mode=batch,daemon -e host_routes=true \
        dreamcat4/pipework:1.1.3

      This seemed to pull the earlier version (1.1.3) of Pipework, which loads and runs. I then load another docker (call it Docker B) as I normally would. (-e 'pipework_cmd=br0 @CONTAINER_NAME@ 192.168.1.50/24@192.168.1.1') When I connect
  17. I'm not sure how to do this but really miss this docker. Any suggestions on how to get this working?
  18. ^^^^^^^^^^^ Has anyone else run into this issue? I believe you may if you remove the Pipework container and then reinstall it...
  19. Could the docker have downloaded the latest build that has an error (https://hub.docker.com/r/dreamcat4/pipework/builds/)? Is there a way to set this container to download an earlier version like 1.1.4? Maybe that will fix this issue?
  20. Pipework has been working great! However, I just installed it on my brother's server and it won't start. I keep getting an error that says: error: can't connect to unix:///docker.sock. To test this out, I reinstalled it on my server. I updated the software and now I'm getting the error on my own server. Both are UnRaid 6.1.7. Any thoughts on what might be happening? docker.sock is mapped to /var/run/docker.sock. Thanks!
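One hedged first step for a "can't connect to unix:///docker.sock" error is to confirm the host-side socket actually exists and is a UNIX socket before digging into the container-side mapping (the path below is the one named in the post):

```shell
# Verify that the Docker socket exists on the host and is a UNIX socket.
check_sock() {
  if [ -S "$1" ]; then
    echo "socket present: $1"
  else
    echo "missing or not a socket: $1"
  fi
}

check_sock /var/run/docker.sock
```

If the host socket is present, the next place to look would be the container's `-v` mapping itself, since the error names the in-container path `/docker.sock`.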
  21. I am interested in installing this Docker: https://github.com/ahoernecke/docker_scumblr/tree/master I added the URL to my template repositories, but I assume it's missing some files to work with UnRaid. How would I install this to UnRaid so I could control it from the GUI? Thanks so much!
  22. I am a big fan of having backups to the cloud in case of fire, theft, act of the almighty, etc. Therefore, I believe that irreplaceable files, such as photos, should be stored in the Cloud. While there is already a Dropbox docker, I would love to see a plugin that will let you specify User Shares to be backed up to Amazon Cloud Drive (not S3) or Google Drive (and Google Photos). For example, the plugin screen would allow me to choose my services and enter credentials (Google Drive, Amazon Cloud Drive, Dropbox, etc). Then I would go in a User Share, check the services I want to use to
  23. I did. I created a Bridge (br0) under Settings - Network Settings. In the Transmission Docker I set Network Type to None. Under Extra Parameters I entered:

      -e 'pipework_cmd=br0 @CONTAINER_NAME@ 192.168.1.50/24@192.168.1.10'

      (I wanted Transmission to run on 192.168.1.50 with a gateway of 192.168.1.10.) Also, make sure the Pipework docker is set to Host.
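The extra-parameter string from that post can be assembled from its pieces, which makes the format easier to adapt; the bridge, IP, and gateway below are the post's example values and should be swapped for your own network:

```shell
# Build the Pipework extra-parameter string used in the post.
BRIDGE="br0"
IP_CIDR="192.168.1.50/24"      # address the container should get
GATEWAY="192.168.1.10"         # gateway on that bridge
PARAM="-e 'pipework_cmd=${BRIDGE} @CONTAINER_NAME@ ${IP_CIDR}@${GATEWAY}'"
echo "$PARAM"
# -> -e 'pipework_cmd=br0 @CONTAINER_NAME@ 192.168.1.50/24@192.168.1.10'
```

The `@CONTAINER_NAME@` placeholder is left literal on purpose: Pipework substitutes the container's name there at run time.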
  24. Is there an easy way to tell what is causing some of my drives to spin back up after I manually spin them down? A few of my drives never spin down. However, when I manually spin them down, they spin back up after a few seconds. I can only assume that the files on the drive are being accessed by something. To test this out, I turned off all of my dockers and VMs then spun them down. This helped - all but one drive stayed down. The drive that spun back up is a Seagate 8TB Archive Drive (ST8000AS0002). Note: I have all my user shares set to use the cache drive (SSD) and I have c
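One hypothetical way to narrow down what is touching a drive is to list processes with open files under its mount point. The mount path `/mnt/disk1` is an illustrative example, not taken from the post, and the sketch assumes `lsof` is available:

```shell
# List processes holding files open under a mount point (requires lsof).
find_open() {
  if command -v lsof >/dev/null 2>&1; then
    # +D recurses into the directory tree; lsof exits nonzero when nothing is open
    lsof +D "$1" 2>/dev/null || echo "no open files under $1"
  else
    echo "lsof not installed"
  fi
}

find_open /mnt/disk1
```

This only catches files that are currently open; periodic wake-ups from scans or SMART polling would not show up here.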
  25. This was exactly the issue. I was setting many dockers to use /mnt/cache/<whatever> and those directories were set to cache = no. It caused me all kinds of headaches with app configuration as the apps were looking in /mnt/user instead of /mnt/cache for plugins, scripts, etc. So, I switched all apps to use /mnt/user/<whatever> and set the cache drive to yes on the drives where I wanted it cached. I think I misconfigured it initially since I thought when a file was on a cache drive it wouldn't be available until the mover script ran. I later learned that anything in the cache al