aptalca

Community Developer · Posts: 3,064 · Days Won: 3
Everything posted by aptalca

  1. Probably a permissions issue with smb. I actually mount the whole unraid drive in Dolphin as /mnt:/mnt so I can move things natively and not through smb. Open the advanced view in the container settings and you'll see the resolution variables as well as the UID and GID, in case you want to run it with root permissions.
  2. I just tested it again and it lets me cut and paste as well as move by dragging in split screen. Are you getting an error? Do you have write permission in the folders or on the file? Are you running it as nobody or root?
  3. johnodon, I wish you hadn't deleted the GitHub repo for this. I was going to send a patch to externalize the zap2xml perl script as unevent suggested: have the user download it and place it into the config folder. Then there would be no issue with the author, and the docker would simply be a distinct wrapper. Any way to undelete?
  4. Post a screenshot of your Docker Container Page. Just click the "Docker" tab and take a screenshot.

     Will do ASAP when I am back at the server. At the moment I'm on the road.

     Under the Docker tab, click on the container's logo and select WebUI. It will open the GUI using the correct port.
  5. letsencrypt expects you to have full control over whatever url you put in. In the case of dynamic dns, you do not have control over the actual domain: publicvm.com In that case put in xxxxxx.publicvm.com as the URL and leave the subdomains as is (default is www). With that, letsencrypt will create a certificate that covers xxxxxx.publicvm.com and www.xxxxxx.publicvm.com, both of which you have control over.
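The dynamic-dns setup described above can be sketched as a container run command. The image name, port mappings, and config path here are assumptions for illustration; only the URL and subdomains fields mirror the template settings named in the post:

```shell
# Sketch only: image name, ports, and paths are assumptions.
# URL is the dynamic-dns hostname you control; SUBDOMAINS defaults to www,
# so the cert covers both xxxxxx.publicvm.com and www.xxxxxx.publicvm.com.
docker run -d \
  --name=letsencrypt \
  -e URL=xxxxxx.publicvm.com \
  -e SUBDOMAINS=www \
  -p 80:80 -p 443:443 \
  -v /mnt/cache/appdata/letsencrypt:/config \
  aptalca/nginx-letsencrypt
```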
  6. Looking into it. I thought I had tested php before, but when I just tested now, it didn't work for me either. I'll fix it in the next few days.
  7. I also have an nginx container with built-in letsencrypt. It gets free SSL certs and auto-renews them every 60 days. I haven't used it with Owncloud, but I use it with every other container's GUI via reverse proxy.
  8. ?? It updates automatically at start. EDIT: just realized Tapatalk doesn't show strikethrough.
  9. I posted my config here: http://lime-technology.com/forum/index.php?topic=43696.msg437353#msg437353 I have both a regular webserver and a proxy setup
  10. Is it License or Licence? Good point... I'm Canadian, eh so I made it <Licence>. Guess that I can support both spellings as I'll probably take some heat for it.... What's Canadian? Is it some sort of extraterrestrial thing like Klingon?
  11. Couchpotato does that, not sure about plex requests. You can ask in their thread on the plex forums
  12. Change the configuration folder location to either a cache drive or, if you don't have one, a disk location: /mnt/cache/blah or /mnt/diskX/blah

      I deleted the container and image, added it fresh, and changed /config to point to /mnt/cache/appdata/plexrequests. I am still seeing the same issue.

      Can you post a log? Also, how long did you wait? At container start it updates meteor, and a couple of times I noticed the meteor servers being ridiculously slow. It took 20 minutes for the update.

      After 45 minutes, the docker started responding. It looks like the meteor servers were just really bogged down. Thanks for the help, and for the awesome docker containers!

      Sure, glad it worked.
  13. Change the configuration folder location to either a cache drive or, if you don't have one, a disk location: /mnt/cache/blah or /mnt/diskX/blah

      I deleted the container and image, added it fresh, and changed /config to point to /mnt/cache/appdata/plexrequests. I am still seeing the same issue.

      Can you post a log? Also, how long did you wait? At container start it updates meteor, and a couple of times I noticed the meteor servers being ridiculously slow. It took 20 minutes for the update.
  14. Hmm... never noticed the temp directory and that behavior. I'll look into it. It is strange that Calibre doesn't do any cleanup of the tmp files until the import is completed. Here's a temporary workaround (since you only have to do this once, it should be fine): edit the container in the unraid GUI and add a new volume mapping. Put "/tmp" under container volume, and put whatever folder on unraid you would like to use for the temporary temp location (double temp :-) under host path. That folder, which should be outside of the docker image and somewhere on your array, cache drive, or a disk outside of the array, will then be used for the temporary files during import. After the import, if you like, you can edit the container settings to remove that mapping and then delete the folder, too. Keep in mind that every time you edit the container settings (if edge is set to 1), the container will download the latest version, which may take a few minutes depending on how Calibre's server is feeling. By the way, are you hosting the National Library Archives? 400GB of ebooks? How?

      Hmm, I tried mounting to /mnt/user/tmp and keep getting this error:

      Fatal server error: Can't read lock file /tmp/.X1-lock
      Openbox-Message: Failed to open the display from the DISPLAY environment variable.

      (the same two lines repeat twice more)

      So, instead of mounting to /mnt/user, I mounted it directly to one of the disks instead. After that it was able to start up.

      Did you first create a user share called tmp in unraid?
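The workaround above, expressed as an equivalent run command. The image name and host path are assumptions, not verbatim; the key part is mapping /tmp to a path on a disk rather than to a /mnt/user share that hasn't been created yet:

```shell
# Sketch: image name and host path are examples.
# /mnt/disk1/calibre-tmp absorbs Calibre's temporary import files so they
# don't fill the docker image; remove the mapping after the import.
docker run -d \
  --name=calibre \
  -v /mnt/disk1/calibre-tmp:/tmp \
  -v /mnt/cache/appdata/calibre:/config \
  aptalca/docker-rdp-calibre
```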
  15. Change the configuration folder location to either a cache drive or, if you don't have one, a disk location: /mnt/cache/blah or /mnt/diskX/blah
  16. Hmm... never noticed the temp directory and that behavior. I'll look into it. It is strange that Calibre doesn't do any cleanup of the tmp files until the import is completed. Here's a temporary workaround (since you only have to do this once, it should be fine): edit the container in the unraid GUI and add a new volume mapping. Put "/tmp" under container volume, and put whatever folder on unraid you would like to use for the temporary temp location (double temp :-) under host path. That folder, which should be outside of the docker image and somewhere on your array, cache drive, or a disk outside of the array, will then be used for the temporary files during import. After the import, if you like, you can edit the container settings to remove that mapping and then delete the folder, too. Keep in mind that every time you edit the container settings (if edge is set to 1), the container will download the latest version, which may take a few minutes depending on how Calibre's server is feeling. By the way, are you hosting the National Library Archives? 400GB of ebooks? How?
  17. Hmm... never noticed the temp directory and that behavior. I'll look into it.
  18. Is the video playing fine? Plex transcodes as needed. It speeds up and slows down when necessary.
  19. Well, since you asked... I've been thinking about something I think is important, part of the maturation of the unRAID community and its many development partners. I brought it up first over here. unRAID and the numerous addon options have come a long way, and it's time to think about risk management, not just for our data but also for the tools. The community is growing, and depending more and more on so many plugins and containers. Yet for the most part, these plugins and containers have a single author, and that's a single failure point. Think of what would happen to so many users if something happened to PhAzE, binhex, bungy, etc., with so many plugins and containers and so many dependent users. As we're all reminded now and then, life happens, and it rarely comes with advance notification. Businesses have to have disaster plans, for hurricanes, earthquakes, tornadoes, fires, etc., and so do we. Rudimentary perhaps, but something that users can count on.

      It's important that users be able to select tools with a known backup plan, a succession plan if something happens to the original author. In a way, I'm an outsider in the addon development world here, so I don't want to say what should happen. Perhaps as simple as a field in CA for 'Backup author/group'? But I'm hoping this can get the ball rolling, and you, Squid, and the many authors can decide on a mechanism that provides at least a minimum of a succession plan. I know you have already worked out blacklisting processes, and that's good, but we really need a way to make sure important plugins and containers are carried on, not lost and blacklisted. Once something is set up, then we can all nudge the authors to make sure they have someone who can cover for them if the unexpected happens... However, I can imagine some authors being resistant. That's fine, it's their right.

      What is a backup author? Basically someone who is able and willing. At a minimum, they need to be able to manage a repository, the original or their own; they need to be able to access the source and make needed corrections; and they need to be able to package up an update. More than that is gravy.

      Two things... Linuxserver is a team, so all their containers have multiple devs. And docker containers are simple and streamlined enough that in a lot of cases you can just replace an existing one with a different author's version, point it to the same configuration folder, and you're good to go. Many containers have multiple versions already. Plex has more than 3 well-supported ones, and I believe they are interchangeable.
  20. I believe docker can pass through USB devices as long as the host (unraid) has loaded the drivers for them. I'm not sure if unraid contains any drivers for the Kobo device (perhaps it's recognized as an SD card?). I use it with a Kindle, and my preferred method is to send the ebooks through Calibre to the Kindle-associated Amazon email addresses, and the books are delivered to the devices. The other method is downloading them from the server.
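Passing a host device into a container is generally done with docker's --device flag. The device path below is a hypothetical example (how the kernel enumerates an e-reader varies), and as noted above it only works if the unraid kernel actually has a driver for the device:

```shell
# Hypothetical device path; check dmesg after plugging the reader in
# to see what node (if any) the kernel created for it.
docker run -d \
  --name=calibre \
  --device=/dev/sdg \
  -v /mnt/cache/appdata/calibre:/config \
  aptalca/docker-rdp-calibre
```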
  21. Since Dolphin is getting some attention, I updated the info on its GitHub and Docker Hub pages. By default, it runs as user nobody (uid 99), which should be fine for unraid. But some docker containers run as root, and you may not have write access to their local files. If you want to run Dolphin as root (uid 0), add the environment variables USER_ID and GROUP_ID and set both to 0. Or install it fresh from Community Applications; the updated template includes these variables under advanced view.
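Adding the two variables described above to a run command might look like the sketch below. USER_ID, GROUP_ID, and the /mnt:/mnt mapping come from the posts above; the image name, port, and config path are assumptions:

```shell
# USER_ID/GROUP_ID of 0 = root; the default of 99 is unraid's "nobody" user.
docker run -d \
  --name=dolphin \
  -e USER_ID=0 \
  -e GROUP_ID=0 \
  -v /mnt:/mnt \
  -v /mnt/cache/appdata/dolphin:/config \
  aptalca/docker-dolphin
```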
  22. I think you must misunderstand the purpose of Dolphin, since it is nothing like Tonido. A good analogy for Dolphin is Windows File Explorer. This particular docker implementation of it just gives you a Linux desktop in a browser with the Dolphin file manager already launched.

      Yeah, sorry, my end goal was always just something to manage my files without managing them from my other systems, not necessarily to have remote access to the files like with Tonido. I still like being able to manage files while I'm away, but would rather not have it open to the world. With that in mind, is there a way to lock it down?

      Best option is VPN; second best is a reverse proxy like nginx. With a reverse proxy, you can set a password through .htpasswd. For VPN, check out the openvpn server dockers. For nginx, you can use either the nginx docker or the letsencrypt one I have, which sets up a free third-party SSL certificate with nginx.
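A password file for the nginx reverse-proxy option above can be generated with openssl, so no apache2-utils package is needed. The username, password, and paths here are examples only:

```shell
# Generate an htpasswd entry in the Apache MD5 ($apr1$) format,
# which nginx's auth_basic accepts, and write it to an example path.
printf 'admin:%s\n' "$(openssl passwd -apr1 'changeme')" > /tmp/.htpasswd
cat /tmp/.htpasswd
# In the proxied server block, it would then be referenced roughly as
# (path is an example):
#   auth_basic "Restricted";
#   auth_basic_user_file /config/nginx/.htpasswd;
```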
  23. Home-Automation-Bridge has been updated to ver 1.3.7. It now supports Nest integration, as well as multiple Veras and Harmonys
  24. Haha, yeah, that makes sense. I thought the new/updated list didn't include apps that were uninstalled at some point, because I didn't see a couple of those I had removed on the list, but perhaps they didn't meet the cutoff for the recent date, I'm not sure. The name part also makes sense. In fact, the only time I ever changed the name was because I wanted to run a second instance of a container, so I had to rename the second one. I'd be curious whether people change the names regularly. EDIT: just updated and it works great. Thanks so much!