
aptalca
Community Developer
  • Posts: 3,064
  • Days Won: 3

Everything posted by aptalca

  1. If the unraid rc truly requires port 443, then you would only need a new ip with port 443 open for the letsencrypt container, not the rest of the containers. I believe the new unraid rc uses a limetech hosted ddns and gets the certs for the addresses on their server (everyone gets a randomized unique string added to limetech's address). The certs would not be for your own domain, but the custom domain limetech assigns you. Theoretically they should be able to let you use a different port for the connection between their server and yours, although I'm not sure if that's implemented.
  2. If you remove the subdomains field in the container settings, that change should persist through updates. If it doesn't, it's an unraid gui issue.
  3. That's harmless. It's trying to reload nginx after cert renewal but failing because nginx is not running yet; the renewal script runs during container start. Nginx will be started later with the new certs loaded. If the script had run via cron at 2am, nginx would have been running and would have been reloaded properly. Either way, everything works fine.
  4. I didn't interpret his reply as sarcastic at all. I do have the same genuine questions. Is there a reason I should be worried about sharing the drive serial numbers? If so, (as jonathanm suggested) the anonymizer can be modified to redact those as well (it already redacts a bunch of personal info). None of us want to post personal identifiable info online in this day and age. If not, we understand it's your personal decision and it's cool.
  5. Edit the container settings, open advanced view, and you'll see the field for the web UI. Plug the IP in there.
  6. What are you trying to do? Can you access the admin interface? If so, openvpn-as is running fine. If you're just trying to access the cert download page, the address without /admin should redirect you to /connect or something like that, where you can log in as the user. I noticed a redirect issue before (related to client OS and browser), but it's an openvpn-as thing, not docker related.
  7. No issues in the log. Are you sure it is listening on the correct interface? What address did you try to access?
  8. Openvpn and openvpn-as are separate products. The first is the actual platform and the backend, and is open source. The second is a frontend server based on the first, but is not open source and is a commercial product.
  9. The certs weren't generated properly (could be a port forwarding or a dns issue), then you tried it too many times unsuccessfully, and now the letsencrypt servers are throttling you. Try putting in your custom domain (including your custom subdomain) as the url, enter a subdomain like www, and don't set only subdomains to true. Sometimes shuffling the subdomains around lets you get around the throttling issue, but you still have to fix the dns or port issue. If that doesn't work, you'll have to wait until letsencrypt accepts requests from you again.
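The settings described above can be sketched as a docker run; the env variable names are the ones used by the linuxserver letsencrypt image, while the domain itself is a placeholder for your own:

```shell
# Sketch only; mydomain.duckdns.org is a placeholder.
# URL is the top domain you control, SUBDOMAINS adds extra names (e.g. www)
# to the cert, and ONLY_SUBDOMAINS=false keeps the top domain on the cert too.
docker run -d --name=letsencrypt \
  -e URL=mydomain.duckdns.org \
  -e SUBDOMAINS=www \
  -e ONLY_SUBDOMAINS=false \
  -p 443:443 \
  linuxserver/letsencrypt
```

Port 443 must actually reach the container from the internet, or validation fails regardless of these settings.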
  10. Try copying a 25gb file from a btrfs drive (or pool) to the same drive. That's when I had issues. Also during unrar and repair where there is simultaneous read and write operations on the same disk.
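That workload can be sketched as below; the /mnt/cache mount point and the 25G size are just illustrative, so adjust to your own setup:

```shell
# Same-disk copy workload: simultaneous read and write on one drive.
# /mnt/cache and the file size are illustrative.
fallocate -l 25G /mnt/cache/iotest.bin
time cp /mnt/cache/iotest.bin /mnt/cache/iotest.copy
rm -f /mnt/cache/iotest.bin /mnt/cache/iotest.copy
```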
  11. What I did was: 1) mount a second ssd through the unassigned devices plugin, 2) shut down all dockers and VMs (turn off the services in the settings so they don't automatically restart when the array starts), 3) rsync all data from cache to the unassigned device (rsync preserves permissions, timestamps, etc. with the option "a"), 4) stop the array, 5) change the disk format from btrfs to xfs, and 6) restart the array. It will format the cache drive, which takes about a minute. Then you can transfer your data back to the cache drive and re-enable the docker and VM services. If you don't have a spare ssd, you can rsync to an array disk as well. Make sure you use a disk share and not a user share for that (ie. /mnt/diskX).
  12. Another update. After switching to a single btrfs cache drive, I continued to have minor issues. Sonarr and Radarr still logged "database locked" error messages (sqlite errors), likely due to high disk io during unrar and repairs, although these were much shorter-lived compared to the btrfs cache pool and did not cause any issues apart from the log messages. Then I converted the drive to xfs and have not had any error messages logged since. I am convinced that the disk io issues are due to btrfs, and they are much worse in a raid 0 config compared to a single btrfs drive.
  13. This sucks because the client to client backup was super useful. I'll look into duplicati
  14. Please read my message three messages above yours. No need to add users through ssh. No need to delete the admin user. No need to do anything through ssh anymore. Follow the directions on docker hub or github. It really is super simple to set up. You guys are way over-complicating it. With regards to switching authentication to pam and back to local, you don't need to do that either. With the latest update, new installs default to local authentication. If you update an older install, it may have been set to pam, in that case, change it to local. If it's already local, you're good to go.
  15. No need to delete the admin user. It will come back when you update the container anyway. Just follow the steps in the readme to disallow the admin user from logging in. The admin user is required for the first-time login; that's why the container always creates it. But once you set up another account with admin privileges and disable admin login in the config, admin will just be a useless account that doesn't do anything (and doesn't hurt anything). If you want to tighten up security, you can create two user accounts: one an admin, specifically for management purposes, and another for users to log in with. Only share the certificate for the second user account with your users. Or you can create many user accounts, one for each real-life user (or per device), so you can disable access for a specific individual if needed. I'm the only vpn user for my server, so I use the same certificate on all of my devices. The downside is that if a device is lost or stolen, I would have to generate a new cert and update it on all the other devices.
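For reference, OpenVPN AS also ships a command-line tool, sacli, that can script the account setup described above; a hedged sketch (usernames and passwords are placeholders, and the admin web UI accomplishes the same thing):

```shell
# Assumes local authentication; sacli lives under /usr/local/openvpn_as/scripts.
# A local account with admin privileges, used only for management...
./sacli --user vpnadmin --key "prop_superuser" --value "true" UserPropPut
./sacli --user vpnadmin --new_pass 'changeme-admin' SetLocalPassword
# ...and a separate non-admin account whose certificate you share with users.
./sacli --user vpnuser --new_pass 'changeme-user' SetLocalPassword
```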
  16. The instructions in the readme will apply to existing users. The most important thing is to make sure that authentication is set to local before the other users are created. In a nutshell: pam users don't survive container updates or reinstalls, and the admin user (a pam user) gets reset. Local users survive updates, and admin user access can be deactivated in the config file once another local user is given admin privileges.
  17. There is a pr that will provide instructions on how to fix the password resetting issue. It is currently under review and should be merged soon
  18. The URL should be the top domain that you have control over, ie. test.duckdns.org. The subdomains would then be tor and whatever else you like.
  19. Duckdns automatically forwards all sub-subdomains to your ip as well. Just set it up with le so that your cert covers it
  20. Assuming the ip is correct, your router seems to be not forwarding the request on port 443 to the container properly
  21. I'm not sure, according to the log, it seems to start. Did you try going to http://unraidip:33400? If it's still not working, perhaps you can ask in their plex thread
  22. That is not correct. Do not move plex installation files or you'll likely break it and will have to set it up from scratch. Plex doesn't care or know about the root folder. We mount that folder as "/config" inside the container, so all plex knows is that its files are under /config. Whether it is /mnt/cache/appdata/blah or just /blah on your host makes no difference to plex. Plus, your post is confusing because there is another folder called "Plex Media Server" in plex's internal structure (under /config/Library), and if someone moves that folder, they will surely break it. Leave the mount points as is. If plex is working, there is nothing wrong with them. Set the owner of the plugin files directly and restart plex. That's all that is needed. PS. The container only checks the top folder's permissions/owner on container start, and fixes them only if that folder's owner is wrong. So if you add files downstream with wrong permissions, the container won't fix them. In your case, my guess is that when you moved the whole plex folder, you changed the entire folder's permissions/owner and the container fixed it, also fixing the webtools permissions/owner.
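Setting the owner directly can be sketched as below; the appdata path and the 99:100 (nobody:users) ids are assumptions based on a typical Unraid setup, so match them to your own PUID/PGID settings:

```shell
# Fix ownership of manually added plugin files, then restart the container.
# Path and uid:gid are illustrative; linuxserver containers map them via PUID/PGID.
chown -R 99:100 "/mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Plug-ins"
docker restart plex
```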