Everything posted by CorneliousJD

  1. About

     AMP is short for Application Management Panel; it is a piece of software designed for managing multiple game servers from a single user interface. It supports Minecraft (via McMyAdmin3), TF2, Starbound, and a whole lot more! Please see the GitHub page for the different modules you can use and create with this!

     Requirements

     You will need an AMP license, or a McMyAdmin license, in order to get this to run. NOTE: If you use a McMyAdmin license, this container will limit you to only having access to Minecraft and nothing else. For license management, see here: https://manage.cubecoders.com/ If you are migrating from McMyAdmin2 to AMP, please see here: https://github.com/cubecoders/amp/wiki/How-to-import-an-existing-Minecraft-server-into-AMP

     PLEASE READ: You must also set a static MAC address on this container! AMP is designed to detect hardware changes and will de-activate all instances when something significant changes. This is to stop people from sharing pre-activated instances and bypassing the licensing server. One way of detecting changes is to look at the MAC address of the host's network card; a change here will de-activate instances. By default, Docker assigns a new MAC address to a container every time it is restarted. Therefore, unless you want to painstakingly re-activate all your instances on every server reboot, you need to assign a permanent MAC address. To do this on unRAID, please follow these steps:

     1. Visit this page: https://miniwebtool.com/mac-address-generator/
     2. Enter 02:42:AC as the prefix.
     3. Choose to format with : colons.
     4. Generate a MAC address.
     5. In the container template during install (or edit), make sure the "Advanced" toggle is on in the top right corner.
     6. In "Extra Parameters", enter your MAC address like this: --mac-address="02:42:AC:XX:XX:XX"

     Variables

     LICENCE: The licence key for CubeCoders AMP. You can retrieve or buy this on their website here: https://manage.cubecoders.com/ Note that if a McMyAdmin license is applied, it will limit AMP to McMyAdmin-only mode.
     MODULE: Which module to use for the main instance created by this image. Please see the GitHub page for more options.
     USERNAME: The username of the admin user created on first boot.
     PASSWORD: The password of the admin user. This value is only used when creating the new user. If you use the default value, please change it after first sign-in.
     GAME PORT: Be sure to use the proper port, and edit the port to be TCP/UDP based on what the game uses!
     Other Variables: For a list of other variables that can be used and created, please see the GitHub page.

     Reverse Proxy

     This should work just fine with a reverse proxy (like LinuxServer's SWAG container, or Nginx Proxy Manager). I would recommend setting this up as a subdomain, such as https://amp.yourdomain.com A basic example would be as follows. Note that I have currently not tested this.

     #AMP
     server {
         listen 80;
         server_name amp.yourdomain.com;
         return 301 https://amp.yourdomain.com$request_uri;
     }
     server {
         listen 443 ssl http2;
         server_name amp.yourdomain.com;
         location / {
             include /config/nginx/proxy.conf;
             proxy_pass http://192.168.1.10:8080/;
         }
     }

     Known Issues

     I do not know of any issues with this container at this time.
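     As an alternative to the web generator in step 1, a MAC address with the 02:42:AC prefix can be generated from a shell. This is only a sketch (it uses bash's $RANDOM, so it is bash-specific), not part of the official instructions:

     ```shell
     # Sketch: generate a random MAC with the locally-administered 02:42:AC
     # prefix, instead of using the web generator (bash-specific: $RANDOM).
     mac=$(printf '02:42:AC:%02x:%02x:%02x' $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)))
     echo "$mac"
     # Then enter it in "Extra Parameters" as: --mac-address="$mac"
     ```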
  2. Just click like you're reading the recipe, not editing it. Let me know if you find it there. If not I'll take some screenshots when I'm at my home PC.
  3. You're welcome, glad people are finding good uses for it. I plan on using it instead of OpenEats moving forward as well, although there are still some things about OE I like better. In regards to adding a recipe to the cookbook, you have to go into the recipe itself first, and then in the top right there's a little "..." icon that you can click to add it to a specific cookbook. If you're still stuck let me know and I can get some example screenshots for you later.
  4. I actually also use it for a document scanner to dump documents to OCRmyPDF (and then import into PaperMerge). My Brother ADS-1000W uses FTP to do that.
  5. The "Fix common problems" plugin says to not use it, and to use a container instead. I'm only (currently) running it on my local network, but even then, if there's any security risk I'd rather not. I - like an idiot - downloaded the FileZilla container thinking it was the server application, but it's not. I am not at all familiar with any of the FTP servers listed in CA really; does anyone have any advice/insight on this, and what would be a good container to use, preferably one with a web UI? Thanks in advance.
  6. Well, it's done! The Recipes container is successfully running self-contained with no other DB required. Sorry it took so long; I honestly didn't give it a real effort until this evening, but it's working now! It'll be up on CA within 2 hours or so -- if it's not there yet when you look, just check back in a few. It's all in a self-contained docker container now. Support thread for it is here. From this point forward, though, please keep everything related to the Recipes container in the following thread. That will leave this one just for OpenEats support.
  7. About

     Tandoor Recipes is a Django application to manage, tag and search recipes using either built-in models or external storage providers hosting PDFs, images or other files. This application is meant for people with a collection of recipes they want to share with family and friends, or simply store in a nicely organized way. A basic permission system exists, but this application is not meant to be run as a public page.

     Please note that I did not develop, and am not involved with the development of, Recipes. There were simply a number of requests to get a working container on unRAID, so I took the lead on creating the template and publishing it to Community Applications.

     GitHub: https://github.com/vabene1111/recipes
     Project Page: https://docs.tandoor.dev/

     Requirements

     Nothing other than the container; it is all self-contained. However, you do have the option to use PostgreSQL to add the Trigram similarity search function. Simply show the hidden/advanced variables when installing or editing the container to enter PostgreSQL information if you wish to use that.

     Variables

     ALLOWED_HOST: This will default to * (all); however, to be more restrictive you can list the IP address of your unRAID server (allows local LAN access), along with any reverse-proxy domain you want to access from (for access over the Internet). E.g. it may look like this: 192.168.1.10, recipes.yourdomain.com
     SECRET_KEY: This needs to be a long, randomly generated string used for cryptography. See here for more information.
     Other Variables: A list of other variables is available here.

     Reverse Proxy

     This should work just fine with a reverse proxy (like LinuxServer's SWAG container, or Nginx Proxy Manager). I would recommend setting this up as a subdomain, such as https://recipes.yourdomain.com A basic example would be as follows. Note that I have currently not tested this. #RECIPES It was noted that this config is not working for SWAG. If someone has a working config, please let me know and I will post it here. Nginx Proxy Manager works just fine to reverse proxy this app.

     Known Issues

     All media is served directly by Gunicorn instead of NGINX. This isn't recommended, but works -- see screenshot below. If you would like to change this and serve images via another container (such as Nginx, SWAG, etc.), see the discussion on how users got it working here: https://github.com/vabene1111/recipes/discussions/341
     By default this will use a SQLite database, which removes the Trigram similarity search function. This is likely not an issue for most people, but something to note. I chose to let this run with SQLite by default so that everything is self-contained and you do not need another PostgreSQL container, unless you choose to set one up in order to gain the Trigram similarity search features.

     If there are any container-specific issues please post here; otherwise, if it's an application issue or request, you're better off posting on the GitHub page for Recipes here: https://github.com/vabene1111/recipes/issues
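     For the SECRET_KEY variable above, any long random string works. One way to generate a 50-character one from a shell (a sketch; the length and character set are my choice, not a requirement from the project):

     ```shell
     # Generate a 50-character URL-safe random string suitable for SECRET_KEY.
     # 48 random bytes -> 64 base64 characters; '+' and '/' are mapped to
     # '-' and '_' so the result is safe to paste into a template field.
     SECRET_KEY=$(head -c 48 /dev/urandom | base64 | tr '+/' '-_' | head -c 50)
     echo "$SECRET_KEY"
     ```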
  8. Just getting this set up, but /data/ contains the papermerge.db file along with all the uploaded files/PDFs, etc. I would assume we all want that papermerge.db file in our appdata folders so it's on our SSDs and part of weekly backups, etc. Is there a way to get this papermerge.db file into appdata while leaving the actual data (PDFs) somewhere else, such as on the array? Or is this where we go into papermerge.conf.py and set MEDIA_DIR = "/data/media" to be another path we map, then we can direct /data to the appdata folder as well? Thanks in advance.
  9. I had a *lot* of issues with the SQLite database in PhotoPrism actually, lots of "database is locked" errors. After switching it out to MariaDB it just works, and everything is much, much faster now as a whole. FWIW, I pinged the devs on SQLite and they said they tested with a 6-core machine with fairly standard RAM, but I'm running 16T/32C and 128GB of RAM, so perhaps my machine was going *too* fast for it to keep up and was trying to move on while the SQLite DB was still locked? Not sure, but moving to MariaDB solved the issues. If you do decide to pull it from CA, let me know - I will take over the template; I already have an approved CA repo, so I can put it up instead if you wish. Your template works great so far on everything with it! I plan to roll with this as a self-hosted Google Photos replacement (best thing I can find for now) and get Nextcloud to auto-upload photos from my phone to external storage that is the import folder of PhotoPrism, and script auto-imports to happen every hour or overnight each day.
  10. For the PhotoPrism docker you should add in options for using MySQL instead of SQLite, since the built-in SQLite is horrible with performance and keeps getting itself "locked", causing issues. Here's what I've added on mine to get it working.

      Name: Database Driver
      Key: PHOTOPRISM_DATABASE_DRIVER
      Value: sqlite (default) or mysql
      Description: Change to mysql instead of sqlite to use a MariaDB or MySQL database.

      Name: MySQL Data Source Name
      Key: PHOTOPRISM_DATABASE_DSN
      Value: user:pass@tcp(IP.OF.DATABASE.SERVER:3306)/photoprism?charset=utf8mb4,utf8&parseTime=true
      Description: MySQL database connection information. Leave alone if using SQLite.

      NOTE: I would just need to test actually leaving the DSN blank/unchanged for SQLite; it shouldn't use it at all if SQLite is in use, but I haven't actually tested this. All info from here: https://dl.photoprism.org/docker/docker-compose.yml I have this up and working with the MariaDB container using the information above.
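      To make the DSN's structure easier to see, here is a sketch assembling it from its parts. All the values are placeholders (user, password, host, database name) - substitute your own:

      ```shell
      # Build the PHOTOPRISM_DATABASE_DSN value from its parts.
      # Every value below is a placeholder; substitute your own.
      DB_USER="photoprism"
      DB_PASS="secret"
      DB_HOST="192.168.1.10"
      DB_PORT="3306"
      DB_NAME="photoprism"
      DSN="${DB_USER}:${DB_PASS}@tcp(${DB_HOST}:${DB_PORT})/${DB_NAME}?charset=utf8mb4,utf8&parseTime=true"
      echo "$DSN"
      ```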
  11. Interesting, thank you for this. Sadly, I don't know that I have the SSD space to really do this right now, but perhaps in the future I could. I do find it weird, though, that the web UI for things like calendar/tasks/to-do is slower too; file access I could understand, since the data itself is stored as files in the Nextcloud file share, but I would have thought all the other stuff is stored in the DB (which is on SSD cache drives).
  12. I have my appdata for nextcloud on cache, the nextcloud SHARE I just have on the normal array, so it sounds like you've got this correct now
  13. you need to create accounts for them and they have to login to the app with those accounts, not your own account. Their uploads should go into their own folders.
  14. I thought that might be the case actually after I posted it and thought about it 5 minutes later Thanks for handling that. If you need anything from me just @ me. thanks!
  15. So I not only converted to PostgreSQL 13, I also installed the official Redis docker container and changed my appdata\nextcloud\www\nextcloud\config\config.php to contain the following at the start, with 10.0.0.10 being the IP of my server, which is where Redis is installed. This is just the start of the file; the rest of the file will be your config that already exists...

      <?php
      $CONFIG = array (
        'memcache.local' => '\\OC\\Memcache\\APCu',
        'memcache.distributed' => '\OC\Memcache\Redis',
        'redis' => [
          'host' => '10.0.0.10',
          'port' => 6379,
        ],
        'memcache.locking' => '\OC\Memcache\Redis',
        'datadirectory' => '/data',

      The reason I left memcache.local as APCu is due to this: https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/caching_configuration.html They suggest leaving it as APCu for local if you have enough RAM; if you do not, then use Redis for both local and distributed (as well as locking). I've found that the combination of Postgres and Redis has drastically sped up my Nextcloud web UI. It's still not perfect; I feel it could be faster (a lot of other webapps I'm running behind a reverse proxy are still more responsive, but none are likely as complex as Nextcloud). I'm still looking for MORE optimization if anyone has any advice on that.
      PS - I used this page as a reference for the conversion to Postgres from MariaDB - worked flawlessly. https://docs.nextcloud.com/server/15/admin_manual/configuration_database/db_conversion.html
  16. You should be able to set the repository to postgres:12; :latest is version 13 right now.
  17. Well, Redis seems to be working, but there's no config file mapped, so there's nothing we can edit. Upon opening the container log I'm getting a number of warnings. Most likely not an issue for a home server setup, but I would like to work on fixing these if possible?

      1:C 05 Oct 2020 23:47:34.407 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
      1:C 05 Oct 2020 23:47:34.407 # Redis version=6.0.8, bits=64, commit=00000000, modified=0, pid=1, just started
      1:C 05 Oct 2020 23:47:34.407 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
      1:M 05 Oct 2020 23:47:34.408 * Running mode=standalone, port=6379.
      1:M 05 Oct 2020 23:47:34.408 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
      1:M 05 Oct 2020 23:47:34.408 # Server initialized
      1:M 05 Oct 2020 23:47:34.408 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
      1:M 05 Oct 2020 23:47:34.408 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo madvise > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled (set to 'madvise' or 'never').
  18. Did you ever get anywhere with this? I'm also experiencing the same thing when browsing the Web UI, things are slower than I feel they should be, as if something isn't right somewhere or could be optimized/changed to make things better but I've been hitting a wall in trying to find *what* that may be. FWIW I am running PostgreSQL 13 at this time, just converted my MariaDB to Postgres and it may have made a small performance increase, but not as much as I was hoping for.
  19. Well, I spun up PostgreSQL 13, created a database, and executed the following command inside the Nextcloud container:

      occ db:convert-type --port 5432 --all-apps --clear-schema pgsql nextcloud 10.0.0.10 nextcloud

      It took about 2 hours to convert my MariaDB to PostgreSQL, and when it finally finished, Nextcloud seems slightly faster, by a small margin, but not by a ton like I had hoped. Is there anything else that can be done to speed up this container? I'm accessing it via SWAG with pretty basic/default settings, and haven't installed too many third-party apps so far; I'm only really using it for file shares for myself and 2 smaller users. Thanks in advance again.
  20. Hi all, I'm hoping for some guidance on getting my Nextcloud instance to run faster. Most things involving the web UI are just horribly slow; the file sync between machines seems okay (speed isn't a huge factor there), but browsing/using the web UI just seems awfully slow. I found a reddit thread that states switching to PostgreSQL solved their issues and made things much snappier. I'm willing to go through the process to get this done if you guys here agree that it is worth doing and would make things run faster. (Note: I'm currently running MariaDB with Nextcloud and 2 other (very small) databases in it.) I have multiple users of Nextcloud and do not mind interrupting them (just a family member and a close friend) in case this takes a while to get finished, but I want to make the experience better, as I plan to "de-google" a lot more here in 2021 and would like Nextcloud running as smoothly as possible. Thank you in advance!
  21. FYI - Monica (MonicaHQ) is now deprecated and has moved to just the Monica docker image, see here: https://hub.docker.com/r/monicahq/monicahq Should be a very simple migration in the template: just edit the repo to "monica" and edit the appdata container path (and comment) to be /var/www/html/storage/ Tested and running on my server this way now.
  22. A year or two ago when I started running lots of containers and it got messy to keep track I said to myself "I really wish they'd let you organize these into groups or folders"... so I really just want to say THANK YOU very much on this... This makes things so much cleaner, and the advanced options to show them w/ expand/collapse on the dashboard is just icing on the cake.
  23. I'll see if he's got a dockerhub he maintains with that, since it has a dockerfile. I'll see if I can spin this up as another option for people
  24. Two simple questions with the changes to SWAG. #1 - Is there a new icon URL I can plug into the container so it doesn't show "LetsEncrypt" icon anymore? I'm weird about these things. #2 - Is there going to be a new support thread that I should update it to link to, or is this still going to be the same thread?