FoxxMD

Members
  • Content Count: 74
  • Joined
  • Last visited

Community Reputation: 12 Good

1 Follower

About FoxxMD
  • Rank: Advanced Member

Recent Profile Visitors: 877 profile views
  1. Unfortunately I do not think there is an easy way to go about this using docker alone. The Dockerfile for the client shows that the source is copied to a working directory, built, and then copied to the www folder. There is a CLIENT_BUILD_ARGS argument for appending arguments to the build command, but that is a bit unwieldy. Your options boil down to:
       • Build szurubooru yourself locally, then map your built css folder to the www folder (a rough sketch follows below).
       • Open an issue (or better yet a PR) on the repo to implement some kind of user-defined intermediate step between the initial code copy and the build -- for example, an additional folder the Dockerfile checks for files and copies after the initial source copy, so a user could map their own style files into it and have them overwrite the original source files before the build step.
     Yes, it is safe to remove! Thank you for catching that; I had forgotten to update the template.
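     A rough, untested sketch of the first option (build the client locally, then map the output). The repo URL, image name, and in-container path are assumptions -- adjust them to your own setup:

     ```
     # build the client image yourself after editing its css source
     git clone https://github.com/rr-/szurubooru.git
     cd szurubooru/client
     docker build -t my-szurubooru-client .

     # copy the built static files out of the image so they can be bind-mounted
     # (the /var/www path inside the image is an assumption -- check the Dockerfile)
     docker create --name client-tmp my-szurubooru-client
     docker cp client-tmp:/var/www ./built-www
     docker rm client-tmp

     # then add a Path mapping in the unraid template from ./built-www (or just its
     # css subfolder) to the www folder the client container serves from
     ```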
  2. @buzzra are you sure you read any of the docs? The entry for enabling new registration is in the docs, as is the entry for creating a superuser and accessing the admin panel where you can manage users. Support here is for the docker template on unraid, or other specifics about running on unraid -- not for reading the manual for you.
  3. Application Name: pgadmin4
     Application Site: https://www.pgadmin.org/
     Github Repo: https://github.com/postgres/pgadmin4
     Docker Hub: https://hub.docker.com/r/dpage/pgadmin4/
     Template Repo: https://github.com/FoxxMD/unraid-docker-templates

     Overview

     pgAdmin is the most popular and feature-rich Open Source administration and development platform for PostgreSQL, the most advanced Open Source database in the world. Some features:
       • Designed for PostgreSQL 9.5 and above
       • Access and manage multiple databases
       • CRUD for all the things and raw SQL execution
       • Wizards and graphical tools for many administrative tasks (ACL, backup/restore, etc.)
       • Monitoring dashboard

     How is this different from the existing CA template for pgAdmin4?
       • The dockerhub repo (fenglc) used for that template has not been updated in 2 years and is now archived
       • This dockerhub repo (dpage) is the official image for pgAdmin4 and is up to date
       • fenglc only supports utilities (pg_restore, etc.) for postgres 9, while dpage contains utilities for postgres 9.5, 9.6, 10, 11, and 12
       • fenglc has hardcoded default credentials, while dpage supports supplying your own default user/pass for new installations
       • dpage supports many more environment variables for configuration

     Usage

     NOTE: On the initial container build there may be a noticeable delay (10-20 seconds) between when the container starts and when the UI becomes available.

     === Migrating from fenglc or bringing your own data ===
     Use the Config path (under "Show More Settings") in the template to map your working directory. If you are migrating from fenglc just use the same folder from appdata. If you use this approach you can safely ignore or remove the Email/Password variables from the template.

     === New Installations ===
     You must provide values for Default Email and Default Password in the template or the container will not start. The email is not actually used, it's just a username. After the container has been started for the first time these variables can be removed from the template (see the example run below).

     Configuration

     All other configuration for the container can be found in the pgAdmin docs.
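     For reference, here is roughly what the template translates to as a plain docker run. The host port and appdata path are illustrative choices, not requirements; see the pgAdmin docs for the full list of variables:

     ```
     docker run -d --name pgadmin4 \
       -p 8080:80 \
       -e PGADMIN_DEFAULT_EMAIL=admin@example.com \
       -e PGADMIN_DEFAULT_PASSWORD=changeme \
       -v /mnt/user/appdata/pgadmin4:/var/lib/pgadmin \
       dpage/pgadmin4
     ```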
  4. For current users: after v2.2 Elasticsearch is no longer needed. You do not have to take any action in order to upgrade, but it is still recommended to back up your postgres DB and data folders beforehand. Once you have started the new (upgraded) container, do not stop/update the container until post data has been rehashed -- progress can be checked by viewing the logs (see below). After hashing is complete you can safely remove the Elasticsearch Host variable from your unraid template and restart the container.
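     One way to watch the rehash progress is to tail the container logs from the unraid terminal. The container name here is an assumption -- use whatever name your template gave it:

     ```
     docker logs -f szurubooru-server
     ```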
  5. +1 thank you for this script. It would be great if this plugin worked like the rest of the dashboard modules so it could be moved around, moved to one column, or collapsed.
  6. @steffenk you're right, it doesn't look like it supports multiple uploads. You might want to check out this other CA app that I use for image archival, Szurubooru -- it might be what you are looking for.
  7. This repo contains a docker-compose file, which means the "dockerized" solution is actually many different docker containers working together, which docker-compose helps orchestrate. docker-compose can be installed on unraid if you are willing to jump through some hoops using NerdPack. You will still have to write the scripts to run docker-compose and set them up to run at startup as well (a sketch follows below). The other option is to install each docker image from the compose file individually and set up bindings/ports/volumes yourself. However, this is a pretty beefy compose file so I wouldn't recommend it.
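     If you do go the NerdPack/docker-compose route, a minimal sketch of a User Scripts entry (set to run at startup of the array) might look like this -- the project path is an assumption:

     ```
     #!/bin/bash
     # bring the compose stack up in detached mode
     cd /mnt/user/appdata/mystack
     docker-compose up -d
     ```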
  8. @milfer322 you can update your unraid template to get the ready-made configuration (may require removing and re-installing the app from CA?), or you can add the variables yourself to your template:
       • HTTPS_ONLY as false or true
       • EXPOSE_PORT to specify what port whoogle will start on inside the container
     See the example run below.
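     As a plain docker run, those variables would look roughly like this. The image tag and host port are illustrative; on unraid the template just adds them as Variables:

     ```
     docker run -d --name whoogle \
       -e HTTPS_ONLY=true \
       -e EXPOSE_PORT=443 \
       -p 443:443 \
       benbusby/whoogle-search
     ```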
  9. Application Name: pinry
     Application Site: https://docs.getpinry.com/
     Github Repo: https://github.com/pinry/pinry/
     Docker Hub: https://hub.docker.com/r/getpinry/pinry
     Template Repo: https://github.com/FoxxMD/unraid-docker-templates

     Overview

     pinry is a tiling image board system for people who want to save, tag, and share images, videos and webpages in an easy-to-skim-through format -- basically an open-source Pinterest clone. Some of its features:
       • Image fetch and online preview
       • Tagging system for Pins
       • Browser extensions
       • Multi-user support
       • Both public and private boards
       • Search by tags / search boards by name

     Some demo sites:
       • https://pin.37soloist.com/
       • https://pinry-demo.lapo.it/
     more screenshots here

     Usage

     Initial setup only requires adding the template from CA. Additional settings can be accessed by modifying the configuration file at /mnt/user/appdata/pinry/local_settings.py

     Additional Configuration

     This is very specific -- if you are accessing pinry behind an nginx reverse proxy you must make sure you have not set HTTPOnly for cookies, as the application gets CSRF tokens from a Cookie header.
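     A rough docker run equivalent of the template, for reference. The container-side data path and internal port are assumptions -- check the pinry image docs for the exact values:

     ```
     docker run -d --name pinry \
       -p 8880:80 \
       -v /mnt/user/appdata/pinry:/data \
       getpinry/pinry
     ```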
  10. @FFV maloja has a super simple endpoint for scrobbling in a DIY manner by using "/api/newscrobble with the keys artist, title and key - either as form-data or json." You might be able to do a loop with applescript and submit the track info from itunes using that? IDK, sounds like a bit of an endeavor but it's definitely doable!
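     As a quick test of that endpoint, something like the following should register a scrobble (the host, port, and token are placeholders):

     ```
     curl -X POST "http://yourMalojaUrl:42010/api/newscrobble" \
       -H "Content-Type: application/json" \
       -d '{"artist": "Daft Punk", "title": "One More Time", "key": "yourMalojaToken"}'
     ```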
  11. @milfer322 the author has updated the dockerfile for this app. It is now possible to do what you want by setting Container Port to 443 in the template. Additionally, you may want to set Application HTTPS Only to true.
  12. Application Name: maloja
     Application Site: https://maloja.krateng.ch/
     Github Repo: https://github.com/krateng/maloja
     Docker Hub: https://hub.docker.com/r/foxxmd/maloja
     Template Repo: https://github.com/FoxxMD/unraid-docker-templates

     Overview

     maloja is a self-hosted music scrobble server to create personal listening statistics and charts as a substitute for Last.fm / Libre.fm / GNU FM (scrobbling is the act of recording the music you listen to in a database). maloja has many features that make it suitable as a replacement for last.fm, etc., including:
       • Easy import of existing scrobble data in CSV format (from last.fm, etc.)
       • Custom rules for importing/scrobbling
       • Custom and 3rd party integrations for album/artist artwork
       • Insightful charting to display time-sliced "top charts" for tracks and artists
       • Full listening history and track lookup using multiple sources (youtube, gmusic, spotify..)
       • A first-party chrome extension for scrobbling from the web, as well as third-party scrobble-compliant endpoints for use with other extensions and applications

     Usage

     Initial setup only requires adding the template from CA. A randomly generated API key to use with your preferred scrobbling client can be found in /mnt/user/appdata/maloja/clients/authenticated_machines.tsv. A default settings file is generated at /mnt/user/appdata/maloja/settings/default.ini. If you want to override any default settings then add them to another file, settings.ini, in the same folder. You may have to create the file first.

     Additional Configuration/Usage

     I will only be covering what is not already included in the readme, so check that out first for info on how to import from last.fm, make backups, update db rules, and set general configuration.

     Setting up Artist/Album Image Fetching

     Three 3rd party APIs can be integrated to fetch artwork. Of these, Last.fm only fetches track artwork, so I would recommend integrating each of these one at a time and only setting up the next on the list if not all of your images are fetched:
       • Spotify - go through the Create A Client ID process. You will need the client ID and secret.
       • Last.fm - will also need a client ID and secret
       • Fanart.tv - only need the API key
     Add each (as necessary) to your settings.ini file and then restart the container (a sketch follows at the end of this post).

     Scrobble Clients

     These are my personal preferences for scrobble clients and are definitely not exhaustive.

     Web-based -- https://web-scrobbler.github.io/ has extensions for chrome and firefox that work on almost any website. Setup:
       • Open extension options
       • Under Account choose Properties for ListenBrainz
       • API URL: http://yourMalojaUrl/api/s/listenbrainz/1/submit-listens
       • Token: any of the tokens you have registered for maloja

     Local-based -- I don't have any native applications for listening to music but instead use https://github.com/airsonic-advanced/airsonic-advanced which has ListenBrainz integration. Setup:
       • Open Settings -> Credentials
       • Add credentials with app ListenBrainz; the password is any maloja token
       • Open Settings -> Personal
       • Check "Register what I'm playing at ListenBrainz"
       • In the ListenBrainz URL field that appears, enter the same URL as used for web-scrobbler
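     For the image fetching section above, overriding the defaults could look something like this. The exact key names here are assumptions -- copy the real names from default.ini in the same folder before using them:

     ```
     # append your API credentials to the override file, then restart the container
     cat >> /mnt/user/appdata/maloja/settings/settings.ini <<'EOF'
     SPOTIFY_API_ID = "your-spotify-client-id"
     SPOTIFY_API_SECRET = "your-spotify-client-secret"
     LASTFM_API_KEY = "your-lastfm-api-key"
     FANARTTV_API_KEY = "your-fanarttv-api-key"
     EOF
     ```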
  13. My apologies, I am not too familiar with unraid using a custom network. unraid cannot use port mappings when the network type is not bridge, because you have manually set up a different network to attach the container to. You can either use the bridge network and let unraid map the internal port (5000) to the host port (443) on your IP (see the example below), do your own mapping (externally) somehow, or take steps to modify the port whoogle starts on, such as:
       • Fork the project and modify the Dockerfile to hardcode a specified port in the entrypoint command, then use your own build as the repo
       • Open an issue to request specifying the port via an environment variable in the Dockerfile
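     The first (bridge) option expressed as a plain docker run, which is the same mapping unraid performs when you set Host Port 443 and Container Port 5000 in the template (the image name is assumed to be the official one):

     ```
     docker run -d --name whoogle \
       --network bridge \
       -p 443:5000 \
       benbusby/whoogle-search
     ```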
  14. There should be a Web UI entry in your container settings. Click Show More Settings (but it shouldn't be hidden). If you don't have it for some reason, use the Add Path, Port, Variable, etc. action at the bottom to add it with Container Port 5000 and Host Port 443.
  15. @milfer322 I think you are asking how to access it at 192.168.1.6? The root URL can stay the same; you just need to change the Web UI port to 443.