Everything posted by CorneliousJD

  1. Just getting this set up, but /data/ contains the papermerge.db file along with all the uploaded files/PDFs, etc. I would assume we all want that papermerge.db file in our appdata folders so it's on our SSDs and part of weekly backups, etc. Is there a way to get this papermerge.db file into appdata while leaving the actual data (PDFs) somewhere else, such as on the array? Or is this where we go into papermerge.conf.py and set MEDIA_DIR = "/data/media" to be another path we map, so we can then point /data at the appdata folder as well? Thanks in advance.
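     Something like this is what I'm picturing - a minimal sketch, assuming MEDIA_DIR works that way (the /media container path is a name I made up; you'd add it as an extra path mapping in the template pointed at an array share):

         # papermerge.conf.py - sketch, untested
         # /data stays mapped to /mnt/user/appdata/papermerge (SSD),
         # so papermerge.db lands in appdata and gets backed up;
         # MEDIA_DIR points at a second mapping on the array
         MEDIA_DIR = "/media"

     Untested on my end, so treat it as a guess until someone confirms.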
  2. I had a *lot* of issues with the SQLite database in PhotoPrism actually, lots of "database is locked" errors. After switching it out for MariaDB it just works, and everything is much, much faster as a whole. FWIW I pinged the devs about SQLite and they said they tested with a 6-core machine with fairly standard RAM, but I'm running 16C/32T and 128GB of RAM, so perhaps my machine was going *too* fast for it to keep up and was trying to move on while the SQLite DB was still locked? Not sure, but moving to MariaDB solved the issues. If you do decide to pull it from CA, let me know - I will take over the template; I already have an approved CA repo, so I can put it up instead if you wish. Your template has worked great for everything so far! I plan to roll with this as a self-hosted Google Photos replacement (best thing I can find for now), have Nextcloud auto-upload photos from my phone to external storage that serves as PhotoPrism's import folder, and script auto-imports to run every hour or overnight each day.
  3. For the PhotoPrism docker you should add options for using MySQL instead of SQLite, since the built-in SQLite is horrible for performance and keeps getting itself "locked", causing issues. Here's what I've added on mine to get it working:

     Name: Database Driver
     Key: PHOTOPRISM_DATABASE_DRIVER
     Value: sqlite (default) or mysql
     Description: Change to mysql instead of sqlite to use a MariaDB or MySQL database.

     Name: MySQL Data Source Name
     Key: PHOTOPRISM_DATABASE_DSN
     Value: user:pass@tcp(IP.OF.DATABASE.SERVER:3306)/photoprism?charset=utf8mb4,utf8&parseTime=true
     Description: MySQL database connection information. Leave alone if using SQLite.

     NOTE: Would just need to test actually leaving the DSN blank/unchanged for sqlite; it shouldn't use it at all if sqlite is in use, but I haven't actually tested this. All info from here: https://dl.photoprism.org/docker/docker-compose.yml I have this up and working against the MariaDB container with the information above.
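     For reference, a rough sketch of how those two variables look in a plain docker run (IP, credentials, and port are placeholders - keep your existing volume mappings from the template):

         docker run -d --name=photoprism \
           -p 2342:2342 \
           -e PHOTOPRISM_DATABASE_DRIVER="mysql" \
           -e PHOTOPRISM_DATABASE_DSN="user:pass@tcp(10.0.0.10:3306)/photoprism?charset=utf8mb4,utf8&parseTime=true" \
           photoprism/photoprism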
  4. Interesting, thank you for this. Sadly I don't know that I have the SSD space to really do this right now, but perhaps in the future I could. I do find it weird that the web UI for things like calendar/tasks/to-do is slower, though; file access I could understand, since the data itself is stored as files in the Nextcloud file share, but I would have thought all the other stuff is stored in the DB (which is on SSD cache drives).
  5. I have my appdata for Nextcloud on cache; the Nextcloud SHARE I just have on the normal array, so it sounds like you've got this correct now.
  6. You need to create accounts for them, and they have to log in to the app with those accounts, not your own account. Their uploads should then go into their own folders.
  7. I thought that might be the case actually, after I posted it and thought about it 5 minutes later. Thanks for handling that. If you need anything from me, just @ me. Thanks!
  8. So I not only converted to PostgreSQL 13, I also installed the official Redis docker container and changed my appdata\nextcloud\www\nextcloud\config\config.php to contain the following at the start, with 10.0.0.10 being the IP of my server, which is where Redis is installed. This is just the start of the file; the rest will be your config that already exists...

     <?php
     $CONFIG = array (
       'memcache.local' => '\\OC\\Memcache\\APCu',
       'memcache.distributed' => '\\OC\\Memcache\\Redis',
       'redis' => [
         'host' => '10.0.0.10',
         'port' => 6379,
       ],
       'memcache.locking' => '\\OC\\Memcache\\Redis',
       'datadirectory' => '/data',

     The reason I left memcache.local as APCu is this: https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/caching_configuration.html - they suggest leaving local as APCu if you have enough RAM; if you do not, use Redis for local as well as distributed (and locking). I've found that the combination of Postgres and Redis has drastically sped up my Nextcloud web UI. It's still not perfect - I feel it could be faster (a lot of other webapps I'm running behind a reverse proxy are still more responsive, but none are likely as complex as Nextcloud) - so I'm still looking for MORE optimization if anyone has advice. PS - I used this page as a reference for the conversion from MariaDB to Postgres, and it worked flawlessly: https://docs.nextcloud.com/server/15/admin_manual/configuration_database/db_conversion.html
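     In case it's useful, a quick sanity check that Redis is actually reachable at that IP/port (the container name "redis" is just what I named mine):

         docker exec -it redis redis-cli -h 10.0.0.10 -p 6379 ping
         # should reply: PONG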
  9. Should be able to set the repository to postgres:12 - :latest is version 13 right now.
  10. Well, Redis seems to be working, but there's no config file mapped, so there's nothing we can edit. Upon opening the container log I'm getting a number of warnings. Most likely not an issue for a home server setup, but I'd like to work on fixing these if possible:

     1:C 05 Oct 2020 23:47:34.407 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
     1:C 05 Oct 2020 23:47:34.407 # Redis version=6.0.8, bits=64, commit=00000000, modified=0, pid=1, just started
     1:C 05 Oct 2020 23:47:34.407 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
     1:M 05 Oct 2020 23:47:34.408 * Running mode=standalone, port=6379.
     1:M 05 Oct 2020 23:47:34.408 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
     1:M 05 Oct 2020 23:47:34.408 # Server initialized
     1:M 05 Oct 2020 23:47:34.408 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
     1:M 05 Oct 2020 23:47:34.408 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo madvise > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled (set to 'madvise' or 'never').
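     Per the notes on the official image's Docker Hub page, you can map a config file in and point the entrypoint at it; something like this should take care of the "no config file specified" warning (the appdata path is just my guess at a typical Unraid layout):

         docker run -d --name=redis \
           -p 6379:6379 \
           -v /mnt/user/appdata/redis:/usr/local/etc/redis \
           redis redis-server /usr/local/etc/redis/redis.conf

     The somaxconn, overcommit_memory, and THP warnings are host kernel settings, so the sysctl/THP commands from the log would need to be run on the Unraid host itself (e.g. from the go file), not inside the container.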
  11. Did you ever get anywhere with this? I'm experiencing the same thing when browsing the web UI - things are slower than I feel they should be, as if something isn't right somewhere or could be optimized/changed to make things better, but I've been hitting a wall trying to find *what* that may be. FWIW I am running PostgreSQL 13 at this time; I just converted my MariaDB to Postgres, and it may have made a small performance increase, but not as much as I was hoping for.
  12. Well, I spun up PostgreSQL 13, created a database, and executed the following command inside the Nextcloud container:

     occ db:convert-type --port 5432 --all-apps --clear-schema pgsql nextcloud 10.0.0.10 nextcloud

     It took about 2 hours to convert my MariaDB to PostgreSQL, and when it finally finished, Nextcloud seems slightly faster, by a small margin, but not by a ton like I had hoped. Is there anything else that can be done to speed up this container? I'm accessing it via SWAG with pretty basic/default settings, and haven't installed too many third-party apps so far; I'm only really using it for file shares for myself and 2 smaller users. Thanks in advance again.
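     For anyone wanting to replicate this, I ran it via docker exec against the linuxserver container ("nextcloud" is just my container name; it prompts for the Postgres password when it runs):

         docker exec -it nextcloud \
           occ db:convert-type --port 5432 --all-apps --clear-schema \
           pgsql nextcloud 10.0.0.10 nextcloud

     If the occ wrapper isn't on the path in your image, running sudo -u abc php /config/www/nextcloud/occ with the same arguments inside the container should be equivalent, though I haven't needed to.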
  13. Hi all, I'm hoping for some guidance on getting my Nextcloud instance to run faster. Most things involving the web UI are just horribly slow; the file sync between machines seems okay (speed isn't a huge factor there), but browsing/using the web UI just seems awfully slow. I found this reddit thread that states switching to PostgreSQL solved their issues and made things much snappier. I'm willing to go through the process to get this done if you guys here agree that it's worth doing and would make things run faster. (Note: I'm currently running MariaDB with Nextcloud and 2 other (very small) databases in it.) I have multiple users on Nextcloud and do not mind interrupting them (just a family member and a close friend) in case this takes a while to finish, but I want to make the experience better, as I plan to "de-google" a lot more here in 2021 and would like Nextcloud running as smoothly as possible. Thank you in advance!
  14. FYI - Monica (MonicaHQ) is now deprecated and has moved to just the monica docker image, see here: https://hub.docker.com/r/monicahq/monicahq Should be a very simple migration in the template: just edit the repository to "monica" and edit the appdata container path (and its comment) to be /var/www/html/storage/ Tested and running on my server this way now.
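     In template terms the whole migration is roughly this (the host path is just where my appdata happens to live):

         Repository: monica
         Container path: /var/www/html/storage/
         Host path: /mnt/user/appdata/monica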
  15. A year or two ago, when I started running lots of containers and it got messy to keep track of them, I said to myself, "I really wish they'd let you organize these into groups or folders"... so I really just want to say THANK YOU very much for this. It makes things so much cleaner, and the advanced options to show them with expand/collapse on the dashboard are just icing on the cake.
  16. I'll see if he's got a Docker Hub repo he maintains for that, since it has a Dockerfile. I'll see if I can spin this up as another option for people.
  17. Two simple questions with the changes to SWAG. #1 - Is there a new icon URL I can plug into the container so it doesn't show the "LetsEncrypt" icon anymore? I'm weird about these things. #2 - Is there going to be a new support thread that I should update my link to, or is this still going to be the same thread?
  18. Indeed, sadly it's still one of the best self-hosted recipe management solutions, even with this fact and the quirks that come with it. I lack the time AND the know-how to properly update any of it, but as it stands now, for my own private use with the Dockerfile you've provided, it works very well. I am certainly open to suggestions for other self-hosted recipe management software, though, if someone finds something worthwhile.
  19. In an effort to get some of the bugs fixed (by running the latest version), I've opted for now to use @BrambleGB's Dockerfile fork. Change your repository FROM maduvef/openeats TO bramblegb/openeats and save the container; this should pull the latest version and fix a few bugs that exist in the older one. I'm working on forking the Dockerfile to see if I can address a few other things, but time is VERY lacking for me at the moment (work and personal life keep me very busy), so for now BrambleGB's image will work and pull you the latest version. Thank you very much, BrambleGB, for providing this to the community as well!
  20. Awesome, roger that! I'll probably stick with the LSIO one for now, but it's always good to have options. I settled on TheLounge too after my research. Thanks!
  21. Any difference/reason to use this container for TheLounge vs the LinuxServer.io one? If so, would pointing at my existing appdata work fine?
  22. I had the same issue this morning; I actually just removed the container and re-added it, and it's working again now. For what it's worth, I'm very much *not* a fan of the animated GIF logo on my docker page either. If anyone else isn't, I changed FROM: https://raw.githubusercontent.com/Organizr/docker-organizr/master/logo.gif TO: https://raw.githubusercontent.com/causefx/Organizr/v2-master/plugins/images/organizr/logo-no-border.png
  23. Automatically? No, not unless the Stash devs add it to the docker image themselves, but manually, sure! Scrapers live in /mnt/user/appdata/stash/scrapers - just follow the instructions on the CommunityScrapers page to download the files, put them in the correct place, and you should be able to use them no problem.
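     For example, from a terminal on the server (the scraper filename here is hypothetical - browse the CommunityScrapers repo for the ones you actually want):

         mkdir -p /mnt/user/appdata/stash/scrapers
         cd /mnt/user/appdata/stash/scrapers
         wget https://raw.githubusercontent.com/stashapp/CommunityScrapers/master/scrapers/ExampleSite.yml

     Then restart the Stash container so it picks the new scrapers up.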
  24. Thanks for sharing. I will probably continue to run this with its final update. I can still access it locally at least, and via my own remote access. Will you just tag it as unsupported and let people still install it from CA?