Everything posted by rob_robot

1. I didn't encounter this issue as far as I remember. Could it be a memory size issue? Is this the only error, or are there additional error messages in the log file?
2. I also had the same heap size problem with Elasticsearch. It can be solved by editing the Docker container, switching to "Advanced view" and editing the EXTRA PARAMETERS line. Here is an example that raises the heap size from 512 MB to 4 GB:

        -e "ES_JAVA_OPTS"="-Xms4g -Xmx4g" --ulimit nofile=262144:262144

     The actual heap size can be checked by opening a console inside the container and running the following command:

        curl -sS "localhost:9200/_cat/nodes?h=heap*&v"

     heap.max should then show 4 GB instead of 512 MB.
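     The same check also works from the Unraid host without opening a console; a one-line sketch, assuming the container is named "elasticsearch":

        # container name is an assumption; adjust to your install
        docker exec elasticsearch curl -sS "localhost:9200/_cat/nodes?h=heap.current,heap.percent,heap.max&v"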
3. It is a bit of a chicken-and-egg problem. The file should get created after the first run, but after this much time I don't remember whether I manually added the file or copied it from inside the Docker container (i.e. not mapping the config file at all and then copying the file out of the container via a docker command). One way would be to create the file manually:
     1.) Go to /mnt/user/appdata/fscrawler/config/ and create the folder "job_name" (permissions 999, root/root)
     2.) Inside the new job_name folder, create a file called _settings.yaml and paste the content from my
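     For orientation, a minimal FSCrawler _settings.yaml looks roughly like the sketch below; the job name, indexed path and Elasticsearch address here are placeholder assumptions, not the original values:

        # minimal FSCrawler job settings (all values are assumptions)
        name: "job_name"
        fs:
          url: "/tmp/es"            # folder that FSCrawler should index
          update_rate: "15m"
        elasticsearch:
          nodes:
            - url: "http://127.0.0.1:9200"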
4. The app is great, but to get my paperless set-up working I would need a feature to specify a unique file name for the output file, i.e. something like SCAN_YEAR_MONTH_DAY_TIME_ID.pdf. The problem I have is that my scanner does not provide unique file name indices (e.g. an ever-increasing index number); instead it restarts counting from 1 as soon as there are no more files in the folder. This means that once the files have been processed and deleted in the incoming scans folder, the scanner restarts indexing and produces the same file name as before, e.g. SCN_0
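     As a stop-gap until such a feature exists, a small rename pass over the incoming folder could stamp files before they collide; a rough sketch, where the folder path and SCN_* pattern are assumptions:

        # rename incoming scans to SCAN_YEAR_MONTH_DAY_TIME_ID.pdf (paths and pattern are assumptions)
        for f in /mnt/user/scans/incoming/SCN_*.pdf; do
            [ -e "$f" ] || continue
            mv "$f" "/mnt/user/scans/incoming/SCAN_$(date +%Y_%m_%d_%H%M%S)_$(basename "$f" .pdf).pdf"
        done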
5. It was an error in the config file. To check the syntax I suggest running rsnapshot configtest inside the Docker container, as described on the GitHub page for rsnapshot: https://github.com/rsnapshot/rsnapshot/blob/master/README.md#configuration
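     From the Unraid host this can be done in one line, assuming the container is named "rsnapshot":

        docker exec rsnapshot rsnapshot configtest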
6. I have edited rsnapshot.conf and crontabs/root in the appdata directory as follows and then restarted the Docker container, but cron does not seem to get started: I do not see any snapshots being created so far. Is there anything else that needs to be done?

        # do daily/weekly/monthly maintenance
        # min   hour    day     month   weekday command
        */15    *       *       *       *       run-parts /etc/periodic/15min
        0       *       *       *       *       run-parts /etc/periodic/hourly
        0       2       *       *       *       run-parts /etc/periodic/daily
        0       3       *       *       6       run-parts /etc/periodic/weekly
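     One quick check is whether the cron daemon is running inside the container at all; a sketch, assuming the container is named "rsnapshot":

        docker exec rsnapshot pgrep crond || echo "crond is not running"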
7. This guide is based on the Samba wiki article Spotlight with Elasticsearch Backend: https://wiki.samba.org/index.php/Spotlight_with_Elasticsearch_Backend The goal of this project is to use the Mac Finder to search SMB shares from Mac clients. The provided solution gives us an index-based full text search, something I have been waiting on for a long time; recently added extensions in Samba finally made this possible. To begin with, I want to say that I'm neither an Unraid nor a Docker expert, so please forgive me if there are better ways to solve this, but my solution seems to be working
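     For reference, the Samba wiki configuration boils down to an smb.conf along these lines; the share name, path and Elasticsearch address below are assumptions:

        [global]
            spotlight backend = elasticsearch
            elasticsearch:address = 127.0.0.1
            elasticsearch:port = 9200

        [documents]
            path = /mnt/user/documents
            spotlight = yes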
8. I would recommend switching to Postgres 11 instead of MariaDB. For me this brought a big improvement in upload speeds and also in CPU utilisation. I posted how to set it up in the Nextcloud thread.
9. I have been experimenting with different SQL databases in Nextcloud after installing it based on the instructions in the first post. My recommendation would be to go with Postgres 11 instead of MariaDB in general. The main difference is the CPU utilisation of the NAS and the achievable upload speed. With MariaDB my i3 processor was at 100% utilisation when uploading larger amounts of files; in general the upload was pretty slow, and the web performance of Nextcloud was not great either. With MariaDB, on local Ethernet with 1 Gbit/s I got a poor 5-15 MB/s of upload speed. With Postgres 11, I get around
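     For anyone migrating an existing install rather than starting fresh, Nextcloud's occ tool can convert the database in place; a sketch only, where the DB user "ncuser", host "postgres11" and database "nextcloud" are assumptions:

        # run inside the Nextcloud container, from the Nextcloud web root
        # (user, host and database names are assumptions)
        php occ db:convert-type --all-apps pgsql ncuser postgres11 nextcloud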
10. For those that are stuck like me on the ingest-attachment plugin issue: you need to stop the Elasticsearch Docker container and restart it after you have executed the command to install the plugin, so that the plugin gets loaded into Elasticsearch. Here are my steps:
      1.) Get the Elasticsearch Docker container (7.9.1 works) and do a clean install (delete the old Elasticsearch in /mnt/user/appdata/)
      2.) Download the full text search packages in the Nextcloud app store (at least 3 packages)
      3.) Configure your Nextcloud search platform to "Elasticsearch" and the address of the servlet to: http://YOUR_IP:9200
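      The install-plus-restart sequence looks roughly like this; a sketch, assuming the container is named "elasticsearch":

         # install the plugin inside the running container, then restart so it is loaded
         docker exec -it elasticsearch bin/elasticsearch-plugin install ingest-attachment
         docker restart elasticsearch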
11. I got diskover running thanks to this post, so thanks a lot for that: https://forums.unraid.net/topic/75763-support-linuxserverio-diskover/?do=findComment&comment=778389 Now, after trying out diskover, I was wondering whether it can also be used to integrate a full text search on the share, e.g. searching .doc and .pdf files? Maybe by using the plugins?