rob_robot

Members
  • Posts: 23
  • Joined
  • Last visited

rob_robot's Achievements

Noob (1/14)

Reputation: 7

  1. When I apply the macvlan fix as described here https://docs.unraid.net/unraid-os/release-notes/6.12.4/, I get the problem that vhost0 and eth0 both create interfaces with the same IP address as the Unraid server (192.168.14.15). My UniFi network is now complaining that there is an IP conflict, with 2 clients being assigned the same IP. While this is not creating functional problems, it throws warnings in the UniFi log and prevents me from setting local DNS, so I would be interested to know if there is a fix for this.

     route
     Kernel IP routing table
     Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
     default         unifi.internal  0.0.0.0         UG    0      0        0 eth0
     172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
     172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-a4b11a9a27a1
     192.168.14.0    0.0.0.0         255.255.255.0   U     0      0        0 vhost0
     192.168.14.0    0.0.0.0         255.255.255.0   U     1      0        0 eth0

     ifconfig
     vhost0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
             inet 192.168.14.15  netmask 255.255.255.0  broadcast 0.0.0.0
             ether XX:XX:XX:XX:XX  txqueuelen 500  (Ethernet)
             RX packets 42631  bytes 11758180 (11.2 MiB)
             RX errors 0  dropped 0  overruns 0  frame 0
             TX packets 33271  bytes 66624456 (63.5 MiB)
             TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

     docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
             inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
             ether XX:XX:XX:XX:XX  txqueuelen 0  (Ethernet)
             RX packets 27070  bytes 57222523 (54.5 MiB)
             RX errors 0  dropped 0  overruns 0  frame 0
             TX packets 28877  bytes 7967471 (7.5 MiB)
             TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

     eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
             inet 192.168.14.15  netmask 255.255.255.0  broadcast 0.0.0.0
             ether XX:XX:XX:XX:XX  txqueuelen 1000  (Ethernet)
             RX packets 175318  bytes 186774948 (178.1 MiB)
             RX errors 0  dropped 40  overruns 0  frame 0
             TX packets 66106  bytes 25680102 (24.4 MiB)
             TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
  2. Below is a variant of the shell script that creates a lock file, so you can increase the frequency of the cron job (e.g. every 10 min) while ensuring that there won't be overlapping cron jobs running in parallel. The script checks for stale lock files after a threshold of 18 hours.

     #!/bin/bash

     # Define a lock file to track the process
     lock_file="/mnt/user/appdata/photoprism/photoprism_index.lock"

     # Check if the lock file exists
     if [ -f "$lock_file" ]; then
         # Get the timestamp of the lock file
         lock_timestamp=$(date -r "$lock_file" +%s)
         current_timestamp=$(date +%s)
         max_duration=$((1080 * 60))  # Maximum duration in seconds (18 hours)

         # Calculate the time difference between now and the lock file creation
         duration=$((current_timestamp - lock_timestamp))

         # Check if the process is still running (based on duration)
         if [ "$duration" -lt "$max_duration" ]; then
             echo "$(date '+%Y-%m-%d %H:%M:%S') Photoprism index is still running. Skipping."
             exit 0
         else
             echo "$(date '+%Y-%m-%d %H:%M:%S') Stale lock file found. Proceeding."
         fi
     fi

     # Create the lock file to indicate the process has started
     touch "$lock_file"

     # Function to remove the lock file on script exit
     remove_lock_file() {
         rm -f "$lock_file"
     }
     trap remove_lock_file EXIT

     docker exec PhotoPrism photoprism index
     docker exec PhotoPrism photoprism index --cleanup
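     For scheduling, a minimal crontab sketch, assuming the script is saved as /boot/config/custom/photoprism_index.sh (script path and log location are just placeholders; the User Scripts plugin with a custom schedule works the same way):

     # run the wrapper every 10 minutes; the lock file above prevents overlapping runs
     */10 * * * * /boot/config/custom/photoprism_index.sh >> /var/log/photoprism_index.log 2>&1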
  3. For this you would need to use the --cleanup option, as cleanup and index are separate operations, i.e.:

     #!/bin/bash
     docker exec PhotoPrism photoprism index
     docker exec PhotoPrism photoprism index --cleanup
  4. This can be done using Post Arguments in the Docker template:

     && docker exec -u 0 Nextcloud /bin/sh -c 'echo "umask 000" >> /etc/apache2/envvars && echo "memory_limit=2G" >> /usr/local/etc/php/conf.d/php.ini'
  5. For me, it worked to copy the sshd_config from /etc/ssh/sshd_config to /boot/config/ssh/sshd_config, then uncomment and set to "no" the two entries below (PasswordAuthentication and PermitEmptyPasswords), plus add the entry to disable ChallengeResponseAuthentication:

     # To disable tunneled clear text passwords, change to no here!
     PasswordAuthentication no
     PermitEmptyPasswords no
     ChallengeResponseAuthentication no

     Then, after a restart, I was no longer able to log in with a password. Please note that it is important to generate and test the key-based login up front, as otherwise you will lose the capability to log in via SSH.

     For remote client Mac users: you can create a config file in your /Users/<user_name>/.ssh folder like below, so you can use "ssh tower" from your Mac instead of typing the server address. Of course "tower" can be replaced with your own shortcut / server name:

     1.) Go to /Users/<user_name>/.ssh, then

     touch config
     nano config

     Then paste into this file:

     Host tower
         Hostname xxx.yyy.zzz
         user root
         IdentityFile ~/.ssh/ed25519

     Notes: Replace the hostname with your IP address. The IdentityFile should point to the location of your private key. You can add multiple entries to the config file, like:

     Host name1
         Hostname aaa.bbb.ccc.ddd
         user uname1
         IdentityFile ~/.ssh/file1

     Host name2
         Hostname aaa.bbb.ccc.eee
         user uname2
         IdentityFile ~/.ssh/file2
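     If the key itself still needs to be set up, a minimal sketch of generating and copying it (the key file name and server address are placeholders matching the config above, not fixed values):

     # on the Mac client: generate an ed25519 key pair
     ssh-keygen -t ed25519 -f ~/.ssh/ed25519
     # copy the public key to the server, then verify that key-based login works
     ssh-copy-id -i ~/.ssh/ed25519.pub root@xxx.yyy.zzz
     ssh -i ~/.ssh/ed25519 root@xxx.yyy.zzz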
  6. If it is just about backing up photos from the iPhone / iPad to the server, then have a look at setting up a Nextcloud instance on your Unraid server. While this is a bit of work (Nextcloud + Postgres database + Redis dockers, potentially Elasticsearch too if you want search), it gives you other benefits as well, including automated upload of your photos from your phone to the Nextcloud instance via the Nextcloud client app from the App Store. In addition, you will be able to sync data between all your devices and share across multiple users.
  7. Thanks a lot for this great script. Found this page after restoring my USB flash drive that recently died and realising that the last backup happened to be 1.5 years old 🙄
  8. The same happened to me when restoring a backup of the USB thumb drive on a Mac using the Unraid USB creator tool. To fix it, just go into /boot/config/pools on your USB drive and delete the ._cache file, then reboot.
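     From a terminal on the server this is roughly the following (double-check the path before deleting anything):

     # remove the stray macOS metadata file, then reboot
     rm /boot/config/pools/._cache
     # optionally list any other AppleDouble files the Mac may have left on the flash drive
     find /boot -name "._*"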
  9. Hi JorgeB, after upgrading to 6.10.3 it is working again. Thanks a lot! The interface rules drop-down menu inside network settings is back.
  10. For the above issue, for me it worked to go into the docker container and manually delete “nextcloud-init-sync.lock”, then restart the container. I found this here: https://help.nextcloud.com/t/upgrade-locking-nextcloud-init-sync-lock-another-process-is-initializing-ephemeral-vs-persistent-lock/139412
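      In command form this is roughly the following (the container name matches my setup; the lock file path inside the container is an assumption based on the default web root):

      # remove the stale init lock inside the Nextcloud container, then restart it
      docker exec Nextcloud rm -f /var/www/html/nextcloud-init-sync.lock
      docker restart Nextcloud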
  11. I encountered the same problem after upgrading to 6.10.2 (stable). I have 2 network adapters, and the 10 GBit Mellanox card used to be set to eth0. Assigning it to eth0 based on the NIC is no longer possible in 6.10, which seems like a bug or a change related to 6.10. Both network adapters still show up, as eth0 and eth1. Setting them to different IP addresses allows me to connect to the web GUI through both, but the main problem is that I can no longer configure the Mellanox card to be the primary network adapter. The Docker service also defaults to the internal eth0.
  12. I didn't encounter this issue as far as I remember. Could it be some memory size issue? Is this the only error, or are there additional error messages in the log file?
  13. I also had the same heap size problem with Elasticsearch. It can be solved by editing the Docker container, switching to "Advanced view" and editing the EXTRA PARAMETERS line. Here is an example to switch from 512 MB to 4 GB of heap size:

      -e "ES_JAVA_OPTS"="-Xms4g -Xmx4g" --ulimit nofile=262144:262144

      The actual heap size can be checked by opening a console inside the docker and running the following command:

      curl -sS "localhost:9200/_cat/nodes?h=heap*&v"

      heap.max should then show 4 GB instead of 512 MB.
  14. It is a bit of a chicken and egg problem. The file should get created after the first run, but after all this time I don't remember if I manually added the file or if I copied it from inside the docker (i.e. not mapping the config file at all and then copying the file out of the docker via a docker command). One way would be to manually create the file:

      1.) Go to /mnt/user/appdata/fscrawler/config/ and create the folder "job_name" (permissions 999, root / root).

      2.) Inside the new job_name folder, create a file called _settings.yaml and paste in the content from my initial post. Please make sure to change the IP address at the bottom of the file (- url).

      Later on there will also be a 2nd file called _status.json, but I don't think this is needed initially.
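      If you would rather pull the generated file out of a running container instead of writing it by hand, a docker cp along these lines should work (the container name and in-container config path are assumptions; FSCrawler's default config directory is usually /root/.fscrawler):

      # copy the job settings generated inside the container to the appdata share
      docker cp fscrawler:/root/.fscrawler/job_name/_settings.yaml /mnt/user/appdata/fscrawler/config/job_name/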
  15. The app is great, but to get my paperless setup working I would need a feature to specify a unique file name for the output file, i.e. something like SCAN_YEAR_MONTH_DAY_TIME_ID.pdf. The problem I have is that my scanner does not provide unique file name indices with e.g. an increasing index number, but instead restarts counting up from 1 as soon as there are no more files in the folder. This means that once the files have been processed and deleted in the incoming scans folder, the scanner restarts indexing and produces the same file name as before, e.g. SCN_0001.pdf, which then causes the output file to get overwritten. Keeping the input files is not an option either, as the scanner has a limit of index number 2000 (SCN_2000.pdf), which would cap the number of possible scans. Is there a way to make a small modification to the ocrmypdf-auto.py Python script to give a unique file name to the output file?
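      As a stop-gap until such an option exists, one could rename incoming scans to a unique name before they are processed, so the OCR output can never collide. A rough shell sketch, with placeholder paths, and not a change to ocrmypdf-auto.py itself:

      #!/bin/bash
      # rename each incoming scan to SCAN_YEAR_MONTH_DAY_TIME_ID.pdf so output names stay unique
      incoming="/mnt/user/scans/incoming"   # placeholder path to the scanner drop folder
      i=0
      for f in "$incoming"/SCN_*.pdf; do
          [ -e "$f" ] || continue
          i=$((i + 1))
          mv "$f" "$incoming/SCAN_$(date +%Y_%m_%d_%H%M%S)_${i}.pdf"
      done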