T0a

Everything posted by T0a

  1. Can you give an ETA on dropping those two tiers? Or will you at least announce the date once you know the timeframe? Under these new circumstances, I plan on buying another Plus license.
  2. Redis is only used by Paperless for the temporary processing of new documents and holds no user data. By default, Paperless uses a SQLite database. You can also use a dedicated database such as PostgreSQL, which then has to be backed up separately. I assume you are running Paperless with a file-based SQLite database; in that case it lives in the appdata directory and is backed up along with it. I also recommend reading the Paperless documentation on how it is operated and structured - a lot should become clearer then. In the long run, you might also want to think about a 3-2-1 backup strategy, especially if you have digital documents you cannot afford to lose.
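     A minimal sketch of such a backup, assuming the default file-based SQLite setup and typical Unraid paths (container name and paths are assumptions; stopping the container first keeps the SQLite file consistent):

         $ docker stop paperless-ngx
         $ rsync -av /mnt/user/appdata/paperless-ngx/ /mnt/user/backup/paperless-appdata/
         $ docker start paperless-ngx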
  3. I use mgutt's rsync script from here to back up my shares to an external hard drive. The backup drive is mounted via the Unassigned Devices plugin, and I run the rsync script via the User Scripts plugin. For Paperless, I back up the Paperless appdata directory and a dedicated Paperless share that holds the media/consume/etc. folders. I run Paperless with a SQLite database, so there is no additional database to back up. In addition, I use the appdata Backup/Restore plugin to store my appdata directories as tar files in a dedicated backup share on the array; that backup share is then also synced to the external drive by the rsync script mentioned above. Whether you back up individual appdata directories directly via rsync like I do, and redundantly use the appdata Backup/Restore plugin on top, is up to you.
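     For illustration, a hedged sketch of such an rsync run against an Unassigned Devices mount (mount point and share names are assumptions; mgutt's actual script adds incremental handling on top of a plain copy like this):

         $ rsync -av --delete /mnt/user/paperless/ /mnt/disks/backup_hdd/paperless/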
  4. Hi, have you had a look at the paperless-ngx thread? The first post describes how to migrate. Here is the direct link into the documentation again (section "Migrating to Paperless-ngx"). Redis usually holds no persistent data for paperless. Are you using a dedicated database? If not, it is sufficient to back up the paperless appdata directory and the mount of your documents. The easiest migration path is then to use the new paperless-ngx template from the Unraid CA and point its configuration at the existing paperless directories. It is important that, in particular, the DATA directory of the old paperless installation is used. During installation, paperless-ngx then migrates your old paperless SQLite database. If you like, after the migration and a successful installation including your own tests via the UI, you can shut down the container, copy the migrated SQLite database into the new paperless-ngx appdata folder, and repoint the DATA dir in the paperless-ngx Unraid template. Just make sure you back up the directories beforehand in case something goes wrong (see the sketch below). Good luck.
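     A hedged sketch of that safety net before migrating (paths and container name are assumptions):

         # Back up the old paperless directories first
         $ cp -a /mnt/user/appdata/paperless /mnt/user/appdata/paperless.bak
         # After starting the new paperless-ngx container, follow the database migration in the logs
         $ docker logs -f paperless-ngx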
  5. On startup, paperless is doing some checks, such as checking the various paths (data dir, trash dir, media dir, consumption dir) for existence, readability, and writability. The path check tries to write a test file to the mounted directories of your container. See here. In your case, the write check fails for your consumption and data dir. I suggest checking the Docker mounts and directories on your server first. My guess would be that your container is not allowed to write to the mounts.
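     One way to narrow this down, as a sketch (container name is an assumption; /usr/src/paperless/... are the default in-container paths as far as I know):

         # Check whether the container user may write to the mounted dirs
         $ docker exec paperless-ngx touch /usr/src/paperless/consume/.write-test
         $ docker exec paperless-ngx touch /usr/src/paperless/data/.write-test
         # If this fails, inspect ownership/permissions on the host side
         $ ls -ld /mnt/user/appdata/paperless-ngx/data /mnt/user/paperless/consume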
  6. The 32GB ECC module from Kingston really is a bargain. A year ago, I paid the same price for a Samsung ECC module with only 16GB. If you do a lot of virtualization, it may be worth buying a second module soon. In the beginning, I made the "mistake" of buying only a single cheap ECC RAM module and later bought a second one at a higher price. That said, 32GB of RAM is of course sufficient for many scenarios. I wonder whether the price is related to the introduction of DDR5 memory or connected to some other event (see update). Have fun with the build.
      --
      Price drop for RAM (notebookcheck.com)
      Forecast of rising RAM prices (pcgameshardware.de)
  7. I have observed some work regarding this problem on GitHub. Unfortunately, both changes got reverted: "Allow multiple rows in NFS rule" (4e25bc8cb158b31e7c5ed36f133713bef2d4e35c, reverted) and "NFS security rule: change input to textarea, which allows more input" (d6b67b44aa6909ed72b75d4238055eddc89ddf99, reverted). I really hope Limetech will tackle the problem in one of their next releases.
  8. Did you upgrade your container to v1.10.0 around the time you observed the problems? Can you please try to update your instance to the latest bug-fix version v1.10.1 and check whether the problems persist? Make sure to do a backup before upgrading, though. I assume this has nothing to do with the Unraid template per se, as it hasn't changed recently.
  9. Hi 👋, I did some experiments in this area that I would like to share with you. Having the ability to provision VMs automatically within Unraid from user-specific seed configurations would be awesome. Here is what I did.
      Disclaimer: Do not experiment with your production environment. The commands listed are neither complete nor fully tested. The setup is a proof of concept; execute at your own risk, and read the 'Open Issues' section first.
      The setup utilizes the vagrant-libvirt plugin with a local Vagrant installation. The Vagrantfile configures a libvirt-based base box with Unraid-like KVM options (attention: the configuration is incomplete and has side effects, see below). Native plugin installation on Ubuntu 22.04 (there is also a Docker container that already includes all dependencies):

         # See: https://vagrant-libvirt.github.io/vagrant-libvirt/installation.html
         $ sudo apt-get purge vagrant-libvirt
         $ sudo apt-mark hold vagrant-libvirt
         $ sudo apt-get install -y qemu libvirt-daemon-system ebtables libguestfs-tools
         $ sudo apt-get install -y vagrant ruby-fog-libvirt
         $ vagrant plugin install vagrant-libvirt

         # Create the VM on the remote Unraid server
         $ vagrant up
         # Destroy the VM. Note the image remains in /var/lib/libvirt/images!
         $ vagrant destroy

      The Vagrantfile below configures only a small portion of what the original Unraid template contains, e.g. the network configuration is missing entirely. Unraid lists the VM in the VM section after bootstrapping; you can also verify this by executing "virsh list --all". Right now, we cannot access the VM via the internal VNC viewer because the libvirt plugin does not support the websocket attribute (yet!). I am working on contributing this feature to the project. For now, you can manually edit the XML template and add the attribute websocket='5600' to the graphics tag.
      Open Issues:
      - The VNC viewer does not work because the vagrant-libvirt plugin does not support the websocket attribute (working on a contribution).
      - Vagrant stores the image in "/var/lib/libvirt/images/", which is the default libvirt storage location. This configuration option comes from "/etc/libvirt/storage/default.xml". Changing the default storage location to "/mnt/user/domains" probably involves the Unraid team. Creating another storage pool and referencing it with "storage_pool_name" in the Vagrantfile is possible, but it does not survive a reboot.
      - I haven't touched the network configuration yet. At the moment, the proof of concept creates a dedicated network 'vagrant-libvirt' (see "virsh net-list --all").
      Discussion: After these initial tests, I don't know if it is worth the effort. It may be simpler to keep a golden-master VM and copy its disks and VM template. There are also not a lot of libvirt Vagrant boxes publicly available. In the end, I am still looking for a better solution to provision new VMs in a cloud-like manner (with seed configuration) within Unraid. I would love to hear your ideas or solutions.
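     As a stop-gap for the VNC issue above, a sketch of the manual edit (the domain name is illustrative; vagrant-libvirt derives it from the project directory):

         # Find the domain Vagrant created and edit its XML
         $ virsh list --all
         $ virsh edit vagrant_default
         # In <devices>, extend the graphics element with the websocket attribute, e.g.:
         #   <graphics type='vnc' port='-1' autoport='yes' websocket='5600' listen='0.0.0.0'>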
  10. Thank you for your feedback. After disabling the vm custom icon container, the icon appears:

         $ ls -l /usr/local/emhttp/plugins/dynamix.vm.manager/templates/images/
         total 120
         -rw-r--r-- 1 root root 1007 Aug 31  2018 arch.png
         -rw-r--r-- 1 root root 3185 Aug 31  2018 centos.png
         -rw-r--r-- 1 root root 3604 Aug 31  2018 chromeos.png
         -rw-r--r-- 1 root root 3945 Aug 31  2018 coreos.png
         -rw-r--r-- 1 root root 2487 Aug 31  2018 debian.png
         -rw-r--r-- 1 root root 1138 Aug 31  2018 default.png
         -rw-r--r-- 1 root root 3193 Aug 31  2018 fedora.png
         -rw-r--r-- 1 root root 5405 Aug 31  2018 freebsd.png
         -rw-r--r-- 1 root root 1746 Aug 31  2018 libreelec.png
         -rw-r--r-- 1 root root 3404 Aug 31  2018 linux.png
         -rw-r--r-- 1 root root 3683 Aug 31  2018 openelec.png
         -rw-r--r-- 1 root root 4693 Aug 31  2018 opensuse.png
         -rw-r--r-- 1 root root 3081 Aug 31  2018 redhat.png
         -rw-r--r-- 1 root root 4495 Aug 31  2018 scientific.png
         -rw-r--r-- 1 root root 4430 Aug 31  2018 slackware.png
         -rw-r--r-- 1 root root 4407 Aug 31  2018 steamos.png
         -rw-r--r-- 1 root root 2080 Aug 31  2018 ubuntu.png
         -rw-r--r-- 1 root root 1097 Sep  6  2018 unraid.png
         -rw-r--r-- 1 root root 3677 Aug 31  2018 vyos.png
         -rw-r--r-- 1 root root 1634 Aug 31  2018 windows.png
         -rwxr-xr-x 1 root root  727 Sep 28  2021 windows11.png*
         -rw-r--r-- 1 root root 4022 Aug 31  2018 windows7.png
         -rw-r--r-- 1 root root 4342 Aug 31  2018 windowsvista.png
         -rw-r--r-- 1 root root 3221 Aug 31  2018 windowsxp.png

      It looks like I ran into this issue here. Note that the windows11.png file permissions differ from the other files. Maybe the executable bit was wrongly added while creating the new release bundle. Anyway, the original problem is resolved. Thank you.
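     In case someone wants to align the permissions manually until a fixed bundle ships, this one-liner should do (note that /usr/local/emhttp lives in RAM on Unraid, so the change does not survive a reboot):

         $ chmod 644 /usr/local/emhttp/plugins/dynamix.vm.manager/templates/images/windows11.png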
  11. After upgrading from Unraid 6.10.3 to 6.11.2 (see upgrade steps) I noticed the following issue: Further investigation shows the image is indeed missing. Disclaimer: I use the vm_custom_icons container, but have the option "Keep Stock VM Icons" set to "Yes".
  12. I just upgraded from Unraid 6.10.3 to 6.11.2. Up until now, I haven't discovered any issues. I'll leave my upgrade steps here in case someone finds them useful:
  13. How is your experience with the spice-html5 client? To an outsider, the client library looks like a tech preview and hasn't really received many updates in the past. The only other relevant project I could find is flexVDI/spice-web-client; I don't know how it compares. Don't get me wrong, I am just curious and want to hear your feedback.
  14. Hi 👋, my list is as follows:
      vim
      tmux
      screen
      iperf3
      powertop
      python-setuptools (unclear, possibly a dependency)
      python2 (required for the "Virtual Machine Wake On Lan" plugin)
      python3 (unclear, possibly a dependency)
      perl (required to install a custom wildcard certificate via "update-ca-certificates" (reference))
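     For reference, a hedged sketch of how such a package can be installed manually on Unraid's Slackware base (the package file name is illustrative; packages placed in /boot/extra are reinstalled at boot):

         $ installpkg /boot/extra/perl-5.34.0-x86_64-1.txz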
  15. It looks like your request has been addressed in the prerelease 6.11.0-rc2.
  16. Hi 👋, thanks a lot for the pointer. I ordered one as a spare for my Fujitsu D3644-B. Since CPUs generally live much longer than mainboards, it is always a good idea to stock up on spares for old hardware. When it comes to old hardware revisions, you are more likely to find an old CPU on the used market than a matching mainboard.
  17. I would love having the ability to add custom root CA certificates as well. In my case, I have services available via HTTPS with a custom wildcard certificate on another host. When I want to query the services with curl from my Unraid box, I always have to add the "-k" flag to allow insecure requests. @ddumont I suggest using this Perl version for the latest Unraid version, which is based on Slackware 15.0. Right now, you install Perl from Slackware 14.x. That shouldn't be a problem in general, but it can cause problems in certain situations with certain packages. I wonder if there is a problem executing your /boot/config/fix-ca-certificates script from within the USB stick. Doesn't Unraid prevent you from executing scripts from the USB stick directly, or am I wrong here?
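     As a per-request workaround until custom root CAs can be added system-wide, curl can be pointed at the CA directly instead of using "-k" (the certificate path and URL are assumptions):

         $ curl --cacert /boot/config/ssl/my-root-ca.crt https://service.internal.example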
  18. This will allow everyone in my network to access the NFS shares, right? I would rather avoid that. As of now, only certain IP addresses have access to the shares. However, I cannot apply the options to all of these IP addresses, as the rule input field has a size limit.
  19. Indeed, that solved the problem. Thank you so much! For reference, I clicked on the Shares tab and selected the backup share. Then, under "NFS Security Settings", I modified the existing rule to "<ip>(sec=sys,rw,insecure,anongid=100,anonuid=25699,no_root_squash)", where <ip> is the address of the Linux client. Update: Is there a way to set the options globally for NFS across all shares, instead of per rule and IP? The rule field seems to have a length restriction, so I technically cannot add the same options for every IP.
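     For reference, a sketch of what such a per-IP rule roughly expands to in the exports file on the server (path and IP are illustrative; the exact options Unraid adds may differ):

         $ cat /etc/exports
         "/mnt/user/backup" 192.168.178.30(sec=sys,rw,insecure,anongid=100,anonuid=25699,no_root_squash)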
  20. No, I execute these commands on the Linux client. It mounts a backup share from the Unraid server and then rsyncs the data to the share. Update: It seems the ownership issue only occurs when using the root user:

         toa@client:~$ sudo umount /mnt/backup
         toa@client:~$ sudo mount -t nfs4 192.168.178.21:/mnt/user/backup/clients/ /mnt/backup
         toa@client:~$ touch /mnt/backup/
         toa@client:~$ touch /mnt/backup/file
         toa@client:~$ sudo umount /mnt/backup
         toa@client:~$ sudo su
         root@client:# sudo mount -t nfs4 192.168.178.21:/mnt/user/backup/clients/ /mnt/backup
         root@client:# touch /mnt/backup/file2
         root@client:# ls -ahl /mnt/backup/
         total 3.8G
         drwxrwxrwx 1 root   root     115 Jun 11 15:34 .
         drwxr-xr-x 3 root   root    4.0K Nov 17  2019 ..
         -rw-r--r-- 1 toa    users      0 Jun 11 15:29 file
         -rw-r--r-- 1 nobody nogroup    0 Jun 11 15:34 file2
  21. Hi 👋, I recently upgraded to Unraid 6.10.2 and am trying to back up another Linux host via NFS4 and rsync to my Unraid server. The same procedure worked with NFS3 and rsync in the past. However, after upgrading and switching to NFS4, files no longer preserve their ownership, and I receive errors from rsync in my client logs. An exemplary rsync error:

         11/06/2022 10:30:28 rsync: chown "/mnt/backup/opt/gitea" failed: Operation not permitted (1)

      User permission comparison:

         root@client:/mnt/backup/opt/gitea# ls -ahl /mnt/backup/opt/gitea/
         total 8.0K
         drwx------ 1 nobody nogroup   58 Jan  2 13:30 .
         drwx------ 1 nobody nogroup 4.0K Jun  4 15:06 ..
         drwx------ 1 nobody nogroup  158 Jun 11 00:00 backup
         -rw------- 1 nobody nogroup  491 Jun 11 10:31 docker-compose.yml

         root@client:/mnt/backup/opt/gitea# ls -al /opt/gitea/
         total 20
         drwxr-xr-x  4 toa  toa  4096 Jan  2 13:30 .
         drwxr-xr-x 30 root root 4096 Jun  4 15:06 ..
         drwxr-xr-x  2 1000 1000 4096 Jun 11 00:00 backup
         -rw-r--r--  1 toa  toa   491 Nov 26  2021 docker-compose.yml
         drwxr-xr-x  5 root root 4096 Dec 28  2020 gitea

      User id and group for the user:

         root@client:/# id toa
         uid=1000(toa) gid=100(users) groups=100(users),20(dialout),995(docker)
         root@unraid:/# id toa
         uid=1000(toa) gid=100(users) groups=100(users)

      The commands I execute as the root user on the Linux host to mount the NFS share from the Unraid server and rsync to it:

         mount -t nfs4 192.168.178.21:/mnt/user/backup/clients/client /mnt/backup
         ...
         rsync -av --delete --delete-excluded $OPT_EXCLUDES /opt /mnt/backup

      Before, the file 'docker-compose.yml', for example, showed the owner and group 'toa' in the remote share on the client. My research led me to this article. On the client side, I then set "NEED_IDMAPD=yes" and "NEED_GSSD=no" in the file '/etc/default/nfs-common'. I did not set the 'Domain' option in the '/etc/idmapd.conf' file, as I couldn't find that setting on Unraid. Afterwards, I restarted the client and tried again, with the same errors. I would love to get some help on this problem. Feel free to request further information for troubleshooting. Thank you in advance!
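     A hedged sketch of checking the NFSv4 id-mapping pieces on the client (the domain name and the service name are assumptions and vary by distro; client and server must agree on the domain for names to map instead of falling back to nobody/nogroup):

         # /etc/idmapd.conf on the client:
         #   [General]
         #   Domain = example.lan
         $ sudo systemctl restart nfs-idmapd
         $ sudo mount -t nfs4 192.168.178.21:/mnt/user/backup/clients/client /mnt/backup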
  22. FYI: There is a small typo in the "nuke uptime column" settings help text: "This will remmove (nuke) the uptime column"
  23. Hi, upgraded to Unraid 6.10.2 today. I experienced a strange UI issue where the loading animation didn't stop when renaming the description of the server (see screenshot attached). I use Firefox version 101.0 (64-bit). What you will find in the attached diagnostics - it may help to reproduce the issue:
      1. Took the array offline
      2. Disabled SMB1
      3. Renamed the server description to NAS
      3.1 The loading animation didn't stop; I then clicked on another tab and the loading stopped
      4. Started the array
      4.1 A strange Firefox popup appeared. I cannot remember the text anymore, but I clicked resend in the modal dialog
      5. A stale-configuration message suddenly appeared in the footer after clicking the button in the Firefox modal dialog
      6. It seems I could not start the array anymore
      6.1 Removed the disk assignment from the array and reassigned it
      7. The stale message went away
      7.1 Still could not start the array from the UI (no button)
      8. Reboot
      9. Array successfully started

      From the system logs, only the following messages look suspicious to me:

         Jun  5 12:53:17 Zeus nginx: 2022/06/05 12:53:17 [error] 9589#9589: *6130 open() "/usr/local/emhttp/images/directory.png" failed (2: No such file or directory) while sending to client, client: 192.168.178.25, server: , request: "GET /images/directory.png HTTP/1.1", host: "192.168.178.21"
         Jun  5 12:53:23 Zeus nginx: 2022/06/05 12:53:23 [error] 9589#9589: *6156 open() "/usr/local/emhttp/images/directory.png" failed (2: No such file or directory) while sending to client, client: 192.168.178.25, server: , request: "GET /images/directory.png HTTP/1.1", host: "192.168.178.21"
         [...] # More occurred in the meantime, but they are not part of the diagnostics
         Jun  5 15:57:24 Zeus nginx: 2022/06/05 15:57:24 [error] 9589#9589: *103560 open() "/usr/local/emhttp/images/ui-icons_222222_256x240.png" failed (2: No such file or directory) while sending to client, client: 192.168.178.25, server: , request: "GET /images/ui-icons_222222_256x240.png HTTP/1.1", host: "192.168.178.21"
         Jun  5 16:22:23 Zeus nginx: 2022/06/05 16:22:23 [error] 9589#9589: *122857 open() "/usr/local/emhttp/images/ui-icons_222222_256x240.png" failed (2: No such file or directory) while sending to client, client: 192.168.178.25, server: , request: "GET /images/ui-icons_222222_256x240.png HTTP/1.1", host: "192.168.178.21"
         Jun  5 16:30:19 Zeus nginx: 2022/06/05 16:30:19 [error] 9589#9589: *126641 open() "/usr/local/emhttp/images/file-types.png" failed (2: No such file or directory) while sending to client, client: 192.168.178.25, server: , request: "GET /images/file-types.png HTTP/1.1", host: "192.168.178.21"
         Jun  5 16:30:37 Zeus nginx: 2022/06/05 16:30:37 [error] 9589#9589: *126641 open() "/usr/local/emhttp/images/file-types.png" failed (2: No such file or directory) while sending to client, client: 192.168.178.25, server: , request: "GET /images/file-types.png HTTP/1.1", host: "192.168.178.21"
         Jun  5 16:30:42 Zeus nginx: 2022/06/05 16:30:42 [error] 9589#9589: *126641 open() "/usr/local/emhttp/images/file-types.png" failed (2: No such file or directory) while sending to client, client: 192.168.178.25, server: , request: "GET /images/file-types.png HTTP/1.1", host: "192.168.178.21"

      They might be totally unrelated, though, or caused by a plugin I use; I don't know. zeus-diagnostics-20220605-1239.zip
  24. Why don't you use the official Zerotier image for the template? Does your image contain custom changes for Unraid?