T0a


Posts posted by T0a

  1. Quote

    Simultaneous with introducing these two key types, we will no longer offer Basic and Plus keys;


Can you give an ETA on dropping those two tiers? Or will you at least announce the date once you know the timeframe? Under these new circumstances, I plan on buying another Plus license.

  2. 11 hours ago, boernie77 said:

Those were a lot of good tips! Especially the one about the database - I wouldn't have thought of that. I use Redis, but I can't find anything about it in appdata, or anywhere else. Do I have to back it up separately, or is its data included in Paperless when I back that up?


Redis is only used by Paperless for the temporary processing of new documents and does not hold any user data.

By default, Paperless uses a SQLite database. You can also use a dedicated database such as PostgreSQL, which then has to be backed up separately.
     

I assume that you run Paperless with a file-based SQLite database. In that case it lives in the appdata directory and is backed up along with it.
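For reference, with the default SQLite setup the database is a single file inside the Paperless data directory under appdata; the exact path below is an assumption and depends on your template and appdata layout:

# Hedged example: location of the default SQLite database (path assumed)
ls -lh /mnt/user/appdata/paperless-ngx/data/db.sqlite3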
     

I also recommend reading the Paperless documentation regarding how it works and is structured - that should clear up a lot.
     

In the long term, you can also start thinking about a 3-2-1 backup strategy, especially if you have digital documents that you cannot afford to lose.

  3. Quote

I don't know yet exactly how to best accomplish that. Any suggestions? My idea is to back these up externally via rsync...

     

I use mgutt's rsync script from here to back up my shares to an external hard drive. The backup drive is mounted via the Unassigned Devices plugin. I run the rsync script via the User Scripts plugin. For Paperless, I back up the Paperless appdata directory and a dedicated Paperless share that holds the media/consume/etc. folders.

     

I use Paperless with a SQLite database, so I don't have to back up an additional database.

     

I also use the Appdata Backup/Restore plugin to additionally back up my appdata directories as tar files to a dedicated backup share on the array. That backup share is likewise backed up to the external drive with the rsync script mentioned above.

     

Whether you back up individual appdata directories directly via rsync like I do and additionally use the Appdata Backup/Restore plugin redundantly is up to you.
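For illustration, a minimal invocation along those lines; the paths are assumptions and depend on how the backup disk is mounted and how your shares are named:

# Hedged sketch: back up the Paperless appdata directory and the dedicated
# Paperless share to a disk mounted via Unassigned Devices (all paths assumed)
rsync -av --delete /mnt/user/appdata/paperless-ngx/ /mnt/disks/backup/paperless-appdata/
rsync -av --delete /mnt/user/paperless/ /mnt/disks/backup/paperless-share/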

  4. Hi,

     

have you had a look at the paperless-ngx thread? The first post there describes how to migrate. Here is also the direct link to the documentation (section "Migrating to Paperless-ngx").

     

With paperless, Redis generally does not hold any persistent data. Are you using a dedicated database? If not, it is sufficient to create a backup of the paperless appdata directory and of the mount holding your documents.


For the migration, the easiest approach is to use the new paperless-ngx template from the Unraid CA and to point its configuration at the existing paperless directories.

It is important that, in particular, the DATA directory of the old paperless installation is used. During installation, paperless-ngx then runs a migration of your old paperless SQLite database.

     

If you want, once the migration and installation have succeeded and you have done your own tests via the UI, you can stop the container, copy the migrated SQLite database into the new paperless-ngx appdata folder, and point the DATA dir in the paperless-ngx Unraid template there.
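As a rough sketch of that step; the container name and both paths are assumptions and depend on your templates:

# Hedged sketch: move the migrated SQLite database into the new paperless-ngx
# appdata folder after stopping the container (names and paths assumed)
docker stop paperless-ngx
cp /mnt/user/appdata/paperless/data/db.sqlite3 /mnt/user/appdata/paperless-ngx/data/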

     

You should just make sure to back up the directories beforehand, in case something goes wrong.
     

Good luck.

     

5. On startup, paperless is doing some checks, such as checking the various paths for existence, readability, and writability (data dir, trash dir, media dir, consumption dir).
     

The path check tries to write a test file to the mounted directories of your container. See here.
     

In your case, the write check fails for your consumption and data dir. I suggest checking the docker mounts and directories on your server first. My guess would be that your container is not allowed to write to the mounts.
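One quick way to verify that; the container name and the in-container paths are assumptions and depend on your template:

# Hedged check: try writing into the mounted directories from inside the container
docker exec paperless-ngx touch /usr/src/paperless/consume/.write_test
docker exec paperless-ngx touch /usr/src/paperless/data/.write_test
# If these fail, compare the ownership/permissions of the host paths with the
# UID/GID the container runs as (the host paths are assumptions as well)
ls -ldn /mnt/user/appdata/paperless-ngx/data /mnt/user/paperless/consume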

  6. 1 hour ago, renegade03 said:

So, to wrap up, the final build:

     

    Gigabyte c246m-wu4

    Intel Core i3-9100

    Kingston 32GB DIMM ECC (1 Modul)

    Western digital red sn700 NVMe-nas-SSD (2x)
    Seagate BarraCuda Compute 3TB (4x vorhanden)

    Corsair RM550x


The 32GB ECC module from Kingston is really inexpensive. A year ago I paid the same price for a Samsung ECC module with only 16GB. If you do a lot of virtualization, it may make sense to buy a second module soon. In the beginning I made the "mistake" of buying only one cheap ECC RAM module and later bought a second one at a higher price. That said, 32GB of RAM is of course sufficient for many scenarios.
     

I wonder whether the price is related to the introduction of DDR5 memory or connected to some other event (see update).
     

Have fun with the build.

     

    ——

Falling RAM prices (notebookcheck.com)

Forecast of rising RAM prices (pcgameshardware.de)

  7. On 6/11/2022 at 11:54 PM, T0a said:

    I cannot apply the options to these IP addresses as the rule input field has a size limit

     

I have seen some work on this problem on GitHub. Unfortunately, both changes were reverted.

    I really hope Limetech will tackle the problem in one of their next releases.

  8. Quote

    I recently noticed my scanned documents are just sitting in the file queue [...]

     

Did you upgrade your container to v1.10.0 around the time you observed the problems? Can you please try to update your instance to the latest bug fix version v1.10.1 and check whether the problems persist? Make sure to do a backup before upgrading though.

     

I assume this has nothing to do with the Unraid template per se, as it hasn't changed recently.

  9. Hi 👋,

I did some experiments in this area that I would like to share with you. Having the ability to provision VMs automatically within Unraid given user-specific seed configurations would be awesome. Here is what I did:

     

Disclaimer: Do not experiment with your production environment. The commands listed are neither complete nor fully tested. The setup is a proof of concept. Execute at your own risk. Read the 'Open Issues' section first.

     

The setup utilizes the vagrant-libvirt plugin with a local Vagrant installation. The Vagrantfile configures a libvirt-based base box with Unraid-like KVM options (attention: the configuration is incomplete and has side effects; see below).

     

Native plugin installation on Ubuntu 22.04 (there is also a Docker container that already includes all dependencies):

    # See: https://vagrant-libvirt.github.io/vagrant-libvirt/installation.html
    $ sudo apt-get purge vagrant-libvirt
    $ sudo apt-mark hold vagrant-libvirt
    $ sudo apt-get install -y qemu libvirt-daemon-system ebtables libguestfs-tools
    $ sudo apt-get install -y vagrant ruby-fog-libvirt
    $ vagrant plugin install vagrant-libvirt
    
    # Create the VM on the remote Unraid server
    $ vagrant up
    
    # Destroy the VM. Note the image remains in /var/lib/libvirt/images!
    $ vagrant destroy

     

The Vagrantfile below configures only a small portion of what the original Unraid template contains; e.g., the network configuration is missing entirely.

    Spoiler
    # -*- mode: ruby -*-
    # vi: set ft=ruby :
    
    Vagrant.require_version ">= 1.8.0"
    
    ENV['VAGRANT_DEFAULT_PROVIDER'] = 'libvirt'
    VAGRANTFILE_API_VERSION = "2"
    
    Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
    
      config.vm.define "ubuntu-01" do |config|
      # See: https://app.vagrantup.com/boxes/search?provider=libvirt
      config.vm.hostname = "ubuntu-01"
      config.vm.box = "generic/ubuntu2004"
      config.vm.box_check_update = false
    
      config.vm.provider :libvirt do |v|
            v.description = "Ubuntu-01 Vagrant"
            v.memory = 1024
            v.cpus = 2
            v.memorybacking :nosharepages
            v.features = ['acpi', 'apic']
            v.cpu_mode = "host-passthrough"
    
            # <clock/>
            v.clock_offset = "utc"
            v.clock_timer :name => 'rtc', :tickpolicy => 'catchup'
            v.clock_timer :name => 'pit', :tickpolicy => 'delay'
            v.clock_timer :name => 'hpet', :present => 'no'
    
            v.emulator_path = "/usr/local/sbin/qemu"
    
            # <channel/>
            v.channel :type => 'unix', :target_name => 'org.qemu.guest_agent.0', :disabled => true
    
            # <graphics/>
            v.graphics_type = "vnc"
            v.graphics_port = "-1"
            # Available once the pull request is merged
            # See: https://github.com/vagrant-libvirt/vagrant-libvirt/pull/1672
            # v.graphics_websocket = "-1"
            v.graphics_ip = "0.0.0.0"
    
            # <video/>
            v.video_type = "qxl"
            v.video_vram = "16384"
    
            # See: https://vagrant-libvirt.github.io/vagrant-libvirt/configuration.html#connection-options
            v.host = "<unraid-server-ip>"
            v.username = "root"
            v.id_ssh_key_file = "/home/<user>/.ssh/id_rsa_unraid"
        end
      end
    end

     

     

Unraid lists the VM in the VM section after bootstrapping. You can also verify this by executing the command "virsh list --all". Right now, we cannot access the VM via the internal VNC viewer because the libvirt plugin does not support the websocket attribute (yet!). I am working on contributing this feature to the project. For now, you can manually edit the XML template and add the attribute "websocket = '5600'" to the graphics tag.
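Roughly like this; the VM name matches the Vagrantfile above, and the exact graphics line is an assumption based on my setup:

# Hedged sketch: add a websocket port to the <graphics> element of the domain XML
virsh edit ubuntu-01
# ...then change the graphics element to something like:
#   <graphics type='vnc' port='-1' websocket='5600' listen='0.0.0.0'/>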

     

    Open Issues:

    • The VNC viewer does not work, because the vagrant libvirt plugin does not support the websocket attribute (working on a contribution)
• Vagrant stores the image in "/var/lib/libvirt/images/", which is the default libvirt storage location. This configuration option comes from "/etc/libvirt/storage/default.xml". Changing the default storage location to "/mnt/user/domains" would probably involve the Unraid team. Creating another storage pool and referencing it with "storage_pool_name" in the Vagrantfile is possible (see the sketch after this list), but the pool does not survive a reboot.
    • I haven't touched the network configuration yet. At the moment, the proof of concept creates a dedicated network 'vagrant-libvirt' (see "virsh net-list --all")
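For reference, a hedged sketch of how such an extra pool could be defined; the pool name and path are assumptions, and as noted above the definition is gone after a reboot unless it is recreated:

# Define, start, and autostart a directory-backed storage pool on the domains share
virsh pool-define-as vagrant-domains dir --target /mnt/user/domains
virsh pool-start vagrant-domains
virsh pool-autostart vagrant-domains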

     

    Discussion:

     

After these initial tests, I don't know if it is worth the effort. It may be simpler to keep a golden master VM and copy its disks and VM template. There are also not many libvirt Vagrant boxes publicly available. In the end, I am still looking for a better solution to provision new VMs in a cloud-like manner (with seed configuration) within Unraid. I would love to hear your ideas or solutions.

10. I just upgraded from Unraid 6.10.3 to 6.11.2. So far, I haven't discovered any issues. I'll leave my upgrade steps here in case someone finds them useful:

     

    Spoiler

    - Backup user data
    - Backup flash drive
    - Restore /boot/config/go file to default
    - Remove Nerdpack plugin (not supported anymore, replacement available)
- Remove SSD trim plugin (baked into Unraid 6.11)
- Disable Docker service
- Disable VM service
    - Run update assistant in the Tools section
    - Do the upgrade
    - Reboot
    - Re-enable Docker service
    - Re-enable VM service
    - Install NerdTools from community applications
    - Install Python 2 via NerdTools (required for "Libvirt wake on lan" plugin)
    - Enable "Libvirt wake on lan" plugin by switching from "Yes" to "No" and back to "Yes" again (Settings -> VM Manager)

    - Enable SSD trim (Main -> Schedule)
    - Install Powertop via NerdTools
    - Restore /boot/config/go file with custom changes
    - Optional: Install additional packages via NerdTools (vim, wget)
- Check system logs for errors and warnings (see the sketch after this list)
    - Reboot again to test custom changes
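For the log check, something along these lines is enough; the grep pattern is just an assumption of what to look for:

# Hedged sketch: scan the syslog for errors and warnings after the upgrade
grep -iE 'error|warn|fail' /var/log/syslog | less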

     

     

  11. 23 minutes ago, SimonF said:

     

    I have created this PR https://github.com/limetech/webgui/pull/1148 to add spice support.

     

    How is your experience with the spice-html5 client? For an outsider, the client library looks like a tech preview and hasn't really received a lot of updates in the past. The only other relevant project I could find is this flexVDI/spice-web-client. I don't know how it compares. Don't get me wrong, I am just curious and want to hear your feedback.

  12. Hi 👋,

     

my list is as follows:

    • vim
    • tmux
    • screen
    • iperf3
    • powertop
    • python-setuptools
  • Unclear, possibly a dependency
• python2
  • Required for the "Virtual Machine Wake On Lan" plugin
• python 3
  • Unclear, possibly a dependency
• perl
  • Required to install a custom wildcard certificate via "update-ca-certificates" (reference); a sketch follows below
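A rough, hedged sketch of that last point; the certificate path is an assumption and the target directory depends on how the ca-certificates package is laid out on Slackware:

# Copy the custom root/wildcard CA certificate to where update-ca-certificates
# picks it up (directory is an assumption), then rebuild the trust store
cp /boot/config/ssl/my-root-ca.crt /usr/local/share/ca-certificates/
update-ca-certificates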
  13. Hi 👋,

thank you very much for the hint. I ordered one as a replacement for my Fujitsu D3644-B. Since CPUs usually live much longer than mainboards, it is always a good idea to arrange for spares when dealing with old hardware. When it comes to old hardware revisions, you are more likely to find an old CPU on the used market than a matching mainboard.

14. I would love to have the ability to add custom root CA certificates as well. In my case, I have services available via HTTPS with a custom wildcard certificate on another host. When I want to query the services with curl from my Unraid box, I always have to add the "-k" flag to allow insecure requests.

     

    @ddumont

     

I suggest using this Perl version for the latest Unraid release, which is based on Slackware 15.0. Right now, you install Perl from Slackware 14.x. That shouldn't be a problem in general, but it can cause issues in certain situations with certain packages.

     

I wonder if there is a problem executing your /boot/config/fix-ca-certificates script directly from the USB stick. Doesn't Unraid prevent you from executing scripts from the USB stick directly, or am I wrong here?
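For illustration, the per-request alternative to "-k" that a system-wide root CA would make unnecessary; the certificate path and hostname below are assumptions:

# Hedged example: validate the custom wildcard certificate per request instead
# of skipping verification with -k (certificate path and hostname are assumptions)
curl --cacert /boot/config/ssl/my-root-ca.crt https://service.example.lan/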

     

  15. On 6/11/2022 at 6:25 PM, dlandon said:

    Are you talking about on Unraid shares?  I don't think there is a global setting that applies to all on Unraid.

     

    Can you not set the rule to apply to all ip addresses by setting the "*" instead of individual ip addresses?

     

This will allow everyone in my network to access the NFS shares, right? I would rather avoid that. As of now, only certain IP addresses have access to the shares. However, I cannot apply the options to these IP addresses as the rule input field has a size limit.

  16. On 6/11/2022 at 3:38 PM, dlandon said:

     

    Check your NFS rules on the client.  They affect permissions.  This is what UD uses when mounting remote NFS shares:

    *(sec=sys,rw,insecure,anongid=100,anonuid=99,no_root_squash)

     

     

Indeed, that solved the problem. Thank you so much! For reference, I clicked on the Shares tab and selected the backup share. Then, under "NFS Security Settings", I modified the existing rule to "<ip>(sec=sys,rw,insecure,anongid=100,anonuid=25699,no_root_squash)", where <ip> is the address of the Linux client.

     

    Update: Is there a way to set the options globally for NFS instead of per rule and IP for all shares? The rule field seems to have a length restriction. Thus, I cannot technically add the same options to all IPs.

  17. 36 minutes ago, dlandon said:

    I assume these are commands on Unraid.  If so use the UD Plugin to mount the remote NFS share as you are not mounting the NFSv4 with any options.  It will make management of the remote share a lot easier.  UD will also manage a default set of rules that should work in most cases.

     

    Also, post your diagnostics zip file for further help.

     

    No, I execute these commands on the Linux client. It mounts a backup share from the Unraid server and then rsyncs the data to the share.

     

    Update:

     

    Seems like the ownership issue only occurs when using the root user:

     

    toa@client:~$ sudo umount /mnt/backup
    toa@client:~$ sudo mount -t nfs4 192.168.178.21:/mnt/user/backup/clients/ /mnt/backup
    toa@client:~$ touch /mnt/backup/
    toa@client:~$ touch /mnt/backup/file
    toa@client:~$ sudo umount /mnt/backup
    toa@client:~$ sudo su
    root@client:# sudo mount -t nfs4 192.168.178.21:/mnt/user/backup/clients/ /mnt/backup
    root@client:# touch /mnt/backup/file2
    root@client:# ls -ahl /mnt/backup/
    total 3.8G
    drwxrwxrwx 1 root   root     115 Jun 11 15:34 .
    drwxr-xr-x 3 root   root    4.0K Nov 17  2019 ..
    -rw-r--r-- 1 toa    users      0 Jun 11 15:29 file
    -rw-r--r-- 1 nobody nogroup    0 Jun 11 15:34 file2

     

  18. Hi 👋,

     

I recently upgraded to Unraid 6.10.2 and am trying to back up another Linux host to my Unraid server via NFSv4 and rsync. The same procedure worked with NFSv3 and rsync in the past. However, after upgrading and switching to NFSv4, the files no longer preserve their ownership and I receive errors from rsync in my client logs, as follows.

     

An example error from rsync:

    11/06/2022 10:30:28 rsync: chown "/mnt/backup/opt/gitea" failed: Operation not permitted (1)

     

    User permission comparison:

    root@client:/mnt/backup/opt/gitea# ls -ahl /mnt/backup/opt/gitea/
    total 8.0K
    drwx------ 1 nobody nogroup   58 Jan  2 13:30 .
    drwx------ 1 nobody nogroup 4.0K Jun  4 15:06 ..
    drwx------ 1 nobody nogroup  158 Jun 11 00:00 backup
    -rw------- 1 nobody nogroup  491 Jun 11 10:31 docker-compose.yml
    root@client:/mnt/backup/opt/gitea# ls -al /opt/gitea/
    total 20
    drwxr-xr-x  4 toa  toa  4096 Jan  2 13:30 .
    drwxr-xr-x 30 root root 4096 Jun  4 15:06 ..
    drwxr-xr-x  2 1000 1000 4096 Jun 11 00:00 backup
    -rw-r--r--  1 toa  toa   491 Nov 26  2021 docker-compose.yml
    drwxr-xr-x  5 root root 4096 Dec 28  2020 gitea

User ID and group of the user on client and server:

    root@client:/# id toa
    uid=1000(toa) gid=100(users) groups=100(users),20(dialout),995(docker)
    root@unraid:/# id toa
    uid=1000(toa) gid=100(users) groups=100(users)

     

The commands that I execute as the root user on the Linux host to mount the NFS share from the Unraid server and run the backup:

    mount -t nfs4 192.168.178.21:/mnt/user/backup/clients/client /mnt/backup
    ...
    rsync -av --delete --delete-excluded $OPT_EXCLUDES /opt /mnt/backup

     

Previously, the file 'docker-compose.yml', for example, showed 'toa' as owner and group in the remote share on the client.

     

My research led me to this article. On the client side, I then set "NEED_IDMAPD=yes" and "NEED_GSSD=no" in the file '/etc/default/nfs-common'. I didn't set the 'Domain' option in the '/etc/idmapd.conf' file, as I couldn't find that setting in Unraid. Afterwards, I restarted the client and tried again, with the same errors.

     

    Would love to get some help on this problem. Feel free to request further information for troubleshooting. Thank you in advance!

     

     

  19. I added the progress flag to my borgmatic command (borgmatic create --verbosity 1 --progress --stats) today and noticed that the logs available via the WebUI are incomplete when compared to docker logs from the command line.

     

The left side shows the logs via the built-in web UI log viewer. The right side shows the logs via `docker logs -f <id>`. I am not sure whether this is an issue with how Unraid fetches the docker logs for the container.

     

    Anybody experienced something similar in the past or can explain that behavior?

     

[Screenshot: Unraid web UI log viewer (left) vs. docker logs output (right)]

  20. 5 hours ago, dlandon said:

    Fixed in the next release.

     

Thanks @dlandon. I cannot resume my preclear session from yesterday after a server shutdown. Any ideas? Is this expected behavior? Are there any workarounds? I don't want to start the preclear from the beginning.

     

Update: My mistake. You need to click "start preclear" and then the plugin asks whether to resume. I initially thought "start preclear" would trigger a new preclear run. I expected the UI to show my paused session in the devices table.

     

    Log:

    May 3 16:22:27 Preclear resumed on all devices.

     


21. I just tested this plugin with a new disk attached via USB to my Unraid server (6.9.2). I started the preclear process and paused it when the status reached 65%. Unfortunately, no "*.resume" file was created under "/boot":

     

    May 02 17:49:33 preclear_disk_WD-WX92DA1DAR18_12570: Preclear Disk Version: 1.0.25
    May 02 17:49:33 preclear_disk_WD-WX92DA1DAR18_12570: Disk size: 4000787030016
    May 02 17:49:33 preclear_disk_WD-WX92DA1DAR18_12570: Disk blocks: 976754646
    May 02 17:49:33 preclear_disk_WD-WX92DA1DAR18_12570: Blocks (512 bytes): 7814037168
    May 02 17:49:33 preclear_disk_WD-WX92DA1DAR18_12570: Block size: 4096
    May 02 17:49:33 preclear_disk_WD-WX92DA1DAR18_12570: Start sector: 0
    May 02 17:49:36 preclear_disk_WD-WX92DA1DAR18_12570: Zeroing: zeroing the disk started (1/5) ...
    May 02 17:49:36 preclear_disk_WD-WX92DA1DAR18_12570: Zeroing: emptying the MBR.
    May 02 19:13:16 preclear_disk_WD-WX92DA1DAR18_12570: Zeroing: progress - 25% zeroed @ 192 MB/s
    May 02 20:45:26 preclear_disk_WD-WX92DA1DAR18_12570: Zeroing: progress - 50% zeroed @ 167 MB/s
    May 02 21:49:18 preclear_disk_WD-WX92DA1DAR18_12570: Pause requested
    May 02 21:49:18 preclear_disk_WD-WX92DA1DAR18_12570: cp: cannot create regular file '/boot/preclear_reports/WD-WX92DA1DAR18.resume': No such file or directory
    May 02 21:49:18 preclear_disk_WD-WX92DA1DAR18_12570: Paused

     

After manually creating the folder "/boot/preclear_reports", starting the preclear process again, and pausing it, the file "WD-WX92DA1DAR18.resume" gets written to disk as expected:

     

    root@server:/boot# ls /boot/preclear_reports/
    WD-WX92DA1DAR18.resume
    root@server:/boot# cat /boot/preclear_reports/WD-WX92DA1DAR18.resume 
    # parsed arguments
    verify_disk_mbr=''
    erase_preclear='n'
    short_test=''
    read_stress='y'
    erase_disk='n'
    notify_freq='1'
    format_html=''
    verify_zeroed=''
    write_disk_mbr=''
    write_size=''
    skip_preread='y'
    read_size=''
    notify_channel='4'
    no_prompt='y'
    cycles='1'
    skip_postread=''
    read_blocks=''
    
    # current operation
    current_op='zero'
    current_pos='2621668589568'
    current_timer='14431'
    current_cycle='1'
    
    # previous operations
    preread_average=''
    preread_speed=''
    write_average=''
    write_speed=''
    postread_average=''
    postread_speed=''
    
    # current elapsed time
    main_elapsed_time='14442'
    cycle_elapsed_time='14441'

     

  22. On 4/1/2022 at 2:37 PM, ullibelgie said:

    Many thanks for information, @JeyP91

    - If I only follow your explaination from the beginning of your post point 1.) to 4.)  - is it possible to update the paperless-ngx, whenever there is a maintainance/functional update available - let's say from todays version 1.6.0 to 1.6.1 sometimes in the future

     

Maybe Watchtower will serve your needs. Personally, I don't like automatic updates for my containers, as I feel the need to check the application after an update. Keep in mind that even minor updates might break the setup.

     

    On 4/1/2022 at 2:37 PM, ullibelgie said:

    - will I be alerted that my current version of paperless-ngx is outdated (when I manually check for update for all dockers), so that I know I need to update paperless-ngx with the newer version

     

    Maybe Diun will serve your needs.

     

    On 4/1/2022 at 2:37 PM, ullibelgie said:

    Another concern:
    ...

    With my current large archive (tens of thousands of docs), upgrading to ngx still seems to be a risk for me, if the original docker-app will be removed soon... (as announced) - so I could not fall back to the original paperless-ng

     

I have not said that the paperless-ng container will be removed any time soon. Please re-read the introduction post:

     

    On 3/13/2022 at 9:04 AM, T0a said:

    For now, the paperless-ng and paperless-ngx Unraid templates will coexist in the community application store. That allows existing users to still rely on the mature paperless-ng for their productive environment and make the change to paperless-ngx once they feel comfortable.