Posts posted by T0a

  1. Quote

In any case, in my view the cause is to be found in the health checks of the various Docker containers. So far, Plex and PiHole have stood out here:

Thanks for your reply @mgutt. I had also followed the discussion about the containers that stood out negatively with regard to write operations. As far as I know, Limetech has made a few adjustments to the way the docker.img is stored.

     

Unfortunately, the many write operations are a problem with Btrfs even without Docker. I am currently trying to find the relevant thread here in the forum. From memory, the writes of a Btrfs RAID 1 were higher by a factor of 4 compared to a single XFS-formatted disk. This was attributed to the large amount of additional metadata being written. (Update: thread)

  2. Quote

The community's desire for ZFS is clearly there; you can see it in the User Requests 2020 and the numerous ZFS threads asking when it will finally arrive...

I am an advocate of ZFS in Unraid and would likewise prefer to do without the array. Given the discussion above, I feel the need to lay out my reasons here. I would also like to briefly touch on why I still want to run Unraid despite this goal.

     

Current state

I currently run an Unraid array with two WD Red hard drives, one of which serves as parity. In addition, I have the classic setup: an SSD as cache, a few Docker containers, and a Linux VM for daily work. I would describe myself as a typical Unraid user.

     

My Unraid server sits on my desk in the bedroom so that I can work at my monitor via the passed-through iGPU. Remote desktop solutions have not made me happy here, due to the additional device required and the visual artifacts. Unfortunately, the two hard drives are audible both during operation and in the spun-down state, despite the sound dampening material in the case (Fractal R6, dampened). These noises bother me especially at night.

     

Why ZFS?

For the reasons listed above, I bought two 1 TB Samsung SSDs to run in a RAID 1 configuration on Unraid 6.9. The capacity is more than sufficient for my purposes, since I have a very small data footprint. Because of the noise, I will then install the WD Red drives in a backup system, which can be placed anywhere. The obligatory USB stick stays in the array so that I can continue to use Docker and KVM.

     

Unfortunately, despite several fixes for the high writes in Unraid 6.9, Btrfs still seems to cause an enormous number of write operations in a RAID 1 configuration. In general, I have often read about problems with Btrfs RAID 1 configurations here in the forum. With ZFS I have not noticed this effect as strongly so far, so I am planning on a ZFS RAID pool and hope that Limetech will soon provide GUI support for it in Unraid.

     

Furthermore, the ability to roll back VMs via snapshots is a nice bonus.

     

Why Unraid, then?

I have spent a lot of time with Unraid (especially on passing through the iGPU) and, aside from the noise, I have a reliable setup. Unfortunately, I cannot invest unlimited time in surveying the market, learning other products, and porting my configuration. Besides, Unraid does not yet feel entirely wrong for my use case, and the features I need also appear to be on the roadmap. On top of that, I like this community and feel at home here. Besides Unraid, TrueNAS Scale is worth a look in my opinion, but some time will pass before the software becomes generally available next year.

     

If you had one wish?

Thunderbolt 4 support in Unraid, and affordable optical Thunderbolt cables longer than 20 meters.

     

     

3. I haven't followed the whole conversation. But first let's make sure your configuration works and you can back up files. To do so, please open the console of the borgmatic container and execute a backup yourself via "/usr/bin/borgmatic prune create -v 1 --stats 2>&1" (without the quotes). For testing purposes, include only a small file in your backup via the `config.yaml`. Let me know how this goes.

     

Just to make sure: do you know how cron works and what the entries mean? The backup is only triggered when the cron expression matches. You may have to restart the container after making changes to the crontab.txt file.
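
For reference, a minimal crontab.txt sketch (the schedule itself is just an illustration — adjust to taste) that runs the backup command from above every night:

# crontab.txt - illustrative schedule: trigger the backup daily at 02:00
0 2 * * * /usr/bin/borgmatic prune create -v 1 --stats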

  4. On 12/24/2020 at 6:17 AM, jaychu said:

I was going through the logs and noticed that I was getting this error. I actually came from a fresh install of paperless-ng and I was wondering what step I might be missing, because I can't seem to find where I would put the Redis password (assuming this is the authentication error the logs are alluding to).

     


Please post your issue to the paperless-ng support thread. See my post: "Issues with the Unraid setup go here". This thread is meant for issues with the original paperless Docker container. My first guess would be that your document is password protected. Further diagnostics in the other thread, then.

  5. 32 minutes ago, Greygoose said:

    Please can you confirm the crontab.txt is just copied into the config folder and nothing else is required for automatic backups?

     

It isn't working on my end.

     

What does the borgmatic Docker log tell you? For me, the crontab.txt in the right place is all I need for the automation (besides the config.yaml and the keys, of course).

  6. On 8/12/2020 at 6:19 PM, Natcoso9955 said:

Has anyone got this working in a VM connection setup in Guacamole?

Yes, just this moment. Here is what I did for my Linux VM:

1. Create a new connection with protocol SSH
2. Set the parameter Hostname in the Network section to the IP address of the Linux machine
3. Enter the user credentials in the Authentication section
4. Tick the checkbox "Send WoL packet"
5. Set the field "MAC address of the remote host" to the MAC address of your virtual machine (see the VM template)
6. Set the field "Broadcast address for WoL packet" to your router's IP address
7. Set the wait time to e.g. 35 seconds
8. Save the connection

    Have fun.

  7. 1 hour ago, Danimal said:

Hi, just testing out paperless-ng, but I am having some issues with getting it to pick up from the consume folder.

     

    I get the following error from the logs:

     

    Error while consuming document: Error -5 connecting to ip:6379. No address associated with hostname.

     

Did you read the instructions? You need a separate Redis container running. Then replace <IP> in the paperless template with the IP of the server running Redis. You can get Redis from the CA too.
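
As a sketch: per the paperless-ng documentation, the Redis endpoint ends up in the PAPERLESS_REDIS environment variable, so the template entry should resolve to something like the following (the address is an example, not your actual value):

# Illustrative only - point this at the host and port of your Redis container
PAPERLESS_REDIS=redis://192.168.1.10:6379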

     

Regarding your second question: no, you just need the single Redis and paperless-ng containers.

  8. Quote

    If this seems dubious to you, please let me know why... it's just my thought process.

No, I totally get your point and your arguments seem reasonable to me. I run a Docker container with a filesystem-based SQLite database that has no built-in database export like you mentioned. That's why I asked this question in the first place.

     

I installed your container today and I really like it so far. Up until now, I did my offsite backups to Google Drive via rclone. However, this solution didn't let me sleep well, to be honest - especially the Google Drive part. So, I ordered a storage box from Hetzner today and did my first backup to it with borgmatic!

     

The last piece missing is stopping the Docker container I mentioned above. The plan is to use "HA dockermon" from within the borgmatic container. Would you mind adding curl to the Docker container for me? Then I would be able to stop any container via:

    curl -v -X POST <ha_dockermon_ip>:8126/container/container_name --header 'content-type: application/octet-stream' --data '{"state": "stop"}'

    Thanks for bringing borgmatic to my attention :)

     

  9. On 11/26/2020 at 10:13 PM, Picha said:

    Need some help with that.

     

Out of curiosity, I added just the vbios to see what happens. And what can I say, I got a display output! Yeah!

Sadly it went well for only a few seconds, until my logfiles filled up with "vfio_region_write failed: Device or resource busy igpu".

But I am happy to know it should work somehow.

Sooo... I made everything like you said there, except the "VFIO-PCI Config to stub 00:02.0". What do you mean by that? I passed through the iGPU in the System Devices, if that's it.

     

    ....

     

Edit Edit: Kernel settings were indeed wrong. Everything is working now.

     

Sorry for being late here. I'm glad you got it working! I remember how frustrating it was for me to figure this out and find a working solution. Enjoy your IGD inside your VM. Do you mind reporting which CPU and mainboard you are using?

  10. Hello paperless users,

     

unfortunately, paperless hasn't received a lot of updates and bug fixes in the past few months. Even pull requests haven't been merged for some time now. Still, paperless runs rock solid and gets the job done!

     

For some time now, there has been a well-maintained fork of paperless out there. It's called paperless-ng, and I'm happy to announce that paperless-ng is officially available via Unraid's Community Applications store (CA store).

     

    Let me briefly outline a few improvements over the existing solution:

     

• New front end built with Angular. It features full-text search with scored and highlighted results, savable filters, a dashboard, and document uploading on the landing page.
• Mobile support is also almost there. Some layouts don't work yet on small screens.
• New mail consumer that supports multiple accounts and custom filters and actions. Fully tested!
• Paperless-ng trains a neural network on your documents and assigns tags and correspondents automatically, if you instruct it to do so.
• Updated dependencies.
• More tests of critical backend parts.
• A proper task processing queue that can consume multiple documents in parallel. Consumption of many documents is now blazing fast on multi-core systems. Much of the consumer code was fixed so that it no longer blocks the database during consumption, for instance.
• Paperless-ng now uses OCRmyPDF to perform OCR on documents. It still uses tesseract under the hood, but the PDF parser of paperless has changed considerably and will behave differently for some documents (also see PAPERLESS_OCR_MODE @bigbangus @vakilando).
• Compatible with the paperless iOS and Android apps

     

There is even more, so don't forget to check out the documentation too! Jonas, the maintainer of paperless-ng, is a highly motivated dev and is currently working towards the first stable release, 1.0. Thus, the current version of paperless-ng is flagged as beta in the CA store.

     

If you are interested in paperless-ng and want to support Jonas, please test paperless-ng and give him feedback. You can find a migration guide here too. Be warned: paperless-ng has received a lot of changes and you might encounter bugs. So, whatever you do, get your backups right first!

     

Also make sure to read the product vision of paperless-ng before submitting feature requests. Things like multi-user support, for example, are not in the scope of the project. That being said:

    • Issues with paperless-ng go here
    • Issues with the Unraid setup go here

     

    Happy testing and stay healthy!

     

  11. Hi sdub,

     

this Borg integration looks promising to me. Thanks for taking the time to create the container and make it available to the community. I will definitely check it out and may consider it as a replacement for my local rsync and remote rclone offsite backups. Will report back!

    Quote

    Flash drive and appdata are incrementally backed up (alternative to CA backup utility)

     

How do you make sure that files are not being written by your Docker containers while the backup is running? The CA backup stops containers to prevent file corruption, AFAIK. I cannot see such a mechanism in your solution. Technically, this would be possible with the before_backup and after_backup hooks; see the sketch below.
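
A minimal config.yaml sketch of that idea, assuming the Docker CLI (or an equivalent such as the HA dockermon call I mentioned above) is reachable from inside the borgmatic container — the container name is a placeholder:

hooks:
    before_backup:
        # Stop the container so its SQLite file is quiescent during the backup
        - docker stop my-sqlite-app
    after_backup:
        # Bring it back up once the archive has been created
        - docker start my-sqlite-app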

     

Not sure if any further or similar steps need to be taken into account for the flash drive. It may be worth looking into the CA backup code to review the protection mechanisms.

  12. 9 minutes ago, jameson_uk said:

     

    Is there a way to have a share on the SSD that isn't part of the array but syncs across to a share that is on the array?

This could be accomplished with a share set to "cache only" and the User Scripts plugin. With rsync you can then copy the files to an array share on a schedule; see the sketch below.
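
A minimal User Scripts sketch of that idea (share names are placeholders; /mnt/user0 addresses user shares on the array only, bypassing the cache):

#!/bin/bash
# Mirror the cache-only share to a share stored on the array
rsync -a --delete /mnt/cache/fastshare/ /mnt/user0/arrayshare/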

  13. 10 minutes ago, jameson_uk said:

    I set this up and everything is working but the drives are still spinning.

    I reset the drive stats but there are still some reads and writes to disks 1 & 3 which is stopping them spinning down.


     

    There is nothing connected to the array but these docker containers are running.  Any ideas how I can track down what is actually accessing the disks?

Take a look at the File Activity plugin from the Community Apps section.

  14. 1 hour ago, dlandon said:

    New release of UD.  Highlights:

    • Don't allow unmounting a disk that was not mounted by UD.  If you manually mount an unassigned disk at a mount point other than /mnt/disks/, UD will not be able to unmount it.
    • Change disk size query to try to keep from spinning up disks.
    • Cut down on tmp file accesses.
    • Fix a situation where permission settings from UD Settings would not update the shares.
    • Some more code cleanup.

I just updated and tested the new release with my external USB drive. Sadly, a refresh of the main tab still wakes up the drive. The option "PASS THRU" is enabled. I found this in my logs when refreshing the tab:

    May 31 23:27:23 Zeus unassigned.devices: Error: shell_exec(/usr/bin/dd bs=446 count=1 if=/dev/sdb 2>/dev/null | /bin/sum | /bin/awk '{print }') took longer than 5s!

    Note that the device is formatted as ZFS and reported as "zfs_member" by UD. Can confirm that the command wakes up the drive:

    root@Zeus:~# /usr/bin/dd bs=446 count=1 if=/dev/sdb 2>/dev/null | /bin/sum | /bin/awk '{print }'
    23591     1

     

    Edit: Solved with 2020.05.31a. Big thanks @dlandon!

  15. 5 hours ago, dlandon said:

    I'm working on a solution, but I can't be sure the disk size query is the issue.  Most likely it is.  Pretty much nothing else is queried when a disk is marked as passed through.

     

Let me know if I can assist you with testing, e.g. I can grab the changes from a GitHub branch and modify my Unraid installation if you don't want to push a release yet.

  16. 21 minutes ago, dlandon said:

    It is not executed.

I see, then I suspected the wrong thing here, sorry. I should have read the code in more depth to get a better grasp.

     

    21 minutes ago, dlandon said:

    If "Pass Thru" is set, the disk should not wake up.  I suspect your disk is waking up when UD checks the disk size.

Okay, any ideas what could be improved to prevent the wake-up, then? Do you think disabling the size check when PASS THRU is enabled would be a good solution? People might want to see the disk usage even if they pass through the drive. Can you point me to the relevant code?

     

Would it be an option to make showing the "Unassigned Devices" section on the main tab optional?

     

For the time being, I removed the "Unassigned Devices" section from the main tab by editing the header of "UnassignedDevices.page". The change will be reverted when I update the plugin, though.

     

17. I have an external USB hard drive (WD Elements). When I put it to sleep via 'hdparm -y /dev/sdb' and click on the main tab (or refresh it), the 'Unassigned Devices' section refreshes (takes a few seconds) and wakes up the external drive again. I can reproduce this, pinning the wake-up of the drive exactly to the moment the main tab refreshes.
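
For reference, this is how the drive's power state can be verified from the console (a quick hdparm sketch, assuming /dev/sdb as above):

# Put the drive to sleep, then query its power state
hdparm -y /dev/sdb   # force standby (spin down)
hdparm -C /dev/sdb   # reports "standby" while asleep, "active/idle" after a wake-up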

     

    * The drive is formatted with ZFS

    * "PASS THRU" is enabled

    * Auto Mount USB Devices set to No

     

Why does the plugin wake up the drive? After some troubleshooting: the rescan command issued on refresh (/sbin/udevadm trigger --action=change 2>&1) wakes up the drive. Can I exclude the drive from the plugin without removing the plugin? I don't want the plugin to interact with this particular drive.

     

Is it possible to disable the rescan on page refresh? Can I remove the "Unassigned Devices" section from the main tab and only display it in the plugin section? Other solutions?

18. I've been playing around with ZFS in Unraid for a few days now. Thanks for keeping the plugin up-to-date!

     

I created a single zpool on an external USB disk using the commands mentioned in the first post. However, the device name changed from 'sdb' to 'sdg' and the pool was no longer loaded automatically. Thus, I exported the pool and re-imported it via its unique device ID, i.e. (source):

     

    root@server:~# zpool export extdrive
    root@server:~# zpool list -v
    no pools available
    root@server:~# zpool import -d /dev/disk/by-id extdrive
    root@server:~# zpool list -v
    NAME                                                  SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    extdrive                                              928G   139G   789G        -         -     0%    15%  1.00x    ONLINE  -
      usb-WD_Elements_25A2_575833314142354837365654-0:0   928G   139G   789G        -         -     0%  15.0%      -  ONLINE  
    root@server:~# 

     

This makes sure the pool is loaded even if the device name changes. It looks to me like it is recommended to create the pool using the unique IDs under /dev/disk/by-id rather than plain device names. What do you guys think?
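
As a sketch, creating the pool against the stable by-id path from the start would look like this (using my drive's ID from the listing above — substitute your own from /dev/disk/by-id/):

# Create the pool on the persistent device ID instead of a volatile sdX name
zpool create extdrive /dev/disk/by-id/usb-WD_Elements_25A2_575833314142354837365654-0:0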

     

Edit: Seems like it does not work well. After a reboot, another device name is assigned and, despite the pool being imported via its device ID, commands like 'zpool list -v' hang :/

     

Edit2: Looks like the hang occurs when the device label changes (e.g. re-plugging the USB drive) while the pool is still loaded. Thus, I ended up doing the following via the UD plugin:

#!/bin/bash
PATH=/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin

case $ACTION in
  'ADD' )
  ;;

  'UNMOUNT' )
  ;;

  'REMOVE' )
  ;;

  # UD cannot mount the ZFS drive itself, so auto-mount lands in this state;
  # I (ab)use it to import the pool and run the backup.
  'ERROR_MOUNT' )
    DEST=/mnt/extdrive
    # Import via the stable by-id path so a changed device name does not matter
    zpool import -d /dev/disk/by-id extdrive
    zfs mount -a
    if mountpoint -q $DEST; then

      rsync -a -v --delete a b 2>&1
      ud_backup_exit=$?  # capture rsync's exit code right away, before sync overwrites $?
      [...]

      sync

      if [ ${ud_backup_exit} -eq 0 ]; then
        echo "Completed UD backup"
      else
        echo "UD backup failed"
      fi
    else
      echo "Backup drive not mounted, exiting"
      exit 1
    fi

    zfs umount /mnt/extdrive
    zpool export extdrive
    # Check the ZFS mount point itself ($DEST), not UD's $MOUNTPOINT
    if mountpoint -q $DEST; then
      echo "Error while un-mounting ZFS drive"
    else
      echo "Device can be removed"
    fi
  ;;

  'ERROR_UNMOUNT' )
  ;;
esac

     

    I assigned this script via the UD plugin and configured auto-mount for the device. Now I can plug in my ZFS USB device and remove it once the backup is finished.

     

Using the "ERROR_MOUNT" state is kind of a hack. I would love to have the "ADD" state renamed to "MOUNTED"; then an additional "ADD" state could simply indicate the arrival of new devices.

     

Custom mount commands for the UD plugin would also be nice for this kind of scripting with ZFS drives.

     

    How do you guys handle such cases?

     

     

19. Pretty cool post - it will take some time to work my way through it.
     

I wanted to post a comment after reading the first few paragraphs. I have a failover Pi-hole setup in place using keepalived: two Pi-hole instances, each on a separate Raspberry Pi. When one goes down, the floating IP switches to the other instance in an instant. You might want to check it out, so your family can browse the web while you work on your server; a sketch of the keepalived side follows below.
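
A minimal keepalived.conf sketch for the primary Pi, assuming interface eth0 and floating IP 192.168.1.2 (both placeholders); the second Pi gets state BACKUP and a lower priority:

# /etc/keepalived/keepalived.conf on the primary Pi-hole host
vrrp_instance pihole {
    state MASTER            # the backup instance uses "state BACKUP"
    interface eth0          # NIC carrying the floating IP
    virtual_router_id 51    # must match on both Pis
    priority 150            # backup instance uses a lower value, e.g. 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.2         # the floating IP your clients use as DNS
    }
}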
