
Posts posted by bland328

  1. Here's an idea for another check this wonderful plugin could make, though I freely admit it's an edge case:


    Check /mnt permissions to ensure the mode is 0755.


    I suggest this because a sloppy boot-time script on my system borked permissions on /mnt, and the result was that all my User Shares (both via SMB and the GUI) vanished until I found and fixed it.


    Again--absolutely an edge case, but thought it might save someone a couple hours at some point :)
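
    In the meantime, anyone wanting to script this check themselves could do something like the sketch below (the helper name is mine, not from any plugin; the demo runs against a scratch directory so it's safe to try anywhere):

```shell
# Minimal sketch of the suggested check: warn when a directory's
# permission mode isn't 0755. On Unraid you'd point it at /mnt.
check_mode() {
  have=$(stat -c %a "$1")
  if [ "$have" != "755" ]; then
    echo "WARN: $1 is mode $have, expected 755"
  else
    echo "OK: $1 is mode 755"
  fi
}

# Demo against a scratch directory standing in for /mnt:
d=$(mktemp -d)
chmod 700 "$d"
check_mode "$d"    # prints a WARN line
chmod 755 "$d"
check_mode "$d"    # prints an OK line
rmdir "$d"
```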

  2. tl;dr: Why am I being asked to re-provide my encryption key when upgrading a data drive?


    I've been an Unraid user since 2013, and understand quite a bit about how and why Unraid does what it does. Today, however, I'm confused, and can't figure out whether Unraid is doing something unexpected or I'm being dense, since I haven't replaced a drive in a couple of years.


    I'm replacing a healthy 4TB drive with a new 8TB drive, so I stopped the array, set the 4TB drive to "no device," shut the system down, swapped out the old drive for the new, powered up again, and selected the new 8TB drive to replace the now-missing 4TB.


    Off to the right of the newly-selected 8TB drive, it now says "All existing data on this device will be OVERWRITTEN when array is Started", and immediately below it says "Wrong" and displays the identifier for the old 4TB drive; this all seems right to me.


    Near the bottom of the page, it says "STOPPED: REPLACEMENT DISK INSTALLED" to the left of the Start button and "Start will start Parity-Sync and/or Data-Rebuild" to the right; this also seems right to me.


    However, here are the parts I don't get:


    The Start button is disabled, and there's no "yes, I want to do this" checkbox of any kind to enable it, which may be related to the "Encryption status: Enter new key" message appearing below the Maintenance Mode checkbox.


    Though I do use encrypted XFS for my data drives, I don't understand Unraid wanting a new encryption key for two reasons:


    1. /root/keyfile already exists.
    2. Why would Unraid need an encryption key for a disk that I gather is about to be overwritten on a sector-by-sector basis, rather than being formatted? And if the answer is that new disks always get reformatted before rebuilding...well...okay, but why isn't the existing /root/keyfile being used?


    I don't recall having to re-supply the encryption key in the past, so I figured I'd check in here before moving forward and potentially doing something unfortunate. Thanks for any insight!



    UPDATE: I pasted the passphrase from my /root/keyfile in as a "new" passphrase, and it worked; Parity-Sync/Data-Rebuild is now running.


    However, I'd still appreciate any insight as to why this was necessary, so I can decide whether to submit a 6.9.0 bug report or simply a feature request to improve this experience.

  3. I'm having a problem with CA that's new to me, but I'm not clear if it started with version 2021.02.27, or earlier.


    The problem is that when I do a DockerHub search, the chain-looking "link" icons don't do anything when clicked.


    And when I hover, the popups all read "Go to DockerHub page ca_href".


    I'm happy to gather more information, if useful.

  4. On 12/16/2020 at 9:08 AM, Gunny said:

    This did it for me! Thanks a bunch for posting that.


    Glad to hear it helped!


    For the record, today the Unraid web GUI was draaaaagging...and I discovered these "worker process...exited on signal 6" messages rapidly spamming /var/log/syslog again.


    So, I went hunting for stale Unraid sessions open in browsers on other computers, and found two.


    When I closed one, the spamming slowed, and when I closed the other, the spamming stopped.
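
    For anyone hitting the same thing, a quick way to gauge the spam rate is simply counting the crash lines in the log; a small sketch (the helper name and sample log lines are mine, purely illustrative):

```shell
# Count nginx worker-crash lines in a log file (helper name is mine).
count_worker_crashes() {
  grep -c 'exited on signal 6' "$1"
}

# Demo against a scratch log with invented sample lines; on Unraid the
# real target would be /var/log/syslog:
log=$(mktemp)
printf '%s\n' \
  'nginx: worker process 123 exited on signal 6' \
  'nginx: worker process 456 exited on signal 6' \
  'kernel: some unrelated line' > "$log"
count_worker_crashes "$log"    # prints 2
rm -f "$log"
```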

  5. Thanks so much for creating the Wallabag docker template!


    I'm trying to use it for the first time, and keep running into Error 500, and seeing this at the end of the log:


    2021/01/25 15:54:14 [error] 201#201: *1 FastCGI sent in stderr: "PHP message: PHP Fatal error: Uncaught RuntimeException: Unable to write in the cache directory (/var/www/wallabag/var/cache/prod) - - [25/Jan/2021:15:54:14 -0600] "GET / HTTP/1.1" 500 31 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36"
    in /var/www/wallabag/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Kernel.php:676
    Stack trace:
    #0 /var/www/wallabag/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Kernel.php(573): Symfony\Component\HttpKernel\Kernel->buildContainer()
    #1 /var/www/wallabag/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Kernel.php(117): Symfony\Component\HttpKernel\Kernel->initializeContainer()
    #2 /var/www/wallabag/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Kernel.php(166): Symfony\Component\HttpKernel\Kernel->boot()
    #3 /var/www/wallabag/web/app.php(18): Symfony\Component\HttpKernel\Kernel->handle(Object(Symfony\Component\HttpFoundation\Request))
    thrown in /var/www/wallabag/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Kernel.php on line 676" while reading response header from upstream, client:, server: _, request: "GET / HTTP/1.1", upstream: "fastcgi://", host: "tower:6500"


    I haven't managed to get this fixed, and rolling back to 2.3.8 didn't help. Any thoughts on what I might try or otherwise chase?
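
    In case it helps anyone else chasing the same error: the log points at PHP being unable to write /var/www/wallabag/var/cache/prod, so one first thing to try is making that tree writable by the user PHP runs as. A hedged sketch, demonstrated on a scratch path since the real one lives inside the container (and inside the container you'd likely also need a chown to the PHP-FPM user):

```shell
# Ensure a cache directory exists and is writable (helper name is mine).
# The real path from the log would be /var/www/wallabag/var/cache/prod.
fix_cache_dir() {
  mkdir -p "$1" && chmod -R u+rwX "$1"
}

d="$(mktemp -d)/var/cache/prod"    # scratch stand-in for the real path
fix_cache_dir "$d"
[ -w "$d" ] && echo "cache dir writable"
```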

  6. On 10/19/2020 at 2:33 PM, RealActorRob said:

    ...two tabs with the web interface open...

    I was just watching my syslog being spammed with

    nginx ... worker process ... exited on signal 6

    2-3 times/second, and immediately upon finding and closing four stale Unraid web GUI tabs open across two machines, it stopped.


    Hope this helps someone.

  7. tl;dr: After migrating Docker and VM storage to Unassigned Device /mnt/disks/nvme, when I stop the array and that drive unmounts, I'm left with a /mnt/disks/nvme directory containing appdata, domains, and system dirs that each contain proper-looking directory structures, but no files.


    I recently migrated Docker and VM storage to a /mnt/disks/nvme volume mounted by the UD plugin (carefully updating all container configuration files and even the dockerMan template files along the way), and everything seems to be working well.


    But, as described above, I've noticed strange results when I stop the array: though /mnt/disks/nvme does unmount, I'm then left with a /mnt/disks/nvme directory containing appdata, domains, and system dirs that are empty except for a handful of appropriate-looking empty subfolder trees.


    Is this to be expected?


    If I boot with Docker and VMs both disabled, these dirs don't appear when the array stops, suggesting to me that they are created for potential use as mountpoints, but it seems strange to me that they are created as services are stopping and drives are unmounting.


    To be clear, these unexpected dirs aren't causing any problem I know of, except that to (lazily-written) scripts of mine, it looks like the /mnt/disks/nvme volume is still mounted 🙄
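
    For what it's worth, the workaround I'd suggest for scripts is to test for an actual mount rather than for the directory merely existing; a small sketch (the helper name is mine):

```shell
# `mountpoint -q` (util-linux) exits 0 only for a real mount, so a
# leftover empty mountpoint directory won't fool it.
is_mounted() {
  mountpoint -q "$1" && echo "mounted" || echo "not mounted"
}

is_mounted "$(mktemp -d)"    # a plain directory prints: not mounted
```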


    At any rate, I'd love to understand why this is happening, even if I shouldn't be concerned. Thanks very much for any insight, and apologies if this ultimately isn't truly a UD plugin question.


    I'm on Unraid 6.8.3 with Unassigned Devices 2020.10.25 and Unassigned Devices Plus 2020.05.22 installed, and Destructive Mode turned off.

  8. tl;dr: Improve handling of the uncommon situation that the mover can't move some files from cache to array because they unexpectedly already exist on the array.


    In Unraid 6.8.3, it appears that if a file exists on the cache drive (e.g. /mnt/cache/foo/bar.txt) and also on the array (e.g. /mnt/user/foo/bar.txt), then mover simply leaves the file on the cache drive forever.


    I recognize that this situation shouldn't naturally occur, but *shrug* things happen.


    I can see four problems with this situation, whenever it somehow occurs:

    1. Endangered data (since the newer file is never propagated to the array)
    2. Wasted cache space
    3. User confusion (e.g. /mnt/cache/foo/bar.txt and /mnt/user/foo/bar.txt match, but /mnt/disk1/foo/bar.txt doesn't)
    4. Potential pointless spinning up of drives every time the mover runs, only to find it can't move these conflicting files (in my real-life situation, chasing of mysterious drive spin-ups is what led me to all this)


    Here are my two suggestions to improve the situation:

    1. At the very least, some sort of user notification if some files are "stuck" in the cache due to conflicts.
      Plus, some mechanism for rectifying the situation would be a bonus.
    2. Optimally, Unraid would compare any such conflicting file to what exists on the array, then rectify the situation automatically if they match.
      This would be a potentially expensive operation when it occurs, but should be uncommon, and the results could arguably be worth the expense.
      Also, as a definition of files "matching," I suggest that if the file contents match, regardless of other metadata (timestamps, ownership, etc.), then the metadata from the cache version of the file should end up on the array.
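
    To make suggestion 2 concrete, here's a rough sketch of the compare-and-rectify step (the helper name and paths are mine, purely illustrative, and real mover logic would of course need to handle ownership and more):

```shell
# If the cache copy and the array copy have identical contents, keep the
# array file, carry over the cache copy's mtime, and drop the cache copy.
resolve_conflict() {
  cache="$1"; array="$2"
  if cmp -s "$cache" "$array"; then
    touch -r "$cache" "$array"    # propagate the cache version's timestamp
    rm "$cache"
    echo "resolved: kept $array"
  else
    echo "contents differ: leaving both for the user to sort out"
  fi
}

# Demo with scratch files standing in for /mnt/cache/foo/bar.txt and
# /mnt/disk1/foo/bar.txt:
d=$(mktemp -d)
echo hello > "$d/cache-copy"
echo hello > "$d/array-copy"
resolve_conflict "$d/cache-copy" "$d/array-copy"    # prints a "resolved:" line
```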


    Apologies in advance if I'm wrong about how things work, but this appears to me to be the current situation with Unraid 6.8.3.

  9. On 2/5/2020 at 12:34 PM, SuperDan said:

    I'm curious if you tried running the test script "urbackup_snapshot_helper test" from the container console to verify that Urbackup can use the BTRFS snapshot feature?


    I somehow missed that, so I did not. Thanks for the tip! I'll give it a try when I have a few free minutes and report back.


    I tried it and got an immediate failure of:

    Backupfolder not set

    I don't have time at the moment to look into that deeply, but I did find food for thought at

  10. On 11/11/2019 at 7:06 AM, fireb1adex said:

    I am seeing an error about upgrading

    I'm running into this (often? always?) after upgrades, as well.


    If I do this from the container console...

    # dpkg --configure -a

    ...and restart the container, it fixes me up until the next upgrade.


    Though I'm keeping an eye out for a better solution, naturally :)

  11. On 12/23/2019 at 12:32 AM, maxse said:

    I can’t seem to find any info on how to install it

    @maxse, FYI, I'm just getting started with restic, and to install it I downloaded the 'linux_amd64' build from the restic releases page on Github, and have a script called from /boot/config/go that (along with plenty of other boot-time tweaks) handles copying it to /usr/local/bin.


    I'll also mention that my startup scripts set the XDG_CACHE_HOME environment variable to point to a dir I made on a nice, speedy Unassigned Device (though you could also use /mnt/cache/.cache or wherever you like). That way, restic makes all its cache files somewhere persistent, instead of in the RAM disk, where they'd be lost on a reboot--which almost certainly isn't what you want!
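
    Roughly, the relevant pieces look like the sketch below (paths and filenames are illustrative, not copied from my actual script, and the demo uses a scratch root so it's runnable anywhere; on Unraid think /usr/local/bin and a dir on a persistent drive):

```shell
# Boot-time setup sketch: install the restic binary onto PATH and point
# its cache at persistent storage instead of the RAM disk. All paths
# here are scratch stand-ins.
root=$(mktemp -d)
mkdir -p "$root/usr/local/bin" "$root/persistent/.cache"

# 1) copy the downloaded binary into place (a stub is used for the demo)
printf '#!/bin/sh\necho restic-stub\n' > "$root/usr/local/bin/restic"
chmod +x "$root/usr/local/bin/restic"

# 2) keep restic's cache somewhere that survives a reboot
export XDG_CACHE_HOME="$root/persistent/.cache"

"$root/usr/local/bin/restic"    # runs the stub, printing: restic-stub
```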


    The restic Docker container may be great, but it sounded like an unnecessary layer of complication to me, so I approached it this way.

  12. On 11/19/2019 at 9:51 AM, SuperDan said:

    ...this image seems to be missing BTRFS tools

    I'm curious about this, too, @binhex!


    I've been attempting incremental backups (directly to a /mnt/cache/... path within a cache-only share folder on a BTRFS cache drive) and finding that UrBackup is making none of the expected BTRFS subvols or snapshots.


    I'm absolutely not up to speed on what all is involved in a Docker container performing BTRFS-specific operations on an "external" BTRFS volume.


    So, following up on @SuperDan's question, might this be because certain BTRFS resources are excluded from the binhex-urbackup image? And, if so, is that strategic?


    If it isn't strategic, it would be lovely to see them added. And when I have a bit of free time, if I won't be duplicating someone else's efforts, I'll take a shot at it myself.


    EDIT: After some consideration and experimentation, I'm not even sure I'm thinking about this correctly.


    I installed btrfs-progs within the binhex-urbackup container (# pacman -Fy && pacman -S btrfs-progs) as an experiment, but my next incremental backup still didn't create the BTRFS subvolume I was hoping for.


    On the UrBackup Developer Blog, it says that "[e]very file backup is put into a separate sub-volume" if "the backup storage path points to a btrfs file system and btrfs supports cross-sub-volume reflinks."


    So, admitting I'm more than a touch out of my depth here, perhaps: 1) Unraid btrfs doesn't support cross-sub-volume reflinks for some reason, or 2) I shouldn't expect it to work from within a Docker container accessing a filesystem that's outside the container, or 3) ...something else.
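
    One quick probe related to possibility 1 is checking whether reflinks work at all on the target filesystem (cross-subvolume support additionally needs btrfs and a new-enough kernel); a hedged sketch:

```shell
# Try an explicit reflink copy; cp fails cleanly if the filesystem
# (or the kernel) doesn't support it. Run this with the temp files on
# the filesystem you actually care about.
src=$(mktemp)
echo data > "$src"
if cp --reflink=always "$src" "$src.clone" 2>/dev/null; then
  echo "reflink supported on this filesystem"
else
  echo "reflink NOT supported on this filesystem"
fi
rm -f "$src" "$src.clone"
```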


    Any insight is appreciated, and I'll post here if I happen to get it figured out.

  13. 21 hours ago, TheExplorographer said:

    So what did you do?

    Sorry...should've explained! I was lucky enough to have recently migrated the VM in question to a second, non-Unraid box to use as a template for another project, so I was able to simply go grab a copy of the OVMF_VARS.fd file from there.


    Had that not been possible, I suppose I would've grabbed a clean copy of that file from here or here, the downside being the loss of my customized NVRAM settings.


    I didn't notice whether any cores were pegged when this happened, but I rather doubt it, because in my case there was no boot activity--I didn't get to the Tianocore logo, nor even to the point of generating any (virtual) video output for noVNC to latch onto.

  14. 19 hours ago, bland328 said:

    My trusty macOS VM that I've been running 24/7 for years suddenly won't start.

    For the record, I solved this...but I'm not sure what to make of it. And it almost surely has nothing to do with the OP's problem, but I'll leave the solution here anyway, in case it helps someone else:


    Apparently, the 'OVMF_VARS.fd' file (OVMF NVRAM backing for the VM) for that VM became corrupt. It is stored in an unassigned btrfs volume on an NVME drive, and the btrfs volume itself does not appear to be corrupt.
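
    As a cheap sanity check for this kind of corruption: an OVMF_VARS file should stay exactly the same size as the pristine template it was copied from (its contents change as the VM writes NVRAM, but its size shouldn't). A sketch using scratch files, since the real template and per-VM NVRAM paths vary by install:

```shell
# Compare a suspect VARS file's size against the pristine template's.
template=$(mktemp) && truncate -s 128K "$template"   # stand-in template
varsfile=$(mktemp) && truncate -s 128K "$varsfile"   # stand-in VARS file
if [ "$(stat -c %s "$varsfile")" = "$(stat -c %s "$template")" ]; then
  echo "size matches template"
else
  echo "size mismatch: VARS file likely truncated or corrupt"
fi
rm -f "$template" "$varsfile"
```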


    I've no idea what happened there, but hopefully (and probably) it has nothing to do with Unraid 6.8.1.

  15. I'm having a somewhat similar problem. My trusty macOS VM that I've been running 24/7 for years suddenly won't start.


    And by that I mean that the VM claims to have started (green 'play' button lights up in the Unraid GUI), but nothing ever happens--I don't get even as far as being able to VNC in (I get the "Guest has not initialized the display (yet)" message).


    In the log file for the VM, not much appears--when I try to start the VM, it spits out the long set of qemu args, then this standard stuff:

    2020-01-15 01:06:05.240+0000: Domain id=4 is tainted: high-privileges
    2020-01-15 01:06:05.240+0000: Domain id=4 is tainted: custom-argv
    2020-01-15 01:06:05.240+0000: Domain id=4 is tainted: host-cpu
    char device redirected to /dev/pts/1 (label charserial0)

    and then...nothing.


    I know this may not actually have anything to do with 6.8.1, but is anyone else having VM woes under Unraid 6.8.1?

  16. 8 hours ago, ryoko227 said:

    I actually never got around to trying it TBH. ...  I will have to look into this patching again when I get some free time.

    Thanks for the update, @ryoko227. If/when I get some time, I'll also put some work into this, and will post here.


    EDIT: For the record, I'm on an ASUS PRIME X370-PRO + AMD Ryzen 5 2600, with a 500GB Kingston NVME drive in the motherboard slot, formatted BTRFS, and unassigned. Turning off IOMMU in the BIOS does stop the flood of page faults, but I need to turn that back on soon 😅

  17. Thanks for the info, @Gitchu.


    I've upgraded to Unraid 6.8.0 final, and now find that VMXNET3 (+Q35-3.1, which may or may not have anything to do with it) is working great with Catalina, as well.


    Though, to be fair, I haven't freshly logged into iCloud services (iMessage, App Store, iCloud Drive, etc.) recently; for anyone else reading this, in the past I've found that VMXNET3 works with those services only after I've first successfully logged in using a different (e1000 or passed-through) NIC.


    Fingers crossed that those days are now behind us, but I don't feel like logging out of what's now working just to test it. 😉

  18. For the record, I'm currently experimenting with running Catalina under qemu/kvm/libvirt on a non-Unraid Linux box (still an Unraid fan here...this is just a side project!), and I find that when using qemu-4.1.1_1, a virtual e1000-82545em NIC works just great with br0: bridging.


    To be fair, this br0: bridge is one I configured, and I'm not currently quite savvy enough to know if that could be the difference...but I doubt it.


    So, I'm looking forward to Unraid 6.8.0, suspecting an updated qemu will fix everything for me, as it did for @Gitchu.


    Assuming it does, I'll stop using qemu's virtual e1000 NIC+e1000 kext, and return to using e1000-82545em.
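
    For reference, the libvirt interface definition I'm describing looks roughly like this (a sketch, not a full domain XML; the bridge name is just the one from my setup):

```xml
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='e1000-82545em'/>
</interface>
```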
