Posts posted by bland328

  1. On 11/2/2023 at 11:01 AM, bobbintb said:

    I managed to enable the Linux Audit Framework for unRAID but unfortunately it requires rebuilding the kernel.

    @bobbintb, did you happen to use any particular guide to accomplish this? Or have one in mind that you recommend? I'm also in need of auditd support, and though I have many years of Linux experience, I have yet to build a custom kernel. Thanks for any advice!
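
    For anyone else headed down this road, my (untested) understanding is that the job amounts to enabling the audit options in a kernel config matching your Unraid release and rebuilding. A minimal sketch, assuming your running kernel exposes /proc/config.gz and you've unpacked matching kernel source:

    # Sketch only: enable the Linux Audit Framework in a custom Unraid kernel
    cd /usr/src/linux
    zcat /proc/config.gz > .config          # start from the running kernel's config
    scripts/config --enable CONFIG_AUDIT --enable CONFIG_AUDITSYSCALL
    make olddefconfig                       # fill in remaining defaults
    make -j"$(nproc)" bzImage modules       # then install per your usual process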

  2. A hopefully-quick OpenVPN-Client question:

     

The overview for the app begins:

    Quote

    The basic steps for a OpenVPN connection that requires a Username and Password are:

    1) Rename your *.ovpn to 'vpn.ovpn' and place it in your OpenVPN-Client directory...

     

    Am I correct that this is wrong, and the *.ovpn file should actually be renamed to 'vpn.conf'?

  3. CA failure report, as requested by plugin:

     

    Spoiler

    OS: 6.9.2
    Browser: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36
    Language:

    Warning: count(): Parameter must be an array or an object that implements Countable in /usr/local/emhttp/plugins/community.applications/include/helpers.php on line 193

    (The warning was followed by the raw JSON/HTML popup payload for the Portainer template (ID 619, repository registry.hub.docker.com/portainer/portainer-ce, maintainer jj9987), omitted here for readability.)

     

    The situation in which the failure occurred was slightly atypical: I attempted to install Portainer from CA, but it failed to start due to a port binding conflict, after which I was automatically returned to CA, and this error appeared, with instructions to post it here.

  4. Here's an idea for another check this wonderful plugin could make, though I thoroughly admit it is an edge case:

     

    Check /mnt permissions to ensure the mode is 0755.

     

    I suggest this because a sloppy boot-time script on my system borked permissions on /mnt, and the result was that all my User Shares (both via SMB and the GUI) vanished until I found and fixed it.
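
    For anyone wanting to check for this by hand, here's a quick sketch of what the test might look like (GNU stat, which Unraid ships):

    stat -c '%a %n' /mnt                                   # expect: 755 /mnt
    [ "$(stat -c '%a' /mnt)" = "755" ] || chmod 755 /mnt   # restore if clobbered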

     

    Again--absolutely an edge case, but thought it might save someone a couple hours at some point :)

  5. tl;dr: Why am I being asked to re-provide my encryption key when upgrading a data drive?

     

I've been an Unraid user since 2013, and understand quite a bit about how and why Unraid does what it does. Today, however, I'm confused, and can't figure out whether Unraid is doing something unexpected or I'm being dense, since I haven't replaced a drive in a couple of years.

     

    I'm replacing a healthy 4TB drive with a new 8TB drive, so I stopped the array, set the 4TB drive to "no device," shut the system down, swapped out the old drive for the new, powered up again, and selected the new 8TB drive to replace the now-missing 4TB.

     

    Off to the right of the newly-selected 8TB drive, it now says "All existing data on this device will be OVERWRITTEN when array is Started", and immediately below it says "Wrong" and displays the identifier for the old 4TB drive; this all seems right to me.

     

    Near the bottom of the page, it says "STOPPED: REPLACEMENT DISK INSTALLED" to the left of the Start button and "Start will start Parity-Sync and/or Data-Rebuild" to the right; this also seems right to me.

     

However, here are the parts I don't get:

     

    The Start button is disabled, and there's no "yes, I want to do this" checkbox of any kind to enable it, which may be related to the "Encryption status: Enter new key" message appearing below the Maintenance Mode checkbox.

     

    Though I do use encrypted XFS for my data drives, I don't understand Unraid wanting a new encryption key for two reasons:

     

    1. /root/keyfile already exists.
    2. Why would Unraid need an encryption key for a disk that I gather is about to be overwritten on a sector-by-sector basis, rather than being formatted? And if the answer is that new disks always get reformatted before rebuilding...well...okay, but why isn't the existing /root/keyfile being used?

     

    I don't recall having to re-supply the encryption key in the past, so I figured I'd check in here before moving forward and potentially doing something unfortunate. Thanks for any insight!

     

    EDIT:

    I pasted the phrase from my /root/keyfile in as a "new" passphrase, and it worked; Parity-Sync/Data-Rebuild is now running.

     

However, I'd still appreciate any insight as to why this was necessary, so I can decide whether to submit a 6.9.0 bug report or simply a feature request to improve this experience.
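
    And for anyone who lands here in the same situation: before typing anything into the GUI, I believe you can confirm non-destructively that a keyfile still unlocks an encrypted disk with something like this sketch (the device name is an assumption; point it at one of your own encrypted partitions):

    # Tests the keyfile against the LUKS header without mapping or mounting anything
    cryptsetup open --test-passphrase --key-file /root/keyfile /dev/sdb1 && echo "keyfile OK"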

  6. I'm having a problem with CA that's new to me, but I'm not clear if it started with version 2021.02.27, or earlier.

     

    The problem is that when I do a DockerHub search, the chain-looking "link" icons don't do anything when clicked.

     

And when I hover, the popups all read "Go to DockerHub page ca_href".

     

    I'm happy to gather more information, if useful.

  7. On 12/16/2020 at 9:08 AM, Gunny said:

    This did it for me! Thanks a bunch for posting that.

     

    Glad to hear it helped!

     

    For the record, today the Unraid web GUI was draaaaagging...and I discovered these "worker process...exited on signal 6" messages rapidly spamming /var/log/syslog again.

     

    So, I went hunting for stale Unraid sessions open in browsers on other computers, and found two.

     

    When I closed one, the spamming slowed, and when I closed the other, the spamming stopped.
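
    If you're hunting the same way, it's easy to watch the spam rate drop in real time while you close tabs:

    # Crash-loop messages appear as they're logged; close tabs until this goes quiet
    tail -f /var/log/syslog | grep --line-buffered 'exited on signal 6'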

  8. Thanks so much for creating the Wallabag docker template!

     

    I'm trying to use it for the first time, and keep running into Error 500, and seeing this at the end of the log:

     

    2021/01/25 15:54:14 [error] 201#201: *1 FastCGI sent in stderr: "PHP message: PHP Fatal error: Uncaught RuntimeException: Unable to write in the cache directory (/var/www/wallabag/var/cache/prod)
    
    192.168.1.226 - - [25/Jan/2021:15:54:14 -0600] "GET / HTTP/1.1" 500 31 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36"
    in /var/www/wallabag/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Kernel.php:676
    Stack trace:
    #0 /var/www/wallabag/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Kernel.php(573): Symfony\Component\HttpKernel\Kernel->buildContainer()
    #1 /var/www/wallabag/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Kernel.php(117): Symfony\Component\HttpKernel\Kernel->initializeContainer()
    #2 /var/www/wallabag/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Kernel.php(166): Symfony\Component\HttpKernel\Kernel->boot()
    #3 /var/www/wallabag/web/app.php(18): Symfony\Component\HttpKernel\Kernel->handle(Object(Symfony\Component\HttpFoundation\Request))
    
    }
    thrown in /var/www/wallabag/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Kernel.php on line 676" while reading response header from upstream, client: 192.168.1.226, server: _, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "tower:6500"

     

    I haven't managed to get this fixed, and rolling back to 2.3.8 didn't help. Any thoughts on what I might try or otherwise chase?
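
    In case it helps anyone chasing the same Error 500: since the fatal error is about the cache directory not being writable, one thing worth trying is resetting ownership from inside the container. A sketch, assuming the container is named "Wallabag" and the image serves PHP as the nginx user (both assumptions):

    # Make the Symfony cache/log dirs writable by the web server user, then restart
    docker exec Wallabag chown -R nginx:nginx /var/www/wallabag/var
    docker restart Wallabag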

  9. On 10/19/2020 at 2:33 PM, RealActorRob said:

    ...two tabs with the web interface open...

    I was just watching my syslog being spammed with

    nginx ... worker process ... exited on signal 6

    2-3 times/second, and immediately upon finding and closing four stale Unraid web GUI tabs open across two machines, it stopped.

     

    Hope this helps someone.

  10. tl;dr: After migrating Docker and VM storage to Unassigned Device /mnt/disks/nvme, when I stop the array and that drive unmounts, I'm left with a /mnt/disks/nvme directory containing appdata, domains, and system dirs that each contain proper-looking directory structures, but no files.

     

    I recently migrated Docker and VM storage to a /mnt/disks/nvme volume mounted by the UD plugin (carefully updating all container configuration files and even the dockerMan template files along the way), and everything seems to be working well.

     

But, as described above, I've noticed strange results when I stop the array: though /mnt/disks/nvme does unmount, I'm then left with a /mnt/disks/nvme directory containing appdata, domains, and system dirs that are empty except for a handful of appropriate-looking empty subfolder trees.

     

    Is this to be expected?

     

    If I boot with Docker and VMs both disabled, these dirs don't appear when the array stops, suggesting to me that they are created for potential use as mountpoints, but it seems strange to me that they are created as services are stopping and drives are unmounting.

     

    To be clear, these unexpected dirs aren't causing any problem I know of, except that to (lazily-written) scripts of mine, it looks like the /mnt/disks/nvme volume is still mounted 🙄
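
    (The fix for those scripts, at least, is to stop testing for the directory and ask the kernel instead; a sketch:)

    # True only if something is actually mounted there, leftover dirs or not
    if mountpoint -q /mnt/disks/nvme; then
        echo "nvme is mounted"
    else
        echo "nvme is NOT mounted; that's just an empty mountpoint tree"
    fi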

     

    At any rate, I'd love to understand why this is happening, even if I shouldn't be concerned. Thanks very much for any insight, and apologies if this ultimately isn't truly a UD plugin question.

     

    I'm on Unraid 6.8.3 with Unassigned Devices 2020.10.25 and Unassigned Devices Plus 2020.05.22 installed, and Destructive Mode turned off.

  11. tl;dr: Improve handling of the uncommon situation that the mover can't move some files from cache to array because they unexpectedly already exist on the array.

     

    In Unraid 6.8.3, it appears that if a file exists on the cache drive (e.g. /mnt/cache/foo/bar.txt) and also on the array (e.g. /mnt/user/foo/bar.txt), then mover simply leaves the file on the cache drive forever.

     

    I recognize that this situation shouldn't naturally occur, but *shrug* things happen.

     

    I can see four problems with this situation, whenever it somehow occurs:

    1. Endangered data (since the newer file is never propagated to the array)
    2. Wasted cache space
    3. User confusion (e.g. /mnt/cache/foo/bar.txt and /mnt/user/foo/bar.txt match, but /mnt/disk1/foo/bar.txt doesn't)
    4. Potential pointless spinning up of drives every time the mover runs, only to find it can't move these conflicting files (in my real-life situation, chasing of mysterious drive spin-ups is what led me to all this)

     

Here are my two suggestions to improve the situation (a rough conflict-detection sketch follows the list):

    1. At the very least, some sort of user notification if some files are "stuck" in the cache due to conflicts.
      Plus, some mechanism for rectifying the situation would be a bonus.
    2. Optimally, Unraid would compare any such conflicting file to what exists on the array, then rectify the situation automatically if they match.
      This would be a potentially expensive operation when it occurs, but should be uncommon, and the results could arguably be worth the expense.
      Also, for a definition of files "matching", I suggest that if the file contents match, regardless of other metadata (timestamps, ownership, etc.), then the metadata from the version of the file in the cache should end up on the array.
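
    In the meantime, here's a rough sketch of how one might hunt down such conflicts by hand (paths assume the stock /mnt/cache and /mnt/disk* layout):

    # List files that exist on the cache drive AND on at least one array disk
    cd /mnt/cache || exit 1
    find . -type f | while IFS= read -r f; do
        for d in /mnt/disk[0-9]*; do
            [ -e "$d/$f" ] && echo "conflict: ${f#./} (cache + ${d#/mnt/})"
        done
    done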

     

    Apologies in advance if I'm wrong about how things work, but this appears to me to be the current situation with Unraid 6.8.3.

  12. On 2/5/2020 at 12:34 PM, SuperDan said:

I'm curious if you tried running the test script "urbackup_snapshot_helper test" from the container console to verify that UrBackup can use the BTRFS snapshot feature?

     

    I somehow missed that, so I did not. Thanks for the tip! I'll give it a try when I have a few free minutes and report back.

     

    I tried it and got an immediate failure of:

    Backupfolder not set

    I don't have time at the moment to look into that deeply, but I did find food for thought at https://forums.urbackup.org/t/urbackup-mount-helper-test-backupfolder-not-set/5271.
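
    If I'm reading that thread right, the helper wants the backup storage path recorded in a config file before the test will pass; something like the following, though I stress this is an assumption drawn from that thread and not yet verified in this container:

    # Hypothetical fix per the linked thread: tell the helper where backups live
    echo "/media/backups" > /etc/urbackup/backupfolder
    urbackup_snapshot_helper test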

  13. On 11/11/2019 at 7:06 AM, fireb1adex said:

    I am seeing an error about upgrading

    I'm running into this (often? always?) after upgrades, as well.

     

    If I do this from the container console...

    # dpkg --configure -a

    ...and restart the container, it fixes me up until the next upgrade.

     

    Though I'm keeping an eye out for a better solution, naturally :)

  14. On 12/23/2019 at 12:32 AM, maxse said:

    I can’t seem to find any info on how to install it

@maxse, FYI, I'm just getting started with restic, and to install it I downloaded the 'linux_amd64' build from the restic releases page on GitHub, and have a script called from /boot/config/go that (along with plenty of other boot-time tweaks) copies it to /usr/local/bin.

     

I'll also mention that my startup scripts set the XDG_CACHE_HOME environment variable to point to a dir I made on a nice, speedy Unassigned Device (though you could also use /mnt/cache/.cache or wherever you like). That way, restic keeps all its cache files somewhere persistent, instead of on the RAM disk, where they'd be lost on a reboot, which almost certainly isn't what you want!
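
    Concretely, the relevant fragment of my boot-time setup looks roughly like this (the exact paths are just examples from my system):

    # Called from /boot/config/go at boot
    cp /boot/extra/restic /usr/local/bin/restic   # pre-downloaded linux_amd64 build
    chmod +x /usr/local/bin/restic
    # Keep restic's cache off the RAM disk so it survives reboots
    echo 'export XDG_CACHE_HOME=/mnt/disks/nvme/.cache' >> /root/.bash_profile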

     

The restic Docker container may be great, but it sounded like an unnecessary layer of complication to me, so I approached it this way.

  15. On 11/19/2019 at 9:51 AM, SuperDan said:

    ...this image seems to be missing BTRFS tools

    I'm curious about this, too, @binhex!

     

    I've been attempting incremental backups (directly to a /mnt/cache/... path within a cache-only share folder on a BTRFS cache drive) and finding that UrBackup is making none of the expected BTRFS subvols or snapshots.

     

    I'm absolutely not up to speed on what all is involved in a Docker container performing BTRFS-specific operations on an "external" BTRFS volume.

     

    So, following up on @SuperDan's question, might this be because certain BTRFS resources are excluded from the binhex-urbackup image? And, if so, is that strategic?

     

    If it isn't strategic, it would be lovely to see them added. And when I have a bit of free time, if I won't be duplicating someone else's efforts, I'll take a shot at it myself.

     

    EDIT: After some consideration and experimentation, I'm not even sure I'm thinking about this correctly.

     

    I installed btrfs-progs within the binhex-urbackup container (# pacman -Fy && pacman -S btrfs-progs) as an experiment, but my next incremental backup still didn't create the BTRFS subvolume I was hoping for.

     

    On the UrBackup Developer Blog, it says that "[e]very file backup is put into a separate sub-volume" if "the backup storage path points to a btrfs file system and btrfs supports cross-sub-volume reflinks."

     

    So, admitting I'm more than a touch out of my depth here, perhaps: 1) Unraid btrfs doesn't support cross-sub-volume reflinks for some reason, or 2) I shouldn't expect it to work from within a Docker container accessing a filesystem that's outside the container, or 3) ...something else.
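
    One cheap way to separate possibilities 1 and 2 is to try the btrfs operations by hand from inside the container; a sketch (the backup path inside the container is an assumption, so adjust to match your mapping):

    # Can the container create and delete a subvolume on the backup volume at all?
    docker exec binhex-urbackup btrfs subvolume create /media/testsubvol
    docker exec binhex-urbackup btrfs subvolume delete /media/testsubvol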

     

    Any insight is appreciated, and I'll post here if I happen to get it figured out.

  16. 21 hours ago, TheExplorographer said:

    So what did you do?

    Sorry...should've explained! I was lucky enough to have recently migrated the VM in question to a second, non-Unraid box to use as a template for another project, so I was able to simply go grab a copy of the OVMF_VARS.fd file from there.

     

    Had that not been possible, I suppose I would've grabbed a clean copy of that file from here or here, the downside being the loss of my customized NVRAM settings.
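
    For anyone needing the clean-copy route: I believe Unraid itself ships stock OVMF images under /usr/share/qemu/ovmf-x64/ (worth verifying on your release), so restoring one would be roughly this sketch, with the VM stopped and the destination being whatever nvram path your VM's XML points at:

    # Replace the corrupt NVRAM image with a stock copy (customizations are lost)
    cp /usr/share/qemu/ovmf-x64/OVMF_VARS-pure-efi.fd \
       /etc/libvirt/qemu/nvram/<vm-uuid>_VARS-pure-efi.fd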

     

I didn't notice if any cores were pegged when this happened, but I rather doubt it, because in my case there was no boot activity--I didn't get to the Tianocore logo, nor even to the point of generating any (virtual) video output for noVNC to latch onto.

  17. 19 hours ago, bland328 said:

    My trusty macOS VM that I've been running 24/7 for years suddenly won't start.

    For the record, I solved this...but I'm not sure what to make of it. And it almost surely has nothing to do with the OP's problem, but I'll leave the solution here anyway, in case it helps someone else:

     

    Apparently, the 'OVMF_VARS.fd' file (OVMF NVRAM backing for the VM) for that VM became corrupt. It is stored in an unassigned btrfs volume on an NVME drive, and the btrfs volume itself does not appear to be corrupt.

     

    I've no idea what happened there, but hopefully (and probably) it has nothing to do with Unraid 6.8.1.

  18. I'm having a somewhat similar problem. My trusty macOS VM that I've been running 24/7 for years suddenly won't start.

     

And by that I mean that the VM claims to have started (the green 'play' button lights up in the Unraid GUI), but nothing ever happens--I don't even get as far as being able to VNC in (I get the "Guest has not initialized the display (yet)" message).

     

    In the log file for the VM, not much appears--when I try to start the VM, it spits out the long set of qemu args, then this standard stuff:

    2020-01-15 01:06:05.240+0000: Domain id=4 is tainted: high-privileges
    2020-01-15 01:06:05.240+0000: Domain id=4 is tainted: custom-argv
    2020-01-15 01:06:05.240+0000: Domain id=4 is tainted: host-cpu
    char device redirected to /dev/pts/1 (label charserial0)

    and then...nothing.
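
    For the record, a couple of quick sanity checks if anyone else hits this (standard libvirt/ps tooling, nothing Unraid-specific):

    virsh list --all                       # does libvirt think the domain is running?
    ps -eo pid,pcpu,args | grep [q]emu     # is the qemu process alive, and burning CPU?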

     

    I know this may not actually have anything to do with 6.8.1, but in the name of research...is anyone else having VM woes under Unraid 6.8.1?

  19. 8 hours ago, ryoko227 said:

    I actually never got around to trying it TBH. ...  I will have to look into this patching again when I get some free time.

    Thanks for the update, @ryoko227. If/when I get some time, I'll also put some work into this, and will post here.

     

    EDIT: For the record, I'm on an ASUS PRIME X370-PRO + AMD Ryzen 5 2600, with a 500GB Kingston NVME drive in the motherboard slot, formatted BTRFS, and unassigned. Turning off IOMMU in the BIOS does stop the flood of page faults, but I need to turn that back on soon 😅