Everything posted by bland328

  1. @bobbintb, did you happen to use any particular guide to accomplish this? Or have one in mind that you recommend? I'm also in need of auditd support, and though I have many years of Linux experience, I have yet to build a custom kernel. Thanks for any advice!
  2. A hopefully-quick OpenVPN-Client question: The overview for the app begins: Am I correct that this is wrong, and the *.ovpn file should actually be renamed to 'vpn.conf'?
  3. CA failure report, as requested by the plugin: The situation in which the failure occurred was slightly atypical: I attempted to install Portainer from CA, but it failed to start due to a port-binding conflict, after which I was automatically returned to CA and this error appeared, with instructions to post it here.
  4. Here's an idea for another check this wonderful plugin could make, though I thoroughly admit it is an edge case: check /mnt permissions to ensure the mode is 0755. I suggest this because a sloppy boot-time script on my system borked permissions on /mnt, and the result was that all my User Shares (both via SMB and the GUI) vanished until I found and fixed it. Again--absolutely an edge case, but it might save someone a couple of hours at some point.
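     For illustration, here's a minimal sketch of the sort of check I mean, assuming GNU stat's %a format (as on Unraid):

         # warn if /mnt doesn't have the expected 0755 mode
         mode=$(stat -c %a /mnt)
         if [ "$mode" != "755" ]; then
           echo "Warning: /mnt permissions are $mode, expected 0755"
         fi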
  5. Ah--that's excellent, thanks. The new approach of passing the appropriate $disk or $dev to check_temp (rather than just $name) makes a bunch of sense.
  6. Hmm...very strange--the code you posted doesn't match the version of the monitor script I'm running. On my 6.9.2 install, /usr/local/emhttp/plugins/dynamix/scripts/monitor calls check_temp thusly:

         // check array devices
         foreach ($disks as $disk) {
           ..
           // process disk temperature notifications
           check_temp($name,$disk['temp'],$text,$info);
           ..
         }
         // check unassigned devices
         foreach ($devs as $dev) {
           ..
           // process disk temperature notifications
           check_temp($name,$temp,$text,$info);
           ..
         }

     This explains why the tweak I posted makes check_temp work properly on my system, but doesn't explain why my 6.9.2 monitor code differs from what you expect. Might the code you posted be from a post-6.9.2 revision?
  7. tl;dr: It appears to me that Unraid 6.9.2 doesn't honor device-specific temperature notification settings for Unassigned Devices, for a straightforward reason that is easily fixed. Now that I have two unassigned NVMe drives in my Unraid server, the annoyance of over-temp notifications that ignore the per-device settings has doubled, so I've come up with what is hopefully a true fix, rather than a workaround, in the form of a small change to the check_temp function in /usr/local/emhttp/plugins/dynamix/scripts/monitor. Here's the diff for /usr/local/emhttp/plugins/dynamix/scripts/monitor in Unraid 6.9.2:

         61,62c61,66
         <   global $notify,$disks,$saved,$display,$server,$top;
         <   $disk = &$disks[$name];
         ---
         >   global $notify,$disks,$devs,$saved,$display,$server,$top;
         >   if (isset($disks[$name])) {
         >     $disk = &$disks[$name];
         >   } else {
         >     $disk = &$devs[$name];
         >   }

     The logic behind the change: it appears to me that while the $devs array does properly include the hotTemp ("Warning disk temperature threshold") and maxTemp ("Critical disk temperature threshold") values as a result of CustomMerge.php merging them from the /boot/config/smart-one.cfg file, the check_temp function in 6.9.2 fails to consider the $devs array at all. This patch changes check_temp so that $devs is included as a global, and so that if the passed $name can't be found in the $disks array, a lookup in $devs is attempted instead. I suspect there's a more elegant way to implement the fallback from $disks to $devs, but I'll leave that as an exercise for people who know PHP well. I don't claim this to be well-researched or production-quality code, but it does fundamentally make sense, and It Works On My System™, so I hope this is helpful.
  8. +1 Under 6.9.2, the smart-one.cfg file behaves properly, in that GUI changes show up in the file and vice versa. However, I still get over-temp notifications in cases where I should not.
  9. I'm unsure of the right priority for this--it is certainly not an Urgent data-loss emergency, but it also seems more than Minor, as it can be a significant problem for the relative few unlucky enough to run into it.
  10. Problem description

     Under Unraid 6.9.2, mover appears to leave symbolic link files targeting non-existent files ("broken" symlink files) in the cache forever. Though such symlinks are commonly called "broken" or "invalid" symlinks, those names are a bit misleading; in truth, though they "feel" weird, they are perfectly legitimate file system objects with a variety of purposes and reasons for existing, so I hope/assume this is simply a bug and/or oversight in mover.

     How to reproduce

     Simply make a symlink to an invalid path in a user share that uses the cache, like this...

         # ln -s /foo/bar /mnt/user/test/symlink

     ...then start the mover, and check to see if the symlink file has been moved to the array.

     Notes

     I see this both on a mature production server and on a newer box that still has a close-to-vanilla configuration; the attached diagnostics are from the latter. Before capturing diagnostics, I turned mover logging on and started mover to test against a /mnt/user/test/test_symlink file. I see "/mnt/cache/test/test_symlink No such file or directory" appears in the syslog, which may be a clue as to the bug causing this: broken symlink files are peculiar in that some methods of testing for existence fail, because the symlink is "followed" to a target file that doesn't exist. Here's an example of what I mean, in hopes it helps:

         root@Tower:/mnt/user/test# ls -la test_symlink
         lrwxrwxrwx 1 root root 18 Apr 28 08:21 test_symlink -> /invalid/test/path
         root@Tower:/mnt/user/test# [[ -e test_symlink ]] && echo "exists" || echo "does not exist"
         does not exist
         root@Tower:/mnt/user/test# [[ -L test_symlink ]] && echo "exists" || echo "does not exist"
         exists

     protekto-diagnostics-20210428-1027.zip
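     For what it's worth, a symlink-safe existence test just needs to check -L as well as -e; here's a minimal sketch (the path is only an example):

         f=/mnt/cache/test/test_symlink
         # -e follows the symlink (and so fails for broken ones);
         # -L matches the symlink file itself, broken or not
         if [ -e "$f" ] || [ -L "$f" ]; then
           echo "$f exists"
         fi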
  11. tl;dr: Why am I being asked to re-provide my encryption key when upgrading a data drive? I've been an Unraid user since 2013, and understand quite a bit about how and why Unraid does what it does. Today, however, I'm confused, and can't figure out whether Unraid is doing something unexpected or I'm being dense, since I haven't replaced a drive in a couple of years. I'm replacing a healthy 4TB drive with a new 8TB drive, so I stopped the array, set the 4TB drive to "no device," shut the system down, swapped out the old drive for the new, powered up again, and selected the new 8TB drive to replace the now-missing 4TB. Off to the right of the newly-selected 8TB drive, it now says "All existing data on this device will be OVERWRITTEN when array is Started", and immediately below it says "Wrong" and displays the identifier for the old 4TB drive; this all seems right to me. Near the bottom of the page, it says "STOPPED: REPLACEMENT DISK INSTALLED" to the left of the Start button and "Start will start Parity-Sync and/or Data-Rebuild" to the right; this also seems right to me. However, here are the parts I don't get: the Start button is disabled, and there's no "yes, I want to do this" checkbox of any kind to enable it, which may be related to the "Encryption status: Enter new key" message appearing below the Maintenance Mode checkbox. Though I do use encrypted XFS for my data drives, I don't understand Unraid wanting a new encryption key, for two reasons:
     - /root/keyfile already exists.
     - Why would Unraid need an encryption key for a disk that I gather is about to be overwritten on a sector-by-sector basis, rather than being formatted? And if the answer is that new disks always get reformatted before rebuilding...well...okay, but why isn't the existing /root/keyfile being used?
     I don't recall having to re-supply the encryption key in the past, so I figured I'd check in here before moving forward and potentially doing something unfortunate. Thanks for any insight! EDIT: I pasted the phrase from my /root/keyfile in as a "new" passphrase, and it worked; Parity-Sync/Data-Rebuild is now running. However, I'd still appreciate any insight as to why this was necessary, so I can decide whether to submit a 6.9.0 bug report or simply a feature request to improve this experience.
  12. I'm having a problem with CA that's new to me, though I'm not clear whether it started with version 2021.02.27 or earlier. The problem is that when I do a DockerHub search, the chain-looking "link" icons don't do anything when clicked, and when I hover, the popups all read "Go to DockerHub page ca_href". I'm happy to gather more information, if useful.
  13. Glad to hear it helped! For the record, today the Unraid web GUI was draaaaagging...and I discovered these "worker process...exited on signal 6" messages rapidly spamming /var/log/syslog again. So, I went hunting for stale Unraid sessions open in browsers on other computers, and found two. When I closed one, the spamming slowed, and when I closed the other, the spamming stopped.
  14. Thanks so much for creating the Wallabag docker template! I'm trying to use it for the first time, and keep running into Error 500, and seeing this at the end of the log:

         2021/01/25 15:54:14 [error] 201#201: *1 FastCGI sent in stderr: "PHP message: PHP Fatal error: Uncaught RuntimeException: Unable to write in the cache directory (/var/www/wallabag/var/cache/prod) in /var/www/wallabag/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Kernel.php:676
         Stack trace:
         #0 /var/www/wallabag/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Kernel.php(573): Symfony\Component\HttpKernel\Kernel->buildContainer()
         #1 /var/www/wallabag/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Kernel.php(117): Symfony\Component\HttpKernel\Kernel->initializeContainer()
         #2 /var/www/wallabag/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Kernel.php(166): Symfony\Component\HttpKernel\Kernel->boot()
         #3 /var/www/wallabag/web/app.php(18): Symfony\Component\HttpKernel\Kernel->handle(Object(Symfony\Component\HttpFoundation\Request))
         } thrown in /var/www/wallabag/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Kernel.php on line 676" while reading response header from upstream, client: 192.168.1.226, server: _, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "tower:6500"
         192.168.1.226 - - [25/Jan/2021:15:54:14 -0600] "GET / HTTP/1.1" 500 31 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36"

     I haven't managed to get this fixed, and rolling back to 2.3.8 didn't help. Any thoughts on what I might try or otherwise chase?
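     In case it's useful, here's the sort of thing I plan to try next from the host; note that the container name ("Wallabag") and the user the PHP process runs as ("nobody") are assumptions on my part, not something I've confirmed for this image:

         # hypothetical sketch: make the Symfony cache dir writable inside the container
         docker exec Wallabag chown -R nobody:nobody /var/www/wallabag/var
         docker exec Wallabag chmod -R u+rwX /var/www/wallabag/var
         docker restart Wallabag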
  15. I was just watching my syslog being spammed with "nginx ... worker process ... exited on signal 6" 2-3 times/second, and immediately upon finding and closing four stale Unraid web GUI tabs open across two machines, it stopped. Hope this helps someone.
  16. tl;dr: After migrating Docker and VM storage to Unassigned Device /mnt/disks/nvme, when I stop the array and that drive unmounts, I'm left with a /mnt/disks/nvme directory containing appdata, domains, and system dirs that each contain proper-looking directory structures, but no files. I recently migrated Docker and VM storage to a /mnt/disks/nvme volume mounted by the UD plugin (carefully updating all container configuration files and even the dockerMan template files along the way), and everything seems to be working well. But, as described above, I've noticed strange results when I stop the array: though /mnt/disks/nvme does unmount, I'm then left with a /mnt/disks/nvme directory containing appdata, domains and system dirs that are empty except for a handful of appropriate-looking empty subfolder trees. Is this to be expected? If I boot with Docker and VMs both disabled, these dirs don't appear when the array stops, suggesting to me that they are created for potential use as mountpoints, but it seems strange to me that they are created as services are stopping and drives are unmounting. To be clear, these unexpected dirs aren't causing any problem I know of, except that to (lazily-written) scripts of mine, it looks like the /mnt/disks/nvme volume is still mounted 🙄 At any rate, I'd love to understand why this is happening, even if I shouldn't be concerned. Thanks very much for any insight, and apologies if this ultimately isn't truly a UD plugin question. I'm on Unraid 6.8.3 with Unassigned Devices 2020.10.25 and Unassigned Devices Plus 2020.05.22 installed, and Destructive Mode turned off.
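     For the record, here's how I'm planning to make those scripts of mine less lazy: test the mount itself rather than the directory. mountpoint ships with util-linux, and the path is just my example:

         # check whether the UD volume is actually mounted, rather than
         # inferring it from the directory merely existing
         if mountpoint -q /mnt/disks/nvme; then
           echo "nvme volume is mounted"
         else
           echo "nvme volume is NOT mounted (the directory may still exist)"
         fi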
  17. tl;dr: Improve handling of the uncommon situation where mover can't move some files from cache to array because they unexpectedly already exist on the array. In Unraid 6.8.3, it appears that if a file exists on the cache drive (e.g. /mnt/cache/foo/bar.txt) and also on the array (e.g. /mnt/user/foo/bar.txt), then mover simply leaves the file on the cache drive forever. I recognize that this situation shouldn't naturally occur, but *shrug* things happen. I can see four problems with this situation, whenever it somehow occurs:
     - Endangered data (since the newer file is never propagated to the array)
     - Wasted cache space
     - User confusion (e.g. /mnt/cache/foo/bar.txt and /mnt/user/foo/bar.txt match, but /mnt/disk1/foo/bar.txt doesn't)
     - Potential pointless spinning up of drives every time the mover runs, only to find it can't move these conflicting files (in my real-life situation, chasing mysterious drive spin-ups is what led me to all this)
     Here are my two suggestions to improve the situation:
     - At the very least, some sort of user notification if some files are "stuck" in the cache due to conflicts. Plus, some mechanism for rectifying the situation would be a bonus.
     - Optimally, Unraid would compare any such conflicting file to what exists on the array, then rectify the situation automatically if they match. This would be a potentially expensive operation when it occurs, but should be uncommon, and the results could arguably be worth the expense. Also, for a definition of files "matching", I suggest that if the file contents match, regardless of other metadata (timestamps, ownership, etc.), then the metadata from the version of the file in the cache should end up on the array.
     Apologies in advance if I'm wrong about how things work, but this appears to me to be the current situation with Unraid 6.8.3.
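     To illustrate the detection half of this, here's a rough sketch of how one might find files "stuck" on the cache because a copy already exists on an array disk; "foo" is an example share name, and this only checks disk1:

         #!/bin/bash
         # rough sketch: list files present on both the cache and disk1,
         # flagging whether their contents match
         cd /mnt/cache/foo || exit 1
         find . -type f -print0 | while IFS= read -r -d '' f; do
           if [ -e "/mnt/disk1/foo/$f" ]; then
             if cmp -s "$f" "/mnt/disk1/foo/$f"; then
               echo "conflict (contents match): $f"
             else
               echo "conflict (contents differ): $f"
             fi
           fi
         done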
  18. Enhancement request: Not a huge deal, but it would be lovely if Unassigned Devices would report not just "luks" as the filesystem for encrypted disks, but also the effective filesystem type (e.g. "luks:xfs" or something to that effect). Thanks for all the work on Unassigned Devices--it's fantastic!
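     For context, the underlying filesystem is already discoverable from the CLI; for example (the device name and output below are illustrative, and the mapper name will vary):

         # show both the LUKS container and the filesystem inside the opened mapper device
         lsblk -o NAME,FSTYPE /dev/sdX
         NAME           FSTYPE
         sdX
         └─sdX1         crypto_LUKS
           └─luks-xxxx  xfs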
  19. Did you end up opening an issue on this? If so, I'd love to contribute to it, at least in the form of a +1, as I really need to get IOMMU going again and patching the kernel myself sounds at least...unwise. I mean, it sounds kinda fun, but mostly unwise.
  20. I somehow missed that, so I did not. Thanks for the tip! I'll give it a try when I have a few free minutes and report back. I tried it and got an immediate failure of "Backupfolder not set". I don't have time at the moment to look into that deeply, but I did find food for thought at https://forums.urbackup.org/t/urbackup-mount-helper-test-backupfolder-not-set/5271.
  21. I'm running into this (often? always?) after upgrades, as well. If I do this from the container console...

         # dpkg --configure -a

     ...and restart the container, it fixes me up until the next upgrade. Though I'm keeping an eye out for a better solution, naturally.
  22. @maxse, FYI, I'm just getting started with restic, and to install it I downloaded the 'linux_amd64' build from the restic releases page on GitHub; I have a script called from /boot/config/go that (along with plenty of other boot-time tweaks) handles copying it to /usr/local/bin. I'll also mention that my startup scripts set the XDG_CACHE_HOME environment variable to point to a dir I made on a nice, speedy Unassigned Device (though you could also use /mnt/cache/.cache or wherever you like), so that restic makes all its cache files somewhere persistent instead of in the RAM disk, where they'd be lost on a reboot, which almost certainly isn't what you want! The restic Docker container may be great, but it sounded like an unnecessary layer of complication to me, so I approached it this way.
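     Here's a minimal sketch of what that boot-time script looks like; all the paths are examples from my setup, not a recommendation:

         #!/bin/bash
         # called from /boot/config/go at boot

         # install the restic binary previously downloaded from the GitHub releases page
         cp /boot/extra/restic /usr/local/bin/restic
         chmod 755 /usr/local/bin/restic

         # keep restic's cache off the RAM disk; XDG_CACHE_HOME must be set in
         # whatever shell or script actually invokes restic
         mkdir -p /mnt/disks/nvme/.cache
         grep -q XDG_CACHE_HOME /root/.bash_profile 2>/dev/null \
           || echo 'export XDG_CACHE_HOME=/mnt/disks/nvme/.cache' >> /root/.bash_profile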
  23. I'm curious about this, too, @binhex! I've been attempting incremental backups (directly to a /mnt/cache/... path within a cache-only share folder on a BTRFS cache drive) and finding that UrBackup is making none of the expected BTRFS subvols or snapshots. I'm absolutely not up to speed on what all is involved in a Docker container performing BTRFS-specific operations on an "external" BTRFS volume. So, following up on @SuperDan's question, might this be because certain BTRFS resources are excluded from the binhex-urbackup image? And, if so, is that strategic? If it isn't strategic, it would be lovely to see them added. And when I have a bit of free time, if I won't be duplicating someone else's efforts, I'll take a shot at it myself. EDIT: After some consideration and experimentation, I'm not even sure I'm thinking about this correctly. I installed btrfs-progs within the binhex-urbackup container (# pacman -Fy && pacman -S btrfs-progs) as an experiment, but my next incremental backup still didn't create the BTRFS subvolume I was hoping for. On the UrBackup Developer Blog, it says that "[e]very file backup is put into a separate sub-volume" if "the backup storage path points to a btrfs file system and btrfs supports cross-sub-volume reflinks." So, admitting I'm more than a touch out of my depth here, perhaps: 1) Unraid btrfs doesn't support cross-sub-volume reflinks for some reason, or 2) I shouldn't expect it to work from within a Docker container accessing a filesystem that's outside the container, or 3) ...something else. Any insight is appreciated, and I'll post here if I happen to get it figured out.
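     For anyone else poking at this, here's the sort of quick test I have in mind for possibility 1; the paths are examples, and this assumes btrfs-progs is available on the host:

         # create two sibling subvolumes on the backup filesystem (example paths)
         btrfs subvolume create /mnt/cache/backups/subvol_a
         btrfs subvolume create /mnt/cache/backups/subvol_b

         # try a cross-subvolume reflink copy; per the UrBackup blog statement
         # quoted above, if this fails, per-backup subvolumes won't engage either
         echo test > /mnt/cache/backups/subvol_a/file
         cp --reflink=always /mnt/cache/backups/subvol_a/file /mnt/cache/backups/subvol_b/file \
           && echo "cross-subvolume reflinks OK" \
           || echo "cross-subvolume reflinks NOT supported"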