Kilrah

Members · 1872 posts

Everything posted by Kilrah

  1. Indeed there's no more mdX, hadn't noticed either.
  2. Yep, my setting is "Errors only". I didn't remember it was there, but yes.
  3. The last update has an issue with notifications: I woke up to a slew of "warning: skipping verification for this container because its not wanted!" on Unraid and Discord, just because the "Verify Backup?" option (help text: "Normally, tar detects any errors during backup. This option just adds an extra layer of security.") is off. Also got some "XML file for [container] was not found!" messages; those are from compose stacks. These should be at most info level and not trigger the notification system, as they are normal and will just spam every day.
  4. It's not a device script but a manual unmount, but I am pretty certain it was not the case because:
     - UD button was back to "mount"
     - Syslog has "Successfully unmounted", i.e. the notification would have been received, yet:
     - Main page still showed writes to the drive
     - IOwait was still high
     - Unplugging the device (see timestamps, a whole 6 minutes later) caused XFS to go haywire; IOwait dropped, confirming it was coming from that drive
     - Next mount needed several minutes to replay the log
     All of these suggest activity on the OS level after unmount: the unmount having "written" to the disk and returned but not synced, and those writes still being slowly flushed from the OS cache...
     Aug 11 12:42:52 Unraid unassigned.devices: Unmounting partition '/dev/sdc1' at mountpoint '/mnt/disks/WCJ4D6L2'...
     Aug 11 12:42:52 Unraid unassigned.devices: Synching file system on '/mnt/disks/WCJ4D6L2'.
     Aug 11 12:49:12 Unraid unassigned.devices: Warning: shell_exec(/bin/sync -f '/mnt/disks/WCJ4D6L2') took longer than 90s!
     Aug 11 12:49:12 Unraid unassigned.devices: Unmount cmd: /sbin/umount '/mnt/disks/WCJ4D6L2' 2>&1
     Aug 11 12:49:12 Unraid unassigned.devices: Successfully unmounted '/dev/sdc1'
     Aug 11 13:05:12 Unraid kernel: usb 2-2: USB disconnect, device number 10
     Aug 11 13:05:12 Unraid kernel: device offline error, dev sdc, sector 2274673912 op 0x1:(WRITE) flags 0x1000 phys_seg 8 prio class 2
     Aug 11 13:05:12 Unraid kernel: device offline error, dev sdc, sector 2274689336 op 0x1:(WRITE) flags 0x1000 phys_seg 8 prio class 2
     Aug 11 13:05:12 Unraid kernel: device offline error, dev sdc, sector 2274696760 op 0x1:(WRITE) flags 0x1000 phys_seg 8 prio class 2
     Aug 11 13:05:12 Unraid kernel: device offline error, dev sdc, sector 2274704248 op 0x1:(WRITE) flags 0x1000 phys_seg 8 prio class 2
     Aug 11 13:05:12 Unraid kernel: XFS (sdc1): metadata I/O error in "xfs_buf_ioend+0x111/0x384 [xfs]" at daddr 0x879500f8 len 32 error 19
     Aug 11 13:05:12 Unraid kernel: device offline error, dev sdc, sector 2274714680 op 0x1:(WRITE) flags 0x1000 phys_seg 8 prio class 2
     Aug 11 13:05:12 Unraid kernel: XFS (sdc1): metadata I/O error in "xfs_buf_ioend+0x111/0x384 [xfs]" at daddr 0x87951df8 len 32 error 19
     Aug 11 13:05:12 Unraid kernel: XFS (sdc1): metadata I/O error in "xfs_buf_ioend+0x111/0x384 [xfs]" at daddr 0x87953b38 len 32 error 19
     Aug 11 13:05:12 Unraid kernel: XFS (sdc1): metadata I/O error in "xfs_buf_ioend+0x111/0x384 [xfs]" at daddr 0x879563f8 len 32 error 19
     Aug 11 13:05:12 Unraid kernel: device offline error, dev sdc, sector 2274689336 op 0x1:(WRITE) flags 0x1000 phys_seg 4 prio class 2
     Aug 11 13:05:12 Unraid kernel: device offline error, dev sdc, sector 2274689368 op 0x1:(WRITE) flags 0x1000 phys_seg 4 prio class 2
     Aug 11 13:05:12 Unraid kernel: device offline error, dev sdc, sector 2274696760 op 0x1:(WRITE) flags 0x1000 phys_seg 4 prio class 2
     Aug 11 13:05:12 Unraid kernel: XFS (sdc1): Metadata I/O Error (0x1) detected at xfs_buf_ioend+0x251/0x384 [xfs] (fs/xfs/xfs_buf.c:1260). Shutting down filesystem.
     Aug 11 13:05:12 Unraid kernel: XFS (sdc1): Please unmount the filesystem and rectify the problem(s)
  5. Seem to have found an edge case when unmounting with UD. When clicking the button UD sends a `sync -f mountpoint` which flushes the filesystem, then unmounts and returns "successfully unmounted". It doesn't however sync the device itself after the unmount and wait for / ensure the writes involved in the unmounting itself have been committed before reporting a successful unmount. I've run into the situation of a badly "clogged" SMR drive where for UD it was unmounted well and good but the drive still needed minutes before actually being safe to unplug. Maybe an additional device-level sync would be a good addition.
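     A minimal illustration of the distinction this relies on, using a scratch file rather than a real disk (`sync -f` is util-linux's per-filesystem sync, the same call UD issues):

     ```shell
     # A write() returning only means the data reached the OS page cache;
     # an explicit sync is what actually waits for stable storage.
     f=$(mktemp)                                        # scratch file, not a real disk
     dd if=/dev/zero of="$f" bs=1M count=8 status=none  # returns once the cache has it
     sync -f "$f"                                       # blocks until that filesystem is flushed
     rm -f "$f"
     ```

     The edge case above is that the unmount's own metadata writes can still be in flight after `umount` returns; a follow-up plain `sync` would wait for those too.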
  6. You just "Add container" on the Docker page and fill in the template with the contents of your run command.
  7. There's ffmpeg-nvidia. You shouldn't need to care about the nvidia part of things; the support is just there if you have the hardware, but it should work just fine in software.
  8. There's one in apps already, and plenty more on dockerhub you can use.
  9. Click on it in the list and see if it's set to not be stopped.
  10. I don't know adguard (I use pihole), but if you decide on "tower.local" as a domain name you need to tell it that *.tower.local is 192.168.1.200. Then on npm you enter a proxy host that'll be e.g. "cloud.tower.local", which will redirect to 192.168.1.200:whateverport. NPM needs to be running on ports 80/443 on the unraid box, so you'll typically want to move unraid's UI to other ports.
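      Since the poster uses Pi-hole, whose DNS backend is dnsmasq, the wildcard described above is a single line (domain and IP taken from the post; AdGuard Home has an equivalent "DNS rewrite" feature in its UI):

      ```conf
      # dnsmasq wildcard: resolve tower.local and every *.tower.local
      # to the Unraid box (adjust the IP to your network)
      address=/tower.local/192.168.1.200
      ```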
  11. Some discussion here for example: Typical FOSS conundrum, someone got excited about something then dropped it and nobody took over. Recent or not doesn't matter. I doubt it'll ever get supported by unraid unless the situation changes drastically.
  12. Can't help with the current situation, but ZFS' own encryption isn't supported in unraid, and also deprecated/not maintained anymore in ZFS itself so not recommended in general.
  13. It is. You set up your DNS so that all the domains you want resolve to the IP of the machine NPM runs on, then in NPM you set up hosts so that each domain maps to the desired ip:port.
  14. What you put in /boot/extras gets installed on boot, that's what nerdtools does.
  15. You can already do that if your hardware supports it.
  16. I just spun up my test unraid USB on an old machine to clear 5 drives that were previously used in a Drobo DAS. It hadn't been booted in a while, and all 5 drives were available to clear or format; after installing all the plugin updates including UD, 2 of them were shown as "pool". Had to use wipefs -a to clear the "raid member" signature from them. A bit too cautious maybe? Not great if there is no GUI way to clear previously used drives that had a layout that isn't even in use on unraid.
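      wipefs works on plain files too, so the clearing step can be sketched safely on a throwaway image (filenames here are just examples; on a real drive you'd target /dev/sdX and need root):

      ```shell
      img=$(mktemp)            # throwaway image instead of a real drive
      truncate -s 64M "$img"
      mkfs.ext4 -q -F "$img"   # write a filesystem signature to it
      wipefs "$img"            # list detected signatures
      wipefs -a "$img"         # erase all signatures (what fixed the "pool" drives)
      wipefs "$img"            # prints nothing now
      rm -f "$img"
      ```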
  17. As a note I've had to manually change it to IP at some point by editing the config file as it sometimes wasn't resolving the name, but it's never caused me problems.
  18. If those are both in the array then 30-40MB/s is normal, have to live with it. And any other concurrent access to the array will slow it down further.