Nogami

Everything posted by Nogami

  1. Thanks for this, it worked fine on both my main and backup systems to remove the mbuffer error.
  2. I think a reasonable timeframe for supporting old versions with upgrades that only patch major vulnerabilities is fine. Maybe 2 years tops.
  3. Think this is pretty fair. 1 year of included upgrades is pretty standard, and I don't have an issue with it as long as they never move to a "don't pay a sub and your existing paid software expires" model (Adobe's garbage approach). As long as that's not the case, it's just a 1-year support contract, not a "subscription" in the sense of apps where you lose everything if you don't continue to pay. Gotta nip that line of FUD in the bud.
     Retaining a lifetime option that's a bit more expensive is also quite reasonable for us long-term users. There must also be a way to reactivate your license that's less expensive than buying an entire new license, for people who don't renew every year and just want to skip a bunch of interim upgrades and get current. Too many companies don't value the customers that helped get them there, and it would pain me if Limetech went that way and only chased new subs rather than treasuring existing customers, who are usually their biggest advocates.
  4. Thanks for the update, I’ve switched to the default recent master. I’ll report back if there are any issues.
  5. Saw this in my log from last night:
     seems that UPS [ups] is in OL+DISCHRG state now. Is it calibrating or do you perhaps want to set 'onlinedischarge' option? Some UPS models (e.g. CyberPower UT series) emit OL+DISCHRG when offline.
     This is using 2.8 stable. It didn't trigger a shutdown this time, if it was even the same issue; it was only in that mode for 2 seconds, then switched back to:
     UPS [email protected] on line power
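     For reference, a rough sketch of where that option would live in ups.conf on the box running the NUT driver; the section name and driver below are from my setup, and the exact flag usage should be double-checked against the usbhid-ups man page:

         [ups]
             driver = usbhid-ups    # assuming the usual USB HID driver; yours may differ
             port = auto
             onlinedischarge        # the flag the log message above is hinting at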
  6. Thanks for all of the testing and updates here. I've had two random NUT-related shutdowns in the last month, so I'm thinking I may have something similar, however I don't have enough logging enabled to verify everything. That my system came back online without a parity check would indicate a safe shutdown (both my main server, which monitors the UPS directly, and my backup server, which monitors it through NUT, did clean shutdowns at the same time), so I'm pretty sure that's where the issue is. I've also switched my config to stable 2.8.0 to see if the problem is solved.
     device.model: Back-UPS RS 1350MS
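     For anyone else chasing this, polling NUT from the console is the quickest way I know to see what it currently reports; the UPS name and host below are placeholders from my setup:

         upsc ups@localhost ups.status    # e.g. "OL", "OB DISCHRG", or the odd "OL DISCHRG"
         upsc ups@localhost               # full variable dump, including device.model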
  7. I have a few dockers with data inside the appdata folder, however I'm getting an error that there's nothing inside the volume and it refuses to back anything up.
     [18.06.2023 19:18:20][jackett][info] Stopping jackett...
     [18.06.2023 19:18:25][jackett][info] done! (took 5 seconds)
     [18.06.2023 19:18:25][jackett][debug] Backup jackett - Container Volumeinfo: Array ( [0] => /mnt/apps/jackett_downloads/:/downloads:rw [1] => /mnt/user/appdata/jackett:/config:rw )
     [18.06.2023 19:18:25][jackett][debug] usorted volumes: Array ( [0] => /mnt/user/appdata/jackett [1] => /mnt/apps/jackett_downloads )
     [18.06.2023 19:18:25][jackett][info] Should NOT backup external volumes, sanitizing them...
     [18.06.2023 19:18:25][jackett][info] jackett does not have any volume to back up! Skipping
     [18.06.2023 19:18:25][jackett][info] Starting jackett... (try #1)
     /mnt/apps/jackett_downloads is indeed empty; for /mnt/user/appdata/jackett, there's 21MB of files and directories in there. Note that if I enable "save external volumes" for that docker it does back it up, however the data is inside the appdata folder, so it shouldn't show as an external volume, should it? Maybe it's related to having my appdata on a ZFS drive in 6.12?
     One other small question: it's creating backups with user root, group root. Shouldn't it be nobody/users when creating on shares?
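     In case it helps with debugging the volume detection, this is roughly how I compared what Docker itself reports for the container's mounts against what the plugin logged; the container name is from my setup:

         # List the container's bind mounts as Docker sees them
         docker inspect --format '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' jackett
         # Confirm the config data really is present on the host side
         du -sh /mnt/user/appdata/jackett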
  8. Thanks for checking, I just upgraded to stable, I'll test it later this evening and see if I can replicate it again.
  9. Strange permissions issue related to the cache drive being set to ZFS format, fixed when I reformat the cache to BTRFS or XFS.
     Having a strange permissions issue that seems to have cropped up after setting my NVMe cache drive to ZFS (for the compression). When my browser (Firefox - no Google here) downloads a file, it saves it as a temporary file, then renames it to the final filename when the download is complete. My cache drive then retains a 0-byte file (the temporary file) which I have no permission to delete over SMB. I can delete or rename the part1(1).zip file normally; the 0-byte file (no extended attributes) cannot be deleted or modified through an SMB connection with the same permissions. Files created in one pass on the cache (copied from Windows, for example) don't seem to have this issue. It seems related to the way files are streamed to the cache drive as a temporary download file and then renamed when complete: the original temp file should be deleted, but that doesn't happen, as something seems to lose the permission to delete it. Even after the files are moved from the ZFS cache to the XFS array, the problem persists.
     * Reformatted the cache to BTRFS and the problem is gone, so it seems specific to the way files are created and modified on ZFS. The fact that it's Firefox doing it is irrelevant, as any software package is capable of writing files this way.
     Edit: running mover made no difference, but once I reformatted the cache drive to BTRFS (with compression), I could delete the 0-byte files. Makes me think there was some sort of link that mover didn't take care of, but stopping the array and reformatting the cache seemed to break the link to the old attributes and allow deleting. Very strange. Gonna stay on BTRFS for the time being.
     Edit 2: Also tried the cache as XFS and it's fine with no side effects. Any ideas?
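     For what it's worth, this is roughly what I ran from the Unraid console to look at one of the stuck 0-byte files before reformatting; the path and filename are just examples from my setup:

         ls -l /mnt/cache/downloads/example.zip.part    # owner/group/mode of the leftover temp file
         stat /mnt/cache/downloads/example.zip.part     # timestamps and inode details
         zfs get acltype,xattr,aclinherit cache         # dataset properties that can affect deletes over SMB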
  10. Thanks, this worked here as well when my ZFS pool hung up after doing a lot of messing with converting my appdata and cache to ZFS drives for the compression.
  11. Creating a new share, and for some reason it's also creating a new ZFS dataset with the share name, despite the share being set to only use my cache drive. i.e.: create share "scanner", set it to use the cache pool only, and a ZFS dataset "scanner" is created. Delete the ZFS dataset "scanner" and the "scanner" share vanishes as well. Very strange. The share is set to use only the cache pool, however when I add some data to it, it actually goes into the ZFS dataset that it created. It only seems to be the "scanner" share that causes the issue; if I create a new share with a different name, a ZFS dataset is not created and it seems fine. Maybe a reboot is in order...
     Edit: deleted everything from the share, deleted the share, re-created it, and now no mystery ZFS dataset is created. Maybe there was some sort of mystery symlink or something hanging around? Strange. Continuing to test.
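     If anyone else hits the same thing, checking for (and cleaning up) a stray dataset from the console is straightforward; the pool name "cache" is from my setup:

         zfs list -r cache            # look for an unexpected cache/scanner dataset
         zfs destroy cache/scanner    # destructive: only after anything you need has been moved out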
  12. Any chance we could get an option (or a default) for "show datasets" to be enabled?
  13. Perfect, thank you! Brain wasn't keeping up with my eyes. Great job BTW, this is awesome!
  14. Just wondering why it's recommended to erase and re-create zpools? My RC5 ZFS pools seemed to come in OK. Am I overlooking something basic (bit of a ZFS newb)?
  15. Here's what I got back; looks like it's reporting it properly through the CLI in 6.11.5 (I added -h for readability). But in 6.11.5 SMB Shares:
  16. When mounting a ZFS pool from 6.12 RC5 remotely on 6.11.5 through SMB, the available free space and the overall storage size are shown as the main array's size, rather than the ZFS pool's remaining space. Apologies if this is still related to not having both of my systems on the RC; the main server gets to move when we go stable.
     ZFS pools on 6.12 RC5: doc_backup has 1.47TB used and 1.64TB remaining in the pool. The one stock array on the 6.12 RC5 server has 940GB in use. What 6.11.5 sees when connecting remotely through SMB: doc_backup shows 940GB used and 7TB free, not the real ZFS pool values.
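     To show the mismatch concretely, this is roughly how I compared the two views; the remote mount path follows the Unassigned Devices convention on my setup and may differ on yours:

         # On the 6.12 RC5 server: what ZFS itself reports for the pool
         zfs list -o name,used,avail doc_backup
         # On the 6.11.5 server: what the mounted SMB share reports
         df -h /mnt/remotes/SERVER_doc_backup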
  17. Really enjoying this plugin so far, makes managing my ZFS pool much easier in 6.12. My only comment would be a GUI suggestion that when you make a modification to a ZFS entry (for example renaming or adding/deleting something), it should trigger an immediate refresh on the unRAID UI, rather than waiting for the next scheduled update (~30 sec).
  18. Low priority, but just adding that the UI looks bugged for the tab bar at the top in Firefox at the actual-size (100%) zoom level. When zoomed in one step, it looks OK. I refuse to use Chrome due to Google.
  19. I've had this problem for a while, and I've done some testing and found that disabling the "unassigned devices" plugin resolves it for me on one of my servers, but not the other one. Must be plugins that insert data on the main page.