samsausages

Members · 114 posts

samsausages's Achievements

Apprentice (3/14) · 29 Reputation

  1. Kicking around the idea of replacing my parity drives with some dual-actuator HDDs. Wondering if anyone has tried it yet and how the performance is looking.
  2. Feels like LT is catching flak for Broadcom's BS. People really are on edge out there!
  3. Thanks for the update. Pretty much what I expected from LT: support for the existing community. You all haven't let me down yet.
  4. Unraid hasn't let me down yet. I'm not going to worry or get upset about something that hasn't even been officially announced and that comes from Reddit speculation on a Sunday night going into a federal holiday... Unraid and LT have built more credibility with me than that over many years. I have a feeling license changes are coming along with the talk of increasing the array disk limit beyond 30; that would explain the extra tiers, i.e. "Unleashed". But as of right now, I have no reason to believe that my multiple Pro licenses won't continue to be honored as advertised. I do understand the need for model changes, as I paid for this once and have been getting updates for many years. I do expect them to honor our agreement (lifetime), but everyone should be able to set new and unique license terms as they see fit going forward.
  5. I'm also looking for nvme-cli, as I have some Intel NVMe SSDs that I need to low-level format from 512-byte to 4k sectors. Would be nice to have it in Unraid by default, or in NerdTools! But I did find the package online and installed it manually; it was easy enough... took me longer to find the package than it did to install it.
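     For anyone doing the same thing, this is roughly the nvme-cli sequence; the device path and LBA format index below are only examples, so check the `id-ns` output for your drive first (the format step wipes the namespace):

     ```bash
     # List the LBA formats the namespace supports; look for the 4096-byte entry
     nvme id-ns /dev/nvme0n1 -H | grep "LBA Format"

     # Destructive: reformat the namespace to the 4k LBA format
     # (index 1 is only an example; use the index reported above)
     nvme format /dev/nvme0n1 --lbaf=1
     ```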
  6. I manage file permissions at the Linux level; that is my intention for my use case. I don't always want the SMB user to be the owner of the file. I also don't run all my containers as 99:100; sometimes I use masking, sometimes I don't, depending on what I'm doing. I'm not telling other people to do what I'm doing... I'm asking what others are doing so I can get ideas for my use case.
  7. So for a while, a lot of my SMB files were changing owner to the SMB user I was using to log into Unraid over SMB from Windows. I added some lines to the SMB config file that force the default user to nobody:nogroup, and that works fine:

     ```
     [share]
     path = /mnt/user/share
     public = no
     browseable = no
     guest ok = no
     writeable = yes
     read only = no
     force user = nobody
     force group = nogroup
     create mask = 0640
     directory mask = 0750
     # Show ZFS snapshots
     vfs objects = shadow_copy2
     shadow: snapdir = .zfs/snapshot
     shadow: sort = desc
     shadow: format = autosnap_%Y-%m-%d_%H:%M:%S_daily
     shadow: localtime = yes
     ```

     However, I think it would probably be better to have files inherit the owner from the parent folder instead, as I have some more restricted folders set up. But this doesn't work as expected: removing `force user` and `force group` and then adding `inherit permissions = yes` has no effect. Is there a better way of accomplishing this?
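     For what it's worth, a variant I may try next (an untested sketch using standard Samba options, nothing Unraid-specific): my understanding is that `inherit permissions` only controls the mode bits of new files, while ownership inheritance is a separate option, `inherit owner`. So the share would likely need both once `force user`/`force group` are removed:

     ```
     [share]
     path = /mnt/user/share
     # New files/dirs take their owner from the parent directory's owner
     inherit owner = yes
     # New files/dirs take their mode bits from the parent directory
     # (my understanding: this supersedes create mask / directory mask)
     inherit permissions = yes
     ```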
  8. @NeoDude I don't think this is a ZFS Master issue, but I think I have an idea of what's happening. What is your folder and dataset structure? You may need to execute the command with `-r` (recursive).

     It sounds like you are trying to roll back the parent dataset while it has child datasets nested within it. When you browse `.zfs/snapshot` of the parent, you won't see data for the children, but if you go to `.zfs/snapshot` in the children, you will see the children's data. The only way to mess this up is if you didn't take your snapshot with `-r` (recursive); if you have the snapshot, then the data is in there. Another possible cause is that the dataset isn't being mounted.

     To restore, you need to run the restore on the specific child dataset. If you have subfolders within that are actually configured as datasets, then you need to restore each folder/dataset (or make sure the restore runs recursively). To clone the dataset recursively, with all its snapshots and properties, you can use:

     `zfs send -wR metapool/appdata@migration | zfs receive -Fdu metapool/appdata_new`

     But read my notes below so you understand the flags, as that receives the dataset unmounted. And name the destination differently for your testing, so you don't overwrite anything. Then, once you confirm the data is there, you can rename it and update your mount points if needed.

     Here are some of my notes on how I have done it in the past. Hope it helps! If you need more help you can IM me so we aren't clogging up the thread here.

     # Backups & Snapshots

     ## Snapshots

     ### Create New Snapshot

     `zfs snapshot workpool/nextcloud@new_empty`

     Recursive: `zfs snapshot -r workpool/nextcloud@new_empty`

     ## Transfer Dataset from One Location to Another

     ### Create Snapshot

     1. `zfs snapshot -r metapool/appdata@migration`

     ### Send to New Dataset (recursive, with dataset properties & snapshots)

     2. `zfs send -wR metapool/appdata@migration | zfs receive -Fdu metapool/appdata_new`
        - `-w` sends raw data, needed with encrypted datasets; also keeps recordsize & options.
        - `-R` is recursive and includes all snapshots/clones.
        - `-F` forces the overwrite of the target dataset - use with care!
        - `-d` uses the provided dataset name as the prefix for the names of all received datasets; essentially, the data is received into the named dataset, but not as a clone.
        - `-u` ensures the received datasets are not mounted, even if their mountpoint properties would normally cause them to be mounted automatically. Requires manual mounting!

     ### Confirm and Rename

     3. Confirm the data is present in the new location.
     4. Rename the old dataset to `appdata_old`.
     5. Confirm the mount points changed for `appdata_old`.
     6. Rename `appdata_new` to `appdata`.

     ### Mount Dataset (if left unmounted with the `-u` flag)

     7. `zfs mount metapool/appdata`

     Done.

     ## Syncoid

     ### Replicate Dataset from Dataset (not fully tested)

     `syncoid metapool/gitea/database workpool/gitea/database`
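     One more rough sketch to go with the advice above: since `zfs rollback` itself does not descend into child datasets, restoring a whole tree in place means rolling back each child individually. Something like this loop (reusing the metapool/appdata and @migration names from my notes; untested as written):

     ```bash
     # Roll back every dataset under the parent to its @migration snapshot.
     # Destructive: rollback discards any changes made after the snapshot.
     # Add -r to zfs rollback if snapshots newer than the target exist
     # and may be destroyed.
     zfs list -r -H -o name metapool/appdata | while read -r ds; do
       if zfs list -t snapshot -H -o name "${ds}@migration" >/dev/null 2>&1; then
         zfs rollback "${ds}@migration"
       fi
     done
     ```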
  9. @Iker I can't tell you how nice it has been to use the plugin since you changed the refresh methodology! Huge improvement for my use case! I do have a future feature request: the ability to refresh by pool, i.e. a refresh button on the pool bar next to the "hide dataset" and "create dataset" buttons, and/or the ability to select/deselect pools from the refresh in the config. How I use this: all my ZFS pools are SSDs, so I don't care about spin-up/down on those. But I do have some ZFS-formatted disks as snapshot backup targets in the Unraid array. I rarely browse those and don't need ZFS Master to refresh them very often. Being able to exclude just those pools (or having a button to refresh only the pool I'm working on) would make those ZFS array disks spin up even less. But even without that, it has been a huge improvement! My disks are only spun up for about an hour a day now, where before they spun almost all day!
  10. @JorgeB I guess I'd better test it again and monitor my ARC more closely; I didn't check my hit rate. The read test I ran last night was giving me 17,000-20,000 MB/s on my VM with a ZVOL and primarycache=none, so I figured that performance had to be coming from the ARC.
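      For reference, one quick way to check the overall ARC hit rate on Linux OpenZFS (the kstat file is standard; the awk one-liner is just my sketch). The `arcstat` tool that ships with OpenZFS gives a live view as well:

      ```bash
      # Overall ARC hit rate from the kernel stats (Linux OpenZFS)
      awk '/^hits/ {h=$3} /^misses/ {m=$3} END {printf "ARC hit rate: %.2f%%\n", 100*h/(h+m)}' \
        /proc/spl/kstat/zfs/arcstats
      ```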
  11. Updated to v0.9. Lots of upgrades to make it more durable and robust, mainly error and validation checks.
  12. I'm not able to disable the ARC for pools/datasets. For example, creating a dataset with

      `zfs create -o primarycache=none -o secondarycache=none -o atime=off -o compression=off pool/test`

      doesn't disable ARC caching in my testing. Am I overlooking something, or is this expected behavior?
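      In case anyone wants to reproduce this, here's how I'd sanity-check that the properties actually took and whether the ARC still grows (standard zfs commands, same pool/test dataset as above):

      ```bash
      # Confirm the caching properties were applied to the dataset
      zfs get primarycache,secondarycache pool/test

      # Compare these before/after re-running the read test; if primarycache=none
      # were honored, reads from this dataset shouldn't keep inflating the ARC
      grep -E '^(size|hits|misses) ' /proc/spl/kstat/zfs/arcstats
      ```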
  13. @itimpi Thanks for that, that makes sense! I already back up the USB to local storage and then to a server I have at work, offsite. But I don't have any cloud storage that isn't in my circle, and I avoid that. The benefit I see of this is that should something happen to the physical boot USB, I'd already have a USB with all my config files ready to go, only needing me to activate the license. And I already have a USB that I sync some critical files to, with the purpose that I can pull it in an emergency and take it with me. So adding a USB backup to that would only take a few more lines of code.
  14. So I had this thought. Can I just:

      1. Add another USB drive.
      2. Format it FAT32, mount/automount.
      3. Write a script that copies/rsyncs the boot USB to the backup USB.
      4. Profit.

      Anything I'm missing? Feels too simple, haha. Will probably try this out this weekend, something along the lines of the sketch below.
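      A minimal sketch of step 3, assuming the backup stick is mounted at /mnt/disks/usb_backup (hypothetical path; adjust to wherever yours actually mounts):

      ```bash
      #!/bin/bash
      # Mirror the Unraid boot USB (/boot) to a backup USB stick.
      SRC=/boot/
      DEST=/mnt/disks/usb_backup/   # hypothetical mount point

      # Bail out if the backup stick isn't mounted, so we never
      # write the copy onto the wrong filesystem by mistake
      mountpoint -q "$DEST" || { echo "backup USB not mounted" >&2; exit 1; }

      # FAT32 stores no ownership/permissions and has 2-second timestamp
      # granularity, hence --modify-window; --delete keeps an exact mirror
      rsync -rtv --delete --modify-window=2 "$SRC" "$DEST"
      ```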
  15. @Iker the caching is a welcome addition! Now it's operating exactly as I would have expected! Good releases, thanks!