NytoxRex

Members • Posts: 15

  1. Okay, waiting around for a new version of Unraid got me a little impatient... Via Tools -> Update OS, I reverted back to version 6.12.2. I am happy to report that the encrypted datasets can now write data much more efficiently. Here is a screenshot of the CPU usage with the same workload: This is more in line with my expectations. Thanks for the help!
  2. Thanks! I'll just wait for the new release. I'll reply to this thread if the issue is resolved.
  3. Running the suggested commands:
     cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
     cat /sys/module/zcommon/parameters/zfs_fletcher_4_impl
     gives:
     cycle [fastest] original scalar sse2 ssse3
     [fastest] scalar superscalar superscalar4 sse2 ssse3
     Though I don't really know what this means (there is a short note on reading this output after this post list).
  4. @JorgeB I'm running the latest stable version at the moment, which is 6.12.4. I upgraded from 6.12.2 two weeks ago. If this bug is fixed in a later version, I assume the performance is automatically fixed as well? I.e. I do not have to reshuffle the data again?
  5. Hi all, I have recently reshuffled my Unraid server with ZFS. I went from a non-encrypted zpool to a zpool with almost all datasets encrypted. When copying my data (large photos of roughly 30 MB each) back onto the new encrypted zpool from another (single) hard disk, I noticed quite a significant and fluctuating overhead on my system (see the image with overall load = 85%). As a debugging step I ran the 'top' command to see what is eating CPU resources, which pointed to the 'z_wr_iss' process. I read that this process handles the writes to the zpool and also handles the encryption. My CPU is an AMD Ryzen 5 5500, and checking for the encryption hardware gives a positive result:
     root@Deimos:~# grep -m1 -o aes /proc/cpuinfo
     aes
     aes
     root@Deimos:~# grep module /proc/crypto | grep -v kernel | sort | uniq
     module : aesni_intel
     module : crc32_pclmul
     module : crc32c_intel
     module : crct10dif_pclmul
     module : ghash_clmulni_intel
     module : sha512_ssse3
     Then I copied the same data to an unencrypted dataset to check whether this CPU overhead was due to the encryption; the result is shown in the image with overall load = 30%. So it looks like the encryption is adding a significant overhead! The system at idle (copy tasks stopped) is shown in the image with overall load = 2%. Finding out what encryption is used for my datasets:
     root@Deimos:/boot# zfs get encryption phobos/nino
     NAME         PROPERTY    VALUE        SOURCE
     phobos/nino  encryption  aes-256-gcm  -
     I read that ZFS encryption should only cost about 1-10% in speed compared to a non-encrypted dataset, and that this encryption could and/or should be handled by the aesni_intel hardware crypto acceleration. I copy with rsync. What I'd like to know: is this large overhead normal behaviour for encryption, or is something misconfigured (see also the check sketched after this post list)? Thank you in advance!
  6. Oh I see now, indeed the file needs to be changed, otherwise it will not show up. Thanks all for the help!
  7. Great! I found the solution and it's working almost perfectly now! I just had to add the line to the "/etc/samba/smb-shares.conf" file. Easy, I would say. The only problem left is that only folders show the Previous Versions tab, not files. Visible in the two pictures are a folder (with previous versions) and a file in that folder (missing previous versions). If anyone knows how to fix this and make the system perfect, let me know!
  8. So here is (likely) the problem: the snapshots generated for deimoshdd/nino/documents are stored in deimoshdd/nino/documents/.zfs/snapshots, while the config of the share nino (mnt/user/nino/nino = deimoshdd/nino) makes it search in mnt/user/nino/nino/.zfs/snapshots, which is equal to deimoshdd/nino/.zfs/snapshots. Of course there is nothing to find there, as only the subfolders (the child datasets) are flagged to use the snapshot feature. Now I *could* make a separate share for each of the folders, but I would rather not do that; I just want one folder (share) per person. That leaves two options: 1) the Samba config is set up so that it looks in all subfolders for the .zfs/snapshots folders, or 2) the snapshots are stored in the parent's *parent*/.zfs/snapshots and not in *parent*/documents/.zfs/snapshots. Is there a way to do either of these (option 1 is sketched after this post list)? I think this is the problem. Thanks!
  9. Added the -r flag, as found in the link in the second entry of this thread. Still no luck getting the subfolders/files to show previous versions. I would like some help here, thanks!
  10. Changed the path to "path = /mnt/user/nino/nino" to include the symlink, and it works now! Sort of... The first image shows the directories working, which is great, but when diving into the four directories, none of the files have previous versions... I wish all individual files were included as well, so that a single file can be restored. How is this done? It seems like it should be a simple -r (recursive) flag or something.
  11. I think the -u flag in this line of the zfs-auto-snapshot.sh script changes the time to UTC, so if I'm correct, either this script has to be changed or local time has to be disabled in the "/etc/samba/smb-shares.conf" settings. (The latter has been done; the config is now:
  12. Quick remark: time is an issue. I just did a run of the script again at 11:52 and the latest snapshot gives the following. The time in the Unraid GUI is correct, so the time is being set incorrectly when the snapshots are created. Though, I don't exactly know where the time difference is coming from, and whether it affects the workings of previous versions in Windows.
  13. Okay I've checked the "/etc/samba/smb-shares.conf" and the settings in there were incorrect. I have changed these settings, as can be seen in the image. I've removed the %S as seconds, as the 01 in the names is the 'label' attribute set to 01 in my script. I've also added the write list and valid users, all of this without any luck. I have also added the structure of the folders as an image, might be helpful.
  14. Hello all, I've set up a new Unraid server with a shared folder on ZFS, but I cannot seem to get Shadow Copy working. I followed this guide (https://forum.level1techs.com/t/zfs-on-unraid-lets-do-it-bonus-shadowcopy-setup-guide-project/148764) for my setup, with my own alterations of course! I will attach some images of the file structure and config. The shadow image shows the Previous Versions tab in W11 for a file in my 'nino/documents/' folder. The smbconfig image shows the config in the extra options of the SMB settings. The snaps image shows the files in 'nino/documents/.zfs/snapshots/'. [Attached images: shadow.jpg, smbconfig.png, snaps.png]
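
A note on the output quoted in post 3: each of those files lists the implementations the ZFS modules can use, and the one shown in [brackets] is the one currently selected, so [fastest] means the module benchmarked the options at load time and picked the quickest. As a minimal sketch (assuming these parameters are writable at runtime on this OpenZFS build, and using ssse3/sse2 purely as example values taken from the listed options), the selection could be pinned manually like this:

   # The value in [brackets] is the implementation currently in use;
   # "fastest" lets ZFS benchmark the options and choose automatically.
   cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
   cat /sys/module/zcommon/parameters/zfs_fletcher_4_impl

   # Pin specific implementations instead of "fastest" (example values only;
   # pick names from the lists printed by the reads above):
   echo ssse3 > /sys/module/zcommon/parameters/zfs_fletcher_4_impl
   echo sse2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl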
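
Related to the encryption overhead in post 5: besides /proc/crypto, OpenZFS's own crypto provider (the icp module) reports which AES and GCM implementations it has selected for aes-256-gcm. A minimal check, assuming the icp module exposes these parameters on this Unraid/OpenZFS build; the value in [brackets] is the one in use, and if only a generic implementation is selected, the AES-NI/PCLMULQDQ hardware paths are not being used by ZFS:

   # Which AES and GCM implementations the ZFS crypto provider can use,
   # and which one is currently selected (shown in [brackets]):
   cat /sys/module/icp/parameters/icp_aes_impl
   cat /sys/module/icp/parameters/icp_gcm_impl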
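
For option 1 in post 8, Samba's vfs_shadow_copy2 module has a shadow:snapdirseverywhere option that makes it look for the snapshot directory in subdirectories (such as child datasets) as well, not only at the share root. The block below is only a hedged sketch, not the poster's actual config: the share name, path and shadow:format string are assumptions and must match the real share and the real snapshot names; note that ZFS exposes snapshots under .zfs/snapshot (singular), and shadow:localtime ties in with the UTC naming issue from posts 11 and 12 (zfs-auto-snapshot's -u flag names snapshots in UTC):

   [nino]
      path = /mnt/user/nino/nino
      vfs objects = shadow_copy2
      # look for .zfs/snapshot in every subdirectory (child dataset), not only at the share root
      shadow:snapdir = .zfs/snapshot
      shadow:snapdirseverywhere = yes
      shadow:sort = desc
      # must match the actual snapshot names; "01" is the assumed zfs-auto-snapshot label
      shadow:format = zfs-auto-snap_01-%Y-%m-%d-%H%M
      # snapshots created with zfs-auto-snapshot -u are named in UTC, so leave localtime off
      shadow:localtime = no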