• Posts

  • Joined

  • Last visited

Everything posted by subivoodoo

  1. I don't know how to make it more step-by-step than in my original post... background knowledge of ZFS and iSCSI is needed for this kind of "officially not supported" solution. Attach to VM? The iSCSI solution has nothing to do with VMs => iSCSI is used to add remote disks to any client system on the network... a laptop or anything else that supports an iSCSI initiator. The example of using a zvol as a disk in a VM on Unraid doesn't need iSCSI. But maybe we can figure out how to fix your main issue (too many snapshots)? User script: I've attached my personal user script for resetting the "game disks" that are used by 3 computers in my network (it deletes the iSCSI targets, deletes all the clones, creates new clones and recreates the iSCSI targets). => but note that this script is 100% tailored to my personal setup. Regards iSCSI-RenewAllKidsGames .sh
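The attached script itself is personal, but the general shape of such a "renew all clones" cycle can be sketched like this (the pool, snapshot and client names are made-up placeholders, not the attached script; RUN=echo keeps it a dry run that only prints what it would do):

```shell
#!/bin/bash
# Sketch of a "renew the game-disk clones" cycle: drop each iSCSI backstore,
# recreate a fresh clone from the master zvol's snapshot, then re-export it.
# POOL, the snapshot name and the client list are hypothetical placeholders.
renew_clones() {
  local RUN=echo                      # dry run: print commands instead of executing
  local POOL="YOURPOOLNAME"
  local SNAP="$POOL/gamelib@golden"   # snapshot of the fully installed master lib
  local c clone
  for c in kid1 kid2 kid3; do
    clone="$POOL/gamelib.$c"
    $RUN targetcli "/backstores/block delete $c"       # 1) remove iSCSI backstore
    $RUN zfs destroy "$clone"                          # 2) drop the stale clone
    $RUN zfs clone -p "$SNAP" "$clone"                 # 3) fresh clone from snapshot
    $RUN targetcli "/backstores/block create name=$c dev=/dev/zvol/$clone"  # 4) re-export
  done
}
renew_clones
```

On a real system you would set RUN to empty (and re-add the initiator mappings in the iSCSI plugin afterwards); the dry run only shows the command sequence.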
  2. @Iker Will it be possible to also show zvols in a future version of your plugin? Currently I only see the pool name when I click "SHOW DATASETS"... yes I know, zvols aren't datasets 😆
  3. I edited that... anyway thanks to all community developers!!!!
  4. So I ended up using the 'hybrid mode', as @SpaceInvaderOne calls it, to create my pool, datasets and zvols and import/export them to Unraid: Another help for ZFS+iSCSI is this post here: But creating a zvol (which is more or less a 'disk within a file stored on your ZFS pool') is as easy as:
     zfs create -s -V 100G -o volblocksize=4096 -o compression=lz4 YOURPOOLNAME/testzvol
     -s = sparse, thin provisioning, so only the space actually used within the zvol is allocated
     -V 100G = 100 GB size
     All created zvols are listed under /dev/zvol/YOURPOOLNAME/*** (the example above is therefore /dev/zvol/YOURPOOLNAME/testzvol) and still show up there after a reboot. The zvols are also shown by zfs list, and it's possible to create zvols within other datasets if you need that. Such a zvol can be used as a VM disk by just adding a manual entry like this: or, in my case, used together with the iSCSI target plugin by @SimonF and @ich777: You just need to create the backstorage manually (I think, at the moment 😉) with the following commands:
     targetcli
     /backstores/block create name=testzvol dev=/dev/zvol/YOURPOOLNAME/testzvol
     cd /backstores/block/testzvol/
     set attribute block_size=4096
     set attribute emulate_tpu=1
     set attribute is_nonrot=1
     cd /
     exit
     The rest can be configured within the iSCSI plugin; you can just pick this manually created backstorage there: If you don't need it any longer, remove the initiator mapping and delete the backstorage entry (note that the zvol still exists):
     targetcli
     cd /backstores/block/
     delete testzvol
     cd /
     exit
     And last but not least, how to clone an existing zvol and/or delete it:
     zfs snapshot YOURPOOLNAME/testzvol@yoursnapshotname
     zfs clone -p YOURPOOLNAME/testzvol@yoursnapshotname YOURPOOLNAME/testzvol.myclone
     zfs destroy YOURPOOLNAME/testzvol.myclone
     zfs destroy YOURPOOLNAME/testzvol@yoursnapshotname
     zfs destroy YOURPOOLNAME/testzvol
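As a footnote to the "add a manual entry like this" part (the original post's screenshot isn't reproduced here): a libvirt disk entry for a zvol used as a raw block device might look roughly like this sketch, where the target dev='vdb' and the cache setting are assumptions that have to fit your own VM config:

```xml
<!-- Hypothetical sketch: attach the zvol from the example above as a raw
     block-device disk in the VM's XML. dev='vdb' is an assumption. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/zvol/YOURPOOLNAME/testzvol'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```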
  5. If there's any interest in my results of "have fun with ZFS/iSCSI on Unraid" for a shared game library... I've finished my tests and I will NOT use dedup. The real-world performance of copying hundreds of GB to my dedup-enabled test zvols via iSCSI (10G network) is horrible and took hours... compared to dedup off, which took just 12 minutes for 520 GB. The synthetic benchmarks are also better without dedup (see attached screenshots). My second idea of "set up all games once and clone it" works great... I now need even less storage in my pool than with dedup enabled, because there is just one fully installed game library present; all the others are clones holding just a few MB of differing files. The performance is better and I don't need much more RAM for the DDT. For the update process I have a little script that removes the iSCSI mapping/backstorage, creates a new snapshot/clone and re-creates the iSCSI mapping/backstorage... so a game install or update is as simple as:
     - do it on the main gaming rig
     - run a user script
     - bam, a few seconds later all my kids get the new games 😁
     => My next project based on this setup is testing GPU-P and cloning another 2-3 game libraries... which is now done in seconds and doesn't need any additional storage!!! If someone needs the commands to create iSCSI backstorages for zvols or something... I can write a little tutorial. Benchmarks with dedup ON vs. OFF over iSCSI (better write performance, maxing out my 10G network; pool with 2 cheap SATA consumer SSDs striped, 24 GB for ZFS ARC):
  6. And now my speed comparison: ZFS/dedup/iSCSI on TrueNAS (as a VM on Unraid) vs. ZFS/dedup/iSCSI on native Unraid... I mean with the awesome community plugins on Unraid 😇 !!! Thanks a lot at this point for all your work. As I hoped, the performance is (mostly) better. It could be because no virtio layer is needed (I don't know the performance of the FreeBSD/TrueNAS virtio driver). The "real world" 10 GB movie file copy doesn't drop as much and runs noticeably faster on "native Unraid". Conclusion: Will I use iSCSI => yes, the performance for games over a 10G network is great; normally it's just a few seconds slower than a local NVMe, which isn't noticeable if you have to watch intros anyway... Will I use ZFS => yes! Will I use dedup => it depends... the dedup ratio would have to be better for that, and the drawback of slower performance as more data accumulates is a big point. I have another idea to test... Next idea: Prepare one game lib as a zvol with really everything installed, then take a snapshot/clone of it for every client. The clones should only use disk space for the changed data, and no additional RAM is needed for the dedup table. With such a setup I only need to update the initial/primary "game lib zvol", and a reset/redo of the clones via a user script should be possible.
  7. Yes and that's one reason (I think) why my "real world" write performance to my test pool (after the ARC is full) degrades the bigger the DDT gets. Do you know if the DDT will be moved automatically if I add a special dedup vdev now to my existing pool? And back to the pool if I remove this dedup vdev later?
  8. No, just speaking generally from educated guesses. I could probably test it within the VM... but then it would be a disk image instead of a real SSD. And before I buy another SATA SSD I'll go and grab another 64 GB of consumer RAM 😉 Next I'll figure out if the performance "native" on Unraid (I mean with the great community plugins) is the same as in the TrueNAS VM... or, I hope, even higher because no virtio is needed in between. But I don't have time right now... probably next week.
  9. I changed from 3 HDDs in RAIDZ1 to 2 SSDs striped, purely for higher "real world" read speed. The actual RAM use of the DDT isn't shown anywhere... you have to calculate it with the following command (from that forum entry / I used it to calculate the 16-17 GB): zpool status -D poolname => DDT entry count * bytes in core (which, by the way, rises the higher the dedup ratio goes). I have no SSD left to test with... and as jortan writes, you need the complete DDT in RAM anyway for performance.
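The arithmetic behind that estimate can be sketched with the example figures from this thread (58 million DDT entries at roughly 300 bytes in core each; on a real pool the two inputs come from `zpool status -D poolname`):

```shell
# Estimate the RAM the dedup table needs: DDT entry count * bytes in core.
# The two inputs below are the example figures quoted in this thread, not
# live `zpool status -D` output.
ddt_ram_gib() {
  local entries=$1 bytes_per_entry=$2
  # integer GiB is precise enough for a ballpark sizing
  echo $(( entries * bytes_per_entry / 1024 / 1024 / 1024 ))
}
ddt_ram_gib 58000000 300   # prints 16 (GiB), matching the 16-17 GB above
```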
  10. Here are some numbers while I'm still testing my ZFS-dedup+iSCSI game library idea and the pool is not yet exported to Unraid. Specs:
     - TrueNAS 12.U7 as a VM on Unraid with 32 GB RAM, 2x 2TB SATA SSDs passed through (I decided to use some old cheap consumer SSDs)
     - these 2 disks striped together in a pool, sync off, dedup on (data loss doesn't matter, games can be downloaded again and again...)
     - 4 game libraries in total (2x 800G + 2x 200G) fully loaded, each as an individual zvol linked via iSCSI to the clients, tested and all working
     - 951G of allocated disk space for this 2 TB of data in total (at the moment)
     - dedup ratio of 2.16, compression off (I don't get more than a 1.01 compression ratio on the game libs)
     - 58 million DDT entries at ~300 B each => around 16-17 GB of dedup table needed in RAM
     - 10G network
     Subjective impressions: Read speed is astonishing for disks over the network; game loading times don't differ a lot compared with local NVMe SSDs! Write speeds are good as long as the data fits into the ARC cache... afterwards it drops to 50-100 MB/s until the cache is flushed and ready for high speeds again 😁. Some benchmark screenshots and the "10 GB file copy" test are attached (Windows really can't calculate a correct copy time...). And here are some game loading time differences in seconds (local gen3 NVMe vs. iSCSI zvol):
     - MSFS2020 until menu: 185 vs. 195 (I personally don't understand why this game takes so long to load even on NVMe!!!)
     - Battlefield V until menu: 57 vs. 63
     - Battlefield level loading: 26 vs. 28
     - Doom Eternal until menu: 46 vs. 56 (I hate watching intros and warning text every time 😄)
     - Doom Eternal level loading: 6 vs. 8
     - Cyberpunk 2077 level loading: 6 vs. 14 (here I can see the biggest difference)
     The next step is to export this game library ZFS pool to Unraid, configure iSCSI on Unraid and test again.
  11. I'll report when it's done... at the moment I don't see much RAM usage. But what I can see right now is way more CPU usage for dedup, sync and/or iSCSI running directly on Unraid compared with the TrueNAS VM as the "same backend" (on the same machine). Having just one copy of all the games in my house seems to be possible... probably GPU-P can also be a thing for me now, with only the OS stored locally... Linus has just done a video on that. Just sharing my main gaming rig with VMs 3-4 times, with the full Steam lib "loaded" in every VM, without needing more storage for every new copy! Not to mention not having to buy new GPUs at the moment for all of my kids... 😑
  12. Update on my testing... ZFS with dedup/compression/zvols + iSCSI also works on Unraid. It was my fault, I forgot to disable BitLocker on the test laptop. Now that I have disabled BitLocker, the dedup ratio begins to rise when I copy the same file multiple times onto the drive... see here (still the pool imported from TrueNAS):
     # zpool list
     NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
     testpool   928G  11.6G   916G        -         -     0%     1%  1.08x  ONLINE  -
  13. My intention was not to start the "to dedup or not to dedup" discussion ☺️ But your discussion is interesting... I have a homelab and my intention is just to figure out how iSCSI and ZFS work. So I started to test with a TrueNAS VM on my existing Unraid server... where I got great results (almost max 10G line speed from the cache, dedup ratio up to 3 for 4 more or less equal Steam libs). Since I know the drawbacks of dedup, I'll decide later... bigger cheap SSDs in my kids' PCs, or figure out whether my current 80 GB of RAM on Unraid is enough. But other than that... the dedup ratio stays at 1.00 anyway if I run this setup directly on Unraid (new sparse zvols). So it doesn't work anyway... but I need to understand why, just for myself 😁 By the way, it's cool that iSCSI + ZFS is possible on Unraid at all, with 2 great plugins from the community + one manual command line to create a block backstorage in iSCSI based on a zvol...
  14. Hmmm, this means dedup should also work on Unraid. I think you created your pool within Unraid and you don't use zvols, right?
  15. That's clear... the max numbers come from the ARC cache. A real sequential read of a big movie file for the first time tops out at around 350-400 MB/s in Windows Explorer... which is the combined max of the 3 mechanical disks in my ZFS pool (I assume), and small files will be slower. Does anyone else know if dedup is possible?
  16. Performance isn't that bad, as I already have a 10G network to all clients... measured up to 1200 MB/s read and 900 MB/s write over iSCSI with CrystalDiskMark (to the ZFS cache) on the TrueNAS test VM running on Unraid. This is with dedup+compression ON and sync OFF on the zvol. Downloading isn't my issue either, with 1G fiber... My approach is to save some space on the clients... and play with IT stuff 😁 But it seems that the dedup feature isn't working when I run this setup on Unraid... while it works with TrueNAS.
  17. Hi, I played a little bit with this plugin + the iSCSI plugin for a "shared" Steam library, but based on native Unraid. So far I've managed to create zvols and propagate them via iSCSI to my clients... it basically works. But I can't see any deduplication benefit from having equal files on these zvols... which works within TrueNAS. I use TrueNAS as a VM to create/manipulate and export/import my test ZFS pool, based on the idea from @SpaceInvaderOne. Is it possible that this ZFS implementation/plugin lacks the deduplication feature? Thanks for any info
  18. Any performance gains with ACLs disabled?
  19. I also know that this plugin is alpha/beta... and I used the user-script version before I installed it... and I use it now. As written before, it's important to report findings, otherwise this plugin will always remain alpha/beta.
  20. The VM Backup plugin is great and I really want to use it, but it causes several issues (see also 2 posts above), so I had to uninstall it. The uninstall fixes the following 3 issues for me:
     1. the "error: failed to connect to the hypervisor" in the console during startup (many others reported this error in another thread too, but it seems to work anyway)
     2. Array stop not possible, stuck at "sync filesystem" => therefore no reboot/shutdown possible
     3. Install of the habridge docker image (and probably others?) not possible, with the following error: Error: could not get decompression stream: fork/exec /usr/bin/unpigz: no such file or directory
  21. Now I've also uninstalled the VM Backup plugin. The uninstall fixes 3 issues for me:
     1. the "error: failed to connect to the hypervisor"
     2. Array stop not possible, stuck at "sync filesystem"
     3. Install of the habridge docker image not possible, with the following error: Error: could not get decompression stream: fork/exec /usr/bin/unpigz: no such file or directory
  22. I also changed my VM's NIC from 'virtio' to <model type='virtio-net'/>, and since then there's no more "unexpected GSO type" in my logs. I have 3 VMs (2 Windows + 1 CentOS) running and 3 dockers on br0 with the 5.5 kernel.
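For reference, the change amounts to one line in the VM's libvirt XML; the interface block ends up looking roughly like this (MAC address and bridge name are placeholders for your own values):

```xml
<!-- NIC model switched from 'virtio' to 'virtio-net' to stop the
     "unexpected GSO type" messages; mac/bridge are placeholders. -->
<interface type='bridge'>
  <mac address='52:54:00:xx:xx:xx'/>
  <source bridge='br0'/>
  <model type='virtio-net'/>
</interface>
```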
  23. You have to compile your own kernel, or just use the new one from Leoyzen for Unraid 6.8.3. Have a look at this topic:
  24. 6.8.3-5.5.8 works great for me, thanks!!! I need the Navi reset patch + the AMD onboard audio/USB controller FLR patch on a B450 board... and everything I need works like a charm!!!