subivoodoo's Achievements





  1. The stick works on Windows, so it can't be broken. VM passthrough issues are possible, yes, which is why I also tested it bare metal, with the same behavior. The following lines from your link are missing in "my" init:

         kernel: i2c i2c-9: a8293: Allegro A8293 SEC attached
         kernel: DVB: registering new adapter (em28178 #0)
         kernel: usb 2-1: DVB: registering adapter 0 frontend 0 (Montage M88DS3103)...

     Sadly, I found an Amazon customer review reporting the same issues with the same hardware revision (BAH9 in my case) on an RPi, and I also found a German support page stating "Linux support only for older revisions". So it was a short test... Thanks anyway.
  2. Hi, for testing purposes I plugged a PCTV DVB-S2 461e USB stick into my test Unraid (running as a VM), installed the DVB Driver plugin and rebooted (tested with both LibreELEC and TBS). But nothing is present in /dev/dvb, and therefore nothing is shown in the "DVB Info". It seems that at least the driver/firmware is loaded... dmesg shows:

         [ 12.199609] em28xx 1-3:1.0: EEPROM ID = 26 00 01 00, EEPROM hash = 0x72d64ca2
         [ 12.200519] em28xx 1-3:1.0: EEPROM info:
         [ 12.200716] em28xx 1-3:1.0: microcode start address = 0x0004, boot configuration = 0x01
         [ 12.208369] em28xx 1-3:1.0: AC97 audio (5 sample rates)
         [ 12.208582] em28xx 1-3:1.0: 500mA max power
         [ 12.208790] em28xx 1-3:1.0: Table at offset 0x27, strings=0x148c, 0x1874, 0x0a6a
         [ 12.262860] em28xx 1-3:1.0: Identified as PCTV DVB-S2 Stick (461e v2) (card=104)
         [ 12.263980] em28xx 1-3:1.0: dvb set to bulk mode.
         [ 12.264366] usbcore: registered new interface driver em28xx
         [ 12.270478] em28xx 1-3:1.0: Binding DVB extension
         [ 12.275056] em28xx: Registered (Em28xx dvb Extension) extension
         [ 12.279261] em28xx 1-3:1.0: Registering input extension
         [ 12.301840] rc_core: IR keymap rc-pinnacle-pctv-hd not found
         [ 12.302049] Registered IR keymap rc-empty
         [ 12.302304] rc rc0: PCTV DVB-S2 Stick (461e v2) as /devices/pci0000:00/0000:00:07.0/usb1/1-3/1-3:1.0/rc/rc0
         [ 12.303331] input: PCTV DVB-S2 Stick (461e v2) as /devices/pci0000:00/0000:00:07.0/usb1/1-3/1-3:1.0/rc/rc0/input6
         [ 12.304431] em28xx 1-3:1.0: Input extension successfully initialized
         [ 12.304713] em28xx: Registered (Em28xx Input Extension) extension

     Is it possible that this stick doesn't work at all? Is the v2 of this card not supported, or am I doing something wrong? Thanks for any info. Regards
  3. I don't know how to give more step-by-step info than in my original post... background knowledge of ZFS and iSCSI is needed for this kind of "officially not supported" solution. Attach to a VM? The iSCSI solution has nothing to do with VMs: iSCSI is used to add remote disks to any client system on a network, a laptop or whatever else supports iSCSI as a client. The example of using a zvol as a disk in a VM on Unraid doesn't need iSCSI at all. But perhaps we can figure out how to fix your main issue (too many snapshots)? User script: I've attached my personal user script for resetting the "game disks" used by 3 computers in my network (it deletes the iSCSI targets, deletes all the clones, creates new clones and recreates the iSCSI targets). Note that this script is 100% specific to my personal setup. Regards
     iSCSI-RenewAllKidsGames.sh
  4. @Iker Will it be possible to also show zvols in a future version of your plugin? Currently I only see the pool name when I click "SHOW DATASETS"... yes, I know, zvols aren't datasets 😆
  5. I edited that... anyway, thanks to all the community developers!
  6. So I ended up using the "hybrid mode" (as @SpaceInvaderOne calls it) to create my pool, datasets and zvols and import/export them to Unraid. Another help for ZFS + iSCSI is this post here:

     Creating a zvol (which is more or less a "disk within a file" stored on your ZFS pool) is as easy as:

         zfs create -s -V 100G -o volblocksize=4096 -o compression=lz4 YOURPOOLNAME/testzvol

     -s = sparse (thin provisioning, so only the space used within the zvol is allocated)
     -V 100G = 100 GB size

     All created zvols are listed under /dev/zvol/YOURPOOLNAME/*** (the example above is therefore /dev/zvol/YOURPOOLNAME/testzvol) and also show up there after a reboot. The zvols are also shown by "zfs list", and it's possible to create zvols within other datasets if you need to. Such a zvol can be used as a VM disk by just adding a manual entry like this: or, in my case, used together with the iSCSI Target plugin by @SimonF and @ich777. You just need to create the backstore manually (I think, at the moment 😉) with the following commands:

         targetcli
         /backstores/block create name=testzvol dev=/dev/zvol/YOURPOOLNAME/testzvol
         cd /backstores/block/testzvol/
         set attribute block_size=4096
         set attribute emulate_tpu=1
         set attribute is_nonrot=1
         cd /
         exit

     The rest can be configured within the iSCSI plugin; you can just pick this manually created backstore there. If you don't need it any longer, remove the initiator mapping and delete the backstore entry (note that the zvol itself still exists):

         targetcli
         cd /backstores/block/
         delete testzvol
         cd /
         exit

     And last but not least, how to clone an existing zvol and/or delete it:

         zfs snapshot YOURPOOLNAME/testzvol@yoursnapshotname
         zfs clone -p YOURPOOLNAME/testzvol@yoursnapshotname YOURPOOLNAME/testzvol.myclone
         zfs destroy YOURPOOLNAME/testzvol.myclone
         zfs destroy YOURPOOLNAME/testzvol@yoursnapshotname
         zfs destroy YOURPOOLNAME/testzvol
  7. In case anyone is interested in my results of "having fun with ZFS/iSCSI on Unraid" for a shared game library: I've finished my tests and I will NOT use dedup. The real-world performance of copying hundreds of GB to my dedup-enabled test zvols via iSCSI (10G network) is horrible and took hours, compared to just 12 minutes for 520 GB with dedup off. The synthetic benchmarks are also better without dedup (see attached screenshots).

     My second idea, "set up all games once and clone it", works great... I now need even less storage in my pool than with dedup enabled, because there is just one fully installed game library present; all the others are clones holding just a few MB of differing files. The performance is better and I don't need the extra RAM for the DDT.

     For the update process I have a little script that removes the iSCSI mapping/backstore, creates a new snapshot/clone and re-creates the iSCSI mapping/backstore... so a game install or update is as simple as:
     - do it on the main gaming rig
     - run a user script
     - bam, a few seconds later all my kids have the new games 😁

     My next project based on this setup is testing GPU-P and cloning another 2-3 game libraries... which is now done in seconds and doesn't need any additional storage!

     If someone needs the commands to create iSCSI backstores for zvols or anything else, I can write a little tutorial. Benchmarks with dedup ON vs. OFF over iSCSI (better write performance, maxing out my 10G network; pool with 2 cheap SATA consumer SSDs striped, 24 GB for the ZFS ARC):
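The renew flow described above can be sketched roughly like this. This is a hedged sketch, not my actual attached script: the pool, zvol, and backstore names (`gamelib`, `kid1games`, etc.) are hypothetical placeholders, and a `DRY_RUN` wrapper is added so the steps can be previewed without a real pool.

```shell
#!/bin/bash
# Hypothetical sketch of the "reset the game disks" flow (placeholder names).
run() {                       # dry-run wrapper: echo instead of execute
  if [ -n "$DRY_RUN" ]; then echo "$*"; else "$@"; fi
}

renew_clone() {
  local master=$1 clone=$2 backstore=$3
  local snap="$master@renew"

  # 1. drop the iSCSI backstore so the clone's block device is released
  run targetcli "/backstores/block delete $backstore"
  # 2. throw away the old clone and its snapshot
  run zfs destroy "$clone"
  run zfs destroy "$snap"
  # 3. re-snapshot the updated master library and clone it again
  run zfs snapshot "$snap"
  run zfs clone -p "$snap" "$clone"
  # 4. re-create the backstore pointing at the fresh clone
  run targetcli "/backstores/block create name=$backstore dev=/dev/zvol/$clone"
}

# Preview without touching anything:
#   DRY_RUN=1 renew_clone YOURPOOLNAME/gamelib YOURPOOLNAME/gamelib.kid1 kid1games
```

Repeat the `renew_clone` call once per client clone; only the master library ever needs a game install or update.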
  8. And now my speed comparison: ZFS/dedup/iSCSI on TrueNAS (as a VM on Unraid) vs. ZFS/dedup/iSCSI on native Unraid, i.e. with the awesome community plugins 😇. Thanks a lot at this point for all your work. As I hoped, the performance is (mostly) better. It could be because no virtio layer is needed (I don't know the performance of the FreeBSD/TrueNAS virtio driver). The "real world" 10 GB movie file copy doesn't drop as much and runs noticeably faster on native Unraid.

     Conclusion:
     - Will I use iSCSI? Yes, the performance for games over a 10G network is great; normally just a few seconds slower than a local NVMe, which is not noticeable if you have to watch intros anyway...
     - Will I use ZFS? Yes!
     - Will I use dedup? It depends... the dedup ratio would need to be better for that, and the drawback of performance getting slower as data grows is a big point. I have another idea to test...

     Next idea: prepare one game lib as a zvol with really everything installed, then take a snapshot/clone of it for every client. The clones should only use disk space for the changed data, and no additional RAM is needed for a dedup table. With such a setup I only need to update the initial/primary "game lib zvol", and a reset/redo of the clones via a user script should be possible.
  9. Yes, and that's one reason (I think) why my "real world" write performance to my test pool (after the ARC is full) degrades the bigger the DDT gets. Do you know if the DDT will be moved automatically if I add a special dedup vdev to my existing pool now? And moved back to the pool if I remove this dedup vdev later?
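For reference, the commands in question look like this. The device name is a placeholder, and the behavior notes in the comments are my understanding of OpenZFS, not something tested here: as far as I know, only newly written DDT blocks land on a dedup vdev, so the existing table is not migrated automatically.

```shell
# Add a dedicated dedup vdev to an existing pool (placeholder device).
# Existing DDT blocks stay on the regular vdevs until they are rewritten;
# only new dedup-table allocations go to the new vdev.
zpool add testpool dedup /dev/sdX

# Removing it again evacuates its contents back onto the remaining vdevs
# (top-level device removal; not available for raidz members).
zpool remove testpool /dev/sdX

zpool status testpool   # shows the removal/evacuation progress
```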
  10. No, just speaking generally from educated guesses. I could probably test it within the VM... but then it would be a disk image instead of a real SSD. And before I buy another SATA SSD, I will go and grab another 64 GB of consumer RAM 😉. Next I'll figure out whether the performance "native" on Unraid (I mean with the great community plugins) is the same as in the TrueNAS VM... or, I hope, even higher because no virtio is needed in between. But I don't have time right now... probably next week.
  11. I changed from 3 HDDs in RAIDZ1 to 2 SSDs striped, purely for higher "real world" read speed. The actual DDT RAM usage isn't shown anywhere... you have to calculate it from the output of the following command (from that forum entry; I used it to calculate the 16-17 GB):

          zpool status -D poolname

      => DDT entry count * bytes in core (which, by the way, rises the higher the dedup ratio goes)

      I have no SSD left to test... and as jortan writes, you need the complete DDT in RAM anyway for performance.
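The calculation can be wrapped in a small helper. This is a sketch that assumes the summary line `zpool status -D` prints, in the form `dedup: DDT entries N, size X on disk, YB in core`; the pool name is a placeholder.

```shell
# Estimate DDT RAM usage from `zpool status -D` output.
# Expected summary line, e.g.:
#   dedup: DDT entries 58000000, size 1.10K on disk, 300B in core
ddt_ram_bytes() {
  awk '/DDT entries/ {
    entries = $4; sub(/,/, "", entries)        # strip trailing comma
    incore  = $(NF-2); sub(/B$/, "", incore)   # in-core bytes per entry
    printf "%.0f\n", entries * incore
  }'
}

# Usage (placeholder pool name):
#   zpool status -D poolname | ddt_ram_bytes
```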
  12. Here are some numbers while I'm still testing my ZFS-dedup + iSCSI game library idea; the pool is not yet exported to Unraid.

      Specs:
      - TrueNAS 12.0-U7 as a VM on Unraid with 32 GB RAM, 2x 2 TB SATA SSDs passed through (I decided to use some old cheap consumer SSDs)
      - these 2 disks striped together in one pool, sync off, dedup on (data loss doesn't matter, games can be downloaded again and again...)
      - 4 game libraries in total (2x 800 GB + 2x 200 GB) fully loaded, each as an individual zvol linked via iSCSI to the clients, tested and all working
      - 951 GB of allocated disk space for this 2 TB of data in total (at the moment)
      - dedup ratio of 2.16, compression off (I don't get more than a 1.01 compression ratio on the game libs)
      - 58 million DDT entries at ~300 B each => around 16-17 GB of dedup table needed in RAM
      - 10G network

      Subjective impressions: read speed is astonishing for disks over the network; game loading times don't differ much from local NVMe SSDs! Write speeds are good as long as the data fits into the ARC cache... afterwards it drops to 50-100 MB/s until the cache is flushed and ready for high speeds again 😅. Some benchmark screenshots and the "10 GB file copy" test are attached (Windows really can't calculate a correct copy time...).

      And here are some game loading time differences in seconds (local Gen3 NVMe vs. iSCSI zvol):
      - MSFS2020 until menu: 185 vs. 195 (I personally don't understand why this game takes so long to load even on NVMe!)
      - Battlefield V until menu: 57 vs. 63
      - Battlefield level loading: 26 vs. 28
      - Doom Eternal until menu: 46 vs. 56 (I hate watching intros and warning text every time 😄)
      - Doom Eternal level loading: 6 vs. 8
      - Cyberpunk 2077 level loading: 6 vs. 14 (here I see the biggest difference)

      The next step is to export this game library ZFS pool to Unraid, configure iSCSI on Unraid and test again.
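The "58 million entries at ~300 B" estimate above works out as simple shell arithmetic (the entry count and per-entry size come from the post; everything else is just unit conversion):

```shell
# Back-of-the-envelope DDT RAM estimate: entries * in-core bytes per entry
entries=58000000        # DDT entry count reported by `zpool status -D`
bytes_per_entry=300     # approximate in-core size per entry
total=$((entries * bytes_per_entry))
echo "$((total / 1024 / 1024 / 1024)) GiB"   # prints: 16 GiB
```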
  13. I'll report when it's done... at the moment I don't see much RAM usage. But what I can see right now is much higher CPU usage for dedup, sync and/or iSCSI running directly on Unraid compared with the TrueNAS VM as the "same backend" (on the same machine). Having just one copy of all the games in my house seems to be possible... probably GPU-P can also be a thing for me now, with only the OS stored locally... Linus has just done a video on that. Just sharing my main gaming rig 3-4 times as VMs, with the full Steam library "loaded" within every VM, without needing more storage for each new copy! Not to mention not having to buy new GPUs for all of my kids at current prices... 😑
  14. Update on my testing... ZFS with dedup/compression/zvols + iSCSI also works on Unraid. It was my fault: I had forgotten to disable BitLocker on the test laptop. Now that BitLocker is disabled, the dedup ratio starts to rise when I copy the same file multiple times onto the drive... see here (still the pool imported from TrueNAS):

          # zpool list
          NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
          testpool   928G  11.6G   916G        -         -     0%     1%  1.08x  ONLINE  -
  15. My intention was not to start the "to dedup or not to dedup" discussion ☺️ But your discussion is interesting... I have a homelab, and my intention is just to figure out how iSCSI and ZFS work. So I started testing with a TrueNAS VM on my existing Unraid server... where I get great results (almost maxing the 10G line speed from cache, dedup ratio up to 3 for 4 more or less identical Steam libraries). Since I know the drawbacks of dedup, I will decide... bigger cheap SSDs in my kids' PCs, or figure out whether my current 80 GB of RAM on Unraid is enough. But other than that... the dedup ratio stays at 1.00 anyway if I run this setup directly on Unraid (new sparse zvols). So it doesn't work regardless... but I need to understand why, just for myself 😁 By the way, it's cool that iSCSI + ZFS is possible on Unraid at all, with 2 great plugins from the community plus one manual command line to create an iSCSI block backstore based on a zvol...