Everything posted by glennv

  1. ps no need to stop the array for any zfs stuff. Typically I wipe the drives first with a dummy unassigned devices format if they protest when trying to create a pool. Could be some remnants of earlier use on them that gets you that message. But stopping the array is not needed, as these drives are not part of the array. Be very careful however to select the right drive ids so you don't accidentally destroy the unraid array drives.
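     To double-check you grab the right drives, list them by id first and use the full by-id paths when creating the pool. A minimal sketch (the ata-EXAMPLE... names, the mirror layout and the pool name "scratch" are made-up placeholders, yours will differ):

     ls -l /dev/disk/by-id/ | grep -v part       # match serial numbers to the drives you mean to use
     zpool create -m /mnt/disks/scratch scratch mirror \
         /dev/disk/by-id/ata-EXAMPLE_SSD_SERIAL1 \
         /dev/disk/by-id/ata-EXAMPLE_SSD_SERIAL2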
  2. Nope, unassigned devices is not involved in zfs; like any other non-array drive it just shows the drives there. Drives involved in zfs pools do show up as zfs_member in unassigned devices, but don't touch them there or use the mount button. It's all command line baby ✌️
  3. Funny coincidence, and it could be totally unrelated to your issue. Was struggling to get a Thrustmaster to work in a W10 VM as well, via a passed-through ASMedia USB3+USB-C controller. The same wheel worked fine on a native W10 machine with the same drivers/software etc, so it was not a driver issue. Gave up after a week of trying. Then for another reason I started suspecting the ASMedia controller (it was not always recognised on boot and I often had to reseat the card and/or reboot a few times for it to be recognised on my Supermicro board; also I could intermittently not get USB3 to work in the OSX VM). Bought a Sonnet Allegro USB card from ebay (mainly for its native stellar OSX support) and bam: steering wheel works without issue, without changing a thing in W10. And OSX support is great as expected. So if you have access to other USB cards, give it a try.
  4. Seriously fingers crossed that ixgbe support is fixed and included in the final release, as 10G Intel card(s) are my main cards in my unraid boxes. Would be a no-go without. Wait and pray ........
  5. Ouch, hope that's just a temporary rc thingy, as losing my 10gb card for unraid is a no-go. Am sure I am not alone.
  6. 1. Yes, you can have multiple zpools.
     2. Not sure about the self-healing of single-drive zpools, but I use them (large spindle) as snapshot send/receive targets.
     3. Regarding tips:
     - Auto-mounting on a specific set mountpoint does not work for me on reboot (while the mountpoint is properly set on the datasets), so I created a user script that runs after reboot to export/import on the target mountpoint.
     - For appdata to work properly you need to mount your datasets under /mnt. I use /mnt/disks/appdata for example.
     - For trimming I run nightly zpool trim commands using the User Scripts plugin (see the sketches right after this list).
     - I heavily use zfs snapshot, send/receive etc triggered by the User Scripts plugin, plus rollbacks, cloning etc, and it is all working great.
     - Use the User Scripts plugin to limit the ARC after boot, as described in the beginning of this topic.
     - Read this whole topic and check the useful tips for monitoring and sending notifications on zfs events, or use them in your own snapshot scripts etc.
     - Do not upgrade unraid unless a new zfs plugin version has become available (ouch).
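     As an illustration, the trim and ARC user scripts are tiny. The nightly trim one (pool names are from my own setup, use yours):

     #!/bin/bash
     # trim the ssd pools; scheduled nightly via the User Scripts plugin
     zpool trim virtuals
     zpool trim virtuals2

     and the one that caps the ARC after boot (the 8GiB value is just an example, tune it to your RAM):

     #!/bin/bash
     # limit the ZFS ARC so it leaves RAM free for VMs and dockers
     echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max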
  7. Great stuff !!! Looking forward to the final version. Can't test the RCs as I have multiple zfs pools running and the zfs plugin needs to support the next release first before I can move ahead. But again, great to hear you guys are on top of it as always. Moving into the unraid world has been an extremely satisfying and landscape-transforming experience for me.
  8. link to the patch and respective discussion https://forum.level1techs.com/t/vega-10-and-12-reset-application/145666/86
  9. Careful !! Tried it today with my Vega 56, but if I don't pass the sound after the first reboot of the (OSX) VM, then instead of the VM hanging, my complete unraid server kernel panics after an initial garbled kaleidoscopic screen output on the Vega. That's a new one for me, as never ever in several years has unraid crashed on me. Luckily my array and all my zfs ssd pools did not give an inch and worked fine after the crashes. So back to passing both, and just waiting for a proper fix; until then just rebooting the box if I have to restart the VM. Crap.
  10. Hope it includes a fix/band-aid for the AMD reset bug, which is driving me insane.
  11. Nope ☺️ Guess it's related to using an SMBIOS of an as-of-yet non-existing MacPro hahaha. Tried with 8/16/32G mem, no difference. So far it seems cosmetic (I hope) 🙏
  12. Tnx Leoyzen, you saved me from going mad after a few days of struggling to get upgrades to Catalina final working (either from Mojave or from the latest Catalina beta, with a Vega 56 card). Whatever I tried, it never worked: either bootloops or freezes. Was super confused as everything worked fine for every beta until the final, which broke. It was something in my EFI, but endless tries did not manage to find it. Did find the bootlooper (fixed by MCEReporterDisabler.kext) but ran into other crap. By using your catalina.clover.qcow as a base and overwriting my own, adding MCEReporterDisabler.kext, Lilu and CPUFriend to the kexts/Other directory (to keep /Library/Extensions clean), it finally went through on a copy of my main Davinci Resolve render node. All working fine, incl acceleration, as pre-upgrade. Was using iMacPro1,1 before but the new MacPro7,1 seems to also work great post-upgrade. Not sure if there are any side effects of the funny message about memory banks. Render tests are all good and show no difference, so I guess we can ignore it, but if you find anything please let us know. p.s. Am using physical 10G cards so have not looked into virtio, which seems to have some success now, but maybe in the future (as I don't like that it apparently needs post-start hotplug). I used physical hardware basically because virtual network cards on OSX sucked bigtime so far, so if that situation has changed it would be a very interesting development.
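     For reference, those kexts just sit in the standard Clover folder inside the clover image (assuming a default Clover EFI layout, which is what the catalina.clover.qcow uses as far as I can tell):

     EFI/CLOVER/kexts/Other/Lilu.kext
     EFI/CLOVER/kexts/Other/CPUFriend.kext
     EFI/CLOVER/kexts/Other/MCEReporterDisabler.kext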
  13. Run it with a size specified; using a zero never worked for me. So for example: diskutil apfs resizeContainer disk0s2 48G. Rem: if you get an error complaining about the partition map being too small, run "diskutil repairDisk /dev/disk0" and then retry the resize. You typically get this error if you use the diskutil GUI version to resize the container. Running this command first fixes that.
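     Put together, the sequence (run inside the macOS VM) looks roughly like this; disk0s2 and 48G are just the values from the example above, check your own identifiers with diskutil list first:

     diskutil list                              # find the APFS container, here disk0s2
     diskutil repairDisk /dev/disk0             # only needed if the resize complains about the partition map
     diskutil apfs resizeContainer disk0s2 48G  # resize the container to the size you want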
  14. Something else I do wonder about though with using ZFS: how about trim for SSDs? I have several ZFS ssd pools, and the normal trim command I would run against my btrfs pool(s) does not work anymore for zfs. Is it part of ZFS itself somehow? Or am I missing something?
     ---
     # fstrim -v /mnt/disks/virtuals
     fstrim: /mnt/disks/virtuals: the discard operation is not supported
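     For anyone hitting the same question later: fstrim works on the mounted filesystem layer, while ZFS does its trimming at the pool level, so the ZFS-side equivalents would be something along these lines (assuming a ZFS build of 0.8 or newer; virtuals is the pool name from above):

     zpool trim virtuals             # manual / scheduled trim of the pool
     zpool set autotrim=on virtuals  # or let the pool trim itself continuously
     zpool status -t virtuals        # shows the per-device trim state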
  15. As hoopster mentioned, drives are the no-brainer in a swap. Just finished a big one between 2 Supermicro boards. Even connected them up differently between motherboard SATA, HBAs, backplanes etc. Zero issues, as unraid will find them for you. The gotchas are:
     - VMs with passed-through hardware, as hardware addresses change
     - vfio stubs, for the same reason
     - CPU pinning (only if changing CPUs / nr of cores in the process, which I did)
     - the network can get messed up seriously on internal motherboard ports, so prepare to redo these
     - BIOS settings (if you have IPMI you can remote in and screenshot every BIOS page)
     So rigorously document / screenshot all your current settings, everywhere it says the word setting. And if you have no VMs and dockers you can ignore most of this list.
  16. Besides OCD there is a case for spreading/distributing data, down to every single file, as equally as possible: read speed !!! I edit videos straight off my unraid array very comfortably. Have only equal-sized 7200rpm Seagates in there and keep them all equally filled, with data spread as much as humanly possible, to maintain maximum read speed. Works amazingly well and almost any file I access gets 200+ MB/s. It is getting as close to a classical striped array as you can without a striped array. For the stuff where I need to guarantee blazing speed above this max spindle rate, I move the project to cache using unbalance and back with mover when done (as unbalance does not spread files, unfortunately). If you fill up and stack all the files of your edit on the same drive, it slows down massively, and the fuller a drive gets the slower it is.
  17. Ah good to know. Tnx for the info. Btw, just finished testing direct I/O, and although it does improve artificial write speeds (using dd directly on the server), it does not do a lot for real-world over-the-net write speeds; worse, it dropped my read speeds from 900MB/s to 230MB/s, so that idea goes in the bin. At least I know the cause and don't keep looking for a misconfiguration that is not there. Tnx mate. Waiting for fuse 3.x it is, I guess......
  18. Tnx. Wow, that is a lot of overhead. Yeah, I also just remembered direct I/O, but I also vaguely remembered that in the past setting that caused issues with some of my dockers. But these have been moved off to zfs pools, so I will try and play with it again.
  19. Trying to understand where the huge speed diff comes from. Have a btrfs raid10 4x 1TB Samsung EVO 860 ssd cache pool (connected via a 12G SAS card). Writing directly to it (freshly trimmed and about 20% full) via /mnt/cache/sharename/ gets me a solid 1.1 GB/s as expected (about 500+ per ssd, divided by 2 for raid10). Writing to the cache via a share set to cache yes or only and /mnt/user/sharename/ gets me about 50-60% of that (testing via dd if=/dev/zero of=file1.txt count=25k bs=1024k, but similar with real-world data speed tests over 10G ethernet). So what is this huge overhead, as parity is not involved here, and can it be tuned to get me optimised cache write speeds?
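     Concretely, the comparison is just the same dd run against both paths (sharename is a placeholder, and /dev/zero numbers are only a rough indicator):

     cd /mnt/cache/sharename && dd if=/dev/zero of=file1.txt count=25k bs=1024k  # straight to the pool: ~1.1 GB/s
     cd /mnt/user/sharename && dd if=/dev/zero of=file1.txt count=25k bs=1024k   # through /mnt/user: roughly half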
  20. Personally I am happy with the main array as it is. But I have been bitten about 3 times now by a corrupt btrfs cache pool. I use a large multi-TB ssd (btrfs raid1) cache pool for speed and direct fast (via 10GB ether) access to video data. I keep the projects I am working on in the pool and once done trigger mover to move them to the (still fast 7200rpm) spindles (xfs). Back and forth with unbalance keeps it very workable. For my VMs I enjoy the snapshot feature of btrfs. But 2 of the times I was hit it resulted in corrupt VM images that btrfs could not fix, and the last one in complete pool corruption that was not fixable by any of the btrfs methods I researched. Of course I have good backups, but it sucks having to rebuild everything, and it drops my sense of btrfs reliability to near zero (while I had been bragging about btrfs to most of my friends until then).
     At that time I did not know much, if anything, about zfs. Started reading up on it, installed the plugin, moved most data I kept on the btrfs cache to ssd-only zfs raid1 pools, and they have been stellar since. Also, zfs literally (in plain English rather than useless syslog messages) told me the main reason I had issues with btrfs already in the first weeks, which was a wobbly connection to 2 of 8 ssds in a hotswap cage; but in contrast to btrfs it only reported it, kept fixing any issue continuously and kept the pool healthy, where btrfs broke in the same setup / same disks / same cables / same problems. I unplugged and replugged live ssds to see which drive it was, and all the time zfs kept smiling like nothing happened. There is just "zero" comparison or competition with btrfs. Specs, or having a system running for years without issues, are nice, but real-world behavior when there "are" issues, and how you recover from them, is what counts. The argument that btrfs has recovery tools and zfs does not was moot in my case, as none of those tools helped me at all other than to save some part of the data, but not to fix the pool, while zfs just fixed it for me while also helping me identify the problem. So I have become a complete convert from a btrfs advocate to a zfs fanboy.
     So a great option would be to be able to have zfs for cache only, to get the best of both worlds. I can live without it for the main array, as I see fewer benefits and more complications in implementing it there, as already discussed. Nice pool features, but still a fast and super reliable cache pool (with raidzX options; I did not even mention btrfs's lack of anything (reliable) beyond raid1/10).
  21. Ah well, I just put the export and re-import commands in the go file and now it's fine. But still weird. Same for both of my unraid + zfs plugin boxes.
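     For completeness, the go file workaround is roughly this per pool (virtuals is one of my pools from the listing below; -R sets the altroot so its datasets end up under /mnt/disks, adjust to your own layout):

     zpool export virtuals
     zpool import -R /mnt/disks virtuals   # ends up mounted at /mnt/disks/virtuals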
  22. To not explode the post, I grepped on mountpoint, as I guess that is what you need, right? It's all good, but after boot I have to redo it as described; the pools will by default be mounted at root level after boot.
     # zfs get all | grep mountpoint
     ZFS_BACKUPS_V1             mountpoint  /mnt/disks/ZFS_BACKUPS_V1            default
     ZFS_BACKUPS_V1/DCP         mountpoint  /mnt/disks/ZFS_BACKUPS_V1/DCP        default
     ZFS_BACKUPS_V1/FCS         mountpoint  /mnt/disks/ZFS_BACKUPS_V1/FCS        default
     ZFS_BACKUPS_V1/NODE1       mountpoint  /mnt/disks/ZFS_BACKUPS_V1/NODE1      default
     ZFS_BACKUPS_V1/TACH-SRV3   mountpoint  /mnt/disks/ZFS_BACKUPS_V1/TACH-SRV3  default
     ZFS_BACKUPS_V1/W10         mountpoint  /mnt/disks/ZFS_BACKUPS_V1/W10        default
     ZFS_BACKUPS_V1/appdata     mountpoint  /mnt/disks/ZFS_BACKUPS_V1/appdata    default
     virtuals                   mountpoint  /mnt/disks/virtuals                  default
     virtuals/DCP               mountpoint  /mnt/disks/virtuals/DCP              default
     virtuals/FCS               mountpoint  /mnt/disks/virtuals/FCS              default
     virtuals/NODE1             mountpoint  /mnt/disks/virtuals/NODE1            default
     virtuals/appdata           mountpoint  /mnt/disks/virtuals/appdata          default
     virtuals2                  mountpoint  /mnt/disks/virtuals2                 default
     virtuals2/Mojave           mountpoint  /mnt/disks/virtuals2/Mojave          default
     virtuals2/MojaveDev        mountpoint  /mnt/disks/virtuals2/MojaveDev       default
     virtuals2/TACH-SRV3        mountpoint  /mnt/disks/virtuals2/TACH-SRV3       default
     virtuals2/W10              mountpoint  /mnt/disks/virtuals2/W10             default
     Edit: in case you do need other stuff, here it is for a single pool.
     # zfs get all virtuals
     NAME      PROPERTY              VALUE                  SOURCE
     virtuals  type                  filesystem             -
     virtuals  creation              Fri Sep  6 15:29 2019  -
     virtuals  used                  207G                   -
     virtuals  available             243G                   -
     virtuals  referenced            27K                    -
     virtuals  compressratio         1.33x                  -
     virtuals  mounted               yes                    -
     virtuals  quota                 none                   default
     virtuals  reservation           none                   default
     virtuals  recordsize            128K                   default
     virtuals  mountpoint            /mnt/disks/virtuals    default
     virtuals  sharenfs              off                    default
     virtuals  checksum              on                     default
     virtuals  compression           lz4                    local
     virtuals  atime                 off                    local
     virtuals  devices               on                     default
     virtuals  exec                  on                     default
     virtuals  setuid                on                     default
     virtuals  readonly              off                    default
     virtuals  zoned                 off                    default
     virtuals  snapdir               hidden                 default
     virtuals  aclinherit            restricted             default
     virtuals  createtxg             1                      -
     virtuals  canmount              on                     default
     virtuals  xattr                 on                     default
     virtuals  copies                1                      default
     virtuals  version               5                      -
     virtuals  utf8only              off                    -
     virtuals  normalization         none                   -
     virtuals  casesensitivity       sensitive              -
     virtuals  vscan                 off                    default
     virtuals  nbmand                off                    default
     virtuals  sharesmb              off                    default
     virtuals  refquota              none                   default
     virtuals  refreservation        none                   default
     virtuals  guid                  882676013499381096     -
     virtuals  primarycache          all                    default
     virtuals  secondarycache        all                    default
     virtuals  usedbysnapshots       0B                     -
     virtuals  usedbydataset         27K                    -
     virtuals  usedbychildren        207G                   -
     virtuals  usedbyrefreservation  0B                     -
     virtuals  logbias               latency                default
     virtuals  objsetid              54                     -
     virtuals  dedup                 off                    local
     virtuals  mlslabel              none                   default
     virtuals  sync                  standard               default
     virtuals  dnodesize             legacy                 default
     virtuals  refcompressratio      1.00x                  -
     virtuals  written               27K                    -
     virtuals  logicalused           276G                   -
     virtuals  logicalreferenced     13.5K                  -
     virtuals  volmode               default                default
     virtuals  filesystem_limit      none                   default
     virtuals  snapshot_limit        none                   default
     virtuals  filesystem_count      none                   default
     virtuals  snapshot_count        none                   default
     virtuals  snapdev               hidden                 default
     virtuals  acltype               off                    default
     virtuals  context               none                   default
     virtuals  fscontext             none                   default
     virtuals  defcontext            none                   default
     virtuals  rootcontext           none                   default
     virtuals  relatime              off                    default
     virtuals  redundant_metadata    all                    default
     virtuals  overlay               off                    default
     virtuals  encryption            off                    default
     virtuals  keylocation           none                   default
     virtuals  keyformat             none                   default
     virtuals  pbkdf2iters           0                      default
     virtuals  special_small_blocks  0                      default
  23. Love this plugin. One issue I have is that mountpoints seem to not be persistent across a reboot. After the boot I have to do a zpool export <pool> and then an import -R <mountpoint> <pool> to get the mountpoint correct again. zfs get mountpoint shows it correctly after the re-import. Anything I missed?? It's shitty, as my dockers/VMs are on there so they will fail after a reboot until manually fixed.