
glennv

Members
  • Content Count

    52
  • Joined

  • Last visited

Community Reputation

4 Neutral

About glennv

  • Rank
    Advanced Member


  1. Ah well, I just put the export and re-import commands in the go file and now it's fine. But still weird. Same on both of my Unraid + ZFS plugin boxes. (See the go-file sketch after this list.)
  2. To not explode the post, I grepped on mountpoint, as I guess that is what you need, right? It's all good, but after a boot I have to redo it as described. The pools will by default be mounted at root level after boot.

     # zfs get all | grep mountpoint
     ZFS_BACKUPS_V1             mountpoint  /mnt/disks/ZFS_BACKUPS_V1             default
     ZFS_BACKUPS_V1/DCP         mountpoint  /mnt/disks/ZFS_BACKUPS_V1/DCP         default
     ZFS_BACKUPS_V1/FCS         mountpoint  /mnt/disks/ZFS_BACKUPS_V1/FCS         default
     ZFS_BACKUPS_V1/NODE1       mountpoint  /mnt/disks/ZFS_BACKUPS_V1/NODE1       default
     ZFS_BACKUPS_V1/TACH-SRV3   mountpoint  /mnt/disks/ZFS_BACKUPS_V1/TACH-SRV3   default
     ZFS_BACKUPS_V1/W10         mountpoint  /mnt/disks/ZFS_BACKUPS_V1/W10         default
     ZFS_BACKUPS_V1/appdata     mountpoint  /mnt/disks/ZFS_BACKUPS_V1/appdata     default
     virtuals                   mountpoint  /mnt/disks/virtuals                   default
     virtuals/DCP               mountpoint  /mnt/disks/virtuals/DCP               default
     virtuals/FCS               mountpoint  /mnt/disks/virtuals/FCS               default
     virtuals/NODE1             mountpoint  /mnt/disks/virtuals/NODE1             default
     virtuals/appdata           mountpoint  /mnt/disks/virtuals/appdata           default
     virtuals2                  mountpoint  /mnt/disks/virtuals2                  default
     virtuals2/Mojave           mountpoint  /mnt/disks/virtuals2/Mojave           default
     virtuals2/MojaveDev        mountpoint  /mnt/disks/virtuals2/MojaveDev        default
     virtuals2/TACH-SRV3        mountpoint  /mnt/disks/virtuals2/TACH-SRV3        default
     virtuals2/W10              mountpoint  /mnt/disks/virtuals2/W10              default

     Edit: in case you do need other stuff, here is everything from a single pool.

     # zfs get all virtuals
     NAME      PROPERTY              VALUE                  SOURCE
     virtuals  type                  filesystem             -
     virtuals  creation              Fri Sep  6 15:29 2019  -
     virtuals  used                  207G                   -
     virtuals  available             243G                   -
     virtuals  referenced            27K                    -
     virtuals  compressratio         1.33x                  -
     virtuals  mounted               yes                    -
     virtuals  quota                 none                   default
     virtuals  reservation           none                   default
     virtuals  recordsize            128K                   default
     virtuals  mountpoint            /mnt/disks/virtuals    default
     virtuals  sharenfs              off                    default
     virtuals  checksum              on                     default
     virtuals  compression           lz4                    local
     virtuals  atime                 off                    local
     virtuals  devices               on                     default
     virtuals  exec                  on                     default
     virtuals  setuid                on                     default
     virtuals  readonly              off                    default
     virtuals  zoned                 off                    default
     virtuals  snapdir               hidden                 default
     virtuals  aclinherit            restricted             default
     virtuals  createtxg             1                      -
     virtuals  canmount              on                     default
     virtuals  xattr                 on                     default
     virtuals  copies                1                      default
     virtuals  version               5                      -
     virtuals  utf8only              off                    -
     virtuals  normalization         none                   -
     virtuals  casesensitivity       sensitive              -
     virtuals  vscan                 off                    default
     virtuals  nbmand                off                    default
     virtuals  sharesmb              off                    default
     virtuals  refquota              none                   default
     virtuals  refreservation        none                   default
     virtuals  guid                  882676013499381096     -
     virtuals  primarycache          all                    default
     virtuals  secondarycache        all                    default
     virtuals  usedbysnapshots       0B                     -
     virtuals  usedbydataset         27K                    -
     virtuals  usedbychildren        207G                   -
     virtuals  usedbyrefreservation  0B                     -
     virtuals  logbias               latency                default
     virtuals  objsetid              54                     -
     virtuals  dedup                 off                    local
     virtuals  mlslabel              none                   default
     virtuals  sync                  standard               default
     virtuals  dnodesize             legacy                 default
     virtuals  refcompressratio      1.00x                  -
     virtuals  written               27K                    -
     virtuals  logicalused           276G                   -
     virtuals  logicalreferenced     13.5K                  -
     virtuals  volmode               default                default
     virtuals  filesystem_limit      none                   default
     virtuals  snapshot_limit        none                   default
     virtuals  filesystem_count      none                   default
     virtuals  snapshot_count        none                   default
     virtuals  snapdev               hidden                 default
     virtuals  acltype               off                    default
     virtuals  context               none                   default
     virtuals  fscontext             none                   default
     virtuals  defcontext            none                   default
     virtuals  rootcontext           none                   default
     virtuals  relatime              off                    default
     virtuals  redundant_metadata    all                    default
     virtuals  overlay               off                    default
     virtuals  encryption            off                    default
     virtuals  keylocation           none                   default
     virtuals  keyformat             none                   default
     virtuals  pbkdf2iters           0                      default
     virtuals  special_small_blocks  0                      default
  3. Love this plugin. One issue I have is that mountpoints seem not to be persistent across a reboot. After the boot I have to do a zpool export <pool> and then a zpool import -R <mountpoint> <pool> to get the mountpoint correct again. zfs get mountpoint shows it correctly after the re-import. Anything I missed?? It's annoying, as my dockers/VMs are on there, so they fail after a reboot until manually fixed.
  4. I have the same experience and was never able to solve it. Followed this kind gentleman's advice and even copied his XML, but it did not work. It's some magical combination of factors that makes it work or not work. Same for Sierra/High Sierra/Mojave. Tried different Clovers etc., topologies, no topologies, tried for weeks. The most I got was that it booted, but then almost every program you run crashes. Gave up, so if you find it, let me know. I run all my OSX VMs with max 32 vcores (16 hyperthreaded) and called it a day.
  5. Unraid rocks hard. So happy bday and yes, feel free to send me that coveted badge.
  6. Unraid is even worse than the standard effect on any filesystem, as in addition to the normal filesystem overhead it has to do parity calculations/writes for each small file. I gave up struggling with regular incremental backups of huge audio sample libraries (millions of files) because it would take ages. Different backup solutions react differently, but all suffer, apparently regardless of whether you have a fast SSD cache. I now back those up to Unraid only once a month, using a backup tool with a local database so it does not have to recheck every file on every run. Still insanely slow, always. Daily backups run outside Unraid. Normal large files, which is most of my data, rocket over my 10G net at 500-700MB/s without issues. It's the only downside of an otherwise stellar Unraid experience.
  7. Thanks. Did not know that about the array staying online during a clear. Is there a specific order to do things to make sure this happens when you add a new drive? So: stop the array, add and assign the new drive, and start the array, which will then fully start and run the clear separately in the background?
  8. Great stuff man. Thanks for your effort in figuring this out for us.
  9. +1 Absolutely vital, I would say, to be able to disable all these mitigations for non-exposed servers or whatever. Couldn't care less for my server; I just need as much perf as the CPU can deliver. But hats off to LT for being on top of this. Respect.
  10. I think it's not the starting of the VM but, as I reported in some other thread, going to the VM tab that (in my case) starts all drives. I bypass it by never visiting that tab to start a VM; I do it from the dashboard instead.
  11. +1 for integrated in Unraid. Currently happy using Cloud Commander, but nothing beats integrated.
  12. Keep a “tail -f /var/log/syslog” running on the console, so when it hangs you can still see the last messages and maybe get a hint.
  13. Run lspci -v. This shows you all recognised hardware. You should see your GPU (look for the brand name) in the output list, along with its address, which you eventually need if you want to do anything with it. (Example output after this list.)
  14. The old card that did not work was a server pull. This one, no idea, but it works like a charm: better, faster (8 SSDs), and way, way cooler under full load than the old server-pull card. Tested the crap out of it and it's a beauty. Your mileage may vary with stuff from there.
  15. Based on other posts on this forum discussing the exact requirements and issues with TRIM on HBAs and the latest firmwares / Unraid releases (can't find them, but do a search on trim), I moved from a flashed H200 up to an LSI 9300-8i (thanks eBay for the cheap Chinese card, if you are not in a hurry), also replaced all my EVO 950s with 960s, and finally got btrfs TRIM fully working. You need the proper card and the proper drive now to get it working on btrfs, otherwise you are out of luck. A nice little speed boost as well with the faster HBA. Before that I had to temporarily connect the drives to motherboard SATA, do the trim, and connect them back to the HBA. (See the TRIM checks after this list.)
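
Re posts 1 and 3: a minimal sketch of the go-file workaround, assuming the same export / import -R commands I run by hand. Pool names and paths are from my boxes in post 2; adjust to yours.

    #!/bin/bash
    # /boot/config/go runs once at every Unraid boot. After the ZFS
    # plugin has loaded, export and re-import each pool with -R (altroot)
    # so the datasets land back under /mnt/disks instead of at root level.
    # Pool list below is mine; yours will differ.
    for pool in virtuals virtuals2 ZFS_BACKUPS_V1; do
        zpool export "$pool"
        zpool import -R "/mnt/disks/$pool" "$pool"
    done

An alternative I have not tested myself: since the SOURCE column in post 2 shows the mountpoints as default, a one-time zfs set mountpoint=/mnt/disks/virtuals virtuals per pool would store the path in the pool itself (SOURCE becomes local) and should survive reboots without any go-file lines.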
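Re post 13: roughly what the relevant lspci lines look like. The device name and the 01:00.0 address below are made-up examples; yours will differ.

    # lspci | grep -i vga        <- quick way to spot the card and its address
    01:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1)
    # lspci -v -s 01:00.0        <- full details for just that address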
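Re post 15: two quick checks for whether TRIM actually reaches a drive behind the HBA. The commands are standard Linux; /dev/sdX and /mnt/cache are placeholders, and whether discard works at all is exactly the card/drive lottery described above.

    lsblk --discard /dev/sdX   # non-zero DISC-GRAN / DISC-MAX means the kernel
                               # sees discard support along that path
    fstrim -v /mnt/cache       # trims the mounted btrfs cache and reports bytes
                               # trimmed; errors out if discard is not supported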