segator

Members
  • Content Count

    151
  • Joined

  • Last visited

Community Reputation

5 Neutral

About segator

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed


  1. Will this work? https://www.amazon.com/ORICO-External-Docking-Duplicator-Function/dp/B07MQCDVJ2/ref=sr_1_1_sspa?dchild=1&keywords=orico+5+bay&qid=1604833876&sr=8-1-spons&psc=1&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUEyWFJXVVpLRDlSUTRUJmVuY3J5cHRlZElkPUEwMzY1MDIzMlExSElGUjZFRTRNSiZlbmNyeXB0ZWRBZElkPUEwMjAzOTU3MUpFOURDVDZSQldETiZ3aWRnZXROYW1lPXNwX2F0ZiZhY3Rpb249Y2xpY2tSZWRpcmVjdCZkb05vdExvZ0NsaWNrPXRydWU=
  2. I'm also thinking of extending my storage with the Orico 5-bay enclosure. @steve1977, were you able to get this running, or did you use some other USB enclosure?
  3. Well, the idea is to have it running there 24/7 to expand my array. Some of the disks will be in the Unraid array (so I suppose no problem there, as those disks work independently), but I also plan to create another ZFS raid in the enclosure (3 disks).
  4. Oh, sorry, what a newbie I am... yes, it says SMART is OK, so I suppose that if any issues are detected in the future I will be notified just like for any disk in the array, right? BTW, has anyone tried ZFS over USB disks? Is that a crazy idea? I am out of space on my rack server and I'm considering buying a 4-slot USB enclosure. Any other ideas if not?
  5. Interesting, not on mine (those disks are a ZFS pool). Do I need to do something else?
  6. Question: how do you guys monitor disk SMART when using ZFS? The Unassigned Devices plugin does not monitor disks that it hasn't mounted, right? (See the smartctl sketch after this list.)
  7. Memory issue fixed: I increased the block size (recordsize) of the dataset to 1M, used ashift=12, and disabled dedup and compression on volumes that don't need them (like multimedia vols); that saved about 15GB of usage (see the recordsize/dedup sketch after this list). @Marshalleq, which error are you getting? I'm running the latest beta with a gaming VM and ZFS RC2 (NAS + desktop PC all in one) and everything works fine after my memory fixes. In your logs I saw that you have xattr disabled?
  8. Oh, that's a good point. I also have a dataset with some media files; I suppose I will need to create a new one with dedup disabled and move the data there, right?
  9. Yes, I remember reading about 1GB of RAM per TB of data. In my case I have 10.4T of data, so that's 10.4GB of RAM, and if I sum my VM RAM + ARC + dedup (24 + 12 + 11) I'm totally out of RAM. Nevertheless, I have now reduced that to 20 + 8 + 11 and I'm still at 0GB of free RAM (see the dedup-table sketch after this list).
  10. In my case it has a lot of benefits because of the type of data I store on my volume: I build code and archive all the builds, so the deduplication ratio is very high, since the difference between builds is only a couple of KB. Anyway, as far as I understood, theoretically when Linux needs memory ZFS frees some of the ARC cache, but it seems it's not doing that; could that be? Also, do you know how these parameters behave when using ZFS on Unraid? The recommended values in the Tips & Tweaks Unraid plugin are: vm.dirty_background_ratio = 2%, vm.dirty_ratio = 3% (see the sysctl sketch after this list).
  11. How is this possible? I have 24GB for the VM, 10GB for the ARC cache, and I'm running Nextcloud with Redis and MySQL plus some rubbish dockers. How can I see the real RAM usage of ZFS (ARC + metadata cache + dedup...)? I have 10.4TB on my ZFS vol, BTW (see the arc_summary sketch after this list).
  12.         total  used  free  shared  buff/cache  available
      Mem:       47    42     2       1           2          2
      Swap:       0     0     0
  13.     time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  size     c  avail
      11:24:21     0     0      0     0    0     0    0     0    0   12G   12G   3.3G
  14. Hey, I've been using ZFS on Unraid for a couple of weeks with a gaming VM as my primary desktop PC (NAS + desktop PC all in one). It works fine, but sometimes Unraid decides to kill my VM because of "out of memory". I assigned 16GB of RAM to the VM and the host has 64GB of RAM; I think ZFS is not freeing the ARC fast enough when other containers reclaim memory, and then the kernel decides to kill my VM. I can fix it using hugepages, but I don't like that, because then it's memory that ZFS cannot use while the VM is shut down (which in the end is 90% of the time). I tried to limit the ZFS ARC with echo 12884901888 >> /sys/module/zfs/parameters/zfs_arc_max but it's the same (see the zfs_arc_max sketch after this list).
  15. Nope, I still have the same issue, but the machine is more or less usable as long as there is no extreme load on the virtual ethernet device.
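
Re post 6: a minimal sketch of polling SMART health for disks that the Unassigned Devices plugin isn't watching. The /dev/sdX names are placeholders, and the "-d sat" pass-through flag is an assumption that fits many USB bridges; some enclosures need a different type or expose no SMART at all.

    #!/bin/bash
    # Check overall SMART health for ZFS member disks outside the Unraid array.
    # Hook the failure branch into whatever notification mechanism you prefer.
    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        if ! smartctl -H -d sat "$dev" | grep -q PASSED; then
            logger -t smart-check "SMART health check failed on $dev"
        fi
    done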
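Re posts 7 and 8: roughly the property changes described there, expressed as ZFS commands. This is a sketch; pool/dataset names like tank/media are placeholders. Note that ashift is fixed when the pool/vdev is created, and that turning dedup off only affects newly written blocks, which is why the media files get copied into a fresh dataset.

    # 1M records suit large sequential media files; dedup/compression off where they don't pay off.
    zfs set recordsize=1M tank/media
    zfs set dedup=off tank/media
    zfs set compression=off tank/media
    # ashift=12 (4K sectors) can only be chosen at pool/vdev creation time, e.g.:
    #   zpool create -o ashift=12 tank raidz1 sdb sdc sdd
    # Existing blocks keep their DDT entries, so rewrite the data into a new dataset:
    zfs create -o dedup=off tank/media_nodedup
    rsync -aHAX /mnt/tank/media/ /mnt/tank/media_nodedup/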
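Re post 9: the 1GB-per-TB figure is only a rule of thumb for the ARC; the dedup table is on top of that and can be measured on the pool itself. The pool name is a placeholder, and the ~320 bytes per DDT entry used below is the commonly quoted estimate, not an exact number.

    # DDT summary: number of entries, on-disk size, and in-core size.
    zpool status -D tank
    # Rough worst case with 128K records: 10.4 TB / 128 KB ≈ 87 million unique blocks,
    # and 87e6 * ~320 bytes ≈ 26 GB of DDT if nothing actually dedups - so measure, don't guess.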
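Re post 10: those writeback knobs are plain sysctls and can be tried at runtime; the 2%/3% values are simply the ones the Tips & Tweaks plugin suggests. They govern the Linux page cache, while ZFS schedules its own writes through transaction groups, so their effect on a mostly-ZFS box is limited.

    # Start background writeback at 2% of RAM dirty, throttle writers at 3%.
    sysctl -w vm.dirty_background_ratio=2
    sysctl -w vm.dirty_ratio=3
    # Check the current values:
    sysctl vm.dirty_background_ratio vm.dirty_ratio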
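Re posts 11-13: a sketch of where the real ZFS memory usage can be read. arc_summary and arcstat ship with most ZFS-on-Linux installs (outputs like the ones in posts 12 and 13 come from such tools); if they're missing, the raw kstats are always available.

    # Human-readable ARC report (current size, target "c", metadata usage, hit rates):
    arc_summary
    # Raw counters if arc_summary is not installed:
    grep -E '^(size|c|c_max) ' /proc/spl/kstat/zfs/arcstats
    # In-core dedup table size is reported per pool:
    zpool status -D tank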
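Re post 14: the echo into zfs_arc_max only changes the runtime cap, and an already-grown ARC shrinks lazily. A sketch of applying and verifying the 12 GiB limit from the post; on Unraid a persistent setting has to be re-applied at every boot (e.g. from the go file), since the root filesystem is rebuilt on each start.

    # Cap the ARC at 12 GiB (12 * 1024^3 bytes).
    echo 12884901888 > /sys/module/zfs/parameters/zfs_arc_max
    # Confirm the new target maximum and the current ARC size:
    grep -E '^(c_max|size) ' /proc/spl/kstat/zfs/arcstats
    # On a stock Linux install this would instead be a module option, e.g.:
    #   echo "options zfs zfs_arc_max=12884901888" > /etc/modprobe.d/zfs.conf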