isvein

Everything posted by isvein

  1. Ok, so I think I got it, maybe. I turned off the ARC cache on all datasets and then restarted the server. After the restart, the ARC used 7% of RAM and started to build back up to 34%, and as it did, the reads and writes increased too. So I think this has something to do with ZFS reading and writing data to RAM "all the time". Clearly some data is still cached in RAM even if you turn it off on the datasets, else the ZFS RAM usage would not increase. But it would be good to get some info on this from someone who knows ZFS better.
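     For anyone who wants to check the same thing, something along these lines should show it from the terminal (tank/appdata is just an example dataset name, yours will differ):

        # Current ARC size vs. its configured maximum (values in bytes)
        grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats

        # See which datasets still have ARC caching enabled
        zfs get -r primarycache tank

        # Cache only metadata for a dataset, or turn the ARC off for it entirely
        zfs set primarycache=metadata tank/appdata
        zfs set primarycache=none tank/appdata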
  2. Found out it seems to have nothing to do with Docker at least. I cleared the R/W counters and waited around 20 min with the Docker service turned off, and as you can see, it's only R/W on the ZFS drives, not the XFS one. I'm starting to think this is not an error or bug but just how ZFS may work; going to test some more.
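     If anyone else wants to reproduce it, this is roughly the kind of check I mean (stop Docker under Settings -> Docker first):

        # Per-device read/write activity on the ZFS pools, refreshed every 30 s
        zpool iostat -v 30

        # Kernel counters for all disks since boot (field 6 = sectors read,
        # field 10 = sectors written), so the XFS disk can be compared too
        cat /proc/diskstats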
  3. But do you get the random reads and writes?
  4. Just to have tested it, I checked whether changing Docker to a folder instead of an image made any difference; it did not.
  5. I tested now: set drive spin-down to 30 min (just so I didn't have to wait for hours) and stayed away from the "Main" page, and they did indeed spin down. 10 of the 11 drives in my array are ZFS. But once I clicked on "Main", they all spun up again, as expected per the info from @Iker (all the ZFS drives; the XFS one stays spun down, as also expected).
  6. I tested on 6.12.3 now: set drive spin-down to 30 min (just so I didn't have to wait for hours) and stayed away from the "Main" page, and they did indeed spin down. 10 of the 11 drives in my array are ZFS. But once I clicked on "Main", they all spun up again, as expected per the info from @Iker (all the ZFS drives; the XFS one stays spun down, as also expected).
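     If someone wants to verify the spin-down part from the terminal, something like this should do it (sdb is just an example device):

        # Check whether a drive has really spun down
        hdparm -C /dev/sdb

        # Listing snapshot data by hand wakes the ZFS drives the same way the
        # plugin does when the Main tab is open
        zfs list -t snapshot -o name,used,creation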
  7. Hello. This seems to be a problem many have noticed, but I could not find a bug report on it yet (maybe I did not look deep enough). I first came across the problem on Reddit and then found out I had the same situation. It looks to be related to Docker, because if I/we shut down Docker or don't run any containers, nothing is written. I can't say for sure whether this happened with ZFS before 6.12.3, but people on Reddit seem to have noticed it on 6.12.3 and not before. There is also the thing that ZFS drives in the array do not spin down if you have them on a timer, but that seems to be linked to the ZFS-Master plugin IF you have the "Main" tab open, as the plugin then reads the snapshot data and that wakes up the drives. But this does not explain the writes. I have tried to isolate it to a specific docker, with no luck. The writes also happen on ZFS drives in the array, not only ZFS pools. Anyone else know more about this? oneroom-diagnostics-20230719-2154.zip
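     For anyone trying to narrow down where the writes land, something along these lines should give a rough picture (disk1 is just an example; as far as I understand, each ZFS-formatted array disk is its own single-device pool):

        # Write ops/bandwidth for one pool, refreshed every 10 seconds
        zpool iostat disk1 10

        # Rough per-dataset view: 'written' grows as new data lands in a dataset
        zfs get -r -o name,property,value written disk1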
  8. I know this is an old request, but it would be nice if the VPN Manager under Settings also supported OpenVPN and not just WireGuard.
  9. Thanks! I think I got confused and thought of each container as a peer, but I think I get how it works now.
  10. One way to do it is to use a site like ipleak.net and/or dnsleaktest.com
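     If you want to check it from the terminal instead, something like this should work, assuming the container has curl in it (vpn-app is just a placeholder name):

        # Which public IP the container actually exits through
        docker exec vpn-app curl -s ifconfig.me

        # Which DNS servers the container is configured with
        docker exec vpn-app cat /etc/resolv.conf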
  11. So the DNS should always be set as an extra parameter on each docker and NOT under the tunnel's DNS settings?
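     In case it helps anyone, this is the kind of extra parameter I mean (10.2.0.1 is just an example address; use your tunnel's DNS):

        # Goes in the container's "Extra Parameters" field (advanced view)
        --dns=10.2.0.1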
  12. I just deleted the docker folder after stopping Docker and created a new image. Then I used the Appstore to redownload all previously installed dockers with the latest saved config, since no important data is saved in the docker image.
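     Roughly the steps, for anyone else (the path below is the default image location, adjust if yours differs):

        # 1. Stop Docker under Settings -> Docker, then remove the old image
        rm /mnt/user/system/docker/docker.img

        # 2. Re-enable Docker so a fresh image gets created, then reinstall
        #    everything from Apps -> Previous Apps to keep the saved templates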
  13. I don't remember how many datasets; I switched back to a btrfs image for Docker. But I have 24 dockers and there were way more datasets than that, maybe double.
  14. So it's more of a bug than a feature then, thanks 🙂
  15. (Not sure if this is the right forum.) Hello, I'm just curious: when using a docker folder (not an image) under ZFS, it creates a lot of datasets and it seems to automatically take snapshots, either when dockers update or get added. Why does this happen? Is it by design? There look to be more datasets than I have containers.
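     For anyone curious, something like this lists what got created (cache/docker is just an example for wherever the docker folder lives); as far as I understand, the docker zfs storage driver makes a dataset per image layer, plus snapshots/clones, which would explain why there are more datasets than containers:

        # Datasets, snapshots and clones under the docker folder's dataset
        zfs list -r -t all cache/docker | head -n 20

        # Count just the datasets
        zfs list -r -H -o name cache/docker | wc -l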
  16. I'm getting this message now when I try to move data from my last XFS drive (it does not have much free space left) to the ZFS drives. Tried different things but the same thing happens. Disk9 is XFS and disk8 is ZFS. Edit: Hmmm, looks like this may be because of something else, but not sure why.
  17. So this plugin won't add much on ZFS-formatted array drives, since ZFS already has scrub and checksums? And if a drive goes down, we always have good old parity as usual.
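     For what it's worth, a ZFS array disk can already be scrubbed by hand (disk1 is just an example; each ZFS array disk is its own single-device pool as far as I understand):

        # Start a scrub on one ZFS-formatted array disk and check the result
        zpool scrub disk1
        zpool status disk1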
  18. I wonder the same thing, but for a ZFS-formatted array.
  19. That makes sense, but can unbalance put data into shares on another disk? I thought it just copied or moved folders from the root of one disk to the root of another, since this works just fine on, say, XFS disks.
  20. When creating a ZFS dataset on a ZFS pool, either with the ZFS-Master plugin or the terminal, the share that is created sets primary storage to "Array" instead of said pool. It's been like this since the first beta/RC with ZFS support as far as I know. Not sure if this is considered a bug or by design, as I'm sure the "proper" way to create a dataset is from the "Shares" tab.
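     For reference, this is the kind of terminal-created dataset I mean (cache/newshare is just an example pool/share name):

        # Create a dataset directly on the pool; Unraid picks it up as a share,
        # but the share's primary storage then shows as "Array", not the pool
        zfs create cache/newshare
        zfs list cache/newshare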
  21. I have tested it now and it works just as usual, but if you use it to move data to a ZFS-formatted disk, each folder does not become its own dataset as it does when you create a new share on a ZFS drive. All folders and files just get moved/copied to the pool root.
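     If someone wants the moved folders to end up as datasets anyway, the manual workaround I know of goes along these lines (disk8/Media is just an example; test on something unimportant first):

        # Move the plain folder aside, create a dataset with the same name,
        # then copy the data back in and remove the temporary copy
        mv /mnt/disk8/Media /mnt/disk8/Media_tmp
        zfs create disk8/Media
        rsync -a /mnt/disk8/Media_tmp/ /mnt/disk8/Media/
        rm -r /mnt/disk8/Media_tmp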
  22. Will the stuff the "ZFS Master" plugin does become part of Unraid, without the need for the plugin, at some point?
  23. After updating dockers, the "Versions" tab on folders still says "update ready"; it won't change to "up-to-date" until I refresh the Docker tab. It was like this before 6.12 too; not sure if it's by design or a Firefox problem.
  24. Now v27 is out and it's the same problem, stuck on the backup phase. I remember I had to delete a file through the terminal before it worked, but I can't remember which file.
  25. Have I understood it correctly that what the ZFS Master plugin does will one day be part of Unraid without the need for a plugin?