d3m3zs

Everything posted by d3m3zs

  1. Ah, thanks a lot. It is disabled.
  2. In Prometheus I see "Get "http://myIP:9100/metrics": dial tcp myIP:9100: connect: no route to host", but I can open myIP:9100 and see all the information: # HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles. # TYPE go_gc_duration_seconds summary go_gc_duration_seconds{quantile="0"} 2.7502e-05 go_gc_duration_seconds{quantile="0.25"} 3.3061e-05 go_gc_duration_seconds{quantile="0.5"} 3.9764e-05 go_gc_duration_seconds{quantile="0.75"} 4.5858e-05 go_gc_duration_seconds{quantile="1"} 6.8148e-05 go_gc_duration_seconds_sum 0.001682561 go_gc_duration_seconds_count 41 # HELP go_goroutines Number of goroutines that currently exist. # TYPE go_goroutines gauge go_goroutines 7 ...... Is it some issue in the Prometheus Node Exporter plugin? (See the connectivity-check sketch after this list.)
  3. How do I install this awesome dashboard? I probably missed something, but I don't see a simple installation guide.
  4. Thank you, updated, and it seems to be working; unassigned ZFS disks do not spin up anymore. 🤝
  5. Thanks. By the way, BTRFS has the same features but never spins up (I also have BTRFS on unassigned disks).
  6. Hi! I already posted my question in the ZFS plugin thread, but the author told me it is an issue with the Unassigned Devices plugin. How can I fix it? Unassigned ZFS disks automatically spin up when I open the Main tab in Unraid, while all the other ZFS disks (in the array) keep sleeping.
  7. 1. Yes, in the pool I have one ZFS drive and it is sleeping. 2. No. 3. No. And the disk spins up only when I open the Unraid Main tab. I can manually spin the disk down on the Main tab, and in a few seconds it will spin up again.
  8. Can anyone tell me why, when I have a ZFS drive in Unassigned Devices, this drive never spins down?
  9. Sorry, but how did you install OpenCL? And which one? I googled and there are many options.
  10. Found one more issue - invalid character '[' in name. Such symbols need to be removed first: find . -depth -type d -name '*\[*\]*' -execdir bash -c 'mv -- "$1" "${1//[\[\]]/}"' bash {} \;
  11. I thought I needed a script that should be executed to run the sanoid jobs. Yesterday I used his script; it works well except for one issue: when I want to replicate a single child dataset it fails, because I provide a path like "nvme/downloads/Kopia" and the script doesn't expect a second "/" in the path and can't parse it. It is easy to fix: in the destination path just replace the second "/" with "_" by regex (for example). But honestly I don't like his script because it is overengineered: there are many conditions and verifications like "is sanoid installed, is ZFS installed", although it is parameterized well. In my opinion the script should be much simpler: take only 2 arguments (source and destination), create snapshots and a replica of the source dataset according to the policy and the retention logic, send it to the destination, and send a notification. It should be just a few commands - one script that I can keep in a local folder and call from any terminal or from scheduler scripts, like this: zfs_snap_replica.sh "cache/docker" "disk13/backups/docker" (a rough sketch of such a script is after this list). I will probably create such a script in the next few days, when I am free.
  12. Please correct me if I am wrong, but Sanoid provides just some kind of syntactic sugar over pure ZFS, so even after installing Sanoid I need to write a script that will send snapshots to the backup storage. (See the syncoid sketch after this list.)
  13. Thank you! That is exactly what we need for this feature. Everything else - agreed and understood.
  14. This is a great update! Thank you! But it would also be great to automatically remove the tmp folders after the rsync command is done. For example, I selected a folder and decided to convert it; a dialog with progress appeared and then closed, and I checked that only 20GB had been copied to the new dataset (it should be 80GB). After 10-20 minutes I checked again and saw that the new dataset has all 80GB of data, but the tmp folder also still holds 80GB of data. So, as a user, I don't understand whether the rsync job finished successfully or not, and I don't understand why I still have the tmp folder. So now I have to execute one more rsync command manually with dry-run to be sure that everything was transferred (sketch after this list).
  15. Did you mean this https://docs.oracle.com/cd/E18752_01/html/819-5461/gbchx.html ?
  16. Thank you. It seems the best way to back up is to not create child datasets and just use regular folders, or to write a script that sends all the children one by one.
  17. I have a question; maybe I don't understand how I should work with the plugin, but I expected that when I create a snapshot of a dataset it would include all the datasets inside it. Instead I can see that when I created (in my case) a snapshot of the docker dataset, it takes 0 space and all its folders are empty, while each dataset inside docker gets its own snapshot. It doesn't matter whether I check "Recursively create snapshots of all descendent datasets" or not, the nvme/docker snapshot is empty every time. I noticed this because I executed the command zfs send nvme/docker@{my_snapshot} | zfs recv disk3/zfs_backups/dockers, saw that all the folders in "disk3/zfs_backups/dockers" were empty, and started my investigation. (See the zfs send -R sketch after this list.)
  18. None of them, but maybe after a reboot I will be able to remove it.
  19. 1. Yes it has, but in some tutorial for your plugin I saw this "scrub" button, and I don't see where it was removed according to the latest changelog. 2. Thank you, activated, but after destroying I see: 3. No Refresh works only for array disks; I don't know why, but unassigned ones will never spin down.
  20. Thanks, got that. Also, I have a few questions: 1. I don't see a "scrub" button in the plugin - was it removed, or does some setting activate it? I use version 2023.12.08.48. 2. Is it possible to remove a dataset via the plugin? It is easy to create one, but for removing I have to do it in the CLI (see the zfs destroy sketch after this list). BTW: "No Refresh" doesn't affect Unassigned disks, so I just formatted them to btrfs and xfs.
  21. I would like to restore a few containers, but I don't see them in the list, though I can see them in the templates, for example qbittorrent. I checked the last 4 backup files and did not find them.
  22. And one more question, maybe I did not get it: how can I manually open a snapshot? I removed a few folders by mistake and would like to restore them. (See the .zfs/snapshot sketch after this list.)
  23. I found weird behavior: if I have Unassigned Devices formatted as ZFS, they will never spin down. It seems the 'No Refresh' setting works only for array ZFS drives.
  24. Found the issue - increased the Docker timeout from 10s to 45s and the issue is gone.
  25. Could someone help me resolve the same issue? I upgraded and saw a "Reboot" link. After the reboot I see this message: And now a parity check has started, running for a few days... And this is not the first time; almost every reboot or shutdown triggers a parity check. I don't want to have it every time, because in winter the server could be turned on and off every day due to issues with our electricity infrastructure. Which log do I have to check, and where is it?
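
A minimal connectivity check for item 2, not from the original thread: "no route to host" is reported from wherever Prometheus itself runs (often a Docker container), so the same request should be tried from there rather than from a desktop browser. The container name "prometheus" is an assumption.

    # From the Unraid shell: confirm the exporter answers at all.
    curl -s http://myIP:9100/metrics | head
    # From inside the Prometheus container (name is a placeholder; busybox-based images ship wget):
    docker exec prometheus wget -qO- http://myIP:9100/metrics | head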
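
A rough sketch of the two-argument script described in item 11, assuming plain zfs send/recv to a local destination pool and Unraid's notify helper; the file name zfs_snap_replica.sh, the snapshot prefix and the retention count are only illustrative.

    #!/bin/bash
    # zfs_snap_replica.sh SOURCE DEST
    # e.g. zfs_snap_replica.sh "cache/docker" "disk13/backups/docker"
    # Sketch only: snapshot the source recursively, send it (incrementally when a
    # common snapshot exists), prune old source snapshots, send a notification.
    set -euo pipefail

    SRC="$1"
    DST="$2"
    KEEP=7                                    # how many replica-* snapshots to keep (assumption)
    SNAP="${SRC}@replica-$(date +%Y%m%d-%H%M%S)"

    zfs snapshot -r "$SNAP"

    # Newest snapshot already present on the destination, for an incremental send.
    LAST=$(zfs list -H -t snapshot -o name -s creation -d 1 "$DST" 2>/dev/null | tail -n 1 | cut -d@ -f2 || true)

    if [ -n "$LAST" ] && zfs list -H -t snapshot "${SRC}@${LAST}" >/dev/null 2>&1; then
        zfs send -R -I "@${LAST}" "$SNAP" | zfs recv -F "$DST"
    else
        zfs send -R "$SNAP" | zfs recv -F "$DST"
    fi

    # Retention on the source: drop everything but the newest $KEEP replica-* snapshots
    # (-r removes the same snapshot from the child datasets as well).
    zfs list -H -t snapshot -o name -s creation -d 1 "$SRC" | grep '@replica-' | head -n -"$KEEP" \
        | xargs -r -n 1 zfs destroy -r

    # Unraid's notification helper (-s subject, -d description; path may differ between versions).
    /usr/local/emhttp/webGui/scripts/notify -s "ZFS replica" -d "$SRC -> $DST done"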
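
For item 12: as far as I understand, Sanoid itself only takes and prunes snapshots according to /etc/sanoid/sanoid.conf; the sending part is handled by its companion tool syncoid. A minimal sketch, assuming both are installed and the target is a local pool:

    # one-off (or scheduled) replication of a dataset to another pool
    syncoid nvme/downloads disk13/backups/downloads
    # -r / --recursive also replicates child datasets such as nvme/downloads/Kopia
    syncoid -r nvme/downloads disk13/backups/downloads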
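
The manual check mentioned in item 14, as a sketch; the paths are placeholders, not the plugin's actual tmp-folder naming:

    # -a archive, -v verbose, -n dry-run: only report what would still be copied.
    # --delete additionally lists files that exist only in the destination.
    rsync -avn --delete /mnt/cache/appdata_tmp/ /mnt/cache/appdata/
    # an empty file list (just the summary) means source and destination already match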
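
A sketch of what item 17 runs into, with an illustrative snapshot name: a plain zfs send of the parent carries only the parent dataset, so the child datasets arrive empty; a replication stream includes the descendants.

    # take the snapshot recursively so every child dataset gets the same @name
    zfs snapshot -r nvme/docker@backup1
    # -R sends a replication stream: all descendant datasets and their snapshots
    zfs send -R nvme/docker@backup1 | zfs recv -F disk3/zfs_backups/dockers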
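
For the CLI removal in item 20, a sketch (the dataset name is a placeholder; destroy is irreversible, so the dry-run line is worth running first):

    zfs destroy -nv nvme/docker/oldcontainer      # dry-run: show what would be removed
    zfs destroy nvme/docker/oldcontainer          # remove a single dataset (no children)
    zfs destroy -r nvme/docker/oldcontainer       # also remove child datasets and their snapshots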
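
For item 22: snapshots can be browsed read-only through the hidden .zfs directory under the dataset's mountpoint; the dataset, snapshot and mountpoint names below are placeholders.

    zfs set snapdir=visible nvme/downloads        # optional: makes .zfs show up in listings
    ls /mnt/nvme/downloads/.zfs/snapshot/         # one read-only folder per snapshot
    # copy back just the folders deleted by mistake:
    cp -a /mnt/nvme/downloads/.zfs/snapshot/mysnap/Kopia /mnt/nvme/downloads/
    # or roll the whole dataset back to that snapshot (discards anything newer):
    zfs rollback nvme/downloads@mysnap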