Everything posted by foo_fighter

  1. A few ZFS points worth knowing:
       • ZFS snapshots take less space because they only store the deltas.
       • ZFS replication will be faster since it only sends the deltas.
       • The corollary is that deleting a file won't free any space until all snapshots referencing it are also deleted.
       • ZFS replication of an appdata/cache drive doesn't require shutting down and restarting Dockers.
       • ZFS doesn't have a built-in file repair system: it can detect bit-rot, but can only repair it in a pool with redundancy (mirror/raidz, etc.).
       • The ZFS Master plugin seems to spin up and write to drives often.
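     A minimal sketch of the snapshot-and-delta flow described above, assuming a hypothetical dataset cache/appdata replicated to a pool named backup (these names are placeholders, not from the post):

       # take a point-in-time snapshot; it only consumes space as data diverges
       zfs snapshot cache/appdata@monday
       # later, send only the delta between two snapshots to the backup pool
       # (the destination must already hold the @monday snapshot)
       zfs snapshot cache/appdata@tuesday
       zfs send -i cache/appdata@monday cache/appdata@tuesday | zfs receive backup/appdata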
  2. If you do an all-ZFS pool, you would still need to add one drive to the array; it could be a USB drive. Another option is a two-parity, all-ZFS array (all individual disks). That would give you compression and wouldn't require spinning up all the disks, but you would lose the self-repair option of a ZFS pool.
  3. In my opinion, the best way to do this is to use ZFS replication from a cache pool to another ZFS disk in the array or to a ZFS pool. There is a sanoid/syncoid video for this:
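     For reference, a local syncoid invocation along these lines might look like the following (dataset and disk names are placeholders; syncoid handles the snapshot/send/receive steps itself):

       # replicate the cache appdata dataset to a ZFS-formatted array disk
       syncoid cache/appdata disk1/backups/appdata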
  4. I'm also getting these "errors", and the last two YouTube videos I watched on Unraid also showed these errors scrolling by. I'm not experiencing any issues, but it seems this particular type of warning is pretty prevalent. Both drives and ports are USB 2.0.
  5. Yes, it's always good to 100% understand what you're doing. You do need to stop the array to make changes, but that doesn't invalidate the parity. Here is a great video that might help: You actually don't need the extra step of erasing and then formatting, but you can follow it as is.
  6. Then you would need at most two rounds of conversion; if the total space used on all three reiserFS disks is less than the one parity disk, only one round. Here's how you would do it:
       1. Convert parity2 to a data disk (meaning temporarily change the config to a single-parity system, and make parity2 a data disk with 2x the space of any other drive). I'm not 100% sure whether you need to preclear (write all zeros to) the 2nd parity drive before adding it back into the array as a data disk, so you may want to do it in two steps to be safe.
       2. Copy the contents of the reiserFS disks (or the two largest) onto the new data disk.
       3. Verify all data (Data Integrity plugin).
       4. Reformat the disks whose data is now duplicated to the new filesystem.
       5. Copy all of the data back.
       6. Verify all data (Data Integrity plugin).
       7. Repeat once more if needed.
       8. Change the temporary data disk back to parity2.
     You'll still be protected by one parity disk the entire time, and since you're duplicating data the risk is a bit less.
  7. Bunker does not like spaces in dirnames when the dir is double quoted ("/dir name/"):
       /usr/local/emhttp/plugins/dynamix.file.integrity/scripts/bunker: line 91: $monitor: ambiguous redirect
       /usr/local/emhttp/plugins/dynamix.file.integrity/scripts/bunker: line 133: $monitor: ambiguous redirect
     but it seems okay with the space backslash-escaped, as in /dir\ name/.
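     To illustrate, assuming the -a (add hashes) mode mentioned elsewhere in this thread and a placeholder path:

       # fails with the "ambiguous redirect" errors above
       bunker -a "/mnt/user/dir name/"
       # works, with the space backslash-escaped instead
       bunker -a /mnt/user/dir\ name/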
  8. @bonienl May I make an enhancement request? Some of us like to run bunker from the command line and redirect STDOUT to a log file. I'd like a mode similar to -q (suppress all output) that removes the "Scanning..." spinner output so only relevant information is printed, and that also strips escape characters, so the output can go into a plain, readable text file for a .log.
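     In the meantime, a possible workaround is to filter the spinner and escape sequences after the fact; a sketch assuming GNU sed and a placeholder share path:

       # strip ANSI escape sequences, backspaces and carriage returns from the log
       bunker -a /mnt/user/share 2>&1 | sed 's/\x1b\[[0-9;]*[A-Za-z]//g' | tr -d '\b\r' > bunker.log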
  9. Thanks, nice script. A couple of comments: you may want to run bunker -a on the $src_path before running the rsync command. bunker doesn't seem to play nice with piping STDOUT to a log file; it keeps dumping the "Scanning \-/-/" spinner into the log file. Your bunker -v command works for your case where everything is under /Test, but it wouldn't work in general for different destination directories. It will also verify everything under /Test even though you only rsync'd /Test/Agoria and /Test/Peacock; for example, if there's an existing /Test/HugeSizeDirectory, it would try to verify that too.
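     A sketch of the flow I'm suggesting, hashing before the copy and verifying only the directories that were actually transferred ($src_path/$dst_path follow the script's naming; note rsync needs -X so the hash xattrs travel with the files):

       # hash the source before copying
       bunker -a "$src_path"
       # copy, preserving extended attributes so the stored hashes come along
       rsync -aX "$src_path" "$dst_path"
       # verify only what was rsync'd, not everything under /Test
       bunker -v /Test/Agoria
       bunker -v /Test/Peacock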
  10. You can preclear it over USB; with USB 3.0 it'll run just as fast as it would internally.
  11. Use something like: find . -exec setfattr --remove=user.md5 {} \; (this strips the stored user.md5 xattr from everything under the current directory).
  12. Hashes are stored as xattrs on the files. Try getfattr -d filename. You can also export them to files on the boot USB.
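     For example (illustrative path and hash; the exact attribute name depends on the hashing method selected):

       $ getfattr -d /mnt/disk1/photos/example.jpg
       # file: mnt/disk1/photos/example.jpg
       user.md5="d41d8cd98f00b204e9800998ecf8427e"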
  13. I set zfs_master refresh interval to "no refresh" and that did seem to help.
  14. Have there been any updates on this? Whenever I refresh Main, it first reads my one ZFS disk and then also writes to it, so the parity drive spins up too. I only have one ZFS disk in the array; all the others are XFS.
  15. No, unfortunately rebuilding the drive will restore everything, including the filesystem. If you're risk tolerant, you could convert one of the parity drives to a data drive, transfer data onto it, rinse and repeat, then go back to dual parity when you're done.
  16. PhotoPrism, PhotoStructure (heif-convert), or icloudpd can do the conversion.
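     If you'd rather script it by hand, heif-convert (which ships with libheif) can be driven from find; a sketch with a placeholder path:

       # convert every HEIC under the share to a JPEG alongside the original
       find /mnt/user/photos -iname '*.heic' -exec sh -c 'heif-convert "$1" "${1%.*}.jpg"' _ {} \;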
  17. Do you have snapshots of that data? Disk space won't be freed until all snapshots are deleted as well.
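     You can check what the snapshots are holding and clean them up with the standard ZFS commands (pool/dataset names are placeholders):

       # USED shows the space unique to each snapshot, i.e. what destroying it alone would free
       zfs list -t snapshot -o name,used,referenced
       # delete a snapshot you no longer need
       zfs destroy pool/dataset@snapname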
  18. You could use a SAS expander JBOD with another 8i HBA: https://www.pc-pitstop.com/8-bay-expander-tower-trayless Or possibly a USB-C JBOD: https://www.amazon.com/QNAP-TL-D800C-Desktop-Enclosure-Connectivity/dp/B086WCRFQ3?ref_=ast_sto_dp&th=1 https://www.amazon.com/Syba-Swappable-Drive-External-Enclosure/dp/B07MD2LNYX/ref=sr_1_5?crid=1R2191VVMMSCR&keywords=8%2Bdrive%2Bjbod&qid=1702435563&s=electronics&sprefix=8%2Bdrive%2Bjbod%2Celectronics%2C109&sr=1-5&th=1
  19. Are you going from PC -> Unraid? FreeFileSync (or one of the many GUI sync applications) or rsync (command line) on Unraid would work.
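     For the rsync route, something along these lines (paths are placeholders; "tower" is only the default Unraid hostname):

       # push a folder from the PC to an Unraid share over SSH
       rsync -avh --progress /path/to/source/ root@tower:/mnt/user/backups/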
  20. I have a similar setup. I don't think it matters, but I'd also like to know if anyone's tried to optimize their setup. 300 MB/s for SATA Gen 2 is more than the sustained read/write speed of your spinning disks.
  21. You would use a JBOD, or a picoPSU with a couple of 5-in-3 SATA hot-swap backplanes. Here is a similar setup:
  22. It's pretty transparent. If you have an older drive or SSD and a spare port, you might as well try it. In the share settings, you can set the preferences for "Movies": a file will first be copied to the cache, and later the mover will move it from cache to the array. The entire time, it will appear as if it is under the "Movies" share. You'll speed up transfers to the limit of your network connection instead of your array write speeds. In your case, it might not have any benefits.