foo_fighter

Everything posted by foo_fighter

  1. @bonienl May I make an enhancement request? Some of us like to run bunker from the command line and pipe STDOUT to a log file. I'd like a mode similar to -q (suppress all output) that removes the "Scanning..." spinner output so only relevant information is printed, and that also strips escape characters so the output can go into a plain, readable text file for a .log. In the meantime I'm filtering it with something like the sketch below.
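     A possible stopgap (untested; it assumes GNU sed and that the spinner is drawn with ANSI escape sequences and carriage returns; the path is a placeholder):

       # Strip ANSI escape sequences and carriage returns before logging
       bunker -a /mnt/disk1/share 2>&1 | sed -r 's/\x1B\[[0-9;]*[A-Za-z]//g' | tr -d '\r' > bunker.log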
  2. Thanks, nice script. A couple of comments: you may want to run bunker -a on the $src_path before running the rsync command. bunker doesn't seem to play nicely with piping STDOUT to a log file; it keeps dumping the "Scanning \-/-" spinner into the log. Your bunker -v command works for your case, where everything is under /Test, but it wouldn't work in general for different destination directories. It will also verify everything under /Test even though you only rsync'd /Test/Agoria and /Test/Peacock; for example, if there's an existing /Test/HugeSizeDirectory, it would try to verify that too. A sketch of what I mean is below.
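     Roughly what I have in mind (just a sketch; the source and destination paths are placeholders, and bunker's exact option handling may differ):

       # Hash the sources before copying
       bunker -a /mnt/user/Agoria
       bunker -a /mnt/user/Peacock

       # Copy with xattrs preserved so the hashes travel with the files
       rsync -aX /mnt/user/Agoria /mnt/user/Peacock /mnt/backup/Test/

       # Verify only the directories that were actually copied, not everything under /Test
       bunker -v /mnt/backup/Test/Agoria
       bunker -v /mnt/backup/Test/Peacock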
  3. You can preclear it over USB; with USB 3.0 it'll run just as fast as it would internally.
  4. Use something like: find . -exec setfattr --remove=user.md5 {} \;
  5. Hashes are stored as xattrs on the files. Try getfattr -d filename. You can also export them to files on the boot USB; a quick example is below.
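     For example (the paths are placeholders, and the attribute name depends on which hashing method the plugin is set to use):

       # Show the extended attributes, including stored hashes, for one file
       getfattr -d /mnt/disk1/Movies/example.mkv

       # Recursively dump the xattrs for a whole disk share to the boot USB
       getfattr -R -d /mnt/disk1/Movies > /boot/config/hashes_disk1.txt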
  6. I set zfs_master refresh interval to "no refresh" and that did seem to help.
  7. Have there been any updates on this? Whenever I refresh Main, it first reads my one ZFS disk and then also writes to it, so the parity drive spins up as well. I only have one ZFS disk in the array; all the others are XFS.
  8. No, unfortunately rebuilding the drive will restore everything, including the filesystem. If you're risk tolerant, you could convert one of the parity drives to a data drive, transfer data to it, rinse and repeat, then move back to dual parity when you're done.
  9. PhotoPrism, PhotoStructure (heif-convert), or icloudpd can do the conversion.
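     For a one-off conversion you can also call heif-convert (from libheif) directly; the filenames here are placeholders:

       # Convert one file
       heif-convert IMG_0001.HEIC IMG_0001.jpg

       # Batch-convert a folder
       for f in *.HEIC; do heif-convert "$f" "${f%.HEIC}.jpg"; done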
  10. Do you have snapshots of that data? Disk space won't be freed until all snapshots referencing it are deleted as well.
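     You can check like this (the dataset and snapshot names are placeholders):

       # List snapshots and how much space each one is pinning
       zfs list -t snapshot -o name,used,refer

       # Destroy a snapshot once you're sure it's no longer needed
       zfs destroy pool/dataset@snapname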
  11. You could use a SAS expander JBOD with another 8i HBA: https://www.pc-pitstop.com/8-bay-expander-tower-trayless Or possibly a USB-C JBOD: https://www.amazon.com/QNAP-TL-D800C-Desktop-Enclosure-Connectivity/dp/B086WCRFQ3?ref_=ast_sto_dp&th=1 https://www.amazon.com/Syba-Swappable-Drive-External-Enclosure/dp/B07MD2LNYX/ref=sr_1_5?crid=1R2191VVMMSCR&keywords=8%2Bdrive%2Bjbod&qid=1702435563&s=electronics&sprefix=8%2Bdrive%2Bjbod%2Celectronics%2C109&sr=1-5&th=1
  12. Are you going from PC -> Unraid? FreeFileSync (or one of the many GUI sync applications) or rsync (command line) on Unraid would work.
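     A minimal rsync example, assuming SSH access to the server (the hostname and paths are placeholders):

       # Push a folder from a Linux/WSL machine to an Unraid share
       rsync -avh --progress /home/user/Documents/ root@tower:/mnt/user/backup/Documents/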
  13. I have a similar setup. I don't think it matters, but I'd also like to know if anyone has tried to optimize their setup. 300 MB/s for SATA gen 2 is more than the sustained read/write speed of your spinning disks.
  14. You could use a JBOD, or a PicoPSU with a couple of 5-in-3 SATA hot-swap backplanes. Here is a similar setup:
  15. It's pretty transparent. If you have an older drive or SSD and a spare port, you might as well try it. In the share settings, you can set the preferences for "Movies." It will first copy to the cache, and later the mover will move it from cache to the array; the entire time, it will appear as if it is under the "Movies" share. You'll speed up transfers to the limit of your network connection instead of your array write speeds. In your case, it might not have any benefit.
  16. Unraid can run on Terramaster and Asustor (x64) without too much hackery, and it comes by default on the Lincstation. For the Lincstation I was going to add 3.5" drives via a USB 10Gb/s JBOD (10Gb/s should be enough for 4 drives), or else use SATA extension cables to a 4- or 5-bay SATA hot-swap backplane (that's admittedly a bit of a hack). Speaking of AOOSTAR, they announced 2 new NAS devices, available soon: https://aoostar.com/blogs/news/aoostar-pro-4-bay-nas-with-n100-n305-5700u-cpu The TPU can be installed in an E-key slot, right? Some of these systems come with Wi-Fi/BT as an E-key module.
  17. If you have the Mover Tuning plugin: I had to change the threshold from 0% to 5% to get it to work. I'm not sure if it was the act of saving the settings or the change from 0 to 5.
  18. You could run File Integrity / bunker on top of that and check the xattrs, or use something like SnapRAID. rsync has its own --checksum switch, but it slows down the process dramatically. If backing up ZFS to ZFS, you should investigate syncoid (part of sanoid) or znapzend. Here is a similar thread:
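     For the ZFS-to-ZFS case, syncoid is about this simple (the pool/dataset names and hostname are placeholders):

       # Replicate a local dataset to a backup pool
       syncoid tank/photos backuppool/photos

       # Or push it to a remote box over SSH
       syncoid tank/photos root@backupserver:backuppool/photos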
  19. Both PhotoPrism and PhotoStructure have options to leave the photos in place. I tried both and stuck with PhotoPrism. I still have PhotoStructure but just disabled the Docker.
  20. The Lincstation N1 is currently €256 on Indiegogo, €322 if you wait for the release. I'm very tempted to get one and do some mods/hacks, like running 3.5" drives from the 2 SATA ports, possibly more with an M.2-to-SATA converter. I know this defeats the purpose of a "silent" NAS, but I'd rather have more storage. Cool 3D print, BTW.
  21. OP: have you looked at any of the pre-built N5105 systems? Asustor, Terramaster, or even the Lincstation N1 with the included Unraid license?
  22. Let's say I only have 2 identical disks. Both Unraid parity with 1 ZFS disk and a zpool would be considered "mirrors," with the same information stored identically on both drives. But it seems like there are some subtle differences:
      - A zpool has bitrot protection (a scrub would auto-fix it), but Unraid parity does not (you'd need to restore from backup if the File Integrity plugin indicates mismatches)?
      - Both can provide ZFS snapshots.
      - Spin-up: with Unraid parity, reads only require 1 drive to spin up. Would a zpool require both drives to spin up?
      - Expanding: Unraid parity is easier to expand with extra drives without touching existing data. I'm not an expert, but it seems like you cannot easily change a ZFS mirror to RAIDZ1 (parity)? You have to destroy and rebuild the pool? (Sketch below.)
      - A build with zpools only needs a dummy parity drive assigned (could be USB)?
      - Mover can move from a cache pool to the array. Can it also now be configured to move from 1 pool to another pool?
      Any other pros/cons/differences between the 2?
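     On the expansion point, this is how I understand it (device and pool names are placeholders): a two-way mirror can be grown in place, but there's no in-place mirror-to-RAIDZ conversion.

       # Add a third leg to an existing two-disk mirror
       zpool attach tank /dev/sdb /dev/sdd

       # Converting mirror -> RAIDZ1 means recreating the pool from a backup
       zpool destroy tank
       zpool create tank raidz /dev/sdb /dev/sdd /dev/sde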
  23. I've had this same issue, and it turned out to be bad memory. Running memtest found it, and after replacing the RAM, the USB drive corruption went away, along with other issues.