Posts posted by foo_fighter

  1. If you do an all-ZFS pool, you would need to add one drive to the array; it could be a USB drive.

    Another option is an all-ZFS array (all individual disks) with two parity drives. That would give you compression and avoid spinning up all the disks, but you would lose the self-repair option of a ZFS pool.

  2. 3 minutes ago, piyper said:

    I guess I was thinking that you could not format a drive while it was part of the array and you had to take it out of the array and once you did that, the parity would have to be rebuilt.

     

    I didn't know you could reformat a disk while it is still part of the array. I figured that even an empty drive would "compare" differently under two different formats, so as soon as you format the disk and put it back into the array, the parity (on both parity drives) would no longer match the data on the drives.

     

    I guess I am making an assumption about how parity is stored: if just one bit is different (say, from a different format), that part of the parity would not be correct anymore and Unraid would just force a parity rebuild.

     

    I apologize if I am missing something, and I do thank you for your patience and help.

    Yes, it's always good to 100% understand what you're doing. You do need to stop the array to make changes, but that doesn't invalidate the parity.

     

    Here is a great video that might help:

     

    You actually don't need the extra step of erasing and then formatting, but you can follow it as is.

     

  3. 14 hours ago, piyper said:

     

    Thank you for the suggestion. Unfortunately, my two parity drives are more than twice the size of the biggest data drive (planning for future upgrades), so that would force me to upgrade at least one of the data drives now, and Santa didn't bring me any new drives :(

     

    Then you would only need at most two rounds of conversions. If the total space used on all three ReiserFS disks is less than the repurposed parity disk's capacity, then only one round.

     

    Here's how you would do it:

    Convert parity2 to a data disk. (Meaning: temporarily change the config to a single-parity system, and add parity2, which has twice the space of any other drive, as a data disk.)

    For the above step I'm not 100% sure whether you need to preclear (write all zeros to) the second parity drive before adding it back into the array as a data disk. You may want to do it in two steps to be safe.

     

    Copy the contents of the ReiserFS disks (or the two largest) onto the new data disk.

    Verify all data (File Integrity plugin).

    Reformat the disks whose data has been duplicated to the new file system.

    Copy all of the data back.

    Verify all data (File Integrity plugin).

    Repeat once more if needed.

    Change the temporary data disk back to parity2.

    You'll still be protected by one parity disk the entire time, and since you're duplicating data rather than moving it, the risk is a bit lower.
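    One copy-and-verify round can be sketched with standard tools; the disk paths are illustrative (on Unraid the individual disks are mounted under /mnt/diskN):

    ```shell
    # Copy one ReiserFS disk's contents onto the repurposed data disk,
    # preserving permissions, times, and hard links. Paths are examples.
    SRC=/mnt/disk1    # ReiserFS disk being converted
    DST=/mnt/disk5    # new data disk (the former parity2)

    rsync -aH "$SRC"/ "$DST"/

    # Independent recursive byte compare before touching the source
    diff -r "$SRC" "$DST" && echo "copy verified"
    ```

    Only reformat the source disk after the verification pass comes back clean.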

     

     

  4. 15 hours ago, Gragorg said:

    So something to add to this.  In the SMART I posted helium level is showing as 64.  I just rechecked it and Attribute 22 it is now showing as 100.  Both show that it has never failed.  Could this be a bad sensor in the drive?  I assume I should be ok to leave it and just monitor it?

    64 hex = 100 decimal, so both readings show the same value; the sensor is fine.
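    The conversion is easy to check in a shell (SMART tools display some attributes in hex and some in decimal, which is where the two readings come from):

    ```shell
    # 0x64 hexadecimal equals 100 decimal, so both SMART readings agree
    printf '%d\n' 0x64    # prints 100
    ```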

     

  5. Bunker does not like spaces in dirnames when the dir is double quoted "/dir name/":

    /usr/local/emhttp/plugins/dynamix.file.integrity/scripts/bunker: line 91: $monitor: ambiguous redirect

    /usr/local/emhttp/plugins/dynamix.file.integrity/scripts/bunker: line 133: $monitor: ambiguous redirect

     

    but it seems okay with the space backslash-escaped, as in /dir\ name/.
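    This is classic bash behavior: a redirection target held in an unquoted variable gets word-split, and a path containing a space becomes an "ambiguous redirect". A minimal reproduction outside bunker (the $monitor name mirrors the script's variable; paths are throwaway):

    ```shell
    #!/bin/bash
    dir=$(mktemp -d)
    mkdir -p "$dir/dir name"              # directory with a space
    monitor="$dir/dir name/log.txt"       # mirrors bunker's $monitor variable

    # Unquoted: bash word-splits the path and the redirect target is ambiguous
    if ! { echo test > $monitor; } 2>/dev/null; then
        echo "unquoted redirect failed (ambiguous redirect)"
    fi

    # Quoted: the space is preserved and the redirect works
    echo test > "$monitor" && echo "quoted redirect ok"
    ```

    Quoting the variable everywhere it is used in a redirect is the usual fix, which is presumably why the backslash-escaped form happens to work.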

     

     

  6. @bonienl May I make an enhancement request?

    Some of us like to run bunker from the command line and redirect STDOUT to a log file.

    I'd like a mode similar to -q (suppress all output) that removes the "Scanning..." spinner output so bunker prints only relevant information, and that also strips escape characters, so the output can go into a plain, readable .log file.
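    Until bunker grows such a mode, the spinner and escape sequences can be stripped in a pipeline; a sketch with standard tools (strip_ansi is just an illustrative helper name, and the bunker invocation in the comment is an example):

    ```shell
    # strip_ansi: remove ANSI escape sequences and carriage returns so the
    # output is plain readable text. Use it like:
    #   bunker -a /mnt/user/share 2>&1 | strip_ansi > /boot/logs/bunker.log
    strip_ansi() {
        sed -e 's/\x1b\[[0-9;]*[A-Za-z]//g' | tr -d '\r'
    }

    # Demo on spinner-style sample output:
    printf 'Scanning \033[1m|\033[0m\rdone\n' | strip_ansi
    ```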

     

     

  7. Thanks. Nice script.

     

    A couple of comments:

    You may want to run bunker -a on the $src_path before running the rsync command.

    bunker doesn't seem to play nicely with piping STDOUT to a log file; it keeps dumping the "Scanning \-/" spinner into the log.

    Your bunker -v command works for your case, where everything is under /Test, but it wouldn't work in general for different destination directories. It will also verify everything under /Test even though you only rsync'd /Test/Agoria and /Test/Peacock; for example, if there were an existing /Test/HugeSizeDirectory, it would try to verify that too.

     

     

     

  8. On 10/31/2023 at 9:36 AM, johnsanc said:

    I am cleaning up my array and switched to BLAKE3. I want to ensure ALL files have a BLAKE3 hash and ideally wipe out all the older hashes since I have no use for them. What is the best way to do this?

     

    I noticed that the Remove action doesn't actually remove old hashes, it only seems to remove the timestamps + whatever hash is currently selected. Is this a bug or by design?

     

    Based on my current observations it seems the only way to start from a clean slate is to do a Clear and Remove on every disk for every hashing method. 

    Use something like: find . -exec setfattr --remove=user.md5 {} \;
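    A sketch of that clean-slate route from the shell. The attribute names are assumptions based on bunker's user.md5 convention (user.sha256 here is hypothetical; check one of your own files with getfattr -d first), and it's worth testing on a single directory before running it disk-wide:

    ```shell
    # Remove old hash xattrs from every file under the current directory,
    # leaving the BLAKE3 attribute in place. The attribute names user.md5
    # and user.sha256 are assumptions; verify with getfattr -d first.
    # setfattr exits non-zero for files lacking the attribute, so errors
    # are silenced.
    for attr in user.md5 user.sha256; do
        find . -type f -exec setfattr --remove="$attr" {} \; 2>/dev/null
    done
    ```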

  9. 6 hours ago, Oublieux said:

    Hi everyone, I set up DFI for the first time today. It seems straightforward enough, but I seem to be running into an issue (?) that is puzzling me:

    • I hit "Build" to start generating BLAKE3 hash files for all four of my disks at once.
    • Two of the disks are completed and the remaining two are still in the build phase.
    • I assumed that the hash files for the two disks that completed would be present, but clicking the "hash files" icon that opens the directory shows that there are no files at all.
       

    Has anyone run into this before? Or, should I be waiting for the build to complete across all four disks first?

     

    Hashes are stored as xattrs on the files themselves; try getfattr -d filename. You can also export them to files on the boot USB.

     

  10. On 9/8/2023 at 2:44 AM, eicar said:

     

     

    In any case: would anyone know how to solve the latter two issues? (SATA power supply + enclosure?)

     

    You would use a JBOD or a picoPSU with a couple of 5-in-3 SATA hot-swap backplanes. Here is a similar setup:

     

     

  11. It's pretty transparent. If you have an older drive or SSD and a spare port, you might as well try it. In the share's settings you can set the preferences for "Movies": new files will first be copied to the cache, and later the mover will move them from cache to the array. The entire time, everything appears as if it is under the "Movies" share. You'll speed up transfers to the limit of your network connection instead of your array's write speed.

    In your case, it might not have any benefits. 

  12. Unraid can run on Terramaster and Asustor (x64) hardware without too much hackery. It comes by default on the Lincstation.

    For the Lincstation I was going to add 3.5" drives via a USB 10 Gb/s JBOD; 10 Gb/s should be enough for 4 drives. Either that, or use SATA extension cables to a 4- or 5-bay SATA hot-swap backplane (that's admittedly a bit of a hack).

     

    Speaking of AOOSTAR, they announced 2 new NAS devices, available soon: https://aoostar.com/blogs/news/aoostar-pro-4-bay-nas-with-n100-n305-5700u-cpu

     

    The TPU can be installed in an E-key slot, right? Some of these systems come with Wi-Fi/BT as an E-key module.

     

  13. 16 hours ago, JorgeB said:

    Btrfs and ZFS create checksums based on blocks, not files; to verify them you need to run a scrub, you cannot use an external utility.

    You could run File Integrity / bunker on top of that and check the xattrs, or use something like SnapRAID.

    Rsync has its own --checksum switch, but it slows down the process dramatically.

    If backing up ZFS to ZFS, you should investigate syncoid (part of sanoid) or znapzend.
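    To illustrate what the --checksum switch changes for an rsync-based backup (paths are examples):

    ```shell
    SRC=/mnt/user/share/      # example source
    DST=/mnt/backup/share/    # example destination

    # Default quick check: compares file size + modification time only
    rsync -a "$SRC" "$DST"

    # --checksum (-c): reads and hashes every file on both sides; much
    # slower, but catches silent changes that left size and mtime intact
    rsync -a --checksum "$SRC" "$DST"
    ```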

     

    Here is a similar thread: 

     
