Matthew_K

Members · Posts: 28
Everything posted by Matthew_K

  1. Sorry to hijack this thread, but I ran into an interesting issue. I am in the process of setting up a new system. I set up my cache drive, made sure that everything was working, and mounted and formatted the drive with btrfs. Then I swapped the hardware as soon as it became available. Now when I look at the system, it tells me that the cache is healthy and active, but the mount says "No file system". If I go to /mnt/cache it also tells me that location does not exist. Looking at the error logs:

     Jul 22 10:16:01 Tower kernel: ata2.00: exception Emask 0x10 SAct 0x7000 SErr 0x4090000 action 0xe frozen
     ...
     Jul 22 10:27:22 Tower kernel: BTRFS error (device sdc1): parent transid verify failed on 26034176 wanted 56 found 19
     Jul 22 10:27:22 Tower kernel: BTRFS warning (device sdc1): failed to read fs tree: -5
     ...
     Jul 22 10:27:22 Tower emhttpd: /mnt/cache mount error: No file system

     So am I looking at a bad drive, a bad cable, or something else? I was mapping an NTFS drive to copy data to the Disk 1 partition and started and stopped the array in rapid succession. The drive does also have an uncorrectable error count of one.
  2. Does anyone remember Parchive? Back in the Usenet days, Par was used to recover corrupted or missing data from Usenet posts. The nice thing about Par was that it used the file(s) themselves to create parity and only required as many recovery blocks as there were missing blocks. Hypothetically speaking, if you had a 20 GB file with a 4 KB corruption inside it, you would only need a 4 KB par file to repair the original (assuming the bad data was within a single block). It was cool tech back then, and I am surprised that it never went anywhere. I was just curious what people think. I could see a background task that would reserve 1%-10% of each drive to create a Par set per file at 4 KB per block. Sure, the initial calculation would take a while, but considering that the majority of data never changes, and that this could be done on a per-file basis, I could see this as a way of adding partial Q-disk functionality without needing a second drive. Just my 2 cents.
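A minimal sketch of the per-file block-parity idea above, with toy data and hypothetical helper names. Note this is a simplification: real PAR2 uses Reed-Solomon coding, which can repair as many lost blocks as you have recovery blocks, while plain XOR parity below can only recover a single lost block.

```python
# Sketch: per-file parity at 4 KiB per block (simplified to XOR;
# PAR2 itself uses Reed-Solomon and supports multiple recovery blocks).
BLOCK = 4096

def split_blocks(data: bytes) -> list[bytes]:
    """Split a file's bytes into fixed-size blocks, zero-padding the last."""
    padded = data + b"\x00" * (-len(data) % BLOCK)
    return [padded[i:i + BLOCK] for i in range(0, len(padded), BLOCK)]

def xor_blocks(blocks: list[bytes]) -> bytes:
    """Byte-wise XOR of equal-sized blocks; this is the 'recovery' block."""
    parity = bytearray(BLOCK)
    for b in blocks:
        for i, byte in enumerate(b):
            parity[i] ^= byte
    return bytes(parity)

# Build one parity block for some dummy file data.
blocks = split_blocks(bytes(range(256)) * 100)
parity = xor_blocks(blocks)

# Lose one block, then recover it from the survivors plus parity.
lost_index = 3
survivors = blocks[:lost_index] + blocks[lost_index + 1:]
recovered = xor_blocks(survivors + [parity])
assert recovered == blocks[lost_index]
```

Since the parity is derived from the file's own blocks, unchanged files never need their Par set recomputed, which is what makes the background-task idea cheap after the initial pass.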
  3. I am new to Unraid, and I am looking for some answers about how best to configure a system. Right now I am running ESXi 6.7 with 1 TB of OS/VM space and 4x6 TB drives, all mapped as raw disks to a Windows Server 2016 VM. Obviously this is not the best setup, but it is what I kind of grew into over time. Based upon what I have read, Unraid acts as a JBOD, with each disk being a standard XFS or BTRFS partition, plus a parity drive that provides redundancy for the disks. Because of the way Unraid calculates the parity data, this allows for full redundancy across mismatched drives as long as the parity drive is the same size or larger.

     Enter KVM... As much as each file system tries to be everything to everyone, it always comes down to using the right file system for the right application. A good case would be VMs: BTRFS kind of sucks because of the overhead, but BTRFS is more feature-rich than, say, XFS or EXT4. It seems that Unraid is all or nothing for the array? Is it in the plans to allow mixed file systems, for the best possible performance for any given operation? I threw together quick diagrams, with the knowledge I have, of how this might work. Is this what was meant by multi-array support in the last poll for 2020?

     Partitioned Disk
     Dedicated Disk

     Finally, as someone who has had to deal with KVM snapshots in the past: when this feature is added, please, please make sure that each snapshot delta is stored as a separate file. This fixes so many issues, and on top of that, because the base OS tends to be the largest part of a VM, it should make creating parity of the base VM easier, because you only really need to do it once. However, this is from a person who tends to work off of deltas, merging the delta back into the head every once in a while and creating a new working delta. I don't think a lot of people know just how powerful this trick really is. Thank you for humoring me.
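To illustrate the parity claim above (full redundancy across mismatched drives as long as the parity drive is at least as large as the biggest data drive), here is a hedged toy sketch: it treats each "drive" as a byte string, pads smaller drives with zeros for the XOR, and rebuilds one failed drive from the survivors plus parity. This is my own simplification of the single-parity idea, not Unraid's actual implementation.

```python
# Toy model of single parity across mismatched data drives: parity is the
# byte-wise XOR of all drives, with shorter drives implicitly zero-padded,
# so the parity drive only needs to match the largest data drive.
def build_parity(drives: list[bytes]) -> bytes:
    size = max(len(d) for d in drives)
    parity = bytearray(size)
    for d in drives:
        for i, byte in enumerate(d):
            parity[i] ^= byte
    return bytes(parity)

def rebuild(failed: int, drives: list[bytes], parity: bytes) -> bytes:
    """Reconstruct one failed drive by XOR-ing the survivors into parity."""
    out = bytearray(parity)
    for idx, d in enumerate(drives):
        if idx == failed:
            continue
        for i, byte in enumerate(d):
            out[i] ^= byte
    return bytes(out)  # zero-padded past the failed drive's real size

drives = [b"A" * 6, b"B" * 4, b"C" * 8]   # toy mismatched "drive" sizes
parity = build_parity(drives)              # len(parity) == 8, the largest
lost = rebuild(1, drives, parity)
assert lost[:4] == drives[1]               # drive 1's data recovered
assert all(b == 0 for b in lost[4:])       # only zero padding past its end
```

Because parity is computed per byte position regardless of which file system sits on each disk, mixing XFS and BTRFS data disks would not, in principle, break this scheme, which is why the per-disk file-system question above seems plausible.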