unraidfan

Everything posted by unraidfan

  1. Right @primeval_god, but apparently SnapRAID is a more refined process and includes it in the parity storage, from what I'm hearing?
  2. Yeah, it would be so nice. I was heartbroken when I discovered this was missing. I had already installed it on two servers and was really falling in love with the overall UX, and of course the UI. I'm actually rather sick over this right now. TrueNAS SCALE is pretty sexy and I like its Chia functionality and integration, but man, it just doesn't fit my budget when it comes to expanding drives. I think the only reasonable solution for the time being is Proxmox + OpenMediaVault (as an LXC container or a full VM, depending on whether I can get bind mounts working in LXC) + SnapRAID + MergerFS. I'm scared just typing all of those strung together, since I really wanted a cohesive UI to minimize config and headaches, but that's where we're at... 😭
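
     For reference, this is roughly the bind-mount piece I'd have to get working if OMV ends up in an LXC container rather than a VM. It's a sketch only; the container ID and paths below are placeholders, not a tested recipe:

          # sketch: expose a host path from Proxmox into the (hypothetical) OMV container 100
          pct set 100 -mp0 /mnt/pool/media,mp=/srv/media
          # equivalently, a line like this could go in /etc/pve/lxc/100.conf:
          #   mp0: /mnt/pool/media,mp=/srv/media
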
  3. I do agree that ZFS is a fantastic filesystem. I have used it for smaller client arrays for roughly 11 years and it has never failed to deliver rock-solid integrity, even through various crashes. But I would like a lighter option than raidz*. SnapRAID appears to hit a happy medium where you can maintain integrity similar to raidz[1,2,3], just at a reduced frequency. That is great for archival situations where you run hourly or daily syncs instead of having "always up-to-date" protection. I can tolerate losing an hour of changes or additions to my storage array, but I can't tolerate invisible and unrecoverable loss.
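
     To make the "reduced frequency" part concrete, this is the kind of schedule I have in mind. It's only a sketch; the timings, log path, and scrub percentage are examples, and running a sync while files are actively changing has its own caveats:

          # /etc/cron.d/snapraid -- illustrative schedule, not a tested config
          # hourly sync updates parity to cover anything added in the last hour
          0 * * * *   root  snapraid sync >> /var/log/snapraid.log 2>&1
          # weekly scrub re-reads ~10% of the array and verifies it against the stored hashes
          30 3 * * 0  root  snapraid scrub -p 10 >> /var/log/snapraid.log 2>&1
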
  4. I don't really think it's fair to imply laughability by using the term "bogeyman" (although your use of it is cute). Furthermore, using ZFS in conjunction with Unraid is a bit of overkill in light of SnapRAID. SnapRAID fits Unraid's "light" expandability philosophy and is therefore a much more natural match for the system: bit-rot becomes a non-issue, and recovery of files is possible at any point (even when the cause isn't bit-rot), limited only by how often SnapRAID is run. This seems to fit Unraid perfectly and feels like something Unraid should have adopted already, so I'm very surprised it's not integrated yet, or even available as a plugin.
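
     For the recovery side I'm referring to, my understanding is that it boils down to commands along these lines (the file path is a placeholder; I haven't tried this against an Unraid array myself):

          # re-check the array and report any files whose hashes no longer match
          snapraid scrub
          # rebuild just the blocks/files that scrub flagged as having errors
          snapraid -e fix
          # or restore one specific file from parity
          snapraid fix -f path/to/damaged/file
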
  5. I've recently learned that SnapRAID is a solution that can recover from bit-rot in addition to other failure scenarios. It has been interesting to see people running it hourly with OpenMediaVault + MergerFS to accomplish goals very similar to Unraid's. I have to say, I really like Unraid. The polish is great, and the other functionality for VMs and containers like Docker is really icing on the cake... Can we run SnapRAID on top of Unraid to set up an anti-bit-rot regimen? I'm not going to get into a debate about whether bit-rot happens or how critical it is. I have enough evidence from engineers in the industry to convince me that if I had unlimited money and resources I'd just go with raidz3 on TrueNAS SCALE and be done with this entirely, but I don't, I'm here, and I'd really like to see if I can make this work. Has anyone successfully done this? For reference, here's a document describing the OMV + SnapRAID + MergerFS setup: https://www.michaelxander.com/diy-nas/ and another, via archive.org, from a site that's now down: https://web.archive.org/web/20210308170014/https://www.networkshinobi.com/snapraid-and-mergerfs-on-openmediavault/
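
     To show what I'm imagining on the Unraid side, here is a minimal snapraid.conf sketch. It assumes the data disks are the usual /mnt/diskN mounts and that the SnapRAID parity file lives on a separate disk outside the Unraid array (e.g. an unassigned device); I haven't verified that this plays nicely with Unraid's own parity:

          # /etc/snapraid.conf -- illustrative only, paths are assumptions
          parity  /mnt/disks/snapparity/snapraid.parity
          content /mnt/disk1/snapraid.content
          content /mnt/disk2/snapraid.content
          data d1 /mnt/disk1/
          data d2 /mnt/disk2/
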
  6. With ZFS one could use the (perhaps dreaded) `copies=2`, which stores two copies of every block. Definitely suboptimal, but something has to be done; otherwise I don't see Unraid as tenable right now 😭 This makes me very, very sad after spending an inordinate amount of time getting it to work on the servers I have.
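
     In case anyone wants the concrete incantation, this is all I mean (the dataset name is a placeholder; note it roughly doubles the space used and still doesn't protect against losing the whole disk):

          # store two copies of every block in this dataset
          zfs set copies=2 tank/archive
          # or set it at creation time
          zfs create -o copies=2 tank/archive
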
  7. While doing a preclear of my drives for my setup, I thought I'd do a little reading about the safety precautions for bit-rot that Unraid might have in place. I was horrified to see that there is no functionality to use the (2) Parity drive(s) to scrub and repair silent corruption. I have seen debates arguing that memory errors (i.e., lack of ECC) are the real source of such corruption (which I do not believe, based on discussions with engineers who worked at Sun and Apple, extensively in this space with ZFS in its early days, and whose evaluations of many other scenarios add up to a big "NO" to that line of thinking - bit-rot unequivocally exists and happens), but in the end the result is that Unraid simply doesn't appear to facilitate silent-corruption detection + repair.

     Even a tool like the Dynamix File Integrity plugin doesn't solve the problem, since it doesn't let you recover a corrupted file. What if your backup(s) captured a corrupted snapshot and you simply no longer have an uncorrupted copy anymore? I suppose BTRFS could be used on an individual-disk basis, but again, how do you recover things quickly enough to avoid all the bad scenarios? I don't see any automated solution there, but it would be nice if one exists. If it does, I would definitely use BTRFS instead of XFS for this sanity saver.

     To state this even more clearly, I would LOVE to see ZFS used in conjunction with Unraid to facilitate this perpetual scrubbing and recovery! 💙 But alas, it doesn't appear that this exists... Anyone have a solution? I'm willing to buy an additional drive or two to facilitate perpetual scrub-and-recover if I could just keep using Unraid for its expandability, but at this rate I just cannot tolerate the risk of silent corruption and will have to go with a ZFS-based solution... 😿
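
     On the BTRFS idea, the closest I can see to a manual regimen on a single data disk is something like the commands below. As far as I understand, with the default single-copy data profile a scrub can only detect data corruption, not repair it, which is exactly my worry (the path is just an example):

          # read and verify every block on one array disk, reporting checksum errors
          btrfs scrub start -B /mnt/disk1
          # review the results afterwards
          btrfs scrub status /mnt/disk1
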
  8. I have a lot of highly compressible data that takes up an enormous amount of space. Is there a way to get transparent, integrated compression on Unraid shares? I was hoping to use compression without managing it manually, since there are so many of these files.
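
     To be clear about what I mean by "transparent": something like the per-path compression btrfs offers would be ideal. A rough sketch of the sort of thing I'm after, assuming the underlying data disk were formatted btrfs rather than XFS (the directory is just an example, and I don't know whether Unraid exposes any of this in its UI):

          # mark a directory so new writes under it are compressed with zstd
          btrfs property set /mnt/disk1/archive compression zstd
          # recompress existing files in place
          btrfs filesystem defragment -r -czstd /mnt/disk1/archive
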
  9. I'm having this same problem with an R710. I have two of them: one boots unRAID 6.9.2 in BIOS (NOT UEFI) mode fine, but the other R710 will not boot unRAID 6.9.2 at all, even with a BIOS-mode USB flash drive that works on the other R710! The strange part is that the BIOS settings appear to be configured IDENTICALLY, so there's clearly something I'm missing somewhere.
  10. Thank you, so it appears as though the two Parity drives are essentially identical and simply offer redundancy in case one goes bad. I was looking for an FAQ but can't seem to find one, so to understand: how many drives, of any size, can fail at one time before data is lost? Only one, or more?
  11. Thank you @itimpi for your remarks! I find the point about the device identifier changing very interesting. I thought that USB drives like this had a GUID that could always identify them. Bootloaders such as GRUB, OpenFirmware, OpenCore, and so on rely on this information and reliably boot from USB drives even if they change ports... so I find your remark very curious and would like to hear a developer chime in on this topic!

     Since we're specifically talking about a Parity drive first, I have to believe it won't impact read/write performance or create other headaches. Furthermore, I would think the SSD cache might further reduce any impact, if there is one. Even if the USB device disconnects, it should recover rapidly, and since the system continually checks consistency I don't think corruption would occur, or at least not in any significant amount (?)... Again, thank you; I find this very interesting. It does sound like a STORAGE device on the Array would be more problematic, so I should probably avoid that.

     One more point I'm confused about: how many Parity drives can a system have? I thought that the more Storage drives it has, the more Parity drives it would or should need. Is 2 the only number of Parity drives it will ever have, or does that count go up as more data drives are plugged in? I keep thinking that USB-attached Parity drives seem almost ideal, since they're easy to replace if and when they go bad (though I suppose always keeping an internal one would be sane).
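
     On the identifier question, here's how I've been checking what the drive looks like to Linux. My assumption (possibly wrong!) is that the by-id symlink, which follows the drive's serial number, stays stable across ports and reboots even when the sdX name changes:

          # list persistent identifiers for USB-attached disks
          ls -l /dev/disk/by-id/ | grep -i usb
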
  12. Is it reasonable as a backup parity drive, as in this case? Could you explain what my expectations should be from the following? Obviously in this case I have ONLY used the USB drive for Parity and not Storage, though I was planning to add storage over USB 3 as well at some point... But yeah, I'd like to know what I should expect with this, and then perhaps we can expand the discussion to storage once we're clearer on the big picture and any bottlenecks or problems that might arise, performance-related or otherwise...
  13. Gee, I just tried it and it's letting me do it for a parity device... Are you differentiating Parity from Array and Pool even though Parity devices are listed under Array?
  14. I have a server with 6 internal drive bays and I'd like to attach a few more drives via USB for both storage and parity. Has anyone seen problems with doing this, as long as the USB drives are on the same UPS as the server so outages affect everything together? I plan on using a couple of SSDs or an F80 for cache, so I'm thinking this might be okay...
  15. Has anyone had success cleanly migrating exported Hyper-V virtual machines over to KVM on unRAID? If so, could you please share the steps you followed to make a clean conversion? So far, others seem to report scattered problems and failed migration attempts.
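
     For the disk image itself, my assumption is that the conversion step would look roughly like this (filenames are placeholders, and I expect the guest would still need virtio drivers and some VM template tweaking afterwards), but I'd love to hear what actually worked for people:

          # convert the exported Hyper-V VHDX into a qcow2 vdisk for KVM/QEMU
          qemu-img convert -p -f vhdx -O qcow2 exported-vm.vhdx vdisk1.qcow2
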