sunbear

Members
  • Posts: 103
  • Joined
  • Last visited
sunbear's Achievements

  • Rank: Apprentice (3/14)
  • Reputation: 10
  • Community Answers: 1

Recent Posts

  1. Any luck with this? I'm having the same issue. I think something is off in my smb-extra settings.
  2. Has anyone been able to get this to work with an ASROCK RACK X570D4U or X570D4U-2L2T? The previous plugin version had a patch that was supposed to support these boards, but it doesn't work with the new plugin. All readings and communication seem to be working, but when I run CONFIGURE for the fans it only detects ONE of my 5 fans, and fan control doesn't seem to be working.
  3. In other words, the same performance regardless of which path I use?
  4. Yes, I would be very interested! Thanks. I'm wondering what the GUI will show for a pool that I modify in this way. I assume capacity/usage calculations will still work correctly with the additional mirror (I guess the capacity would stay the same).
  5. Is there a specific path that I need to reference in order to get the supposed IO performance boost from using exclusive shares? In other words, do I need to use /mnt/pool/exclusive-share, or /mnt/user/exclusive-share, or does it not matter? (A quick way to check is sketched after this list.)
  6. I know that currently, if you already have a vdev of mirrors, the Unraid GUI will let you add an identical mirror vdev striped alongside it, increasing your pool capacity (I assume this is raid01 like you mention). I'm just not sure if it will let you do the inverse: add an identical group of striped drives in a mirror config, i.e. raid10. I haven't been able to find anyone discussing such a configuration, and I'd like to know it's possible before I spend the money on the drives. I was hoping to avoid the command line because I'm a noob, but I suppose that can be my last resort.
  7. I am currently running two PCIe 4.0 NVMe drives in ZFS raid0 for that sweet sweet performance. However, I am worried about the lack of error correction/drive-failure protection. If I were to buy 2 more NVMe drives, would it be possible to run the two new drives as another raid0/striped set and then mirror the two sets for drive protection? Please note that I am not asking whether you can stripe two sets of mirrors, which I know you can do (and you can easily add more mirror groups to such a pool). I'm asking whether it is possible to MIRROR two sets of STRIPED drives. The former gives you the 2X read speeds but doesn't give you the benefit of 2X write speeds, while the latter gives you both 2X read and 2X write speeds (theoretically, of course). Thanks. (See the zpool sketch appended after this list.)
  8. I'm having the same lock-ups and log errors as the OP, on rc7. It doesn't lock up completely, it just becomes extremely unresponsive and most Docker containers stop working. @DuzAwe, did you figure anything out with this?
  9. Is the new version available in the Apps store yet? How do I install it otherwise?
  10. So if a user is adding multiple identical drives, can I assume that it will almost always make more sense to add a raid-protected pool rather than adding individual drives to the parity-protected array? Thanks so much for the responses, btw. These are super helpful!
  11. Awesome, thank you. Would you say there is any difference in the feature set between ZFS protection and the File Integrity plugin's protection (blake3)? Or do they both just provide notification of corruption and that's it? Lately the File Integrity plugin has been very processor-intensive when running checks, so I'm wondering if ZFS may be better. Am I correct in assuming ZFS has no "scanning" process and the detection is done automatically? Or is it like the btrfs check, which is quite quick? Last thing: if I have a mirrored or raidz2 pool, is it possible to ALSO have it protected under the array parity drive, or is it just like another cache pool? (The relevant scrub commands are sketched after this list.)
  12. TWO QUESTIONS: 1. If I convert my XFS array to ZFS, does it make sense to still use the File Integrity Plugin? Or is it overkill since checksums are already verified with ZFS? 2. In order to use the raid features of ZFS, does that require me to create a separate array with a separate parity drive? Or can it be utilized under a parity drive for an already existing XFS array?
  13. I believe I have updated since then but I think you may be correct.
  14. Just had this happen to me. Rebooted server. Any info on the cause or a fix?
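
For the exclusive-share question in item 5, here is a quick way to check which path is actually in use. It is only a sketch, based on my understanding that recent Unraid releases expose an exclusive share as a symlink from /mnt/user/<share> straight to the pool, bypassing the FUSE (shfs) layer; the share name is just the example from the post.

    # If the share really is exclusive, /mnt/user/exclusive-share should be a
    # symlink pointing straight at the pool path:
    ls -ld /mnt/user/exclusive-share

    # The backing filesystem should then be the pool (e.g. zfs), not shfs/FUSE:
    df -hT /mnt/user/exclusive-share

If both commands resolve to the pool, either path should perform the same; if the share is not exclusive, only the /mnt/pool/... path avoids the FUSE overhead.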
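
For items 6 and 7, a minimal zpool sketch of the stripe-of-mirrors layout being discussed. The pool name and device paths are hypothetical placeholders (on a real system, stable /dev/disk/by-id/ paths are safer), and this only illustrates the commands rather than recommending a particular topology.

    # Create a pool that stripes across two mirror vdevs ("RAID10-like"):
    zpool create fastpool \
      mirror /dev/nvme0n1 /dev/nvme1n1 \
      mirror /dev/nvme2n1 /dev/nvme3n1

    # Or grow an existing mirror pool by adding a second mirror vdev
    # (ZFS stripes writes across all top-level vdevs):
    zpool add fastpool mirror /dev/nvme2n1 /dev/nvme3n1

    # An existing two-disk stripe can instead be converted by attaching a new
    # disk to each existing one, turning every single-disk vdev into a mirror:
    zpool attach fastpool /dev/nvme0n1 /dev/nvme2n1
    zpool attach fastpool /dev/nvme1n1 /dev/nvme3n1

    # Confirm the resulting layout:
    zpool status fastpool

As far as I know, ZFS only builds pools by striping across top-level vdevs (mirrors, raidz groups, or single disks), so the literal "mirror of two stripes" arrangement asked about in item 7 is not a layout zpool will create directly.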
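
For items 11 and 12: as I understand it, ZFS verifies checksums on every read and repairs from redundancy when it can, and a periodic scrub walks the whole pool in the background, so there is no separate hashing pass like the File Integrity plugin's. A minimal sketch of the usual commands (the pool name is a placeholder):

    zpool scrub tank        # start a full verification pass over the pool
    zpool status -v tank    # show scrub progress and list any damaged files
    zpool scrub -s tank     # cancel a running scrub if needed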