ratmice

Members
  • Content Count: 315

Community Reputation

0 Neutral

About ratmice

  • Rank: SubGenius
  • Birthday: 08/28/4

Converted

  • Gender: Undisclosed
  • Location: West O' Boston
  • Personal Text: Give me slack!, or give me death.

  1. So I noticed that one of my data disks (disk 7) and my cache disk are both showing read errors. The data disk's SMART report shows a few CRC errors from a long time ago (I think). The cache disk seems like it's dying. Would someone be kind enough to check whether my assessment of the two disks is correct (a way to double-check the SMART data is sketched after this list)? Other recommendations would be greatly appreciated as well. Thanks for the help. One additional question: can I use btrfs for the cache (in case I want to create a pool later) while using XFS for the data disks, or does that cause any problems? tower-diagnostics-20210110-1407.zip
  2. OK, thanks. I think I will just not tempt fate and leave everything alone until the new controllers get here. That should be an adventure, as well. O_o
  3. So, one last question: now that the repair has finished and a lot of items were placed in lost+found (indicating, I think, that there was a lot of corruption), should I bother to try mounting it? Or will that screw things up when I go to rebuild the disk? My thinking is that if the disk is actually mountable, Unraid will treat its current state as what it's supposed to be and adjust parity accordingly, giving me a screwed-up rebuild. As it is, the emulated disk appears to work OK.
  4. That's what I thought; I can wait a few days for the controllers to arrive. Here goes nothing. Thanks again.
  5. So, I attempted to run xfs_repair and got this output; it seems like the disk is really borked. However, is there a way to attempt mounting a single disk while in maintenance mode, or is starting the array and having it choke on this disk enough to justify jumping straight to ignoring the log while repairing? I am too *nix-illiterate to know this (the usual options are sketched after this list). root@Tower:~# xfs_repair -v /dev/md17 Phase 1 - find and verify superblock... - block cache size set to 349728 entries Phase 2 - using internal log - zero log... zero_log: head block 449629 tail block 449625 ERROR: The filesystem has valuable …
  6. Thanks for the prompt reply. As always, Johnny, you are a superb asset to the forums. I am going to replace the controllers ASAP. I currently have a SuperMicro X8SIL-F motherboard; it looks like I'm limited to x8 PCIe cards (but it seems the SASLP are x4 cards). Any recommendations for direct replacements for the SASLP controllers? It seems like "LSI 9211-8i P20 IT Mode for ZFS FreeNAS unRAID Dell H310 6Gbps SAS HBA" might be a reasonable replacement; I'm just a bit fuzzy on the bus/lane deal (a way to check the link width is sketched after this list). Are there more stable, proven replacements that have the SFF-8087 SAS connector so I can …
  7. Here are the post-reboot diagnostics. Also, the disabled disk has a note that it is "Unmountable: no file system". Not sure whether that is SOP for disabled disks or not. tower-diagnostics-20200625-1540.zip
  8. Unraid 6.7.2. So I woke up this morning to an array that has a disabled disk (single-parity system) and seven disks all with millions of read errors. The array isn't mounted and is unreachable from the network. In the Shares pane only the disk shares are showing up, none of the other shares. I pulled a diagnostics report (attached below) and now wonder what the safe thing to do is. I did start a read test as indicated on the Main page, but paused it almost immediately, not knowing if it would screw things up. Last night I rebooted the server, as I was having some trouble …
  9. Thanks again, Johnnie. Just to be extra clear (paranoid): the Unraid-managed device number should always be the same as the disk number, correct? So if I need to zero disk 16, I would use md16 (a sketch follows the list). Sorry for the cluelessness.
  10. OK, so back again. I am trying to use the 'clear array drive' script in order to shrink my array. I added a drive to the array earlier today and shortly afterward realized that another drive was acting up. I am in the process of trying to remove the newly added drive by clearing it, and then redeploy it as the rebuild target for the dodgy drive. When I run the script, it finishes instantly and the folder 'clear-me' still remains on the drive in question. This drive was only added to the array and formatted (to do so), so it does not have any data on it. I don't see any pesky hidden files (a quick check is sketched after this list), so I am wondering …
  11. Thanks, Johnnie. You always seem to be around to answer these questions and I really appreciate it. Have a great day.
  12. Thanks for the explanation. Also, what happens if I screw up the exclusion/inclusion thing?
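
Re: post 1 — a minimal sketch of pulling the SMART data by hand, assuming smartmontools is available on the Unraid console; the device names are placeholders, not taken from the diagnostics:

    # Full SMART report for each suspect drive (replace sdX/sdY with the real devices)
    smartctl -a /dev/sdX
    smartctl -a /dev/sdY

    # UDMA CRC errors (attribute 199) usually point at cabling rather than the disk itself
    smartctl -a /dev/sdX | grep -i crc

    # Queue a short self-test, then read the result a few minutes later
    smartctl -t short /dev/sdY
    smartctl -l selftest /dev/sdY

A rising reallocated or pending sector count is the usual sign a disk is genuinely failing; a CRC count that has not moved in a long time is more often a cable or backplane issue.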
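
Re: post 5 — a hedged sketch of the usual next steps for that xfs_repair error, assuming the array is started in maintenance mode and the disk is still /dev/md17; the mount point is made up for illustration, and -L is a last resort because it discards whatever is still sitting in the log:

    # Dry run: report what would be fixed without writing anything
    xfs_repair -n /dev/md17

    # Mount the single disk by hand so XFS can replay its log, then unmount and
    # re-run the repair (the mount point is just an example)
    mkdir -p /mnt/test
    mount /dev/md17 /mnt/test
    umount /mnt/test
    xfs_repair -v /dev/md17

    # Last resort if it will not mount: zero the log (its pending changes are lost)
    xfs_repair -L /dev/md17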
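
Re: post 6 — on the bus/lane question, the slot's capability and the link the card actually negotiates can be read with lspci; a sketch, assuming the HBA shows up as an LSI/Broadcom SAS device (the PCI address below is a placeholder taken from the first command's output):

    # Find the HBA's PCI address
    lspci | grep -i -e sas -e lsi

    # Compare what the card supports (LnkCap) with what it negotiated (LnkSta)
    lspci -vv -s 01:00.0 | grep -i -e lnkcap -e lnksta

A card simply negotiates down to whatever the slot provides electrically, so an x8 HBA in an x4-wired slot runs at x4, which is rarely a bottleneck for spinning disks.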
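
Re: post 9 — on the Unraid release in use here, writes to /dev/mdN go through the parity-protected layer for data disk N, which is why the clear-drive procedure targets the md device rather than the raw disk. A destructive, hedged sketch of what zeroing disk 16 amounts to (the script automates this with its own safety checks; disk 16 is only the example from the post):

    # WARNING: wipes disk 16 while keeping parity in sync. Triple-check the device first.
    dd if=/dev/zero of=/dev/md16 bs=1M status=progress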
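
Re: post 10 — as I understand the 'clear array drive' script, it refuses to proceed unless the disk holds nothing but an empty top-level folder named clear-me, so an instant exit usually means it found something else or the marker name/location is off. A quick check, assuming the drive in question is disk 16 (a guess):

    # Everything at the top level of the disk, including dotfiles
    ls -la /mnt/disk16/

    # Anything on the disk other than the clear-me marker itself; ideally no output
    find /mnt/disk16 -mindepth 1 ! -name clear-me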