
TheSystemAdmin

Posts posted by TheSystemAdmin

  1. 5 minutes ago, itimpi said:

    You can use the New Config tool to change the disk assignments to what you want, but you then need to rebuild parity to match the new assignments.

     

    Understood! Here we go for another 25-hour rebuild! Haha

     

    Actually, I will just leave it alone and start building backwards. Seems cosmetic and not a big deal.

     

    Was I correct that disk gaps are not acceptable? If I build back down to disk 1 and then add a disk 7, would I be okay as long as disks 1-6 are populated?
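
    While waiting out a rebuild like that, the progress can also be watched from a shell instead of the webGUI. A minimal sketch: on unRAID the position/size counters can reportedly be read from `/root/mdcmd status` as `mdResyncPos` and `mdResyncSize`, but treat those names as an assumption and check them on your own install first.

```shell
# Sketch: turn a rebuild position/size pair into a percentage.
# On unRAID the two numbers can reportedly be read from `/root/mdcmd status`
# as mdResyncPos and mdResyncSize -- those variable names are an assumption.
rebuild_percent() {
    # $1 = current position, $2 = total size (same units)
    awk -v p="$1" -v s="$2" 'BEGIN { printf "%.1f%%\n", p / s * 100 }'
}

# Example with made-up numbers (a rebuild 4/10 of the way through):
rebuild_percent 4000000000 10000000000   # prints 40.0%
```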

  2. Hello!

     

    I searched the forums and only found posts that were years old or didn't make a ton of sense to my thick skull.

     

    My unRAID server had 6 data disks and 2 parity disks. I removed 4 small disks, which left "disk 5" and "disk 6", and then rebuilt parity with only two disks in the array. That went fine, but now I would like to move 5/6 to 1/2 and add another disk as 3. It sounds like I can do this, but my second parity disk will not survive the move and will need to be rebuilt again.

     

    My question is this: on unRAID 6.9.2, can I move those disks to 1/2 and be 100% fine with checking "parity is valid"?

    Or do I need to unassign Parity 2, do my drive reconfiguration (along with adding the third disk), start the array, and then stop it again to re-add Parity 2 and let it rebuild?

     

    The disk arrangement is strictly OCD on my part. If this isn't possible as explained above without reinventing the wheel, I will just add the third disk as "disk 4", since it seems gapping drives is a no-go.

     

    Thanks!

     

     

     

  3. 8 minutes ago, JorgeB said:
    Jan 28 10:35:32 TSA-NAS01 kernel: ahci 0000:03:00.1: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000e address=0xb0010000 flags=0x0000]

     

    Problem with the onboard SATA controller, both cache devices dropped offline because of that:


     

    Jan 28 10:36:33 TSA-NAS01 kernel: ata1.00: disabled
    Jan 28 10:37:30 TSA-NAS01 kernel: ata2.00: disabled

     

    This is quite common with some Ryzen boards; rebooting should bring the pool back, but if it keeps happening it's best to use an add-on controller (or replace the board).

     

    Reboot appears to have resolved it: data is back, containers started up, and the VMs are showing again.

     

    Will definitely consider replacing the board if this issue occurs a second time. I've been debating switching to Intel, but the wife won't approve any more tech spending for a few months. Haha
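
    If it does happen again, the dropout leaves a distinctive trail in the syslog. A small sketch that filters for the signatures JorgeB quoted above; the demo runs on lines copied from this thread rather than the live log, where you would grep /var/log/syslog instead.

```shell
# Sketch: filter a syslog for the controller-dropout signatures seen above.
pattern='AMD-Vi: Event logged|ata[0-9]+\.00: disabled'

# On the live server: grep -E "$pattern" /var/log/syslog
# Demo against two lines quoted in the thread plus one unrelated line:
printf '%s\n' \
  'Jan 28 10:35:32 TSA-NAS01 kernel: ahci 0000:03:00.1: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000e address=0xb0010000 flags=0x0000]' \
  'Jan 28 10:36:33 TSA-NAS01 kernel: ata1.00: disabled' \
  'Jan 28 10:36:40 TSA-NAS01 kernel: usb 1-4: new high-speed USB device' \
| grep -E "$pattern"   # prints only the first two lines
```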

  4. 2 minutes ago, Squid said:

    Cabling certainly appears to be the prime suspect (the drive isn't even showing any SMART report at all)

    Since it looks like you sync the backup from the plugin to Backblaze, it's probably not a major issue, but I don't recommend storing a backup of the drive you're backing up on the drive itself.

     

     

    True. I have been debating plugging in an external drive and having backups go to that for a local copy, but the data footprint is so small that pulling from the cloud wouldn't take more than an hour or two.

     

  5. Hello unRAID Community!

     

    I was watching Plex when it disconnected on me. I hopped onto my webGUI and received no notifications, but it did not look good.

     

    1. Several (not all) containers were stopped.

    2. All VMs are gone ("No Virtual Machines installed")

    3. Several TBs of data are not showing up in Windows or through the "Shares" tab, but the utilization on the disks appears to be correct.

     

    Logs are spamming this:

    Jan 28 11:37:55 TSA-NAS01 kernel: blk_update_request: I/O error, dev sdk, sector 73447704 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
    Jan 28 11:37:55 TSA-NAS01 kernel: BTRFS error (device sdf1): bdev /dev/sdk1 errs: wr 52, rd 8464053, flush 0, corrupt 0, gen 0
    Jan 28 11:37:55 TSA-NAS01 kernel: sd 1:0:0:0: [sdf] tag#31 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00 cmd_age=0s
    Jan 28 11:37:55 TSA-NAS01 kernel: sd 1:0:0:0: [sdf] tag#31 CDB: opcode=0x88 88 00 00 00 00 00 00 3e ae 20 00 00 00 20 00 00
    Jan 28 11:37:55 TSA-NAS01 kernel: blk_update_request: I/O error, dev sdf, sector 4107808 op 0x0:(READ) flags 0x1000 phys_seg 4 prio class 0
    Jan 28 11:37:55 TSA-NAS01 kernel: BTRFS error (device sdf1): bdev /dev/sdf1 errs: wr 54, rd 10210651, flush 0, corrupt 0, gen 0
    Jan 28 11:37:55 TSA-NAS01 kernel: sd 2:0:0:0: [sdk] tag#18 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00 cmd_age=0s
    Jan 28 11:37:55 TSA-NAS01 kernel: sd 2:0:0:0: [sdk] tag#18 CDB: opcode=0x88 88 00 00 00 00 00 00 3e 0e 20 00 00 00 20 00 00
    Jan 28 11:37:55 TSA-NAS01 kernel: blk_update_request: I/O error, dev sdk, sector 4066848 op 0x0:(READ) flags 0x1000 phys_seg 4 prio class 0
    Jan 28 11:37:55 TSA-NAS01 kernel: BTRFS error (device sdf1): bdev /dev/sdk1 errs: wr 52, rd 8464054, flush 0, corrupt 0, gen 0
    Jan 28 11:37:55 TSA-NAS01 kernel: BTRFS info (device sdf1): no csum found for inode 72150 start 1000931328
    Jan 28 11:37:55 TSA-NAS01 kernel: sd 1:0:0:0: [sdf] tag#22 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00 cmd_age=0s
    Jan 28 11:37:55 TSA-NAS01 kernel: sd 1:0:0:0: [sdf] tag#22 CDB: opcode=0x88 88 00 00 00 00 00 04 61 59 18 00 00 00 08 00 00
    Jan 28 11:37:55 TSA-NAS01 kernel: blk_update_request: I/O error, dev sdf, sector 73488664 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
    Jan 28 11:37:55 TSA-NAS01 kernel: BTRFS error (device sdf1): bdev /dev/sdf1 errs: wr 54, rd 10210652, flush 0, corrupt 0, gen 0
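
    To make that spam easier to read, the per-device error counters can be condensed to one line per device. A sketch; the demo runs on two lines quoted from the log above, while on the server you would pipe in /var/log/syslog instead.

```shell
# Sketch: condense the BTRFS error spam into one line per device so the
# wr/rd counters are easy to compare.
summarize() {
    sed -n 's/.*bdev \(\/dev\/[a-z0-9]*\) errs: wr \([0-9]*\), rd \([0-9]*\).*/\1 wr=\2 rd=\3/p' | sort -u
}

# On the live server: summarize < /var/log/syslog
# Demo on two lines quoted from the log above:
printf '%s\n' \
  'Jan 28 11:37:55 TSA-NAS01 kernel: BTRFS error (device sdf1): bdev /dev/sdk1 errs: wr 52, rd 8464053, flush 0, corrupt 0, gen 0' \
  'Jan 28 11:37:55 TSA-NAS01 kernel: BTRFS error (device sdf1): bdev /dev/sdf1 errs: wr 54, rd 10210651, flush 0, corrupt 0, gen 0' \
| summarize
# prints:
# /dev/sdf1 wr=54 rd=10210651
# /dev/sdk1 wr=52 rd=8464053
```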

     

    From what I can tell from my quick (panicked) Google searches, there is something wrong with my cache.

    I have a pool of 2 SSDs that show 0 errors, but if I try to scrub them, I get an aborted status:

     

    UUID:             bdbe2a64-9dd0-40b4-82fb-75fba1b30eca
    Scrub started:    Fri Jan 28 11:21:47 2022
    Status:           aborted
    Duration:         0:00:00
    Total to scrub:   178.97GiB
    Rate:             0.00B/s
    Error summary:    no errors found

     

    Also getting this on the Balance Status:

    [screenshot showing the Balance Status error]

     

    Before I start ripping things apart and reseating cables, I wanted to make sure I'm headed in the right direction. While losing the data would not be the end of the world, I would rather not have to rebuild everything.

     

    Both SSDs are connected straight to the motherboard while the rest of my data disks are through an HBA.

     

    I do have backups via the CloudBerry app to a Backblaze B2 bucket, which does show data (woo!). I also have backups via the CA Backup / Restore Appdata plugin, which appears to have run today at 3am, though it currently reports no backup sets since that data is now missing on the unRAID side. (Again, also in Backblaze.)

     

    Any help would be really appreciated!

     

    Thank you.

    tsa-nas01-diagnostics-20220128-1140.zip
