
Unable to mirror (RAID1) second new cache drive


Solved by JorgeB


Hi. I added a new (second) cache drive to my existing pool. It auto-balanced the drive, and my understanding is that RAID 1 mirroring is the default; however, as you can see from my screenshots, it shows double the capacity, like RAID 0.

 

How do I fix this? I tried switching to a RAID 0 balance and then back to RAID 1, but it still doesn't work. I also see these "single" modes that I don't think should be there. Thanks

 

 

Screen Shot 2022-04-18 at 6.41.29 PM.png

Screen Shot 2022-04-18 at 6.42.34 PM.png

 

Edited by Groto
9 hours ago, JorgeB said:

Some data corruption is being detected causing the balance to abort, you can run a scrub to list the corrupt files in the syslog, then delete/replace those, also a good idea to run memtest.

Thank you. I will give these a try and report back.
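For anyone following along, the scrub step can also be run from the console. This is only a sketch: /mnt/cache is the usual Unraid mount point for the cache pool and may differ on your system.

```shell
# Start a scrub on the cache pool; it reads every block and verifies checksums
btrfs scrub start /mnt/cache

# Check progress and the error summary while it runs (or after it finishes)
btrfs scrub status /mnt/cache

# Corrupt files are reported in the syslog as "checksum error" warnings
grep 'checksum error' /var/log/syslog
```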

16 hours ago, JorgeB said:

Some data corruption is being detected causing the balance to abort, you can run a scrub to list the corrupt files in the syslog, then delete/replace those, also a good idea to run memtest.

 

I haven't run memtest yet, but I ran the scrub and it found 4 uncorrectable errors:

 

UUID:             337ea1f9-e2bb-4f1e-bc50-437fe84e43e2
Scrub started:    Tue Apr 19 21:06:23 2022
Status:           finished
Duration:         0:04:32
Total to scrub:   270.31GiB
Rate:             1010.63MiB/s
Error summary:    csum=4
  Corrected:      0
  Uncorrectable:  4
  Unverified:     0

 

These are the details:

 

Apr 19 21:10:55 milkyway kernel: BTRFS warning (device dm-7): checksum error at logical 367830970368 on dev /dev/mapper/sdb1, physical 367830970368, root 5, inode 26598, offset 52375552, length 4096, links 1 (path: system/libvirt/libvirt.img)
Apr 19 21:10:55 milkyway kernel: BTRFS error (device dm-7): bdev /dev/mapper/sdb1 errs: wr 0, rd 0, flush 0, corrupt 21, gen 0
Apr 19 21:10:55 milkyway kernel: BTRFS error (device dm-7): unable to fixup (regular) error at logical 367830970368 on dev /dev/mapper/sdb1
Apr 19 21:10:55 milkyway kernel: BTRFS warning (device dm-7): checksum error at logical 367830974464 on dev /dev/mapper/sdb1, physical 367830974464, root 5, inode 26598, offset 52379648, length 4096, links 1 (path: system/libvirt/libvirt.img)
Apr 19 21:10:55 milkyway kernel: BTRFS error (device dm-7): bdev /dev/mapper/sdb1 errs: wr 0, rd 0, flush 0, corrupt 22, gen 0
Apr 19 21:10:55 milkyway kernel: BTRFS error (device dm-7): unable to fixup (regular) error at logical 367830974464 on dev /dev/mapper/sdb1
Apr 19 21:10:55 milkyway kernel: BTRFS warning (device dm-7): checksum error at logical 367884857344 on dev /dev/mapper/sdb1, physical 367884857344, root 5, inode 26598, offset 106262528, length 4096, links 1 (path: system/libvirt/libvirt.img)
Apr 19 21:10:55 milkyway kernel: BTRFS error (device dm-7): bdev /dev/mapper/sdb1 errs: wr 0, rd 0, flush 0, corrupt 23, gen 0
Apr 19 21:10:55 milkyway kernel: BTRFS error (device dm-7): unable to fixup (regular) error at logical 367884857344 on dev /dev/mapper/sdb1
Apr 19 21:10:55 milkyway kernel: BTRFS warning (device dm-7): checksum error at logical 367884861440 on dev /dev/mapper/sdb1, physical 367884861440, root 5, inode 26598, offset 106266624, length 4096, links 1 (path: system/libvirt/libvirt.img)
Apr 19 21:10:55 milkyway kernel: BTRFS error (device dm-7): bdev /dev/mapper/sdb1 errs: wr 0, rd 0, flush 0, corrupt 24, gen 0
Apr 19 21:10:55 milkyway kernel: BTRFS error (device dm-7): unable to fixup (regular) error at logical 367884861440 on dev /dev/mapper/sdb1
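The affected file paths can be pulled out of those warnings directly. A small sketch, using two of the sample lines above embedded in a temp file (against a live system you would grep /var/log/syslog instead):

```shell
# Write two of the scrub warnings to a temp file for demonstration
cat > /tmp/scrub.log <<'EOF'
Apr 19 21:10:55 milkyway kernel: BTRFS warning (device dm-7): checksum error at logical 367830970368 on dev /dev/mapper/sdb1, physical 367830970368, root 5, inode 26598, offset 52375552, length 4096, links 1 (path: system/libvirt/libvirt.img)
Apr 19 21:10:55 milkyway kernel: BTRFS warning (device dm-7): checksum error at logical 367884857344 on dev /dev/mapper/sdb1, physical 367884857344, root 5, inode 26598, offset 106262528, length 4096, links 1 (path: system/libvirt/libvirt.img)
EOF

# Extract the unique corrupt file paths from the "(path: ...)" field
grep -o 'path: [^)]*' /tmp/scrub.log | sed 's/^path: //' | sort -u
```

Here all four errors land in the same file, so the list collapses to a single path: system/libvirt/libvirt.img, relative to the pool root.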

 

Do I need to delete libvirt.img to fix this? I think that is for VMs, but I am not actually using any VMs at the moment and only briefly did.
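A sketch of the cleanup JorgeB suggested, assuming the scrub path is relative to the pool root and /mnt/cache is the Unraid mount point (adjust for your system):

```shell
# Remove the corrupt file reported by the scrub (libvirt.img is recreated
# by Unraid when the VM service is next enabled, so nothing is lost if
# no VMs are in use)
rm /mnt/cache/system/libvirt/libvirt.img

# Re-run the scrub in the foreground (-B blocks until it completes) to
# confirm no uncorrectable errors remain
btrfs scrub start -B /mnt/cache
```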

 

Thanks

16 hours ago, JorgeB said:

Some data corruption is being detected causing the balance to abort, you can run a scrub to list the corrupt files in the syslog, then delete/replace those, also a good idea to run memtest.

 

I went ahead and deleted the file since I don't currently use VMs, and ran the scrub again. No errors were found. I rebalanced to RAID 1, and the pool now shows a total of 1TB for the 2x1TB drives, so it looks like everything is working properly. Thank you! As soon as I have time I'll bring the server down to run memtest.

 

Those "single" mode listings are gone, and everything now shows RAID1.
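For reference, the conversion the GUI balance performs can also be done from the console. A sketch, again assuming /mnt/cache is the pool's mount point:

```shell
# Convert both data and metadata chunks to RAID1 across the pool members
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

# Verify the result: every chunk type should now list RAID1, with no
# leftover "single" entries, and usable space should be half the raw total
btrfs filesystem usage /mnt/cache
```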

Edited by Groto
  • Like 1
