Posts posted by snolly

  1. On 5/7/2023 at 2:28 PM, Keinden said:

    When I used `migratable=off` it did lower my stutter enough to reach a playable state, which is why I added that detail. However, I would still advise installing Windows 11 in SCSI mode, since it made the biggest difference. Note: the VM drivers have to be loaded in as SATA.

    Hello, can an existing VM be converted to use SCSI? Reinstalling everything would be a huge pain.

    So, firmware M3CR046, which solves the issue, is not available from Crucial as a direct download. You need to use Windows and the Crucial Storage Executive software, which is crappy, and some people mention that to update the firmware this way the drive must be writable and actually have data being written to it while the update runs, otherwise the process fails - which is totally ridiculous. So I made a bootable portable Win11 USB stick, which took 3 hours, and the Crucial software does not even launch there (it crashes).

     

    I then combed the interwebs and found a user who contacted Crucial and was given the firmware file. Thread here, and I have also attached the firmware to this post. Unzip the zip file and use the .bin file inside.

     

    I had a BTRFS cache with 2 affected MX500 drives in there.

     

    1. Stopped the array
    2. Removed one of the MX500s.
    3. Started the array and waited for it to rebuild the BTRFS cache.
    4. The removed MX500 was now listed as an unassigned device.
    5. Copied the FW .bin file somewhere on the Unraid server.
    6. Ran this command (see the example commands after this list):
      hdparm --fwdownload /mnt/user/athens/temp/mx500/1.bin --yes-i-know-what-i-am-doing --please-destroy-my-drive /dev/sdb
    7. The first argument is the path to the .bin file and the second one is the device to update, in this case /dev/sdb.
    8. Rebooted the server and checked that the drive had been updated to M3CR046 (click on the drive's name and go to the Identity tab).
    9. Stopped the array and added the updated drive back to the cache.
    10. Started the array and waited for the cache to rebuild.
    11. Stopped the array, removed the other drive, and repeated steps 3 to 9 for the second drive.
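
    For anyone who wants to sanity-check the process, this is roughly what it looks like from the Unraid console. The .bin path below is just where I happened to copy the file, so substitute your own path and device; the smartctl lines are only there to confirm the firmware revision before and after.

      # Check the current firmware revision (drives on the old firmware report M3CR045 or earlier)
      smartctl -i /dev/sdb | grep -i firmware

      # Flash the new firmware (same command as step 6 above)
      hdparm --fwdownload /mnt/user/athens/temp/mx500/1.bin --yes-i-know-what-i-am-doing --please-destroy-my-drive /dev/sdb

      # After the reboot, confirm the drive now reports M3CR046
      smartctl -i /dev/sdb | grep -i firmware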

    I hope this helps anyone who faces the same issue. Hopefully I will not face the same problems again.

    M3CR046.zip

  3. 8 minutes ago, JorgeB said:

     

    Hmmm, thanks for the tip. Both of my MX500s are on M3CR045. I will try to update them to at least M3CR046, which has the issue resolved. Can I do that from Unraid or do I have to plug them into a Windows machine?

    Unfortunately, even after replacing the (what I thought was) faulty new SSD, the issue is still here.

    At some point I get a warning about my log being full, and I know what that means: cache errors flooding the logs. The errors look like this:

     

    May 25 17:22:05 Earth kernel: BTRFS info (device sdb1): read error corrected: ino 0 off 7569215369216 (dev /dev/sde1 sector 255824472)

    May 25 19:07:10 Earth kernel: I/O error, dev sdb, sector 517930272 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0

     

    If I go and perform a scrub, I get millions of uncorrected errors.

    If I restart the array, one of the two cache disks is usually gone (this time it wasn't).

    After the restart, docker.img is corrupt and Docker cannot start. On previous occasions libvirt.img was also corrupt; this time it wasn't.

    I changed SATA ports and cables, and replaced the one SSD that dropped out after the errors and restarts.
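
    For reference, this is roughly how I start the scrub and read the results from the console. /mnt/cache is where my pool is mounted, so adjust the path if yours is named differently.

      # Start a scrub on the cache pool (runs in the background)
      btrfs scrub start /mnt/cache

      # Check progress and the error summary once it finishes
      btrfs scrub status /mnt/cache

      # Per-device error counters (write / read / flush / corruption / generation errors)
      btrfs dev stats /mnt/cache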

     

    This is the log I kept:

     

    CT1000MX500SSD1_2302E69B81B5 (sde) - red cable ------> this is the one I had replaced

    CT1000MX500SSD1_2311E6BA7F07 (sde) - red cable ------> this is the new SSD in place of the one I thought was faulty

     

    CT1000MX500SSD1_2302E69B81AF (sdb) - black cable

     

    This is which drive went missing each time I restarted the array after getting the BTRFS errors:

     

    10/4 CT1000MX500SSD1_2302E69B81B5

    12/4 CT1000MX500SSD1_2302E69B81B5

    12/4 CT1000MX500SSD1_2302E69B81AF (pre-clear errors)

    22/4 CT1000MX500SSD1_2302E69B81B5

    29/4 CT1000MX500SSD1_2302E69B81B5

    27/5 CT1000MX500SSD1_2311E6BA7F07 (sde)

     

    This is the error I get when I restart the array after the syslog has been flooded with errors:

     

    [screenshot attached: image.png]

     

    Also attached are diagnostics.

     

    So this happens every couple of days and I have to rebuild my docker.img and possibly the libvirt.img.

     

    I am lost here. If BTRFS is to blame I could go with a single-drive XFS cache, but then what about redundancy?

     

    Any help would be greatly appreciated.

     

    Cheers

     

    earth-diagnostics-20230527-1332.zip

I removed the faulty SSD from the cache pool and it automatically ran a balance and a scrub, both of which succeeded with no errors.

     

    I then tried to preclear the faulty SSD and even that failed. So I guess that was the issue all along.

     

    Just for the sake of knowledge, what is the point of mirroring / redundancy if, in situations like this (one faulty disk), we still get corruption and data loss? Shouldn't the system simply mark the faulty drive as failed and avoid all this mess?

     

    Thanks again for all the help

  6. 5 minutes ago, JorgeB said:
    Apr 12 00:11:12 Earth kernel: BTRFS info (device sdd1): bdev /dev/sdb1 errs: wr 28768406, rd 2597271, flush 111412, corrupt 56071, gen 0

     

    This shows that one of the devices dropped offline in the past. Run a correcting scrub and post the output; also see here for better pool monitoring.

    Wow, thanks for the reply. Can you point me to some documentation on how to run a correcting scrub?
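
    In the meantime, from what I could find, a scrub started on the normally mounted (read-write) pool already repairs whatever is correctable from the good copy, so I assume it is something along these lines; please correct me if I have this wrong.

      # Start a scrub on the mounted pool; correctable errors are repaired from the good mirror copy
      btrfs scrub start /mnt/cache

      # Show per-device results when it finishes
      btrfs scrub status -d /mnt/cache

      # Print and reset the per-device error counters so future monitoring starts from zero
      btrfs dev stats -z /mnt/cache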

  7. Hello all,

     

    I was on a 9th gen Intel system (which worked perfectly fine for years) and decided to upgrade to a 13th gen one. My cache lived on 2x 500GB Crucial SSDs in a BTRFS mirrored pool.

     

    Along with the new hardware I bought 2x 1TB Crucial SSDs to replace the 500GB ones. I did that two nights ago by removing the first 500GB drive and putting a 1TB drive in its place, letting the rebuild finish, and then doing the same with the second SSD.

     

    That completed without any issues. Last night I replaced the motherboard/CPU/RAM and rebooted the system.

     

    The array started just fine, but both Docker and the VMs were unavailable due to docker.img and libvirt.img corruption errors.

     

    So I deleted both docker.img and libvirt.img and recreated them, and the Docker and VM services started (and work) correctly. I have not restored any VMs (I am using the community backup plugin and it doesn't seem to have restore functionality - if someone can enlighten me I would appreciate it).

     

    Now I get a plethora of BTRFS errors in the logs and I am not sure what needs to be done. Is one (or both) of the new 1TB SSDs faulty? Can I remove one of the two from the cache pool and stay on a single SSD while waiting for the faulty one to be replaced? And how do I know whether other stuff that lives on the cache is corrupted (like appdata; the Docker apps seem to work fine, btw)?

     

    Any help on a course of action would be greatly appreciated.

     

    Attaching diagnostics for more info on the system.

     

    Thanks

    earth-diagnostics-20230412-0039.zip
