eXorQue

Members
  • Posts

    31
  • Joined

Posts posted by eXorQue

  1. @JorgeB First of all, thank you for helping me so far. Everything seems to work out great :) 


    I've run both Scrub and Balance. The disks seem to be fine, and the scrub corrected a number of errors.

    I then tried to start docker again by disabling and enabling the service in Settings > Docker. This gave me the message in the docker tab "Docker Service failed to start."
     

    I then tried starting it from the command line, but got the following error:

    ```
    /etc/rc.d/rc.docker start
    no image mounted at /var/lib/docker
    ```


    Could it be that the docker.img isn't mounted? What should I do?
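    This is what I checked from the console (a sketch; /mnt/user/system/docker/docker.img is the default image location on Unraid, so the path is an assumption — adjust if yours differs):

    ```shell
    # Does the image file exist at the default location? (path is an assumption)
    ls -lh /mnt/user/system/docker/docker.img 2>/dev/null || echo "image file not found"

    # Is anything mounted at /var/lib/docker?
    if mount | grep -q '/var/lib/docker'; then
      echo "something is mounted at /var/lib/docker"
    else
      echo "nothing mounted at /var/lib/docker"
    fi
    ```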

     

    Edit: added diagnostics

    supermicro-diagnostics-20240503-0944.zip

  2. When I unassign the cache devices, can I mount them to see what their contents are?

    I have my PhotoPrism import running on the cache.
    Also, where can I see whether the mover has run lately?

    @JorgeB Now, after unassigning "Cache 2", I see the notice "Start will remove the missing cache disk and then bring the array on-line". Why wasn't this shown when "Cache 1" was missing? Is it possible that the two cache pool devices are mirrors of each other? If so, could the remaining one still hold my old cache data?
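    For reference, a pool device can usually be mounted read-only from the console to look at its contents without changing anything (a sketch; /dev/sdf1 and /mnt/inspect are illustrative names, substitute the real partition and any scratch directory):

    ```shell
    # Create a temporary mount point and mount the device read-only
    mkdir -p /mnt/inspect
    mount -o ro /dev/sdf1 /mnt/inspect   # 'ro' guarantees nothing is written
    ls -la /mnt/inspect                  # browse the contents

    # When done looking around, clean up
    umount /mnt/inspect
    rmdir /mnt/inspect
    ```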


  3. I had a configuration like so:

     

    Array Devices
    DEVICE    IDENTIFICATION    TEMP.    READS    WRITES    ERRORS    FS    SIZE    USED    FREE
    Parity    unassigned
    Parity 2    WDC_WD40EZAX-00C8UB0_WD-WX12A82ED2EV - 4 TB (sdg) 26 C            
    Disk 1    ST2000DM001-1CH164_Z1E6HF1W - 2 TB (sdd) 30 C        xfs    
    Disk 2    WDC_WD40EZAX-00C8UB0_WD-WX92D622NRV2 - 4 TB (sdb) 24 C        xfs    
    Disk 3    WDC_WD40EZAX-00C8UB0_WD-WX42D6213VCY - 4 TB (sdc) 24 C        xfs    
    Slots:    5
    Pool Devices
    DEVICE    IDENTIFICATION    TEMP.    READS    WRITES    ERRORS    FS    SIZE    USED    FREE
    Cache    Samsung_SSD_860_QVO_1TB_S4CZNF0MB47942W - 1 TB (sdf) 24 C    
    Cache 2    ADATA_SX8200PNP_2N3529191NRH - 1 TB (nvme0n1)

     

    I wanted to switch "Disk 1" to a new disk, similar to what I did with the parity disk and Disks 2 and 3.
    So I stopped the array, shut down, moved the cables from Disk 1 to the new 4TB disk, and started everything back up.

    Problem encountered: after starting the array, the configuration looked like this

     

    **Note that Disk 1 changed accordingly, *but* Cache was gone**

     

    Array Devices
    DEVICE    IDENTIFICATION    TEMP.    READS    WRITES    ERRORS    FS    SIZE    USED    FREE
    Parity    unassigned
    Parity 2    WDC_WD40EZAX-00C8UB0_WD-WX12A82ED2EV - 4 TB (sdg) 26 C            
    Disk 1    WDC_WD40EZAX-00C8UB0_WD-WX42D62F1K5E - 4 TB (sde) 30 C          xfs
    Disk 2    WDC_WD40EZAX-00C8UB0_WD-WX92D622NRV2 - 4 TB (sdb) 24 C        xfs    
    Disk 3    WDC_WD40EZAX-00C8UB0_WD-WX42D6213VCY - 4 TB (sdc) 24 C        xfs    
    Slots:    5
    Pool Devices
    DEVICE    IDENTIFICATION    TEMP.    READS    WRITES    ERRORS    FS    SIZE    USED    FREE
    Cache  unassigned    
    Cache 2    ADATA_SX8200PNP_2N3529191NRH - 1 TB (nvme0n1)

     

    I'm not sure what to do.
    * Now, when attaching the original cache disk, it says: "All existing data on this device will be OVERWRITTEN when array is Started"

    * And I'm not sure whether the new 4TB disk will be rebuilt. Whether I select the original 2TB disk OR the new 4TB disk, it says "New Device" in both cases.

     

    Attached the Diagnostics.

    supermicro-diagnostics-20240502-0931_anonymized.zip

  4. So I have this issue where the server sometimes hangs and I'm not able to access shares or docker services. I notice it when my uptime monitor starts signalling that websites are unreachable. Sometimes this happens every other day, sometimes only once a month.

     

    Usually it comes back after a few hours(!) and I can do a safe reboot; sometimes I have to force a restart.

     

    I attached diagnostics and the uptime logs.

    supermicro-diagnostics-20240324-1425.zip

  5. Hi,

    After moving my disks from one system to the other, I added a 4TB disk to the parity set. I probably should have just replaced the old one instead of extending it with a second disk, but here we are.

     

    Then a data disk broke down and I had to replace it. But as my 2TB parity disk is still in place, it isn't big enough for the replacement. What would be a good way to go forward?

    I've added my diagnostics as an attachment.
    supermicro-diagnostics-20231218-1500.zip

  6. Parity check is completed.

     

    So now I should take one disk out, i.e. set the disk to "no device", and run diagnostics again?

     

     

    Results:

    Quote

    Last check completed on Thursday, 23-11-2023, 20:02 (today)
    Duration: 3 hours, 29 minutes, 25 seconds. Average speed: 159.2 MB/s
    Finding 134558 errors

     

    [screenshot: parity-check-finished.png]

     

    Added diagnostics as well. (For the record: it says "supermicro" because that's my NAS's name, as my previous machine was a Supermicro.)

     

     

    supermicro-diagnostics-20231123-2038.zip

  7. On 11/22/2023 at 1:04 AM, eXorQue said:

    What do I do next? I don't understand what needs to be done from the comments above saying "rebuilding one by one". What do I need to do to get it up again?

     

    On 11/22/2023 at 9:29 AM, eXorQue said:
    On 11/22/2023 at 9:27 AM, JorgeB said:

    Rebuilding one by one is an option

     

    What does this mean? Where do I need to do what?

     

    On 11/22/2023 at 10:11 AM, JorgeB said:
      On 12/2/2019 at 9:13 AM, JorgeB said:

    Like mentioned it was a possibility, since partitions are outside parity this can usually be fixed by rebuilding one disk at a time so Unraid recreates the partitions correctly, you can check by unassigning one of the data disks, start the array and check if the emulated disk mounts and data looks correct, if yes you can rebuild on top, then repeat for all other disks one by one.

     

    Sorry for my ignorance. 

    My question is still, how do I do this?

     

    This is my current situation:

    [screenshot: Main tab]

    Step-by-step (is this correct?):

    1. stop array
    2. set parity 1 to disk `(sde)`
    3. set disk 1 to disk `(sdd)`
    4. set disk 2 to disk `no device`
    5. set disk 3 to disk `no device`
    6. start array
    7. do parity check
    8. set disk 1 to disk `no device`
    9. set disk 2 to disk `(sdc)`
    10. start array
    11. do parity check
    12. set disk 2 to disk `no device`
    13. set disk 3 to disk `(sdb)`
    14. start array
    15. do parity check
    16. set disks 1, 2, 3 to `sdd`, `sdc`, `sdb` respectively
    17. start array
    18. everything works?

    Does this change when I want to move from the 2TB disk to a 4TB disk (as I'm planning to replace all disks with five 4TB disks)?

  8. Thank you for the response.
    I understand that it happens, which is fine-ish. It's explainable, so for me that is ok.

     

    11 minutes ago, JorgeB said:

    Rebuilding one by one is an option

     

    What does this mean? Where do I need to do what?

     

    11 minutes ago, JorgeB said:

    but it would have been better if you checked parity was valid

     

    There was no such option to check whether parity was valid... Where should that have been?

  9. First of all, sorry for digging up this old thread, but this is exactly what happens with me too. 

     

    I moved from a Supermicro blade (with a separate hotswap bay/raid controller, I guess) to a new custom build, with sata on the micro-ATX motherboard. 

     

    I am 100% sure which drive belongs in which "slot" of the "Disk 1-3" (I used to have only 3 disks and an SSD cache).

    I am sure about this because I used Unassigned Devices to check how much of each disk is used.

    Old server:

    [photo: old server]

     

    Unassigned Devices, mounted disks, with disk usages:

    [screenshot]

     

    New server after assigning disks to the correct Disk #
    [photo: new server]

     

    I ran "New Config" on the "Array" disks after setting them up as shown in the screenshot above, which led to this config. (By the way, the cache disk had already been detected successfully, as seen in the screenshot above this line; nothing changed there. Maybe because I only applied "New Config" to the "Array" devices?)

     

    [screenshot]

     

    However, when I start the array, I get an error on each disk. "Unmountable: Unsupported partition layout"
     

    What do I do next? I don't understand what needs to be done from the comments above saying "rebuilding one by one". What do I need to do to get it up again?
    In my opinion it should be fine now, so why is it unmountable?

  10. First off, thank you for the thorough response.

     

    To recap (correct me if I misunderstood): you basically mention two upgrade/transfer strategies, with some pre-steps.

     

    Presteps

    1. list of drives/serial numbers, print of Main tab

    2. local backup of flash drive (Main > flash > Flash Backup)

    3. backup of Diagnostics

     

    These seem like logical pre-steps. Thanks for that! :)

     

    Upgrade/transfer strategies

    1. move as much as possible to the new server, check disk assignments, start the array

    2. replace the parity drive, then rebuild until all drives are replaced

     

    Or did you mean that it's all part of ONE process? You mention taking baby steps, but this doesn't look like a baby step to me... It sounds more like one big massive step: moving all drives and hoping that everything works when you put them in the new PC.

     

    A few questions/comments on things you mentioned that aren't clear to me.

     

    12 minutes ago, ConnerVT said:

    All parity drives need to be larger than the data drives in the array.

    Understood. To be completely safe, I could use my spare 4TB as a second parity drive: first add the 4TB as second parity, then replace the old 2TB parity with the new 4TB HDD.

     

    14 minutes ago, ConnerVT said:

    Having the 2nd parity drive, as you suggested, is not much benefit. 

     

    I think this is incorrect as mentioned here: https://docs.unraid.net/unraid-os/manual/storage-management/#replacing-a-disk-to-increase-capacity

    Quote

    * If you have single parity then you are not protected against a different drive failing during the upgrade process. If this happens then post to the forums to get advice on the best way to proceed to avoid data loss.

    * If you have dual parity and you are upgrading a single data drive then you are still protected against another data drive failing during the upgrade process.

    It does make sense: while a rebuild is in progress, single parity offers no "backup" against another drive failing.

    22 minutes ago, ConnerVT said:

    For the Cache drive, follow the standard cache drive replacement procedure in the Unraid manual. 

    I don't see that in the manual, but to replace the cache drive, could I do this? (starts at 8:40) https://youtu.be/ij8AOEF1pTU?t=520

     

  11. Currently I have a Supermicro 1U Blade NAS (very old) with 4 HDD + 1 SSD;

    • 3 HDDs 2TB storage
    • 1 HDD 2TB parity
    • 1 SSD 1TB cache

    I want to move everything on to a new server and new disks.

    The new setup will be 5 HDDs + 1 M.2 SSD:

    • 4 HDDs 4TB storage
    • 1 HDD 4TB parity
    • 1 M.2

     

    I had the following in mind; correct me if this isn't going to work. I have a few questions about this approach at the end.

     

    1. Add one 4TB disk as a second parity disk. Having two parity disks protects the data a bit more in case one drive fails during a rebuild while replacing the others
    2. Replace one 2TB with a new 4TB disk
    3. Repeat step 2 till all disks are replaced
    4. Add the last 4TB disk. As I only had four 2TB disks, this one goes in as a new data disk
    5. Remove the 2TB parity disk. Now my system is complete with respect to HDDs


    Questions:

    1. How do I transfer from my SSD to the M.2, or should I not do that at all? How do I get this part to work?
    2. Can I just transfer the disks to the new system? Does it work in these steps?
      1. Transfer disks to new system
      2. Transfer Flash drive to new system
      3. Boot from Flash drive
      4. Have Unraid happily running?
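    Regarding question 1 above, one approach I'm considering is simply copying the old cache's contents to the new pool once both are mounted (a sketch; /mnt/cache and /mnt/cache_new are illustrative mount points, and letting the mover empty the cache to the array first would be an alternative):

    ```shell
    # Copy everything from the old cache pool to the new one. The trailing
    # slash on the source matters: it copies the directory's contents rather
    # than the directory itself.
    rsync -avh --progress /mnt/cache/ /mnt/cache_new/
    ```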

     

  12. How do I see which packages were installed?

     

    I've upgraded, but didn't know this one would become outdated. Is there any log of the previously installed packages, so I can add them to the /boot/extra list?

     

    By the way, has /boot/packages been replaced by /boot/extra?

     

    Edit: I just found out that /boot/packages came from the dmacias72 plugins. So that's the answer to both my questions.
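    For anyone finding this later: on Slackware-based systems (which Unraid is), every installed package leaves an entry under /var/log/packages, so listing that directory shows what is currently installed (a sketch, assuming the standard Slackware layout):

    ```shell
    # Each installed Slackware package is recorded as a file named
    # <name>-<version>-<arch>-<build> under /var/log/packages
    if [ -d /var/log/packages ]; then
      ls /var/log/packages | sort
    else
      echo "no /var/log/packages directory on this system"
    fi
    ```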
