
ahab666 (Members, 73 posts)

Posts posted by ahab666

  1. Well, I am facing a similar problem, even with only one supposedly high-quality NVMe as my cache drive ...

     

    Moving the files off the defective drive (emulated via parity) runs at 30 to 95 MB/s and uses up to 60% of the CPU.

    So moving a half-full 18 TB drive seems to take forever - read speeds off the HDD are about 140 MB/s, write speeds around 80-90 MB/s ...

    Transfer speeds stay at about 100 MB/s for a short while (roughly 20 seconds) and then crash down to 20 or 30 MB/s for the next few minutes ... so, patience ...

     

    I just hope that the emulated data stays emulated until a CRC32 or MD5 hash check confirms the validity of the data on the physical drive (see the sketch below) ...
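    In case it helps anyone, this is roughly how I plan to verify the copy afterwards. A minimal sketch only, assuming the disk is mounted at /mnt/disk1 (a placeholder) and that hashing before the rebuild is feasible:

        # Hash every file on the emulated disk before the rebuild:
        find /mnt/disk1 -type f -exec md5sum {} + > /tmp/disk1.md5

        # After the rebuild, re-check the same paths on the physical drive;
        # anything not reported as OK is corrupt or missing:
        md5sum -c /tmp/disk1.md5 | grep -v ': OK$'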

     

    good luck to you and cheers

  2. Hi gurus and experts,

     

    I have the opportunity to speed up my server a bit, so I'd like to ask you some noob questions:

     

    1. My HDD HBA card has 4x4 SATA 6 Gb/s connections - 4 connectors on the card, fanning out to 16 ends for the individual HDDs. How important is it to keep the HDDs on the same ports (LUNs)?

     

    2. In theory I can add 3 NVMe drives, but the connection speeds and lane counts differ: two of the slots are PCIe Gen 4, the third is PCIe Gen 3. Will the slower one interfere with the two faster ones, either by slowing them down or maybe even causing timing problems?

    6 TB of cache would be nice, but I can easily live with 4 (2x2) TB. Does it make sense to split them into 2 pools - the two faster ones as a mirror (RAID 1?), used exclusively for Docker apps, and the slower one as an input buffer?

     

    3. Can one of you name me the terminal commands for checking the integrity of the data HDDs' XFS file systems (superblocks and all that jazz) before I start the disassembly and reassembly of my server? My current guess is sketched below.
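    (My current guess, please correct me - a read-only check in maintenance mode, assuming disk 1 shows up as /dev/md1; device names may differ between Unraid versions:)

        # Start the array in maintenance mode first, then dry-run the check;
        # -n means "no modify", it only reports problems:
        xfs_repair -n /dev/md1

        # Repeat for each data disk (md2, md3, ...).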

     

    Aaah, and I need to/will replace 1 data drive as well as the second parity disk.

     

    Anything else to be aware of?

     

    thx for reading and cheers - ahab

  3. Well, it did, and I am definitely not using a RAID controller - just a pure HBA.

    I just wonder if I should add two new replacement disks to the array and copy the emulated files over to them, roughly as sketched below?
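    (Roughly this, I assume - disk numbers are placeholders:)

        # Copy everything from the emulated disk to a freshly added one,
        # preserving permissions and timestamps:
        rsync -avh --progress /mnt/disk1/ /mnt/disk5/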

    Now I need to prepare for my vacation - I will be away for 2.5 weeks starting Friday.

    But I'll be back :)

     

    cheers and thx for your help so far

  4. Yes, I had to - my controller has mini-SAS combo ports, 16 channels over 4 jacks; the cable splits up halfway, and the HDDs are connected via 4 separate SATA plugs.

     

    There were already issues with both disk 1 and disk 2 when I sent my original post - both disks were unmountable, but disk 2 had a different indicator in front.

    Disk 1 had the red "disabled" icon, while disk 2 has the orange warning triangle.

    There are issues with both disks. What really troubles me is that as soon as I start the array in non-maintenance mode, I see boatloads of read errors on the parity 2 drive.

     

    Any ideas what's going on?

     

    cheers

     

  5. Hello JorgeB et al.,

    Here is a newer diagnostics file. I changed the power supply cables as well as the data cables. Disk 1 as well as disk 2 are still emulated, but the plethora of errors I saw in the xfs_repair report with the -n option is really bothering me. If I restart the array, another resync will be initiated, which will take about 48 hours to complete; if I run a complete preclear, you need to add about 2x48 hours on top of that ...

     

    here is the new diag - cheers

    tower-diagnostics-20231107-1705.zip

  6. Hello,

     

    After some power problems, my Unraid server is missing 2 drives ("unsupported or missing file system").

    One of the drives is disabled but its data is still emulated; the other one does not even show that red indicator.

    xfs_repair replaced the primary superblock but is stuck at phase 2 because the log is empty or zeroed.
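    If I read the man page correctly, the next step it hints at is zeroing the log - destructive for whatever is still sitting in the log, so I have not dared to run it yet (the device name is a placeholder):

        # -L zeroes the XFS log and forces the repair to proceed;
        # last resort only, as unwritten metadata updates are lost:
        xfs_repair -L /dev/md2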

     

    Any ideas on how to repair or rescue my array?

     

    cheers

    tower-diagnostics-20231107-0342.zip

  7. Hello again,

     

    For some reason the GUI buttons for invoking the Mover no longer work, I mean the buttons at:

    1. GUI: Main, Array Operations, Move

    2. GUI: Main, Array Operations, (Mover) Scheduler, Mover Settings, Move Now

     

    Case 1: A very short error message flashes up, something like "Disabled ......." - too fast to read.

     

    Case 2: The GUI shows the Mover is running and "jumps" to the Trim tab. Clicking the Mover Settings tab again, it still shows the Mover is running, until I refresh the screen and go back to Mover Settings.
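    As a stopgap I can apparently start it from a terminal - assuming the mover script still lives at /usr/local/sbin/mover (my assumption for this release):

        # Kick off the mover manually instead of via the GUI;
        # newer releases supposedly also accept a "stop" argument:
        /usr/local/sbin/mover start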

     

    Any idea what is going wrong?

     

    cheers ahab 

    tower-diagnostics-20231007-1614.zip

  8. Okay - let's say the recovery of my data drive works out, and it changes back to a physical (not emulated) state. How can I recover the first parity drive?

    Do I need to preclear it again and then rebuild parity?

     

    BTW, 2 new diagnostics files: one after the rebuild, one after a reboot and some maintenance (2 Docker app updates).

     

     

    cheers

    tower-diagnostics-20230822-0900 after reboot.zip tower-diagnostics-20230822-0842 after rebuild.zip

  9. @JorgeB et al,

     

    The 3 short and extended SMART tests are all error-free; so far there are no errors on parity 1, parity 2, and/or the data drive in question.

    Here are 2 diagnostics files - the names explain what they are ... I could pack and send you the SMART log files as well if you need them.

     

    cheers

    tower-diagnostics-20230820-1927 maint mode.zip tower-diagnostics-20230820-1937 norm. mode.zip

  10. @JorgeB et al,

     

    Here we go, sirs: diagnostics. BTW, both the short and extended SMART tests for both parity disks are clean ...

    Also, during the resync I keep getting the warning that the log is either full or that something is flooding my syslog - the memory usage for logging is constantly at 100%.
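    For what it's worth, this is how I have been peeking at it (standard Linux tools; /var/log is a RAM disk on Unraid as far as I know):

        # How full is the log file system, and which files are the big ones?
        df -h /var/log
        du -sh /var/log/* | sort -h

        # Watch what is being appended in real time:
        tail -f /var/log/syslog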

     

    cheers

    tower-diagnostics-20230819-1509.zip

  11. @JorgeB

     

    After reading the doc I am even more confused. 

    I tried the unassign, reassign, resync procedure (in maintenance mode), but I got a boatload of read errors from my parity 2 drive - something like 500,000 reads and more than 20 million errors. Also, some of the emulated data seems to be destroyed, i.e. unreadable, now.

    Now I am SMART-testing the 2 parity drives and the "defective" data drive in maintenance mode. The short SMART test shows no errors; the extended one seems not to work at all - on the 2 parity drives the testing stops at 10%.
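    (For reference, I believe these are the shell equivalents, via smartctl from smartmontools; sdX is a placeholder for the real device:)

        # Start an extended (long) self-test in the background:
        smartctl -t long /dev/sdX

        # Check progress and results; the self-test log is near the bottom:
        smartctl -a /dev/sdX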

     

    I am getting a little bit nervous now.

     

    cheers

  12. Thx, I found a plethora of log files, but most of them (and their entries) refer to duplicate hash entries - from the time before I shrank my array down to fewer but bigger physical disks.

     

    Can I search all of those files for a specific string that will point me to the defective files?

    And how do you get rid of the duplicate entries? By deleting the log files after I have gotten rid of the corrupted ones?

     

    I am no *nix expert, so I am not familiar with the grep command ... 😕
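    (From what I have pieced together so far, something like this should do it - the path and search string are placeholders:)

        # Search all files under the log folder, case-insensitively,
        # printing file name and line number for every match:
        grep -rin "corrupted" /path/to/logfiles/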

     

    cheers - alex

  13. Hello,

     

    I am getting a "bunker" warning about some MD5 file corruptions, even though the most recent parity check ended with 0 errors.

     

    What can I do now? If there is a log file in which I can see the corrupt files listed, please tell me where to find it. Will an XFS file system check reveal the corrupt files and maybe even correct them?
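    (If I manage to locate the exported hash list, I assume something like this would at least enumerate the mismatches - the file name is a placeholder:)

        # Re-check every file against the stored hashes and keep only the failures:
        md5sum -c /path/to/exported-hashes.md5 2>/dev/null | grep -v ': OK$'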

     

    cheers alex 

     

    Bunker notify.png
