
Posts posted by dafa

  1. 1 hour ago, Kilrah said:

    Physically what's connected to what...

     

    Only one cable or two? If one, that's 4 SAS2 lanes shared between 34 drives, so yes, when all are accessed simultaneously that would mean about 80 MB/s to each of them.

     

    A single-drive disk speed test would not show anything, since the bottleneck only becomes noticeable when a significant number of drives are accessed simultaneously.

    I get what you're talking about...

    But here is the situation: all 8 drives currently syncing are in the Dell R720xd server itself. There should be no connection issue, and even if there were one, how could it slow down to 14 MB/s?
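Kilrah's back-of-the-envelope figure can be sanity-checked with a quick calculation. This is a rough sketch assuming SAS2's 6 Gb/s per lane, which works out to roughly 600 MB/s of usable bandwidth per lane after 8b/10b encoding:

```shell
# SAS2: 6 Gb/s per lane, roughly 600 MB/s usable after 8b/10b encoding (assumption)
lanes=4          # one SFF-8088 cable carries 4 lanes
per_lane_mb=600
drives=34
total=$(( lanes * per_lane_mb ))
per_drive=$(( total / drives ))
echo "${total} MB/s shared -> ~${per_drive} MB/s per drive"
```

That lands in the same ballpark as the ~80 MB/s estimate, and either way it is far above the 14 MB/s actually observed, so the shared cable alone would not explain the slowdown.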

  2. 4 minutes ago, Kilrah said:

    This doesn't say how they're connected. So you have 13 drives connected to an 8-port card, and 32 drives connected to another 8-port card? That means you have expanders, and it's precisely what's between the controllers and drives in terms of hardware and cabling that's needed.

    What kind of disk-connection description would be better for understanding? I'm new... sorry.

  3. Just now, Kilrah said:

    This doesn't say how they're connected. So you have 13 drives connected to an 8-port card, and 32 drives connected to another 8-port card? That means you have expanders, and it's precisely what's between the controllers and drives in terms of hardware and cabling that's needed.

    The 13 drives are in a Dell R720xd server, and the other 34 are in a Supermicro disk chassis, connected with an SFF-8088 cable and an HBA card.

  4. 5 hours ago, Kilrah said:

    You'd have to describe your whole setup with how disks are connected through what HBAs/expanders, what mobo slots are used etc so someone can figure out if there's a bottleneck somewhere.

    Thanks for your help.

    Here is the disk connection map:

    Host bridge: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 DMI2 (rev 04)

    PCI bridge: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 PCI Express Root Port 2c (rev 04)

    RAID bus controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
         13 drives (the left 13 in the picture; these are the main disks currently in the parity sync)

    PCI bridge: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 PCI Express Root Port 3a (rev 04)

    Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
         34 drives

    (screenshots of the disk layout attached)

     

    And it is going down to 14 MB/s...

    I can't figure out why this is happening; all 8 disks (2 parity and 6 data disks) are on the first bus, and no background app is running.
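For anyone trying to reproduce a map like the one above, something along these lines should work on most Linux systems. `lspci` is standard; `lsblk -S` (from util-linux) lists SCSI devices with their Host:Channel:Target:LUN address, so the host number shows which controller each drive hangs off:

```shell
# List SAS/RAID controllers (the "disk connection map" above came from output like this)
lspci | grep -iE 'sas|raid'

# Map each disk to its SCSI host: the first number in the HCTL column
# identifies the controller, so drives sharing a host share that HBA's lanes
lsblk -S -o NAME,HCTL,TRAN,MODEL
```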

     

     

  5. 1 hour ago, Kilrah said:

    Seems about right. There could be a bottleneck somewhere with that many drives, but since parity operations are limited to the slowest drive, and these old 3TB ones are likely not able to do more, it's probably more the latter.

    Thanks a lot

    But I have run the DiskSpeed test in Docker; the result shows the slowest drive is 90 MB/s, and it's not a disk in the array but an unassigned device that isn't even mounted...

    So is there any other cause here where I can make some improvement?

  6. After my last topic, “unraid kepping rebooting”:

    There turned out to be no solution, so I just bought another server to replace the old one.

    At the beginning it went quite well for two days, and then it suddenly died.

    I don't know how to deal with it, and I've started to wonder: could this be an OS problem? Is my Unraid flash drive bad?

    Or is there any way I can pull out only my data pool settings, without backing up the whole flash?

    Anyone, HELP!

  7. 2 minutes ago, itimpi said:

    No. By default the logs are only kept in RAM, and thus the diagnostics you posted do not cover the failure period. The link was for a method of getting logs that DO survive a reboot.

    OK, I will send the syslog here after I get it. Thanks!
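itimpi's point is that /var/log lives in RAM on Unraid, so anything not copied elsewhere is lost on reboot. Unraid's built-in syslog-server settings are the proper way to persist logs; as a crude manual fallback (assuming the flash drive is mounted at /boot, as on a standard Unraid install), one could copy the log out by hand before a crash-prone period:

```shell
# /var/log is a RAM disk on Unraid, so logs vanish on reboot.
# Rough fallback: copy the current syslog to the flash drive so a
# timestamped snapshot survives the next crash. The hypothetical
# /boot/logs directory is created if missing.
mkdir -p /boot/logs
cp /var/log/syslog "/boot/logs/syslog-$(date +%Y%m%d-%H%M%S).txt"
```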

     

  8. On 11/10/2019 at 1:51 PM, Benson said:

    It seems booting just points to a wrong device instead of the USB. Please try entering the LSI BIOS and disabling "bootable".


    Hell yeah!

    I finally made it with Benson's method:

    disable boot support on the SAS adapter via Ctrl-C.

    Great thanks to Benson!! Love you guys, amazing!

    (The pictures below were taken after the successful boot change, so they might not be exact;

    they just show the proper order to get things done.)

     

    (screenshots of the BIOS steps attached)

  9. On 11/11/2019 at 2:29 PM, Dissones4U said:

    It looks like you're using ECC memory that is going bad, but I feel like something may be wrong with your mover settings too; hopefully one of the guys with more experience can make sense of your syslog, because there is a lot going on there.

     

    I think it would be okay to go to Settings --> Scheduler --> Mover Settings and disable logging for now...

    It's not working, but thanks.

  10. 2 hours ago, Dissones4U said:

    It looks like you're using ECC memory that is going bad, but I feel like something may be wrong with your mover settings too; hopefully one of the guys with more experience can make sense of your syslog, because there is a lot going on there.

     

    I think it would be okay to go to Settings --> Scheduler --> Mover Settings and disable logging for now...

     

    Thanks, I have already disabled it, and I will reboot to check whether it's OK once the current copy job is done.

    Then I will post the result. Thanks!

  11. 21 hours ago, Benson said:

    It seems booting just points to a wrong device instead of the USB. Please try entering the LSI BIOS and disabling "bootable".

     

    I checked the BIOS but did not see the option you mentioned.

    I also checked the boot settings to make sure the USB is the only boot device (in 1st position) and no other device is bootable (see IMG_6601).

    It turns out nothing changed.

    I even tried setting all the drives to disabled in the boot settings. Still nothing changed.

     

    IMG_0759.HEIC IMG_3721.HEIC IMG_4239.HEIC IMG_4318.HEIC IMG_4794.HEIC IMG_6601.HEIC

  12. Hi all,

    Here is the problem in short:

    Quote

    Can't boot Unraid with the disks inside the server.

    And here is the full description of the problem:

     

    I installed Unraid successfully on my server and used it for a while. It ran quite well for me.

    But one day I shut it down, and then I just couldn't boot it. The BIOS shows the USB drive is OK, so I plugged it into another server for a test; there it booted into Unraid just fine.

    So I plugged it back into the original server, and this time I pulled all the disks in the array out before booting the server.

    It immediately booted into Unraid successfully, and then I plugged all the disks back into the server, and it works.

     

    BUT!

    Every time I reboot, or shut down and turn it on again, I have to pull all the disks out before Unraid will boot. This has become a real problem: I have more than 20 disks, so I have to pull them all out and plug them all back in, again and again...

     

    I've only been using Unraid for a month; can I get some help from you guys?

    Thanks!

     
