Posts posted by bcbgboy13

  1. I am not sure where you are located, but it is best to retire the old AMD platform and replace it with an Intel one (even an old one).

     

    Check eBay, your local Kijiji or Craigslist boards - there are a lot of people selling used server-grade hardware, and you could probably find a used Supermicro MB, Xeon CPU(s) and 4-8GB of ECC memory for $100 or so.

    Check that it is stable - run memtest for a day, even install an old Windows and run some stress-test software - and if it holds up, then upgrade.

     

    This is the one thing that will speed up your parity checks or rebuilds and still leave some horsepower for other things you are not presently doing but may consider in the future.

     

    The reason: the newer Unraid releases use newer Linux kernels, which take advantage of newer instruction sets (AVX2) that are not available in these older AMD CPUs - Semprons, Athlon X2/X4s or Phenoms. If you keep your present system, even with a newer CPU (raw CPU speed matters more than core count), you will have very long parity checks, especially if you move to dual parity. There is no way around that....

    https://en.wikipedia.org/wiki/Advanced_Vector_Extensions
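
    If you want to check what your current CPU actually supports before spending anything, here is a minimal check from the Unraid console (assuming a standard Linux shell; no output means the CPU lacks the flag):

    # print the CPU flags line once and look for AVX2 support
    grep -m1 '^flags' /proc/cpuinfo | grep -o 'avx2'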

     

    I personally loved these old AMD CPUs for basic Unraid functionality.

     

    These AMD CPUs were always ECC capable, and ECC plus a UPS is a must for me.

    Some of the motherboard manufacturers at the time (Asus, Biostar) actually routed all 72 tracks to the RAM slots and kept (or did not disable) the ECC functionality in the BIOS - so by using the slightly more expensive unbuffered ECC memory one could end up with an energy-efficient "server-grade" system at very low cost compared to the price of the Intel Xeon equivalents.

    I started out with a dual-core 4850e (45W TDP); once I migrated to 6.6.6 I upgraded to a 4-core 610e (still 45W TDP), but it was still not powerful enough. During a parity check I was afraid to run anything else, even preclearing a new HD on the side, as the CPU was maxed out. For basic Unraid functionality, though, they were OK.

    If you decide to stay with the current system and just change the CPU - raw CPU speed matters more for parity speed than the number of cores.

  2. I am not sure if you can mix SATA and SAS drives on the same breakout cable.

    They use a different signaling scheme, which also allows for much longer cables for the SAS drives.

    I believe that if you mix them on the same cable this will force the controller to use SATA signaling, and the SAS drives may not like that.

     

    The cable you are using is already 1m long, which is the maximum for a SATA cable. It is better to attach the SATA drive to one of the MB ports with a shorter regular SATA cable and keep the SAS drives on their own cable.

  3. Possible problems with the "clear_array_drive" script and xfs-formatted disks, resulting in extremely low speed

     

    I initially ran the script on one reiserFS-formatted disk and it worked beautifully.

     

    Then I decided to change my disks from reiserFS to xfs using the "mirror" strategy and ended up with one unused drive (the smallest one). By default my disks are set to use xfs, so when I formatted it in the last step it used xfs.

    I saw that the disk was shown as "MBR unaligned", so I zeroed the first 512 bytes. After another format it was all OK, and I proceeded to zero the disk using the script above.
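
    For reference, wiping that first 512-byte sector can be done with a one-liner along these lines (a sketch only - /dev/sdX is a hypothetical device name, so double-check it against the Main page before running anything like this):

    # overwrite the first sector (the MBR) of the unassigned disk with zeros
    dd if=/dev/zero of=/dev/sdX bs=512 count=1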

    It started at over 200MB/s but quickly dropped to 1.2MB/s and later to 600-700kB/s. In a few hours it had zeroed only 1.5GB, yet it produced tens of millions of "writes" on the main page. Nothing wrong in the log file... I tried to kill the "dd" PID, then used Ctrl-C, but was not successful and had to stop the array.

    It took maybe 10-30 minutes, but finally it stopped and I could power down the server.
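
    For anyone who hits the same wall, this is roughly how one would locate and signal the dd process from the console (a sketch; the PID will of course differ on your system):

    # find the dd started by the clearing script
    ps aux | grep '[d]d if=/dev/zero'
    # ask it to stop (replace 12345 with the PID from the line above)
    kill 12345
    # dd also prints its current progress when sent USR1
    kill -USR1 12345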

     

    I inspected the cables, powered up the server, formatted the disk again and let it run overnight. In the morning it had barely zeroed 15GB, the speed was around 500kB/s, and again there were tens of millions of "writes" on the main page.

     

    I repeated the stopping/shut-down procedure, but this time after the power-up I formatted the disk with the older reiserFS and gave the script another go.

    BINGO - it works as it should - in two and a half hours it had zeroed 700GB+ and the "writes" are around 1.65 million.
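
    For anyone who wants to keep an eye on those write counters while the script runs, the kernel exposes them per disk (a sketch, with sdX standing in for whatever device you are clearing):

    # field 5 is write requests completed, field 7 is sectors written
    watch -n 5 cat /sys/block/sdX/stat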

     

    I believe someone should investigate this further:

    reiserFS-formatted disks - work fine

    xfs-formatted disks - I read elsewhere that the array should be in "maintenance mode" (not tested by me)

    btrfs-formatted disks - no idea!!!

     

    Just for kicks I decided to format the drive with btrfs, and surprisingly the script refused to run, claiming that there is not a single empty hard drive!!!

    ==============================================================================================

    To summarize my tests for the "clear_array_drive" script:

    unRAID v 6.6.6

    1TB WD green disk with no SMART problems.

     

    reiserFS format - 33.6MB used space - script runs as intended

     

    btrfs format - 17.2MB used space - script refused to run claiming no empty hard drive found

     

    xfs format - 1.03GB used space - the script runs painfully slowly, with tens of millions of "writes" to the disk.

    I cancelled the test, as the estimated time could have been weeks or even months!!!

    Another post here claims it works with the array running in "maintenance mode" (I did not test this).

    =============================================================================================

     

     

     

     

     

     

  4. Hi,

     

    I have an old Fujitsu Siemens RX300 with a SAS 1068E:

     

    root@ubuntu01:~/Flash/LSI/1.5Gs_3Gs_SATA_Support_Firmware# ./sasflash -listall
    
    ****************************************************************************
        LSI Corporation SAS FLASH Utility.
        SASFlash Version 1.24.00.00 (2009.11.13)
        Copyright (c) 2006-2007 LSI Corporation. All rights reserved.
    ***************************************************************************
           Adapter Selected is a LSI SAS 1068(B0):
    Num   Ctlr      FW Ver     NVDATA   x86-BIOS     EFI-BSD    PCI Addr
    -----------------------------------------------------------------------
    1   1068(B0)  01.10.01.00  22.11  06.06.00.00    No Image   00:02:08:00
    Finished Processing Commands Successfully.
            Exiting SASFlash.

    Looks like it's a B0 chipset? Is there a way to update it?

    Thanks

    Guldil

     

    One has to be careful here - there is the 1068 chip and there is the 1068E, one of them being a PCI-X chip and the other a PCIe (express) chip.

     

    Yours appears to be a 1068 (B0), and in that case you should look at the LSI legacy products as the source for newer firmware and BIOS. One such example is the LSI SAS 3081X-R.

     

    Direct link to the latest firmware here - http://www.lsi.com/downloads/Public/Host%20Bus%20Adapters/Host%20Bus%20Adapters%20Common%20Files/SAS_SATA_3G_P21/SAS3080XR_%20Package_P21_IR_IT_Firmware_BIOS_for_MSDOS_Windows.zip

     

    If you decide to proceed I am not responsible for what could happen.

    And if you succeed please let us know.

  5. Please help - I have an HP xw9400 Workstation with an onboard LSI 1068E (B1) SAS IR controller (connected as PCIe x8).

    I am trying to flash it to IT mode, but so far all I get is "Error: Can't flash 2MB firmware into 1MB flash".

    Where can I get the right firmware?

     

    That is strange, as the IR firmware is always larger than the IT firmware.

     

    Anyway the link is here - http://www.lsi.com/downloads/Public/Host%20Bus%20Adapters/Host%20Bus%20Adapters%20Common%20Files/SAS_SATA_3G_P21/SAS3081ER_%20Package_P21_IR_IT_Firmware_BIOS_for_MSDOS_Windows.zip

     

    As you have a B1 chip you must use 3081ETB1.fw.

  6. I just bought an M1015 on eBay (prices are highly variable...)

     

    Can I flash an M1015 directly to P13 by replacing the IT firmware "2118it.bin" in the batch file with the new version from the LSI website?

    Does anything else need to be changed? "mptsas2.rom"?

     

    Has anyone tried the new P14 version?

    Are the changes needed to flash P14 the same as for P13?

     

    Can someone with P14 please answer?

    Thank you

     

    I do not use the files posted on the first page, so I cannot comment on the flashing procedure under Windows.

     

    However, one can safely copy/overwrite the firmware file and the BIOS file from any newer revision over the existing ones and update straight to the latest one.

     

    Keep in mind that 2118it.bin is the original firmware file name for the LSI 9211-8i and 2108it.bin is for the LSI 9210-8i - they are functionally identical, but the device name will change. Do not forget to overwrite the correct BIOS file too - it is always named mptsas2.rom.

     

    If you are coming from a previous version you do not need to go through the whole procedure - the update is done with a single command:

     

    sas2flsh -o -f 2118it.bin -b mptsas2.rom
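
    After it finishes you can confirm that the card picked up the new version with the same tool; this should list the firmware, NVDATA and BIOS versions for every SAS2 controller in the box:

    sas2flsh -listall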

  7. Successfully flashed - IBM ServeRAID M1015 to LSI MegaRAID to SAS2008(P11).zip IT mode on a Asus p5b deluxe

    Is this the latest version of the firmware?

    great directions, very easy to follow! 8)

     

    P11 is an old version of the firmware - the latest one is P13.5 - http://www.lsi.com/downloads/Public/Host%20Bus%20Adapters/Host%20Bus%20Adapters%20Common%20Files/SAS_SATA_6G_P13.5/9210_8i_Package_P13.5_IR_IT_Firmware_BIOS_for_MSDOS_Windows.zip

     

    And since this thread is often read by non-Unraid users as well, there is newer firmware for the 9240-8i mode too - http://www.lsi.com/downloads/Public/MegaRAID%20Common%20Files/MERGED_20.10.1-0099_SAS_2008_FW_Image_APP-2.120.294-1580.zip

     

    BTW - there are now PCIe 3.0 versions of these cards (based on SAS2308)

  8. Follow up ...

    I did find the manual for this motherboard and see that it says both PCIe slots are reserved for graphics cards. Wish I'd have known this was a potential problem before I bought the card.

    Any thoughts on the links to potential replacements, or any other LGA 775 Core 2 Duo motherboards at Newegg that would work with the IBM ServeRAID M1015 would be appreciated.

     

    A few suggestions:

     

    1. Flash the motherboard with the latest BIOS, then set the default configuration, save it, reboot, go into the BIOS again and disable any unused hardware features - floppy drive, serial and parallel ports, audio, FireWire, and the IDE controller if you are not using any older IDE HDs. Save this configuration.

     

    2. Make sure that all the jumpers JPE1 to JPE8 and also JPE9 are set to CrossFire mode (all of them in position 2-3 closed) - see page 21, bottom right. This "enables" the second PCIe x16 slot so it is functional and running at x8 speed. By default they are in position 1-2 to give your video card maximum bandwidth, and the second PCIe x16 slot is disabled.

    Try now - your M1015 should work.

     

    3. If no luck again - swap the controller with the video card and try again (Controller in the Master PCIe, video card in the Slave PCIe).

     

    4. If no luck again, and if you only plan to use a maximum of 16-18 HDs, you can look for an older PCI-only video card - people usually throw these in the garbage - and it may be possible to avoid buying a new MB.

     

    However I am 99% sure that everything will work after steps 1 and 2

     

    Good luck and let me know.

     

    PS. Use only Beta 12/12a or the just-released RC2.

  9. I see the flash procedure for the Intel RS2WC080 works. Will this work on the RS2BL080 too?

     

    This is a much nicer (and different) card, based on the SAS2108 chip. It is similar to the LSI 9260-8i and IBM M5015.

     

    The procedure will be similar but you may have to use different tools and firmware.

     

    Some quick search:

    http://www.intel.com/content/www/us/en/servers/raid/raid-controller-rs2bl.html

    http://www.lsi.com/products/storagecomponents/Pages/MegaRAIDSAS9260-8i.aspx

    http://www.xtremesystems.org/forums/showthread.php?271922-LSI-2108-based-card-cross-flashing-%28Dell-H700-LSI-9260-IBM-M5015-Intel-RS2BL080%29

  10. I am running the 12 beta on a brand new 3TB Hitachi green HD.

     

    At one point there was a 6 to 11 MB/s difference between the speed reported by the console (higher) and by myMain. This difference has now grown to 40 MB/s during the post-read.

    The myMain speed is given as 51 MB/s while the console is showing around 91 MB/s (at 58% done with the post-read, around the 28h:30min mark).

     

    Will report later once it is done.

    After it is done, and before assigning the drive to the unRAID array, you can post the results of this command to assist me in knowing if it worked as expected on your 3 TB drive:

    dd if=/dev/sda count=1 2>/dev/null | od -x -A x

    (substituting your disk for /dev/sda )

     

    The report:

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: ========================================================================1.12

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == invoked as: ./preclear_disk1.12beta.sh -A /dev/sdb

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: ==  Hitachi HDS5C3030ALA630    MJ1311YNG258NA

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == Disk /dev/sdb has been successfully precleared

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == with a starting sector of 1

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == Ran 1 cycle

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: ==

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == Using :Read block size = 8225280 Bytes

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == Last Cycle's Pre Read Time  : 9:42:34 (85 MB/s)

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == Last Cycle's Zeroing time  : 9:26:21 (88 MB/s)

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == Last Cycle's Post Read Time : 18:50:50 (44 MB/s)

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == Last Cycle's Total Time    : 38:00:54

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: ==

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == Total Elapsed Time 38:00:54

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: ==

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == Disk Start Temperature: 29C

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: ==

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == Current Disk Temperature: 32C,

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: ==

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: ============================================================================

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: ** Changed attributes in files: /tmp/smart_start_sdb  /tmp/smart_finish_sdb

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]:                ATTRIBUTE  NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]:      Temperature_Celsius =  187    206            0        ok          32

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]:  No SMART attributes are FAILING_NOW

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]:

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]:  0 sectors were pending re-allocation before the start of the preclear.

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]:  0 sectors were pending re-allocation after pre-read in cycle 1 of 1.

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]:  0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]:  0 sectors are pending re-allocation at the end of the preclear,

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]:    the number of sectors pending re-allocation did not change.

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]:  0 sectors had been re-allocated before the start of the preclear.

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]:  0 sectors are re-allocated at the end of the preclear,

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]:    the number of sectors re-allocated did not change.

    Jun 18 21:27:21 unraid preclear_disk-diff[16035]: ============================================================================

     

     

    The output of the command: dd if=/dev/sdb count=1 2>/dev/null | od -x -A x

     

    000000 0000 0000 0000 0000 0000 0000 0000 0000

    *

    0001c0 0002 ff00 ffff 0001 0000 ffff ffff 0000

    0001d0 0000 0000 0000 0000 0000 0000 0000 0000

    *

    0001f0 0000 0000 0000 0000 0000 0000 0000 aa55

    000200

     

  11. I am running the 12 beta on a brand new 3TB Hitachi green HD.

     

    At one point there was a 6 to 11 MB/s difference between the speed reported by the console (higher) and by myMain. This difference has now grown to 40 MB/s during the post-read.

    The myMain speed is given as 51 MB/s while the console is showing around 91 MB/s (at 58% done with the post-read, around the 28h:30min mark).

     

    Will report later once it is done.

  12. You are using a full Slackware install (which is way above my "pay grade" ;))

     

    I'm setting up my unRAID (Pro) server for the first time (and running on a full Slackware 13.1 installation).

     

    Thoughts?

     

    If I saw this in someone else's log, I might think it was due to an insufficient PSU.  I don't think that's my issue, though; I've got a 480W Antec power supply running 3 HDDs, a CD/DVD drive, a graphics card, and the motherboard/CPU -- that's it.

     

    But you are omitting a lot of things from your "simple" hardware list - WD7000 SCSI and 3ware 9xxx controllers, RAID6, some Compaq hardware, and all this on an older nVidia-based motherboard.

     

    Let's see what the Linux guys will say.

     

     

     

  13. Interesting, didn't know that.

     

    No surprise.... it is wrong.

     

    Not exactly RAID, but it is true, as the SSD controller uses a parallel channel architecture and the smaller capacities use only a part of those channels. For example the Intel one uses 10 channels - http://www.intel.com/cd/channel/reseller/apac/eng/products/nand/feature/index.htm

    If you populate the PCB with only 5 pcs of 8 GByte flash memory on one side, you end up with the value-series 40GB model, which uses only 5 channels, hence the 35MB/s write speed.

    Populate all ten positions on the same side and you use all 10 channels, which doubles the write speed to 70MB/s.

     

    Picture of the drive PCB here:

    http://www.storagereview.com/intel_x25v_ssd_review_40gb

     
