bcbgboy13

Members
  • Posts

    556
  • Joined

  • Last visited

  • Days Won

    1

Everything posted by bcbgboy13

  1. We have four different physical connectors and their pictures are below: http://zone.ni.com/cms/images/devzone/tut/a/c05afe69124.gif A smaller card will fit and will probably work in a larger slot (though with reduced performance if the slot has fewer electrical lanes than the card). The BR10i is physically an x8 card.
  2. Quoting the earlier reply: "Yes, either Option 1 or Option 2 as I outlined above will have plenty of power to run all of those add-ons. The new Celerons that bcbgboy13 mentioned may also work, but some add-ons will benefit from a dual-core processor." They are dual core. However, I am not sure if they have all the bells and whistles for virtualization. And there are unconfirmed reports that they support ECC with an ECC-capable chipset.
  3. There are brand new Celerons in socket 1155 now, providing much better value while apparently being very energy efficient at the same time. The G620, G840 and G850 are 65W; the G620T is only 35W. The G620 is $73 from Amazon.
  4. http://lime-technology.com/forum/index.php?topic=7451.0 It is an x8 card (it will need a physical x8 or x16 connector to fit, but should work electrically at x4 without problems). Driver support for sure in 5beta... Unfortunately it appears that there is no 3TB or larger HD support, as it is an older card and the manufacturer may not provide firmware for that (it is always better for them to sell new cards). Warning - many of the cards on eBay come without a mounting bracket.
  5. Buy a cheap BR10i (and one or two breakout cables) from eBay and you will have plenty of expansion options.
  6. In my opinion, and with only 4 new WD20EARS drives, the smartest choice is to keep everything you have and only buy a single 4-in-3 such as this one: http://www.amazon.de/s/ref=nb_sb_noss?__mk_de_DE=%C5M%C5Z%D5%D1&url=search-alias%3Daps&field-keywords=STB-3T4-E3-GP&x=17&y=20 This will provide excellent cooling for the four new HDs (and I assume the two others are presently taken care of). The 300W power supply is probably powerful enough to handle the 6 HDs and has excellent efficiency.
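     As a rough sanity check on that 300W claim, here is a back-of-the-envelope sketch with assumed typical per-drive figures (not measured values for these exact drives):

       # Rough power budget; the per-drive spin-up figure (~28W) and the
       # board/CPU allowance (80W) are assumptions, not measurements.
       DRIVES=6
       SPINUP_W_PER_DRIVE=28      # roughly 2A at 12V plus a little on the 5V rail
       BOARD_CPU_W=80             # generous allowance for board, CPU, RAM and fans
       echo "Estimated peak draw: $(( DRIVES * SPINUP_W_PER_DRIVE + BOARD_CPU_W )) W of 300 W"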
  7. Why not AMD - you know that with AMD you can get the ECC functionality for "free", as it is built into any relatively recent AMD CPU, and the extra cost is only the slightly more expensive ECC memory. However, not all motherboard vendors support it, though Asus usually does. On the other hand, Intel charges an arm and a leg for this "must have" feature on any server. IMHO - ECC, a UPS, and keeping your HD temperatures under 40C, and you will rarely if ever visit the "support" forum here.
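     If you do go the AMD + ECC route, it is worth confirming that the board actually runs the memory in ECC mode (a quick sketch; assumes dmidecode is available on the unRAID box):

       # What the BIOS reports for the memory array
       dmidecode -t memory | grep -i "error correction"
       # "Single-bit ECC" or "Multi-bit ECC" means ECC is active;
       # "None" means the board is treating the ECC modules as plain non-ECC RAM.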
  8. Are the HDs set to AHCI mode (in the motherboard BIOS)? In my own experience 38 to 65 hours for a preclear is awful (it should be around 26-28 hours on decent hardware like yours). Perhaps you should use new data cables with locking clips and not use zip ties at all for your test, to see if the speed increases (you may have damaged the cables in your quest for a nice cabling job).
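     Before committing to another 38-hour preclear, a quick raw-read test will show whether the drive and port are even capable of normal speed (a sketch; hdparm is on the stock unRAID install as far as I remember, and substitute your own device):

       # Non-destructive sequential read benchmark on the suspect drive
       hdparm -tT /dev/sdX
       # A healthy modern drive on a properly configured AHCI port should report
       # well over 80 MB/s buffered disk reads; much lower numbers point at the
       # cable, the port mode or the controller rather than the preclear script.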
  9. After it is done, and before assigning the drive to the unRAID array, you can post the results of this command to assist me in knowing if it worked as expected on your 3 TB drive: dd if=/dev/sda count=1 2>/dev/null | od -x -A x (substituting your disk for /dev/sda). The report:
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: ========================================================================1.12
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == invoked as: ./preclear_disk1.12beta.sh -A /dev/sdb
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == Hitachi HDS5C3030ALA630 MJ1311YNG258NA
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == Disk /dev/sdb has been successfully precleared
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == with a starting sector of 1
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == Ran 1 cycle
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: ==
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == Using :Read block size = 8225280 Bytes
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == Last Cycle's Pre Read Time : 9:42:34 (85 MB/s)
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == Last Cycle's Zeroing time : 9:26:21 (88 MB/s)
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == Last Cycle's Post Read Time : 18:50:50 (44 MB/s)
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == Last Cycle's Total Time : 38:00:54
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: ==
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == Total Elapsed Time 38:00:54
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: ==
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == Disk Start Temperature: 29C
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: ==
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: == Current Disk Temperature: 32C,
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: ==
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: ============================================================================
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: ** Changed attributes in files: /tmp/smart_start_sdb /tmp/smart_finish_sdb
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: ATTRIBUTE NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS RAW_VALUE
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: Temperature_Celsius = 187 206 0 ok 32
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: No SMART attributes are FAILING_NOW
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]:
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: 0 sectors were pending re-allocation before the start of the preclear.
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: 0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: 0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: 0 sectors are pending re-allocation at the end of the preclear,
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: the number of sectors pending re-allocation did not change.
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: 0 sectors had been re-allocated before the start of the preclear.
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: 0 sectors are re-allocated at the end of the preclear,
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: the number of sectors re-allocated did not change.
     Jun 18 21:27:21 unraid preclear_disk-diff[16035]: ============================================================================
     The output of the command: dd if=/dev/sdb count=1 2>/dev/null | od -x -A x
     000000 0000 0000 0000 0000 0000 0000 0000 0000
     *
     0001c0 0002 ff00 ffff 0001 0000 ffff ffff 0000
     0001d0 0000 0000 0000 0000 0000 0000 0000 0000
     *
     0001f0 0000 0000 0000 0000 0000 0000 0000 aa55
     000200
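     For anyone reading along, that dd | od check simply dumps the first 512 bytes (the MBR) of the precleared disk so you can eyeball the preclear signature. A minimal sketch of the same check and what to look for (substitute your own device for /dev/sdb):

       # Dump sector 0 (the MBR) of the precleared disk
       dd if=/dev/sdb count=1 2>/dev/null | od -x -A x
       # Things to look for in the output above:
       #  - in the partition entry at offset 0x1be, the 4-byte start-sector field
       #    (offset 0x1c6) reads 01 00 00 00, i.e. starting sector 1, matching the
       #    "starting sector of 1" line in the preclear report;
       #  - the last two bytes at offset 0x1fe are 55 aa (od prints the word
       #    byte-swapped as "aa55"), the standard MBR boot signature.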
  10. I am running the 1.12 beta of the preclear script on a brand new 3TB Hitachi green HD. There was a 6 to 11 MB/s difference between the speed reported from the console (higher) and by myMain at one point. This difference has now grown to 40 MB/s during the post-read: myMain gives the speed as 51 MB/s while the console is showing around 91 MB/s (at the 58% done mark of the post-read - around 28h:30min). Will report later once it is done.
  11. I also upgraded a week ago. The parity check was fine, but at the usual speed (the initial estimates of under 6 hours did not materialize). The madburg test (spin up, spin down, spin up, spin down, stop, reboot - checked with a power meter) was performed a few times - no problem whatsoever (knock on wood). Hardware used:
     Biostar TA790GX
     4 GB ECC DDR2 (2 x 2GB HP Smart Buy)
     750W UPS
     10 HDs (from 1 to 2 TB)
     One Supermicro AOC-SASLP-MV8
     One M1015 flashed to LSI IT BIOS (for now diskless)
     No cache disk and only unMenu.
     I am going to preclear a 3TB disk now, so I will probably be unavailable for 36-40 hours???
  12. Quoting the reply: "Are you running stock unRAID, any add-ons, running unRAID via ESXi, etc.? - Not running under any virtualization; physical unRAID. It has the same add-ons as before, since I upgraded by copying those 2 files. It has unMENU, smarthistory, preclear_disk, APCUPSD, Powerdown, SSMTP, Status email, VIM, Screen, iStat - just basics, for lack of a better word. What I did notice is... in this release the MTU size of 16110 did not take per the syslog (it always did previously); I changed it to 9000 and it took. Maybe the driver or something changed; not a big deal - the goal was to go with the highest and then let the two ends negotiate. And I can't see this causing a problem being set at 9000 with this version."
     Something is very wrong in your posted syslog if you claim no virtualization and pure physical unRAID. From the syslog:
     Jun 14 21:47:55 PNTower pspci[10195]: 00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
     Jun 14 21:47:55 PNTower pspci[10195]: 00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
     Jun 14 21:47:55 PNTower pspci[10195]: 00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
     Jun 14 21:47:55 PNTower pspci[10195]: 00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
     Jun 14 21:47:55 PNTower pspci[10195]: 00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
     Jun 14 21:47:55 PNTower pspci[10195]: 00:07.7 System peripheral: VMware Inc Virtual Machine Communication Interface (rev 10)
     Jun 14 21:47:55 PNTower pspci[10195]: 00:0f.0 VGA compatible controller: VMware Inc Abstract SVGA II Adapter
     Jun 14 21:47:55 PNTower pspci[10195]: 00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
     Jun 14 21:47:55 PNTower pspci[10195]: 00:11.0 PCI bridge: VMware Inc PCI bridge (rev 02)
     Jun 14 21:47:55 PNTower pspci[10195]: 00:15.0 PCI bridge: VMware Inc Unknown device 07a0 (rev 01)
     Jun 14 21:47:55 PNTower pspci[10195]: 00:15.1 PCI bridge: VMware Inc Unknown device 07a0 (rev 01)
     Jun 14 21:47:55 PNTower pspci[10195]: 00:15.2 PCI bridge: VMware Inc Unknown device 07a0 (rev 01)
     Jun 14 21:47:55 PNTower pspci[10195]: 00:15.3 PCI bridge: VMware Inc Unknown device 07a0 (rev 01)
     Jun 14 21:47:55 PNTower pspci[10195]: 00:15.4 PCI bridge: VMware Inc Unknown device 07a0 (rev 01)
     Jun 14 21:47:55 PNTower pspci[10195]: 00:15.5 PCI bridge: VMware Inc Unknown device 07a0 (rev 01)
     Jun 14 21:47:55 PNTower pspci[10195]: 00:15.6 PCI bridge: VMware Inc Unknown device 07a0 (rev 01)
     Jun 14 21:47:55 PNTower pspci[10195]: 00:15.7 PCI bridge: VMware Inc Unknown device 07a0 (rev 01)
     Jun 14 21:47:55 PNTower pspci[10195]: 00:16.0 PCI bridge: VMware Inc Unknown device 07a0 (rev 01)
     Jun 14 21:47:55 PNTower pspci[10195]: 00:16.1 PCI bridge: VMware Inc Unknown device 07a0 (rev 01)
     Jun 14 21:47:55 PNTower pspci[10195]: 00:16.2 PCI bridge: VMware Inc Unknown device 07a0 (rev 01)
     Jun 14 21:47:55 PNTower pspci[10195]: 00:16.3 PCI bridge: VMware Inc Unknown device 07a0 (rev 01)
     The 440BX is Pentium II era, and you have a Xeon X3470:
     Jun 14 19:12:09 PNTower kernel: CPU0: Intel® Xeon® CPU X3470 @ 2.93GHz stepping 01
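     If the machine really were bare metal, none of those VMware devices would show up at all. A quick sketch of the kind of check that settles it (assumes lspci and dmidecode are present, which they should be on a stock unRAID/Slackware install):

       # Any hits here mean the OS is running inside a VMware guest
       lspci | grep -i vmware
       # DMI gives it away too; in a VMware guest this typically prints
       # "VMware Virtual Platform" instead of the real motherboard model
       dmidecode -s system-product-name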
  13. Upgraded to this one. No problems so far. Started a parity check and noticed a significant increase in the reported speed in the first few minutes - at least 15-20 MB/s better. Will see the end results tomorrow.
  14. To "speeding ant" - you have very old Gigabyte board and at present two of the HDs are with HPA. I am not sure of your exact upgrade path so far but it was supposed for everyone to upgrade first to 4.7, check out if everything was OK and then proceed to 5.xxx. However I do recall having a HPA was a big "no go" for upgrade to 4.7 Perhaps the "new" hard drive that you have is new because of a new second instance of the HPA, then the MBR will be affected and you may get the symptoms you have now. BIOS: Jun 8 10:31:40 Media kernel: DMI: Gigabyte Technology Co., Ltd. EP35-DS3/EP35-DS3, BIOS F4 06/19/2009 Jun 8 10:31:40 Media kernel: ata5.00: HPA detected: current 1953523055, native 1953525168 Jun 8 10:31:40 Media kernel: ata5.00: ATA-8: ST31000340AS, SD15, max UDMA/133 Jun 8 10:31:40 Media kernel: ata6.00: HPA detected: current 2930275055, native 2930277168 Jun 8 10:31:40 Media kernel: ata6.00: ATA-8: ST31500341AS, SD1B, max UDMA/133 Jun 8 10:31:40 Media emhttp: Device inventory: Jun 8 10:31:40 Media emhttp: ST31000340AS_9QJ0PVWK (sda) 976762584 Jun 8 10:31:40 Media emhttp: ST31000333AS_9TE1KPE1 (sdc) 976762584 Jun 8 10:31:40 Media emhttp: ST31000340AS_9QJ0TBQ1 (sdd) 976761527 Jun 8 10:31:40 Media emhttp: ST31500341AS_9VS0B8ER (sde) 1465137527 Jun 8 10:31:40 Media emhttp: ST3500418AS_6VMG8Z7K (sdg) 488386584 Jun 8 10:31:40 Media emhttp: WDC_WD15EARS-00Z5B1_WD-WMAVU1362121 (sdf) 1465138584
  15. The "SIL3124 bit" I got from the OCZ drivers package is for a single IBIS drive. From the BRiT post we can see that this is indeed four SSD HDs in a 3.5" HD housing. It also uses a "Industry standard SAS cable" - aka SFF8087 to SFF8087. Then there is another bit of interesting info in the OCZ support forums in regards to updating the firmware on the REVO and IBIS drives: http://www.ocztechnologyforum.com/forum/showthread.php?86688-flashing-revo-and-Ibis-to-new-drive-firmware Pay attention to the user HBA adapters (especially the first two ) So in reality it looks like the IBIS is a REVO in a 3.5" HD form factor. They realized that with REVO they are limited to 1 or 2 drives per motherboard and in order to sell more they need to allow the users to add more in a similar way as users can add HDs. So the OCZ HDSL controller may be able to be used for 16 HDs (with the appropriate SFF8087 to 4 SATA forward cables) but will the current firmware looks for OCZ in the HD names?
  16. Not seen or heard of, but look at what is in the IBIS drivers:
     ; This INF file installs the Silicon Image Serial ATA Raid 5 driver
     ; for the SiI 3124 controller on systems using the AMD 64-bit
     ; processor.
     ;
     ; Copyright © 2006 by Silicon Image, Inc.
     ; All rights reserved
     and
     ; This INF file installs the Silicon Image Serial ATA Pseudo Processor
     ; device on systems running Windows NT 4.0, Windows 2000, and Windows XP.
     ;
     ; Copyright © 2004 - 2005 by Silicon Image, Inc.
     ; All rights reserved
  17. There are some quirks in your syslog:
     May 31 08:38:50 UR kernel: pci 0000:00:1f.0: quirk: region 0400-047f claimed by ICH6 ACPI/GPIO/TCO
     May 31 08:38:50 UR kernel: pci 0000:00:1f.0: quirk: region 0480-04bf claimed by ICH6 GPIO
     May 31 08:38:50 UR kernel: pci 0000:00:1f.0: ICH7 LPC Generic IO decode 1 PIO at 0800 (mask 000f)
     May 31 08:38:50 UR kernel: pci 0000:00:1f.0: ICH7 LPC Generic IO decode 2 PIO at 0290 (mask 000f)
     And then you have an onboard 6-port SATA controller, an onboard JMicron 2-port SATA & IDE controller and two additional SIL controllers, and that combination, together with the fact that the Hitachi HDs are SATA3, may not like each other that much (and in addition they share IRQs):
     May 31 08:38:50 UR kernel: sata_sil24 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
     May 31 08:38:50 UR kernel: sata_sil24 0000:04:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18
     May 31 08:38:50 UR kernel: ahci 0000:00:1f.2: PCI INT B -> GSI 19 (level, low) -> IRQ 19
     May 31 08:38:50 UR kernel: ahci 0000:05:00.0: PCI INT A -> GSI 19 (level, low) -> IRQ 19
     May 31 08:38:50 UR kernel: JMicron IDE 0000:05:00.1: PCI INT B -> GSI 16 (level, low) -> IRQ 16
     How to fix that:
     1. Make sure you have the latest BIOS for the motherboard. Then load the default values, save and reboot - watch out here for Gigabyte's famous HPA. Go back into the BIOS, make sure that the HDs are in AHCI mode and then disable any unused hardware features (serial and parallel ports, audio, firewire, floppy, IDE controller, etc., and if you have the choice you can disable a few of the USB ports too). Save this configuration.
     2. Make sure you flash the two SIL cards with the latest non-RAID BIOS, as they may ship with an older one that does not like SATA3.
     Then you can try again, and do not forget to update us with the status.
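     If you want to see that IRQ sharing for yourself, the check is nothing unRAID-specific, just standard Linux:

       # Show which devices sit on each interrupt line
       cat /proc/interrupts
       # Several disk controllers stacked on the same IRQ (as with 16 and 19 in
       # the syslog above) is worth knowing about, even if it is rarely fatal
       # on its own with a modern kernel.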
  18. You can try to move the two HDs from the onboard JMicron to the other channels of the SIL3132 cards and then disable the JMicron controller altogether (including the IDE).
  19. It was posted here before - http://lime-technology.com/forum/index.php?topic=7398.0
  20. Except that the study was done on HDs between 80 GB and 400 GB in size, and nowadays we use HDs with double that on a single platter. The wide temperature variations will play a bigger role. A more recent study (done by a Russian data recovery lab) concluded that the heads of the newer high-density WD HDs are very heat sensitive and that prolonged use above 45 °C will result in a very short lifetime. It is up to you whether to believe it or not. Couple that with the unRAID community's obsession with 5-in-3 cages and their rather limited airflow, plus the fact that WD drives are the only ones that do not report a temperature range in their SMART attributes, and you may understand the high "fatality" rate of the newer WD20EARS reported by many.
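     If you want to keep an eye on the temperatures yourself, SMART reports them directly (a sketch; smartctl is included with unRAID as far as I recall, and substitute your own device):

       # Current drive temperature (and, where the vendor records it, min/max)
       smartctl -A /dev/sdb | grep -i -E "temperature|airflow"
       # Staying under roughly 40C under load is the usual rule of thumb around here.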
  21. The card may work well if it fits in the connector, but claiming that it will fit in any x8/x16 PCIe connector is a bold statement to make. A pure x8 PCIe slot is rarely present on motherboards; in most cases the motherboard manufacturers use an x16 physical connector. If your motherboard has a single x16 PCIe connector then it is designed to provide up to 75W to the card. On the other hand, an x4/x8 connector has to provide only up to 25W - you have different power requirements and thus a chance of big and bulky capacitors sitting very close to the other, "wrong" side of the PCIe connector, where they will stand in the way of the UIO cards. Since you have the card, I suggest you take a nice frontal picture with a ruler beside it, to give potential buyers an idea of whether this card can fit in their motherboards.
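     Once a card like this is actually seated, you can check what the slot really gives it electrically (a sketch; the 01:00.0 address is only an example - take the card's real address from a plain lspci listing):

       # Compare the card's maximum link width with what the slot negotiated
       lspci -vv -s 01:00.0 | grep -E "LnkCap|LnkSta"
       # LnkCap shows what the card supports (e.g. x8); LnkSta shows the width
       # the link actually trained to (e.g. x4 in an x16 slot wired as x4).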
  22. Move the SIL3114 card to a different PCI slot if possible and it may change
  23. You made the mistake of buying a brand new, just-off-the-shelf hardware platform. I always wonder what people are thinking when they do this. See - there are usually no problems when you run mainstream software (such as Windows), as the hardware vendors and the software vendors work hand in hand (exchanging beta hardware and software) to ensure there are no major problems on the "launch" date when the new hardware hits the shelves. But I am not sure how many Linux developers were given free hardware to test beforehand. And then we have the many Linux "flavors", and Slackware, upon which unRAID is based, is not the most popular or widely used nowadays. And then unRAID development itself goes at an even slower pace, and it does not use the latest Slackware releases. So for someone to buy a brand new, shiny hardware platform and expect it to work under custom software like unRAID without any problems is like hoping for a miracle. Miracles sometimes happen, but most of the time they do not. So the early adopters are now in the "sweet" troubleshooting territory all by themselves. What can you do:
     1. Always disable all the unused hardware features on the motherboard - serial and parallel ports, audio, firewire, floppy, and the IDE controller if you do not use any of the older PATA drives - as you are going to free some resources this way.
     2. Check your motherboard vendor daily for BIOS updates, and if any are available apply them immediately and then do not forget about #1 above. And you may even complain to them that this hardware does not work with that software...
     3. Forget about 4.7 - go with the latest beta. If you just started with unRAID on 4.7 then IMHO you should not have any issues with 5.0b6a.
     If these do not help, your only choice is to try to run the latest full Slackware distro. And please do not forget to share your success (or pain) along the way. PS. Read your PM.
  24. Usually the beep codes are BIOS dependent. Not sure what "customization" SM has done to theirs. If the board has an AMI BIOS then search for AMI beep codes (or AWARD, PHOENIX, etc.), or look here - http://www.bioscentral.com/beepcodes/amibeep.htm# Chances are they have not been changed. If the board has UEFI - maybe this one will be usable - http://www.clunk.org.uk/forums/hardware/39557-uefi-bios-error-post-codes.html PS. I did not pay attention to your hardware configuration - you can try to start with a single RAM module and then expand from there if you need more. It looks kind of unbelievable to support 32GB of unbuffered memory... but if they claim it, it must be true (or you will have to play a lot with all the memory settings and voltages to make it work).
  25. Any way to get this test release? I was away and then got swamped at work. I have a couple of BR10i cards, one Dell SAS 6/iR flashed with older LSI firmware and another still with Dell's firmware (these are listed as supporting 3TB), and then one M1015 (SAS2008 based).