ICH8R/JMicron 8 x onboard SATA 300 + Attansic L1 Gigabit LAN Success (4.0-beta4)


Koolkiwi


EDIT (25 Mar): Changed thread name from "ASUS P5B-E 6 x SATA - Attansic L1 networking support for 4.0" to current name, based on progress of this thread.

 

Hi,

 

I'm an unRAID noob who is looking to use my existing P5B-E motherboard as the basis for a SATA unRAID system.

 

I grabbed unRAID 4.0 beta 2, which I see is based on Slackware with kernel 2.6.20.

 

The key feature of the P5B-E motherboard for unRAID is its 6 onboard ICH8R SATA ports, plus PCI-e connected gigabit LAN (for full throughput).  If this works, then this well-respected motherboard alone (which I already own), with no other I/O cards needed, would give me up to 6 SATA drives; presumably later adding up to 2 Promise 4x SATA cards would eventually take me to a possible 14-drive unRAID system.

 

Booting up looks promising, as my (currently) only connected SATA drive is automatically mounted as /dev/sda (sda1, sda2, sda3 for the drive's existing partitions). The flash drive comes up as sdg/sdg1, which I guess indicates sdb...sdf would be the remaining motherboard SATA devices if drives were attached (apologies, I'm also a Linux noob).

 

So, the respected P5B-E motherboard's ICH8R onboard 6 x SATA support is looking positive with the new unRAID 4.0 :)

 

OK, now the problem. The P5B-E uses Attansic L1 Gigabit onboard LAN, which is currently not included in the unRAID 4.0 beta build.

 

Researching this, I discovered that there is kernel support for the Attansic L1 as of kernel 2.6.20 (as atl1).

 

The latest 2.6.20 kernel driver can be found here (driver 2.0.6.1):

http://www.hogchain.net/attansic/attansic.html

 

Only trouble is that, being a Linux noob, I have no idea how to install this onto the unRAID flash drive to test it out.

 

Reading these forums, it seems I need support for the atl1.ko driver, so that I could presumably then try:

modprobe atl1

 

Does this require me to wait until Tom can include atl1.ko support in a future beta?
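
In the meantime, for anyone testing a build that does include the module, the checks would look something like this (a sketch; the exact module path is my assumption):

ls /lib/modules/$(uname -r)/kernel/drivers/net/    # is an atl1 module present? (hypothetical path)
modprobe atl1                                      # load the driver
dmesg | tail                                       # confirm the NIC was detected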

 

TIA.

 

Link to comment

I downloaded & built that driver - now in 4.0-beta4.  Please post back if it works.

Thanks for the quick response!

 

I have downloaded and tested beta4 with the P5B-E motherboard, and the driver appears to be working fine.

 

So, support for the P5B-E, with its 6x ICH8R onboard SATA 300 ports and onboard PCI-e Gigabit LAN, is looking promising! :)

 

Now my last(?) remaining problem...  When I boot, the network config cannot be found, because the flash drive is not successfully mounting as /boot.

 

Here is my troubleshooting log:

---

On boot I get error:

/boot/config/network.cfg: No such file or directory

 

The error appears to be caused by an unsuccessful mount of the flash drive during boot-up.

 

    * Manually mount Flash Drive as /boot (just to prove it works)

 

mount /dev/sdf1 /boot
ls /boot

 

Successful, can now list Flash Drive contents as /boot!

 

    * Check Attansic L1 Ethernet interface (atl1) is now valid

 

ifconfig eth0

 

Interface looks good and is UP, but no IP address!

 

    * Manually assign IP Address

 

ifconfig eth0 down
ifconfig eth0 192.168.157.22 netmask 255.255.255.0 up
ifconfig eth0

 

    * Test connectivity by pinging the gateway

 

ping 192.168.157.1

 

Successful pings, yippee!

 

    * Assign Default Gateway

 

route -n
route add default gw 192.168.157.1
route -n

 

    * Start the management utility

 

/usr/local/sbin/emhttp &

 

Successful! Now I'm able to remotely connect to the management web page, and all looks good with my SATA drive selectable in the drop-down boxes! :)

 

So, all appears to be working now and ready to go (after the above manual start-up process).

 

Any thoughts on why my Flash Drive will not mount as /boot on startup?  I see a few references to this in the forum, but haven't quite figured this problem out yet.

 

Being a complete Linux noob, can someone point me to how to read the boot-up system log (e.g. how do you stop it scrolling off the screen)? I suspect this is the place to start looking for the problem.
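
(I later figured out the usual commands for this - assuming the standard Slackware syslog location:

tail -n 50 /var/log/syslog    # show the last 50 lines
less /var/log/syslog          # page through it; press q to quit
dmesg | less                  # kernel messages from boot)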

 

TIA.

 

Link to comment

Plug your Flash back into your PC.  When it shows up in My Computer, right-click and select Properties.  Make sure the volume label is set to "UNRAID" (without the quotes).  Right click again, select Eject, plug back into server and reboot.

 

The way the Flash is distinguished from other storage devices is by volume label being set to UNRAID.
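
For anyone without a Windows PC handy, the same check can be done from Linux with mtools - a sketch, assuming the flash shows up as /dev/sdf1 as above:

mlabel -i /dev/sdf1 -s ::      # show the current volume label
mlabel -i /dev/sdf1 ::UNRAID   # set the label to UNRAID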

Link to comment


You're a star! I'm now up and running. :)

 

<red faced> I can't believe I missed that one! You know, last night I even went as far as formatting an old IDE hard drive with the boot image on it, to see if it was a USB BIOS compatibility issue (it sure booted fast, but had the same problem).  I had even tried changing the volume name... to 'BOOT'.

I did see the failure to mount UNRAID in the syslog (once I figured out the 'tail' and 'less' Linux commands), but didn't click on that clue, as I just assumed it was looking for the actual unRAID array, which I hadn't set up yet (I am a total noob to all of this, remember).

 

OK, the next step to further test support for the P5B-E motherboard / ICH8R (which actually has a total of 8 onboard SATA 300 ports - 2 are on the JMicron controller, 1 of which is external) is that I will try plugging my current only SATA drive into the other SATA connectors and reboot to check they are detected (I suspect that maybe only the 6 ICH8R ports will work?).

 

I will post back my findings.

 

I believe the 4.0-beta4 addition of the Attansic L1 Gigabit LAN driver will be a big feature addition, as there are a number of newer motherboards that use this controller.  Based on it now being accepted into the 2.6.20 kernel (and what I have read), it appears to be stable and well-performing.

 

The next question from unRAID users will likely be: what is the best-value motherboard out there that supports the ICH8R with 6 or 8 SATA 300 ports and a supported onboard Gigabit LAN?  The Asus P5B-E is a lot cheaper than the Deluxe version and is a well-respected mobo, but there may be others that are cheaper with the same SATA port count / Gigabit features.

 

Link to comment


 

Woohoo... Looking good!

 

Firstly, when I tried connecting to the other ICH8R connectors, all appeared to work, but the 2x JMicron SATA ports would not be detected. :(

 

Then I found the following info over at:

http://linux-ata.org/driver-status.html

 

Intel ICH "IDE" mode

Driver name: ata_piix

 

Summary: No TCQ/NCQ. Looks like a PATA controller, but with a few added, non-standard SATA port controls. Hardware does not support hotplug. "Warmplug" support is possible.

 

Update: ICH6/7/8 include support for addressing the SATA PHY registers. This is not yet supported in Linux, mainly because some BIOS do not fill in the necessary (PCI BAR) resources.

 

Update: Boot-time, probe-time issues continue to persist in some cases, related to the "PCS" register. The ata_piix driver in 2.6.18 and later provides a "force_pcs" module option to help users deal with this (values: 0=default, 1=ignore PCS, 2=honor PCS). Play around with 'force_pcs' if you have device detection problems.
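
(If you hit those detection problems, a module option like force_pcs is passed when the driver loads - a sketch, assuming ata_piix is built as a loadable module; if it is compiled into the kernel, the equivalent boot parameter would be ata_piix.force_pcs=1 in syslinux.cfg:

modprobe ata_piix force_pcs=1    # 1 = ignore PCS)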

--------------------------------------------------------------------------------

AHCI (newer Intel ICH, ULi, others)

 

Driver name: ahci

 

Summary: Full NCQ support, full SATA control including hotplug and PM.

 

Note1: AHCI specification is completely open.

 

Note2: ATI, Intel, JMicron, NVIDIA, SiS, ULi and VIA are currently known to have deployed AHCI in their chipsets.

 

Hopefully others will follow. AHCI is a nice, open design.

 

So, I changed the P5B-E BIOS setup to set the JMicron controller to AHCI mode instead of the default IDE mode.

 

Rebooted, and woohoo... we have the JMicron SATA ports detected. :)

 

I also noted in the BIOS that the 6 ICH8R SATA ports also allow selecting AHCI mode instead of the default IDE mode.  Based on the above information, it appears that the best option for unRAID 4.0 would be to change both controllers in the BIOS to AHCI mode instead of the default IDE mode.
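
A quick way to confirm which driver has claimed the controllers after the BIOS change (a sketch, assuming lspci is included in the build):

lspci | grep -i sata     # list the SATA controllers
dmesg | grep -i ahci     # ahci (rather than ata_piix) should claim them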

 

So it appears we now have support for 8 x onboard SATA 300 ports on ICH8R + JMicron motherboards (like the Asus P5B-E), albeit one of the ports is external (so 7 internal + 1 external).

 

Add in the new Attansic L1 Gigabit onboard LAN support, and we are rockin' with the new 4.0-beta4 build. :)

 

Next step will be to build an actual unRAID system (need to buy another SATA drive first), and do some performance tests.

 

Link to comment
  • 2 weeks later...

 

So it appears we now have support for 8 x onboard SATA 300 ports on ICH8R + JMicron motherboards (like the Asus P5B-E), albeit one of the ports is external (so 7 internal + 1 external).

 

 

Awesome news!

 

Now if we can get something like the PROMISE SUPERTRAK EX4350 PCI-E x4 SATA and/or SUPERTRAK EX8350 running, then we should have a nice-performing 12-14 drive rig with just one add-on card (albeit a relatively expensive one).

 

BTW, is the board 6 internal + 1 external, or 7 internal + 1 external? I can't seem to find (in the picture of the MB) the extra internal SATA port.

 

Thanks for the post and redirect.

 

Edit:

 

How about this as a 12 port contender:

 

ASUS L1N64-SLI

 

http://www.newegg.com/Product/Product.asp?Item=N82E16813131146

 

 

Link to comment

BTW, is the board 6 internal + 1 external, or 7 internal + 1 external? I can't seem to find (in the picture of the MB) the extra internal SATA port.

 

7 internal + 1 external.  The JMicron internal SATA300 connector is the black connector in the corner of the board by the PCIe-x1 slot.

 

How about this as a 12 port contender:

 

ASUS L1N64-SLI

 

http://www.newegg.com/Product/Product.asp?Item=N82E16813131146

 

Interesting.  Although rather pricey and perhaps overkill with dual CPU sockets and quad PCIe x16 slots!

 

The question is whether there would be any useful performance gain over just using a couple of relatively cheap SATA300TX4 PCI cards to extend the ICH8R southbridge onboard 6 x SATA300 to 14 x SATA300 capability.

 

For anyone wanting to build a ~3TB Media Server, the onboard options of the P5B-E teamed up with 7 x 500GB drives looks like a good config, with no additional controller cards required (but with the later option of adding cards to take you up to the max 14 drive capability).
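
(For the maths: one of the 7 x 500GB drives becomes parity, leaving 6 x 500GB = 3TB of usable space.)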

 

Link to comment

For anyone wanting to build a ~3TB Media Server, the onboard options of the P5B-E teamed up with 7 x 500GB drives looks like a good config, with no additional controller cards required (but with the later option of adding cards to take you up to the max 14 drive capability).

 

Or beyond, if you have the case capacity. (This, of course, would require the software to be extended to allow larger drive quantities.)

Link to comment

Interesting. Although rather pricey and perhaps overkill with dual CPU sockets and quad PCIe x16 slots!

 

It's expensive, but about the same as I paid for my original build of the near-extinct (recommended) Intel MB and 3 SATA300TX4 PCI cards. I was one of the early adopters of the SATA-only 12-drive build. What I've found is that my system writes are nothing to write home about, and the syncs are unbearable at 10-12 hours - I absolutely saturated the PCI bus.

 

So I think PCI-e add-ons are the only answer at this point. I believe Tom doesn't see the utility of supporting a lot of these RAID-centric cards, because the only improvements will be syncs and continuous writes. I hope the move to the 2.6 kernel, and perhaps a different implementation of drive sync, will bring the much-needed performance boost to my system.

 

Link to comment

Now that all is up and running with V4.0-beta9, I finally did some casual parity and performance testing last night.

 

So far I'm very happy with performance.

 

I currently have only 2 x Seagate 500GB SATA drives, so I tested as 2 drives with no parity, and then as 1 data drive with parity.

 

a. Without parity, writing a 10GB file across the network was achieving 40% - 41% GigE utilisation (i.e. ~400Mbps).  The 10GB copy took about 4 minutes (rough arithmetic on these numbers follows point e below). :)

 

b. Reconfigured for parity + disk1. Parity build took around 137 minutes, with web console reported speed of ~59MBps.

 

c. With the parity built, writing the same 10GB file took about 8 mins, as the network utilisation would cycle up and down (which I understand from Tom's post elsewhere is normal behaviour due to the LAN being faster than the data + parity writing).

 

d. Reading the 10GB file back from the parity + disk1 setup was achieving a constant 25% GigE utilisation (i.e. ~250Mbps).

 

e. Removed the data drive and reinitialised it to make it appear as unformatted, then performed a parity recovery.  I didn't time this as I went to bed, but based on the "parity last checked at 2:33am" message this morning, it appears it took around 2.5 - 3 hours to rebuild the 500GB drive from the parity drive.

 

PS: After I started the parity recovery, I also tried watching a high bitrate HD movie for about 15 minutes (played perfectly), and also left it copying about 17GB of PAR'd data files to the server's disk1 while it was simultaneously recovering that drive (why not throw everything at it). :)

So this activity is all included in the total 2.5 - 3 hours 500GB recovery time.
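
Rough arithmetic on points a and e (my numbers): 40% of GigE ≈ 400Mbps ≈ 50MB/s, and 10,240MB / 50MB/s ≈ 205 seconds, about 3.5 minutes - in the right ballpark for the ~4 minutes observed. Likewise, rebuilding 500GB in ~2.75 hours works out to roughly 500,000MB / 9,900s ≈ 50MB/s average, even with the playback and copying load added.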

 

So far I'm pretty happy with this performance! Especially when an array consistency check on my LSI MegaRAID IDE RAID5 array takes about 3 days!

 

NB: The above was also all done across a network with other simultaneous traffic (but I don't expect that would change much).

 

Link to comment


 

In the following post I did an even more difficult performance check, while rebuilding one of the 7 data disks in my PATA-based array.

 

http://lime-technology.com/forum/index.php?topic=578.msg3754#msg3754

 

Below is an image of the "top" command showing how the unRAID server was doing in my test... It looks just under 70 percent idle while reconstructing a replaced disk in my 8-disk array AND while serving up 4 separate DVD ISO images (stored on the disk being reconstructed) to various PCs and network-based media players in my home.

 

Before I started serving all the ISO images, the reconstruction was going at just over 18,000KB/second.  Serving the 4 DVD ISO images has slowed the reconstruction rate to 10,000KB/second.  I'm not sure I'd recommend slowing the reconstruction when you have an actual failure, but it's nice to know unRAID 4.0-beta5 can handle it with ease, even though it was reading 7 disks to reconstruct the 500 gig disk while streaming the 4 different DVD ISO images from it.

 

As I said in my prior post, I am impressed.

 

Joe L.

 

[Image: "top" output from the unRAID server during the reconstruction test]

 

Link to comment

b. Reconfigured for parity + disk1. Parity build took around 137 minutes, with web console reported speed of ~59MBps.

 

That is an impressive number. I averaged 11MBps on a parity build for 12 drives (11 + 1). I think I can live with 59 :). I wonder if it's the same for something like 12 drives?
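
For scale (my arithmetic): at 59MB/s a 500GB parity build is roughly 500,000MB / 59MB/s ≈ 8,500 seconds ≈ 140 minutes, matching the ~137 reported above, while 11MB/s works out to around 12.5 hours.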

Link to comment
  • 1 month later...

Nice find, Koolkiwi!

 

Based on your positive review, I just bought myself a P5B-E Plus too ;) The rig has been built with the following parts:

 

CPU : Pentium-D 925

RAM : 1GB Corsair Value Select DDR2

PSU : Cooler Master RS-600-ASAA

Case: Enlight 8902

Drive bays: Enlight 8721-A02 5-in-3 SATA, and 3 Enlight removable PATA drive trays.

PCI Cards: S3 Savage PCI VGA card, Promise Ultra100

USB Flash: TwinMOS 128MB

 

As you can see, I've got a couple of old parts that I'm trying to reuse in this rig (the case, the drive trays, the USB stick, the old-ass VGA and the Promise PATA card). The thing assembled with no problems, and as soon as I could boot I updated the BIOSes of the Promise Ultra100 and the Asus P5B-E Plus to the latest versions. The TwinMOS 128MB was formatted with syslinux, unRAID 4b10 was copied over, and it boots just fine. Everything's fine and dandy up to here.
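
For anyone repeating the flash prep from a Linux box, the rough equivalent would be (a sketch; the device name and release directory are my assumptions):

syslinux /dev/sdX1       # install the bootloader on the FAT partition
mount /dev/sdX1 /mnt
cp -r unraid/* /mnt/     # copy the unRAID release files over
umount /mnt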

 

Now the problems...

 

1). I'm having problems getting my PATA drives detected on unRAID. I connected one drive to the onboard PATA connector (JMicron, I assume) and another drive to the Promise Ultra100. Both drives show up in the BIOS and card initialization (JMicron and Promise Ultra100), yet unRAID can find neither of them.

 

2). I checked the JMicron documentation and it claims to support SATA Port Multipliers. Nice. So I hooked up a Stardom ST6600 (with Sil-3726 Port Multiplier chipset) to the eSATA port; it boots, and only one drive out of the 5 connected lights up. Drats.

 

So, I'm hoping for some help and support here..

 

Koolkiwi: Do you experience the same problems? Have you tried the PATA / SATA-PM features?

 

Tom: Are the above-mentioned PATA controllers supported in unRAID 4beta10 (Promise Ultra100 and JMicron in PATA mode)? Does JMicron correctly support SATA-PM in Linux 2.6?

 

Thanks!

 

YS

Link to comment

1). I'm having problems getting my PATA drives detected on unRAID. I connected one drive to the onboard PATA connector (JMicron, I assume) and another drive to the Promise Ultra100. Both drives show up in the BIOS and card initialization (JMicron and Promise Ultra100), yet unRAID can find neither of them.

 

I only used the PATA connector once, when I tried an experiment of booting unRAID from an old hard drive instead of from a flash drive (works fine, but not supported for license keys, and not really a useful thing to do).

 

As I was interested in best performance, my build was intended to be SATA only.

 

2). I checked the JMicron documentation and it claims to support SATA Port Multipliers. Nice. So I hooked up a Stardom ST6600 (with Sil-3726 Port Multiplier chipset) to the eSATA port; it boots, and only one drive out of the 5 connected lights up. Drats.

 

So, I'm hoping for some help and support here..

 

Koolkiwi: Do you experience the same problems? Have you tried the PATA / SATA-PM features?

 

Tom: Are the above-mentioned PATA controllers supported in unRAID 4beta10 (Promise Ultra100 and JMicron in PATA mode)? Does JMicron correctly support SATA-PM in Linux 2.6?

 

Thanks!

 

YS

 

I have not tried the port multiplier approach.  Given the relatively low price of the Promise TX4 SATA300 cards, my performance focus, and the 14-drive maximum supported by my P5B-E + 2x SATA300 TX4 cards, I didn't see any need to explore the PM approach.

 

Will be interested to follow progress on this though, to see whether the PM approach eventually works without greatly impacting performance.

 

Link to comment

Hm.. I've just realized (d'oh) that what I bought is different from what you have. My motherboard is the P5B-E Plus; it has a tag line of '100% whatchamacallit capacitors' or some such. Supposedly it makes the board more reliable. That's all fine and dandy, but apparently it also uses a different network chipset: Marvell Yukon. Anyhow, I'm not complaining... the system is up and running unRAID 4.0, and I've migrated loads of old stuff to it for storage.

 

About the speed, I'm getting the 42MB/s - 55MB/s range on parity syncs and parity checks. Adding a PATA drive doesn't drop the speed by much (maybe 1-2MB/sec), which I don't consider that crucial due to the design of unRAID (bless you, Tom).

 

As for the Port Multiplier, I got a very nice external case (5 bay) that would be useful for drive expansion, and it only needs a single power cable and a single eSATA cable. The PM runs on SATA-II (300MB/sec bandwidth); divided across 5 drives, each of them gets 60MB/sec... which seems more than adequate for the current crop of drives to me.

 

Btw, here are some pertinent numbers that I've used for design consideration (there are some subtle additions to each that I've omitted, such as PCI-e and FireWire seeming to have QoS schemes):

 

PCI: 127.2MB/sec

PCI-e 1x: 250MB/sec

USB 2.0: 60MB/sec

FireWire 400: 50MB/sec

FireWire 800: 100MB/sec

PATA: 133MB/sec

SATA-I: 150MB/sec

SATA-II: 300MB/sec

 

Link to comment

Hm.. I've just realized (d'oh) that what I bought is different from what you have. My motherboard is the P5B-E Plus; it has a tag line of '100% whatchamacallit capacitors' or some such. Supposedly it makes the board more reliable. That's all fine and dandy, but apparently it also uses a different network chipset: Marvell Yukon. Anyhow, I'm not complaining... the system is up and running unRAID 4.0, and I've migrated loads of old stuff to it for storage.

 

Yes, the P5B-E Plus was quite a bit more expensive than the P5B-E where I live, and as you note, does use a different LAN controller. Glad to hear the controller is supported, and all is working with unRAID 4.0.  I guess you just have a more reliable system for your money. ;)

 

As for the Port Multiplier, I got a very nice external case (5 bay) that would be useful for drive expansion, and it only needs a single power cable and a single eSATA cable. The PM runs on SATA-II (300MB/sec bandwidth); divided across 5 drives, each of them gets 60MB/sec... which seems more than adequate for the current crop of drives to me.

 

Good point about the math, although these numbers are theoretical.  The ultimate test would be in practice: whether a single controller / 5-drive PM setup would actually show a noticeable degradation of speed.  Although, as reading generally only involves the drive with the data, and writing only involves 2 drives (data + parity), there may indeed be no impact in normal use.  The difference might only show up during a parity build or drive recovery, when all drives are being accessed concurrently.

 

Btw, here are some pertinent numbers that I've used for design consideration (there are some subtle additions to each that I've omitted, such as PCI-e and FireWire seeming to have QoS schemes):

 

PCI: 127.2MB/sec

PCI-e 1x: 250MB/sec

USB 2.0: 60MB/sec

FireWire 400: 50MB/sec

FireWire 800: 100MB/sec

PATA: 133MB/sec

SATA-I: 150MB/sec

SATA-II: 300MB/sec

 

Nice list of theoretical throughput numbers.  Interesting, though, as it reminds me of when I tested a USB2 / FireWire external drive a few years back and noticed that the FireWire throughput was better than the USB2 throughput, despite FireWire being technically slower. Perhaps this just pointed to FireWire having lower overheads than USB2 for large file / external drive usage?

 

Link to comment

Yes, the P5B-E Plus was quite a bit more expensive than the P5B-E where I live, and as you note, does use a different LAN controller. Glad to hear the controller is supported, and all is working with unRAID 4.0.  I guess you just have a more reliable system for your money. ;)

 

Well, that's the price I've paid for being out of touch with the motherboard market for quite a while... I've been purchasing barebones systems lately (SFF and servers) and didn't take note that ASUS has all these Deluxe, Plus, Platinum, Vista version shenanigans in the market :)

 

Good point about the math, although these numbers are theoretical.  The ultimate test would be in practice: whether a single controller / 5-drive PM setup would actually show a noticeable degradation of speed.  Although, as reading generally only involves the drive with the data, and writing only involves 2 drives (data + parity), there may indeed be no impact in normal use.  The difference might only show up during a parity build or drive recovery, when all drives are being accessed concurrently.

 

Ah yeah, good point. unRAID will not hit the bottlenecks as often as those striped RAID designs, which make full use of all the drives during every read/write. PM support will be a big feature for when unRAID supports multiple arrays. It'll become very easy to add sets of 5 drives to an unRAID server... and even to transport arrays from one unRAID to another(!)

 

Nice list of theoretical throughput numbers.  Interesting, though, as it reminds me of when I tested a USB2 / FireWire external drive a few years back and noticed that the FireWire throughput was better than the USB2 throughput, despite FireWire being technically slower. Perhaps this just pointed to FireWire having lower overheads than USB2 for large file / external drive usage?

 

It may be due to the QoS scheme used... and in USB 2.0's case, the poor implementation of USB 2.0 amongst low-end chipset makers (Oxford, VIA), at least in their early iterations.

Link to comment

Hi,

 

How much power does your system use in different situations (idle, writing, reading, ...)?

 

B.R.

 

/Fredrik

 

This is a good question. 

 

I believe the biggest power-drawing devices in any PC are:

- CPU

- Northbridge

- Memory

- Video card

- Hard Drives

 

For unRAID:

- The video card can be removed (or certainly a low-performance / lower-power PCI card can be used).

- Hard drives are spun down when not being used (what is the spun-down power drain? I don't know, but based on spun-down drives returning to room temperature, I suspect not much power is used).
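
On the spun-down question, a drive's power state can at least be queried - a sketch, assuming hdparm is included in the build:

hdparm -C /dev/sda    # reports "standby" when spun down, "active/idle" otherwise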

 

Therefore, the other controllable factor relating to power consumption is the choice of CPU / memory speed / clock speed.

 

I have chosen a Celeron CPU.  Clearly a Prescott-core Pentium 4 would be a bad choice in terms of power consumption.

 

In the interests of minimising power consumption, it would therefore seem useful to know the optimum CPU / unRAID performance trade-off, i.e. the point where installing a faster / higher power-consuming CPU has greatly diminished performance gains.

 

 

Link to comment
  • 2 weeks later...

1). I'm having problems getting my PATA drives detected on unRAID. I connected one drive to the onboard PATA connector (JMicron, I assume) and another drive to the Promise Ultra100. Both drives show up in the BIOS and card initialization (JMicron and Promise Ultra100), yet unRAID can find neither of them.

 

Did you ever get this problem resolved?  I am having the same issue - neither of my IDE drives shows up in unRAID. I would hate to have to return this motherboard and go back to my old P5PE-VM, but I need those IDE drives to work.

Link to comment
