Peregrine

Members · 18 posts


  1. I would also add replacing (or supplementing) telnet with SSH, and using HTTPS for the management web page; otherwise passwords can be sniffed off the network with relative ease.
  2. When I begin to envision loading one of those out with the new Hitachi 1 TB drives, I grow faint at the prospect...
  3. <AOL> Me too!</AOL> Seriously, I do want to endorse this - this is probably the best way to use IDE for unRAID, as it gets around the "one-drive-per-channel-at-a-time" behavior of IDE. As to master/slave, I have always preferred to manually set that as appropriate, unless the mobo does not support it and requires cable select (like some Dells I have encountered). It is simply "one less thing to go wrong", as the cable position and drive settings will match with no detect-and-set required by the mobo BIOS. I would set your first two drives as master on separate cables, and your third (second data drive) as slave on the same cable as your first data drive.
  4. The Silverstone unit is a 4-in-3 converter. OP asked for 5-in-3.
  5. Quoting another poster from another thread: Six SATA ports on a mini ITX board - no cards needed.
  6. Hard question; I hadn't looked before I posted that suggestion. I would check the manufacturer's support site for documentation. That might show MB layout and what's connected to what.
  7. The reviewer's talk of 'cache' is simply the RAM in the faster NAS. He also notes that a faster processor* (when file sizes exceed 'cache'/RAM) helps as well. So, I would suggest the following to boost unRAID performance:
     1. PCIe bus (not from the NAS article, but from discussions here on the forums)
     2. SATA drives (ditto)
     3. RAM (this from the NAS article)
     4. Fast processor (ditto*)
     Note that you may never reach the theoretical throughput of Gig Ethernet due to performance issues of other components, but the PCIe bus will help with that. Hard drive performance could still be a bottleneck, but the review suggests that 8 MB of cache in a 7200 RPM SATA drive is the bang-for-the-buck mark; 16 MB cache or 10,000 RPM Raptors aren't worth the price, as the performance boost isn't high enough. If you want to put extra money into your hardware for performance, here are my recommendations:
     1. Get a MB with a PCIe bus. Make sure any onboard Gig Ethernet or SATA controller is on the PCIe bus, or get PCIe cards as necessary.
     2. Use SATA drives (obviously).
     3. Pump up the RAM according to your budget.
     4. I think a current Celeron (-D or -M) at 2+ GHz would be a sufficient processor*, based on the forums here. If you really want something better, the dual-core Pentium 805D is running about $65 at Newegg.
     *The faster processor in the NAS article was a 600 MHz Celeron, and processor speed was only relevant when file size exceeded 'cache'/RAM size. If you put 4 GB of RAM (the reviewer's 'dream' amount for a NAS) in your unRAID box, processor speed will likely only become significant if you are dealing with files larger than 4 GB. And with today's Celerons, you are well beyond the scale of what the reviewer experienced.
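     The gigabit-Ethernet ceiling mentioned above can be sanity-checked with a few lines of Python. The wire-speed figure follows from the definition of a gigabit; the 60 MB/s sustained drive rate is an illustrative assumption for a 7200 RPM SATA drive of that era, not a measurement:

     ```python
     # Back-of-envelope check on the gigabit Ethernet ceiling discussed above.
     GIGE_BITS_PER_SEC = 1_000_000_000

     # theoretical wire speed, ignoring protocol overhead
     wire_speed_MBps = GIGE_BITS_PER_SEC / 8 / 1_000_000   # 125.0 MB/s

     # assumed sustained throughput for a 7200 RPM SATA drive (illustrative)
     assumed_drive_MBps = 60

     # whichever component is slower sets the practical ceiling
     bottleneck_MBps = min(wire_speed_MBps, assumed_drive_MBps)
     print(wire_speed_MBps, bottleneck_MBps)
     ```

     In other words, even before protocol overhead, the drive (not the wire) is the likely limiter, which matches the review's bang-for-the-buck argument.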
  8. I don't have hard numbers to work from, so what follows is simply my understanding based on perusing the forums, etc. Those with better information than I can confirm or deny my comments. Disclaimer out of the way, here's my take: the minimum requirements are based on the fact that the parity calculations are performed by the CPU. If by "a few HDD", you mean 3-6, and you are willing to accept very slow performance when writing files or (especially) performing full parity calculations, then I would think it would work. For routine data reads you should be fine, although multiple simultaneous reads might also show some performance issues. If you decide to go with this configuration, I'm sure Tom would love to see a performance report. So would a lot of the rest of us. Let us know how it goes.
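     To make the CPU-side parity point concrete, here is a toy Python sketch of single (XOR) parity — a simplified illustration of the general idea, not unRAID's actual code:

     ```python
     # Toy illustration of XOR parity: the parity block is the XOR of every
     # data block, so any single missing block can be rebuilt from the rest.
     from functools import reduce
     from operator import xor

     data_blocks = [0b1011, 0b0110, 0b1100]   # pretend each int is a drive's block
     parity = reduce(xor, data_blocks)        # what the parity drive would store

     # simulate losing drive 1, then rebuild it from parity + the survivors
     survivors = [b for i, b in enumerate(data_blocks) if i != 1]
     rebuilt = reduce(xor, survivors, parity)
     assert rebuilt == data_blocks[1]         # recovered exactly
     ```

     Every write and every full parity check has to run this XOR across all drives' blocks, which is why a slow CPU shows up exactly in those operations.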
  9. Well, the CM Stacker cases used for the hardware builds on the website hold two power supplies, and I've seen the 1 kW power supplies myself. (Finding a UPS for one of those monsters is an expense in itself...) As to heat, case design impacts that greatly. For the case I am using, airflow across the 12 internal bays has been deliberately engineered; the 5-in-3 and 3-in-2 mounts have fans included. And the only time either of these issues would be of concern is, as noted, at boot, parity check, or drive rebuild. With multiple arrays, however, the power and heat load is spread out, as only the drives in a given array are active for a parity check or drive rebuild; only at boot would all drives spin up. These are system design issues, however. Desired performance parameters setting a limit on the number of drives per array is good design on Tom's part; if multiple arrays were possible, the performance issues could be reduced even further (at the cost of data space lost to additional parity drives), and we who build our own systems could make informed decisions about how many drives to put in a single case, how many arrays to configure them in, etc. The unRAID concept has a lot of potential, and I would very much like to see that potential unleashed.
  10. Ah. That makes a *lot* of sense. (Need to go edit another post I just made...) So, for the kind of megabuild I'm envisioning (although I plan to use the external drives for backup to disk, so I'd be looking at 23+/- drives in the case for data and parity), maybe two or three separate arrays would be wise...
  11. That will be inherent in the build. I am not going to have the same bzimage / bzroot format; the USB drive will have an actual Linux filesystem. My releases will be an image file that you will need to "DD" onto your drive. I think I may do two releases: one that has build utilities and extra libraries (for intermediate users to install their own stuff), and one for typical end users (which will fit on smaller chips). Primarily, I'm going to be releasing the same platform I build on, so there may be extra stuff lying around in some of the earlier builds (log files and whatnot) out of the box. I hope this is what you meant when you said "open". I assumed this meant easier to make persistent changes, etc. The way I'm trying to accomplish this is by mounting the USB drive as nosync, and then running sync once an hour (configurable), on configuration changes, on UPS messages, on user intervention, and other trigger events. Off topic, but I just remembered: I'm going to put a firewall on there as well, but it's going to be completely disabled by default. That is exactly what I meant; excellent! Moving to unRAID 4.x as the base is definitely the right move; the 2.6.x kernel will enable a much broader hardware selection. Now, can you do something about that 14-drive/array limitation... Redacted due to an excellent explanation elsewhere on the boards about why there is a 14-drive limitation. How about running multiple unRAID arrays on a single build?
  12. Has Tom mentioned why there is a 14-drive limit? Or why he came up with 14? 14 is an odd number in the PC world. With the config I mentioned above, the 2 x JMicron onboard SATA ports remain unused. Also, if you use the well-regarded CM Stacker case with 4 of the standard CM 4-in-3 modules, the case nicely holds 16 drives. 16 is a nice power of 2, so why does the unRAID software limit to 14 instead of 16 drives? This probably belongs in the suggestions forum, but wouldn't it be great if a future version allowed:
     - Perhaps up to 32 drives.
     - An option for a 2nd parity drive for large arrays, such that any 2-drive simultaneous failure could be recovered (like RAID 6).
     Given unRAID's energy-efficient ability to spin down all drives and only spin up the drive with the files being accessed, this power efficiency should make it viable to have much larger drive arrays. Since I have brainstormed a way to up my above 28-drive config to 32 drives (23 internal and 9 eSATA), I'd be all for that.
  13. Moving to 2.6 kernel/unRAID 4.x is good! How about a more "open" build so that we can add whatever stuff we want to our own systems? (Like P2P clients...)
  14. I am using the same MB with this case: http://www.newegg.com/Product/Product.aspx?Item=N82E16811112062 12 internal HD bays, plus 7 external 5-1/4 bays; a single 5-in-3 mount and a pair of 3-in-2 mounts and I could fit (2+1=3, no carry...) 23 HD in the case, plus one external via the eSATA port on the MB. 7 internal SATA ports on the MB, plus 3 PCI slots @ 4 SATA ports per controller yields (3*4=12, +7...) 19 internal SATA ports plus one eSATA. Hm. I'd need a PCI-e SATA card to max out the case. 23 internal and one eSATA - 24 drives. Hm. Another PCI-e controller, and one of these (http://www.cooldrives.com/intoexes4pop.html) plus a 4 bay external case (http://www.newegg.com/Product/Product.aspx?Item=N82E16817994037) - 28 drives. 14 drives is starting to seem like a low limit...
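      The bay and port arithmetic in that post can be double-checked with a few lines of Python, using the numbers taken straight from the text:

      ```python
      # Re-running the drive-count arithmetic from the post above.
      internal_bays = 12                 # internal 3.5" HD bays in the case
      five_in_three = 5                  # one 5-in-3 mount (fills 3 external bays)
      three_in_two_pair = 2 * 3          # two 3-in-2 mounts (fill 4 external bays)
      drives_in_case = internal_bays + five_in_three + three_in_two_pair  # 23

      onboard_sata = 7                   # internal SATA ports on the MB
      pci_ports = 3 * 4                  # three PCI slots, 4 ports per controller
      internal_ports = onboard_sata + pci_ports  # 19 - short of 23 bays,
                                                 # hence the need for a PCIe card
      print(drives_in_case, internal_ports)
      ```

      So the case outruns the controllers by four ports, which is exactly why the post reaches for a PCIe SATA card before the external enclosure even enters the picture.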