
StevenD

Community Developer
  • Posts: 1,608
  • Joined
  • Days Won: 1

Everything posted by StevenD

  1. My guess is your Areca card... try using the parity disc on the SAS2LP as well. Why use this RAID card anyway?

     Well, just for grins, I moved my parity to a 6Gbps on-board SATA port and I'm not very happy with the results. I lost 20MB/s on my array write speeds. I'm down to 40MB/s again. I thought I would be installing my M1015 this weekend, but my existing cables are about an inch too short! Looks like it will be Friday before I'm able to migrate to the M1015s. In the meantime, I've been able to pull out one 1.5TB drive and I'm working on the other two this week. I replaced one with a 4TB drive and the other two are just being distributed to other drives in the array for now. If the new config doesn't give me at least 60MB/s sustained array writes, I will be re-installing my Areca. Or getting a new Areca that supports 6Gbps.
  2. It was posted that Tom is away for a few days dealing with his father's estate. It's also a holiday weekend!
  3. Thanks! This worked perfectly on my two new cards!
  4. I'm splitting up my raid 0 parity. I'm pretty sure those drives are good. thanks for the warning.
  5. I have a known working drive. I know, I know... Anyhow, is there a "quick" preclear so I can add it to the array quickly?
  6. Hey Steven, I just went back and looked at your results, and I would agree that your parity drive array is probably not the problem. Most likely those 1.5TB drives are the main culprit. Having a mix of drive sizes impacts parity check/rebuild speed in multiple ways. Primarily, the slowest drive sets the pace for the whole array. Additionally, you get multiple slow-downs as each drive reaches the inner cylinders at different points during the parity check: so you would have slowdowns approaching 1.5TB, 2TB, and 4TB. This doesn't necessarily affect read or write performance unless you're accessing data on one of those drives. Unless your parity check/rebuild times are unfathomably long, upgrades may not be cost effective. Anyway, I'm interested to hear how your upgrades go. -Paul

     My parity checks are longer than I would like them to be (~15 hours). I see some folks with ~8 hour checks with a 4TB parity. Besides the speed of a RAID0, one of the reasons I went with the Areca was the ability to configure up to an 8TB parity without buying new drives. I can play around with it and see what my best solution will be. I've been wanting to run unRAID on top of ESXi, and the two additional controllers will allow me to do that. Right now, I'm using the motherboard controllers in my array, so I don't have any controllers left over for ESXi after passing them through. I plan on putting in a new dual-port NIC that I will pass through to unRAID and experiment with NIC teaming. My unRAID server hosts all the media for the TVs in my house, as well as serving several family members outside my house via Plex. My HTPCs are the only sources on each of my TVs, so I need to maximize performance as much as possible.
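     Paul's staged-slowdown description can be put into rough numbers. A back-of-envelope sketch for a 4TB parity check over mixed 1.5TB/2TB/4TB drives (the per-region speeds below are hypothetical, picked only to show the shape of the calculation):

     ```shell
     # The check proceeds across the parity drive; in each region the pace
     # is set by the slowest drive still participating (hypothetical speeds):
     #   0-1.5TB : all drives active, 1.5TB inner cylinders drag to ~60 MB/s
     #   1.5-2TB : 1.5TB drives done, 2TB drives pace at ~90 MB/s
     #   2-4TB   : only the 4TB parity region remains at ~110 MB/s
     # Duration = sum(region_MB / MB-per-sec), converted to hours.
     awk 'BEGIN {
         t = 1500000/60 + 500000/90 + 2000000/110
         printf "~%.1f hours\n", t/3600
     }'
     ```

     With these made-up speeds it comes out to roughly 13.5 hours, the same ballpark as the ~15-hour checks, which is why shedding the 1.5TB drives helps more than tuning the parity device alone.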
  7. Well, this thread has inspired me to do some upgrades. The sub-100MB/s speeds are just killing me. I have two M1015s coming on Saturday, as well as new SSDs for cache. I also plan on finally getting rid of the 1.5TB drives. At least for now, I'm going to take the Areca card out to see if that's the bottleneck... I find it hard to believe it is.
  8. There is a way! Your scheduled parity check is just a cron job. The crontab format is:

     # minute hour mday month wday command

     So if you used the following values:

     0 0 1-7 * * test $(date +%u) -eq 1 && /root/mdcmd check CORRECT

     Then the job would run at midnight (0 0), on the 1st-7th of each month (1-7), every month (*), any day of the week (*), but only if the test shows it is a Monday (test $(date +%u) -eq 1 &&), and it calls the parity check directly. I add this to my go file so it's always in my cron job list on reboot:

     # Add unRAID Fan Control & Monthly Parity Check Cron Jobs
     crontab -l >/tmp/crontab
     echo "# Run unRAID Fan Speed script every 5 minutes" >>/tmp/crontab
     echo "*/5 * * * * /boot/unraid-fan-speed.sh >/dev/null" >>/tmp/crontab
     echo "# Run a Parity Check on the First Monday of each Month at 12:00am" >>/tmp/crontab
     echo '0 0 1-7 * * test $(date +%u) -eq 1 && /root/mdcmd check CORRECT' >>/tmp/crontab
     crontab /tmp/crontab

     Note: I also add my unraid-fan-speed.sh script as a cron job; you don't need that part. Doing it this way doesn't require any packages or scripts or add-ons. Just a few lines in your go file. -Paul

     Thank you!! Thank you!! I'm not entirely familiar with decoding the crontab format. I figured out how to change the date and time, but couldn't figure out the way I really wanted.
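     The day-of-week guard in that cron line is easy to verify outside of cron. A small sketch (uses GNU date's -d flag; the date below is just a known Monday picked for illustration):

     ```shell
     # date +%u prints the ISO weekday: 1 = Monday ... 7 = Sunday.
     # The cron entry fires every day of the 1st-7th, but the test
     # guard only lets the parity check run when that day is a Monday.
     dow=$(date -d "2012-09-03" +%u)   # 2012-09-03 was a Monday
     if test "$dow" -eq 1; then
         echo "would run: /root/mdcmd check CORRECT"
     else
         echo "skip"
     fi
     ```

     Since the first Monday of a month always falls on one of the first seven days, restricting mday to 1-7 plus the weekday test is exactly "first Monday of each month".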
  9. My guess is your Areca card... try using the parity disc on the SAS2LP as well. Why use this RAID card anyway?

     I don't have an SAS2LP card... it's a SASLP. I use the Areca for a RAID0 4TB parity and a 1TB RAID1 cache. I'm pretty sure it's not the Areca slowing things down. It even has 256MB of cache.

     root@nas:~# hdparm -tT /dev/sdb

     /dev/sdb:
      Timing cached reads:   26486 MB in 1.99 seconds = 13284.33 MB/sec
      Timing buffered disk reads:  830 MB in 3.10 seconds = 268.14 MB/sec
  10. These sub-100MB/s speeds are pissing me off. I need to figure out what my bottleneck is!
  11. This is my entire reason for this. My parity check starts at midnight on the 2nd day of the month. Right now, my parity check takes ~15 hours to complete. If that happens to fall on a weekend, that's almost a whole day that I'm having to deal with it. I wish there was a way to schedule my monthly parity check for something like the 1st Monday of each month. I would love for my parity check to not fall on the weekend. I will report back on Monday to see if my parity check speed has increased.
  12. I just rebooted and re-enabled Plex... my family members might be itchy if it wasn't available tonight. I'll find some time to run another test. Just for the hell of it, I'm running at full blast right now:

      Tunable (md_num_stripes): 5648
      Tunable (md_write_limit): 2544
      Tunable (md_sync_window): 2544

      Could the fact that I'm running my parity on a hardware RAID affect the numbers?
  13. Just finished a run with v2.0. I rebooted into "Safe Mode" and ran the utility from the console. I was also running top in another window and I never saw the CPU go over 3%. I'm thinking I need to get rid of the 1.5TB Seagates to pick up any more speed. Thanks Paul!

      Tunables Report from unRAID Tunables Tester v2.0 by Pauven

      NOTE: Use the smallest set of values that produce good results. Larger values increase server memory use, and may cause stability issues with unRAID, especially if you have any add-ons or plug-ins installed.

      Test | num_stripes | write_limit | sync_window | Speed
      --- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration) ---
        1 | 1408 |  768 |  512 | 67.5 MB/s
        2 | 1536 |  768 |  640 | 74.4 MB/s
        3 | 1664 |  768 |  768 | 75.5 MB/s
        4 | 1920 |  896 |  896 | 69.8 MB/s
        5 | 2176 | 1024 | 1024 | 77.2 MB/s
        6 | 2560 | 1152 | 1152 | 72.9 MB/s
        7 | 2816 | 1280 | 1280 | 78.7 MB/s
        8 | 3072 | 1408 | 1408 | 75.2 MB/s
        9 | 3328 | 1536 | 1536 | 75.6 MB/s
       10 | 3584 | 1664 | 1664 | 79.2 MB/s
       11 | 3968 | 1792 | 1792 | 74.8 MB/s
       12 | 4224 | 1920 | 1920 | 79.7 MB/s
       13 | 4480 | 2048 | 2048 | 75.3 MB/s
       14 | 4736 | 2176 | 2176 | 79.5 MB/s
       15 | 5120 | 2304 | 2304 | 78.9 MB/s
       16 | 5376 | 2432 | 2432 | 75.7 MB/s
       17 | 5632 | 2560 | 2560 | 80.8 MB/s
       18 | 5888 | 2688 | 2688 | 76.0 MB/s
       19 | 6144 | 2816 | 2816 | 79.5 MB/s
       20 | 6528 | 2944 | 2944 | 79.7 MB/s
      --- Targeting Fastest Result of md_sync_window 2560 bytes for Medium Pass ---
      --- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration) ---
       21 | 5416 | 2440 | 2440 | 76.5 MB/s
       22 | 5440 | 2448 | 2448 | 79.3 MB/s
       23 | 5456 | 2456 | 2456 | 78.6 MB/s
       24 | 5472 | 2464 | 2464 | 79.5 MB/s
       25 | 5488 | 2472 | 2472 | 79.3 MB/s
       26 | 5504 | 2480 | 2480 | 79.5 MB/s
       27 | 5528 | 2488 | 2488 | 79.2 MB/s
       28 | 5544 | 2496 | 2496 | 79.5 MB/s
       29 | 5560 | 2504 | 2504 | 79.2 MB/s
       30 | 5576 | 2512 | 2512 | 80.8 MB/s
       31 | 5600 | 2520 | 2520 | 79.5 MB/s
       32 | 5616 | 2528 | 2528 | 79.6 MB/s
       33 | 5632 | 2536 | 2536 | 80.6 MB/s
       34 | 5648 | 2544 | 2544 | 81.5 MB/s
       35 | 5664 | 2552 | 2552 | 80.7 MB/s
       36 | 5688 | 2560 | 2560 | 79.5 MB/s

      Completed: 2 Hrs 10 Min 56 Sec.

      Best Bang for the Buck: Test 3 with a speed of 75.5 MB/s
           Tunable (md_num_stripes): 1664
           Tunable (md_write_limit): 768
           Tunable (md_sync_window): 768
      These settings will consume 71MB of RAM on your hardware.

      Unthrottled values for your server came from Test 34 with a speed of 81.5 MB/s
           Tunable (md_num_stripes): 5648
           Tunable (md_write_limit): 2544
           Tunable (md_sync_window): 2544
      These settings will consume 242MB of RAM on your hardware.
      This is 99MB more than your current utilization of 143MB.
      NOTE: Adding additional drives will increase memory consumption.

      In unRAID, go to Settings > Disk Settings to set your chosen parameter values.
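      Since the tester writes its results to TunablesReport.txt as it goes, the fastest row can also be pulled back out of a saved (even partial) report. A sketch assuming the Test | num_stripes | write_limit | sync_window | Speed column layout above; adjust the path to wherever your copy of the report lives:

      ```shell
      # Scan a saved tunables report and print the row with the highest
      # MB/s figure (the speed is column 5 when split on '|').
      awk -F'|' '/MB\/s/ {
          s = $5                    # e.g. " 81.5 MB/s"
          gsub(/[^0-9.]/, "", s)    # strip down to the bare number
          if (s + 0 > best) { best = s + 0; line = $0 }
      } END { print "Fastest: " line }' TunablesReport.txt
      ```

      On the report above this would pick out test 34, matching the "unthrottled" recommendation.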
  14. Awesome! I had planned on running this last night, but was too tired to come upstairs to kick it off. I will run it tonight after the new version is posted.
  15. Hopefully you mean your desktop crashed and not your unRAID server, right? If you lost your connection during the test, that probably means two things: a Parity Check was still running (feel free to cancel it) and the last set of tested values are still in use. You can still see the accumulated results in the TunablesReport.txt file, as it is written to as the test progresses. This will also give you a clue as to what set of values was being tested. If you want to get back to your normal values, there are multiple ways, but the easiest is probably just to restart your server.

      I highly recommend using screen, especially if you are using a remote connection like Telnet. After you log onto the server, you run screen, and then you can run one or more console windows through screen. If you get disconnected for any reason, you telnet back onto the server and run screen -r to reconnect. Everything remained running while you were disconnected. I use screen when doing preclears. I'll telnet to the server, open up several screen console windows, start up multiple preclears, then close my telnet connection. I then monitor everything through unMenu's MyMain status page, which shows preclear progress. -Paul

      Yes, my Windows 7 workstation crashed. It's never blue-screened before. Looks like I have some work to do this weekend to see why. I usually run preclears with screen, but I figured this would take no more than 18 hours, and much of it would have been overnight. I went ahead and let the parity check continue to run. It's almost done and it appears to be running much faster than before. I will know for sure when it's done and I can calculate the speed. I will run this again, probably on Sunday night.
  16. Well crap! The workstation I ran this from blue-screened last night and this didn't finish. I'll run it again on the console next week. I need Plex up for the next few days.
  17. Scared the crap out of me!! Temps are now showing in Fahrenheit.
  18. New Hitachi 4TB. Took 39 hours for one pass.

      ========================================================================== 1.13
      == invoked as: ./preclear_disk.sh -A -m /dev/sda
      ==  Hitachi HDS724040ALE640   PK2301P
      == Disk /dev/sda has been successfully precleared
      == with a starting sector of 1
      == Ran 1 cycle
      ==
      == Using :Read block size = 8225280 Bytes
      == Last Cycle's Pre Read Time  : 9:46:06 (113 MB/s)
      == Last Cycle's Zeroing time   : 10:19:09 (107 MB/s)
      == Last Cycle's Post Read Time : 18:50:08 (59 MB/s)
      == Last Cycle's Total Time     : 38:56:32
      ==
      == Total Elapsed Time 38:56:32
      ==
      == Disk Start Temperature: 36C
      ==
      == Current Disk Temperature: 39C,
      ==
      ============================================================================
      ** Changed attributes in files: /tmp/smart_start_sda  /tmp/smart_finish_sda
          ATTRIBUTE             NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS RAW_VALUE
          Temperature_Celsius =   153     166      0              ok     39
      No SMART attributes are FAILING_NOW

      0 sectors were pending re-allocation before the start of the preclear.
      0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
      0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
      0 sectors are pending re-allocation at the end of the preclear,
         the number of sectors pending re-allocation did not change.
      0 sectors had been re-allocated before the start of the preclear.
      0 sectors are re-allocated at the end of the preclear,
         the number of sectors re-allocated did not change.
      ============================================================================
      ============================================================================
      ==
      == S.M.A.R.T Initial Report for /dev/sda
      == Disk: /dev/sda

      smartctl 5.40 2010-10-16 r3189 [i486-slackware-linux-gnu] (local build)
      Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

      === START OF INFORMATION SECTION ===
      Device Model:     Hitachi HDS724040ALE640
      Serial Number:    PK2301P
      Firmware Version: MJAOA3B0
      User Capacity:    4,000,787,030,016 bytes
      Device is:        Not in smartctl database [for details use: -P showall]
      ATA Version is:   8
      ATA Standard is:  ATA-8-ACS revision 4
      Local Time is:    Wed Sep 12 20:25:26 2012
                        Local time zone must be set--see zic manual page
      SMART support is: Available - device has SMART capability.
      SMART support is: Enabled

      === START OF READ SMART DATA SECTION ===
      SMART overall-health self-assessment test result: PASSED

      General SMART Values:
      Offline data collection status: (0x84) Offline data collection activity was
                                             suspended by an interrupting command from host.
                                             Auto Offline Data Collection: Enabled.
      Self-test execution status:     (   0) The previous self-test routine completed
                                             without error or no self-test has ever been run.
      Total time to complete Offline data collection: (  24) seconds.
      Offline data collection capabilities:  (0x5b) SMART execute Offline immediate.
                                             Auto Offline data collection on/off support.
                                             Suspend Offline collection upon new command.
                                             Offline surface scan supported.
                                             Self-test supported.
                                             No Conveyance Self-test supported.
                                             Selective Self-test supported.
      SMART capabilities:           (0x0003) Saves SMART data before entering
                                             power-saving mode.
                                             Supports SMART auto save timer.
      Error logging capability:       (0x01) Error logging supported.
                                             General Purpose Logging supported.
      Short self-test routine recommended polling time:    (   1) minutes.
      Extended self-test routine recommended polling time: ( 255) minutes.
      SCT capabilities:             (0x003d) SCT Status supported.
                                             SCT Error Recovery Control supported.
                                             SCT Feature Control supported.
                                             SCT Data Table supported.

      SMART Attributes Data Structure revision number: 16
      Vendor Specific SMART Attributes with Thresholds:
      ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
        1 Raw_Read_Error_Rate     0x000b 100   100   016    Pre-fail Always  -           0
        2 Throughput_Performance  0x0005 100   100   054    Pre-fail Offline -           0
        3 Spin_Up_Time            0x0007 100   100   024    Pre-fail Always  -           0
        4 Start_Stop_Count        0x0012 100   100   000    Old_age  Always  -           4
        5 Reallocated_Sector_Ct   0x0033 100   100   005    Pre-fail Always  -           0
        7 Seek_Error_Rate         0x000b 100   100   067    Pre-fail Always  -           0
        8 Seek_Time_Performance   0x0005 100   100   020    Pre-fail Offline -           0
        9 Power_On_Hours          0x0012 100   100   000    Old_age  Always  -           20
       10 Spin_Retry_Count        0x0013 100   100   060    Pre-fail Always  -           0
       12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           4
      192 Power-Off_Retract_Count 0x0032 100   100   000    Old_age  Always  -           4
      193 Load_Cycle_Count        0x0012 100   100   000    Old_age  Always  -           4
      194 Temperature_Celsius     0x0002 166   166   000    Old_age  Always  -           36 (Min/Max 25/37)
      196 Reallocated_Event_Count 0x0032 100   100   000    Old_age  Always  -           0
      197 Current_Pending_Sector  0x0022 100   100   000    Old_age  Always  -           0
      198 Offline_Uncorrectable   0x0008 100   100   000    Old_age  Offline -           0
      199 UDMA_CRC_Error_Count    0x000a 200   200   000    Old_age  Always  -           0

      SMART Error Log Version: 1
      No Errors Logged

      SMART Self-test log structure revision number 1
      No self-tests have been logged.  [To run self-tests, use: smartctl -t]

      SMART Selective self-test log data structure revision number 1
       SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
          1        0        0  Not_testing
          2        0        0  Not_testing
          3        0        0  Not_testing
          4        0        0  Not_testing
          5        0        0  Not_testing
      Selective self-test flags (0x0):
        After scanning selected spans, do NOT read-scan remainder of disk.
      If Selective self-test is pending on power-up, resume after 0 minute delay.
      ==============================================================================
      ==============================================================================
      == S.M.A.R.T Final Report for /dev/sda
      == Disk: /dev/sda

      smartctl 5.40 2010-10-16 r3189 [i486-slackware-linux-gnu] (local build)
      Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

      === START OF INFORMATION SECTION ===
      Device Model:     Hitachi HDS724040ALE640
      Serial Number:    PK2301P
      Firmware Version: MJAOA3B0
      User Capacity:    4,000,787,030,016 bytes
      Device is:        Not in smartctl database [for details use: -P showall]
      ATA Version is:   8
      ATA Standard is:  ATA-8-ACS revision 4
      Local Time is:    Fri Sep 14 11:21:58 2012
                        Local time zone must be set--see zic manual page
      SMART support is: Available - device has SMART capability.
      SMART support is: Enabled

      === START OF READ SMART DATA SECTION ===
      SMART overall-health self-assessment test result: PASSED

      General SMART Values:
      Offline data collection status: (0x84) Offline data collection activity was
                                             suspended by an interrupting command from host.
                                             Auto Offline Data Collection: Enabled.
      Self-test execution status:     (   0) The previous self-test routine completed
                                             without error or no self-test has ever been run.
      Total time to complete Offline data collection: (  24) seconds.
      Offline data collection capabilities:  (0x5b) SMART execute Offline immediate.
                                             Auto Offline data collection on/off support.
                                             Suspend Offline collection upon new command.
                                             Offline surface scan supported.
                                             Self-test supported.
                                             No Conveyance Self-test supported.
                                             Selective Self-test supported.
      SMART capabilities:           (0x0003) Saves SMART data before entering
                                             power-saving mode.
                                             Supports SMART auto save timer.
      Error logging capability:       (0x01) Error logging supported.
                                             General Purpose Logging supported.
      Short self-test routine recommended polling time:    (   1) minutes.
      Extended self-test routine recommended polling time: ( 255) minutes.
      SCT capabilities:             (0x003d) SCT Status supported.
                                             SCT Error Recovery Control supported.
                                             SCT Feature Control supported.
                                             SCT Data Table supported.

      SMART Attributes Data Structure revision number: 16
      Vendor Specific SMART Attributes with Thresholds:
      ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
        1 Raw_Read_Error_Rate     0x000b 100   100   016    Pre-fail Always  -           0
        2 Throughput_Performance  0x0005 100   100   054    Pre-fail Offline -           0
        3 Spin_Up_Time            0x0007 100   100   024    Pre-fail Always  -           0
        4 Start_Stop_Count        0x0012 100   100   000    Old_age  Always  -           4
        5 Reallocated_Sector_Ct   0x0033 100   100   005    Pre-fail Always  -           0
        7 Seek_Error_Rate         0x000b 100   100   067    Pre-fail Always  -           0
        8 Seek_Time_Performance   0x0005 100   100   020    Pre-fail Offline -           0
        9 Power_On_Hours          0x0012 100   100   000    Old_age  Always  -           58
       10 Spin_Retry_Count        0x0013 100   100   060    Pre-fail Always  -           0
       12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           4
      192 Power-Off_Retract_Count 0x0032 100   100   000    Old_age  Always  -           4
      193 Load_Cycle_Count        0x0012 100   100   000    Old_age  Always  -           4
      194 Temperature_Celsius     0x0002 153   153   000    Old_age  Always  -           39 (Min/Max 25/40)
      196 Reallocated_Event_Count 0x0032 100   100   000    Old_age  Always  -           0
      197 Current_Pending_Sector  0x0022 100   100   000    Old_age  Always  -           0
      198 Offline_Uncorrectable   0x0008 100   100   000    Old_age  Offline -           0
      199 UDMA_CRC_Error_Count    0x000a 200   200   000    Old_age  Always  -           0

      SMART Error Log Version: 1
      No Errors Logged

      SMART Self-test log structure revision number 1
      No self-tests have been logged.  [To run self-tests, use: smartctl -t]

      SMART Selective self-test log data structure revision number 1
       SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
          1        0        0  Not_testing
          2        0        0  Not_testing
          3        0        0  Not_testing
          4        0        0  Not_testing
          5        0        0  Not_testing
      Selective self-test flags (0x0):
        After scanning selected spans, do NOT read-scan remainder of disk.
      If Selective self-test is pending on power-up, resume after 0 minute delay.
      ==============================================================================
  19. Is anybody running this under 5.0? I upgraded from 4.7 (where it was working perfectly) to 5.0-rc6-r8168-test2, and now it's not working. pwmconfig doesn't find any devices.

      root@nas:/sys/class/hwmon/hwmon0/device# pwmconfig
      # pwmconfig revision 5770 (2009-09-16)
      This program will search your sensors for pulse width modulation (pwm)
      controls, and test each one to see if it controls a fan on
      your motherboard. Note that many motherboards do not have pwm
      circuitry installed, even if your sensor chip supports pwm.

      We will attempt to briefly stop each fan using the pwm controls.
      The program will attempt to restore each fan to full speed after
      testing. However, it is ** very important ** that you physically
      verify that the fans have been set to full speed after the program
      has completed.

      /usr/sbin/pwmconfig: There are no pwm-capable sensor modules installed

      /sys/class/hwmon/hwmon0/device doesn't list any PWM devices either:

      lrwxrwxrwx 1 root root    0 2012-09-01 12:57 driver -> ../../../bus/platform/drivers/coretemp/
      drwxr-xr-x 3 root root    0 2012-09-01 12:57 hwmon/
      -r--r--r-- 1 root root 4096 2012-09-02 12:41 modalias
      -r--r--r-- 1 root root 4096 2012-09-01 12:58 name
      drwxr-xr-x 2 root root    0 2012-09-02 12:41 power/
      lrwxrwxrwx 1 root root    0 2012-09-01 12:57 subsystem -> ../../../bus/platform/
      -r--r--r-- 1 root root 4096 2012-09-01 12:58 temp2_crit
      -r--r--r-- 1 root root 4096 2012-09-01 12:58 temp2_crit_alarm
      -r--r--r-- 1 root root 4096 2012-09-01 12:58 temp2_input
      -r--r--r-- 1 root root 4096 2012-09-02 12:41 temp2_label
      -r--r--r-- 1 root root 4096 2012-09-01 12:58 temp2_max
      -r--r--r-- 1 root root 4096 2012-09-01 12:58 temp3_crit
      -r--r--r-- 1 root root 4096 2012-09-01 12:58 temp3_crit_alarm
      -r--r--r-- 1 root root 4096 2012-09-01 12:58 temp3_input
      -r--r--r-- 1 root root 4096 2012-09-02 12:41 temp3_label
      -r--r--r-- 1 root root 4096 2012-09-01 12:58 temp3_max
      -r--r--r-- 1 root root 4096 2012-09-01 12:58 temp4_crit
      -r--r--r-- 1 root root 4096 2012-09-01 12:58 temp4_crit_alarm
      -r--r--r-- 1 root root 4096 2012-09-01 12:58 temp4_input
      -r--r--r-- 1 root root 4096 2012-09-02 12:41 temp4_label
      -r--r--r-- 1 root root 4096 2012-09-01 12:58 temp4_max
      -r--r--r-- 1 root root 4096 2012-09-01 12:58 temp5_crit
      -r--r--r-- 1 root root 4096 2012-09-01 12:58 temp5_crit_alarm
      -r--r--r-- 1 root root 4096 2012-09-01 12:58 temp5_input
      -r--r--r-- 1 root root 4096 2012-09-02 12:41 temp5_label
      -r--r--r-- 1 root root 4096 2012-09-01 12:58 temp5_max
      -rw-r--r-- 1 root root 4096 2012-09-01 12:57 uevent

      ls -l of /sys/class/hwmon only lists hwmon0:

      root@nas:/sys/class/hwmon# ls -l
      total 0
      lrwxrwxrwx 1 root root 0 2012-09-01 12:58 hwmon0 -> ../../devices/platform/coretemp.0/hwmon/hwmon0/

      Output of unraid-fan-speed.sh:

      root@nas:/boot/custom/bin# unraid-fan-speed.sh
      Highest temp is: 35
      ./unraid-fan-speed.sh: line 74: /sys/class/hwmon/hwmon0/device/pwm2_enable: No such file or directory
      cat: /sys/class/hwmon/hwmon0/device/pwm2: No such file or directory
      ./unraid-fan-speed.sh: line 89: [: : integer expression expected
      ./unraid-fan-speed.sh: line 93: /sys/class/hwmon/hwmon0/device/pwm2: No such file or directory
      Setting pwm to: 120

      Any ideas??? Thanks.
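      Since pwmconfig reports no pwm-capable modules, a quick check of what the kernel actually exposes can narrow it down. A sketch (just a loop over standard hwmon sysfs paths; whether any pwm* files exist depends on which sensor driver is loaded, and coretemp alone only provides temperature readings, which matches the listing above):

      ```shell
      # Look for PWM fan controls anywhere under /sys/class/hwmon.
      # If only coretemp is loaded, none will exist, and pwmconfig will
      # report "no pwm-capable sensor modules" exactly as seen above.
      found=0
      for pwm in /sys/class/hwmon/hwmon*/pwm[0-9] /sys/class/hwmon/hwmon*/device/pwm[0-9]; do
          [ -e "$pwm" ] || continue
          found=1
          echo "PWM control: $pwm"
      done
      [ "$found" -eq 1 ] || echo "No pwm* files found - a Super I/O sensor module (e.g. it87 or w83627ehf, whichever matches the board) may need to be loaded"
      ```

      One possibility is that the 5.0-rc kernel simply doesn't auto-load the Super I/O sensor driver the way 4.7 did, so modprobing the right module before running pwmconfig would be worth trying.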
  20. I recently purchased one of these: http://www.amazon.com/gp/product/B004C3VI20 The description states, "Supports TWO 2.5" SATA I or II Hard Drives or SSD". However, I have a Crucial C300 and a Crucial M4 SSD in it (both SATA III/6Gb/s devices), and I get the exact same speeds that I get when hooked directly to the motherboard.
  21. See this post: http://lime-technology.com/forum/index.php?topic=11585.msg110605#msg110605 Thanks!