StevenD

Community Developer
  • Posts: 1,608
  • Joined
  • Days Won: 1

Everything posted by StevenD

  1. You missed the key file. It's on the root of the flash drive.
  2. Do you have an "Auto ON" or "Always ON" setting in BIOS? It's been a while since I used this motherboard, so I don't remember.
  3. StevenD

    Slim HDMI

    I would join in a heartbeat! That being said, I've been able to find a lot of what Monoprice sells on Amazon, with Prime shipping. http://www.amazon.com/b/ref=sr_1_1_acs_h_1i_2529935011&node=2529935011?ie=UTF8&qid=1378397711&sr=8-1-acs
  4. My new Monoprice cables are due to arrive today. I will let you know how they work out.
  5. StevenD

    Slim HDMI

    I don't think $10 for a 6' RedMere cable is expensive at all!
  6. Sounds like your Areca is not the problem. 60 MB/s write speeds are phenomenal for an unRAID server. Hopefully we can tune your write speeds soon for even better performance! Yeah, not much sense in running your script right now. I'll wait until I get my "final" config. All controllers will be 6Gbps, no drives smaller than 2TB. The cache drive has been upgraded to a 180GB Intel SSD. Hopefully we will have the cache pool shortly. I have another 180GB drive ready to add to it for redundancy. Once I get everything working how I want, I will then move it to ESXi. I have two ESXi servers already, with a fibre channel SAN. It would be nice to have one more host for failovers.
  7. My guess is your Areca card... try using the parity disk on the SAS2LP as well. Why use this RAID card anyway?

     Well, just for grins, I moved my parity to a 6Gbps on-board SATA port and I'm not very happy with the results. I lost 20MB/s on my array write speeds. I'm down to 40MB/s again. I thought I would be installing my M1015 this weekend, but my existing cables are about an inch too short! Looks like it will be Friday before I'm able to migrate to the M1015s. In the meantime, I've been able to pull out one 1.5TB drive and I'm working on the other two this week. I replaced one with a 4TB drive and the other two are just being distributed to other drives in the array for now. If the new config doesn't give me at least 60MB/s sustained array writes, I will be re-installing my Areca. Or getting a new Areca that supports 6Gbps.
  8. It was posted that Tom is away for a few days dealing with his father's estate. It's also a holiday weekend!
  9. You need to get your case model number, then figure out which breakout cable you need. They look like this: this particular one is part number CBL-0084L. It's the 16-pin one. I believe they also come in a 20-pin.
  10. This! Been using it for years. Way better than Windows Indexing.
  11. Thanks! This worked perfectly on my two new cards!
  12. As suspected...Tom released 5.0 so he can take a vacation!
  13. I'm splitting up my RAID0 parity. I'm pretty sure those drives are good. Thanks for the warning.
  14. I have a known working drive. I know, I know... Anyhow, is there a "quick" preclear so I can add it to the array quickly?
  15. Hey Steven, I just went back and looked at your results, and I would agree that your parity drive array is probably not the problem. Most likely those 1.5TB drives are the main culprit. Having a mix of drive sizes impacts parity check/rebuild speed in multiple ways. Primarily, the slowest drive sets the pace for the whole array. Additionally, you get multiple slow-downs as each drive reaches the inner cylinders at different points during the parity check, so you would have slowdowns approaching 1.5TB, 2TB and 4TB. This doesn't necessarily affect read or write performance unless you're accessing data on one of those drives. Unless your parity check/rebuild times are unfathomably long, upgrades may not be cost effective. Anyway, I'm interested to hear how your upgrades go. -Paul

     My parity checks are longer than I would like them to be (~15 hours). I see some folks with ~8 hour checks with a 4TB parity (a rough estimate of where those hours come from is sketched after this list). Besides the speed of a RAID0, one of the reasons I went with the Areca was the ability to configure up to an 8TB parity without buying new drives. I can play around with it and see what my best solution will be. I've been wanting to run unRAID on top of ESXi, and the two additional controllers will allow me to do that. Right now, I'm using the motherboard controllers in my array, so I don't have any controllers left over for ESXi after passing them through. I plan on putting in a new dual-port NIC that I will pass through to unRAID and experiment with NIC teaming. My unRAID server hosts all my media for the TVs in my house, as well as via Plex to several family members outside my house. My HTPCs are the only sources on each of my TVs, so I need to maximize performance as much as possible.
  16. Well, this thread has inspired me to do some upgrades. The sub-100MB/s speeds are just killing me. I have two M1015s coming on Saturday, as well as new SSDs for cache. I also plan on getting rid of the 1.5TB drives finally. At least for now, I'm going to take the Areca card out to see if that's the bottleneck...I find it hard to believe it is.
  17. I usually stay out of these discussions, but a "Pro" license is peanuts compared to what most of us spend on the hardware for our unRAID servers. Just buy the Pro and move along.
  18. There is a way! Your scheduled parity check is just a cron job. The crontab format is:

     #minute hour mday month wday command

     So if you used the following values:

     0 0 1-7 * * test $(date +%u) -eq 1 && /root/mdcmd check CORRECT

     Then the job would run at midnight (0 0), sometime between the 1st-7th of each month (1-7), every month (*), every day of the week (*), but only if the test shows it is a Monday (test $(date +%u) -eq 1 &&), and it calls the parity check directly. I add this to my go file so it's always in my cron job list on reboot:

     # Add unRAID Fan Control & Monthly Parity Check Cron Jobs
     crontab -l >/tmp/crontab
     echo "# Run unRAID Fan Speed script every 5 minutes" >>/tmp/crontab
     echo "*/5 * * * * /boot/unraid-fan-speed.sh >/dev/null" >>/tmp/crontab
     echo "# Run a Parity Check on the First Monday of each Month at 12:00am" >>/tmp/crontab
     echo "0 0 1-7 * * test $(date +%u) -eq 1 && /root/mdcmd check CORRECT" >>/tmp/crontab
     crontab /tmp/crontab

     Note: I also add my unraid-fan-speed.sh script as a cron job; you don't need that part. Doing it this way doesn't require any packages or scripts or add-ons. Just a few lines in your go file. -Paul

     Thank you!! Thank you!! I'm not entirely familiar with decoding the crontab format. I figured out how to change the date and time, but couldn't figure out the way I really wanted. (A quoting/escaping note on this cron entry is sketched after this list.)
  19. My guess is your Areca card... try using the parity disk on the SAS2LP as well. Why use this RAID card anyway?

     I don't have an SAS2LP card...it's an SASLP. I use the Areca for a RAID0 4TB parity and a 1TB RAID1 cache. I'm pretty sure it's not the Areca slowing things down. It even has 256MB of cache. (A per-disk comparison loop is sketched after this list.)

     root@nas:~# hdparm -tT /dev/sdb

     /dev/sdb:
      Timing cached reads:     26486 MB in  1.99 seconds = 13284.33 MB/sec
      Timing buffered disk reads:  830 MB in  3.10 seconds = 268.14 MB/sec
  20. Seeing all these 100MB/s+ speeds is pissing me off. I need to figure out what my bottleneck is!
  21. This is my entire reason for this. My parity check starts at midnight on the 2nd day of the month. Right now, my parity check takes ~15 hours to complete. If that happens to fall on a weekend, that's almost a whole day that I'm having to deal with it. I wish there was a way to schedule my monthly parity check for something like the 1st Monday of each month. I would love for my parity check to not fall on the weekend. I will report back on Monday to see if my parity check speed has increased.
  22. I just rebooted and re-enabled Plex...my family members might be itchy if it wasn't available tonight. I'll find some time to run another test. Just for the hell of it, I'm running at full blast right now:

     Tunable (md_num_stripes): 5648
     Tunable (md_write_limit): 2544
     Tunable (md_sync_window): 2544

     Could the fact that I'm running my parity on a hardware RAID affect the numbers?
  23. Just finished a run with v2.0. I rebooted into "Safe Mode" and ran the utility from the console. I was also running top in another window and I never saw the CPU go over 3%. I'm thinking I need to get rid of the 1.5TB Seagates to pick up any more speed. Thanks Paul! (Applying the chosen values from the console is sketched after this list.)

     Tunables Report from unRAID Tunables Tester v2.0 by Pauven

     NOTE: Use the smallest set of values that produce good results. Larger values increase server memory use, and may cause stability issues with unRAID, especially if you have any add-ons or plug-ins installed.

     Test | num_stripes | write_limit | sync_window | Speed
     --- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration) ---
       1  |    1408     |     768     |     512     | 67.5 MB/s
       2  |    1536     |     768     |     640     | 74.4 MB/s
       3  |    1664     |     768     |     768     | 75.5 MB/s
       4  |    1920     |     896     |     896     | 69.8 MB/s
       5  |    2176     |    1024     |    1024     | 77.2 MB/s
       6  |    2560     |    1152     |    1152     | 72.9 MB/s
       7  |    2816     |    1280     |    1280     | 78.7 MB/s
       8  |    3072     |    1408     |    1408     | 75.2 MB/s
       9  |    3328     |    1536     |    1536     | 75.6 MB/s
      10  |    3584     |    1664     |    1664     | 79.2 MB/s
      11  |    3968     |    1792     |    1792     | 74.8 MB/s
      12  |    4224     |    1920     |    1920     | 79.7 MB/s
      13  |    4480     |    2048     |    2048     | 75.3 MB/s
      14  |    4736     |    2176     |    2176     | 79.5 MB/s
      15  |    5120     |    2304     |    2304     | 78.9 MB/s
      16  |    5376     |    2432     |    2432     | 75.7 MB/s
      17  |    5632     |    2560     |    2560     | 80.8 MB/s
      18  |    5888     |    2688     |    2688     | 76.0 MB/s
      19  |    6144     |    2816     |    2816     | 79.5 MB/s
      20  |    6528     |    2944     |    2944     | 79.7 MB/s
     --- Targeting Fastest Result of md_sync_window 2560 bytes for Medium Pass ---
     --- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration) ---
      21  |    5416     |    2440     |    2440     | 76.5 MB/s
      22  |    5440     |    2448     |    2448     | 79.3 MB/s
      23  |    5456     |    2456     |    2456     | 78.6 MB/s
      24  |    5472     |    2464     |    2464     | 79.5 MB/s
      25  |    5488     |    2472     |    2472     | 79.3 MB/s
      26  |    5504     |    2480     |    2480     | 79.5 MB/s
      27  |    5528     |    2488     |    2488     | 79.2 MB/s
      28  |    5544     |    2496     |    2496     | 79.5 MB/s
      29  |    5560     |    2504     |    2504     | 79.2 MB/s
      30  |    5576     |    2512     |    2512     | 80.8 MB/s
      31  |    5600     |    2520     |    2520     | 79.5 MB/s
      32  |    5616     |    2528     |    2528     | 79.6 MB/s
      33  |    5632     |    2536     |    2536     | 80.6 MB/s
      34  |    5648     |    2544     |    2544     | 81.5 MB/s
      35  |    5664     |    2552     |    2552     | 80.7 MB/s
      36  |    5688     |    2560     |    2560     | 79.5 MB/s

     Completed: 2 Hrs 10 Min 56 Sec.

     Best Bang for the Buck: Test 3 with a speed of 75.5 MB/s
     Tunable (md_num_stripes): 1664
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 768
     These settings will consume 71MB of RAM on your hardware.

     Unthrottled values for your server came from Test 34 with a speed of 81.5 MB/s
     Tunable (md_num_stripes): 5648
     Tunable (md_write_limit): 2544
     Tunable (md_sync_window): 2544
     These settings will consume 242MB of RAM on your hardware.
     This is 99MB more than your current utilization of 143MB.

     NOTE: Adding additional drives will increase memory consumption.

     In unRAID, go to Settings > Disk Settings to set your chosen parameter values.
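
Rough math behind the ~15 hour parity checks mentioned in posts 15 and 21: a parity check has to read every sector of the parity drive, so its duration is roughly parity capacity divided by the average array speed. This is only a back-of-the-envelope sketch, assuming a steady average speed and decimal drive sizing; it is not from the original posts.

    # 4TB parity (~4,000,000 MB) at the ~75 MB/s average seen in the tunables report:
    echo "scale=1; 4000000 / 75 / 3600" | bc    # ~14.8 hours, close to the reported ~15
    # Average speed needed for an ~8 hour check on the same 4TB parity:
    echo "scale=1; 4000000 / (8 * 3600)" | bc   # ~138.8 MB/s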
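
On the crontab entry in post 18, two small gotchas are worth verifying on your own server; both points below are assumptions to check rather than part of the original post. Classic crontab(5) treats an unescaped % in the command field as a newline, and a double-quoted echo in the go file expands $(date +%u) at boot time instead of when cron fires. Single quotes plus an escaped % avoid both:

    # Append the first-Monday parity check from the go file (sketch).
    # Single quotes keep $(date ...) literal until cron runs the job;
    # \% stops cron from splitting the line at the percent sign.
    crontab -l >/tmp/crontab
    echo '0 0 1-7 * * test $(date +\%u) -eq 1 && /root/mdcmd check CORRECT' >>/tmp/crontab
    crontab /tmp/crontab

    # Sanity-check the weekday test by hand (1 = Monday ... 7 = Sunday):
    date +%u
    test $(date +%u) -eq 1 && echo "today is Monday, the check would run"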
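
For the hdparm numbers in post 19: /dev/sdb there is the Areca RAID0 parity volume, so benchmarking each array member individually makes it easier to spot one slow drive (for example the 1.5TB Seagates) holding back a parity check. A minimal sketch; the device names are placeholders for your own disks, and hdparm only measures sequential reads, so it will not show inner-cylinder slowdowns.

    # Compare raw sequential read speed across array members (example device names).
    for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
        echo "=== $dev ==="
        hdparm -tT "$dev"
    done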
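
On applying the values from the tunables report in post 23: Settings > Disk Settings is the supported route, but the same tunables can also be set from the console through mdcmd for quick experiments. The exact syntax below is an assumption based on the unRAID 5.x md driver that the tester targets; confirm it against your version before scripting it.

    # Apply the "Best Bang for the Buck" values from Test 3 (sketch, not persistent across reboots).
    /root/mdcmd set md_num_stripes 1664
    /root/mdcmd set md_write_limit 768
    /root/mdcmd set md_sync_window 768
    # Check what the driver currently reports:
    /root/mdcmd status | grep -iE 'stripes|limit|window'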