maxinc

Members
  • Posts: 36

Everything posted by maxinc

  1. Hi all and happy new 2015! Just wanted to check in and report a very similar behaviour with the same model of drives (WD20EARS) on unRAID 6.0-beta12.

     System: ASUSTeK COMPUTER INC. - H87M-PLUS
     CPU: Intel® Core™ i7-4770S CPU @ 3.10GHz
     Cache: 256 kB, 1024 kB, 8192 kB
     Memory: 8192 MB (max. 32 GB)
     Network: eth0: 1000Mb/s - Full Duplex
     Kernel: Linux 3.17.4-unRAID x86_64

     Parity: WDC_WD20EARX-00PASB0_WD-WCAZAJ839165 (sdb)
     Disk 1: WDC_WD20EARS-00MVWB0_WD-WCAZA4635215 (sdf)
     Disk 2: WDC_WD20EARS-00MVWB0_WD-WCAZA4635377 (sdg) - frequently affected
     Disk 3: WDC_WD20EARS-00MVWB0_WD-WCAZA4578960 (sdh) - occasionally affected
     Disk 4: WDC_WD20EARS-00MVWB0_WD-WCAZA4629088 (sdi) - frequently affected
     Disk 7: SAMSUNG_HD154UI_S1XWJD2ZB03874 (sdd)
     Disk 8: SAMSUNG_HD154UI_S1XWJ1LZ400560 (sde)
     Cache: SanDisk_SDSSDXP240G_142517400522 (sdc)

     I'm only running cache_dirs and Plex Media Server inside a Docker container. Running cache_dirs alone, with PMS turned off, doesn't seem to trigger the problem. With PMS on, drives stay off until I access a certain movie. After I finish watching, I can see the spin-down commands for all spinning drives in the system log. They momentarily appear to go off, but then they come back on and stay on indefinitely. Spinning them down manually - either individually or through the array spin-down command - spins them down completely and keeps them off. I tried changing the spin-down delay from 15 min to 30 min, but the same effect occurs. I used to think only a couple of drives were affected (disks 2 and 4), but I'm now noticing it on disk 3 too. I've never seen it on the Samsung drives, though.

     Hope this helps identify the root of the problem. Happy to run any kind of tests if it helps, although my Linux skills are not much more than a few basic shell commands. Unfortunately for me, going back to v5 is no longer an option since I joined two physical machines into one and love the new Dockers.

     Best, Andy
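     If you want to check the drive states from the console yourself, here is a minimal sketch using standard hdparm commands (the device name is just one of mine; adjust to your system):

        # check whether a drive is spun up (active/idle) or in standby
        hdparm -C /dev/sdg

        # force an immediate spin-down, then watch the log to see what wakes it
        hdparm -y /dev/sdg
        tail -f /var/log/syslog | grep -i spin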
  2. You probably missed the word "little", but I'm glad we agree
  3. I played with ESXi on the N36L briefly, but because (1) you cannot pass a hardware controller through to a VM and (2) the CPU is incredibly weak for any practical application, it makes little sense to run ESXi on a MicroServer other than for testing and experimenting.
  4. Today I've installed a new, more efficient PSU and taken some measurements on the new tower. Needless to say, I'm impressed! Best of all, the N36L board doesn't seem to suffer from high CPU load during writes to cached user shares like the Atom boards seem to do. Although I don't have a spare SSD to test at full Gbit speeds, CPU load writing to the cache drive (300G 7200rpm drive) alone at ~75-80MB/s was about 35-40%, while writing to a cached user share added a small overhead of about 10%, with room to spare. Bearing in mind this is a 1.3GHz CPU, I would say this is a great low-power board for unRAID.

     And for the stats:

     N36L MicroServer board with 4GB RAM
     Supermicro MV8 controller
     4 x 12cm fans powered @ 7V
     7 drives - 4 x 2TB WD EARS, 2 x 1.5TB Samsung Green, 1 x 300G Hitachi

     --- Tagon, dual 12V rail @ 20amps/rail - 480W PSU
     Power off - 7.5W
     Boot peak - 152W
     Boot - 102W
     Idle - all drives spun up - 92W
     Idle - 1 drive spun up - 61W
     Idle - all drives spun down - 56W

     --- Antec Neo Eco, single rail @ 30amps/rail - 400W PSU
     Power off - 1.5W
     Boot peak - 122W
     Boot - 82W
     Idle - all drives spun up - 74W
     Idle - 1 drive spun up - 52W
     Idle - all drives spun down - 43W
  5. I ended up doing the Trust Parity procedure, which validated it immediately. A parity check and 7 hours later, everything is OK
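     For anyone searching later: as I remember it, the core of the Trust Parity procedure on 4.7 is initconfig from a console session (this is from memory; follow the wiki write-up rather than this sketch):

        # with the array stopped, from a telnet/console session:
        initconfig
        # then re-check the drive assignments in the web GUI, start the array,
        # and finish with a parity check to verify the trusted parity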
  6. With an array this large, I would be reluctant to upgrade, especially if I didn't particularly need any of the v5 features. I would try it on a test machine first to familiarise myself with the new features and concepts before trying it on a huge array. But that's just me getting anxious when talking about terabytes of data
  7. After finishing some long drive consolidation (replacing lots of smaller drives with fewer larger ones), I've come to a point where I would like to arrange the disks in the array to better correspond to their physical location in the tower. I have tried following the instructions on the wiki, but upon reassignment to different slots I'm getting the following screen: http://imageshack.us/photo/my-images/16/screenshot20130110at095.png/ At this stage I'm inclined to follow the Trust My Array procedure described here, but the warnings mention that no drives should be marked as disabled or missing. Maybe I'm misreading this, and I thought I'd better ask first: is it safe to ignore the disks marked as missing as long as they are reassigned to different slots? Thanks!
  8. What was the average speed during the file transfer? As suggested, XBMC is probably the best media player companion for unRAID, so it's worth a try to narrow down the issue, which at this point could be a combination of network performance and hardware and software capabilities. If you are running other services on the unRAID box, such as SABnzbd or SickBeard, I would expect a performance drop, especially if simultaneous read/write operations hit the same disk, even with a more powerful CPU. My new unRAID build, based on a MicroServer with a 1.3GHz CPU, works miracles and streams 1080p to several clients simultaneously, so I know the CPU alone is hardly a limiting factor with unRAID.
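     If you want a quick number for raw read throughput from the array, a simple sketch using dd (the file path is just an example; point it at any large file):

        # read a large file and discard it; dd reports MB/s when it finishes
        dd if=/mnt/disk1/Movies/sample.mkv of=/dev/null bs=1M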
  9. I don't see why it wouldn't work. When transferring data between disks, I would copy it rather than move it; this way I would still have a copy if the rebuild fails for any reason. Also, it is best if you don't use the server during the rebuild (like watching movies or copying data) to minimise stress on the busy disks. For me it is always a time of great anxiety when such operations take place )
  10. Thanks Joe, that's reassuring. I think I now have a better understanding of how to read SMART reports
  11. As suspected, the 195 Hardware_ECC_Recovered values have reset to "normal" values (a few thousand) after the power down / reboot. I guess they can safely be ignored for the time being ... unless someone has a different theory
  12. The rebuild process moved past the 1.5TB mark and the 2 Samsung drives stopped incrementing Hardware_ECC_Recovered, which has now reached 483,000,419 and 355,541,041. At this point I'm convinced this happened during this rebuild alone, and I can't wait for it to finish so that I can power down and reboot the server. It may be that the value will reset to 0 and that this is some kind of internal counter, similar to what Seagate drives seem to use.
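     In case anyone wants to track this on their own drives, a small sketch using standard smartctl (the device name is just an example; re-run it periodically during the rebuild to see the counter move):

        # print only attribute 195 (Hardware_ECC_Recovered) for one drive
        smartctl -A /dev/sdd | awk '$1 == 195'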
  13. I could be wrong since I never did this on v5, but on v4.7 I was able to access the web interface at all times during a rebuild, and mine took about 5-6 hours for a 2TB drive, so I would be inclined to suspect something is wrong at this stage. Can you connect a monitor and see if you can access the console?
  14. Basically yes, it will create a single pool of storage equal to the combined size of all data drives (excluding the parity drive), to which you can assign one or multiple user shares as you see fit. Needing all drives active at once is only true during routine parity checks / rebuilds (once a month, or when adding / removing drives), where every drive has to be read simultaneously for the computation. Since files are not split among drives, read and write speeds are determined by the drive where the data being read / written sits, although writes are much slower than reads since parity has to be written to the parity drive too. The parity drive needs to be equal to or larger than the largest data drive in the array: for 1T + 1T + 2T + 3T data drives you need at least a 3T parity drive (but it can be larger, such as 4T). If you add a 4T data drive to the array, you need to upgrade the parity drive to at least 4T. (The XOR arithmetic behind this is sketched below.)
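     To make the parity idea concrete, here is a toy sketch of the XOR maths this kind of single-parity scheme is built on (single bytes standing in for whole drives):

        # parity is the XOR of every data drive's contents, byte by byte
        d1=0x5A; d2=0x3C; d3=0xF0
        parity=$(( d1 ^ d2 ^ d3 ))
        # if drive 2 dies, its data comes back by XOR-ing parity with the survivors
        recovered=$(( parity ^ d1 ^ d3 ))
        printf 'parity=0x%02X  recovered d2=0x%02X\n' "$parity" "$recovered"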
  15. You could check if you need to enable Network Discovery in Windows 8 ... http://windows.microsoft.com/en-US/windows-vista/Enable-or-disable-network-discovery
  16. I've been in this situation recently when I had to replace 4 x 1TB drives with 2 x 2TB. I needed the 1TB drives for something else, and I like running the tower with the minimum number of drives, no more than I actually need. For me 2TB of free space is enough, and I add more capacity once I'm well into the last TB. There is information on the wiki on how to remove drives and separate info on how to upgrade drives:

      http://lime-technology.com/forum/index.php?topic=2088.0
      http://lime-technology.com/wiki/index.php/Replacing_a_Data_Drive

      As jonathanm mentioned, when you remove a drive which you don't intend to replace, unRAID will have to recompute the parity, which will leave all your data unprotected for several hours. To minimise any potential problems, I would suggest performing a parity check before and after each procedure and inspecting the SMART reports for the drives after each one. Here's what I'd do in your case:

      1. Parity check + SMART report check. Make sure there are no errors in the array to start with.
      2. Replace the 1st 500G drive with the 2TB drive (see 2nd link) -> wait several hours for the upgrade.
      3. Parity check + SMART report check. Make sure all went OK and the new drive is healthy.
      4. Copy all data from the 2nd 500G disk to the upgraded disk.
      5. Stop the array. Unassign the drive from the array. Telnet into the server > initconfig (see instructions in the 1st link).
      6. Start the array and wait patiently for the rebuild. -> SMART report check.
      7. Parity check + SMART report check. Before using the array again, make sure all went well.

      It is a long process, and it would be a lot shorter if you had kept all 3 drives. On the other hand, this is a good opportunity to check that the array is in good shape and identify any potential weakness in the system. One quicker possibility, if you have a spare SATA port, is to:

      1. Add the 2TB drive into the array. Being pre-cleared, this should take only a few minutes.
      2. Copy data from the 2 x 500G drives.
      3. Parity check + SMART.
      4. Remove both 500G drives at the same time. initconfig + rebuild.
      5. Parity check + SMART.

      (A quick way to script the repeated SMART checks is sketched below.)
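     Since "SMART report check" comes up after every step, here is a one-liner sketch to pull the health summary from every drive at once (standard smartctl; the device glob, and whether you need a -d option, depend on your controller):

        # quick pass/fail health plus reallocated-sector count for each drive
        for d in /dev/sd[a-z]; do
            echo "== $d =="
            smartctl -H -A "$d" | grep -E 'overall-health|Reallocated_Sector_Ct'
        done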
  17. I would first copy one of the 1080p files from the unRAID server to the HTPC and play it locally. In the process I would evaluate the transfer speed as well as whether the file plays correctly on the HTPC. You may be surprised by the capabilities of the HTPC software. Is your HTPC connected via WiFi? That would explain poor streaming performance for 1080p. A 10Mbit/s movie needs about 1.25MB/s sustained transfer speed, which shouldn't be a problem given the reported transfer speed of 10-30MB/s. These speeds should comfortably allow full Blu-ray streams of 54Mbit/s over a LAN connection, but WiFi could struggle under certain conditions.
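     The conversion I'm using is just bits to bytes, ignoring protocol overhead:

        # Mbit/s to MB/s: divide by 8
        echo "10 / 8" | bc -l    # a 10Mbit/s movie needs ~1.25MB/s
        echo "54 / 8" | bc -l    # a 54Mbit/s Blu-ray stream needs ~6.75MB/s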
  18. Perhaps the important thing to know is that VGA sends analog signals, whereas HDMI & DVI-D are digital and can be interconnected without a device converting the signal in between. This is why you can't send VGA through HDMI without first converting the analog signal to digital. If your computer has a DVI connector as well, or you have a cheap video card lying around, you could connect that to HDMI through a small adapter. From your description, it seems the TV cannot understand the display resolution and refresh mode sent by the video card directly, and you can't do much about that other than fiddling with the TV settings to see if you missed anything. Is the video card built into the main board or is it an expansion card?
  19. I am in the process of upgrading some disks in my tower, which has involved several parity checks / rebuilds. I don't normally check SMART reports, but out of boredom this morning I did, and on a couple of 1.5TB drives (SAMSUNG HD154UI) I noticed the value of Hardware_ECC_Recovered increasing rapidly during the drive upgrade process. Here's what I've done during the last few days. I'm running 4.7 Pro.

      1. Replaced the motherboard of the server with an HP MicroServer board but kept the MV8 controller that I had on the old Celeron board.
      2. Did a parity check with 0 errors. All 8 drives run from the MV8 card. Only the cache drive runs from a spare SATA port on the motherboard.
      3. Moved data from 2 old 1TB Samsung drives and removed them from the array. Rebuilt parity with 0 errors.
      4. Another parity check to verify the rebuilt parity, again 0 errors.
      5. Replaced the 1st 1TB Hitachi drive with a new (pre-cleared) 2TB WD EARS drive, starting the upgrade process.
      6. Decided to run some SMART reports to check the drives for reallocated sectors and such. This is when I noticed the 2 x SAMSUNG HD154UI reporting a high Hardware_ECC_Recovered, which is increasing rapidly as the upgrade process continues.

      I don't have an old report to compare the figures with data from before the upgrade. The other drives in the array don't report this parameter in the SMART report. The drive upgrade process is going to last at least another 4 hours. Should I put back the old drive that I'm currently upgrading and run some tests, or can I safely let it finish?

      ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
        1 Raw_Read_Error_Rate     0x000f 100   100   051    Pre-fail Always  -           0
        3 Spin_Up_Time            0x0007 071   071   011    Pre-fail Always  -           9510
        4 Start_Stop_Count        0x0032 099   099   000    Old_age  Always  -           1003
        5 Reallocated_Sector_Ct   0x0033 100   100   010    Pre-fail Always  -           0
        7 Seek_Error_Rate         0x000f 100   100   051    Pre-fail Always  -           0
        8 Seek_Time_Performance   0x0025 100   100   015    Pre-fail Offline -           0
        9 Power_On_Hours          0x0032 098   098   000    Old_age  Always  -           10618
       10 Spin_Retry_Count        0x0033 100   100   051    Pre-fail Always  -           0
       11 Calibration_Retry_Count 0x0012 100   100   000    Old_age  Always  -           0
       12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           69
       13 Read_Soft_Error_Rate    0x000e 100   100   000    Old_age  Always  -           0
      183 Runtime_Bad_Block       0x0032 100   100   000    Old_age  Always  -           0
      184 End-to-End_Error        0x0033 100   100   000    Pre-fail Always  -           0
      187 Reported_Uncorrect      0x0032 100   100   000    Old_age  Always  -           0
      188 Command_Timeout         0x0032 100   100   000    Old_age  Always  -           0
      190 Airflow_Temperature_Cel 0x0022 085   073   000    Old_age  Always  -           15 (Lifetime Min/Max 10/15)
      194 Temperature_Celsius     0x0022 084   066   000    Old_age  Always  -           16 (Lifetime Min/Max 10/17)
      195 Hardware_ECC_Recovered  0x001a 100   100   000    Old_age  Always  -           165567083
      196 Reallocated_Event_Count 0x0032 100   100   000    Old_age  Always  -           0
      197 Current_Pending_Sector  0x0012 100   100   000    Old_age  Always  -           0
      198 Offline_Uncorrectable   0x0030 100   100   000    Old_age  Offline -           0
      199 UDMA_CRC_Error_Count    0x003e 100   100   000    Old_age  Always  -           0
      200 Multi_Zone_Error_Rate   0x000a 100   100   000    Old_age  Always  -           0
      201 Soft_Read_Error_Rate    0x000a 100   100   000    Old_age  Always  -           0

      ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
        1 Raw_Read_Error_Rate     0x000f 100   100   051    Pre-fail Always  -           0
        3 Spin_Up_Time            0x0007 070   070   011    Pre-fail Always  -           9710
        4 Start_Stop_Count        0x0032 099   099   000    Old_age  Always  -           1112
        5 Reallocated_Sector_Ct   0x0033 100   100   010    Pre-fail Always  -           0
        7 Seek_Error_Rate         0x000f 100   100   051    Pre-fail Always  -           0
        8 Seek_Time_Performance   0x0025 100   100   015    Pre-fail Offline -           0
        9 Power_On_Hours          0x0032 098   098   000    Old_age  Always  -           10636
       10 Spin_Retry_Count        0x0033 100   100   051    Pre-fail Always  -           0
       11 Calibration_Retry_Count 0x0012 100   100   000    Old_age  Always  -           0
       12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           70
       13 Read_Soft_Error_Rate    0x000e 100   100   000    Old_age  Always  -           0
      183 Runtime_Bad_Block       0x0032 100   100   000    Old_age  Always  -           0
      184 End-to-End_Error        0x0033 100   100   000    Pre-fail Always  -           0
      187 Reported_Uncorrect      0x0032 100   100   000    Old_age  Always  -           0
      188 Command_Timeout         0x0032 100   100   000    Old_age  Always  -           0
      190 Airflow_Temperature_Cel 0x0022 084   066   000    Old_age  Always  -           16 (Lifetime Min/Max 11/16)
      194 Temperature_Celsius     0x0022 083   060   000    Old_age  Always  -           17 (Lifetime Min/Max 11/17)
      195 Hardware_ECC_Recovered  0x001a 100   100   000    Old_age  Always  -           109349427
      196 Reallocated_Event_Count 0x0032 100   100   000    Old_age  Always  -           0
      197 Current_Pending_Sector  0x0012 100   100   000    Old_age  Always  -           0
      198 Offline_Uncorrectable   0x0030 100   100   000    Old_age  Offline -           0
      199 UDMA_CRC_Error_Count    0x003e 100   100   000    Old_age  Always  -           0
      200 Multi_Zone_Error_Rate   0x000a 100   100   000    Old_age  Always  -           0
      201 Soft_Read_Error_Rate    0x000a 100   100   000    Old_age  Always  -           0

      Full reports and syslog attached. Any thoughts are greatly appreciated. Thanks!

      smart_report_hdd_1.txt
      smart_report_hdd_2.txt
      syslog-2013-01-09.txt
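     For anyone wanting to pull the same numbers, the tables above are standard smartctl attribute output; a sketch of how to regenerate a full report (on the MV8 you may need a -d option depending on how the drive is presented):

        # full SMART report for one drive, saved to the flash drive
        smartctl -a /dev/sdb > /boot/smart_report_hdd_1.txt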
  20. The decision came down to reducing costs. I had a spare MicroServer left after a client upgraded to something more powerful, and I have another 2 running in the loft with other tasks. Overall they are neat little machines and it was hard for me to take one apart. Since my unRAID tower was due for an energy-efficiency upgrade, my options were to either buy a new motherboard or squeeze everything inside a MicroServer and upgrade to larger drives, both of which involved new hardware and more money, which would have defeated the purpose. The thing I love most about unRAID is the flexibility it offers and that it works with pretty much anything. Apart from the MV8 controller, some drives and a couple of fans, most of what it runs on is recycled hardware. Indeed, I wanted to preserve the components as much as possible. Apart from a 4mm hole in the mounting tray, everything is pretty much intact.
  21. I'm not sure if any of you have done this already, but today I've super-sized one of my MicroServers into a full-blown, energy-efficient 14-drive unRAID tower. I've used the MicroServer mainly as a backup for important stuff, while the main tower used an old Celeron board which, idling at 180W, wasn't very energy efficient at all. Having a surplus MicroServer, I decided to take the mainboard and some internals from the MicroServer and upgrade the old tower. The proprietary format of the mainboard made it a little difficult to fit, but after some measuring I decided to attach the mainboard's metal plate to the ATX tower and mount the mainboard in its original, secure location.

      To my surprise, this was not only an energy-efficiency upgrade but a performance one as well. Writes on the new mainboard have risen to 21MB/s from 16MB/s, while the cache drive now writes at over 65MB/s, up from about 50MB/s on the previous setup. The mainboard is fully compatible with the Supermicro MV8 controller; as you can see in the pictures, it runs with 9 disks and has room for 5 more. Although you can use the HP mini-SAS to 4x SATA cable, it was a bit difficult to manage the power connectors, since they're joined with the data connector and terminated in Molex connectors. I've ordered a normal SAS to SATA splitter cable, hence only 1 of the onboard SATA ports is now in use.

      Migration was straightforward: I only had to reassign the drives and run a parity check. The system now idles at 57W with 9 drives spun down, although the PSU is a bit dodgy (unknown brand) and sucks 8W when the system is powered down!!?! A further 8W is taken by the MV8 controller alone, while the 9 drives seem to drain about 9-10W when spun down. I run a variety of 2TB, 1.5TB and 1TB, 5400 & 7200rpm drives. The BIOS complained about the missing original 4-pin fan and kept shutting down the system, so I installed and connected it as well. I know it's not as pretty as the original, but when 6 drives won't cut it anymore, there is no need to invest in additional hardware other than a new case.

      Had to drill the old case and mount 3 mounting spacers
      I used 2 of the existing holes on the plate and drilled a third one for stability
      Mainboard sits comfortably and secure with 9 drives connected
  22. I've ordered mine from a company called Lambdatek in the UK. It took about 10 days to get it from the States, but they ship outside of the UK too; you'll just have to add another 7-10 days to that. Well worth the wait if you ask me. http://www.lambda-tek.com/componentshop/index.pl?searchString=AOC-SASLP-MV8&go=go
  23. To answer (4): I set up Dropbox as a sync mechanism since it's multi-platform and it transfers files through the local network. You still need all devices to have access to the internet account. Not ideal, but it works great, and the Thumbnails folder is always in sync on all 3 computers (Live, Win 7 and OS X). I really hope this will change in the future with something simpler. My problem at the moment is that I used the cache drive for the MySQL databases, and it worked excellently until the first reboot, when all the databases appeared empty. I'm not sure if this has something to do with using the cache disk. I used the unMenu interface and set up the script to auto-install after reboot.
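     A hypothetical first check for the empty-database problem, assuming the config lives at /etc/my.cnf and the data directory was pointed at the cache drive (both paths are guesses, not verified):

        # confirm where MySQL thinks its databases live
        grep -i datadir /etc/my.cnf
        # and check whether that directory actually survived the reboot
        ls -l /mnt/cache/mysql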
  24. Now this is a truly massive build! Impressive. I wonder how much it weighs ... I noticed you have 8 drives on the PCI bus; doesn't that slow down the parity checks? Edit: Never mind that. I read it now under dislikes...
  25. I used to have a Chieftec case like this a few years ago. I was wondering at the time what one could possibly install in such a case ... now the answer is obvious Regarding your speed problem, it looks like your controller is PCI-E 1x, which in theory has 250MB/s of bandwidth. If you are pre-clearing all 4 drives simultaneously, that gives you about 62.5MB/s per disk including all overheads, so I'm guessing this could be a possible bottleneck.
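     The back-of-the-envelope maths, for what it's worth (assuming the theoretical 250MB/s figure; real-world PCI-E 1x throughput is lower):

        # 250MB/s shared across 4 simultaneous pre-clears
        echo $(( 250 / 4 ))    # ~62MB/s per disk before overheads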