maxinc

Members
  • Posts: 36
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed

maxinc's Achievements

Noob (1/14)

0 Reputation

  1. Hi all and happy new 2015! Just wanted to check in and report a very similar behaviour with the same model of drives (WD20EARS) on unRAID 6.0-beta12.

     System: ASUSTeK COMPUTER INC. - H87M-PLUS
     CPU: Intel® Core™ i7-4770S CPU @ 3.10GHz
     Cache: 256 kB, 1024 kB, 8192 kB
     Memory: 8192 MB (max. 32 GB)
     Network: eth0: 1000Mb/s - Full Duplex
     Kernel: Linux 3.17.4-unRAID x86_64

     Parity: WDC_WD20EARX-00PASB0_WD-WCAZAJ839165 (sdb)
     Disk 1: WDC_WD20EARS-00MVWB0_WD-WCAZA4635215 (sdf)
     Disk 2: WDC_WD20EARS-00MVWB0_WD-WCAZA4635377 (sdg) - Frequently
     Disk 3: WDC_WD20EARS-00MVWB0_WD-WCAZA4578960 (sdh) - Occasionally
     Disk 4: WDC_WD20EARS-00MVWB0_WD-WCAZA4629088 (sdi) - Frequently
     Disk 7: SAMSUNG_HD154UI_S1XWJD2ZB03874 (sdd)
     Disk 8: SAMSUNG_HD154UI_S1XWJ1LZ400560 (sde)
     Cache: SanDisk_SDSSDXP240G_142517400522 (sdc)

     I'm only running cache_dirs and Plex Media Server inside a Docker container. Running cache_dirs alone, with PMS turned off, doesn't seem to trigger the problem. With PMS on, the drives stay off until I access a certain movie. After I finish watching, I can see the spin-down commands for all spinning drives in the system log. They momentarily appear to go off, but then they come back on and stay on indefinitely. Spinning them down manually - either individually or through the array spin-down command - spins them down completely and keeps them off. I tried changing the delay from 15 min to 30 min, but the same thing happens. I used to think only a couple of drives were affected (disks 2 and 4), but I'm now noticing it on disk 3 too. I've never seen it on the Samsung drives, though.

     Hope this helps identify the root of the problem. Happy to run any kind of tests if it helps (a small drive-state polling sketch follows this list); my Linux skills are not much more than a few basic shell commands. Unfortunately for me, going back to v5 is no longer an option since I joined two physical machines into one and love the new Dockers.

     Best, Andy
  2. You probably missed the word "little", but I'm glad we agree.
  3. I played with ESXi on the N36L briefly, but because (1) you cannot pass through a hardware controller to a VM and (2) the CPU is incredibly weak for any practical application, it makes little sense to run ESXi on a microserver other than for testing and experimenting.
  4. I've installed a new, more efficient PSU today and taken some measurements on the new tower. Needless to say, I'm impressed! Best of all, the N36L board doesn't seem to suffer from high CPU load during writes to cached user shares like the Atom boards seem to do. Although I don't have a spare SSD to test at full Gbit speeds, the CPU load while writing to the cache drive (a 300GB 7200rpm drive) alone at ~75-80MB/s was about 35-40%, while writing to a cached user share added a small overhead of about 10%, with room to spare. Bearing in mind this is a 1.3GHz CPU, I would say this is a great low-power board for unRAID.

     And for the stats: N36L Microserver board with 4GB RAM, Supermicro MV8 controller, 4 x 12cm fans powered @ 7V, 7 drives - 4 x 2TB WD EARS, 2 x 1.5TB Samsung Green, 1 x 300GB Hitachi.

     Tagon, dual 12V rails @ 20 amps / rail, 480W PSU:
       Power off - 7.5W
       Boot peak - 152W
       Boot - 102W
       Idle, all drives spun up - 92W
       Idle, 1 drive spun up - 61W
       Idle, all drives spun down - 56W

     Antec Neo Eco, single rail @ 30 amps, 400W PSU:
       Power off - 1.5W
       Boot peak - 122W
       Boot - 82W
       Idle, all drives spun up - 74W
       Idle, 1 drive spun up - 52W
       Idle, all drives spun down - 43W
  5. I ended up doing the Trust Parity procedure which validated it immediately. A parity check and 7 hours later, everything is OK
  6. With an array this large, I would be reluctant to upgrade, especially if I didn't particularly need any of the features in v5. I would try it on a test machine first to familiarise myself with the new features and concepts before trying it on a huge array. But that's just me getting anxious whenever terabytes of data are involved.
  7. After finishing some long drive-consolidation procedures (replacing lots of smaller drives with fewer larger ones), I've come to a point where I would like to arrange the disks in the array to better correspond to their physical location in the tower. I have tried following the instructions on the wiki, but upon reassignment to different slots I'm getting the following screen: http://imageshack.us/photo/my-images/16/screenshot20130110at095.png/ I'm inclined at this stage to follow the Trust My Array procedure described here, but reading the warnings, it mentions that no drives should be marked as disabled or missing. Maybe I'm misreading this, and I thought I'd better ask first whether it is safe to ignore the disks marked as missing as long as they are reassigned to different slots? Thanks!
  8. What was the average speed during the file transfer? As suggested, XBMC is probably the best media player companion for unRAID, so it's worth a try to narrow down the issue, which at this point could be a combination of network performance and hardware and software capabilities. If you are running other services on the unRAID box, such as sab or sickbeard, I would expect a performance drop, especially if simultaneous read/write operations occur on the same disk, even with a more powerful CPU. My new unRAID build, based on a microserver with a 1.3GHz CPU, is working miracles and streaming 1080p to several clients simultaneously, so I know the CPU alone is hardly a limiting factor with unRAID.
  9. I don't see why it wouldn't work. When transferring data between disks, I would copy it rather than move it; that way I would still have a copy if the rebuild fails for any reason. Also, it is best not to use the server during the rebuild (for watching movies or copying data, say) to minimise stress on the busy disks. For me it is always a time of great anxiety when such operations take place.
  10. Thanks Joe, that's reassuring. I think I now have a better understanding of how to read SMART reports.
  11. As suspected, the 195 Hardware_ECC_Recovered values have reset to "normal" values (a few thousand) after a power down / reboot. I guess they can safely be ignored for the time being ... unless someone has a different theory.
  12. The rebuild process moved past the 1.5TB mark and the two Samsung drives stopped incrementing Hardware_ECC_Recovered, which has now reached 483,000,419 and 355,541,041. At this point I'm convinced this has happened during this rebuild alone, and I can't wait for it to finish so that I can power down and reboot the server. It may be that the values will reset to 0 and that this is some kind of internal counter similar to what Seagate drives seem to use (a small sketch for tracking the attribute follows this list).
  13. I could be wrong since I never did this on v5, but on v4.7 I was able to access the web interface at all times during a rebuild, and it took mine about 5-6 hours for a 2TB drive, so I would be inclined to suspect something is wrong at this stage. Can you connect a monitor and see if you can access the console?
  14. Basically yes, it will create a single pool of storage equal to the combined size of all data drives (excluding the parity drive), to which you can assign one or multiple user shares as you see fit. That is only true during routine parity checks / rebuilds (once a month, or when adding / removing drives), where all drives need to be read simultaneously for the computation. Since files are not split among drives, the read and write speeds are determined by the drive where the data being read / written sits, although writes are much slower than reads since parity has to be written to the parity drive too. The parity drive needs to be equal to or larger than the largest data drive in the array: for 1TB + 1TB + 2TB + 3TB data drives you need at least a 3TB parity drive (but it can be larger, such as 4TB). If you add a 4TB data drive to the array, you need to change the parity drive to at least a 4TB drive (the quick arithmetic is sketched after this list).
  15. You could check if you need to enable Network Discovery in Windows 8 ... http://windows.microsoft.com/en-US/windows-vista/Enable-or-disable-network-discovery
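
Below is a minimal Python sketch related to the spin-down issue described in post 1. It simply polls each drive's power state with `hdparm -C` so you can log exactly when a disk wakes back up after unRAID issues its spin-down command. The device list, polling interval, and the assumption that it runs as root on the server are mine, not something stated in the original post.

```python
# Hypothetical drive-state logger (run as root on the unRAID box).
# Adjust DRIVES to the devices you actually want to watch.
import subprocess
import time

DRIVES = ["/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi"]

def drive_state(dev):
    """Return the power state hdparm -C reports, e.g. 'active/idle' or 'standby'."""
    out = subprocess.run(["hdparm", "-C", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "drive state is" in line:
            return line.split(":", 1)[1].strip()
    return "unknown"

if __name__ == "__main__":
    while True:
        states = {dev: drive_state(dev) for dev in DRIVES}
        print(time.strftime("%Y-%m-%d %H:%M:%S"), states)
        time.sleep(60)  # one sample per minute is enough to catch a wake-up
```

Left running over an evening of Plex playback, the log should show whether the drives really return to standby after the spin-down commands appear in syslog, and roughly when they wake again.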
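For the Hardware_ECC_Recovered discussion in posts 11 and 12, here is a small sketch that reads the raw value of SMART attribute 195 with smartctl, assuming smartmontools is installed. The device names are placeholders for the two Samsung drives, and since raw-value formatting varies between vendors, the value is returned as text rather than parsed.

```python
# Hypothetical snapshot of SMART attribute 195 (Hardware_ECC_Recovered).
# Requires smartmontools; run as root so smartctl can query the drives.
import subprocess

def ecc_recovered(dev):
    """Return the RAW_VALUE column of attribute 195, or None if not reported."""
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0] == "195":
            return fields[-1]  # raw value is the last column of the attribute table
    return None

for dev in ("/dev/sdd", "/dev/sde"):  # placeholder names for the Samsung drives
    print(dev, ecc_recovered(dev))
```

Running it before and after the rebuild, and again after a power cycle, would confirm whether the counter really resets as observed in post 11.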
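The parity sizing rule in post 14 is simple arithmetic; the short sketch below just restates it with the example sizes from that post (usable pool = sum of the data drives, minimum parity = size of the largest data drive).

```python
# Worked example of the sizing rule from post 14 (sizes in TB).
data_drives_tb = [1, 1, 2, 3]

pool_size = sum(data_drives_tb)   # 7 TB of usable storage
min_parity = max(data_drives_tb)  # parity must be at least 3 TB

print(f"Usable pool: {pool_size} TB, minimum parity drive: {min_parity} TB")

# Adding a 4 TB data drive raises the parity requirement to 4 TB:
data_drives_tb.append(4)
print(f"New usable pool: {sum(data_drives_tb)} TB, "
      f"new minimum parity drive: {max(data_drives_tb)} TB")
```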