Fireball3

Members · 1355 posts · 1 day won
Everything posted by Fireball3

  1. As of now, login is possible but the database is locked for maintenance.
  2. Welcome to the unRAID forums qwerki. First of all, tell us what you want your card to do. Shall it end up in IT or in IR mode?
  3. Editing the wiki is once again not possible! Neither when coming from the limetech site, nor from the forum, nor when using the "log in" button in the wiki itself. I'm using Firefox and IE, and neither works.
  4. Yes, on unRAID 5 you can rearrange as you like.
  5. Under the bed... At our home I'm the one who does the vacuum-cleaning. If you've ever had a look under your bed, you certainly wouldn't want a computer down there unless you have some really good air filters or clean the machine very regularly as well.
  6. What about the spin-down button on the unRAID GUI? I suppose if you issue that command you should be able to see/hear what is going on, since all drives should stop spinning. If you can trigger that from the GUI, I expect unRAID will be able to do it automatically as well. Edit: The only downside of these cards (PERC 5/i, 6/i) is that they do not support drives larger than 2.2 TB. http://www.overclock.net/t/359025/perc-5-i-raid-card-tips-and-benchmarks
  7. Quoting myself: Alternatively: Search the forum for "9211-8i" to find users that own the card and then PM them.
  8. Fine! Are temps and SMART, as well as spin-down, working too?
  9. I suppose your unRAID has booted up, or it would not notice an unclean shutdown? In that case, I recommend you install the clean powerdown script (see my sig). Then it is sufficient to hit the power button and wait for your server to go down. Perhaps you could investigate what is not "coming up" in those rare cases? How to disable the automatic parity check?
  10. Welcome to the unRAID forums. Do you mean the PERC 6/i? I don't know whether this card works with unRAID - do you have it running? It looks like a h/w RAID adapter and I'm not sure if you can "just" pass through the drives. I have the PERC H310 flashed to IT mode and the spin-down works fine. Here is a list of HBAs known to work with unRAID.
  11. Once the Google robot has been here, this "news" will lead many people to the unRAID forums! Nice marketing trick, Seagate!
  12. If it is genuine LSI and is obviously defective (port A not working), why not RMA it? You did nothing illegal when trying to flash LSI-supported firmware. At least they (LSI) should be able to send you a stock ROM to solve your problem - although I doubt you will be able to solve potential hardware issues like that.
  13. Thanks for sharing this! Seagate is once more ruining their reputation: shortened warranty periods for consumer drives --> negative; delivering drives with useless APM settings (ST1000DM003) --> negative; poor drive quality (personal opinion) --> negative; now this bull*$&% --> negative; more to come... Unfortunately they're one of the very few players left in this business.
  14. Thanks for sharing, added the card to the wiki. Just for confirmation on this specific card: does it support drives > 2.2 TB? Check this thread - a card with the same chipset and obviously some issues with big drives. Please contribute to deliver accurate information to the wiki.
  15. I remember having read about similar issues somewhere in this forum - keep searching. It is not caused by the script - it's a general issue with S3 on your configuration.
  16. @Mr_Gamecase I noticed you're running the Highpoint 1740 cards. They are 32-bit PCI cards with 4 SATA II ports. How are your parity check speeds? There must be a bottleneck, or not? You even have 4 of them running on your PCI bus... I would like to add the information to the wiki, so please be accurate.
  17. In order to confirm that the controller can handle big drives (>2.2 TB), perhaps you can test it in a Windows environment? If it works in Windows, there might be a driver-related issue in unRAID. It would be nice if you could work that out. I've added the card to our wiki and definitive information about the status of this card would be nice. Thanks for contributing.
  18. I tend to buy Hitachis only at the moment. I experienced issues with WD (green) drives as well as Seagates. For thermal reasons I'm mostly using (green) drives, and the WDs were definitely disappointing. One died while preclearing, others lost data (not in unRAID). --> impressed negatively, so I somehow have a bad feeling about WD. I have some Seagates in my pool also and many of them have bad sectors (the platters seem to deteriorate). Got some used 4TB Hitachis and they seem fine, considering the operating hours they have. No reallocated sectors, quiet and cool. http://lime-technology.com/forum/index.php?topic=32564.msg300499#msg300499 This reflects my personal feelings about WD and Seagate and is not scientifically proven. I'm trying to get drives as cheap as possible, therefore I don't care about warranty terms anymore. Consumer drive warranties have mostly changed to 1 or 2 years, and for my use case that's not a reasonable period. Drives with longer warranty periods are too expensive imho.
  19. Having a short look at that board I see: 2 x Mini-SAS (for 8 x SAS 6Gb/s ports), 1 x Mini-SAS (for 4 x SATA II 3Gb/s ports), 2 x SATA III 6Gb/s ports. If you're lucky enough to be able to use that onboard controller as an HBA and unRAID has an appropriate driver, you have already covered 14 drives. You have to examine that! ("...onboard LSI SAS") Install unRAID, get some forward breakout cables, plug in the drives and see if they're available in unRAID. If you wanna save money, then you're probably good with the 2x M1015 (or similar builds). Around here the M1015 is about 50-70€ on eBay. Expanders aren't that popular and you barely find them on eBay. Nice board btw.!
  20. Expander, OK, but: 1. While I'm not sure how the expander works - what do the experts here say with regard to the bandwidth provided by a single PCIe x8 slot connecting 24 drives? Isn't it a bit of a bottleneck? (see the rough numbers sketched after this list) 2. I suppose you will use a server-grade motherboard. If you manage to get one with 8 SATA ports and add 2 M1015 (or other 8x SAS adapters), you also have 24 ports. Depends on your enclosure of course.
  21. Quoting from here: http://lime-technology.com/wiki/index.php/Un-Official_UnRAID_Manual Your steps look correct. The "copy" button will only show up if you have a red-balled drive. That is why you have to do steps 1 + 2 (as you described). It should work, but you can consult limetech and have them confirm it as well.
  22. If you don't need the other features of the dynamix plugin you can also go with this standalone version of a S3/powerdown script. http://lime-technology.com/forum/index.php?topic=3657.msg313351#msg313351
  23. Hey guys, I've been reading through your posts and it seems this kind of error is an example of why a correcting parity check is not always recommended - although this error seems to be very rare. Now I'm determined to find the drive that is causing/delivering the swapping bit. Based on your posted snippets, I put together a script that does the md5 checksums for my drives (a sketch of the idea follows after this list). The only thing I need to know now is: where on the drives do I have to search? Of course, since the error pops up very soon after I press the button, it has to be from 0 to ... but I would like to understand what I'm doing here. While reading up on sectors and blocks I ended up totally confused.
      dd needs a block location. dd uses 512-byte blocks by default.
      block length x number of blocks = drive size
      Syslog:
      May 1 21:39:22 Tuerke kernel: scsi 4:0:0:0: Direct-Access ATA WDC WD30EZRX-00M 80.0 PQ: 0 ANSI: 5
      May 1 21:39:22 Tuerke kernel: sd 4:0:0:0: Attached scsi generic sg1 type 0
      May 1 21:39:22 Tuerke kernel: sd 4:0:0:0: [sdb] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB)
      May 1 21:39:22 Tuerke kernel: sd 4:0:0:0: [sdb] 4096-byte physical blocks
      This is a 3 TB drive. According to the equation I get: 5860533168 blocks x 512 bytes = 3000592982016 bytes, addressed by logical block address (LBA). OK so far - understood = easy!
      Now back to my error:
      May 1 21:40:12 Tuerke kernel: mdcmd (51): check CORRECT
      May 1 21:40:12 Tuerke kernel: md: recovery thread woken up ...
      May 1 21:40:12 Tuerke kernel: md: recovery thread checking parity...
      May 1 21:40:12 Tuerke kernel: md: using 2560k window, over a total of 3907018532 blocks.
      May 1 21:40:14 Tuerke kernel: md: correcting parity, sector=65680
      Wait, these 3907018532 blocks are only 2 TB. The parity disk is 4 TB - shouldn't it be somewhere around 7814037064 blocks? The next line refers to sector=65680 (of the parity disk, I suppose). As I read, the term "sector" comes from CHS addressing. Is sector 65680 the same as LBA 65680? That would put it at byte position 33628160 on the disk.
      And finally, on top of all this comes the file system block size - very nice - more confusion. While most drives physically have 512-byte blocks (there are newer drives with real physical 4k blocks), the file system can define its own block size. That means we get a different number of blocks for the same drive. How does this interfere with my task?
      May 1 21:39:34 Tuerke kernel: REISERFS (device md1): journal params: device md1, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
      Size 8192 - is this supposed to be the block size?
      May 1 21:40:12 Tuerke kernel: md: using 2560k window, over a total of 3907018532 blocks.
      What block size corresponds to this number of blocks? I'm very keen on seeing your answers. I'm lost.
      And finally:
      May 1 21:39:33 Tuerke kernel: unraid: allocating 71544K for 1536 stripes (11 disks)
      May 1 21:39:33 Tuerke kernel: md1: running, size: 245117344 blocks
      May 1 21:39:33 Tuerke kernel: md2: running, size: 488386552 blocks
      May 1 21:39:33 Tuerke kernel: md3: running, size: 1465138552 blocks
      May 1 21:39:33 Tuerke kernel: md4: running, size: 1465138552 blocks
      May 1 21:39:33 Tuerke kernel: md5: running, size: 1465138552 blocks
      May 1 21:39:33 Tuerke kernel: md6: running, size: 390711352 blocks
      May 1 21:39:33 Tuerke kernel: md7: running, size: 2930266532 blocks
      May 1 21:39:33 Tuerke kernel: md8: running, size: 2930266532 blocks
      May 1 21:39:33 Tuerke kernel: md9: running, size: 3907018532 blocks
      May 1 21:39:33 Tuerke kernel: md10: running, size: 245117344 blocks
      Man, I can tell you that I'm really lost!
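
Regarding the bandwidth question in post 20: here is a rough back-of-the-envelope sketch, not a measurement. The PCIe generation (2.0 at roughly 500 MB/s usable per lane), the per-drive sequential speed, and the assumption that all 24 drives stream at once are all illustrative figures, and the expander's own SAS uplink may well be a tighter constraint than the slot.

    # Back-of-the-envelope estimate for post 20, point 1 - all figures are assumptions.
    PCIE2_LANE_MB_S = 500   # assumed usable bandwidth per PCIe 2.0 lane
    LANES = 8               # x8 slot feeding the HBA/expander
    DRIVES = 24             # drives behind the expander
    HDD_SEQ_MB_S = 150      # assumed sequential speed of a typical HDD

    slot_bw = PCIE2_LANE_MB_S * LANES     # ~4000 MB/s through the slot
    per_drive = slot_bw / DRIVES          # share per drive if all stream simultaneously

    print(f"Slot bandwidth: {slot_bw} MB/s, per-drive share: {per_drive:.0f} MB/s")
    print("PCIe slot limits the drives" if per_drive < HDD_SEQ_MB_S else "PCIe slot is not the limiting factor")

With these assumed numbers each drive would still get roughly 167 MB/s, i.e. the slot is not automatically the bottleneck during a parity check; measured results may differ.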
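
Post 23 asks how to turn an LBA/sector number into a byte offset and then checksum the same region on every drive to spot the one delivering the changing data. Below is a minimal sketch of that idea, not Fireball3's actual script: the device names, the 512-byte logical block size, the sector count, and the assumption that the md "sector" maps one-to-one onto the raw-device LBA (which is exactly the open question in the post, since partition offsets may shift it) are all illustrative.

    #!/usr/bin/env python3
    # Minimal sketch (not the original script): hash the same LBA range on several
    # drives so that repeated runs reveal which disk returns changing data.
    # All names and numbers below are example assumptions - adjust to your array.
    import hashlib

    DRIVES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]  # example devices
    LOGICAL_BLOCK = 512                            # bytes per LBA, per the syslog above
    START_SECTOR = 65680                           # sector reported by "md: correcting parity"
    NUM_SECTORS = 1024                             # how many sectors to hash around that spot

    offset = START_SECTOR * LOGICAL_BLOCK          # e.g. 65680 * 512 = 33628160 bytes
    length = NUM_SECTORS * LOGICAL_BLOCK

    for dev in DRIVES:
        md5 = hashlib.md5()
        with open(dev, "rb") as f:                 # reading raw devices requires root
            f.seek(offset)
            md5.update(f.read(length))
        print(f"{dev}: md5 of {length} bytes at offset {offset}: {md5.hexdigest()}")

Run it a couple of times (for example before and after a parity check) and compare the output; the drive whose hash changes between runs is the one to suspect.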