TheStapler

Members
  • Content Count

    67
  • Joined

  • Last visited

Community Reputation

0 Neutral

About TheStapler

  • Rank
    Newbie

Converted

  • Gender
    Undisclosed

  1. I have 2 LSI SAS9201-16i 16 port cards that work awesome!
  2. I have 2x LSI SAS9201-16i JBOD cards. They are awesome, and work flawlessly. Cards look like brand new. I am located in Ontario Canada, or Detroit MI. Looking for $150 USD shipped each within Canada or the USA. PM for more information.
  3. I have 2x LSI-SAS9201-16i cards I am looking to sell... PM me if you're interested...
  4. Well... got my "new" server today. This card says it supports JBOD, but I can't get unRAID to see it. When it first booted up, it didn't see my 3TB as a 3TB, but as a 2TB, so I reflashed the BIOS (that was fun to figure out), and it shows up as 2.7TB now... but only in the controller config. I can't set it to JBOD; maybe if I was to "initialize" the drive I could... but I didn't want to do that just yet. Looks like this card isn't going to work... oh well, it was worth a shot! Good news: the mobo has 6x SATA 3 Gb/s ports and 3x PCIe 2.0 x16 (at x16/x16/x1 or x16/x8/x8 mode)
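A quick sanity check on that "2.7TB" reading: drives are marketed in decimal terabytes, while controller firmware usually reports binary tebibytes, so a 3 TB drive legitimately shows as about 2.73 TiB. A minimal sketch of the conversion (the byte count here is the drive's marketed capacity, not anything read from the card):

```shell
# Convert a 3 TB (decimal) drive's capacity to the binary TiB units
# most controller firmware displays. ~2.73 TiB means the full drive
# is addressable; ~2.2 would indicate the old 32-bit LBA limit instead.
bytes=3000000000000
tib=$(awk -v b="$bytes" 'BEGIN { printf "%.2f", b / (1024 ^ 4) }')
echo "${tib} TiB"    # prints 2.73 TiB
```

So the 2.7TB figure suggests the reflash worked and the whole drive is visible; the remaining problem is the missing JBOD passthrough, not capacity.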
  5. A server my buddy just bought has this RAID/controller card in it, and I was wondering if anyone has ever used this card, or had any luck with it? It is a 20 port controller (16 internal, 4 external), so if it would work, that would be awesome! I can't even find anything in the documents about hard drive size limitations, but it does say that it supports JBOD in the docs, just not on the main page... As soon as he gets it, I will take a look and see if it will work... but until then, thought I would ask here. http://storage.microsemi.com/en-us/support/raid/sas_raid/sas-5164
  6. The power that feeds the backplane, I've never touched... I believe there is only 1 power connector, but I could be wrong... I will double check later. My SATA cables are not tie-wrapped, and are decent quality SFF-8087 fan out cables, and the 6 on the motherboard are brand new, and were pulled out of their bags the day the motherboard was installed. The fan out cables were brand new when I got the RAID cards as well, which was when I bought the mobo. I replaced all the fans in the case at the same time too (most were dying, so it was safer to replace them all at once). These 2 drive
  7. Oh, and I forgot, it is an older Supermicro case, with 24 hotswap bays, all SATA female connectors on the backplane. The case has 4x 500 watt redundant power supplies, running power supplies 1&3 and 2&4 off 2 APC 1300XL UPSes.
  8. If I had a syslog, I would have posted it... the server has rebooted since this issue happened... this is why I posted the 2 SMART reports. I have searched the forum, and I saw a post about it, but I didn't really understand the problem/result. I can add what my server is though: M/B: MSI - 970A-G46 (MS-7693) CPU: AMD FX(tm)-8350 Eight-Core @ 4000 HVM: Enabled IOMMU: Enabled Cache: 384 kB, 8192 kB, 8192 kB Memory: 16384 MB (max. installable capacity 32 GB) There are 2 Supermicro RAID/JBOD cards as well. Here is the output of the lspci: root@Tower:/mnt/u
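For a post like this, the useful part of `lspci` is usually just the storage controllers. A small sketch that filters them out; the sample lines below are illustrative stand-ins, not the actual output from this server:

```shell
# Keep only storage-related lines from lspci output.
# The sample text stands in for a real `lspci` run.
lspci_sample='00:11.0 SATA controller: Advanced Micro Devices [AMD] SB7x0/SB8x0/SB9x0 SATA Controller
01:00.0 RAID bus controller: Marvell Technology Group Ltd. Device 6480
02:00.0 VGA compatible controller: NVIDIA Corporation GT218'
controllers=$(echo "$lspci_sample" | grep -Ei 'sata|sas|raid|scsi')
echo "$controllers"
```

On a live box, `lspci | grep -Ei 'sata|sas|raid|scsi'` trims the listing to the lines people actually need to see.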
  9. I put a new drive in, and within a day, it has gone RED X... the only errors with it were the UDMA errors. If the drive is "good", why did it fail? Should I RMA this drive? I haven't been able to get an answer as to what these errors really are... some say it is a bad SATA cable, or just a communication error... but I don't know... this drive was put in place of a failed 2TB drive, which had some other errors on it as well as a tonne of UDMA errors. That drive is still connected to my server, so it wasn't exchanged with that port, and I have ensured that all the cables are connecte
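The "UDMA errors" here are almost certainly SMART attribute 199 (UDMA_CRC_Error_Count), which counts corrupted transfers on the SATA link; that points at the cable, backplane connector, or port rather than the platters, which is why a drive can test "good" and still drop out. A sketch of pulling that attribute out of smartctl-style output (the sample line is made up, not this drive's actual report):

```shell
# Extract SMART attribute 199 (UDMA_CRC_Error_Count) from
# `smartctl -A /dev/sdX` style output. The raw value is the last
# field; a rising count means link/cable trouble, not bad media.
smart_sample='199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 37'
crc=$(echo "$smart_sample" | awk '$1 == 199 { print $NF }')
echo "UDMA CRC errors: $crc"
```

One caveat: attribute 199 never resets, so what matters is whether the raw value keeps climbing after reseating or replacing the cable, not that it is non-zero.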
  10. Ok, not sure where to put this question, or if it has been asked/answered... (I did a search, didn't really find anything.) I have a CuBox-i4Pro at home, not really doing much... it is a cool little box, 2GB RAM, eSATA II 3Gbps, pretty decent spec... would be awesome if unRAID could run on it... as I also have an external eSATA 5 bay drive enclosure... with 6TB drives, I could have 24TB storage in a pretty small case! Granted, might not be ideal, and I am not sure if it could support 6TB drives, but even with 2TB drives and 8TB of storage, my parents would be happy. Any chances of
  11. it was more of a "they want this now", but down the road, they may want more... and with putting the least amount out now, and have a decently configurable 'upgrade path', is what I was looking for. looks like I have found a different path to go down, for not a lot more money... so... yeah.
  12. So are you saying that you're still better off buying 2 or 3 of those Supermicro RAID cards, and keeping it all in 1 case? Currently, my unRAID is 14 drives, my buddy has 21... the 4 bay ones still heat up? Thought the airflow was pretty decent; would it be better off having a higher CFM fan in the case? I dunno, having a big case seems off-putting for some people, and if you only need 10 drives, you don't need a case for 20... but if you still want 20 later, you can put 20 in... maybe use the Antec 300 case, that has six 5 1/4 bays, so space for 10 hotswap drives (even
  13. First off, not sure if this is the right spot for this question, but here goes anyways... I am looking at building another tower for my parents, and wondering if anyone has tried this, or knows if this would work... (basic setup) * case with a 5in3 drive tray/cage (easier than opening the case to replace a hard drive) * motherboard with 6 SATA onboard. Now, when I am looking to expand from the 2+1 drives, I have 2 more in the case to add on to... when those are full, I was looking at doing this: (addon/advanced) * SuperMicro AOC-SASLP-MV8 RAID card * SFF-8087 -> 4
  14. Ok, checkdsk didn't find anything... I actually had all drives shown on my second-to-last boot... then when I started the array (it said it wasn't started due to improper shutdown) it started... and then hung... so, reboot... 1 drive not started... I copied the dmesg and the syslog... here is the syslog: http://pastebin.com/JP9tMvtR Looks like I have 1, possibly 2 drives that are failing/failed, which would explain why sometimes I have all, or am missing up to 2, drives when I boot up... [some of] the errors are: Mar 5 18:50:08 Tower kernel: sd 1:0:3:0: [sde] Unhandled err
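One way to confirm the "1, possibly 2 drives" reading from a syslog like the one above is to count kernel disk errors per device. A sketch, assuming the usual `[sdX]` tags in the log lines; the sample entries in the here-doc are illustrative, not the real pastebin contents:

```shell
# Count kernel error lines per disk device in a syslog excerpt.
# The here-doc stands in for /var/log/syslog on the server.
per_disk=$(grep -oE '\[sd[a-z]+\]' <<'EOF' | sort | uniq -c | sort -rn
Mar  5 18:50:08 Tower kernel: sd 1:0:3:0: [sde] Unhandled error code
Mar  5 18:50:09 Tower kernel: sd 1:0:3:0: [sde] Result: hostbyte=DID_OK driverbyte=DRIVER_TIMEOUT
Mar  5 18:51:12 Tower kernel: sd 1:0:4:0: [sdf] Unhandled error code
EOF
)
echo "$per_disk"
```

Two devices showing repeated errors on the same controller can also mean a shared cable or backplane problem rather than two simultaneous drive failures, so it is worth checking which ports the noisy devices sit on.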