Everything posted by heffe2001

  1. I know my drives are fine (I ran a parity check a few days ago with no errors), so I removed and re-added the parity drive, and I'm letting it run another check.
  2. I was trying to pass a RAID card through to a VM (I had it listed in the append line), and when I started the array, even though that specific card was the only hardware checked (and listed), it also grabbed one of my LSI cards that has the parity drive and a few other drives connected to it. The parity drive was of course no longer accessible, so the system disabled it. After a reboot it still shows as disabled; how do I get it re-enabled? There's nothing WRONG with the drive; the controller was basically shunted to the VM when I started it, causing those drives to 'fail'.
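     One way to avoid a passthrough stub sweeping up more hardware than intended is to bind vfio-pci by PCI address rather than by vendor:device ID, so sibling controllers (or anything else sharing the same ID) stay on their host driver. A minimal sketch, assuming the card to pass through sits at 0000:07:00.0 and is currently bound to the hpsa driver; both values are placeholders, so check lspci -nnk for your own:

        modprobe vfio-pci                                                    # make sure the stub driver is loaded
        echo vfio-pci > /sys/bus/pci/devices/0000:07:00.0/driver_override    # pin this one device to vfio-pci
        echo 0000:07:00.0 > /sys/bus/pci/drivers/hpsa/unbind                 # release it from the host driver
        echo 0000:07:00.0 > /sys/bus/pci/drivers_probe                       # reprobe; driver_override wins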
  3. I was able to find the June 2013 SPP for the Gen8s; hopefully it'll be old enough to make the passthrough stuff work. If not, I know I can just pull the 420i out and drop in an LSI -16i card of some type, but if I can use the hardware I have, that'd be nice. (That VM will only use at most 4 of my 16 available slots on the system, so 2 of those front bays will probably be moved to an LSI card anyway.) I'll give it a try when I get home from work this evening and report back on any success/failure.
  4. Sorted out the errors above. I knew it was something to do with the power monitoring, but couldn't remember for the life of me what to do to fix it. Looks like the seller loaded the latest BIOS/firmware on this thing; the backup ROM is an older version, but newer than the suggested version in the other post (P70, 08/02/14, bootblock 03/05/2013). Do these handle downgrading the BIOS well? If so, I'll look for something in the '13s. I'll also try the backup ROM first, to verify it one way or another. In my other post on this machine, I was looking at passing the 420i card (in a PCI slot, not onboard) to a Server 2016 VM that I run our offsite backup server on for work, but was running into issues there. It appears that is also addressed here as well, lol.
  5. Has anybody successfully passed the P420i RAID controller through to a W2016 Server VM? IOMMU group info for the relevant devices (trying to pass through either of those HP Smart Array controllers, or both):

        IOMMU group 21: [103c:323b] 02:00.0 RAID bus controller: Hewlett-Packard Company Smart Array Gen8 Controllers (rev 01)
        IOMMU group 23: [1000:0087] 04:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
        IOMMU group 24: [103c:323b] 07:00.0 RAID bus controller: Hewlett-Packard Company Smart Array Gen8 Controllers (rev 01)
        IOMMU group 25: [1000:0064] 0a:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2116 PCI-Express Fusion-MPT SAS-2 [Meteor] (rev 02)

     Used this in the syslinux.cfg:

        label Unraid OS
          menu default
          kernel /bzimage
          append vfio-pci.ids=103c:323b initrd=/bzroot

     It showed the controllers in the setup for a Windows VM I created, but when I tried to start it I got a message that the device was in use. It also made the fan subsystem in that machine ramp up to 100% output. I'm most likely going to replace the P420i controller with another LSI board I have, but I'd like to use the onboard one passed through if possible. If I can't get it to work, I'll just disable the onboard and add yet ANOTHER LSI card, lol.
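     For anyone chasing the same 'in use' message, it helps to see exactly which devices landed in each IOMMU group and which driver currently owns each controller. A quick sketch using plain sysfs and lspci (nothing Unraid-specific assumed); note that vfio-pci.ids matches by vendor:device ID, so 103c:323b claims both Smart Array controllers at 02:00.0 and 07:00.0:

        # list every PCI device by IOMMU group
        for d in /sys/kernel/iommu_groups/*/devices/*; do
            g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
            printf 'IOMMU group %s: ' "$g"
            lspci -nns "${d##*/}"
        done | sort -V

        # confirm who owns a specific controller (look for 'Kernel driver in use:')
        lspci -nnk -s 02:00.0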
  6. Just moved my server over from a Cisco C200 M2 box to an HP DL380p G8 machine (MUCH quicker, and plenty of expansion room). I've noticed a bunch of ACPI errors in the logs; how have other people who have these boxes gotten rid of them, or do you just ignore them?

        [26353.628220] ACPI Error: Method parse/execution failed \_SB.PMI0._PMM, AE_AML_BUFFER_LIMIT (20180810/psparse-516)
        [26353.628227] ACPI Error: AE_AML_BUFFER_LIMIT, Evaluating _PMM (20180810/power_meter-338)
        [26353.956422] ACPI Error: SMBus/IPMI/GenericSerialBus write requires Buffer of length 66, found length 32 (20180810/exfield-393)
        [26353.956436] ACPI Error: Method parse/execution failed \_SB.PMI0._PMM, AE_AML_BUFFER_LIMIT (20180810/psparse-516)
        [26353.956451] ACPI Error: AE_AML_BUFFER_LIMIT, Evaluating _PMM (20180810/power_meter-338)
        [26354.574267] ACPI Error: SMBus/IPMI/GenericSerialBus write requires Buffer of length 66, found length 32 (20180810/exfield-393)
        [26354.574274] ACPI Error: Method parse/execution failed \_SB.PMI0._PMM, AE_AML_BUFFER_LIMIT (20180810/psparse-516)
        [26354.574281] ACPI Error: AE_AML_BUFFER_LIMIT, Evaluating _PMM (20180810/power_meter-338)
        [26364.690270] ACPI Error: SMBus/IPMI/GenericSerialBus write requires Buffer of length 66, found length 32 (20180810/exfield-393)
        [26364.690277] ACPI Error: Method parse/execution failed \_SB.PMI0._PMM, AE_AML_BUFFER_LIMIT (20180810/psparse-516)
        [26364.690284] ACPI Error: AE_AML_BUFFER_LIMIT, Evaluating _PMM (20180810/power_meter-338)
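     The _PMM / AE_AML_BUFFER_LIMIT spam comes from the kernel's ACPI power-meter driver polling the ProLiant's power readings, which lines up with the power-monitoring fix mentioned in the post above. A common workaround, sketched here on the assumption that you keep tweaks like this in Unraid's /boot/config/go startup script, is simply to unload that module:

        # stop the ACPI power-meter polling that triggers the _PMM errors
        modprobe -r acpi_power_meter 2>/dev/null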
  7. I'm seeing the same thing, but every other screen refresh they disappear under Unassigned Devices. This is on the latest RC.
  8. I think the 3108 is the same LSI card that came in my Cisco C200; if it is, it only supports drives up to around 2.2TB, nothing higher. I ended up pulling that off the board (it was an add-on module) and installed 2 other LSI cards, one for the internal slots and one for my external array. The 3108 IS supported if it is the same as mine, it just has limitations...
  9. This is the link to the case, without the hot-swap bays (it came with 2 non-removable 4-drive bays, which I replaced with the hotswap modules). I don't think I have the old static drive bays anymore, nor the blank covers for the upper 3 5.25" bays where the 5-in-3 is located. https://www.newegg.com/Product/Product.aspx?Item=N82E16811123131R&cm_re=sr10769-_-11-123-131R-_-Product The hotswap bays were around 60/each for the 4-bay ones. The 2-bay 2.5" is actually a Chenbro as well, which was around 50. The 5-in-3 is a Supermicro, which I believe was around 60-70. I've got around 400 in the case, but I'll let it go for much less to someone local-ish, or if you're not local, if you pay actual shipping (I'd need to weigh it to get a shipping quote if anybody is interested).
  10. I've used the Areca card in my main server for the past 3 years without any issues. Includes the battery and a cache module. It was running a spanned array of 2 4TB drives as a parity drive. I bought the battery back in 6/2015; it still works fine and holds a charge. The card wouldn't fit in my new system, so I replaced it with a low-profile LSI. Everything works fine on it: all 12 ports work, as does the RJ45 on the back for network access (it's currently set static, 192.168.0.235 I believe). I have the card listed on eBay with a 69.99+shipping buy-it-now, but would sell directly to anyone here for 60 shipped. I will also supply 3 8087-to-4-SATA breakout cables with it.

      I also have a WD5000HHTZ-04N21V0 10k RPM Raptor that I was using as my cache drive (replaced with a 500g SSD this weekend). It's still got 21 months of warranty left and has had no issues according to the SMART statistics (attached to this post). I don't have a use for it anymore, so I'm putting it up here as well. I'd like to get 50 shipped.

      I also have a Chenbro SR107 tower case with 2 4-bay hot-swap modules in the lower bays, plus a Supermicro 5-in-3 in the 5.25" bays and a Supermicro dual 2.5"-in-one-3.5" bay. I'd rather not ship this if at all possible, so maybe someone local/close to me would be interested? The case has been emptied and still has all the drive trays, along with most of the screws (I have a baggie of the screws that I used on this, but assume you'll need more to fully load the box). I was using the above card to drive 2 of the 4-bay modules and all but one slot on the 5-bay, and the mobo ports on the other 3. All fans work, and I have a new Corsair 120mm to put in the back fan mount. The case is a bit dusty, but I'll blow it out if anybody is interested (I'll get pictures of this case tonight). I'll even include an adapter from an onboard USB motherboard header to 2 USB ports, so you can mount your Unraid key internally (worked great on this system; my new box has that built into the mobo, so it's not needed anymore). Other than the drive cages and fans, this case is empty: no mobo, no power supply, no drives. I also have a mounting plate for using dual redundant power supplies in this case, but never got around to buying the supplies/module to install it. I don't really have a price in mind, so make an offer. I could ship it, but the buyer will have to pay the shipping (it won't be cheap; even without any drives or electronics in it, it's a big, heavy case).

      I also have an Areca ARC-1680IX-24 that is giving me a firmware error. If you know how to fix these, you'll have a nice 28-port card (24 internal, 4 external). Make an offer, but this one is being sold as-is. It does have the metal full-height bracket on it, just not in the picture. All this will be is the card, no cables included.

      media01-smart-20170626-1119.zip
  11. I ran a FX-8320 from about 9/2013 until last month without issues, running approximately 10 dockers. I DID have it set up water-cooled for about half that time though (not overclocked, just cooled at stock speeds). As far as the board, I was running an ASUS M5A97 R2.0 mobo, 32GB of RAM, plus an Areca 1280ML for ports and a no-name PCI video card.
  12. I'm running the latest RC on that machine, with the Azure skin loaded. I also have it set to show the write/read speeds on each drive. If I have the browser on any screen other than Main, it looks like the speed creeps up to about 125m/sec and stays there, but if I go back to Main, it starts bouncing around. Maybe something to do with the polling for the drive speeds? I'm going to leave the system be for tonight without a browser connection and see if the speed holds. Since I'm doing a parity rebuild, I do have turbo writes enabled. Other than a bunch of ACPI warnings being logged, it's been a fairly uneventful move to the new box, especially if the speed actually holds when I'm not monitoring it.
  13. Got all the hardware in today and moved everything over to the new setup. (Main server is now the Cisco C200 M2 box w/dual X5650 6-core Xeons, 48G RAM, and a 9207-4i4e controller in the 8x slot, with the internal port connected to the 4 bays in the Cisco box, holding a 500g SSD, a 300GB 10k Raptor, and 2 4TB WD Reds in a RAID-0 config for parity, and 1 external port connected to the Norco 12D box with 4 Seagate 8TB Archive drives. The Cisco box also has a 9201-16e in the 16x slot, with 2 ports connected to the other 2 ports on the 12D box, holding several 3TB and 4TB drives.) Since I went from an Areca card to the LSI cards, I had to reformat and reconfigure the parity drives, so it's running a parity rebuild at the moment. Not getting the greatest speeds: it's been bouncing anywhere between 30m/sec and 140m/sec, averaging around 80m/sec. I started a preclear of the new 8TB I added, and it shows it's running at 208m/sec. The Areca card usually ran parity at around 120-130m pretty much all the way through a check, so that card may just be much faster (it did have a cache module and battery on it, but I can't remember if it had write caching turned on or not). I'm definitely happy with the responsiveness of the new box; it's definitely faster than my old 8-core AMD FX setup... (Yes, there are 2 C200s in there, plus the MD1000. The 2nd box is running Unraid also, with a single VM running Windows 2016 Server, with our Altaro offsite backup server set up on it for our Hyper-V box at the office.)
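      To separate raw controller throughput from parity-rebuild overhead, a quick per-drive sequential-read test is an easy cross-check. A rough sketch with hdparm; the device names are placeholders for whichever drives sit behind the card being tested:

        # buffered sequential-read test, one drive at a time
        for d in /dev/sd[b-e]; do
            hdparm -t "$d"
        done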
  14. I just went ahead and ordered the Norco box. I know it will work; I just need to order cables for it too. Of course it uses different cables than the MD1000, lol.
  15. I know this is an expander, but I wonder if it'll work with the Archive drives: 1 8088 in, 4 8087s out to internal drive sets. If it'd do the 8TBs (or larger, even), that coupled with the 16-bay, 8087-equipped Norco case would be perfect for what I want. It'd probably take a speed hit though (2 ports in, with 2 8087s each, would be even more ideal speed-wise). http://www.highpoint-tech.com/USA_new/series_EJ340-Overview.htm
  16. It's a bit pricey, but it'd cover what I'm using now, and I could continue to replace smaller drives with 8TB+ models..
  17. At a minimum I need 8 bays; 12 bays would be better, and 16 would be the max I'd ever need. I'm looking hard at the Norco 12D, which basically has 12 bays with 3 8088 ports, each port directly connected to 4 of the bays in the chassis (at least I think that's how it's wired; that's how it looks, anyway).
  18. I got the new card in, and it fits in the Cisco box without any issues; it even has room (barely) for the internal 4-drive-bay cable to reach and plug into the internal 8087 port. As long as I plug the 8TB drives into those trays, they work perfectly fine. If I move any of them to the MD1000 box, they aren't detected (or one is, but the other drives in the box aren't). Every other size of drive I've tried with the MD1000 works fine (up to a 4TB model); just the 8TBs have issues. Anybody have an external box with an expander that actually works with those 8TB drives? I'm thinking a Norco DS-12D, as it's basically just 12 drive bays hooked to 3 external ports, no expanders. Since the drives work plugged into the front bays, I would assume it'd work as well.
  19. I've seen several motherboards with 8087s on board, although usually server boards, and most of the time the board has an LSI HBA onboard. One exception is the Cisco C200 M2 machine I'm using: it's got an 8087 on the LSI mezzanine controller (a 1068 card, if I remember right), and an 8087 onboard on the right side that connects to the onboard SATA chipset (NOT the RAID card) for ports 3-6 (1 and 2 are standard SATA ports on the opposite side of the motherboard). It's not exactly an 'off the shelf' board, but it's not exactly unique either. There's also the SuperMicro X10SRM-TF, with a single 8087 onboard plus an additional 6 regular-style ports, for a total of 10. The Intel DBS2400SC2 has 2 8087s, plus 2 SATA 6G and 4 SATA 3G ports. As far as regular, consumer-grade boards go, I can't recall seeing any with 8087s on them.
  20. When I use the older card (the 31601E) and drives are inserted, I can see them all in the card's BIOS, but the 8TB drives don't show the correct capacity. The 'power' light on the drive trays also lights up. With the 9201, the 'power' light on each individual drive never comes on, nor does the activity light work. The new card I have on the way is a Gen3 LSI card, so I'm hoping it has better support for the 8TB Seagates (they're on the compatibility list for that specific card). I'm hopeful that it's not the MD1000, as I'm too far into the cost on it to replace it now (those trays weren't cheap; I wish I'd found one that already included them when I got the chassis). I'll make sure to post the results after the new card arrives, just in case anyone else tries this; it may save them some time and money.
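      When a card's BIOS shows the wrong capacity, it's worth checking what Linux sees through the same path before blaming the enclosure. A sketch, assuming the Archive drive shows up as /dev/sdX behind the HBA and that smartmontools, lsscsi, and sg3_utils are installed (the device name is a placeholder, not from the posts above):

        lsscsi -s                    # SCSI devices with sizes as the kernel sees them
        smartctl -i /dev/sdX         # model and capacity reported by the drive itself
        sg_readcap --long /dev/sdX   # READ CAPACITY(16) through the HBA/enclosure path

      If the front-bay path reports the full 8TB and the MD1000 path reports something truncated for the same drive, that points at the controller or enclosure firmware rather than the drive.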
  21. Looks like for 'certified' compatibility from LSI/Broadcom, I need a SAS 9207-4i4e. Going to grab one from Newegg; hopefully with next-day shipping it'll be here Friday. I'll report whether it works with the enclosure/8TB/etc.; if it does, I'll have an Areca 1680ix-24, a 1231ML 12-port, and the 9201-16e to get rid of..
  22. Is anybody using Seagate Archive 8TB drives on an LSI HBA? (I'm trying to get a 9201-16E working, connected to an MD1000 15-bay chassis.) The system appears to see one drive (after an abnormally long initialization process), plus another 250GB drive (just a random drive I had and stuck in another slot). The 9201-16e has the P20 flash on it, and in the card's config it shows the MD1000 box as an enclosure but doesn't show any drives (even with just the 250G installed: no drives, no negotiated speed, nothing). I have a SAS31601E that detects the enclosure correctly and actually shows the drive underneath it (but of course it won't work past 2.2TB). Just wondering if I may need to get a newer card.. I've got an Areca 1680ix-24 that I'm going to test with the enclosure tomorrow, but it won't fit my Cisco UCS (WAY too long, lol), and I'd rather find an LSI card with external ports that I can connect to the MD1000 (and possibly an internal port; one of the 4i4e cards would be ideal if it works with the 8TB drives).
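      To confirm what firmware and BIOS the 9201-16e is actually running (and compare against the P20 package notes), LSI's flash utility can report it. A sketch, assuming the sas2flash tool is available on the box:

        sas2flash -listall       # enumerate the LSI SAS2 controllers it can see
        sas2flash -c 0 -list     # firmware, BIOS, and board details for controller 0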
  23. I've got a 9201-16E in the machine now and it fits fine, but I need something to fit in the half-height slot, which is also short (6.5" long max, though shorter would be better).
  24. I've not had any real issues with my Areca ARC-1231ML 12-port card. Like I said, I just wish it had better support (especially with their older cards being so reasonable now). I really wish the 1680ix 24-port I just got would fit my case; it has the external port I need, plus internal ones (overkill at 24 ports total). I can certainly live without the OS reading the drive temps or spinning the drives down, but it would be nice, lol.