About heffe2001
  1. I'm seeing the same thing, but on every other screen refresh they disappear under Unassigned Devices. This is on the latest RC.
  2. I think the 3108 is the same LSI card that came in my Cisco C200; if so, it only supports drives up to (I think) 2.2TB, nothing higher. I ended up pulling it off the board (it was an add-on module) and installing 2 other LSI cards, one for the internal slots and one for my external array. The 3108 IS supported if it's the same as mine, it just has limitations...
  3. I primarily use Chrome, and that's where I've seen the issue. I'll try an alternate browser the next time I see an update, to check whether or not it's Chrome-related. I'm going to assume you're right on this, as it seems to be consistently the Preclear plugin that does this. I'll post over on its support thread here shortly.
  4. This is the link to the case, without the hot-swap bays (it came with 2 non-removable 4-drive bays, which I replaced with the hot-swap modules). I don't think I have the old static drive bays anymore, nor the blank covers for the upper 3 5.25" bays where the 5-in-3 is located. https://www.newegg.com/Product/Product.aspx?Item=N82E16811123131R&cm_re=sr10769-_-11-123-131R-_-Product The hot-swap bays were around 60 each for the 4-bay ones. The 2-bay 2.5" is actually a Chenbro as well, which was around 50. The 5-in-3 is a Supermicro, which I believe was around 60-70. I've got around 400 in the case, but I'll let it go for much less to someone local-ish, or if you're not local, if you pay actual shipping (I'd need to weigh it to get a shipping quote if anybody is interested).
  5. I've used the Areca card in my main server for the past 3 years without any issues. It includes the battery and a cache module. I was running a spanned array of 2 4TB drives as a parity drive. I bought the battery back in 6/2015; it still works fine and holds a charge. The card wouldn't fit in my new system, so I replaced it with a low-profile LSI. Everything works fine on it: all 12 ports work, as does the RJ45 on the back for network access (it's currently set static, I believe). I have the card listed on eBay with a 69.99+shipping Buy It Now, but I'd sell directly to anyone here for 60 shipped. I will also supply 3 8087-to-4-SATA breakout cables with it.

I also have a WD5000HHTZ-04N21V0 10k RPM Raptor that I was using as my cache drive (replaced with a 500GB SSD this weekend). It still has 21 months of warranty left and has had no issues according to the SMART statistics (attached to this post). I don't have a use for it anymore, so I'm putting it up here as well. I'd like to get 50 shipped.

I also have a Chenbro SR107 tower case with 2 4-bay hot-swap modules in the lower bays, plus a Supermicro 5-in-3 in the 5.25" bays and a Supermicro dual 2.5"-in-1 3.5" bay. I'd rather not ship this if at all possible, so maybe someone local/close to me would be interested? The case has been emptied and still has all the drive trays, along with most of the screws (I have a baggie of the screws I used on this, but assume you'll need more to fully load the box). I was using the above card to drive 2 of the 4-bay modules and all but one slot on the 5-bay, with the mobo ports handling the other 3. All fans work, and I have a new Corsair 120mm to put in the back fan mount. The case is a bit dusty, but I'll blow it out if anybody is interested (I'll get pictures of this case tonight).

I'll even include an adapter from an onboard USB motherboard header to 2 USB ports, so you can mount your Unraid key internally (it worked great on this system; my new box has that built into the mobo, so it's not needed anymore). Other than the RAID chassis and fans, this case is empty: no mobo, no power supply, no drives. I also have a mounting plate for using dual redundant power supplies in this case, but I never got around to buying the supplies/module to install it. I don't really have a price in mind, so make an offer. I could ship this, but the buyer will have to pay the shipping (it won't be cheap; even without any drives or electronics in it, it's a big, heavy case).

I also have an Areca ARC-1680IX-24 that is giving me a firmware error. If you know how to fix these, you'll have a nice 28-port card (24 internal, 4 external). Make an offer, but this one is being sold as-is. It does have the metal full-height bracket on it, just not in the picture. All this will be is the card, no cables included. media01-smart-20170626-1119.zip
  6. When updating any of my Dockers, the pages appear blank until the Docker has been completely updated. I THINK this started with RC5, but it's still present on my system in RC6. Occasionally you'll get a screen with some of the data on it, but it's static and doesn't change until after the update is completed (when the 'DONE' button appears). Previous versions would show the download percentages of each part of the Docker, as well as the extraction messages as it went. I'm also not seeing a 'DONE' button when updating the Preclear plugin; once it's completed, you have to hit the X on the window to clear it, instead of the 'DONE' button all other plugins show when they complete. Has anybody else seen this behavior?
  7. Just wanted to report for this build that the IPMI/ACPI modules that weren't loading for me before are now loading correctly; the system loads the right modules without anything added in the go file.
  8. Just had an opportunity to restart one of my C200's with this build (and the modprobe lines commented out), and it appears that the modules for the IPMI/ACPI errors are loading correctly now; they at least show up in lsmod even though they weren't loaded from the go file.
  9. I ran an FX-8320 from about 9/2013 until last month without issues, running approximately 10 Dockers. I DID have it water-cooled for about half that time, though (not overclocked, just cooled at stock speeds). As for the board, I was running an ASUS M5A97 R2.0 mobo with 32GB of RAM, plus an Areca 1280ML for ports and a no-name PCI video card.
  10. I removed my modprobe lines for the ACPI/IPMI errors I've been seeing, and this release still has the errors. Stuck them back in and all's quiet. Just figured I'd let you know.
  11. Just an update: I'm at 20+ hrs uptime without any of the above errors. It seems like modprobing the 2 missing modules has fixed the issue.
  12. I looked in the settings but didn't see anything. I'd still be using my Areca card, except that it won't fit in the machine I'm using now. The maximum card size I can use now is 6.6" long, half height. Not many Areca cards fit in that space, and I haven't found one that does at a reasonable price. I could use a 7" full-height card, but I'd need at least 3-4 external ports. I'm currently using 2 LSI boards: a 9201-16E in the full-height slot and a 9207-4i4e in the half-height slot.
  13. In my old setup, I had an Areca card with 2 4TB WD Reds set up as the parity drive for several Seagate 8TB Archive drives. Everything worked fine, as the array was larger than the Seagates. In the new setup, I'm using LSI cards and tried to use the same configuration (2 4TB Reds in a RAID 0 array). I assigned that array as the parity drive and started the rebuild. It was scheduled to finish sometime in the early hours this morning, but when I got up and checked the system, it said that the parity rebuild had failed with 145 errors. I checked the logs, and they showed that it had attempted to read past the end of the disk. I checked the size of the array: 7,812,499,404 KB according to Unraid. One of the 8TB drives: 7,814,026,532 KB. I thought Unraid had logic to not allow assigning a parity drive that was too small to cover the largest drive in the system? If it's supposed to do that, then something didn't work as intended. I've assigned one of my spare 8TB Archive drives as parity until I can get in an IronWolf 7200 RPM to use in its place. I've also broken the RAID array with the 2 4TB's so I can run preclears on both, just to be sure there are no problems with the individual drives. I seem to recall (NOW, anyway, lol) that the LSI cards use some of the space on the drives for their housekeeping on the array, where the Areca cards didn't (or used less), so that would explain the size, but if Unraid is supposed to detect that a drive is too small to cover parity, I think there may be a bug in that logic. I've attached the diagnostic files just in case it needs to be looked at. media01-diagnostics-20170611-1219.zip
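The size mismatch above is easy to verify by hand; here's a minimal sketch using the KB figures from the post (the variable names are mine, not anything from Unraid itself):

```shell
# A parity device must be at least as large as the largest data drive,
# otherwise parity rebuild reads run past the end of the parity disk.
parity_kb=7812499404        # 2x 4TB WD Red RAID-0 span as seen through the LSI card
largest_data_kb=7814026532  # Seagate 8TB Archive data drive

if [ "$parity_kb" -lt "$largest_data_kb" ]; then
  echo "parity too small by $((largest_data_kb - parity_kb)) KB"
fi
# prints: parity too small by 1527128 KB
```

So the LSI span comes up about 1.5 million KB (~1.5 GB) short of the 8TB drive, which lines up with the controller reserving some space for its array metadata.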
  14. Looks like adding acpi_ipmi and acpi_power_meter to my go file has my 2nd server 'fixed', for lack of a better word. It's been up for approximately 30 minutes on 6.4-rc2 without the warnings/errors. *EDIT* Been up for 2+ hrs, no IPMI/ACPI issues.
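For anyone wanting the same workaround, the added go-file lines would look something like this (a sketch based on the module names in the post; on a stock Unraid install the go file lives at /boot/config/go and runs at boot):

```shell
# Appended to /boot/config/go so the two modules load at every boot,
# silencing the IPMI/ACPI warnings described above.
modprobe acpi_ipmi
modprobe acpi_power_meter
```

Once a release loads these modules on its own (as reported for the later builds above), the lines can be removed again.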