gloryboxfailure's Achievements

  1. Excellent. Thanks for the help, I will try these options.
  2. This morning I received an alert stating that my system was "unable to write to cache" and "unable to write to docker image". After a bit of forum searching, I shut down my server and checked the connections on the cache drive's cables. Upon rebooting the server and restarting the array, unRAID is telling me that my cache drive is "unmountable: unsupported partition layout". FWIW, the syslog is also throwing buffer I/O errors like crazy on the cache drive. I've attached the diagnostic log. Is there any way to recover the cache drive's partition layout and my docker container configurations? Any help would be greatly appreciated. I have a new SSD I can use to replace the existing cache drive, but I really don't want to reconfigure my dockers unless I absolutely have to.
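For anyone landing here with the same symptoms, a rough triage sketch from the unRAID console might look like the following. This is an assumption-laden sketch, not the official procedure: `/dev/sdX` is a placeholder for the actual cache device shown on the Main tab, and the `|| true` guards only exist so the sketch is safe to copy-paste.

```shell
# Hedged sketch: triaging an "unmountable" cache drive from the console.
# /dev/sdX is a placeholder -- substitute your cache device's identifier.

# 1. Look for kernel-level I/O errors involving the device
dmesg | grep -iE 'i/o error|sdX' || true

# 2. Pull the drive's SMART health summary and attributes
smartctl -H -A /dev/sdX || true

# 3. If only the partition table is damaged, inspect it read-only
fdisk -l /dev/sdX || true
```

If SMART shows reallocated or pending sectors climbing alongside the buffer I/O errors, the drive itself is likely failing. Note that unRAID keeps user docker templates on the flash drive (typically under `/boot/config/plugins/dockerMan/templates-user/`), so containers can usually be re-added from their saved templates rather than reconfigured from scratch.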
  3. Excellent. Thank you for the help. I'll mention that I flashed both cards in a spare system while I was preclearing a couple of drives for the new system, so the firmware on the cards is not an issue.
  4. I am currently building my second unRAID server, and I am running into an issue with the LSI cards that I purchased for it. The component list is as follows:
     MOBO: Supermicro X9SCM-F, BIOS rev. 2.2
     CPU: Intel Xeon E3-1270v2
     RAM: 16 GB ECC UDIMM
     Controller Card 1: LSI SAS9201-8i, flashed to FW
     Controller Card 2: Dell LSI SAS9201-16e, flashed to FW
     I intend to use the external card to hook the old server up to the new one as DAS. The issue I'm having is that when I access the SAS card configuration utility, only the card plugged into the first slot on the board shows up. I switched the cards' positions, and the same thing happened. Further confusing me, I can boot into a Manjaro live USB terminal, run 'lspci', and both cards show up. I have attached a couple of photos showing what I mean. My questions are: will unRAID be able to see both cards, even though they aren't showing up in the configuration utility? And is there a way to check in the motherboard BIOS whether the board is recognizing both cards? Any help on this would be greatly appreciated. Thanks in advance.
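The 'lspci' check above can be narrowed to just the HBAs. A minimal sketch, assuming a live Linux environment such as the Manjaro USB (the grep patterns match the LSI controller chips these 9201-series cards typically report as):

```shell
# Hedged sketch: confirming both LSI HBAs enumerate on the PCIe bus.

# List only entries mentioning LSI or SAS; both 9201-series cards should appear
lspci -nn | grep -iE 'LSI|SAS' || true

# Show which kernel driver has bound to each controller
# (on these cards that is usually mpt2sas)
lspci -k | grep -A 3 -iE 'SAS' || true
```

If both cards show up in `lspci` with a driver bound, the OS can use them regardless of the option-ROM configuration utility; it is common for a board to execute only one HBA's option ROM at POST, so the utility listing a single card does not by itself mean the other is invisible to unRAID.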
  5. Thank you for the help. I will upgrade to v6, but before I do so I have a really dumb question: I won't lose my data by upgrading to v6, correct?
  6. So I was a little impatient and unmounted the data drives via telnet before the parity check was done, following the guide in the wiki, and then rebooted the system. Upon reboot I was able to log back into the web interface; however, after I pressed the button to start the array, it hung at "Spinning up all drives...Start array...Mounting disks..." I'm still able to telnet into the tower, but refreshing the GUI page does nothing, and the shares are still unavailable on the network. I'll attach the syslog so that maybe someone else can make heads or tails of what's going on. All jokes aside, this seems like a great opportunity to upgrade to v6. syslog.txt
  7. Hello, I am running v5.0-rc12 and recently had a power outage. I was unable to get home in time to shut the server down properly before the battery backup ran out. I was able to power up the tower, log in to the browser GUI, and start the array. However, I refreshed the page and now it will not connect (I am getting the "ERR_CONNECTION_RESET" message in Chrome). I am able to log in via telnet, but I'm not really sure how to determine whether the parity check has completed. I'm also unsure which commands to run to properly stop the array and reboot the tower. Currently my shares are not available on the network. Any help at all would be super appreciated; I'm not sure whether this topic has been addressed before, but I was unable to find anything in the forums.
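For reference, the console commands being asked about can be sketched roughly as below. This follows the unRAID v5-era wiki conventions and may differ on other releases; the `|| true` guards are only there to make the sketch safe to paste anywhere, and the shutdown steps are left commented so nothing powers off by accident.

```shell
# Hedged sketch: checking parity status and stopping the array over telnet
# on unRAID v5. Paths follow the v5 wiki and may differ on other releases.

# 1. Is a parity check/sync still running?
#    (mdResync=0 means no check is in progress)
/root/mdcmd status | grep -E 'mdResync|mdState' || true

# 2. Unmount the data disks; verify with 'mount' that nothing under /mnt remains
umount /mnt/disk* 2>/dev/null || true

# 3. Stop the array, then power off -- run these only at the actual console:
#      /root/mdcmd stop
#      powerdown        # or 'shutdown -h now' if the powerdown script is absent
```

Once `mdcmd status` reports no resync in progress and all data disks are unmounted, stopping the array and rebooting should be safe; parity will re-check on the next start if the array was not stopped cleanly.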
  8. Thanks for the recommendation! I was definitely planning on using the cache drive as a warm spare. Thanks for reminding me about the size limitation on the drives as well. Hopefully I'll be able to get the drives up and running next week and throw some stats out as well.
  9. I'm a former FreeNAS user who is building a new unRAID box. I'm really intrigued by the ability to add any size drive to an array. My plan is to start the unRAID box with a few drives, move my old data from the FreeNAS box to the new one, and then use the old drives to enlarge the array. That said, I'd like to post my component list and read what you experts think. My main purposes for the machine:
     XBMC server for 6 machines throughout the house
     Data backup for an ever-expanding library of pictures
     Sickbeard, CouchPotato, and SABnzbd integration
     I am open to other "essential" plugins and uses as well.
     Case: NZXT Source 210 - will hold up to 12 drives with the addition of a 4-in-3 cage
     Mobo: MSI FM-A75MA-E35 mATX
     CPU: AMD A6-5400K
     PSU: Corsair TX750M
     Memory: Corsair XMS3 4GB DDR3-1600
     Parity Drive: WD Green 3TB
     Cache Drive: WD Green 3TB
     Data Drive: WD Green 2TB WD20EARX
     Data Drives: 2x WD Greens from my old FreeNAS
     Case Fans: 2x Coolmax 120mm
     Cabling: Various SATA cables from prior builds
     Internal USB Header: Koutech IO-UU220
     So I'm starting out with 6TB in the array, and as the budget allows I'll pop in more 3TB or 4TB drives. At some point I'll have to add a SATA controller card, and I'm definitely looking for recommendations on that. Noise is not an issue, as this is going into a well-ventilated mechanical room. My biggest concern is whether the PSU will handle all 12 drives and the fans. I will upload some photos as I get the box assembled. Thanks for checking this out!