captain_video

Everything posted by captain_video

  1. "Just let the system do it automatically if you're already on 6. Plugins/Check For Updates." Sweet! Thanks.
  2. I upgraded to 6.0 last week and just noticed the 6.0.1 release today. I don't see anything about upgrading to the newer release if you already have version 6.0 installed. Is the process the same as older versions (i.e., replace the bzimage and bzroot files and reboot)?
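     For reference, a minimal sketch of that manual update from the server console, assuming the flash drive is mounted at /boot as usual and the 6.0.1 files have been extracted to ~/unraid-6.0.1 (a placeholder path):

         # back up the current kernel and root image before overwriting them
         cp /boot/bzimage /boot/bzimage.bak
         cp /boot/bzroot /boot/bzroot.bak
         # copy in the 6.0.1 versions (source path is a placeholder)
         cp ~/unraid-6.0.1/bzimage ~/unraid-6.0.1/bzroot /boot/
         # stop the array from the webGUI, then reboot to load the new release
         reboot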
  3. I'm looking to upgrade my current 5.0.6 Server Pro to version 6.0. The upgrade instructions tell me to install the new files on the flash drive while it's installed in a Windows PC. I've got the flash drive mapped so I can access it over my network. In the past I've done upgrades by copying the files over to the flash drive and rebooting the server from the web GUI, and it's always worked fine. Is there any reason I can't do the same with version 6.0? I've already backed up the flash drive contents to my PC. Should I delete the existing files on the flash drive or just let them be overwritten when I copy them over? It looks like I need to copy the license key to the config folder. Is this correct?
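     A rough sketch of that over-the-network copy, using the same copy-and-overwrite approach described above, assuming the flash share is mounted at /mnt/flash on a Linux box and the 6.0 zip is extracted to ./unraid-6.0 (both paths are placeholders); the .key file does belong in the config folder:

         # back up the whole flash drive first
         cp -r /mnt/flash ~/flash-backup-5.0.6
         # copy the 6.0 files over the top of the existing ones
         cp -r ./unraid-6.0/* /mnt/flash/
         # make sure the license key ends up in the config folder
         cp ~/flash-backup-5.0.6/config/*.key /mnt/flash/config/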
  4. That's pretty much what I was thinking as well. I went ahead and switched back to the onboard Realtek NIC and replaced the Promise SATA controller with a 2-port Silicon Image SIL3132 PCI-e controller. I've got two more 3TB Seagate drives on the way so now I won't have to swap them out with two of my existing drives. I had been trying to avoid using the ports connected to the Promise controller for fear it would take forever to perform a parity check or data rebuild if it caused a severe bottleneck. If I can get the issues resolved with the Gigabyte motherboard setup I may put it back into service so I can use the Intel NIC. Thanks for your input.
  5. I've got an Intel PCI gigabit NIC. Would that be better than using the onboard Realtek NIC?
  6. Parity synch completed with no errors showing on any drive. I swapped out my last remaining 750GB drive for the 2nd 4TB drive and started the array. I hit refresh to see what was going on and it wouldn't respond. I checked the server and all of the drive activity lights were flashing so I assumed that it was doing a data rebuild on the new drive. I left my web browser open and tried to reconnect with the server but it just hung there. I left it for a while and when I checked back it had finally connected and was showing a data rebuild in progress on the new drive. It appears that the new hardware was causing the problem. I just need to determine if the hardware is defective or if flexing the SATA controller was the root cause of my problems. Once I get all of my drives updated I'll experiment with it and see what I can find out. For now, it's all good.
     One parting thought - The main reason I swapped out the motherboard was for the eight onboard SATA ports and two PCI-e x16 slots to house the two Supermicro 8-port SATA controllers. The Supermicro server chassis holds up to 24 drives so I wanted the ability to populate all available drive bays. Both my old motherboard and the new one each have two PCI-e x16 slots, one x1 slot and one PCI slot. The 8-port SATA controllers occupy the two x16 slots and the x1 slot currently holds an Intel gigabit NIC. I have a Promise SATA4 PCI controller to handle the remaining two drive slots, but I also have a Silicon Image SIL3132 PCI-e x1 2-port SATA controller I could use instead. The question boils down to what setup gives me the most benefit - going with the Intel NIC and the PCI SATA controller or the onboard Realtek NIC and the PCI-e SATA controller. Any thoughts or suggestions? I went with the Gigabyte board because it had eight onboard SATA ports, thereby eliminating the need for the extra SATA controller that occupied a slot. This allowed me to have 24 SATA ports as well as the Intel NIC with the option to use the PCI slot if I ever need an additional SATA port for a cache drive.
  7. If it occurs again I will do so. Right now it's running a parity synch with the old motherboard, CPU, and RAM and so far it hasn't displayed a single error at 18% completion. The new motherboard setup would have at least shown failures with drive 8 at this point in the parity synch. I wasn't all that happy with the physical alignment of the new Gigabyte motherboard with respect to the Supermicro chassis. The 1st PCI-e x16 slot was just slightly off with respect to the slot opening in the case, causing me to stress the Supermicro SATA controller slightly to get things lined up. Installing the controller in the slot turned out to be quite a chore as well. I'm thinking that the way the controller was being mounted may have caused some intermittent connections or some other issue that was causing the problem. For now, I'm keeping my fingers crossed that it makes it through the parity synch with no further incidents. If it does I'll take another shot at trying to format the other 4TB drive. The parity synch won't be complete until sometime tomorrow morning so I'm in a holding pattern until something happens or it completes parity synch.
  8. Well, that didn't work quite like I expected. Here's what transpired: Installed 4TB drives for both parity and disk 1. Disk 1 would not format after countless attempts. Swapped out disk 1 for the old 3TB parity drive, created a new configuration, and formatted it with no problems. Started the array and initiated a parity rebuild. Worked fine for a while and then the parity drive red balled and one of the drives (drive 8) showed multiple errors. I mapped drive 8 from my PC and it only listed a couple of files. I rebooted and all drives showed green once again. I went through this scenario a couple of times with the same result (i.e., parity drive red balled and drive 8 showed multiple errors). Just before going to bed, I started the array and initiated the parity rebuild once again and let it run all night. When I checked it this morning there were several drives with errors in addition to drive 8. I stopped the array and all of the drives with errors were shown as not installed with red balls. I mapped a couple of the drives and they were all showing as empty. Rebooted the array and all drives that had been shown as not installed were now included back in the array and had green balls. I mapped one of the questionable drives and all of the contents now appeared to be intact. I've run the SeaTools long test on several of the suspect drives and so far every one has passed. One of them was a WD green drive (2TB WD20EARX) so I haven't checked that one yet with the WD Lifeguard diagnostics. I recently upgraded my motherboard, CPU, and memory so my next plan is to swap the new hardware for the old and see what happens. Several of the suspect drives are getting long in the tooth so I will probably have to think about replacing them if they fail again with the old motherboard and CPU.
  9. OK, I think I've got it figured out. I forgot there was an option on the Settings tab to wipe the current configuration and create a new one. I knew there was a simple way to do it but I forgot what it was and there's little to no documentation I could find on anything newer than version 4.7. I also discovered I could specify that the parity was valid even after creating the new configuration. This allowed me to create the new configuration with the original parity drive and replace the flaky drive with a new one of the same capacity. It's currently clearing the drive and I expect to be able to rebuild the drive using the original parity drive when I get home from work this evening. If all goes well I'll upgrade the original parity drive to one of the 4TB drives and then swap out the old 3TB parity drive and the 2nd new 4TB drive for two of the existing smaller data drives that are getting long in the tooth. I'm keeping my fingers crossed in any case. What's crazy is that I rarely have drive issues with unRAID until I try to upgrade the configuration.
  10. I decided to bite the bullet and go ahead and reformat the drive. I tried using a utility that allows me to view reiserfs drives in Windows and the drive is showing up blank. I started the array with the drive installed and tried to format it. It appeared to start OK, but when I hit refresh it just reverted back to the original screen that showed the drive as being unformatted. I tried reformatting it several more times with the same results. I swapped out the questionable 1.5TB drive with one of the new 4TB drives I had already precleared. The other 4TB drive is installed as the new parity drive. Numerous attempts to start the array and format the new drive failed so I'm back to square one. My next plan is to delete the existing configuration and start from scratch. Theoretically, the existing data drives should remain unaffected and the new blank data drive should format and then start building parity. What's the recommended way to erase the current configuration? I have several shares set up as well as a static IP. Is it OK to restore one or more of the .cfg files back to the configuration folder on the flash drive so I don't have to reconfigure everything?
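     For what it's worth, the drive assignments live in a single file on the flash drive, so a sketch of the "start from scratch" step that leaves the share and network settings alone might look like this (assuming the stock /boot/config layout):

         # back up the entire config folder before touching anything
         cp -r /boot/config /boot/config.bak
         # super.dat holds the drive assignments; removing it resets the array config
         rm /boot/config/super.dat
         # network.cfg, share.cfg, and shares/*.cfg are left in place, so the
         # static IP and share definitions survive the new configuration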
  11. Just finished running the SeaTools long test on the drive and it passed with no errors. I have no clue what's going on.
  12. I put the array in maintenance mode and ran the following from the command line: reiserfsck --check /dev/md1 The check failed immediately with an error message about block 2 being bad or something to that effect, basically telling me that the drive is bad. I'm running the SeaTools long test on the drive to confirm its status. I just checked the warranty status and it expires on February 7, 2014, so I'm almost hoping the drive is bad so I can get it replaced under warranty. I did a spot check of the TV shows I have archived and noted which episodes of each show were missing. I can recover pretty much all of them, so the only thing that's missing is some movies on Blu-ray and DVD, neither of which is a monumental loss. I still wouldn't mind being able to recover the data using the original parity drive, but I don't even know if that's possible at this point. If it isn't, I'll just have to bite the bullet and replace the drive and rerun parity from scratch. Just a quick question - drive md1 corresponds to data disk 1, correct? I'm not sure if I actually checked disk 1 or the parity disk using this command. Here's my setup: unRAID Server Pro 5.0.4; Gigabyte F2A85XM-HD3 motherboard; AMD A4-6300 CPU; 2x4GB Mushkin PC3-10666 RAM; two Supermicro AOC-SASLP-MV8 controllers with 0.21 firmware; Intel PCI-e gigabit NIC; Supermicro SC846TQ 24-bay server case; Corsair HX850 PSU. (Attached: syslog.txt)
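     On the md1 question: in unRAID the /dev/mdN devices map one-to-one to the data disks (md1 is disk 1, md2 is disk 2, and so on), and the parity drive has no md device of its own, so the command above really did check data disk 1. A minimal sketch of the maintenance-mode check:

         # with the array started in maintenance mode:
         reiserfsck --check /dev/md1     # read-only check of data disk 1
         # only if the check explicitly recommends it:
         # reiserfsck --fix-fixable /dev/md1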
  13. I'm no expert, but unless you've excluded any specific drives from the share settings, I believe you should have a folder for every share on each drive. If you want to see what's in each folder of each drive, I'd just map the individual drives in Windows Explorer and take a screenshot of each folder's contents. There may be a better way, but like I said, I'm no expert.
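     A quick alternative from the server console, for anyone who'd rather not map every drive: each data disk is mounted at /mnt/diskN, so the top-level share folders on every disk can be listed in one pass (a small sketch, not a polished tool):

         # print the top-level folders (one per share) on each data disk
         for d in /mnt/disk*; do
             echo "== $d =="
             ls "$d"
         done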
  14. NOTE: Thread renamed for clarity. Running Server Pro version 5.0.4. I just precleared two 4TB drives using preclear_disk.sh and used one of them to replace my existing 3TB parity drive. While attempting to rebuild parity on the new drive, four of the existing 21 drives showed errors during the parity check. I shut down and checked all cables and controllers to make sure everything was seated. Powered it back up and everything showed green except a blue ball for the parity drive. I re-initiated the parity rebuild and eventually had the same problem. This recurred one or two more times until the number of drives showing errors increased to seven. I shut down and rechecked all of my connections again. This time everything came up green except that drive 1 was now showing as unformatted (drive 1 was one of the drives that had been showing errors during the parity check). I reverted to version 5.0-rc16c, which I had been using prior to version 5.0.4, and the same thing happened. I'm back on 5.0.4 and I tried reinstalling the original 3TB parity drive, which is still the same size or larger than any other drive in the array. The original parity drive is showing up as the wrong drive, drive 1 is still unformatted, and I cannot start the array as it is indicating an invalid configuration. I reinstalled the 4TB parity drive and it shows up with an orange ball, with drive 1 still unformatted. I cannot rebuild the data on drive 1 with the 4TB parity drive because it never successfully rebuilt parity. The only way to do this is with the original 3TB parity drive. I have another drive of the same size as drive 1 that can be used to rebuild the data. I need to know if there's a way to restore the original parity drive so I can rebuild the data to a different drive. I'm also wondering if it's feasible to clone the parity drive from the original 3TB drive to the new 4TB drive using dd or some other method. That way the system would see the correct parity drive. My main concern with this method is that I'm not sure the configuration would see the parity as valid. If I can convince it that it is valid, then I perhaps stand a chance of rebuilding the lost data.
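     On the cloning idea: the raw copy itself is simple; the sketch below assumes the old 3TB parity drive is /dev/sdX and the new 4TB drive is /dev/sdY (placeholders; double-check the device names before running anything like this). The catch is the one raised above: unRAID would still have to be told the parity is valid (a new configuration with the parity-is-valid option), and the last 1TB of the 4TB drive would need to be zeroed before a parity check would come back clean past the 3TB mark.

         # clone the old parity drive onto the new one, block for block
         # (sdX = old 3TB parity, sdY = new 4TB drive -- both placeholders)
         dd if=/dev/sdX of=/dev/sdY bs=1M
         # the region of sdY past 3TB still holds whatever was on the drive
         # before, so it must be zeroed for parity to be valid end to end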
  15. Never mind. I think I found what I'm looking for in another thread. It looks like the Gigabyte GA-F2A85XM-HD3 mATX board will work perfectly. It's got two PCI-e x16 slots, one PCI-e x1 slot, and eight onboard SATA ports. The two PCI-e x16 slots can handle the two 8-port SATA controllers and the onboard SATA ports will take care of the remaining 8 drives. The PCI-e x1 slot will take care of the Intel NIC. I've got a Promise SATA300 TX4 PCI 4-port controller that I can use if I ever decide to go with a cache drive and/or an optical drive. Newegg has it for $66.99 after a $10 rebate and also has the AMD A6-5400K FM2 CPU for $59.99, both with free shipping. After rebate, the total comes to $126.98, which is only $1.98 more than my target price. I could go with the A4-4000 for $49.99, but I figure another $10 won't break the bank.
  16. I'm looking for a motherboard and CPU combo with the following criteria and would like recommendations from anyone that's using one with unRAID 5.0:
     • two PCI-e x16 slots (I have two Supermicro AOC-SASLP-MV8 8-port SATA controllers that require PCI-e x4 slots or better)
     • 4 - 8 SATA II or III ports
     • 2 - 3 PCI-e x1 slots (need at least one slot for an Intel NIC and additional slots for 2-port SATA controllers if the number of onboard SATA ports is less than 8)
     • onboard video
     The combo can be either AMD or Intel, but I'd like to keep the cost for the combo at around $125 or less. As indicated, the combination of PCI-e x1 slots and the number of SATA ports must allow for a minimum connection of 8 SATA drives. At some point I may want to add a cache drive and/or an optical drive, so 1 - 2 more SATA ports can't hurt, although I could use USB ports for an external optical drive.
  17. I was looking into the MSI FM2-A85XA-G43 as a possible replacement for my current Asus F1A55-M LX unRAID setup. I'm using it in a Supermicro SC846TQ 24-bay server rack. I have two Supermicro AOC-SASLP-MV8 SATA controllers in conjunction with 6 onboard SATA II ports. The motherboard has two PCI-e x16 slots, one PCI-e x1 slot, and a PCI slot. The PCI-e x1 slot is used by an Intel Gigabit NIC and the PCI slot has an older 4-port SATA controller card, of which only two ports are being used. I tried giving up the x1 slot so I could use a Silicon Image SIL3132 2-port controller, but that means using the onboard Realtek 8111E NIC. The transfer rates I get with the Realtek are painfully slow compared to the Intel, which is why the MSI board is looking so attractive. It's got two PCI-e x16 slots (the controllers only need x4 slots so they should work fine) and three x1 slots. I have a feeling that the two x1 slots in the middle can't be used simultaneously (I have another MSI board and this holds true for that board), but I only need two of the three to work for my configuration. Micro Center has the MSI board for $75 with a $10 rebate and has an AMD A6-6400K CPU on sale for $70 with an additional $40 discount if purchased together, for a grand total of only $95 plus tax. The ASRock board that was recommended has two PCI-e x16 slots and eight SATA ports, so it would work with my setup, but it lacks the additional PCI-e x1 slot for my Intel NIC, forcing me to use the onboard Realtek LAN, so I'd be no better off than I am now. It would also cost me $125 with shipping from Newegg, so the MSI is by far the better deal for me. A Micro Center opened up about 30 miles from me, north of Baltimore, about a year ago, so I'm thinking about heading there tomorrow if they have the items in stock. If I get them I'll post my results and let everyone know if it works with unRAID. Crap. I just checked and both the Baltimore and Rockville stores are out of stock. Oh, well.
  18. Some further thoughts - I don't even know if this is relevant, but when I checked the BIOS settings after upgrading to firmware version .21 the Int13h setting was enabled. I disabled it on both controllers, but the long boot time still occurs. It was after I changed this setting that I swapped the 3TB drive back to one of the controller ports. I'm wondering if this setting had anything to do with the 3TB drive not being recognized initially. I don't know why it would, but it's the only thing that changed after the initial drive installation other than setting it up as the new parity drive and running a parity check, which was performed prior to disabling Int13h on the controllers. The parity check was performed with the 3TB drive connected directly to one of the motherboard SATA ports.
  19. Good news on the 3TB drive and .21. I returned the 3TB drive to its original slot connected to one of the controllers after it rebuilt parity from scratch. The drive is now recognized by unRAID when connected to the controller as a 3TB drive. I'm still not sure why it didn't see it previously. I can only assume it wasn't seated properly on the backplane. I looked at the controller BIOS settings and didn't see anything that would likely be the cause of the long startup delay. I may have to chalk this up as one of those little mysteries that can't be solved. As long as unRAID works with 3TB drives or larger I can learn to live with it.
  20. Downgrading to .15 isn't really an option. The main reason I upgraded to .21 was so I could use 3 or 4TB drives. The fact that it's only an issue when I boot up is mostly an annoyance at this point since I don't constantly reboot my system. It makes any sort of troubleshooting a longer process than it needs to be. It works fine after unRAID is loaded. The thing that bothers me the most right now is the fact that my new 3TB parity drive wasn't even seen by unRAID. I would have expected it to at least show up, even if it was somehow limited in capacity. I'll play with it some more over the weekend when I have more time.
  21. I disabled Int13h on both controllers when I went home for lunch. I see no difference in the long boot times with it disabled. I plan on exploring the BIOS settings in more detail when I get home this evening to see if there's something else I might need to change. Otherwise, I'm stumped. I never had this issue with firmware version .15. FYI - I forgot to mention that I'm using unRAID 5.0-rc12a. Motherboard is an Asus micro-ATX FM1 model with an AMD A4-3400 CPU. I forget whether I have 4 or 8GB of RAM at the moment. Case is a Supermicro SC846TQ-R900B 24-bay server rack. SATA controllers: six SATA ports on motherboard, two AOC-SASLP-MV8's, and one Promise SATA300 TX-4 4-port controller (only using two ports since it's in a PCI slot). PCI-e x1 slot is occupied by an Intel NIC. Currently only 20 of the 24 available drive bays are occupied.
  22. I vaguely remember something about INT 13 with regards to my unRAID setup, but it's been so long ago I don't recall what it was all about. I'll search on the topic and refresh my memory. The controllers seem to be working fine after the system boots up, other than not detecting the 3TB drive in unRAID. I just refreshed my memory regarding the INT 13 setting. I'll have to check the BIOS on each controller and see how it's set. I'm pretty sure I had it disabled with firmware .15, but I assume I'll have to reset it with the new firmware.
  23. I installed a new Seagate 7200.14 3TB drive in my unRAID server yesterday and connected it to one of the upgraded controllers. Since all of my other drives are 2TB or less, I was installing it as the new parity drive. The drive showed up when each of the controllers was scanned for attached drives, but when I checked the unRAID web GUI the drive was not there. The parity drive indicated it was not installed, which is what I expected to see. When I stopped the array and looked at the drop-down menu to display the available drives to use as the parity drive there were no drives listed. It just said "Not installed." I ran the SeaTools diagnostic on the drive on another PC just to make sure there wasn't a problem with it and it passed both the long and short tests. I reinstalled it in the unRAID array, but this time I connected it directly to one of the SATA ports on the motherboard. When it booted up and I stopped the array, the new Seagate drive was listed. I have it running a parity check while I'm at work so I'm hoping it will be completed later this evening. The fact that the drive did not show up when connected to the controller was extremely disappointing. I may move it to the other controller just to see if it shows up. BTW, I timed the boot sequence beginning from the time the screen is displayed with both controllers and their associated drives with the Ctrl + M or Space options listed at the bottom to the time it continues with the boot sequence and loads unRAID. This time interval was just over four minutes. This same sequence with firmware .15 only took a second or two to complete. I may try reflashing the firmware again to see if it clears things up.
  24. I used a separate USB flash drive configured as a boot drive with the aforementioned files installed. When I tried the backup_cfg command I got an error indicating there was no virtual drive or something to that effect. I just restored the 6480.txt file that had the RAID functions disabled. I only restored it to the controller that displayed the RAID initialization message during bootup. I did adjust the delay from 0 to 1, but that's probably inconsequential. The delay is when it displays the overall configuration with both controllers listed along with the drives connected to each. The Ctrl + M/Space bar message is displayed at the bottom of the screen, but does not appear to respond to any keyboard inputs. I didn't time it to see how long it sits there, but I suspect it's on the order of several minutes. I won't get a chance to check it until tomorrow evening as I have to be somewhere after work this evening and won't get home until late. I'll take another shot at trying to back up the configuration and see what happens.