Johnm

Members
  • Posts: 2484

Everything posted by Johnm

  1. Possible issue with RC11. It looks like unRAID dropped one channel on an AOC-SASLP-MV8. It red-balled one drive (disk 15), but instead of emulating the missing drive as it should, it ran into trouble: it is getting thousands of read errors on the other 3 drives on the same channel and is now making hundreds of parity corrections. Wouldn't those parity corrections invalidate the parity needed to rebuild the lost drive? I am not onsite to perform any physical maintenance.
     The webGUI went unresponsive at about this time. I remoted into the console to reboot it; the console is spammed with thousands of ReiserFS errors for disk 15. I issued a reboot command, lost interactivity, and it continued to spam ReiserFS errors. After 12 hours in this state it still had not shut down, so I hard rebooted it by cutting power remotely. It came back up and auto-started. It is continuing to make parity corrections with the red-balled drive offline (but still physically inserted), and it now shows thousands of writes to 2 of the other 3 drives on the same channel. It is also spamming the log and console, and I cannot access the array at all. At about this point it went unresponsive again.
     Now the question, once I get a syslog, will be what died: the driver for the controller taking out the drive, the controller itself, or a drive that in turn took out the controller? I'll try to get some syslogs ASAP; at this time I have lost connection to the server. I am not sure what direction to go in from here. I would assume I should have the red-balled disk pulled before powering it back up, and then see if the other 3 drives still have issues?
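     Once I can get hands on it, the plan is a read-only filesystem check on the emulated disk before trusting any rebuild. Roughly this, assuming disk 15 on unRAID 5 with the array started in maintenance mode (substitute your own disk number, and only run the read-only check at this stage):
       # check the emulated disk without writing any fixes
       reiserfsck --check /dev/md15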
  2. True, but if you move a lot of data, you will run the mover more times a day, and then you have the whole array chugging and eating more power in the long run. But having it always ready to take data without a spin-up is nice.
  3. I'm kind of with weebo on this one, and I use SSDs and RAID 5 arrays for my cache drives. If you pick up a 1TB, 2TB, or 3TB Seagate with the 1TB platters and use that for a cache, it will be faster than your network. Also, you have a warm spare if a drive fails. Plus, if you run out of array space and no drives are on sale that week, or you're between checks, you can toss the cache drive into the array and order a new cache drive. The one huge advantage to an SSD is that you can write to it while it's running the mover and it will still be just as fast, but that won't be a problem for most.
  4. Tom, I'll be in So-Cal for two weeks at the beginning of February. I could drop off an X9SCM, a Xeon/i3, and 16GB of RAM for you to play with while I'm in town, if you think this would help out with this issue. You could also keep it for a bit as a test rig for the 64-bit testing and ship it back when you're done.
  5. Did I spot an FTP icon under Network Services? Other than that, I'm going through the paces. No explosions yet.
  6. A quick follow-up. I had a power failure today and I was actually home, so it was a good time to pull both SSDs and test them. The good SSD reported zero errors; back into the server it goes. The bad SSD reported more reallocated sectors than I can count, and every time I tried to do anything to it that involved a write, it would disconnect and go offline until I unplugged its power. Back to Corsair it goes. It didn't even last a year. I also noticed they have discontinued the Performance Pro series of SSDs, so I am wondering what I will get back. I also noticed that none of their current drive line has enhanced garbage collection. If they send me anything less than another Performance Pro or a Neutron, I'll be insulted.
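     For anyone who wants to check their own drive the same way, this is roughly what I looked at, using smartctl from smartmontools (replace /dev/sdX with whatever device your SSD shows up as):
       # overall health verdict
       smartctl -H /dev/sdX
       # attribute table; the reallocated/pending sector counters are the ones to watch
       smartctl -A /dev/sdX | grep -i -E "reallocated|pending"
       # kick off a short self-test, then re-read the attributes a few minutes later
       smartctl -t short /dev/sdX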
  7. His are nose down. I have seen people claim the extra stress on the drive in that orientation wears it out faster, but I have seen no evidence to prove that. Yes, sideways is about the same stress, IMO.
  8. Yes, there are people reporting a few issues passing through some cards in ESXi with the Ivy Bridge CPUs; it's unknown if it is a chip or software glitch. Best to stick to what the motherboard was made for. Also, save the money and skip a 12x5 chip: that just adds the HD 3000 graphics to the CPU, which you can't use with that board. Stick with a 12x0. If you're on unRAID 5.x, moving the drives to the card will require no user interaction; if you are on 4.x, you have to do some extra work. Since you mention 3TB drives, I would do that upgrade first; you can even do that and test it before going to ESXi. And yes, at any time you can pull the ESXi thumb drive out and it should boot right to unRAID (unless it finds a bootable HDD in the server). Read the first page of my Atlas thread; it should cover all of your questions.
  9. Nice clean build. It reminds me a lot of my first unRAID build: the same setup in a different case (and WHS v1 before that). That's debatable... Dell and HP have both hung HDDs in that orientation in some desktop models. As long as it is on some form of a 90-degree axis it "should" be OK; some odd diagonal, I would say no. That PSU will probably be at its max before the server is maxed out.
  10. The only reason against the M1015 is the lack of warranty. I have a mountain of them. They rock. (They rocked harder when they were $65.)
  11. You are correct on the CPU upgrade; that is the minimum I would use. (You really don't need anything larger for most uses, only if you are doing heavy CPU lifting like a Blu-ray ripping VM or an Exchange server VM.) The RAM upgrade is recommended but optional; 8GB is the minimum and limits how much you can do with the system. The AOC-SASLP-MV8 passes through in ESXi with a minor hack (see the sketch below). 3TB (and larger) drives work just fine. No data loss if done correctly. Keep in mind that installing ESXi will format ALL drives it can find, including your unRAID drives if they are plugged in, so make sure you have every hard drive except your datastore unplugged during the install.
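     The "minor hack" is basically one line added to the host's passthrough map, a reboot, and then enabling the controller for passthrough in the vSphere client. Something like the below; the vendor/device IDs are placeholders, so pull the real ones for your card from the host's device list before editing:
       # append the card's IDs to /etc/vmware/passthru.map on the ESXi host
       # format: vendor-id  device-id  reset-method  fptShareable
       echo "vvvv  dddd  d3d0  false" >> /etc/vmware/passthru.map
       # reboot the host, mark the controller for passthrough, then add it to the unRAID VM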
  12. I should have mentioned that the one I got was the 2.5" form factor, not 3.5".
  13. I forget where I saw it, but someone published instructions on how to make those from spare blanking plates.
  14. I will assume the answer is no. I say this because I have had other LSI cards fail in ASUS x16 slots on some desktop P7-series motherboards. That is because the x16 slot only has the x16 power pin, while the LSI card draws its power from the x4 (or was it the x8?) position. I'm not saying it runs at x4; that is just where the power pin sits on many LSI cards, so the card never fires up. Plus, it would be so slow after just a few drives that you would pull your hair out.
  15. I have not seen any in a while. The one you linked was the one I knew of. There was a similar thread over at [H]ard last summer; they might have had some good answers there, so I would go see what they found. If you do find anything, please report back. I ended up getting a Supermicro CSE-M14TB, which has an SFF-8484 connector on it, and then got an SFF-8484 to SFF-8087 data cable. If you do go this route, that fan is horribly loud; you will have to do something about that if this is for your home.
  16. I have been running about 9 of these in my main unRAID box for about 9 months now, including one for parity (it was much faster than the Hitachi 7200 RPM drive I had in there). So far, not one reportable error. I did have to update the firmware on all of my drives. I also have 6 of the 2TB version: 4 in a ZFS RAID-Z1 array and 2 spares as JBOD that I use for scratch drives (and cold spares should I need them). Again, no reportable errors. They are fast, but the warranty is poor and they are not rated for 24x7 operation (turn on your power management). I bought them for the price; so far, so good.
  17. If you want to run a non-storage server platform, this might be a good choice for a lab environment. For a storage server, especially unRAID, I would recommend you put your money into something more appropriate.
  18. That is really too vague. The DL580 comes in about 60 configurations across 7 generations, going all the way back to before HP bought Compaq. Chances are it does not take standard SATA disks, though: the older ones are SCSI and the newer ones tend to be 2.5" SAS. A few might have had a SATA bay added as an upgrade option.
  19. Rafters, I/O slot adapters, or Velcro; I have used all of those. I just leave mine lying on the bottom of the case.
  20. Because you have a passthrough card, you have to set the memory reservation to match the amount of RAM you are giving the VM. Just adjust that in the VM's settings (the advanced tab under the VM's hardware settings?).
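     If you would rather set it in the .vmx than in the GUI, it is roughly these two lines, with the sizes in MB matched to the VM's memory. Treat this as a sketch and double-check the option names against your ESXi version:
       memSize = "8192"
       sched.mem.min = "8192"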
  21. OK, it's baaaack... I lost an SSD and, it looks like, a flash drive. The flash might be fine; it just did not want to boot back to the v5 flash drive after I did a clean install of 5.1 on another flash drive. At a few bucks each, I'll just format it and put it to use elsewhere; it's not worth my time to look at it. It's a flash drive, sheesh.
     The SSD is still in the server for now. I'll pull it out after my ZFS scrub and unRAID parity check pass. It might be fine after a reformat; it might not be. It might have a ton of burned-out cells. I'll format it, SMART test it, and run SSDLife on it and see what I get. It's got years left on the warranty, so I might be calling on it. I'll let you guys know.
     I cannot stress enough the importance of backups!!! I thought I was up to date on my backups. It turns out I was not. I had backups of most of my VMs (the ones on the ZFS array that I didn't need to restore), but I did not have a good backup of my FreeNAS build, and I did not have a current backup of my Usenet VM (7 months old; the backup share ran out of space and I didn't look at the logs). I was backing up the SSDs daily to the ZFS array, to a share that was 100% full, then backing up the ZFS once a week. (I always say: it is not a backup until you test the restore.) I also lost a few test VMs that I don't care about.
     The Usenet VM was not so bad, because I do back up the database and the downloads are kept on a separate ZFS VMDK. Restoring a 7-month-old image, patching it, and restoring the database was good enough; I only lost about 4 hours of data, since the database backup happened right before the crash.
     The FreeNAS rebuild was horrid and made me question why I use this product. I had to reapply all of the hacks. After that, it would not re-import the array at all in the GUI. When I tried it from the CLI, it failed the mounts and came back up as an empty array. I decided to try an export from the CLI and then re-import from the GUI now that I could see it. This time it froze. I was pretty sure I had lost all of the data and would be doing a massive restore. I walked away, and about 20 minutes later it unfroze and mounted the array; apparently it was just doing its job while being unresponsive. "Unforgiving," I believe I called it before; I still go with that. I recommend those considering this route look at a hardware RAID card or an external storage array. It might just save you a headache.
     So, moral of the story: backups. Also, when your hardware starts giving errors, fix it or replace it. I had warnings this was about to happen. Bottom line: what a waste of a weekend, although most of it was just doing restores. I'm just glad it happened before I left for several months, if that even makes any sense.
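     For reference, the CLI export/re-import was just the standard ZFS dance. Roughly this, with "tank" standing in for your pool name (and be patient; it can look hung while it is actually working):
       zpool export tank    # cleanly detach the pool from the old config
       zpool import         # list pools that are visible but not yet imported
       zpool import tank    # bring it back in (only add -f if you understand why it complains)
       zpool status tank    # confirm the pool came back healthy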
  22. There has been a lot of talk in this thread about ZFS inside ESXi. This page has some good info on datastores on *nix boxes inside ESXi: http://www.vmdamentals.com/?p=4465 He said Nexenta was the only one to successfully rebuild in his testing. I think I need to do a test myself once I get back up and running. I wish he had tried OpenIndiana (it should be the same result as Nexenta).
  23. Quote: "Ouch! My worst nightmare come true. I badly need to work on a backup system and the ability for my machine to power itself down when the UPS fires off. Need to move the machine off my coffee table in the living room too! I too have put off 5.1; now that I know what I was doing wrong before, I think I need it to run Win8 in a VM, and I still want to get OSX going. Do please keep us posted; hopefully this won't be anything too bad for you to fix. <shiver>"
     I found the root cause really fast. If anyone remembers the SSD I was having issues with a month or two ago, well, that should have been a sign of things to come. I should have backed it up and done more testing on it; unfortunately I was out of town and there was not much I could do. That is the root of my issues: it was a cache write error, and it so happens it is also the SSD my ZFS datastore is on, so that locked up my ZFS, which in turn took out almost all of the rest of my VMs. I had been keeping backups, so nothing is lost other than maybe an SSD.
     As of right now, any attempted write to that SSD causes an error and ESXi drops it. At least 5.1 does; 5.0 just left it online but frozen. That SSD is heavily abused: my SQL server, another 24x7 database, and my Usenet downloader all run on it, so it sees more pounding than it should, and in addition it never goes idle long enough for garbage collection to do its job. I knew it was going to fry eventually from this.
     You mentioned backups. I use both ghetto backup and the free Veeam. The free Veeam is really damn nice for datastore migration; it goes as fast as your machines can go. I did a final backup of the system at pretty much gigabit speed to my Windows server (pic attached).
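     On the "ghetto backup" side, I mean the ghettoVCB script. Run from the ESXi shell it looks roughly like this (paths and the VM list file are just examples; check the script's docs for your setup):
       # back up the VMs named in a text file, one VM name per line
       ./ghettoVCB.sh -f vms_to_backup.txt
       # or back up every VM on the host
       ./ghettoVCB.sh -a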
  24. I woke up this morning and Atlas was locked up. A soft reboot failed, and a hard reboot resulted in a crash shortly after the console comes up. No VMs will start. I'm getting a generic error on the ESXi side before it totally dies, just a minute or two after reboot. It looks like a license error; it could also be an ESXi cache failure. I am going to bet the flash drive is finally starting to give up. It just so happens I am home this weekend, so I can work on it (talk about timing). I have also been putting off upgrading to 5.1; now I have that chance. Installing to a new flash drive now. I'll let you guys know the result.
  25. In my opinion, this latest build is still too fresh to just slap a "done" stamp on it. There might still be other hidden bugs out there; it needs to go through the wringer. As for the slow write issue, I am not seeing it at all. I have 2 identical boards and neither shows this issue.