
ElJimador

Everything posted by ElJimador

  1. Good to know. In that case I'll just stick it back in my home server where it's been working fine. I was considering switching it with the LSI in my shared Plex server once I finish downsizing the latter, but it doesn't seem worth the risk now when both cards are already working well enough where they are. Come to think of it, maybe I'll try the SAS2LP w/the new backup server running 6.4 after all. The motherboard I'm using for that has 8 SATA ports, so if I run the initial array onboard to start and make sure parity is valid before I try connecting the drives to the card instead, then there shouldn't be any risk there, right? If I get parity sync errors or dropped disks then I'll know to buy another card before I need to expand; meanwhile I'd be able to fix any errors by just reconnecting the disks onboard and running a new parity check, like I'm going to be doing on the home server now. My only concern would be not getting any errors initially and having these problems appear only after I've expanded to the full size array. Even though the data is all backed up, it would still be a pain to recover if out of nowhere I have multiple disks drop during a parity check. But there are v6 users who haven't had any problems with the SAS2LP, correct? And for those like me who have, don't they usually see it right off the bat? IDK, seems like it's worth a shot at least.
  2. Really? I've only had problems with the SAS2LP. Other than being slow on parity checks (since it only uses 4 lanes @ PCIe v.1 speed) the SASLP has been rock solid for me. Maybe I'll go with the LSI anyway though. I'm seeing that used for around $50 and the speed bump alone is probably worth the difference, especially going with an 8TB parity drive in the new server. Otherwise parity checks are going to take 2 days.
  3. Got it. Thanks Johnnie. And actually I just got on EBay and found used SASLP-MV8s for under $20, so at that price I'm not even going to bother trying to use the SAS2LP-MV8 with an earlier version.
  4. I didn't realize there were still issues using this controller with v6. I had pulled it from my shared Plex server almost a year ago after issues that I later diagnosed as being caused by a bad breakout cable instead, when they persisted with the LSI 9211-8i I used to replace it (namely connected drives dropping from the array, although IIRC there were also random parity sync errors that did not carry over even before I replaced the cable). Anyway, if any of the issues really were with the SAS2LP-MV8 itself I figured they would have been fixed by now, so I just tried putting it in my home server as an upgrade to the SASLP-MV8 I'd been using in that machine. Which was a mistake apparently. Not quite halfway into the first parity check with it, it's running a touch slower than w/the SASLP-MV8 (using it in a PCIe 2.0 x16 slot), plus it's already showing 10 sync errors corrected when I never had any sync errors before, and I had just run a parity check prior to swapping the cards. So 2 questions now. 1) When I put the SASLP-MV8 back in, should I just run a new parity check to correct the false error corrections from the SAS2LP-MV8, or would it be a better idea to do a new config with the same drive assignments to rebuild parity altogether? 2) I see now that the Hardware Compatibility wiki mentions parity sync errors if you use the SAS2LP-MV8 with 6.1.x or later. Does that mean I'd be safe to use it with 5.x or 6.0? I have a new backup server I'm building that's purely going to be a dumb file server, without any of the advanced features of 6 required. So I'd rather not buy a new controller if it isn't necessary; however, if there's any chance the card is not going to play nice with those earlier versions either, then I would rather spend the money now and just stick with 6.4 to start. Last, if it is a safe bet to work with 5.x or 6.0, can I still download those earlier versions or would I need to email support for that? Thanks.
  5. Alright, switched the flash drives between the servers and everything went according to plan. The only wrinkle: when I first booted up the shared Plex server with the flash + Plus key that had been in the new one, I got "unclean shutdown detected" on the main page; however, after rebooting again it went away and showed configuration valid. Anyway, mission accomplished and thanks again.
  6. Yeah that's what I figured. Thanks for confirming. I'm going out to the shared Plex server tomorrow and I've got the flash drive that goes w/the new Plus license ready with the rest of the contents copied from the Plex server's Flash backup + the copied syslinux folder. I'm sure that will do the trick this time but will post back then to confirm.
  7. Update: I was able to copy the syslinux folder from one of my other servers and got the new server to boot again. Then, since that worked, I made a new Flash backup of that and followed the same process to re-create it as the new flash I was trying to make, this time with the Pro key swapped in. That also booted, but now in the banner of the GUI the licensing reads: "unRAID Server Pro - Invalid Key" and clicking the link there gives me the screen shot attached -- LEX_Registration.pdf Am I to gather from this that instead of swapping the key files between the 2 flash devices, I was supposed to keep the keys where they were, swap out all the rest of the contents instead, and then physically swap the thumb drives between the 2 machines? Sorry if I'm being dense but this is turning out to be a little more complicated than I thought.
  8. Okay, I always assume user error when things don't work right, but I think there may be a problem with the Flash backup utility. I followed the steps above on my Windows laptop, but when I put the thumb drive for the new server back in (now with the Pro key swapped in from the other server's Flash backup) I got a BIOS error on boot: "Warning: No Configuration File Found". Made sure I had fast boot disabled and all that but still no dice, so I thought maybe I was supposed to run the make_bootable batch file (which is not in the instructions for the USB Creator tool, so I wasn't sure if that's still required?). Anyway, put the thumb back in the laptop and ran that as administrator and got "ERROR- syslinux executable not found, expected at G:\syslinux\syslinux.exe". So now I figured I just borked something in the process of unzipping the Flash backups, swapping the keys, and zipping them back up again. So I formatted the flash and this time I pointed the USB Creator tool back to the new server's original Flash backup zip file with the Plus key, but still got the same errors on boot and running make_bootable. So I got curious and unzipped the Flash backups of my other 2 servers currently running and compared the contents of those to the actual flash drives, and sure enough the syslinux folders on the running flash drives are not present in any of the unzipped Flash backups. (Again, these are the original Flash backups, untouched by me except to unzip them.) I'll futz around and see if copying the syslinux folder from one of the running servers and zipping it back up with the rest of the contents of the new server's Flash backup gets it to work, but in the meantime please let me know if I did something wrong here on my end. If not, you might want to have someone look at the Flash backup utility and make sure it's actually backing up all the contents needed to recover from the zip file it creates.
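The repair described above — grafting a known-good syslinux folder into the extracted Flash backup before re-zipping — can be sketched roughly like this. All paths here are stand-ins for illustration: `working_flash` plays the part of a known-good server's thumb drive and `backup_unzipped` the extracted Flash backup zip, and the stand-in config file is not a real syslinux config.

```shell
# Simulate restoring the missing syslinux folder into an unzipped Flash backup.
# Stand-in folders only; the real sources are the mounted flash drive and
# the extracted Flash backup zip.
mkdir -p working_flash/syslinux backup_unzipped/config
echo "stand-in syslinux config" > working_flash/syslinux/syslinux.cfg

# Copy the whole syslinux folder from the working stick into the backup;
# the backup contents would then be re-zipped and written back out
# (with make_bootable run afterwards if that's still required).
cp -r working_flash/syslinux backup_unzipped/
```

This only restores the folder contents; whether make_bootable still needs to run on the finished stick is the open question from the post above.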
  9. Thanks. I noticed that flash backup mentioned in the 6.4 release notes and have already tried it. Works great! Only wish there was a way to do this remotely so I don't have to drive 100 miles to the shared Plex server that I want to switch from Pro to Plus. My niece hosts that server for me because she has fiber internet, and I'm not sure anyone in her family even has a computer, otherwise I'd just call her and walk her through it. But whatever. I get why you'd have to pull the thumb drive and not just copy the key files while the servers are running and hope that rebooting them would accomplish the same thing. If there is any other way this can be done remotely (i.e. without physically having to pull the thumb drive) then let me know. Otherwise I'll just coordinate with my niece and I'm sure we'll figure something out. Thanks again.
  10. That's right, I just want to switch the licenses only: the new Plus key applied to the shared Plex server while its Pro key goes over to the flash drive I'll be using in the new backup server. I'm still waiting on hardware for the new server, so instructions for the how part are not as big a priority right now as just confirming that I will be able to do this before I buy the new license. So I guess I'll wait for a mod to chime in, or email support if I don't get confirmation here. Thanks for your input.
  11. No the new server will be an entirely different config. So just need to switch the actual license keys themselves then. Got it. Thanks.
  12. Thanks Hoopster. Is it just the Pro.key and Plus.key files under flash/config that would need to be switched, or any other config files along with them?
  13. I have 2 Pro licenses already, for a home and a shared Plex server, and I'm wondering: if I buy a new Plus license for a third server for offsite backup, would it be possible to switch one of the Pro and Plus keys between servers? I overestimated the capacity needed on the shared Plex server and would like to apply the new Plus license to that and move its Pro key over to the new backup server. Assuming that's kosher, any particular instructions for doing that? Or is it simply a matter of buying the new Plus key and then copying the Pro and Plus key files between the flash/config folders?
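The "just copy the key files" idea above can be sketched as a dry run in stand-in folders. This is hypothetical and only shows the file shuffle itself — `plex` and `backup` are made-up mount points, the key contents are placeholders, and (as the later posts in this thread show) swapping the key files alone is not the whole story for a working license transfer.

```shell
# Simulated key swap between two flash drives, using stand-in folders.
# On the real sticks the key files live under /config.
mkdir -p plex/config backup/config
echo "pro-key-contents"  > plex/config/Pro.key     # stand-in for the Pro key
echo "plus-key-contents" > backup/config/Plus.key  # stand-in for the new Plus key

# Move each key into the other server's config folder.
mv plex/config/Pro.key    backup/config/Pro.key
mv backup/config/Plus.key plex/config/Plus.key
```

After this, each config folder holds only the other server's key, which is the end state the post is asking about.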
  14. Nevermind. Resetting the modem did the trick. Dockers are back on and the forced parity check after unclean shutdown is running now.
  15. Thanks Frank. The server is back on again, but unfortunately I still can't access it through the GUI or SSH, and it looks like the problem is with the network, because now the 2 Windows machines I have there can't see or ping each other either, or any other device on the network. I've tried disabling anti-virus and Windows firewall on both of those, but the net view cmd still returns error 6118. I've asked my family there to reset the modem/router, but I'm not sure what else to try if that doesn't work. Wish I had considered that it was a network problem all along. Unfortunately I didn't even think to check yesterday whether the other computers could see each other, and if that's what it was all along I could have avoided the hard power down.
  16. I was working on my remote server today (the shared Plex server in my sig below) via a Chrome Remote Desktop connection to a Windows machine I have on the same network, when all of a sudden the server dropped offline and became unreachable through the GUI or via SSH. A family member there tells me that the power light is on and ethernet is connected, so I don't know what happened, but I'm wondering if there's anything else I might be able to try here other than just hard powering down and booting back up again? If it helps to diagnose: at the time it crashed I had 1 remote user watching Plex + Resilio syncing files to one of my home machines, while from the local Windows machine I was remoted into I had Syncback running a backup to the cache drive at the same time I was deleting files from the array through Windows -- usually 1 at a time, but on the last one I tried deleting 2 large movie folders together (via the share folder, not an individual disk), and it was exactly at that point that it hung for what seemed like a full minute and then crashed. All at once I got an error message from Windows on the file delete, Syncback popped up that the backup failed because the network connection was lost, and on my home machine I saw that the Resilio sync stopped also and that the server was offline (and Plex too, of course). After that I haven't been able to get the unRAID GUI back up or SSH in to it either. So anything else I can try now? I really hate hard powering down but I don't know what else to try here if there's no other way back in. And of course running a parity check will be the first order of business when I do get the connection restored or rebooted (if it isn't forced on me regardless). Thanks.
  17. Hallelujah! The new PSU arrived early and I was able to work up carefully to booting the entire array with not a single problem along the way. Parity sync / data rebuild with the new parity drive is running now. So what do you guys think after all this? I know part of the problem w/this board was not having the correct NB config in BIOS to boot with the controller in the PCI slot; however, I'm almost positive that I didn't touch those settings until after it stopped powering up in the first place, and I know the other board was working fine because I'd just pulled it out of a working Windows desktop not 2 days before. So do you think the PSU I started with (the Cooler Master V550) was the most likely culprit that started all this and killed the other board (flushing my Windows OEM license with it)? I'm thinking I've got to RMA it at this point, since I just don't trust putting it in another machine.
  18. And just when I thought I was going to be able to get the server back up again: Install the board & PSU in the case and successfully boot, first without the SATA controller in the PCI slot, then with it in, then w/SAS cable and SSD connected to it, and finally with all the onboard-attached drives connected too. A good start. Attach SAS cables to the 6 data drives in the second case and boot up, where unRAID shows 3 of those drives (1, 5 & 6) all missing. Whoops. Power down and double check the connections. One of the SAS cables (the one w/D5 & D6 attached) hadn't clasped in the card and had fallen out. Not sure what happened with D1 since it did appear to be connected, but whatever. Double check that all drives attached to the controller are now firmly connected on both ends to SAS cables and the PSU, disconnect all the onboard drives just to be safe, and try booting again = nothing. Won't turn on. Disconnect all drives attached to the controller and remove the controller from the slot = still won't turn on. Clear RTC RAM and try again = same result. Remove battery and clear RTC RAM and try again = still nothing. So now I'm back to wondering about the power supply. I remember back when I was running this server w/only 6 drives in a single case, I was using a 300w Seasonic as the PSU for some time, and at one point when I had an issue I was cautioned by someone here that the unit may not be putting out enough amperage (24a on the 12v). Which is exactly why I bought the Cooler Master V550 when I expanded to the 2nd case and 12 drives. 550w seems like overkill to power a 9w TDP processor + 12 drives, but I wanted to make sure I had plenty of overhead on the amps.
Specs for the V550 say it puts out a max of 45a on the 12v rail. If you figure 2a per drive (WD actually says 1.75a peak for the Reds, which are 10 of the 12 drives I'm using, but let's just round up), that's 24a for the drives, which still leaves 21a to power up an Atom-equivalent super low power embedded board/processor + the controller and 6 case fans. And that's it. So there's no way the output should be insufficient unless the PSU is just crapping out now, right? I've got another 550w PSU (Corsair RM550x) coming on Monday that I was going to use in the new Ryzen desktop. Think I'll try it with this server instead and cross my fingers that I'll be able to get it to boot up again. The other PSU I was testing with earlier is a 400w w/only 30a max on the 12v, so I don't see any point in even trying w/that, since even if I could work back up to getting it to boot at all, and then with up to 6 drives attached, I would never try to boot the whole array with it. So let me know if you think there's something else I should be trying in the meantime; otherwise I'm in a holding pattern until I get the new PSU. BTW, if it does turn out that the V550 is at fault and started this whole sh*tshow, does the warranty on a PSU typically cover connected components as well (like any decent UPS warranty does)? I've never had a problem w/a PSU before, but if that's really what killed the other board (and maybe took my OEM Windows license down with it) then I'd say Cooler Master owes me more than just a replacement unit. But that's getting ahead of myself I suppose. Anyway, I'm calling it a day. It's clearly time to start drinking and forget this nonsense for a while.
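The 12v budget above works out as a quick back-of-the-envelope calculation. The 45a rail rating and 2a-per-drive figure come straight from the post; they're spec-sheet assumptions, not measured draw.

```shell
# Rough 12v rail budget for the V550, using the numbers from the post above.
rail_max=45        # assumed max amps on the 12v rail (from the V550 spec)
drives=12
amps_per_drive=2   # rounded up from WD's 1.75a peak figure for the Reds

drive_draw=$((drives * amps_per_drive))
headroom=$((rail_max - drive_draw))
echo "drive draw: ${drive_draw}a, headroom left: ${headroom}a"
```

Even with the rounded-up per-drive figure, 21a is left for the low-power board, the controller, and 6 case fans, which backs the conclusion that a healthy V550 shouldn't be the bottleneck.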
  19. Alright, testing again with the other PSU.
     H87I-Plus
     Starting with RAM and CPU fan installed this time, but nothing else = turns on, no beeps (should be a code for missing VGA, right?)
     Add VGA only = turns on w/no beeps or video output
     Remove battery and clear RTC RAM = same result
     Add graphics card and attach VGA to that, also add unRAID thumb + ethernet = still no video output, and can't access the unRAID GUI through another computer either (so whatever it's doing while it's "on" without video, it's not booting into unRAID). (BTW the fan on the graphics card turns on, so there is power going to the PCI slot.)
     Conclusion: Dead board, just like the one on EBay. Turning on (supposedly) but w/no video = useless.
     C60M1-I
     Starting where I left off yesterday w/VGA, unRAID flash, ethernet, keyboard and mouse all connected = boots, where I go into BIOS and change only the NB config from the optimized defaults. Per an old post that I finally found here, I enabled multi-monitor and changed Integrated Graphics from Auto to Forced while keeping the primary video device as PCIe (still hoping for some magic combo that will make it work)
     Exit BIOS and let it boot into unRAID, where I power down
     Add SATA controller (SASLP) = boots up this time. HEY!!
     Power down and connect SAS cable + SSD to the card = boots again and the drive is visible in the unRAID GUI. WOOHOO!
     Power down and go back to the original PSU with the same config = same result.
     Conclusion: It was the *#&% BIOS settings after all. The board appears to be okay, though I still don't know how the BIOS settings got changed to start this whole mess in the first place, since I don't recall touching anything in the NB config until after it stopped booting with anything in the PCI slot. Still, it's great news.
I will now VERY carefully reinstall it in the other Node 304 case (the one that was only housing the expansion drives) just in case there was anything in the mounting of its twin that was creating some kind of short, and assuming it continues to work I'll start reconnecting the hard drives one at a time and then (hope, hope) start the array and move forward with the parity replace procedure. Going back for a moment to the other board: I had been using it up until a few days ago in my own Windows 8.1 Pro desktop, which I was in the middle of repurposing for my sister's use since I was going to be upgrading to a Ryzen myself. And unfortunately it's the OEM version of Windows, with the installation media lost (tossed by my ex when we separated) AND with careless me not having saved the product key anywhere either. So any idea if, in this situation, I should be able to get a new key from MS to re-install it on the replacement motherboard? My understanding is that they will allow you to re-install the OEM version on a new board if your old board dies, and I do still have the receipt from when I bought the Windows that was running on it; however, there's no S/N or anything unique on that for them to tie it to the original key and know that I'm not actually running it on another machine already. Damn, I really wish I had never even tried using that board once the C60M1-I BIOS got changed and stopped booting with the PCI slot populated. Replacing the board AND having to buy Windows again is not something I'm looking forward to (or that my sister will be looking forward to, I should say, since she'd be the one footing the bill at this point). But hey, on the bright side, at least it looks like everything is going to be good again with the original board. So thanks everyone for sticking with me through this and encouraging me to try one more time.
  20. Yeah, you're probably right. I'd also probably see stuff like my monitor or the light in here flickering or that sort of thing, and I can't say I've ever had any of that. Still grasping at straws I guess. BTW, check out this motherboard listing I just found on EBay. Sounds familiar, doesn't it? *Broken* As-Is ASUS H87I-PLUS LGA1150 Intel H87 ITX Motherboard System powers on, but no beep or display. Only had the i5-4460 to test the board and a set of known working DDR3 RAM. No other debug was done. The CPU Socket is clean, and the I/O shield is included.
  21. You know, I was just thinking. My "lab" as I call it is a converted front porch area of my little bungalow, so poorly insulated that I had to move my home server to the living room in summer for fear of the high temps, and I've been running without a UPS in this room since it followed the server into the living room. I am using a surge protector power strip from Amazon, but it's a $10 item. If my landlord's contractors did as crappy a job on the wiring in this room as they did with the insulation and ventilation (also non-existent), I wonder now if I've got some kind of a dirty power problem?
  22. Yes I do, and I guess just to leave no stone unturned I'll go ahead and do the same exact tests tomorrow. But as I said in my earlier posts, I've already tried swapping out the PSU in my earlier struggles and have gotten all the same boot shenanigans. And as low as the possibility is of 2 of the same kind of components (motherboard, PSU, RAM) dying at the same time, the most likely culprit would be some kind of electrical short or surge, wouldn't it? It's not likely to just be sheer coincidence. And if that happened, which components would be the most vulnerable?
  23. Thank you Jonathan. I followed your advice and unfortunately the results weren't pretty.
     C60M1-I
     No speaker header on the board, so no beep codes; however, everything worked fine up through installation of RAM, VGA, mouse, keyboard, unRAID flash drive, and ethernet, with no problems booting into the unRAID startup menu and starting memtest. After that though:
     Install SATA controller (SAS2LP) in PCI slot = wouldn't post
     Remove SATA controller and try again = wouldn't post
     Clear RTC RAM and try again = posts, then in BIOS choose Optimized Defaults and exit, and it goes on to boot into unRAID
     Power down and install different SATA controller (SASLP) = wouldn't post
     Remove SATA controller and try again = still wouldn't post
     H87I-Plus
     First try w/nothing but CPU gave me the appropriate beep code for missing memory
     w/RAM installed now (single stick only) gave me a beep code that I interpreted as missing VGA (1 continuous beep followed by 3 short beeps), but in hindsight realized was actually for hardware component failure (1 continuous beep followed by 4 short beeps)
     w/VGA connected got the same beep code for hardware component failure; however, I did get video output to a prompt to F1 into BIOS setup, along with the error message that no keyboard or CPU fan was detected (might not be what you intended, but I was following your steps literally and hadn't plugged them in yet, so I assumed the hardware component failure beeps on both occasions were for the missing CPU fan)
     Hard power down, attach CPU fan and try again = fan spins up and no beeps, but no video either
     Clear RTC RAM and try again = same result
     Remove battery + clear RTC RAM and try again = same result
     Install video card (GT430) into PCI slot, connect VGA to it and try again = same result again
     So what do you think?
My best guess at this point is that there probably was some kind of fluke short in the mounting (though I still have no idea what might have caused it), and in my repeated attempts to get things working again I inadvertently fried both boards. I'm certainly open to any other theories, though, or anything else that might still be worth trying. I suppose I could try ASUS support, but since both boards are out of warranty that just seems like an exercise in further futility to me. (I wound up having to RMA a monitor with them before and I can't say I found them to be particularly helpful.) So unless there's some other avenue I'm not thinking of, I guess I'm buying a new motherboard??
  24. Thanks BJP. I know my frustration is showing at this point so I want to be clear that I really do appreciate anyone willing to hang with me through these long posts and continue to help me out with this. And yeah, now that you mention it I guess the case is the one constant component in all these struggles that I hadn't considered before (that and the unRAID thumb drive, as unlikely a culprit as that would be). I have checked in swapping out the different parts that there wasn't anything obvious like a loose screw wedged under the motherboard creating a short. But after swapping out everything else it probably is the most rational explanation left, and one that may not have ever occurred to me on my own. So thank you. Tomorrow I'll try a different case and see if that finally solves it. Again, thanks for the feedback and fingers crossed that things go better tomorrow.