Everything posted by BrianAz

  1. Hello @aptalca, thanks for reaching out. I did not test what you describe above using apcupsd, but I would be very curious to hear the results of the same test using NUT. My plate is pretty full for a while with other projects, but if I get an opportunity to do a proper test as you've laid out above with NUT, I'll be sure to post back with my results. Likewise, if you make the switch, let me know how it goes! That's a pretty awful scenario you've described; I'm curious to find out what happens with NUT. Thanks -Brian
  2. This forum is for the current stable release. Since you're running the beta, I'd recommend posting in the correct thread here: http://lime-technology.com/forum/index.php?topic=48193.0 Running a beta is usually best reserved for people with a spare unRAID server, so that any issues or data loss don't matter. Based on your post, it sounds like this is your primary/only unRAID server, so I'd recommend staying on 6.1.9 until the beta cycle ends to remain stable.
  3. Great to hear! Once you complete your RFS -> XFS conversion, come back to confirm the issue is gone. I had a lot of trouble finding good info on my issue and had to piece it together from multiple threads. This one seems to summarize the issue & fix well.
  4. How's it going? Like you, I think I had a lockup every 3-7 days, depending on what I was writing and how frequently. I wanted to be REALLY sure, so I think I ended up leaving it for ~3 weeks. That was probably overkill though.
  5. Right... in my experience, reads were never an issue, only writes to the RFS disks. With that in mind, I have a 500GB XFS cache drive. I made sure that EVERY user share was configured to write to the cache, which is critical in making sure nothing is writing to the array (RFS) disks. Finally, to make sure the mover itself never wrote to the RFS disks during the test, I changed the mover schedule to monthly and picked yesterday's date. Of course there is a slight risk, since anything you write during the test is not parity protected while it sits on the cache disk... but that was a small risk imo. Good luck! Let me know if you have any questions. A quick way to double-check the share settings from the console is sketched below.
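     If I remember right, unRAID 6.x keeps per-share settings in plain-text files on the flash drive, so a rough check like the one below can confirm every share is actually set to use the cache before running a test like this. The path and the shareUseCache key are from memory, so verify them on your own box first:

        #!/bin/bash
        # Rough sketch: report the cache setting for every user share.
        # Assumes share settings live in /boot/config/shares/*.cfg with a
        # shareUseCache="yes|only|no" line -- verify on your system.
        for cfg in /boot/config/shares/*.cfg; do
            share=$(basename "$cfg" .cfg)
            setting=$(grep -o 'shareUseCache="[^"]*"' "$cfg")
            echo "$share: ${setting:-shareUseCache not set}"
        done
        # Any share not set to use the cache could still write straight to the
        # ReiserFS array disks during the test.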
  6. I was in the exact same boat, I think. When it locked up for me and I checked the load via ssh, it was skyrocketing (well into triple digits), and nothing would get it to calm down except a reboot. Additionally, I ran a test where I wrote only to a 500GB XFS cache drive for a month (mover disabled, all shares using cache) and I saw NO issues during that time, so I felt very confident the RFS -> XFS conversion of all my drives would work. You might check those two things to see if it is indeed the same issue as mine and several others'. Even if you go back to v5 eventually, it would be nice to confirm for the future, I think... I went back to v5 at least twice myself while troubleshooting. I wanted stability over features since I don't use any dockers or anything. Eventually though, I wanted things like built-in UPS support, the much nicer GUI, and to be running a version close enough to current that if I needed support, I wouldn't be the only one running it. Took me 2-3 weeks to do all my disks, but like I said... it worked perfectly after. As you say though, it is the point of no return, so everyone needs to make this call on their own.
  7. Are your disks ReiserFS? How full are they? I had an eerily similar issue when I came over from v5. On a different note... isn't the AsRock Extreme 4 motherboard LGA1155/Intel?
  8. Thanks for the feedback. I have now installed and tested the NUT plugin with my CyberPower CP1000PFCLCD unit (twice), and it appears to work flawlessly. I got a notification from unRAID when I pulled the UPS power from the wall, then another notification 3 minutes later stating that unRAID was shutting down. Once the server halted, my UPS shut itself off just as I had hoped (with more than ~20 min of estimated runtime left). I waited a few minutes and plugged the UPS back in to simulate power being restored. My UPS kicked on and showed a 10 second timer before it re-enabled power to my server (I may look to see if I can extend that to 30 seconds; see the sketch below). Upon getting power, my server turned back on and was ready for me to manually start the array after 1-2 minutes. The shutdown was clean and no parity check was required. I run hardly any plugins and 0 dockers on this server; if your situation is different, this might not be for you. My intention here is that if I am away from home and we lose power for a couple hours at the beginning of the trip, I want to be able to connect to my LAN through an SSH tunnel and start my array so I can resume watching my content. I've had two instances over the last few years where we were on a week+ vacation and a brief power outage resulted in not being able to watch Plex for the remainder of the trip. As long as I don't let my server run the UPS battery down too much prior to shutdown, I think it'll work without too much risk. Thanks again for creating this awesome plugin! I wonder whether the source could be formally adopted by limetech in a future release; having the choice between apcupsd and NUT should allow just about every UPS to work without issue. Here's a screenshot of my settings in case anyone else has this or its sister models with higher/lower capacity.
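     For anyone wanting to poke at those delays from the command line, NUT's standard client tools can show and (where the driver allows it) change them. This is only a sketch, assuming the plugin sets up a normal NUT instance and names the UPS "ups" -- the actual UPS name, credentials, and which variables are writable on the CP1000PFCLCD may differ:

        # List everything the driver reports, including battery charge and the
        # shutdown/startup delay variables (names vary by driver and model):
        upsc ups@localhost

        # If ups.delay.start is writable on your unit, something like this could
        # bump the power-restore delay from 10s to 30s (the user/password are
        # whatever upsd.users defines -- purely illustrative values here):
        upsrw -s ups.delay.start=30 -u admin -p secret ups@localhost

        # With the usbhid-ups driver, the persistent equivalents are the
        # offdelay/ondelay options in ups.conf, if the plugin exposes that file.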
  9. Hi - is anyone using this with a CyberPower CP1000PFCLCD? I'm currently using the apcupsd option, but the "Shutdown UPS" setting is not working quite right. Instead of shutting the UPS down a few minutes after unRAID, it appears to set a 60 min shutdown timer on the UPS, and once that expires, plugging the UPS back into the wall does not bring it up. If I press and hold the power button on the UPS, it does start, and my server then automatically starts thereafter. I should have some time for a few tests soon, but I thought I'd reach out in the meantime and ask. The NUT documentation pages I saw seem to indicate this should do what I am looking for. Thx
  10. Quoting my earlier post: "Yes. Have the CyberPower CP1000PFCLCD connected to a HP N54L unRAID server running 6.1.9 currently. Curious - what behavior are you seeing when you set "Turn off UPS after shutdown:" to Yes? .... If I then plug the UPS back in to house power, nothing happens until I press the UPS power button. Upon turning on the UPS, my server fires up as well without my touching it." And Frank's reply: "Don't know much about the behavior of CyberPower UPSes, but the restart of your server on reapplication of power is probably controlled by a setting in your BIOS. Have a careful check through the power settings section of the BIOS. (Possibly, you might have to download and read the manuals from the manufacturer!)" Thanks Frank. I do have this enabled and it is working. As I mentioned, simply powering on the UPS after it shuts off provides power to my server and it turns on without my touching it. I also tested yanking the power cord from the wall without the UPS (or HDDs) in the mix, and as soon as I plug it back into the wall, it fires right up. So I'm confident I'm good there. -Brian
  11. Yes. I have the CyberPower CP1000PFCLCD connected to an HP N54L unRAID server currently running 6.1.9. Curious - what behavior are you seeing when you set "Turn off UPS after shutdown:" to Yes? For me, when I pull the UPS power from the house, it does a graceful shutdown of unRAID as expected, but then, per the UPS LCD display, it appears to set a 60 minute shutdown timer on my UPS (same model as yours). At the end of the shutdown timer, it briefly shows a power-on timer with a 3 min duration; that timer never begins counting. Then the UPS LCD goes black and all lights are off. I cannot see any way to adjust the 60 or 3 min timers. If I then plug the UPS back in to house power, nothing happens until I press the UPS power button. Upon turning on the UPS, my server fires up as well without my touching it. My goal here is for unRAID to tell my UPS to turn itself off after unRAID shuts down. Unfortunately, the killpower command unRAID puts at the very end of /etc/rc.d/rc.6 initiates this hour-long shutdown timer on my UPS model instead (see the snippet below for roughly what that hook looks like). I'm guessing that CyberPower handles this a little differently than APC does, and that's why it works for the APC folks. I'd love to know if I'm doing something wrong though. I really want my server to shut down and start back up after power outages vs. staying down. Thx
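     For reference, the apcupsd documentation has the halt script call killpower only when apcupsd itself initiated the shutdown. I haven't copied the exact lines out of unRAID's rc.6, but the hook at the end of it is typically along these lines:

        # Near the end of the halt script (approximate -- unRAID's exact wording may differ):
        if [ -f /etc/apcupsd/powerfail ]; then
            echo "apcupsd initiated shutdown, telling the UPS to kill power..."
            /sbin/apcupsd --killpower
            sleep 30
        fi
        # On APC units this drops UPS output almost immediately; on my CyberPower
        # it seems to start the 60 minute timer described above instead.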
  12. Thanks again for this plugin, I love it. The one issue I'm still having is with disk temps polled through SNMP. It looks like it's going to work when I try via snmpwalk, but I get nothing back. Running drive_temps.sh manually works fine. Polling "sharefree" via snmpwalk works as expected and my .conf looks ok, so I'm stumped. I saw some mention earlier re: scripts not having enough time to run... was that resolved? How can I see if that's what's happening to me?

Linux 4.1.18-unRAID.
Last login: Sat Mar 12 17:57:35 -0700 2016 on /dev/pts/0 from MacMini.geek.lan.
root@Tower:~# snmpwalk -v 2c localhost -c public 'NET-SNMP-EXTEND-MIB::nsExtendOutLine."disktemp"'
NET-SNMP-EXTEND-MIB::nsExtendOutLine."disktemp".1 = STRING:
root@Tower:~# /usr/local/emhttp/plugins/snmp/drive_temps.sh
WDC_WD30EFRX-68EUZN0_WD-WMC4N2022598: 31
Hitachi_HDS5C3020ALA632_ML0220F30EAZYD: 40
ST2000DL004_HD204UI_S2H7J90C301317: 35
WDC_WD30EFRX-68EUZN0_WD-WCC4NPRDDFLF: 36
Hitachi_HDS722020ALA330_JK1101B9GME4EF: 44
Hitachi_HDS722020ALA330_JK1101B9GKEL4F: 43
WDC_WD30EFRX-68EUZN0_WD-WCC4N4EZ7Z5Y: 36
WDC_WD30EFRX-68EUZN0_WD-WCC4N1VJKTUV: 37
WDC_WD30EFRX-68EUZN0_WD-WCC4N4TRHA67: 34
WDC_WD30EFRX-68EUZN0_WD-WMC4N0F81WWL: 33
WDC_WD30EFRX-68EUZN0_WD-WMC4N0H2AL9C: 33
WDC_WD30EFRX-68EUZN0_WD-WCC4N3YFCR2A: 36
Hitachi_HDS722020ALA330_JK11H1B9GM9YKR: 42
WDC_WD30EFRX-68EUZN0_WD-WCC4N6DVY4F0: 32
WDC_WD30EFRX-68EUZN0_WD-WCC4N3SYYN5S: 32
root@Tower:~# cat /usr/local/emhttp/plugins/snmp/snmpd.conf | grep temp
extend disktemp /usr/local/emhttp/plugins/snmp/drive_temps.sh
root@Tower:~#
root@Tower:~# snmpwalk -v 2c localhost -c public 'NET-SNMP-EXTEND-MIB::nsExtendOutLine."sharefree"'
NET-SNMP-EXTEND-MIB::nsExtendOutLine."sharefree".1 = STRING: A: 4027489538048
NET-SNMP-EXTEND-MIB::nsExtendOutLine."sharefree".2 = STRING: HomeMediaBackup: 4027489538048
NET-SNMP-EXTEND-MIB::nsExtendOutLine."sharefree".3 = STRING: Home_Videos: 4027489538048
NET-SNMP-EXTEND-MIB::nsExtendOutLine."sharefree".4 = STRING: Kings: 4027489538048
NET-SNMP-EXTEND-MIB::nsExtendOutLine."sharefree".5 = STRING: Learning: 4027489538048
NET-SNMP-EXTEND-MIB::nsExtendOutLine."sharefree".6 = STRING: MMA: 4027489538048
NET-SNMP-EXTEND-MIB::nsExtendOutLine."sharefree".7 = STRING: Movies: 4027489538048
NET-SNMP-EXTEND-MIB::nsExtendOutLine."sharefree".8 = STRING: Music: 4027489538048
NET-SNMP-EXTEND-MIB::nsExtendOutLine."sharefree".9 = STRING: TESTMovies: 4027489538048
NET-SNMP-EXTEND-MIB::nsExtendOutLine."sharefree".10 = STRING: TV: 4027489538048
NET-SNMP-EXTEND-MIB::nsExtendOutLine."sharefree".11 = STRING: Test_Media: 4027489538048
root@Tower:~# ls -la /usr/local/emhttp/plugins/snmp/
total 76
drwxrwx---  2 root root   140 Mar  7 21:31 ./
drwxrwxrwx 13 root root   260 Mar  7 21:31 ../
-rw-rw-rw-  1 root root   164 Mar  7 21:31 README.md
-rwxrwxrwx  1 root root  2658 Mar  7 21:31 drive_temps.sh*
-rwxrwxrwx  1 root root   543 Mar  7 21:31 share_free_space.sh*
-rw-rw-rw-  1 root root 60976 Mar  7 21:31 snmp.png
-rw-rw-rw-  1 root root   571 Mar 12 17:53 snmpd.conf
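     If I were digging into this today, I'd look at the rest of the extend entry rather than just nsExtendOutLine, and time the script, since a slow script plus agent-side caching could plausibly explain an empty answer. A rough diagnostic sketch -- the MIB column names are standard net-snmp, the rest is guesswork about my own setup:

        # How long does the script actually take? Reading temps from fifteen
        # drives can be slow, especially if any are spun down.
        time /usr/local/emhttp/plugins/snmp/drive_temps.sh > /dev/null

        # Exit status and the full multi-line output as the agent sees them:
        snmpget  -v 2c -c public localhost 'NET-SNMP-EXTEND-MIB::nsExtendResult."disktemp"'
        snmpwalk -v 2c -c public localhost 'NET-SNMP-EXTEND-MIB::nsExtendOutputFull."disktemp"'

        # Walking the whole extend table shows every configured script at once:
        snmpwalk -v 2c -c public localhost NET-SNMP-EXTEND-MIB::nsExtendObjects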
  13. I've moved on to trying the LibreNMS fork of Observium. I love it so far. The community/authors seem much more friendly as well. I'd recommend it over Observium at this point.
  14. Quote: "Can you really claim "slower"? Show us the same tests with the same hardware on 6.1.9 with single parity, otherwise we have no baseline comparison." I haven't claimed it's slower, for exactly that reason (no benchmark testing was done on this hardware beforehand), so to everyone reading this I guess it is purely anecdotal. What I asked was about this statement: "Only in cases of extremely weak hardware (CPU)." Any data on what would be considered an extremely weak CPU? I'm running an Intel® Celeron® CPU G1610 @ 2.60GHz. Anyone else testing with this CPU? Thanks
  15. Wanted to add my thanks for this plugin. Using the Observium VM Appliance is the way to go (I set it up on my ESXi box). Looks great and all the info appears accurate.
  16. Another vote for CyberPower. I have had this model running my ESXi box for ~ 3 years. Works great and am about to buy another for unRAID. I'll do a forum search before clicking buy.... but IIRC CyberPower works fine with unRAID. CyberPower CP1000PFCLCD PFC Sinewave UPS 1000VA 600W PFC Compatible Mini-Tower
  17. There's nothing wrong with leaving the drive in your unRAID machine as long as it's not part of the array. In that case it won't impact your array, and you can run the preclear script to pound on the drive and see if it breaks or gives you any useful info. If you're not clear on this, it might be best to plug it into your desktop and run a manufacturer test tool on it instead.
  18. Welcome! So 5x2TB = 10TB (data), 1x4TB (parity) & a 500GB SSD cache? Looks good to me, though the extra 2TB on your parity drive is wasted space until you add a larger data drive. If you already have the 4TB lying around it'll work fine, but if you're buying it new I'd get the same size as your largest data drive. How many drive slots do you have? If you were to expand, would you be able to add another data disk, or would you be swapping a 2TB for a larger one? Re: SSD cache vs. spinner... definitely go SSD if you're going to be running VMs/Dockers/plugins.
  19. Yes. I went with these: Monoprice 0.75m 30AWG Internal Mini SAS 36-Pin Male with Latch to 7-Pin Female Forward Breakout Cable, Black (108187) https://www.amazon.com/dp/B008VLHSQO/ref=cm_sw_r_cp_awd_2xGXwb8EK47DC
  20. I'm not sure how many disks you have or how long this will take you, but you may want to (temporarily?) install a large-ish XFS cache drive and disable your mover, or set it to not run for a while. While I still had all RFS data disks, I found that I did not encounter the issue while writing solely to the XFS cache; it was only when writing to the RFS array, either directly or via the mover, that I had the issue. I confirmed this by using a 500GB drive as my cache with the mover disabled for about a month, during which I had 0 system hangs. After setting the mover back to run daily, the system hung within two days. I can also confirm that moving from all RFS (each disk ~90% full) to XFS fixed my issue. It took me from Jan 10th to Jan 27th to finish all my disks, but since then I have not had a single occurrence of the problem. I am back to the stability I had with v5.0.5 and loving it! I also extend my thanks to @pickthenimp
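     For anyone facing the same conversion, the disk-by-disk approach usually discussed on this forum boils down to emptying one array disk onto another, reformatting the empty one as XFS, and repeating. The sketch below is only that general idea, not an official procedure -- the disk numbers are placeholders, and you stop the array to change the filesystem and format between copies:

        # Copy everything from the ReiserFS disk being converted (disk1 here) to a
        # disk with enough free space (disk2 here). Run it from screen/tmux; it can
        # take a day or more per disk.
        rsync -avh --stats /mnt/disk1/ /mnt/disk2/

        # Spot-check the copy before wiping anything:
        diff -rq /mnt/disk1 /mnt/disk2

        # Then: stop the array, set disk1's filesystem to XFS in the webGUI, start
        # the array, format disk1, and move on to the next ReiserFS disk.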
  21. Once they are flashed, there really should not be a difference between the two. I've purchased 3 used IBM ServeRAID M1015s via eBay. Flashing them is a pain even if you understand what you're doing; I read through every page of the various flashing threads on this board a few times and then found some external blog resources as well. Based on my research and some exploration with the different motherboards I had available, I put together a process to flash cards that works for my particular situation (rough outline below). Yes, you can buy a card already flashed... but imo if you're buying a card like this for your own custom NAS like unRAID, you should really understand every piece yourself. There is nothing wrong with the Dells. They are priced a fair amount lower and you'll find many people here use, like and recommend them over the M1015s (mostly due to cost, I think). The only real difference I saw is that the flashing procedure is slightly different from the M1015s; I think you need to get specific files from Dell or something? I'm sure someone else can chime in with the details. Also, the connectors come out of the card in a different direction, which helps in some case situations. For me, I went with the M1015s because there seem to be more of them in use by the unRAID community, and I happened to know some folks in real life who used them with unRAID and ESXi without issue. That being said, the Dells are a fair amount cheaper these days, so they might be more tempting if I were in the market today. Like I said above... do your homework on the flashing procedures before you make your choice. As for knock-offs... I generally don't buy via eBay from outside my own country (US), so I can't speak to that. Buying on eBay is always a crap-shoot; I try to find someone who has an established "eBay store" and clearly accepts returns in case you get one DOA. I think the "warning" you read about not flashing to IT mode is about attempting to use a flashed card in a Windows machine, not unRAID/ESXi/FreeNAS/etc. When you flash the card, you won't be doing it from Windows; you make your own boot disk and run the flashing tools from that.
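     For reference, the M1015 -> 9211-8i IT-mode crossflash guides floating around this forum generally follow the sequence below. This is from memory and only an outline -- the exact firmware/SBR filenames and whether you need DOS or the EFI shell depend on the guide and your motherboard, so check the dedicated flashing threads before touching a card:

        # 1) Record the card's SAS address from the sticker on the card first.
        # 2) From the DOS/FreeDOS boot disk, wipe the stock IBM firmware:
        megarec -writesbr 0 sbrempty.bin
        megarec -cleanflash 0
        # 3) Reboot back into the boot disk, then flash the LSI 9211-8i IT firmware
        #    (add -b mptsas2.rom only if you want the boot ROM):
        sas2flsh -o -f 2118it.bin
        # 4) Re-program the SAS address recorded in step 1 (placeholder shown):
        sas2flsh -o -sasadd 500605bxxxxxxxxx
        # Some boards refuse to run sas2flsh in DOS; the usual workaround is the
        # EFI shell version (sas2flash.efi) with the same arguments.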
  22. This is totally normal. My last pre-clear of a single 3TB drive took, I think, ~29 hours per cycle... so ~3 1/2 days total across the cycles I ran. A 2TB drive I did last week took ~16 hours per cycle. I imagine each cycle for your 4TB will take around 40 hours or so?? Total guess, as I only have 3TB drives. As mentioned before, your array should be up and fully operational while this pre-clear process is getting your new disk ready. Due to the time it takes to pre-clear, I generally buy one drive more than I need and pre-clear it so I am immediately ready for a data rebuild in the event of a disk failure. You don't want to rebuild onto a brand-new, non-pre-cleared disk, and you also don't want to sit vulnerable while you take a week to pre-clear your replacement disk.
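     If it helps anyone searching later, the preclear script is run from the console against the raw device, roughly as sketched below. The option letters are from memory of Joe L.'s script, so check its help output on your copy first, and triple-check the device name -- it wipes the disk:

        # List candidate disks (ones not assigned to the array):
        preclear_disk.sh -l

        # Run the preclear -- here 2 full cycles against /dev/sdX (placeholder).
        # Start it inside screen so a dropped SSH session doesn't kill a multi-day job:
        screen -S preclear
        preclear_disk.sh -c 2 /dev/sdX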
  23. This is amazing Zeron, thanks again!
  24. Went back to the video... now I see what you're saying. It looks like it's a more polished webui login to me. Guess we'll see. Some good things appear to be just over the horizon for unRAID. Very interested in what comes next.
  25. I didn't review the video again, but once I changed my root password, it prompted me to log into the webui. This is on 6.1.6.