_vern_

Everything posted by _vern_

  1. I'm running into an error while trying to install this and was wondering if you could help me out. The error comes after launching initial_setup.sh: I get the prompt asking if I'm sure I want to run this, and then after a few seconds...this is what I get... Thank you for any assistance you can provide. Thanx,
  2. Just click the advanced settings at the top right of the section. You should be able to see it then.
  3. Thanks for the update gfjardim, MATE works great now! Thank you!
  4. I second that approach...CrashPlan is important functionality for backups, and I believe it's in Limetech's best interest to provide first-class support for the Docker image, just as in the case of Plex. There shouldn't be any reason the CrashPlan Docker image can't be as self-sufficient for updates and the like as the Plex one. This isn't to say there won't be times the Docker image needs attention for something in the future, but given the reliance some of us have on the backup functionality, we need some communication that it is being handled and an update is on the way. I totally understand that Docker images to this point have been contributions from the community, and folks get busy and can't give them attention as quickly as they would like. However, just as the videos and "promo" verbiage on the Limetech site suggest...this is a whole new world for unRaid! Meaning, if Limetech is going to provide and support the Docker functionality, it only makes sense that there be a "core" set of Docker images that are supported and maintained as well. This approach supports both sides moving forward...those that just want stuff to work (which is what the videos allude to in making unRaid more "friendly" to the masses, hence commercializing it) as well as the enthusiasts that want to create and use more esoteric pieces of software. I love unRaid and really want to see it succeed past the enthusiast realm. Anyone at Limetech considering Kickstarter or Shark Tank?
  5. I did manage to get this working again. I ended up making several changes back and forth, but in the end, here are what I think are the only changes that need to be made. The ".ui_info" file in the Windows folder "C:\ProgramData\CrashPlan" needs to be the same as the one on unRaid. Apparently this was a change in the latest version to ensure that only a known UI can connect to the engine. So, you can either copy the file over from unRaid to Windows or just copy and paste the contents of the file...either will work. Then modify the ui.properties file in "C:\Program Files\CrashPlan\conf" on Windows to have serviceHost=<unraid server IP> and servicePort=4243, just like it has always worked before. The issue I was seeing using the CrashPlan Desktop Docker was because it is stuck at 4.2, so it can't be used at all since the update. Once I did that, everything worked as it should, and then I just commented the lines back out in ui.properties so that it would connect locally again, and all worked fine. You only have to make the .ui_info change on the remote instance that you are connecting to unRaid from, not on all the machines that are backing up to unRaid.
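     For reference, here is a minimal sketch of the ui.properties lines described above (the placeholder is whatever your unRaid server's address is; the rest of your file may look different depending on version):

         serviceHost=<unraid server IP>
         servicePort=4243

     Just comment both lines back out (prefix them with #) whenever you want the desktop UI to connect to the local engine again.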
  6. Hello all, I'm still struggling with this one...I have tried the configuration changes suggested to try to connect to the engine on unRaid. I also use the CrashPlan Desktop Docker image, and everything seems to indicate that the engine on unRaid is hung trying to upgrade from 4.2 to 4.3. So, if I try to connect remotely with the configuration changes suggested, I get the splash screen that says 4.3 (I'm assuming because the Windows instance updated successfully) and that is as far as it gets; it just hangs there and I have to kill it in the task manager. If I try going through RDP to the CrashPlan Desktop Docker, it shows 4.2, as mentioned earlier, and it hangs the same way as trying to connect remotely. I did end up recreating the Docker image and was able to get it to start fresh, and saw a message in the CrashPlan window when I was trying to log in that it was busy upgrading; that was when it locked up and started showing this same behavior. So, at this point, I'm left thinking that the CrashPlan engine in the Docker image isn't completing the upgrade for some reason. Any suggestions on where to go from here? Thanx!
  7. I have run into the same situation...can't connect to the instance of CrashPlan in Docker on unRaid. I probably did the same as some others: I blew away the Docker image and started over to see what was going on, and I was able to connect to it right after adding the container back. Of course it acted just like a new installation, and as soon as I tried to log in to the instance it gave me a message that it was updating, and then it locked up. So, apparently it is trying to update but is failing as a Docker image. I haven't been able to determine a workaround...I think that something will have to be modified in the Docker image to get it working again.
  8. I took the 6.0-rc2 dive from 6.0-beta14b and all went silky smooth! Had a couple of updates to Docker images, checked connectivity to Plex, and all is well. Looking forward to the final release of 6.0, but this looks very representative of it. It is very evident the changes from 5.0 to 6.0 are monumental, for several reasons. Thanks go out to the Lime Tech crew for all the hard and meticulous work that has gone into the 6.0 version! Thank you!
  9. I just wanted to pass on some kudos...the UPS support in 6.0-beta14b is excellent and was a much-needed feature, at least for me, given how much I rely on my unRaid server. I purchased a new APC UPS and everything worked perfectly right out of the box! I think that is a huge deal. Typically, no matter what I get anymore, it requires some sort of messing around to get it working as expected...even if it's a toaster! The model I got is an APC Back-UPS RS 1500G. I really like it, mainly because I haven't had to do a thing to it and it works perfectly with unRaid. I used the link on the UPS page (http://apcupsd.org/manual/manual.html) in the unRaid interface and performed several of the tests to confirm that everything is as it should be, and it all checked out. I have all of my networking gear hooked into it: unRaid server, special-purpose Linux server, dedicated router, cable modem, 24-port gigabit switch, wireless access point. So, all that sensitive gear has some protection, and we will still have Internet access for a while if the power goes out. I attached the UPS Status page from unRaid if you are interested...it will probably answer any questions you might be thinking about. Can't wait to get the final release of 6.0!
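     In case it helps anyone who does have to set things up by hand, the apcupsd settings that typically drive a USB APC unit like this one look roughly like the following. This is a sketch based on my reading of the apcupsd manual linked above, not a copy of unRaid's shipped file, so double-check the values against your own setup:

         UPSNAME myups
         UPSCABLE usb
         UPSTYPE usb
         DEVICE
         BATTERYLEVEL 5
         MINUTES 3

     DEVICE is deliberately left blank for USB units, and BATTERYLEVEL/MINUTES tell apcupsd to shut the server down at 5% battery or 3 minutes of estimated runtime remaining, whichever comes first.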
  10. Just figured I would share the latest that I found concerning this issue...perhaps it may help someone down the road. After doing quite a bit of troubleshooting, I ended up upgrading to the latest version 6 beta (6.0-beta14b). I have been running it now for about a month without any issues whatsoever, and I have not seen the MCE errors anymore. I suspect that a plugin, or perhaps something in 5.0.x, was causing the issue; not sure. I tried to isolate what was causing it, but the tests that I did were inconclusive in narrowing it down far enough. Anyway...they didn't seem to cause any issues, but I'm still happy to not have to see them in my log anymore! Anxious to get the final release of 6.0...
  11. I have been digesting your statement since you posted, and I went back and looked again, and that may certainly have been the case...I think I had the split level set one lower than it should have been. I guess I'm still of the mindset, though, that the system should be smart enough to protect us from ourselves...especially in the case of a filled disk. Currently the only thing that happens is that things stop working and you get some errors in the log that may or may not be clear to everyone. Thanx for the information...
  12. I have been messing around with this for a while now and I'm not sure if it is behaving as intended or not...I can at least say that it isn't behaving as I would expect. I'll do my best to explain. I have six data disks, five of which I have had for quite some time. They were getting quite full, with one near 100%. I installed a new 3TB drive thinking the array would start using the free space there as it could, knowing that, because of how share splitting works, some of the other drives would continue to get writes. As time went on, my near-full drive filled completely, and of course I started to get some errors and so forth. I quickly diagnosed the issue and moved some files to make some room on the drive. The annoying part was that it broke my CrashPlan installation and I had to fix that...not that big of a deal, but I wanted to prevent it from happening again. I went in and set all of the shares' minimum free space to 5GB and the allocation method to "Most-Free". My understanding of this setting, and my expectation, is that there would have to be at least 5GB available on a disk for it to be written to, and that writes would go to the drive with the most room to spare, but that certainly hasn't been the behavior. After I initially made the adjustments and got the disk back under the 5GB mark, things remained that way for a while, but it eventually filled again. I'm convinced that what is filling it up is CrashPlan. I'm currently shifting things around so that it will only be sending data to a single-disk share, which I think will resolve the immediate issue for a while, but I think there is still an underlying issue with preventing shares and disks from becoming full and causing havoc. I would think this would be a fairly basic management capability of any NAS. I have read several postings and explanations on the share allocation methods and so forth, and it is still less than clear given the behavior that I'm experiencing; the explanations/definitions of the feature don't jibe with the behavior. I get the notion of at least defining the allocation method at the share level, but it seems like there should be some sort of parameter at the disk level to prevent overfilling as well, or at least to start warning when a disk reaches some threshold. So what am I looking for? I'd like to know whether what I'm experiencing is correct behavior or not. And if it is correct, what am I doing wrong, and how do I prevent my drives from filling to the point that apps begin to fail? And don't get me wrong...I'm a die-hard unRaid fan...just hoping maybe this discussion can improve something...even if it is just to make me smarter.
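      To make my expectation concrete, here is a little illustrative sketch in plain Python (made-up disk names and sizes, and definitely not unRaid's actual allocator) of how I read "Most-Free" combined with a 5GB minimum free space floor:

          MIN_FREE = 5 * 1024**3  # the 5GB minimum free space setting, in bytes

          def pick_disk(free_bytes):
              """free_bytes: dict of disk name -> free space in bytes (hypothetical values)."""
              # Skip any disk at or below the minimum free space floor...
              eligible = {d: f for d, f in free_bytes.items() if f > MIN_FREE}
              if not eligible:
                  raise RuntimeError("every disk is below the minimum free space floor")
              # ...and write to whichever remaining disk has the most room to spare.
              return max(eligible, key=eligible.get)

          # Hypothetical example: disk1 is nearly full, so a new write should land on disk6.
          print(pick_disk({"disk1": 2 * 1024**3, "disk6": 800 * 1024**3}))

      My understanding is that a check like this can only happen when a file is created, so a file that keeps growing in place (like a CrashPlan archive) can still blow right past the floor, which would explain what I'm seeing.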
  13. I have been seeing some MCE hardware errors in my logs and I'm looking for some advice on hunting them down. Here are some of the things that I have done so far to try to troubleshoot. I reverted the flash to a stock configuration (left the shares and super as is...) and rebooted. Let it run for a little while and then started to install the apps that I really want to have running, which are Plex and CrashPlan. Once I installed CrashPlan, I then saw a couple of the errors logged; I have no idea whether that's related to the installation of CrashPlan or not. I have looked up mcelog, and apparently in a full installation of Linux it would most likely exist, but it doesn't appear to be included in unRaid. I could probably mess around and get it installed, but wanted to reach out for some thoughts on it first. I think that I'm going to have to get some better details on these errors, which means getting mcelog, in order to troubleshoot further. Thoughts? Syslog attached...and thanx in advance... Syslog.zip
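      If I do get mcelog installed, my understanding from its documentation (so treat this as an assumption, not something I have run yet) is that it can decode the raw machine check lines copied out of the syslog, something like:

          mcelog --ascii < mce_lines_from_syslog.txt

      where mce_lines_from_syslog.txt is just a hypothetical file holding the pasted log lines.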
  14. Just figured I would follow up and let you know what I ran into. I copied off all the files from the cache drive...I also had some files that were pending a move. I then clicked the format button, as it was still stating that it needed to be formatted. All the buttons grayed out for a moment and then came back, and it was still saying that it needed to be formatted. Have you ever had that moment where you start questioning whether you really clicked something or not? That's what happened to me. So, I clicked it again and it did the same thing. I know it didn't format, because all the data was still there. So, I rebooted. It came back up, the format message was gone, the cache drive was still showing a green ball, and the two shares that had files pending to be moved now showed the yellow ball as they should. I double-checked the cache drive and sure enough, the data was still there. I clicked the Move button and things worked just as they should. I don't have an explanation, but we are going to call this solved anyway. Everything is working now, and I figured I would share.
  15. Thank you for your thoughts. Something else interesting... There definitely appears to be something going on in the logic of the system. The cache still shows a green ball, the file system shows as unknown and wants to be formatted, and when you go to a share to try to indicate whether to use the cache or not, the option isn't there. Maybe it's just the green ball that is confused...the rest would seem right if the cache really weren't available. I'll check the file system on the USB drive and see how that is. I'll plan on copying everything off. Thanx.
  16. I did...didn't realize there was a command for that on the Tools page. I can get to all of the files from the command line in /mnt/cache (which means it has to be formatted...you can't mount an unformatted file system), and the apps installed on the cache drive, such as Plex and CrashPlan, still run. Hmmm...will have to do some cleanup...this installation has been around a long time through several upgrades, so I'm not surprised. Just haven't gone back to do any housecleaning on it. Any thoughts on correcting the issue of the cache showing as unformatted when it apparently still is formatted? Thanx...
  17. I removed a drive today that was in the array that I needed elsewhere; it didn't have anything written to it. I rebooted a couple of times trying to figure out how to get the array to forget the drive that I removed. I did some reading and determined that I needed to run "initconfig" to rebuild the drive configuration and start a new parity sync. Not sure if it happened before I ran that or after, but nonetheless, the cache drive now shows as unformatted...it's an SSD. When looking at the details for the drive, the file system type shows "unknown". However, I can still access it, and the apps on it continue to run without issue. The system is resyncing as a result of the initconfig, so that is going to take a while, and I will wait until that is done before doing any more troubleshooting. I'm not sure what to do next. I'm thinking that I can just copy everything off of it, reformat like it wants, and then replace all the files. I figured I would run it by the community first though...BTW, running unRaid 5.0.6, Dynamix, unMenu. Syslog attached...thank you for any thoughts on the matter... syslog.zip
  18. First, I'll admit I haven't read all 62 pages in this thread...phew! So, this may have already been touched on, not sure. I really like the plugin, great stuff! I like the Stats page and the graphs, but I wanted to ask about how the memory usage is represented. Currently it appears that it shows memory "used" and "free" as they are shown in the top command. If my assertion is true, then that isn't really an accurate representation and could lead people to believe they have a memory leak, because over time the graph may show that all the memory is used. That's what I started seeing, and I only have 1 GB of RAM, so I was concerned. But after running the command "free -m", which subtracts the buffers and cache from the "used" figure, I could see that only about 6% of my memory was actually being used, because the majority of it is being reserved for disk caching but is of course still available for applications to use. So, my question is, would it make sense to have the graph represent what "free -m" shows, as this would be more in line with what is really "used" and "free"? Thanx,
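      For anyone curious, here is a small Python sketch of the arithmetic I mean. It reads /proc/meminfo and applies the same buffers/cache adjustment that "free -m" reports; Linux only, and purely an illustration of the math, not the plugin's code:

          def meminfo():
              """Parse /proc/meminfo into a dict of values in kB."""
              info = {}
              with open("/proc/meminfo") as f:
                  for line in f:
                      key, rest = line.split(":", 1)
                      info[key] = int(rest.split()[0])
              return info

          m = meminfo()
          # "Really used" = total - free - memory only borrowed for buffers and cache.
          used_kb = m["MemTotal"] - m["MemFree"] - m["Buffers"] - m["Cached"]
          print("really used: %.1f%%" % (100.0 * used_kb / m["MemTotal"]))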
  19. Perfect! That gave me what I needed. I don't know why I didn't think of that...thanx! Now, do you have a couple of sticks you want to donate? Told you I was cheap!
  20. I need to add RAM to my unRAID box. Currently running 184-pin 512 MB sticks (x2), but I don't know what speed they are (400/333/266); they are old enough that the stickers are no longer there. I have two open slots, and according to the motherboard manual, I should add sticks of the exact same size and speed. Normally I'd use something like dmidecode, but of course unRAID is really stripped down, so I'm looking for other options. I'm cheap, so I'm trying to avoid having to buy two new 1GB sticks. I guess worst case is I could pull them and have them tested, but that's just really inconvenient. Suggestions? As an aside, this is fairly old equipment, so I'm a little leery of mixing memory types, which I have done on newer hardware where it simply steps down to the lowest speed; I have a feeling that won't work on this. Thanx!
  21. It isn't like me to really jump into these threads, but I feel like this point should be stated. I've noticed that several folks have asked for enhancements (some may make a point of saying that they aren't enhancements, but nonetheless, they are changes that were not originally planned for), which is the epitome of scope creep. I deal with it quite often in my work as a software project manager for the government. These requests are coming at the 11th hour, when Limetech is attempting to get out the next stable release. Limetech obviously wants to do the right thing and include as much as possible, but no matter the complexity of the change, whether it takes coding/development or a simple switch during kernel compilation, all changes have consequences and should be rolled out incrementally and well tested. This isn't to say that Limetech should not entertain the requests, but they should go into a queue of work; advertise what's in that queue, and then incrementally test and roll out the changes (details left out). So, what I'm suggesting is that Limetech be free to complete 5.0 (stable, without all the requests included right now) and maybe start a new thread somewhere, if there isn't already a place, to request changes and so forth so they can be planned accordingly. I have a few requests myself, but I don't want to muddy the water here... Ok, I'll get off my soapbox already! Hope it makes some sense. Thanx,
  22. Just wanted to pass along...I upgraded to 5.0-rc2-test and my write speeds are at the normal ~20-25 MB/s as opposed to the ~8-9 MB/s I was getting in 5.0-rc1. Everything appears to be running fine. Thanx!
  23. FYI...I downgraded to 5.0-beta14 and I'm back to ~20-25 MB/s transfers. I guess I'll just stay here until I see something different...hence beta... Thanx
  24. I have been running 5.0-beta13 for what seems like forever now with no issues. I normally get copy speeds of 20 to 25 MB/s from my torrent box to my unraid box, with no cache drive. I rolled the dice and went ahead and upgraded to rc1, and now I'm lucky to hit 9 MB/s, and there is considerable stuttering during the copy operation. I don't run any add-ons and haven't changed anything other than copying over the bzimage and bzroot files. I have rebooted a couple of times just to make sure, and I get the same behavior. I looked through the syslog and nothing jumps out at me. The network interface still reports "eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX" and I haven't changed anything with the network, so it makes me wonder if something is wrong with SMB, being that it is one of the things mentioned in the change log. I have attached my syslog for review. If anyone has any thoughts on it, I'd sure appreciate it. Thanx syslog_4_29_2012_1420.txt
  25. As part of adding 3 drives, I also had to add a SATA host controller so I would have enough ports for them all. It took a bit to get the card to work because I had to update the BIOS on it, so I had rebooted the thing what seemed like a million times as part of getting everything configured correctly. So if you had asked me if I had tried rebooting, I probably would have said, of course I have! But after you said to try rebooting...I couldn't really recall if I had or not...so I went ahead and rebooted and all is good! Couldn't believe I didn't already try that. Thanx a bunch!