brainbone

Everything posted by brainbone

  1. Thanks, all, for confirming that I can use xfs on the cache. I do like the idea of btrfs, but the apparent reality scares me a bit. Thinking more about the issues with unclean shutdowns, is there any known reason why this could be happening? Is unraid using btrfs without barriers? Does this issue only occur on drives without power loss protection? etc.
  2. There seems to be a general consensus here that btrfs doesn't handle power loss well. It seems people often lose much of the data on their cache drive(s) if power is cut to the system. Even though I'll be running with a UPS, from my point of view, total data loss of the "cache" (which is used for far more than just cache in unraid) from a simple power failure is simply unacceptable. It is not uncommon at all for a UPS to fail just at the moment you need it. What can be done to make the cache drives more resilient in the case of power loss? Can the cache be run with XFS or EXT4?
  3. You could fork it, modify your fork, and open a pull request for that modification to see if gfjardim will merge it. Alternatively, someone else could fork it and everyone could start using that fork instead -- but I don't believe GitHub allows multiple maintainers for an individual repository, and there are obvious issues with sharing credentials for a single GitHub user account. It'd be great if a version could be created that:

     1 - Allows the password for the "ubuntu" user account to be changed via an environment variable instead of having it hard-coded to "PASSWD".
     2 - Changes the "ubuntu" user account name to something else, like "crashplan" -- or allows it to be changed via an environment variable as well.
     3 - Allows the URL of the CrashPlan installation package to be modified via an environment variable (best if an empty environment variable falls back to a hard-coded default, else the variable's value is used).
     4 - Allows XRDP installation/configuration to be optionally disabled, again via an environment variable, enabling a "headless" mode. (See #5 for why.)
     5 - Last, but certainly not least, combines CrashPlan and CrashPlan Desktop into one container, allowing both the engine and UI to update automatically.

     Once that's done, it probably wouldn't be a huge issue if the repository owner didn't get around to updating it immediately.
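The "env var if set, hard-coded default otherwise" fallback in wishes 1-4 above can be sketched in a few lines of Python. This is only an illustration; the variable names (CP_USER, CP_PASSWD, CP_HEADLESS) are assumptions, not settings the container actually reads:

```python
import os

def setting(name, default):
    # Use the environment variable when it is set and non-empty,
    # otherwise fall back to the hard-coded default.
    value = os.environ.get(name, "")
    return value if value else default

user = setting("CP_USER", "ubuntu")               # wish #2
password = setting("CP_PASSWD", "PASSWD")         # wish #1
headless = setting("CP_HEADLESS", "no") == "yes"  # wish #4
```

The same pattern works for the installer URL in wish #3: an unset or empty variable leaves the container's built-in default in place, so existing users see no change in behavior.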
  4. It's in the folder you have mapped for /config in your CrashPlan volume mappings. For me it's /mnt/disk1/docker/appdata/crashplan/config. I don't export my appdata folders, so while still SSHed into my unraid server, but disconnected from the CrashPlan bash (after the first "exit"), I just edit it directly using the command:

     nano /mnt/disk1/docker/appdata/crashplan/config/id/.ui_info

     But for you, the "/mnt/disk1/docker/appdata/crashplan" portion will be different, depending on what you have entered in your volume mappings. Probably somewhere on your cache drive, if you have one.
  5. Thanks, I'm a bit new to Docker -- still wrapping my head around it. (And your crashplan desktop update is working perfectly. Thanks again.) Edit: It seems the issue with the CrashPlan desktop not being updated is related to CrashPlan being run headless. For those who want to run CrashPlan Desktop, is there any benefit to having separate containers for the desktop app and the engine that outweighs this update issue?
  6. I'd like to change the password for CrashPlan-Desktop from PASSWD to something else. Will this change stick? Any issues with doing this?
  7. Thanks for that. Will these changes complicate application of the "real" docker update when/if that comes?
  8. Though there's no way to back up data to your CrashPlan account...
  9. I thought crashplan clients auto updated? Just installed crashplan desktop and I'm getting the "disconnected from backup engine" error. Damn. Really wanted this to work... Any other quick and easy alternatives?
  10. Followed the instructions and purchased a new plus key using the WebGUI on a new unraid server, but getting "ERROR (5)" pasting the supplied key url and hitting "Install key".
  11. Nevermind. Used Fat32format to format the flash drive as Fat32. Since make_bootable.bat worked with NTFS, figured unraid would as well. Seems to be working with fat32.
  12. I'm setting up a new Unraid 6 machine -- plan to get a pro license for it. I've formatted a Sandisk "Ultra" 64GB USB 3.0 flash drive (On Windows 8.1, using NTFS, makeboot didn't like exFAT), and have booted the machine. Seems to boot up alright. I can connect and log in via telnet and SSH, but when I try to access the web gui using either the default "tower", or the IP address of the machine, I get a "connection refused" in chrome or a "Unable to connect" in firefox. Help??
  13. With only a single drive failing, I'd suspect a power cable/connector/splitter issue more than a power supply issue.
  14. My 2¢: The write speed issue on some systems is an obvious known issue with 5.0 rc10. As such, it shouldn't be released as final. It is what it is, a "release candidate" -- a candidate that has done what it should do: expose any outstanding issues before final release. If the release of a final 5.0 with a fix for the write speed issue, or any other serious issue, is going to take a while, it would be best to release an alternate version with a regressed kernel specifically for those with affected systems, and only as a stop-gap until a unified final. There should be no rush to release.
  15. However, the level of activity for a software product has a place in purchase decisions. I think the argument some are proposing, which basically amounts to "I purchased Unraid with the expectation that it will eventually be better than it currently is," is a bit off. But saying "I'm holding off on purchasing Unraid because it appears that development activity for it is waning and I'd rather invest in a more active project" is a perfectly understandable position. Further, I believe that exchanging "expectation" for "hope" in my first paraphrase makes it a perfectly acceptable position as well. Tom's lack of communication is certainly diminishing that "hope" for some, and that sense of loss is manifesting as anger in a growing number of them.
  16. I'm guessing his mind is clouded with other life issues (pure speculation). It happens. He's human.
  17. The thing is, we have no idea what is going on in his life. It could very well be that there are other priorities and life events occupying his mind right now. He is, AFAIK, a one-man company. Running a business like this myself, I know how difficult it can be to balance "life" with work. In short: cut the guy a little slack.
  18. I guess I was, unsuccessfully, trying to make the point that he won't be able to make everyone happy. But yes, he should give minimal updates, and certainly should have in the days following Oct 3rd.
  19. If the latest RC was just labeled "final", then anyone with issues would start to complain about the release of a buggy product, poor support, etc., with worse reactions than he is seeing today. That said, a simple "Still working on things as fast as I can" or "waiting on ____" every few weeks would go a long way to calming people down, though some may not be happy unless he posts what he has for breakfast every day and how many times his dog craps on each walk.
  20. As far as I can see, "wrapping to the top" won't solve the issue (except when you reach the last sectors of the disk) -- it's no different than continuing down a few more sectors. What matters is which disks are included in each Diagonal Parity (DP). Each DP excludes a different disk from its XOR, unlike Row Parity (RP) -- sometimes RP is excluded from a DP, sometimes one of the data disks; you just exclude a different one as you roll through. This simple step of excluding a different drive from each successive DP should allow you to always find a usable DP (you may need to recursively use the DP and RP of other rows to recover a given DP) to rebuild enough row data to finally use RP to finish reconstruction after any double disk failure. Or, as the original author wrote much more eloquently than I: each diagonal misses one disk, and all diagonals miss a different disk.

      In your example above, if, say, data disks 1 and 2 fail, you won't be able to rebuild a sector on disk 1 or disk 2 from a DP that includes both drives in its XOR. But if either one of them is excluded from a DP, you will be able to rebuild the one that wasn't excluded using that DP. Then the one that was excluded can be rebuilt using RP.

      Read this document, starting on page 6, "4 Row-Diagonal Parity Algorithm". You should be able to stop at "5 Proof of Correctness". Read the last 3 paragraphs in section 4 over until it sinks in -- it took me a few times. (Edited to correct some glaring errors in my explanation)
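The rebuild chain described here can be simulated in Python. This is a toy model I put together, not unraid's or NetApp's actual implementation: p = 5 (4 data disks plus RP and DP, 4 rows per stripe, one integer per block), with data disks 1 and 2 failing. Each pass recovers a block from a diagonal that is now missing only one block, then uses RP to recover the other failed block in that row, exactly the alternation the text describes:

```python
import random

p = 5                       # must be prime: 4 data disks + RP + DP
ROWS, DATA = p - 1, p - 1   # 4 rows; data disks are columns 0..3, RP is column 4
random.seed(1)

data = [[random.randrange(256) for _ in range(DATA)] for _ in range(ROWS)]

# Row parity: XOR of each row's data blocks.
rp = [0] * ROWS
for r in range(ROWS):
    for c in range(DATA):
        rp[r] ^= data[r][c]

# Diagonal parity over the 4x5 array of data + RP: block (r, c) lies on
# diagonal (r + c) % p.  Diagonal p-1 is never stored.
dp = [0] * (p - 1)
for r in range(ROWS):
    for c in range(p):
        d = (r + c) % p
        if d != p - 1:
            dp[d] ^= rp[r] if c == DATA else data[r][c]

# Simulate a double failure of data disks 1 and 2.
lost = {1, 2}
known = [[None if c in lost else data[r][c] for c in range(DATA)]
         for r in range(ROWS)]
unknown = {(r, c) for r in range(ROWS) for c in lost}

while unknown:
    # A stored diagonal with exactly one unknown block recovers that block...
    for d in range(p - 1):
        miss = [(r, c) for (r, c) in unknown if (r + c) % p == d]
        if len(miss) == 1:
            (r0, c0), val = miss[0], dp[d]
            for r in range(ROWS):
                for c in range(p):
                    if (r + c) % p == d and (r, c) != (r0, c0):
                        val ^= rp[r] if c == DATA else known[r][c]
            known[r0][c0] = val
            unknown.discard((r0, c0))
    # ...and RP then recovers the other failed block in the same row.
    for r in range(ROWS):
        miss = [c for c in lost if (r, c) in unknown]
        if len(miss) == 1:
            c0, val = miss[0], rp[r]
            for c in range(DATA):
                if c != c0:
                    val ^= known[r][c]
            known[r][c0] = val
            unknown.discard((r, c0))

assert known == data   # both failed disks fully rebuilt
```

Because every stored diagonal misses a different disk, at least one diagonal always starts out with only one unknown block, so the loop never stalls for any pair of failed data disks.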
  21. The example is a bit misleading, since it's not long enough to show where each DP starts and ends, nor does it even denote where each DP starts and ends -- however the example isn't that important. The meat of it is in the text: "However, since each diagonal misses one disk, and all diagonals miss a different disk, then there are two diagonal parity sets that are only missing one block." I should make a better example with different colors for each DP showing where they start and end -- but I think you'll have it figured out before I get around to it.
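In lieu of a color-coded example, a few lines of Python can at least print which blocks each DP covers and which disk it misses. This uses my own toy layout (p = 5: data disks in columns 0-3, RP in column 4, 4 rows), not the paper's exact figure:

```python
p = 5
for d in range(p - 1):   # diagonal p-1 is never stored on the DP disk
    # Block (row, col) lies on diagonal d when (row + col) % p == d;
    # row p-1 doesn't exist, which is why one column drops out.
    blocks = [((d - c) % p, c) for c in range(p) if (d - c) % p != p - 1]
    missing = (d + 1) % p   # the one column diagonal d skips
    print(f"DP {d}: covers blocks {blocks}, misses disk {missing}")
```

Running it shows each stored diagonal covering four blocks and missing disks 1, 2, 3, and 4 respectively (disk 0 is missed only by the unstored diagonal), which is exactly the "all diagonals miss a different disk" property the quoted text relies on.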
  22. I was having trouble with the same thing, but I think I finally wrapped my head around it. See: http://lime-technology.com/forum/index.php?topic=7874.msg81542#msg81542
  23. Sorry, it looks like my email notifications for this thread were disabled. I would try changing the port from 25 to 587. You could also try some of yahoo's other SMTP servers: plus.smtp.mail.yahoo.co.uk, plus.smtp.mail.yahoo.com, etc. Do these same SMTP configuration settings work for sending mail in another email client (Thunderbird, etc.)?