SilverRubicon

Members
  • Posts

    55
  • Joined

  • Last visited

Everything posted by SilverRubicon

  1. I built my own Docker container using your source on GitHub. It works fine. Maybe Code42 has changed something between the last time you built yours and when I built mine. You should just be able to rebuild it and have it work.
  2. I had the same problem. Not sure why this only happens to a few of us. If I perform a clean install (removing everything, including the container, image, and app data), I see the login screen. If I enter my login information, it states 'information is incorrect', immediately disconnects from VNC, and is dead from that point forward. Not sure if it's just VNC that is dead or if CrashPlan is unresponsive. I can see CP running in top and I can log in on the CP website, so I know that my login info is correct. Finally managed to get this to work. Must have been user error. Or maybe installing 50 times pays off.
  3. I thought it was just bad timing on my part. I also tried disabling autoupdate and didn't have much success. I did remove all traces of my CrashPlan install (image, container, config, etc.) and reinstalled your docker image. I ran into the same problems I had when I disabled autoupdating: unable to log in; able to log in, but things stop working when adopting my server; endless reconnection loops. Generally just a nightmare. I'll just wait for you to update the docker image with the new CrashPlan. I imagine you're getting tired of it as well. 90% of the time the UI is unnecessary; I just decided to dive back in at the wrong time.
  4. It's not the browser but the update of CrashPlan on unRAID. I have tried three different browsers and two different VNC clients. All black screen. It's the update of the CrashPlan application that is causing the problem. Not sure exactly what the issue is, as there are few errors in /mnt/user/appdata/CrashPlan/log/upgrade.1435813200460_403.1461089974843.log. I assume that updating the application is still not supported without updating the docker image.
  5. I quit using CrashPlan and this docker image a while ago because it became too much work. Tried it again today (as I need to start backing up again) and thought, "This is awesome!" Everything came up, the VNC client worked, perfect. Then I assigned a VNC password with the variable, restarted the container, and now I have a black screen. The CrashPlan splash screen stated "upgrading" prior to the black screen of death, so maybe I just have unbelievably bad timing and CrashPlan has updated their software in the last hour. Either way, I also have the black screen, but I like what has been done with the docker image. OK, reinstalled and everything worked. CrashPlan once again upgraded itself and now I'm back to the black screen. Now I remember why I quit using this.
  6. Very nice. This seems like a better option for how I use Crashplan. Thanks.
  7. Do you have unRAID filesystems mounted through the network (NFS, SMB, etc.)? Or is there a more elegant way to access the host's file system?
  8. Even though I'm paid up for two or more years, I'd love to find a replacement for CrashPlan. This is such a headache. At this point, I think a Linux VM (with CrashPlan installed) and links to the unRAID data directories would be a better solution, if it's possible for the VM to mount the data directories. Of course, this wouldn't be an issue if I hadn't moved some directories around at the same time that CrashPlan was updating their software.
  9. I think the best plan of attack is to have a single CrashPlan docker container that is a slimmed-down version of a lightweight Linux. Standalone, single instance. That way everything can be updated as it should be by Code42. As long as CrashPlan on Linux is supported, this docker will work. I know very little about Docker, but I'm going to try to get this working on my own. If I do, I'll share what I find. This is fairly easy to modify to get started... https://github.com/gfjardim/docker-containers/blob/master/crashplan-desktop/install.sh
  10. After 5 years of trouble-free use of CrashPlan and unRAID, it's turned into a nightmare. My server stopped backing up about 2 weeks ago. Fought through the client and server connection issues and now I have this in my server logs...

      [07.10.15 11:56:35.274 INFO W721685851_Authorize com.backup42.service.peer.Authorizer ] AUTH:: Authority address: central.crashplan.com/216.17.8.11
      [07.10.15 11:56:35.288 INFO W721685851_Authorize com.backup42.service.peer.Authorizer ] AUTH:: Error = CPErrors.Global.CPC_UNAVAILABLE : [central.crashplan.com:443]
      [07.10.15 11:56:35.295 INFO W721685851_Authorize com.backup42.service.ui.UIController ] UserActionResponse: LoginResponseMessage@958050516[ session=697590079208731733, errors=[CPErrors.Global.CPC_UNAVAILABLE : [central.crashplan.com:443]] ]{}
      [07.10.15 11:56:35.296 INFO W721685851_Authorize com.backup42.service.peer.Authorizer ] AUTH:: *** END *** Failed after 1min 0sec
      [07.10.15 11:57:25.304 INFO W771295025_SystemWat ckup42.service.peer.CPCAutoRegisterRetry] Remove AutoRegister retry system check, no longer needed.
      [07.10.15 11:58:25.307 INFO W771295025_SystemWat om.backup42.service.peer.CPCConnectCheck] Disconnect from CPC after 1min 0sec
      [07.10.15 11:58:25.308 INFO W771295025_SystemWat com.backup42.service.peer.PeerController] DISCONNECT and REMOVE CPC

      Is anyone else having trouble connecting to the CrashPlan servers? I have connected! This is trying my patience.
  11. How deep did you need to go? All the way to recreating the btrfs docker image? Rebooted the server and all appears well. I think my issues were with BTRFS and not Docker. During shutdown, my disks (all BTRFS) could not be unmounted because the mount points were 'not found'. They were there, as I could ssh into the server and see them, but 'not found' was the error listed in the log files. Unclean shutdown, restart, and everything is working, Docker included. Scrubbed my docker image and an error was corrected...

      btrfs scrub start /var/lib/docker -B -R -d -r 2>&1
      WARNING: errors detected during scrubbing, corrected.
      scrub device /dev/loop8 (id 1) done
          scrub started at Mon Sep 22 11:29:25 2014 and finished after 10 seconds
          data_extents_scrubbed: 42483
          tree_extents_scrubbed: 6220
          data_bytes_scrubbed: 1919709184
          tree_bytes_scrubbed: 101908480
          read_errors: 0
          csum_errors: 0
          verify_errors: 0
          no_csum: 256
          csum_discards: 6002
          super_errors: 0
          malloc_errors: 0
          uncorrectable_errors: 0
          unverified_errors: 1
          corrected_errors: 0
          last_physical: 5406457856
  12. Did you ever figure this out? My Crashplan docker component is now complaining that it's read only.
  13. Interesting. I absolutely had to change mine manually.
  14. I reconfigured my server so it doesn't require PuTTY and ssh. I changed the IP address that the service binds to so that CrashPlan is public on my local network. That way the UI clients can connect at will. You can do this by changing the IP address in /var/lib/docker/crashplan/config/conf/my.service.xml: change instances of 127.0.0.1 to 0.0.0.0 (I believe there are two). Restart CrashPlan and continue with life, frustration-free.
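A minimal sketch of that edit, demonstrated on a throwaway sample rather than the live file (the element names below are illustrative placeholders, not the exact CrashPlan schema; back up the real my.service.xml before touching it):

```shell
# Build a stand-in for my.service.xml with two loopback binds, as described
# above. On a real install the target is
# /var/lib/docker/crashplan/config/conf/my.service.xml.
cat > my.service.xml <<'EOF'
<serviceUIConfig>
  <serviceHost>127.0.0.1</serviceHost>
</serviceUIConfig>
<servicePeerConfig>
  <listenAddress>127.0.0.1:4243</listenAddress>
</servicePeerConfig>
EOF

# Rewrite every loopback bind to all interfaces; -i.bak keeps a backup copy.
sed -i.bak 's/127\.0\.0\.1/0.0.0.0/g' my.service.xml

grep -c '0\.0\.0\.0' my.service.xml   # both occurrences replaced: prints 2
```

After the edit on a real box, restart the container so the service re-reads the config.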
  15. I think BTRFS is more than stable enough for use in unRAID. It will only be using a subset of the feature set, as I assume you will not be using redundant BTRFS arrays from within unRAID. I chose BTRFS because Docker is in the future of unRAID and Docker requires BTRFS. I don't think you can go wrong with either XFS or BTRFS. I've run scrubs on all of my drives. The time it takes depends on the amount of data on the drive. Here are a few samples...

      root@server:~# btrfs scrub status /mnt/disk1
      scrub status for a970b6b4-73f9-47c1-9d5e-bd24ea5d89ed
          scrub started at Thu Sep 18 01:00:01 2014 and finished after 836 seconds
          total bytes scrubbed: 84.73GiB with 0 errors
      root@server:~# btrfs scrub status /mnt/disk2
      scrub status for 6307f35a-ce94-49d0-b615-7f0c328ae7e7
          scrub started at Thu Sep 18 01:15:01 2014 and finished after 12279 seconds
          total bytes scrubbed: 947.09GiB with 0 errors
      root@server:~# btrfs scrub status /mnt/disk3
      scrub status for f76eee1c-fba1-4683-9257-288c1309a004
          scrub started at Thu Sep 18 01:30:01 2014 and finished after 4599 seconds
          total bytes scrubbed: 368.77GiB with 0 errors
      root@server:~# btrfs scrub status /mnt/disk4
      scrub status for 48a85482-b5e1-468f-9f19-6559336357e7
          scrub started at Thu Sep 18 01:45:01 2014 and finished after 3777 seconds
          total bytes scrubbed: 277.72GiB with 0 errors
      root@server:~# btrfs scrub status /mnt/disk5
      scrub status for 61fe317a-4d73-46d3-8351-341011f6c3ad
          scrub started at Thu Sep 18 02:00:01 2014 and finished after 8253 seconds
          total bytes scrubbed: 529.73GiB with 0 errors

      And this is now completely off topic for this forum thread... so I'll drop it.
  16. So it may not be all that useful for unRAID unless you are using snapshots, which I haven't tried yet. BTRFS arrays within the context of unRAID may not be appropriate. Safe, but perhaps overkill. Just read more about scrubs and snapshots... snapshots don't do anything against bit rot: 'Snapshots work by use of btrfs's copy-on-write behaviour. A snapshot and the original it was taken from initially share all of the same data blocks. If that data is damaged in some way (cosmic rays, bad disk sector, accident with dd to the disk), then the snapshot and the original will both be damaged.' Hmmmm... I have now set up daily snapshots for each drive in my array. Will be interesting to see if it causes any havoc with unRAID. I am using the following script and have configured it to keep up to 5 snapshots. http://pastebin.com/U1qgcPu6 What I need is a way to send an error email if a btrfs scrub fails.
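For that scrub-failure e-mail, one hedged approach is a small wrapper that captures a command's output and mails the log when it exits non-zero. The helper name, the recipient address, and the availability of a `mail` command are all assumptions for this sketch:

```shell
#!/bin/sh
# Hypothetical helper: run a command, capture its output, e-mail the log on
# failure. `mail` and admin@example.com are assumptions, not unRAID built-ins.
run_and_alert() {
    subject="$1"; shift
    log=$(mktemp)
    if "$@" >"$log" 2>&1; then
        status=ok
    else
        status=failed
        # Only attempt to mail if a mail command is actually present.
        if command -v mail >/dev/null 2>&1; then
            mail -s "FAILED: $subject" admin@example.com <"$log" || true
        fi
    fi
    rm -f "$log"
    echo "$status"
}

# Intended cron usage (not run here):
#   run_and_alert "btrfs scrub /mnt/disk1" btrfs scrub start -B /mnt/disk1
```

One cron entry per disk (or a loop over the /mnt/disk* mount points) would cover the whole array.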
  17. More modern filesystems. I know that BTRFS is designed to protect against bit rot. XFS may have similar goals but I haven't looked into it.
  18. It can. Not sure of the implications for parity if errors are found and corrected during the scrub; probably best to run a parity check in that case. The scrub will correct errors in both metadata and the actual data. Checksum verification also happens during normal operation, or so I've read. Not sure how that works.
  19. 'btrfs scrub' is sufficient for most use cases. The reason there is no chkdsk equivalent is that there is quite a bit of debate as to whether one is needed.
  20. And only copy from disk to disk; don't use user shares for this kind of copying. And I'd recommend using rsync as well. It will do a bit of data verification, but I don't know if it would recognize issues with the file system.
  21. If anyone has audio files that they would like to check, Foobar2000 has a File Integrity Verifier component. Seems to work well. http://www.foobar2000.org/components/view/foo_verifier
  22. Question... When I was copying data between my ReiserFS disks and my BTRFS disks, I was using rsync. Would rsync be able to recognize and correct data corruption? I copied to and from ReiserFS during my migration. I've looked through my files and haven't found any obvious data corruption issues. I did find 7 corrupt audio files, but I believe these predate my return to unRAID from WHS.
  23. Stop the array, change the format on the disk, then restart the array. You will then see a 'Format disks' option on the home web page. FYI, when changing to BTRFS you will see that it uses quite a bit of disk space even on an empty file system. I don't know how much of that overhead translates to actual lost disk space, but the amount of space used on an empty file system can be jarring. I don't know if XFS consumes less space, but if you're worried about it, try out both XFS and BTRFS. I stuck with BTRFS.
  24. Which cannot be emphasized enough. Do NOT do this on Beta8.
  25. I think everyone understands this, but a bug at the file system level is an entirely different issue. I imagine that even the unRAID developers were shocked at a bug being introduced into the file system. BTRFS and XFS, sure, but ReiserFS? Wow. Bugs in an unRAID beta are a given, but ReiserFS is an entirely different story.