J.R.

Posts posted by J.R.

  1. Recently upgraded to UR V6.9.2 and my CrashPlan reports are coming back as failed to back up. So I finally got a chance to try logging on and see what the problem might be, and I keep getting 'unable to sign in' through the GUI. So I figured I'd try resetting the PW... but there doesn't seem to be any way to do that. The text is there in the GUI, but there doesn't seem to be an actual link attached to it.

     

    I'd appreciate some help here with resetting the PW and, hopefully, figuring out why my backup isn't working. Thanks!!

  2. On 5/21/2019 at 9:23 PM, itimpi said:

    If that is the output from a file system check, then the last line is telling you that it ran a read-only check without actually attempting to fix anything.    The -n flag needs removing if you want it to actually attempt a repair.

    Thanks, let's 'pretend' I'm an idiot here... what is an '-n flag' and how does one go about removing it?

     

    And why was it added, or why has it only recently become a problem...?
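    For reference, a minimal command-line sketch of the difference, assuming the affected disk is an XFS-formatted disk1 and the array is started in maintenance mode (the device name is an assumption; in the GUI the -n typically just sits in the options box of the disk's "Check Filesystem Status" section and can be deleted from there):

    # Read-only check -- this is what -n ("no modify") does: report problems, change nothing
    xfs_repair -n /dev/md1

    # Same command with -n removed: actually attempts the repair
    xfs_repair /dev/md1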

  3. I've been having a problem mounting my external drives (which caused a hell of a problem with CrashPlan!) ever since I updated all my apps/dockers a couple of weeks ago. Here's what it's telling me:

     

    Quote

    ALERT: The filesystem has valuable metadata changes in a log which is being
    ignored because the -n option was used. Expect spurious inconsistencies
    which may be resolved by first mounting the filesystem to replay the log.
    - scan filesystem freespace and inode maps...
    sb_fdblocks 309880806, counted 310375434
    - found root inode chunk
    Phase 3 - for each AG...
    - scan (but don't clear) agi unlinked lists...
    - process known inodes and perform inode discovery...
    - agno = 0
    - agno = 1
    - agno = 2
    - agno = 3
    - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
    - setting up duplicate extent list...
    - check for inodes claiming duplicate blocks...
    - agno = 0
    - agno = 3
    - agno = 2
    - agno = 1
    No modify flag set, skipping phase 5
    Phase 6 - check inode connectivity...
    - traversing filesystem ...
    - traversal finished ...
    - moving disconnected inodes to lost+found ...
    Phase 7 - verify link counts...
    No modify flag set, skipping filesystem flush and exiting.

     

    What does all that mean, and why won't my drives mount all of a sudden? lol
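    For later readers: the ALERT at the top of that output is the important part. A rough sketch of "first mounting the filesystem to replay the log", assuming the external drive's partition is XFS and shows up as /dev/sdX1 (the device name is only a placeholder):

    mkdir -p /mnt/testmount
    mount /dev/sdX1 /mnt/testmount    # mounting replays the XFS journal
    umount /mnt/testmount             # a clean unmount leaves the log empty
    xfs_repair -n /dev/sdX1           # re-run the read-only check; the ALERT should be gone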

     

  4. On 5/20/2019 at 7:55 AM, Djoss said:

    Normally this is fixed by clearing your browser's cache.

    I tried clearing Chrome's browser cache and it still has the same "Execution Error" when I try to start CP. Is there another step? Do I need to restart Docker after or something?

     

    Is there a way to perhaps revert to the previous version, pre-update, before I started having issues?

     

    Just tried re-installing and I'm getting this error:

     

    Quote

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='CrashPlanPRO' --net='bridge' -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e 'USER_ID'='99' -e 'GROUP_ID'='100' -e 'UMASK'='000' -e 'APP_NICENESS'='' -e 'DISPLAY_WIDTH'='1280' -e 'DISPLAY_HEIGHT'='768' -e 'SECURE_CONNECTION'='0' -e 'X11VNC_EXTRA_OPTS'='' -e 'CRASHPLAN_SRV_MAX_MEM'='4096M' -p '7810:5800/tcp' -p '7910:5900/tcp' -v '/mnt/user':'/storage':'ro' -v '/boot':'/flash':'ro' -v '/mnt/disks/1TB Backup 02/Backup 02/':'/1TB Backup 02/Backup 02':'rw,slave' -v '/mnt/disks/2TB Backup 01/Backup 01/':'/2TB Backup 01/Backup 01':'rw,slave' -v '/mnt/user/appdata/CrashPlanPRO':'/config':'rw' 'jlesage/crashplan-pro' 

    eace45a380b2a482aebb8a33230268bdb6777d694cbfecd57afd54953e5f0395
    /usr/bin/docker: Error response from daemon: error while creating mount source path '/mnt/disks/1TB Backup 02/Backup 02': mkdir /mnt/disks/1TB Backup 02: file exists.

     

    Looks like my external drives aren't mounting and that's maybe causing the issue. Now I need to figure out why they're not mounting...

     

    Deleted them from my CP settings and now CP loads fine... I don't know how/why my external drives won't mount, but at least I can back up to the cloud now.
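    In case it helps anyone else who hits the same "error while creating mount source path" message: a couple of generic checks that show whether those /mnt/disks paths are actually mounted drives before the container starts (the paths are copied from the error above, nothing else is assumed):

    ls -la /mnt/disks/                    # the backup drive folders should be present
    mount | grep '/mnt/disks'             # each external drive should show up as a mounted filesystem
    df -h '/mnt/disks/1TB Backup 02'      # confirms a real device is behind the path Docker complained about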

  5. Not sure what happened here... I was doing a large backup to the CrashPlan cloud (not sure if that had anything to do with it), but then Docker stopped working and won't reload.

     

    I've tried rebooting the system, as well as disabling Docker, erasing the img file, and restarting it, but no luck.

     

    Here's what the log says:

     

    Quote

    Oct 2 20:11:35 tower root: failed to create image file
    Oct 2 20:11:35 tower emhttpd: shcmd (122): exit status: 1
    Oct 2 20:12:17 tower emhttpd: req (5): cmd=/plugins/community.applications/scripts/updatePLG.sh&arg1=community.applications.plg&csrf_token=****************
    Oct 2 20:12:17 tower emhttpd: cmd: /usr/local/emhttp/plugins/community.applications/scripts/updatePLG.sh community.applications.plg
    Oct 2 20:12:22 tower root: plugin: running: anonymous
    Oct 2 20:12:22 tower root: plugin: running: anonymous
    Oct 2 20:12:22 tower root: plugin: creating: /boot/config/plugins/community.applications/community.applications-2018.10.02.txz - downloading from URL https://raw.github.com/Squidly271/community.applications/master/archive/community.applications-2018.10.02.txz
    Oct 2 20:12:22 tower root: plugin: checking: /boot/config/plugins/community.applications/community.applications-2018.10.02.txz - MD5
    Oct 2 20:12:22 tower root: plugin: running: /boot/config/plugins/community.applications/community.applications-2018.10.02.txz
    Oct 2 20:12:22 tower root: plugin: running: anonymous
    Oct 2 20:13:09 tower ool www[14631]: /usr/local/emhttp/plugins/dynamix/scripts/emhttpd_update
    Oct 2 20:13:09 tower emhttpd: req (6): cmdStatus=apply&csrf_token=****************
    Oct 2 20:13:09 tower emhttpd: Starting services...
    Oct 2 20:13:17 tower ool www[14631]: /usr/local/emhttp/plugins/dynamix/scripts/emhttpd_update
    Oct 2 20:13:17 tower emhttpd: req (7): cmdStatus=apply&csrf_token=****************
    Oct 2 20:13:17 tower emhttpd: Starting services...
    Oct 2 20:13:17 tower emhttpd: shcmd (150): /usr/local/sbin/mount_image '/mnt/user/docker/docker.img' /var/lib/docker 40
    Oct 2 20:13:17 tower root: Creating new image file: /mnt/user/docker/docker.img size: 40G
    Oct 2 20:13:17 tower shfs: error: shfs_create, 2074: No medium found (123): assign_disk: docker/docker.img
    Oct 2 20:13:17 tower root: touch: cannot touch '/mnt/user/docker/docker.img': No medium found
    Oct 2 20:13:17 tower shfs: error: shfs_create, 2074: No medium found (123): assign_disk: docker/docker.img
    Oct 2 20:13:17 tower shfs: error: shfs_create, 2074: No medium found (123): assign_disk: docker/docker.img
    Oct 2 20:13:17 tower root: failed to create image file
    Oct 2 20:13:17 tower emhttpd: shcmd (150): exit status: 1
    Oct 2 20:16:42 tower ool www[14631]: /usr/local/emhttp/plugins/dynamix/scripts/emhttpd_update
    Oct 2 20:16:43 tower emhttpd: req (8): cmdStatus=apply&csrf_token=****************
    Oct 2 20:16:43 tower emhttpd: Starting services...
    Oct 2 20:17:16 tower ool www[19174]: /usr/local/emhttp/plugins/dynamix/scripts/emhttpd_update
    Oct 2 20:17:16 tower emhttpd: req (9): cmdStatus=apply&csrf_token=****************
    Oct 2 20:17:16 tower emhttpd: Starting services...
    Oct 2 20:17:16 tower emhttpd: shcmd (182): /usr/local/sbin/mount_image '/mnt/user/docker/docker.img' /var/lib/docker 40
    Oct 2 20:17:16 tower root: Creating new image file: /mnt/user/docker/docker.img size: 40G
    Oct 2 20:17:16 tower shfs: error: shfs_create, 2074: No medium found (123): assign_disk: docker/docker.img
    Oct 2 20:17:16 tower root: touch: cannot touch '/mnt/user/docker/docker.img': No medium found
    Oct 2 20:17:16 tower shfs: error: shfs_create, 2074: No medium found (123): assign_disk: docker/docker.img
    Oct 2 20:17:16 tower shfs: error: shfs_create, 2074: No medium found (123): assign_disk: docker/docker.img
    Oct 2 20:17:16 tower root: failed to create image file
    Oct 2 20:17:16 tower emhttpd: shcmd (182): exit status: 1

     

     

    Any help?
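    For anyone else hitting the shfs "No medium found" errors above: a few generic shell checks (the 'docker' share name is taken from the log; everything else here is an assumption) that show whether the user share the image is being created on is actually available:

    ls /mnt/                    # should list disk1, disk2, ..., cache and user
    df -h /mnt/user             # the user share filesystem should be mounted and have free space
    ls -la /mnt/user/docker/    # the share the docker.img is being created in, per the log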

  6. 11 hours ago, Djoss said:

    You need to add an additional "Path".  You click on "Add another Path, Port, Variable, Label or Device" in container settings.  You can then map your external drive to a path inside the container.

    Well on a good note, the cloud backup appears to be doing things!

     

    However, I've tried adding the root location of the external drives (/mnt/disks), the individual drives (/mnt/disks/2TB Backup 01) and the folders on the drives (/mnt/disks/2TB Backup 01/Backup 01/) as new 'paths', and CrashPlan can't seem to see any of them as an available backup location. Not sure what I'm missing there?
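    For reference, what such an added Path ends up looking like in the generated docker run command, in the same -v host:container:options form as the existing mappings earlier in the thread (the container-side name '/backup01' is just an illustrative choice):

    -v '/mnt/disks/2TB Backup 01/Backup 01':'/backup01':'rw,slave'

    One common gotcha: inside the container, CrashPlan only ever sees the container-side path ('/backup01' in this sketch), never the /mnt/disks/... host path, so the container-side path is the one to select in the app.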

  7. On 9/8/2018 at 6:03 PM, Djoss said:

    This is a permission issue.  Where are the files you want to back up located? Under /mnt/user ?

    Also, the paths in CrashPlan don't seem to fit what you have in your container config.

    Yeah, under /mnt/user etc. Am I to gather that 'Storage' is not the path for the backup storage locations, but rather the data I want backed up? (That's not very intuitive, if so...)

     

    How do I go about adding my external drives as alternate backup paths then?
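    Since the container runs as USER_ID=99 / GROUP_ID=100 (nobody:users, per the docker run command earlier in the thread), a quick sketch of checking whether that account can actually read the data; the share name here is only an example:

    ls -ld /mnt/user/Media           # example share; check the owner/group and permission bits
    ls -l /mnt/user/Media | head     # files need to be readable by nobody:users (uid 99 / gid 100)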

  8. I'm so lost on this. I did the migration to CP for SB months ago and it said it successfully uploaded 300+GB to the cloud (and had the internet usage to prove it, it would seem). I thought everything was running fine, as I kept getting green check mark backup reports, until I noticed the fine print of 0 MB being backed up... >:[ (it's been a VERY busy year).

     

    CrashPlan can't seem to actually see some Unraid folders for some backup sets, and I can only 'see' ~40GB of the supposed 300+ that were uploaded to the cloud. Like, where's the rest of my stuff...? How come I can see some UR server folders in some backup sets but not others? And how come, even though I can see some folders in some backup sets, CrashPlan can't actually seem to see them/back them up?

     

    I tried contacting Crashplan and got this:

    Quote

    Thank you for contacting Code42 support!

    Can you make sure CrashPlan has read/write access to that /data/user folder and its sub-folders? It looks like CrashPlan is unable to read your file selection at all.

    Then, run a file verification scan. To trigger a scan, follow the instructions below:

    Open the CrashPlan app.
    Press Ctrl + Shift + C
    The CrashPlan command-line area opens.
    Enter this command: backup.scan
    Press Enter.

    This should trigger a scan.

    Let me know how that goes.

    I tried adding a container path named 'User' ( /mnt/user/ ) under 'Add another path, port, variable or device' with full read/write access, but all that accomplished was making CP unable to load. So I deleted that, obviously.

     

    Please help :/ Let me know if there are any other screen captures, logs, or other info that would be useful.

     

    Attached screenshots: BU1 public and music photos.jpg, BU2 media only musicphotos.jpg, cloud media only movies.jpg, CP on UR.jpg, Main Page.jpg

  9. 15 minutes ago, Djoss said:

    So you disabled and re-enabled docker service (in Settings->Docker)?

    If your containers were installed by the CA plugin, you should be able to re-install them easily using the CA plugin: select the "Previous Apps" section and choose the ones you want to re-install.

     

    No, it wouldn't even 'enable' until I re-pointed it to the img file.

     

    Anyway, I digress... I did do that; I'm just not clear now on whether I should re-install the old version of CrashPlan before I migrate to this one, or just install your new version?

  10. 19 minutes ago, trurl said:

    You can reinstall your dockers with the same settings as before in Community Applications - Previous Apps.

    Thanks!

     

    It's working for Plex, but not showing Transmission (not a big deal). But if I'm migrating to the new version of CrashPlan... should I re-install the old one, or?

     

    NVM, Djoss says I can just install the new one and follow the instructions to migrate.

  11. 5 hours ago, landS said:

    Would one of the Guinea pigs be so kind as to reboot their Unraid servers and see if the CP's Pro update is retained / works smoothly?  Thanks! 

     

    Haven't rebooted yet, as I wanted to make sure everything was running and backing up, and it's doing a large-ish backup now.

     

    5 hours ago, Hoopster said:

     

     

    And did they migrate with the gfjardim CrashPlan docker or the jlesage CrashPlan docker? Is there a difference as far as the migration to CP Small Business, the client updating to Pro, and surviving a reboot? I have used both versions, but I am currently using the jlesage docker, so I am curious to know if they behave the same or differently in the migration.

     

    I'm running the gfjardim version. Can confirm, so far it's been seamless. Might have to wait until next week to confirm reboot.

     

    3 hours ago, Djoss said:

     

    As noted by others, the gfjardim docker will auto-upgrade itself to the PRO version.  Mine won't, for now.  But I will offer an upgrade path that will be pretty easy and seamless, without having to go through the adoption process.

     

    One important thing to know is that if the gfjardim docker is not updated, you won't be able to use it in case you need or want to re-install from scratch, i.e. without an existing appdata.  The reason is that the docker will initially start with the home version, meaning that you won't be able to log in: after being migrated, you cannot use your credentials to log in to a "home" account...

     

    2 hours ago, landS said:

    I did not know we now have 2 CP dockers. So for now, we let the gfjardim Docker update... and if a reinstall is necessary we jump over to the djoss/jlesage Docker?

     

    Interesting. I'll have to try to keep on this if/when I ever need a fresh install.