tigerdoc's Achievements

Newbie (1/14)

  1. Well, the only symptom I noticed before was that Docker containers would disappear on restart, and this time they stayed and restarted (the ones with autostart turned on, anyway). I'll call it a win. Thanks to @JorgeB and @trurl for the help!
  2. Replaced all 6 SATA cables in the case. New diags attached. For reference, ATA3 = sdd = spare disk not in the array. (I misread yesterday when I concluded that was parity; parity is ATA5 [sdf].) ATA4 = sde = cache disk. tardis-diagnostics-20210113-1757.zip
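For anyone repeating the same check, the ATA errors show up in the syslog inside the diagnostics zip; below is a minimal sketch for tallying them, assuming GNU grep and an extracted log file (the path, function name, and pattern are illustrative, not taken from the diagnostics):

```shell
# Hypothetical helper: count libata error lines in a kernel/sys log.
# The pattern covers common messages (exception, failed command,
# hard resetting link); adjust it to match your actual log.
count_ata_errors() {
  grep -ciE 'ata[0-9]+\.[0-9]+: (exception|failed command|hard resetting)' "$1"
}

# Usage sketch: count_ata_errors /path/to/syslog.txt
```

If the count drops to zero after a cable swap, that points at the cabling rather than the drives themselves.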
  3. New SATA cables are coming tomorrow for all the disks in the case. I think the PSU is non-modular, so I'm stuck there at least for now. Thank you!
  4. Thanks, @trurl. I hadn't thought of that being an issue, but you're right: all my shares are capitalized, without any lower-case duplicates. All the docker containers point to Appdata (not appdata). Aside from VMs, would anything else point to system vs. System? If not, I suppose I could just rename the shares and edit the docker containers. Or I could leave them as is. @JorgeB, I hadn't noticed those errors. Thank you for catching them. I compressed-air dusted all the drives and connectors. Maybe that did it, because on restart the containers are back. Attached is the latest diag file. At least some of the SATA errors remain, though. For reference, the two disks you pointed to are parity and cache, the latter of which is where the docker file is. tardis-diagnostics-20210112-1016.zip
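Since the concern above is share names that differ only by case (Appdata vs appdata, System vs system), here is one quick way to spot case-only collisions among names, sketched under the assumption of GNU coreutils (uniq -D/-i) and the standard /mnt/user share root; the function name is made up:

```shell
# Hypothetical helper: print names that collide when compared
# case-insensitively (e.g. Appdata vs appdata). GNU sort/uniq assumed.
find_case_dupes() {
  printf '%s\n' "$@" | sort -f | uniq -Di
}

# Usage sketch against live shares: find_case_dupes $(ls /mnt/user)
```

Empty output would mean no case-only duplicates exist, matching what the poster found.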
  5. Hi, all. Thanks in advance for the help. I did a search and couldn't find anything quite the same, so I'm left asking for help. I've noticed this behavior for some time (at least a couple of years), but I'm finally annoyed enough to try to fix it. Every time I shut down my server, I lose my docker containers. On restart, I just see a blank screen under Docker. I can reasonably quickly add them back using the existing templates, and the appdata remains, but I have to do this manually each time. Thankfully that isn't very often, but it's still not great (like when I woke up this morning, realized I didn't get a nightly backup-complete email, and remembered I needed to reinstall the containers after yesterday's restart). Anonymous diagnostics attached for anyone willing to take a look, and I'm happy to answer any questions. Help is greatly appreciated. TD tardis-diagnostics-20210112-0832.zip
  6. I think I got it now. I can set whatever username I want in the Docker container setup and then use that in the native GUI. Thank you!
  7. Would the username then be the "ID" listed on my CloudBerryCentral Settings page (the 36-character hex ID)? When I log into cloudberrycentral, I use my email address.
  8. Apologies for that misunderstanding. Docker log as follows:
     [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
     [s6-init] ensuring user provided files have correct perms...exited 0.
     [fix-attrs.d] applying ownership & permissions fixes...
     [fix-attrs.d] done.
     [cont-init.d] executing container initialization scripts...
     [cont-init.d] 00-app-niceness.sh: executing...
     [cont-init.d] 00-app-niceness.sh: exited 0.
     [cont-init.d] 00-app-script.sh: executing...
     [cont-init.d] 00-app-script.sh: exited 0.
     [cont-init.d] 00-app-user-map.sh: executing...
     [cont-init.d] 00-app-user-map.sh: exited 0.
     [cont-init.d] 00-clean-logmonitor-states.sh: executing...
     [cont-init.d] 00-clean-logmonitor-states.sh: exited 0.
     [cont-init.d] 00-clean-tmp-dir.sh: executing...
     [cont-init.d] 00-clean-tmp-dir.sh: exited 0.
     [cont-init.d] 00-set-app-deps.sh: executing...
     [cont-init.d] 00-set-app-deps.sh: exited 0.
     [cont-init.d] 00-set-home.sh: executing...
     [cont-init.d] 00-set-home.sh: exited 0.
     [cont-init.d] 00-take-config-ownership.sh: executing...
     [cont-init.d] 00-take-config-ownership.sh: exited 0.
     [cont-init.d] 00-xdg-runtime-dir.sh: executing...
     [cont-init.d] 00-xdg-runtime-dir.sh: exited 0.
     [cont-init.d] 10-certs.sh: executing...
     [cont-init.d] 10-certs.sh: exited 0.
     [cont-init.d] 10-cjk-font.sh: executing...
     [cont-init.d] 10-cjk-font.sh: exited 0.
     [cont-init.d] 10-nginx.sh: executing...
     [cont-init.d] 10-nginx.sh: exited 0.
     [cont-init.d] 10-vnc-password.sh: executing...
     [cont-init.d] 10-vnc-password.sh: exited 0.
     [cont-init.d] 10-web-index.sh: executing...
     [cont-init.d] 10-web-index.sh: exited 0.
     [cont-init.d] cloudberrybackup.sh: executing...
     [cont-init.d] cloudberrybackup.sh: Generating machine-id...
     useradd: invalid user name '[EMAILADDRESS]'
     [cont-init.d] cloudberrybackup.sh: exited 3.
     [services.d] stopping services
     [services.d] stopping s6-fdholderd...
     [cont-finish.d] executing container finish scripts...
     [cont-finish.d] done.
     [s6-finish] syncing disks.
     [s6-finish] sending all processes the TERM signal.
     [s6-finish] sending all processes the KILL signal and exiting.
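The useradd line in that log is the actual failure: shadow-utils only accepts names that look like Linux account names, so an email address (containing '@' and '.') is rejected. A rough pre-check is sketched below; the regex approximates useradd's default rule rather than being copied from the container:

```shell
# Hypothetical pre-check approximating useradd's default name rule:
# a lowercase letter or underscore first, then lowercase letters,
# digits, underscores, or hyphens, optionally ending in '$'
# (the '$' suffix is allowed for machine accounts).
is_valid_username() {
  printf '%s' "$1" | grep -Eq '^[a-z_][a-z0-9_-]*\$?$'
}

# Usage sketch: is_valid_username tigerdoc; is_valid_username user@example.com
```

This is consistent with the later post: choosing a plain username for the container setup avoids the exit-3 failure.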
  9. Ah, I forgot to mention that the container log (Appdata/CloudBerryBackup/logs) is empty except for a folder called nginx, and that folder is empty too. Thank you.
  10. I'm hoping someone can help with this one. I installed CloudBerryBackup with default settings, adding my email address as CBB's web interface username and using the hash procedure for the password. Within a few seconds of starting the container, it stops. Attached are the unRAID syslog from the last time I started it (and it stopped) and the docker inspect output. Thanks in advance for any help. syslog.txt CloudBerry_inspect.txt
  11. @ljm42 -- Thanks for the memory tip. I updated to 3 GB and it has been running now for about a half hour without crashing. That may have solved it, at least for me. @Helmonder -- If you're using the new Appdata configuration, you'll find it at /mnt/user/Appdata/crashplan/bin/run.conf
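For anyone else bumping the memory, the change boils down to raising the Java heap limit in that run.conf. An illustrative excerpt follows; the surrounding options vary by CrashPlan version, so treat the elided parts as a sketch, not a verbatim copy of the file:

```
# /mnt/user/Appdata/crashplan/bin/run.conf (illustrative excerpt)
# -Xmx caps the Java heap; 3072m corresponds to the 3 GB mentioned above.
SRV_JAVA_OPTS="... -Xmx3072m ..."
```

CrashPlan's engine scans the whole backup set in memory, so larger file sets generally need a larger -Xmx to stop the crash-restart loop described here.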
  12. Thanks for the suggestion. Tried stopping and restarting CP --> no difference. Forced an update and restarted CP --> no difference. CP appears to crash and restart about every 60 seconds. If I'm connected to the webUI at the time, VNC says it lost the connection and then a few seconds later shows CP restarting.
  13. Has anyone had any issues since last night? Specifically, CrashPlan is stopping and restarting every minute or so. I am able to connect via the web GUI, and the CP history shows entries of CP 4.8.0 starting with the correct GUID and scanning for files. Those entries have repeated about every minute since 12:00 am today. I'm running CP 4.8.0 and Unraid 6.2.1. Thanks.
  14. I just updated to 6.2 from 6.1.9 and found I don't have the System share created. It is not listed in the share tab. I have a cache device, which the cache tab lists correctly. If I try to manually create a System share, I get a message that the System share has been deleted. Thoughts? Edited to add: Looking at the log, my server thinks the cache disk is full. From the log: But the array lists it as having 1 TB free (it's a 1-TB disk with 28 MB on it currently). Any ideas why it would think the cache is full?
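One way to double-check what the OS itself reports, independent of the web UI, is to ask df about the cache mount. A sketch, assuming /mnt/cache is the standard Unraid cache mount point; the helper name is made up:

```shell
# Hypothetical helper: print the use percentage (0-100, no % sign)
# the filesystem reports for a given mount, via POSIX df output.
use_pct() {
  df -P "$1" | awk 'NR==2 { gsub(/%/, "", $5); print $5 }'
}

# Usage sketch: use_pct /mnt/cache
# A value near 100 would match the "cache full" log messages.
```

If df disagrees with the array view, the discrepancy itself (e.g. a stale or wrong mount) becomes the thing to investigate.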
  15. Barakthecat and I have been having a problem for the last 43 days, and I'm curious if anyone has had it and fixed it already. We were backing up to each other very happily for a while without trouble, and now, seemingly all of a sudden, neither can connect to the other. CrashPlan seems to be running fine in the dockers, but they both say the other person is offline. Both are upgraded to 4.4.1. Both of us can use the GUI in the CP-Desktop docker. Neither of us has changed routers or router settings, so I can't think of any new firewall issues. Timing-wise, the problem seems to have started when 4.4.1 came out (and I coincidentally upgraded my workstation to Windows 10, but I can't see how that would affect CP on the unRAID server). Anyone have ideas? Thanks in advance. - TD