snowboardjoe

Everything posted by snowboardjoe

  1. SMTP configuration was accepted, but no email ever arrived, so I'm blocked from setting up additional accounts. I'm seeing several complaints about this on the forums, so it appears to be a known and widespread problem with 6.4.54. Sigh.
  2. One other thing I just discovered today: all logging has stopped since I started running 6.4.54. All logs are enabled, but it's been dead silent for a long time now.
  3. Of the 20 containers I'm running, this container is the largest at 1.35GB. Is this normal? I did some digging into the container yesterday and did not immediately find anything that was not mapped properly to my /mnt/user/appdata/deludge directory.
  4. Ah, found it. I set the tag to version-6.4.54 for now. Oddly enough, it chose to re-download all of the software again, but it seems stable for now. That should lock things down. My next task was sorting out how to create a separate admin account for Unpoller to grab telemetry from UniFi, but I'm still stuck on that. Any attempt to add one complains there was a problem sending email (I don't expect that to be working). Without the email part, it can't send an invite, and you can't set a password manually. I'll see if there are some SMTP settings that can mitigate this.

     Update: found the setting for the mail server. The config claims it can use the cloud to send the mail if remote access is turned on (it is), but I'm guessing that just fails with this Docker image. I hate to set up another SMTP config, but I'll see if that gets me over the finish line. I don't want to grant a monitoring tool the ability to modify my network configuration by using my own credentials. Creating a user should not be this hard, nor should it depend on email.
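Once a dedicated read-only UniFi account exists, Unpoller only needs those credentials rather than yours. A sketch using Unpoller's environment-variable convention (variable names as I recall them from Unpoller's docs; URL, user, and password are placeholders, not real values):

```shell
# Hypothetical values: point Unpoller at the controller with its own
# read-only local account instead of your admin credentials.
export UP_UNIFI_DEFAULT_URL="https://192.168.1.10:8443"
export UP_UNIFI_DEFAULT_USER="unpoller"
export UP_UNIFI_DEFAULT_PASS="change-me"
```

The same three values can be passed as `-e` flags on the container instead of exported in a shell.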
  5. Just now catching up on why there have been so many UI changes in this Docker. I, too, was using the latest tag and regret it now. Currently on 6.4.54 and I miss some of the features from before. Wondering if there is any way to roll back a version at this point, and how I can keep it stable going forward (other than just not updating that container).
  6. I upgraded unRAID 6 days ago, but that log shows it running solid ever since. I was wondering whether the uptime from the Docker view accurately represents the true time it was up; will keep monitoring it. Alternatively, I'm starting to look at AWS Glacier with the s3sync container for the really large media that are static, to get that out of CP, but then I'm paying for both services. One project at a time; I want to keep this current config stable for now. I think CP changed some retention options over time. I found some ridiculous settings in there keeping extra versions way too long. Right when I fixed that, they announced they're doing the same thing globally. Odd. I think some configuration slipped in there generating extra versions for a lot of clients.
  7. Backups restored. I think it was the frequency setting I had that caused the everlasting file synchronizations. Support pretty much told me to add 10GB of RAM to the container (they don't know this is a container) to have reliable backups, because that was their recommendation, and ended it there, even though I'm only backing up 81K files. When I explained I'm getting good backups, they told me that was likely a false status, as it's not possible to back up 11TB of data with 2GB of RAM available to the container. Really? Wow. You can't verify the file scan completes successfully, and this is one of your core functions. But if I add gobs of RAM, they'll support me and the file scans will be successful. How will they know it's successful if it's already reporting a false positive? Brilliant. I'll be looking at backup alternatives.
  8. CPU is likely not the issue at all. It's been some time, but years ago with large backups the client would run a data de-duplication job to find and consolidate redundant data, and it was painstakingly slow. There was a setting I added long ago that told the client NOT to do this, and backups were off and flying at record pace. I don't know if this is still a thing; I've not customized that setting in ages, and it may be gone for all I know.
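If that tweak still exists, it lived in the client's my.service.xml. The element names below are from memory of old CrashPlan tuning guides, so treat the whole fragment as an assumption rather than a documented setting:

```xml
<!-- Hypothetical fragment of conf/my.service.xml (legacy setting names).
     Dropping the auto de-dup size thresholds to 1 byte effectively told
     the client to skip its slow block-level de-duplication pass. -->
<serviceBackupConfig>
  <backupConfig>
    <dataDeDupAutoMaxFileSize>1</dataDeDupAutoMaxFileSize>
    <dataDeDupAutoMaxFileSizeForWan>1</dataDeDupAutoMaxFileSizeForWan>
  </backupConfig>
</serviceBackupConfig>
```

Newer clients may have replaced this with the GUI's de-duplication setting, if they expose it at all.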
  9. Day 42 and still no complete backup. I've been working with support and they claim my issue has been escalated, but that does not seem to have changed the urgency to resolve it. They're currently throwing out some ideas, and I've rejected most of them because they don't make sense. For example, they said I needed to increase the Java memory allocation from 2GB to 9GB. Uh, no. I'm only using 700MB. They seem to think this thing is crashing over and over again when it's not. Having to wait 3-5 days for a synchronization to complete is a big problem. Having it repeat this endlessly is a bigger problem. Not sure what to tell them, and I don't want to share that this is a container for fear they'll just hang up on me.
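For anyone who does want to try support's suggestion in a container: the commonly used jlesage/crashplan-pro image exposes the engine's Java heap as an environment variable, so no Java flags need editing directly. A sketch, assuming that image (paths and size are illustrative):

```shell
# Raise the CrashPlan engine's max Java heap via the image's documented
# CRASHPLAN_SRV_MAX_MEM variable (value here is an example, not a recommendation).
docker run -d --name=crashplan-pro \
  -e CRASHPLAN_SRV_MAX_MEM=4G \
  -v /mnt/user/appdata/CrashPlanPRO:/config \
  -v /mnt/user:/storage:ro \
  jlesage/crashplan-pro
```

On unRAID the same variable can be added in the container template instead of a raw docker run.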
  10. For the past two weeks CrashPlan is telling me maintenance is still in progress and not to authorize my client until I hear from them. I'm now at 29 days of no backups for this host. I don't know how to escalate this issue with them.
  11. I also updated the container to the latest version (8.7.0.780) and the counter has reset back to 0%. Based on my estimates and the current rate, it will be another 36 hours before this completes. This is unacceptable.
  12. Wondering if I need to do the same thing. No backups for over a week now because it's "Synchronizing block information". I don't recall it ever taking this long. Currently at 57%. Not happy.
  13. SMART settings were altered after upgrading 6.8.3 > 6.9.1. This includes temperature settings and the monitoring of attribute 197 on my SSDs. Is that expected when upgrading the unRAID OS?
  14. Did you find a solution that worked for you? I, too, want to set a preset that automatically selects passthru for all tracks. I don't know how to set this from the GUI. Guessing possibly from the config itself?
  15. I'm getting much faster rates than that and have been using the service for many years now. Is your rate dropping over time? Did it ever complete the initial full backup?
  16. Fixed the issue. Not sure if this is already documented somewhere. The /storage mount point is strictly configured to be read-only (a safe thing to do for security). In order to restore files, you need to create a new, writable mount point in the container configuration. In my case, I just added /restore, mapped to /mnt/user/scratch/restore. I provided /restore as the destination for the restore job and it worked just fine.
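Expressed as docker flags, the mapping described above looks roughly like this (the image name is whatever you already run; jlesage/crashplan-pro is shown purely for illustration):

```shell
# Keep /storage read-only for safety, and add a separate writable
# /restore target that restore jobs can write into.
docker run -d --name=crashplan-pro \
  -v /mnt/user:/storage:ro \
  -v /mnt/user/scratch/restore:/restore \
  jlesage/crashplan-pro
```

Restored files then land under /mnt/user/scratch/restore on the host and can be moved into place afterwards.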
  17. Restores are still failing here:

     root@laffy:/mnt/user/appdata/CrashPlanPRO/log# more restore_files.log.0
     I 12/23/20 10:25AM 41 Starting restore from CrashPlan PRO Online: 2 files (53.10GB)
     I 12/23/20 10:25AM 41 Restoring files to original location
     I 12/23/20 10:42AM 41 Restore from CrashPlan PRO Online completed: 0 files restored @ 445.2Mbps
     W 12/23/20 10:42AM 41 2 files had a problem
     W 12/23/20 10:42AM 41 - Restore failed for /storage/movies/[redacted].mkv: /storage/movies/.cprestoretmp986488868677447596 (Read-only file system)
     W 12/23/20 10:42AM 41 - Restore failed for /storage/movies/[redacted].mkv: /storage/movies/.cprestoretmp986489628182015916 (Read-only file system)

     Someone said this might be fixed in the latest version, but I was not sure if I needed to set UID/GID to 0, and whether there are any security concerns with that.
     UPDATE: Set UID/GID to 0 and restores are now in progress.
     UPDATE2: Still failed due to the read-only status. I have no idea how to restore files. This is pretty serious now.
  18. I'm seeing the same thing. The dashboard reports I'm using 83% of my 16GB docker image, but a report on container sizes shows I'm only using 4GB. The numbers are not adding up. The main thing that caught my attention: the directory /var/lib/docker/btrfs is using 42GB? Confused.
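To reconcile the two numbers, it can help to compare Docker's own accounting against what btrfs thinks it has allocated. These are standard docker and btrfs-progs commands; the second assumes the docker image is mounted at /var/lib/docker, as is typical on unRAID:

```shell
# Per-image, per-container, and per-volume usage as Docker accounts for it
docker system df -v

# Space btrfs has actually allocated inside the docker image's filesystem
btrfs filesystem df /var/lib/docker
```

Btrfs chunk allocation can be well ahead of real data usage, which is one common reason the dashboard percentage and the container-size report disagree.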
  19. 5.9 is getting pretty old. The memory leak is annoying, yes, but manageable. So many features you're missing out on in 5.12.
  20. Also have a look at this thread, where it was discussed. I had the same problem and this fixed it.
  21. My proxy setting is off; I never enabled it. I don't think you need it, since the container is managing that for you already.
  22. I was getting ready to point out that the WebUI was not responding. Knowing now that the error at the bottom was non-fatal, I reviewed the output and saw this:

     2020-01-11 08:57:11,626 DEBG 'start-script' stdout output:
     [warn] Unable to load iptable_mangle module, you will not be able to connect to the applications Web UI or Privoxy outside of your LAN
     [info] unRAID/Ubuntu users: Please attempt to load the module by executing the following on your host: '/sbin/modprobe iptable_mangle'
     [info] Synology users: Please attempt to load the module by executing the following on your host: 'insmod /lib/modules/iptable_mangle.ko'

     I ran that and restarted the container and now all is well. Thanks!
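One caveat: a module loaded with modprobe does not survive a reboot. On unRAID, a common way to persist it (assuming the stock /boot/config/go startup script, which runs at every boot) is:

```shell
# Append the module load to unRAID's startup script so the iptable_mangle
# module is loaded again automatically after every reboot.
echo '/sbin/modprobe iptable_mangle' >> /boot/config/go
```

Newer kernels may ship the mangle table built in, in which case this step becomes unnecessary.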
  23. Been running this for a few years, but today it appears to not be working. I think it stopped working when I went through and updated all of my Docker containers and upgraded unRAID OS to 6.8.0. I'm using NordVPN. I keep failing with this error in particular:

     2020-01-10 18:56:02,252 DEBG 'watchdog-script' stdout output:
     [info] Starting Deluge Web UI...
     [info] Deluge Web UI started
     2020-01-10 18:56:02,501 DEBG 'watchdog-script' stderr output:
     Unable to initialize gettext/locale!
     2020-01-10 18:56:02,501 DEBG 'watchdog-script' stderr output:
     'ngettext'
     Traceback (most recent call last):
       File "/usr/lib/python3.8/site-packages/deluge/i18n/util.py", line 118, in setup_translation
         builtins.__dict__['_n'] = builtins.__dict__['ngettext']
     KeyError: 'ngettext'

     Any ideas on what may be causing this issue?
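For what it's worth, the failure is in Deluge's i18n bootstrap: gettext.install() is expected to inject ngettext into builtins, and when locale setup fails it doesn't, so the later builtins lookup raises KeyError. A minimal sketch of the mechanism with a defensive fallback (the fallback lambdas are my own illustration, not Deluge's code):

```python
import builtins
import gettext

# gettext.install() puts _() into builtins; names=["ngettext"] asks it to
# inject ngettext() as well. In the broken container, this injection never
# happened, so Deluge's lookup of builtins['ngettext'] raised KeyError.
gettext.install("deluge", names=["ngettext"])

# Defensive fallback (illustrative only): guarantee the names exist even
# if install() could not inject them, so the _n alias below cannot fail.
builtins.__dict__.setdefault("_", lambda msg: msg)
builtins.__dict__.setdefault(
    "ngettext", lambda singular, plural, n: singular if n == 1 else plural
)
builtins.__dict__["_n"] = builtins.__dict__["ngettext"]
```

With no translation catalog found, the installed functions act as pass-throughs, which is exactly the behavior Deluge falls back to when locale initialization succeeds but no catalog exists.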