Everything posted by snowboardjoe

  1. I'm about 50% of the way through migrating 4 disks from ReiserFS to XFS. So far, no issues until something I noticed this morning after formatting disk 2 last night and bringing the array back online. Everything is up and running, but I noticed that /mnt/disk2 has no subdirectories for the shares. I was tinkering with the shares to exclude the disks I was working on at the time and stop unRAID from adding more files to those locations. As I reformat each disk, I go back and enable it again to use all disks, then exclude the next one. Now I feel like something is confused in the configuration. How do I get disk2 to start accepting new data again? I did not have this issue with /mnt/disk1. I'm running 6.9.2 and have paused my migration work for now (what I had planned to do anyway for a few days). laffy-diagnostics-20220210-1216.zip
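     For anyone following along, the quick sanity check is just comparing the top-level directories on each disk; a share only spans a disk once its top-level directory exists there, so it should reappear on disk2 once the share includes it again and new data lands. The share name in the mkdir below is only an example:
        ls -d /mnt/disk1/*/
        ls -d /mnt/disk2/*/
        # if needed, the top-level directory can also be created by hand
        # ("Movies" is a hypothetical share name):
        mkdir -p /mnt/disk2/Movies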
  2. Has anyone mapped the container path /config/Library/Application Support/Plex Media Server/Media/localhost to a separate share? I'm over 100GB in this directory now and wondering if there is any benefit to moving it off the cache (it would still use the cache for new files).
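     If anyone tries it, I'd expect it to amount to one extra path mapping on the Plex container, something along these lines (the host share name is made up; quotes needed because of the spaces):
        # hypothetical extra volume mapping added to the Plex container
        -v "/mnt/user/plexmeta/localhost:/config/Library/Application Support/Plex Media Server/Media/localhost"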
  3. SMTP configuration was accepted, but no email ever arrives, so I'm blocked from setting up additional accounts. I'm seeing several complaints about this on the forums, so it appears to be a known and widespread problem with 6.4.54. Sigh.
  4. One other thing I just discovered today: all logging has stopped since moving to 6.4.54. All logs are enabled, but it's been dead silent for a long time now.
  5. Of the 20 containers I'm running, this container is the largest at 1.35GB. Is this normal? I did some digging inside the container yesterday and did not immediately find anything that was not mapped properly to my /mnt/user/appdata/deludge directory.
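     For anyone else digging, this is roughly how I compared the writable-layer size against what's actually inside (container name is a placeholder; du -x skips the bind-mounted volumes, so only unmapped paths are counted):
        docker ps -s --format '{{.Names}}\t{{.Size}}'
        docker exec <container-name> du -xh -d 2 / 2>/dev/null | sort -h | tail -20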
  6. Ah, found it. I set it to version-6.4.54 for now. Oddly enough it chose to re-download all of the software again, but it seems stable for now. That should lock things down. My next task was sorting out how to create a separate admin account for Unpoller to grab telemetry from Unifi, but I'm still stuck on that. Any attempt to add one complains that there was a problem sending email (I don't expect that to be working). Without the email part, it can't send an invite and you can't set a password manually. Will see if there are some SMTP settings that can mitigate this. Update... Found the setting for the mail server. The config claims it can use the cloud to send the mail if remote access is turned on (it is), but I'm guessing that just fails with this Docker image. I hate to set up another SMTP config, but will see if that gets me over the finish line here. I don't want to give a monitoring tool the ability to modify my network configuration by using my own credentials. Creating a user should not be this hard, nor should it depend on email.
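     For anyone wanting to do the same: pinning just means putting the version tag in the repository field instead of latest, something like the line below, though the exact image name depends on which Unifi container you're running:
        # pin the container to a fixed release instead of :latest
        linuxserver/unifi-controller:version-6.4.54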
  7. Just now catching up on why there have been so many changes in the UI for this Docker. I, too, was using the latest tag and regret it now. Currently on 6.4.54 and I miss some of the features from before. Wondering if there is any way to back-rev at this point and how I can keep it stable going forward (other than just not updating that container).
  8. I upgraded unRAID 6 days ago, but that log shows it running solid ever since. I was wondering whether the uptime from the Docker view accurately represents the true time it was up. Will keep monitoring it. Alternatively, I'm starting to look at AWS Glacier and using the s3sync container for the really large media files that are static, to get those out of CP, but then I'm paying for both services. One project at a time. I want to keep this current config stable for now. I think CP changed some retention options over time. I found some ridiculous settings in there keeping extra versions way too long. Right when I did that, they announced they're doing the same thing globally. Odd. I think some configuration slipped in there generating extra versions for a lot of clients.
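     If I do go the Glacier route, the actual sync would presumably boil down to something like this (bucket name is made up; DEEP_ARCHIVE is just one of the archive storage classes):
        aws s3 sync /mnt/user/media s3://my-archive-bucket \
            --storage-class DEEP_ARCHIVE --size-only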
  9. Backups restored. I think it was a setting with the frequency I had that caused the everlasting file synchronizations. Support pretty much told me to add 10GB of RAM to the container (they don't know this is a container) to have reliable backups, because that was their recommendation, and ended it there, even though I'm only backing up 81K files. When I explained that I'm getting good backups, they told me that was likely a false status because it's not possible to back up 11TB of data with 2GB of RAM available to the container. Really? Wow. So you can't verify that the file scan completes successfully, and this is one of your core functions. But if I add gobs of RAM, they'll support me and the file scans will be successful. How will they know it's successful if it's already reporting a false positive? Brilliant. I'll be looking at backup alternatives.
  10. CPU is likely not the issue at all. It's been some time, but years ago with large backups the client would run a data deduplication job to find and consolidate redundant data. This was painstakingly slow. There was a setting I added long ago that told the client NOT to do this, and backups were off and flying at record pace. I don't know if this is still a thing; I haven't customized that setting in ages and it may be gone for all I know.
  11. Day 42 and still no complete backup. I've been working with support and they claim my issue has been escalated, but that does not seem to have changed the urgency to resolve it. They're currently throwing out some ideas and I've rejected most of them because they don't make sense. For example, they said I needed to increase the Java memory allocation from 2GB to 9GB. Uh, no. I'm only using 700MB. They seem to think this thing is crashing over and over again when it's not. Having to wait 3-5 days for a synchronization to complete is a big problem. Having it repeat that endlessly is a bigger problem. Not sure what to tell them, and I don't want to reveal this is a container for fear they'll just hang up on me.
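     For reference, if I ever did want to humor them, I believe bumping the engine's Java heap in this container is just an environment variable on the container; the variable name below assumes the image I'm running, so treat it as a guess:
        -e CRASHPLAN_SRV_MAX_MEM=9G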
  12. For the past two weeks CrashPlan is telling me maintenance is still in progress and not to authorize my client until I hear from them. I'm now at 29 days of no backups for this host. I don't know how to escalate this issue with them.
  13. I also updated the container to the latest version (8.7.0.780) and the counter has reset back to 0%. Based on my estimates and the current rate, it will be another 36 hours before this completes. This is unacceptable.
  14. Wondering if I need to do the same thing. No backups for over a week now because it's "Synchronizing block information". I don't recall it ever taking this long. Currently at 57%. Not happy.
  15. SMART settings were altered after upgrading 6.8.3 > 6.9.1. This includes temperature settings and the monitoring of attribute 197 on my SSDs. Is that expected when upgrading the unRAID OS?
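     For anyone wanting to eyeball attribute 197 directly rather than rely on the dashboard (device name is just an example):
        smartctl -A /dev/sdb | grep -Ei '197|current_pending'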
  16. Did you find a solution that worked for you? I, too, want to set a preset that automatically selects passthru for all tracks. I don't know how to set this from the GUI. Guessing possibly from the config itself?
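     If the GUI preset never exposes it, my guess is the equivalent HandBrakeCLI flags would look roughly like this; I haven't verified it against the container's preset JSON, and the copy-mask list is trimmed:
        HandBrakeCLI -i input.mkv -o output.mkv \
            --all-audio --aencoder copy \
            --audio-copy-mask aac,ac3,eac3,truehd,dts,dtshd,mp3,flac \
            --audio-fallback ac3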
  17. I'm getting much faster rates than that and have been using the service for many years now. Is your rate dropping over time? Did it ever complete the initial full backup?
  18. Fixed the issue. Not sure if this is already documented somewhere. The /storage mount point is strictly configured to be read-only (a safe thing to do for security). In order to restore files, you need to create a new mount point in the container configuration. In my case, I just added /restore and mapped it to /mnt/user/scratch/restore. I gave /restore as the destination for the restore job and it worked just fine.
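     In docker terms the fix amounts to one extra mapping on the container (paths as I set them), leaving /storage read-only:
        -v /mnt/user/scratch/restore:/restore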
  19. Restores are still failing here:
      root@laffy:/mnt/user/appdata/CrashPlanPRO/log# more restore_files.log.0
      I 12/23/20 10:25AM 41 Starting restore from CrashPlan PRO Online: 2 files (53.10GB)
      I 12/23/20 10:25AM 41 Restoring files to original location
      I 12/23/20 10:42AM 41 Restore from CrashPlan PRO Online completed: 0 files restored @ 445.2Mbps
      W 12/23/20 10:42AM 41 2 files had a problem
      W 12/23/20 10:42AM 41 - Restore failed for /storage/movies/[redacted].mkv: /storage/movies/.cprestoretmp986488868677447596 (Read-only file system)
      W 12/23/20 10:42AM 41 - Restore failed for /storage/movies/[redacted].mkv: /storage/movies/.cprestoretmp986489628182015916 (Read-only file system)
      Someone said this might be fixed in the latest version, but I was not sure if I needed to set UID/GID to 0 and whether there were any security concerns with that. UPDATE: Set UID/GID to 0 and restores are now in progress. UPDATE2: Still failed due to read-only status. I have no idea how to restore files. This is pretty serious now.
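     For reference, by "UID/GID" I mean the container's user/group environment variables; I believe in this image they are set like the line below, but the variable names are my assumption:
        -e USER_ID=0 -e GROUP_ID=0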
  20. I'm seeing the same thing. The dashboard reports I'm using 83% of my 16GB docker image, but a report on container sizes shows I'm only using 4GB. The numbers are not adding up. The main thing that caught my attention is that the directory /var/lib/docker/btrfs is using 42GB. Confused.
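     The numbers I was comparing came from something like this (the btrfs path is where docker.img is mounted on my system):
        docker system df -v
        du -sh /var/lib/docker/btrfs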
  21. 5.9 is getting pretty old. The memory leak is annoying, yes, but manageable. So many features you're missing out on in 5.12.
  22. Also have a look at this thread, where the issue was discussed. I had the same problem and it fixed it for me.
  23. My proxy setting is off; I never enabled it. I don't think you need it since the container is managing that for you already.