Community Reputation

3 Neutral

About DBJordan

  • Rank
    Advanced Member


  1. For some odd reason, QDirStat isn't able to delete anything. I keep getting this error when I try (this is from the console, but the GUI shows the same message):

     ```
     /storage/Saidar # rm syslog.log
     rm: remove 'syslog.log'? yes
     rm: can't remove 'syslog.log': Read-only file system
     ```

     This is the only Docker container I've got that has a problem creating and removing files. Any thoughts?
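An `rm` failure with "Read-only file system" in only one container usually points at the volume mapping: the path may be mapped with the `:ro` flag, or the underlying filesystem may have dropped to read-only. As a quick check from inside the container, a sketch that reads `/proc/mounts` (the `is_readonly` helper name and its optional file argument are mine, not from any Unraid tool):

```shell
#!/bin/sh
# Report whether a mount point is mounted read-only.
# /proc/mounts fields: device mountpoint fstype options dump pass;
# "ro" among the comma-separated options means read-only.
is_readonly() {
    mounts_file="${2:-/proc/mounts}"
    awk -v target="$1" '
        $2 == target {
            n = split($4, opts, ",")
            for (i = 1; i <= n; i++)
                if (opts[i] == "ro") { print "ro"; exit }
            print "rw"; exit
        }' "$mounts_file"
}

# e.g. is_readonly /storage/Saidar   # prints "ro" or "rw"
```

If it prints "ro" for /storage/Saidar, fix the container's path mapping (set it to read/write) or investigate why the host filesystem remounted read-only.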
  2. Not sure; is there a way to make mover's output more detailed? It just says this:

     ```
     root@Truesource:~# mover
     Specified filename /mnt/disk6/appdata/CrashPlanPRO/.code42/log does not exist.
     ```

     Edit: I got it working once I realized it was in CrashPlanPRO/.code42/log and not CrashPlanPRO/log. I just deleted the directory in the /mnt/user/appdata share, cycled the Docker container, and invoked the mover again. It finished without error. Not sure how it got borked in the first place, but all is working now. Thanks!
  3. When /usr/local/sbin/mover runs, it exits with status 1. Because mover redirects its logging to /dev/null, I executed it manually from the command line, and it complained because this link is invalid:

     ```
     root@Truesource:/mnt/user/appdata/CrashPlanPRO/log# ls -ld log
     lrwxrwxrwx 1 root root 11 Jun 12 10:20 log -> /config/log
     ```

     The link is valid within the container's context, just not outside it. Is anyone else seeing this? If so, is there any way to fix it? I'd prefer to see mover exit 0 in my syslog! Thanks.
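Symlinks created inside a container (like `log -> /config/log`) only resolve where `/config` exists, so they dangle on the host and can trip host-side tools such as mover. GNU find can list dangling links directly; a sketch, with a hypothetical `broken_links` wrapper:

```shell
#!/bin/sh
# Print symlinks under a directory whose targets do not resolve on the
# host. find's -xtype l matches a link whose type *after* following is
# still "l", which only happens when the link dangles.
broken_links() {
    find "$1" -xtype l 2>/dev/null
}

# e.g. broken_links /mnt/user/appdata
```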
  4. Yeah, that seems to be the case. Since Windows 10's WSL has a 9p client built in for its own use, I started wondering if I could mount a 9p drive in a Windows VM, but I didn't get very far. Even if I figure it out, Windows would probably see it as a network drive and Backblaze wouldn't back it up.
  5. I tried setting this up with a Backblaze trial and a CloudBerry trial. CloudBerry generated a lot of errors indicating I hit some kind of Backblaze bandwidth/data daily cap, yet Backblaze says they offer unlimited space? I've found some material in their help pages indicating that their "Personal Backup" cloud vaults are unlimited, but the "B2 Cloud" has all sorts of daily data and bandwidth caps. I also found the B2 data caps page for my account, and I do seem to have hit their daily free limits. Is there a way to use the docker to back up to the unlimited unca
  6. Thanks for the sanity check -- I was only looking at the backplane. It does indeed have a USB 2 header on the motherboard. I'll connect it and give it a whirl. Thanks!
  7. Is there a way to downgrade a USB 3 port to only support USB 2? I have no USB 2 ports on my motherboard.
  8. Yes, it will shut down the VM to do the backup. I'm not sure about the snapshot feature.
  9. Hi @binhex, I was looking for change notes on Sonarr's GitHub, but their releases only go up to build 5322. Do you know where they're storing the 5338 packages? Edit: oh, I think I found them here: https://aur.archlinux.org/packages/sonarr/ It looks like they don't update https://github.com/Sonarr/Sonarr/releases anymore.
  10. I agree if it's not hard, but since the VMBackup plugin is beta free software, I'll take whatever is easiest.
  11. Sounds like a great idea.
  12. Just did a test with a VM that I created using @SpaceInvaderOne's Macinabox docker. The docker creates a VM with this structure:

     ```
     root@Truesource:/mnt/user/domains/Stedding# find .
     .
     ./icon
     ./icon/catalina.png
     ./icon/highsierra.png
     ./icon/mojave.png
     ./icon/osx.png
     ./ovmf
     ./ovmf/OVMF_CODE.fd
     ./ovmf/OVMF_VARS.fd
     ./macos_disk.img
     ./Clover.qcow2
     ./Catalina-install.img
     ```

     VMBackup only backs up the following files:

     ```
     root@Truesource:/mnt/user/Saidar/Backups/Stedding# find . -name '202*'
     ./20200204_1952_Stedding.xml
     ./20200204_1952_OVMF_VARS.fd
     ./20200204_1952_Clover.qcow2
     ./20200204_1952_Catalin
     ```
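Comparing the two listings above, the plugin copied the domain XML, NVRAM file, and disk images but skipped the icon PNGs. VMBackup's actual filter isn't shown in the post; as an assumption drawn only from those listings, this sketch splits a VM folder by extension the same way (`classify_vm_files` is a made-up name):

```shell
#!/bin/sh
# Split a VM folder's files into "image-like" entries (.img, .qcow2,
# .fd, .xml -- the kinds that appeared in the backup listing) and
# everything else. The extension list is inferred from the post, not
# taken from VMBackup's configuration.
classify_vm_files() {
    find "$1" -type f | while IFS= read -r f; do
        case "$f" in
            *.img|*.qcow2|*.fd|*.xml) printf 'backed-up: %s\n' "$f" ;;
            *)                        printf 'skipped:   %s\n' "$f" ;;
        esac
    done
}

# e.g. classify_vm_files /mnt/user/domains/Stedding
```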
  13. When I captured that, I was using Brave 1.1.20 (Chromium 79.0.3945.74, Official Build, 64-bit).
  14. I think this was actually caused by a hardware issue. Unraid does odd things when one of the cache pool drives mysteriously disappears while the array is online... I don't think it had anything to do with the docker container, but thank you for your consideration.