Wildcat76

Members · Converted

  • Posts: 28
  • Gender: Undisclosed

Wildcat76's Achievements

Noob (1/14) · Reputation: 0

  1. IT WORKED! I deleted the /mnt/user/appdata/CrashPlanPRO directory and restarted the container, and all is well. I'm starting a new backup from scratch just to be sure. Thanks for your help, DJoss! 🙌 (A minimal sketch of this reset is included after the post list.)
  2. By "Remove", do you mean remove that text from the Config Directory line in the container settings, or do you mean entirely delete that directory from the file system?
  3. Circling back on this. Any ideas as to what might be causing my issue? Thanks again for your help.
  4. CrashPlanPRO cannot connect to its background service. I'm at my wit's end. I've had this issue for 5 months and haven't been able to configure my client in that time. I finally had a couple of days where I could deep-dive on it, but I'm coming up empty. Full disclosure: I'm no Linux/UNRAID/command-line master by a long shot. What I haven't been able to do is restart the "engine", even after several uninstall/reinstall cycles. At least, I don't think I've been able to. My command-line skills are a 2 out of 10 at best, but I was finally able to at least get to the engine logs (one way to reach them from the shell is sketched after this post list). I've attached the error log. Does anybody have any ideas as to what's going on? I really want to get this back up and running. engine_error.log
  5. Parity check complete with a little under 1000 errors (all corrected). I'll take it. Thanks again to everyone for your help!
  6. Looks like we have a winner. The drive I suspected was the old parity drive did indeed have an unreadable file system when I mounted them all as data drives (with nothing mounted as parity). I'll confirm that the important data is all there and accessible, then stop the array, assign the parity drive, and let it rebuild parity. Thanks so much for the help!
  7. I did make a backup of the entire drive; unfortunately, I accidentally (and rather embarrassingly) overwrote the bz files when I was trying to get a new stick formatted with 6.8.1. I have backups of all of the files from the old stick except bzroot, bzconfig, and go. A dumb mistake that I won't make again, but now I'm just trying to get back to a stable working condition with no data loss. What would happen if I assign a slot to all of the data drives and then start the server? Would it just rebuild the parity drive without writing anything to, or deleting anything from, the data drives?
  8. EDIT: Still had a backup of my old config folder. Copied the whole folder to my new (fresh 6.8.1) USB stick, rebooted, and had everything back the way it was before. Also learned that you can identify your parity drive by assigning all drives as data drives, NO PARITY DRIVE, and then starting the array: the old parity drive will appear as having a file system that can't be read. I upgraded the guts of my server and am trying to bring the array back up, but I don't have a backup of my previous bzroot and bzconfig files (a flash-backup sketch for avoiding this next time follows the post list). I can get into the interface just fine, but I'm scared to start assigning drives for fear of losing data. I'm 95% certain I know which drive was my parity drive. A couple of questions: Is there a way to determine with certainty which drive was the previous configuration's parity drive? What steps should I take to get back to a working server with the old configuration without risking data loss? I'm happy to endure an entire rebuild if necessary. I have 11 drives in total. Thanks in advance for the help.
  9. My biggest fear is losing data, although I'm comforted a bit by knowing that orange text means it's a duplicate, as opposed to a split file (and thus a file that's potentially missing half of its data). I just wish I could access that directory and see what's in there. I'm running a deep scan right now using the Fix Common Problems plug-in. I intend to run a parity check afterwards, but I want to be sure that's the right move. Can you guys confirm whether I should be doing something else?
  10. Filesystem   Size  Used  Avail  Use%  Mounted on
      tmpfs        128M  2.0M   127M    2%  /var/log
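
The appdata reset in post 1 can be done from the Unraid shell. A minimal sketch, assuming the container is named CrashPlanPRO and its config directory is /mnt/user/appdata/CrashPlanPRO (both taken from the post; names can differ on other setups), and keeping in mind that it wipes the container's local CrashPlan settings:

    # Stop the container before touching its config directory.
    docker stop CrashPlanPRO

    # Remove the appdata so the container starts from a clean state.
    # WARNING: this discards the local CrashPlan settings and cache.
    rm -rf /mnt/user/appdata/CrashPlanPRO

    # Start the container again; it recreates its config directory.
    docker start CrashPlanPRO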
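
For the engine-log hunt in post 4, here is one way to find and read the logs from the Unraid shell rather than from inside the container. This is only a sketch; it assumes the same /mnt/user/appdata/CrashPlanPRO config directory and searches for the log file instead of hard-coding a sub-path, since that can vary between image versions:

    # Locate the engine error log somewhere under the container's config directory.
    find /mnt/user/appdata/CrashPlanPRO -name 'engine_error.log'

    # Show the last 50 lines of the first match.
    tail -n 50 "$(find /mnt/user/appdata/CrashPlanPRO -name 'engine_error.log' | head -n 1)"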
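
Posts 7 and 8 both hinge on having a copy of the USB flash drive. Unraid mounts the flash device at /boot, so a dated copy kept on the array makes a lost bzroot or config folder recoverable. A minimal sketch, assuming a /mnt/user/backups share exists (hypothetical; point it anywhere with space):

    # Copy the whole flash drive (bz* files, config/, go) to a dated folder on the array.
    rsync -a /boot/ /mnt/user/backups/flash-$(date +%Y%m%d)/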
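
The output in post 10 is a disk-free check of the log filesystem; on a running server it can be reproduced with:

    # Show size and usage of the filesystem backing /var/log.
    df -h /var/log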