khager

Members · 90 posts

  1. OK - skip the "how do you create docker.img" question. It was already created. I followed your instructions and, by doing so, I learned a few things. I still don't fully understand what's part of the docker and what's part of the backup. In any case, I'm up and running so thanks for posting those instructions (again <sheepish>).
  2. That's what I read but I was confused by restoring the backup before adding the previously-installed docker apps. And - how do you "recreate your docker.img file"? I thought that got created when you installed the docker apps (?) ...told you I was confused...
  3. Steps to restore? My cache drive crashed and I replaced it. Now I want to restore my dockers from backup. Is there a step-by-step? The wiki didn't make sense to me - please help me understand. It seems like I should: 1) reinstall my dockers from the "Previous apps" page, 2) run the restore process using the CA Appdata Backup/Restore plugin, 3) maybe restart, 4) done, right? That's not what the wiki seemed to be telling me to do. Can someone please point me in the right direction for the order of tasks? Thanks, Kyle
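     A minimal sketch of the order in question, assuming the plugin wrote its usual tar archive and that appdata lives at /mnt/user/appdata (the archive name and paths are assumptions - check the plugin's settings page for the real ones). The key point, per the advice referenced above, is that the appdata restore happens before the containers are reinstalled:

         # 1) recreate docker.img via Settings > Docker if it was lost
         # 2) restore appdata BEFORE reinstalling any containers - either via
         #    the plugin's Restore tab or manually, e.g.:
         tar -xzf /mnt/user/backups/CA_backup.tar.gz -C /mnt/user/appdata
         # 3) reinstall containers from Apps > Previous Apps; each template
         #    picks up its restored appdata automatically
         # 4) start the containers and verify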
  4. @JorgeB - thank you. Your advice got me where I wanted. Unfortunately, that cache disk is too far gone to be fixed by xfs_repair. After I get the new drives delivered, I'll have to rebuild my dockers from backup. It was a good try but didn't work out this time. Kyle
  5. Here's my situation: currently running v6.8.2. My Cache (single disk) was configured to be used ONLY for dockers (no actual "caching" of the array disks). I got an error on the Cache disk (unmountable / no file system) and decided I wanted to get 2 new disks to mirror it. While waiting for the 2 new disks to arrive, I unassigned and removed the Cache disk. Currently unRaid is running with no cache and the HDD that was the cache disk is sitting on my desk. In other words, I impulsively decided to chalk up the loss (I have backups of Appdata) and move on. Now I regret that decision because I never tried an xfs_repair (which I have done on other disks with success, and it has saved me a lot of heartache).
     What I would like to do: reinstall that old Cache disk and make unRaid think it's in the state it was in before I removed it. I don't want to format it and I don't want to clear it. I want it back in that previous state so I can attempt an xfs_repair.
     Any suggestions on how to install that old Cache disk without clearing/formatting and get the system settings back to their original state (e.g., don't use it for caching the array disks)? Should I try the xfs_repair first - then assign it back as Cache if successful? Thanks, Kyle
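     Since the disk is currently unassigned, the filesystem check can be run from the command line before reassigning anything. A minimal sketch, assuming the old Cache disk appears as /dev/sdX with the filesystem on partition 1 (the device letter is an assumption - confirm with lsblk):

         # dry run first: -n reports problems without writing anything
         xfs_repair -n /dev/sdX1
         # if the dry run looks sane, run the actual repair
         xfs_repair /dev/sdX1
         # if xfs_repair complains about a dirty log, mounting and unmounting
         # the disk once replays it; xfs_repair -L zeroes the log as a last
         # resort (it can lose the most recent metadata changes)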
  6. @TRURL - thank you. Worked like a charm. Now I have an empty slot to mirror my cache drive and my OCD is satiated. I keep losing cache drives for some reason - maybe because I keep using my old disks that I pulled out of the array (?). I have two new WD Reds on order for mirroring cache this time so, hopefully, it'll last for a while. I don't post here very often (unRaid user & member since 2009) so I just have to say - again - I love unRaid. Especially the later versions. Thanks to everyone for a great product.
  7. I read the whole thread and I just want to double-triple-quadruple check that I'm doing this the right way. I'm running v6.8.2. I have 1 parity drive and did have 7 data drives. Disk5 was old and starting to get errors. I have enough space that I can just remove that disk from the array (and it'll free up a slot for me to be able to mirror my Cache drive). So far I have:
     - Moved all data off of disk5 onto other disks
     - Used the "shrink array" instructions in the wiki to unassign disk5
     - Started a parity check, which is currently running ("Parity is valid" was NOT checked)
     When that's done I want to:
     - shut down the server
     - physically remove what was disk5 from the server and throw it away
     - physically move disk6 to the slot previously occupied by disk5
     - physically move disk7 to the slot previously occupied by disk6
     What will I see when I bring it back up after physically moving those 2 drives to different SATA cables (and removing the one that's now unassigned)? Can I just do a New Config and reassign the drives that were "disk6" and "disk7" to "disk5" and "disk6"? When I restart the array after this New Config, should I indicate parity is valid or not? Thanks, Kyle
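     One detail that makes the physical shuffle low-risk: unRaid identifies array members by drive serial number, not by SATA port, so moving drives to different cables doesn't disturb their assignments. A quick sketch for confirming what the OS sees after recabling (plain Linux, nothing unRaid-specific):

         # map serial-number IDs to current sdX device names; the sdX letters
         # may change after recabling, but the serials should not
         ls -l /dev/disk/by-id/ | grep -v part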
  8. Well...now restores are working. I do not know what changed. Maybe some folder permissions, maybe I'm crazy. In any case, restores are working like they should and I no longer care why they didn't before. I was able to restore a photograph library from 2016 that got corrupted some time between Jan and May this year. I'm happier about that than I am curious why the previous restore attempts didn't work. So, in the words of the late, great Roseanne Roseannadanna... never mind...
  9. I've tried restoring to just about anywhere the interface will let me. Example: the container path of /storage/... maps to the host path of /mnt/user. This allows me to back up from anywhere on the Unraid array. I've tried restoring to several different shares in that path. I've also tried to restore to /config in the container - I log into the container but there are no restored files in that path. I've also tried restoring to "original location" but that yielded the same results.
     restore_tool_app.log contains these lines for the most recent attempt:
         INFO : 2020/05/22 06:28:07.715174 restore_tool.go:99: RestoreTool: Start
         INFO : 2020/05/22 06:28:07.763725 restore_tool.go:100: Runtime directory: /tmp/com.code42.restore/app
         INFO : 2020/05/22 06:29:47.520276 restore_tool.go:472: Received terminate gracefully message
         INFO : 2020/05/22 06:29:47.520737 restore_tool.go:474: Received keep-alive message
         INFO : 2020/05/22 06:29:47.526648 restore_tool.go:197: Error received reading size (possibly end-of-file). err=EOF
         INFO : 2020/05/22 06:29:48.525365 restore_tool.go:171: RestoreTool: Graceful Exit
     Tail of history.log.0 contains:
         I 05/22/20 06:28AM Starting restore from CrashPlan PRO Online: 3,445 files (135.70MB)
         I 05/22/20 06:28AM Restoring files to /config
         I 05/22/20 06:29AM Restore from CrashPlan PRO Online completed: 3,445 files restored @ 52.4Mbps
     The last line in restore_files.log.0 contains:
         05/22/20 06:29AM 41 Restore from CrashPlan PRO Online completed: 3,445 files restored @ 52.4Mbps
     Tail of service.log.0 contains:
         [05.22.20 06:29:48.318 INFO ub-BackupMgr om.backup42.service.AppLogWriter] WRITE app.log in 499ms
         [05.22.20 06:29:48.350 INFO ub-BackupMgr 42.service.history.HistoryLogger] HISTORY:: Restore from CrashPlan PRO Online completed: 3,445 files restored @ 52.4Mbps
     Tail of ui.log contains (note: times in this log are UTC and I'm in CDT, so UTC-5 for local time):
         2020-05-22T11:28:07.392Z - info Restore: Successfully created restore job
         2020-05-22T11:28:07.688Z - info: Launching process: (20016) /usr/local/crashplan/bin/restore-tool -userName=app -logDir=/config/.code42/log /tmp/restore-pipe-955293482861554839-request /tmp/restore-pipe-955293482861554839-response
         2020-05-22T11:29:48.532Z - info: Process exited cleanly with code 0 and signal null
     A restore takes as long as you would expect, counting up the amount of data it's restoring, etc. The log files even show the throughput figures. It's just not putting the restored files anywhere I can find.
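     One way to pin down where the restore actually landed, assuming the container is named crashplan and has GNU find available (both are assumptions - check docker ps for the real name):

         # show the container's volume mappings, i.e. what /config and
         # /storage resolve to on the host
         docker inspect --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' crashplan
         # list files written under those paths since the restore started
         # (timestamp taken from the logs above); may take a while on /storage
         docker exec crashplan find /config /storage -type f -newermt "2020-05-22 06:28" 2>/dev/null | head -50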
  10. I am unable to restore files from my CrashPlan backup. I have the Access Mode set to Read/Write. When I restore, CrashPlan goes through the motions of downloading - takes several minutes to get 4GB down, etc. But then no files are ever restored. restore_files.log lists all the files being restored and ends with "Restore from CrashPlan PRO Online completed: 1,463 files restored @ 49.3Mbps" - but no files were restored. I've tried restoring to the original location & a different location. I've tried setting to "overwrite" and "rename" but still nothing. Any ideas?
  11. The "oneFilePanel" entry in .cloudcmd.json is set to "false". I have deleted the docker and cleaned up the leftover file in appdata. I'll try to install again and see if it behaves.
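     For what it's worth, a quick way to double-check the value the server actually reads, assuming the container is named cloudcmd and its config is mapped to /mnt/user/appdata/cloudcmd (both are assumptions):

         # confirm the on-disk value of the setting
         grep oneFilePanel /mnt/user/appdata/cloudcmd/.cloudcmd.json
         # restart so any hand edit is picked up
         docker restart cloudcmd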
  12. I love this tool so far - been using it for about a week while I reorganize my shares. One problem: I can no longer see 2 file panels - just one (larger/full-width) panel. I don't know what I did to turn it off, but if I show the config pop-up, "One File Panel" is NOT checked. I tried clicking it on and back off but no good. I tried stopping and restarting the docker - still only one file panel. How do I get the 2-file-panel view back on? Thanks,
  13. Those settings are very similar to the several SMTP servers I've tried. In my case, it times out without getting any response back from the SMTP server. My thinking is that the only way I would get a "No reply" / timeout error is if I got the address or port wrong; other incorrect settings would likely produce a different error. Is that a correct assumption?
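     That assumption can be tested directly from the unRaid console, separating "can't reach the server" from "wrong settings". A sketch with a placeholder hostname and port:

         # plain TCP connect test; a timeout here points at the network or a
         # firewall, not at the SMTP settings themselves
         nc -vz smtp.example.com 587
         # if the port answers, this should print the server's 220 banner and
         # negotiate STARTTLS, proving the address and port are right
         openssl s_client -starttls smtp -connect smtp.example.com:587 -crlf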
  14. I haven't been on this forum in several years - a testament to unRaid reliability and the online help I get here. I'd like to set up email notifications but can't get past the setup. Every time I try to test the SMTP settings, I always get "No reply from email server". I have access to several SMTP servers and I get this reply every time. It's not even trying to log in. I've searched and searched the forum with no luck - this has to be something simple I've overlooked. A router or firewall setting? Some other setting in unRaid? It's just not that hard to contact an SMTP server - I must be paying my stupid-tax. Any help? Edit: I'm running 6.2.4
  15. Oh well. At this point I think I'll just wait for my new disks to arrive, preclear on that controller, bring one into the array and see what happens. I've already prepared a couple old drives and that worked fine but I've decided to wait on the new disks before bringing one into the array. I'll post back here on that once I have some info.