khager

Everything posted by khager

  1. For posterity, I pieced together what happened: I did NOT have a static IP address set on the unraid server like I remembered. Instead, I had a DHCP reservation set on my router. I vaguely remember doing this when my ISP decided to change DNS servers and not tell anyone. Using a reservation let me keep the local IP address the same but passed through whatever name servers the ISP decided on. That's the piece I forgot. A few days ago, my router decided to revert to factory settings, so I lost that reservation. My mistake was not associating the router event with the problem accessing the unraid server via the web interface. So - I got a random IP address assigned to the unraid server. The IP addresses on the dockers didn't change so they kept working. My shares are "remembered" by OS X (as opposed to a script I used to use to mount drives) so those kept working. Mystery solved (except for why my router decided to reset to factory settings - but that's for another day).
  2. I figured it out. A simple "ifconfig" command at the console told me the IP address is NOT what I set it to and NOT what it has been for years. I'm in and all is good. And I can change the IP address back to what it's supposed to be. I just don't know how it was changed. I haven't touched the settings in years. Any ideas?
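For anyone landing here with the same symptom, the console check described above can be sketched like this. The `first_ipv4` helper and the `eth0` interface name are illustrative assumptions, not Unraid tools:

```shell
# On the Unraid console, check what address the box is actually using:
#   ifconfig
#   ip -o -4 addr show
#
# Small helper to pull the first IPv4 address out of `ip -o -4 addr`
# style output; purely illustrative text processing.
first_ipv4() {
  grep -o 'inet [0-9.]*' | head -n1 | cut -d' ' -f2
}
```

Typical use (interface name is an assumption): `ip -o -4 addr show eth0 | first_ipv4` — then compare the result with the address you expect the server to have.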
  3. Attached is the diagnostics file created using the "diagnostics" command from the console. Any assistance would be very much appreciated (can't connect to the web interface, can't SSH, but everything else is working - I've already tried rebooting the server and multiple browsers & client workstations). Thanks hunraid01-diagnostics-20220829-1952.zip
  4. I read the link - thank you - I'll get those tomorrow.
  5. I don't - but I can bring one home from work tomorrow. Once connected, what should I do?
  6. At this time, everything on my Unraid server is working (dockers, shares, etc.) - EXCEPT - I cannot access the web interface and I cannot access the command line via SSH. Nothing changed - it was working yesterday and today it isn't. I don't even have auto updates turned on (I do them manually myself so I know when something changes). I tried a cold reboot (hold down the power button until it turns off, wait, turn it back on, wait) - still nothing. Going to the web interface gets me the error "cannot open page because the server where the page is located isn't responding". SSH just times out. Where do I start troubleshooting? I can access the "flash" share if there's something on there that will give a hint. Any help would be appreciated. Thanks
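A rough way to separate "whole server unreachable" from "just the web UI and SSH services down" is to probe the TCP ports from another machine on the LAN. This is a sketch; the server address in the examples is a placeholder:

```shell
# port_open HOST PORT -> exit 0 if the TCP port accepts a connection.
# Uses bash's built-in /dev/tcp redirection plus coreutils `timeout`.
port_open() {
  local host="$1" port="$2"
  timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null
}

# Examples (addresses are placeholders, not run here):
#   port_open 192.168.1.50 80 && echo "web UI port reachable"
#   port_open 192.168.1.50 22 && echo "SSH port reachable"
```

If the ports are closed but shares still work (as in the post above), the box is up and the question becomes why those services stopped listening — or, as it turned out here, whether you are probing the right IP address at all.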
  7. Not sure if this is the right forum but I felt compelled to post this somewhere. Move it or delete it if you must. I just wanted to say how much I appreciate this docker. After many years of struggle, I finally got rid of CrashPlan so this might come off as something everyone else already knew and I just discovered. What a joy to get a backup solution that works. No mysterious stopping half-way through a backup. No hang-ups, no wondering what will get backed up next ... everything is clear as day and, as I said, it just works. Restores are easy too (which, I suppose, is the most important part of a backup plan). I liked it so much I turned off "Time Machine" on my Macs and got the CloudBerry solution. Turns out it works too. Time Machine was too ... magical ... for my liking. Plus it was a lot slower getting the job done. I guess more processing power is required for time travel than for backing up some files. Thanks to the developers, the contributors, to everyone who had a hand in this. I like it because it works.
  8. I added another subfolder with an apostrophe in the name and it got backed up successfully. So, I can only conclude that .sparsebundle files are purposefully excluded by CloudBerry and/or Backblaze. Can anyone confirm this?
  9. I added another folder to my backup source and a file within that folder - these backed up successfully. My working theory is either: a) CloudBerry excludes .sparsebundle files (which are "special" on macOS but are just folders on Linux), or b) CloudBerry doesn't like folders/files with an apostrophe in the filename. Any ideas?
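For what it's worth, a .sparsebundle really is just a directory on Linux — typically an Info.plist, a token file, and a bands/ subdirectory full of band files — so you can count what the backup tool should be seeing. A sketch, with a hypothetical path:

```shell
# A .sparsebundle is an ordinary directory tree from Linux's point of view.
# Count the regular files inside it to see what a backup tool should find:
count_files() {
  find "$1" -type f | wc -l
}

# Example (path is hypothetical):
#   count_files "/mnt/user/backups/Mac.sparsebundle"
```

If the count is large but the backup report shows only one file transferred, that points at an exclusion rule rather than anything wrong with the share itself.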
  10. I just installed CloudBerry & signed up for Backblaze. My first backup bucket is a folder that contains 1 hidden file and 1 sparsebundle (it's the sparsebundle and everything in it that I'm interested in). When I trigger a backup, it only backs up the hidden file. The backup report shows no errors and only the 1 file backed up. What'd I get wrong?
  11. I just had a thought - are .sparsebundle folders excluded somehow?
  12. I set this up and when I run a backup, it only backs up the files in the folder I selected. It does not back up subfolders nor any files within those subfolders. No errors are generated. What'd I do wrong?
  13. I started with no Pinned Apps (that option was grayed out / disabled on the left sidebar). I pinned plex (linuxserver/plex). This is the only app I pinned. Now when I click Pinned Apps (using Safari), I get this message:

      Something really wrong went on during pinnedApps
      Post the ENTIRE contents of this message in the Community Applications Support Thread
      OS: 6.9.2
      Browser: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.1 Safari/605.1.15
      Language: <br /> <b>Warning</b>: usort() expects parameter 1 to be array, null given in <b>/usr/local/emhttp/plugins/community.applications/include/exec.php</b> on line <b>1270</b><br /> {"status":"ok"}

      If I click Pinned Apps using Chrome, I get the same message and warning, just with the Chrome user agent:

      Browser: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36

      Any ideas?
  14. Both valid points. I did disable the dockers & VMs before running Mover. Maybe I got something in the wrong order. I did wait long enough - or at least the UI said docker wasn't running anymore. All that got relegated to the curious when I realized my appdata backup would suffice.
  15. Thanks everyone for lending a hand on this. Just closing the loop with this post. I got my dockers up and running. I finally understand this in my brain: the docker image ONLY contains the dockers as released. NO configuration/user data is there. ALL configuration/user data is in appdata. For a long time, that never sank in. Once I understood this, it was an easy fix since I have appdata backed up. Create a docker image, tick off previous apps & install, restore appdata, Bob's your uncle - or, in the words of the late, great John Madden, "BOOM! It's over." A few notes:
      - The trick of setting cache on the shares to "Yes" and using Mover didn't work for me. It took forever and files were left behind. Setting it back to "Only" afterward didn't move the files back. I had to clean this up manually in both directions. Using "CA Backup / Restore Appdata" to restore my backup was much cleaner and works a peach.
      - I need a new motherboard & SATA3 controller. This is completely unrelated to my problem but now that I've recovered my dockers I'm on to expanding the array. 17-19 hours to reconstruct a disk is awful.
      - That new motherboard needs a couple of M.2 slots for cache drives.
      Thanks again.
  16. memtest showed no errors. I didn't have to reformat my cache drive(s). I just deleted the old docker.img and created a new one. Then I disabled docker, restored appdata from backup (using the plugin), enabled docker, and rebooted. Now I get this on my docker tab: How do I get my docker apps running again? Do I have to install them fresh (and then maybe overwrite appdata again)?
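The delete-and-recreate step described in these posts might be sketched as below. The docker.img path is an assumption (check Settings > Docker for the actual location on your system), and the Docker service should be stopped from the web UI before touching the file:

```shell
# Sketch only -- stop the Docker service in Settings > Docker first.
# Unraid builds a fresh docker.img when the service is re-enabled,
# so "recreate" just means deleting the old image.
remove_docker_img() {
  local img="$1"
  if [ -f "$img" ]; then
    rm -- "$img"
  fi
}

# Example (path is an assumption):
#   remove_docker_img /mnt/user/system/docker/docker.img
#
# Afterward: restore appdata from backup, re-enable Docker, then reinstall
# containers via Apps > Previous Apps -- the saved templates keep each
# container's settings, and the restored appdata supplies the user data.
```

This matches the mental model from the thread: the image holds only the containers as released, while all configuration and user data lives in appdata.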
  17. I've done both. How do I reformat the cache drive?
  18. Gotcha. I'll have to dig out a monitor and keyboard - it'll be a minute.
  19. I've run Balance and Scrub but no dice. Is memtest a RAM tester? Can you provide the syntax? It's not a valid bash command on my server. I'm fine with a 128GB - I just thought it might have run out of space (forgot I did that). Is it possible to reformat, recreate the docker image, and copy the apps/files back over to the cache disk?
  20. I've searched the forum and can't find anyone who's had an issue like this (or the posts are several versions old). I'm adding larger disks to my array. So far I've only replaced the parity drive. Then I noticed Docker wasn't running. I get this screen when I click the Docker tab. The Cache drive (RAID1) is up and available / the share works / I can access files, so I changed my cache settings from "Only" to "Yes" and ran Mover. I tried creating a new docker image but that didn't help. Obviously I'm out of my league here. I'd really like to get my docker apps back up and running with the same settings and data I had before (one of them is Plex and I really don't want to rebuild that from scratch since I have all the data - plus backups). Any help would be most appreciated. I've attached diagnostics. Thanks hunraid01-diagnostics-20220102-0611.zip
  21. OK - skip the "how do you create docker.img" question. It was already created. I followed your instructions and, by doing so, I learned a few things. I still don't fully understand what's part of the docker and what's part of the backup. In any case, I'm up and running so thanks for posting those instructions (again <sheepish>).
  22. That's what I read but I was confused by restoring the backup before adding the previously-installed docker apps. And - how do you "recreate your docker.img file"? I thought that got created when you installed the docker apps (?) ...told you I was confused...
  23. Steps to restore? My cache drive crashed. I replaced it. Now I want to restore my dockers from backup. Is there a step-by-step? The wiki didn't make sense to me - please help me understand. Seems like I should 1) reinstall my dockers from the "Previous apps" page 2) run the restore process using CA Appdata Backup/Restore plugin 3) maybe restart 4) done, right? That's not what it seemed like the wiki was telling me to do. Can someone please point me in the right direction for the order of tasks? Thanks, Kyle
  24. @JorgeB - thank you. Your advice got me where I wanted. Unfortunately, that cache disk is too far gone to be fixed by xfs_repair. After I get the new drives delivered, I'll have to rebuild my dockers from backup. It was a good try but didn't work out this time. Kyle
  25. Here's my situation: Currently running v6.8.2. My Cache (single disk) was configured to be used ONLY for dockers (no actual "caching" of the array disks). I got an error on the Cache disk (unmountable / no file system) and decided I wanted to get 2 new disks to mirror it. While waiting for the 2 new disks to arrive, I unassigned and removed the Cache disk. Currently unRaid is running with no cache and the HDD that was the cache disk is sitting on my desk. In other words, I impulsively decided to chalk up the loss (I have backups of Appdata) and move on. Now I regret that decision because I never tried an xfs_repair (which I have done on other disks with success and it saved me a lot of heartache). What I would like to do: Reinstall that old cache disk and make unRaid think it's in the state it was before I removed it. I don't want to format it and I don't want to clear it. I want it back in that previous state so I can attempt an xfs_repair. Any suggestions on how to install that old Cache disk without clearing/formatting and get the system settings back to their original state (e.g., don't use it for caching the array disks)? Should I try the xfs_repair first - then assign it back as Cache if successful? Thanks, Kyle
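If the old disk goes back in unassigned, a cautious first pass with xfs_repair could look like the sketch below. `/dev/sdX1` is a placeholder (confirm the actual device with `lsblk` and run against the partition, not the whole disk), and the `-n` flag makes it a read-only dry run that only reports problems:

```shell
# Dry-run first: xfs_repair -n inspects the filesystem without writing.
# The device name is a placeholder -- verify it with `lsblk` before running.
try_repair() {
  local dev="$1"
  if [ ! -b "$dev" ]; then
    echo "not a block device: $dev" >&2
    return 1
  fi
  xfs_repair -n "$dev"   # report only; drop -n to actually repair
}

# Example: try_repair /dev/sdX1
```

Running the dry run while the disk is still unassigned answers the "should I try xfs_repair first?" question without committing to anything: if it reports a repairable filesystem, the real repair (without `-n`) can follow before reassigning the disk as cache.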