khager

Posts posted by khager

  1. For posterity:

    I pieced together what happened:

    I did NOT have a static IP address set on the Unraid server like I remembered. Instead, I had a DHCP reservation set on my router. I vaguely remember doing this when my ISP changed DNS servers without telling anyone. Using a reservation let me keep the local IP address the same while passing through whatever name servers the ISP chose. That's the piece I forgot.

    A few days ago, my router decided to revert to factory settings, so I lost that reservation.

    My mistake was not connecting the router event to the problem accessing the Unraid server via the web interface.

    So the Unraid server got assigned a random IP address. The IP addresses on the dockers didn't change, so they kept working. My shares are "remembered" by macOS (as opposed to a script I used to use to mount drives), so those kept working.

    Mystery solved (except for why my router decided to reset to factory settings - but that's for another day).
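
    For anyone who lands here later: the quickest way I know to tell whether the server is on DHCP or a true static address is the network config on the flash drive (stock location; a quick check from the console or SSH):

    # USE_DHCP="yes" means the server takes whatever address the router hands out,
    # so a router reset like mine can silently change the IP
    cat /boot/config/network.cfg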

  2. At this time, everything on my Unraid server is working (dockers, shares, etc.) - EXCEPT - I cannot access the web interface and I cannot access the command line via SSH.

    Nothing changed - it was working yesterday and today it isn't. I don't even have auto updates turned on (I do them manually myself so I know when something changes). I tried a cold reboot (hold down the power button until it turns off, wait, turn it back on, wait) - still nothing.

     

    Going to the web interface gets me the error "cannot open page because the server where the page is located isn't responding"

    SSH just times out

     

    Where do I start troubleshooting? I can access the "flash" share if there's something on there that will give a hint.
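
    If it helps, here's what I was planning to poke at from my Mac (the hostname and mount path below are guesses based on the defaults):

    # See if the server still answers by its mDNS name (assuming the default hostname "tower")
    ping tower.local

    # With the flash share mounted, the network settings should be under config/
    cat /Volumes/flash/config/network.cfg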

    Any help would be appreciated.

    Thanks

  3. Not sure if this is the right forum but I felt compelled to post this somewhere. Move it or delete it if you must.

     

    I just wanted to say how much I appreciate this docker. After many years of struggle, I finally got rid of CrashPlan, so this might come off as something everyone else already knew and I just discovered.

    What a joy to get a backup solution that works. No mysterious stopping half-way through a backup. No hang-ups, no wondering what will get backed up next ... everything is clear as day and, as I said, it just works.

    Restores are easy too (which, I suppose, is the most important part of a backup plan).

    I liked it so much I turned off "Time Machine" on my Macs and got the CloudBerry solution. Turns out it works too. Time Machine was too ... magical ... for my liking. Plus it was a lot slower getting the job done. I guess more processing power is required for time travel than for backing up some files.

     

    Thanks to the developers, the contributors, to everyone who had a hand in this. I like it because it works.

  4. 26 minutes ago, khager said:

    I added another folder to my backup source and a file within that folder - these backed up successfully.

    My working theory is either:

    a) CloudBerry excludes .sparsebundle files (which are "special" on MacOS but are just folders on Linux)

    b) CloudBerry doesn't like folders/files with an apostrophe in the filename

     

    Any ideas?

    I added another subfolder with an apostrophe in the name and it got backed up successfully

     

    So, I can only conclude that .sparsebundle files are purposefully excluded by CloudBerry and/or Backblaze.

     

    Can anyone confirm this?
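
    For reference, here's what I mean about the Linux side - the bundle is just an ordinary directory (the path below is made up; substitute your own share):

    # A .sparsebundle is only "special" to macOS; on Linux it's a plain folder
    ls /mnt/user/backups/MyMac.sparsebundle/
    # typically lists: Info.plist  Info.bckup  token  bands/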

  5. 12 minutes ago, khager said:

    I just installed CloudBerry & signed up for Backblaze. 

    My first backup bucket is a folder that contains 1 hidden file and 1 sparsebundle (it's the sparsebundle and everything in it that I'm interested in).

    When I trigger a backup, it only backs up the hidden file. The backup report shows no errors and only the 1 file backed up.

     

    What'd I get wrong?

    I added another folder to my backup source and a file within that folder - these backed up successfully.

    My working theory is either:

    a) CloudBerry excludes .sparsebundle files (which are "special" on MacOS but are just folders on Linux)

    b) CloudBerry doesn't like folders/files with an apostrophe in the filename

     

    Any ideas?

  6. I just installed CloudBerry & signed up for Backblaze. 

    My first backup bucket is a folder that contains 1 hidden file and 1 sparsebundle (it's the sparsebundle and everything in it that I'm interested in).

    When I trigger a backup, it only backs up the hidden file. The backup report shows no errors and only the 1 file backed up.

     

    What'd I get wrong?

  7. I started with no Pinned Apps (that option was grayed out / disabled on the left sidebar).

    I pinned plex (linuxserver/plex). This is the only app I pinned.

    Now when I click Pinned Apps (using Safari), I get this message:

    Something really wrong went on during pinnedApps
    Post the ENTIRE contents of this message in the Community Applications Support Thread
    
    OS: 6.9.2 
    Browser: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.1 Safari/605.1.15
    Language: 
    
    <br /> <b>Warning</b>: usort() expects parameter 1 to be array, null given in <b>/usr/local/emhttp/plugins/community.applications/include/exec.php</b> on line <b>1270</b><br /> {"status":"ok"}

     

    If I click Pinned Apps using Chrome, I get this:

    Something really wrong went on during pinnedApps
    Post the ENTIRE contents of this message in the Community Applications Support Thread
    
    OS: 6.9.2
    Browser: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36
    Language:
    
    <br /> <b>Warning</b>: usort() expects parameter 1 to be array, null given in <b>/usr/local/emhttp/plugins/community.applications/include/exec.php</b> on line <b>1270</b><br /> {"status":"ok"}

     

    Any ideas?

  8. Both valid points. I did disable the dockers & VMs before running Mover. Maybe I got something in the wrong order. I did wait long enough - or at least the UI said Docker wasn't running anymore. All of that became academic once I realized my appdata backup would suffice.

  9. Thanks everyone for lending a hand on this.

     

    Just closing the loop with this post. I got my dockers up and running, and I finally understand it: the docker image ONLY contains the apps as released - NO configuration/user data is there. ALL configuration/user data is in appdata. For a long time, that never sank in.

     

    Once I understood this, it was an easy fix since I have appdata backed up. Create a docker image, tick off previous apps & install, restore appdata, Bob's your uncle - or, in the words of the late, great John Madden, "BOOM! It's over."
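
    If it helps anyone else get this straight, here's a rough sketch of how the pieces map, using linuxserver/plex as the example (the paths are the usual Unraid defaults, not something specific to my setup):

    # The image is disposable - it's just the app as released.
    # Everything stateful lives in the mapped appdata path.
    docker run -d --name=plex \
      -v /mnt/user/appdata/plex:/config \
      lscr.io/linuxserver/plex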

     

    A few notes:

    - The trick of setting cache on the shares to "Yes" and using Mover didn't work for me. It took forever and files were left behind. Setting it back to "Only" afterward didn't move the files back, so I had to clean things up manually in both directions. Using "CA Backup / Restore Appdata" to restore my backup was much cleaner and worked a treat.

    - I need a new motherboard & SATA3 controller. This is completely unrelated to my problem but now that I've recovered my dockers I'm on to expanding the array. 17-19 hours to reconstruct a disk is awful.

    - That new motherboard needs a couple M.2 slots for cache drives. 

     

    Thanks again.

  10. memtest showed no errors

     

    I didn't have to reformat my cache drive(s). I just deleted the old docker.img and created a new one.

    Then I disabled docker, restored appdata from backup (using the plugin), enabled docker, and rebooted.

    Now I get this on my docker tab:

    [Screenshot: Docker tab, 2022-01-02 7:43 AM]

     

    How do I get my docker apps running again? Do I have to install them fresh (and then maybe overwrite appdata again)?

     

  11. 2 minutes ago, Squid said:

    Use Mover to get everything off of the cache drive (stop the docker service and set the appdata share to be useCache: yes)

     

    Alternatively, back up the appdata with the appdata backup plugin (destination on the array) and when ready, restore it back to the cache drive

     

    I've done both. How do I reformat the cache drive?

  12. I've run Balance and Scrub but no dice.

     

    16 minutes ago, JorgeB said:

    Btrfs is detecting data corruption, start by running memtest.

    Is memtest a RAM tester? Can you provide the syntax? It's not a valid bash command on my server.

     

    19 minutes ago, Squid said:

    Also, why a 256G docker image?

    I'm fine with 128GB - I just thought it might have run out of space (I'd forgotten I set it to 256G).

     

    Is it possible to reformat, recreate the docker image, and copy the apps/files back over to the cache disk?

  13. I've searched the forum and can't find anyone who's had an issue like this (or the posts are several versions old).

    I'm adding larger disks to my array. So far I've only replaced the parity drive. Then I noticed Docker wasn't running. I get this screen when I click the Docker tab:

    [Screenshot: Docker tab error, 2022-01-02 6:03 AM]

     

    The cache drive (RAID1) is up and available - the share works and I can access files - so I changed my cache settings from "Only" to "Yes" and ran Mover. I tried creating a new docker image but that didn't help. Obviously I'm out of my league here.

     

    I'd really like to get my docker apps back up and running with the same settings and data I had before (one of them is Plex and I really don't want to rebuild that from scratch since I have all the data - plus backups).

     

    Any help would be most appreciated. I've attached diagnostics.

     

    Thanks

    hunraid01-diagnostics-20220102-0611.zip

  14. OK - skip the "how do you create docker.img" question. It was already created.

    I followed your instructions and, by doing so, I learned a few things. I still don't fully understand what's part of the docker and what's part of the backup.

     

    In any case, I'm up and running so thanks for posting those instructions (again <sheepish>).

  15. Steps to restore?

     

    My cache drive crashed. I replaced it. Now I want to restore my dockers from backup.

    Is there a step-by-step? The wiki didn't make sense to me - please help me understand.

    Seems like I should

    1) reinstall my dockers from the "Previous apps" page

    2) run the restore process using CA Appdata Backup/Restore plugin

    3) maybe restart

    4) done, right?

     

    That's not what the wiki seemed to be telling me to do.

    Can someone please point me in the right direction for the order of tasks?

     

    Thanks,

    Kyle

  16. Here's my situation:

    Currently running v6.8.2. My cache (single disk) was configured to be used ONLY for dockers (no actual "caching" of the array disks). I got an error on the cache disk (unmountable / no file system) and decided I wanted to get 2 new disks to mirror it. While waiting for the 2 new disks to arrive, I unassigned and removed the cache disk. Right now Unraid is running with no cache, and the HDD that was the cache disk is sitting on my desk.

     

    In other words, I impulsively decided to chalk up the loss (I have backups of appdata) and move on. Now I regret that decision because I never tried an xfs_repair (which I have done on other disks with success, and it saved me a lot of heartache).

     

    What I would like to do:

    Reinstall that old cache disk and make Unraid think it's in the state it was in before I removed it. I don't want to format it and I don't want to clear it. I want it back in that previous state so I can attempt an xfs_repair.

     

    Any suggestions on how to install that old Cache disk without clearing/formatting and get the system settings back to their original state (e.g., don't use it for caching the array disks)?

    Should I try the xfs_repair first - then assign it back as Cache if successful?
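
    For the record, here's the sequence I've used on array disks before (the device name below is a placeholder - I'd confirm the right one from the Main tab first, with the filesystem unmounted):

    # Dry run: -n reports problems without writing anything to the disk
    xfs_repair -n /dev/sdX1

    # If the dry run looks sane, run the actual repair
    xfs_repair /dev/sdX1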

     

    Thanks,

    Kyle