CorneliousJD

Members
  • Content Count: 190
  • Joined
  • Last visited

Community Reputation: 13 Good

About CorneliousJD
  • Rank: Advanced Member


  1. Cool, thanks Squid! I already had the Auto Update plugin installed and up to date, and I still got the update notice last night for a ton of containers updating, even though one in particular (Deluge) I have set to be locked at a specific version. This isn't a big deal to me though; it's just a notice I can ignore daily, since I assume this will be fixed in the next Unraid release as well?
     Server: Docker Auto Update
     Community Applications Deluge Grocy Heimdall HomeAssistant Jackett LetsEncrypt Lidarr MariaDB Nextcloud Ombi PiHole Portainer Radarr Sonarr UniFi
     Automatically Updated
     normal
  2. Got this overnight on the scheduled run of the scan for common problems (Fix Common Problems plugin):
     Event: Fix Common Problems - Server
     Subject: Warnings have been found with your server. (Server)
     Description: Investigate at Settings / User Utilities / Fix Common Problems
     Importance: warning
     **** Template URL for docker application Bitwarden is missing. ****
     **** Template URL for docker application FileBot is missing. ****
     **** Template URL for docker application Grocy is missing. ****
     **** Template URL for docker application HA-Dockermon is missing. ****
     **** Template URL for docker application HomeAssistant is missing. ****
     **** Template URL for docker application PiHole is missing. ****
     **** Template URL for docker application Portainer is not the same as what the template author specified. ****
     **** Template URL for docker application Tautulli is not the same as what the template author specified. ****
     **** Template URL for docker application UniFi is not the same as what the template author specified. ****
     **** Template URL for docker application UniFiVideo is missing. ****
     I re-ran the scan and am still seeing this. Nothing had changed beforehand; server uptime is almost 18 days right now. PS: I had been getting repeat notices of docker containers updating every night despite no update actually being available, but I've gathered that's a known bug in this version of Unraid, so maybe these two are related. Wondering how I can fix this, or both? Not sure if it's safe to apply the template URL from Fix Common Problems or if I should be waiting this out. So here I am.
  3. Just dropping by to say this affected me when adding a new RAID1 member to cache; thankfully johnnie.black is on the forums directing users here, so I was able to make sure the metadata is now correctly updated to RAID1.
  4. Woo, wonderful! Back to showing a 500GB drive now. I also made a full backup before I did this just in case, but I plan to follow the link from your first post to add a new drive and have it be 2 slots (sketched below). This will RAID1 the 2 drives, and I shouldn't lose any data, from what I'm understanding?
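For anyone following along, the two-slot RAID1 setup being discussed boils down to a balance like the sketch below, assuming the pool is mounted at Unraid's default /mnt/cache (Unraid normally kicks this off itself when a second cache slot is added):

```
# Convert data and metadata to the raid1 profile once the second
# device is in the pool, then confirm both report RAID1.
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
btrfs filesystem df /mnt/cache
```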
  5. Ok, I did the conversion and it finished with "Done, had to relocate 5 out of 176 chunks". But when I run btrfs device delete missing, it fails with "btrfs device delete: not enough arguments: 1 but at least 2 expected".
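That error just means the command is missing its final mount-point argument. A minimal sketch of the full invocation, assuming the pool is mounted at Unraid's default /mnt/cache:

```
# "missing" stands in for the absent device; the filesystem's
# mount point must be supplied as the last argument.
btrfs device delete missing /mnt/cache
```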
  6. +1 here as well; this also just happened to me. Every time the array starts, it tries to re-balance.
  7. I'll be replacing the missing one - so I had a RAID1 of 2x 500GB SSDs, and one went back to Samsung for RMA, which is why it's missing now. They're sending me a new drive today that will replace the missing RAID1 member; I'm assuming it will be a whole new drive/new serial number.
  8. Before I do this, I want to make sure I need to, because that deleted device will be replaced with a new SSD later this evening (when it arrives from Samsung's RMA dept). Do I still need to convert to single, or can I simply replace the drive later today and then run btrfs device delete missing afterward? Thanks in advance. You always come to the rescue when I have issues, haha. If you have a PayPal/Venmo I'd happily send you some coffee/pizza money sometime!
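For context, the convert-to-single step in question is roughly the sketch below, assuming the pool is mounted at /mnt/cache; it rewrites the remaining copy so the pool no longer expects a second RAID1 member:

```
# Downgrade data and metadata from raid1 to single on the one
# remaining device; -f forces the reduction in redundancy.
btrfs balance start -f -dconvert=single -mconvert=single /mnt/cache
```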
  9. As requested, attached. server-diagnostics-20190828-1322.zip
  10. Perfect, I knew there had to be a way. Very helpful, thanks Squid!
  11. I somehow turned off "notify me of replies" on this, so I missed your (helpful) reply. I had a power failure today and the server shut down cleanly, but upon booting back up it noticed cache1 was gone and ran a balance on the cache array. Now it thinks it's a 1TB cache for some reason (I've only ever had 2x 500GB drives in RAID1). My replacement drive should finally be here from Samsung tomorrow, but I'm worried now about simply stopping the array and placing it into the cache pool if the pool thinks it's 1TB?
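The 1TB reading would be consistent with that balance having converted the pool to the single profile: with single, the two 500GB members' capacities are summed instead of mirrored, and the missing drive is still counted as a pool member. One way to check, assuming the pool is mounted at /mnt/cache:

```
# Shows allocation per profile; "single" entries (rather than
# RAID1) would explain the pool reporting 1TB.
btrfs filesystem usage /mnt/cache
```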
  12. I've been having a lot of issues with an SSD (I'm finally replacing it), and it's caused me to have to rebuild my docker.img a lot lately. I keep seeing all my old docker containers and names, and I would like to clean these up so that if it happens again a year or two down the road I won't be confused about which version I should be picking, and only have the active/live ones that I want to reinstall there. Screenshot attached to show what I mean, from the "Add container" portion of adding a docker container when rebuilding docker.img. Thanks in advance.
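If it helps anyone else later: those dropdown entries appear to come from the saved user templates on the flash drive, so pruning the stale XML files should clean up the list. A sketch, assuming the usual Unraid location /boot/config/plugins/dockerMan/templates-user; "my-OldContainer.xml" is a hypothetical file name, and the folder is copied aside before anything is deleted:

```
# List the saved per-container templates that feed the
# "Add container" dropdown, back them up, then remove a stale one.
ls /boot/config/plugins/dockerMan/templates-user/
cp -r /boot/config/plugins/dockerMan/templates-user /boot/templates-user.bak
rm /boot/config/plugins/dockerMan/templates-user/my-OldContainer.xml
```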
  13. I had 2x 500GB SSDs running cache; however, one has been failing and blinking offline all the time no matter the backplane slot, so it's going back for RMA. Is there a clean way to remove that one SSD from my cache for now and use the live data left on the other drive without rebuilding everything? Then, after the new drive arrives, am I able to cleanly add it back in, or would I need to destroy and rebuild the cache each time? Trying to find the best way of doing it. Right now the server knows the drive is missing, so /var/log is 100% full (see the sketch below), and I don't know how long it will be before a replacement drive arrives.
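On the /var/log side (it's a small RAM-backed tmpfs on Unraid, so a stream of btrfs errors fills it fast), a quick sketch for finding and truncating the runaway log while waiting on the RMA:

```
# See how full the log tmpfs is and which files are eating it.
df -h /var/log
du -sh /var/log/* | sort -h
# Truncate in place rather than deleting, so syslog keeps writing
# to the same open file handle.
truncate -s 0 /var/log/syslog
```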
  14. Entering! Thanks guys. Almost a 2-year user now. I love having all the docker containers in one place, run by an amazing community. Plus, of course, all that file storage!
  15. Yep, I am; I just thought it was odd that it was the only one showing that error - the rest of the scripts I have exit cleanly. I'll go ahead and schedule that for hourly runs. I created a scrub script and a script to reset error counts in case this happens again (sketched below). The SSD is on a backplane so I can't really change any cables, and it hadn't happened for over a year until now. I'll be prepared this time though, with the new scripts added in case it happens again. Thank you again for your time and info!
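For reference, the two scripts mentioned are only a few lines each; a sketch under the assumption the pool is mounted at /mnt/cache (this is an illustration, not the exact scripts):

```
#!/bin/bash
# Scrub the cache pool in the foreground and report the result.
btrfs scrub start -B /mnt/cache
# Zero the per-device error counters (-z) so any new errors
# stand out immediately.
btrfs device stats -z /mnt/cache
```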