FreeMan

Members
  • Posts

    1458
  • Joined

  • Last visited

Converted

  • Gender
    Undisclosed
  • Location
    Indiana

Recent Profile Visitors

2841 profile views

FreeMan's Achievements

Community Regular (8/14)

Reputation

77

  1. I've only got 1TB, and as noted, the DB itself is only 22GB. I was misinterpreting what I was looking at the first go-round. I have from 1.6 already. Again, I appreciate all your efforts on this and don't want to appear to minimize any of the work you've put in!
  2. Problem solved! There was some sort of issue starting the docker, and it was spamming the log with its startup info. Heading off to the LS.IO support thread, I discovered that as of June 2022, the docker has been deprecated! Since I hardly ever used it, I've stopped the docker and deleted the config. I'm now at 23GB out of 30GB used and should be good to go for quite a while.
  3. derp! That makes complete sense. I see that now. I glossed over that one. I'll see what I can do to trim those logs, and I may go ahead and up the img size, too. TYVM!
  4. Note that scrutiny is listed twice so I knew I didn't miss any lines. I was pretty sure that my docker file was 30GB, and my memory didn't fail me! If UNRAID says the containers only take 20.2GB, why is it reporting that I'm at 100% img file use when the image file is 30GB?
  5. It's got NOTHING to do with what I thought it did... I had a totally unrelated docker going crazy, with log files growing to > 7GB of space. That docker's been deprecated (any wonder why?), so I shut it down and deleted it. I've just hit 100% utilization of my 30GB docker.img file, and I think I've nailed down the major culprit: the influxdb docker that was installed to host this dashboard. It's currently at 22.9GB. Is anyone else anywhere near this usage? Is there a way to trim the database so that it's not quite so big? I'm trying to decide whether I want to keep this. As cool as it was when it was first launched, later updates have wandered into a lot of Plex focus, and since I'm not a Plex user, they don't interest me; plus, I've grown a touch bored with this much info overload. Or maybe it's time to just scrap it. I totally appreciate the time and effort that went into building it, and @falconexe's efforts and responsiveness in fixing errors and helping a myriad of users through the same teething pains. I'm just not sure it's for me anymore...
  6. I think I found the problem. appdata\influxdb is 22.9GB. I believe the majority (if not all) of that is from "Ultimate UNRAID Dashboard" data. Short term, I think the best solution is to simply enlarge the .img file, while long term, I need to decide how much (if any) of that to actually keep. Still open to any other insights anyone may have.
  7. My docker.img file is 30GB and is at 100% utilization as we speak. I know that the usual cause of this is poorly configured dockers writing things to the wrong path; however, this has been a long, slow fill, and I believe I've simply filled the space with logs or... something... over time, as this hasn't been a sudden situation. I got the warning that I was at 91% about two weeks ago, but I've been away from the house and wasn't able to look into it, and it wasn't my #1 priority as soon as I returned home. I'm attaching diagnostics which will, I hope, point to what's filling the img file, in the hopes that someone can point me in the right direction of where to cull logs or whatever I need to do to recover some space. If it appears that I have legitimately filled the img file, I'll recreate it and make it bigger. There are two potential culprits that I can think of:
       • I installed digiKam a couple of months ago. I had it load my library of > 250,000 images and I've begun cataloging and tagging them. I think the database files that support this are on the appdata mount point and that they could be getting rather large. (Yes, I realize that I should probably migrate the DB to the mariaDB docker I've got, but that's still on my to-do list, and even if I do, it will simply move the space utilized, possibly reducing it somewhat, but not eliminate it.)
       • I've been having issues with my binhex-delugevpn docker. It's been acting strangely, and I noticed that when I restarted the container yesterday, it took about 15 minutes for it to actually properly start and get the GUI up and running. I had the log window open for a good portion of that time and noted that it was writing quite a bit to that log. It's possible that it's filling and rotating logs and that these are using a fair bit of space.
     I'm looking into these two to see if they are causing issues, but I'd appreciate another set of eyes and any other tips/pointers on where I may be wasting/consuming unusual amounts of space, and recommended solutions. nas-diagnostics-20220701-1451.zip
  8. Fair enough. I've noticed in the past, this is the first time I've really paid attention enough to document it. It's now in writing, so my job here is done.
  9. This is a MINOR, LOW PRIORITY UI issue. I noticed that in 6.10.2, the [close all notifications] button doesn't actually close any of them; instead, it just rearranges them. The issue persists in 6.10.3. I had a bunch of disk usage notifications and took a screenshot (left); I then clicked [close all notifications] and it rearranged them (middle), clicked it again, and it rearranged them again (right). It also misbehaves when attempting to close one at a time. From the last arrangement above (right), I clicked the "X" on the 6/18/22 12:05 notification (the bottom one). It briefly disappeared, then reappeared at the bottom of the list. They are now all gone. I believe (I had to leave between grabbing screenshots and making the report) that I had to close them from the top of the list down, and couldn't close from the bottom of the list; i.e., I had to close the 12:07, 12:28, 12:39, etc., instead of simply putting the mouse on the 12:05, closing, waiting for the list to redraw, then closing 12:03, 12:01, etc., without having to move the mouse. The workaround is a very minor inconvenience, but it's mildly annoying. nas-diagnostics-20220619-1249.zip
  10. Not sure where you're wasting space... I've got 8TB data disks that have been filled to within 100MB of max capacity. Just remember, this isn't the OS for everyone.
  11. I used -f for a "fast" preclear, but I don't recall ever having used any other command line options, and I don't recall ever having had this issue in the past. As a matter of fact, I just precleared and installed a new drive a few weeks ago and didn't run into this issue. I know preclear isn't necessary anymore, as the base OS will do it without having to take down the whole array for hours while it happens, but I like having it as a handy disk check utility for new drives. I know there are various theories on this; it's my preference.
  12. I just ran a preclear on a new drive (using binhex's preclear docker). After 40 hours it finished with no reported issues. I stopped my array, added the new drive, and started the array. Now it says that a clear is in progress. Why would it start clearing the drive again? Did the preclear somehow fail to properly write the correct signature to the disk? nas-diagnostics-20220406-1510.zip
  13. When doing a manual add, I did specify HTTP, not HTTPS.
  14. I can reach it via IP in a browser from my phone (though I get warnings about it being HTTP instead of HTTPS). If there are any rules blocking it in the phone, I'm certainly not aware of it, plus, the "Discover" method of adding it can find the server (by IP, I presume?). I honestly don't have a clue what might be blocking access to the server from the phone when the phone can clearly see that the server's there.
  15. hmmm... bizarre. The address bar pic is from my desktop machine. My phone cannot resolve nas.local in a browser window, saying "site cannot be reached". When I try adding the server manually by IP address, I get: It's attempting to convert the IP to a host address, it seems, then is failing to resolve the host address back to an IP.
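Regarding the container that was spamming its log until it grew past 7GB: Docker's default json-file logging can be capped per container, which keeps a single runaway container from flooding docker.img. A sketch, assuming the flags go in the container template's "Extra Parameters" field on Unraid (the sizes are just examples, not a recommendation):

```shell
# Limit this container's log to a single file of at most 50MB
# (illustrative values -- tune to taste):
--log-opt max-size=50m --log-opt max-file=1
```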
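On trimming the 22.9GB InfluxDB database behind the dashboard: if it's InfluxDB 1.x, shortening the retention policy lets old points expire instead of accumulating forever. A hedged sketch; the container name "influxdb" and database name "telegraf" are assumptions, so check your own setup first:

```shell
# See what retention policies exist (database name is an assumption):
docker exec influxdb influx -execute 'SHOW RETENTION POLICIES ON "telegraf"'
# Keep roughly 30 days of data instead of infinite retention (illustrative):
docker exec influxdb influx -execute \
  'ALTER RETENTION POLICY "autogen" ON "telegraf" DURATION 30d SHARD DURATION 7d DEFAULT'
```

In 1.x, shards entirely older than the new duration are dropped at the next retention-enforcement pass, so the space should come back without manual deletes.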
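For anyone chasing the same "what's eating my space" question from the docker.img posts above, a quick first check is to list the largest directories. This is a minimal sketch; the `APPDATA` path is a placeholder (on Unraid, appdata typically lives at /mnt/user/appdata):

```shell
# List the ten largest top-level directories under appdata.
# APPDATA is an assumed default -- override it for your system.
APPDATA="${APPDATA:-/mnt/user/appdata}"
du -sh "$APPDATA"/* 2>/dev/null | sort -rh | head -n 10
```

Note that this only covers appdata, which lives outside docker.img; container writable layers and json-file logs live inside docker.img itself.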