FreeMan


Posts posted by FreeMan

  1. My docker.img file is 30GB and is at 100% utilization as we speak.

     

    I know that the usual cause of this is poorly configured dockers writing to the wrong path; however, this has been a long, slow fill, and I believe I've simply filled the space with logs or... something... over time. I got the warning that I was at 91% about two weeks ago, but I've been away from home and wasn't able to look into it, and it wasn't my #1 priority as soon as I returned.

     

    I'm attaching diagnostics that will, I hope, point to what's filling the img file, so someone can point me in the right direction on where to cull logs or whatever else I need to do to recover some space. If it turns out that I have legitimately filled the img file, I'll recreate it and make it bigger.

     

    There are two potential culprits that I can think of:

     

    * I installed digiKam a couple of months ago. I had it load my library of > 250,000 images and I've begun cataloging and tagging them. I think the database files that support this live on the appdata mount point and could be getting rather large. (Yes, I realize that I should probably migrate the DB to the MariaDB docker I've got, but that's still on my to-do list, and even if I do, it will simply move the space utilized, possibly reducing it somewhat, but won't eliminate it.)

     

    * I've been having issues with my binhex-delugevpn docker. It's been acting strangely and I noticed that when I restarted the container yesterday, it took about 15 minutes for it to actually properly start and get the GUI up and running. I had the log window open for a good portion of that time, and noted that it was writing quite a bit to that log. It's possible that it's filling and rotating logs and that these are using a fair bit of space.

     

    I'm looking into these two to see if they are causing issues, but I'd appreciate another set of eyes and any other tips/pointers on where I may be wasting/consuming unusual amounts of space, and recommended solutions.
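
    In the meantime, here's a rough sketch of the kind of space hunt I'm doing. On the server I'd point it at /var/lib/docker (the mounted docker.img) or /mnt/user/appdata; the snippet below builds a throwaway directory with made-up filenames instead, so it can be run safely anywhere.

```shell
#!/bin/sh
# Sketch: find what's eating space under a tree with du + sort.
# On the server this would target /var/lib/docker or /mnt/user/appdata;
# a throwaway directory with fake files stands in for them here.
tree=$(mktemp -d)
mkdir -p "$tree/container-logs" "$tree/db"
head -c 1048576 /dev/zero > "$tree/container-logs/app-json.log"  # fake 1 MiB log
head -c 4096    /dev/zero > "$tree/db/digikam4.db"               # fake small DB
# Per-directory totals (KiB), biggest last:
du -k -d1 "$tree" | sort -n
# Largest individual file:
biggest=$(find "$tree" -type f -exec du -k {} + | sort -n | tail -n 1)
echo "largest file: $biggest"
rm -rf "$tree"
```

    Running those same two du invocations against the real paths should show whether it's the deluge logs or the digiKam database that's ballooning.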

     

    nas-diagnostics-20220701-1451.zip

     

  2. 5 hours ago, jcofer555 said:

    see why waste of space is a big thing here, oh well time to figure something out, gonna have fun moving files for days over something so silly

    Not sure where you're wasting space... I've got 8TB data disks that have been filled to within 100MB of max capacity.

     

    Just remember, this isn't the OS for everyone.

  3. 6 minutes ago, Squid said:

    With the binhex preclear container, there's probably some extra command line options you needed to give it for it to be recognized properly

    I used -f for a "fast" preclear, but I don't recall ever having used any other command line options, and I don't recall ever having had this issue in the past. As a matter of fact, I just precleared & installed a new drive a few weeks ago and didn't run into this issue.

     

    I know preclear isn't necessary anymore, as the base OS will do it without taking down the whole array for hours, but I like having it as a handy disk-check utility for new drives. I know there are various theories on this; it's my preference.

  4. 1 minute ago, jbrodriguez said:

    right, that seems to be the issue, maybe you have an ad blocker, or some firewall rules for your ip/cellphone

     

    I can reach it via IP in a browser from my phone (though I get warnings about it being HTTP instead of HTTPS).

     

    If there are any rules blocking it in the phone, I'm certainly not aware of it, plus, the "Discover" method of adding it can find the server (by IP, I presume?).

    [screenshot attachment: 2022-04-02 11:50]

     

    I honestly don't have a clue what might be blocking access to the server from the phone when the phone can clearly see that the server's there.

  5. 2 minutes ago, jbrodriguez said:

    You're browsing from your cellphone ?

    Can you try using the ip address instead of the hostname ? Just to check if that works.

    hmmm... bizarre. The address bar pic is from my desktop machine.

     

    My phone cannot resolve nas.local in a browser window, saying "site cannot be reached".

     

    When I try adding the server manually by IP address, I get:

    [screenshot attachment: 2022-04-02 11:37]

    It's attempting to convert the IP to a host address, it seems, then is failing to resolve the host address back to an IP.

  6. 1 minute ago, jbrodriguez said:

    Hi, can you try deleting the server and adding it back ?

    I presume deleting the server means pressing the circled red X in the corner of the server list. There is no response when I tap it.

  7. For the last couple of months, the ControlR app on my phone (Android) has shown my server, but I can't tap on it to get any additional info, and it shows a red X in a circle next to it.

     

    I've not done any troubleshooting on this in particular, but the server's IP address hasn't changed in ages. The plugin is still running on the server. I have CA auto-update running, so the plugin should be the latest available (v2021.11.25), and I presume the app is the latest (5.1.1), as I've got auto-update enabled on the phone too.

     

    I just tried clicking on the "Spin Up" option on the Servers list. It popped up a little box with a spinner for a while, but nothing happened on the server itself.

     

    Any recommendations on what to check?

  8. 13 minutes ago, itimpi said:

    the unBalance plugin and aborting it in mid process

    I may have done that. Leaving a file on cache instead of on a diskX seems odd, but I suppose this is a reasonable explanation.

     

    I've learned to do a copy/delete instead of move when I'm manually working with files in Krusader. I tend to avoid using Windoze for file management (somewhat) because it's a lot slower. I think I avoid most of those other situations, but certainly couldn't guarantee it. I guess that's why I'm finding this a bit perplexing.
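
    That copy-then-delete habit, sketched with plain coreutils (the real source and destination would be something like /mnt/cache/... and /mnt/disk5/...; throwaway directories stand in for them here):

```shell
#!/bin/sh
# Sketch of copy/verify/delete instead of a bare move. The real paths
# would be /mnt/cache/... (source) and /mnt/diskX/... (destination);
# throwaway directories stand in for them here.
src=$(mktemp -d); dst=$(mktemp -d)
echo "episode metadata" > "$src/season.nfo"
cp -a "$src/." "$dst/"              # copy, preserving attributes
if diff -r "$src" "$dst" >/dev/null; then
    rm -rf "$src"                   # remove the source only after it verifies
    echo "moved cleanly"
else
    echo "copy mismatch; source kept" >&2
fi
```

    The point of the diff step is that the source only goes away once the copy is known to be complete, which is exactly what a plain move doesn't guarantee if it's interrupted.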

     

    I'll just do a manual clean up (I've got several other files in this situation, too, I think).

     

    Thanks for the insight.

  9. Looking at my TV share info, I see this:

    [screenshot attachment: TV share info]

    Looking at it from a terminal session, I see:

    root@NAS:/mnt/cache/TV/Frankie Drake Mysteries/Season 04# ls -la
    total 4
    drwxrwxrwx 1 nobody users  20 Mar  2  2021 ./
    drwxrwxrwx 1 nobody users  18 Jan 26 19:26 ../
    -rw-rw-rw- 1 nobody users 331 Jan 27  2021 season.nfo
    root@NAS:/mnt/cache/TV/Frankie Drake Mysteries/Season 04# ls -la /mnt/disk5/TV/Frankie\ Drake\ Mysteries/Season\ 04/
    total 4
    drwxrwxrwx 2 nobody users  32 Jan 22 19:04 ./
    drwxrwxrwx 3 nobody users 124 Jan 26 19:31 ../
    -rw-rw-rw- 1 nobody users 331 Jan 27 07:16 season.nfo
    

     

    How is it that the second listing (on /mnt/disk5) didn't/doesn't get overwritten by the file residing on the cache drive when the mover runs?

     

    Disk5 is a reasonably full 8TB drive, but it's still got almost 700GB of free space - more than enough to store a 331-byte file. Even then, it shouldn't matter, because the file on Cache should simply overwrite the file on the array.

     

    I can, and probably will, simply delete the file from the cache dir, but why does it seem that the mover isn't doing its job here?
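
    For anyone else chasing the same thing, a sketch of how to list files that exist both on cache and on an array disk (the real comparison would be /mnt/cache against /mnt/disk5 and friends; throwaway directories and made-up filenames stand in here):

```shell
#!/bin/sh
# Sketch: find files present on both the "cache" and an "array disk" -
# the stranded-season.nfo situation above. Throwaway directories stand
# in for /mnt/cache and /mnt/disk5.
cache=$(mktemp -d); disk=$(mktemp -d)
mkdir -p "$cache/TV/Show/Season 04" "$disk/TV/Show/Season 04"
echo nfo > "$cache/TV/Show/Season 04/season.nfo"   # stranded on cache
echo nfo > "$disk/TV/Show/Season 04/season.nfo"    # same file on the array
echo ep  > "$disk/TV/Show/Season 04/e01.mkv"       # array-only, not a dupe
dupes=$(cd "$cache" && find . -type f | while read -r f; do
    [ -e "$disk/$f" ] && echo "$f"
done)
echo "duplicates: $dupes"
```

    Anything it prints exists in both places, and those are the files to clean up by hand.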

  10. I just picked up an IcyDock Fat Cage and installed it in my Zalman MS800 case.

     

    Slid right in with no problems (I long ago bent down the drive mount tabs to fit my old 5x3 cages in). I was able to use one of the case's quick locks to hold the dock in place instead of the provided screws. This is a big full-size tower case with 10 front-accessible 5.25" bays, so there's plenty of room for the dock. It even let me get at an additional 15-pin power connector and run it to the very top bay to plug in the last SSD I'd installed but hadn't yet been able to power up. (Lack of 4-pin Molex connectors to adapt to 15-pin SATA, and I hadn't yet purchased a 15-pin extender.) I had a heck of a time getting the one SATA cable plugged in on the MoBo side of the case, so when I have to remove the dock, I'll leave that cable plugged into the dock and pull it from the MoBo instead.

     

    I say "when" I have to remove the dock because I've just ordered a Noctua NF-B9 fan - right now I'm sitting next to a vacuum cleaner. The stock fan in this thing is loud!

     

    Also, I may have to return the whole thing since one of the drive trays was bent. The bottom of the tray curved into the drive. I had to flex the tray a bit to get the screw holes to line up with the drive, and it was still difficult to get the tray to slide into the dock. Because of this, the server wouldn't recognize the drive, no matter which slot in the dock it was plugged into. I put the drive into another tray and the server was most happy. I've contacted the seller to see if I can get a swap on just the tray or if I'm going to have to send the whole thing back.

     

    I haven't done a parity check yet, but in normal use (less than 24 hours since install), drive temps for the 3 drives that are in the dock are on par with the other drives, so I'm going to guess that they'll stay that way. I've got loads of little bits of packing foam, so I may try cutting some filler blocks to put into the unused trays to see if that helps improve air flow.

  11. I've got two cache pools:

    1. Name: Apps
      1. Consists of a single SSD
    2. Name: Cache
      1. Consists of a pool of 3 SSDs

    The astute among you will see the issue here.

     

    I've already set my Apps pool cache setting to "Yes" (from "Prefer") so I can migrate data onto the array. (Involves stopping all dockers & the docker service. VM service isn't running.)

     

    Once I've got the data off the Apps and Cache pool, what's the best way to swap the names so I can have all my dockers live on the actual pool for some drive failure resistance?

     

    I'm thinking:

    1. Rename "Apps" to "temp"
    2. Rename "Cache" to "Apps"
    3. Rename "temp" to "Cache"
    4. Set Apps cache setting back to Prefer
    5. Run mover
    6. Restart docker service & dockers.

    Does this make sense? Is there an easier way? Have I missed something?

  12. 3 minutes ago, trurl said:

    Probably.

     

    I often hotswap Unassigned Devices and they sometimes take several seconds to show up.

    OK. I'll wait patiently.

     

    I'm pretty sure it was more than "seconds" - more like "at least a minute". I know I'm an impatient fella, but it really was slower than "seconds". As a matter of fact, I'd waited a bit, then typed up this question, and it still hadn't shown up.

     

    Could just be that my machine isn't the fastest thing out there. I'll be sure to be patient in the future.

     

    Thanks as always!

  13. 4 minutes ago, trurl said:

    Settings - Disk Settings.

     

    Forest, meet trees. Sheesh. :( I did actually look at that, but it just didn't register.

     

    `If set to 'Yes' then if the device configuration is correct upon server start-up, the array will be automatically Started and shares exported.`

     

    I am set to "Yes". However, since the config wasn't correct, it didn't auto start.

    -------------------------------

     

    OK, that small drama is resolved. However, I'm still curious what caused the server to recognize that the drive was there. Was it:

    1) Passage of time? i.e., the server polls every couple of minutes looking for a drive to "magically" appear (expecting that there's a hot-swap cage and one might).

    2) I looked at the right setting somewhere that caused it to rescan drives and notice that the disk was now there?

     

    If it's the first, I now know to be patient and wait. If it's the second, I'd like to know what I looked at so I can trigger it intentionally the next time.

     

  14. When the server booted, the array didn't start and it shows:

    [screenshot attachments: array status after boot]

     

    So, if you would, please have a chat with my server and let it know what it's supposed to be doing. TBH, it does make sense that it should have started the array with a missing disk. However, it didn't, and here we are...

     

    As it turns out, the caddy I put that particular disk in was slightly bent fresh out of the box. Again - an issue to take up with the vendor.

     

    Is there a way to get Unraid to recognize that I've plugged the disk back in, other than rebooting the server?

  15. I just installed a new Icy Dock Fat Cage hot-swap 3x5.

     

    I put the same disks in it as were in a non-hot-swap cage. When I power it up, it seems that one of the slots is not receiving power (an issue to take up with the vendor).  Because of this, the array didn't start because a drive was missing.

     

    I pulled the drive from the dead slot and put it into an empty slot (I'm only using 3 of the 5 slots right now), and the power light came on indicating that the cage recognized there is a drive there.

     

    How do I get Unraid to recognize the drive is there without rebooting the server? That is, after all, the whole point of hot-swap cages (well, one of the points, at least).

  16. On a slightly more serious note...

     

    I just noticed that the log for this disk (from Unassigned Devices) has a multitude of these entries:

     

    Mar 3 10:51:16 NAS emhttpd: spinning down /dev/sdn
    Mar 3 11:21:17 NAS emhttpd: spinning down /dev/sdn
    Mar 3 11:51:18 NAS emhttpd: spinning down /dev/sdn
    Mar 3 12:21:19 NAS emhttpd: spinning down /dev/sdn
    Mar 3 12:51:20 NAS emhttpd: spinning down /dev/sdn
    Mar 3 13:21:21 NAS emhttpd: spinning down /dev/sdn
    Mar 3 13:51:22 NAS emhttpd: spinning down /dev/sdn
    Mar 3 14:21:23 NAS emhttpd: spinning down /dev/sdn
    Mar 3 14:51:24 NAS emhttpd: spinning down /dev/sdn

     

    Why would the OS be trying to spin down a disk every 30 minutes during a preclear run? (Yes, sdn is the device that I'm preclearing.)
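
    The gap between consecutive entries is almost exactly 30 minutes, which looks like a spin-down delay timer firing on schedule; a quick sanity check with GNU date on two of the timestamps above:

```shell
#!/bin/sh
# Convert two of the syslog timestamps above to epoch seconds (GNU date)
# and compute the interval between them.
t1=$(date -d "Mar 3 10:51:16" +%s)
t2=$(date -d "Mar 3 11:21:17" +%s)
gap=$((t2 - t1))
echo "gap: ${gap}s"   # 1801s: a 30-minute interval plus 1s of drift
```

    So the question becomes why the preclear writes aren't registering as disk activity and resetting that timer.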