Nanobug Posted March 18, 2021
It's probably one of my containers, since I'm limiting their RAM. When I look at the dockers, none of them are stopped, and ctop in the terminal doesn't always show them near their maximum. But how do I find out which one(s) it is?
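One way to get a per-container memory snapshot from the terminal, instead of watching ctop refresh, is something like this (a sketch: it assumes the Docker CLI is available on the host, as it is on Unraid, with a plain `ps` fallback otherwise):

```shell
#!/bin/sh
# Sketch: one-shot memory usage per container, highest consumers visible at a glance.
if command -v docker >/dev/null 2>&1; then
    # --no-stream prints a single snapshot rather than a live refreshing view
    docker stats --no-stream --format 'table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}'
else
    # Fallback: the ten biggest processes on the host by resident memory
    ps -eo rss,comm --sort=-rss | head -n 10
fi
```

Running this a few times while the problem is happening should show which container's usage is climbing toward its limit.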
Nanobug Posted March 29, 2021 (edited)
I still get this issue, and sometimes it uninstalls a docker. I tried looking at the diagnostics, but I can't seem to find out which one(s) are causing it. I've added the diagnostics here. nanostorage-diagnostics-20210329-0830.zip
trurl Posted March 29, 2021
Why is your system share on the array? Also domains and appdata. Usually you want these all on cache, and configured to stay on cache, so your dockers / VMs won't keep the array spun up with their always-open files, and so dockers / VMs won't have their performance impacted by slower parity writes.
Why have you given 40G to docker.img? Have you had problems filling it? 20G is usually more than enough. The usual reason for filling docker.img is an application writing to a path that isn't mapped. The usual reason for docker filling RAM is mapping a host path that isn't actual storage.
Nanobug Posted March 29, 2021
4 hours ago, trurl said: Why is your system share on the array? …
I haven't done that intentionally, if it is. I'll change that though. The docker image got full, so I just said fk it and gave it twice as much. To my knowledge, I'm not writing to the docker image.
trurl Posted March 29, 2021
Also notice many of your shares are using the Most Free allocation method. That may be less efficient than High-water, since it can cause constant switching to other disks and keep disks spun up just because one disk temporarily has more free space than another.
Making docker.img larger won't fix problems with filling it; it will only make it take longer to fill. How many dockers do you have? I am running 18 dockers and they are taking only about half of a 20G docker.img.
To get the appdata, domains, and system shares off the array you will have to set them to cache-prefer, go to Settings and disable both Docker and VM Manager, then run mover and wait for it to finish. Then post new diagnostics.
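Before enlarging docker.img again, it's worth checking from the command line how full it actually is. A hedged sketch (/var/lib/docker is where Unraid loop-mounts docker.img by default; adjust if your setup differs):

```shell
#!/bin/sh
# Sketch: check how full docker.img really is before resizing it.
# /var/lib/docker is assumed to be the loop-mount point for docker.img,
# which is the Unraid default.
df -h /var/lib/docker 2>/dev/null || echo "/var/lib/docker is not mounted on this machine"
# Per-image / per-container space breakdown, if the Docker CLI is present:
command -v docker >/dev/null 2>&1 && docker system df || true
```

If usage keeps growing even though your images haven't changed, that points at a container writing inside the image to an unmapped path, as described above.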
Nanobug Posted March 29, 2021
1 hour ago, trurl said: Also notice many of your shares are using Most Free allocation. …
32, but only 29 are running atm. I used Most Free allocation so I could spread it out more; I guess my idea was that when one of the disks dies, I'll be without that data for a shorter time. I'll run the mover in the morning. I've set it to cache-only atm. Should I still change it to prefer?
trurl Posted March 29, 2021
There is a Help (?) toggle for the whole webUI on the menu bar. You can also toggle help for a specific setting by clicking on its label. Take a look at the help for the cache setting for that User Share. Long story short, Mover ignores cache-no and cache-only; to get Mover to move from array to cache, it must be cache-prefer. The reason you have to disable dockers / VMs is that Mover (or anything else) can't move open files. Mover also won't move duplicates (files that already exist on the destination). Diagnostics after moving will tell us if everything got moved. You can also see how much of each disk is used by each User Share by clicking Compute... for the share, or by using the Compute All button.
Nanobug Posted March 29, 2021
1 hour ago, trurl said: There is a Help (?) toggle for the whole webUI on the menu bar. …
Yeah, I just didn't get my head around it properly. I've made the changes now and added new diagnostics. I'm also using unbalance to start moving files one drive at a time. nanostorage-diagnostics-20210330-0141.zip
trurl Posted March 30, 2021
27 minutes ago, Nanobug said: using unbalance
I haven't used that in a long time. Does it work with cache? The builtin Mover is the usual way to get things moved to/from cache according to the settings for each user share. You can run it directly from Main - Array Operation.
Nanobug Posted March 30, 2021
6 hours ago, trurl said: I haven't used that in a long time. Does it work with cache? …
I did run the regular mover. I'm just using unbalance to move from one disk to the rest, to get the "high water" usage even on all of the disks.
trurl Posted March 30, 2021
8 hours ago, Nanobug said: Just using unbalance to move from one disk to the rest …
Shouldn't be necessary. Just changing the setting will take care of future writes.
Nanobug Posted March 30, 2021
3 hours ago, trurl said: Shouldn't be necessary. Just changing the setting will take care of future writes.
Then I won't have to stay up late! Thank you. It's really nice to have you guys around when you mess up and don't know that you messed up. Thank you so much!
Nanobug Posted April 4, 2021
At 04:40 AM my time, I get the notification about this issue. I can't see what it killed off, or whether it killed a process at all; or at least I don't know how to tell. How do I find out what it is? I can't find anything in the logs, but I don't know where to look either.
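When the kernel's OOM killer terminates a process, it records it in the kernel log, so that's the place to check. A sketch (the /var/log/syslog path is the Unraid default; `dmesg` works on any Linux):

```shell
#!/bin/sh
# Sketch: ask the kernel log whether the OOM killer fired and what it chose.
# The syslog path is an assumption based on the Unraid default layout.
grep -i -E 'out of memory|oom-killer|killed process' /var/log/syslog 2>/dev/null || true
dmesg 2>/dev/null | grep -i -E 'oom|killed process' || echo "no OOM events found in dmesg"
```

The "Killed process" lines name the victim and its memory usage at the time, which tells you which container or VM was the trigger.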
trurl Posted April 5, 2021
2 hours ago, Nanobug said: I can't find anything in the logs, but I don't know where to look either.
Neither can I, since I don't have them. On 3/29/2021 at 4:22 PM, trurl said: post new diagnostics.
Nanobug Posted April 5, 2021
14 hours ago, trurl said: Neither can I, since I don't have them.
I'll see if the problem is there again tomorrow, and I'll let you know what happens.
Nanobug Posted April 6, 2021
I still had the problem today when I checked from work. I've attached the diagnostics. I can also see the log is getting filled up. Where does it say what's filling it, so I can adjust it? nanostorage-diagnostics-20210406-0826.zip
Squid Posted April 6, 2021
The logs are being filled because you've got atop installed via NerdPack. It's notorious for that.
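You can confirm what's taking the log space from the terminal with something like this (a sketch; on Unraid /var/log is a small RAM-backed filesystem, which is why one chatty tool can fill it quickly):

```shell
#!/bin/sh
# Sketch: see how full the log filesystem is and what's eating the space.
df -h /var/log || true                               # overall usage of the log filesystem
du -ah /var/log 2>/dev/null | sort -rh | head -n 10  # largest files and directories first
```

Whatever tops the `du` list is the likely culprit to uninstall or throttle.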
Nanobug Posted April 6, 2021
5 minutes ago, Squid said: The logs are being filled because you've got atop installed via NerdPack. It's notorious for that.
Dang... I didn't know that. I thought I had uninstalled most of them again after looking through what could be useful; I must have forgotten that one. Uninstalling it right away. Thanks! I'll let you guys know tomorrow if it continues.
Nanobug Posted April 7, 2021
So far, that did the trick. Thank you so much!
Nanobug Posted April 8, 2021
It came back today..... nanostorage-diagnostics-20210408-0825.zip
Nanobug Posted April 8, 2021
49 minutes ago, trurl said: Do you have any VMs?
Yes, one: a Windows 10 VM for remote control.
trurl Posted April 8, 2021
For remote control of what?
Nanobug Posted April 8, 2021
50 minutes ago, trurl said: For remote control of what?
From work, and when I'm not at home. I'm using my servers, but there are things I can't do, or prefer to do, in a VM.
trurl Posted April 8, 2021
WireGuard VPN is built in to Unraid. How much RAM do you have dedicated to the VM?