je82

Everything posted by je82

  1. Think I found the problem; it will take some time to test. I had the defaults vm.dirty_background_ratio = 10% and vm.dirty_ratio = 20%, which with 128GB of RAM looks like a bad thing to do. I hope this will resolve the oom-killer issue; time will tell. If you have other ideas, I'd love to hear them.
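A minimal sketch of the alternative tuning: switch from percentage-based to absolute dirty-page limits, so the writeback thresholds no longer scale with 128GB of RAM. The byte values below are illustrative assumptions, not tuned recommendations.

```shell
# Sketch: cap dirty page cache in absolute bytes instead of a percentage
# of total RAM (10%/20% of 128GB allows tens of GB of dirty pages).
# The values are illustrative; setting *_bytes overrides the *_ratio knobs.
sysctl -w vm.dirty_background_bytes=$((256 * 1024 * 1024))  # 256 MiB
sysctl -w vm.dirty_bytes=$((1024 * 1024 * 1024))            # 1 GiB
# On Unraid these lines could be persisted via /boot/config/go (assumption).
```

Requires root; the numbers are a starting point to experiment with, not advice.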
  2. OK, I managed to trigger the VM oom-killer again; it is the rsync script. Not sure if I can limit the amount of memory it uses? This is the command I am running; do you see any problems with it, perhaps? Any advice on the matter would be greatly appreciated.
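One way to bound the script's memory, sketched as a plain POSIX shell wrapper; the helper name, paths, and the 2 GiB cap are assumptions for illustration:

```shell
#!/bin/sh
# Sketch: run rsync with a hard cap on its address space via ulimit, so a
# runaway rsync fails with ENOMEM instead of pressuring the OOM killer.
# run_capped_rsync SRC DST   (SRC/DST are hypothetical placeholders)
run_capped_rsync() {
    src="$1"; dst="$2"
    # ulimit -v takes KiB; 2 GiB here is an illustrative assumption.
    ( ulimit -v $((2 * 1024 * 1024)) && rsync -a "$src" "$dst" )
}
```

The subshell keeps the limit from leaking into the rest of the script; if rsync hits the cap it exits with an allocation error you can see in the log, rather than the whole VM being sacrificed.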
  3. Total memory 128GB, 64GB allocated to the VM, 64GB for Unraid; how is it running out of memory? Seems unreasonable. Here's the log server dump from occurrence to kill of the virtual machine; if anyone can understand what's happening better than me, could you take a look? Or if you have any other tips on how you would go about tracking down the culprit: I'm starting to think the rsync backup script is the problem, because both times this problem has arisen the script has been actively running. But if it was that script, shouldn't "ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n" pretty much give me a statement that SHFS is taking all the RAM? It goes from bottom to top in the timeline.
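Worth noting: that ps listing only covers userspace RSS. Kernel-side memory (slab, FUSE buffers behind shfs, page cache) never shows up in any process's RSS, which could explain why nothing in the listing looks guilty. A quick way to see the kernel side:

```shell
# Per-process RSS misses kernel allocations; check /proc/meminfo instead.
# SUnreclaim in particular is slab memory the kernel cannot simply drop.
grep -E '^(MemTotal|MemFree|Cached|Slab|SUnreclaim):' /proc/meminfo
```

If Slab/SUnreclaim is huge while every process looks innocent, the pressure is coming from the kernel side rather than a userspace service.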
  4. I have had issues since I moved my VM environment to Unraid. This is the second time a massive production server has been shut down because Unraid suddenly spikes in memory, panics, and shuts down whatever process it finds holding the most memory, which is automatically going to be the production server that takes 64GB of RAM on boot. How can I prevent this from happening? My Unraid never reaches above 70% memory consumption, yet this has happened twice now. I have a log server and it does NOT show which service is growing prior to this occurring, so I have no way of knowing why this is happening. Running the command "ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n" right after it happens yields no clues either; the server has no service that seems to have eaten RAM, and if I boot the VM again directly after it occurs, it boots fine and the RAM remaining on the system is around 30GB. I need to prevent Unraid from continuing to do this; it is only a matter of time before the system goes corrupt if this keeps happening. I need some way to make it skip this particular process and instead shut down some useless Docker container or something. Ideas?
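The kernel does expose a per-process knob for exactly this: oom_score_adj biases which task the OOM killer picks. A sketch, where the helper name is mine and the qemu PID lookup is an assumption about how the VM process would be found:

```shell
#!/bin/sh
# Sketch: bias the OOM killer away from (or toward) a given process.
# set_oom_score_adj PID VALUE   VALUE range: -1000 (never kill) .. 1000
set_oom_score_adj() {
    pid="$1"; value="$2"
    echo "$value" > "/proc/$pid/oom_score_adj"
}
# Usage idea (hypothetical; lowering the score requires root):
# set_oom_score_adj "$(pidof qemu-system-x86_64)" -900
```

With the VM strongly protected, the OOM killer will pick the next-largest victim (e.g. a container) instead; it does not fix the underlying leak, only the blast radius.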
  5. Has anyone noticed that this Docker container may be leaking RAM? I have a problem right now where my qBittorrent is taking 12GB of RAM, which seems unreasonably high; it draws more and more RAM as time goes on. Anyone else having this issue with this container?
  6. Trying to understand how that happened; is there a timeout value I must have missed somewhere? From the point where the UPS told Unraid "Battery charge below limit! - Initiating system shutdown" until the power cut off was only 32 seconds. In Settings > Disk Settings > Shutdown time-out: 600. In Settings > VM Manager > VM shutdown time-out: 600. Is there another value somewhere? I don't understand why it would cut power to itself after only 32 seconds. Or have I misunderstood how these values work? When do these timeouts start ticking: when the system is on UPS power, or when the UPS tells Unraid it is below the % limit set in the UPS daemon? Either way, none of the scenarios makes sense, because it was only on UPS power for a total of 7 minutes and 21 seconds, which is also less than the timeout value set in Unraid, and the UPS itself never actually ran out of power either. EDIT: I realize now there was also an rsync job running, which probably was blocking Unraid from shutting down, and maybe the numbers the UPS reports can't be trusted and it was running very low on power quickly. Is there a way to tell Unraid to stop all scripts that may be running in User Scripts when a shutdown is initiated?
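On that last question, a stop hook could terminate stray jobs before the shutdown sequence has to wait on them. This is only a sketch, assuming rsync is the long-runner worth killing; the function name and grace period are mine:

```shell
#!/bin/sh
# Sketch: terminate long-running backup jobs so they cannot block shutdown.
# Process name and grace period are assumptions for illustration.
stop_backup_jobs() {
    pkill -TERM -x rsync 2>/dev/null   # ask politely first
    sleep 2                            # short grace period
    pkill -KILL -x rsync 2>/dev/null   # then force
    return 0                           # never fail the shutdown path
}
```

Where to hook this in (e.g. a custom stop script on the flash drive) depends on the Unraid version, so treat the placement as an open question for the thread.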
  7. Shutdown timeout set to 600 seconds, VM Manager shutdown timeout set to 600 seconds; this is 10 minutes, yet Unraid completely cut the power after 7 minutes 21 seconds. Hmm.
  8. This was the last message from Unraid to the log server: "mpt2sas_cm0: sending message unit reset !!" This was just informational, not critical; I'm not sure what it means, but from what I gather Unraid simply cut the power to itself before it had shut down properly. Can anyone speculate as to why this could occur? The battery still had over 40% power left when it happened. A bug in Unraid?
  9. Here are my UPS settings, which I guess will be in question here. Please help me understand why my system was not shut down properly even though all the logs say it was; the UPS never got below 30%, so power was never cut.
  10. Also very confusing, from the logs: "Oct 6 13:14:13 NAS root: Shutting down VM: VMServer (Windows 2019 Server)" and "Oct 6 13:14:42 NAS kernel: kvm: exiting hardware virtualization". So the VM was shut down, yet when I start the VM now it tells me it was not gracefully shut down.
  11. Had my first power failure since I started using Unraid; everything shut down as expected, the UPS took care of it. Checking the log server: power failure occurs, battery charge below limit, initiating shutdown. Why is Unraid doing a parity check when the system was properly shut down? And what exactly does the message "root: /usr/local/sbin/powerdown has been deprecated" mean? I am so confused now. Please help me understand; unnecessary parity checks are a pain in the ass, take nearly 24 hours, and draw a ton of resources on the system. I have a UPS to avoid bad shutdowns... why is it doing a parity check anyway?
  12. Hi, my Unraid install seems to have run out of memory and killed the big nested VM host I am using. From my understanding, the VM itself cannot grow beyond the memory limit that has been set; I also do not use ballooning (I set both min and max memory in the XML, so the memory for the VM is reserved on start). The question is: I do have a log server, but I cannot see from the logs which program in Unraid suddenly started consuming so much memory that Unraid decided to kill the most memory-consuming part of the system, which is the VM. My guess is that when Unraid detects out-of-memory it runs some kind of script that scans for whatever process consumes the most memory and kills it; it doesn't have to be that process that is the problem, though. Am I correct in this understanding? My guess is that it may be any of the many Docker containers running, or an rsync script that started running right before this error occurred. Is there any way to tell from the logs? Here is the log from when the event occurred (from the bottom, up to when it killed my VM at the top); can you tell what process suddenly grew, or is that not provided in the logs?
  13. Is it possible to map a path directly to an SSD used as a cache for Unraid? I have a use case where I want a VM to be able to directly access the Unraid cache with the fastest possible IO to this disk, while Unraid also needs access to the same disk, also with the fastest possible IO. Is there any good approach to doing this?
  14. Unraid is amazing, but one thing makes me wonder why it hasn't been done; there has to be a good reason for it. Right now you have to tell Unraid, through the share settings, the "minimum free space", and from what I gather the mover checks for every file it tries to move whether the minimum free space has been reached; if it has not, it proceeds to move the file to that disk according to the split level setting. The problem with this is that if you set "minimum free space" to 5GB but the file it is currently moving is 6GB, it won't understand that the file won't fit. Why not? Doesn't the mover already have the information about the file it is currently moving and its size? It shouldn't require much more performance to have another check that compares the minimum free space setting with the size of the file it is supposed to be moving. Maybe there's a good reason why it is the way it is, but I don't understand it right now.
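The check described above would only be a couple of lines; here is a sketch of the per-file comparison (the helper name and arguments are mine, not anything the mover actually exposes):

```shell
#!/bin/sh
# Sketch of the per-file check the post asks for: would moving FILE to
# DISK leave less than MIN_FREE_KB available? Exit 0 means the file fits.
# file_fits FILE DISK_MOUNTPOINT MIN_FREE_KB
file_fits() {
    file="$1"; disk="$2"; min_free_kb="$3"
    size_kb=$(( ( $(stat -c %s "$file") + 1023 ) / 1024 ))
    avail_kb=$(df -Pk "$disk" | awk 'NR==2 {print $4}')
    [ $(( avail_kb - size_kb )) -ge "$min_free_kb" ]
}
```

With a 5GB minimum-free setting and a 6GB file, this check fails unless the disk has at least 11GB available, which is exactly the behavior the post argues for.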
  15. Sounds like SMB issues; I had these too, and I fixed them by modifying the SMB configuration on Unraid. In Settings > SMB > SMB Extras I added: force create mode = 0666, force directory mode = 0777, force user = nobody, force group = users, create mask = 0666. These make sure all files/folders created via SMB on Unraid, no matter which user creates them, end up with permissive modes and the owner nobody:users, which is how Unraid likes to operate. Since you had SMB issues, chances are a lot of files have weird permissions; you might want to run Tools > New Permissions on all your shares after adding that to your SMB configuration. Then all files will have the correct permissions, and all new files created via SMB will also have the correct permissions going forward.
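For readability, the same settings as they would be pasted into the Settings > SMB > SMB Extras box:

```
force create mode = 0666
force directory mode = 0777
force user = nobody
force group = users
create mask = 0666
```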
  16. Think I answered my own question; I just expected it to be faster. My CPU practically sleeps while writing at 110MB/s to the XFS-encrypted cache volume, so to me it seems like there is a hard limitation somewhere, either in the encryption layer or inside Unraid, for whatever reason.
  17. Interesting read here: is Unraid just that slow with XFS encrypted? It seems like a crazy performance hit. I have been running my array encrypted for years, but I never write directly to the array (the mover does that at night), so the 110MB/s limit that may be in place somewhere isn't a big problem there, but it's definitely a huge issue on the cache. Is there a limit hardcoded into Unraid for how fast it can write to an XFS-encrypted volume?
  18. I have a very strange issue with my Unraid installation. I previously ran 2x 1TB SSDs in a btrfs RAID pool as a cache for 3 years and never had any speed problems, but I wanted the volume encrypted and to free up one SSD, so I changed this pool to a single 1TB SSD with XFS encrypted instead. Everything works fine; here is the hiccup. Network speed from outside the rack to clients = 2500Mbit. Network speed inside the rack between servers = 10000Mbit. Next test: setting up a virtual machine inside Unraid with an SMB share on the same NIC/network and trying the speeds. Third and final test: sending a file from another server inside the rack that has a 10Gbit connection directly to the Unraid cache. As you can clearly see in the last test, something is very off: even on 10Gbit connectivity I am getting 110MB/s when sending to the cache, but when receiving I am getting 10Gbit speeds. The issue started the same day I removed my btrfs cache RAID pool and changed it to XFS encrypted. I have already swapped the SSD for a brand new one, because I initially thought this issue must have been due to 3 years of wear with XFS encryption on top causing weird slowdowns; brand new SSD, exact same issue.
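To separate disk/encryption throughput from the network path, a local write test directly against the cache mount could help. A sketch; the helper name is mine and the target path is an example:

```shell
#!/bin/sh
# Sketch: measure raw write throughput locally, bypassing SMB and the NIC.
# write_test TARGET_FILE COUNT_MB
write_test() {
    target="$1"; count_mb="$2"
    # conv=fsync makes dd include the final flush in its timing.
    dd if=/dev/zero of="$target" bs=1M count="$count_mb" conv=fsync 2>&1 | tail -n 1
    rm -f "$target"
}
# Example (hypothetical path): write_test /mnt/cache/speedtest.bin 4096
```

If the local number is also ~110MB/s, the bottleneck is in the encryption/filesystem stack; if it is much faster, the limit is somewhere in the SMB/network path instead.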
  19. Just to be clear, this was the command that worked to silence these messages in the syslog.
  20. Sorry for opening this thread again, but I have a similar issue: my disk spins down and I don't want it to. I installed Unassigned Devices and set the disk to "pass through", but I still cannot control the spin up/down from the settings page of the disk, whereas for the other disks in my array I have the option. See screenshots below: Passed-through disk: Disk in the Unraid array:
  21. Hi, new to WireGuard and I love how easy it was to set up; I am sure everyone will be using it soon. The problem I see is that anyone with a few seconds of access to your client machine could potentially steal the peer info and then use it to access your local network, so some enhanced features would be nice if possible. 1. A way to enable/disable peers easily in the GUI, and on top of that it would be really cool to be able to schedule when a peer is allowed access. Say I have a work schedule: I could set it up so that my work peer is allowed to access the tunnel during the same hours that I work, preventing the tunnel from being used off hours. 2. A way to get a notification, with a little information about the connecting client, when someone connects to WireGuard. I am not sure how much information is exposed to Unraid, but it would be really nice to be able to set up some kind of notification when a connection has been made, which makes it a lot easier to notice if someone who shouldn't have access does.
  22. Never mind, I see now that I somehow glossed over this; it was mentioned in there, all is good. Edit: a good thing to add for future Unraid releases is a notice that you have to enable the option "Host access to custom networks: Enabled" under the advanced Docker settings in order for this to work; right now it only says you have to make a static route in your router and nothing more. Otherwise, big props to the developers; WireGuard was very simple to set up and works like a breeze!
  23. Hi, I am trying to set up WireGuard for my local network and I want the route to have access to the Docker containers as well as the rest of the network. All seems to work except the Docker containers. I have set up a static route in my router to route traffic from the WireGuard network to the Unraid LAN IP as instructed, but no luck. My question is: do I have to enable "Host access to custom networks: Enabled" under the Docker settings tab? I couldn't see this option mentioned anywhere in the guide on how to reach Docker, so I assumed it wasn't a thing, but I found it when I was looking around. Should I enable this, or am I doing something else wrong? I'm using br0 for all my Docker containers, with each of them having their own IP on my LAN.
  24. Thanks, looks like it's gone! root@NAS:~# ls /boot/config/plugins/nvidia-driver/packages/ 5.15.46/ root@NAS:~#
  25. root@NAS:~# ls /boot/config/plugins/nvidia-driver/packages/ 5.10.28/ 5.15.46/ root@NAS:~#