Posts posted by Michael_P
-
1 hour ago, rfKuster said:
Any Idea why this happened?
Nothing jumps out, but your UPS is going nuts in the log - disconnecting and re-connecting repeatedly
-
14 hours ago, JorgeB said:
This is not normal.
14 hours ago, carotna said:
Thank you, will try the half-half method.
Start with the container using this IP, it's the one opening a ton of ports
-container-ip 172.18.0.11
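To spot which container is opening the most ports, you can count listening sockets per container IP. A minimal sketch, using a hypothetical sample of `ss -tln`-style output (on a live server you'd pipe the real command's output into the same awk pipeline):

```shell
# Hypothetical sample of `ss -tln` output; replace with the real command on a live box
sample='LISTEN 0 128 172.18.0.11:8080
LISTEN 0 128 172.18.0.11:9090
LISTEN 0 128 172.18.0.5:443'

# Strip the port from the local address, then count listeners per IP
counts=$(printf '%s\n' "$sample" \
  | awk '{split($4, a, ":"); print a[1]}' \
  | sort | uniq -c | sort -rn)
printf '%s\n' "$counts"
```

The IP at the top of the list is the container to start with.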
-
1 hour ago, John321 said:
Any other suggestions or ideas where to have a look on
The reaper is killing the same process, but it doesn't look like it's using a terribly large amount of RAM at the time. If it happens during backup, disable dockers and plugins one at a time until you find the culprit.
Also, make sure whatever backup process you're using isn't using RAM as a temporary filesystem
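One way to check that is to verify the backup's temp or staging path isn't mounted on tmpfs. A minimal sketch, parsing a hypothetical `/proc/mounts` snippet so it's self-contained (on a real server you'd read `/proc/mounts` or use `findmnt` instead):

```shell
# Hypothetical /proc/mounts snippet; on a real system read /proc/mounts itself
mounts='tmpfs /tmp tmpfs rw,nosuid,nodev 0 0
/dev/md1 /mnt/disk1 xfs rw,noatime 0 0'

path='/tmp'   # the backup's temp/staging directory
fstype=$(printf '%s\n' "$mounts" | awk -v p="$path" '$2 == p {print $3}')
if [ "$fstype" = "tmpfs" ]; then
  echo "$path is RAM-backed - a large backup staged here can trigger the OOM reaper"
fi
```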
-
On 5/17/2024 at 8:09 PM, ruohki said:
I have no influence on that. The storage is supplied as is, with the promise: we won't look at the data. That's why I want to encrypt/decrypt files on the unraid side
Then you'll need to store it as an encrypted blob on the remote server - look into Veracrypt
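Veracrypt is the full solution; as a minimal sketch of the same "encrypted blob" idea, here's a symmetric round-trip with `openssl enc` (the file names and the passphrase `example-pass` are placeholders, and in practice you'd use a key file or prompt rather than a literal passphrase):

```shell
tmpdir=$(mktemp -d)
echo 'private data' > "$tmpdir/plain.txt"

# Encrypt locally; only the opaque blob ever goes to the remote server
openssl enc -aes-256-cbc -pbkdf2 -pass pass:example-pass \
  -in "$tmpdir/plain.txt" -out "$tmpdir/blob.enc"

# Decrypt locally when you pull it back
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:example-pass \
  -in "$tmpdir/blob.enc" -out "$tmpdir/restored.txt"

cmp -s "$tmpdir/plain.txt" "$tmpdir/restored.txt" && echo "round-trip OK"
```

The provider only ever sees `blob.enc`, which holds up their "we won't look" promise by making there nothing to look at.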
-
9 hours ago, ruohki said:
Is it possible to mount a remote share in a way that files are encrypted on the remote share but visible as decrypted to unraid?
Use an encrypted FS on the remote share's disk, anything written to it will be encrypted
-
11 hours ago, IaMs12 said:
Then leave the failing drive in the array till it goes out completely.
A flaky drive shouldn't be left in the array, since every disk is needed for a rebuild if another drive bows out. If you're running single parity and want to increase its size, I'd add the new, larger parity drive, rebuild parity, then replace the failing disk. And do it all in maintenance mode to preserve parity in case it goes south, so you can re-add the old parity drive and rebuild the array.
In any case, 1 error would lead me to toss the drive, just isn't worth it.
-
Unpaper was using ~2 GB at the time it was killed. Check its configuration, add RAM, or install the swapfile plugin
-
18 minutes ago, xxDeadbolt said:
if I broke anything
As long as you don't try to pass it thru to a VM at the same time - that will crash the host
-
7 hours ago, seecs2011 said:
Adding anonymized diagnostic data here for review, sent a ticket request directly to unraid
Looks like your VPN details may be in plain text (I've obfuscated it below), wonder if @limetech is going to get that sorted in the next release. For now, you should remove it from your post
root 566 0.0 0.0 23824 12724 ? S 22:53 0:00 | \_ /usr/bin/openvpn --reneg-sec 0 --mute-replay-warnings --auth-nocache --setenv VPN_PROV protonvpn --setenv VPN_CLIENT openvpn --setenv DEBUG false --setenv VPN_DEVICE_TYPE tun0 --setenv VPN_ENABLED yes --setenv VPN_REMOTE_SERVER us-chicago.privacy.network --setenv APPLICATION qbittorrent --script-security 2 --writepid /root/openvpn.pid --remap-usr1 SIGHUP --log-append /dev/stdout --pull-filter ignore up --pull-filter ignore down --pull-filter ignore route-ipv6 --pull-filter ignore ifconfig-ipv6 --pull-filter ignore tun-ipv6 --pull-filter ignore dhcp-option DNS6 --pull-filter ignore persist-tun --pull-filter ignore reneg-sec --up /root/openvpnup.sh --up-delay --up-restart --keepalive 10 60 --setenv STRICT_PORT_FORWARD yes --setenv VPN_USER YWQpRx******** --setenv VPN_PASS SBKmfs********** --down /root/openvpndown.sh --disable-occ --auth-user-pass credentials.conf --cd /config/openvpn --config /config/openvpn/us_chicago.ovpn --remote ******* 1198 udp --remote ******* 1198 udp --remote ****** 1198 udp --remote-random
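Before posting diagnostics, it's worth grepping `system/ps.txt` yourself for leaked credentials. A minimal sketch, using a trimmed-down stand-in for the real `ps.txt` contents (usernames and values here are made up):

```shell
# Stand-in for system/ps.txt from the diagnostics zip
ps_sample='/usr/bin/openvpn --setenv VPN_USER someuser --setenv VPN_PASS somepass'

# Pull out any VPN_USER / VPN_PASS values sitting in plain text
leaks=$(printf '%s\n' "$ps_sample" | grep -oE 'VPN_(USER|PASS) [^ ]+' || true)
if [ -n "$leaks" ]; then
  echo "WARNING: credentials visible in ps output - redact before posting"
fi
```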
-
16 hours ago, droopie said:
I'm out of ideas besides new cpu, mobo, ram
It's almost certainly not hardware related, though adding RAM may help (or the swap file plugin)
-
Just now, patthe said:
how do I know if they are OK? I just have to assume?
or start watching 'linux isos'
-
11 minutes ago, patthe said:
How do I know all my "linux iso" are not broken and still usable / not corrupt?
Compare checksums of the original data against the 'new' copies
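A minimal sketch of that check with `sha256sum` (the directory names and `a.iso` are placeholders): hash the originals once, then verify the copies against the stored sums.

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/original" "$tmp/copy"
echo 'iso contents' > "$tmp/original/a.iso"
cp "$tmp/original/a.iso" "$tmp/copy/a.iso"

# Record checksums of the originals
(cd "$tmp/original" && sha256sum *.iso > "$tmp/sums.txt")

# Verify the copies against the recorded sums
result=$(cd "$tmp/copy" && sha256sum -c "$tmp/sums.txt")
echo "$result"    # a.iso: OK
```

Any file that was silently corrupted will show up as `FAILED` instead of `OK`.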
-
12 hours ago, Napoleon said:
apparently a new board
Sounds like a bad board to me, even one of those tiny little LGA pins slightly out of place will do weird things to RAM
-
Check for socket damage, bent pins or the like
-
Just now, droopie said:
any suggestions on what to do next?
If you're limiting it in the container settings, it didn't work. The only other thing is to figure out why it's using so much, either a bad config or a bad install. You can blow it away and reinstall from scratch to see if that fixes it.
-
9 hours ago, droopie said:
could it be faulty ram
Nah, Prowlarr was using ~11GB when it was killed by the reaper, so something is wrong with it
-
15 hours ago, droopie said:
-Extra Parameters: -cpus=".5" --memory=4G on all dockers
And verify you have two dashes in front of cpus (--cpus=".5")
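A quick sanity check you can run on the Extra Parameters string before pasting it in (the `params` value is just this thread's example): docker needs the long-form double dash, and a single-dash `-cpus` is silently misparsed.

```shell
# The Extra Parameters string as it should appear in the container settings
params='--cpus=".5" --memory=4G'

# Verify the double-dash long form is present
case "$params" in
  *--cpus=*) check="ok" ;;
  *)         check="missing or malformed --cpus flag" ;;
esac
echo "$check"
```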
-
Just now, patthe said:
and add the Parity drive after?
Yes
-
1 hour ago, casperse said:
Just wondering are we getting really close
The word is it's going to be in 6.13
-
If you have a dashboard window open in a browser for too long, this is the likely cause
-
4 minutes ago, binhex said:
this would be true for any env vars for any container, not just this one
Right, which is why I created a bug report instead of posting in each docker container's support thread
-
6 minutes ago, bmartino1 said:
Binhex vpn.
There may be a log or memory leak that shows it. Can a client running this go to Tools > Diagnostics, download the log, then in the zip file go to system/ps.txt and search for their password to confirm this leak? I couldn't reproduce it.
Here's one in this thread - system/ps.txt shows their username and password in plain text
-
2 minutes ago, planetwilson said:
Thanks, done.
If you want to re-post, extract the zip, edit the credentials out of system/ps.txt, then re-zip and post
Out Of Memory errors detected on your server
in General Support
Posted
Your best bet now is to start disabling docker containers until you find the culprit