Jerky_san's Achievements


  1. Sorry, I don't know why I said blank.. I meant the HTTP challenge over port 80. Even though the port is totally accessible, it has trouble completing the challenges, stating "Timeout during connect (likely firewall problem)". It will even fail the challenge on subdomains it validated just a few minutes earlier, when adding another subdomain to the list. But if I spin up "NginxProxyManager" as a test container, just to see whether other containers fail, it is able to complete the HTTP challenge without issue. To my knowledge, during the HTTP challenge the server serves files out of the Let's Encrypt folder where the challenge tokens are stored, but for some reason it times out on one or more subdomains and succeeds on others. I almost wonder if Fail2ban is kicking in because I have so many subdomains.
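One way to sanity-check that a webserver is actually serving challenge files, before blaming the firewall or Fail2ban, is to drop a test token into the ACME webroot and fetch it the same way the CA's validation server would. Below is a hedged local sketch of that round trip; the webroot path, port, and token contents are made-up stand-ins, not the container's real paths.

```python
import http.server
import tempfile
import threading
import urllib.request
from functools import partial
from pathlib import Path

# Hypothetical stand-in for the container's ACME webroot.
webroot = Path(tempfile.mkdtemp())
token_dir = webroot / ".well-known" / "acme-challenge"
token_dir.mkdir(parents=True)
(token_dir / "test-token").write_text("dummy-keyauth")

# Serve the webroot the way the HTTP-01 responder would (ephemeral port).
handler = partial(http.server.SimpleHTTPRequestHandler, directory=str(webroot))
server = http.server.HTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch the token over plain HTTP, like the CA's validation server does.
port = server.server_address[1]
url = f"http://127.0.0.1:{port}/.well-known/acme-challenge/test-token"
body = urllib.request.urlopen(url, timeout=5).read().decode()
server.shutdown()
print(body)  # prints the token body the CA would see
```

Against the real container you would do the equivalent with curl from outside the network: if the token comes back locally but times out externally, the problem is in front of the container (firewall, NAT, or a ban list), not in the challenge handling itself.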
  2. @aptalca Hey, sorry to bother you.. I was wondering: to do an HTTP blank setup on Let's Encrypt, does anything special need to be set anywhere besides the standard stuff in the Docker template, like the subdomains? I had an issue trying to add a subdomain, but another container would set it up properly, which made me think I might have something configured improperly, though I couldn't for the life of me figure out what. The error I was getting was "Timeout during connect (likely firewall problem)". But if I just pointed my ports at the other container, HTTP validation worked. The other strange thing is that sometimes it would work for a subdomain and other times it wouldn't, after just a restart. I assume it's something I'm doing, but I'm wondering if you've ever heard of this happening. I ended up doing a DNS challenge and it all worked fine. Thanks for any insights. Edit: I should also mention I only use Cloudflare for my DNS now and no longer use it as a pass-through, so it shouldn't be that, to my knowledge. Also, the other container shouldn't have worked if that were the case. I have 6-7 subdomains.
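For reference, the DNS-challenge setup that ended up working can be sketched as a container config. This is a hedged example against linuxserver's Let's Encrypt/SWAG image: the domain, email, and paths are placeholders, and the exact variable names should be checked against the image's own documentation for your version.

```shell
# Hedged sketch of a DNS-01 (Cloudflare) setup on the linuxserver image.
# example.com, the email, and /path/to/appdata are placeholders.
docker run -d \
  --name=swag \
  --cap-add=NET_ADMIN \
  -e URL=example.com \
  -e SUBDOMAINS=wildcard \
  -e VALIDATION=dns \
  -e DNSPLUGIN=cloudflare \
  -e EMAIL=you@example.com \
  -p 443:443 -p 80:80 \
  -v /path/to/appdata:/config \
  linuxserver/swag
# Then put your Cloudflare API credentials in /config/dns-conf/cloudflare.ini.
```

Because DNS-01 proves control via a TXT record rather than an inbound connection, it sidesteps port-80 reachability problems entirely, which is consistent with it working here while HTTP-01 timed out.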
  3. Slightly off topic, but @Dyon any chance you could make a WireGuard Docker container for SABnzbd?
  4. Figured it out.. I had an old Docker that also had 8080 allocated (I thought it was set to stay stopped), but it wasn't, and it was starting before qBittorrent was. Btw, so far the experience has been much better than Deluge. Downloads are much faster, so thanks a lot for putting out a WireGuard Docker.
  5. Installed it to try. After restarting, though, it showed it wasn't running even though the log showed it was. Attempting to start the service, even after disabling Docker and re-enabling it, shows "server error". Very strange..
  6. I always use CPU-Z to get an idea of where my processor is vs bare metal. Of course multi-threading won't be as good, but you should be able to divide the numbers down and see how much you're getting vs a bare-metal machine.
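The per-thread comparison described above is simple arithmetic: normalize each multi-thread score by its thread count, then take the ratio. The scores below are invented placeholder numbers, not real CPU-Z results.

```python
# Invented example scores (CPU-Z style multi-thread benchmark numbers).
bare_metal_multi = 6000.0   # bare-metal multi-thread score
vm_multi = 5400.0           # same benchmark run inside the VM
host_threads = 12           # threads used in the bare-metal run
vm_threads = 12             # threads allocated to the VM

# Normalize to a per-thread score so differing thread counts compare fairly.
bare_per_thread = bare_metal_multi / host_threads
vm_per_thread = vm_multi / vm_threads
efficiency = vm_per_thread / bare_per_thread
print(f"VM retains {efficiency:.0%} of bare-metal per-thread throughput")
```

Single-thread scores can be compared directly; it is the multi-thread numbers that need this normalization, since a VM rarely gets every host thread.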
  7. What type of VM are you running? Perhaps a Fedora VM?
  8. It is located in a VM and I've already installed the agent off of the virtio disk. Hmm, I always just mount the virtio ISO, go to Device Manager, tell it to scan the top level like E:\ with "include subfolders" checked, and let it find whatever it wants. Maybe we will see; this is running a lot better now that I've used your XML file items.. Thanks so much for that. What program do you use to benchmark within the VM to see which improvements are working, etc.?

Welcome, o-o glad it works for you. Honestly I mostly use CPU-Z since it's fast/easy. I also use AIDA64 to get memory timings.

I have noticed that when I'm moving between areas in the Unraid interface, sometimes the webpage will go unresponsive and I'm unable to ping it on the network. I have to power cycle, and when it comes back up I have to manually start the VM manager. This started after the BIOS changes, so I think my freezing is back. Any way to prove that with logs? I think this has happened maybe once a day?

Hmm, now this is interesting. I know I used to see this when the USB controller I was passing through had an issue. It would affect other USB controllers that weren't passed through. You can usually see it in the syslog as the USB thumb drive becoming dismounted. Were you able to find the power supply setting I had talked about? I believe on my old Zenith it was in the AMD CBS menu. If the thumb drive disconnects it sadly won't write anything to the local logs, but if you have a remote syslog you can get it out of there.
  9. Alright, so I did my tests.. First question: do you have the Dynamix TRIM plugin installed? If you do, make sure it's running regularly; if you don't, make sure you get it. Second, under "Settings" > "Global Share Settings", enable "Direct IO". I would personally do the TRIM test first, then enable Direct IO and test again. Below are my results.. The problem I ended up hitting is that my NVMe started to get too hot and thus throttle. It was at 550 MB/s, which is a pretty good transfer speed. It did crater twice, but that was also when it started throttling down pretty hard.
  10. Docker "should" be fine, but I guess it depends on whether your Dockers are doing a whole lot, like Plex or something. I've seen mine bog down pretty hard when something really hits all of the cores.
  11. Spin locks, I believe, are when a thread constantly sits there looping while it waits on something. I can see you're allocating core 0/12. I would say don't do that, because Unraid will ALWAYS use core 0/12 even when you try isolating it. It just doesn't work, so I'd highly suggest removing that one. I'm still researching, but who knows, it might fix it lol
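For reference, CPU isolation on Unraid is done with the `isolcpus` kernel parameter in the flash drive's syslinux.cfg. A hedged sketch for a 12-core/24-thread part, leaving core 0 and its hyperthread sibling 12 to the host per the advice above; the exact core list depends on your topology and this is not meant to be copied verbatim:

```
label Unraid OS
  menu default
  kernel /bzimage
  append isolcpus=1-11,13-23 initrd=/bzroot
```

Cores left out of `isolcpus` remain available to the host scheduler, which is why pinning a VM to core 0 never really isolates it.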
  12. Btw, if you're wondering where you can get info on all this
  13. Ok, I may have been slightly stupid on this. Because you're not spanning both nodes, you should be able to simplify this. Change <numatune> to <numatune> <memory mode='strict' nodeset='1'/> </numatune> and drop the <numa> part. The reason I say this is because I was unable to come up with a way for you to say <cell id='0' cpus='' memory='0' unit='KiB'/> <- since cpus= can't be blank. So a little switching around here. Also, you may want to consider <emulatorpin cpuset="7-9"/> to keep the emulator from jumping out of the NUMA node as well.
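Laid out as a block, the simplified single-node config described above would look roughly like this; the nodeset and emulatorpin range come from the post, and everything else should be adapted to your own topology rather than copied as-is:

```xml
<!-- pin all guest memory to NUMA node 1; the per-cell <numa> section is dropped -->
<numatune>
  <memory mode='strict' nodeset='1'/>
</numatune>
<cputune>
  <!-- existing vcpupin lines stay; this keeps the emulator threads on-node too -->
  <emulatorpin cpuset='7-9'/>
</cputune>
```

With only one node in play, a single strict `<memory>` element is enough; the per-cell `<memnode>` mappings are only needed when the guest spans multiple host nodes.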
  14. @celborn I'll be honest on this.. I have only dealt with QEMU/KVM when it comes to AMD. But given that you have dual NUMA nodes, very similar to the 2990WX, what I wonder may be occurring is that RAM is being allocated from the wrong NUMA node. The reason I say this is because, at least when I was doing this with my 2990WX, it ALWAYS allocated from node 0 no matter what, unless node 0 was out of RAM; then it would start taking from the other node. To fix this you need to tell the VM how things actually look and how to get at its resources "better". I am going to paste an example XML below. In the numatune you'll need to say nodeset='0,1' and adjust the memnodes accordingly as well. Under <numa> I believe you'll simply have cell id='1' with all cpus='0-11' and divide the memory in half, since you only have 8 GB allocated. I HIGHLY recommend you copy your whole XML config to Notepad++ or something before manually tinkering, so you can always just slap it back in. Remember this is just a guide and not intended to be directly copy/pasted. I should also say I have no idea if this will fix your performance problems, but I assume crossing NUMA nodes to get RAM isn't working great for you.
<cputune>
  <vcpupin vcpu='0' cpuset='4'/>
  <vcpupin vcpu='1' cpuset='36'/>
  <vcpupin vcpu='2' cpuset='5'/>
  <vcpupin vcpu='3' cpuset='37'/>
  <vcpupin vcpu='4' cpuset='6'/>
  <vcpupin vcpu='5' cpuset='38'/>
  <vcpupin vcpu='6' cpuset='7'/>
  <vcpupin vcpu='7' cpuset='39'/>
  <vcpupin vcpu='8' cpuset='8'/>
  <vcpupin vcpu='9' cpuset='40'/>
  <vcpupin vcpu='10' cpuset='9'/>
  <vcpupin vcpu='11' cpuset='41'/>
  <vcpupin vcpu='12' cpuset='10'/>
  <vcpupin vcpu='13' cpuset='42'/>
  <vcpupin vcpu='14' cpuset='11'/>
  <vcpupin vcpu='15' cpuset='43'/>
  <emulatorpin cpuset='4-11,36-43'/>
</cputune>
<numatune>
  <memory mode='strict' nodeset='0,2'/>
  <memnode cellid='0' mode='strict' nodeset='0'/>
  <memnode cellid='1' mode='strict' nodeset='2'/>
</numatune>
<os>
  <type arch='x86_64' machine='pc-i440fx-4.2'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
  <nvram>/etc/libvirt/qemu/nvram/99642e81-2f13-a916-682c-90191636d75f_VARS-pure-efi.fd</nvram>
  <boot dev='hd'/>
</os>
<features>
  <acpi/>
  <apic/>
  <hyperv>
    <vpindex state='on'/>
    <synic state='on'/>
    <stimer state='on'/>
    <reset state='on'/>
    <vendor_id state='on' value='KVM Hv'/>
    <frequencies state='on'/>
  </hyperv>
</features>
<cpu mode='host-passthrough' check='none'>
  <topology sockets='1' dies='1' cores='8' threads='2'/>
  <cache mode='passthrough'/>
  <feature policy='require' name='topoext'/>
  <numa>
    <cell id='0' cpus='0-7' memory='16777216' unit='KiB'/>
    <cell id='1' cpus='8-15' memory='16777216' unit='KiB'/>
  </numa>
</cpu>
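The `memory` values in the `<numa>` cells are in KiB, so "divide the memory in half" is just a unit conversion. A small sketch of that arithmetic, using the 8 GB figure mentioned for celborn's VM and the 16 GiB-per-cell values in the example XML:

```python
def kib_per_cell(total_gib: int, cells: int) -> int:
    """Split a VM's RAM evenly across NUMA cells, in KiB (libvirt's unit)."""
    return total_gib * 1024 * 1024 // cells

# An 8 GiB VM split across 2 cells, as suggested above:
print(kib_per_cell(8, 2))   # 4194304 KiB per cell
# The example XML uses 32 GiB total, i.e. 16 GiB per cell:
print(kib_per_cell(32, 2))  # 16777216 KiB per cell
```

Whatever split you choose, the cell totals must add up to the VM's overall allocation, or libvirt will reject the config.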
  15. o-o yeah, that was what I was thinking lol. I had a problem with RAM a few weeks back when all my equipment got toasted and I was pulling things out of my dust bin, trying to get what I had left alive again. Anyways, glad it seems to be working better. Hopefully the other issues will be easier to solve.