itz4blitz

Everything posted by itz4blitz

  1. Edit: video is working. I'm going to update the BIOS, and I'll update with the results from memtest.
  2. Thanks for the reply. I usually just access my server via browser from another system. Unfortunately, I'm having issues getting anything other than a black screen when I switch my monitor over to the Unraid server. Once I resolve that, I'll run the memtest.
  3. Also, I wanted to mention that I've come across several posts that suggest checking the connection to the drive; however, this is an NVMe SSD.
  4. Last night this happened after a reboot; I've made no changes other than the reboot. blitzraid-diagnostics-20230127-1518.zip
  5. Some additional steps I've taken: removed Gluetun and attempted to reinstall my applications; disabled Gluetun and reinstalled my applications. They still continue to fail. As soon as I re-install Sonarr, Radarr, Prowlarr, Sab, etc., I get this error: Edit: I tested another container that isn't having issues (Nginx), and I was able to update it successfully. It seems only the containers routed through Gluetun are affected. Edit 2: I manually started GluetunVPN. Sonarr/Radarr/etc. now re-install fine, but they fail to start. If I try to manually restart them, I get the following:
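      For anyone else hitting this, a rough way to check whether a container is tied to Gluetun's network stack (the container names here are just placeholders, adjust to your own):
        # Show which network a container is attached to; "container:GluetunVPN" (or the
        # Gluetun container's ID) means it shares Gluetun's network namespace.
        docker inspect -f '{{.HostConfig.NetworkMode}}' binhex-sonarr
        # Containers in that mode can only start while Gluetun itself is running, so
        # bring Gluetun up first, then the apps behind it.
        docker start GluetunVPN
        docker start binhex-sonarr binhex-radarr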
  6. I have GluetunVPN acting as a network container for Binhex Sonarr, Radarr, Prowlarr and SabNZBd. Everything has been great for months. Today I needed to regenerate my VPN token. When I restarted Gluetun, everything broke. I rebooted, and now when I try to start any of the containers behind Gluetun I see this: Execution error Image can not be deleted, in use by other container(s) I came across this post while troubleshooting. It references this post, whose steps I followed; however, it hasn't resolved my issue. I've attached my syslog and dockerlog in hopes someone can help me resolve this. Thanks. syslog.txt dockerlog.txt
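      For anyone searching later, a sketch of how to see which container is still holding on to an image (the image name below is only a placeholder):
        # List every container, running or stopped, created from the image in question.
        docker ps -a --filter "ancestor=binhex/arch-sonarr"
        # Remove the stale container that still references it, after which the image
        # can be deleted or re-pulled.
        docker rm <container-id>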
  7. I've also been trying to set this up and I have the same issue as @lukeoslavia. @Glassed Silver, is this image still maintained? Edit: it works perfectly now. I found this comment which helped me resolve it: https://github.com/directus/directus/discussions/6480#discussioncomment-1106735 Thanks @Glassed Silver for this.
  8. I tried searching pretty extensively for anyone else experiencing this, but I only found posts about freezes during normal operation. To clarify: my system works fine once it is fully booted. It's whenever I manually restart, or use a scheduled restart, that my system hangs during the boot process. Typically I have to physically power the device off and back on a few times before it will boot to USB fine. When it has issues, the system will POST and then just sit at a black screen afterward. For some months this was fine because I was not restarting my system at all. Before I got my UPS it was only an annoyance whenever we lost power. However, now that I have my UPS in place, it's felt more like I'm using it as a band-aid to cover up the freezes during boot rather than as a backup for power loss. Recently (a week or so ago), I set up a script to automatically reboot my system 2 times per week in the early hours (rough schedule sketched below). I'm waking up to a server/services that are offline. When I check, the system is powered on but not reachable, and there's a blank screen if I power on my attached TV. If I disable this reboot script, it can run for weeks on end without any issues.
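      The reboot schedule itself is nothing fancy; something along these lines (the exact days and time are only an example) is all I mean by "2 times per week in the early hours":
        # Cron entry: reboot at 04:00 every Sunday and Wednesday.
        0 4 * * 0,3 /sbin/reboot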
  9. Installed the newest firmware, rebooted, started the array, and let it do its thing for a few hours. Came back a few hours later - it seems to still be throwing tons of errors in the logs. unraid-diagnostics-20220417-1043.zip
  10. Today I started receiving this error. It's only been about 1-2 days since I restarted the server. I wanted to seek some assistance in figuring out what's going on with my server. The only "issues" I'm running into are occasional ones related to Docker containers, and a VPN container I have running is having issues with my torrent client (it was only able to connect to a few seeds; this morning it has completely stopped). I'm able to do a curl ifconfig from the console of containers running through the VPN app; they return the VPN IP address, so they are working. I had read some posts about VPN containers having issues when the ISP modem is in bridge mode (which mine is); they recommended switching the VPN container to use TCP instead of UDP. I've tried both with no success. I went from getting 50-60MB/s+ down while using the VPN container, to 3-4MB/s max, to 0 B/s today. FWIW, I have a Ubiquiti Dream Machine. I've tried disabling all, a few, and no security settings - nothing I do in my UDMP seems to help. Any help is appreciated! Happy Easter! unraid-diagnostics-20220417-1043.zip
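      If anyone wants to reproduce the check I mentioned, it's roughly this (the container name and the lookup service are just examples):
        # Ask a VPN-routed container what its public IP is, from the Unraid console.
        docker exec binhex-qbittorrentvpn curl -s ifconfig.io
        # If this returns the VPN provider's IP rather than your WAN IP, the tunnel is
        # up and the throughput problem is somewhere else.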
  11. Hi Binhex, thanks for your response. I connected locally and I'm actually getting an error: Error loading webview: Error: Could not register service workers: SecurityError: Failed to register a ServiceWorker for scope ('https://192.168.1.122:8500/static/out/vs/workbench/contrib/webview/browser/pre/') with script ('https://192.168.1.122:8500/static/out/vs/workbench/contrib/webview/browser/pre/service-worker.js?v=4&vscode-resource-base-authority=vscode-resource.vscode-webview.net'): An SSL certificate error occurred when fetching the script.. And to answer your original question: I'm having the same issue there when trying to log in to the account: it just shows "Loading..."
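      A quick way to confirm it really is the certificate being rejected (IP and port taken from the error above; -k only skips verification for comparison):
        # Check the certificate the code-server endpoint presents.
        curl -I https://192.168.1.122:8500/
        curl -kI https://192.168.1.122:8500/
        # If the first command fails with a certificate verification error while the
        # second succeeds, the browser is refusing the cert itself.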
  12. I'm having an issue trying to use the "Account" login inside the app: I am using cloudflared + NginxProxyManager for SSL. In the devtools console I am seeing the following warning: I'm also seeing this warning in the Docker logs: [12:17:36] Switching to using in-memory credential store instead because Keytar failed to load: Cannot find module 'keytar' Everything works great aside from this issue. I'm hoping to be able to log in to my MS/GitHub account to sync my settings from my local client. If anyone has any ideas on what steps I can take to get this working, it'd be greatly appreciated.
  13. It seems to have resolved itself after another reboot! To answer your question though, I only have 6 drives in this build. I'll report back if the issue returns. Thanks.
  14. I just built a new server. I'm able to press F8 during POST to get to the boot menu, select my USB, and boot up just fine. But if my server restarts for any reason, after POST it goes to a black screen and no further. All help is appreciated!
  15. I wanted to say thanks as well. I just built a new server with an Asus H610M-A D4 motherboard and found this response - it solved my issue as well.
  16. Fixed the VMs. Stopped the array, went into Settings > VM Manager (Advanced View) and changed the Libvirt storage location to /mnt/cache/system/libvirt/libvirt.img. Started the array and all the VMs reappeared. Now to fix Docker. Edit: fixed the Docker containers. Went to Apps > Previously Installed and reinstalled everything. Issue solved.
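      If anyone wants to sanity-check the path before changing the setting, a quick look from the Unraid console should confirm the image actually exists on the cache pool (same path as above):
        # Confirm the libvirt image is present at the cache location.
        ls -lh /mnt/cache/system/libvirt/libvirt.img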
  17. Here's the output from: find /mnt -name libvirt.img
      /mnt/user/system/libvirt/libvirt.img
      /mnt/user0/system/libvirt/libvirt.img
      /mnt/cache/system/libvirt/libvirt.img
      /mnt/disk1/system/libvirt/libvirt.img
      Here is the libvirt log:
      2021-09-18 00:45:43.325+0000: 10263: info : libvirt version: 6.5.0
      2021-09-18 00:45:43.325+0000: 10263: info : hostname: Tower
      2021-09-18 00:45:43.325+0000: 10263: warning : networkNetworkObjTaint:5292 : Network name='default' uuid=a67b7b9b-bef4-488b-b243-c9e5f391a3b1 is tainted: hook-script
  18. I have 3 x 8TB HDDs in an array, with one acting as parity. That drive was unplugged at some point while redoing some cable management. When the server came back up, it appeared as an unassigned device. I did some searching and found someone with the same issue who was instructed to go to Tools > New Config, build a new config, and then assign their drives. I did this. After starting the array, however, none of my Docker containers or VMs are showing up. I've attached my diagnostic logs. Help is greatly appreciated. tower-diagnostics-20210917-2048.zip
  19. Update: I got some advice from @fmp4m that my GPU may not have been stubbed properly. After a bit of searching, I came across this great thread: ** VIDEO GUIDE - How to Easily Dump a vBIOS from any GPU directly from the Server for passthrough - VM Engine (KVM) - Unraid. I followed this guide, but when I went to actually dump the vBIOS, my Unraid server would crash. I did a bit more research and also came across this video: After I followed the above video, I returned to the vBIOS script and it had no issues dumping the vBIOS. I booted up (after having to remove some of the missing USB devices from my VM's XML), got into Overwatch, and was seeing 250 FPS.
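      For the "missing USB devices" part, this is roughly how to see what is actually attached now so the hostdev entries in the XML can be reconciled (the IDs are whatever your hardware reports):
        # List attached USB devices with their bus/device numbers and vendor:product IDs,
        # then drop or update any <hostdev type='usb'> entries that no longer match.
        lsusb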
  20. Just an update: I opened the VM log and then entered a game from my VM. It crashed shortly after entering a match. This was the error log:
      qemu-system-x86_64: hw/usb/host-libusb.c:838: usb_host_ep_update: Assertion `udev->altsetting[i] < conf->interface[i].num_altsetting' failed.
      2021-09-16 18:53:40.232+0000: shutting down, reason=crashed
  21. I was having this issue on a previous VM which was working fine but, after a few days, the gaming experience became completely unplayable. So I decided to create an entirely new VM this morning and make sure everything was 100% configured and on point before testing it. I followed Space Invader's videos on setting up a VM with GPU passthrough enabled. I also followed his 3-video series on tuning the server for ultimate performance. All my Docker containers are updated. I have Windows 10 version 21H1 (OS Build 19043.1237). My GPU driver was installed (clean install) directly from Nvidia during setup. Like I said, I followed Space Invader's videos entirely, as I have previously, except now I cannot game at all without my VM shutting down as soon as I enter a match (Overwatch). I just replaced my previous R5 2600x with an R9 5900x, but the issue is the same. I thought maybe the issue was related to CPU core pinning and isolation, so I isolated 4 cores and 4 threads specifically for the VM and restarted Unraid and the VM. But the result is the same, with the VM stopping as soon as the game loads past the menu. Also, there is no warning: screens go completely black like the system shut off, but the tower is still on and Unraid is still accessible via my phone and browser. I booted back up and grabbed the syslogs; I've attached them here. Also, FWIW, I have an R9 5900x, 64GB DDR4 RAM @ 3200MHz, an Asus B450M-A motherboard, 3 x 8TB HDDs (one of which is no longer showing up despite me checking the cables, but hopefully unrelated - I still see shares but no parity drive; this has been ongoing since the CPU replacement, whereas the crashing issue existed even before that), an MSI Gaming X 1080Ti, and a Corsair 750W PSU. All help greatly appreciated. VM XML:
      <?xml version='1.0' encoding='UTF-8'?>
      <domain type='kvm' id='2'>
        <name>Gaming VM</name>
        <uuid>8dc5cdb3-d52a-dfda-b9df-0ee2b3c0d879</uuid>
        <metadata>
          <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
        </metadata>
        <memory unit='KiB'>12582912</memory>
        <currentMemory unit='KiB'>12582912</currentMemory>
        <memoryBacking>
          <nosharepages/>
        </memoryBacking>
        <vcpu placement='static'>8</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='1'/>
          <vcpupin vcpu='1' cpuset='13'/>
          <vcpupin vcpu='2' cpuset='2'/>
          <vcpupin vcpu='3' cpuset='14'/>
          <vcpupin vcpu='4' cpuset='3'/>
          <vcpupin vcpu='5' cpuset='15'/>
          <vcpupin vcpu='6' cpuset='4'/>
          <vcpupin vcpu='7' cpuset='16'/>
        </cputune>
        <resource>
          <partition>/machine</partition>
        </resource>
        <os>
          <type arch='x86_64' machine='pc-i440fx-5.1'>hvm</type>
          <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
          <nvram>/etc/libvirt/qemu/nvram/8dc5cdb3-d52a-dfda-b9df-0ee2b3c0d879_VARS-pure-efi.fd</nvram>
        </os>
        <features>
          <acpi/>
          <apic/>
          <hyperv>
            <relaxed state='on'/>
            <vapic state='on'/>
            <spinlocks state='on' retries='8191'/>
            <vendor_id state='on' value='none'/>
          </hyperv>
        </features>
        <cpu mode='host-passthrough' check='none' migratable='on'>
          <topology sockets='1' dies='1' cores='4' threads='2'/>
          <cache mode='passthrough'/>
          <feature policy='require' name='topoext'/>
        </cpu>
        <clock offset='localtime'>
          <timer name='hypervclock' present='yes'/>
          <timer name='hpet' present='no'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/local/sbin/qemu</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source file='/mnt/user/domains/Gaming VM/vdisk1.img' index='3'/>
            <backingStore/>
            <target dev='hdc' bus='virtio'/>
            <boot order='1'/>
            <alias name='virtio-disk2'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
          </disk>
          <disk type='file' device='cdrom'>
            <driver name='qemu' type='raw'/>
            <source file='/mnt/user/isos/Win10_21H1_English_x64.iso' index='2'/>
            <backingStore/>
            <target dev='hda' bus='ide'/>
            <readonly/>
            <boot order='2'/>
            <alias name='ide0-0-0'/>
            <address type='drive' controller='0' bus='0' target='0' unit='0'/>
          </disk>
          <disk type='file' device='cdrom'>
            <driver name='qemu' type='raw'/>
            <source file='/mnt/user/isos/virtio-win-0.1.190-1.iso' index='1'/>
            <backingStore/>
            <target dev='hdb' bus='ide'/>
            <readonly/>
            <alias name='ide0-0-1'/>
            <address type='drive' controller='0' bus='0' target='0' unit='1'/>
          </disk>
          <controller type='usb' index='0' model='qemu-xhci' ports='15'>
            <alias name='usb'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
          </controller>
          <controller type='ide' index='0'>
            <alias name='ide'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
          </controller>
          <controller type='virtio-serial' index='0'>
            <alias name='virtio-serial0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
          </controller>
          <controller type='pci' index='0' model='pci-root'>
            <alias name='pci.0'/>
          </controller>
          <interface type='bridge'>
            <mac address='52:54:00:51:77:93'/>
            <source bridge='br0'/>
            <target dev='vnet0'/>
            <model type='virtio-net'/>
            <alias name='net0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
          </interface>
          <serial type='pty'>
            <source path='/dev/pts/0'/>
            <target type='isa-serial' port='0'>
              <model name='isa-serial'/>
            </target>
            <alias name='serial0'/>
          </serial>
          <console type='pty' tty='/dev/pts/0'>
            <source path='/dev/pts/0'/>
            <target type='serial' port='0'/>
            <alias name='serial0'/>
          </console>
          <channel type='unix'>
            <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-2-Gaming VM/org.qemu.guest_agent.0'/>
            <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
            <alias name='channel0'/>
            <address type='virtio-serial' controller='0' bus='0' port='1'/>
          </channel>
          <input type='tablet' bus='usb'>
            <alias name='input0'/>
            <address type='usb' bus='0' port='1'/>
          </input>
          <input type='mouse' bus='ps2'>
            <alias name='input1'/>
          </input>
          <input type='keyboard' bus='ps2'>
            <alias name='input2'/>
          </input>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
            </source>
            <alias name='hostdev0'/>
            <rom file='/mnt/user/system/roms/MSI.GTX1080Ti.vbios1.rom'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/>
            </source>
            <alias name='hostdev1'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x0a' slot='0x00' function='0x4'/>
            </source>
            <alias name='hostdev2'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='usb' managed='no'>
            <source>
              <vendor id='0x046d'/>
              <product id='0xc539'/>
              <address bus='1' device='2'/>
            </source>
            <alias name='hostdev3'/>
            <address type='usb' bus='0' port='2'/>
          </hostdev>
          <hostdev mode='subsystem' type='usb' managed='no'>
            <source>
              <vendor id='0x0951'/>
              <product id='0x16b7'/>
              <address bus='3' device='5'/>
            </source>
            <alias name='hostdev4'/>
            <address type='usb' bus='0' port='3'/>
          </hostdev>
          <hostdev mode='subsystem' type='usb' managed='no'>
            <source>
              <vendor id='0x0a12'/>
              <product id='0x0001'/>
              <address bus='1' device='3'/>
            </source>
            <alias name='hostdev5'/>
            <address type='usb' bus='0' port='4'/>
          </hostdev>
          <hostdev mode='subsystem' type='usb' managed='no'>
            <source>
              <vendor id='0x1038'/>
              <product id='0x12b3'/>
              <address bus='3' device='3'/>
            </source>
            <alias name='hostdev6'/>
            <address type='usb' bus='0' port='5'/>
          </hostdev>
          <hostdev mode='subsystem' type='usb' managed='no'>
            <source>
              <vendor id='0x1b1c'/>
              <product id='0x0c13'/>
              <address bus='1' device='4'/>
            </source>
            <alias name='hostdev7'/>
            <address type='usb' bus='0' port='6'/>
          </hostdev>
          <hostdev mode='subsystem' type='usb' managed='no'>
            <source>
              <vendor id='0x1b3f'/>
              <product id='0x2008'/>
              <address bus='3' device='6'/>
            </source>
            <alias name='hostdev8'/>
            <address type='usb' bus='0' port='7'/>
          </hostdev>
          <hostdev mode='subsystem' type='usb' managed='no'>
            <source>
              <vendor id='0x26e0'/>
              <product id='0x3c13'/>
              <address bus='3' device='4'/>
            </source>
            <alias name='hostdev9'/>
            <address type='usb' bus='0' port='8'/>
          </hostdev>
          <memballoon model='none'/>
        </devices>
        <seclabel type='dynamic' model='dac' relabel='yes'>
          <label>+0:+100</label>
          <imagelabel>+0:+100</imagelabel>
        </seclabel>
      </domain>
      syslog
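      On the pinning side, a rough way to double-check which host threads are siblings of which physical cores, so each vcpupin pair lands on one core (output obviously depends on the CPU):
        # Two logical CPUs sharing a CORE value are hyperthread siblings and make a
        # sensible vcpupin pair.
        lscpu -e=CPU,CORE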
  22. Sorry to dig up an old thread, but in case this helps anyone else: I was also having this issue. When I try to save from the Settings > CPU Pinning page, after clicking Apply, getting sent back to Settings, and clicking CPU Pinning again, all of my settings appear reset to whatever they were previously. However, I checked the individual Docker containers with advanced view enabled, and sure enough, the isolated CPU pinning was being applied.
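      For anyone who wants to double-check from the command line rather than the template view, something like this shows the pinning that is actually applied (the container name is a placeholder):
        # Print the CPU set a container is restricted to; empty output means no pinning.
        docker inspect -f '{{.HostConfig.CpusetCpus}}' binhex-sonarr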
  23. I wanted to post an update since the CPU arrived a day early. I removed the R5 2600x, gave the case a nice clean, installed the R9 5900x (after watching a few videos on the best way to apply paste to the R9), got the cooler's heatsink reattached, and booted it up. As soon as it powered on it detected a new CPU and made me press Y to proceed. I was then able to get into the BIOS. To my astonishment, it was running at 70C in the BIOS, which had me pretty concerned. I restarted the system and that took it back down to the upper-40C range. Fast forward: I got my Win 10 gaming VM booted up, installed Nerdpack for Perl and Dynamix System Temp, and it's been sitting at about 50C after 30 minutes of moderate usage on the VM. I assumed Unraid would default to removing any CPU pinning, but instead it kind of did its own thing: I attempted to spin up Overwatch and watched my FPS get up above 300, then immediately drop back down to 50-60, like I had been seeing after a day or two of using a new gaming VM. But then it crashed immediately. This has been an issue a few times on this VM; I'm planning to recreate the VM tomorrow and really focus on fine-tuning everything. I had the same issue with my R5 2600x as far as gaming becoming unplayable a day or two after setting up the gaming VM (after it initially worked flawlessly). That's a whole other issue, and if I'm still having problems with it, I expect I'll open a thread in the appropriate section. Now, back to the reason I wanted to write this post: the new CPU. All I can say is, wow. It's a night and day difference. I expected an improvement, but this is much more of an improvement than I expected. The motherboard seems to be doing just fine. The CPU voltage was pretty high (around 1.41V) when I first installed it (which could account for those high temps I saw), but it was way lower when I checked earlier (1.100V). Oh, one last thing I wanted to mention: trying to get my 64GB of DDR4 RAM to run at 3200MHz using XMP or manually setting the speed in the BIOS was a huge pain. At best I could get it to run stable for a while, but 5-25 minutes into a game it would crash, reboot, and reset the memory speed in order to start successfully. This hasn't been an issue since I installed the new CPU. I wasn't able to enable D.O.C.P., but I was able to manually set the RAM to run at 3200MHz. Anyhow, I appreciate everyone's help!
  24. Hey all, I'm currently doing some system upgrades. I have a Ryzen 5 2600x, a 1080Ti, a Corsair 750W PSU, and an ASUS B450M-A CSM motherboard. I just purchased 64GB (4 x 16GB) of DDR4 RGB Corsair Vengeance LPX 3200MHz RAM, which I've installed. I have a Ryzen 9 5900x that's supposed to be here on Thursday/Friday. I currently have a Corsair H115i Pro CPU cooler, 3 x 8TB 7200RPM Seagate HDDs, a 500GB Samsung Evo 840 SSD cache drive, and a 2TB Crucial SATA SSD for Docker and VMs. My concern is with my motherboard. It was a budget board when I originally built this system in 2018. It says it supports the Ryzen 9 5900x, but I wanted to get feedback/opinions here on whether it's going to limit performance in any way. Thanks!