casperse

Members
  • Posts: 791
  • Joined
  • Last visited

Converted

  • Gender: Male


casperse's Achievements

Collaborator (7/14)

Reputation: 35


Community Answers

  1. So my troubleshooting has narrowed down the problem. The BIOS is always set to use the internal iGPU (correct), and I get all the startup output on my monitor during boot, but when the GUI should start I just get a prompt in the upper-left corner. Removing my Nvidia card and placing my HBA controller in the first PCIe slot WORKED (with all other cards removed), and after some time I finally got the GUI on my monitor. JUHU! I then tried placing my Nvidia GPU (GeForce RTX 3060) in the third PCIe slot (x8), and after boot I am back at the prompt with no UI. Something breaks during boot (the GPU has no output to my monitor). Any suggestion on what I should try next? UPDATE: I found a new BIOS from 2024 (my board is from 2019), updated the BIOS, and set everything up from scratch. Same thing: cursor in the top-left corner of the monitor, no UI.
  2. SOLVED: I am an IDIOT... Checking the network settings on the Ubuntu VM with: sudo nano /etc/netplan/00-installer-config.yaml. Somehow the gateway was wrong! I also installed the guest service:
     Step 1: Log in using SSH. You must be logged in via SSH as a sudo or root user (please read this article for instructions if you don't know how to connect).
     Step 2: Install the QEMU guest agent: apt update && apt -y install qemu-guest-agent
     Step 3: Enable and start the agent: systemctl enable qemu-guest-agent && systemctl start qemu-guest-agent
     Step 4: Verify that the QEMU guest agent is running: systemctl status qemu-guest-agent
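For reference, the netplan file I edited has roughly this shape. The interface name and all addresses below are placeholder examples, not my real values; check your interface with `ip link` and use your own LAN's gateway:

```yaml
# /etc/netplan/00-installer-config.yaml  (example values only)
network:
  version: 2
  ethernets:
    ens3:                          # interface name is an assumption
      addresses: [192.168.1.50/24]
      gateway4: 192.168.1.1        # this is the line that was wrong in my case
      nameservers:
        addresses: [192.168.1.1]
```

After fixing it, apply the change with `sudo netplan apply`.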
  3. Hi all. All my VMs work except the AMP VM for my gaming server. I can see it doesn't get any IP (this configuration is the same as my Windows VM, which worked perfectly after moving). Configuration XML:

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>AMP_Game_server</name>
  <uuid>106257ad-bf64-1305-df79-880b565808af</uuid>
  <description>Ubuntu server 20.04LTS</description>
  <metadata>
    <vmtemplate xmlns="unraid" name="Ubuntu" icon="/mnt/user/domains/AMP/amp.png" os="ubuntu"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>10</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='12'/>
    <vcpupin vcpu='1' cpuset='13'/>
    <vcpupin vcpu='2' cpuset='14'/>
    <vcpupin vcpu='3' cpuset='15'/>
    <vcpupin vcpu='4' cpuset='16'/>
    <vcpupin vcpu='5' cpuset='17'/>
    <vcpupin vcpu='6' cpuset='18'/>
    <vcpupin vcpu='7' cpuset='19'/>
    <vcpupin vcpu='8' cpuset='20'/>
    <vcpupin vcpu='9' cpuset='21'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/106257ad-bf64-1305-df79-880b565808af_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='5' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/AMP/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:84:10:41'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='da'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <audio id='1' type='none'/>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
</domain>

I can connect to the web server UI, but it has no internet connection. So strange.
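One way I could check whether the guest ever picked up an address is to match the VM's MAC from the XML against what the host bridge sees. A sketch, assuming the host has the usual libvirt tools and the br0 bridge from the config; the virsh/ip lines are commented because they need the live host:

```shell
# Pull the MAC address out of the domain XML (sample line from the config above)
xml_line="<mac address='52:54:00:84:10:41'/>"
mac=$(echo "$xml_line" | grep -o "[0-9a-f]\{2\}\(:[0-9a-f]\{2\}\)\{5\}")
echo "$mac"    # 52:54:00:84:10:41

# On the Unraid host (needs libvirt and the guest agent running in the VM):
#   virsh domifaddr AMP_Game_server --source agent   # ask the guest for its IPs
#   ip neigh | grep -i "$mac"                        # did br0 ever see this MAC?
```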
  4. This is actually strange: I cloned my Unraid flash for a second server (new license). On the new server it now works (with the settings above); I get the GUI output on the iGPU HDMI port on the motherboard. BUT the same cloned USB on my old server gives me a prompt with a blinking "_" in the top-left corner of the monitor. On that motherboard the iGPU output is DisplayPort, and I get the full boot on screen right up to the end. Both servers have the iGPU as the primary and only output!
  5. My problem is that it occurs every 10-12 hours, so with the number of Docker containers I have, this would be very hard to do. Update: So this could be caused by a single container with a memory limit that breaks it? Any way to identify the container from the error message?
  6. Please, can anyone help me? I have installed the swapfile plugin and set a memory limit of 1G on all my Docker containers (if all containers obey the limit, then I shouldn't see any more errors, right?). I have tried stopping all containers, and then only some of them, but I still get the memory error. Is there any way to find out what is causing this? Would syslog be able to tell me? This happened again today: the systems are still running, but the error results in Unraid killing random processes?
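One thing that might narrow it down: when the kernel OOM-kills inside a container's cgroup, the syslog line usually carries the cgroup path, which contains the full container ID. A sketch of pulling that ID out of a sample line; the exact log line shape is an assumption (it varies by kernel version), and the docker lookup is commented because it needs the live host:

```shell
# Hypothetical syslog line of the shape the kernel writes on a cgroup OOM kill
line="kernel: Memory cgroup out of memory: task_memcg=/docker/c9e4ebfe5a6b7c8d9e0f"
id=$(echo "$line" | grep -o '/docker/[0-9a-f]*' | head -1 | cut -d/ -f3)
echo "$id"    # c9e4ebfe5a6b7c8d9e0f

# Map the ID back to a container name (needs a running Docker daemon):
#   docker ps --no-trunc --format '{{.ID}} {{.Names}}' | grep "$id"
```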
  7. I followed the guide for the i7 (I believe), and the only difference is that the efficiency cores are all set to auto. Running Passmark, I can see that I get pretty much 50% of the score with these settings, which is fine (it's a beast!).
  8. After Plex removed the Sync feature and replaced it with the new Download feature, it's not so bad, and the RamScratch folder empties pretty quickly. The number and size of files should still be limited somehow, but my initial tests have been okay. I am not using this right now, though; I am focusing on eliminating the memory error, so it has no impact on the errors I currently get. I have now set a memory limit on all my Docker containers and hope to see a difference. No more logs since 16:00; what does this mean?
  9. Yes, from the above error I can see a Docker container ID starting with c9e4ebfe, and searching for it I get the culprit. But I can see that this container already has a limit of 1G: --memory=1G --no-healthcheck --log-opt max-size=50m. /dev/shm will always be set to 50% of the available memory, right? So any input on what to do next? Are there any other settings to limit memory for specific containers?
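On the /dev/shm question: as far as I can tell (worth verifying), the 50%-of-RAM default applies to the host's /dev/shm, while each Docker container gets its own /dev/shm of only 64 MB unless --shm-size is set. A sketch for double-checking what a container actually ended up with; the container name is a placeholder and the docker lines are commented because they need the live daemon:

```shell
# The --memory=1G flag should show up as this many bytes in docker inspect:
echo $((1 * 1024 * 1024 * 1024))    # 1073741824

# Verify the effective limits on a real container (name is hypothetical):
#   docker inspect -f '{{.HostConfig.Memory}} {{.HostConfig.ShmSize}}' my-container
# Raise the per-container /dev/shm if a container needs more than 64 MB:
#   --shm-size=256m
```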
  10. I can confirm this works! I just had to update the format of my old cache drive before starting the server, because I had converted it to ZFS after cloning the USB for the backup server. Worked great! Appdata, Domains, System, the containers and the VMs all started up without errors. I just wish I had reformatted all the older drives before adding them to the new array, but I just used the file manager to delete the old files. And I am now adding parity drives, so this is great! Everyone talks about how easily TrueNAS moves between servers, but Unraid is better: here I rebuilt my array and kept every app and setting on a "new" server with ease!
  11. Ok, so my troubleshooting continues 🙂 I have installed the swapfile plugin (the one you recommended above) successfully (the standard size setting is around 2G). I have moved the RamScratch settings into each of the Docker containers and also set memory limits:
      (PLEX) --no-healthcheck --log-opt max-size=50m --log-opt max-file=1 --restart unless-stopped --mount type=tmpfs,destination=/tmp/PlexRamScratch,tmpfs-size=68719476736 --memory=64G
      (EMBY) --no-healthcheck --log-opt max-size=50m --log-opt max-file=1 --restart unless-stopped --mount type=tmpfs,destination=/tmp/EmbyRamScratch,tmpfs-size=8589934592 --memory=8G
      (JELLYFIN) --no-healthcheck --log-opt max-size=50m --log-opt max-file=1 --restart unless-stopped --mount type=tmpfs,destination=/tmp/JellyRamScratch,tmpfs-size=8589934592 --memory=8G
      After doing this I get a Docker warning: "Your kernel does not support swap limit capabilities"? (Running the RamScratch as a script I never saw any warnings like below; did I do something wrong?) I can see that the memory limit is implemented on the Docker web page, but I still see this. And again today. If these are to be ignored, it would be nice not to have them in RED letters 🙂
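On the "kernel does not support swap limit capabilities" warning: my understanding (an assumption, worth verifying) is that it only means swap accounting (--memory-swap) cannot be enforced; the plain --memory limits above should still apply. The tmpfs-size byte values in those flags can be sanity-checked like this:

```shell
# tmpfs-size values from the flags above, written out as GiB
echo $((64 * 1024 * 1024 * 1024))   # 68719476736  (Plex, matches --memory=64G)
echo $((8 * 1024 * 1024 * 1024))    # 8589934592   (Emby/Jellyfin, matches --memory=8G)
```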
  12. Hi all. What I am trying to do: build a backup server and keep my old settings (shares, users, config) and my old cache drives with my Appdata/Domains/System/Dockers/Plugins, BUT build a new array with all new drives. SO FAR: I have successfully cloned my old Unraid USB, bought a new Pro license, and changed the IP and server name in the config. I can boot, and all the old drives are listed as missing. I would like to keep my old cache drives (Appdata/Domains/System/Dockers) and all my shared-folder settings, but build a completely new array with new drives (I already moved everything to my new server). Is this at all possible, and how would I go about it? Currently I can boot up, and it remembers all the old drives and sees all my new drives. The "New config" option under Settings looks like the right way to do this, but will I then lose all my old cache drives? Or can Unraid "see" the old formatted cache drives and the names of the original cache pools if I just plug them in? (NVMe drives.) Sorry if this is a stupid question, but I want to be sure before pressing the "New config" button 🙂
  13. I just discovered I have both the script and the advanced settings for RAMscratch: --no-healthcheck --log-opt max-size=50m --log-opt max-file=1 --restart unless-stopped --mount type=tmpfs,destination=/tmp,tmpfs-size=8589934592, plus the script above to create the RAMscratch 🙂 Any recommendation for using one over the other? (I can't remember if one solution cleans up RAM better than the other.) And did you want me to add "--memory=8G" for the Plex container? Any size recommendation for the swapfile? Sorry, I have read many of the (really old) posts, and I am curious whether this has any effect on my memory problems. (I also now have ZFS drives, and I can see they allocate more RAM as well.)
  14. Thanks JorgeB, I can see my Plex is one update behind. Will update right away! mgutt helped me a long time ago to set up a RamScratch folder for Plex at boot (a script), but I guess you are talking about the memory limit in the Docker advanced settings? I was told it would be best to remove them, but that was in 2022 🙂
      #!/bin/bash
      mkdir /tmp/PlexRamScratch
      chmod -R 777 /tmp/PlexRamScratch
      mount -t tmpfs -o size=40g tmpfs /tmp/PlexRamScratch
      (The 40g size is to accommodate the Download feature in Plex.) I did install the swapfile plugin and created the swapfile on a single U3 cache drive with btrfs; any recommendation on the size? I went with the default values (size: 20G). I still think it's strange that after upgrading from 64G to 128G of newer and faster RAM I have these low-on-RAM problems. Is it fragmentation, or is some kernel memory limit causing the OOM on the Docker host?
  15. The btrfs question was related to the plugin you suggested for the swapfile; it looks like it needs btrfs-formatted drives? I also got a new error I haven't seen before, resulting in the same memory error message. I have not installed any new containers (I just moved everything to the new server, now with a ZFS cache).