GrehgyHils

Members · Posts: 26

Everything posted by GrehgyHils

  1. @shawnngtq I ended up getting my UI to pop up by simply pressing esc...
  2. Okay interesting. May I ask, what Linux distro are you trying to install? Also, did you get to the initial boot when you installed the OS or was it a black screen from the beginning? Agreed, the docker ui is lovely.
  3. @shawnngtq negative, I haven't made any progress on this. Are you experiencing the same thing?
  4. Hey everyone, I'm using Unraid version 6.9.2 and I'm attempting to create a Linux VM, specifically Pop OS 21.10, without Nvidia drivers. When I first created the VM, it booted to the installation screen and I walked through the wizard just fine. Ever since the VM rebooted, though, I've only been met with a black screen... Here's the XML of the VM:
```
<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>pop-os</name>
  <uuid>d4e2a03b-c580-f7e2-50ce-629a924bbab4</uuid>
  <description>21.10</description>
  <metadata>
    <vmtemplate xmlns="unraid" name="Linux" icon="linux.png" os="linux"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='6'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/d4e2a03b-c580-f7e2-50ce-629a924bbab4_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='1' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/pop-os/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/linux/pop-os_21.10_amd64_intel_3.iso'/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:f6:79:45'/>
      <source bridge='virbr0'/>
      <model type='virtio-net'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
</domain>
```
I've tried connecting to the VNC session via the built-in Unraid web browser approach and also alternatives like TightVNC Viewer. Any help is appreciated! Thanks, Greg
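A quick way to sanity-check the VNC side of this (not from the original post; it assumes libvirt's `virsh` is available on the Unraid host and that the VM keeps the name `pop-os` from the XML above) might look like:
```
virsh list --all           # confirm the VM is actually running and not paused
virsh vncdisplay pop-os    # prints the VNC display, e.g. ":0" maps to port 5900
virsh domdisplay pop-os    # full display URI, e.g. vnc://0.0.0.0:0
ss -tlnp | grep ':59'      # verify QEMU is listening on the expected VNC port
```
If the display answers but only ever shows black, the guest itself likely booted without video output, which is a separate problem from VNC connectivity.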
  5. I've verified that making this modification dropped my docker.img usage from 92% down to 66%. I have not attempted to connect to the server and won't be able to until after work or tomorrow, but I can report back. Thank you again ich777 and JonathanM; I would not have figured this out without the two of you.
  6. Noted, I'll hold off on mapping anything. I'll attach the output of that command as requested. So, I may have some information that will help here. I did not experience any issues when I was running the Pavlov VR server with the default maps. This is just based on my observations, but the problem seemed to start when I began experimenting with community maps hosted on the Steam Workshop. I was talking to the community a bit, as I was running into an issue, and they informed me that `/tmp/workshop` is the directory community maps get downloaded to. I imagine this is what we're observing here. EDIT: Just noticed you replied while I was writing my initial reply. Looks like we're on the same page. I downloaded the maps two different ways: 1. Using rcon to trigger a map switch, specifically a command like `SwitchMap UGC[workshop map id] [game mode]` 2. Configuring `/serverdata/serverfiles/Pavlov/Saved/Config/LinuxServer/Game.ini` to contain workshop map IDs. I believe both 1 and 2 above will trigger the application to fetch the map data and store it in `/tmp/workshop/`. output.txt
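A hedged sketch of how one might confirm that the workshop cache is what is eating the space (the container name `PavlovVR` is taken from the `docker system df -v` output quoted in another reply here; the commands themselves are not from the original post and require the container to be running):
```
docker exec PavlovVR du -sh /tmp/workshop   # total size of the downloaded community maps
docker exec PavlovVR ls /tmp/workshop       # list what has been downloaded so far
```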
  7. Oh whoops, you're absolutely right. I totally overlooked that. So I should be able to stop the Pavlov container, mount some directory from the host OS into the Pavlov container, say a newly made share. Then when I start the Pavlov container, it should notice that its `/tmp` directory is empty and redownload everything, I believe? My only other outstanding question is: if the above works and "stops the bleeding", is there an easy way to reclaim the storage that was accidentally written to my `docker.img` file? Thanks for your help on this; you both have made exploring self-hosting a game server a really rewarding experience.
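For reference, a sketch of what "mount some directory from the host OS" could look like, using a hypothetical share path; in the Unraid UI this is the same as adding another Path mapping to the container template rather than running docker by hand:
```
mkdir -p /mnt/user/appdata/pavlovvr-tmp   # hypothetical host-side directory for the map cache
# map container path /tmp to host path /mnt/user/appdata/pavlovvr-tmp in the template
# (docker-run equivalent: -v /mnt/user/appdata/pavlovvr-tmp:/tmp)
docker inspect PavlovVR --format '{{json .Mounts}}'   # afterwards, confirm the bind mount exists
```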
  8. Sure thing, here's the output:
```
6.1M    /bin
0       /boot
0       /dev
1.9M    /etc
0       /home
13M     /lib
6.2M    /lib32
4.0K    /lib64
0       /media
0       /mnt
8.0K    /opt
0       /proc
32K     /root
4.0K    /run
4.2M    /sbin
2.5G    /serverdata
0       /srv
0       /sys
4.9G    /tmp
174M    /usr
26M     /var
```
So are you thinking that the data in the Pavlov VR container's `/tmp` somehow got written to my Unraid's `docker.img` file on the host OS?
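One way to double-check that suspicion (a sketch, not from the post): Docker can report each container's writable-layer size directly, and anything written outside a mapped volume lands in that layer, i.e. inside `docker.img` on Unraid:
```
docker ps -as --format 'table {{.Names}}\t{{.Size}}'   # SIZE here is the per-container writable layer
```
If the PavlovVR row roughly matches the 4.9G reported for `/tmp` above, that is the data filling `docker.img`.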
  9. Okay, understood. I don't believe I've modified anything in the template that would cause this issue, but I will attach a full screenshot for completeness. The only modifications I made were to the ports. Perhaps I've misconfigured something on my Unraid server itself and this doesn't have to do with the Pavlov template specifically?
  10. @ich777 One more follow-up question. I have been playing with the Pavlov VR server this afternoon and it has been an absolute blast. I've been downloading custom maps etc... I just noticed alerts from my Unraid server saying: > Warning: Docker high image disk utilization (at... After running `$ docker system df -v`, I've noticed that this container has been writing a lot of data to the `docker.img`:
```
CONTAINER ID   IMAGE                      COMMAND                  LOCAL VOLUMES   SIZE     CREATED        STATUS                        NAMES
0fae144b0f31   ich777/steamcmd:pavlovvr   "/opt/scripts/start.…"   0               5.33GB   10 hours ago   Exited (143) 10 minutes ago   PavlovVR
```
Are there any specific volumes we should be aware of and create when playing with this image? I.e., so that the large amount of data gets written, say, onto the cache drive or the array itself, rather than into the `docker.img`?
  11. Okay, that makes sense; I was able to get rcon running as described. Thanks, ich777. And that makes sense too; it didn't dawn on me that there's probably only one dedicated Linux server for the game and I could just safely assume you're using that. Appreciate your explanation and help!
  12. @ich777 That helps a ton; I should be able to easily follow this and recreate it, so a big thank you! Let me ask you this: how would one figure this out on their own, without resorting to asking on this thread? Is there some documentation that I may have missed? I ask because I've been resorting to exploring the container itself to try to see which ports are expected, what is running, etc. My whole exploration is based quite a bit on luck and guesswork, ha. Thanks again, Greg
  13. Is there any support for Rcon for Pavlov VR? Perhaps I'm unsure where to look for documentation for a specific game
  14. No official word from the Lime Tech folks on whether this is going to be officially fixed?
  15. Is git lfs still offered by this pack? I have it installed but seemingly do not have access to this tool
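A quick check (not from the post) for whether the git-lfs binary is actually on the PATH after installing the package:
```
which git-lfs      # should print the install location if the pack still provides it
git lfs version    # fails with "git: 'lfs' is not a git command" when it is missing
```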
  16. This is amazing work, mgutt. This issue has plagued me for a long time and has already destroyed two of my nice SSDs. Hoping to see this officially fixed in an Unraid update... Anyone have any idea if they'll officially reply?
  17. That's unfortunate to hear. Can you share your results of going back to BTRFS when you have them in a few days? Also, what's the thought process behind going to XFS? Additionally, how many cache drives did you have when you were using BTRFS?
  18. Hey everyone, I wanted to report that I believe I'm seeing this bug on a 6.9.2 Unraid box. I had a cache pool of two 480 GB SSDs in RAID 1 that stopped working, which I believe was due to excessive writes. I replaced the hardware just this morning and put only the `appdata`, `domains` and `system` shares on the cache using the setting `prefer`. Being concerned about the number of writes, I checked these, and with the server online for ~26 minutes the cache has already experienced 110,519 writes (~55,000 per disk). Installing `iotop` with Nerdpack allowed me to run `iotop -ao`, which showed that `[loop2]` is responsible for the majority of the writes:
```
Linux 5.10.28-Unraid.
root@tower:~# tmux new -s cache
Total DISK READ :       0.00 B/s | Total DISK WRITE :       0.00 B/s
Actual DISK READ:       0.00 B/s | Actual DISK WRITE:       0.00 B/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN      IO>    COMMAND
13149  be/0  root     564.00 K   135.61 M    0.00 %  0.44 %  [loop2]
```
I've read that some people have run their cache drives unencrypted and experienced fewer writes. That's not something I'd like to do... I searched online for advice on how to fix this and found a few threads which pointed me to this bug report. Any advice on how to resolve this? Thanks, Greg
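For anyone trying to quantify this themselves, a hedged sketch of the checks I'd lean on (only `iotop -ao` is from the post; the rest are assumptions about what's available on the host, and `loop2` is commonly the loop device backing `docker.img` on Unraid):
```
iotop -ao                                    # accumulated per-process I/O since iotop started
cat /sys/block/loop2/stat                    # 7th field = sectors written to the loop device since boot
smartctl -A /dev/sdX | grep -iE 'lba|nand'   # SSD-reported lifetime writes; replace sdX with a cache member
```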
  19. Okay, so it looks like the scrub process finished successfully:
```
UUID:            some-uuid
Scrub started:   Mon Jan 25 08:41:42 2021
Status:          finished
Duration:        1:08:11
Total to scrub:  318.25GiB
Rate:            79.66MiB/s
Error summary:   verify=18593 csum=1805142
  Corrected:     1823735
  Uncorrectable: 0
  Unverified:    0
```
Everything is back to working as expected! So a big thank you to JorgeB and Trurl. I'm going to document what happened and what I did so that the next person can hopefully have less panic than I experienced.
What happened:
- One of my two cache drives, which are in an array together, disconnected at some point.
- I reconnected the cache drive. This caused problems when reading or writing to sections that had been updated on the first disk.
- Unrelated, but my `docker.img` disk use was climbing and I ignored it, until it hit 100% and all my Docker containers stopped, as did the Docker daemon.
What I did to resolve the problem:
- Stopped the Docker service.
- Ran a cache scrub by selecting the first disk in the array (selecting the "repair corruptions..." option).
- Verified the corruptions were fixed by running `$ btrfs dev stats /mnt/cache`.
- Backed up my `docker.img` file just in case (this might not be needed).
- Deleted the original `docker.img` file.
- Moved the `docker.img` location to `/mnt/cache/docker.img`, as opposed to the original location of `/mnt/user/system/docker/docker.img`.
- Lowered the `docker.img` file size from 60 GB to 40 GB, as this was an experiment I was performing to try to fix the issue.
- Turned on Docker, which created a new file.
- Went to the Apps tab and used "Previous Apps", which allowed me to batch install all my old Docker containers with their original templates already selected.
What I do not have figured out or resolved yet:
- Figure out which container was the original culprit in filling my `docker.img`. Lots of forum posts, and replies above, suggest I have a misconfigured container that is writing incorrectly to the `docker.img`. If anyone has any tips on how to debug this, it'd be appreciated!
Thanks again everyone
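For the next person: the GUI scrub steps above roughly correspond to these CLI commands (a sketch, assuming the pool is mounted at `/mnt/cache` as in this thread):
```
btrfs scrub start /mnt/cache    # read-write scrub; repairs from the good RAID1 copy where possible
btrfs scrub status /mnt/cache   # poll until Status: finished, then read the error summary
btrfs dev stats -z /mnt/cache   # print the per-device error counters and reset them to zero
```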
  20. Ah! That was absolutely my problem here. Okay! I began a scrub with "repair corrupted blocks". Since I have two 500 GB SSDs, I imagine my slow CPU might take a while. I'll let this run and then rerun the above command to ensure no more errors occur. From there I'll learn what "recreate the docker image" means in this context and give that a go. Thanks for your help so far!
  21. I apologize, but I'm still not seeing this. If I navigate to the cache drive (sdc1)'s page, I see sections like: Cache 2 Settings, SMART Settings, Self-Test, Attributes, Capabilities, Identity. I see the SMART tests I could run, but I do not see, nor did Ctrl+F find, anything named scrub. Am I misunderstanding something?
  22. Apologies, I just reread what I wrote and realized it wasn't clear. I'm trying to express that I don't actually follow which command one runs to perform the scrub. I ran `btrfs dev stats -z /mnt/cache` and the output now shows no errors. If the `btrfs dev` command was not the correct way to perform a scrub, can you help me understand that? I've googled this with respect to Unraid and have not been able to piece it together. Thank you for your patience.
  23. Ah okay, that makes sense; I remember one of my two cache disks disconnecting. I did not realize that would cause an issue. I ran the `$ btrfs dev stats /mnt/cache` command and got the following output:
```
[/dev/sdb1].write_io_errs    0
[/dev/sdb1].read_io_errs     0
[/dev/sdb1].flush_io_errs    0
[/dev/sdb1].corruption_errs  0
[/dev/sdb1].generation_errs  0
[/dev/sdc1].write_io_errs    1507246927
[/dev/sdc1].read_io_errs     137577961
[/dev/sdc1].flush_io_errs    19733411
[/dev/sdc1].corruption_errs  0
[/dev/sdc1].generation_errs  0
```
This is what you showed me from the diagnostics output above. So I'm a bit confused by your link above, and I'm trying to be extra careful not to cause any data loss, as I have 50+ containers configured. I've re-seated the cable to the cache drive that disconnected and believe that is resolved. I've also reset the btrfs dev stats. I'm now at the point where I want to: ... as I want to be able to bring the Docker containers back online. Any advice?
  24. Hey all, I noticed my Docker disk space was at 100% and all my containers were stopped. I've read quite a few threads that point to a container potentially being set up incorrectly, where the data it downloads goes to the wrong folder. I have been unable to figure out what is responsible. I upped the "Docker vDisk size" from 40 GB to 50 GB and still the Docker service reports "Docker Service failed to start." Any advice is appreciated! Attached are my diagnostics, as I've seen many people ask for this data. tower-diagnostics-20210124-1449.zip
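A hedged sketch of how one might hunt for the space hog inside `docker.img` (not from the post; the log path assumes Docker's default json-file log driver and Unraid's loop mount at `/var/lib/docker`):
```
docker system df -v                                                            # per-container writable-layer sizes
du -sh /var/lib/docker/containers/*/*-json.log 2>/dev/null | sort -h | tail    # oversized container logs
```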
  25. @itimpi Given my two options are virbr0 and br0, which is appropriate if I want my VM to accept SSH connections from anywhere on my network? I want my router to assign the VM an IP address. When I select br0, my VM does not get internet access. When I select virbr0, my VM has internet access, and I can VNC into the VM from Unraid or a random machine on my network. I can SSH into the VM from Unraid but NOT from any machine on my network. My goal is to SSH into the VM from any machine on my network. Any advice?
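If it helps anyone hitting the same wall, a sketch of the guest-side checks for the br0 case (run inside the VM, not from the post, and assuming an Ubuntu-based guest like the Pop!_OS VM discussed elsewhere in this archive):
```
ip -4 addr show              # did the router's DHCP actually hand the VM an address?
ip route show default        # the gateway should be the router, not libvirt's 192.168.122.1
sudo ss -tlnp | grep ':22'   # is sshd listening at all?
systemctl status ssh         # on Ubuntu-based guests the OpenSSH service is named "ssh"
```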