Oestle

Members · 4 posts


  1. Hello everyone, I'm somewhat new to Unraid and have run into a couple of issues over the past months. Most of them I could resolve, but one really annoying one I still can't figure out; maybe you can help.

From the beginning I've had the problem that my cache pool (2x 1TB Samsung 860 EVO) kept dropping into read-only mode whenever a lot of files were written to it, for example when Minecraft Dynmap was rendering the map while the mover was running; sometimes the mover alone was enough. At the same time, the nginx reverse proxy container on my Ubuntu VM would crash every now and then, complaining about stale NFS file handles (back then I mounted an Unraid share into the Ubuntu VM via NFS, because I didn't know there was a "better" way). Most of the other containers seemed to run fine (Nextcloud + DB, nginx letsencrypt companion, Portainer, TeamSpeak 3, Minecraft game server + DB + website).

After googling and searching these forums and reddit, I first checked and replaced the SATA cables to both SSDs, which didn't help. I then decided to remove the cache pool and instead run a single xfs cache SSD, keeping the second SSD unassigned as a target for regular backups. My steps were: disable Docker and VMs, remove one SSD from the btrfs pool, format the remaining one as an xfs cache, mount the removed one with the degraded option, copy the files back over to the new cache, and turn everything back on.

I was hoping this would resolve all my problems, but it didn't. The cache is running fine now, no more dropping into read-only mode, but the stale NFS file handle errors kept coming and crashing my proxy container. After more looking around I figured out that it is in fact possible to pass a share through to a VM directly, and after a bit of tinkering with the VM settings in XML view, and fixing the virtual network card address that changed after adding the share, I was able to mount the share directly in the VM. Proud and hopeful, I started my containers back up, only to realize that every database container, TeamSpeak and Portainer kept crashing and restarting.

This is where I need you. I've tried many different things in the fstab, deleted the file MariaDB is complaining about, and searched through these forums a lot, but I just can't get rid of this problem. I'll insert my VM settings, the fstab on the VM and the docker logs below; maybe someone can figure it out, I'd really appreciate it.
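A minimal sketch of that pool-to-single-disk migration, assuming the leftover pool member shows up as /dev/sdX1 and the new cache is mounted at /mnt/cache (the device name is hypothetical):

    # Docker and VMs stopped; the btrfs pool member that was removed
    # can still be mounted on its own with the degraded option:
    mkdir -p /mnt/old_cache
    mount -o degraded /dev/sdX1 /mnt/old_cache
    # copy everything back onto the freshly formatted xfs cache
    rsync -avh /mnt/old_cache/ /mnt/cache/
    umount /mnt/old_cache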
VM:

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='1'>
  <name>Ubuntu</name>
  <uuid>02a0ca3a-f80b-e650-3051-6c325e7b6493</uuid>
  <description>for docker deployments via ansible</description>
  <metadata>
    <vmtemplate xmlns="unraid" name="Ubuntu" icon="ubuntu.png" os="ubuntu"/>
  </metadata>
  <memory unit='KiB'>24641536</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>12</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='20'/>
    <vcpupin vcpu='2' cpuset='1'/>
    <vcpupin vcpu='3' cpuset='21'/>
    <vcpupin vcpu='4' cpuset='2'/>
    <vcpupin vcpu='5' cpuset='22'/>
    <vcpupin vcpu='6' cpuset='10'/>
    <vcpupin vcpu='7' cpuset='30'/>
    <vcpupin vcpu='8' cpuset='11'/>
    <vcpupin vcpu='9' cpuset='31'/>
    <vcpupin vcpu='10' cpuset='12'/>
    <vcpupin vcpu='11' cpuset='32'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-3.0'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/02a0ca3a-f80b-e650-3051-6c325e7b6493_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='6' threads='2'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Ubuntu/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/ubuntu-18.04.1-live-server-amd64.iso'/>
      <backingStore/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <alias name='sata0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <alias name='pci.5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/mnt/user/srv/'/>
      <target dir='srvshare'/>
      <alias name='fs0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </filesystem>
    <interface type='bridge'>
      <mac address='52:54:00:32:39:20'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-1-Ubuntu/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <graphics type='vnc' port='5900' autoport='yes' websocket='5700' listen='0.0.0.0' keymap='de'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>

fstab:

srvshare /srv 9p trans=virtio,version=9p2000.L,msize=262144,rw,_netdev 0 0

docker logs, teamspeak:

2018-11-15 17:13:45.927896|INFO |ServerLibPriv | |TeamSpeak 3 Server 3.5.0 (2018-10-26 06:48:07)
2018-11-15 17:13:45.935065|INFO |ServerLibPriv | |SystemInformation: Linux 4.15.0-38-generic #41-Ubuntu SMP Wed Oct 10 10:59:38 UTC 2018 x86_64 Binary: 64bit
2018-11-15 17:13:45.935734|INFO |ServerLibPriv | |Using hardware aes
2018-11-15 17:13:45.952384|INFO |DatabaseQuery | |dbPlugin name: SQLite3 plugin, Version 3, (c)TeamSpeak Systems GmbH
2018-11-15 17:13:45.952846|INFO |DatabaseQuery | |dbPlugin version: 3.11.1
2018-11-15 17:13:45.966609|INFO |DatabaseQuery | |checking database integrity (may take a while)
2018-11-15 17:13:45.983846|ERROR |DatabaseQuery | |db_exec failed disk I/O error
2018-11-15 17:13:45.984133|ERROR |DatabaseQuery | |integrity_check failed disk I/O error
2018-11-15 17:13:45.984595|CRITICAL|ServerLibPriv | |Server() DatabaseError disk I/O error

docker logs, nextcloud mariadb:

2018-11-15 17:14:23 0 [Note] mysqld (mysqld 10.3.10-MariaDB-1:10.3.10+maria~bionic) starting as process 1 ...
2018-11-15 17:14:23 0 [Note] InnoDB: Using Linux native AIO
2018-11-15 17:14:23 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2018-11-15 17:14:23 0 [Note] InnoDB: Uses event mutexes
2018-11-15 17:14:23 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2018-11-15 17:14:23 0 [Note] InnoDB: Number of pools: 1
2018-11-15 17:14:23 0 [Note] InnoDB: Using SSE2 crc32 instructions
2018-11-15 17:14:23 0 [Note] InnoDB: Initializing buffer pool, total size = 256M, instances = 1, chunk size = 128M
2018-11-15 17:14:23 0 [Note] InnoDB: Completed initialization of buffer pool
2018-11-15 17:14:23 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2018-11-15 17:14:23 0 [Note] InnoDB: 128 out of 128 rollback segments are active.
2018-11-15 17:14:23 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2018-11-15 17:14:23 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2018-11-15 17:14:24 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2018-11-15 17:14:24 0 [Note] InnoDB: 10.3.10 started; log sequence number 26453856; transaction id 23393
2018-11-15 17:14:24 0 [Note] Plugin 'FEEDBACK' is disabled.
2018-11-15 17:14:24 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2018-11-15 17:14:24 0 [Note] Recovering after a crash using tc.log
2018-11-15 17:14:24 0 [ERROR] Can't init tc log
2018-11-15 17:14:24 0 [ERROR] Aborting

Thank you
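One pattern that fits these logs: SQLite (TeamSpeak) and InnoDB both rely on file locking, mmap and, in InnoDB's case, native AIO, and 9p passthrough shares often don't support those operations cleanly, which would surface exactly as "disk I/O error" and "Can't init tc log". A sketch of one commonly suggested workaround, under the assumption that 9p is the culprit: keep each database's data directory on the VM's local vdisk and let only the bulk data live on the 9p-mounted /srv (container name and host path are hypothetical):

    # datadir on the local vdisk instead of the 9p mount
    docker run -d --name nextcloud-db \
        -e MYSQL_ROOT_PASSWORD=change_me \
        -v /opt/appdata/mariadb:/var/lib/mysql \
        mariadb:10.3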
  2. Hey guys, yesterday I experimented with Pi-hole, but because port 53 on my Unraid machine was already in use for some reason, I decided to try the feature that gives the container its own IP. Everything works great and as expected when other devices use it for DNS or when connecting to the web UI. Then I thought: cool, why not use it for OpenVPN clients as well? So I set up the OpenVPN plugin, which also works great, but when trying to reach Pi-hole through the VPN... nothing. Timeout. A whole lot of searching later, I haven't found much. Apparently that's the expected behaviour: the host, and therefore also VPN clients connected through the host, can't reach containers that have their own IP. So now my question: has anyone found a workaround to still get this to work? I could probably switch to OpenVPN in Docker on a VM, and as I understand it that would probably work, but I kind of like having the UI. Or I could put Pi-hole in its own little VM; I just thought it might run "better" directly on Unraid. Would it work to set up a proxy on another machine/VM for accessing the web UI? Is there a way to "proxy" DNS through a VM? Maybe someone has already run into the same problem. I'd appreciate any hints for workarounds.
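One workaround that comes up for exactly this macvlan isolation: create a macvlan "shim" interface on the host and route the container's address through it, so the host (and VPN clients NATed through it) can reach the container. A rough sketch, assuming Pi-hole sits at 192.168.1.200 on br0 and 192.168.1.254 is a free LAN address (all addresses hypothetical; this does not persist across reboots, so it would have to go into the go file or a user script):

    # shim interface on the same parent bridge as the container network
    ip link add shim-br0 link br0 type macvlan mode bridge
    ip addr add 192.168.1.254/32 dev shim-br0
    ip link set shim-br0 up
    # route only the Pi-hole address through the shim
    ip route add 192.168.1.200/32 dev shim-br0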
  3. Awesome plugin and easy to use, but is it possible to push custom DNS servers? I'd love to use my Pi-hole as the DNS server for remotely connected clients. I see the option in openvpnserver.ovpn, but will it persist when I change it manually, or would I have to change it over and over again? Thanks if someone knows something.
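For reference, pushing a DNS server to clients is a stock OpenVPN server directive; the open question is only whether a manual edit of openvpnserver.ovpn survives the plugin regenerating its config (the IP is hypothetical):

    push "dhcp-option DNS 192.168.1.200"
    # optionally send all client traffic through the tunnel so DNS queries can't bypass it
    push "redirect-gateway def1"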
  4. Hey guys, I'd love to see the following features in Unraid's Docker GUI:
     • support for custom docker registries (for example a private GitLab)
     • support for custom docker parameters (for example --volumes-from; see the sketch below), so that not every new docker parameter has to be added to the GUI individually
     • support for Docker Compose files
     • the possibility to manage containers that were started via the CLI
     Alternatively, you could somehow integrate https://portainer.io/ into Unraid, which would add most of the above features and many more.
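As an illustration of the --volumes-from parameter requested above, which already works from the Unraid command line today, just without GUI support (container names hypothetical):

    # start a container that mounts all volumes of an existing container
    docker run -d --name web --volumes-from appdata nginx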