xanvincent

Everything posted by xanvincent

  1. I just wanted to say that this is the best feature set of a major release I've seen in a long time. You guys managed to pack in a lot of community-requested items, and we love to see it. Great work!
  2. Have you tried restarting the docker service (settings -> docker -> enable docker set to no)?
  3. I had the same issue. I changed my unraid's DNS server to 1.1.1.1 and 1.0.0.1 and it works fine now.
  4. Is the Adguard Home container working? Console doesn't work (just a black screen) and WebGUI doesn't load. I don't see anything created in the conf directory either.
  5. Can you get it to boot successfully with no USB peripherals attached? I'd imagine you'd need to use the Solaris x86 version and force the CPU to one of the supported Intel ones (like Westmere). EDIT: I recall from my sysadmin days that Solaris x86 was hot garbage, can you use something more modern like illumos or OpenIndiana?
  6. You need to add the following to /etc/security/limits.conf (in your Debian VM):

     * soft memlock <number>
     * hard memlock <number>

     <number> can also be "unlimited", which is the default Unraid setting. (The "*" is the limits.conf domain wildcard meaning "all users", not a bullet.)
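As a sketch of the edit above (assuming a Debian guest and root access; "unlimited" is illustrative, a KiB value also works):

```shell
# Append memlock limits for all users to limits.conf.
# Run as root inside the Debian VM.
cat >> /etc/security/limits.conf <<'EOF'
* soft memlock unlimited
* hard memlock unlimited
EOF

# pam_limits applies these at login, so log out and back in,
# then check the effective locked-memory limit:
ulimit -l
```

Note the limits take effect for new login sessions only; existing shells keep their old values.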
  7. 1) That's beta software: don't expect it to be stable, and don't use it on a system with data you can't afford to lose. 2) You should post this bug, with relevant diagnostics, to the bug report forum (prereleases).
  8. You will never see 3000 Mbps on today's wifi 6 (ax) devices. That's just marketing wank (they add the 5GHz and 2.4GHz bands together, but devices don't work that way). The fastest real-world wifi 6 speed I've seen was ~700 Mbps, and that was achieved 5 ft from the AP. Assuming your server doesn't sit next to your AP, there are other APs in your area (interference), and you have multiple other wifi devices on your network, that 700 number drops fast. Wireless connections are also unreliable: dropped packets are common, meaning data has to be resent, which slows your overall speeds further. I get wanting Unraid on wifi, I just wouldn't expect it to consistently and reliably approach gigabit ethernet speeds. By the time it does, 10 gig ethernet will be as cheap as 1 gig is today.
  9. The best security comes from the most abstraction. I'd spin up a full VM for any external forwarding rather than using Docker containers. Unraid has always been advertised as not internet-facing, so keep that in mind.
  10. The PGID and PUID environment variables map to an internal user (abc) within the container that runs and owns files for whatever app. Root still exists there (in the container), and that's the problem: root's ID in the container = root's ID outside the container. Implementing user namespaces (userns) is THE fix for this security concern, but it has to be configured on the Docker daemon rather than in a container template.
  11. My number one request as well. I would like Unraid to catch up to modern distros here, and maybe we could even start recommending it to SMBs as an alternative to FreeNAS and OMV. P.S. I can definitely help with implementing a few of these; I just would like a better solution than running a script at boot to do it.
  12. Based on what I know about LinuxServer.io's docker containers, this simply maps the container's internal user (abc?) to the specified UID and GID. AFAIK, this does not affect the root user in the container, and it would also only apply to LSIO containers. If someone knows different, feel free to correct me. I believe the only way to change the root user mapping (which is needed for interfacing with the host's resources) is with namespaces, i.e. --userns-remap USER (a dockerd flag).
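For reference, a minimal sketch of what enabling user-namespace remapping looks like on a stock Docker host (not Unraid-specific; paths and the example UID range are the Docker defaults, shown here as assumptions):

```shell
# 1) Daemon config fragment -- /etc/docker/daemon.json:
#
#      { "userns-remap": "default" }
#
#    "default" tells dockerd to create a "dockremap" user and use
#    the subordinate ID ranges registered for it.
#
# 2) Those ranges live in /etc/subuid and /etc/subgid, e.g.:
#
#      dockremap:100000:65536
#
# 3) Restart the daemon. From then on, UID 0 (root) inside a
#    container maps to host UID 100000 -- an unprivileged user --
#    so a container breakout no longer lands you as root on the host.
```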
  13. It is definitely possible to break out of containers. This was 'recently' exploited successfully per CVE-2019-5736: https://www.twistlock.com/labs-blog/breaking-docker-via-runc-explaining-cve-2019-5736/ That was a vulnerability in runC rather than Docker itself, and it was, of course, patched some time after the CVE was identified. The point of using namespaces here is damage control: if an exploit like the above is used against a container you're exposing to the world, you want to minimize the damage to the host system. With namespaces implemented, even if an attacker escaped as root, they'd be some non-usable user on the host system instead of root on Unraid. This is also a big reason not to use 777 permissions on the host OS: if your public container were compromised with such an exploit, the attacker would have access to all of the data in all of your shares.
  14. @Eadword: I have a thread open for Docker isolation with no replies, but you should add Linux namespaces / subuids for Docker to your list; moving away from running everything as root with 777 permissions throughout the OS would also be a good change. It's been this way for a long time, though, and I doubt it will change, because Unraid has always been a "don't expose to the outside world" kind of distro.
  15. In unraid's implementation of docker, since it runs as root, are the containers leveraging Linux namespaces / subuids to provide some additional isolation from the host? For example, if I gain access to root on a privileged container and break out, is that user still root or am I mapped to some useless subuid? Docker provides this functionality but for whatever reason it is off by default.
  16. Hi all, Getting this error when trying to build a Linux VM with the same settings as my (successful) Windows 10 VM. I am currently only attempting to pass through the RX 580. This works flawlessly on my Win10 VM, but every time I try to start the Linux VM, I get this error. Here's the tail of syslog when I started the VM:

      root@iron:~# tail -f /var/log/syslog
      Dec 21 11:51:05 iron avahi-daemon[5365]: Joining mDNS multicast group on interface vnet1.IPv6 with address fe80::fc54:ff:fe50:3b5e.
      Dec 21 11:51:05 iron avahi-daemon[5365]: New relevant interface vnet1.IPv6 for mDNS.
      Dec 21 11:51:05 iron avahi-daemon[5365]: Registering new address record for fe80::fc54:ff:fe50:3b5e on vnet1.*.
      Dec 21 11:51:50 iron avahi-daemon[5365]: Interface vnet1.IPv6 no longer relevant for mDNS.
      Dec 21 11:51:50 iron avahi-daemon[5365]: Leaving mDNS multicast group on interface vnet1.IPv6 with address fe80::fc54:ff:fe50:3b5e.
      Dec 21 11:51:50 iron kernel: br0: port 3(vnet1) entered disabled state
      Dec 21 11:51:50 iron kernel: device vnet1 left promiscuous mode
      Dec 21 11:51:50 iron kernel: br0: port 3(vnet1) entered disabled state
      Dec 21 11:51:50 iron avahi-daemon[5365]: Withdrawing address record for fe80::fc54:ff:fe50:3b5e on vnet1.
      Dec 21 11:56:43 iron login[4031]: ROOT LOGIN on '/dev/pts/1'

      VM settings:

      <?xml version='1.0' encoding='UTF-8'?>
      <domain type='kvm'>
        <name>Arch</name>
        <uuid>73998f8d-71bc-b076-9372-5e2952d398b0</uuid>
        <metadata>
          <vmtemplate xmlns="unraid" name="Arch" icon="arch.png" os="arch"/>
        </metadata>
        <memory unit='KiB'>6291456</memory>
        <currentMemory unit='KiB'>6291456</currentMemory>
        <memoryBacking>
          <nosharepages/>
        </memoryBacking>
        <vcpu placement='static'>5</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='10'/>
          <vcpupin vcpu='1' cpuset='11'/>
          <vcpupin vcpu='2' cpuset='12'/>
          <vcpupin vcpu='3' cpuset='14'/>
          <vcpupin vcpu='4' cpuset='15'/>
        </cputune>
        <os>
          <type arch='x86_64' machine='pc-q35-3.0'>hvm</type>
          <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
          <nvram>/etc/libvirt/qemu/nvram/73998f8d-71bc-b076-9372-5e2952d398b0_VARS-pure-efi.fd</nvram>
        </os>
        <features>
          <acpi/>
          <apic/>
        </features>
        <cpu mode='host-passthrough' check='none'>
          <topology sockets='1' cores='5' threads='1'/>
        </cpu>
        <clock offset='utc'>
          <timer name='rtc' tickpolicy='catchup'/>
          <timer name='pit' tickpolicy='delay'/>
          <timer name='hpet' present='no'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/local/sbin/qemu</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source file='/mnt/user/domains/Arch/vdisk1.img'/>
            <target dev='hdc' bus='virtio'/>
            <boot order='1'/>
            <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
          </disk>
          <controller type='usb' index='0' model='ich9-ehci1'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci1'>
            <master startport='0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci2'>
            <master startport='2'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci3'>
            <master startport='4'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
          </controller>
          <controller type='sata' index='0'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
          </controller>
          <controller type='pci' index='0' model='pcie-root'/>
          <controller type='pci' index='1' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='1' port='0x8'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
          </controller>
          <controller type='pci' index='2' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='2' port='0x9'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
          </controller>
          <controller type='pci' index='3' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='3' port='0xa'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
          </controller>
          <controller type='pci' index='4' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='4' port='0xb'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
          </controller>
          <controller type='pci' index='5' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='5' port='0xc'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
          </controller>
          <controller type='pci' index='6' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='6' port='0xd'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
          </controller>
          <controller type='pci' index='7' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='7' port='0xe'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
          </controller>
          <controller type='pci' index='8' model='pcie-to-pci-bridge'>
            <model name='pcie-pci-bridge'/>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
          </controller>
          <controller type='virtio-serial' index='0'>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
          </controller>
          <interface type='bridge'>
            <mac address='52:54:00:50:3b:5e'/>
            <source bridge='br0'/>
            <model type='virtio'/>
            <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
          </interface>
          <serial type='pty'>
            <target type='isa-serial' port='0'>
              <model name='isa-serial'/>
            </target>
          </serial>
          <console type='pty'>
            <target type='serial' port='0'/>
          </console>
          <channel type='unix'>
            <target type='virtio' name='org.qemu.guest_agent.0'/>
            <address type='virtio-serial' controller='0' bus='0' port='1'/>
          </channel>
          <input type='tablet' bus='usb'>
            <address type='usb' bus='0' port='3'/>
          </input>
          <input type='mouse' bus='ps2'/>
          <input type='keyboard' bus='ps2'/>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x08' slot='0x01' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x09' slot='0x00' function='0x1'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x08' slot='0x02' function='0x0'/>
          </hostdev>
          <memballoon model='none'/>
        </devices>
      </domain>

      Hardware:
      AMD Ryzen 1700x
      32GB DDR4
      AMD RX 580
  17. Can you post your KVM XML configuration for your Windows 10 VM (edit the VM, click upper right switch that says "form view")? Try using SeaBIOS if you aren't already. I couldn't get AMD graphics to play nice with OVMF.
  18. This might explain some of the intermittent errors I've noticed with my files too, namely ones written to cache initially... Does the mover program implement checksumming? Something like rsync --checksum?
  19. I'm not sure if this is a bug with 6.5.2 or not, but since updating, Plex dockers are not working (they were working before). I've tried both linuxserver.io's Plex docker and the official PMS one; both exhibit the same problem. I get a "site refused to connect" error when trying to browse to the WebUI. Here is what is shown in the logs:

      Brought to you by linuxserver.io
      We gratefully accept donations at:
      https://www.linuxserver.io/donations/
      -------------------------------------
      GID/UID
      -------------------------------------
      User uid: 99
      User gid: 100
      -------------------------------------
      [cont-init.d] 10-adduser: exited 0.
      [cont-init.d] 30-dbus: executing...
      [cont-init.d] 30-dbus: exited 0.
      [cont-init.d] 40-chown-files: executing...
      [cont-init.d] 40-chown-files: exited 0.
      [cont-init.d] 50-plex-update: executing...
      No update required
      [cont-init.d] 50-plex-update: exited 0.
      [cont-init.d] done.
      [services.d] starting services
      Starting dbus-daemon
      Starting Plex Media Server.
      dbus[277]: [system] org.freedesktop.DBus.Error.AccessDenied: Failed to set fd limit to 65536: Operation not permitted
      [services.d] done.
      Starting Avahi daemon
      Starting Plex Media Server.
      Found user 'avahi' (UID 106) and group 'avahi' (GID 107).
      Successfully dropped root privileges.
      avahi-daemon 0.6.32-rc starting up.
      No service file found in /etc/avahi/services.
      Joining mDNS multicast group on interface eth0.IPv4 with address 10.0.0.7.
      New relevant interface eth0.IPv4 for mDNS.
      Network interface enumeration completed.
      Registering new address record for 10.0.0.7 on eth0.IPv4.
      Server startup complete. Host name is f3b0f9d6a685.local.
      Local service cookie is 1480182292.
      Starting Plex Media Server.
      Starting Plex Media Server.
      Starting Plex Media Server.
      [...]
  20. Via the reboot button in the WUI. Could it be that a shutdown timer is being exceeded while stopping my Windows VM?
  21. My install consistently starts a parity check on every reboot now. Is it treating every shutdown as unclean?
  22. Curious, what is your use-case for this?
  23. I do, but it'd be under the two video cards, and I'm not sure I can tell unRAID to pick the third card, right? I thought it just picks the first card it sees...
  24. Appreciate the response. I think this is the next step. I'm gonna try to dump the 1070 ROM and use that as the primary. I don't really have room to add a third card, so I really need this to work, as everyone says it should...