
xanvincent

Members
  • Content Count

    20
  • Joined

  • Last visited

Community Reputation

1 Neutral

About xanvincent

  • Rank
    Member
  • Birthday 01/28/1989

Converted

  • Gender
    Male
  • Personal Text
    UNRAID 6.5.1-rc6


  1. Based on what I know about LinuxServer.io's Docker containers, PUID/PGID simply map the container's internal user (abc?) to the specified UID and GID. AFAIK, this does not affect the root user inside the container, and it only applies to LSIO containers. If someone knows differently, feel free to correct me. I believe the only way to change the root user mapping (which is what matters when interfacing with the host's resources) is with user namespaces, i.e. starting the daemon with --userns-remap USER (see the first sketch after this list).
  2. It is definitely possible to break out of containers. This was 'recently' exploited successfully per CVE-2019-5736: https://www.twistlock.com/labs-blog/breaking-docker-via-runc-explaining-cve-2019-5736/ That was a vulnerability in runC rather than in Docker itself, and it was of course patched some time after the CVE was identified. The point of using namespaces here is that if an exploit like the above is used against a container you're making available to the world, you want to minimize damage to the host system. With namespaces implemented, even if you escaped as root, you'd be some non-usable user on the host system instead of root on Unraid (the second sketch after this list shows how to verify the mapping). This is also a big reason not to use 777 permissions on the host OS: if your public container were compromised with such an exploit, the attacker would then have access to all of the data in all of your shares.
  3. @Eadword: I have a thread open about Docker isolation with no replies, but you should add Linux namespaces / subuids for Docker to your list; not running everything as root and dropping the 777 permissions used everywhere in the OS would also be good changes. It's been this way for a long time, though, so I doubt it will change; Unraid has always been a "don't expose it to the outside world" kind of distro.
  4. In Unraid's implementation of Docker, since it runs as root, do the containers leverage Linux namespaces / subuids to provide some additional isolation from the host? For example, if I gain access to root on a privileged container and break out, is that user still root, or am I mapped to some useless subuid? Docker provides this functionality, but for whatever reason it is off by default (the third sketch after this list shows one way to check whether it's enabled).
  5. Hi all, I'm getting this error when trying to build a Linux VM with the same settings as my (successful) Windows 10 VM. I am currently only attempting to pass through the RX 580. This works flawlessly on my Win10 VM, but every time I try to start the Linux VM, I get this error. Here's the tail of syslog when I started the VM:

     root@iron:~# tail -f /var/log/syslog
     Dec 21 11:51:05 iron avahi-daemon[5365]: Joining mDNS multicast group on interface vnet1.IPv6 with address fe80::fc54:ff:fe50:3b5e.
     Dec 21 11:51:05 iron avahi-daemon[5365]: New relevant interface vnet1.IPv6 for mDNS.
     Dec 21 11:51:05 iron avahi-daemon[5365]: Registering new address record for fe80::fc54:ff:fe50:3b5e on vnet1.*.
     Dec 21 11:51:50 iron avahi-daemon[5365]: Interface vnet1.IPv6 no longer relevant for mDNS.
     Dec 21 11:51:50 iron avahi-daemon[5365]: Leaving mDNS multicast group on interface vnet1.IPv6 with address fe80::fc54:ff:fe50:3b5e.
     Dec 21 11:51:50 iron kernel: br0: port 3(vnet1) entered disabled state
     Dec 21 11:51:50 iron kernel: device vnet1 left promiscuous mode
     Dec 21 11:51:50 iron kernel: br0: port 3(vnet1) entered disabled state
     Dec 21 11:51:50 iron avahi-daemon[5365]: Withdrawing address record for fe80::fc54:ff:fe50:3b5e on vnet1.
     Dec 21 11:56:43 iron login[4031]: ROOT LOGIN on '/dev/pts/1'

     VM settings:

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm'>
       <name>Arch</name>
       <uuid>73998f8d-71bc-b076-9372-5e2952d398b0</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Arch" icon="arch.png" os="arch"/>
       </metadata>
       <memory unit='KiB'>6291456</memory>
       <currentMemory unit='KiB'>6291456</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>5</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='10'/>
         <vcpupin vcpu='1' cpuset='11'/>
         <vcpupin vcpu='2' cpuset='12'/>
         <vcpupin vcpu='3' cpuset='14'/>
         <vcpupin vcpu='4' cpuset='15'/>
       </cputune>
       <os>
         <type arch='x86_64' machine='pc-q35-3.0'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/73998f8d-71bc-b076-9372-5e2952d398b0_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough' check='none'>
         <topology sockets='1' cores='5' threads='1'/>
       </cpu>
       <clock offset='utc'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/domains/Arch/vdisk1.img'/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='sata' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'/>
         <controller type='pci' index='1' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='1' port='0x8'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
         </controller>
         <controller type='pci' index='2' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='2' port='0x9'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='pci' index='3' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='3' port='0xa'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
         </controller>
         <controller type='pci' index='4' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='4' port='0xb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
         </controller>
         <controller type='pci' index='5' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='5' port='0xc'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
         </controller>
         <controller type='pci' index='6' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='6' port='0xd'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
         </controller>
         <controller type='pci' index='7' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='7' port='0xe'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
         </controller>
         <controller type='pci' index='8' model='pcie-to-pci-bridge'>
           <model name='pcie-pci-bridge'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:50:3b:5e'/>
           <source bridge='br0'/>
           <model type='virtio'/>
           <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
         </interface>
         <serial type='pty'>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
         </serial>
         <console type='pty'>
           <target type='serial' port='0'/>
         </console>
         <channel type='unix'>
           <target type='virtio' name='org.qemu.guest_agent.0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <address type='usb' bus='0' port='3'/>
         </input>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x08' slot='0x01' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x09' slot='0x00' function='0x1'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x08' slot='0x02' function='0x0'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
     </domain>

     Hardware: AMD Ryzen 1700x, 32GB DDR4, AMD RX 580
  6. Can you post the KVM XML configuration for your Windows 10 VM (edit the VM and click the upper-right switch that says "form view")? Try SeaBIOS if you aren't already using it; I couldn't get AMD graphics to play nice with OVMF.
  7. This might explain some of the intermittent errors I've noticed with my files too, namely ones initially written to the cache... Does the mover program implement checksumming? Something like rsync --checksum? (See the fourth sketch after this list.)
  8. I'm not sure if this is a bug in 6.5.2 or not, but since updating, Plex dockers are not working (they were working before). I've tried both linuxserver.io's Plex docker and the official PMS one; both exhibit the same problem. I get a "site refused to connect" error when trying to browse to the WebUI (see the fifth sketch after this list for a quick way to probe the port). Here is what is shown in the logs:

     Brought to you by linuxserver.io
     We gratefully accept donations at:
     https://www.linuxserver.io/donations/
     -------------------------------------
     GID/UID
     -------------------------------------
     User uid: 99
     User gid: 100
     -------------------------------------
     [cont-init.d] 10-adduser: exited 0.
     [cont-init.d] 30-dbus: executing...
     [cont-init.d] 30-dbus: exited 0.
     [cont-init.d] 40-chown-files: executing...
     [cont-init.d] 40-chown-files: exited 0.
     [cont-init.d] 50-plex-update: executing...
     No update required
     [cont-init.d] 50-plex-update: exited 0.
     [cont-init.d] done.
     [services.d] starting services
     Starting dbus-daemon
     Starting Plex Media Server.
     dbus[277]: [system] org.freedesktop.DBus.Error.AccessDenied: Failed to set fd limit to 65536: Operation not permitted
     [services.d] done.
     Starting Avahi daemon
     Starting Plex Media Server.
     Found user 'avahi' (UID 106) and group 'avahi' (GID 107).
     Successfully dropped root privileges.
     avahi-daemon 0.6.32-rc starting up.
     No service file found in /etc/avahi/services.
     Joining mDNS multicast group on interface eth0.IPv4 with address 10.0.0.7.
     New relevant interface eth0.IPv4 for mDNS.
     Network interface enumeration completed.
     Registering new address record for 10.0.0.7 on eth0.IPv4.
     Server startup complete. Host name is f3b0f9d6a685.local. Local service cookie is 1480182292.
     Starting Plex Media Server.
     Starting Plex Media Server.
     Starting Plex Media Server.
     (the "Starting Plex Media Server." line keeps repeating) [...]
  9. Via the reboot button in the WUI. Could it be that a timeout is being exceeded when trying to shut down my Windows VM?
  10. My install consistently tries to run a parity check on every reboot now. Is it treating every shutdown as unclean?
  11. Curious, what is your use-case for this?
  12. I do, but it'd be under the two video cards. I'm not sure I can tell unRAID to pick the third card, right? I thought it would just pick the first card it sees...
  13. Appreciate the response. I think this is the next step. I'm gonna try to dump the 1070 ROM and use that as the primary. I don't really have room to add a third card, so I really need this to work, as everyone says it should...
  14. pfSense and IPFire run well under KVM; it's discussed a lot on the forum. Not sure about the others. Give it a shot!
  15. AMD Ryzen 1700x build. Per some advice on here, I went ahead and bought another video card, since the Ryzen CPU doesn't have onboard graphics; now I'm having a bit of trouble getting both cards to pass through correctly.

     First PCIe 16x slot: AMD Vega 56
     Second: Nvidia GTX 1070 Ti

     The second card's passthrough works just fine and is a huge success; however, if possible, I'd like to make use of the first slot as well. I've added vfio-pci.ids=1002:687f,1002:aaf8 to my syslinux.cfg (the last sketch after this list shows the placement and how to check the binding), but it doesn't seem to make a difference. I've tried SeaBIOS and OVMF. Is it worth trying another Nvidia card in the first slot? I thought AMD cards were supposed to "just work"? I've also tried another AMD card, the RX 580, in this slot, and couldn't get that to work either. Ideas greatly appreciated!

     Log output: https://pastebin.com/13Uh2iCv
     XML: https://pastebin.com/rxKstPEe
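A minimal sketch of the daemon-level remapping mentioned in post 1, using stock Docker file locations; Unraid does not expose this in its UI, and the subordinate-ID range shown is an illustrative value:

    # /etc/docker/daemon.json -- enable user-namespace remapping daemon-wide;
    # "default" tells dockerd to create and use a "dockremap" user
    {
      "userns-remap": "default"
    }

    # /etc/subuid (and /etc/subgid) -- the subordinate range that container
    # UID 0 gets mapped onto, in the format user:first_host_id:range_size
    dockremap:100000:65536

The daemon has to be restarted for this to take effect; it's equivalent to starting dockerd with --userns-remap=default.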
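To make the "non-usable user" point from post 2 concrete: with remapping on, a process running as root inside a container shows up on the host under a high subordinate UID. A quick check, assuming an alpine image is available to pull:

    # start a throwaway container whose main process runs as container-root
    docker run -d --rm --name ns-test alpine sleep 300

    # on the host, see which user actually owns that process
    ps -o user:12,pid,comm -C sleep
    # without userns-remap: USER = root    (a breakout lands you as real root)
    # with userns-remap:    USER = 100000  (an unprivileged host UID that owns nothing)

    docker stop ns-test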
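One way to answer the question in post 4 on any given host: the daemon advertises active user-namespace support among its security options, and the kernel exposes the per-process mapping. A sketch:

    # "name=userns" appears in this list only when remapping is in effect
    docker info --format '{{.SecurityOptions}}'

    # kernel view: an identity line "0 0 4294967295" in a process's uid_map
    # means no remapping is applied to that process
    cat /proc/self/uid_map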
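On the mover question in post 7: I don't know what the mover does internally, but a cache-to-array move can be verified after the fact with rsync's checksum mode. The share paths below are placeholders:

    # -r recurse, -c compare full checksums (not just size/mtime), -n dry-run;
    # prints the name of any file whose contents differ between cache and array
    rsync -rcn --out-format='%n differs' /mnt/cache/Share/ /mnt/disk1/Share/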
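For the "site refused to connect" symptom in post 8, it helps to separate "Plex is crash-looping" from "the port is unreachable". The IP comes from the log above, 32400 is Plex's standard web port, and the container name "plex" is a placeholder for whatever the container is actually called:

    # does anything answer on the Plex web port at all?
    curl -v http://10.0.0.7:32400/web

    # follow the container log live while retrying the WebUI
    docker logs -f plex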
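For post 15: the vfio-pci.ids stub has to sit on the append line of the boot entry that is actually used in /boot/syslinux/syslinux.cfg, and after a reboot you can confirm which driver claimed each GPU function. A sketch; the label name and the rest of the append line will vary per install:

    label Unraid OS
      menu default
      kernel /bzimage
      append initrd=/bzroot vfio-pci.ids=1002:687f,1002:aaf8

    # after rebooting: "Kernel driver in use: vfio-pci" under each of the
    # card's functions means the stub took effect
    lspci -nnk | grep -A3 -E 'VGA|Audio device'

If the Vega's functions still show amdgpu or another driver in use, the stub never bound and the IDs or the boot entry are worth re-checking.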