
xanvincent

Members
  • Content Count: 28
  • Joined
  • Last visited

Community Reputation: 4 Neutral

Converted
  • Gender: Male
  • Personal Text: UNRAID 6.5.1-rc6


  1. Is the AdGuard Home container working? The console doesn't work (just a black screen) and the WebGUI doesn't load. I don't see anything created in the conf directory either.
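     In case it helps with debugging, here's one way to pull the container's recent log output (assuming the container is named AdGuardHome; substitute whatever yours is called):

         docker logs --tail 50 AdGuardHome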
  2. Can you get it to boot successfully with no USB peripherals attached? I'd imagine you'd need to use the Solaris x86 version and force the CPU to one of the supported Intel models (like Westmere). EDIT: I recall from my sysadmin days that Solaris x86 was hot garbage; could you use something more modern like illumos or OpenIndiana?
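     For reference, a minimal sketch of what forcing the CPU model looks like in the VM's libvirt XML, replacing the usual <cpu mode='host-passthrough'> block (Westmere is just one example of a model QEMU can emulate):

         <cpu mode='custom' match='exact'>
           <model fallback='forbid'>Westmere</model>
         </cpu>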
  3. You need to add the following to your /etc/security/limits.conf (in your Debian VM):

         * soft memlock <number>
         * hard memlock <number>

     <number> can also be "unlimited", which is the default Unraid setting.
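     limits.conf is only read at session start, so after logging back in you can verify the new limit took effect:

         ulimit -l    # prints the max locked-memory limit in KiB, or "unlimited"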
  4. 1) That's beta software; don't expect it to be stable, and don't use it on a system with data you can't afford to lose. 2) You should post this bug, with relevant diagnostics, to the bug report forum (prereleases).
  5. You will never see 3000 Mbps on WiFi 6 (ax) devices today. That's just marketing wank (they add all the 5GHz and 2.4GHz bands together; real devices don't work that way). The fastest real-world WiFi 6 speed I've seen was ~700 Mbps, and that was achieved 5 ft away from the AP. Assuming your server doesn't sit next to your AP, there are other APs in your area (interference), and you have multiple other wifi devices on your network, you're gonna see that 700 number drop real fast. Wireless connections are also unreliable; dropped packets are pretty common, meaning data has to be resent, slowing your overall speeds. I get wanting to run Unraid over wifi, I just wouldn't expect it to consistently and reliably approach gigabit ethernet speeds. By the time it does, 10 gig ethernet will be as cheap as 1 gig is today.
  6. The strongest isolation comes from the most abstraction between the service and the host. I'd spin up a full VM to do any external forwarding instead of using Docker containers. Unraid has always been advertised as not meant to be internet-facing, so keep that in mind.
  7. The PGID and PUID variables map to an internal user (abc) within the container that runs and owns files for whatever app. Root still exists there (in the container), and that's the problem: root's ID in the container = root's ID outside the container. Implementing userns is THE fix for this security concern, but it has to be enabled on the Docker daemon itself, not per container.
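     A minimal sketch of what enabling it looks like on stock Docker (on Unraid you'd need some way to feed this to the daemon at startup):

         # /etc/docker/daemon.json -- remap container root to an unprivileged subuid range
         { "userns-remap": "default" }

     After restarting the daemon you can confirm the range it allocated with something like grep dockremap /etc/subuid (e.g. dockremap:100000:65536).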
  8. My number one request as well. I would like Unraid to catch up to modern distros here; then maybe we could even start recommending it to SMBs as an alternative to FreeNAS and OMV. P.S. I can definitely help with implementing a few of these, I'd just like a better solution than running a script at boot to do it.
  9. Based on what I know about LinuxServer.io's Docker containers, this simply maps the container's internal user (abc?) to the specified UID and GID. AFAIK this does not affect the root user in the container, and it would also only apply to LSIO containers. If someone knows different, feel free to correct me. I believe the only way to change the root user mapping (which is what matters for interfacing with the host's resources) is with namespaces, i.e. starting the Docker daemon with --userns-remap <user>.
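     For contrast, the LSIO-style mapping is just environment variables at run time; a hedged example (99/100 are Unraid's default nobody:users IDs, and the image name is a placeholder):

         # the app inside runs as "abc", mapped to UID 99 / GID 100; root inside is untouched
         docker run -d -e PUID=99 -e PGID=100 linuxserver/<image>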
  10. It is definitely possible to break out of containers. This was 'recently' exploited successfully per CVE-2019-5736: https://www.twistlock.com/labs-blog/breaking-docker-via-runc-explaining-cve-2019-5736/ This was more a vulnerability in runC than in Docker itself, and it was, of course, patched some time after the CVE was identified. The point of using namespaces here is that, if an exploit like the above is used on a container you're making available to the world, you want to minimize damage to the host system. With namespaces implemented, even if you escaped as root, you'd be some non-usable user on the host system, instead of root on Unraid. This is also a big reason not to use 777 permissions on the host OS: if your public container were compromised with such an exploit, the attacker would have access to all of the data in all of your shares.
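     To make the remapping concrete, a small sketch of what you'd see with userns enabled (the 100000 base UID is just an example of a subuid range):

         docker run -d --name=demo alpine sleep 600
         docker exec demo id              # inside: uid=0(root) gid=0(root)
         ps -o user,pid,args -C sleep     # on the host: owned by UID 100000, not root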
  11. @Eadword: I have a thread open for Docker isolation with no replies, but you should add Linux namespaces / subuids for Docker to your list; not running everything as root and not using 777 permissions everywhere in the OS would be good changes too. It's been this way for a long time though, and I doubt it will change, because Unraid has always been a "don't expose to the outside world" kind of distro.
  12. In Unraid's implementation of Docker, since it runs as root, are the containers leveraging Linux namespaces / subuids to provide some additional isolation from the host? For example, if I gain access to root on a privileged container and break out, is that user still root, or am I mapped to some useless subuid? Docker provides this functionality, but for whatever reason it is off by default.
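     For anyone wanting to check their own box, one way to see whether the daemon has userns active (output format may vary by Docker version):

         docker info --format '{{.SecurityOptions}}'    # look for "name=userns"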
  13. Hi all, Getting this error when trying to build a Linux VM with the same settings as my (successful) Windows 10 VM. I am currently only attempting to pass through the RX 580. This works flawlessly on my Win10 VM, but every time I try to start the Linux VM, I get this error. Here's the tail of syslog when I started the VM:

      root@iron:~# tail -f /var/log/syslog
      Dec 21 11:51:05 iron avahi-daemon[5365]: Joining mDNS multicast group on interface vnet1.IPv6 with address fe80::fc54:ff:fe50:3b5e.
      Dec 21 11:51:05 iron avahi-daemon[5365]: New relevant interface vnet1.IPv6 for mDNS.
      Dec 21 11:51:05 iron avahi-daemon[5365]: Registering new address record for fe80::fc54:ff:fe50:3b5e on vnet1.*.
      Dec 21 11:51:50 iron avahi-daemon[5365]: Interface vnet1.IPv6 no longer relevant for mDNS.
      Dec 21 11:51:50 iron avahi-daemon[5365]: Leaving mDNS multicast group on interface vnet1.IPv6 with address fe80::fc54:ff:fe50:3b5e.
      Dec 21 11:51:50 iron kernel: br0: port 3(vnet1) entered disabled state
      Dec 21 11:51:50 iron kernel: device vnet1 left promiscuous mode
      Dec 21 11:51:50 iron kernel: br0: port 3(vnet1) entered disabled state
      Dec 21 11:51:50 iron avahi-daemon[5365]: Withdrawing address record for fe80::fc54:ff:fe50:3b5e on vnet1.
      Dec 21 11:56:43 iron login[4031]: ROOT LOGIN on '/dev/pts/1'

      VM settings:

      <?xml version='1.0' encoding='UTF-8'?>
      <domain type='kvm'>
        <name>Arch</name>
        <uuid>73998f8d-71bc-b076-9372-5e2952d398b0</uuid>
        <metadata>
          <vmtemplate xmlns="unraid" name="Arch" icon="arch.png" os="arch"/>
        </metadata>
        <memory unit='KiB'>6291456</memory>
        <currentMemory unit='KiB'>6291456</currentMemory>
        <memoryBacking>
          <nosharepages/>
        </memoryBacking>
        <vcpu placement='static'>5</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='10'/>
          <vcpupin vcpu='1' cpuset='11'/>
          <vcpupin vcpu='2' cpuset='12'/>
          <vcpupin vcpu='3' cpuset='14'/>
          <vcpupin vcpu='4' cpuset='15'/>
        </cputune>
        <os>
          <type arch='x86_64' machine='pc-q35-3.0'>hvm</type>
          <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
          <nvram>/etc/libvirt/qemu/nvram/73998f8d-71bc-b076-9372-5e2952d398b0_VARS-pure-efi.fd</nvram>
        </os>
        <features>
          <acpi/>
          <apic/>
        </features>
        <cpu mode='host-passthrough' check='none'>
          <topology sockets='1' cores='5' threads='1'/>
        </cpu>
        <clock offset='utc'>
          <timer name='rtc' tickpolicy='catchup'/>
          <timer name='pit' tickpolicy='delay'/>
          <timer name='hpet' present='no'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/local/sbin/qemu</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source file='/mnt/user/domains/Arch/vdisk1.img'/>
            <target dev='hdc' bus='virtio'/>
            <boot order='1'/>
            <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
          </disk>
          <controller type='usb' index='0' model='ich9-ehci1'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci1'>
            <master startport='0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci2'>
            <master startport='2'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci3'>
            <master startport='4'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
          </controller>
          <controller type='sata' index='0'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
          </controller>
          <controller type='pci' index='0' model='pcie-root'/>
          <controller type='pci' index='1' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='1' port='0x8'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
          </controller>
          <controller type='pci' index='2' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='2' port='0x9'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
          </controller>
          <controller type='pci' index='3' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='3' port='0xa'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
          </controller>
          <controller type='pci' index='4' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='4' port='0xb'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
          </controller>
          <controller type='pci' index='5' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='5' port='0xc'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
          </controller>
          <controller type='pci' index='6' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='6' port='0xd'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
          </controller>
          <controller type='pci' index='7' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='7' port='0xe'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
          </controller>
          <controller type='pci' index='8' model='pcie-to-pci-bridge'>
            <model name='pcie-pci-bridge'/>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
          </controller>
          <controller type='virtio-serial' index='0'>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
          </controller>
          <interface type='bridge'>
            <mac address='52:54:00:50:3b:5e'/>
            <source bridge='br0'/>
            <model type='virtio'/>
            <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
          </interface>
          <serial type='pty'>
            <target type='isa-serial' port='0'>
              <model name='isa-serial'/>
            </target>
          </serial>
          <console type='pty'>
            <target type='serial' port='0'/>
          </console>
          <channel type='unix'>
            <target type='virtio' name='org.qemu.guest_agent.0'/>
            <address type='virtio-serial' controller='0' bus='0' port='1'/>
          </channel>
          <input type='tablet' bus='usb'>
            <address type='usb' bus='0' port='3'/>
          </input>
          <input type='mouse' bus='ps2'/>
          <input type='keyboard' bus='ps2'/>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x08' slot='0x01' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x09' slot='0x00' function='0x1'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x08' slot='0x02' function='0x0'/>
          </hostdev>
          <memballoon model='none'/>
        </devices>
      </domain>

      Hardware: AMD Ryzen 1700x, 32GB DDR4, AMD RX 580
  14. Can you post the KVM XML configuration for your Windows 10 VM (edit the VM and click the switch in the upper right that says "form view")? Try using SeaBIOS if you aren't already; I couldn't get AMD graphics to play nice with OVMF.
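     Roughly, a SeaBIOS VM just has no OVMF loader/nvram lines in its <os> block; a sketch (the machine version is illustrative, and in the Unraid GUI the BIOS type is chosen when the VM is created):

         <os>
           <type arch='x86_64' machine='pc-i440fx-3.0'>hvm</type>
         </os>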
  15. This might explain some of the intermittent errors I've noticed with my files too, namely ones written to cache initially... Does the mover program implement checksumming? Something like rsync --checksum?
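     For anyone wanting to checksum-verify a copy themselves, rsync can do a compare-only pass; a hedged example (the share paths are illustrative, and -n makes it a dry run that only lists files whose checksums differ):

         rsync -rcn --out-format='%n' /mnt/cache/Media/ /mnt/disk1/Media/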