
PixelPerfect


Posts posted by PixelPerfect

  1. 20 hours ago, Snuups said:

    That sounds like unraid might support the Mac Pro 4,1 natively in version 6.9? That would be great.


    No, we don't know. It might; it might not. I'm pretty sure 6.9 is already out, though? This thread was never resolved, and I just run macOS now.

  2. 11 minutes ago, madwyn said:

    My goal is to get the Mac Pro a 10GbE SFP+ card and an NVMe PCIe card + NVMe SSD (or U.2 SSD) to build a 10GbE-capable unraid server on the "cheap". I like the Mac Pro because it uses ECC RAM and has server-grade components, which makes it very durable.

     

    I'm glad you found a workaround for the moment. Unfortunately, I can't bear the thought that Mojave introduces extra overhead in performance, stability, and energy consumption.

     

    Since the Mac Pro 5,1 and older models lost OS support with Catalina, I assume more and more of these machines will end up on eBay, and people will want to convert them into servers or PCs. I hope unraid can provide official support for this machine.

    I was going to be running Mojave in a VM anyway, so it made sense. iCloud Cache and another app I use only run under macOS.
     

    Using a Mac Pro just because it has ECC isn't worth it. Either forget ECC, because it really doesn't matter here, or get a PC platform that's a quarter the price for the same setup.

  3. 33 minutes ago, madwyn said:

    Have you made any progress on this?

     

    I went through the same process you did with my Mac Pro 4,1. It's now flashed with the latest firmware, 144, but it still shows the same error.

     

    I've tried booting a Fedora live CD and another Linux distro; they all worked without issue, so I do think this is unraid-specific. I'm stuck, and thinking of converting the Mac Pro into a PC.

    Not at all. I'm now running Docker on Mojave, with the costs and benefits that entails. It works, and it does what I needed it to (Time Machine, iCloud cache, Docker services). It's just a little sad that my shares now live explicitly on individual drives. I don't use parity anyway. (I know.)

  4. On 5/25/2019 at 5:40 AM, itimpi said:

    At that point Unraid itself has not got involved. I suspect this is a problem with the generic Linux drivers that are handling the boot process at that point. As such, it is something that would need to be fixed at the core Linux level, not in the Unraid-specific code.

    Yeah, so it's looking like I'd have to rebuild or replace the kernel entirely, similar to how there's a custom HP build of unRAID, I'd bet, but honestly worse. I'll have to look a little deeper into which hardware drivers are present and go from there if I ever get the time. It would definitely be neat to get unRAID booting on this hardware, and even on the Mac Mini Dual (RIP your chance at parity, though, unless you only want one drive).

  5. 1 hour ago, Marshalleq said:

    Did these models have USB3? If so, you could try booting off an external USB controller, I suppose. Though a kernel argument would be nicer. Or try the UEFI version of the USB stick, which I think is more relevant when using EFI. EFI is the original Intel standard, I think, with UEFI being an implementation of it, or the other way around; I can't remember.

    My only USB3 card works under macOS but doesn't show up at all in the EFI. There's a chance I could upgrade my CPU to flash a slightly newer firmware that enables NVMe booting, which might also let me boot from a PCIe USB card, but I don't actually know.

     

    It could also be that the card doesn't have a compatible EFI-level driver.

  6. 23 minutes ago, Marshalleq said:

    Did these models have USB3? If so, you could try booting off an external USB controller, I suppose. Though a kernel argument would be nicer. Or try the UEFI version of the USB stick, which I think is more relevant when using EFI. EFI is the original Intel standard, I think, with UEFI being an implementation of it, or the other way around; I can't remember.

    The first shot I posted was the EFI-based panic. Same issue. These models are all USB 2.0, but I have a 3.0 controller that should work with my system. Once this system update is done, I can try it out.

     

    It wouldn't help the OP, as he doesn't have PCIe slots, and yeah, a kernel parameter would be nicer.

    Update: I tried using rEFInd to boot in legacy mode, and it didn't help much. I even tried specifying things like root=sda1 or 0B01, based on the following and previous screenshots.
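
    For context, unRAID boots off the flash drive via syslinux, so this experiment just meant editing the append line in syslinux.cfg. Roughly what I was trying, for anyone curious (stock unRAID doesn't set root= at all, and the device name here is a guess on my part):

    # /boot/syslinux/syslinux.cfg -- stock unRAID entry plus the root= experiment
    label unRAID OS
      kernel /bzimage
      append initrd=/bzroot root=sda1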

     

    I'm getting a device listed now, but I'm concerned I might just be seeing the macOS disk that's already installed, although it should have three partitions, not one.

     

    I've tried multiple drives and multiple ports.

    (screenshot: IMG_20190521_182538.jpg)

    I'm glad I did a little bit of hunting. I have a Mac Pro from that same era: officially a 2009 Mac Pro, a 4,1 flashed to work as a 2010 5,1. I get an identical kernel panic. By the looks of it, it has something to do with how the EFI hands off XHCI, which, mind you, isn't something we can control. I'm going to try some tricks with Plop Boot Manager later, though.

     

    The only thread I can find, mind you, talks about changing USB drives, and that fixed it for him. Sucks for us.

     

    (screenshot: the kernel panic)

  9. On 12/29/2018 at 1:02 AM, russdyer77 said:

    Rolling back around to this, as I don't want to rip apart walls to run more cables: would it be possible to use Apple's Thunderbolt-to-LAN adapter on both ends to run over a CAT5e connection?

     

    server - Thunderbolt expansion card, Thunderbolt-to-LAN adapter, CAT5e

    upstairs - CAT5e, Thunderbolt-to-LAN adapter, plugged into Thunderbolt dock

     

     

    No. Oh god, no.

    Thunderbolt is overkill anyway. Just run within-spec USB and HDMI, or an active repeater for them. Honestly, though, I don't know how economical this would even be compared to some cheap desktops or laptops, or whether there's really any good reason to do it.

  10. First things first:

    Ryzen 7 1700

    MSI B350m Gaming Pro

    32GB RAM

    MSI GTX 1060 3GB (Armor)

    (I also have an MSI AIR OC Vega 64 that I can test with)

     

    No iGPU, and no second x8/x16 slot to put my other GPU in. None of that. I have an edited TechPowerUp vBIOS and a booting Windows 10 VM.

     

    The MOMENT I try to pass a GPU to the VM, my monitor flashes, then nothing. The VM never boots, and I can't hit it over the network. The first core assigned to it is pinned.

     

    unRAID does try to use the GPU when booting, but it stops once the GPU inits. I can disable stubbing if that would change anything.
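
    For reference, "stubbing" here just means binding the GPU to vfio-pci at boot so no host driver touches it. On this version of unRAID that's an entry on the syslinux append line; a sketch with the IDs I believe match a GTX 1060 3GB and its HDMI audio (verify your own with lspci -nn before copying):

    # Claim the GPU and its audio function for vfio-pci before any other driver loads
    append vfio-pci.ids=10de:1c02,10de:10f1 initrd=/bzroot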

    My RX Vega 64 behaves the same way too, so I don't think this is isolated to Nvidia cards.

    Also, behavior is the same in both EFI and BIOS boot modes for the host.

    Also, stubbing got me around one error that involved releasing a lock on the GPU. The other option was to run this:

    # Unbind the virtual consoles so the host lets go of the GPU framebuffer
    echo 0 > /sys/class/vtconsole/vtcon0/bind
    echo 0 > /sys/class/vtconsole/vtcon1/bind
    # Detach the EFI framebuffer from the GPU
    echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind

    Here's my VM config. The GPU is even stubbed, both IOMMU hacks are enabled, and the GPU was in its own IOMMU group to begin with:

    <?xml version='1.0' encoding='UTF-8'?>
    <domain type='kvm'>
      <name>Windows 10</name>
      <uuid>bec44a89-0549-31f4-9944-d13453bf6416</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
      </metadata>
      <memory unit='KiB'>8388608</memory>
      <currentMemory unit='KiB'>2097152</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>8</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='8'/>
        <vcpupin vcpu='1' cpuset='9'/>
        <vcpupin vcpu='2' cpuset='10'/>
        <vcpupin vcpu='3' cpuset='11'/>
        <vcpupin vcpu='4' cpuset='12'/>
        <vcpupin vcpu='5' cpuset='13'/>
        <vcpupin vcpu='6' cpuset='14'/>
        <vcpupin vcpu='7' cpuset='15'/>
      </cputune>
      <os>
        <type arch='x86_64' machine='pc-q35-3.0'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/bec44a89-0549-31f4-9944-d13453bf6416_VARS-pure-efi.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
        <kvm>
          <hidden state='on'/>
        </kvm>
      </features>
      <cpu mode='host-passthrough' check='none'>
        <topology sockets='1' cores='8' threads='1'/>
      </cpu>
      <clock offset='localtime'/>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/user/domains/Windows 10/vdisk1.img'/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/W10X64.RS5.ENU.DEC2018.iso'/>
          <target dev='hda' bus='sata'/>
          <readonly/>
          <boot order='2'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/virtio-win-0.1.160-1.iso'/>
          <target dev='hdb' bus='sata'/>
          <readonly/>
          <address type='drive' controller='0' bus='0' target='0' unit='1'/>
        </disk>
        <controller type='usb' index='0' model='ich9-ehci1'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci1'>
          <master startport='0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci2'>
          <master startport='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci3'>
          <master startport='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
        </controller>
        <controller type='sata' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pcie-root'/>
        <controller type='pci' index='1' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='1' port='0x10'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
        </controller>
        <controller type='pci' index='2' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='2' port='0x11'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
        </controller>
        <controller type='pci' index='3' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='3' port='0x12'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
        </controller>
        <controller type='pci' index='4' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='4' port='0x13'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
        </controller>
        <controller type='pci' index='5' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='5' port='0x14'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
        </controller>
        <controller type='pci' index='6' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='6' port='0x15'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:f6:26:8c'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </interface>
        <serial type='pty'>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <channel type='unix'>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='tablet' bus='usb'>
          <address type='usb' bus='0' port='1'/>
        </input>
        <input type='mouse' bus='ps2'/>
        <input type='keyboard' bus='ps2'/>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x1f' slot='0x00' function='0x0'/>
          </source>
          <rom file='/mnt/user/isos/MSI.GTX1060.3072.160805.rom'/>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x1f' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='usb' managed='no'>
          <source>
            <vendor id='0x046d'/>
            <product id='0xc332'/>
          </source>
          <address type='usb' bus='0' port='2'/>
        </hostdev>
        <hostdev mode='subsystem' type='usb' managed='no'>
          <source>
            <vendor id='0x04fc'/>
            <product id='0x05d8'/>
          </source>
          <address type='usb' bus='0' port='3'/>
        </hostdev>
        <memballoon model='none'/>
      </devices>
    </domain>

    Here's a log with the GPU passed through

    2019-01-05 00:11:30.182+0000: starting up libvirt version: 4.7.0, qemu version: 3.0.0, kernel: 4.18.20-unRAID, hostname: Tower
    LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none /usr/local/sbin/qemu -name 'guest=Windows 10,debug-threads=on' -S -object 'secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Windows 10/master-key.aes' -machine pc-q35-3.0,accel=kvm,usb=off,dump-guest-core=off,mem-merge=off -cpu host,kvm=off -drive file=/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd,if=pflash,format=raw,unit=0,readonly=on -drive file=/etc/libvirt/qemu/nvram/bec44a89-0549-31f4-9944-d13453bf6416_VARS-pure-efi.fd,if=pflash,format=raw,unit=1 -m 8192 -realtime mlock=off -smp 8,sockets=1,cores=8,threads=1 -uuid bec44a89-0549-31f4-9944-d13453bf6416 -display none -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=26,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown -boot strict=on -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1
    2019-01-05 00:11:30.182+0000: Domain id=1 is tainted: high-privileges
    2019-01-05 00:11:30.182+0000: Domain id=1 is tainted: host-cpu
    2019-01-05T00:11:30.233844Z qemu-system-x86_64: -chardev pty,id=charserial0: char device redirected to /dev/pts/0 (label charserial0)
    2019-01-05T00:19:22.974789Z qemu-system-x86_64: terminating on signal 15 from pid 5850 (/usr/sbin/libvirtd)
    2019-01-05 00:19:24.377+0000: shutting down, reason=destroyed

    And with VNC

    2019-01-05 00:25:55.589+0000: starting up libvirt version: 4.7.0, qemu version: 3.0.0, kernel: 4.18.20-unRAID, hostname: Tower
    LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none /usr/local/sbin/qemu -name 'guest=Windows 10,debug-threads=on' -S -object 'secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-2-Windows 10/master-key.aes' -machine pc-q35-3.0,accel=kvm,usb=off,dump-guest-core=off,mem-merge=off -cpu host,kvm=off -drive file=/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd,if=pflash,format=raw,unit=0,readonly=on -drive file=/etc/libvirt/qemu/nvram/bec44a89-0549-31f4-9944-d13453bf6416_VARS-pure-efi.fd,if=pflash,format=raw,unit=1 -m 8192 -realtime mlock=off -smp 8,sockets=1,cores=8,threads=1 -uuid bec44a89-0549-31f4-9944-d13453bf6416 -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=27,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown -boot strict=on -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 -device pcie-r
    2019-01-05 00:25:55.589+0000: Domain id=2 is tainted: high-privileges
    2019-01-05 00:25:55.589+0000: Domain id=2 is tainted: host-cpu
    2019-01-05T00:25:55.639506Z qemu-system-x86_64: -chardev pty,id=charserial0: char device redirected to /dev/pts/0 (label charserial0)

    Other than the shutdown, I don't really see any difference. (The shutdown was a force-off that I sent)

     

    Any ideas?

    Inspired by your video, my friend, I am producing a small video myself which walks you through how to set this up!

     

    Standby!

     

    I feel like I'm missing something REALLY simple o.O

     

    You're not. I know what the issue is. I sent a PM, but you haven't seen it, so I'll just post it here.

     

    Switch the container to Bridge mode in the Docker tab. Add an additional port (e.g. 1194 UDP), call it whatever you want, then hit save. Then it will work. The connectivity issue has to do with the container running in Host mode.
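
    In plain docker run terms, that change amounts to something like this (a sketch only; the image name and port numbers are assumptions based on the LinuxServer.io defaults discussed in this thread):

    docker run -d --name=OpenVPN-AS \
      --net=bridge \
      -p 943:943/tcp \
      -p 9443:9443/tcp \
      -p 1194:1194/udp \
      linuxserver/openvpn-as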

     

    I am going to coordinate some changes to the guidance centrally. It's a bit of a pain, as the container was originally set up to run in Bridge mode but was changed (due to issues experienced by users) to run as Host. Now it appears it has to go back again.

     

    Yet... post 3 or so says things NEED to be HOST and PRIVILEGED.

  12. Wall of text making me look like a noob

     

    Thanks! I have been using Linux for years, but never really played with Docker. I was under the impression that it was connecting to the LDAP server in unRAID (unRAID is using LDAP, right???)

     

    Edit: wait, no, that didn't help.

     

    root@Tower:~# docker exec -it OpenVPN-Server-Test passwd admin
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    root@Tower:~# docker exec -it OpenVPN-Server-Test adduser pixel
    Adding user `pixel' ...
    Adding new group `pixel' (1000) ...
    Adding new user `pixel' (1003) with group `pixel' ...
    Creating home directory `/home/pixel' ...
    Copying files from `/etc/skel' ...
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    Changing the user information for pixel
    Enter the new value, or press ENTER for the default
            Full Name []:
            Room Number []:
            Work Phone []:
            Home Phone []:
            Other []:
    Is the information correct? [Y/n]
    

     

    Restarted the container and still get Connection Refused.

    Okay, so I've read through a good portion of this thread and haven't found a fix for my issue. My other Docker containers are running fine at this point.

     

    I DO HAVE A SECOND USER ADDED IN UNRAID'S WEBUI, BUT I DIDN'T ADD A USER VIA SSH. I noticed one of the suggested solutions was to add a second user, but it didn't seem to help.

     

    I'm running a clean install of OpenVPN-AS from the Apps tab in unRAID 6.2, in trial mode, since I want to get everything running before I shell out for it.

     

    I'm on a Q6600 with 4GB of RAM, if that makes any difference. I also have two NICs; my main one is eth0 in unRAID.

     

    Edit: logs and configs removed

     

    I'm getting Connection Refused in Chrome whenever I try to open the webUI, either via tower:943 or local-ip:943.

     

    Normally, I'd try toggling port mappings or switching to bridge mode, but this time I'm just going to leave it alone. I might stop the array and reboot, but that's it.
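
    For anyone landing here later, these are the checks I'd normally start with (standard Docker and Linux commands; the container name comes from whatever your template used):

    docker ps --format '{{.Names}}\t{{.Ports}}'   # is the container up, and is 943 actually published?
    docker logs --tail 50 OpenVPN-AS              # any bind or startup errors from the admin UI?
    netstat -tlnp | grep 943                      # is anything on the host listening on 943?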

  14. Hey

     

    It's probably been asked here before, but what exactly is required to get this container up and running as a basic Apache server?

     

    I've changed my unRAID webUI port to 8080 and rebooted, then used the LinuxServer.io repo to install Apache through unRAID's automated Docker install wizard, then tried to access the "WebUI" for the container, or just port 80 on my server.

     

    I'm getting refused connections. It seems to be an issue with another Docker container too (OpenVPN-AS), yet not Deluge. Odd. The only other thing running is the default LimeTech PlexMediaServer Docker.
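
    For reference, what I expected the install wizard to produce boils down to something like this (a sketch; the image name and appdata path are assumptions based on the LinuxServer.io templates of the time):

    docker run -d --name=apache \
      -p 80:80 \
      -v /mnt/user/appdata/apache:/config \
      linuxserver/apache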
