Mike Howles

Members

Posts: 12

  1. Sorry if this has been answered recently, but searching hasn't turned up anything too helpful. I have been using this plugin with no problems on my GTX 1080s; however, I have a second Unraid build in which I'd like to use an Nvidia Tesla K80 that I picked up for some machine-learning use cases. Since this plugin did not seem to have an option/driver for Tesla GPUs from what I could see, I was wondering if there is a manual process to install the Tesla driver after installing the plugin. I've tried passing the GPU through to an Ubuntu VM, but I'm having other driver install issues there (probably user error on my part), so I was wondering whether this is feasible in Unraid with some manual steps. Basically I just want to run some stuff in Docker containers, whether that's directly in Unraid's Docker or in a VM with PCIe passthrough. Has anyone got experience getting a Tesla to work with either approach in Unraid?
     EDIT: Just an update: the 'regular' Nvidia drivers worked. My problem was that I was using an old motherboard/CPU that did not support IOMMU for PCIe passthrough; a new motherboard and CPU solved this.
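     For anyone hitting the same wall: before blaming drivers, it's worth confirming the platform actually exposes IOMMU groups. A minimal check script (standard sysfs paths, nothing Unraid-specific):

         #!/bin/bash
         # List every IOMMU group and the devices inside it. No output
         # at all means IOMMU is disabled in the BIOS or unsupported by
         # the CPU/board, which was exactly my problem.
         shopt -s nullglob
         for g in /sys/kernel/iommu_groups/*; do
             echo "IOMMU group ${g##*/}:"
             for d in "$g"/devices/*; do
                 echo -e "\t$(lspci -nns "${d##*/}")"
             done
         done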
  2. Apologies if this has already been answered, but I searched here and on Google and came up blank. I get the following error in the log when trying to create a regular RDP connection to a new Windows 10 box:

         guacd[948]: INFO: User "@b28e83be-7933-4609-88c6-3752f3d9e8a6" disconnected (0 users remain)
         guacd[948]: INFO: Last user of connection "$464a76e4-48b9-4a35-a8b1-f46fd2646f22" disconnected
         guacd[14]: INFO: Connection "$464a76e4-48b9-4a35-a8b1-f46fd2646f22" removed.
         guacd[14]: INFO: Creating new client for protocol "rdp"
         guacd[14]: INFO: Connection ID is "$464a76e4-48b9-4a35-a8b1-f46fd2646f22"
         guacd[948]: INFO: Security mode: ANY
         guacd[948]: INFO: Resize method: none
         guacd[948]: INFO: User "@b28e83be-7933-4609-88c6-3752f3d9e8a6" joined connection "$464a76e4-48b9-4a35-a8b1-f46fd2646f22" (1 users now present)
         guacd[948]: INFO: Loading keymap "base"
         guacd[948]: INFO: Loading keymap "en-us-qwerty"
         connected to 192.168.1.186:3389
         creating directory /root/.config/freerdp
         creating directory /root/.config/freerdp/certs
         creating directory /root/.config/freerdp/server
         certificate_store_open: error opening [/root/.config/freerdp/known_hosts] for writing
         SSL_read: Failure in SSL library (protocol error?)
         SSL_read: error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error
         guacd[948]: ERROR: Error connecting to RDP server

     I have not changed or disabled any RDP parameters on the Windows 10 box aside from simply enabling Remote Desktop itself. The connection is set up to use NLA (the Windows 10 default, from my reading) and Ignore Server Certificate. Any thoughts? I can connect to this box fine from Mac and Windows RDP clients.
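     In case it helps anyone debug the same thing: since guacd uses FreeRDP under the hood, reproducing the connection with the FreeRDP CLI from another Linux box can show whether the problem is Guacamole's settings or the TLS/NLA negotiation itself. A sketch, assuming FreeRDP 2.x flag syntax and a placeholder username:

         # Force NLA and ignore the server certificate, mirroring the
         # Guacamole connection settings ('someuser' is a placeholder).
         xfreerdp /v:192.168.1.186:3389 /u:someuser /sec:nla /cert:ignore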
  3. StarTech.com 4 Port PCI Express (PCIe) SuperSpeed USB 3.0 Card Adapter w/ 2 Dedicated 5Gbps Channels
     Chipset: Renesas/NEC µPD720202
     Findings: I purchased this because it has 4 individual USB controllers on the board, thinking the extra price would allow for more flexibility with my VMs. It requires SATA/Molex power and works fine when booting natively into Windows 10. It is recognized in Unraid under Settings -> Hardware; however, when passing the PCIe card through to a VM, the card is detected but always errors out with Code=10. This happens with both the standard Windows drivers and the drivers from the manufacturer's website. There are other posts online with the same result (Code=10 error in Device Manager) but no promising workarounds. I suspect something at the Linux level, since when I do not pass the card through and instead hook USB devices up to it directly, they are also not recognized by Unraid. I give up on this card.
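     If anyone else wants to try isolating this card from the host before passthrough, here's the sketch I'd start from. The 1912:0015 vendor:device ID is what I'd expect for a µPD720202, but verify with lspci on your own card:

         # Find the card's vendor:device ID.
         lspci -nn | grep -i usb

         # Bind it to vfio-pci at boot by adding this to the append line
         # in Main -> Flash -> Syslinux Configuration, then rebooting:
         #   vfio-pci.ids=1912:0015

         # Confirm vfio-pci now owns the controller.
         lspci -nnk -d 1912:0015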
  4. @takkkkkkk did this work after you removed the space after the = sign? Mine passes through but I get a Code=10 in Device Manager in any VM I pass it to.
  5. Piling on to this thread. I spent $80 on this card thinking it would work; it is recognized, then just throws Code=10 in Device Manager :(
  6. Based on my reading, I see that I can add new keyfiles to my encrypted array drives with a command such as:

         cryptsetup luksAddKey /dev/md1 --key-file /root/keyfile

     where md1 = disk1, md2 = disk2, etc. However, I cannot seem to figure out what the naming convention is for cache drives. Can someone fill me in? Thank you.
     EDIT: I think I found it in the Disk Log information. It appears to be /dev/sdd1 in my case. Leaving this here in case it helps others.
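     For anyone else landing here, a short sketch of how I'd confirm the cache device before adding a key (the sdd1 name is just my system; yours will differ):

         # List block devices and filesystem types; the encrypted cache
         # member shows up with FSTYPE crypto_LUKS on its partition.
         lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT

         # Double-check the partition really is a LUKS container.
         cryptsetup isLuks /dev/sdd1 && echo "LUKS container confirmed"

         # Add the keyfile to a free key slot (prompts for an existing
         # passphrase or key first).
         cryptsetup luksAddKey /dev/sdd1 --key-file /root/keyfile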
  7. So I've done a semi-equivalent of what you guys are doing here, except I'm using scp to pull my key down from an always-on AWS micro instance (though I might switch this to an on-premises Pi Zero; thoughts on the drawbacks/benefits there?). I like the examples where the key can be deleted after the disks are mounted, which is why I stumbled onto this thread. One idea I had was to enhance the scp/ftp/whatever transfer process so that it fires off a process on the remote machine holding the key, say sending an SMS text to my phone, where I respond with a PIN code to allow the key transmission to take place. Are there any inherent flaws in pursuing this? One concern is that there will obviously be a lag between when the Unraid go script issues the ssh command and when the key is transmitted back, so is that a race condition? Is there a shell command on the Unraid side that I can call once the file has been received to mount the drives?
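     To make the question concrete, here's a rough sketch of the go-script flow I have in mind. The host, paths, and retry count are placeholders, and the array-start step is deliberately a stub, since the right Unraid-side command is exactly what I'm asking about:

         #!/bin/bash
         # Sketch: fetch the LUKS keyfile from an always-on remote host,
         # retrying a few times to absorb network lag before giving up.
         KEY_HOST="user@keyhost.example.com"   # placeholder host
         KEY_REMOTE="/home/user/keyfile"       # placeholder remote path
         KEY_LOCAL="/root/keyfile"

         for attempt in 1 2 3 4 5; do
             scp -o ConnectTimeout=10 "$KEY_HOST:$KEY_REMOTE" "$KEY_LOCAL" && break
             sleep 10   # brief backoff before retrying
         done

         if [ -f "$KEY_LOCAL" ]; then
             # TODO: trigger the array start / disk unlock here, then
             # delete the key, e.g. shred -u "$KEY_LOCAL". This is the
             # Unraid-side command I'm asking about.
             :
         fi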
  8. Yeah, I picked the StarTech one for a similar reason. Maybe I'll return it and get the one you have. And regarding Oculus: I've seen others mention they've gotten it working; however, my issue is that the Windows Mixed Reality setup doesn't seem to interpret the headset's USB plug-in event quite right. I'll probably give the Sonnet a go and see if that solves it. Thanks!
  9. I've had a normal experience thus far with Docker in Unraid. The web UI is good enough to see what is running, and Portainer runs perfectly for me in Unraid. When you pull an image via the Docker CLI or Portainer, it will show up in the Unraid web UI; many of the convenience additions and options are disabled for it there, but it at least shows up visually, so the two can coexist nicely.
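     For anyone wanting to try the same combo, this is roughly the invocation I'd expect to work; it's the classic Portainer run command against the local Docker socket, and the appdata path is just my choice of a typical Unraid location:

         # Run Portainer against the local Docker socket; the appdata
         # path below is an assumed Unraid-style location for its data.
         docker run -d \
           --name=portainer \
           -p 9000:9000 \
           -v /var/run/docker.sock:/var/run/docker.sock \
           -v /mnt/user/appdata/portainer:/data \
           portainer/portainer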
  10. @jonp @limetech thanks for that quick update. I understand, and it is quite a minor trade-off. To be honest, many of the PC games that I am playing do not benefit tremendously from SLI anyway. For instance, even with a single 1080, I am able to push 144fps to my 144Hz monitor in a game like Overwatch (both bare-metal and passthrough, which is mind-blowing to me). So what this should actually free me up to do is use one GPU to game on and the second for possibly some long-running CUDA task in a separate VM. I'd rather keep all the CUDA stuff separate from my Windows 10 gaming OS anyway. This makes Unraid a great fit for me.
     As for my other limitation, I've nearly gotten VR to work (Lenovo Windows MR setup). It actually will pass through the Lenovo USB headset and acts like it wants to run through the setup; however, it then ends with an error code that leads me to believe I will need to pass through an actual USB controller itself. Last week I purchased a StarTech USB PCIe card (the one with 4 controllers, thinking that would be beneficial). I was able to follow some tutorials to expose this PCIe device to my VM, and it is passed through; however, all 4 controllers show a yellow warning sign with an error code 10 message. My Googling turned up a few others having the same result. Perhaps I chose the wrong card, which is a shame, as it was (for a USB card) relatively pricey at $80. I'll keep investigating this and maybe post in a support thread at a later time to see if there is a better approach or figure out what I am doing wrong. It would be GREAT to not have to boot bare metal into Win 10 for VR! Thanks again for the SLI tip. That saves me trying to find a solution in vain!
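     As a follow-up for anyone debugging a similar headset issue: before buying a card, it's worth checking which PCIe USB controller the headset actually hangs off of, since that whole controller is what gets passed through. A quick sketch (usb3 below is an example bus number):

         # Show the USB topology: which device sits on which bus/port.
         lsusb -t

         # Map a USB bus (usb3 as an example) back to its PCI address;
         # that PCI address is the device you'd pass through to the VM.
         readlink -f /sys/bus/usb/devices/usb3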
  11. Hello folks, so I'm close to biting the bullet and moving from tinkering with Unraid to going full crazy with it as a strange hybrid gaming/NAS/Docker/VM/development rig. Fortunately I already had an overpowered gaming rig, so I'm hopeful that Unraid will allow me to use it to its full potential.
     Specs:
     - ASUS STRIX Z370-E Gaming motherboard
     - Intel i7 8700K
     - 64 GB DDR4 RAM
     - 2 GeForce GTX 1080s with SLI bridge (the bridge doesn't seem to come across when I expose both cards to a VM at the moment, though)
     I've been running the trial increasingly over the past 2 or 3 weeks, slowly shucking old external drives and consolidating the data (and discovering along the way that a few of the drives were in their death throes). I'm currently in the "musical chairs" stage of moving the data onto fewer but larger drives, so after a few trips to Costco I now own 3x Seagate 8TB drives ($139/ea; I was surprised how cheap), of which 2 will stay as array drives and one will serve as parity. Finally, I've got a handful of SSDs, one of which will act as the cache drive, while another currently boots natively into Windows 10 (for SLI and VR reasons) but is also bootable within an Unraid VM via the /dev/disk/by-id/xyz123 method for other gaming where VR or SLI is not needed.
     As someone who has been running VMware vSphere for 2 years now on 2 Intel NUC Skull Canyons, this is definitely a neat change and a different approach to running a VM/Docker/storage/development home lab. The benefit I've found so far is that the web UI is quite easy to use, and the storage and shares seem quite flexible. While I was never able to try PCIe passthrough on a NUC (because it was a NUC), Unraid made PCIe passthrough very easy. I had initially been Googling KVM and QEMU and passthrough and it looked intimidatingly complicated; however, folks like Linus Tech Tips and Space Invader One have been invaluable with their videos, and I cannot give them enough props for such encouraging and helpful content.
     Hopefully once the final 8TB drives finish clearing and I can button the case back up, I'll have a fun photo to post of the final "build" (are builds ever actually DONE, though? ;)), and I also hope to return some knowledge or tutorials if I come up with new techniques drawing on my programming experience. Great product!
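     For anyone curious about the /dev/disk/by-id method mentioned above, a minimal sketch (the drive name in the comment is made up; use your own drive's entry):

         # Stable per-drive identifiers; unlike /dev/sdX names, these
         # survive reboots and controller reshuffles.
         ls -l /dev/disk/by-id/

         # Pick your drive's entry (something like
         # ata-Samsung_SSD_860_EVO_1TB_S3Z8NB0K123456X; that serial is
         # made up) and hand that full path to the VM as its raw disk,
         # e.g. via the manual disk path field in the Unraid VM editor.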