jordanmw

Members · Posts: 288 · Days Won: 2

Everything posted by jordanmw

  1. I have several QNAPs and can't understand why you would ever want to do this. It will be underpowered and slower than snot, especially with the unit you have. If it were at least a Celeron there might be hope, but not with an Atom. Docker containers are out of the question, and the overhead of Unraid itself will probably eat most of your resources during a basic file copy.
  2. I think the issues with that process are related to Windows activation and the hardware changes between the VM and the physical machine. Other people have posted some workarounds, but it will definitely be a bigger pain in the rear than just excluding a drive from Unraid altogether and booting to Windows on that drive. Whether the physical Windows install can reach any other drives Unraid manages would depend on the setup. Technically, you are not really letting Unraid do anything except act as a lightweight hypervisor with some unassigned devices that are consumed by VMs when they boot.
  3. I think the biggest issue you will have is the initial setup and configuration. Once that is done, things smooth out considerably, but you almost need physical access for the initial install and troubleshooting. I guess it would depend on where you are in the world and what consultation you are looking for. What did you have in mind? I have 3 Unraid builds that I am supporting as multi-headed gaming machines; usually it's not difficult, but when things go wrong, I usually restore from backup instead of troubleshooting the issue. A lot of issues can be prevented with the right setup, so it's important to get it right the first time and heavily stress all the VMs/Dockers before judging stability. The system's primary functions also need to be run during testing, and they can be tricky to max out without real-world resource usage.
  4. Man, do I miss MCEBuddy and WMC!! It was actually the most polished solution for a set-top device that could do it all... I mourned it when it died. It seems like Unraid might be a good fit for your use case. You could have a VM running 24/7 with your WMC up for all to enjoy. The only major challenge I can see coming for you is the physical layout and USB passthrough for the devices at those locations. I would keep your M.2 either as a passthrough drive for a Win 10 VM or maybe as your cache drive. The speed will be helpful for the VMs' boot drives.
  5. I've got the Taichi X399 and a 1920X currently set up as a 4-player gaming computer, so you could save a little money and go with that CPU. If it is plenty for my setup, yours seems like an even lighter use case. I just passed disks through directly to each VM as a data location, then put the main image on cache. Then just back up the OS image and all the data will stay safe. I do that process nightly (a rough sketch of that kind of nightly copy job is below this list), and a restore of all 4 VMs takes about 20 minutes. I have 4 kiddos and each of them has their own workstation.
  6. Yeah, most people have had to add a USB extender, and sometimes even that fixes the issue... if you read anything from that thread.
  7. https://en.wikipedia.org/wiki/Choke_(electronics)#/media/File:Ferrite_bead_no_shell.jpg It's just a ferrite ring that goes around a USB cable to prevent interference. Similar issue here:
  8. This is likely either USB interference, which can be solved by a choke on the cable, or the power-management settings for the mouse in Device Manager. Logitech mice are really sensitive to electrical interference; others here have had similar issues.
  9. Might have your hookup: https://fortcollins.craigslist.org/sys/d/fort-collins-2x-dell-r210-ii/6850738482.html
  10. USA, Northern Colorado. PayPal or local pickup; shipping determined by seller. 2 Dell servers for sale, great for a home lab. Identical servers, $400 each: 2x Dell R210 II 1U servers, each with a 3.3 GHz quad-core Xeon, 32 GB DDR3 ECC RAM, 2x onboard Broadcom 1Gb NICs, 2x Intel 1Gb network-bypass NICs, and a 500 GB hard drive. Really great servers that I have used as my lab environment for the last year or so. Housed in a data center, these servers have been well taken care of; I'm only selling because I lost my free datacenter hookup. Also have a QNAP TS-269 2-bay NAS with 2x 1Gb networking I'll throw in for another $75. Same for a 48-port Cisco Catalyst 1Gb switch: $75. That gives you everything you need to set up a powerful VMware HA cluster with the QNAP as the datastore. $900 for everything.
  11. That's why I went with the ASRock Taichi X399... all the important features without the price.
  12. Answered privately, but here for the forum: https://www.microsatacables.com/u2-sff8639-to-pcie-4-lane-adapter-sff-993-u2-4l I had a Corsair Air 740, which does have a small front 2.5" bay location that I adapted my USB card to fit into.
  13. +1 would love this support
  14. There are still 60 Hz limitations on the ones I tried. Yeah, you'd break out the audio from the monitor. Thunderbolt is great but, as you mentioned, expensive.
  15. Probably the most elegant solution with the best latency is to get separate HDMI-over-IP and USB-over-IP adapters. Many of them don't need power, and since HDMI can carry the audio too, you are really only talking about 2 cables to each station, plus power for the monitor (with speakers). My wife and I game on a 4-headed setup with another couple, and I just went the long-cable route since I am only 40-ish feet from the rig. I tested some adapters but didn't have enough Cat6 to get them where I wanted; they seemed to work well latency-wise with a gigabit switch. Let me know what you end up going with and what your performance is like; I may go that route in the future.
  16. My 960 required a BIOS ROM because it was my first video card, but the other 3 didn't. That was the only way I could get that one up.
  17. Ok, everyone here saying your VM is way too low-end for ARK is 100% correct, but the network share is also an issue. I run 4 gaming VMs and an ARK server VM on my Unraid setup, and I used to have 2 of the machines set up with a network vdisk; ARK would not work on those 2 machines until I passed through an SSD and installed there. So, while your VM is FAR TOO underpowered to expect anything but crashes from ARK, the network share is likely the cause. I have a GTX 960 with 10 GB of RAM and 2 physical cores (4 threads with SMT) passed through from my 1920X, and it still looks like doggy doo. It runs, and if you turn down virtually all the eye candy it gives decent framerates... mostly :(
  18. Heh, of course you do. We are talking about practical reasons- not ethical ones. Anyone know of any functional limitations besides theme changes?
  19. I've also had no issues with CS or any game using VAC. Looks like it must be a non-issue at this point; I have a wide selection installed and have yet to run into problems with anything. I will also note that I have the hypervisor flag set to false.
  20. No, I can see temps but can't control fan/pump speed. If the pump has a USB header, you can pass it through to a Windows VM and install the control software if you have it. Otherwise, I just did all the fan/pump speed tweaking in the BIOS; that is really the best way.
  21. So, when I started my TR build, I went through the usual paces and noticed that my thermals were insanely high. I started freaking out and exchanged my water cooler, but the temps were still through the roof. I did some research and found that there is a 27-degree offset for that CPU: on specific sensors it reports a temp 27 degrees higher than the actual die temperature (see the small worked example below this list). I run about 50C at the high end and idle around 30C; once I picked the sensors I know report without the offset, everything looked great. I never get thermal throttling, so I'm not sure what is going on there, but I did set my pump to 1400 RPM. 900 sounds way slow; unless you have a loud pump, you should get it to at least 1200.
  22. Glad to hear it, Russ... hope everything is smooth sailing for you from here on out.
  23. Yeah, that is where my current licenses came from, but I can't justify spending 180-ish on licensing 4 Windows copies just to get rid of the watermarks.
  24. No, this is expected. If you didn't exclude the card at boot, Unraid will boot using that card for video, and if a VM is then booted with that GPU assigned, it will take over the card. He has a Threadripper, so there's no iGPU.
  25. I had some similar issues, and the fix really boiled down to dumping the BIOS from the actual card I was using, editing out the NVIDIA header with a hex editor, and then pointing the VM at that ROM when building the machine (a rough sketch of that header trim is below this list). Then I ran the full Windows install from the physical monitor and it worked flawlessly. If I added VNC at any point, it would not load the video drivers.
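
A minimal sketch of the kind of nightly vdisk copy mentioned in post 5, assuming the OS images live under /mnt/user/domains/ and backups go to a /mnt/user/backups/vdisks/ share; both paths, the vdisk1.img naming, and the script itself are illustrative assumptions rather than the poster's actual process, and the VMs should be shut down before copying.

```python
#!/usr/bin/env python3
"""Sketch: copy each VM's OS vdisk image into a dated backup folder."""
import shutil
from datetime import date
from pathlib import Path

DOMAINS = Path("/mnt/user/domains")          # where the vdisk images live (assumed path)
BACKUPS = Path("/mnt/user/backups/vdisks")   # backup share (assumed path)

def backup_vdisks() -> None:
    target = BACKUPS / date.today().isoformat()
    target.mkdir(parents=True, exist_ok=True)
    for vdisk in sorted(DOMAINS.glob("*/vdisk1.img")):    # one OS image per VM folder (assumed layout)
        dest = target / f"{vdisk.parent.name}-{vdisk.name}"
        print(f"copying {vdisk} -> {dest}")
        shutil.copy2(vdisk, dest)                         # copy with timestamps preserved

if __name__ == "__main__":
    backup_vdisks()
```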
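
For the offset in post 21: first-generation Threadripper reports a Tctl value that runs 27 °C above the actual die temperature (Tdie) on the sensors that include the offset. A trivial worked example of the conversion:

```python
TCTL_OFFSET_C = 27.0  # Tctl reads 27 °C above Tdie on 1st-gen Threadripper X parts

def tdie_from_tctl(tctl_c: float) -> float:
    """Convert a reported Tctl reading to the approximate die temperature."""
    return tctl_c - TCTL_OFFSET_C

# Example: a sensor showing 77 °C Tctl is really about 50 °C at the die,
# in line with the ~50C high-end figure in post 21.
print(tdie_from_tctl(77.0))  # -> 50.0
```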
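
And for post 25, one way to automate the hex-editor step is to trim everything before the 0x55AA option-ROM signature that sits just ahead of the ASCII "VIDEO" marker in the dumped NVIDIA ROM. This is a sketch under that assumption; the file names are placeholders, and it's still worth verifying the trimmed ROM in a hex editor before pointing the VM at it.

```python
#!/usr/bin/env python3
"""Sketch: strip the vendor header from a dumped NVIDIA vBIOS file."""
from pathlib import Path

SRC = Path("gtx960-dump.rom")     # ROM as dumped from the card (placeholder name)
DST = Path("gtx960-trimmed.rom")  # header-free ROM to reference when building the VM (placeholder name)

data = SRC.read_bytes()
video = data.find(b"VIDEO")                  # marker near the start of the real ROM image
if video == -1:
    raise SystemExit("no VIDEO marker found; is this an NVIDIA ROM dump?")
start = data.rfind(b"\x55\xaa", 0, video)    # option-ROM signature just before the marker
if start == -1:
    raise SystemExit("no 0x55AA signature found before the VIDEO marker")
DST.write_bytes(data[start:])
print(f"wrote {DST}: kept {len(data) - start} bytes, trimmed {start} header bytes")
```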