sweigh

Everything posted by sweigh

  1. I'm trying to set up the Terraria-tModLoader server, and I can't figure out how to get the mods to load. It seems to just run like a normal Terraria server - any idea what I'm missing? Edit: Disregard - I needed to add an enabled.json file to the Mods folder to enable the mods (see the sketch below).
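     For anyone who hits the same thing: enabled.json is just a JSON array of mod names. A minimal sketch (these mod names are placeholders - use the internal names of the .tmod files in your Mods folder):

         [
           "ExampleModOne",
           "ExampleModTwo"
         ]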
  2. Good advice, thanks. I haven't encountered any access issues yet, but I'm using a dedicated Z-NET controller with its own IP, so that's likely why.
  3. I know this post is dated, but since I came across it while attempting the same setup, I figured I'd follow up to let you know that I was able to use the Homeseer container found here to easily set up a container in Unraid, and it's been working great. I've attached a screenshot of my container settings for reference in case it's needed. I changed port 80 to 81 since I use 80 for the Unraid webUI (the equivalent docker run flag is sketched below).
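     In docker run terms, the port change amounts to something like this (the image name and appdata path here are placeholders - the container link above is the authoritative source):

         docker run -d --name homeseer \
           -p 81:80 \
           -v /mnt/user/appdata/homeseer:/config \
           some-repo/homeseer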
  4. What's your GPU? Is it being used by the console? Is your VM set to autostart? Just a guess, but if the GPU gets passed through to the VM on boot and then gets reassigned to the host, maybe the host doesn't want to relinquish control of the device back to the VM when it's restarted? Like I said, just a guess, but I've seen weird things happen when a GPU is shared between host and VM, especially with Nvidia cards.
  5. Sounds similar to an issue I experienced last year with AMD vs Nvidia cards. I never found the root cause, but my workaround was to use an AMD card as the boot device. Something about Nvidia not liking passthrough when the card is initially used as the boot/console device. There was a mention that if both the AMD and Nvidia cards were installed, and the AMD card was used as the boot/console device, then the Nvidia card might get passed through. If you can fit both cards in the system, I'd give that a try.
  6. Hey Wedge, I don't believe integrated graphics can currently be passed through to VMs.
  7. Hmm, if that's the case (an Nvidia GPU not allowing passthrough when it's the boot/console GPU), that could explain Jonp's comment in another thread that it will likely work if I have both the Nvidia and AMD cards installed together, especially if the AMD card is used for boot/console. Now I'm curious, so I'll do some testing this week to see if I can replicate it.
  8. You have two servers in a rack with Xeon-grade hardware and you're complaining about the lack of a second key promo?!?! Man... tough crowd... touché
  9. I've got an HBA in the other slot at the moment, but that's something to consider for the second machine. Assuming that works, though, why would that be the case? I tried both slots with the Nvidia card alone and got the same results.
  10. Looks like it's a video card compatibility issue. The EVGA GT 730 I was trying to use wouldn't get passed through into any VM, but when I swapped it for an AMD Radeon 5000 series, it worked just fine.
  11. So it turns out Unraid doesn't like working with GT 730 video cards, because that was the culprit. I swapped it for an AMD Radeon and it fired right up! Also, since I searched high and low about the Xeon E5420 and VT-d support, let this be confirmation that it does work just fine, assuming your chipset supports it! This makes me very happy, as I have two identical systems in my rack and I was really hoping to virtualize all my HTPCs into one of them. If only the Unraid second key sale were still on...
  12. I'll give a few more things a try and see where I end up. I'll post back with the results.
  13. Actually, it says enabled: Regardless, I'm going to assume that the issues are due to the CPU, as I don't expect it supports VT-d based on its original release date. Oh well.
  14. I'm still scratching my head over the issues mentioned below. I tested some USB passthrough devices on another VM to ensure VT-d is working, and it looks like it is. I'm guessing that maybe the syslinux.cfg changes didn't provide the right solution. Any ideas? Edit: Never mind - based on my discussions in this post, http://lime-technology.com/forum/index.php?topic=42770.msg407518#msg407518, I'm assuming my issues are hardware related.
  15. So I ran a quick test on another VM (Windows 8.1), passing through a couple of USB devices without issue. Is it safe to say VT-d is operational? If so, then that narrows down my video issues with OpenELEC somewhat, though I'm still at a loss as to why I don't get any video output.
  16. Good idea, I'll play with some other devices to see if I can get any results. I wasn't sure if the BIOS setting would be present regardless of CPU, but it's definitely there and definitely enabled so I'm still hopeful...
  17. I'm running into some issues with the OpenELEC VM, and I'm wondering if it's VT-d related. I'm running dual Xeon E5420s (http://ark.intel.com/products/33927) in an Asus DSEB-DG/SAS motherboard (https://www.asus.com/Commercial-Servers-Workstations/DSEBDGSAS/). The issue I'm having is that GPU passthrough doesn't get anything on the display. The motherboard supports VT-d, and it's enabled via BIOS; however, the CPU doesn't list VT-d on the Ark site, so I'm not sure whether it supports it. I assume from the issues I'm having that it likely doesn't, but I'm looking for confirmation, or a way I can verify VT-d compatibility (one standard check is sketched below).
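     For reference, the usual way to check from the console is to boot with intel_iommu=on and then grep the kernel log for DMAR/IOMMU messages; if VT-d is actually active you should see lines like "DMAR: IOMMU enabled" and DMAR table parsing:

         dmesg | grep -i -e DMAR -e IOMMU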
  18. Might have made some progress... I edited my syslinux.cfg so the unRAID OS boot entry appends intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 pcie_acs_override=downstream, as shown below:

         default /syslinux/menu.c32
         menu title Lime Technology
         prompt 0
         timeout 50
         label unRAID OS
           menu default
           kernel /bzimage
           append intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 pcie_acs_override=downstream initrd=/bzroot
         label unRAID OS Safe Mode (no plugins)
           kernel /bzimage
           append initrd=/bzroot unraidsafemode
         label Memtest86+
           kernel /memtest
         label Xen/unRAID OS
           kernel /syslinux/mboot.c32
           append /xen --- /bzimage --- /bzroot
         label Xen/unRAID OS Safe Mode (no plugins)
           kernel /syslinux/mboot.c32
           append /xen --- /bzimage --- /bzroot unraidsafemode

     It got the VM up and running, but I just get a blank screen. I'm also unable to shut down the VM via the VM Manager. I've attached my system diagnostics for your review. I'll continue troubleshooting tomorrow, I suppose. tower-diagnostics-20150906-0128.zip
  19. Hey Jonp, I've got an issue similar to SuperW2's on the first page, only it doesn't seem to be an IOMMU group conflict, but a permission issue. Take a look: Just for reference, here's my IOMMU group list:

         /sys/kernel/iommu_groups/0/devices/0000:00:00.0
         /sys/kernel/iommu_groups/1/devices/0000:00:01.0
         /sys/kernel/iommu_groups/1/devices/0000:0b:00.0
         /sys/kernel/iommu_groups/1/devices/0000:0b:00.1
         /sys/kernel/iommu_groups/2/devices/0000:00:05.0
         /sys/kernel/iommu_groups/2/devices/0000:0a:00.0
         /sys/kernel/iommu_groups/3/devices/0000:00:09.0
         /sys/kernel/iommu_groups/3/devices/0000:05:00.0
         /sys/kernel/iommu_groups/3/devices/0000:05:00.3
         /sys/kernel/iommu_groups/3/devices/0000:06:03.0
         /sys/kernel/iommu_groups/3/devices/0000:07:00.0
         /sys/kernel/iommu_groups/3/devices/0000:07:02.0
         /sys/kernel/iommu_groups/3/devices/0000:08:00.0
         /sys/kernel/iommu_groups/3/devices/0000:08:00.1
         /sys/kernel/iommu_groups/4/devices/0000:00:0f.0
         /sys/kernel/iommu_groups/5/devices/0000:00:10.0
         /sys/kernel/iommu_groups/5/devices/0000:00:10.1
         /sys/kernel/iommu_groups/5/devices/0000:00:10.2
         /sys/kernel/iommu_groups/5/devices/0000:00:10.3
         /sys/kernel/iommu_groups/5/devices/0000:00:10.4
         /sys/kernel/iommu_groups/6/devices/0000:00:11.0
         /sys/kernel/iommu_groups/7/devices/0000:00:15.0
         /sys/kernel/iommu_groups/7/devices/0000:00:15.1
         /sys/kernel/iommu_groups/8/devices/0000:00:16.0
         /sys/kernel/iommu_groups/8/devices/0000:00:16.1
         /sys/kernel/iommu_groups/9/devices/0000:00:1c.0
         /sys/kernel/iommu_groups/9/devices/0000:00:1c.2
         /sys/kernel/iommu_groups/9/devices/0000:00:1c.3
         /sys/kernel/iommu_groups/9/devices/0000:02:00.0
         /sys/kernel/iommu_groups/9/devices/0000:03:00.0
         /sys/kernel/iommu_groups/10/devices/0000:00:1d.0
         /sys/kernel/iommu_groups/10/devices/0000:00:1d.1
         /sys/kernel/iommu_groups/10/devices/0000:00:1d.7
         /sys/kernel/iommu_groups/11/devices/0000:00:1e.0
         /sys/kernel/iommu_groups/12/devices/0000:00:1f.0
         /sys/kernel/iommu_groups/12/devices/0000:00:1f.1
         /sys/kernel/iommu_groups/12/devices/0000:00:1f.2
         /sys/kernel/iommu_groups/12/devices/0000:00:1f.3

     I tried relocating the GPU to other PCIe slots, as well as enabling ACS Override, to no avail. The GPU is an Nvidia GT 730. The motherboard is an Asus DSEB-DG/SAS. My next step is to review the BIOS settings, but I'm hoping you can chime in with something.
  20. Not sure about your original link, but there is a different release for a dedicated Terraria docker found here: https://github.com/ryansheehan/terraria I converted it using the Docker Search plugin in Unraid, and it seems to be operational, but I can't figure out how to run it and pass through my world (my best guess is sketched below). Hopefully someone else can chime in with more success.
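     For anyone experimenting along with me, my best guess at the invocation looks like the following - the image name, mount point, and -world flag are all my reading of that repo's README, so treat this as a sketch to verify rather than a known-good command:

         docker run -d --name terraria \
           -p 7777:7777 \
           -v /mnt/user/appdata/terraria:/world \
           ryshe/terraria \
           -world /world/MyWorld.wld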
  21. Ah ok, that makes sense then. I guess I'll stick with 802.3ad then. Thanks!
  22. I'm also looking for clarification on the bonding setup, specifically balance-rr (0). I've got link aggregation set up / LACP enabled on my switch (Cisco SG200-50); however, I'm getting about 75% packet loss. I've got 4 ports bonded, but 3 are always in standby, with 1 active. I assume this is normal as it's a round-robin active/standby setup, but it's so slow that it's virtually useless. If someone who has successfully set up balance-rr could post their network.cfg or go file, I would appreciate it. For reference, my current config is below.
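     The relevant bonding keys look like this (key names as they appear in /boot/config/network.cfg on Unraid 6; the addresses are examples):

         BONDING="yes"
         BONDNAME="bond0"
         BONDNICS="eth0,eth1,eth2,eth3"
         BONDING_MODE="0"   # 0 = balance-rr
         USE_DHCP="no"
         IPADDR="192.168.1.10"
         NETMASK="255.255.255.0"
         GATEWAY="192.168.1.1"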
  23. Well, Plex was only the first issue that I noticed after virtualizing my services:
     - When Plex starts deleting, it loses its access to the mounts (I can't stream any of the movies it's already scanned in).
     - Sabnzbd gives a warning that it cannot create a folder that already exists. It will still download to this folder, but the downloads thus far have been incomplete.
     - Sickbeard gives an error for about half my TV shows: "Error trying to add show: Invalid folder for the show!"
     - My XBMC servers (which connect to a virtualized SQL server) are unpredictable at best on whether the scans to/from the SQL VM work.
     So, thus far, every service I've virtualized seems to be having network-related issues. I've assigned IPs to the VMs via MAC address on the router, so nothing complicated in this setup, and I've got three VMs going (arch_plex, arch_sql, arch_usenet), all of which are having some sort of issue that feels related. EDIT: It seems some of these issues are caused by autofs/NFS, and others by permissions problems. Some people had similar issues that were discussed on pages 21-23. My workaround was to manually mount my shares via fstab (sketched below), as well as assign the appropriate permissions for Sickbeard/Sabnzbd. The above issues are resolved now.
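     For anyone hitting the same thing, the fstab entries look roughly like this (server name, exports, and mount points are examples - substitute your own):

         # /etc/fstab on each VM
         tower:/mnt/user/Movies  /mnt/movies  nfs  defaults  0  0
         tower:/mnt/user/TV      /mnt/tv      nfs  defaults  0  0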
  24. Has anyone had issues with Plex not playing nice with the array shares? I can point Plex to my movie library, and it'll start scanning for a period of time until it eventually seems to lose its connection to the shares and then starts deleting everything it's just scanned in. I'm wondering if maybe this is a timeout issue with autofs? Just a guess... And I don't know if it's related, but when I do manage to get the array to respond to Plex and run a movie, I get the following prompts on the console:

         [ 414.020623] xen_netfront: xennet: skb rides the rocket: 21 slots
         [ 414.021177] xen_netfront: xennet: skb rides the rocket: 19 slots
         [ 464.418772] xen_netfront: xennet: skb rides the rocket: 19 slots
         [ 506.667887] xen_netfront: xennet: skb rides the rocket: 21 slots
         [ 506.671802] xen_netfront: xennet: skb rides the rocket: 19 slots

     Edit: Further investigation leads me to believe I'm getting packet loss across all my VMs (3 total). That can't be good... I'll try installing a different NIC and see if that helps.
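     (Side note for anyone who finds this later: the "skb rides the rocket" messages from xen_netfront are a known symptom of oversized offloaded packets on Xen virtual NICs, and a commonly suggested mitigation - not a guaranteed fix - is to disable TSO/GSO on the guest's interface and see whether the packet loss stops. Interface name is whatever your VM uses, eth0 here:

         ethtool -K eth0 tso off gso off
     )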
  25. I'm interested to see where this plugin progresses, as it would be a useful next step to "quarantine" my downloading habits. Cutting out a middleman always simplifies the process, and I have a garage full of hardware that I can start throwing together for projects like this. As always, I'll put my dollar behind the guy who can get it functioning/supported.