Everything posted by JesterEE

  1. I see this too ... it's infrequent unless I also install the GPU Statistics Plugin, then it's constant. GTX 1060 here. With the GPU Statistics Plugin:
Mar 19 20:32:46 Tower kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
Mar 19 20:32:49 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
A few minutes later, after uninstalling the GPU Statistics Plugin (no caller line in the log):
Mar 19 20:37:08 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
  2. https://developer.nvidia.com/video-encode-decode-gpu-support-matrix TL;DR: I wouldn't bother unless you have lots of h.264 content and need more streams than your CPU can handle.
  3. So I did what I said in the previous post. Nuked my Unraid USB (after backing it up, of course!), loaded 6.8.2, created a dummy array with another single USB drive, set an SSD up as an Unassigned Device for a VM (actually, just used the same SSD/VM I have been using), isolated half the cores for the VM, stubbed my GPU, and played with a few benchmarks and games to test it out. Butter...so, so smooth. Bare metal performance. So, now I know it's possible ... now to figure out what's causing it to stop being that way. The struggle continues... -JesterEE
  4. Config update: Tried @zeus83's recommendation and I'm still having similar issues. Loaded @Skitals' 6.8.0-RC5 kernel but was having issues passing through my Nvidia card, so I scrapped that effort. Upgraded to 6.8.2, same expected issues, but now my Nvidia card is passing through fine with QEMU 4.2. My last straw is thermonuclear ... Maybe I set an Unraid configuration option or loaded a plugin somewhere along the line that is interfering with the VM performance. I'm going to temporarily disable my array, load a fresh install on my USB drive, and just run the virtual machine. No docker, no plugins, no tweaks, no nothing ... vanilla. I'll try 6.8.2 and 6.8.0-RC7. If one works really well, I know it's something I did, and I'll re-set up my array and reconfigure Unraid as I like it until I figure out what is causing the issue, but I have low expectations. -JesterEE
  5. Anyone using this kernel with Nvidia GPU VM passthrough? A VM will start once, but if I restart it, or start another VM that uses the GPU, the VM boots but I get no signal from the card. I need to restart the server to get any signal again. I was trying this kernel for the audio passthrough on the x570. Was this the issue that caused Limetech to pull the NAVI patch from the 6.8.0-RCs? -JesterEE
  6. @zeus83 Thanks for the comment. When I reverted to the simple XML, those lines were removed and I still have stuttering. My XML now is what Unraid generates in the GUI, plus edits for the CPU feature policy='require' name='topoext', multifunction-corrected GPU/HDMI passthrough, and disk cache='none'. It's hard to quantify better or worse stuttering between VM settings since it's very subjective. My unscientific bar is "Can I do something with medium hardware intensity for 10-15 mins and notice significant lag/stutters?". The answer is always yes 😭. I'm currently running the default VM clock setup:
<clock offset='localtime'>
  <timer name='hypervclock' present='yes'/>
  <timer name='hpet' present='no'/>
  <timer name='tsc' present='yes' mode='native'/>
</clock>
My current_clocksource is tsc and my available_clocksource is tsc hpet acpi_pm. My system does not sleep, so it should always be on the tsc clock. I did not, however, set the kernel parameter 'tsc=reliable', so that's something I will try (a sketch of where the parameter goes is after this post)! Thank you so much for the recommendation! Getting rid of clock stability checks doesn't seem like a great idea overall ... but it's worth a try ... and some more reading if it actually works. Note: the DPC Latency Checker you linked does not work on newer versions of Windows (>8), per their website. The only software I know of for DPC latency checking that works in Windows 10 is LatencyMon.
General thread update: I guess I didn't quite raise the white flag ... I'm just not investing a lot of time and trying everything under the sun anymore. I did try adding another GPU (GeForce GTX 240 🤣) as the primary (PCIE1 8x) and the GeForce 1060 (PCIE2 8x) as secondary, unbound from the Linux kernel with vfio-pci.cfg. Didn't help. Pretty much the same as running just the GeForce 1060 in PCIE1 16x, either bound or unbound from the kernel, and adding the kernel parameter 'video=efifb:off' so it's not held by the host. This is what I expected, but now it's validated. I think the next thing I'm going to try is Skitals' 6.8.0-RC5 kernel so I can try my on-board audio instead of the HDMI audio. Maybe I'm asking the GPU bus to do too much 🙄. -JesterEE
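For anyone wanting to try the same thing, here's a minimal sketch of where that parameter goes on Unraid. I'm assuming the stock boot entry; the GUI path (Main > Flash > Syslinux configuration) is from memory, and editing /boot/syslinux/syslinux.cfg directly works too. Your append line will likely already carry other parameters:

     label Unraid OS
       menu default
       kernel /bzimage
       append tsc=reliable initrd=/bzroot

Reboot after saving for the kernel parameter to take effect.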
  7. From Unraid 6.8.0-RC7:
root@Tower:~# /usr/bin/sensors -A
k10temp-pci-00c3
CPU Temp: +61.4°C (high = +70.0°C)
MB Temp: +61.4°C
-JesterEE
  8. Quick update: I was working with another build of Unraid that allows the temperatures to be read by the temp sensors plugin and ControlR started working correctly ... confirming the bug.
  9. Getting 5-6 PCIE card slots with a Ryzen series configuration, while not impossible, is going to be really, really rare, if such a board is manufactured at all. The Ryzen 3000 architecture provides 24 lanes, and the x570 chipset adds 16 PCIE lanes for 40 lanes total; and that's the best you can do! That's for everything ... PCIE cards, NVME, SATA buses, USB, LAN, etc. While it's 'possible' to have all of that dedicated to physical PCIE slots, you'd be sacrificing a lot of other computing functionality to get it. I don't think you're going to find a mainstream configuration that will do that. Possibly years from now, when people take old hardware, make strange, unsupported board configurations with it, and sell them on AliExpress. https://www.techpowerup.com/255729/amd-x570-unofficial-platform-diagram-revealed-chipset-puts-out-pcie-gen-4 Compare that to the Threadripper, which provides 64 lanes (with an x399 chipset). You will usually only see lots of PCIE lanes on server-grade, or near server-grade, hardware combinations. You said you wanted to accommodate lots of hard disks with these lanes. You don't need lots of lanes for lots of hard disks ... especially spinners that max out at ~150 MB/s on a good day. Look into Host Bus Adapters (HBAs) and HBA expanders. For example, something like the combination of the LSI 9211-8i and the Intel RES2SV240 SAS 2 expander. Wouldn't cost a lot to get a lot. -JesterEE
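To put rough numbers on the "you don't need lots of lanes" point, a quick back-of-the-envelope check. These are assumed round figures, not measurements, and the exact HBA/expander topology will change them:

     # LSI 9211-8i in a PCIe 2.0 x8 slot: 8 lanes * ~500 MB/s ≈ 4000 MB/s to the host
     # A single SAS2 x4 uplink to an expander: 4 * 6 Gb/s      ≈ 2200 MB/s usable
     echo "$(( 12 * 150 )) MB/s"   # a dozen spinners flat out ≈ 1800 MB/s, still under the uplink

Even the worst case (something like a parity check hitting every disk at once) is limited by the expander uplink long before you run out of PCIe lanes, which is the point: one x8 slot feeds a lot of spinning disks.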
  10. I'm really starting to wonder if the search function on this forum works ...
  11. OK, I give up! I tried so many things ... and so many things just failed to provide a different end experience. Since my last post I tried:
     4 different builds of Unraid (v4 and v5 Linux kernels): 6.8.0-Stable, 6.8.0-Stable LinuxServer.io NVIDIA build, 6.8.0-RC7, 6.8.0-RC7 LinuxServer.io NVIDIA build
     w/ and w/o VFIO GPU/USB pass-through
     w/ and w/o CPU isolation
     various numbers of vCPUs
     various vCPU assignments
     various memory allocations
     fresh Windows 10 1909 VMs, both i440fx and Q35 variants
     various virtio driver versions
     fresh KVM XMLs
     various BIOS and Unraid settings
In my last post I thought I had it figured out ... but after using that configuration for a while, it was better, but not as good as I originally thought, so the quest continued. When I started working with KVM in Unraid, I thought it would be a great use of my equipment, maximizing its efficiency for my use case, and a chance to have some fun and tinker for a while with my new hardware. At the end of all this effort, it has been very unfulfilling and frustrating. I've probably spent >120 hours reading posts all over the internet and trying different things to get the VM experience to a place where I can almost forget I'm working with a VM. I honestly don't think it's possible and, in hindsight, it wasn't worth the effort! I don't know how others have done it, and I'm coming to think it's more of a personal perception thing, i.e. what's "good enough" for someone else is not "good enough" for me. So, I'm done trying. I'm back to a simple VM configuration, close to where I started before the search for the perfect settings. It works well enough ... but it's "not quite good enough" for me to use regularly, and will have to suffice till I can build yet another computer. I uploaded a video of what I'm seeing to YouTube. I would appreciate it if others with "behaving game-oriented VMs" could give it a watch and see if they notice the same things I do, as a sanity check for me. Maybe the small glitching is just part of working with a VM in this environment and I'm being hypersensitive. Thanks for your feedback! -JesterEE
  12. SSH into your server, transcode something, and watch where the files go. If they don't land in your /tmp directory, you did it wrong.
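If it helps, this is roughly what I mean. A minimal check, assuming you've mapped the container's transcode directory to /tmp on the host (adjust the path to your own mapping):

     # Start a transcode in Plex, then watch the host-side directory fill up:
     watch -n 2 'du -sh /tmp/* 2>/dev/null'

If the sizes grow somewhere on a disk or the array instead, the transcode path in the container or the Plex transcoder settings isn't pointing where you think it is.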
  13. @labbz0re @Jake G I'd like to know how you are trying to do this ... Windows doesn't have a 2.0 release. If you're asking whether you can move from 1.3.X to 2.0.X, the answer is no, and it's not an Unraid/Docker issue.
  14. Welcome to the community @lviperz! I think your build looks completely reasonable and a great place to start with Unraid to get your feet wet. I started much the same way as your intended build, and Unraid is very flexible and let me do exactly what you're intending. The baked-in functionality covers your use case, but it works better with some community plugins, so you may want to check some of them out and see what works best for you. If you haven't watched @SpaceInvaderOne's videos on YouTube, I'd start there. I posted one in particular that will give you a taste of what the community has added to the OS.
For backups from Unraid to the QNAP, I think you have a choice in how you want it to go. After you get your network configuration figured out (at the router level), you can write a simple script with an rsync command and have it fire at a set interval using the CA User Scripts plugin (a minimal sketch is after this post). Coming from a QNAP, you may wonder where the custom action (i.e. cron) scheduler is. And ... it's not there in Unraid 😭. It's a commonly requested feature ... hopefully we will see it one day. Till then, the User Scripts plugin is likely good enough for most things you will want to do. If you want to get a little more sophisticated for just backups, you can use an rclone or Duplicati docker. Some setup is required, but it's pretty straightforward.
I use a VPN docker (PIA) as a client and have other containers attach to it via docker network bindings. There are some limitations to docker network configurability, and those change based on how you choose to configure your LAN, but it's completely doable (and isolated!). This is one of those "you probably need to play with it yourself" things. It may take a while to get it working like you want, so give yourself some time in the sandbox.
The cache works a little differently than I think you think it does. Note: SpaceInvaderOne has a video about this too; you should watch it! Quickly though: the cache intercepts writes to user shares (based on each share's configurable settings), so data lands on the cache first without bothering the array. The Mover then offloads that data to the array on a schedule (e.g. daily), which causes the array to update parity as files get migrated. So when you write to a share, the only thing that gets utilized is the cache ... Unraid understands that the files are split between the cache and the array at that point, so when you look for your data it's all transparent. Note: unless you use a cache RAID1 (i.e. 2 cache drives; Unraid uses BTRFS for this), the data on the cache is unprotected until the Mover runs.
IMO, you should not use the array for files you don't need to keep long-term. That's what the cache (or Unassigned Devices [another plugin]) is for. If you use the array for them, you will constantly be rewriting parity to keep the array protected. Unraid will do it, but you're asking it to do a lot more than it probably needs to ... and you're taxing not only the drive you're writing to, but the rest of the array as it computes parity. You can keep a user share dedicated to the cache so it never gets moved; that's an option if your cache is big enough to store these files AND the files you intend to move to the array. Note though, the cache can fill up! Since the Mover only runs on a set schedule, if you overrun your cache size, you will write directly to the array till the cache is moved on that schedule.
And if a directory is dedicated to the cache and you fill it up ... honestly I don't know what will happen, but it will likely be an error of some kind. I use an unassigned device (i.e. a parity-unprotected drive) to store things I wouldn't care about losing ... like a DVR recording, my Plex database, or some Linux ISOs I download 😀. Most applications that run databases offer a data backup feature, so when those run, I place the backup location on the array (e.g. backup -> array location (cached) -> Mover -> array location) so I have parity-protected backups in case something goes wrong ... but those get written on a less frequent schedule. Hope this helps and again, welcome! -JesterEE
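Here's the kind of User Scripts entry I mean for the QNAP backup, as a minimal sketch. The share name and mount point are placeholders; I'm assuming the QNAP export is already mounted on the Unraid box (e.g. via Unassigned Devices) at /mnt/remotes/qnap_backup:

     #!/bin/bash
     # Mirror the local 'documents' user share to the mounted QNAP share.
     # -a preserves permissions and timestamps; --delete makes it an exact mirror,
     # so drop it if you'd rather keep files on the QNAP that were removed locally.
     rsync -a --delete /mnt/user/documents/ /mnt/remotes/qnap_backup/documents/

Save it as a script in the User Scripts plugin and pick whatever schedule suits you (hourly, daily, weekly, or a custom cron entry).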
  15. @klipp01 @Hoopster Thanks for sharing your recommendations for the ASUS and Netgear Nighthawk. How is the UI these days on those units? How "advanced" are the advanced configuration options? I haven't used a stock firmware in 10 years, partially because they have always been rather dumb and feature-starved even for relatively "commonly needed" things. Hoping not to have that issue with the next purchase.
@uldise I have never used a MikroTik or looked at RouterOS. I know the company has a good reputation with networking people, but I always thought they were priced more pro than pro-sumer. After looking at their website when you posted, I was surprised that they have some pretty affordable options geared toward home consumers. In your opinion, what sets the hardware and software apart from what companies like ASUS and Netgear are offering? Are the RouterOS features the same on all the hardware variants, or do they scale up/down with hardware complexity/price point? I'd likely go with the router you recommended ... the features seem to be in line with what I want and the price is certainly right at <$75 USD! I see they have an x86 image of RouterOS available. I might try to spin it up in a VM and test out the interface.
@Hoopster @jumperalex Yeah, the Ubiquiti ecosystem looks nice, but it's way more than I need or plan on needing in the immediate future. I think of them as the Apple of the networking world ... in both good and bad ways 🙄. Maybe one day when I have a 30,000 sqft. castle in need of a dozen APs. 😋
@1812 @jonathanm I see where you're going and I think you both have a point. I think this may be an issue for me because I'm still fairly "new" to Unraid, having only migrated my server ~6 months ago. I seem to be continuously modifying configurations for both dockers and VMs, and I tend to need to reboot the server or stop the VM manager semi-frequently while I get stuff ironed out. That would completely sever my connection to the local network if I were running a VM router. I typically only interface with the server via SSH or the WebUI, so this could lock me out. I could add a second video card and a monitor for terminal access, but I'm trying to avoid that and run administration headless. This is more a physical/PITA concern than anything else. Also, if I needed another video card, I would be out of PCIE slots on my motherboard (16x/8x VM-dedicated GPU, 4x HBA, 8x currently empty) ... so no Ethernet NIC card! Also, OT, I may want to add another video card to Unraid anyway to dedicate to CUDA tasks on the host, so the PCIE might all be spoken for regardless. My motherboard does have 2 Ethernet NICs and 1 WiFi NIC though, so it may be doable. In your experience, does VFIO pass-through work well with pfSense VMs? I envision I could use the motherboard for the router and AP (pass through 1 Ethernet NIC and the WiFi NIC) and a managed switch for the WAN and LAN. Is using a wireless NIC as an AP possible in pfSense? I'd also have to see how good the wireless signal is, but like I said, I have no WiFi range concerns currently. A single dipole antenna would probably be just fine. I have never run pfSense personally, and configuring it has always scared me, to be honest 😲. It's also way more than I think I need ... like using Thor's hammer for a 1d nail! I always wanted to spin up a VM and dive into what it can do, but this has been so far off the back burner it will likely never happen unless I need to do it. Maybe now's the time... Thanks everyone!
-JesterEE
  16. Found this the other day. This might be related, it might not be. Take a look:
  17. I'm hesitant to do that because the network will go down if the array goes down. Not that it would be a huge deal, but it's a concern. But if I'm already going to buy a NIC and WiFi AP ($80-$120), why not just buy a slightly better WiFi AP that does enough of the router stuff to make me happy? At that point, another $25 will give me a dedicated appliance. I think that's worth the money. If I intend to go to a more commercial-grade firewall in the future, I will surely virtualize it first to get my feet wet.
  18. 150Mbps WAN (Verizon Fios in my area). 1Gbps LAN. 10Gbps LAN can be on my next upgrade 🤤. I do VPNing in Unraid. I used to do it on my router, but it would really tax the 600 MHz Broadcom chip in the N66U. The Unraid Wireguard support is really good for incoming connections, and I have dockers for outgoing connections ... I don't see going back to using the router for VPN. -JesterEE
  19. So it's time to upgrade. I have an ASUS RT-N66U Dark Knight that I have been using since 2013, currently on a 01/2019 version of DD-WRT. It's been a good workhorse, having had lots of firmware flashed to it over the years (Asuswrt-Merlin, Shibby Tomato, BrainSlayer DD-WRT), but it's starting to show its age. It has 256MB RAM, so it handles a lot of concurrent connections well in the routing table, but something in DD-WRT isn't behaving and it drops network connectivity after about a week of uptime till I (hard) reboot it. It's on the fringe of active firmware support being so old, and honestly, I don't want to be bothered trying to fix it by flashing yet another firmware. I think I got my money's worth by now 🤣. I'm looking for community recommendations (please)! I don't need anything crazy, just stable. I toyed with the idea of a small pfSense box + TBD WiFi AP, but for me, right now, that is SUPER overkill. That route quickly approaches $400+ for a "tiny" solution. That's like 3x what I want to spend, plus it would have the WiFi AP decoupled from the router, which is not ideal for me. Also, I only need about 1000 sqft. of WiFi coverage and I have a centrally located spot for the router, so really, any router will be fine for coverage; no need for a mesh. Here's my want list:
     <$150 new or used. Lower is obviously better.
     Stable! Maybe a scheduled soft reset once a week.
     Great stock firmware. Bells and whistles included!
       Standard stuff like static port mapping, port forwarding, and DMZ
       More advanced stuff like VLANs, bandwidth monitoring, traffic logging (RFLOW), blocklists, WiFi "client mode", etc.
     Standard-sized residential-oriented router
       No virtualized solutions, no re-purposed PCs
     4+ port GbE switch
     1 GHz+ dual+ core CPU
     Fair amount of RAM (128 MB+)
     Dual band 2.4 GHz 802.11n and 5 GHz 802.11ac support
       The newer 802.11ax [WiFi 6] is good too, but I don't need it and don't really want to pay the early adopter tax
     MU-MIMO WiFi
       Not needed, but it's good tech ... I'd like it if possible.
Basically, a solid 2018-2019 router: ASUS RT-AC series, Netgear Nighthawk series, TP-Link Archer series, etc. I haven't personally used any of their firmware, so it's hard to know what boxes they all tick even if the hardware specs are good. And you can only troll so much YouTube looking for hints in year-old videos. Anyone have experience with pro-sumer router equipment they like and want to throw out a recommendation? Thanks -JesterEE
  20. I see 1 active Unraid community developer on that list ... I hope there is something "extra special" for them that the rest of us plebs aren't privy to on this milestone occasion! Congrats to the winners! I secretly hate you all 😝
  21. I have issues (stutters) with my gaming VM on my Ryzen 3800X. I've almost given up. I thought 6 months was enough for the software to catch up with the hardware, but I was wrong. I still have one last-ditch effort moving to the v5 Linux kernel on the last batch of RCs, but after that, I'm surrendering. Anyway, I looked at your XML and I have a couple of recommendations (though I don't think they're going to fix it ... still, probably better in general). I expect you did all the normal things, so I'm only going to comment on what's there. I see you're putting your emulator pin on your lowest core. I wouldn't do that. In my testing I have seen an idling VM take a fair amount of CPU usage for the emulator, and it can spike under load. You really don't want it competing with the host tasks that usually run on the lowest core. Also, the emulator process is a single thread. I had been using a hyperthreaded core pair (i.e. HT core A + HT core B) for the emulator too, but just last night I monitored my emulator process and saw it hopping between both HTs while only one was active at any given time. Probably OK, but maybe not the best for a long-running process where you want to minimize latency. Try keeping it to one thread. I saw no noticeable difference either way, but since the process stays alive the whole time the VM is on, there's no reason to have the scheduler move it around. IMO, let the scheduler work around the emulator process. I would try shifting all your VM cores to the highest cores on your CPU and leaving the lowest for the host. Also, maybe limit your VM cores to ones on the same CCX so they don't have to talk over the Infinity Fabric. I tried to find an lstopo of the 3900X on Google but I couldn't find one. Maybe post yours here? Unrelated to the CPU: you are passing in your GPU and HDMI sound as 2 separate addresses. SpaceInvader One talked about this in his advanced GPU passthrough YouTube video. It probably won't change anything, but take a look at that video and try passing the GPU as a multifunction device (a rough sketch of both edits is after this post). Also, are you stubbing your GPU? @Skitals has a 3900X. Maybe he has some more recommendations for you. -JesterEE
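Since it's easier to show than describe, here's a rough sketch of the two XML edits I mean. The host thread number and the PCI bus/slot values are placeholders, not anything specific to the 3900X, so yours will differ; check your own topology and device addresses before copying anything.

     <cputune>
       <!-- vcpupin lines for the VM cores stay as Unraid generated them -->
       <!-- pin the emulator thread to a single host thread left out of the VM -->
       <emulatorpin cpuset='1'/>
     </cputune>

     <!-- GPU video function: placed on one guest slot with multifunction enabled -->
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <source>
         <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
     </hostdev>
     <!-- matching HDMI audio function: same guest bus/slot, function 0x1 -->
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <source>
         <address domain='0x0000' bus='0x0a' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
     </hostdev>

The idea is simply that the guest sees the video and audio functions on the same slot, the way the physical card presents them, instead of as two unrelated devices.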
  22. Unfortunately, the trend in 2020 computing does not favor our collective need for a reasonably sized, quiet, mid/full-ATX chassis with LOTS of mounting locations for mass storage. It simply doesn't exist ... and really, TBH, it never really did. Fitting 3x 3.5"-to-5.25" bay cages for 15 total drives in a tower has always been a workaround, and a pretty expensive one at that when each cage can cost $50-$150! By the time you configure the chassis with the required bays and cooling, you're looking at, at least, $350 ... and that's if you really try hard not to overspend! That's more than I'm willing to drop on a bad solution to a niche problem, but I can't speak for everyone. Also, IMHO, most of the chassis that accommodate these builds are ugly! Even if they were/are reasonably priced, I would not want to build in them, even if the box is going to sit in a corner or a closet. The notable exception: the SilverStone Temjin TJ11. But at $350 when it was released ~2014 ... and $750 now at the few places that have 1 or 2 left in the back of the stockroom, this was ALWAYS an expensive build. And to uglify it with hot-swappable cages, to me, seems like a sin. Also, a nod to @Harro's functional build. I'd love to find an original Cooler Master Centurion 590 in a dumpster or Goodwill somewhere 🤣!
Going forward, I think we all need to bite the bullet and realize this is just not going to happen unless the winds change direction and personal storage solutions that aren't overpriced and under-powered systems (*cough* Synology) become commonplace. I think most people who find this thread know that there are server-grade racked solutions out there with reasonable second-hand prices. But for someone who really wants to keep their build to a workstation, that is not an option. It even bugs me a little when people flippantly post something like, "just buy a used 4U server ... 24+ bays, problem solved." Not helpful ... at all! If "we" wanted a 26"-30" deep, 100+ lb. tank with jagged corners, awful non-racked standing options, and the sound of a jet engine ... then yes, problem solved! But "we" don't. I don't intend to ignite a flame war here ... but after all, this thread is for tower cases. It says so right in the title. Why are people even bringing up rack solutions? I'll give a pass to those recommending casters for rack chassis though. Not what I'm after, but OK.
I contacted 45 Drives about their Storinator Workstation. I was pretty sure they only sell them as a configured solution, and they don't post the chassis dimensions, but it couldn't hurt to ask when it has space for 11x 3.5" and 8x 2.5" drives on a single backplane 🤤. But alas, they don't offer it bare 😭. Here's to hoping that one day they change their mind or make something better to fill this gap in our niche market segment. For a reasonably sized, average 45L volume chassis with a full backplane for 10+ drives ... I would pay a premium!
There are still some options in production by Fractal Design, Phanteks, and Rosewill that will support 10+ drives without breaking the bank, costing only ~$200-$250 all in. Here are some as of January 2020:
     Fractal Design
       Fractal Design Vector
       Fractal Design Define R6
         10x 3.5" drives (requires finding/purchasing 4 additional trays)
     Phanteks
       Phanteks Enthoo Pro
       Phanteks Enthoo Luxe
         3x 5.25" + 6x 3.5" = 11x 3.5" drives (with 1 cage)
     Rosewill
       Rosewill Thor v2 (might actually be out of production ... but currently available (in white) for $120 USD from Newegg)
         6x 5.25" + 6x 3.5" = 16x 3.5" drives (with 2 cages)
Also, as others have reported, you can still find semi-reasonable prices on some second-hand towers from the early 2010s. This is the option I am going with for my server. I just bought an NZXT H440, which is a hell of a nice case (IMHO), with lots of space for fans and breathability to keep the drives cool. I plan on doing a simple mod to mount a total of 11x 3.5" drives in this platform. Actually, the last mounting location is odd, sitting on the bottom of the chassis, and I probably will not use it. But this will leave a good amount of "free space" which I was thinking about filling with a 6x 2.5" drive cage if I can find/tap a good mounting solution. I haven't built in it yet, so this is still TBD. But for $80 shipped (after some hunting ... and sniping) ... DONE! -JesterEE
  23. On the fringe of being related, here is a forum post I made on the Plex forum in 2018 cataloging some of my experiences while working with a RAM drive on a Windows server. Plex does some really strange stuff under the hood. TL;DR: Don't waste your time ... unless you have lots of RAM and nothing better to do with it (*looks down as to not make eye contact with anyone here* 😉). https://forums.plex.tv/t/plex-transcoder-ram-drive-experience
  24. And they'll still blame you for "bad quality content" because they can't use the app right to increase the quality setting .... SMH 🤣. This is why I don't share my library (of Creative Commons content 🙃) with non-techies anymore 😋.