rollieindc

Reputation

  1. Thanks Jonathan - Just came to that realization too. Ultimately, I wanted this for Plex transcoding. At first, I had tried to use it in a VM, then removed it from the VM (because it didn't work within the VM) -> but when I tried to install the nVidia driver plugin, it said it can't find the GPU card: "NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running." - even though the driver is compatible (GTX 1050 Ti), and I can still see it in the IOMMU listing (and it appears to be properly captured/identified as the only one in that group):
     IOMMU group 16: [10de:1c82] 04:00.0 VGA compatible controller: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] (rev a1)
     [10de:0fb9] 04:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)
     Is there something I need to "un-re-do" that you can think of?
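A note on the nvidia-smi failure above: when a card has previously been stubbed out for VM passthrough, it is often still bound to vfio-pci, which keeps the host NVIDIA driver from claiming it even though the IOMMU listing looks fine. `lspci -k` shows which kernel driver currently owns each device; below is a rough sketch (my own helper, not an unRAID tool) of pulling that field out of the output:

```python
# Sketch: find the "Kernel driver in use" for a PCI device (e.g. "04:00.0")
# in `lspci -k` output. If it reports vfio-pci, the host nvidia driver
# cannot see the card, which matches the NVIDIA-SMI error above.
from typing import Optional

def driver_in_use(lspci_output: str, device_addr: str) -> Optional[str]:
    """Return the kernel driver bound to the given PCI address, if any."""
    in_device = False
    for line in lspci_output.splitlines():
        if not line.startswith((" ", "\t")):
            # Device header lines start at column 0 with the PCI address.
            in_device = line.startswith(device_addr)
        elif in_device and "Kernel driver in use:" in line:
            return line.split(":", 1)[1].strip()
    return None
```

If this reports vfio-pci for the GPU, removing the card from the VFIO/VM stubbing config and rebooting is the usual first step before the nVidia plugin can find it.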
  2. Update: 17 APR 2022 - nVidia on the cheap Managed to snag an nVidia GTX 1050 Ti (4GB) for $100, so that's replacing the MSI Radeon R7 240 video card. Will be an upgrade to allow transcoding for Plex and light graphics (gaming) use. Card was a little difficult to get into the chassis, but seems to be addressable from the VMs. Need to add the plugin for it, and allow it to be accessed by the Dockers.
  3. Update: 04 APR 2022 - Just another phase First thing to report, the system remains very stable, and after a couple of years - remains very usable. Having a UPS on the system makes it much more "stable" and less likely to have any power drops - or if the local power drops, gives it enough time (10min) to wait for power to return, or do an automated shutdown with enough power to complete the entire recovery process. That was definitely worth the money. Oh, my T310 is on a 750VA UPS from APC, and that seems more than up to the task. Plus I keep an eye on the battery and can change it out if needed. The upgraded CPU (Xeon X3480, 3GHz) seems to have been a worthwhile investment too, and has been rock solid. So - still few successes with the VM engine working on my server. Still think that the Q35 engine for the KVM is part of the solution, but it's so finicky with Windows and the AMD drivers that I may never really have a stable solution for it. For now, I am just running one with a VNC connection, and then doing what I need to from it. Recently, I also started seeing a severe drop-off in use on my docker image with BOINC, so that is off the table as well. If I can find a cheap nVidia card (>1050) to put into it, those might become more relevant - but since my purpose for the server was to have this be a NAS, I'm happy - since it's doing quite well at that. So well, that I think I need to add my second parity disk into the system soon. And while the prices of spinning disks and SSDs are coming down, I still think that I made the right decision with using 4TB drives. New WD Red and Seagate IronWolfs are sub-$100 brand new, so replacements are easy to come by. The question might become one of needing another set of larger drives to increase overall pool size. For now, I don't see that as necessary. Also for now - I have single disk failure covered, but I really want to get two-disk failure covered next. A second parity drive should cover that concern for me.
(Drive size economics dialog) But if I went to a larger drive, I'd have to then invest in at least three larger drives just to see any increase and make that worthwhile. I'd guess I'd need to move to 8TB drives, so I might start looking at the market for those in the future. (Many 8TB drives are now selling new for $150, so that would mean that I'd have to drop $450 in drives to cover the first two parity drives, and see an increase of one NAS drive from 4TB to 8TB.) If I went and replaced all 7 of my current 4TB drives, that means dropping $1,050. I think I need a cheap graphics card before I need a larger NAS. I think my past thinking on the economics of the drives was right. Keep adding inexpensive 4TB drives, go double parity, perhaps consider updating the entire server in 2-3 years, and maybe go "pro" on unRAID. IMHO, 4TB drives are going to remain the mainstay of many small businesses for just those sorts of economic reasons. Besides, have you ever had to rebuild an 8TB drive from parity? (shudders) - That takes lots of time and CPU cycles. Ultimately, it will probably be a bigger server (Dell R720 or SuperMicro equiv.) with dual Xeon CPUs that can get me to 4Ghz that I put money into. Recently saw some R900 servers come on the market on Craigslist for $250, so will keep an eye out for those. Still hoping to score a good SuperMicro server MB & CPU set for that 3U chassis I got for next to nothing. So let's see... new MB or R900, unRAID Pro License, add another SAS card... yeah, I'd still be a lot further ahead than dropping a grand on 8TB drives at that point. The math still works out for me (a hobbyist). If I was a pro in a smaller business, I'd probably need the growth space that the 8 or 10TB drives are giving me now. And if I was doing video, I'd just start adding SSDs as I could. SSDs in 5 years are going to drop in price, and overtake mechanical drives in performance, long-life & cost factors. Heck, you can get 8TB SSDs now for less than $750.
(NVMe drives are still too high, but not much over a grand each!) As service integration improves, I can see those being the next level of automated systems that are connected to google & amazon services in the home/business place. [Other stuff] I also installed Shinobi - for having a system look at and record my security net cams. The DLink services are being shut down soon (which was the basis of most of my cameras to date), so I needed to find a back-up for that service - and after trying a few, and watching SpaceInvader's tutorial, I can say that Shinobi is pretty nice for the average home user. My frame rates aren't spectacular, but I'm running the cameras over 802.11g that's on WPA, so I can get it to do reasonably well. If I ever feel like I need better throughput, I can always hardwire them into their own sub-net. I can also do PoE with them, so that's an option too. But at some point, the cameras will become smaller, cheaper, and 4K compliant, with google, ADT & amazon integration, so why worry. At some point, I also need to rotate out those cache drives currently in the server for smaller ones. 256GB ones seem more than adequate for most uses, compared to the 1TB one I am using now. Most of my drives are uncached anyway, since I want to know the files are on the NAS hard drives. I have a UPS, which is integrated really well into the unRAID plug-in (THANK YOU, WHOEVER DID THAT!) - but I still don't like the idea of being in mid-transfer and losing some of the files, or gunking up the drives in the cache and not knowing where I stopped. About the only other thing to report is that I have an older laptop that was dogging it under Windows 10, and now literally "flies" with Ubuntu Studio 20.04 LTS. It's becoming my "everyday lightweight laptop." It won't do "power lifting" - but it is what I am using to type this into the internet portal for unRAID.
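The drive-upgrade math in the update above can be sanity-checked in a few lines (the prices are the post's own estimates: roughly $100 per 4TB drive, $150 per 8TB drive):

```python
# Back-of-the-envelope check of the drive-economics figures above.
# Prices are the post's estimates, not current market data.

def upgrade_cost(n_drives: int, price_each: int) -> int:
    """Total cost of buying n_drives at price_each dollars."""
    return n_drives * price_each

# Two 8TB parity drives plus one 8TB data drive:
three_8tb = upgrade_cost(3, 150)
# Replacing all seven existing 4TB drives with 8TB:
all_8tb = upgrade_cost(7, 150)
# Adding one more 4TB drive for double parity instead:
one_4tb = upgrade_cost(1, 100)
```

The $450 vs $1,050 vs $100 spread is the whole argument for staying on 4TB drives and just adding a second parity disk.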
  4. Update: 19 FEB 2022 - Pandemic to Endemic Phase? So, I am back to fighting with the Dell T310 server and the VM for using any kind of GPU card. Tried the recommendation of using a Windows 10 VM with the Q35 4.2 version as the emulator to do GPU passthrough - and it continues to be unstable. Sometimes working, others not. So, I've just given up and plopped a separate PC system on the network - with the ability to log into the server via a browser. Works well with various gaming and other remote desktop options (Chrome) - but I've been trying the newest NoMachine RDP - which I like even better. Sure, I have to "run" another PC, but the stability is far better than a server VM, and the overall performance (again, headless for the new PC) is unquestionably better. I'm just sad I didn't think to do this earlier, as it is a lot less of a headache for me to run this way. Yes, the Dell T310 is still a great remote file server and I am still running Plex from it, which I am happy about, but I am really tired of the KVM platform. If I could run VirtualBox off it, I would - as I know those work, and have worked well for me. At some point, I might look at ESXi - but not right now. I have more things that I need to do - and get done. This is probably more about the Dell T310 hardware than the unRAID software, but I am just tired of trying to beat this problem. My system is built, and I just need to load files onto it and use it that way (as a NAS). Oh, and those NIMBUS 400GB SSDs - they were both crap (end of life), so I was able to return them and got two Hynix 256GB SSDs instead from another source.
  5. Problem: Win 10 Pro VM using a GPU won't allow RDP or VNC from within LAN FWIW, I've had lots of successes running similar VMs on VirtualBox (within Windows, Linux or OSX), but with this KVM system, I've got to be missing something... and I'll happily buy a beer (or two) for anyone who has an answer that will allow me to solve this problem. Server Specs: Server: Dell T310, Xeon X3480 (4c/8t, 3Ghz, 30-50% CPU utilization), 32GB DDR2, 6x4TB drives (20TB+Parity), with battery UPS LAN: Gigabit ethernet, Google NEST router (stable) connected to Verizon FIOS (300/300) GPU: AMD R7/240 in PCI Express Generation 2 8X (in Full Length Slot 2) OS: unRAID 6.9.2 Background: Yes, just to say it, I've been through SpaceInvaderOne's VM primer (THANK YOU!)... and re-watched it about 50 times. But there's something I am missing. And it's frustrating the heck out of me. The goal is to have a Windows 10 VM using a GPU that I can remote into, away from my local network, but at present, I'm not even able to remote into otherwise stable VMs when using my graphics card within my own LAN. The Dell T310 server has been running unRAID 6.9.2 stably (for weeks, and before that, for months), and I have been running Windows 10 (Pro x64) VMs using the RedHat QXL Driver and VNC into them without much of an issue. But unfortunately I have been having significant issues with remoting into them whenever I start to use a GPU. The server's main GPU is on the motherboard (Intel) - but the add-on GPU is an MSI Brand AMD R7/240 card, in the second PCIe (8x) slot. (The SAS controller is in the primary slot.) If I build/start the VM running Windows 10 Pro x64 using the RedHat QXL Driver, everything seems to run well, and it can continue running for days (weeks) - and I can locally get into it with VNC, Google CRD or MS/RDP without problem. However, when I "switch" any stable VM over to using the R7/240, sometimes it will work, and sometimes it will not.
Sometimes, I will have to delete the VM, as even going back to the QXL driver won't allow me to VNC into it from within the LAN. And often, it will work for a day or so, then it will become "unavailable" to either CRD or RDP. Sometimes I catch it trying to update the drivers or Windows, and can halt that - but often, something still "happens" and I lose the ability to connect - perhaps a day or two later. Restarting the VM or the server seems to have no real positive effect. Sometimes, if I change the driver back to QXL, the VM seems to work fine again and I can VNC into it. But afterwards - when I switch primary video over to the GPU - while the VM status board says it's started, I can't remote (regardless of trying VNC, RDP, or CRD) into it. And yes, I've let it run for "days" just in case it was a Windows update, but that doesn't appear to be "the issue" either. I also tried switching it to a separate (unassigned) SSD drive, and while that helped execution speed, it did nothing for the ability to RDP into it. I even tried RealVNC server (from ninite) - and just got the same results (unable to remote into it). Also, when I try to add QXL as a second (or first) video option in the VM, to do dual video - I also lose any ability to connect into it remotely via VNC, RDP or CRD. The same results were also happening with an nVidia 610 card, although I got far more Error 43-type errors with every try I made. As far as I can tell, I don't appear to have any IOMMU conflicts, and I've tried various VM settings for machine type (i440fx-3.1 or Q35-5.1 or lower) and BIOS (OVMF or SeaBIOS), with no success. Again, sometimes I can get a VM to start up fine, run for a day or two... then it seems to get lost, and I can't remote or VNC into it again, until I go back to the QXL video option. And then, after I can remote into it again using QXL - switching or adding the GPU - causes it to "get lost" again.
Am happy to provide any additional details (please point to whatever procedural diags you want the data from; at this point I'm happy to post them, I am just unfamiliar with what is the right thing to post for getting help to solve this problem.) So, what am I doing wrong? Thank you for reading... this has got me pulling my hair out, and I don't have enough of it to spare these days.
  6. Update: 23 NOV 2021 - Seeing elsewhere on the forum that it's recommended to use a Windows 10 VM with the Q35 4.2 version as the emulator to do GPU passthrough. Hopefully this fixes it - more after I try it out. (And "nope", that didn't work either!)
  7. Headline: 22 NOV 2021 Upgrades & downgrades and Virtual Machines that just won't work Symptom: Windows 10 Pro VM under 6.9.2 with an AMD R7-240 GPU passthrough causes issues. Goal: Be able to remote into a Win10 VM with a headless GPU (MSI Radeon R7 240). Discussion: So, I managed to add an SSD into the system with use of a PCIe card that could hold two 2.5 inch SSDs. Unfortunately, the SSDs I have are SAS, and the connector is SATA. So, I have a SAS-SATA passthrough connector on the way. (And no, I don't want to Dremel the PCIe card, it's a nice one!) Until then I am using an older 240GB Inland SSD, and while the VM seems stable when running on the QXL VNC/RedHat graphics driver, the moment I move it to the MSI drivers - all heck breaks loose. Yes, I used the original drivers; yes, I tried the latest; yes, I tried the beta drivers; yes, I tried Chrome RDP; yes, I tried Microsoft RDP. No, I won't try TeamViewer, as they "fouled" my system the last time, and I will never give them or anyone else who routes my VM through their servers another chance. Interesting bit is that it works at first - then some upgrade (Windows or the AMD Radeon software) does an upgrade to the driver - and I am locked out from being able to get back into the machine until I go back to the RedHat QXL and VNC drivers. And personally, that won't cut it for the work I need to do. Also, it won't let me work the card as a second graphics card either, or as the primary with the MSI/AMD GPU and the VNC as secondary. (And yes, I have checked, and the IOMMUs are in separate number/call slots.) Plans: At some point though, two Nimbus 400GB SAS SSDs will go into the system, the VM will move to it, and I will be beating the graphics into submission. In short, it's frustrating as heck. I might pull the card completely, and try another card (nVIDIA GT680 or 610) that I have, but the nVIDIAs were similarly cursed with ERROR 43 problems.
I might be up against it with this T310 in that the VM just will not allow a separate GPU to run on it. If that's the case, then I will start looking to offload this machine and move to a SuperMicro dual Xeon system that I have and can upgrade.
  8. Headline: 12 OCT 2021 And then...BOOM: "Automatic unRaid Non-Correcting Parity Check will be started" Symptom: Running a Windows 10 Pro VM under 6.9.2 with a GPU passthrough causes a system-wide crash. Goal: Be able to remote into a Win10 VM with a headless GPU (MSI Radeon R7 240). Preface: Yes, I've watched SpaceInvader's VM video guide about a dozen times, and a few others, and followed the passthrough process for a GPU. Yes, I've tried using nVidia cards (GT610, GT1030, etc) without any success - only seeing the dreaded "Error 43" pop up repeatedly. I decided to go Radeon and picked up an MSI R7 240 card at MicroCenter to give that a go. Easy, right? No. Far from it so far. At this point, I'm seeking advice. I'm about at the point where I am considering a dedicated gaming laptop machine with remote desktop access as an alternative, vice trying to continue to use an unRAID VM with GPU passthroughs. Background: I've been trying to remote into a Win10 Pro x64 VM for about two years on my Dell T310 and had been "now and again" successful - but ONLY when using the VNC QXL controller. That seems to work, and I've had one running for about 6 months stably (at least as much as Windows is stable.) It's not bad, but not able to handle a gaming program (SecondLife) that I enjoy playing. And ultimately, I want to be able to do this with a remote connection (tried Chrome Remote Desktop, which kinda-sorta worked) but most of the time - while the RedHat QXL controller was working fine - anytime I switch to using the R7 240 GPU as passthrough and remote connect into it, the system hangs in one way or another. Still, I felt like I had been making slow progress by "tinkering" with the system in the hope I could get it to work. But last night - I had an interesting new artifact start to occur (recently upgraded to unRAID 6.9.2), when I changed the VM to use the installed MSI Radeon R7-240 card - then things went really bad.
At first it worked, but would lock up with any updates to the MSI video drivers. (Ugh) But I got past that - and now it just crashes the entire system from a fresh install when I change the VM over to use the GPU - and not just the VM - the entire server. When I reboot the server system, I get an "Automatic unRaid Non-Correcting Parity Check will be started" in the log file. And the last time it ran, it detected no errors. (Is it possible, since I am running the VM from a disk share (disk5), that something is going wrong there? Maybe I should pop in a separate "unassigned" HDD for the VM?) Anyway - I've tried all sorts of VNC programs, MS Remote Desktop for Windows, and various means of connecting the card within the VM (although not Guacamole - "yet")... and am just not sure what the real issue is. Not even sure what information to post that might be helpful. (Feel free to post a link to the standard reporting protocol for the forum, that's probably where I need to start.) One thing though, please do not recommend TeamViewer - I essentially was "blackmailed" by their system admin process to cough up the price of a commercial license (Hack-hack at $50.90/mo - "Are you out of your [censored]?") - and my VM system was unreachable for over a week - while I was basically told "Pay up, deadbeat." So I finally deleted that VM, and vowed never to trust that program again to log into a VM. Anyway.... still going to keep plugging at this problem, but if anyone is interested in helping out... drop me a line. I'd appreciate it. At least worth a beer (or KoFi). (Added - Diagnostics Download) nastyfox-diagnostics-20211012-2227.zip
  9. That's exactly why I keep doing it, to save others trouble and let them "get on with it!" - And am ecstatic to hear that you got the H200 flashed - and your system up and running, Jason! Let me know how you get along with it. If you start your own thread - I will want to follow it. Feel free to drop me a note here or via email anytime. Glad to share what I've learned and compare notes.
  10. 06 SEPTEMBER 2021 - The DELTA COVID Periodic Update Post Personal update: Wow, where does the time fly? Oh, right, virtually while at work. Had two scares of COVID Delta in my office, so had to get tested for both (cue the brain swab music) and thankfully tested negative both times. Daughter also had two separate surgeries (mostly minor, but these days, nothing is really minor), and I had a visit to the ER with kidney stones (thankfully small and easy to pass.) System Updates: So my Dell T310 has been working well (24/7/365) for the most part, and I've made a couple of minor upgrades on the server. I did move from the Xeon X3440 to an X3480 to boost the clock up to 3.06Ghz, via a Chinese eBay refurb CPU for $71. That seems to have helped with the overall utilization of tasks/data flow, without impacting the thermals (both chips are rated at 95W, so it was an easy swap and allowed me to put new thermal paste on the CPU.) I also updated unRAID to 6.9 (now on 6.9.2), and that seems to be running without much of an issue. I do like the new options, user interface layout and overall system information flow. I am mostly still using the box for a NAS and a PLEX media server. I also moved my BOINC from a virtual machine on Windows 10/x64 Pro to a Docker image. That was a significant boost in both compute speed within BOINC (Rosetta) and overhead (memory) reduction. Oddity: I did notice the other day that the memory went from 32GB to 16GB "available" in the memory status. Not clear why that was, but I suspect it was a combination of VM and Docker usage. The memory returned after a re-boot, but I am going to be keeping an eye on it. Unfinished work left to do: There are still some issues that I run into when I run a VM on unRAID, mostly that the graphics/GPU card seems unable to be passed through correctly using any VNC or remote desktop I've tried (and I've tried a bunch).
I saw that there was a new option out for using Guacamole, so that might be something I try soon-ish. The VM runs basically fine using the included VNC graphics drivers available, but it's just not as "snappy" as using the GPU. Other stuff: In the meantime, I also acquired a Mac Pro (Mid 2010) and upgraded Bootdrive (SSD), memory (32GB), MacOS X, and the CPUs (now dual HexCore x5680's @ 3.33Ghz each) in it - and even have it running Win 10/x64 as well as High Sierra or Mojave. I'll probably slap on OpenCore soon so I can just multiboot across any of the three OS's. I did go with the AMD RX580 GPU, after struggling with a nVIDIA 680 GPU, and glad I did. But that is another story for another time. And I also upgraded my home's wireless from a DLink WiFi 6 router (DIR-1950) to a Google Wifi (not NEST) mesh system - connected to 200mbps Verizon FIOS. Didn't affect the server at all, but I try to keep track of what's going on with my network here. That's about all I have right now. And I have to update the signature block...
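For the memory oddity mentioned in the update above (32GB suddenly showing as 16GB "available"), one low-effort way to catch it happening is to poll /proc/meminfo over time. A small sketch (the helper is my own, assuming the usual kB units in that file):

```python
# Sketch: pull MemTotal and MemAvailable out of /proc/meminfo text and
# convert from kB to GiB, to log the "32GB -> 16GB" drop when it recurs.
# Read the text with open("/proc/meminfo").read() on the server itself.

def meminfo_gib(text: str) -> dict:
    """Return {'MemTotal': GiB, 'MemAvailable': GiB} from /proc/meminfo text."""
    out = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if key in ("MemTotal", "MemAvailable"):
            out[key] = round(int(rest.split()[0]) / 1024**2, 1)  # kB -> GiB
    return out
```

Running this from cron every few minutes and logging the result would show whether the drop correlates with VM or Docker start-ups.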
  11. Hi Jason! Sorry for the long period not posting... it's been a slog for me. But glad you got something out of this. So, to your questions: I looked at other cards, and the H200 has been the best so far for me. Nothing wrong with the H700, but I had some concerns about being able to flash it and use drives larger than 2TB as I recall. Plus, there was the use of the battery for cache back up that always concerned me. Not a huge issue, but I needed about 20TB for my system, so that made the Flashed H200 a better choice for me. Going to do an update here shortly... but this COVID isolation has been a real slog for me.
  12. Feb 6, 2021 - New DDoS Plex Media Server Vulnerability? There seem to be a number of recent news posts, like this one, that PLEX Media Server is enabling distributed denial-of-service (DDoS) attacks across a number of vulnerable servers/systems. My understanding is that this is as much a network configuration issue as a PLEX software issue, as it seems to rely on exploiting router port configuration (32400-32414) vulnerabilities. As PLEX is configured, users often enable external (internet) access to media (movies, music, etc) from their server to other external devices (iPhones, tablets, etc) through the configuration process, when using protocols like universal plug and play (UPnP). UPnP allows systems on the same network (Server->Router) to seek each other out and share file access. UPnP often uses simple service discovery protocol (SSDP) in order to do this. This is apparently where external hackers/attackers take advantage, by leveraging the exposed SSDP in DDoS amplification attacks on the specific router ports. I don't understand all the dynamics of it, and am looking for that and other insights - especially where it comes to unRAID and PLEX interacting. My questions are: 0) Should I be concerned? (I temporarily stopped/took my PLEX docker server offline on my unRAID server, and closed the port on my router. Am also on Verizon FIOS - so not sure if they are "intercepting" the DDoS within their network?) 1) Anyone seen artifacts of a DDoS like this on their unRAID systems (either in VMs or Dockers?) 2) Anyone know if the vulnerability would likely exist with port forwarding typically seen with most home routers and a PLEX (unRAID) server? Would/Could other local networked systems be compromised? How would you tell (on unRAID or other)? 3) Would PLEX Media Server be more or less (or equally) vulnerable as a VM or as a Docker on unRAID?
4) PLEX said they would be issuing a patch in the next few days, any idea how long that would take to propagate into the Docker versions that are in the Community Distributions in unRAID? Thanks for reading, and thanks especially for anyone more knowledgeable than me to provide additional insight and knowledge. It's greatly appreciated, and this forum is great - thanks to those who share information, and help keep it running!
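On the port-forwarding question above, one simple data point is whether the Plex port (default 32400) is actually reachable at all. A minimal reachability sketch (the host and port below are examples; run it from another machine on the LAN, or against your WAN address from outside, to confirm the router really closed the forward):

```python
# Sketch: TCP reachability check for a host/port, e.g. the Plex default
# port 32400. True means something is listening and reachable.
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Usage would be something like `port_open("your.wan.ip.addr", 32400)` from outside the network; a False there confirms the forward is closed. (This only checks reachability, not the SSDP/UPnP exposure itself.)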
  13. Nothing I can really share. I do have an APC UPS attached, and it seems to run around 150 watts on idle. If I get data from it, I'll post it in this thread. FWIW - The 1050 is a nice add -IF- you plan to do transcoding with Plex; otherwise, I'd not bother. The VM performance on the T310 is "less than stellar" of late with build 6.8.3. And I'm happier to just have a headless NAS, with a Win10x64 image for a VM running qBittorrent, and a Plex Docker. If I had it to do over, I'd build on a more modern SuperMicro base. But hey, cheap hardware. Update: So, I went back to look at my UPS logs, and my system seems to pull between 180-210 watts, depending on loads. I'm running a couple of VMs (one with BOINC for nCOVID folding & qBittorrent), and a couple of Dockers (one is Plex), but nothing too heavy. If I was running harder, I could see this going up to 300 watts easily. I probably top 350 watts at start/spin-up. If I was concerned about power, I'd definitely go to a laptop (90 watts, 3Ghz i7) with dual SSDs (M.2's in RAID) and a SATA external drive-based expander with larger (>10TB) drives that could spin down. But to be honest, I'm still ahead of the game on cost with this system (I think.) If I was concerned about throughput, then I'd be getting an old Dell R720 rack mount. But I'm not sharing media, and I don't need the hassles of users complaining. This is a small business & my digital photo archive. I don't need power, I need reliability. This gives it to me at a decent price-point.
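The wattage figures above translate into running cost fairly directly. A quick sketch, assuming roughly $0.13/kWh (your utility rate will differ):

```python
# Rough monthly energy/cost figures for the ~180-210W draw reported above.
# The electricity rate is an assumption, not from the post.

def monthly_kwh(watts: float, hours: float = 24 * 30) -> float:
    """Energy used in a 30-day month, in kWh."""
    return watts * hours / 1000

def monthly_cost(watts: float, rate_per_kwh: float = 0.13) -> float:
    """Approximate monthly dollar cost at the given rate."""
    return round(monthly_kwh(watts) * rate_per_kwh, 2)
```

At a steady 200W, that is about 144 kWh a month, i.e. on the order of $15-20/month at typical US residential rates, which is the real trade against the 90W laptop option.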
  14. 22 OCTOBER 2020 - System & Network re-configuring. So I moved the Dell T310 server from "cold standby" to our temporary rental house, which has Verizon FIOS (200Mbps) service. So, sadly, nothing much to report - except more trials and tribulations setting up my network. Went to a new home IP addressing scheme. Systems came up OK, but the VMs are giving me issues, and only partially connecting to services. No idea why. Example: a Google search will produce links, but clicking on any link ends up just hanging in Firefox like there is a DNS conflict - yet everything looks good when I ping the same address. Weird! More later, I'm not done tweaking things yet...
  15. So, bit of an update here. The Zotac P106-90 card just arrived. (Looks like crap, but might still work.) For $30 on eBay, worth the shot. I might be able to do something with a "folding" computational machine, if nothing else. And if it's a working card, and I can't do anything with it, back on eBay it goes at a really good price. (shrug) Hardly a loss when I think about it - as I will have learned something. Reminder: My goal is PLEX transcoding within a Docker using the NVIDIA port of unRAID; not to run a gaming Windows 10 VM. (Sorry if that disappoints, but hey... these are my priorities.) And yes, I've seen the LTT discussion, and the potential for unlocking the NVENC hardware encoder. Still working my way through some of those posts on Reddit, and none of them actually gave it a "fair shot" on Linux IMHO. And considering all the risks involved, that's maybe not a huge thing. If I do this, it's going to be tested on a rig I have with an unRAID 30-day trial license first. Not going to risk my main system on this "thing" (yet.) If I do get it to work, my goal is going to be testing PLEX with H.265 NVENC for some 4K files, and see what happens. Heck, to be honest, I really just want a few streams of 1080p to be cast to some TVs on a gigabit LAN setup, and maybe speed up some other offerings like the BOINC protein folding for the nCOVID-19 project. It's probably going to be 30-60 days before I get any real results, so be patient. (Just got news that I am moving to a rental house for a year while we do a major home reno! So I get to move all my LAN and server gear... so - Press [F] to Pay Respects! to my bank account... ow.)