rollieindc


Posts posted by rollieindc

  1. Update: 08 February 2024 - All good things must come to an inevitable end.

     

    So, with a couple of power line sags after returning to our remodeled home, a voltage regulator on the T310 motherboard was damaged. The system stopped as it was booting, showed a CPU under-voltage error, and wouldn't restart regardless of any configuration changes.

     

    I had a replacement T310 on order, but the order couldn't be filled due to a warehouse inventory error. So, after contacting the company, they agreed to upgrade my order to a T320. With some thought, this "upgrade" will include a 10-core/20-thread Xeon E5-2470 v2 CPU, an updated BIOS, and 96GB of DDR3 ECC - and I was able to transfer the existing 7-HDD 3.5" drive unRAID array and IT-mode H200 controller card. It also took the nVidia 1050 Ti that I had bought. It didn't like my x1 PCIe SSD card, but that's a minor issue.

     

    After moving everything over, I crossed my fingers as the system booted and the array spun up. Surprisingly, no real issues. The array did do a parity check, which came back with zero errors. (Whew!) The good news is that Plex and the other dockers that I use also spun up naturally with little to no fixing required. My VMs were down due to the PCIe error issue, but those were inconsequential and will be "fixed" soon.

     

    I did have a Seagate IronWolf drive show an x0330 SMART error (premature failure), but after looking at the forums, that error seems like a "non-issue" for these drives and leads to incorrect assumptions being made about them. Still, I have migrated the critical data off that drive as a precaution, and am considering re-formatting it, which apparently will clear the x0330 errors. After a week, I am seeing no new errors - so I will follow up on that later. (Just to be clear, I am running two drives as parity!)
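
    For anyone wanting to keep an eye on the same thing, here's a minimal Python sketch to snapshot a drive's SMART attribute table so the raw values can be compared week to week. It assumes smartmontools is installed on the host, and /dev/sdX is a placeholder for the IronWolf's actual device name:

    ```python
    #!/usr/bin/env python3
    # Dump the SMART attribute table for one drive so its raw values can be
    # logged and compared over time. Requires smartmontools on the host.
    import subprocess

    DEVICE = "/dev/sdX"  # placeholder - substitute the real device name

    result = subprocess.run(
        ["smartctl", "-A", DEVICE],   # -A prints the vendor attribute table
        capture_output=True, text=True, check=False,
    )
    print(result.stdout)
    ```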

     

    So, that's kinda the end for the T310 unRAID home server project. It wasn't a bad solution, but it was also time to move on, and I'm still tinkering with the T320 and a T710 that I have lying on a bench. Also, I am likely going to be parting out the T310, perhaps putting in a new motherboard before selling it. Dunno. Let me know if there is any interest in any of the parts. I'm not in any rush to change things from where they are right now. I'm not an IT professional, I'm a home hobbyist. And I like it that way. 😃

  2. 6 hours ago, JonathanM said:

    Those two things are currently mutually exclusive. You can change back and forth with a reboot though.

    Thanks Jonathan - Just came to that realization too.

     

    Ultimately, I wanted this for Plex transcoding. At first, I had tried to use the card in a VM, then removed it from the VM (because it didn't work within the VM) - but when I tried to install the nVidia driver plugin, it said it couldn't find the GPU card:

    • "NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running."

    - even though the card is compatible with the driver (GTX 1050 Ti), and I can still see it in the IOMMU listing (where it appears to be properly captured/identified, as the only device in that group):

    • IOMMU group 16:
      • [10de:1c82] 04:00.0 VGA compatible controller: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] (rev a1)
      • [10de:0fb9] 04:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)

    Is there something I need to "un-re-do" that you can think of?
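
    In case it helps anyone poking at the same thing, here's a minimal Python sketch (run from the unRAID console) to report which kernel driver currently owns each device in that IOMMU group. My understanding is that if the card still shows bound to vfio-pci after being pulled from the VM, the nVidia plugin can't claim it until that binding is cleared and the server rebooted. The PCI addresses are taken from my IOMMU listing above:

    ```python
    #!/usr/bin/env python3
    # Report the kernel driver bound to each device in IOMMU group 16.
    # "vfio-pci" here would explain why nvidia-smi can't reach the card.
    import os

    DEVICES = ["0000:04:00.0", "0000:04:00.1"]  # GTX 1050 Ti VGA + HDMI audio

    for dev in DEVICES:
        driver_link = f"/sys/bus/pci/devices/{dev}/driver"
        if os.path.islink(driver_link):
            driver = os.path.basename(os.readlink(driver_link))
        else:
            driver = "(no driver bound)"
        print(f"{dev}: {driver}")
    ```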

  3. Update: 17 APR 2022 - nVidia on the cheap

     

    Managed to snag an nVidia GTX 1050 Ti (4GB) for $100, so that's replacing the MSI Radeon R7 240 video card. It will be an upgrade to allow transcoding for Plex and light graphics (gaming) use. The card was a little difficult to get into the chassis, but seems to be addressable from the VMs. I need to add the plugin for it, and allow it to be accessed by the Dockers.

  4. Update: 04 APR 2022 - Just another phase

     

    First thing to report: the system remains very stable, and after a couple of years, remains very usable. Having a UPS on the system makes it much more "stable" and less likely to have any power drops - or, if the local power drops, it gives the system enough time (10 min) to wait for power to return, or does an automated shutdown with enough power left to complete the entire recovery process. That was definitely worth the money. Oh, my T310 is on a 750VA UPS from APC, and that seems more than up to the task. Plus I keep an eye on the battery and can change it out if needed. The upgraded CPU (Xeon X3480, 3.06GHz) seems to have been a worthwhile investment too, and has been rock solid.

     

    So - still few successes with the VM engine on this server. I still think the Q35 machine type for KVM is part of the solution, but it's so finicky with Windows and the AMD drivers that I may never really have a stable solution for it. For now, I am just running one VM with a VNC connection, and doing what I need to from it.

     

    Recently, I also started seeing a severe drop-off in use of my BOINC docker image, so that is off the table as well. If I can find a cheap nVidia card (>1050) to put into the box, those might become more relevant - but since my purpose for the server was to have a NAS, I'm happy, since it's doing quite well at that. So well, that I think I need to add my second parity disk into the system soon. And while the prices of spinning disks and SSDs are coming down, I still think I made the right decision in using 4TB drives. New WD Reds and Seagate IronWolfs are sub-$100 brand new, so replacements are easy to come by. The question might become one of needing another set of larger drives to increase overall pool size. For now, I don't see that as necessary. Also for now, I have single-disk failure covered, but I really want to get two-disk failure covered next. A second parity drive should cover that concern for me.

     

    (Drive size economics dialog) But if I went to a larger drive, I'd then have to invest in at least three larger drives just to see any increase and make it worthwhile. I'd guess I'd need to move to 8TB drives, so I might start watching the market for those. (Many 8TB drives are now selling new for $150, so that would mean dropping $450 on drives just to cover the two parity drives and see one NAS data drive grow from 4TB to 8TB.) If I went and replaced all 7 of my current 4TB drives, that means dropping $1,050. I think I need a cheap graphics card before I need a larger NAS. I think my past thinking on the economics of the drives was right: keep adding inexpensive 4TB drives, go double parity, perhaps consider updating the entire server in 2-3 years, and maybe go "pro" on unRAID.
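
    (For anyone checking my math, the numbers work out like this - a quick sketch using the ballpark prices quoted above:)

    ```python
    # Back-of-napkin drive economics from the paragraph above.
    price_8tb = 150      # typical new 8TB price
    drives_total = 7     # my current 4TB drives, including parity

    step_one = 3 * price_8tb        # 2x 8TB parity + 1x 8TB data
    full_swap = drives_total * price_8tb

    print(f"First three 8TB drives: ${step_one}")    # $450
    print(f"Replace all seven:      ${full_swap}")   # $1,050
    ```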

     

    IMHO, 4TB drives are going to remain the mainstay of many small businesses for just those sorts of economic reasons. Besides, have you ever had to rebuild an 8TB drive from parity? (shudders) That takes lots of time and CPU cycles. Ultimately, it will probably be a bigger server (Dell R720 or SuperMicro equiv.) with dual Xeon CPUs that can get me to 4GHz that I put money into. Recently saw some R900 servers come on the market on Craigslist for $250, so I will keep an eye out for those. Still hoping to score a good SuperMicro server MB & CPU set for that 3U chassis I got for next to nothing. So let's see... a new MB or R900, an unRAID Pro license, another SAS card... yeah, I'd still be a lot further ahead than dropping a grand on 8TB drives at that point. The math still works out for me (a hobbyist). If I was a pro in a smaller business, I'd probably need the growth space that the 8 or 10TB drives would give me now. And if I was doing video, I'd just start adding SSDs as I could. SSDs in 5 years are going to drop in price, and overtake mechanical drives in performance, longevity & cost. Heck, you can get 8TB SSDs now for less than $750. (NVMe drives are still too high, but not much over a grand each!) As service integration improves, I can see those being the next level of automated systems connected to Google & Amazon services in the home/business place.

     

    [Other stuff] I also installed Shinobi - to have a system watch and record my security net cams. The DLink services (which most of my cameras to date were based on) are being shut down soon, so I needed to find a backup for that service - and after trying a few, and watching SpaceInvader's tutorial, I can say that Shinobi is pretty nice for the average home user. My frame rates aren't spectacular, but I'm running the cameras over 802.11g on WPA, so I can only get them to do reasonably well. If I ever feel like I need better throughput, I can always hardwire them into their own sub-net. I can also do PoE with them, so that's an option too. But at some point, the cameras will become smaller, cheaper, and 4K compliant, with Google, ADT & Amazon integration, so why worry.

     

    At some point, I also need to rotate the cache drives currently in the server out for smaller ones. 256GB seems more than adequate for most of my uses, versus the 1TB one I am using now. Most of my shares are uncached anyway, since I want to know the files are on the NAS hard drives. I have a UPS, which is integrated really well into the unRAID plugin (THANK YOU, WHOEVER DID THAT!) - but I still don't like the idea of being in mid-transfer and losing some of the files, or gunking up the drives in the cache and not knowing where I stopped.

     

    About the only other thing to report is that I have an older laptop that was dogging it under Windows 10, and now literally "flies" with Ubuntu Studio 20.04 LTS. It's becoming my "everyday lightweight laptop." It won't do "power lifting" - but it is what I am using to type this into the internet portal for unRAID.

  5. Update: 19 FEB 2022 - Pandemic to Endemic Phase?

     

    So, I am back to fighting with the Dell T310 server and the VM for using any kind of GPU card. I tried the recommendation of using a Windows 10 VM with the Q35 4.2 machine type as the emulator to do GPU passthrough - and it continues to be unstable. Sometimes working, other times not. So, I've just given up and plopped a separate PC onto the network - with the ability to log into the server via a browser. It works well with various gaming and other remote desktop options (Chrome) - but I've been trying the newest "NoMachine" RDP, which I like even better. Sure, I have to "run" another PC, but the stability is far better than a server VM, and the overall performance (again, the new PC runs headless) is unquestionably better. I'm just sad I didn't think to do this earlier, as it is a lot less of a headache for me to run this way. Yes, the Dell T310 is still a great remote file server and I am still running Plex from it, which I am happy about, but I am really tired of the KVM platform. If I could run VirtualBox off it, I would - as I know those setups work, and have worked well for me. At some point, I might look at ESXi - but not right now. I have more things that I need to do - and get done. This is probably more about the Dell T310 hardware than the unRAID software, but I am just tired of trying to beat this problem. My system is built, and I just need to load files onto it and use it that way (as a NAS).

     

    Oh, and those NIMBUS 400GB SSDs - they were both crap (end of life), so I was able to return them, and I got two Hynix 256GB SSDs from another source instead.

  6. Problem: Win 10 Pro VM using a GPU won't allow RDP or VNC from within LAN

     

    FWIW, I've had lots of success running similar VMs on VirtualBox (under Windows, Linux or OSX), but with this KVM system, I've got to be missing something... and I'll happily buy a beer (or two) for anyone who has an answer that will allow me to solve this problem.

     

    Server Specs:

     Server: Dell T310, Xeon X3480 (4c/8t, 3GHz, 30-50% CPU utilization), 32GB DDR3, 6x4TB drives (20TB + parity), with battery UPS

     LAN: Gigabit ethernet, Google NEST router (stable) connected to Verizon FIOS (300/300)

     GPU: AMD R7/240 in PCI Express Generation 2 8X (in Full Length Slot 2)

     OS: unRAID 6.9.2

     

    Background: Yes, just to say it, I've been through SpaceInvaderOne's VM primer (THANK YOU!)... and re-watched it about 50 times. But there's something I am missing. And it's frustrating the heck out of me. The goal is to have a Windows 10 VM using a GPU that I can remote into from away from my local network, but at present, I'm not even able to remote into otherwise stable VMs when using my graphics card within my own LAN.

     

    The Dell T310 server has been running unRAID 6.9.2 stably (for weeks, and before that, for months), and I have been running Windows 10 (Pro x64) VMs using the Red Hat QXL driver and VNCing into them without much of an issue. But unfortunately, I have been having significant issues with remoting into them whenever I start to use a GPU. The server's main GPU is on the motherboard (Intel) - but the add-on GPU is an MSI-brand AMD R7/240 card, in the second PCIe (8x) slot. (The SAS controller is in the primary slot.)

     

    If I build/start the VM running Windows 10 Pro x64 using the Red Hat QXL driver, everything seems to run well, and it can continue running for days (weeks) - and I can locally get into it with VNC, Google CRD or MS/RDP without a problem.

     

    However, when I "switch" any stable VM over to using the R7/240, sometimes it will work, and sometimes it will not. Sometimes, I will have to delete the VM, as even going back to the QXL driver won't allow me to VNC into it from within the LAN. And often, it will work for a day or so, then it will become "unavailable" to either CRD or RDP. Sometimes I catch it trying to update the drivers or Windows, and can halt that - but often, something still "happens" and I lose the ability to connect, perhaps a day or two later. Restarting the VM or the server seems to have no real positive effect.

     

    Sometimes, if I change the driver back to QXL, the VM seems to work fine again and I can VNC into it. But afterwards - when I switch primary video over to the GPU - while the VM status board says it's started, I can't remote into it (regardless of trying VNC, RDP, or CRD). And yes, I've let it run for "days" just in case it was a Windows update, but that doesn't appear to be "the issue" either. I also tried switching it to a separate (unassigned) SSD drive, and while that helped execution speed, it did nothing for the ability to RDP into it.

     

    I even tried RealVNC server (from ninite) - and just got the same results (unable to remote into it). Also, when I try to add QXL as a second (or first) video option in the VM, to do dual video, I also lose any ability to connect into it remotely via VNC, RDP or CRD. The same results were also happening with an nVidia 610 card, although I got far more Error 43-type errors with every try I made.

     

    As far as I can tell, I don't appear to have any IOMMU conflicts, and I've tried various VM settings for machine type (i440fx-3.1, or Q35-5.1 or lower) and BIOS (OVMF or SeaBIOS), with no success. Again, sometimes I can get a VM to start up fine, run for a day or two... then it seems to get lost, and I can't remote or VNC into it again until I go back to the QXL video option. And then, after I can remote into it again using QXL - switching or adding the GPU causes it to "get lost" again.

     

    Am happy to provide any additional details. (Please point to whatever procedural diags you want the data from; at this point I am happy to post them, I am just unfamiliar with what is the right thing to post to get help solving this problem.)

     

    So, what am I doing wrong?

     

    Thank you for reading... this has got me pulling my hair out, and I don't have enough of it to spare these days.

  7. Update: 23 NOV 2021 -

    Saw elsewhere on the forum that it's recommended to use a Windows 10 VM with the Q35 4.2 machine type as the emulator to do GPU passthrough. Hopefully this fixes it - more after I try it out.

     

    (And "nope", that didn't work either!)

  8. Headline: 22 NOV 2021 Upgrades & downgrades and Virtual Machines that just won't work

     

    Symptom: Windows 10 Pro VM under 6.9.2 with an AMD R7-240 GPU passthrough causes issues.

     

    Goal: Be able to remote into a Win10 VM running a headless GPU (MSI Radeon R7 240).

     

    Discussion: So, I managed to add an SSD into the system with the use of a PCIe card that can hold two 2.5-inch SSDs. Unfortunately, the SSDs I have are SAS, and the connector is SATA. So, I have a SAS-SATA passthrough connector on the way. (And no, I don't want to Dremel the PCIe card, it's a nice one!) Until then I am using an older 240GB Inland SSD, and while the VM seems stable when running on the QXL VNC/Red Hat graphics driver, the moment I move it to the MSI drivers - all heck breaks loose. Yes, I used the original drivers; yes, I tried the latest; yes, I tried the beta drivers; yes, I tried Chrome RDP; yes, I tried Microsoft RDP. No, I won't try TeamViewer, as they "fouled" my system the last time, and I will never give them or anyone else who routes my VM through their servers another chance. The interesting bit is that it works at first - then some update (Windows or the AMD Radeon software) upgrades the driver - and I am locked out from being able to get back into the machine until I go back to the Red Hat QXL and VNC drivers. And personally, that won't cut it for the work I need to do. Also, it won't let me work the card as a second graphics card either, or as the primary with the MSI/AMD GPU and VNC as secondary. (And yes, I have checked, and the IOMMU groups are in separate number/call slots.)

     

    Plans: At some point though, two Nimbus 400GB SAS SSDs will go into the system, the VM will move to them, and I will be beating the graphics into submission. In short, it's frustrating as heck. I might pull the card completely, and try another card (nVidia GT680 or 610) that I have, but the nVidia cards were similarly cursed with Error 43 problems. I might be up against it with this T310, in that the VMs just will not allow a separate GPU to run on them. If that's the case, then I will start looking to offload this machine and move to a SuperMicro dual Xeon system that I have and can upgrade.

  9. Headline: 12 OCT 2021 And then...BOOM: "Automatic unRaid Non-Correcting Parity Check will be started"

     

    Symptom: Running a Windows 10 Pro VM under 6.9.2 with GPU passthrough causes a system-wide crash.

     

    Goal: Be able to remote into a Win10 VM running a headless GPU (MSI Radeon R7 240).

     

    Preface: Yes, I've watched SpaceInvader's VM video guide about a dozen times, plus a few others, and followed the passthrough process for a GPU. Yes, I've tried using nVidia cards (GT610, GT1030, etc.) without any success - only seeing the dreaded "Error 43" pop up repeatedly. I decided to go Radeon and picked up an MSI R7 240 card at MicroCenter to give that a go. Easy, right? No. Far from it so far.

     

    At this point, I'm seeking advice. I'm about at the point where I am considering a dedicated gaming laptop with remote desktop access as an alternative, vice trying to continue to use an unRAID VM with GPU passthrough.

     

    Background: I've been trying to remote into a Win10Pro x64 VM for about two years on my Dell T310 and had been "now and again" successful - but ONLY when using the VNC QXL controller. That seems to work, and I've had one running for about 6 months stably (at least as much as Windows is stable). It's not bad, but it's not able to handle a gaming program (SecondLife) that I enjoy playing. And ultimately, I want to be able to do this over a remote connection (I tried Chrome Remote Desktop, which kinda-sorta worked). But most of the time - while the Red Hat QXL controller works fine - anytime I switch to using the R7 240 GPU as passthrough and remote connect into it, the system hangs in one way or another.

     

    Still, I felt like I had been making slow progress by "tinkering" with the system in the hope I could get it to work.

     

    But last night, I had an interesting new artifact start to occur (I recently upgraded to unRAID 6.9.2): when I changed the VM to use the installed MSI Radeon R7-240 card, things went really bad. At first it worked, but it would lock up with any updates to the MSI video drivers. (Ugh.) But I got past that - and now it just crashes the entire system from a fresh install when I change the VM over to use the GPU - and not just the VM - the entire server. When I reboot the server, I get an "Automatic unRaid Non-Correcting Parity Check will be started" in the log file. And the last time it ran, it detected no errors. (Is it possible that since I am running the VM from a disk share (disk5), something is going wrong there? Should I maybe pop in a separate "unassigned" HDD for the VM?)

     

    Anyway - I've tried all sorts of VNC programs, MS Remote Desktop for Windows, and various means of connecting the card within the VM (although not Guacamole- "yet")... and just not sure what's the real issue. Not even sure what information to post that might be helpful (Feel free to post a link to the standard reporting protocol for the forum, that's probably where I need to start.)

     

    One thing though: please do not recommend TeamViewer. I essentially was "blackmailed" by their system admin process to cough up the price of a commercial license (Hack-hack, at $50.90/mo - "Are you out of your [censored]?") - and my VM system was unreachable for over a week while I was basically told "Pay up, deadbeat." So I finally deleted that VM, and vowed never to trust that program again to log into a VM.

     

    Anyway.... still going to keep plugging at this problem, but if anyone is interested in helping out... drop me a line. I'd appreciate it. At least worth a beer (or KoFi).

     

    (Added - Diagnostics Download)

    nastyfox-diagnostics-20211012-2227.zip

  10. 15 minutes ago, joleger said:

    ...thanks in no small part to your post.

     

    That's exactly why I keep doing it: to save others trouble and let them "get on with it!" And I am ecstatic to hear that you got the H200 flashed and your system up and running, Jason! Let me know how you get along with it. If you start your own thread, I will want to follow it. Feel free to drop me a note here or via email anytime. Glad to share what I've learned and compare notes.

  11. 06 SEPTEMBER 2021 - The DELTA COVID Periodic Update Post

     

    Personal update: Wow, where does the time fly? Oh, right, virtually while at work. Had two scares of COVID Delta in my office, so had to get tested for both (cue the brain swab music) and thankfully tested negative both times. Daughter also had two separate surgeries (mostly minor, but these days, nothing is really minor), and I had a visit to the ER with Kidney Stones (thankfully small and easy to pass.)

     

    System Updates: So my Dell T310 has been working well (24/7/365) for the most part, and I've made a couple of minor upgrades on the server. I did move from the Xeon X3440 to an X3480 to boost the clock up to 3.06GHz, via a Chinese eBay refurb CPU for $71. That seems to have helped with the overall utilization of tasks/data flow, without impacting the thermals (both chips are rated at 95W, so it was an easy swap, and it allowed me to put new thermal paste on the CPU).

     

    I also updated unRAID to 6.9 (now on 6.9.2), and that seems to be running without much of an issue. I do like the new options, user interface layout and overall system information flow. I am mostly still using the box as a NAS and a Plex media server.


    I also moved my BOINC setup from a Windows 10/x64 Pro virtual machine to a Docker image. That was a significant boost in compute speed within BOINC (Rosetta) and a significant reduction in memory overhead.

     

    Oddity: I did notice the other day that the memory went from 32GB to 16GB "available" in the memory status. Not clear why that was, but I suspect it was a combination of VM and Docker usage. The memory returned after a reboot, but I am going to keep an eye on it.

     

    Unfinished work left to do: There are still some issues that I run into when I run a VM on unRAID, mostly that the graphics/GPU card seems unable to be passed through correctly to any VNC or remote desktop I've tried (and I've tried a bunch). I saw that there is a new option out for using Guacamole, so that might be something I try soon-ish. The VM runs basically fine using the included VNC graphics drivers, but it's just not as "snappy" as using the GPU.

     

    Other stuff: In the meantime, I also acquired a Mac Pro (Mid 2010) and upgraded the boot drive (SSD), memory (32GB), MacOS X, and the CPUs (now dual hex-core X5680s @ 3.33GHz each) - and even have it running Win 10/x64 as well as High Sierra or Mojave. I'll probably slap on OpenCore soon so I can just multiboot across any of the three OSes. I did go with the AMD RX580 GPU, after struggling with an nVidia 680 GPU, and I'm glad I did. But that is another story for another time.

     

    And I also upgraded my home's wireless from a DLink WiFi 6 router (DIR-1950) to a Google Wifi (not NEST) mesh system - connected to 200Mbps Verizon FIOS. It didn't affect the server at all, but I try to keep track of what's going on with my network here.

     

    That's about all I have right now. And I have to update the signature block...

  12. On 3/23/2021 at 11:28 AM, joleger said:

    First off...Great Post!   Lots of colour and details.  Very informative!  Thank you.

    I was just curious if you looked at any other cards besides the H200 and H700?

    How is the H200 working for you?

    Thanks for any thoughts you can provide.

    Jason

    Hi Jason! Sorry for the long period of not posting... it's been a slog for me. But glad you got something out of this. So, to your questions: I looked at other cards, and the H200 has been the best so far for me. Nothing wrong with the H700, but I had some concerns about being able to flash it and use drives larger than 2TB, as I recall. Plus, there was the use of the battery for cache backup that always concerned me. Not a huge issue, but I needed about 20TB for my system, so that made the flashed H200 a better choice for me.

     

    Going to do an update here shortly... but this COVID isolation has been a real slog for me.

  13. Feb 6, 2021 - New DDoS Plex Media Server Vulnerability?

    There seem to be a number of recent news posts, like this one, that PLEX Media Server is enabling distributed denial-of-service (DDoS) attacks across a number of vulnerable servers/systems. My understanding is that this is as much a network configuration issue as a PLEX software issue, as it seems to rely on exploiting router port configuration (32400-32414) vulnerabilities. As PLEX is configured, users often enable external (internet) access to media (movies, music, etc.) from their server to external devices (iPhones, tablets, etc.) through the configuration process, using protocols like Universal Plug and Play (UPnP). UPnP allows systems on the same network (Server->Router) to seek each other out and share file access. UPnP often uses the Simple Service Discovery Protocol (SSDP) in order to do this.

     

    This is apparently where external hackers/attackers take advantage, by leveraging the exposed SSDP on those specific router ports in DDoS amplification attacks. I don't understand all the dynamics of it, and am looking for that and other insights - especially where it comes to unRAID and PLEX interacting.
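
    From my reading of the articles, the probe is just an SSDP-style M-SEARCH packet sent to the Plex discovery port. Here's a rough Python sketch of that check - the port number and payload are my assumptions from the news reports, so treat it as a sketch, not gospel. Run it from outside your LAN against your public IP; a reply would suggest the port is exposed:

    ```python
    #!/usr/bin/env python3
    # Send an SSDP-style M-SEARCH probe and see if anything answers.
    # Port 32414 (Plex discovery) is my assumption from the news reports.
    import socket

    HOST = "192.168.1.50"   # placeholder - your server or public IP
    PORT = 32414

    probe = (
        "M-SEARCH * HTTP/1.1\r\n"
        f"HOST: {HOST}:{PORT}\r\n"
        'MAN: "ssdp:discover"\r\n'
        "MX: 2\r\n"
        "ST: ssdp:all\r\n\r\n"
    ).encode()

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3.0)
    sock.sendto(probe, (HOST, PORT))
    try:
        data, addr = sock.recvfrom(4096)
        print(f"Reply from {addr}: {data[:200]!r}")  # a reply = it's listening
    except socket.timeout:
        print("No reply - nothing answered the probe.")
    finally:
        sock.close()
    ```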

     

    My questions are:

     0) Should I be concerned? (I temporarily stopped/took my PLEX docker server offline on my unRAID server, and closed the port on my router. I am also on Verizon FIOS - so I'm not sure if they are "intercepting" the DDoS within their network?)

     1) Has anyone seen artifacts of a DDoS like this on their unRAID systems (either in VMs or Dockers)?

     2) Does anyone know if the vulnerability would likely exist with the port forwarding typically seen with most home routers and a PLEX (unRAID) server? Would/could other local networked systems be compromised? How would you tell (on unRAID or other)?

     3) Would PLEX Media Server be more or less (or equally) vulnerable as a VM or as a Docker on unRAID?

     4) PLEX said they would be issuing a patch in the next few days; any idea how long that would take to propagate into the Docker versions in the Community Distributions in unRAID?

     

    Thanks for reading, and thanks especially for anyone more knowledgeable than me to provide additional insight and knowledge. It's greatly appreciated, and this forum is great - thanks to those who share information, and help keep it running!

  14. On 1/22/2021 at 12:07 AM, benak said:

    I have a T310 sitting around, I'm curious if you have any power usage/consumption data? 

    Nothing I can really share. I do have an APC UPS attached, and it seems to run around 150 watts on idle. If I get data from it, I’ll post it in this thread.

    FWIW - The 1050 is a nice add -IF- you plan to do transcoding with Plex; otherwise, I'd not bother. The VM performance on the T310 is "less than stellar" of late with build 6.8.3. And I'm happier to just have a headless NAS, with a Win10x64 image for a VM running qBittorrent, and a Plex Docker. If I had it to do over, I'd build on a more modern SuperMicro base. But hey, cheap hardware.

     

    Update:

    So, I went back to look at my UPS logs, and my system seems to pull between 180-210 watts, depending on loads. I'm running a couple of VMs (one with BOINC for nCOVID folding & qBittorrent), and a couple of Dockers (one is Plex), but nothing too heavy. If I was running harder, I could see this going up to 300 watts easily. I probably top 350 watts at start/spin-up. If I was concerned about power, I'd definitely go to a laptop (90 watts, 3GHz i7) with dual SSDs (M.2s in RAID) and a SATA external drive-based expander with larger (>10TB) drives that could spin down. But to be honest, I'm still ahead of the game on cost with this system (I think). If I was concerned about throughput, then I'd be getting an old Dell R720 rack mount. But I'm not sharing media, and I don't need the hassles of users complaining. This is a small business & my digital photo archive. I don't need power, I need reliability. This gives it to me at a decent price point.
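
    (If anyone wants to turn those watt figures into dollars, the arithmetic is simple - a quick sketch, where the electric rate is purely a placeholder for your own $/kWh:)

    ```python
    # Back-of-napkin running cost for the UPS draw figures above.
    rate_per_kwh = 0.12              # placeholder rate - substitute your own

    for watts in (180, 210):         # idle-to-loaded range from the UPS logs
        kwh_year = watts * 24 * 365 / 1000
        print(f"{watts}W steady: {kwh_year:.0f} kWh/yr, "
              f"~${kwh_year * rate_per_kwh:.0f}/yr at ${rate_per_kwh}/kWh")
    ```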

  15. 22 OCTOBER 2020 - System & Network re-configuring.

     

    So I moved the Dell T310 server from "cold standby" to our temporary rental house, which has Verizon FIOS (200Mbps) service. So, sadly, nothing much to report - except more trials and tribulations setting up my network. I went to a new home IP address scheme. Systems came up OK, but the VMs are giving me issues, and only partially connecting to services. No idea why. For example, a Google search will produce links, but clicking on any link ends up just hanging in Firefox like there is a DNS conflict - yet everything looks good when I ping the same address. Weird! More later, I'm not done tweaking things yet...

  16. On 8/10/2020 at 12:05 AM, tjb_altf4 said:

    Officially not supported, but...

    https://forums.developer.nvidia.com/t/nvenc-and-gp106-100/67850/3

    But with some hacked drivers it might work

    So, a bit of an update here. The Zotac P106-90 card just arrived. (Looks like crap, but might still work.) For $30 on eBay, it was worth the shot. I might be able to do something with a "folding" computational machine, if nothing else. And if it's a working card and I can't do anything with it, back on eBay it goes at a really good price. (shrug) Hardly a loss when I think about it - as I will have learned something.

     

    Reminder: My goal is PLEX Transcoding within a Docker using the NVIDIA port of unRAID;  not to run a gaming Windows 10 VM.

    (Sorry if that disappoints, but hey... these are my priorities.)

     

    And yes, I've seen the LTT discussion, and the potential for unlocking the NVENC hardware encoder. Still working my way through some of those posts on REDDIT, and none of them actually gave it a "fair shot" on Linux, IMHO. And considering all the risks involved, that's maybe not a huge thing. If I do this, it's going to be tested on a rig I have with an unRAID 30-day trial license first. Not going to risk my main system on this "thing" (yet). If I do get it to work, my goal is going to be testing PLEX with H.265 NVENC on some 4K files, and seeing what happens.

     

    Heck, to be honest, I really just want a few streams of 1080p cast to some TVs on a gigabit LAN setup, and maybe to speed up some other offerings like the BOINC protein folding for the nCOVID-19 project.

     

    It's probably going to be 30-60 days before I get any real results, so be patient. (Just got news that I am moving to a rental house for a year while we do a major home reno! So I get to move all my LAN and server gear... so - press [F] to pay respects to my bank account... ow.)

     

     

     

    bank account.png

  17. On 5/8/2020 at 6:51 PM, xeats said:

    Do you know if it's possible please ?

    Am ordering one now off eBay, and will follow up on this after I have some chance to test it out. Everyone I've talked to says it's "possible," but it has not yet been tested. Also, I don't expect the performance of a GTX 1080, but if it can do an H.265 transcode, it will be worth it.

  18. 08 AUGUST 2020 - System down for a bit.

     

    Put the Dell T310 server on "cold standby," as the temps in the basement were getting outrageous, and that's where my home office is. We're planning a house renovation, so this is a short-term summer heat issue. So, sadly, nothing to report. I did finally give up on Verizon DSL and went to Comcast (200Mbps) service. I'm not pushing the 1TB data cap yet, so no real concern there either. And if I did, I'd adjust the service to unlimited downloads via a business account. I am going to add another SSD to the system to run my VMs from, but that's a minor detail. I did pick up an HDMI to USB 2.0 video converter, which I will be using to capture some older videos off VHS tape. Quality seems good at 1080p30, although most of the tapes will probably look better at 720p30. At least it's something to play with and pass the time.

  19. 25 JULY 2020 - A long overdue 2 month update.

     

    First, I'm doing "ok" with the Pandemic. No illness in the family. Had a colleague die from it, and another recover from it. Not my idea of fun.

     

    I also have "ditched" Verizon DSL in favor of a Comcast 200/5mbs dry drop (internet only). After being on it for a month, I am "good" with the stability that I've finally been able to get. Yes, the 1TB data cap sucks, but I've not been bumping up against it (yet), so I am not too worried. And since I don't serve out of the house (and am not traveling) - Comcast 200/5 beats the 4.0/0.6 I was getting from Verizon.

     

    But, it's been two months since my last post, and not much else has changed in my server system configuration or operation. I have taken the system down for a few days to reduce the heat load in the house, since we've been at 100F or higher this week. Still encountering a lot of issues with VM stability, but only when I use a graphics card in the VM and try to use a remote desktop app to access it. That kills the "standard" VNC connection, and I have to use a remote desktop like Remote Desktop for Chrome - which was working, but would die with any Windows 10 updates. Then I tried "AnyDesk" - which appeared to crash my laptop.

     

    And I won't use TEAMVIEWER ever again. Once badly burned, never again.

     

    After that crash, and a short "flirt" on my laptop with Linux Mint and the Xfce desktop, I am back to running Windows 10 on the laptop in a "bare metal" configuration. I liked Linux Mint (and have it running as a VM on the server), but it just wasn't stable on my Dell Precision M6500 laptop, even with 16GB of RAM. It would "slog" from time to time, and the software loaders were terrible. So, back to Windows 10 x64 (1909). I had Windows 7 on the laptop before, but with EOL on Win 7 x64, I knew this day would come - and now that I am changed over, I'm not really missing anything about Windows 7, except maybe the "gadgets" - which I know I could reload with "Rainmeter" - but honestly, why bother?

     

    Back to the server: So I'm watching the 6.9.0 beta comments, and don't think I will be jumping over to it too quickly on my Dell T310 server. I'm basically happy with Plex and my couple of dockers and VMs running (except for the remote desktop issues), all under 6.8.3, and am watching the discussions prior to the 6.9 public release. I want them (Lime Tech) to work out the last of the bugs - and then, that's when I will move over. To be honest, I wish they'd allow implementation of a VirtualBox VM system, as the VM engine on unRAID still seems plagued by instabilities on some platforms. I'm looking forward to some of the new VM engine improvements I've heard of.

     

    I also have a new server box that I will transition to at some point, but the issue is more one of "why." At this point, if I need more speed, I can pop the Xeon CPU out of the current box and plug in a new CPU with a 40% bump in speed for about $50 - or, in a new system, I can replace the single Xeon with a dual-CPU box and motherboard, with at least a 100% bump in performance, and with a lot more drive bays. But since the current system is not doing much from midnight to noon, I am wondering, "Why change anything?" I'm a firm believer in "let sleeping dogs lie." At some point I'll get bored with the performance, need a video editing rig, and something to process filters on lots of digital photos... but I am not (quite) there yet. Plus, with the new 5GHz systems coming out, there will be some "cheap" 4GHz servers out on the second-hand market before long. That will be soon enough for me to change the server.

     

    I think one of the other upgrades I will need to do is put in an SSD that will be an "unassigned drive" in the system, just to load the VMs onto and run them from. Although, I really do like the idea of adding a new PCIe card that will handle multiple M.2 SSD drives. But not today, or this month. I have other important things to do... like make a backup image of all my server files. =)

  20. Tuesday, 26 MAY 2020 - nCOVID19 Edition, Day 71

     

    Still working at home remotely from my daytime job. The local area is pretty much shut down at the moment, as we remain a "hot spot" of local infections, with some local population testing coming back over 50% positive.

     

    The Western Digital "WD Re 4TB Datacenter Capacity Hard Disk Drive" from past posts has not had another single error, so I think it was the Windows 10 VM that gave it some "throwback" error in data.

     

    Replaced my router (I'm on Verizon DSL, which stinks!) and solved most of my networking issues by using a DLink DIR-1950 router. No issues with Plex or other passthroughs with it. Speaking of the Plex (linuxserver.io) docker - it continues to work smoothly. Chromecasting 1080p with the V1.0 wifi unit has also eased up a lot. Have had a few issues with bandwidth and stability with the Win10 VM running "BOINC" in the background. I think it's an issue of overall power consumption on the VM. It just doesn't like BOINC if I give it more than about 80% of the CPU clock and 80% of the CPU usage. I think the other Win 10 system-level programs just have bandwidth issues when I do that.

     

    Still have Monitorr and Netdata running, and got around the one issue of showing more services in Monitorr. It still can only show 3 disks, though, and other limited data on the system. But for that, there is Netdata, so... who cares. Starting to poke around with the Home Automation (HA) dockers. I have lots of cameras & hardware for those items. But for now, I am working to get base network stability re-established.

     

    Hope everyone is staying inside and by all means - STAY HEALTHY & WEAR A MASK!

  21. Wednesday, 29 APRIL 2020 - Cabin Fever Edition

     

    Still working at home remotely from my daytime job.

     

    The Western Digital "WD Re 4TB Datacenter Capacity Hard Disk Drive" that had the previous issues appears to be working fine. Zero errors since the cable checkup and reboot. It may have been due to a Windows VM I was running. So, maybe I dodged the bullet there.

     

    Had some networking issues, and ended up bricking my TP-Link AC1750 router. I liked that thing. Will continue to see if I can "unbrick" it, but at this point, it won't even allow a TFTP link into it. I was trying to force a firmware update from OpenWRT. Had it working with DD-WRT, but that was a lousy firmware distro and gave me all sorts of issues. Thankfully, it was a Goodwill find, so it's only $13 lost if it is "bricked."

     

    Got the Plex (linuxserver.io) docker working more smoothly. I was even able to Chromecast 1080p to a V1 wifi unit, and it looked good on the 1080p HDTV I have. "Lagginess" went away when I found that you can adjust the number of CPU cores/threads within the docker menu, on the "CPU Pinning Docker" tab of the CPU settings on the dashboard (e.g. https://tower.local/Dashboard/CPUset ). Much love & appreciation to whomever coded up that little helper - very clean and easy. Set my Plex to four threads as a default, and boom - no more lag. It didn't seem to use up any more CPU cycles... but I will be watching that. The nice bit is that it looks like you can adjust VMs and other programs just as easily.
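
    (My loose understanding is that the pinning tab is a front end for Docker's cpuset support - roughly the equivalent of the sketch below, where the container name and thread numbers are just examples:)

    ```python
    #!/usr/bin/env python3
    # Rough equivalent of the CPU pinning tab: restrict a running container
    # to a set of host CPU threads via Docker's cpuset support.
    import subprocess

    CONTAINER = "plex"   # example container name
    CPUSET = "4,5,6,7"   # four threads, like my Plex setting

    subprocess.run(
        ["docker", "update", "--cpuset-cpus", CPUSET, CONTAINER],
        check=True,
    )
    print(f"Pinned {CONTAINER} to CPU threads {CPUSET}")
    ```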

     

    Also have Monitorr running, which is a nice dashboard once you get it set up. (Hint: set /disk1/ as /../mnt/disk1/ - see the attached image - to get the added disks included. If anyone knows how to add more services than what the docker allows, please give me a reply! I could only show a few services and only 3 disks... not sure why it was so limited. Must be a setting I am missing.)

     

    The BOINC software in a Windows 10 VM continues to churn out computations for coronavirus folding daily. Glad I can give something back.

     

    Hope everyone is staying inside and by all means - STAY HEALTHY & WEAR A MASK!

    Screenshot_6a.gif

    Screenshot_4.jpg

  22. Tuesday, 21 APRIL 2020 - STAY AT HOME EDITION!

     

    I am still working at home remotely from my daytime job, and mostly continuing to do minor maintenance with unRAID 6.8.3.

     

    I ran into some interesting issues with a Western Digital "WD Re 4TB Datacenter Capacity Hard Disk Drive" (7200 RPM class, SATA 6Gb/s, 64MB cache, 3.5-inch, WD4000FYYZ) recently. It came up with a couple of read errors, which I think are cable interface issues. So I'll have to crack the case to see "what's up" soon. The SMART drive report only shows "Extended offline Testing - Completed without error," but it had an earlier "READ FPDMA QUEUED" error - which was dated long before I precleared & reformatted the drive. So, it's just something I will continue to watch for in the future. unRAID said it recovered from the "read" errors it had, and the drive appears to work well otherwise. So, I will see how it goes. I've currently only got a single Win 10 VM on it, and have that backed up separately, just in case. I also looked at it with the diskspeed docker app, and it appears to be working within the expected read/write range of all my other drives. It is in the hard drive "extension" that I added to the system, so it could be a power handling issue... more to follow up on later.

     

    (Late add: I really need to add a RAID 1 M.2 NVMe SSD pair for VMs, and get them off the cache and shares. I have a spare x4 PCIe slot, so that should provide power and enough space for anything I could come up with to run on a remote desktop. Am guessing that's at least a $150 add to the system when I tally it all up: 2 NVMe drives + 1 interface card.)

     

    Plex (linuxserver.io) threw a small hissy fit when I upgraded to the latest docker image (version 1.19.1.2645), but stopping and restarting the docker seems to have fixed the issues. It does seem a little more "laggy" at start-up, so the cache and other database information may just need to "settle" to get it back to running like normal. And these days, Dockers keep me busy for the most part. I am also having a fair bit better performance with it running MP4-encoded files vice MKV-encoded files. MKV is slightly smaller, and has "ok" versatility over a number of devices I broadcast to, but is not as versatile or flawless in playback as MP4.

     

    I am also trying to read up on and understand how the letsencrypt docker works, however. I need to watch some videos for that. So, just to be clear: I am running 95% as a local network NAS only. I have a couple of VMs that I come into via Chrome Remote Desktop, and Plex (lifetime subscription) that I use from remote locations - but I don't need (or want) a website running. I would like SSL certificates to be resolved correctly, and maybe address the VPN issues, but it's not something I lose sleep over. And I just don't want my server hacked. I could attach it to a website I have (a .com addy, where I own the domain and can access the DNS record, so I could make it nastyfox.blahblahblah.com), but again, I don't really 'need' that functionality. And if I did set it up, I'd want RSA/AES-256 encryption with a 30-character-long random password as a bare minimum.

     

    And BOINC in the Windows 10 VM continues to run well. Still a very simple software install to do protein folding in the hopes of finding vaccines for nCOVID19. I did load MacinaBox, but haven't made a VM with it yet. I need to do that. No problem with getting a copy and installing it.

     

    Hope everyone is staying inside and by all means - STAY HEALTHY!

  23. Wednesday, 08 APRIL 2020 - Nothing much new.

     

    So, mostly minor maintenance with unRAID 6.8.3. I am working at home remotely from my daytime job, so that keeps me busy for the most part during the day. In between, I am loading up my Plex library (linuxserver.io) with movies ripped from the DVDs we have, and making the occasional over-the-air (OTA) TV recording from the HDHomeRun Duo on the network. Mostly doing small stuff. I did install BOINC in a Windows 10 VM and ran that for a few "quadrillion" cycles. A simple software install to do protein folding in the hopes of finding vaccines for nCOVID19. I run my VMs with Chrome Remote Desktop software, and am able to do that on anything from my iPhone 7, Chromebook, HP desktop or Dell laptop.

     

    I did hear that MacinaBox was not available on CA for some reason. I still grabbed a copy anyway and installed it.

     

    Also became an Administrator of the "unRAID Family" group on Facebook. (Link here) It's a good group, with some good insights - and some very needy people looking for advice.

     

    Hope everyone is staying inside and by all means - STAY HEALTHY!
