My first hobby “TOWER”


rollieindc


17 minutes ago, rollieindc said:

Hi Jason! Sorry for the long stretch without posting... it's been a slog for me. But glad you got something out of this. So, to your questions: I looked at other cards, and the H200 has been the best so far for me. Nothing wrong with the H700, but as I recall I had concerns about being able to flash it and use drives larger than 2TB. Plus, the battery for cache backup always concerned me. Not a huge issue, but I needed about 20TB for my system, so that made the flashed H200 the better choice for me.

 

Going to do an update here shortly... but this COVID isolation has been a real slog for me.

Hey rollieindc,

 

No worries...totally understand.

 

I ended up buying an H200 flashed to IT mode as well. Snagged it off eBay. My server has been up and running for months without issue... thanks in no small part to your post.

 

Thanks again.

 

J.


06 SEPTEMBER 2021 - The DELTA COVID Periodic Update Post

 

Personal update: Wow, where does the time fly? Oh, right, virtually while at work. Had two COVID Delta scares in my office, so had to get tested for both (cue the brain swab music) and thankfully tested negative both times. My daughter also had two separate surgeries (mostly minor, but these days nothing is really minor), and I had a visit to the ER with kidney stones (thankfully small and easy to pass).

 

System Updates: My Dell T310 has been working well (24/7/365) for the most part, and I've made a couple of minor upgrades to the server. I moved from the Xeon X3440 to an X3480 to boost the clock up to 3.06 GHz, via a Chinese eBay refurb CPU for $71. That seems to have helped overall throughput of tasks/data flow without impacting thermals (both chips are rated at 95W, so it was an easy swap, and it let me put fresh thermal paste on the CPU).

 

I also updated unRAID to 6.9 (now on 6.9.2), and that seems to be running without much issue. I like the new options, user interface layout, and overall system information flow. I am mostly still using the box as a NAS and a Plex media server.


I also moved my BOINC setup from a Windows 10/x64 Pro virtual machine to a Docker image. That brought a significant boost in compute speed within BOINC (Rosetta) and a real reduction in memory overhead.
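For anyone wanting to replicate the move, here is a minimal sketch of the equivalent command line, assuming the official boinc/client image (on unRAID you'd normally do this through a Community Applications template; the paths and password below are placeholders):

```bash
# Minimal sketch - boinc/client is the official BOINC Docker image;
# the appdata path, password, and options are example placeholders.
docker run -d --name=boinc \
  --net=host \
  -v /mnt/user/appdata/boinc:/var/lib/boinc \
  -e BOINC_GUI_RPC_PASSWORD="changeme" \
  -e BOINC_CMD_LINE_OPTIONS="--allow_remote_gui_rpc" \
  boinc/client
```

Once it's up, you can attach it to a project with boinccmd inside the container, or point a BOINC Manager on another machine at the server's IP.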

 

Oddity: I did notice the other day that available memory dropped from 32GB to 16GB in the memory status. Not clear why; I suspect a combination of VM and Docker usage. The memory returned after a reboot, but I'm going to keep an eye on it.
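(If it drops again, a quick sanity check from the unRAID terminal - standard Linux tools, nothing unRAID-specific - should show whether the kernel still sees all the DIMMs or whether a VM or container is holding the RAM:)

```bash
free -g                               # totals as the kernel sees them, in GiB
dmidecode -t memory | grep -i size    # per-DIMM sizes reported by the BIOS
docker stats --no-stream              # per-container memory use
virsh list --all                      # VMs that may be holding a static allocation
```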

 

Unfinished work left to do: There are still issues when I run a VM on unRAID, mostly that I can't get the graphics/GPU card passed through correctly to any VNC or remote desktop client I've tried (and I've tried a bunch). I saw there's a new option for using Guacamole, so I might try that soon-ish. The VM runs basically fine on the included VNC graphics drivers, but it's just not as "snappy" as using the GPU.

 

Other stuff: In the meantime, I also acquired a Mac Pro (Mid 2010) and upgraded the boot drive (SSD), memory (32GB), MacOS X, and the CPUs (now dual hex-core X5680s @ 3.33 GHz each) - and even have it running Win 10/x64 as well as High Sierra or Mojave. I'll probably slap OpenCore on it soon so I can multiboot across any of the three OSes. I went with the AMD RX580 GPU after struggling with an nVidia 680 GPU, and I'm glad I did. But that's another story for another time.

 

And I also upgraded my home wireless from a D-Link WiFi 6 router (DIR-1950) to a Google Wifi (not Nest) mesh system - connected to 200 Mbps Verizon FIOS. It didn't affect the server at all, but I try to keep track of what's going on with my network here.

 

That's about all I have right now. And I have to update the signature block...

15 minutes ago, joleger said:

...thanks in no small part to your post.

 

That's exactly why I keep doing it - to save others trouble and let them "get on with it!" I'm ecstatic to hear that you got the H200 flashed and your system up and running, Jason! Let me know how you get along with it. If you start your own thread, I will want to follow it. Feel free to drop me a note here or via email anytime. Glad to share what I've learned and compare notes.

  • 1 month later...

Headline: 12 OCT 2021 - And then... BOOM: "Automatic unRaid Non-Correcting Parity Check will be started"

 

Symptom: Running a Windows 10 Pro VM under 6.9.2 with GPU passthrough causes a system-wide crash.

 

Goal: Be able to remote into a Win10 VM running on a headless GPU (MSI Radeon R7 240).

 

Preface: Yes, I've watched SpaceInvader One's VM video guide about a dozen times (and a few others) and followed the passthrough process for a GPU. Yes, I've tried nVidia cards (GT610, GT1030, etc.) without any success - only the dreaded "Error 43" popping up repeatedly. So I decided to go Radeon and picked up an MSI R7 240 card at MicroCenter to give that a go. Easy, right? No. Far from it so far.
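(For what it's worth, the workaround most often cited for nVidia's Error 43 - at least before nVidia relaxed the consumer-card restriction in later drivers - is hiding the hypervisor from the guest in the VM's XML. A sketch of the relevant fragment, editable via the VM's XML view; the vendor_id string is an arbitrary 12-character value:)

```xml
<features>
  <hyperv>
    <!-- existing hyperv entries stay; this masks the KVM vendor signature -->
    <vendor_id state='on' value='0123456789ab'/>
  </hyperv>
  <kvm>
    <!-- hide the KVM hypervisor flag so the driver doesn't bail with Error 43 -->
    <hidden state='on'/>
  </kvm>
</features>
```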

 

At this point, I'm seeking advice. I'm about at the point of considering a dedicated gaming laptop with remote desktop access instead of continuing to fight with an unRAID VM and GPU passthrough.

 

Background: I've been trying to remote into a Win10 Pro x64 VM on my Dell T310 for about two years and had been "now and again" successful - but ONLY with the VNC QXL controller. That works, and I've had one running for about six months stably (at least as stable as Windows gets). It's not bad, but it can't handle a gaming program (Second Life) that I enjoy. Ultimately, I want to do this over a remote connection (I tried Chrome Remote Desktop, which kinda-sorta worked), but while the RedHat QXL controller works fine, any time I switch the VM to the R7 240 GPU as passthrough and remote into it, the system hangs one way or another.

 

Still, I felt like I had been making slow progress by "tinkering" with the system in the hope I could get it to work.

 

But last night an interesting new artifact started to occur (I recently upgraded to unRAID 6.9.2): when I changed the VM to use the installed MSI Radeon R7 240 card, things went really bad. At first it worked, but it would lock up on any update to the MSI video drivers. (Ugh.) I got past that - and now, from a fresh install, it crashes the entire system the moment I change the VM over to the GPU. Not just the VM - the entire server. When I reboot the server, I get "Automatic unRaid Non-Correcting Parity Check will be started" in the log file. The last time it ran, it detected no errors. (Is it possible something is going wrong because I'm running the VM from a disk share (disk5)? Should I maybe pop in a separate "unassigned" HDD for the VM?)

 

Anyway - I've tried all sorts of VNC programs, MS Remote Desktop for Windows, and various means of connecting to the card within the VM (although not Guacamole - "yet")... and I'm just not sure what the real issue is. I'm not even sure what information to post that might be helpful. (Feel free to post a link to the standard reporting protocol for the forum; that's probably where I need to start.)
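(For anyone landing here with the same question: the standard ask on this forum is the diagnostics zip, which you can generate from Tools > Diagnostics in the webGUI or straight from the terminal - that's how the file attached below was produced:)

```bash
# From the unRAID terminal; writes an anonymized diagnostics zip to /boot/logs
diagnostics
```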

 

One thing though: please do not recommend TeamViewer. I was essentially "blackmailed" by their system admin process into coughing up the price of a commercial license (Hack-hack, at $50.90/mo - "Are you out of your [censored]?"), and my VM was unreachable for over a week while I was basically told "Pay up, deadbeat." So I finally deleted that VM and vowed never to trust that program again to log into a VM.

 

Anyway.... still going to keep plugging at this problem, but if anyone is interested in helping out... drop me a line. I'd appreciate it. At least worth a beer (or KoFi).

 

(Added - Diagnostics Download)

nastyfox-diagnostics-20211012-2227.zip

  • 1 month later...

 

Headline: 22 NOV 2021 - Upgrades & downgrades and Virtual Machines that just won't work

 

 

Symptom: Windows 10 Pro VM under 6.9.2 with an AMD R7 240 GPU passthrough causes issues.

 

Goal: Be able to remote into a Win10 VM running on a headless GPU (MSI Radeon R7 240).

 

Discussion: So, I managed to add an SSD to the system using a PCIe card that holds two 2.5-inch SSDs. Unfortunately, the SSDs I have are SAS and the card's connectors are SATA, so I have a SAS-to-SATA adapter on the way. (And no, I don't want to Dremel the PCIe card - it's a nice one!) Until then I'm using an older 240GB Inland SSD.

The VM seems stable when running on the QXL VNC/RedHat graphics driver, but the moment I move it to the MSI drivers, all heck breaks loose. Yes, I used the original drivers; yes, I tried the latest; yes, I tried the beta drivers; yes, I tried Chrome RDP; yes, I tried Microsoft RDP. No, I won't try TeamViewer, as they "fouled" my system last time, and I will never again give them (or anyone else who routes my VM through their servers) another chance.

The interesting bit is that it works at first - then some update (Windows or the AMD Radeon software) upgrades the driver, and I'm locked out of the machine until I go back to the RedHat QXL/VNC drivers. Personally, that won't cut it for the work I need to do. It also won't let me run the card as a second graphics card, or as primary with the MSI/AMD GPU and VNC as secondary. (And yes, I have checked: the GPU sits in its own IOMMU group.)
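(For reference, this is how I double-check that grouping from the unRAID terminal - the standard Linux loop over /sys, nothing unRAID-specific. The GPU and its HDMI audio function should be the only devices in their group:)

```bash
#!/bin/bash
# Print every IOMMU group and the PCI devices inside it.
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    echo -e "\t$(lspci -nns "${d##*/}")"
  done
done
```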

 

Plans: At some point, two Nimbus 400GB SAS SSDs will go into the system, the VM will move to them, and I will beat the graphics into submission. In short, it's frustrating as heck. I might pull the card completely and try another card I have (nVidia GT 680 or 610), but the nVidia cards were similarly cursed with Error 43 problems. I may be up against a limit of this T310, in that a VM just will not run a separate GPU on it. If that's the case, I will start looking to offload this machine and move to a SuperMicro dual-Xeon system that I have and can upgrade.


Update: 23 NOV 2021 -

Seen elsewhere on the forum: it's recommended to build the Windows 10 VM with the Q35-4.2 machine type as the emulator for GPU passthrough. Hopefully this fixes it - more after I try it out.
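(For the record, the change amounts to picking Q35-4.2 as the machine type in the VM template; in the VM's XML it shows up as this fragment, with everything else left as the template generated it:)

```xml
<os>
  <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
  <!-- loader/nvram lines unchanged -->
</os>
```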

 

(And "nope", that didn't work either!)

  • 2 months later...

Update: 19 FEB 2022 - Pandemic to Endemic Phase?

 

So, I am back to fighting with the Dell T310 server and the VM over using any kind of GPU card. I tried the recommendation of a Windows 10 VM with the Q35-4.2 machine type for GPU passthrough - and it continues to be unstable: sometimes working, sometimes not. So I've given up and plopped a separate PC onto the network, with the ability to log into the server via a browser. It works well with various gaming and other remote desktop options (Chrome), but I've been trying the newer NoMachine remote desktop, which I like even better. Sure, I have to "run" another PC, but the stability is far better than a server VM, and the overall performance (again, the new PC runs headless) is unquestionably better. I'm just sad I didn't think of this earlier, as it is a lot less of a headache to run this way.

Yes, the Dell T310 is still a great remote file server, and I'm still running Plex from it, which I'm happy about - but I am really tired of the KVM platform. If I could run VirtualBox on it, I would, as I know those work and have worked well for me. At some point I might look at ESXi, but not right now; I have more things that I need to get done. This is probably more about the Dell T310 hardware than the unRAID software, but I'm just tired of trying to beat this problem. My system is built, and I just need to load files onto it and use it that way (as a NAS).

 

Oh, and those Nimbus 400GB SSDs - they were both crap (end of life), so I was able to return them, and I got two Hynix 256GB SSDs from another source instead.

  • 1 month later...

Update: 04 APR 2022 - Just another phase

 

First thing to report: the system remains very stable and, after a couple of years, very usable. Having a UPS on the system makes it much less vulnerable to power drops - if local power drops, it has enough time (10 min) to wait for power to return, or it does an automated shutdown with enough battery left to complete the entire process. That was definitely worth the money. My T310 is on a 750VA UPS from APC, which seems more than up to the task, and I keep an eye on the battery so I can change it out if needed. The upgraded CPU (Xeon X3480, 3.06 GHz) seems to have been a worthwhile investment too, and has been rock solid.
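(The unRAID UPS integration is apcupsd under the hood, so the battery and load are easy to sanity-check from the terminal with the standard tooling:)

```bash
# Query the running apcupsd daemon for live UPS stats
apcaccess status | grep -E 'STATUS|LOADPCT|BCHARGE|TIMELEFT|BATTDATE'
```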

 

So - still few successes with the VM engine working on my server. I still think the Q35 machine type for KVM is part of the solution, but it's so finicky with Windows and the AMD drivers that I may never really have a stable setup. For now, I'm just running one VM with a VNC connection and doing what I need to from there.

 

Recently, I also saw a severe drop-off in work on my BOINC Docker image, so that is off the table as well. If I can find a cheap nVidia card (better than a 1050) to put in, that might become relevant again - but since my purpose for the server was to be a NAS, I'm happy; it's doing quite well at that. So well that I think I need to add my second parity disk soon. And while the prices of spinning disks and SSDs are coming down, I still think I made the right decision with 4TB drives. New WD Red and Seagate IronWolf drives are sub-$100 brand new, so replacements are easy to come by. The question might become whether I need a set of larger drives to increase the overall pool size; for now, I don't see that as necessary. I have single-disk failure covered, but I really want two-disk failure covered next - a second parity drive should take care of that concern.

 

(Drive size economics) If I went to a larger drive, I'd have to invest in at least three larger drives just to see any increase and make it worthwhile. I'd guess I'd need to move to 8TB drives, so I might start watching the market for those. (Many 8TB drives now sell new for $150, so I'd have to drop $450 in drives just to cover the two parity drives and upgrade one NAS drive from 4TB to 8TB.) If I replaced all 7 of my current 4TB drives, that means dropping $1,050. I think I need a cheap graphics card before I need a larger NAS. My past thinking on drive economics was right: keep adding inexpensive 4TB drives, go double parity, perhaps consider updating the entire server in 2-3 years, and maybe go "Pro" on the unRAID license.

 

IMHO, 4TB drives are going to remain the mainstay of many small businesses for just those sorts of economic reasons. Besides, have you ever had to rebuild an 8TB drive from parity? (Shudders.) That takes a lot of time and CPU cycles. Ultimately, it will probably be a bigger server (Dell R720 or SuperMicro equivalent) with dual Xeon CPUs that can get me to 4 GHz that I put money into. I recently saw some R900 servers on Craigslist for $250, so I'll keep an eye out for those, and I'm still hoping to score a good SuperMicro motherboard & CPU set for that 3U chassis I got for next to nothing. So let's see... a new motherboard or an R900, an unRAID Pro license, another SAS card... yeah, I'd still be a lot further ahead than dropping a grand on 8TB drives at that point. The math still works out for me (a hobbyist). If I were a pro in a small business, I'd probably need the growth space that 8 or 10TB drives would give me now, and if I were doing video, I'd just start adding SSDs as I could. SSDs are going to keep dropping in price over the next 5 years and overtake mechanical drives on performance, longevity, and cost. Heck, you can get 8TB SSDs now for less than $750. (NVMe drives are still too pricey, but not much over a grand each!) As service integration improves, I can see those becoming the next level of automated systems connected to Google & Amazon services in the home and business.

 

[Other stuff] I also installed Shinobi to watch and record my security network cameras. The D-Link services (the basis of most of my cameras to date) are being shut down soon, so I needed a backup for that service - and after trying a few and watching SpaceInvader One's tutorial, I can say Shinobi is pretty nice for the average home user. My frame rates aren't spectacular, but I'm running the cameras over 802.11g with WPA, so it does reasonably well. If I ever need better throughput, I can always hardwire them into their own subnet, and PoE is an option too. But at some point the cameras will become smaller, cheaper, and 4K-capable, with Google, ADT & Amazon integration, so why worry.
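(A rough sketch of what the container launch looks like - I set mine up from a Community Applications template, so treat the image name, port, and paths below as placeholder assumptions rather than gospel:)

```bash
# Sketch only - image name, port, and paths are assumptions; on unRAID,
# use the Community Applications template instead of hand-rolling this.
docker run -d --name=shinobi \
  -p 8080:8080 \
  -v /mnt/user/appdata/shinobi/config:/config \
  -v /mnt/user/shinobi/videos:/home/Shinobi/videos \
  shinobisystems/shinobi:latest
```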

 

At some point, I also need to rotate the cache drives in the server out for smaller ones. 256GB seems more than adequate for most of my uses, versus the 1TB one I'm using now. Most of my shares are uncached anyway, since I want to know the files are on the NAS hard drives. I have a UPS, which is integrated really well into the unRAID plugin (THANK YOU, WHOEVER DID THAT!) - but I still don't like the idea of being mid-transfer and losing files, or gunking up the cache drives and not knowing where I stopped.

 

About the only other thing to report is that I have an older laptop that was dogging it under Windows 10 and now literally "flies" with Ubuntu Studio 20.04 LTS. It's becoming my "everyday lightweight laptop." It won't do "power lifting" - but it is what I'm using to type this into the unRAID forum.

  • 2 weeks later...

Update: 17 APR 2022 - nVidia on the cheap

 

Managed to snag an nVidia GTX 1050 Ti (4GB) for $100, so that's replacing the MSI Radeon R7 240 video card. It will be an upgrade, allowing transcoding for Plex and light graphics (gaming) use. The card was a little difficult to fit into the chassis, but it seems to be addressable from the VMs. I still need to add the plugin for it and allow it to be accessed by the Dockers.
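(For my own notes, the usual sequence with the Nvidia Driver plugin, as I understand it: install the plugin, copy the GPU UUID from its settings page, then hand the card to the Plex container via the Docker template. The template tweaks below follow the plugin's documented pattern; the UUID is specific to each card:)

```bash
# Plex Docker template changes for NVENC transcoding (Nvidia Driver plugin pattern):
#   Extra Parameters : --runtime=nvidia
#   Variable         : NVIDIA_VISIBLE_DEVICES     = GPU-<uuid from the plugin page>
#   Variable         : NVIDIA_DRIVER_CAPABILITIES = all
# Then confirm the host driver actually sees the card:
nvidia-smi
```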

6 hours ago, JonathanM said:

Those two things are currently mutually exclusive. You can change back and forth with a reboot though.

Thanks Jonathan - Just came to that realization too.

 

Ultimately, I wanted this card for Plex transcoding. At first I tried to use it in a VM, then removed it from the VM (because it didn't work there) - but when I tried to install the nVidia driver plugin, it said it couldn't find the GPU:

  • "NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running."

- even though the driver is compatible (GTX 1050 Ti), and I can still see the card in the IOMMU listing (where it appears properly captured/identified as the only device in its group):

  • IOMMU group 16:
    • [10de:1c82] 04:00.0 VGA compatible controller: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] (rev a1)
    • [10de:0fb9] 04:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)

Is there something I need to "un-re-do" that you can think of?
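(In case it helps anyone else doing the same "un-re-do": the usual culprit is the card still being bound to vfio-pci for passthrough, which the host nVidia driver can't override without a reboot. A quick check from the terminal:)

```bash
# Which kernel driver owns the GPU right now?
lspci -nnk -s 04:00.0
# If it reports "Kernel driver in use: vfio-pci", the card is still reserved
# for VM passthrough: untick it under Tools > System Devices (this edits
# /boot/config/vfio-pci.cfg) and reboot so the nvidia driver can claim it.
```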

  • 1 year later...

Hello,

 

It looks like it has been a while since the last message. I have been following and reading, as I came across a T310 that was going to the dump and was able to snag it for free. It came with three 146GB SAS drives, 16GB of RAM, and a PERC 6/i. Following along, I changed cards to an H700 and added 4x 4TB drives in RAID 5 with one hot spare. It took me a while to get everything working, but I finally have it up and running. I'm wondering if I should change out the video card after reading about all of the issues you've been having. I've wanted to modify the box to hold more drives, but I'm not sure I want to go that route yet.

 

Again I want to thank you for all of your notes and pics. They have helped me tremendously!

  • 6 months later...

Update: 08 February 2024 - All good things must come to an inevitable end.

 

So, after a couple of power line sags following our return to the remodeled home, a voltage regulator on the T310 motherboard was damaged. The system stopped while booting, showed a CPU under-voltage error, and wouldn't restart regardless of configuration changes.

 

I had a replacement T310 on order, but the order couldn't be filled due to a warehouse inventory error. After I contacted the company, they agreed to upgrade my order to a T320. With some thought, this "upgrade" now includes a 10-core/20-thread Xeon E5-2470 v2 CPU, an updated BIOS, and 96GB of DDR3 ECC - and I was able to transfer the existing 7-HDD 3.5" unRAID array and the IT-mode H200 controller card. It also took the nVidia 1050 Ti that I had bought. It didn't like my 1x PCIe SSD card, but that's a minor issue.

 

After moving everything over, I crossed my fingers as the system booted and the array spun up. Surprisingly, no real issues. The array did run a parity check, which came back with zero errors. (Whew!) The good news is that Plex and the other Dockers I use also spun up naturally with little to no fixing required. My VMs were down due to the PCIe issue, but those were inconsequential and will be "fixed" soon.

 

I did have a Seagate IronWolf drive show an x0330 SMART error (premature failure), but after reading the forums, that error seems to be a "non-issue" for these drives and leads to incorrect assumptions about their health. Still, I have migrated the critical data off that drive as a precaution, and I'm considering re-formatting it, which apparently clears the x0330 errors. After a week I'm seeing no new errors, so I will follow up on that later. (Just to be clear, I am running two drives as parity!)
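(Beyond the dashboard, unRAID ships smartmontools, so the raw attributes are easy to watch from the terminal - sdX being whatever the IronWolf enumerates as:)

```bash
# Full SMART report for the suspect drive (replace sdX with the real device)
smartctl -a /dev/sdX
# The counters worth watching over time; rising values are the real red flag
smartctl -A /dev/sdX | grep -E 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'
```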

 

So, that's kind of the end of the T310 unRAID home server project. It wasn't a bad solution, but it was time to move on, and I'm still tinkering with the T320 and a T710 that I have lying on a bench. I'm also likely to part out the T310, perhaps putting in a new motherboard before selling it. Dunno. Let me know if there's any interest in any of the parts. I'm not in any rush to change things from where they are right now. I'm not an IT professional, I'm a home hobbyist. And I like it that way. 😃
