ClintE

Members
  • Content Count: 17
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About ClintE
  • Rank: Member
  • Birthday: 03/01/1956

Recent Profile Visitors
  113 profile views
  1. You should be able to upgrade to 64GB of RAM with no problem. The Z370 motherboards can address 64GB in dual-channel mode, just like what you're running now. Cheers!
  2. need help upgrading my server

    The jdm_waaat anniversary build components are very cost-effective and would work great for your requirements. Possibly overkill, but you never know what you're going to do with the server in the future. Used parts can be even easier on the wallet, especially the motherboard, CPUs, and memory. You didn't mention your chassis model; motherboard choice depends a lot on what form factor your chassis will accommodate.
  3. So unRAID then...

    You can add more drives by installing an expander card connected to the LSI RAID card. Most LSI RAID cards accept this type of connection. It's a very inexpensive way to add lots of SATA ports to a system. https://www.ebay.com/itm/487738-001-468405-002-HP-4K10C5-EXPANDER-CARD/273028095105?hash=item3f91be1c81:g:OBMAAOSwjIVaXjgb Or something similar. A quick way to check that the kernel actually sees the expander is sketched below.
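
    A minimal sketch, assuming the standard Linux SAS transport sysfs layout (the attribute names come from the stock sas_expander class; nothing here is unRAID-specific):

        from pathlib import Path

        # Expanders registered by the SAS transport class show up under
        # /sys/class/sas_expander as expander-H:N with identity attributes.
        base = Path("/sys/class/sas_expander")
        if not base.exists():
            print("no SAS expanders registered")
        else:
            for exp in sorted(base.iterdir()):
                vendor = (exp / "vendor_id").read_text().strip()
                product = (exp / "product_id").read_text().strip()
                print(f"{exp.name}: {vendor} {product}")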
  4. Preclear plugin

    If my rough calculations are right, it would take about 9 days to preclear the drive (math below). It probably shouldn't take that long. Try restarting the system after cancelling the preclear and see what the speed is at that point. Maybe set the array to not auto-start before rebooting, though that shouldn't have anything to do with the preclear. I've tried preclearing drives with and without the array running, and haven't seen any difference in preclear speed. Even though I see the boot messages from my LSI SAS9201 controller BIOS, I have the BIOS set to disabled, since it's not using any of the BIOS functions and is just running JBOD on IT-mode firmware; not sure if that makes any difference either.
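
    The rough math, for reference. The capacity and speed here are illustrative assumptions (the thread doesn't state them), and a preclear is taken as three passes (pre-read, zero, post-read):

        # Rough preclear-time estimate; drive size and speed are assumed values.
        drive_tb = 8        # assumed drive capacity, TB
        avg_mb_s = 30       # assumed sustained speed per pass, MB/s
        passes = 3          # pre-read, zero, post-read

        seconds = drive_tb * 1e12 / (avg_mb_s * 1e6) * passes
        print(f"{seconds / 86400:.1f} days")   # ~9.3 days at these numbers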
  5. If your CPU temps are sitting at 100C for more than a few seconds, there's probably something wrong with the temperature sensing. Certain CPU thermal events trigger motherboard alarms, typically called THERM#, ALERT#, and THERMTRIP#. Most CPUs can't hit 100C without tripping the last one, which signals the motherboard to shut the system down. So if it sits at that temperature for a while, the sensor is returning an incorrect temperature to the system; otherwise the machine would simply have turned itself off. A quick way to eyeball the sensors is sketched below.
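
    A minimal sketch, assuming the standard Linux hwmon sysfs layout (values are in millidegrees C); it just dumps every temperature sensor so a stuck reading stands out:

        from pathlib import Path

        # Walk every hwmon chip and print each temperature input with its label.
        for chip in sorted(Path("/sys/class/hwmon").glob("hwmon*")):
            name = (chip / "name").read_text().strip()
            for t in sorted(chip.glob("temp*_input")):
                label_file = chip / t.name.replace("_input", "_label")
                label = label_file.read_text().strip() if label_file.exists() else t.name
                celsius = int(t.read_text()) / 1000   # millidegrees -> degrees C
                flag = "  <-- suspicious" if celsius >= 100 else ""
                print(f"{name}/{label}: {celsius:.1f}C{flag}")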
  6. Yes, I've heard the Steam streaming works quite well for Steam games. I can stream pretty much any game from the PC on the Shield. I've tested a few third-party controllers; MS Xbox controllers work great, and PS controllers work OK with the USB dongle, if available.
  7. I played around with HTPCs for years and settled on a keyboard, connecting a mouse only when needed. Last year I purchased an Android TV streaming box (nVidia Shield TV Pro) and it's been so much better: works better with a remote and has lots more functionality. You don't have to go this far, as the Shield Pro is rather expensive, but there are some really good deals out there right now on media streaming boxes for your TV that will pretty much replace all your HTPC functions and be easier to use. With the Shield, I can even run any PC game I want, provided the game is installed on my computer and that computer has a compatible nVidia GTX 650 or better video card. I look forward to the day I can install the games in a VM and have the video card passed through to that VM in unRAID; that's a work in progress. Oh, and the Harmony remote works great with the Shield TV Pro. Good luck and have fun!
  8. There's not a lot of support for Thunderbolt 3 YET... but I would say it's the way to go if you're investing in hardware right now; plenty of future-proofing that way. Look up Linus Sebastian's home setup (the Linus Tech Tips guy). He uses unRAID to host VMs, with a TB3 card in the server connected over long optical cables to TB3 docks that serve as KVM stations. It's rather expensive right now (a 100-foot optical TB3 cable is well over $200), but it's a very elegant way to hook up a few KVM setups on the other side of the house. By the time you get cables and docks, you're talking $500+ per connection. There are a few cheaper alternatives out there right now if you're not using 4K. I hope to get into this type of thing at some point in my unRAID adventures. I'll be acquiring a decent TB3 card for the server in the not-too-distant future; it can be used for all sorts of nifty USB connections for VMs. Sometime down the road I'll get a PCIe extension/expansion card for more TB3 cards.
  9. cant pull from depositry

    SuperMicro CSE-M35T-1B. Maybe not calendar material, but definitely *** unRAID ServerPorn *** I use these, had 'em for years. Built like tanks and almost as heavy. Amazed that they're still sold everywhere, but don't pay too much for them. I think they came out before the SATA3 spec, but who needs that; you can't saturate a SATA3 port with spinning rust anyway (rough numbers below). That's what the SSD cache is for, right?
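
    The back-of-envelope numbers behind that claim (the HDD figure is an assumed, fairly generous sustained rate):

        # SATA3 is a 6 Gb/s link with 8b/10b encoding: 10 bits per payload byte.
        sata3_mb_s = 6_000 / 10   # ~600 MB/s usable
        hdd_mb_s = 200            # assumed sustained HDD throughput
        print(f"HDD uses {hdd_mb_s / sata3_mb_s:.0%} of a SATA3 link")   # ~33%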
  10. [Request] Put updates on CloudFront

    Like some other applications do it: download the update, have it available for install, and notify the user. Maybe offer choices: automatically download, install, and notify the user to reboot; download and notify; or just a notification that an update is available, without downloading. A sketch of those three options follows.
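
    A hypothetical sketch just to make the three options concrete; none of this is real unRAID code, and all the names are made up:

        from enum import Enum, auto

        class UpdatePolicy(Enum):
            DOWNLOAD_INSTALL_NOTIFY = auto()  # auto-download, install, prompt to reboot
            DOWNLOAD_NOTIFY = auto()          # auto-download, let the user install
            NOTIFY_ONLY = auto()              # just announce the update

        def handle_update(policy, download, install, notify):
            if policy is UpdatePolicy.NOTIFY_ONLY:
                notify("Update available")
                return
            download()
            if policy is UpdatePolicy.DOWNLOAD_INSTALL_NOTIFY:
                install()
                notify("Update installed; reboot when convenient")
            else:
                notify("Update downloaded; install when ready")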
  11. 20 Windows 10 VMs?

    Use case: a 20-person office. If you keep thin-client cost down, it might even be cost-effective. 20 thin clients using RDP to access the VMs... maybe. Assign them passthrough network addresses, DHCP or static, such as 192.168.0.xx on the local LAN, using network bridge br0 instead of virbr0 (teaming up more than one Ethernet interface would be helpful). Perhaps I'll give it a try sometime with 3 or 4 NICs teamed to the switch. I don't have 20 thin clients to play with, though, but I could see if all 20 would at least load up, with maybe 5-10 people accessing them at the same time. Once I acquire the 2x 2TB cache SSDs (when prices fall a bit more in a month or two), I could configure 10 VMs to use each SSD, and loop something kind of resource-intensive on the unused VMs to keep them busy. With 6GB of RAM per VM and 128GB to start with, that's 120GB used and 8GB left over for the OS and overhead (math sketched below). Might have to turn off some/most Dockers and/or plugins. Split up the cores and HT cores 10/10, one per VM, keeping 12 threads for the OS. High performance? No. Acceptable? Maybe...
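
    The resource math, assuming a 16-core/32-thread CPU and 128GB of RAM (my reading of the numbers above):

        vms, ram_per_vm_gb, total_ram_gb = 20, 6, 128
        vm_ram = vms * ram_per_vm_gb                    # 120 GB
        print(f"RAM: {vm_ram} GB for VMs, {total_ram_gb - vm_ram} GB left for the OS")

        threads, vm_threads = 32, 20                    # 10 cores + 10 HT siblings to VMs
        print(f"CPU: {vm_threads} threads for VMs, {threads - vm_threads} for the OS")

        ssds = 2
        print(f"Cache: {vms // ssds} VMs per SSD across {ssds} SSDs")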
  12. I know some errors in log files can safely be ignored, but this probably isn't one of those cases. Started going through the log files and found this; it seems to repeat depending on system activity, and the errors look the same each time:

        kernel: mce: [Hardware Error]: Machine check events logged
        kernel: EDAC sbridge MC1: HANDLING MCE MEMORY ERROR
        kernel: EDAC sbridge MC1: CPU 8: Machine Check Event: 0 Bank 5: 8c00004000010091
        kernel: EDAC sbridge MC1: TSC 0
        kernel: EDAC sbridge MC1: ADDR 20227e0340
        kernel: EDAC sbridge MC1: MISC 20423a1a86
        kernel: EDAC sbridge MC1: PROCESSOR 0:206d7 TIME 1541673606 SOCKET 1 APIC 20
        kernel: EDAC MC1: 1 CE memory read error on CPU_SrcID#1_Ha#0_Chan#1_DIMM#0 (channel:1 slot:0 page:0x20227e0 offset:0x340 grain:32 syndrome:0x0 - OVERFLOW area:DRAM err_code:0001:0091 socket:1 ha:0 channel_mask:2 rank:0)

    The system is running properly, though; a parity check completed with zero errors. Thanks for any insight into what this might mean.
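
    For what it's worth, "CE" is a corrected error: ECC fixed it, which is why the system keeps running, but repeated CEs on the same DIMM are worth tracking. A hypothetical way to watch the tallies via the EDAC sysfs tree (standard layout on kernels with EDAC enabled) instead of grepping syslog:

        from pathlib import Path

        # Each memory controller EDAC knows about appears as mc0, mc1, ... with
        # running corrected/uncorrected error counters.
        for mc in sorted(Path("/sys/devices/system/edac/mc").glob("mc*")):
            ce = int((mc / "ce_count").read_text())   # corrected errors (like the CE above)
            ue = int((mc / "ue_count").read_text())   # uncorrected errors
            print(f"{mc.name}: corrected={ce} uncorrected={ue}")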
  13. This is interesting. Started out at 6.6.2. Upgraded the other day to 6.6.3 after backing up the flash drive; no problems at all. Today I updated (after backing up flash) to 6.6.4, and now I get random missing disks after reboot. The disks are there, and the BIOS & controller see them fine, but the GUI reports a stale configuration. Thoughts, anyone? Edit: Now it's running fine. Restarted the computer 3 or 4 times and all the disks finally showed up. Started the array. All is good. So far.