Everything posted by ClintE

  1. Use case: a 20-person office. If you keep the thin-client cost down, it might even be cost-effective: 20 thin clients using RDP to access the VMs... maybe. Assign the VMs passthrough network addresses, DHCP or static, such as 192.168.0.xx on the local LAN, using the network bridge br0 instead of virbr0 (teaming up more than one Ethernet interface would help); a minimal example of that bridge setting is sketched after this list. Perhaps I'll give it a try sometime with 3 or 4 NICs teamed up to the switch. I don't have 20 thin clients to play with, though, but I could see whether all 20 VMs would at least load up, with maybe 5-10 people accessing them at the same time. Once I acquire the 2x 2TB cache SSDs (when prices fall a bit more in a month or two), I could configure 10 VMs to use each SSD, and loop something fairly resource-intensive on the unused VMs to keep them busy. With 6GB of RAM for each VM and 128GB to start with, that's 120GB used and 8GB left over for the OS and overhead; I might have to turn off some or most Dockers and/or plugins. Split the cores and HT cores 10/10, one for each VM, keeping 12 for the OS. High performance? No. Acceptable? Maybe...
  2. I know some errors in log files can safely be ignored, but this probably isn't one of those cases:
     kernel: mce: [Hardware Error]: Machine check events logged
     kernel: EDAC sbridge MC1: HANDLING MCE MEMORY ERROR
     kernel: EDAC sbridge MC1: CPU 8: Machine Check Event: 0 Bank 5: 8c00004000010091
     kernel: EDAC sbridge MC1: TSC 0
     kernel: EDAC sbridge MC1: ADDR 20227e0340
     kernel: EDAC sbridge MC1: MISC 20423a1a86
     kernel: EDAC sbridge MC1: PROCESSOR 0:206d7 TIME 1541673606 SOCKET 1 APIC 20
     kernel: EDAC MC1: 1 CE memory read error on CPU_SrcID#1_Ha#0_Chan#1_DIMM#0 (channel:1 slot:0 page:0x20227e0 offset:0x340 grain:32 syndrome:0x0 - OVERFLOW area:DRAM err_code:0001:0091 socket:1 ha:0 channel_mask:2 rank:0)
     I started going through the log files and found this; it seems to repeat depending on system activity, and the errors look the same each time. The system is running properly, though; a parity check ran with zero errors. Thanks for any insight into what this might mean.
  3. This is interesting. I started out at 6.6.2 and upgraded the other day to 6.6.3 after backing up the flash drive, with no problems at all. Today I updated (after backing up the flash) to 6.6.4, and now I get random missing disks after a reboot. The disks are there, and the BIOS and controller see them fine, but the GUI reports a stale configuration. Thoughts, anyone? Edit: Now it's running fine. Restarted the computer 3 or 4 times and all the disks finally showed up. Started the array. All is good. So far.
  4. I'm having the same issue: I set the parity check schedule to run once a week, and it runs at the proper scheduled time, but every day instead of weekly. Doing it manually right now...
  5. Hello, I'm very new to Unraid, plugins, etc., but I have lots of experience with many flavors of hardware and software in general. Hardware: Motherboard -- ASRock Rack EP2C602-4L/D16; CPUs -- E5-2687W 0. At idle, the BIOS screen reports CPU temps around 50 C or so, and this plugin reports about 35 C. Could an offset be getting applied automatically? I thought 50 was a bit high, but I'm not really concerned about it; I'd like to have temps in the 35 C range, of course. Thanks for all your great work on this plugin!
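
For the bridged-networking piece in the first post, here is a minimal sketch of the relevant part of a VM's libvirt domain XML, assuming the definition is edited directly: the interface attaches to the host bridge br0 instead of the NAT bridge virbr0, so the guest can pick up a 192.168.0.xx address from the LAN's DHCP server or be configured statically inside the guest. The virtio NIC model is my assumption; any emulated model would also work.

     <!-- network section of a VM's libvirt domain XML (sketch) -->
     <interface type='bridge'>
       <source bridge='br0'/>   <!-- host bridge on the local LAN, not the NAT bridge virbr0 -->
       <model type='virtio'/>   <!-- assumed paravirtual NIC; other models are possible -->
     </interface>

Defined this way, each of the 20 VMs shows up on the office LAN as its own machine, which is what would let the thin clients reach them directly over RDP.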