JohnSnyder

Members
  • Content Count: 44
  • Joined
  • Last visited

Community Reputation: 3 Neutral

About JohnSnyder
  • Rank: Newbie
  • Birthday: 04/16/1943
  • Gender: Male
  • Location: Charlotte, NC, USA


  1. First of all, I downloaded the Virt_Manager docker by djaydev. Then I went to the support site, and the very first post instructed me to do the following. I followed what @dee31797 posted - here are the itemized steps:
     • Turn on netcat-openbsd in Nerd Pack, apply, and restart the computer.
     • Open the virt-manager docker GUI (you'll still have the error in the Docker container) and go to File > Add Connection.
     • Hypervisor: QEMU
     • Select "Connect to remote host over SSH"
     • Hostname: the internal network IP of your server
     • Hit Connect. It will then ask you to accept
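The GUI steps above amount to building a libvirt remote URI. A minimal sketch of the same connection from a terminal, assuming a server IP of 192.168.1.100 and SSH login as root (both are placeholder assumptions - substitute your own values):

```shell
# Hypothetical values: replace UNRAID_IP (and the root user, if different)
# with your own server's address and SSH account.
UNRAID_IP="192.168.1.100"
URI="qemu+ssh://root@${UNRAID_IP}/system"
echo "$URI"

# The same connection the GUI steps set up can be exercised directly:
#   virsh -c "$URI" list --all      # list VMs over the SSH tunnel
#   virt-manager -c "$URI"          # launch virt-manager already connected
```

This is why netcat is needed on the unRAID side: libvirt's qemu+ssh transport tunnels its socket over SSH using netcat on the remote host.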
  2. Well, I got everything going (took a restart to do it), and I've changed my vgamem to 65536 and now I can display my virtual machine in 4K. Problem solved!!
  3. Thank you very much, Meep! I did install the Virt-Manager Docker, and when I ran it, it said I needed to install libvirtd. I certainly don't want to mess up my unRaid installation. If I install libvirtd and whatever else Virt-Manager might need, will this prevent me from creating and using built-in unRaid virtual machines? Will installing the dependencies required by Virt-Manager in any way mess with my unRaid installation? Thanks again!
  4. Using Virtual Machine Manager in Manjaro Linux, I can edit the template and increase the video memory, which allows me to display my virtual machines at full 4K resolution. When I examine the unRaid VM template of a virtual machine, I do not see any line which addresses video memory. Is there a way to edit an unRaid VM template to increase the video memory and therefore allow full 4K resolution?
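For reference, this is roughly what the video section of a libvirt domain definition looks like when the QXL driver is in use (unRAID exposes the same XML through the VM's XML view). The attribute values here are assumptions based on the 65536 figure reported elsewhere in this thread, not unRAID defaults:

```xml
<video>
  <model type='qxl' ram='65536' vram='65536' vgamem='65536' heads='1' primary='yes'/>
</video>
```

Raising vgamem (QXL's default is considerably lower) is the change the earlier post credits with making 4K output possible.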
  5. Thanks, johnnie.black! I wasn't aware that the log you were referring to was in the BIOS. I did check it, and it is completely empty. I verified that the logging is enabled. However, I'll read the manual and try to figure out if there are any other settings I need to change in order to have this event log actually log something!
  6. The temperature always reads in the 30s and 40s. Currently I have the side panel off. I haven't done anything specific to check the power supply - I'm not sure what to do. It's quite new, 850 watt Corsair. I have repeatedly checked the log file (the link to which is shown in the upper right hand corner). It's the only log file I know for unRAID, and it's the one I posted in my original post (saved in a zip file as unRAID-NAS SysLog). Is there another one I should look at?
  7. Well, the same hardware errors showed up. I removed the original memory from CPU 0 in socket 1 (I've filled up the second set of slots in the interim) and the hardware error remains. So ... whatever ...
  8. I ran a 36-hour memory test and the result was 0 errors. After I restarted unRAID I continued getting the same hardware errors. I reseated my video card and my memory modules, and then ran an extended Fix Common Problems. So far (only 30 minutes or so), no errors of any kind are showing up. I'm encouraged ... but not yet convinced that reseating those components has fixed the problem. I've had periods of time in the past where no errors showed up for hours and even days -- only to reappear without warning. So, we'll see.
  9. Well ... The hardware errors are now showing up again.
     Message from syslogd@unRAID-NAS at Jul 7 15:59:34 ... kernel:mce: [Hardware Error]: CPU 8: Machine Check: 0 Bank 7: 8800004000310e0f
     Message from syslogd@unRAID-NAS at Jul 7 15:59:34 ... kernel:mce: [Hardware Error]: TSC 65c0cc4db27 MISC 1c6c46004c00bd
     Message from syslogd@unRAID-NAS at Jul 7 15:59:34 ... kernel:mce: [Hardware Error]: PROCESSOR 0:206d7 TIME 1530993574 SOCKET 1 APIC 20 microcode 713
     Any idea what these mean??
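For anyone who wants to pick these lines apart by hand, the long hex value in the first line is an IA32_MCi_STATUS register, whose bit layout is documented in the Intel SDM (vol. 3B, machine-check architecture). A sketch of the flag extraction only - not an authoritative diagnosis:

```shell
# Pull a few documented flag bits out of the Bank 7 status value from the
# syslog above. Field positions follow the Intel SDM; interpreting the
# result is better left to a tool like mcelog or rasdaemon.
status=$((0x8800004000310e0f))

printf 'VAL   (register valid)    : %d\n'   $(( (status >> 63) & 1 ))
printf 'OVER  (error overflow)    : %d\n'   $(( (status >> 62) & 1 ))
printf 'UC    (uncorrected error) : %d\n'   $(( (status >> 61) & 1 ))
printf 'MISCV (MISC field valid)  : %d\n'   $(( (status >> 59) & 1 ))
printf 'MCACOD (MCA error code)   : %#06x\n' $(( status & 0xFFFF ))
```

Note that UC comes out 0 here, which in the SDM's terms marks a corrected (non-fatal) error - consistent with the machine continuing to run while these messages appear.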
  10. Crazy! I did a parity check/restore, and not only did the corrupt disk get restored, but the hardware problems disappeared from the Fix Common Problems report! It's interesting that I've had disk1 get corrupted 3 times. And I've swapped the disks around so that today's disk1 is different from a previous disk1 that also got corrupted. And the corruption occurred EACH time during a manually initiated Mover operation. Never during a scheduled Mover - only when I clicked on the button which initiates a Mover operation now.
  11. Fix Common Problems found hardware errors on my machine. I've attached the syslog file. I'm using the latest version of unRAID (v6.5.3). I have dual 8-core Xeon processors on a Z9PE-D8-WS motherboard. I'd really appreciate an interpretation of this syslog by someone who understands it!! Thanks!! unraid-nas-syslog-20180703-1403.zip
  12. Is the unRaid platform sufficiently stable and robust to support eight or ten Windows 10 virtual machines functioning as primary desktop computers (daily drivers) in an office environment? The non-profit with which I am associated is considering upgrading their office computers, and I'm trying to determine whether a single unRaid machine with separate virtual computers for all the staff members is a viable option.
  13. I never pursued the issue further. My next step would have been to do as you did - namely, reinstall Windows using an alternate BIOS.
  14. The smart report has been perfect for that disk. I couldn't collect any data prior to the hard shutdown because the console was unresponsive. I had to press and hold the power button to do a hard shutdown.
  15. While changing some settings inside of my Windows 10 virtual machine (trying to update the Microsoft Display Driver to that in the qxl directory of the VirtIO disk), I lost the network connection between my computer and the unRAID computer. When I went to the unRAID computer, I saw some message about an error on the cache disk (my virtual machines are all on a separate SSD which is not a part of the unRAID array, so I'm not sure why the cache disk was even involved). There was no blinking cursor, so I'm assuming that the terminal was inoperative. I had to do a hard shutdown. Whe