nraygun

Members

  • Content Count: 21
  • Joined
  • Last visited
  • Community Reputation: 1 Neutral

About nraygun

  • Rank: Member

  1. Bingo! Looks like NoMachine added its own audio adapter and now I get sound! Thanks Bastl!!!
  2. I'm trying to get HDMI passthrough audio to work too, with no luck so far. Here's my device XML:

         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev0'/>
           <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0' multifunction='on'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
           </source>
           <alias name='hostdev1'/>
           <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x1' multifunction='on'/>
         </hostdev>

     and the Nvidia card from lspci:

         06:00.0 VGA compatible controller: NVIDIA Corporation GP107 [GeForce GTX 1050] (rev a1)
         06:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)

     Sound doesn't come through the remote desktop; sound from the Windows 10 VM, however, does come through. I'm using Google Remote Desktop to access the VMs.
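     A quick sanity check on the host (a minimal sketch, assuming the 06:00.x addresses from my lspci output above; <vm-name> is a placeholder for the VM's libvirt name):

         # Both functions of the GTX 1050 should report "Kernel driver in use: vfio-pci"
         # on the host while the VM is running.
         lspci -nnk -s 06:00.0
         lspci -nnk -s 06:00.1

         # Confirm the running VM definition actually carries both hostdev entries.
         virsh dumpxml <vm-name> | grep -A3 "<hostdev"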
  3. I have audio from my GTX 1050 working with Windows 10; sound comes through Google Remote Desktop. But with High Sierra, I can't get any sound. I tried adjusting the XML to make the card a multifunction device with audio as a subsystem. I tried an HDMI.kext in Clover in the EFI folder. I bought a USB sound adapter, but that might only provide sound on the actual server in the basement, not through remote access methods. Does anyone have sound working in a macOS High Sierra VM with a GTX card, the way it works with Windows 10, such that sound comes through remote desktop?
  4. Doh! I'm embarrassed - I thought it was the cache drive. Maybe things moved around when I moved the drives around? Not sure I even needed to put the SSD on a PCI card. Oh well. At least now I have an extra drive bay for more storage. The sdc drive is my backup drive for borg. It's old and probably needs to be replaced. Thanks for taking a look!
  5. Here you go. Thanks! flores-diagnostics-20190830-1253.zip
  6. I had my 1TB SSD cache drive on my flashed H200 and started seeing errors such as this:

         Aug 1 06:00:50 server kernel: print_req_error: critical target error, dev sdc, sector 1996910981

     So, as someone recommended, I popped it off the backplane and installed it into a PCI card: https://www.amazon.com/gp/product/B01452SP1O/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1

     I thought all was well, but recently saw this:

         Aug 29 02:00:09 server kernel: print_req_error: I/O error, dev sdc, sector 979775864

     I ran a Check Filesystem Status check and it didn't seem to indicate an error. I then ran a SMART extended test and it showed "Completed without error". Do I need to be concerned? Do I need to replace the SSD? It's relatively new (purchased 5/2019). Or is everything OK?
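     For anyone wanting to repeat the check from the command line, these are roughly the smartctl equivalents (a sketch; it assumes the SSD still shows up as /dev/sdc):

         # Kick off the extended (long) self-test; check back on it later.
         smartctl -t long /dev/sdc

         # Full SMART report: attributes such as Reallocated_Sector_Ct, plus the
         # self-test log at the bottom.
         smartctl -a /dev/sdc

         # Just the overall health verdict.
         smartctl -H /dev/sdc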
  7. I do have a DVD drive and would like to leave it there. Plus I hear that SATA port is not as fast as a newer controller would be (3Gbps vs 6?). Then again, I could attach an external DVD drive for the few times I rip something. Plus it would give me another bay for more bits! Is this what everyone does on their R710 - use the DVD bay with a tray for their SSD cache drive?
  8. I currently have my SSD cache drive on a port on my flashed H200 controller. I see errors, and I hear it's because that controller, even when flashed, doesn't support SSDs. Can you recommend a good, say 2-port, SATA controller for an R710 server? I saw some for $20 on Amazon, but I'm not sure they are "server grade".
  9. Right. I would use one or the other. Are you saying that once you've made a choice to go the GPU route you can't go back to VNC?
  10. I'm having a hell of a time trying to get a Linux VM to work with a passed-through GPU and VNC. I know the passthrough works because I have it working in a Windows VM; I shut that VM down before fiddling with the Linux VM. I can VNC to the Linux VM with graphics set to VNC, but when I change the graphics to the Nvidia card, I can no longer access it. And when I change it back to VNC, I get a message that says the graphics has not initialized (yet), or:

          internal error: qemu unexpectedly closed the monitor: 2019-08-08T13:59:37.609434Z qemu-system-x86_64: -device pcie-pci-bridge,id=pci.8,bus=pci.2,addr=0x0: Bus 'pci.2' not found

      I'd expect to be able to access the VM when I put the graphics back to VNC. I'm using MX Linux and X2Go. Any ideas?
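      In case it helps with diagnosis, this is roughly how I've been inspecting the leftover PCI addressing (a sketch, not a confirmed fix; "mxlinux" is a placeholder for the VM's libvirt name, and I'm assuming the missing bus 'pci.2' corresponds to bus='0x02' in the XML):

          # Look for devices still addressed to the bus QEMU says is missing.
          virsh dumpxml mxlinux | grep -n "bus='0x02'"

          # Open the definition and delete the stale <address .../> lines (and any
          # orphaned pcie-pci-bridge controller) so libvirt can reassign addresses
          # when the definition is saved.
          virsh edit mxlinux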
  11. If I'm on 6.7.1 with a cache drive and am experiencing no issues, is it advisable to go to 6.7.2? Or should I leave well enough alone given this investigation will probably yield yet another update?
  12. It's wonky, but it gives me what I need. I created a FreeDOS VM and called it scripts. Inside of my borg backup script I start off with a "virsh start scripts" and then end the script with a "virsh destroy scripts". The scripts VM will show running when the borg script is in process and will show stopped when it's not. Close enough.
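      The gist of the wrapper, for anyone curious (a rough sketch, not my exact script; the borg repo path and source directories are just examples):

          #!/bin/bash
          # Start the dummy "scripts" VM so the dashboard shows the backup as running.
          virsh start scripts

          # The actual backup (placeholder repo and paths).
          borg create /mnt/disks/backup/borg-repo::'{hostname}-{now}' /mnt/user/appdata /mnt/user/documents

          # Stop the dummy VM so the dashboard shows the backup as finished.
          virsh destroy scripts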
  13. Does anyone know if there is a way to post the status of a script to unRaid's dashboard? I'd like to see if my backup script is still running right on the dashboard instead of going into the CA User Scripts area.
  14. I popped in a 2TB drive (WD Green) to use as a backup drive and started the preclear on it last night. When I checked it this morning, it was in a "stalled" state. The log shows this:

          Jun 20 06:51:02 preclear_disk_WD-XXX: Zeroing: progress - 90% zeroed
          Jun 20 07:06:08 preclear_disk_WD-XXX: smartctl exec_time: 1s
          Jun 20 07:20:02 preclear_disk_WD-XXX: Zeroing: progress - 95% zeroed
          Jun 20 07:53:26 preclear_disk_WD-XXX: smartctl exec_time: 10s
          Jun 20 07:53:37 preclear_disk_WD-XXX: smartctl exec_time: 21s
          Jun 20 07:53:47 preclear_disk_WD-XXX: smartctl exec_time: 31s
          Jun 20 07:53:47 preclear_disk_WD-XXX: dd[30448]: Pausing (smartctl exec time: 31s)
          Jun 20 07:53:57 preclear_disk_WD-XXX: smartctl exec_time: 41s
          Jun 20 07:54:08 preclear_disk_WD-XXX: smartctl exec_time: 52s
          Jun 20 07:54:18 preclear_disk_WD-XXX: smartctl exec_time: 62s
          Jun 20 07:54:18 preclear_disk_WD-XXX: killing smartctl with pid 30816 - probably stalled...

      Any ideas what went wrong here? I just hit continue and it looks like it's on the post-read task.
  15. How about handling of VMs in general? Should I just shut them down after I'm done with it?