jsebright

  1. That's correct, I did. I'm struggling to get it to work again, though. Going back to VNC and uninstalling all drivers and old devices before adding the graphics will probably help.
  2. I haven't - that's where it all goes wrong. You're stuck with the default drivers. I'm on 6.9.2 and not ready to try 6.10 yet - still waiting for the next RC, "soon?". I'm hoping that updates to KVM/QEMU will help, as may future BIOS updates and updated drivers from AMD. For now I've reverted to adding a physical graphics card for VM work. A bit disappointing.
  3. Sorry, no idea where the xvga='yes' came from. I try to stick to the Forms mode when editing VMs. As I said, I got to a state where there was no display but the VM was running and able to serve a remote desktop. I also saw the Microsoft Basic Display Adapter, but a reboot brought the screen up. I'm not 100% sure whether the output is from HDMI or the DP-to-HDMI adapter (I'm then running two long HDMI leads to another room). I don't get multi-monitor from the basic adapter, so it might be that only one of the outputs is working. If you are able to test a DisplayPort connection it might show something. I'm not seeing any errors in the logs. Sorry I can't offer more help - I don't understand enough to explain what's going on.
  4. This is the section with the graphics - all the "hostdev" bits:

     <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x0c' slot='0x00' function='0x0'/>
       </source>
       <alias name='hostdev0'/>
       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x0c' slot='0x00' function='0x1'/>
       </source>
       <alias name='hostdev1'/>
       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
     </hostdev>
     <hostdev mode='subsystem' type='usb' managed='no'>
       <source>
         <address bus='1' device='6'/>
       </source>
       <alias name='hostdev2'/>
       <address type='usb' bus='0' port='1'/>
     </hostdev>

     The bit I edited was the line:

     <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>

     I don't have the graphics and audio bound to VFIO. I don't think the whole XML will help - I don't fully understand what it does, and it's probably a bit of a mess from previous changes to this machine. The important points: use Q35 with SeaBIOS, add no other devices (add them later), and get remote desktop working first (while the graphics are still VNC) so you can log in remotely if the display doesn't come up after you've switched the graphics; then you can check Device Manager and reboot. A sketch for double-checking the PCI addresses follows below.
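     A minimal sketch for confirming the host-side PCI addresses before editing the XML. The 0c:00.x addresses above are from this particular system; yours will almost certainly differ:

        # List the GPU and its HDMI-audio function with vendor:device IDs
        lspci -nn | grep -Ei 'vga|audio'
        # Check which driver each function is currently bound to (amdgpu, vfio-pci, or none)
        lspci -nnk -s 0c:00.0
        lspci -nnk -s 0c:00.1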
  5. OK, so I couldn't resist having another go. This is what I've done, and the issues:
     - Made sure the AMD Cezanne graphics device and driver were uninstalled from the VM.
     - Added the GPU and audio to the VM.
     - Fiddled with the Advanced settings XML so that the passed-through audio was on the same bus as the graphics, with function 0x1 (roughly matching the devices pre-passthrough). This might not be necessary if we're not installing drivers.
     - Booted up the VM. Black screens and nothing. But I could remote desktop into the VM, so I did that and rebooted it (there's a virsh sketch below as a fallback if remote desktop isn't reachable). It came up with a screen.
     - Made sure the keyboard and mouse were attached, and logged in.
     - Rebooted - it came up straight away with the screen. Changed the resolution to match the monitor, rebooted, and it's still good.
     - Shut it down, then rebooted, and all fine. I even tried pausing the VM from Unraid and then starting it, and that worked.
     So - quite successful. But I can only see one monitor and I have two plugged in (I think both connections are plugged in at the server, anyway!). The display adapter in Device Manager shows "Microsoft Basic Display Adapter". Obviously not making the most of the chip, but it's perfectly usable. I'm not going to go further and try to install any driver updates, as there's a high probability of jamming something and the server might need a hard reboot. It's possible to get this far. Good luck!
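     If the screens stay black and remote desktop isn't reachable, a minimal sketch of what can be done from the Unraid terminal instead (the VM name "Windows 10" is just a placeholder):

        virsh list --all            # confirm the VM is actually running
        virsh reboot "Windows 10"   # soft reboot; the display often comes back after this
        virsh shutdown "Windows 10" # clean shutdown without touching the GUI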
  6. Can't remember exactly, but the VM is Q35-5.1 (the latest Q35 for 6.9.2) with SeaBIOS. I think some time ago I had to move from i440fx to Q35 to get a GPU passthrough working. I've not tried much recently as I don't want to crash the machine again, but I think on the first try I hadn't even done the VFIO binding. I'm tempted to have another try to see whether I can get something basic working without the drivers that doesn't crash the system after a VM shutdown and startup. Will post back when I do.
  7. Have just upgraded to this CPU with the aim of removing a graphics card and reducing power consumption, but still hoping to pass through to a Windows VM. The initial passthrough worked OK (I think). (I have not tried to mess with the GPU BIOS.) Then, trying to install the drivers, I had some problems. The driver install seemed to hang, and once I'd got round that (possibly), I found that turning off the VM caused horrible problems, with CPU usage going crazy and the system becoming unresponsive. A reboot was the only solution. Since then I've avoided this, hoping that there will be some fixes in a new driver or in 6.10 (not ready to try a beta build yet as I only have the one server). I have meddled with an Ubuntu VM, but couldn't get that to pass through. I'm tempted to try a clean Windows install to see what happens, but am wary of locking everything up again; I might also try and see how I get on without drivers, as I'm not worried about gaming - just occasional desktop use. I hadn't considered that dumping the BIOS might help, as I haven't needed to go that route before with a stand-alone (very old) AMD card that I passed through (the usual sysfs approach is sketched below). I don't really want to put the old card back in, but the thought has crossed my mind. Interested to follow this and see what progress is made.
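     For reference, a sketch of the usual sysfs approach to dumping a discrete card's vBIOS. The PCI address and output path are placeholders, the card generally can't be the active/boot GPU while you read the ROM, and this does not apply cleanly to an APU's integrated graphics:

        cd /sys/bus/pci/devices/0000:0c:00.0
        echo 1 > rom                        # enable reading the ROM
        cat rom > /mnt/user/isos/vbios.rom  # copy it somewhere on the array
        echo 0 > rom                        # disable ROM access again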
  8. Here's what I have, with the usual first few characters of each line cut off - if you know where I can get the full output I'll update it. I might have restarted the docker since, not sure, sorry.

     ing Chia...
     ey at path: /root/.chia/mnemonic.txt
     ey at path: /root/.chia/mnemonic2.txt
     ot directory "/plots".
     ot directory "/plots1".
     ot directory "/plots2".
     ot directory "/plots3".
     ot started yet daemon
     vester: started
     mer: started
     l_node: started
     let: started
     annot create directory '/root/.chia/flax': File exists
     ing Flax...
     ey at path: /root/.chia/mnemonic.txt
     ey at path: /root/.chia/mnemonic2.txt
     ot directory "/plots".
     ot directory "/plots1".
     ot directory "/plots2".
     ot directory "/plots3".
     ot started yet daemon
     vester: started
     mer: started
     l_node: started
     let: started
     ing Plotman...
     ing Chiadog...
     Chiadog...
     ing Flaxdog...
     Flaxdog...
     Machinaris API server...
     Machinaris Web server...
     d startup. Browse to port 8926.

     I'm still getting plotting failures. This page https://github.com/madMAx43v3r/chia-plotter/issues/574 seems to suggest it may be RAM related. I have had some other crashes under the intense processing (otherwise the server is stable, but only lightly used), so I might do a full reboot and see what I can tweak. (A sketch for pulling the full container log is below.)
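     A sketch for grabbing the full, untruncated container log from the Unraid host. The container name "machinaris" and the /boot output path are assumptions - use whatever name docker ps actually shows and a path that suits you:

        docker ps --format '{{.Names}}'                 # find the exact container name
        docker logs --tail 1000 machinaris > /boot/machinaris.log 2>&1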
  9. Sorry, that's something that took me a while to get right, and I couldn't see what was wrong with it. I'd suggest you try to revert to defaults and as simple as possible. Check the drive paths are mapping to where they should be. Can't offer any more help as I don't know enough...
  10. @localh0rst I had the internal server error. I logged into the docker and ran "flax init"; that seemed to start it for me (sketch below). May or may not work for you...
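     For anyone else hitting it, a minimal sketch of those steps (the container name "machinaris" is an assumption):

        docker exec -it machinaris bash   # open a shell inside the container
        flax init                         # re-initialise the Flax config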
  11. Ah yes. Why didn't I think of that? Has the advantage that it doesn't get killed if the docker restarts. Watching a few together in just one window is pretty good though, and a bit less to keep track of.
  12. Nice. Haven't seen that before. I couldn't seem to open two windows of the unraid docker console, but I did find I could just use the command watch -n10 du -sh /plotting /plotting2 and it watches both folders in one window.
  13. Thanks for your reply. It seems like the runs are crashing sometimes - the temp dirs have files left over from different runs. I've just checked my config against the wiki sample (was doing that as I saw your previous post). I've got max jobs set to 1, but had the staggers set differently - that might have been causing the issue. I've got the threads set to 4 - I've only got 5 pairs of cores available (Ryzen 2600 with a pair pinned in case a VM wants a look-in) and it does tend to run them at about 90%. Perhaps this needs taking down from the default? I'll see how it goes with the default stagger options before reducing it (a quick way to watch the jobs is sketched below). Thanks again for everything you're doing with this.
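     A sketch for checking what plotman is actually running while the stagger settings are being tried out, assuming the Machinaris container exposes the plotman CLI (it normally does; the container name is a placeholder):

        docker exec -it machinaris plotman status   # one line per active plot job: phase, tmp dir, elapsed time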
  14. It also took me some time to get MadMax plotting to work. I also think there's an issue with it not clearing up the temp folders. I've got two SSDs that I'm using, one as the primary temp and one as tmp2. They are slowly filling up until plotting stops. After it's all ground to a halt - see /plotting and /plotting2:

     root@Tower:/# df -h
     Filesystem      Size  Used Avail Use% Mounted on
     /dev/nvme0n1p1  932G  673G  259G  73% /
     tmpfs            64M     0   64M   0% /dev
     tmpfs            32G     0   32G   0% /sys/fs/cgroup
     shm              64M   60K   64M   1% /dev/shm
     shfs            3.7T  893G  2.8T  24% /plots
     /dev/sde1       447G  313G  135G  70% /plotting
     /dev/nvme0n1p1  932G  673G  259G  73% /id_rsa
     /dev/sdf1       447G  447G   24K 100% /plotting2
     /dev/sdb1       7.3T  7.2T  153G  98% /plots2
     /dev/sdc1       7.3T  7.2T  153G  98% /plots3
     tmpfs            32G     0   32G   0% /proc/acpi
     tmpfs            32G     0   32G   0% /sys/firmware

     I'm having to stop plotting, clear the files (see the cleanup sketch below), and start it off again. I don't know whether this is a MadMax or a Machinaris issue. @guy.davis ?? This is still faster than the chia plotter, but I don't have much disk space left...
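     Until the root cause is found, a sketch for clearing leftover MadMax temp files. Only run it when no plot job is active, and review the dry-run output before deleting; the paths match the mappings shown above and the 2-hour age cutoff is an arbitrary choice:

        # Dry run: list temp files older than two hours (likely orphaned by a crashed run)
        find /plotting /plotting2 -name '*.tmp' -mmin +120 -print
        # If the list looks right, delete them
        find /plotting /plotting2 -name '*.tmp' -mmin +120 -delete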
  15. Just did a "check for updates" and it's available. Just waiting for some plots to finish. Looking forward to the latest version.