
dgaschk

Moderators
  • Content Count: 8919
  • Joined
  • Last visited

Community Reputation: 3 Neutral

About dgaschk

  • Rank: Advanced Member
  • Gender: Undisclosed

  1. johnnie.black is correct. Works with latest firmware. Thanks.
  2. LSISAS3008: FWVersion(16.00.10.00), Linux 4.19.56-Unraid.

         root@Othello:~# fstrim -v /mnt/cache
         /mnt/cache: 1.4 TiB (1500309532672 bytes) trimmed
         root@Othello:~#

     Working now. Thanks.
  3.     root@Othello:~# fstrim -v /mnt/cache
         fstrim: /mnt/cache: FITRIM ioctl failed: Remote I/O error
         root@Othello:~# hdparm -I /dev/sd[k-m] | grep TRIM
            *    Data Set Management TRIM supported (limit 8 blocks)
            *    Deterministic read ZEROs after TRIM
            *    Data Set Management TRIM supported (limit 8 blocks)
            *    Deterministic read ZEROs after TRIM
            *    Data Set Management TRIM supported (limit 8 blocks)
            *    Deterministic read ZEROs after TRIM
         root@Othello:~# fstrim -av
         fstrim: /mnt/cache: FITRIM ioctl failed: Remote I/O error

     sd[k-m] are (3x) 860 EVO connected to a 9300-8i and configured as cache. There are too few SATA 3 ports on the MB. Does anyone have this combination working? Can you recommend a combination of HBA and SSD that does support TRIM? (A quick discard-support check is sketched after this list.) Thanks, David
     othello-diagnostics-20200108-1820.zip
  4. Have you confirmed this? I have a 9300-8i and the 860 EVO does not TRIM.

         root@Othello:~# fstrim -v /mnt/cache
         fstrim: /mnt/cache: FITRIM ioctl failed: Remote I/O error
         root@Othello:~# hdparm -I /dev/sd[k-m] | grep TRIM
            *    Data Set Management TRIM supported (limit 8 blocks)
            *    Deterministic read ZEROs after TRIM
            *    Data Set Management TRIM supported (limit 8 blocks)
            *    Deterministic read ZEROs after TRIM
            *    Data Set Management TRIM supported (limit 8 blocks)
            *    Deterministic read ZEROs after TRIM
         root@Othello:~# fstrim -av
         fstrim: /mnt/cache: FITRIM ioctl failed: Remote I/O error

     sd[k-m] are (3x) 860 EVO connected to a 9300-8i. Maybe I should start another thread.
  5. Default hangs at "Loading /bzroot . . .ok"
     GUI Mode hangs at "Loading /bzroot-gui . . .ok"
     Safe Mode hangs at "Loading /bzroot . . .ok"
     GUI Safe Mode hangs at "Loading /bzroot-gui . . .ok"
     Memtest86+ doesn't work either. It just reboots.
  6. Yes. Those lines unbind the console. Enter them on the command line to see if they work. Make sure everything is working. When you reboot, the effects will be undone. Add the lines to flash/config/go so they are executed on every reboot (a go file sketch follows this list). See my sig for go file info.
  7. I am booting UEFI. BIOS is set to EFI only.
  8. The console stops after "Loading /bzroot . . .ok" with the blue Syslinux boot selector on the screen.
     rack-diagnostics-20190924-0158.zip
  9. The console stops operating after inetd is loaded. Anyone seen this before? The physical console shows the same as the screenshot. TIA, David

     UPDATE: After attaching an HDMI dummy display plug to the graphics card and rebooting, the console stops after "Loading /bzroot . . .ok"
     rack-diagnostics-20190924-0034.zip
  10. What does the syslog look like when this is happening? Post diagnostics. See my GPU efforts here:
  11. Thanks for updating this post. A built-in Ethernet port just died and a USB-to-Ethernet adapter looks like a viable fix.
  12. After further testing, weirdness ensues. I rebooted in safe mode and back to normal mode. The BIND from vfio-pci.cfg is working correctly and I am successfully using the stock Syslinux configuration. I still needed the following three lines in the go file for the VM to start up and not fill the syslog:

          #fix video for VM
          echo 0 > /sys/class/vtconsole/vtcon0/bind
          echo 0 > /sys/class/vtconsole/vtcon1/bind
          echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind
  13. 2019-09-21T23:33:56.395403Z qemu-system-x86_64: vfio_region_write(0000:65:00.0:region1+0xd71e0, 0x0,8) failed: Device or resource busy

      I created a vfio-pci.cfg, but the binding to a Windows 10 VM on a Supermicro X11SPM-F with a GeForce RTX 2080 SUPER doesn't seem to work. I get the same result as not having a vfio-pci.cfg file when I attempt to start the VM.

      VM log:

          -boot strict=on \
          -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
          -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
          -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
          -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
          -device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
          -device pcie-root-port,port=0x8,chassis=6,id=pci.6,bus=pcie.0,multifunction=on,addr=0x1 \
          -device pcie-pci-bridge,id=pci.7,bus=pci.1,addr=0x0 \
          -device pcie-root-port,port=0x9,chassis=8,id=pci.8,bus=pcie.0,addr=0x1.0x1 \
          -device pcie-root-port,port=0xa,chassis=9,id=pci.9,bus=pcie.0,addr=0x1.0x2 \
          -device qemu-xhci,p2=15,p3=15,id=usb,bus=pcie.0,addr=0x7 \
          -device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \
          -drive 'file=/mnt/disks/Scorch/VM/Windows 10/vdisk1.img,format=raw,if=none,id=drive-virtio-disk2,cache=writeback' \
          -device virtio-blk-pci,scsi=off,bus=pci.4,addr=0x0,drive=drive-virtio-disk2,id=virtio-disk2,bootindex=1,write-cache=on \
          -drive file=/mnt/user/backup/Win10_1903_V1_English_x64.iso,format=raw,if=none,id=drive-sata0-0-0,readonly=on \
          -device ide-cd,bus=ide.0,drive=drive-sata0-0-0,id=sata0-0-0,bootindex=2 \
          -drive file=/mnt/user/backup/virtio-win-0.1.160-1.iso,format=raw,if=none,id=drive-sata0-0-1,readonly=on \
          -device ide-cd,bus=ide.1,drive=drive-sata0-0-1,id=sata0-0-1 \
          -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 \
          -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:da:47:b1,bus=pci.3,addr=0x0 \
          -chardev pty,id=charserial0 \
          -device isa-serial,chardev=charserial0,id=serial0 \
          -chardev socket,id=charchannel0,fd=31,server,nowait \
          -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
          -device usb-tablet,id=input0,bus=usb.0,port=3 \
          -vnc 0.0.0.0:0,websocket=5700 \
          -k en-us \
          -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.7,addr=0x1 \
          -device vfio-pci,host=65:00.0,id=hostdev0,bus=pci.5,addr=0x0 \
          -device vfio-pci,host=65:00.1,id=hostdev1,bus=pci.6,addr=0x0 \
          -device usb-host,hostbus=1,hostaddr=5,id=hostdev2,bus=usb.0,port=1 \
          -device usb-host,hostbus=1,hostaddr=7,id=hostdev3,bus=usb.0,port=2 \
          -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
          -msg timestamp=on
          2019-09-21 05:40:54.258+0000: Domain id=1 is tainted: high-privileges
          2019-09-21 05:40:54.258+0000: Domain id=1 is tainted: host-cpu
          char device redirected to /dev/pts/0 (label charserial0)
          2019-09-21T05:40:54.449416Z qemu-system-x86_64: -device vfio-pci,host=65:00.0,id=hostdev0,bus=pci.5,addr=0x0: vfio 0000:65:00.0: group 28 is not viable
          Please ensure all devices within the iommu_group are bound to their vfio bus driver.
          2019-09-21 05:40:54.488+0000: shutting down, reason=failed

      I am able to BIND and start the VM if I edit the Syslinux configuration with "{append initrd=/bzroot} vfio-pci.ids=10de:1e81,10de:10f8,10de:1ad8,10de:1ad9" and this noted here:

      In safe mode the VM tab reports "Libvirt Service failed to start."
      Please see second diagnostics. Rebooting to normal after safe mode seems to have fixed some things. The BIND is now working with a stock Syslinux configuration, so vfio-pci.cfg appears to be working. (Or are there effects remaining from the previous mods?) Starting the VM resulted in this line filling the syslog:

          2019-09-21T23:33:56.395403Z qemu-system-x86_64: vfio_region_write(0000:65:00.0:region1+0xd71e0, 0x0,8) failed: Device or resource busy

      Adding these lines to my go file allows the VM to operate acceptably:

          #fix video for VM
          echo 0 > /sys/class/vtconsole/vtcon0/bind
          echo 0 > /sys/class/vtconsole/vtcon1/bind
          echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind

      Please see latest diagnostics attached.
      rack-diagnostics-20190921-0547.zip
      vfio-pci.cfg
      rack-diagnostics-20190921-2315.zip
      rack-diagnostics-20190921-2356.zip
  14. This seems to fix the problem. VNC did not connect at first but RDP and Splashtop did. VNC worked after I logged in once.
      https://www.redhat.com/archives/vfio-users/2016-March/msg00088.html

      The Windows 10 VM on a Supermicro X11SPM-F with a GeForce RTX 2080 SUPER appears to be working. VNC is not usable but Splashtop works well. I passed the GeForce through using the Syslinux configuration with "{append initrd=/bzroot} vfio-pci.ids=10de:1e81,10de:10f8,10de:1ad8,10de:1ad9" (a Syslinux sketch follows this list). I installed newer NVIDIA drivers and chose the NVIDIA card as the primary GPU. No ROM BIOS file is needed.
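
A minimal check sketch, assuming the /mnt/cache mount and the /dev/sdk-/dev/sdm device names from the fstrim posts above, for confirming whether the kernel actually exposes discard/TRIM for SSDs behind an HBA before running fstrim:

    # Non-zero DISC-GRAN and DISC-MAX values mean the kernel exposes discard for
    # the device; all-zero values usually mean the HBA or its driver is not
    # passing TRIM/UNMAP through, even if hdparm reports the drive supports TRIM.
    lsblk --discard /dev/sdk /dev/sdl /dev/sdm

    # If discard is exposed, a manual trim of the cache mount should report the
    # trimmed size instead of failing with "FITRIM ioctl failed".
    fstrim -v /mnt/cache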
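
A minimal sketch of the flash/config/go file with the console-unbind lines quoted in the posts above; the first three lines are assumed to be the stock Unraid go file contents:

    #!/bin/bash
    # Start the Management Utility
    /usr/local/sbin/emhttp &

    #fix video for VM
    # Release the virtual consoles and the EFI framebuffer so the passed-through
    # GPU stops filling the syslog with vfio_region_write errors.
    echo 0 > /sys/class/vtconsole/vtcon0/bind
    echo 0 > /sys/class/vtconsole/vtcon1/bind
    echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind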
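
A sketch of where the quoted vfio-pci.ids append lands in syslinux.cfg on the flash drive, assuming the typical stock Unraid boot entry; the device IDs are the ones quoted in the posts above:

    # Bind the GPU's functions to vfio-pci at boot so they can be passed through to the VM.
    label Unraid OS
      menu default
      kernel /bzimage
      append initrd=/bzroot vfio-pci.ids=10de:1e81,10de:10f8,10de:1ad8,10de:1ad9

Per the posts above, the vfio-pci.cfg method achieves the same binding without editing the Syslinux configuration, in which case the append line is left stock.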