dgaschk

Everything posted by dgaschk

  1. I'm seeing this as well. Using a 2T NVMe drive, I entered 1500G, which became 1T. I entered 1.2T, which became 12T. I was unable to change it until I entered 1.5T, which became 15T. On the command line I ran: qemu-img resize --shrink vdisk1.img 1500G, which becomes 1T. In the end I entered 2T and will simply leave a portion unallocated.
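     For anyone hitting the same thing, a quick way to check what size the image actually ended up at, independent of what the GUI reports, is qemu-img info (run it from the directory holding the vdisk, or substitute your own path):
         qemu-img info vdisk1.img
         # "virtual size" is the capacity the guest sees;
         # "disk size" is the space currently allocated on the host.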
  2. johnnie.black is correct. Works with latest firmware. Thanks.
  3. LSISAS3008: FWVersion(16.00.10.00), Linux 4.19.56-Unraid.
     root@Othello:~# fstrim -v /mnt/cache
     /mnt/cache: 1.4 TiB (1500309532672 bytes) trimmed
     root@Othello:~#
     Working now. Thanks.
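     For reference, that firmware string is printed by the mpt3sas driver at boot; assuming the message is still in the kernel ring buffer, it can be pulled back out with a simple grep:
         dmesg | grep -i fwversion
         # should show a line containing: LSISAS3008: FWVersion(16.00.10.00), ...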
  4. root@Othello:~# fstrim -v /mnt/cache
     fstrim: /mnt/cache: FITRIM ioctl failed: Remote I/O error
     root@Othello:~# hdparm -I /dev/sd[k-m] | grep TRIM
     * Data Set Management TRIM supported (limit 8 blocks)
     * Deterministic read ZEROs after TRIM
     * Data Set Management TRIM supported (limit 8 blocks)
     * Deterministic read ZEROs after TRIM
     * Data Set Management TRIM supported (limit 8 blocks)
     * Deterministic read ZEROs after TRIM
     root@Othello:~# fstrim -av
     fstrim: /mnt/cache: FITRIM ioctl failed: Remote I/O error
     sd[k-m] are (3x) 860 EVO connected to a 9300-8i and configured as cache. There are too few SATA 3 ports on the MB. Does anyone have this combination working? Can you recommend a combination of HBA and SSD that does support TRIM? Thanks, David
     othello-diagnostics-20200108-1820.zip
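     To confirm which devices actually make up the cache pool before chasing the TRIM error, the btrfs tooling can list the pool members (this assumes the cache pool is btrfs, which a multi-device pool on this Unraid version normally is):
         btrfs filesystem show /mnt/cache
         # lists every device backing the pool, so the sd[k-m] assignment can be double-checked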
  5. Have you confirmed this? I have a 9300-8i and the 860 EVOs do not TRIM.
     root@Othello:~# fstrim -v /mnt/cache
     fstrim: /mnt/cache: FITRIM ioctl failed: Remote I/O error
     root@Othello:~# hdparm -I /dev/sd[k-m] | grep TRIM
     * Data Set Management TRIM supported (limit 8 blocks)
     * Deterministic read ZEROs after TRIM
     * Data Set Management TRIM supported (limit 8 blocks)
     * Deterministic read ZEROs after TRIM
     * Data Set Management TRIM supported (limit 8 blocks)
     * Deterministic read ZEROs after TRIM
     root@Othello:~# fstrim -av
     fstrim: /mnt/cache: FITRIM ioctl failed: Remote I/O error
     sd[k-m] are (3x) 860 EVO connected to a 9300-8i. Maybe I should start another thread.
  6. Default hangs at "Loading /bzroot . . .ok"
     GUI Mode hangs at "Loading /bzroot-gui . . .ok"
     Safe Mode hangs at "Loading /bzroot . . .ok"
     GUI Safe Mode hangs at "Loading /bzroot-gui . . .ok"
     Memtest86+ doesn't work either. It just reboots.
  7. Yes. Those lines unbind the console. Enter them on the command line first to see if they work, and make sure everything is still working; when you reboot, the effects will be undone. Then add the lines to flash/config/go so they are executed on every boot. See my sig for go file info.
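     For anyone landing here, the lines being referred to are the console/framebuffer unbinds from the GPU passthrough posts further down this list; a minimal addition to flash/config/go (mounted at /boot/config/go on the server) would look like this:
         # release the console and EFI framebuffer so the GPU can be passed through
         echo 0 > /sys/class/vtconsole/vtcon0/bind
         echo 0 > /sys/class/vtconsole/vtcon1/bind
         echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind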
  8. I am booting UEFI. BIOS is set to EFI only.
  9. The console stops after "Loading /bzroot . . .ok" with the blue syslinux boot selector still on the screen.
     rack-diagnostics-20190924-0158.zip
  10. The console stops operating after inetd is loaded. Has anyone seen this before? The physical console shows the same as the screenshot. TIA, David
      UPDATE: After attaching an HDMI dummy display plug to the graphics card and rebooting, the console stops after "Loading /bzroot . . .ok"
      rack-diagnostics-20190924-0034.zip
  11. What does the syslog look like when this is happening? Post diagnostics. See my GPU efforts here:
  12. Thanks for updating this post. A built-in Ethernet port just died, and USB-to-Ethernet looks like a viable fix.
  13. After further testing, weirdness ensues. I rebooted into safe mode and back to normal mode. The BIND from vfio-pci.cfg is working correctly and I am successfully using the stock Syslinux configuration. I still needed the following three lines in the go file for the VM to start up without filling the syslog:
      # fix video for VM
      echo 0 > /sys/class/vtconsole/vtcon0/bind
      echo 0 > /sys/class/vtconsole/vtcon1/bind
      echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind
  14. 2019-09-21T23:33:56.395403Z qemu-system-x86_64: vfio_region_write(0000:65:00.0:region1+0xd71e0, 0x0,8) failed: Device or resource busy
      I created a vfio-pci.cfg, but the binding for a Windows 10 VM on a Supermicro X11SPM-F with a GeForce RTX 2080 SUPER doesn't seem to work. I get the same result as not having a vfio-pci.cfg file when I attempt to start the VM.
      VM log:
      -boot strict=on \
      -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
      -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
      -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
      -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
      -device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
      -device pcie-root-port,port=0x8,chassis=6,id=pci.6,bus=pcie.0,multifunction=on,addr=0x1 \
      -device pcie-pci-bridge,id=pci.7,bus=pci.1,addr=0x0 \
      -device pcie-root-port,port=0x9,chassis=8,id=pci.8,bus=pcie.0,addr=0x1.0x1 \
      -device pcie-root-port,port=0xa,chassis=9,id=pci.9,bus=pcie.0,addr=0x1.0x2 \
      -device qemu-xhci,p2=15,p3=15,id=usb,bus=pcie.0,addr=0x7 \
      -device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \
      -drive 'file=/mnt/disks/Scorch/VM/Windows 10/vdisk1.img,format=raw,if=none,id=drive-virtio-disk2,cache=writeback' \
      -device virtio-blk-pci,scsi=off,bus=pci.4,addr=0x0,drive=drive-virtio-disk2,id=virtio-disk2,bootindex=1,write-cache=on \
      -drive file=/mnt/user/backup/Win10_1903_V1_English_x64.iso,format=raw,if=none,id=drive-sata0-0-0,readonly=on \
      -device ide-cd,bus=ide.0,drive=drive-sata0-0-0,id=sata0-0-0,bootindex=2 \
      -drive file=/mnt/user/backup/virtio-win-0.1.160-1.iso,format=raw,if=none,id=drive-sata0-0-1,readonly=on \
      -device ide-cd,bus=ide.1,drive=drive-sata0-0-1,id=sata0-0-1 \
      -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 \
      -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:da:47:b1,bus=pci.3,addr=0x0 \
      -chardev pty,id=charserial0 \
      -device isa-serial,chardev=charserial0,id=serial0 \
      -chardev socket,id=charchannel0,fd=31,server,nowait \
      -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
      -device usb-tablet,id=input0,bus=usb.0,port=3 \
      -vnc 0.0.0.0:0,websocket=5700 \
      -k en-us \
      -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.7,addr=0x1 \
      -device vfio-pci,host=65:00.0,id=hostdev0,bus=pci.5,addr=0x0 \
      -device vfio-pci,host=65:00.1,id=hostdev1,bus=pci.6,addr=0x0 \
      -device usb-host,hostbus=1,hostaddr=5,id=hostdev2,bus=usb.0,port=1 \
      -device usb-host,hostbus=1,hostaddr=7,id=hostdev3,bus=usb.0,port=2 \
      -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
      -msg timestamp=on
      2019-09-21 05:40:54.258+0000: Domain id=1 is tainted: high-privileges
      2019-09-21 05:40:54.258+0000: Domain id=1 is tainted: host-cpu
      char device redirected to /dev/pts/0 (label charserial0)
      2019-09-21T05:40:54.449416Z qemu-system-x86_64: -device vfio-pci,host=65:00.0,id=hostdev0,bus=pci.5,addr=0x0: vfio 0000:65:00.0: group 28 is not viable
      Please ensure all devices within the iommu_group are bound to their vfio bus driver.
      2019-09-21 05:40:54.488+0000: shutting down, reason=failed
      I am able to BIND and start the VM if I edit the Syslinux configuration with "{append initrd=/bzroot} vfio-pci.ids=10de:1e81,10de:10f8,10de:1ad8,10de:1ad9", and, as noted here: in safe mode the VM tab reports "Libvirt Service failed to start."
      Please see the second diagnostics. Rebooting to normal after safe mode seems to have fixed some things. The BIND is now working with a stock Syslinux configuration, so vfio-pci.cfg appears to be working. (Or are there effects remaining from the previous mods?) Starting the VM resulted in this line filling the syslog:
      2019-09-21T23:33:56.395403Z qemu-system-x86_64: vfio_region_write(0000:65:00.0:region1+0xd71e0, 0x0,8) failed: Device or resource busy
      Adding these lines to my go file allows the VM to operate acceptably:
      # fix video for VM
      echo 0 > /sys/class/vtconsole/vtcon0/bind
      echo 0 > /sys/class/vtconsole/vtcon1/bind
      echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind
      Please see the latest diagnostics attached.
      rack-diagnostics-20190921-0547.zip
      vfio-pci.cfg
      rack-diagnostics-20190921-2315.zip
      rack-diagnostics-20190921-2356.zip
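      For readers who don't open the attachment: the vfio-pci.cfg used here is just a one-line file on the flash drive binding the GPU's two functions (addresses taken from the log above). The exact syntax can vary between Unraid releases, so treat this as a sketch rather than gospel:
          # /boot/config/vfio-pci.cfg -- bind both functions of the RTX 2080 SUPER to vfio-pci at boot
          BIND=0000:65:00.0 0000:65:00.1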
  15. This seems to fix the problem. VNC did not connect at first, but RDP and Splashtop did; VNC worked after I logged in once.
      https://www.redhat.com/archives/vfio-users/2016-March/msg00088.html
      The Windows 10 VM on a Supermicro X11SPM-F with a GeForce RTX 2080 SUPER appears to be working. VNC is not usable, but Splashtop works well. I passed the GeForce through by editing the Syslinux configuration with "{append initrd=/bzroot} vfio-pci.ids=10de:1e81,10de:10f8,10de:1ad8,10de:1ad9". I installed newer Nvidia drivers and chose the Nvidia card as the primary GPU. No ROM BIOS file is needed.
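      To spell out the "{append initrd=/bzroot}" shorthand: the vfio-pci.ids values are simply added to the existing append line of the default boot entry in syslinux.cfg on the flash drive. A sketch of the resulting entry (the label/kernel lines are the stock Unraid defaults; only the IDs are specific to this card):
          label Unraid OS
            menu default
            kernel /bzimage
            append initrd=/bzroot vfio-pci.ids=10de:1e81,10de:10f8,10de:1ad8,10de:1ad9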
  16. I created a vfio-pci.cfg, but the binding doesn't seem to work. I get the same result as not having a vfio-pci.cfg file:
      2019-09-21T05:40:54.449416Z qemu-system-x86_64: -device vfio-pci,host=65:00.0,id=hostdev0,bus=pci.5,addr=0x0: vfio 0000:65:00.0: group 28 is not viable
      Please ensure all devices within the iommu_group are bound to their vfio bus driver.
      VM log:
      -boot strict=on \
      -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
      -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
      -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
      -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
      -device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
      -device pcie-root-port,port=0x8,chassis=6,id=pci.6,bus=pcie.0,multifunction=on,addr=0x1 \
      -device pcie-pci-bridge,id=pci.7,bus=pci.1,addr=0x0 \
      -device pcie-root-port,port=0x9,chassis=8,id=pci.8,bus=pcie.0,addr=0x1.0x1 \
      -device pcie-root-port,port=0xa,chassis=9,id=pci.9,bus=pcie.0,addr=0x1.0x2 \
      -device qemu-xhci,p2=15,p3=15,id=usb,bus=pcie.0,addr=0x7 \
      -device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \
      -drive 'file=/mnt/disks/Scorch/VM/Windows 10/vdisk1.img,format=raw,if=none,id=drive-virtio-disk2,cache=writeback' \
      -device virtio-blk-pci,scsi=off,bus=pci.4,addr=0x0,drive=drive-virtio-disk2,id=virtio-disk2,bootindex=1,write-cache=on \
      -drive file=/mnt/user/backup/Win10_1903_V1_English_x64.iso,format=raw,if=none,id=drive-sata0-0-0,readonly=on \
      -device ide-cd,bus=ide.0,drive=drive-sata0-0-0,id=sata0-0-0,bootindex=2 \
      -drive file=/mnt/user/backup/virtio-win-0.1.160-1.iso,format=raw,if=none,id=drive-sata0-0-1,readonly=on \
      -device ide-cd,bus=ide.1,drive=drive-sata0-0-1,id=sata0-0-1 \
      -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 \
      -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:da:47:b1,bus=pci.3,addr=0x0 \
      -chardev pty,id=charserial0 \
      -device isa-serial,chardev=charserial0,id=serial0 \
      -chardev socket,id=charchannel0,fd=31,server,nowait \
      -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
      -device usb-tablet,id=input0,bus=usb.0,port=3 \
      -vnc 0.0.0.0:0,websocket=5700 \
      -k en-us \
      -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.7,addr=0x1 \
      -device vfio-pci,host=65:00.0,id=hostdev0,bus=pci.5,addr=0x0 \
      -device vfio-pci,host=65:00.1,id=hostdev1,bus=pci.6,addr=0x0 \
      -device usb-host,hostbus=1,hostaddr=5,id=hostdev2,bus=usb.0,port=1 \
      -device usb-host,hostbus=1,hostaddr=7,id=hostdev3,bus=usb.0,port=2 \
      -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
      -msg timestamp=on
      2019-09-21 05:40:54.258+0000: Domain id=1 is tainted: high-privileges
      2019-09-21 05:40:54.258+0000: Domain id=1 is tainted: host-cpu
      char device redirected to /dev/pts/0 (label charserial0)
      2019-09-21T05:40:54.449416Z qemu-system-x86_64: -device vfio-pci,host=65:00.0,id=hostdev0,bus=pci.5,addr=0x0: vfio 0000:65:00.0: group 28 is not viable
      Please ensure all devices within the iommu_group are bound to their vfio bus driver.
      2019-09-21 05:40:54.488+0000: shutting down, reason=failed
      I am able to start the VM if I edit the Syslinux configuration with "{append initrd=/bzroot} vfio-pci.ids=10de:1e81,10de:10f8,10de:1ad8,10de:1ad9", but that is another post.
      vfio-pci.cfg
      rack-diagnostics-20190921-0547.zip
  17. The VM works until I try to add an Nvidia card as a second GPU. This line fills the syslog:
      Sep 20 21:34:09 rack kernel: vfio-pci 0000:65:00.0: BAR 1: can't reserve [mem 0x38bfe0000000-0x38bfefffffff 64bit pref]
      rack-diagnostics-20190921-0449.zip
  18. I added an NVIDIA card to a Supermicro board, and the console using the built-in ASPEED VGA worked at first. I added the vfio-pci.cfg to the config directory and rebooted; the console froze after "Loading /bzroot . . . ok". I deleted the vfio-pci.cfg file and rebooted. The console still froze. I rebooted with the PCIe slot holding the Nvidia card disabled. The console worked, and I saved the first diagnostic attached. I rebooted with the PCIe slot holding the Nvidia card enabled, and the console video stopped after loading bzroot. I saved the second diagnostic attached. Other than the console not working, basic server function seems OK. I have not yet restarted the VM. Why is the console frozen?
      rack-diagnostics-20190920-0111.zip
      rack-diagnostics-20190920-0118.zip
      vfio-pci.cfg.txt
  19. If you have a disk formatted by unRAID (or possibly otherwise formatted correctly) that has data, it can be added to the array by resetting the array, adding the disk, and rebuilding parity. A replacement disk can be swapped in without preparation because it will be written with the data and formatting that existed on the previous disk. (Many would preclear as a test, but it's not required in any case.)
      Adding an additional disk to the array will cause the disk to be zeroed (cleared), added to the array, and then formatted. The new disk will be empty, and any data it previously contained will be gone. (This is what preclear was originally for.)
      If you have a random disk with data that you want to add to the array, you must first copy the data to another location, perhaps in the array, and then add the disk as an additional disk. If the data is not in the array, it should be copied to the array at this point.
      So preclear is not needed and no longer appears to be supported. Does anyone know of a docker for disk test and "burn in"? Should we start removing references to preclear? Should the OP be modified to reflect the current non-working status of preclear? If gfjardim returns to update the plugin, they can update the OP to say that the plugin works again.
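      Until someone points to a dedicated docker, one plain-CLI option for a destructive burn-in is badblocks (assuming the badblocks binary from e2fsprogs is available on the system). This is only a generic example and it erases the disk, so verify the device name first:
          # DESTRUCTIVE write-mode test: writes and verifies patterns across the whole disk
          badblocks -wsv -b 4096 /dev/sdX   # replace sdX with the disk to test; never run this against an array member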
  20. I haven't tried, but they should work. Someone try it and report back; I will test it eventually. I've never seen a shell script lose compatibility. The issue here is with the GUI integration.
  21. Should I update the OP with this info?
  22. I'm going to use one of these procedures to preclear until the plugin is updated.
      https://forums.unraid.net/topic/2732-preclear_disksh-a-new-utility-to-burn-in-and-pre-clear-disks-for-quick-add/
      https://forums.unraid.net/topic/30921-unofficial-faster-preclear/
  23. This is a power or connection issue. See here: http://lime-technology.com/wiki/index.php?title=The_Analysis_of_Drive_Issues
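     As a command-line complement to that wiki page, the relevant SMART attributes can be checked directly; a rising UDMA CRC error count is the classic sign of a bad cable or connection (the device name below is only a placeholder):
         smartctl -a /dev/sdX | grep -i -e udma_crc -e reallocated -e pending
         # replace sdX with the drive in question; a nonzero or growing UDMA_CRC_Error_Count points at cabling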