dgaschk

Moderators
  • Content Count

    8919
  • Joined

  • Last visited

Everything posted by dgaschk

  1. johnnie.black is correct. Works with latest firmware. Thanks.
  2. LSISAS3008: FWVersion(16.00.10.00), Linux 4.19.56-Unraid.
     root@Othello:~# fstrim -v /mnt/cache
     /mnt/cache: 1.4 TiB (1500309532672 bytes) trimmed
     root@Othello:~#
     Working now. Thanks.
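     Once fstrim works, it can be run on a schedule. A minimal sketch, assuming Unraid's dynamix cron drop-in mechanism (files matching /boot/config/plugins/dynamix/*.cron are merged into the crontab by update_cron — verify the path on your release; the filename fstrim.cron is hypothetical):

     ```shell
     # fstrim.cron - weekly TRIM of the cache pool, Sundays at 03:00.
     # Place at /boot/config/plugins/dynamix/fstrim.cron, then run `update_cron`.
     0 3 * * 0 /sbin/fstrim -v /mnt/cache
     ```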
  3. root@Othello:~# fstrim -v /mnt/cache
     fstrim: /mnt/cache: FITRIM ioctl failed: Remote I/O error
     root@Othello:~# hdparm -I /dev/sd[k-m] | grep TRIM
        * Data Set Management TRIM supported (limit 8 blocks)
        * Deterministic read ZEROs after TRIM
        * Data Set Management TRIM supported (limit 8 blocks)
        * Deterministic read ZEROs after TRIM
        * Data Set Management TRIM supported (limit 8 blocks)
        * Deterministic read ZEROs after TRIM
     root@Othello:~# fstrim -av
     fstrim: /mnt/cache: FITRIM ioctl failed: Remote I/O error
  4. Have you confirmed this? I have a 9300-8i and the 860 EVO does not TRIM.
     root@Othello:~# fstrim -v /mnt/cache
     fstrim: /mnt/cache: FITRIM ioctl failed: Remote I/O error
     root@Othello:~# hdparm -I /dev/sd[k-m] | grep TRIM
        * Data Set Management TRIM supported (limit 8 blocks)
        * Deterministic read ZEROs after TRIM
        * Data Set Management TRIM supported (limit 8 blocks)
        * Deterministic read ZEROs after TRIM
        * Data Set Management TRIM supported (limit 8 blocks)
        * Deterministic read ZEROs after TRIM
     root@Othello:~#
  5. Default hangs at "Loading /bzroot . . .ok"
     GUI Mode hangs at "Loading /bzroot-gui . . .ok"
     Safe Mode hangs at "Loading /bzroot . . .ok"
     GUI Safe Mode hangs at "Loading /bzroot-gui . . .ok"
     Memtest86+ doesn't work either. It just reboots.
  6. Yes. Those lines unbind the console. Enter them on the command line first to confirm they work and that everything still functions; the effects are undone on reboot. Then add the lines to flash/config/go so they are executed on every boot. See my sig for go file info.
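     A minimal sketch of what the edited go file might look like (the stock go file lives at /boot/config/go on the flash drive; the vtcon1 and efi-framebuffer lines are the ones discussed in this thread and may differ on your hardware):

     ```shell
     #!/bin/bash
     # /boot/config/go - runs at every boot; the stock file just starts emhttp
     /usr/local/sbin/emhttp &

     # fix video for VM passthrough: release both virtual consoles
     # and the EFI framebuffer so vfio-pci can claim the GPU
     echo 0 > /sys/class/vtconsole/vtcon0/bind
     echo 0 > /sys/class/vtconsole/vtcon1/bind
     echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind
     ```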
  7. I am booting UEFI. BIOS is set to EFI only.
  8. The console stops after "Loading /bzroot . . .ok" with the blue syslinux boot selector on the screen. rack-diagnostics-20190924-0158.zip
  9. The console stops operating after inetd is loaded. Has anyone seen this before? The physical console shows the same as the screenshot. TIA, David
     UPDATE: After attaching an HDMI dummy display plug to the graphics card and rebooting, the console stops after "Loading /bzroot . . .ok". rack-diagnostics-20190924-0034.zip
  10. What does the syslog look like when this is happening? Post diagnostics. See my GPU efforts here:
  11. Thanks for updating this post. A built-in Ethernet port just died and USB to Ethernet looks like a viable fix.
  12. After further testing, weirdness ensues. I rebooted into safe mode and back to normal mode. The BIND from vfio-pci.cfg is working correctly and I am successfully using the stock Syslinux configuration. I still needed the following three lines in the go file for the VM to start up without filling the syslog:
      # fix video for VM
      echo 0 > /sys/class/vtconsole/vtcon0/bind
      echo 0 > /sys/class/vtconsole/vtcon1/bind
      echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind
  13. I created a vfio-pci.cfg, but the binding to a Windows 10 VM on a Supermicro X11SPM-F with a GeForce RTX 2080 SUPER doesn't seem to work. I get the same result as not having a vfio-pci.cfg file when I attempt to start the VM:
      2019-09-21T23:33:56.395403Z qemu-system-x86_64: vfio_region_write(0000:65:00.0:region1+0xd71e0, 0x0,8) failed: Device or resource busy
      VM log:
      -boot strict=on \
      -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
      -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
      -
  14. This seems to fix the problem. VNC did not connect at first, but RDP and Splashtop did; VNC worked after I logged in once. https://www.redhat.com/archives/vfio-users/2016-March/msg00088.html
      The Windows 10 VM on a Supermicro X11SPM-F with a GeForce RTX 2080 SUPER appears to be working. VNC is not usable but Splashtop works well. I passed the GeForce through using the Syslinux configuration with "{append initrd=/bzroot} vfio-pci.ids=10de:1e81,10de:10f8,10de:1ad8,10de:1ad9". I installed newer NVIDIA drivers and chose the NVIDIA card as the primary GPU. No ROM BIOS file is needed.
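      For reference, a sketch of what that stanza might look like in /boot/syslinux/syslinux.cfg. The PCI IDs are the ones from this post (GPU, HDMI audio, USB, and USB-C controller functions of the RTX 2080 SUPER); yours will differ — check with lspci -nn:

      ```
      label Unraid OS
        menu default
        kernel /bzimage
        append initrd=/bzroot vfio-pci.ids=10de:1e81,10de:10f8,10de:1ad8,10de:1ad9
      ```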
  15. I created a vfio-pci.cfg but the binding doesn't seem to work. I get the same result as not having a vfio-pci.cfg file:
      2019-09-21T05:40:54.449416Z qemu-system-x86_64: -device vfio-pci,host=65:00.0,id=hostdev0,bus=pci.5,addr=0x0: vfio 0000:65:00.0: group 28 is not viable
      Please ensure all devices within the iommu_group are bound to their vfio bus driver.
      VM log:
      -boot strict=on \
      -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
      -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1
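      The "group 28 is not viable" error means some other device in the same IOMMU group is still bound to a host driver. A small helper (hypothetical, not from the original post) to list each group and its devices, so you can see what shares the GPU's group:

      ```shell
      #!/bin/sh
      # List every IOMMU group and the PCI devices in it. All devices in the
      # GPU's group must be bound to vfio-pci for passthrough to work.
      list_iommu_groups() {
          base="${1:-/sys/kernel/iommu_groups}"
          if [ ! -d "$base" ]; then
              echo "no IOMMU groups under $base (IOMMU off or unsupported)"
              return 0
          fi
          for g in "$base"/*; do
              [ -d "$g" ] || continue
              printf 'Group %s:\n' "${g##*/}"
              ls "$g/devices"
          done
      }

      list_iommu_groups
      ```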
  16. The VM works until I try to add an Nvidia card as a second GPU. This line fills the syslog:
      Sep 20 21:34:09 rack kernel: vfio-pci 0000:65:00.0: BAR 1: can't reserve [mem 0x38bfe0000000-0x38bfefffffff 64bit pref]
      rack-diagnostics-20190921-0449.zip
  17. I added an NVIDIA card to a Supermicro board, and the console using the built-in ASPEED VGA worked at first. I added the vfio-pci.cfg to the config directory and rebooted; the console froze after "Loading /bzroot . . . ok". I deleted the vfio-pci.cfg file and rebooted; the console still froze. I rebooted with the PCIe slot holding the NVIDIA card disabled. The console worked, and I saved the first diagnostic attached. I rebooted with the PCIe slot enabled, and the console video stopped after loading bzroot. I saved the second diagnostic attached. Other than the console not working the
  18. If you have a disk formatted by unRAID (or possibly otherwise formatted correctly) that has data, it can be added to the array by resetting the array, adding the disk, and rebuilding parity. A replacement disk can be swapped in without preparation because the disk will be written with the data and formatting that existed on the previous disk. (Many would preclear as a test, but it's not required in any case.) Adding an additional disk to the array will cause the disk to be zeroed (cleared), added to the array, and then formatted. The new disk will be empty, and any data it pr
  19. I haven’t tried but they should work. Someone try it and report back. I will test it eventually. I’ve never seen a shell script lose compatibility. The issue here is with the GUI integration.
  20. Should I update the OP with this info?
  21. I'm going to use one of these procedures to preclear until the plugin is updated. https://forums.unraid.net/topic/2732-preclear_disksh-a-new-utility-to-burn-in-and-pre-clear-disks-for-quick-add/ https://forums.unraid.net/topic/30921-unofficial-faster-preclear/
  22. This is a power or connection issue. See here: http://lime-technology.com/wiki/index.php?title=The_Analysis_of_Drive_Issues
  23. The preclear script does not appear to terminate:
      Version: 6.7.0 (rack, 192.168.10.198, "backup and test", Unraid OS Pro, uptime 30 days, 21 hours, 7 minutes)
      Processes:
      UID  PID PPID C STIME TTY TIME     CMD
      root 1   0    0 May18 ?   00:00:32 init
      root 2   0    0 May18 ?   00:00:00 [kthreadd]
      root 3   2    0 May18 ?   00:00:00 [rcu_gp]
      root 4   2    0 May18 ?   00:00:00 [rcu_par_gp]
      root 6   2    0 May18 ?   00:00