TheSkaz


  1. OK, I was able to downgrade to 6.8.3, and everything seems to be smooth for 24 hours or so.
  2. here is a detailed list of my hardware. Also, I noticed that every time the kernel panic shows up, the internet in my house goes out and I have to restart either the 10G switch it is plugged into or the Ubiquiti router the switch is plugged into. This affects all of my devices on the network... effin weird.
     Asus Zenith II Extreme Alpha
     Threadripper 3990X
     G.Skill Trident Neo 3200 DDR4, 256GB RAM
     2x Nvidia Titan RTX
     1x RTX 2080 Ti
     LSI 9206-16e -> Supermicro 45-bay JBOD array
  3. I have a fresh install of 6.9.0-rc2. When I start the array, these screens are what show up. I will add a hardware list shortly.
  4. @steini84 getting a weird error:
         Sep 27 13:14:40 Tower kernel: VERIFY3(zfs_btree_find(tree, value, &where) != NULL) failed (0000000000000000 != 0000000000000000)
         Sep 27 13:14:40 Tower kernel: PANIC at btree.c:1780:zfs_btree_remove()
         Sep 27 13:14:40 Tower kernel: Showing stack for process 8689
         Sep 27 13:14:40 Tower kernel: CPU: 54 PID: 8689 Comm: txg_sync Tainted: P O 4.19.107-Unraid #1
         Sep 27 13:14:40 Tower kernel: Hardware name: System manufacturer System Product Name/ROG ZENITH II EXTREME ALPHA, BIOS 1101 06/05/2020
         Sep 27 13:14:40 Tower kernel: Call
  5. working beautifully. You, sir, are a scholar among men (or women).
  6. you have built one for me before; that would be awesome. I REALLY don't want to lose that data. Maybe it could help someone else too?
  7. just downgraded from Unraid beta25 to 6.8.3, and I had built 2 pools on the previous version. When trying to import them, I get this:
         root@Tower:~# zpool import
            pool: datastore
              id: 7743322362316987465
           state: UNAVAIL
          status: The pool can only be accessed in read-only mode on this system. It cannot be
                  accessed in read-write mode because it uses the following feature(s) not
                  supported on this system:
                  com.delphix:log_spacemap (Log metaslab changes on a single spacemap and flush them periodically.)
          action: The pool cannot be imported in read-write
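Since that status output says the pool is still readable, one way to get at the data on 6.8.3 is a read-only import. A minimal sketch, wrapped in a dry-run helper so the commands can be inspected before running them as root; "datastore" is the pool name from the output above:

```shell
# Sketch: import the pool read-only so ZFS never writes the unsupported
# com.delphix:log_spacemap feature. Set DRYRUN=0 and run as root to execute.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run zpool import -o readonly=on datastore
run zfs list -r datastore
```

A read-only import only makes the data visible; getting a writable pool on the older ZFS still means recreating it there and restoring from the read-only copy.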
  8. If everything is exactly the same in both directions, meaning there isn't a redundant path or something, it could be the difference in read/write speed on one of the devices. To put it more simply, one device might be able to read at 112 MB/s but only write at 77 MB/s. Here is mine, from client to server, and server to client: ~30 MB/s difference.
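A rough way to check whether one side's disk writes slower than it reads (a sketch; /tmp/ddtest is a placeholder path, point it at the device under test, and conv=fsync forces a flush so the reported rate isn't just the page cache):

```shell
# Write 64 MiB of zeros and let dd report the throughput, then clean up.
# /tmp/ddtest is a placeholder; use a path on the device you want to measure.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fsync
rm -f /tmp/ddtest
```

Running the same thing on both the client and the server quickly shows whether one machine's storage is the bottleneck rather than the network path.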
  9. ok, it did it again while creating an Ubuntu VM. Here are the syslog and diagnostics: tower-syslog-20200922-2124.zip tower-diagnostics-20200922-1426.zip
  10. I am running version 6.9.0-beta25. I changed my configuration of KVM to this: /VMstorage is a ZFS pool (RAID10). Ever since the change, KVM will hang and fill up the log files in a second. It's kind of erratic; it usually happens when editing or creating a new VM. tower-diagnostics-20200922-1131.zip
  11. googled the error and found that running:
          echo 0 > /sys/class/vtconsole/vtcon0/bind
          echo 0 > /sys/class/vtconsole/vtcon1/bind
          echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind
      works. NVLink seems to work too.
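Those unbinds don't survive a reboot, so they can be wrapped in a small guarded script (a sketch; the sysfs paths are the standard locations, and each write is guarded because not every node exists or is writable on every box):

```shell
#!/bin/sh
# Unbind the virtual consoles and the EFI framebuffer so VFIO can claim the
# primary GPU. Each write is guarded: nodes that are absent or not writable
# are simply skipped instead of erroring out.
for vt in /sys/class/vtconsole/vtcon0/bind /sys/class/vtconsole/vtcon1/bind; do
    if [ -w "$vt" ]; then
        echo 0 > "$vt"
    fi
done
if [ -w /sys/bus/platform/drivers/efi-framebuffer/unbind ]; then
    echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind
fi
```

On Unraid, a script like this could be called from /boot/config/go so it runs at every boot; that placement is an assumption, adjust to taste.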
  12. I have the VM up and able to boot with both GPUs showing. In the VM logs for the machine, I am getting hundreds of these:
          2020-09-22T06:21:28.221139Z qemu-system-x86_64: vfio_region_write(0000:01:00.0:region1+0x801b8, 0x0,8) failed: Device or resource busy
      That is my primary video card for the system and one of the two GPUs for the VM. Anything that attempts to use the GPUs freezes.
  13. do you know how long I looked for one of those? All I could find was expanders (and they still took up another PCIe slot).
  14. I understand that, although I didn't know you could get a cable to go from one to the other. That is pretty cool. Full disclosure: the whole system has 3 GPUs in 3 of the slots. In the 4th slot there is an 8-port HBA, so 8 of my drives are there. I am using 4 of the onboard SATA ports; the other 4 are disabled due to an NVMe drive in the back slot. I also have 4 other NVMe drives. This is a new mobo, and I was trying to hook up all my drives. I have 2 more that I figured could go USB, but that doesn't work. If I could just get those last drives connected...