unRAID Server Version 6.2.1 Available



Upgraded to 6.2.1 this morning and now my VM will not start.  Here is the error log from the VM page.

 

2016-10-07 14:05:01.388+0000: starting up libvirt version: 1.3.1, qemu version: 2.5.1, hostname: Tower
Domain id=1 is tainted: high-privileges
Domain id=1 is tainted: custom-argv
Domain id=1 is tainted: host-cpu
char device redirected to /dev/pts/0 (label charserial0)
2016-10-07T14:05:02.215662Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: error opening /dev/vfio/16: Operation not permitted
2016-10-07T14:05:02.215685Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: failed to get group 16
2016-10-07T14:05:02.215690Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device initialization failed
2016-10-07 14:05:02.481+0000: shutting down
2016-10-07 14:07:02.947+0000: starting up libvirt version: 1.3.1, qemu version: 2.5.1, hostname: Tower
Domain id=2 is tainted: high-privileges
Domain id=2 is tainted: custom-argv
Domain id=2 is tainted: host-cpu
char device redirected to /dev/pts/0 (label charserial0)
2016-10-07T14:07:03.762923Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: error opening /dev/vfio/16: Operation not permitted
2016-10-07T14:07:03.762947Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: failed to get group 16
2016-10-07T14:07:03.762952Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device initialization failed
2016-10-07 14:07:04.033+0000: shutting down

 

I believe it is erroring out on my video card, but I really have no idea what to change to make it work.  It worked fine in 6.2 and all earlier versions, and now it doesn't.  Odd.

 

I have reverted back to 6.2 and the issue remains.  I will create another post in the main help section now.

 

Are you sure you didn't just enable ACS override where there was none before? Maybe your IOMMU groups changed layout. I had this issue when enabling ACS override on a processor with no ACS feature, where it would try to separate the video cards from the PCIe root controller they're normally grouped with.
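If it helps anyone compare the layout before and after toggling the ACS override setting, the current IOMMU groups can be dumped from a console/SSH session with a generic sysfs loop like this (standard Linux, nothing unRAID-specific):

for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"   # identify each PCI device in the group
  done
done

Running it once before the change and once after makes it obvious whether the grouping actually moved around.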

Link to comment

This is the first update in a long time where I needed to intervene and force a reboot. I shut down my 5 or 6 dockers, then updated a couple of plugins, then went for the main OS update. Several hours later I could still access my shares from my living room, which isn't typical. I clicked the reboot button at the top right again and nothing happened, so I logged in over SSH, ran reboot, and that did the trick. At the time I didn't think anything of it and wasn't paying close attention.

 

Link to comment

I upgraded from 6.1.9 to 6.2.1 from the webGui and restarted the server. Now I can't reach any webGui, neither unRAID's nor my dockers'. But telnet and file sharing seem OK. The syslog is a little odd: there are no entries after fail2ban.

 

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.2  0.0   4372  1400 ?        Ss   12:35   0:08 init
root         2  0.0  0.0      0     0 ?        S    12:35   0:00 [kthreadd]
root         3  0.0  0.0      0     0 ?        S    12:35   0:00 [ksoftirqd/0]
root         5  0.0  0.0      0     0 ?        S<   12:35   0:00 [kworker/0:0H]
root         7  0.0  0.0      0     0 ?        S    12:35   0:02 [rcu_preempt]
root         8  0.0  0.0      0     0 ?        S    12:35   0:00 [rcu_sched]
root         9  0.0  0.0      0     0 ?        S    12:35   0:00 [rcu_bh]
root        10  0.0  0.0      0     0 ?        S    12:35   0:00 [migration/0]
root        11  0.0  0.0      0     0 ?        S    12:35   0:00 [kdevtmpfs]
root        12  0.0  0.0      0     0 ?        S<   12:35   0:00 [netns]
root        14  0.0  0.0      0     0 ?        S<   12:35   0:00 [perf]
root       255  0.0  0.0      0     0 ?        S<   12:35   0:00 [writeback]
root       257  0.0  0.0      0     0 ?        SN   12:35   0:00 [ksmd]
root       258  0.0  0.0      0     0 ?        SN   12:35   0:00 [khugepaged]
root       259  0.0  0.0      0     0 ?        S<   12:35   0:00 [crypto]
root       260  0.0  0.0      0     0 ?        S<   12:35   0:00 [kintegrityd]
root       261  0.0  0.0      0     0 ?        S<   12:35   0:00 [bioset]
root       263  0.0  0.0      0     0 ?        S<   12:35   0:00 [kblockd]
root       382  0.0  0.0      0     0 ?        S<   12:35   0:00 [ata_sff]
root       400  0.0  0.0      0     0 ?        S<   12:35   0:00 [devfreq_wq]
root       500  0.0  0.0      0     0 ?        S<   12:35   0:00 [rpciod]
root       529  0.0  0.0      0     0 ?        S    12:36   0:00 [kswapd0]
root       530  0.0  0.0      0     0 ?        S<   12:36   0:00 [vmstat]
root       604  0.0  0.0      0     0 ?        S    12:36   0:00 [fsnotify_mark]
root       621  0.0  0.0      0     0 ?        S<   12:36   0:00 [nfsiod]
root       624  0.0  0.0      0     0 ?        S<   12:36   0:00 [cifsiod]
root       636  0.0  0.0      0     0 ?        S<   12:36   0:00 [xfsalloc]
root       637  0.0  0.0      0     0 ?        S<   12:36   0:00 [xfs_mru_cache]
root       675  0.0  0.0      0     0 ?        S<   12:36   0:00 [acpi_thermal_p
root       730  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       735  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       737  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       743  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       746  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       750  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       756  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       759  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       763  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       765  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       766  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       767  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       768  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       769  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       770  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       771  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       772  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       773  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       774  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       775  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       776  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       777  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       778  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       779  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       822  0.0  0.0      0     0 ?        S<   12:36   0:00 [vfio-irqfd-cle
root       853  0.0  0.0      0     0 ?        S<   12:36   0:00 [kpsmoused]
root       923  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root       929  0.0  0.0      0     0 ?        S<   12:36   0:00 [deferwq]
root       934  0.0  0.0      0     0 ?        S    12:36   0:00 [scsi_eh_0]
root       935  0.0  0.0      0     0 ?        S<   12:36   0:00 [scsi_tmf_0]
root       936  0.0  0.0      0     0 ?        S    12:36   0:00 [usb-storage]
root       945  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root      1082  0.0  0.0  26752  2748 ?        Ss   12:36   0:00 /sbin/udevd --d
root      1097  0.0  0.0      0     0 ?        S    12:36   0:00 [scsi_eh_1]
root      1098  0.0  0.0      0     0 ?        S    12:36   0:00 [scsi_eh_2]
root      1099  0.0  0.0      0     0 ?        S<   12:36   0:00 [scsi_tmf_1]
root      1100  0.0  0.0      0     0 ?        S<   12:36   0:00 [scsi_tmf_2]
root      1101  0.0  0.0      0     0 ?        S    12:36   0:00 [scsi_eh_3]
root      1102  0.0  0.0      0     0 ?        S<   12:36   0:00 [scsi_tmf_3]
root      1103  0.0  0.0      0     0 ?        S    12:36   0:00 [scsi_eh_4]
root      1104  0.0  0.0      0     0 ?        S<   12:36   0:00 [scsi_tmf_4]
root      1105  0.0  0.0      0     0 ?        S<   12:36   0:00 [kvm-irqfd-clea
root      1106  0.0  0.0      0     0 ?        S    12:36   0:00 [scsi_eh_5]
root      1108  0.0  0.0      0     0 ?        S<   12:36   0:00 [scsi_tmf_5]
root      1109  0.0  0.0      0     0 ?        S    12:36   0:00 [scsi_eh_6]
root      1110  0.0  0.0      0     0 ?        S<   12:36   0:00 [scsi_tmf_6]
root      1111  0.0  0.0      0     0 ?        S    12:36   0:00 [scsi_eh_7]
root      1113  0.0  0.0      0     0 ?        S<   12:36   0:00 [scsi_tmf_7]
root      1115  0.0  0.0      0     0 ?        S    12:36   0:00 [scsi_eh_8]
root      1117  0.0  0.0      0     0 ?        S<   12:36   0:00 [scsi_tmf_8]
root      1119  0.0  0.0      0     0 ?        S    12:36   0:00 [scsi_eh_9]
root      1121  0.0  0.0      0     0 ?        S<   12:36   0:00 [scsi_tmf_9]
root      1133  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root      1134  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root      1135  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root      1136  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root      1137  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root      1143  0.0  0.0      0     0 ?        S<   12:36   0:00 [scsi_wq_1]
root      1144  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root      1145  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root      1146  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root      1148  0.0  0.0      0     0 ?        S<   12:36   0:01 [kworker/0:1H]
root      1149  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root      1150  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root      1260  0.0  0.0 233636  2316 ?        Ssl  12:36   0:00 /usr/sbin/rsysl
message+  1374  0.0  0.0  19612   236 ?        Ss   12:36   0:00 /usr/bin/dbus-d
bin       1382  0.0  0.0  13380   176 ?        Ss   12:36   0:00 /sbin/rpcbind -
rpc       1387  0.0  0.1  21416  5820 ?        Ss   12:36   0:00 /sbin/rpc.statd
root      1397  0.0  0.0   6484  1664 ?        Ss   12:36   0:00 /usr/sbin/inetd
root      1406  0.0  0.0  24504  2572 ?        Ss   12:36   0:00 /usr/sbin/sshd
root      1420  0.0  0.1  98180  4588 ?        Ss   12:36   0:00 /usr/sbin/ntpd
root      1427  0.0  0.0   4388   112 ?        Ss   12:36   0:00 /usr/sbin/acpid
root      1436  0.0  0.0   6492  1572 ?        Ss   12:36   0:00 /usr/sbin/crond
daemon    1438  0.0  0.0   6480   104 ?        Ss   12:36   0:00 /usr/sbin/atd -
root      1444  0.0  0.1 220176  5520 ?        Ss   12:36   0:00 /usr/sbin/nmbd
root      1446  0.0  0.3 297204 15444 ?        Ss   12:36   0:00 /usr/sbin/smbd
root      1448  0.0  0.1 290888  4556 ?        S    12:36   0:00 /usr/sbin/smbd
root      1449  0.0  0.0 290880  2324 ?        S    12:36   0:00 /usr/sbin/smbd
root      1453  0.0  0.1 271668  6852 ?        Ss   12:36   0:00 /usr/sbin/winbi
root      1456  0.0  0.3 273356 12224 ?        S    12:36   0:00 /usr/sbin/winbi
root      1465  0.1  0.0   9552  2376 ?        S    12:36   0:06 /bin/bash /usr/
root      3334  0.0  0.0  19940  3472 ?        S    12:36   0:00 /usr/local/sbin
root      3350  0.0  0.0      0     0 ?        S<   12:36   0:00 [md]
root      3351  0.0  0.0      0     0 ?        S    12:36   0:00 [mdrecoveryd]
root      3359  0.0  0.0      0     0 ?        S    12:36   0:00 [spinupd]
root      3360  0.0  0.0      0     0 ?        S    12:36   0:00 [spinupd]
root      3361  0.0  0.0      0     0 ?        S    12:36   0:00 [spinupd]
root      3362  0.0  0.0      0     0 ?        S    12:36   0:00 [spinupd]
root      3363  0.0  0.0      0     0 ?        S    12:36   0:00 [spinupd]
root      3364  0.0  0.0      0     0 ?        S    12:36   0:00 [spinupd]
root      3388  0.1  0.0 113700   324 ?        Ssl  12:36   0:05 /sbin/apcupsd
avahi     3408  0.0  0.0  34368  2412 ?        S    12:36   0:00 avahi-daemon: r
avahi     3409  0.0  0.0  34236   256 ?        S    12:36   0:00 avahi-daemon: c
root      3417  0.0  0.0  12748   104 ?        S    12:36   0:00 /usr/sbin/avahi
root      3461  0.0  0.0      0     0 ?        S    12:36   0:00 [unraidd]
root      3462  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root      3463  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root      3464  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root      3465  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root      3466  0.0  0.0      0     0 ?        S<   12:36   0:00 [bioset]
root      3478  0.0  0.0      0     0 ?        S<   12:36   0:00 [reiserfs/md1]
root      3488  0.0  0.0      0     0 ?        S<   12:36   0:00 [reiserfs/md2]
root      3502  0.0  0.0      0     0 ?        S<   12:36   0:00 [reiserfs/md3]
root      3512  0.0  0.0      0     0 ?        S<   12:36   0:00 [xfs-buf/md4]
root      3513  0.0  0.0      0     0 ?        S<   12:36   0:00 [xfs-data/md4]
root      3514  0.0  0.0      0     0 ?        S<   12:36   0:00 [xfs-conv/md4]
root      3515  0.0  0.0      0     0 ?        S<   12:36   0:00 [xfs-cil/md4]
root      3516  0.0  0.0      0     0 ?        S<   12:36   0:00 [xfs-reclaim/md
root      3517  0.0  0.0      0     0 ?        S<   12:36   0:00 [xfs-log/md4]
root      3518  0.0  0.0      0     0 ?        S<   12:36   0:00 [xfs-eofblocks/
root      3519  0.0  0.0      0     0 ?        S    12:36   0:00 [xfsaild/md4]
root      3530  0.0  0.0      0     0 ?        S<   12:36   0:00 [xfs-buf/md6]
root      3531  0.0  0.0      0     0 ?        S<   12:36   0:00 [xfs-data/md6]
root      3532  0.0  0.0      0     0 ?        S<   12:36   0:00 [xfs-conv/md6]
root      3533  0.0  0.0      0     0 ?        S<   12:36   0:00 [xfs-cil/md6]
root      3534  0.0  0.0      0     0 ?        S<   12:36   0:00 [xfs-reclaim/md
root      3535  0.0  0.0      0     0 ?        S<   12:36   0:00 [xfs-log/md6]
root      3536  0.0  0.0      0     0 ?        S<   12:36   0:00 [xfs-eofblocks/
root      3537  0.0  0.0      0     0 ?        S    12:36   0:00 [xfsaild/md6]
root      3547  0.0  0.0      0     0 ?        S<   12:36   0:00 [reiserfs/sdh1]
root      3563  0.0  0.0  87740   468 ?        Ssl  12:36   0:00 /usr/local/sbin
root      3573  0.0  0.0 571488  3872 ?        Ssl  12:36   0:00 /usr/local/sbin
root      3624  2.0  0.0      0     0 ?        D<   12:36   1:20 [loop0]
root      3626  0.0  0.0      0     0 ?        S<   12:36   0:00 [btrfs-worker]
root      3627  0.0  0.0      0     0 ?        S<   12:36   0:00 [kworker/u17:0]
root      3628  0.0  0.0      0     0 ?        S<   12:36   0:00 [btrfs-worker-h
root      3629  0.0  0.0      0     0 ?        S<   12:36   0:00 [btrfs-delalloc
root      3630  0.0  0.0      0     0 ?        S<   12:36   0:00 [btrfs-flush_de
root      3631  0.0  0.0      0     0 ?        S<   12:36   0:00 [btrfs-cache]
root      3632  0.0  0.0      0     0 ?        S<   12:36   0:00 [btrfs-submit]
root      3633  0.0  0.0      0     0 ?        S<   12:36   0:00 [btrfs-fixup]
root      3634  0.0  0.0      0     0 ?        S<   12:36   0:00 [btrfs-endio]
root      3635  0.0  0.0      0     0 ?        S<   12:36   0:00 [btrfs-endio-me
root      3636  0.0  0.0      0     0 ?        S<   12:36   0:00 [btrfs-endio-me
root      3637  0.0  0.0      0     0 ?        S<   12:36   0:00 [btrfs-endio-ra
root      3638  0.0  0.0      0     0 ?        S<   12:36   0:00 [btrfs-endio-re
root      3639  0.0  0.0      0     0 ?        S<   12:36   0:00 [btrfs-rmw]
root      3640  0.0  0.0      0     0 ?        S<   12:36   0:00 [btrfs-endio-wr
root      3641  0.0  0.0      0     0 ?        S<   12:36   0:00 [btrfs-freespac
root      3642  0.0  0.0      0     0 ?        S<   12:36   0:00 [btrfs-delayed-
root      3643  0.0  0.0      0     0 ?        S<   12:36   0:00 [btrfs-readahea
root      3644  0.0  0.0      0     0 ?        S<   12:36   0:00 [btrfs-qgroup-r
root      3645  0.0  0.0      0     0 ?        S<   12:36   0:00 [btrfs-extent-r
root      3650  0.0  0.0      0     0 ?        S    12:36   0:00 [btrfs-cleaner]
root      3651  0.0  0.0      0     0 ?        S    12:36   0:00 [btrfs-transact
root      3655  0.0  0.0   9396  2392 ?        S    12:36   0:00 sh -c /etc/rc.d
root      3656  0.0  0.0   9524  2492 ?        S    12:36   0:00 /bin/sh /etc/rc
root      3657  0.0  0.0   6484  1664 ?        S    12:36   0:00 logger
root      3662  5.5  2.0 269872 78520 ?        Sl   12:36   3:41 /usr/bin/docker
root      3793  0.0  0.2 200848  9724 ?        Sl   12:36   0:00 /usr/bin/python
root      3794  0.0  0.0   9524  1856 ?        S    12:36   0:00 /bin/sh /etc/rc
root      3795  0.0  0.7 195104 27664 ?        Sl   12:36   0:00 docker info
root      3806  0.0  0.0   6500  1592 tty1     Ss+  12:36   0:00 /sbin/agetty --
root      3807  0.0  0.0   6500  1588 tty2     Ss+  12:36   0:00 /sbin/agetty 38
root      3808  0.0  0.0   6500  1608 tty3     Ss+  12:36   0:00 /sbin/agetty 38
root      3809  0.0  0.0   6500  1516 tty4     Ss+  12:36   0:00 /sbin/agetty 38
root      3810  0.0  0.0   6500  1656 tty5     Ss+  12:36   0:00 /sbin/agetty 38
root      3811  0.0  0.0   6500  1560 tty6     Ss+  12:36   0:00 /sbin/agetty 38
root      3924  0.0  0.0   9396   868 ?        S    12:37   0:00 /bin/sh -c /usr
root      3925  0.0  0.5 167608 20152 ?        S    12:37   0:00 /usr/bin/php -q
root      3927  0.0  0.0  25068  3376 ?        S    12:37   0:00 wget -qO /dev/n
root      3947  0.4  0.0      0     0 ?        S<   12:37   0:16 [kworker/u17:1]
root      7063  0.0  0.0      0     0 ?        S    12:49   0:01 [kworker/u16:2]
root     14736  0.0  0.0      0     0 ?        S    13:21   0:00 [kworker/0:2]
root     15086  0.0  0.4 432804 17776 ?        S    13:23   0:01 /usr/sbin/smbd
root     15088  0.0  0.1 271668  5948 ?        S    13:23   0:00 /usr/sbin/winbi
root     15987  0.0  0.1  27064  4372 ?        Ss   13:26   0:00 sshd: root@pts/
root     16063  0.0  0.0  13464  3380 pts/0    Ss   13:26   0:00 -bash
root     17650  0.0  0.0      0     0 ?        S    13:33   0:00 [kworker/u16:1]
root     18687  0.0  0.0      0     0 ?        S    13:37   0:00 [kworker/0:1]
root     19154  0.0  0.0      0     0 ?        S    13:39   0:00 [kworker/u16:0]
root     19972  0.0  0.0   4368   668 ?        S    13:42   0:00 sleep 0.9968907
root     19973  0.0  0.0  11824  2072 pts/0    R+   13:42   0:00 ps aux

syslog.zip
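In case it helps anyone else who loses the webGui but still has telnet/SSH: in 6.2 the webGui is served by emhttp, so a quick sanity check from the console would be something like this (from memory, so treat it as a rough sketch rather than gospel):

ps aux | grep -i [e]mhttp    # is the webGui process still running?
# if it isn't, it can usually be relaunched by hand the same way the go script starts it:
# /usr/local/sbin/emhttp &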

Link to comment
I know they're supposed to spin down on their own, but click on Spin Down and then check the power -- just in case there's an issue with the timed spindowns.
I clicked it, but it makes no difference.

After a while it now sits between 32 and 48 watts (~40 watts). I also disabled my Dockers (Apache and Emby), but that makes no difference.

Maybe I should just reboot?

Link to comment

Umm.... The 6.3.0-rc1 actually works!

 

So what the heck is wrong with 6.2.1 that my server doesn't like it?

 

I think what you meant to say was "So what the heck is wrong with my server that it can't install 6.2.1 like almost everyone else?"  ;)

 

There was a discussion in the 6.2 announcement thread about 2 others with reboot loops (search for "reboot loop" in that announcement thread).  Yours may not be the same issue, but it does seem suspicious.  There appears to be a problem with a certain motherboard family.  The only advice currently (I believe) is to stick close to BIOS defaults and check for the latest BIOS.  And check again in a month or 2.

 

Anyone experiencing a reboot loop, please mention your hardware, especially the motherboard and BIOS version.

Link to comment

I upgraded from 6.1.9 to 6.2.1 from the webGui and restarted the server. Now I can't reach any webGui, neither unRAID's nor my dockers'. But telnet and file sharing seem OK. The syslog is a little odd: there are no entries after fail2ban.

 

Instead of the syslog, we now prefer the diagnostics; it tells us more (Need help? Read me first!).  In particular, I'm wondering what you may be installing from the /boot/plugins folder.

 

Syslog looks mostly OK.  Anyone upgrading from 6.1 to any 6.2 should read the first 2 posts of the 6.2 announcement thread, in particular the networking section and Docker section of the Additional Upgrade Advice.  I believe you are going to see significant performance issues, due to your tunables and the new tunable, so take note of that section too.
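Since the webGui is down for you, note that the diagnostics can also be generated from the telnet/SSH session (as far as I recall, this command has been there throughout 6.x):

diagnostics
# the zip should end up in /boot/logs on the flash drive,
# ready to copy off and attach to your post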

Link to comment

I upgraded from 6.1.9 to 6.2.1 from the webGui and restarted the server. Now I can't reach any webGui, neither unRAID's nor my dockers'. But telnet and file sharing seem OK. The syslog is a little odd: there are no entries after fail2ban.

 

Instead of the syslog, we now prefer the diagnostics; it tells us more (Need help? Read me first!).  In particular, I'm wondering what you may be installing from the /boot/plugins folder.

 

Syslog looks mostly OK.  Anyone upgrading from 6.1 to any 6.2 should read the first 2 posts of the 6.2 announcement thread, in particular the networking section and Docker section of the Additional Upgrade Advice.  I believe you are going to see significant performance issues, due to your tunables and the new tunable, so take note of that section too.

 

I removed the powerdown plugin and manually unmounted the drives (couldn't unmount the cache), stopped the array, and restarted; that solved my webGui problem, but I guess my docker.img was corrupted. I recreated it, and now everything seems OK. Thanks for the tunables hint, I will keep that in mind if there are any performance issues.

 

BTW, there is nothing in /boot/plugins, but I am installing CA, Cache Dirs, Dynamix and Unassigned Devices from /boot/config/plugins.

Link to comment

Upgraded to 6.2.1 this morning and now my VM will not start.  Here is the error log from the VM page.

 

2016-10-07 14:05:01.388+0000: starting up libvirt version: 1.3.1, qemu version: 2.5.1, hostname: Tower
Domain id=1 is tainted: high-privileges
Domain id=1 is tainted: custom-argv
Domain id=1 is tainted: host-cpu
char device redirected to /dev/pts/0 (label charserial0)
2016-10-07T14:05:02.215662Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: error opening /dev/vfio/16: Operation not permitted
2016-10-07T14:05:02.215685Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: failed to get group 16
2016-10-07T14:05:02.215690Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device initialization failed
2016-10-07 14:05:02.481+0000: shutting down
2016-10-07 14:07:02.947+0000: starting up libvirt version: 1.3.1, qemu version: 2.5.1, hostname: Tower
Domain id=2 is tainted: high-privileges
Domain id=2 is tainted: custom-argv
Domain id=2 is tainted: host-cpu
char device redirected to /dev/pts/0 (label charserial0)
2016-10-07T14:07:03.762923Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: error opening /dev/vfio/16: Operation not permitted
2016-10-07T14:07:03.762947Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: failed to get group 16
2016-10-07T14:07:03.762952Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device initialization failed
2016-10-07 14:07:04.033+0000: shutting down

 

I believe it is erroring out on my video card, but I really have no idea what to change to make it work.  It worked fine in 6.2 and all earlier versions, and now it doesn't.  Odd.

 

I have reverted back to 6.2 and the issue remains.  I will create another post in the main help section now.

 

Are you sure you didn't just enable ACS override where there was none before? Maybe your IOMMU groups changed layout. I had this issue when enabling ACS override on a processor with no ACS feature, where it would try to separate the video cards from the PCIe root controller they're normally grouped with.

 

No, I have had ACS enabled the entire time.  Also, there are no issues with IOMMU group 16; it is just the NVIDIA video and audio devices in it.

Link to comment

Upgraded to 6.2.1 this morning and now my VM will not start.  Here is the error log from the VM page.

 

2016-10-07 14:05:01.388+0000: starting up libvirt version: 1.3.1, qemu version: 2.5.1, hostname: Tower
Domain id=1 is tainted: high-privileges
Domain id=1 is tainted: custom-argv
Domain id=1 is tainted: host-cpu
char device redirected to /dev/pts/0 (label charserial0)
2016-10-07T14:05:02.215662Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: error opening /dev/vfio/16: Operation not permitted
2016-10-07T14:05:02.215685Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: failed to get group 16
2016-10-07T14:05:02.215690Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device initialization failed
2016-10-07 14:05:02.481+0000: shutting down
2016-10-07 14:07:02.947+0000: starting up libvirt version: 1.3.1, qemu version: 2.5.1, hostname: Tower
Domain id=2 is tainted: high-privileges
Domain id=2 is tainted: custom-argv
Domain id=2 is tainted: host-cpu
char device redirected to /dev/pts/0 (label charserial0)
2016-10-07T14:07:03.762923Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: error opening /dev/vfio/16: Operation not permitted
2016-10-07T14:07:03.762947Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: failed to get group 16
2016-10-07T14:07:03.762952Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device initialization failed
2016-10-07 14:07:04.033+0000: shutting down

 

I believe it is erroring out on my video card, but I really have no idea what to change to make it work.  It worked fine in 6.2 and all earlier versions, and now it doesn't.  Odd.

 

I have reverted back to 6.2 and the issue remains.  I will create another post in the main help section now.

 

Are you sure you didn't just enable ACS override where there was none before? Maybe your IOMMU groups changed layout. I had this issue when enabling ACS override on a processor with no ACS feature, where it would try to separate the video cards from the PCIe root controller they're normally grouped with.

 

No, I have had ACS enabled the entire time.  Also, there are no issues with IOMMU group 16; it is just the NVIDIA video and audio devices in it.

 

I see from one of your posts that you have a Core i7 4790K. Unfortunately, the only non-Xeon processors blessed with ACS support are the Extreme models, which are not even socket compatible with the regular Core processors in the same series.

 

You will need to get either a Xeon or a Core Extreme processor and motherboard to really play with splitting up IOMMU groupings. And if you intend to go with the easiest configuration of booting to a GPU other than the one(s) you wish to bind to VMs, you'll probably want one of the Xeon E3 models with integrated graphics, and a motherboard with connectivity for that.

 

With what you've got, you'll have to disable that ACS override option and live with the groups you get by default. As far as I can guess, it's quite possible that attempting to use ACS override mode will result in VMs that can't boot unless you manually bind all of the devices that are ordinarily in the same IOMMU group as the video card. In the case of my Core i7 3770 system, that would include the PCIe root controller and whatever video card is also plugged into the middle slot.

 

You can try to bind just the minimum set of devices you want to one VM, but you won't be able to bind any of the remaining devices in that group to another VM. Sorry.
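If you want to double-check what is actually sharing group 16 with the card (16 taken from the error log above; adjust for your own group number), something like this from the console will list every member of the group and which kernel driver currently owns it:

for d in /sys/kernel/iommu_groups/16/devices/*; do
  lspci -nnks "${d##*/}"   # -k shows the kernel driver bound to each device in the group
done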

Link to comment

Updated from 6.1.9 to 6.2.1 with very little trouble.  I have my Docker image located on an Unassigned Devices drive, and took some of the precautions noted in the 6.2 thread.  I was prepared to switch to using a cache drive, but it just worked.  I did have to rebuild my docker.img because of the 'layers from manifest' issue, but that was easy.  I don't know if I was lucky, or if the issue has been resolved, but it worked for me.  It's a little strange: limetech said in the comments that it should work, and the installation notes say that it almost certainly won't, yet it does.  Pretty confusing.

 

Doing a parity check now and will drop in a second parity drive once that completes.  Exciting!

Link to comment

Just tried updating from 6.2 to 6.2.1 and got the following error:

 

plugin: updating: unRAIDServer.plg
plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.2.1-x86_64.zip ... done
plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.2.1-x86_64.md5 ... done
unzip error 0
plugin: run failed: /bin/bash retval: 1

 

I know how to do it manually, just wanted to figure out why it's having an issue.

Link to comment

Just tried updating from 6.2 to 6.2.1 and got the following error:

 

plugin: updating: unRAIDServer.plg
plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.2.1-x86_64.zip ... done
plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.2.1-x86_64.md5 ... done
unzip error 0
plugin: run failed: /bin/bash retval: 1

 

I know how to do it manually, just wanted to figure out why it's having an issue.

 

The zip file is downloaded into RAM (/tmp). Perhaps you ran out of free memory?
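A quick way to check whether that's what happened, from a console session, is just the standard tools (no unRAID-specific commands needed):

free -m                 # how much RAM is actually free/available
df -h /tmp /boot        # /tmp lives in RAM as noted above; /boot is the flash drive the update is written to

If either is nearly full, that would be a plausible reason for the update script to bail out.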

 

Link to comment

Just tried updating from 6.2 to 6.2.1 and got the following error:

 

plugin: updating: unRAIDServer.plg
plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.2.1-x86_64.zip ... done
plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.2.1-x86_64.md5 ... done
unzip error 0
plugin: run failed: /bin/bash retval: 1

 

I know how to do it manually, just wanted to figure out why it's having an issue.

 

The zip file is downloaded into RAM (/tmp). Perhaps you ran out of free memory?

 

Maybe. I was going to reboot when I got the chance anyway. I also had a plugin that wouldn't install; it wasn't downloading the zip. I'll give that a try. Thanks.

Link to comment

Am I right in thinking that the Beep on powerdown is now gone with this release?

(I quite liked that :))

It should beep in 6.2.x if initiated from webGui, but won't if initiated from command line or power button or apcupsd.

 

Wait, what, there's a beep!?  :o

 

Also, two machines updated to 6.2.1 with no issues.

Link to comment

Am I right in thinking that the Beep on powerdown is now gone with this release?

(I quite liked that :))

It should beep in 6.2.x if initiated from webGui, but won't if initiated from command line or power button or apcupsd.

 

Wait, what, there's a beep!?  :o

 

Also, two machines updated to 6.2.1 with no issues.

If your m/b has an on-board speaker.  Sadly many no longer have these.

Link to comment

Am I right in thinking that the Beep on powerdown is now gone with this release?

(I quite liked that :))

It should beep in 6.2.x if initiated from webGui, but won't if initiated from command line or power button or apcupsd.

 

Ah OK, that explains it. Yes, I can confirm: it beeps from the web UI, but doesn't when I hit the power button directly on the server.

Thanks.

Link to comment

Upgraded from 5.0.4 to 6.2.1 without any trouble. Just wanted to thank limetech & everyone on this board for their hard work on this upgrade and the great documentation. Between the upgrades to unRAID itself, the new webGui, and the community plugins/dockers, it's like getting a new & improved NAS for free.

 

My only quibble is that parity checks went from 17 to 25 hours, but I expect that adjusting the tunables will take care of that. I'll just live with it until the script for that is ready for 6.2.
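For anyone wondering where those tunables live: they're under Settings -> Disk Settings in the webGui and end up in disk.cfg on the flash. Roughly like this (an excerpt from memory with purely illustrative values, not recommendations):

# /boot/config/disk.cfg (excerpt; values are illustrative only)
md_num_stripes="1280"
md_sync_window="384"
md_sync_thresh="192"   # I believe this is the new tunable added in 6.2, usually set relative to md_sync_window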

 

Thanks again!

Link to comment

Am I right in thinking that the Beep on powerdown is now gone with this release?

(I quite liked that :))

It should beep in 6.2.x if initiated from webGui, but won't if initiated from command line or power button or apcupsd.

 

Wait, what, there's a beep!?  :o

 

Also, two machines updated to 6.2.1 with no issues.

If your m/b has an on-board speaker.  Sadly many no longer have these.

 

I recently stumbled across a little motherboard -> speaker dongle that I think I now have a use for.. :)

Link to comment
