Unraid OS version 6.9.0-beta30 available


    limetech

    Changes vs. 6.9.0-beta29 include:

     

    Added workaround for mpt3sas not recognizing devices with certain LSI chipsets. We created this file:

    /etc/modprobe.d/mpt3sas-workaround.conf

    which contains this line:

    options mpt3sas max_queue_depth=10000

    When the mpt3sas module is loaded at boot, that option will be specified.  If you added "mpt3sas.max_queue_depth=10000" to the syslinux kernel append line, you can now remove it.  Likewise, if you manually load the module via the 'go' file, you can remove that as well.  When/if the mpt3sas maintainer fixes the underlying issue in the driver, we'll get rid of this workaround.
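    For reference, here is a sketch of the three places the option can be set (the syslinux and 'go' file lines are illustrative examples of what you may have added yourself, not files we ship):

    # 1) Shipped with -beta30, no action needed:
    #      /etc/modprobe.d/mpt3sas-workaround.conf
    #      options mpt3sas max_queue_depth=10000

    # 2) A syslinux kernel append line like this can now be removed:
    #      append mpt3sas.max_queue_depth=10000 initrd=/bzroot

    # 3) A manual load in the 'go' file like this can also go:
    #      modprobe mpt3sas max_queue_depth=10000

    # After a reboot, verify the option took effect:
    cat /sys/module/mpt3sas/parameters/max_queue_depth   # should print 10000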

     

    Reverted libvirt to v6.5.0 in order to restore storage device passthrough to VMs.

     

    A handful of other bug fixes, including 'unblacklisting' the ast driver (the Aspeed GPU driver).  For those using that on-board graphics chip, primarily on Supermicro boards, this should improve the speed and resolution of the local console webGUI.
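    For anyone who wants to confirm the change on their own hardware, a quick sketch using standard Linux commands (not an official procedure):

    # Check that no modprobe.d file blacklists 'ast' any more:
    grep -rw ast /etc/modprobe.d/

    # Confirm the module loaded and claimed the Aspeed GPU:
    lsmod | grep '^ast'
    dmesg | grep -i aspeed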

     


     

    Version 6.9.0-beta30 2020-10-05 (vs -beta29)

    Base distro:

    • libvirt: version 6.5.0 [revert from version 6.6.0]
    • php: version 7.4.11 (CVE-2020-7070, CVE-2020-7069)

    Linux kernel:

    • version 5.8.13
    • ast: removed blacklisting from /etc/modprobe.d
    • mpt3sas: added /etc/modprobe.d/mpt3sas-workaround.conf to set "max_queue_depth=10000"

    Management:

    • at: suppress session open/close syslog messages
    • emhttpd: correct 'Erase' logic for unRAID array devices
    • emhttpd: wipefs encrypted device removed from multi-device pool
    • emhttpd: yet another btrfs 'free/used' calculation method
    • webGUI: Update statuscheck
    • webGUI: Fix dockerupdate.php warnings

     

    • Like 5
    • Thanks 5



    User Feedback

    Recommended Comments



    Just re-posting on beta30: is it possible to get some input on when we can expect embedded GPU drivers in releases (basically avoiding the linuxserver.io plugin)?

    • Like 1
    Link to comment

    Hi, thanks for the update. Any chance of a fix for VMs? I keep getting these issues in my syslog:

     

    vfio-pci 0000:09:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
    vfio-pci 0000:09:00.0: No more image in the PCI ROM

     

    vfio-pci 0000:09:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem

     

    and this in my VM log:

     

    Domain id=2 is tainted: high-privileges
    Domain id=2 is tainted: host-cpu

     

    I get no output on my graphics card whatsoever. Any ideas? Could we maybe get a fix in beta31, please...

    movbuster-diagnostics-20201006-1409.zip

    Edited by Dava2k7
    Link to comment
    2 minutes ago, Dava2k7 said:

    Hi, thanks for the update. Any chance of a fix for VMs? I keep getting these issues in my syslog:

     

    You should be including your diagnostics file: Tools >>>> Diagnostics

     

     

    Link to comment
    5 minutes ago, Frank1940 said:

    You should be including your diagnostics file: Tools >>>> Diagnostics

     

     

    I've included it in the previous post, sorry about that.

    Link to comment
    6 hours ago, mrbrowndk said:

    Just re-posting on beta30: is it possible to get some input on when we can expect embedded GPU drivers in releases (basically avoiding the linuxserver.io plugin)?

    Supposed to be 6.10 or whatever it's called.

     

    They said that in the podcast interview.

     

    Make your own builds for now, or download ich777's builds when he posts them.

     

     

    Instead of waiting for linuxserver.io builds.

     

    Edited by Dazog
    • Like 1
    • Thanks 2
    Link to comment
    7 minutes ago, Dazog said:

    Supposed to be 6.10 or whatever it's called.

     

    They said that in the podcast interview.

     

    Make your own builds for now, or download ich777's builds when he posts them.

     

     

    Instead of waiting for linuxserver.io builds.

     

    Any comment on whether write amplification on BTRFS cache drives is fixed yet? I am severely affected by this but don't want to lose my redundant cache by being forced to switch to XFS.

     

    If you instead supported XFS cache RAID, that might work as well, though.

    • Like 1
    Link to comment
    13 minutes ago, Naonak said:

    Any comment on whether write amplification on BTRFS cache drives is fixed yet? I am severely affected by this but don't want to lose my redundant cache by being forced to switch to XFS.

     

    If you instead supported XFS cache RAID, that might work as well, though.

    That was the main issue addressed in the last several betas.  Please read the release notes.

    Link to comment
    4 minutes ago, Naonak said:

    Any comment on whether write amplification on BTRFS cache drives is fixed yet?

    This has been fixed in beta25.

    Link to comment
    3 minutes ago, turnipisum said:

    Log is still filling up with these: kernel: tun: unexpected GSO

    There are ways to solve this, search the forums.

    Link to comment
    6 hours ago, mrbrowndk said:

    Just re-posting on beta30: is it possible to get some input on when we can expect embedded GPU drivers in releases (basically avoiding the linuxserver.io plugin)?

    The instant we do this, a lot of people using GPU passthrough to VMs may find their VMs don't start or run erratically until they go and mark them for stubbing on Tools/System Devices.  There are already a lot of changes in 6.9 vs. 6.8, including multiple pools (and changes to the System Devices page), so our strategy is to move the community to 6.9 first, give people a chance to use the new stubbing feature, and then produce a 6.10 where all the GPU drivers are included.

    • Like 7
    • Thanks 1
    Link to comment
    23 minutes ago, limetech said:

    There are ways to solve this, search the forums.

    I did! But anyway, I just looked at the VMs again and two had weirdly reverted back to virtio! So I changed them back to virtio-net and that now seems to have sorted it. 🙂
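    For anyone hitting the same log spam, here is one way to check from the console which NIC model a VM definition currently uses ("MyVM" is a placeholder name):

    # Dump the libvirt definition and look at the <interface> model line:
    virsh dumpxml "MyVM" | grep -A3 '<interface'
    # <model type='virtio'/>     -> the model that produced the GSO messages here
    # <model type='virtio-net'/> -> the model that sorted it for this poster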

    Link to comment
    4 hours ago, Dazog said:

    Make your own builds for now, or download ich777's builds when he posts them.

     

    @Naonak

    I will build them ASAP. ;)

     

    Prebuilt images are now finished and ready to download. ;)

    • Thanks 4
    Link to comment

    I just updated to 6.9.0-beta30 from 6.8.3 and none of my drives were found on an HP H220 LSI 9205-8i 9207 with P20 firmware. It took a really long time to boot as well.

     

    The card is detected but no drives.

    Edited by dansonamission
    • Like 1
    Link to comment
    5 minutes ago, dansonamission said:

    The card is detected but no drives.

    Please post the diagnostics.

    • Like 2
    Link to comment
    3 minutes ago, JorgeB said:

    Please post the diagnostics.

    I thought I had saved them, but it would seem not, sorry.

    The server has already been restored to 6.8.3; when I have more time on the weekend I will run the update again.

     

    I only wanted to try out the cache drive pool with the different formatting. With the Samsung drives I have, with more than one disk in the pool, it's really slow.

    Link to comment

    In the future, if you see this same problem, please type this command:

    cat /sys/module/mpt3sas/parameters/max_queue_depth

    It should output the value: 10000

    • Thanks 1
    Link to comment

    Hello 

    I am running into 3 problems I have not seen in the stable version:

     

    Oct  7 03:50:33 Unraid crond[1705]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Oct  7 03:52:33 Unraid kernel: cat: page allocation failure: order:5, mode:0x40dc0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), nodemask=(null),cpuset=/,mems_allowed=0
    Oct  7 03:52:33 Unraid kernel: CPU: 0 PID: 30268 Comm: cat Tainted: P           O      5.8.13-Unraid #1
    Oct  7 03:52:33 Unraid kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./B75 Pro3, BIOS P1.60 12/10/2012
    Oct  7 03:52:33 Unraid kernel: Call Trace:
    Oct  7 03:52:33 Unraid kernel: dump_stack+0x6b/0x83
    Oct  7 03:52:33 Unraid kernel: warn_alloc+0xe2/0x160
    Oct  7 03:52:33 Unraid kernel: ? _cond_resched+0x1b/0x1e
    Oct  7 03:52:33 Unraid kernel: ? __alloc_pages_direct_compact+0xff/0x126
    Oct  7 03:52:33 Unraid kernel: __alloc_pages_slowpath.constprop.0+0x753/0x780
    Oct  7 03:52:33 Unraid kernel: ? __alloc_pages_nodemask+0x1a9/0x1fc
    Oct  7 03:52:33 Unraid kernel: __alloc_pages_nodemask+0x1a1/0x1fc
    Oct  7 03:52:33 Unraid kernel: kmalloc_order+0x15/0x67
    Oct  7 03:52:33 Unraid kernel: proc_sys_call_handler+0xb2/0x132
    Oct  7 03:52:33 Unraid kernel: vfs_read+0xa8/0x103
    Oct  7 03:52:33 Unraid kernel: ksys_read+0x71/0xba
    Oct  7 03:52:33 Unraid kernel: do_syscall_64+0x7a/0x94
    Oct  7 03:52:33 Unraid kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9
    Oct  7 03:52:33 Unraid kernel: RIP: 0033:0x15226d55282e
    Oct  7 03:52:33 Unraid kernel: Code: c0 e9 f6 fe ff ff 50 48 8d 3d b6 5d 0a 00 e8 e9 fd 01 00 66 0f 1f 84 00 00 00 00 00 64 8b 04 25 18 00 00 00 85 c0 75 14 0f 05 <48> 3d 00 f0 ff ff 77 5a c3 66 0f 1f 84 00 00 00 00 00 48 83 ec 28
    Oct  7 03:52:33 Unraid kernel: RSP: 002b:00007fffc8b571b8 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
    Oct  7 03:52:33 Unraid kernel: RAX: ffffffffffffffda RBX: 0000000000020000 RCX: 000015226d55282e
    Oct  7 03:52:33 Unraid kernel: RDX: 0000000000020000 RSI: 000015226d42b000 RDI: 0000000000000003
    Oct  7 03:52:33 Unraid kernel: RBP: 000015226d42b000 R08: 000015226d42a010 R09: 0000000000000000
    Oct  7 03:52:33 Unraid kernel: R10: 000015226d65ea90 R11: 0000000000000246 R12: 0000000000402e80
    Oct  7 03:52:33 Unraid kernel: R13: 0000000000000003 R14: 0000000000020000 R15: 0000000000020000
    Oct  7 03:52:33 Unraid kernel: Mem-Info:
    Oct  7 03:52:33 Unraid kernel: active_anon:1276148 inactive_anon:877919 isolated_anon:0
    Oct  7 03:52:33 Unraid kernel: active_file:302697 inactive_file:4606484 isolated_file:0
    Oct  7 03:52:33 Unraid kernel: unevictable:1729 dirty:82 writeback:5
    Oct  7 03:52:33 Unraid kernel: slab_reclaimable:253868 slab_unreclaimable:288535
    Oct  7 03:52:33 Unraid kernel: mapped:130341 shmem:298117 pagetables:13068 bounce:0
    Oct  7 03:52:33 Unraid kernel: free:150406 free_pcp:0 free_cma:0
    Oct  7 03:52:33 Unraid kernel: Node 0 active_anon:5104592kB inactive_anon:3511676kB active_file:1210788kB inactive_file:18425936kB unevictable:6916kB isolated(anon):0kB isolated(file):0kB mapped:521364kB dirty:328kB writeback:20kB shmem:1192468kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 4222976kB writeback_tmp:0kB all_unreclaimable? no
    Oct  7 03:52:33 Unraid kernel: Node 0 DMA free:15896kB min:32kB low:44kB high:56kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15980kB managed:15896kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
    Oct  7 03:52:33 Unraid kernel: lowmem_reserve[]: 0 3053 31654 31654
    Oct  7 03:52:33 Unraid kernel: Node 0 DMA32 free:424868kB min:6516kB low:9640kB high:12764kB reserved_highatomic:0KB active_anon:551912kB inactive_anon:505524kB active_file:28248kB inactive_file:1381904kB unevictable:0kB writepending:4kB present:3355692kB managed:3272532kB mlocked:0kB kernel_stack:96kB pagetables:1008kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
    Oct  7 03:52:33 Unraid kernel: lowmem_reserve[]: 0 0 28600 28600
    Oct  7 03:52:33 Unraid kernel: Node 0 Normal free:160860kB min:61032kB low:90316kB high:119600kB reserved_highatomic:2048KB active_anon:4553276kB inactive_anon:3006284kB active_file:1182540kB inactive_file:17043816kB unevictable:6916kB writepending:344kB present:29857792kB managed:29287124kB mlocked:16kB kernel_stack:36568kB pagetables:51264kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
    Oct  7 03:52:33 Unraid kernel: lowmem_reserve[]: 0 0 0 0
    Oct  7 03:52:33 Unraid kernel: Node 0 DMA: 0*4kB 1*8kB (U) 1*16kB (U) 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15896kB
    Oct  7 03:52:33 Unraid kernel: Node 0 DMA32: 24345*4kB (UME) 28528*8kB (UME) 5509*16kB (UME) 362*32kB (UME) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 425332kB
    Oct  7 03:52:33 Unraid kernel: Node 0 Normal: 3839*4kB (UMEH) 4780*8kB (UMEH) 6632*16kB (UEH) 52*32kB (UEH) 5*64kB (H) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 161692kB
    Oct  7 03:52:33 Unraid kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
    Oct  7 03:52:33 Unraid kernel: 5218938 total pagecache pages
    Oct  7 03:52:33 Unraid kernel: 25186 pages in swap cache
    Oct  7 03:52:33 Unraid kernel: Swap cache stats: add 233014, delete 207828, find 21328/25054
    Oct  7 03:52:33 Unraid kernel: Free swap  = 7567100kB
    Oct  7 03:52:33 Unraid kernel: Total swap = 8388604kB
    Oct  7 03:52:33 Unraid kernel: 8307366 pages RAM
    Oct  7 03:52:33 Unraid kernel: 0 pages HighMem/MovableOnly
    Oct  7 03:52:33 Unraid kernel: 163478 pages reserved
    Oct  7 03:52:33 Unraid kernel: 0 pages cma reserved
    Oct  7 09:45:45 Unraid avahi-daemon[9922]: Record [Unraid._ssh._tcp.local#011IN#011SRV 0 0 22 Unraid.local ; ttl=120] not fitting in legacy unicast packet, dropping.
    Oct  7 09:47:15 Unraid avahi-daemon[9922]: Record [Unraid._ssh._tcp.local#011IN#011SRV 0 0 22 Unraid.local ; ttl=120] not fitting in legacy unicast packet, dropping.
    Oct  7 09:48:45 Unraid avahi-daemon[9922]: Record [Unraid._ssh._tcp.local#011IN#011SRV 0 0 22 Unraid.local ; ttl=120] not fitting in legacy unicast packet, dropping.
    Oct  7 10:12:09 Unraid smbd[29711]: [2020/10/07 10:12:09.797515,  0] ../../lib/util/fault.c:79(fault_report)
    Oct  7 10:12:09 Unraid smbd[29711]:   ===============================================================
    Oct  7 10:12:09 Unraid smbd[29711]: [2020/10/07 10:12:09.797569,  0] ../../lib/util/fault.c:80(fault_report)
    Oct  7 10:12:09 Unraid smbd[29711]:   INTERNAL ERROR: Signal 11 in pid 29711 (4.12.7)
    Oct  7 10:12:09 Unraid smbd[29711]:   If you are running a recent Samba version, and if you think this problem is not yet fixed in the latest versions, please consider reporting this bug, see https://wiki.samba.org/index.php/Bug_Reporting
    Oct  7 10:12:09 Unraid smbd[29711]: [2020/10/07 10:12:09.797595,  0] ../../lib/util/fault.c:86(fault_report)
    Oct  7 10:12:09 Unraid smbd[29711]:   ===============================================================
    Oct  7 10:12:09 Unraid smbd[29711]: [2020/10/07 10:12:09.797611,  0] ../../source3/lib/util.c:829(smb_panic_s3)
    Oct  7 10:12:09 Unraid smbd[29711]:   PANIC (pid 29711): internal error
    Oct  7 10:12:09 Unraid smbd[29711]: [2020/10/07 10:12:09.797697,  0] ../../lib/util/fault.c:222(log_stack_trace)
    Oct  7 10:12:09 Unraid smbd[29711]:   BACKTRACE:
    Oct  7 10:12:09 Unraid smbd[29711]:    #0 log_stack_trace + 0x39 [ip=0x15106b3f0f49] [sp=0x7ffdd6132a40]
    Oct  7 10:12:09 Unraid smbd[29711]:    #1 smb_panic_s3 + 0x23 [ip=0x15106afbf223] [sp=0x7ffdd6133380]
    Oct  7 10:12:09 Unraid smbd[29711]:    #2 smb_panic + 0x2f [ip=0x15106b3f115f] [sp=0x7ffdd61333a0]
    Oct  7 10:12:09 Unraid smbd[29711]:    #3 smb_panic + 0x27d [ip=0x15106b3f13ad] [sp=0x7ffdd61334b0]
    Oct  7 10:12:09 Unraid smbd[29711]:    #4 funlockfile + 0x50 [ip=0x15106a6d1690] [sp=0x7ffdd61334c0]
    Oct  7 10:12:09 Unraid smbd[29711]:    #5 create_file_default + 0x76a [ip=0x15106b2099ba] [sp=0x7ffdd6133a70]
    Oct  7 10:12:09 Unraid smbd[29711]:    #6 close_file + 0xcb [ip=0x15106b20a36b] [sp=0x7ffdd6133aa0]
    Oct  7 10:12:09 Unraid smbd[29711]:    #7 file_close_user + 0x35 [ip=0x15106b1afad5] [sp=0x7ffdd6133cc0]
    Oct  7 10:12:09 Unraid smbd[29711]:    #8 smbXsrv_session_logoff + 0x4d [ip=0x15106b25190d] [sp=0x7ffdd6133ce0]
    Oct  7 10:12:09 Unraid smbd[29711]:    #9 smbXsrv_session_logoff + 0x3e2 [ip=0x15106b251ca2] [sp=0x7ffdd6133d30]
    Oct  7 10:12:09 Unraid smbd[29711]:    #10 dbwrap_unmarshall + 0x186 [ip=0x151069efb6b6] [sp=0x7ffdd6133d50]
    Oct  7 10:12:09 Unraid smbd[29711]:    #11 dbwrap_unmarshall + 0x3bb [ip=0x151069efb8eb] [sp=0x7ffdd6133e10]
    Oct  7 10:12:09 Unraid smbd[29711]:    #12 dbwrap_traverse + 0x7 [ip=0x151069ef9f37] [sp=0x7ffdd6133e40]
    Oct  7 10:12:09 Unraid smbd[29711]:    #13 smbXsrv_session_logoff_all + 0x5c [ip=0x15106b251e5c] [sp=0x7ffdd6133e50]
    Oct  7 10:12:09 Unraid smbd[29711]:    #14 smbXsrv_open_cleanup + 0x4d2 [ip=0x15106b2573e2] [sp=0x7ffdd6133e90]
    Oct  7 10:12:09 Unraid smbd[29711]:    #15 smbd_exit_server_cleanly + 0x10 [ip=0x15106b257980] [sp=0x7ffdd6133ef0]
    Oct  7 10:12:09 Unraid smbd[29711]:    #16 exit_server_cleanly + 0x14 [ip=0x15106a862284] [sp=0x7ffdd6133f00]
    Oct  7 10:12:09 Unraid smbd[29711]:    #17 smbd_server_connection_terminate_ex + 0x111 [ip=0x15106b2337c1] [sp=0x7ffdd6133f10]
    Oct  7 10:12:09 Unraid smbd[29711]:    #18 smbd_smb2_request_dispatch_immediate + 0x569 [ip=0x15106b2365d9] [sp=0x7ffdd6133f40]
    Oct  7 10:12:09 Unraid smbd[29711]:    #19 tevent_common_invoke_fd_handler + 0x7d [ip=0x15106a82370d] [sp=0x7ffdd6133fb0]
    Oct  7 10:12:09 Unraid smbd[29711]:    #20 tevent_wakeup_recv + 0x1097 [ip=0x15106a829a77] [sp=0x7ffdd6133fe0]
    Oct  7 10:12:09 Unraid smbd[29711]:    #21 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x15106a827c07] [sp=0x7ffdd6134040]
    Oct  7 10:12:09 Unraid smbd[29711]:    #22 _tevent_loop_once + 0x94 [ip=0x15106a822df4] [sp=0x7ffdd6134060]
    Oct  7 10:12:09 Unraid smbd[29711]:    #23 tevent_common_loop_wait + 0x1b [ip=0x15106a82309b] [sp=0x7ffdd6134090]
    Oct  7 10:12:09 Unraid smbd[29711]:    #24 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x15106a827ba7] [sp=0x7ffdd61340b0]
    Oct  7 10:12:09 Unraid smbd[29711]:    #25 smbd_process + 0x7a7 [ip=0x15106b225bb7] [sp=0x7ffdd61340d0]
    Oct  7 10:12:09 Unraid smbd[29711]:    #26 _start + 0x2271 [ip=0x565531933241] [sp=0x7ffdd6134160]
    Oct  7 10:12:09 Unraid smbd[29711]:    #27 tevent_common_invoke_fd_handler + 0x7d [ip=0x15106a82370d] [sp=0x7ffdd6134230]
    Oct  7 10:12:09 Unraid smbd[29711]:    #28 tevent_wakeup_recv + 0x1097 [ip=0x15106a829a77] [sp=0x7ffdd6134260]
    Oct  7 10:12:09 Unraid smbd[29711]:    #29 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x15106a827c07] [sp=0x7ffdd61342c0]
    Oct  7 10:12:09 Unraid smbd[29711]:    #30 _tevent_loop_once + 0x94 [ip=0x15106a822df4] [sp=0x7ffdd61342e0]
    Oct  7 10:12:09 Unraid smbd[29711]:    #31 tevent_common_loop_wait + 0x1b [ip=0x15106a82309b] [sp=0x7ffdd6134310]
    Oct  7 10:12:09 Unraid smbd[29711]:    #32 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x15106a827ba7] [sp=0x7ffdd6134330]
    Oct  7 10:12:09 Unraid smbd[29711]:    #33 main + 0x1b2f [ip=0x565531930c1f] [sp=0x7ffdd6134350]
    Oct  7 10:12:09 Unraid smbd[29711]:    #34 __libc_start_main + 0xeb [ip=0x15106a4fce5b] [sp=0x7ffdd6134700]
    Oct  7 10:12:09 Unraid smbd[29711]:    #35 _start + 0x2a [ip=0x565531930ffa] [sp=0x7ffdd61347c0]
    Oct  7 10:12:09 Unraid smbd[29711]: [2020/10/07 10:12:09.815130,  0] ../../source3/lib/dumpcore.c:315(dump_core)
    Oct  7 10:12:09 Unraid smbd[29711]:   dumping core in /var/log/samba/cores/smbd
    Oct  7 10:12:09 Unraid smbd[29711]:

     

    unraid-diagnostics-20201007-1028 2.zip

    Link to comment

    Does this fix the VM WebVNC connection on Safari? It was fine in b25, then broke in b29. Script errors are in the b29 thread.

    Link to comment

    Anyone see if this is fixed?

    It has been broken since 6.9.0-beta25 at least. Not sure if it was working earlier.

    Thanks.

     

    • Like 1
    Link to comment
    5 hours ago, steini84 said:

    I am running into 3 problems I have not seen in the stable version:

    Please open a separate bug report for this.

     

    First thing to try: boot in 'safe mode' - possibly an incompatibility with one of your plugins.

    Link to comment
    27 minutes ago, mlapaglia said:

    I've got an issue with sshfs segfaults after I upgraded, bringing down Docker containers etc. Restarting brings the user shares back.

    After upgrading from -beta29?

     

    Please open a separate bug report for this.

    First thing to try: boot in 'safe mode' - possibly an incompatibility with one of your plugins.

    Link to comment
    2 hours ago, Denisson said:

    Does this fix the VM WebVNC connection on Safari? It was fine in b25, then broke in b29. Script errors are in the b29 thread.

    This is due to upgrading VNC - apparently something in the upgrade is incompatible with Safari (the new IE7).

    Please open a separate bug report for this.

    • Haha 1
    Link to comment
    2 minutes ago, limetech said:

    After upgrading from -beta29?

     

    Please open a separate bug report for this.

    First thing to try: boot in 'safe mode' - possibly an incompatibility with one of your plugins.

    Upgrading from the latest stable to 29; it's happening on 30 also. I'll try disabling plugins.

    Link to comment



    This is now closed for further comments
