• Unraid OS version 6.9.0-beta29 available


    limetech

    Back in the saddle ... Sorry for the long delay in publishing this release.  Aside from including some delicate coding, this release was delayed due to several team members, chiefly myself, having to deal with various non-work-related challenges which greatly slowed the pace of development.  That said, there is quite a bit in this release, LimeTech is growing and we have many exciting features in the pipe - more on that in the weeks to come.  Thanks to everyone for their help and patience during this time.

    Cheers,

    -Tom

     

    IMPORTANT: This is Beta software.  We recommend running on test servers only!

     

    KNOWN ISSUE: with this release we have moved to the latest Linux 5.8 stable kernel.  However, we have discovered that a regression has been introduced in the mpt3sas driver used by many LSI chipsets, e.g., the LSI 9201-16e.  An affected controller typically looks like this on the System Devices page:

    Serial Attached SCSI controller: Broadcom / LSI SAS2116 PCI-Express Fusion-MPT SAS-2 [Meteor] (rev 02)

    The problem is that devices are no longer recognized.  There are already bug reports pertaining to this issue:

    https://bugzilla.kernel.org/show_bug.cgi?id=209177

    https://bugzilla.redhat.com/show_bug.cgi?id=1878332

     

    We have reached out to the maintainer to see if a fix can be expedited; however, we feel we can neither revert to the 5.7 kernel nor hold the release due to this issue.  We are monitoring and will publish a release with a fix as soon as possible.
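    If you are unsure whether your controller is affected, a quick check from the console is to look for the driver's probe messages and then see whether any drives were attached; a minimal sketch (device names will vary):

    dmesg | grep -i mpt3sas    # driver probe / discovery messages
    lsblk                      # an affected controller shows no attached drives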

     

    ANOTHER known issue: we have added additional btrfs balance options:

    • raid1c3
    • raid1c4
    • and modified the raid6 balance operation to set metadata to raid1c3 (previously raid1).

     

    However, we have noticed that applying one of these balance filters to a completely empty volume leaves some data extents with the previous profile.  The solution is simply to run the same balance again.  We consider this to be a btrfs bug, and if no fix is forthcoming we'll make the code run the second balance by default.  For now, it's left as-is.
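    For reference, the equivalent balance can also be run from the command line with btrfs-progs.  A minimal sketch, assuming a pool mounted at /mnt/cache (the exact profiles the webgui applies may differ; per the note above, run the balance a second time if any extents keep the old profile):

    btrfs balance start -dconvert=raid1c3 -mconvert=raid1c3 /mnt/cache
    btrfs filesystem df /mnt/cache   # verify the resulting data/metadata profiles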

     

    THE PRIMARY FOCUS of this release is to put tools in place to help users migrate data off SSD-based pools so that those devices may be re-partitioned if necessary, and then migrate the data back.

     

    What are we talking about?  For several years now, storage devices managed by Unraid OS have been formatted with an "Unraid Standard Partition Layout".  This layout has partition 1 starting at offset 32KiB from the start of the device and extending to the end of the device.  (For devices with 512-byte sectors, partition 1 starts at sector 64; for 4096-byte-sector devices, partition 1 starts at sector 8.)  This layout achieves maximum storage efficiency and ensures partition 1 starts on a 4096-byte boundary.
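    You can confirm which layout a device has with fdisk; an illustrative example, assuming a 512-byte-sector device at /dev/sdX:

    fdisk -l -u /dev/sdX
    # partition 1 Start = 64   -> 32KiB offset (standard layout)
    # partition 1 Start = 2048 -> 1MiB offset (new SSD layout)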

     

    Through user reports and extensive testing, however, we have noted that many modern SSDs, in particular Samsung EVO drives, do not perform most efficiently using this partition layout: the devices seem to write far more than one would expect, and with SSDs one wants to minimize writes as much as possible.

     

    The solution to the "excessive SSD writes" issue is to position partition 1 at offset 1MiB from the start of the device instead of at 32KiB.  This will both increase performance and decrease writes on affected devices.  Do you absolutely need to re-partition your SSDs?  Probably not; it depends on which devices you have.  Click on a device from Main, scroll down to Attributes, and take a look at "Data units written".  If this is increasing very rapidly, then you would probably benefit from re-partitioning.
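    The same counter can also be read from the command line with smartctl; a sketch, assuming an NVMe device at /dev/nvme0 (SATA SSDs expose a comparable attribute, e.g. Total_LBAs_Written):

    smartctl -A /dev/nvme0 | grep -i 'data units written'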

     

    Note: if you have already (re)Formatted using a previous 6.9-beta release, the proper partition layout for SSDs smaller than 2TiB will appear like this on the Device Information page:

    Partition format:   MBR: 1MiB-aligned

    For SSDs larger than 2TiB:

    Partition format:   GPT: 1MiB-aligned

     

    Here's what's in this release to help facilitate re-partitioning of SSD devices:

     

    An Erase button which appears on the Device Information page.

     

    The Erase button may be used to erase (delete) content from a volume.  A volume is either the content of an unRAID array data disk, or the content of a pool.  In the case of an unRAID disk, only that device is erased; in the case of a multiple-device pool, ALL devices of the pool are erased.

    The extent of Erase varies depending on whether the array is Stopped, or Started in Maintenance mode (if started in Normal mode, all volume Erase buttons are disabled).

    Started/Maintenance mode: in this case the LUKS header (if any) and any file system within partition 1 are erased.  The MBR (master boot record) is not erased.

    Stopped: in this case, unRAID array disk volumes and pool volumes are treated a little differently:

    • unRAID array disk volumes - if Parity and/or Parity2 is valid, then the operation proceeds exactly as above, that is, only the content of partition 1 is erased and the MBR (master boot record) is left as-is; but if there is no valid parity, then the MBR is also erased.
    • Pool volumes - partition 1 of every device within the pool is erased, and then the MBR is also erased.


    The purpose of erasing the MBR is to permit re-partitioning of the device if required.  Upon format, Unraid OS will position partition 1 at 32KiB for HDD devices and at 1MiB for SSD devices.

     

    Note that Erase does not overwrite the storage content of a device; it simply clears the LUKS header if present (which effectively makes the device unreadable), along with the file system and MBR signatures.  A future Unraid OS release may include the option of overwriting the data.
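    Under the hood the Erase function uses wipefs (see the change log below).  For illustration only, a rough command-line equivalent, assuming partition 1 of a hypothetical device /dev/sdX (the webgui button is the supported path; running this by hand on an array member will invalidate parity):

    wipefs --all /dev/sdX1   # clear file system / LUKS signatures from partition 1
    wipefs --all /dev/sdX    # additionally clear the MBR/GPT signatures from the device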

     

    Additional "Mover" capabilities.

     

    Since SSD pools are commonly used to store vdisk images, shfs/mover is now aware of:

    • sparse files - when a sparse file is moved from one volume to another, its sparseness is preserved
    • NoCOW attribute - when a file or directory in a btrfs volume has the NoCOW attribute set, the attribute is preserved when the file or directory is moved to another btrfs volume.  (Both properties can be inspected from the shell, as sketched below.)
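    A minimal sketch, with illustrative paths:

    du -h --apparent-size /mnt/cache/domains/vm1/vdisk1.img   # apparent file size
    du -h /mnt/cache/domains/vm1/vdisk1.img                   # blocks actually allocated (smaller if sparse)
    lsattr -d /mnt/cache/domains    # NoCOW shows as 'C'; it is set with 'chattr +C'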

     

    Note that btrfs subvolumes are not preserved.  A future Unraid OS release may include preservation of btrfs subvolumes.

     

    OK, how do I re-partition my SSD pools?

     

    Outlined here are two basic methods:

    1. "Mover" method - The idea is to use the Mover to copy all data from the pool to a target device in the unRAID array.  Then erase all devices of the pool, and reformat.  Finally use the Mover to copy all the data back.
    2. "Unassign/Re-assign" method - The idea here is, one-by-one, remove a device from a btrfs pool, balance the pool with reduced device count, then re-assign the device back to the pool, and balance pool back to include the device.  This works because Unraid OS will re-partition new devices added to an existing btrfs pool.  This method is not recommended for a pool with more than 2 devices since the first balance operation may be write-intensive, and writes are what we're trying to minimize.  Also it can be tricky to determine if enough free space really exists after removing a device to rebalance the pool.  Finally, this method will introduce a time window where your data is on non-redundant storage.

     

    No matter which method you choose, if you have absolutely critical data in the pool we strongly recommend making an independent backup first (you are already doing this, right?).

     

     

    Mover Method

    This procedure presumes a multi-device btrfs pool containing one or more cache-only or cache-prefer shares.

     

    1. With the array Started, stop any VMs and/or Docker applications which may be accessing the pool you wish to re-partition.  Make sure no other external I/O is targeting this pool.

     

    2. For each share on the pool, go to the Share Settings page and make some adjustments:

    • change from cache-only (or cache-prefer) to cache-yes
    • assign an array disk or disks via the Include mask to receive the data.  If you wish to preserve the NoCOW attribute (Copy-on-write set to No) on files and directories, these disks should be formatted with btrfs.  Of course, ensure there is enough free space to receive the data.

     

    3. Now go back to Main and click the Move button.  This will move the data of each share to the target array disk(s).

     

    4. Verify no data is left on the pool, Stop the array, click on the pool, and then click the Erase button.

     

    5. Start the array and the pool should appear Unformatted - go ahead and Format the pool (this is what re-writes the partition layout).

     

    6. Back on the Share Settings page, for each share above:

    • change from cache-yes to cache-prefer

     

    7. On the Main page, click the Move button.  This will move the data of each share back to the pool.

     

    8. Finally, back on the Share Settings page, for each share:

    • change from cache-prefer back to cache-only if desired

     

    Unassign/Re-assign Method

    1. Stop array and unassign one of the devices from your existing pool; leave device unassigned.
    2. Start array.  A balance will take place on your existing pool.  Let the balance complete.
    3. Stop array.  Re-assign the device, adding it back to your existing pool.
    4. Start array.  The added device will get re-partitioned and a balance will start moving data to the new device.  Let the balance complete.
    5. Repeat steps 1-4 for the other device in your existing pool.

     

    What's happening here is this:

    At the completion of step 2, btrfs will 'delete' the missing device from the volume and wipe the btrfs signature from it.

    At the beginning of step 4, Unraid OS will re-partition the new device being added to an existing pool.

     

    I don't care about preserving data in the pool.  In this case, just Stop the array, click on the pool, and then click Erase.  Start the array and Format the pool - done.  Useful to know: when Linux creates a file system on an SSD device, it will first perform a "blkdiscard" on the entire partition.  Similarly, a "blkdiscard" is initiated on partition 1 of a new device added to an existing btrfs pool.
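    That discard can also be issued manually; an illustrative example, assuming partition 1 of a hypothetical device /dev/sdX (this irreversibly discards all data on the partition):

    blkdiscard /dev/sdX1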

     

    What about array devices?  If you have SSD devices in the unRAID array, the only way to safely re-partition them is to either remove them from the array, or remove the parity devices from the array.  This is because re-partitioning will invalidate parity.  Note also that the volume size will be slightly smaller.

     


     

    Version 6.9.0-beta29 2020-09-27 (vs -beta25)

    Base distro:

    • at-spi2-core: version 2.36.1
    • bash: version 5.0.018
    • bridge-utils: version 1.7
    • brotli: version 1.0.9
    • btrfs-progs: version 5.6.1
    • ca-certificates: version 20200630
    • cifs-utils: version 6.11
    • cryptsetup: version 2.3.4
    • curl: version 7.72.0 (CVE-2020-8231)
    • dbus: version 1.12.20
    • dnsmasq: version 2.82
    • docker: version 19.03.13
    • ethtool: version 5.8
    • fribidi: version 1.0.10
    • fuse3: version 3.9.3
    • git: version 2.28.0
    • glib2: version 2.66.0 build 2
    • gnutls: version 3.6.15
    • gtk+3: version 3.24.23
    • harfbuzz: version 2.7.2
    • haveged: version 1.9.13
    • htop: version 3.0.2
    • iproute2: version 5.8.0
    • iputils: version 20200821
    • jasper: version 2.0.21
    • jemalloc: version 5.2.1
    • libX11: version 1.6.12
    • libcap-ng: version 0.8
    • libevdev: version 1.9.1
    • libevent: version 2.1.12
    • libgcrypt: version 1.8.6
    • libglvnd: version 1.3.2
    • libgpg-error: version 1.39
    • libgudev: version 234
    • libidn: version 1.36
    • libpsl: version 0.21.1 build 2
    • librsvg: version 2.50.0
    • libssh: version 0.9.5
    • libvirt: version 6.6.0 (CVE-2020-14339)
    • libxkbcommon: version 1.0.1
    • libzip: version 1.7.3
    • lmdb: version 0.9.26
    • logrotate: version 3.17.0
    • lvm2: version 2.03.10
    • mc: version 4.8.25
    • mpfr: version 4.1.0
    • nano: version 5.2
    • ncurses: version 6.2_20200801
    • nginx: version 1.19.1
    • ntp: version 4.2.8p15 build 2
    • openssl-solibs: version 1.1.1h
    • openssl: version 1.1.1h
    • p11-kit: version 0.23.21
    • pango: version 1.46.2
    • php: version 7.4.10 (CVE-2020-7068)
    • qemu: version 5.1.0 (CVE-2020-10717, CVE-2020-10761)
    • rsync: version 3.2.3
    • samba: version 4.12.7 (CVE-2020-1472)
    • sqlite: version 3.33.0
    • sudo: version 1.9.3
    • sysvinit-scripts: version 2.1 build 35
    • sysvinit: version 2.97
    • ttyd: version 1.6.1
    • util-linux: version 2.36
    • wireguard-tools: version 1.0.20200827
    • xev: version 1.2.4
    • xf86-video-vesa: version 2.5.0
    • xfsprogs: version 5.8.0
    • xorg-server: version 1.20.9 build 3
    • xterm: version 360
    • xxHash: version 0.8.0

    Linux kernel:

    • version 5.8.12
    • kernel-firmware: version kernel-firmware-20200921_49c4ff5
    • oot: Realtek r8152: version 2.13.0
    • oot: Tehuti tn40xx: version 0.3.6.17.3

    Management:

    • btrfs: include 'discard=async' mount option
    • emhttpd: avoid using remount to set additional mount options
    • emhttpd: added wipefs function (webgui 'Erase' button)
    • shfs: move: support sparse files
    • shfs: move: preserve ioctl_iflags when moving between same file system types
    • smb: remove setting 'aio' options in smb.conf, use samba defaults
    • webgui: Update noVNC to v1.2.0
    • webgui: Docker: more intuitive handling of images
    • webgui: VMs: more intuitive handling of image selection
    • webgui: VMs: Fixed: rare cases vdisk defaults to Auto when it should be Manual
    • webgui: VMs: Fixed: Adding NICs or VirtFS mounts to a VM is limited
    • webgui: VM manager: new setting "Network Model"
    • webgui: Added new setting "Enable user share assignment" to cache pool
    • webgui: Dashboard: style adjustment for server icon
    • webgui: Update jGrowl to version 1.4.7
    • webgui: Fix ' appearing
    • webgui: VM Manager: add 'virtio-win-0.1.189-1' to VirtIO-ISOs list
    • webgui: Prevent bonded nics from being bound to vfio-pci too
    • webgui: better handling of multiple nics with vfio-pci
    • webgui: Suppress WG on Dashboard if no tunnels defined
    • webgui: Suppress Autofan link on Dashboard if plugin not installed
    • webgui: Detect invalid session and logout current tab
    • webgui: Added support for private docker registries with basic auth or no auth, and improvements for token based authentication
    • webgui: Fix notifications continually reappearing
    • webgui: Support links on notifications
    • webgui: Add raid1c3 and raid1c4 btrfs pool balance options.
    • webgui: For raid6 btrfs pool data profile use raid1c3 metadata profile.
    • webgui: Permit file system configuration when array Started for Unmountable volumes.
    • webgui: Fix not able to change parity check schedule if no cache pool present
    • webgui: Disallow "?" in share names
    • webgui: Add customizable timeout when stopping containers

    Edited by limetech

    • Like 6
    • Thanks 6



    User Feedback

    Recommended Comments



    Hey all

    Playing with my VM today I noticed I have a pinned core at 100% which jumps around from core to core now and again whilst I'm using it.  If I leave the VM and don't do anything, it stays pinned at 100% and doesn't jump around.  Is this a GUI bug?  Because if it was pinned, surely it would stay pinned?

    Edited by Dava2k7
    Link to comment
    3 minutes ago, Dava2k7 said:

    Hey all

    Playing with my VM today I noticed I have a pinned core at 100% which jumps around from core to core now and again whilst I'm using it.  If I leave the VM and don't do anything, it stays pinned at 100% and doesn't jump around.  Is this a GUI bug?  Because if it was pinned, surely it would stay pinned?

    Have you checked with htop?  Usually I have that 100% indication jumping around cores, but when I check with htop it's a false value...

    Link to comment
    17 hours ago, limetech said:

    To fix this now, download attached file and put it on your flash, then add a command in your 'go' file like this:

    
    cp /boot/statuscheck /usr/local/emhttp/plugins/dynamix/scripts

    (be sure to get rid of that when next release comes out)

    statuscheck 7.34 kB · 7 downloads

    Thanks @limetech !!!    For anyone else who wants to fix loss of E-mail Array Status Notification---  This does work!  I got my E-mail notifications this morning.

    Edited by Frank1940
    Link to comment

    Added work around:

     

    [screenshot]

     

    So I can now see drives on the 2116

     

    [screenshot]

     

    FYI, my firmware and BIOS versions:

     

    mpt2sas_cm1: LSISAS2116: FWVersion(20.00.07.00), ChipRevision(0x02), BiosVersion(07.39.02.00)

    Edited by SimonF
    Additional Info
    • Like 1
    • Thanks 2
    Link to comment

    Hi btagomes 

    1 hour ago, btagomes said:

    Have you checked with htop?  Usually I have that 100% indication jumping around cores, but when I check with htop it's a false value...

    No, I hadn't heard of it until now.  I'll have a look into it.  Can you tell me where I might find it, please?  Thank you 👍🏻

    Edited by Dava2k7
    Link to comment
    54 minutes ago, Frank1940 said:

    Thanks @limetech !!!    For anyone else who wants to fix loss of E-mail Notification---  This does work!  I got my E-mail notifications this morning.

    As an FYI, the regular notifications (SMART, updates etc) always worked.  It was only the array status notification that had the issue.

    • Like 1
    Link to comment
    37 minutes ago, Dava2k7 said:

    Hi btagomes 

    No, I hadn't heard of it until now.  I'll have a look into it.  Can you tell me where I might find it, please?  Thank you 👍🏻

    Open a terminal window, type "htop", and have a look to see if your CPUs are really at 100%.

    Link to comment

    A question regarding cache pools.

     

    I have installed my dockers using a path of /mnt/cache/... instead of /mnt/user/...  This is because of earlier instructions about preventing a data corruption issue.  What measures regarding this should I take when updating from 6.8.3, or should it work without changing anything?

     

    Thank you!

     

    Link to comment
    Quote

    webgui: Added support for private docker registries with basic auth or no auth, and improvements for token based authentication

    Is there any more information on this?  There don't seem to be any changes to the UI for adding or editing a Docker container, so I'm wondering if I'm missing something.

    Link to comment
    3 minutes ago, Ruato said:

    A question regarding cache pools.

     

    I have installed my dockers using a path of /mnt/cache/... instead of /mnt/user/...  This is because of earlier instructions about preventing a data corruption issue.  What measures regarding this should I take when updating from 6.8.3, or should it work without changing anything?

     

    Thank you!

     

    No change required.  (Assuming that you have a cache pool named "cache", which is the default)

    • Like 1
    • Thanks 1
    Link to comment
    3 minutes ago, Squid said:

    No change required.  (Assuming that you have a cache pool named "cache", which is the default)

    If you are using another pool, just substitute its name instead of "cache". For example, I have a pool named "fast" and my appdata is at /mnt/fast/appdata.

    • Thanks 1
    Link to comment
    7 minutes ago, Squid said:

    No change required.  (Assuming that you have a cache pool named "cache", which is the default)

    Thank you for a very fast reply!

    Link to comment
    47 minutes ago, btagomes said:

    Open a terminal window, type "htop", and have a look to see if your CPUs are really at 100%.

    Yeah, it's still showing the same thing on htop: 100% till I open a folder or do something that uses CPU power, then it jumps to another thread at 100%.  Any ideas?

    Edited by Dava2k7
    Link to comment
    5 hours ago, btagomes said:

    That happened to me before, and I solved it by recreating the "proxynet" network (created from spaceinvader's video instructions for reverse proxy)

    Which part happened to you before?

    Link to comment
    17 hours ago, limetech said:

     

    Does it make any difference which virtio-win drivers are installed, or, put another way, have you installed the latest Windows drivers?  Under Settings/VM Manager we do try to keep that current, and at present it does let you select the latest "virtio-win-0.1.189".  The direct downloads from Fedora are kept here:

    https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/?C=M;O=D

    Cheers limetech.

     

    So I tried as you advised with 0.1.189 and Windows updates; this on its own made no difference.  I then wiped away the entire VM and reinstalled fresh with Windows 10 Pro 2004 media, but received update errors.  I then wiped away the VM once again and created it fresh once more, however this time I went with i440fx-4.0 instead of i440fx-4.2 as I had previously.

     

    The VM is working and this is the configuration used: Hyper-V enabled, OVMF, USB controller 2.0, vdisk with SATA/RAW.  My network card is virtio-net (as I would see a GSO error in the logs when using just "virtio").  Alongside the VM changes I have appended kernel arguments: "vfio-pci.ids=10de:1c02,10de:10f1 video=vesafb:off video=efifb:off".  For the GPU vbios, I dumped the vbios on another system and removed the nvflash header in a hex editor as per spaceinvader one's YouTube guide.

     

    I used virtio-win-0.1.189 again for this build and fully updated.  This time, passing through my USB audio device and then the GPU was also successful.

     

    Before tearing everything down and changing the virtual hardware template, I noticed that if I passed through the GPU, Windows would have no audio outputs (not just the HDMI out; every audio device was missing), and if I stopped passing through the GPU, the USB audio device would automatically appear in outputs, working as expected.

     

    What the single solution was I am not so sure; either the hardware profile change from 4.2 to 4.0, or wiping away the Windows install completely, would be my guess.  On that earlier VM I had tried passing through the built-in HD sound card, which it seems is very difficult to get functioning, and that may have impacted Windows sound for everything else, which is why I bought a USB device; but then that didn't work either!

     

    Thanks for the reply/help.  The complete answer is here for anyone else who may get some inspiration on a similar issue, since I have spent days at it!

    Edited by m1012000
    • Thanks 1
    Link to comment
    9 hours ago, falconexe said:

    Hi Tom. Just wanted to make sure you saw this possible Broadcom LSI workaround.

    The developer we reached out to emailed this morning saying to try that workaround.  Please give this a try:

    With the array Stopped (it should be, since your drives are gone :)), run:

    rmmod mpt3sas
    modprobe mpt3sas max_queue_depth=10000

    The syslog should show the driver executing discovery.

    Next, hit Refresh on webGUI Main page and see if drives are back.

    • Like 1
    Link to comment
    On 9/29/2020 at 3:20 PM, Dava2k7 said:

    Yeah, Ryzen 3900X.  I did find ways on the forum which got VNC working, but I couldn't get anything on the TV; it was a nightmare to say the least.  I tried loads of different things, but I'm glad it's sorted now.  I tried changing passthrough to model, and I tried different lines I found on here; they got VNC to work, but that was it 👍🏻

     

    Same CPU as me.  It's a beast for the money.

     

    I had a devil of a time getting my Nvidia 1070 to pass through until I used a second video card in the system to dump the BIOS from my card.  I downloaded about a dozen BIOSes from the web, but could never make them work.  After I used my own BIOS, I was able to pull out the other card and run with just the one video card passed through to my VM.

    Link to comment
    1 hour ago, Chess said:

     

    Same CPU as me.  It's a beast for the money.

     

    I had a devil of a time getting my Nvidia 1070 to pass through until I used a second video card in the system to dump the BIOS from my card.  I downloaded about a dozen BIOSes from the web, but could never make them work.  After I used my own BIOS, I was able to pull out the other card and run with just the one video card passed through to my VM.

    Yeah, I followed Space Invader's videos for sorting out the BIOS.  I'm thinking I might get a new 3000-series card when they come out; a mini-ITX version would be a good addition to the system.  Until then, I'm a happy bunny with my GTX 1050 Ti 😊

    Link to comment

    [screenshot]

     

    I've added an existing virtual disk to an existing VM, and instead of qcow2 it adds the disk as "raw" by default.  I've done it with different disks and the bug still persists.

    Is this a legit bug or is it me?

     

    Link to comment

    Hi, before trying again: is wake-on-LAN fixed with this release?

    I've tried beta22 and beta25, both not working, and thus returned to 6.8.3 with the hassle of re-allocating the drives (with the risk of losing data if done wrong).

    Link to comment

    How do I check if the partition alignment actually worked/happened?  I tried the 2nd method (removing the drives from the pool one by one), but when I re-added them it didn't seem that a balance was automatically triggered, so I manually triggered one.

     

    From my quick bit of research this seems to be a way to check, but I'm not sure how to interpret the output:

    Quote

    fdisk -l -u /dev/sdm
    Disk /dev/sdm: 447.13 GiB, 480103981056 bytes, 937703088 sectors
    Disk model: SanDisk Ultra II
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x00000000

    Device     Boot Start       End   Sectors   Size Id Type
    /dev/sdm1          64 937703087 937703024 447.1G 83 Linux

    fdisk -l -u /dev/sdn
    Disk /dev/sdn: 447.13 GiB, 480103981056 bytes, 937703088 sectors
    Disk model: SSD PLUS 480GB
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x00000000

    Device     Boot Start       End   Sectors   Size Id Type
    /dev/sdn1          64 937703087 937703024 447.1G 83 Linux

     

    Link to comment
    38 minutes ago, gerard6110 said:

    Hi, before trying again, is wake-on-lan fixed with this release?

    That's something we don't typically test.  How are you enabling it?

    Link to comment
    7 minutes ago, atconc said:

    from my quick bit of research this seems to be a way to check but I'm not sure how to interpret the output:

    Seems it has not re-partitioned.  Under the Start column it should say 2048 for those devices.

     

    Quick sanity check: type these commands; both should return '0':

    cat /sys/block/sdm/queue/rotational
    cat /sys/block/sdn/queue/rotational

    You can repeat the procedure, but after the first device uninstall/reinstall, please post your diags.

     

    Link to comment

    I did some testing on one of my own problems last night, but it turns out it exists in beta25 also.  The problem, which I accidentally discovered due to an unbootable install ISO, can be replicated on my machine over and over.

     

    The problem is that if you force-close a VM at the first install screen, or at the screen where you are presented with the failed-to-boot grub text (e.g. using VNC), you are presented with something similar to the following screenshot:

     

    [screenshot]

    This seems to result in two issues I've noticed: 1 - I can no longer access the Virtual Machines tab or its contents; 2 - I can't delete files from the virtual machines folder on the SSD I'm using, requiring a reboot of the host.

     

     

    I've run a full memory test overnight, including with SMP enabled, to rule out any memory-related issues, and it all came up clean.

     

    I'm posting this here in case it's some other combination of hardware I have.  To that end, please note I'm on a Threadripper 1950X, which has never given me a single issue on Unraid, and I am storing this VM on an SSD formatted with ZFS.  If that becomes an issue, I can store it on an official Unraid file system.

     

    I have performed the same test on my Xeon system and cannot get this to occur.  My suspicion is that it's AMD-related, as both systems run the beta and ZFS; on the AMD system it appears to be 100% repeatable and seems to slow down the system too.

     

    I suspect that if I force-close the VM at any other point this will also happen.  Logs attached.

    obi-wan-diagnostics-20201002-0852beta25.zip

    obi-wan-diagnostics-20201001-1724beta29.zip

     

     

    Edited by Marshalleq
    Link to comment


