• Unraid OS version 6.9.0-beta25 available


    limetech

    6.9.0-beta25 vs. -beta24 Summary:

    • fixed emhttpd crash resulting from having NFS exported disk shares
    • fixed issue where specifying 1 MiB partition alignment was being ignored (see 1 MiB Partition Alignment below)
    • fixed spin-up/down issues
    • ssh improvements (see SSH Improvements below)
    • kernel updated from 5.7.7 to 5.7.8
    • added UI changes to support new docker image file handling - thank you @bonienl.  Refer also to additional information re: docker image folder, provided by @Squid under Docker below.
    • known issue: "Device/SMART Settings/SMART controller type" is ignored, will be fixed in next release

     

    Important: Beta code is not fully tested and not feature-complete.  We recommend running on test servers only!

     

    Multiple Pools

    This feature permits you to define up to 35 named pools, of up to 30 storage devices/pool.  The current "cache pool" is now simply a pool named "cache".  Pools are created and managed via the Main page.

     

    Note: When you upgrade a server which has a cache pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and then cache device assignment settings are moved out of disk.cfg and into a new file, config/pools/cache.cfg.  If later you revert back to a pre-6.9 Unraid OS release you will lose your cache device assignments and you will have to manually re-assign devices to cache.  As long as you reassign the correct devices, data should remain intact.

     

    When you create a user share, or edit an existing user share, you can specify which pool should be associated with that share.  The assigned pool functions identically to current cache pool operation.

     

    Something to be aware of: when a directory listing is obtained for a share, the unRAID array disk volumes and all pools which contain that share are merged in this order:

      pool assigned to share

      disk1

      :

      disk28

      all the other pools in strverscmp() order.
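
    To see what "strverscmp() order" means in practice, GNU `sort -V` applies the same style of version ordering; the pool names below are made up for illustration:

```shell
# strverscmp()-style ordering compares digit runs numerically, so "pool2"
# sorts before "pool10" (plain string order would put "pool10" first):
printf '%s\n' pool10 pool2 cache | sort -V
# -> cache, pool2, pool10
```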

     

    As with the current "cache pool", a single-device pool may be formatted with either xfs, btrfs, or reiserfs.  A multiple-device pool may only be formatted with btrfs.  A future release will include support for multiple "unRAID array" pools.  We are also considering zfs support.
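
    The filesystem rule above can be summarized as a small check (a sketch only, not Unraid's actual validation code; the function name is made up):

```shell
# valid_pool_fs <device_count> <fs>: returns success if the combination is
# allowed per the release note (multi-device pools must be btrfs).
valid_pool_fs() {
  if [ "$1" -gt 1 ]; then
    [ "$2" = "btrfs" ]
  else
    case "$2" in
      xfs|btrfs|reiserfs) true ;;
      *) false ;;
    esac
  fi
}
```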

     

    Something else to be aware of: Suppose you have a 2-device btrfs pool. This will be what btrfs calls "raid1" and what most people would understand to be "mirrored disks". This is mostly true, in that the same data exists on both disks, but not necessarily at the block level.  Now suppose you create another pool, and what you do is unassign one of the devices from the existing 2-device btrfs pool and assign it to this new pool - now you have two single-device btrfs pools.  Upon array Start you might understandably assume there are now two pools with exactly the same data.  However this is not the case. Instead, when Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it will do a 'wipefs' on that device so that upon mount it will not be included in the old pool.  This of course effectively deletes all the data on the moved device.

     

    1 MiB Partition Alignment

    We have added another partition layout where the start of partition 1 is aligned on 1 MiB boundary. That is, for devices which present 512-byte sectors, partition 1 will start in sector 2048; for devices with 4096-byte sectors, in sector 256.  This partition type is now used for all non-rotational storage (only).
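
    The two sector geometries land partition 1 at the same byte offset, which is the point of the layout:

```shell
# 1 MiB = 1048576 bytes, regardless of the device's sector size:
echo $(( 2048 * 512 ))    # 512-byte sectors: partition 1 starts at sector 2048
echo $(( 256 * 4096 ))    # 4096-byte sectors: partition 1 starts at sector 256
# both print 1048576
```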

     

    It is not clear what benefit 1 MiB alignment offers.  For some SSD devices you won't see any difference; for others, perhaps a big performance difference.  LimeTech does not recommend re-partitioning an existing SSD device unless you have a compelling reason to do so (or your OCD just won't let it be).

     

    To re-partition an SSD it is necessary to first wipe out any existing partition structure on the device.  Of course this will erase all data on the device.  Probably the easiest way to accomplish this is, with the array Stopped, to identify the device to be erased and use the 'blkdiscard' command:

    blkdiscard /dev/xxx  # for example /dev/sdb or /dev/nvme0n1, etc.

            WARNING: be sure you type the correct device identifier because all data will be lost on that device!

     

    Upon next array Start the device will appear Unformatted, and since there is now no partition structure, Unraid OS will create one.

     

    Language Translation

    A huge amount of work and effort has been implemented by @bonienl to provide multiple-language support in the Unraid OS Management Utility, aka, webGUI.  There are several language packs now available, and several more in the works.  Thanks to @Squid, language packs are installed via the Community Applications plugin - look for a new category entitled Language.

     

    Note: Community Applications must be up to date to install languages.  See also here.

     

    Each language pack exists in public Unraid organization github repos.  Interested users are encouraged to clone them and issue Pull Requests to correct translation errors.  Language translations and PR merging are managed by @SpencerJ.

     

    Linux Kernel

    Upgraded to 5.7.

     

    These out-of-tree drivers are currently included:

    • QLogic QLGE 10Gb Ethernet Driver Support (from staging)
    • RealTek r8125: version 9.003.05 (included for newer r8125)
    • HighPoint rr272x_1x: version v1.10.6-19_12_05 (per user request)

    Note that as we update the Linux kernel, if an out-of-tree driver no longer builds, it will be omitted.

     

    These drivers are currently omitted:

    • Highpoint RocketRaid r750 (does not build)
    • Highpoint RocketRaid rr3740a (does not build)
    • Tehuti Networks tn40xx (does not build)

    If you require one of these drivers, please create a Bug Report and we'll spend some time looking for alternatives.  Better yet, pester the manufacturer of the controller and get them to update their drivers.

     

    Base Packages

    All updated to latest versions.  In addition, Linux PAM has been integrated.  This will permit us to implement 2-factor authentication in a future release.

     

    Docker

    Updated to version 19.03.11

     

    It's now possible to select different icons for multiple containers of the same type.  This change necessitates a re-download of the icons for all your installed docker applications.  Expect a delay when initially loading either the Dashboard or the Docker tab while this happens, before the containers show up.

     

    We also made some changes to add flexibility in assigning storage for the Docker engine.  First, 'rc.docker' will detect the filesystem type of /var/lib/docker.  We now support either btrfs or xfs and the docker storage driver is set appropriately.

     

    Next, 'mount_image' is modified to support a loopback formatted with either btrfs or xfs, depending on the suffix of the loopback file name.  If the file name ends with ".img", as in "docker.img", then we use mkfs.btrfs.  If the file name ends with "-xfs.img", as in "docker-xfs.img", then we use mkfs.xfs.


    We also added the ability to bind-mount a directory instead of using a loopback.  If the file name does not end with ".img", the code assumes it is the name of a directory (presumably on a share) which is bind-mounted onto /var/lib/docker.

     

    For example, given "/mnt/user/system/docker/docker", we first create the directory "/mnt/user/system/docker/docker" if necessary.  If this path is on a user share we then "dereference" the path to get the disk path, which is then bind-mounted onto /var/lib/docker.  For example, if "/mnt/user/system/docker/docker" is on "disk1", then we would bind-mount "/mnt/disk1/system/docker/docker".  Caution: the share should be cache-only or cache-no so that 'mover' will not attempt to move the directory; the script does not check this.
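
    The naming rules can be sketched as a shell case statement (illustrative only; this is not the actual 'mount_image' code, and the function name is made up).  Note the more specific "-xfs.img" pattern must be matched before the general ".img" pattern:

```shell
# docker_storage_kind <path>: report how the Docker storage setting would be
# interpreted, per the suffix rules described in the release note.
docker_storage_kind() {
  case "$1" in
    *-xfs.img) echo "loopback: mkfs.xfs" ;;
    *.img)     echo "loopback: mkfs.btrfs" ;;
    *)         echo "directory: bind-mount onto /var/lib/docker" ;;
  esac
}

docker_storage_kind "docker.img"
docker_storage_kind "docker-xfs.img"
docker_storage_kind "/mnt/user/system/docker/docker"
```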

     

    Additional information from user @Squid:

     

    Quote

    Just a few comments on the ability to use a folder / share for docker

     

    If you're one of those users who continually has a problem with the docker image filling up, this is the solution, as the "image" will be able to expand (and shrink) to the size of the assigned share.  Just be aware though that this new feature is technically experimental.  (I have however been running this on an XFS formatted cache drive for a while now, and don't see any problems at all)

     

    I would recommend that you use a share that is dedicated to the docker files, and not a folder from another existing share (like system, as shown in the OP).

     

    My reasoning for this is that:

    1. If you ever have a need to run the New Permissions tool against the share that you've placed the docker folder into, then that tool will cause the entire docker system to not run.  The folder will have to be removed (via the command line), and then recreated.

    2. All of the folders contained within the docker folder are not compatible with being exported over SMB, and you cannot gain access to them that way.  Using a separate share will also allow you to not export it without impacting the other shares' exporting.  (And there are no "user-modifiable" files in there anyways.  If you do need to modify a file within that folder, (ie: a config file for a container and that config isn't available within appdata), you should be doing it via going to the container's shell)

    You definitely want the share to be cache-only or cache-no (although cache-prefer should probably be ok).  Setting it to cache:yes will undoubtedly cause you problems if mover winds up relocating files to the array for you.

     

    I did have some "weirdness" with using an Unassigned Device as the drive for the docker folder.  This may however have been a glitch in my system.

     

    Fix Common Problems (and the Docker Safe New Permissions Tool) will wind up getting updated to let you know of any problems that it detects with how you've configured the folder.

     

    Virtualization

    libvirt updated to version 6.4.0

    qemu updated to version 5.0.0

     

    In addition, integrated changes to System Devices page by user @Skitals with modifications by user @ljm42.  You can now select PCI devices to isolate from Linux upon boot simply by checking some boxes.  This makes it easier to reserve those devices for assignment to VM's.

     

    Note: If you had the VFIO-PCI Config plugin installed, you should remove it as that functionality is now built-in to Unraid OS 6.9.  Refer also @ljm42's excellent guide.

     

    In a future release we will include the NVIDIA and AMD GPU drivers natively into Unraid OS.  The primary use case is to facilitate accelerated transcoding in docker containers.  For this we require Linux to detect and auto-install the appropriate driver.  However, in order to reliably pass through an NVIDIA or AMD GPU to a VM, it's necessary to prevent Linux from auto-installing a GPU driver for those devices upon boot, which can be easily done now through System Devices page.  Users passing GPU's to VM's are encouraged to set this up now.

     

    "unexpected GSO errors"

    If your system log is being flooded with errors such as:

    Jun 20 09:09:21 Tower kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66

    You need to edit each VM and change the model type for the Ethernet bridge from "virtio" to "virtio-net".  In most cases this can be accomplished simply by clicking Update in "Form View" on the VM Edit page.  For other network configs it may be necessary to directly edit the xml.  Example:

    <interface type='bridge'>
          <mac address='xx:xx:xx:xx:xx:xx'/>
          <source bridge='br0'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

     

    SSH Improvements

    There are changes in /etc/ssh/sshd_config to improve security (thanks to @Mihai and @ljm42 for suggestions):

    • only root user is permitted to login via ssh (remember: no traditional users in Unraid OS - just 'root')
    • non-null password is now required
    • non-root tunneling is disabled

     

    In addition, upon upgrade we ensure the 'config/ssh/root' directory exists on the USB flash boot device; and, we have set up a symlink: /root/.ssh to this directory.  This means any files you might put into /root/.ssh will be persistent across reboots.
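
    The persistence mechanism can be demonstrated with throwaway directories standing in for /boot/config/ssh/root and /root (the temp paths below are stand-ins, not the real ones):

```shell
# /root/.ssh is a symlink to a directory on the flash device, so anything
# written "into /root/.ssh" actually lands on the flash side and therefore
# survives a reboot.  Simulated here with temp dirs:
flash=$(mktemp -d)            # stand-in for /boot/config/ssh/root
home=$(mktemp -d)             # stand-in for /root
ln -s "$flash" "$home/.ssh"   # the symlink Unraid sets up
echo "ssh-ed25519 AAAA... demo" > "$home/.ssh/authorized_keys"
cat "$flash/authorized_keys"  # same file, stored on the "flash" side
```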

     

    Note: if you examine the sshd startup script (/etc/rc.d/rc.sshd), upon boot all files from the 'config/ssh' directory are copied to /etc/ssh (but not subdirs).  The purpose is to restore the host ssh keys; however, this mechanism can be used to define custom ssh_config and sshd_config files (not recommended).

     

    Other

    • AFP support has been removed.
    • Numerous other Unraid OS and webGUI bug fixes and improvements.

     


    Version 6.9.0-beta25 2020-07-12

    Linux kernel:

    • version 5.7.8

    Management:

    • fix emhttpd crash resulting from exporting NFS disk share(s)
    • fix non-rotational device partitions were not actually being 1MiB aligned
    • dhcpcd: ipv6: use slaac hwaddr instead of slaac private
    • docker: correct storage-driver assignment logic
    • ssh: allow only root user, require passwords, disable non-root tunneling
    • ssh: add /root/.ssh symlink to /boot/config/ssh/root directory
    • syslog: configure to also listen on localhost udp port 514
    • webgui: Added btrfs info for all pools in diagnostics
    • webgui: Docker: allow BTRFS or XFS vdisk, or folder location
    • webgui: Multi-language: Fixed regression error: missing indicator for required fields
    • webgui: Dashboard: fix stats of missing interface



    User Feedback

    Recommended Comments



    13 hours ago, tech960 said:

    Are unassigned devices (USB Drive) able to be spun down yet? Mine is always on and consistently overheats (it's a RAID-0 unit). As it's hardly used, it would be nice if it went to sleep at some point.

    I use a User Script to spin mine down. It works although I see occasional unexplained wake-ups which I haven't tracked down.

     


    Looks like an old issue from 6.8 RC1 has come back in 6.9 beta 25.  With only 1 day uptime I was forced to reboot my server last night, and 13 hours later Plex started having connect/disconnect issues.  I am using br0 for my official Plex docker without issue on beta 24, and for about 30 hrs on beta 25.


    I have about 4 dockers running on br0 and one small VM also using br0.

    I updated the VM to the old br4 (the NIC doesn't exist but the profile in Unraid still does) and the logs stop spamming.

    Now my /var/log fills with

    Jul 15 13:25:30 Thor kernel: tun: unexpected GSO type: 0x0, gso_size 1398, hdr_len 1452
    Jul 15 13:25:30 Thor kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
    Jul 15 13:25:30 Thor kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
    Jul 15 13:25:30 Thor kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
    Jul 15 13:25:30 Thor kernel: tun: 12 00 00 00 00 00 00 00 00 07 0b 00 c0 00 00 00 ................
    Jul 15 13:25:30 Thor kernel: tun: unexpected GSO type: 0x0, gso_size 1398, hdr_len 1452
    Jul 15 13:25:30 Thor kernel: tun: 90 86 20 00 c0 00 00 00 00 00 00 00 00 00 00 00 .. .............
    Jul 15 13:25:30 Thor kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
    Jul 15 13:25:30 Thor kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
    Jul 15 13:25:30 Thor kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
    Jul 15 13:25:30 Thor kernel: tun: unexpected GSO type: 0x0, gso_size 875, hdr_len 929
    Jul 15 13:25:30 Thor kernel: tun: 12 00 00 00 00 00 00 00 c0 4d 08 00 c0 00 00 00 .........M......
    Jul 15 13:25:30 Thor kernel: tun: 05 00 00 00 00 00 00 00 08 00 00 00 00 00 00 00 ................
    Jul 15 13:25:30 Thor kernel: tun: 40 7a 1a 00 c0 00 00 00 00 00 00 00 00 00 00 00 @z..............
    Jul 15 13:25:30 Thor kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
    Jul 15 13:25:30 Thor kernel: tun: unexpected GSO type: 0x0, gso_size 1398, hdr_len 1452
    Jul 15 13:25:30 Thor kernel: tun: 8a 26 c7 c6 e3 92 5b 00 16 61 77 51 f6 5f 24 cb .&....[..awQ._$.
    Jul 15 13:25:30 Thor kernel: tun: 67 31 87 62 df de 15 13 3d 35 9a d6 30 b7 03 ec g1.b....=5..0...
    Jul 15 13:25:30 Thor kernel: tun: 2e 2e a7 e3 8b a4 64 18 1b 49 c9 73 ed a5 41 42 ......d..I.s..AB
    Jul 15 13:25:30 Thor kernel: tun: 81 54 cb 4c 4d 0f b4 9b 60 d8 42 ee 21 d1 8a ea .T.LM...`.B.!...
    Jul 15 13:25:30 Thor kernel: tun: unexpected GSO type: 0x0, gso_size 1398, hdr_len 1452
    Jul 15 13:25:30 Thor kernel: tun: 40 d5 18 38 a2 f6 65 93 a2 96 60 71 bd 04 a3 71 @..8..e...`q...q
    Jul 15 13:25:30 Thor kernel: tun: 90 db b6 58 a5 09 ac 1b 2b e2 6c 39 d4 1c ee 8f ...X....+.l9....
    Jul 15 13:25:30 Thor kernel: tun: 08 45 ef 28 38 82 3b 88 6c 26 e6 37 43 f0 03 1c .E.(8.;.l&.7C...
    Jul 15 13:25:30 Thor kernel: tun: cc bb 3c a1 ec 4c 7b af fd 5e 57 ad 8a 6e 01 1b ..<..L{..^W..n..


    diagnostics attached

    thor-diagnostics-20200715-1323.zip

    Just now, Can0nfan said:

    Looks like an old issue from 6.8 RC1 has come back in 6.9 beta 25.  With only 1 day uptime I was forced to reboot my server last night, and 13 hours later Plex started having connect/disconnect issues.  I am using br0 for my official Plex docker without issue on beta 24, and for about 30 hrs on beta 25.

    I have about 4 dockers running on br0 and one small VM also using br0.

    I updated the VM to the old br4 (the NIC doesn't exist but the profile in Unraid still does) and the logs stop spamming.

    Now my /var/log fills with

    Jul 15 13:25:30 Thor kernel: tun: unexpected GSO type: 0x0, gso_size 1398, hdr_len 1452

    I have moved my vm to my other unraid server on 6.8.3 and no more kernel: tun errors

    19 minutes ago, Can0nfan said:

    I have moved my vm to my other unraid server on 6.8.3 and no more kernel: tun errors

    Search for "unexpected GSO" in the LT release note for the fix, i.e. change from virtio to virtio-net.

     

    Note though that virtio-net can carry some performance penalty so if you find it too slow, try changing the machine type to 5.0 version (presumably yours is still 4.2).

    I don't have any more GSO errors with Q35-5.0 and i440fx-5.0, even with virtio.

    58 minutes ago, testdasi said:

    Search for "unexpected GSO" in the LT release note for the fix, i.e. change from virtio to virtio-net.

     

    Note though that virtio-net can carry some performance penalty so if you find it too slow, try changing the machine type to 5.0 version (presumably yours is still 4.2).

    I don't have any more GSO errors with Q35-5.0 and i440fx-5.0, even with virtio.

    I can't use virtio or virtio-net, as this little VM (Fedora 32 server) needs a static IP; it is used to ssh into my home network to access dockers blocked by certain firewalls (Plex for example) that I sometimes connect to where the WireGuard VPN is not able to be used.  Some of my dockers need static IPs on br0 as well so they can talk to other dockers (for example Plex on one server; Ombi, Sab, Sonarr, Radarr and Deluge through a reverse proxy on another server).

    1 hour ago, testdasi said:

     

     

    Note though that virtio-net can carry some performance penalty so if you find it too slow, try changing the machine type to 5.0 version (presumably yours is still 4.2).

    I don't have any more GSO errors with Q35-5.0 and i440fx-5.0, even with virtio.

    I'm actually using Q35-4.1; I'll try 5.0 and br0 to see if that helps.

    3 minutes ago, Can0nfan said:

    I'm actually using Q35-4.1; I'll try 5.0 and br0 to see if that helps.

    So far, using br0 and Q35-5.0 is not producing the kernel tun errors.  I'll keep an eye out; I will be working all night, so if it's going to happen it will happen in the next 12 hours or so.


    Can anyone ELI5 what this passage means for people using an Nvidia card for Plex hardware transcoding on 6.8 and an additional card passed through to a VM (also Nvidia)? I'm using the Nvidia version of Unraid.

     

    Quote

    In a future release we will include the NVIDIA and AMD GPU drivers natively into Unraid OS.  The primary use case is to facilitate accelerated transcoding in docker containers.  For this we require Linux to detect and auto-install the appropriate driver.  However, in order to reliably pass through an NVIDIA or AMD GPU to a VM, it's necessary to prevent Linux from auto-installing a GPU driver for those devices upon boot, which can be easily done now through System Devices page.  Users passing GPU's to VM's are encouraged to set this up now.

     

    Am I supposed to do something now, before 6.9 releases? 

    9 minutes ago, nlash said:

    Can anyone ELI5 what this passage means for people using an Nvidia card for Plex hardware transcoding on 6.8 and an additional card passed through to a VM (also Nvidia)? I'm using the Nvidia version of Unraid.

    Am I supposed to do something now, before 6.9 releases? 

    In the current beta, the VFIO-PCI.CFG plugin has been integrated into Unraid.

    So instead of binding using the usual vfio-ids method in syslinux or manually editing the VFIO-PCI.CFG file (manually or through the plugin), you can now do that on the Unraid native GUI via Tools -> System Devices. You just tick the boxes next to the devices you want to bind for VM pass-through and apply and reboot.

     

    What the passage means is that in addition to the usual devices that you would need to bind (e.g. USB controller, NVMe SSD, etc.), you should also bind the graphics card that you intend to pass through to the VM.  That has not been required in the past and is not required until the Nvidia / AMD driver is baked in.  Better to do it now rather than hit "my VM stops working" in the future.

     

    Right now though, there is no implication since there's no Nvidia / AMD driver included (yet).

     

    Side note: if the syslinux method has been working for you then you don't really need to use the new GUI method. Just need to take note to add the graphic card device ID to syslinux as well.

     

    42 minutes ago, testdasi said:

    In the current beta, the VFIO-PCI.CFG plugin has been integrated into Unraid.....

     

    Ah, okay. That makes sense.

     

    I wasn't sure if there would be a window where what I was doing (manually editing and binding) wasn't going to work after the upgrade, thus breaking my current set-up.

     

    Thanks.

    1 hour ago, testdasi said:

    That has not been required in the past and is not required until Nvidia / AMD driver is baked in. Better do it now rather than "my VM stops working" in the future.

    Exactly right.

     

    1 hour ago, testdasi said:

    Side note: if the syslinux method has been working for you then you don't really need to use the new GUI method. Just need to take note to add the graphic card device ID to syslinux as well.

    True but still suggest using Tools/System Devices method in case we switch away from syslinux at some point 👍


    u/UnraidOfficial Is there any fix for the FLR issue when passing through USB and Audio on 3rd Gen Ryzen builds? I don't see it in the patch notes but I know it has been requested. If I missed it I apologize. I'm a bit worried that if this is the last release before RC, it won't get in.

     

    See this post on VFIO for their current workaround.

    https://www.reddit.com/r/VFIO/comments/eba5mh/workaround_patch_for_passing_through_usb_and/

    It seems kernel 5.8 has a workaround built in but it looks like we are only going to 5.7.8 in this build.

    38 minutes ago, TheOriginalBox said:

    u/UnraidOfficial Is there any fix for the FLR issue when passing through USB and Audio on 3rd Gen Ryzen builds? I don't see it in the patch notes but I know it has been requested. If I missed it I apologize. I'm a bit worried that if this is the last release before RC, it won't get in.

     

    See this post on VFIO for their current workaround.

    https://www.reddit.com/r/VFIO/comments/eba5mh/workaround_patch_for_passing_through_usb_and/

    It seems kernel 5.8 has a workaround built in but it looks like we are only going to 5.7.8 in this build.

    Did you try latest 6.9-beta?  I believe this issue is solved but I can't test it.

    On 7/15/2020 at 3:12 PM, Can0nfan said:

    So far, using br0 and Q35-5.0 is not producing the kernel tun errors.  I'll keep an eye out; I will be working all night, so if it's going to happen it will happen in the next 12 hours or so.

    @limetech and @testdasi Machine version Q35-5.0 has been solid for my small Fedora 32 Server VM using br0 and a few dockers also using br0

    Nearly 48 hours uptime and no kernel tun messages in the logs at all.

    On 7/16/2020 at 11:02 PM, limetech said:

    Did you try latest 6.9-beta?  I believe this issue is solved but I can't test it.

    I updated to 6.9.0b25 and I can successfully passthrough [AMD] Starship/Matisse HD Audio Controller without the FLR reboot issue. I have yet to test USB as my motherboard only has 1 USB controller (which just happens to have unraid on it). If anyone can validate USB works as well that would be awesome.

     

    On 7/14/2020 at 9:29 PM, limetech said:

    Yes

    Perfect, toying with the idea of manually creating a read cache using some commands I found online with one of the secondary cache pools.

     

    Snapshots and a read cache / tiered storage are the only features I miss at this point.

     

    At the least I want to move some frequently accessed folders to an SSD so they don't spin up the main drives for no reason.


    What is the mount point for new disk pools, and will the cache pool still be at /mnt/cache/?

     

    I'm following @Squid's Docker FAQ advice by having docker config directories pointed at /mnt/cache/appdata/ instead of /mnt/user/appdata/ and I would like not to screw up my server and containers by having everything pointing to an invalid mount point.

    1 minute ago, SelfSD said:

    What is the mount point for new disk pools

    Whatever name you choose for the pool: a pool named cache will still be at /mnt/cache, and a pool named newpool will be at /mnt/newpool.


    With the new docker folder setup is it going to be possible to migrate existing dockers to that setup without data loss?

    or how would that work?

    If someone already asked and I missed it, I'm sorry.

    2 hours ago, fithwum said:

    With the new docker folder setup is it going to be possible to migrate existing dockers to that setup without data loss?

    or how would that work?

    If someone already asked and I missed it, I'm sorry.

    It's no different than rebuilding the docker.img file; all of your appdata is safe and not affected.  Just ideally use a separate cache-only share for it.


    Just curious being new around here, what kind of timeline are we looking at for 6.9 RC and then full release? Weeks? Months?

     

    I ask because I am having to use an SSD in the array for docker right now, not a problem since I don't have a parity drive.

     

    The issue is I just got a drive to use for parity, so trying to figure out what order to do things in.

     

    Install parity now and build it (24 hours+) knowing that the SSD could break the first 256gb of parity.

     

    Move docker to cache and just eat the excessive writes for a little bit while waiting for 6.9.

     

    Wait a few weeks for 6.9 to be released and then install parity after removing the SSD from the array.

     

    If 6.9 is around the corner I might as well wait and use the extra drive as a backup for the time being. If it is still months away I will run out of space most likely as I am going to move some of my replaceable backup drives (aka, linux ISO's) into the array once I have parity working.

     

    I trust parity for replaceable data but not for irreplaceable data.


    @TexasUnraid, this is the 6.9.0-beta25 release.  There are still the RC releases to go before the final 6.9.0.  Since you are new around here, let me share something with you.  LimeTech has never released a final version of Unraid while there is a single unresolved problem.  (In fact, this is the first time that they have released a beta version as a public release!  It was done because of a problem which some users were experiencing that could only be addressed by making this beta version available.)

     

    It has typically taken six-plus months after the release of the first RC version before the final version is released.


    Yeah, I figured there would be an RC release before a full release, just was not sure on the time frame between them.

     

    Thanks for the info, so basically I have some time to kill before moving to 6.9 will be an option.

     

    Leaves me with a conundrum of what to do but think I will move that conversation to the excessive write docker thread.


    Hello,

     

    Today I attempted to upgrade to Beta25, and unfortunately, the only VM I run would not start. The VM is a Ubuntu server 18.04.4. This VM has my iGPU from my 8700K passed through used for jellyfin.


    I am currently running 6.8.3 and it works. I am not isolating this GPU via VFIO. My go file has the following at the bottom:

    modprobe i915
    chmod -R 777 /dev/dri

    On 6.8.3 this currently allows the iGPU to be used by Unraid when the VM is not running, and by the VM when it is running.


    After upgrading, I attempted to use the existing configuration; I also upgraded the configuration (for the network configuration, as shown in Space Invader One's video) and tried to rerun. I then attempted to isolate the iGPU via VFIO, rebooted, and tried to rerun. In each case, core 0 of the passed-through cores was pinned at 100%, but the VM never started. The VM log did not throw any warnings.

     

    I also noticed that, despite the modprobe being in the go file, /dev/dri did not exist until after manually entering modprobe i915 into the console.

     

    I have reverted back to 6.8.3 for the time being, but I am wondering if I missed something, or if there is something I need to do to resolve this issue.

     

    Thank you.




    This is now closed for further comments
