• Unraid OS version 6.7.0-rc3 available


    limetech

More bug fixes. More refinements to the webGUI, and be sure to check out the latest Community Apps plugin!

     

    Other highlights:

• Parity sync/Data rebuild/Check pause/resume capability.  The main components are in place; pause/resume state is not yet preserved across system restarts, however.
• Enhanced syslog capability.  Check out Settings/Network Services/Syslog Server.
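For anyone wanting to try the new syslog server: a remote Linux client can forward its log stream to Unraid with a standard rsyslog forwarding rule. A minimal sketch, assuming your server is named `tower` and the syslog server listens on the conventional UDP port 514 (substitute whatever you configured; the file name is hypothetical):

```conf
# /etc/rsyslog.d/10-unraid.conf on the client machine
# Forward all facilities/priorities to the Unraid syslog server over UDP.
# Use @@tower:514 instead for TCP.
*.* @tower:514
```

After restarting rsyslog on the client, its messages should appear in the log file the Unraid syslog server writes.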

     

    Special thanks once again to @bonienl and @Mex.

     

    Version 6.7.0-rc3 2019-02-09

    Base distro:

    • jq: version 1.6
    • oniguruma: version 5.9.6_p1
    • php: version 7.2.14

    Linux kernel:

    • version: 4.19.20
    • md/unraid: version 2.9.6 (support sync pause/resume)
    • patch: PCI: Quirk Silicon Motion SM2262/SM2263 NVMe controller reset: device 0x126f/0x2263

    Management:

    • emhttp: use mkfs.btrfs defaults for metadata and SSD support
    • emhttp: properly dismiss "Restarting services" message
    • firmware:
    • added BCM20702A0-0a5c-21e8.hcd
    • added BCM20702A1-0a5c-21e8.hcd
    • vfio-pci script: bug fixes
    • webgui: telegram notification agent bug fixes
    • webgui: VM page: allow long VM names
• webgui: Dashboard: create more space for Docker/VM names (3 columns)
    • webgui: Dashboard: include links to settings
    • webgui: Dashboard: fix color consistency
    • webgui: Syslinux config: replace checkbox with radio button
    • webgui: Docker page: single column for CPU/Memory load
• webgui: Docker: display memory usage in advanced view
    • webgui: Dashboard: fixed wrong display of memory size
    • webgui: Dashboard: fix incorrect memory type
    • webgui: Plugin manager: align icon size with rest of the GUI
    • webgui: Plugin manager: enlarge readmore height
    • webgui: Plugin manager: add .png option to Icon tag
    • webgui: Plugin manager: table style update
    • webgui: Added syslog server functionality
    • webgui: syslog icon update
    • webgui: Main: make disk identification mono-spaced font
    • webgui: Added parity pause/resume button
    • webgui: Permit configuration of parity device(s) spinup group.



    User Feedback

    Recommended Comments



    Upgraded yesterday from 6.7.0-rc2 to rc3, went smoothly.

     

    I captured a diagnostics before and after and did a diff compare. No surprises, everything looks as it should so I won't bother attaching.

     

    15 minutes ago, Lev said:

    Upgraded yesterday from 6.7.0-rc2 to rc3, went smoothly.

     

    I captured a diagnostics before and after and did a diff compare. No surprises, everything looks as it should so I won't bother attaching.

     

Do you have XFS-encrypted cache and array?

    Edited by nuhll
    2 hours ago, FreeMan said:

    I just wanted to voice my opinion.

    I agree with you. I said essentially the same in my rant in the -rc2 section.


Can we get a toggle for some more colorful icons? I find the new B/W icons really boring, and they don't fit the overall theme.


Updated a Z400; seems fine. The following is new in the logs:

     

    Feb 11 06:30:37 Twins kernel: ACPI BIOS Warning (bug): 32/64X length mismatch in FADT/Pm1aEventBlock: 32/16 (20180810/tbfadt-569)
    Feb 11 06:30:37 Twins kernel: ACPI BIOS Warning (bug): 32/64X length mismatch in FADT/Gpe0Block: 128/32 (20180810/tbfadt-569)
    Feb 11 06:30:37 Twins kernel: ACPI BIOS Warning (bug): Invalid length for FADT/Pm1aEventBlock: 16, using default 32 (20180810/tbfadt-674)

     

    I haven't had time to research, but I wouldn't be surprised if it's an HP thing. 

     

Also the following:

     

    Feb 11 06:30:37 Twins kernel: ACPI BIOS Error (bug): Failure creating [\_SB.PCI0._OSC.CAPD], AE_ALREADY_EXISTS (20180810/dsfield-183)
    Feb 11 06:30:37 Twins kernel: ACPI Error: Method parse/execution failed \_SB.PCI0._OSC, AE_ALREADY_EXISTS (20180810/psparse-516)
    Feb 11 06:30:37 Twins kernel: acpi PNP0A08:00: _OSC failed (AE_ALREADY_EXISTS); disabling ASPM

     

     

    Feb 11 06:30:37 Twins kernel: acpi PNP0A08:02: _OSC failed (AE_NOT_FOUND); disabling ASPM

     

    Feb 11 06:30:48 Twins rpc.statd[1704]: Failed to read /var/lib/nfs/state: Success

     

     

Again, I think it's an HP issue; it wouldn't be the first regarding ACPI. But the server rebooted and started 3 VMs automatically, as set up to do so.

     

     


I have to say, since rolling back to 6.6.6 I really do like having more colorful icons. It makes navigating the WebUI much easier. Not a deal breaker for me for sure, but it would be nice to offer the new/old icons as themes so users have a choice.


So, when adding a new disk, Unraid started disk clearing. Is this new in 6.7, or did this happen with 6.6.x? I can't remember it happening before.

    1 hour ago, Niklas said:

So, when adding a new disk, Unraid started disk clearing. Is this new in 6.7, or did this happen with 6.6.x? I can't remember it happening before.

This has been standard behavior when adding a disk to a parity-protected array ever since v5. The difference in v5 was that the array was offline while the clear took place, while in v6 the array is usable (although the disk itself is not until the Clear finishes).

    Just now, itimpi said:

This has been standard behavior when adding a disk to a parity-protected array ever since v5. The difference in v5 was that the array was offline while the clear took place, while in v6 the array is usable (although the disk itself is not until the Clear finishes).

I see. My memory is not what it used to be, I think. ;-) Thanks.

    Just now, Niklas said:

I see. My memory is not what it used to be, I think. ;-) Thanks.

If you have run pre-clear (which many do just to stress-test new disks), the Clear phase gets skipped, making the disk immediately available on starting the array. Also, if you do not have parity then a Clear is not required.

    5 minutes ago, itimpi said:

This has been standard behavior when adding a disk to a parity-protected array ever since v5. The difference in v5 was that the array was offline while the clear took place, while in v6 the array is usable (although the disk itself is not until the Clear finishes).

Probably goes back long before v5. I'm pretty sure it worked that way on 4.7 when I started. And of course, it is necessary to maintain parity: unless the disk added to a new data slot is all zeros, parity will not be valid.
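The "all zeros" requirement can be shown with a toy model. Unraid's single parity is a byte-wise XOR across the data disks, so an all-zero disk can join the array without changing parity at all, while any non-zero disk (a freshly formatted one, say) would invalidate it. A minimal sketch with made-up 4-byte "disks", not Unraid's actual md driver code:

```python
from functools import reduce

def parity_of(disks):
    """Byte-wise XOR parity across equal-sized 'disks'."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*disks))

d1 = bytes([0x12, 0x34, 0x56, 0x78])
d2 = bytes([0xAB, 0xCD, 0xEF, 0x01])
parity = parity_of([d1, d2])

# An all-zero disk leaves parity unchanged, since x ^ 0 == x:
assert parity_of([d1, d2, bytes(4)]) == parity

# A disk holding any non-zero bytes (e.g. filesystem metadata) breaks it:
assert parity_of([d1, d2, bytes([0xEB, 0x90, 0, 0])]) != parity
```

The same argument holds at real sector sizes, which is why Unraid must either clear the disk itself or trust a pre-clear signature before adding it to a parity-protected array.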

    5 minutes ago, trurl said:

Probably goes back long before v5. I'm pretty sure it worked that way on 4.7 when I started. And of course, it is necessary to maintain parity: unless the disk added to a new data slot is all zeros, parity will not be valid.

I was under the impression that Unraid could use the new drive directly if it was newly formatted (by Unraid; no data = 0). The drives I have added in the past have been brand new and unformatted. I started using Unraid when pre-clear was no longer a thing. But I also started with no parity and one data drive.

    Edited by Niklas
    12 minutes ago, JoeUnraidUser said:

    I have the same problem.  I had 2 Marvell HBAs and I replaced them with 2 SAS9211-8I HBAs.  Is there a thread for this problem?

This thread seems to have high hopes for 6.7 (4.19/4.20 kernel):

     

     

    5 minutes ago, Niklas said:

I was under the impression that Unraid could use the new drive directly if it was newly formatted (by Unraid; no data = 0). The drives I have added in the past have been brand new and unformatted. I started using Unraid when pre-clear was no longer a thing.

An empty disk is not a clear disk. The format operation writes an empty filesystem to the disk, which is a tiny fraction of the total bits on the disk. Format doesn't write zeros to the entire disk, nor does it need to; if it did, it would take a long time even on a small disk. Even the empty filesystem itself is not zeros: it has the metadata to represent an empty top-level folder, ready to have files and folders added to it.


    What I mean is that an empty disk (newly formatted) could be seen as clear (all sectors as zeroes). But that is just a theory of mine and how I thought it worked. ;) (without deep knowledge) 

    Edited by Niklas
    7 minutes ago, Niklas said:

    What I mean is that an empty disk (newly formatted) could be seen as clear (all sectors as zeroes). But that is just a theory of mine and how I thought it worked. ;) (without deep knowledge) 

    If it's not actually cleared then there will be trouble down the road.  The only way to ensure an HDD is cleared is to write zeros to it.  If you use the pre-clear plugin, the last step it takes before completion is to write a special label to the device which Unraid OS interprets as "has already been set to all-zeros".  This is why adding a precleared disk to the array is instantaneous.
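"Cleared" here means literally every byte reads back zero, which is straightforward to check. A hypothetical checker as a sketch (demonstrated on a scratch image file; pointing it at a real `/dev/sdX` would need root, and zeroing an actual disk is destructive):

```python
def is_all_zeros(path, chunk=1 << 20):
    """Return True if every byte of the file/device at `path` is zero."""
    with open(path, "rb") as f:
        while block := f.read(chunk):
            if block.count(0) != len(block):
                return False
    return True

# Scratch file standing in for a disk: 1 MiB of zeros, like a cleared drive.
with open("scratch.img", "wb") as f:
    f.write(bytes(1 << 20))
print(is_all_zeros("scratch.img"))  # → True
```

A pre-clear style tool does essentially this read-back pass after writing zeros, then records its signature so Unraid OS can skip its own clear.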

    Just now, limetech said:

    If it's not actually cleared then there will be trouble down the road.  The only way to ensure an HDD is cleared is to write zeros to it.  If you use the pre-clear plugin, the last step it takes before completion is to write a special label to the device which Unraid OS interprets as "has already been set to all-zeros".  This is why adding a precleared disk to the array is instantaneous.

I understand, thanks. I've never used pre-clear, but I have been reading a lot about it. I just can't remember ever seeing the disk clearing before, and I have added and removed drives several times. The thing is, I could have done it before I added the parity drive. I need to pre-clear my brain, I think. ;) My original question is well answered now. Thanks!

    21 minutes ago, Niklas said:

    What I mean is that an empty disk (newly formatted) could be seen as clear (all sectors as zeroes). But that is just a theory of mine and how I thought it worked. ;) (without deep knowledge) 

    Just in case there is still any doubt, that theory of yours is completely wrong. As I already explained, an empty disk isn't a clear disk. And even after you format a clear disk, that empty disk isn't clear either. It has an empty filesystem on it.


Not sure if this is any help, but a positive from my point of view is that Docker-to-USB hard drive transfers seem quicker now in Krusader. I've yet to test again in Duplicati, which was also giving me speed issues to USB, but I cannot do that until all my data is re-added from my external drive.

     

    Thanks

     

    Terran

    11 minutes ago, trurl said:

    Just in case there is still any doubt, that theory of yours is completely wrong. As I already explained, an empty disk isn't a clear disk. And even after you format a clear disk, that empty disk isn't clear either. It has an empty filesystem on it.

    I know. I was brainstorming about the system TREATING the newly formatted disk as "clear". Not that it IS "clear". I hope that I'm clear about what I was thinking, even if it was wrong. English is not my first language.

    5 minutes ago, Niklas said:

    I know. I was brainstorming about the system TREATING the newly formatted disk as "clear". Not that it IS "clear". I hope that I'm clear about what I was thinking, even if it was wrong. English is not my first language.

    No worries!

    22 minutes ago, Niklas said:

    I know. I was brainstorming about the system TREATING the newly formatted disk as "clear". Not that it IS "clear". I hope that I'm clear about what I was thinking, even if it was wrong. English is not my first language.

When you realise that parity has no concept of ‘data’, merely of sectors with no understanding of what they contain, you realise why the disk HAS to be all zeros when adding it, if parity is to be maintained.

Humans tend to think of a disk with no files on it as being ‘clear’, but that is not truly the case when one thinks at the physical-sector level.

    Just now, itimpi said:

When you realise that parity has no concept of ‘data’, merely of sectors with no understanding of what they contain, you realise why the disk HAS to be all zeros when adding it, if parity is to be maintained.

Humans tend to think of a disk with no files on it as being ‘clear’, but that is not truly the case when one thinks at the physical-sector level.

Yes, I know. I have some deeper understanding when it comes to sectors and how storage devices work, just not much experience with parity calculation in Unraid yet. I found Unraid in Oct–Nov 2018 and have been reading and watching videos about parity (in Unraid), so I'm learning as I go. ;)


I don't suppose it would be possible to better support SAS drives, would it?

Like having the ability to spin them down and run SMART tests on them?




This is now closed for further comments
