Unraid OS version 6.7.0-rc8 available


    limetech

    Hopefully the last release before 6.7.0 stable.

     

    Two notes:

    1. Added the sdparm command in order to get SAS spin-up/down implemented properly.  If you have SAS controllers with SAS HDDs, please send me a PM if you are willing to help debug this.
    2. Removed kernel patch which was trying to work around problematic Silicon Motion SM2262/SM2263 NVMe controllers.  A different workaround is recommended.

     

    Version 6.7.0-rc8 2019-04-30

    Base distro:

    • at-spi2-core: version 2.32.1
    • bash: version 5.0.007
    • cifs-utils: version 6.9
    • dhcpcd: version 7.2.0
    • docker: version 18.09.5
    • glib2: version 2.60.1
    • glibc-zoneinfo: version 2019a
    • gtk+3: version 3.24.8
    • icu4c: version 64.2
    • kernel-firmware: version 20190424_4b6cf2b
    • libcap: version 2.27
    • libcroco: version 0.6.13
    • libdrm: version 2.4.98
    • libpng: version 1.6.37 (CVE-2018-14048 CVE-2018-14550 CVE-2019-7317)
    • libpsl: version 0.21.0
    • nano: version 4.2
    • ncurses: version 6.1_20190420
    • nghttp2: version 1.38.0
    • openssh: version 8.0p1
    • pcre2: version 10.33
    • pixman: version 0.38.4
    • samba: version 4.9.6
    • sdparm: version 1.10
    • sg3_utils: version 1.44
    • sqlite: version 3.28.0
    • util-linux: version 2.33.2
    • wget: version 1.20.3 (CVE-2019-5953)
    • zstd: version 1.4.0

    Linux kernel:

    • version: 4.19.37
    • remove patch: PCI: Quirk Silicon Motion SM2262/SM2263 NVMe controller reset: device 0x126f/0x2263

    Management:

    • docker: preserve container fixed IPv4 and IPv6 addresses across reboot/docker restart
    • emhttp: ignore *.key files that begin with "._"
    • networking: pass user-specified MAC address through to bridge
    • smartmontools: update drivedb and hwdata/{pci.ids,usb.ids,oui.txt,manuf.txt}
    • webgui: Allow optional notifications on background docker update checks
    • webgui: Dashboard: fixed hanging when no share exports are defined
    • webgui: Minor textual changes
    • webgui: Add GameServers to category for docker containers

     




    User Feedback

    Recommended Comments



    On 5/3/2019 at 5:25 PM, limetech said:

    That is HP's "hpsa" driver:

    https://sourceforge.net/projects/cciss/files/hpsa-3.0-tarballs/

     

    The version in Linux kernel 4.19 is "3.4.20-125".  You can see from the above link that there are newer hpsa versions, but those drivers are designed to be built/integrated into the Red Hat variant of the kernel and do not build with ours.

     

    Checking kernels 5.0 and 5.1 reveals they also use hpsa driver "3.4.20-125".  Why hardware vendors insist on maintaining their own driver instead of the stock kernel driver is a mystery.  Eventually someone who HP cares about will complain and they'll update the stock kernel driver.

    @johnnie.black @limetech

     

    Thanks for your response. I guess I'll just have to stick with 6.6.7 for the foreseeable future. In the meantime I have emailed HP to try to hasten a solution, and I will update you if I get a response.

     

    Regards

     

    Duggie

    Link to comment
    17 hours ago, Squid said:

    Post the diagnostics, the applicable VM XML file, and a screenshot of the error.

    Hi, I have attached the diagnostics and the VM XML file, though the latter will have changed as I've tried to get the VM working.

    <?xml version='1.0' encoding='UTF-8'?>
    <domain type='kvm'>
      <name>Windows 10</name>
      <uuid>f7924a7b-24f4-a50f-9c47-8bca810aa128</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
      </metadata>
      <memory unit='KiB'>16777216</memory>
      <currentMemory unit='KiB'>16777216</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>5</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='1'/>
        <vcpupin vcpu='1' cpuset='2'/>
        <vcpupin vcpu='2' cpuset='3'/>
        <vcpupin vcpu='3' cpuset='4'/>
        <vcpupin vcpu='4' cpuset='5'/>
      </cputune>
      <os>
        <type arch='x86_64' machine='pc-i440fx-3.0'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/f7924a7b-24f4-a50f-9c47-8bca810aa128_VARS-pure-efi.fd</nvram>
        <boot dev='hd'/>
      </os>
      <features>
        <acpi/>
        <apic/>
        <hyperv>
          <relaxed state='on'/>
          <vapic state='on'/>
          <spinlocks state='on' retries='8191'/>
        </hyperv>
      </features>
      <cpu mode='host-passthrough' check='none'>
        <topology sockets='1' cores='5' threads='1'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='hypervclock' present='yes'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/virtio-win-0.1.160-1.iso'/>
          <target dev='hdb' bus='ide'/>
          <readonly/>
          <address type='drive' controller='0' bus='0' target='0' unit='1'/>
        </disk>
        <controller type='usb' index='0' model='ich9-ehci1'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci1'>
          <master startport='0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci2'>
          <master startport='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci3'>
          <master startport='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pci-root'/>
        <controller type='ide' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:66:14:08'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </interface>
        <serial type='pty'>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <channel type='unix'>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='tablet' bus='usb'>
          <address type='usb' bus='0' port='2'/>
        </input>
        <input type='mouse' bus='ps2'/>
        <input type='keyboard' bus='ps2'/>
        <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
          <listen type='address' address='0.0.0.0'/>
        </graphics>
        <video>
          <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
        </video>
        <hostdev mode='subsystem' type='usb' managed='no'>
          <source>
            <vendor id='0x0424'/>
            <product id='0x2228'/>
          </source>
          <address type='usb' bus='0' port='1'/>
        </hostdev>
        <memballoon model='virtio'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </memballoon>
      </devices>
    </domain>

    image.png.bdb97c76d5b5b3922e7c16b9163ae1c4.png

    atlas-diagnostics-20190507-1849.zip

    Link to comment
    On 5/3/2019 at 5:25 PM, limetech said:

    That is HP's "hpsa" driver:

    https://sourceforge.net/projects/cciss/files/hpsa-3.0-tarballs/

     

    The version in Linux kernel 4.19 is "3.4.20-125".  You can see from the above link that there are newer hpsa versions, but those drivers are designed to be built/integrated into the Red Hat variant of the kernel and do not build with ours.

     

    Checking kernels 5.0 and 5.1 reveals they also use hpsa driver "3.4.20-125".  Why hardware vendors insist on maintaining their own driver instead of the stock kernel driver is a mystery.  Eventually someone who HP cares about will complain and they'll update the stock kernel driver.

    So I emailed the maintainers from the SF link you provided, and this was their response:

     

    Hi Duggie,

    A quick glance at the Unraid changelog suggests that unRAID 6.6.7 is using the 4.18.20 kernel. Do you know what kernel is in the newer RC versions?

     

    Feel free to submit the diagnostic logs to us. We will take a look. Contrary to their comments, we do maintain the kernel driver as well and have some patches staged to go upstream soon.

     

    Thanks,
    Scott

     

    After sending my diagnostic logs and the Unraid changelogs for the 6.7.0-rcX family, I received this reply from Don:

    I see a DMAR error logged. That may be what caused the controller lockup, which caused the OS to send down a reset to the drive.

     

    Are you able to update the driver and build it for a test? If so, there is a structure member in the scsi_host_template called .max_sectors.

    It is set to 2048; wondering if you can change it to 1024 for a test?

     

    If not, I would have to know what OS I could do the build for you on; I'm not really sure about Unraid.

     

     

    Feb 25 23:03:09 TheNewdaleBeast kernel: DMAR: DRHD: handling fault status reg 2

    Feb 25 23:03:09 TheNewdaleBeast kernel: DMAR: [DMA Read] Request device [81:00.0] fault addr fe8c0000 [fault reason 06] PTE Read access is not set

    Feb 25 23:03:40 TheNewdaleBeast kernel: hpsa 0000:81:00.0: scsi 14:0:7:0: resetting physical  Direct-Access     SEAGATE  ST4000NM0023     PHYS DRV SSDSmartPathCap- En- Exp=1

    Feb 25 23:03:57 TheNewdaleBeast avahi-daemon[4764]: Leaving mDNS multicast group on interface br0.IPv6 with address fe80::1085:73ff:fedb:90d4.

    Feb 25 23:03:57 TheNewdaleBeast avahi-daemon[4764]: Joining mDNS multicast group on interface br0.IPv6 with address fd05:820d:9f35:1:d250:99ff:fec2:52fb.

    Feb 25 23:03:57 TheNewdaleBeast avahi-daemon[4764]: Registering new address record for fd05:820d:9f35:1:d250:99ff:fec2:52fb on br0.*.

    Feb 25 23:03:57 TheNewdaleBeast avahi-daemon[4764]: Withdrawing address record for fe80::1085:73ff:fedb:90d4 on br0.

    Feb 25 23:03:58 TheNewdaleBeast ntpd[3173]: Listen normally on 6 br0 [fd05:820d:9f35:1:d250:99ff:fec2:52fb]:123

    Feb 25 23:03:58 TheNewdaleBeast ntpd[3173]: new interface(s) found: waking up resolver

    Feb 25 23:04:33 TheNewdaleBeast kernel: hpsa 0000:81:00.0: Controller lockup detected: 0x00130000 after 30

    Feb 25 23:04:33 TheNewdaleBeast kernel: hpsa 0000:81:00.0: controller lockup detected: LUN:0000000000800601 CDB:01030000000000000000000000000000

    Feb 25 23:04:33 TheNewdaleBeast kernel: hpsa 0000:81:00.0: Controller lockup detected during reset wait

    Feb 25 23:04:33 TheNewdaleBeast kernel: hpsa 0000:81:00.0: scsi 14:0:7:0: reset physical  failed Direct-Access     SEAGATE  ST4000NM0023     PHYS DRV SSDSmartPathCap- En- Exp=1

    Feb 25 23:04:33 TheNewdaleBeast kernel: sd 14:0:7:0: Device offlined - not ready after error recovery

     

     

    @limetech would you be able to assist as I am currently about 8000 feet below sea level, with only a snorkel for comfort!

    Edited by Duggie264
    updated to add responses from HPSA Devs
    Link to comment

    Hello @limetech 

    I have a very odd issue. I was trying to install Netdata from the app store, and for some reason Docker would not display the install progress, just a blank screen. Nothing else seemed off at the time, until I noticed extremely slow speeds on the mover (less than 5 kbps). I was not able to load logs or download diagnostics, but navigating the GUI seemed fine. My Docker containers show as running, but I cannot access them. After a forced reboot, a parity check is now stuck at 0.00% after over an hour, and I cannot stop the parity check or the array. Most stuff on the command line hangs too.

    I am not sure where to go from here.

     

     

    Edit: It looks like my docker.img was corrupted; with Docker disabled, everything runs fine. I created a new docker.img file and will have to restore my containers later. I also did a new config on the array to prevent the hanging parity check. I seem to have some BTRFS issues as well that a BTRFS check can't fix, so I'm moving app/data to the array and formatting the cache disks.

    Edited by Can0nfan
    Link to comment
    11 hours ago, J89eu said:

    image.png.bdb97c76d5b5b3922e7c16b9163ae1c4.png

    This is the usual error if you remove a USB device, for example a keyboard or mouse, which you have set up in the VM settings for passthrough. Plug in the specific device or remove this part from your XML:

     

        <hostdev mode='subsystem' type='usb' managed='no'>
          <source>
            <vendor id='0x0424'/>
            <product id='0x2228'/>
          </source>
          <address type='usb' bus='0' port='1'/>
        </hostdev>

     

    Link to comment
    7 hours ago, bastl said:

    This is the usual error if you remove a USB device, for example a keyboard or mouse, which you have set up in the VM settings for passthrough. Plug in the specific device or remove this part from your XML:

     

    
        <hostdev mode='subsystem' type='usb' managed='no'>
          <source>
            <vendor id='0x0424'/>
            <product id='0x2228'/>
          </source>
          <address type='usb' bus='0' port='1'/>
        </hostdev>

     

    That's the thing, I haven't removed anything. I just updated to the latest RC and it hasn't booted since. I pass through a mouse, keyboard, M.2, Vega 56 and its sound card, along with an additional sound card.

     

    What I've just done is remove that section and then add everything back in, which works, but I don't think my M.2 is passed through to the VM now. I'm not at home though, so I can't confirm this, but it's not booting.

     

    This is the code I previously used that doesn't seem to work now:

     

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
      </source>
      <boot order='1'/>
      <alias name='hostdev3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </hostdev>

    EDIT: I did manage to save the above by adding a vdisk in the normal settings and saving, then editing again in the XML view and replacing the new code with the above. I can then see that the M.2 passes through, as it shows under "Other PCI devices" in the options, though the VM still doesn't boot from the M.2; it just starts with no errors and doesn't boot.

    Edited by J89eu
    Link to comment
    Quote

    Are you able to update the driver and build it for a test? If so, there is  a structure member in the scsi_host_template called .max_sectors.

    It is set to 2048, wondering if you can change it to 1024 for a test?

    Next release will have this patch.
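    For anyone wanting to try Don's suggestion before the release, a helper along these lines could flip the value in a local copy of the driver source before rebuilding. This is only a sketch: the literal ".max_sectors = 2048," source line and the file path are assumptions; verify against your kernel tree first.

    ```shell
    # Hypothetical sketch: halve .max_sectors in a local copy of the hpsa
    # driver source. The exact source line is an assumption; check your
    # kernel tree before running this against real files.
    halve_max_sectors() {
        sed -i 's/\.max_sectors = 2048,/.max_sectors = 1024,/' "$1"
    }

    # Example (path is a placeholder):
    # halve_max_sectors drivers/scsi/hpsa.c
    ```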

    Link to comment

    How close are we to the final release? @limetech I am on RC8 and having issues with the mover moving them from my cache to the array. Also, are you guys fixing the IPtables in openVPN? Thank you very much.

    Link to comment
    10 minutes ago, Tucubanito07 said:

    I am on RC8 and having issues with the mover moving them from my cache to the array.

    What problem are you referring to? What is 'them' in this context?

    Link to comment
    11 minutes ago, Tucubanito07 said:

    I hit the Move button on the Main screen because there is 833GB used in cache and it's not moving to the array.

    Can't help without diags, and don't know what you're referring to re: "IPtables in openVPN".

    Link to comment
    15 minutes ago, Tucubanito07 said:

    Where should I send it to? @limetech

     

    Post diagnostics.zip right here in this topic.  Would have been better to post this issue in General Support though.

    Link to comment

    Can you give an example of some files that you think should be moved and are not being moved? Alternatively, turn on mover logging, start the mover, and then post new diagnostics.

     

    I note that quite a few shares are set to Use Cache=No.  If you ever get any files on the cache that logically belong to such a share, mover will not move them (turning on the help in the GUI should make this clear).  Only shares which are set to Use Cache=Yes will have files moved from cache to array.
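    As a rough illustration of that rule (this is just a sketch of the behaviour described above, not Unraid's actual mover code):

    ```shell
    # Sketch of mover behaviour per share "Use Cache" setting, as described
    # above. Illustrative only; not Unraid's actual mover logic.
    mover_direction() {
        case "$1" in
            yes)    echo "cache -> array" ;;
            prefer) echo "array -> cache" ;;
            no)     echo "mover ignores files on cache" ;;
            only)   echo "files stay on cache" ;;
            *)      echo "unknown setting" ;;
        esac
    }

    mover_direction yes      # cache -> array
    mover_direction prefer   # array -> cache
    ```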

    Link to comment
    23 minutes ago, Tucubanito07 said:

    I changed some to Yes instead of Prefer and will see if that works. If it does, I will report back. Thank you @itimpi

    Prefer means try to move files from the array to the cache, and is the exact opposite of Yes.

    Link to comment
    32 minutes ago, itimpi said:

    Prefer means try to move files from the array to the cache, and is the exact opposite of Yes.

    Thank you for helping out. 👍

    Link to comment
    On 5/1/2019 at 2:19 AM, limetech said:

    The code is not in place to fix this yet, just the packages required for a fix.  I only have limited SAS hardware to test with, and so far things are not working correctly, but I think it might be an issue with the enclosure I'm using.

    Sent PM to help debug.

    Link to comment

    Having issues with one newer 6TB SAS Seagate drive spinning down.  I am on 6.7.2.  I sent this as a PM as well.

     

    If you still need feedback for SAS spin-down, I have more information.

     

    sg_requests -s -v /dev/sde will give a result you can work with.

    Additional sense: Standby condition activated by timer

    It will have conditions based on which power options are turned on for the drive.  You can turn on IDLE, IDLE_B, IDLE_C, and Standby.  (The power options are explained here: https://www.seagate.com/files/docs/pdf/en-GB/whitepaper/tp608-powerchoice-tech-provides-gb.pdf)

     

    I am going to use this info to create a script that uses sg_start --stop /dev/sd? to spin down drives when the timer is triggered.  My older SAS drives work with Unraid natively, but the newer ones don't seem to respond to hdparm or whatever Unraid is using to spin down.
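    A minimal sketch of what such a script might look like (the device list is a placeholder and the standby-sense string is taken from the sg_requests output quoted above; sg_requests and sg_start come from sg3_utils, which this release updates to 1.44):

    ```shell
    #!/bin/bash
    # Sketch of a SAS spin-down helper. Device names below are placeholders;
    # the sense-text match is based on the sg_requests output quoted above.

    # Return success if sg_requests already reports a timer-activated standby.
    in_standby() {
        sg_requests -s "$1" 2>/dev/null | grep -q "Standby condition activated by timer"
    }

    # Issue a SCSI STOP UNIT to spin the drive down, unless already in standby.
    spin_down() {
        if in_standby "$1"; then
            echo "$1: already in standby"
        else
            echo "$1: issuing STOP UNIT"
            sg_start --stop "$1"
        fi
    }

    # Guard actual execution so the functions can be sourced safely.
    if [ "${1:-}" = "--run" ]; then
        for dev in /dev/sde /dev/sdf; do   # placeholder device list
            spin_down "$dev"
        done
    fi
    ```

    Guarding the loop behind a `--run` flag keeps it safe to source the file just for the functions, e.g. from a cron job or the User Scripts plugin.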

    Link to comment




