    [6.8.0] wsdd high CPU


    diyjeff
    • Minor

    Just noticed that on the Dashboard one CPU at a time goes to 100%.  Running top shows wsdd taking 100% (I assume that means 100% of one core).  It jumps around among both physical and HT cores, none of which are pinned to a VM.  It does not seem to be impacting any function or performance, and I had not seen this before upgrading to 6.8.0.  I do not see any issues in the system log.
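    For anyone trying to confirm the same symptom from the console, a quick way to see per-process CPU (standard Linux tools, nothing unRAID-specific assumed):

```shell
# One non-interactive top snapshot, sorted by CPU; the busiest processes appear first.
top -b -n 1 -o %CPU | head -n 15

# Or list the top CPU consumers directly; a runaway wsdd shows ~100 in the
# %CPU column, i.e. one full core saturated.
ps -eo pid,pcpu,comm --sort=-pcpu | head -n 5
```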



    User Feedback

    Recommended Comments



    Just as a note to others with this issue: I have only seen it once since adding "-i br0", and after a reboot it was resolved and has not occurred again.

     

    It has been 2-3 months since I last saw a CPU spike to 100%.  My network is exclusively WSD/SMB 2.0+ as I have disabled SMB 1.0 on all client computers as well as in unRAID.

     

    I know that for others this parameter has not resulted in a fix, but for me it seems to have worked (knock on wood).  Single CPU spike to 100% was a semi-regular occurrence before I added that parameter.
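    For readers wondering where that parameter goes: on unRAID the extra argument is entered on the SMB settings page (the field name below is from memory and may differ by version), and the effect is equivalent to launching the daemon with an explicit interface binding:

```shell
# Settings -> SMB -> WSD options (field name assumed; check your version's help text):
#   -i br0
# This restricts wsdd's discovery traffic to the br0 bridge rather than every
# interface, including docker's veth pairs, which this thread suggests may be
# the trigger. Equivalent manual invocation:
wsdd -i br0
```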

    3 minutes ago, Hoopster said:

    Just as a note to others with this issue, I have only seen it once since adding "-i br0" […]

    Now rebooted (didn't think to do this after adding that parameter 😆). I don't believe any of my client computers use SMB 1.0 either, but I'm unsure how to disable it in unRAID; I can't see a specific setting for that (didn't look for long, since the main part is working for me).

     

    I've only been using unRAID for 12 days, so it's all new to me. Overall, it's phenomenal for my needs, and the support forums are full of unbelievably helpful people!

    9 minutes ago, xxDeadbolt said:

    but I'm unsure how to disable this in unRaid

    Help turned on in Settings-->SMB Settings gives the answer (set Enable NetBIOS to 'No'):

    [screenshot: SMB Settings help text]


    Thanks! Have just disabled this as well. 

    Showing my general noobyness with unRaid in that I keep forgetting the help popups are always available!

     

    (Excuse the shoddy workmanship here)

    [photo attachment]

    1 minute ago, xxDeadbolt said:

    Showing my general noobyness with unRaid in that I keep forgetting the help popups are always available!

    No worries!  Not everyone is expected to be an expert in all things unRAID; especially not after just a couple of weeks of use. 

     

    There are no stupid questions (well, actually that does happen from time to time 😀), just lack of understanding.  Most people here are very happy to help improve understanding of unRAID.


    LOL - NEVER declare victory before a bug has officially been marked resolved! 

     

    Of course, I had to tempt fate and state that I had not seen the wsdd 100% CPU bug in 2-3 months since adding the "-i br0" SMB parameter.  It has occurred three times today and I have to reboot the server to clear it. 

     

    Next time, I'll just keep my mouth shut!


    Seeing this issue now myself on 6.8.3.

    I've implemented the -i br0 workaround and will continue to monitor.


    I have -i br0 in the settings, but wsdd has still been spiking to 100% for months.  Annoying, since I was going for a very low-power build, which is ruined by this. 😞


    Glad I found this thread; this seems to be happening to me too. I will try the -i br0 option and see how that goes.


    Just happened to me last night. WSD started using 100% of a single core shortly after I installed the ShinobiPro docker; it might be a coincidence. I have not seen this before. The only syslog entries at that time were related to the docker networking:

    Jun 28 23:17:14 TMS-740 kernel: docker0: port 16(vethf278871) entered blocking state
    Jun 28 23:17:14 TMS-740 kernel: docker0: port 16(vethf278871) entered disabled state
    Jun 28 23:17:14 TMS-740 kernel: device vethf278871 entered promiscuous mode
    Jun 28 23:17:14 TMS-740 kernel: IPv6: ADDRCONF(NETDEV_UP): vethf278871: link is not ready
    Jun 28 23:17:14 TMS-740 kernel: docker0: port 16(vethf278871) entered blocking state
    Jun 28 23:17:14 TMS-740 kernel: docker0: port 16(vethf278871) entered forwarding state
    Jun 28 23:17:14 TMS-740 kernel: docker0: port 16(vethf278871) entered disabled state
    Jun 28 23:17:14 TMS-740 kernel: eth0: renamed from veth3b25ee7
    Jun 28 23:17:14 TMS-740 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethf278871: link becomes ready
    Jun 28 23:17:14 TMS-740 kernel: docker0: port 16(vethf278871) entered blocking state
    Jun 28 23:17:14 TMS-740 kernel: docker0: port 16(vethf278871) entered forwarding state
    Jun 28 23:17:15 TMS-740 avahi-daemon[10892]: Joining mDNS multicast group on interface vethf278871.IPv6 with address fe80::5082:23ff:fedd:12af.
    Jun 28 23:17:15 TMS-740 avahi-daemon[10892]: New relevant interface vethf278871.IPv6 for mDNS.
    Jun 28 23:17:15 TMS-740 avahi-daemon[10892]: Registering new address record for fe80::5082:23ff:fedd:12af on vethf278871.*.

     

    As a side note, a server restart is not mandatory to recover from the issue; you can also stop the array, disable and re-enable the WSD daemon in settings, and start the array. I just hate having my precious uptime reset. ;)
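    The same recovery can be sketched from the command line (a guess at the mechanics, not an official procedure: unRAID is Slackware-based and wsdd runs as an ordinary process, so verify on your own system first):

```shell
# If wsdd is spinning, kill just that process instead of rebooting the server.
if pidof wsdd >/dev/null; then
    kill "$(pidof wsdd)"
fi
# Then re-enable WSD from Settings -> SMB so the daemon respawns cleanly
# (or restart Samba's rc script; path assumed from unRAID's Slackware base):
# /etc/rc.d/rc.samba restart
```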

     


     


    Just chiming in to say I experienced this issue on 6.8.3. Setting -i br0 fixed it for me.


    The problem persists on 6.9 beta25 and seems to be related to the Docker service. Turning Docker off completely under Settings -> Docker solved it, which is strange because all containers were already stopped. So maybe it's tied to the Docker service itself, whether containers are running or not.


    For whoever winds up here with this symptom: while I also configured -i br0, I can reproduce this at will by starting up an old Win 8.1 tablet.  Every time.

     

    Stop/Start the array to resolve it.

    Also 6.8.3.


    Also, thanks for posting this.  Just logging here that this seems to be a very common issue; I am also seeing it and had to stop and start the array to eliminate it.


    I had the same problem today. After I set -i br0, everything was OK again. Will continue to monitor.

     

    Thanks for all the tips


    Same issue. As someone else said, stopping the array, disabling wsdd, starting the array, stopping it again, enabling wsdd, and starting the array clears it.

    6.8.3

     

    Bridging is not enabled here, so I'd think that parameter should have no effect on my setup.

     

    Only running 2 Dockers (both storj); other than that I'm stock. I have diskspeed installed, but it wasn't running.

     


    I've been pointed in the direction of this post from another I created (not realising they were connected, due to my lack of knowledge).

     

    This seems to have been outstanding for a while. I do not have any network bridges configured, but do have dockers running.

     

    I have now disabled NetBIOS under SMB settings, as I have no SMBv1 on the network (it is 2021 now!!).


    I had the same issue, but the fix others have posted didn't take care of it for me; even turning SMB off completely only delayed it. After turning off everything I could and turning things back on one by one to see what caused the issue, I finally tracked it down to the Dynamix Cache Dirs plugin. Once it was uninstalled, the drive and CPU use dropped to idle and have been there ever since. Hopefully this helps someone else out; at least it's another thing to try if the above fix doesn't work for you.


    Nope, it's entirely tied to Enable WSD. If you have WSD disabled, have restarted, and are still seeing high usage, that is a different issue altogether.


    Also an issue here. I hope changing these settings will help. I'm currently 3 days into the trial period, and these kinds of issues are not what I'm looking for. :)

    1 hour ago, mivadebe said:

    Also an issue here. Hope changing these settings will help. I'm currently 3 days in trial period and these kind of issues are not what I'm looking for:)

    The WSD daemon (wsdd) is not developed by Limetech.  They have merely chosen to implement it in unRAID for those who wish to have an alternative to SMB/NetBIOS for network discovery.  There is probably not much they can do to fix this issue unless it is proven to be some interaction between wsdd and core unRAID services that causes the problem.

     

    I have completely disabled SMB v1 on all my Windows clients and am using WSD exclusively.  The -i br0 parameter has mostly prevented the single-core 100% CPU usage issue for me.  Every 3-4 months it may pop up, but a reboot fixes it (or the stop array, disable WSD, start array, enable WSD procedure).


    There is a patch for a newer version of wsdd that removes an infinite loop under certain conditions. It was mentioned in one of these threads.

     

    Limetech could release patches/updates for their stable 6.8 build, but they seem to have forgone that for a long time now in favor of the 6.9 betas and RCs.





