wsdd 100% using 1 core

No need to stop sharing SMB; just stop using WSDD. It's an option in the UI.
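For anyone who wants to confirm wsdd is actually the busy process before flipping the setting, here is a minimal Python sketch (an illustration only, assuming a standard Linux /proc layout; run it from the Unraid console):

import os

# Scan /proc for processes whose name contains "wsdd" so you can
# confirm which PID is pegging the core before disabling it in the UI.
for pid in filter(str.isdigit, os.listdir('/proc')):
    try:
        with open(f'/proc/{pid}/comm') as f:
            name = f.read().strip()
    except FileNotFoundError:
        continue  # process exited while we were scanning
    if 'wsdd' in name:
        print(pid, name)

If it prints a PID, check that PID's CPU column in top to confirm it is the one spinning.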

On 5/12/2020 at 6:07 PM, BRiT said:

No need to stop sharing SMB; just stop using WSDD. It's an option in the UI.

Thank you. Is there any fix for this yet?

Edited by itskamel

46 minutes ago, itskamel said:

Thank you. Is there any fix for this yet?

Not yet; I still have it disabled. I'm waiting for the new version, 6.9, to try again and check.


I'm seeing this now on a brand-new (to Unraid) system running 6.8.3. It's a trial key with only 2 drives and no additional setup: no VMs, no Docker setup, not even any plugins installed or any data or shares loaded.

 

So this is likely more basic than suggested above. Note that I don't have this issue on my primary production system, so maybe it's CPU-generation dependent or some other hardware interaction.

 

FWIW, the old system is 2x AMD Opteron 2431 on a Supermicro H8DM8-2; the new system is 1x AMD Opteron 6274 on a Supermicro H8DG6.

 

 


I now get this happening regularly on a Lenovo ThinkStation E31 (Xeon E3-1225 v2); it never happened in 8 years of continuous running on my old HP MicroServer N36L. I have been running unRAID 24x7 since 2011 (originally version 4.7, currently 6.8.3 on a Basic license) and never had the problem until moving to the new hardware about 4 months ago. It's the same installation: I just moved the USB drive and all the disks across to the ThinkStation, and the problem started and has been recurring since. Stopping then restarting the array fixes it for a few days or weeks, then it comes back again. :(

6 minutes ago, tj80 said:

I now get this happening regularly on a Lenovo ThinkStation E31

WS Discovery (the wsdd process) did not even exist until version 6.8 of unRAID, so you could not have had this problem until moving to that version of the OS, regardless of hardware configuration.

 

There is nothing so far that indicates the problem is hardware-related, so I don't think your move to the ThinkStation was the trigger for the problem.

 

If you are not using WS Discovery, you can always disable it in SMB settings and the problem is guaranteed to disappear.

 

 


WS Discovery is a welcome convenience on my network and I would prefer to leave it running. Is it possible to lock it to a specific core so it plays more nicely with Docker and VM CPU pinning settings?

 

As it stands, I use CPU pinning to give VMs and Docker containers certain cores for CPU-heavy workloads like a game server or a Plex container, so it would be nice to just throw WS Discovery onto specific cores that aren't shared with that CPU-dependent stuff.

3 minutes ago, axipher said:

WS Discovery is a welcome convenience on my network and I would prefer to leave it running. Is it possible to lock it to a specific core so it plays more nicely with Docker and VM CPU pinning settings?

As it stands, I use CPU pinning to give VMs and Docker containers certain cores for CPU-heavy workloads like a game server or a Plex container, so it would be nice to just throw WS Discovery onto specific cores that aren't shared with that CPU-dependent stuff.

There isn't a way to specify a core for this function. Additionally, when the issue happens, the core that is at 100% may hop to another core.
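For what it's worth, Linux itself can retarget a running process's CPU affinity by hand. Here is a hypothetical sketch using Python's stdlib binding for sched_setaffinity (the core numbers are placeholders, it needs root, it assumes a wsdd process is running, and it would have to be redone after every array start):

import os
import subprocess

# Hypothetical workaround, not an Unraid feature: confine a running wsdd
# process to cores 2-3 so the spin stays off cores reserved for VMs.
wsdd_pid = int(subprocess.check_output(['pidof', 'wsdd']).split()[0])  # first wsdd PID
os.sched_setaffinity(wsdd_pid, {2, 3})   # placeholder core set
print(os.sched_getaffinity(wsdd_pid))    # verify the new affinity mask

Note this would only confine the burn; the pinned core would still sit at 100% until the underlying bug is fixed.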


Same problem, 6.9 beta 25.  

 

Shutting down Docker made it stop.  I only have one container in there and it wasn't even started.

On 8/15/2020 at 9:00 PM, trevisthomas said:

Same problem, 6.9 beta 25.  

 

Shutting down Docker made it stop.  I only have one container in there and it wasn't even started.

Same here. Shutting down Docker (in Settings -> Docker) fixed it here, too. Strange, because all the containers were already stopped, so it must be something with the Docker service itself.

Edited by Videodr0me


Patch details:

"This prevents infinite loops on zero-length routing attributes."
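To illustrate the class of bug the patch describes, here is a minimal Python sketch (hypothetical, not the actual wsdd source) of a netlink-style attribute walk. Each routing attribute carries its own length, so a zero-length attribute means the parse offset never advances, and without a guard the loop spins forever on one core, which matches the symptom in this thread:

import struct

def parse_route_attributes(buf: bytes):
    # Walk rtnetlink-style attributes: 2-byte length, 2-byte type,
    # payload, with each attribute padded to a 4-byte boundary.
    offset = 0
    while offset + 4 <= len(buf):
        attr_len, attr_type = struct.unpack_from('=HH', buf, offset)
        if attr_len == 0:
            # Guard against the failure the patch describes: a zero-length
            # attribute would otherwise leave offset unchanged forever.
            break
        yield attr_type, buf[offset + 4:offset + attr_len]
        offset += (attr_len + 3) & ~3  # skip payload plus padding

With the guard, feeding it four zero bytes, list(parse_route_attributes(b'\x00' * 4)), returns immediately instead of hanging at 100% CPU.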

 

Edited by BRiT

