Comments posted by T0rqueWr3nch

  1. Same. I've never had this issue before, though admittedly it could just be a coincidence.

     

     Just to confirm, @DarkMan83, since I don't think this is necessarily an "SMB" issue: when you navigate to "Shares" in the GUI, are your shares missing there as well? In my case the locally-mounted "shares" themselves were gone. My working theory is that this is what's going on for most of the "SMB issues" being reported. Many users only interact with unRAID through its exported SMB shares, so the issue manifests as an "SMB problem" even though the underlying cause is that the local mounts themselves are gone. Just my theory so far.
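
     For anyone who wants to confirm from the command line as well (a minimal sketch; it assumes the standard Unraid layout where user shares are mounted under /mnt/user):

     ls /mnt/user
     # Healthy: one directory per share.
     # Broken:  an empty listing, or an error such as
     #          "ls: cannot access '/mnt/user': Transport endpoint is not connected"
     #          if the shfs FUSE mount has died.

     If the listing is empty or errors out, the problem is upstream of SMB.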

     

     Regardless, the priority flag on this post seems off; this doesn't seem like a "minor" issue.

     

    -TorqueWrench

  2. I can confirm this issue (or at least a related one), though mine really has nothing to do with SMB itself. Today, my locally-mounted "shares" disappeared completely. Here's the blow-by-blow:

     

     While accessing a previously running container (Grafana), I started getting errors in the browser. Stopping and starting the container resulted in the error "Execution error - server error". Then I realized that none of my Docker containers were working.

     

    Attempting to restart Docker itself, I noticed this:

     

     [screenshot: error displayed when attempting to restart the Docker service]

     

    And, sure enough, navigating to "Shares" in the GUI, I don't have any mounted shares:

     

     [screenshot: the "Shares" page in the GUI, showing no mounted shares]

     

     The only thing that looks interesting in the syslog is this, which occurred at the time I can see my server go offline in Grafana:

     

    shfs: shfs: ../lib/fuse.c:1451: unlink_node: Assertion `node->nlookup > 1' failed.
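
     (For anyone digging through their own logs: on Unraid the syslog lives at /var/log/syslog, so a quick grep like the one below should surface any shfs assertions. Just a sketch; the path is the stock location and the pattern is only an example.)

     grep -i shfs /var/log/syslog
     # Example hit (the assertion above):
     # shfs: shfs: ../lib/fuse.c:1451: unlink_node: Assertion `node->nlookup > 1' failed.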

    Research:

     

    This looks similar to a problem reported last year with the same error message and symptoms:

     

     

     Let me know if you'd like me to open a new issue.

     

    -TorqueWrench

  3. 10 minutes ago, TexasUnraid said:

    Agreed, I can't make sense of it.

     

     I think most of you that have the truly extreme write black holes are running things like Plex; my best guess is that these fixes help the issue those Dockers have but not the underlying issue.

     

     I only run very mild Dockers (lancache, Krusader, Mumble, qBittorrent, etc.) that are not actively doing anything right now.

     

     The difference between putting docker/appdata on an XFS array drive vs. the btrfs cache is undeniable though: around 200-300 MB/hour vs. 1000-1500 MB/hour and climbing in most cases.

     I would have loved to blame it on your individual Docker containers, but I agree, those don't seem like extravagant containers. PMS is definitely a clunker. A lot of database containers also seem to be particularly bad about cache writes. MongoDB was horrendous for me:

     

     

     Since you seem to still be experiencing this issue, could I get you to run:

     docker stats

     I'm curious whether the Block I/O column identifies a particular container.
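
     For reference, a non-streaming snapshot trimmed to the relevant column looks something like this (the container names and figures below are purely illustrative):

     docker stats --no-stream --format "table {{.Name}}\t{{.BlockIO}}"
     # NAME           BLOCK I/O
     # grafana        12.3MB / 456MB      <- read / cumulative write since the container started
     # qbittorrent    1.2GB / 890MB

     A container whose write figure keeps climbing while the system is otherwise idle is the likely culprit.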

     

    -TorqueWrench

     

  4. 57 minutes ago, TexasUnraid said:

     Well, after a full hour, the LBAs have increased by a total of 1.5 GB/hour on the beta, but that could just be first-hour-after-boot work, since it is actually a bit worse than the stable version. It does not appear to be any better, though; nothing like when it was on the XFS drive.

     

     The CPU still spends ~70-80% of its time with 1-2 threads pegged and 15-20% total CPU usage. I can actually see the higher power draw in my UPS reporting.

     

     Going to leave it for a few more hours at least; more than likely I'll revert things tomorrow. We'll see how things progress.

     

     Edit: Another hour, another 1.5 GB of writes. It seems it somehow got worse with the beta. Still high CPU usage as well.

     Very strange. I had the exact opposite experience after updating to 6.9.0-beta22: my cache writes are way down to a much more reasonable ~500 kB/s, and that's still holding from this morning.

     

    It's weird that we have such discrepancies. 
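
     For anyone who wants to compare numbers: the per-hour figures above appear to come from watching the drive's LBAs-written SMART counter. A rough sketch of that kind of measurement (assuming an SSD that exposes attribute 241 Total_LBAs_Written and 512-byte logical sectors; the attribute name and units vary by vendor, and /dev/sdX is a placeholder):

     smartctl -A /dev/sdX | grep -i lbas_written
     # Take two readings an hour apart; the data written in that hour is roughly
     #   (LBAs_after - LBAs_before) * 512 bytes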

  5. 15 hours ago, chanrc said:

     Anyone try out 6.9.0-beta22 yet? I'm assuming that since we haven't heard anything from the LT guys, this is still probably an issue.

    I did this morning. While it's still very early, I think this may finally be fixed:

     

    Screenshots here: https://forums.engineerworkshop.com/t/unraid-6-9-0-beta22-update-fixes-and-improvements/215

     

     I am seeing a drop from ~8 MB/s to ~500 kB/s after the upgrade, with a similar server load (basically idle) and the same Docker containers running. Hopefully the trend holds.
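
     If anyone wants to check their own write rate, one way (not necessarily how the numbers above were taken) is iostat from the sysstat package; /dev/sdb below is just an example cache device:

     iostat -dm 60 /dev/sdb
     # Reports average throughput to the device over each 60-second interval;
     # compare the MB_wrtn/s column before and after the upgrade.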

     

    -TorqueWrench
