T0rqueWr3nch

Members
  • Content Count
    42
  • Joined
  • Last visited

Community Reputation

23 Good

About T0rqueWr3nch

  • Rank
    Advanced Member

  • URL
    engineerworkshop.com
  • Personal Text
    The Engineer's Workshop: https://engineerworkshop.com


  1. Same. I've never had this issue before, though admittedly it could just be a coincidence. Just to confirm @DarkMan83, since I don't think this is necessarily an "SMB" issue: when you navigate to "Shares" in the GUI, are your shares missing there as well? In my case, the locally-mounted "shares" themselves were gone.
     My working theory is that this is what's going on for most of the "SMB issues" being reported. Many users only interact with unRAID through its exported SMB shares, so the issue manifests as an "SMB problem" even though the underlying cause is that the local mounts themselves are gone. Just my theory so far.
     Regardless, the flag on this post seems misprioritized. It doesn't seem like a "minor" issue. -TorqueWrench
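     If you want to confirm it from a terminal rather than the GUI, something like this should work; /mnt/user is the standard unRAID user-share mount point, so adjust if yours differs:

        # Check whether the shfs FUSE filesystem is still mounted
        mount | grep shfs

        # List the user shares; if the FUSE mount has crashed, this
        # typically comes back empty or errors out with
        # "Transport endpoint is not connected"
        ls /mnt/user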
  2. I can confirm this issue (or at least a related one), though mine really has nothing to do with SMB itself. Today, my locally-mounted "shares" disappeared completely. Here's the blow-by-blow:
     While accessing a previously running container (Grafana), I was getting errors in the browser. Stopping and starting the container resulted in the error "Execution error - server error". Then I realized that none of my Docker containers were working. Attempting to restart Docker itself, I noticed this:
     And, sure enough, navigating to "Shares" in the GUI, I don't have any mounted shares:
     The only thing that looks interesting in the syslog is this line, which occurred at the time I see my server going offline in Grafana:
     shfs: shfs: ../lib/fuse.c:1451: unlink_node: Assertion `node->nlookup > 1' failed.
     Research: this looks similar to a problem reported last year with the same error message and symptoms:
     Let me know if you would like me to open up a new issue. -TorqueWrench
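     For anyone trying to confirm they're hitting the same assertion, this is roughly how I'd search for it (the syslog path is the unRAID default; adjust if you log elsewhere):

        # Look for shfs/FUSE errors around the time the shares vanished
        grep -i 'shfs\|fuse' /var/log/syslog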
  3. I would have loved to blame it on your individual Docker containers, but I agree, those don't seem like extravagant containers. PMS is definitely a clunker. A lot of database containers also seem to be particularly bad about cache writes; MongoDB was horrendous for me:
     Since you seem to still be experiencing this issue, could I get you to run docker stats? I'm curious whether Block I/O identifies a particular container. -TorqueWrench
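     For reference, a one-shot snapshot is easier to compare than the live view; something like this should work (the --format fields are standard docker stats template fields):

        # One-shot snapshot of per-container I/O instead of the live view
        docker stats --no-stream --format "table {{.Name}}\t{{.BlockIO}}\t{{.CPUPerc}}\t{{.MemUsage}}"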
  4. Very strange. I had the exact opposite experience with the latest beta update to 6.9.0-beta22: my cache writes are way down to a much more reasonable ~500 kB/s, and it's still holding from this morning. It's weird that we have such discrepancies.
  5. I did this morning. While it's still very early, I think this may finally be fixed. Screenshots here: https://forums.engineerworkshop.com/t/unraid-6-9-0-beta22-update-fixes-and-improvements/215
     I am seeing a drop from ~8 MB/s to ~500 kB/s after the upgrade, with a similar server load (basically idle) and the same Docker containers running. Hopefully the trend holds. -TorqueWrench
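     If you want to double-check your own cache write rate outside the dashboard, something like this should get you in the ballpark (iostat ships with sysstat; nvme0n1 is a placeholder for whatever your cache device actually is):

        # Sample the cache device's throughput every 5 seconds;
        # watch the kB_wrtn/s column
        iostat -k 5 /dev/nvme0n1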
  6. I had the same problem, though I think the writes are technically coming from MongoDB, which Tdarr uses:
     I just disabled Tdarr and MongoDB until I need them, which isn't ideal...
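     If you'd rather do that from a terminal than the GUI, it's just (container names are whatever yours are called in unRAID):

        # Stop the heavy writers until they're actually needed
        docker stop tdarr mongodb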
  7. Interesting, so your VM is using unRAID as the host? What made you choose NFS over VirtFS/9P?
  8. That's something I'd be interested in hearing more about. So are you using unRAID to install your Steam games?
  9. Hey Nick,
     That's not what we're complaining about. We're complaining that the size of the container itself is so large. 1 GB containers are unheard of. Over 5 gigs? That's astronomical.
  10. I used to love Tdarr and even did some of the testing for it before it was released to CA on unRAID. This past weekend, I updated my container and started getting Docker disk space warnings. My jaw dropped when I saw the size of Tdarr after the latest image pull: Tdarr is now over 5.5 GB!
      This is the Alpine version, not even the Ubuntu OS base. Even so, we're now looking at an image that's larger than a full Ubuntu GUI desktop OS install.
      Edit: Actually, it looks like the image uses an Ubuntu base by default, regardless of what Community Apps says.
      When did we lose our minds? -TorqueWrench
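      If you want to see where the bloat is coming from in your own copy, Docker can break the image down by layer (the repository name here is my guess at the usual Tdarr image; use whatever tag you actually pulled):

         # Overall image size
         docker images | grep -i tdarr

         # Per-layer size breakdown to spot the offending build steps
         docker history haveagitgat/tdarr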
  11. Do we have an ETA on when unRAID will support NFSv4+? I've seen this request come up multiple times on here, and it looks like at one point, Tom even "tentatively" committed to trying to "get this into the next -rc release":
      Unfortunately, that was over 3 years ago. Do we have any updates on this?
      I believe adding support for more recent NFS versions is important because it is likely to resolve many of the problems we see with NFS here on the forum (especially the NFS "stale file handle" errors). I think that's why we keep seeing this request come up over and over again.
      I understand where Tom is coming from when he says, "Seriously, what is the advantage of NFS over SMB?":
      The majority of the time, for the majority of users, I would recommend SMB. It's pretty fantastic as it exists today. But there are times when NFS is the better tool for the job, particularly when the clients are Linux machines: NFS offers much better support for Unix operations (e.g., preserving symbolic links when backing up files to an unRAID share). NFS also offers better performance with smaller files (e.g., short, random read/write file operations).
      Rereading my post, I hope this request doesn't come off as overly aggressive. That's certainly not the intent. I just wanted to provide some background on the request and advocate for continued NFS support on unRAID. NFS is still an important feature of unRAID. Thank you in advance for your consideration! -TorqueWrench
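      For anyone unfamiliar with how the version comes into play: the NFS version is negotiated per mount and a client can pin it explicitly, so once the server side supports it, using v4 would look something like this (server name and share are placeholders):

         # Today: unRAID exports NFSv3
         mount -t nfs -o vers=3 tower:/mnt/user/backups /mnt/backups

         # What this request is asking for: NFSv4.x support
         mount -t nfs -o vers=4.2 tower:/mnt/user/backups /mnt/backups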
  12. I came up with an additional solution for using a Pi-hole docker container:
  13. Lol, it happened to me last night while I was adding my cell phone as a peer at a coffee shop! Thankfully, I had a backup WireGuard tunnel on my unRAID development VM, but I figured I'd warn everyone else.
  14. ADDITIONAL WARNING: DO NOT add a new client ("peer") to WireGuard if you are connected remotely. Adding a new peer toggles the WireGuard tunnel off, which will leave you unable to reconnect. All the more reason to always have more than one way into your homelab.
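      If you do have to add a peer while remote, at least verify your fallback tunnel first; from any Linux box with wireguard-tools installed that's just (the ping target is unRAID's default WireGuard server address, so adjust to your subnet):

         # Show active WireGuard interfaces, peers, and last handshakes
         wg show

         # Confirm the backup tunnel actually passes traffic
         ping -c 3 10.253.0.1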
  15. Added another quick start guide for WireGuard with Linux clients: How to Set Up a WireGuard Client on Linux with .conf File
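      The short version, for anyone who just wants the commands (assumes wireguard-tools is installed and you've exported the peer's .conf from unRAID as wg0.conf):

         # Put the config where wg-quick looks for it
         sudo cp wg0.conf /etc/wireguard/wg0.conf

         # Bring the tunnel up (and down again when done)
         sudo wg-quick up wg0
         sudo wg-quick down wg0

         # Optional: bring it up automatically at boot on systemd distros
         sudo systemctl enable wg-quick@wg0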