T0rqueWr3nch

Members

  • Content Count: 44
  • Joined
  • Last visited

Community Reputation: 27 Good

About T0rqueWr3nch

  • Rank: Newbie

Converted

  • URL: engineerworkshop.com
  • Personal Text: The Engineer's Workshop: https://engineerworkshop.com


  1. Keep advocating for Unraid R&D to implement NFS v4. That's your best bet. (Or go iSCSI).
  2. Hi @Gordon Shumway, sorry for the delay; I haven't been as active here in the forums lately as I should've been. Could you run nfsstat -m on your Ubuntu client and paste the results? (There's a short sketch of what I'm looking for below this list.) -TorqueWrench
  3. Same. I've never had this issue before, though admittedly it could just be a coincidence. Just to confirm, @DarkMan83, since I don't think this is necessarily an "SMB" issue: when you navigate to "Shares" in the GUI, are your shares missing there as well? In my case the locally-mounted "shares" themselves were gone. My working theory is that this is what's going on for most of the "SMB issues" being reported. I think many users only interact with unRAID through its exported SMB shares, so the issue manifests as an "SMB problem" even though the underlying cause is th…
  4. I can confirm this issue (or at least a related one), though mine really has nothing to do with SMB itself. Today, my locally-mounted "shares" disappeared completely. Here's the blow-by-blow: while accessing a previously running container (Grafana), I was getting errors in the browser. Stopping and starting the container resulted in the error "Execution error - server error". Then I realized that none of my Docker containers were working. Attempting to restart Docker itself, I noticed this: And, sure enough, navigating to "Shares" in the GUI,…
  5. I would have loved to blame it on your individual Docker containers, but I agree, those don't seem like extravagant containers. PMS is definitely a clunker, and a lot of database containers also seem to be particularly bad about cache writes; MongoDB was horrendous for me: Since you still seem to be experiencing this issue, could I get you to run docker stats? I'm curious whether Block I/O points to a particular container (see the docker stats sketch below this list). -TorqueWrench
  6. Very strange. I had the exact opposite experience with the latest beta update to 6.9.0-beta22. My cache writes are way down to a much more reasonable ~500 kB/s, and the rate has held since this morning. It's weird that we have such discrepancies.
  7. I did this morning. While it's still very early, I think this may finally be fixed. Screenshots here: https://forums.engineerworkshop.com/t/unraid-6-9-0-beta22-update-fixes-and-improvements/215 I'm seeing a drop from ~8 MB/s to ~500 kB/s after the upgrade, with a similar server load (basically idle) and the same Docker containers running (a quick sketch of how I watch that rate is below this list). Hopefully the trend holds. -TorqueWrench
  8. I had the same problem, though I think the writes are technically coming from MongoDB, which Tdarr uses. I just disabled Tdarr and MongoDB until I need them, which isn't ideal...
  9. Interesting, so your VM is using unRAID as the host? What made you choose NFS over VirtFS/9P?
  10. That's something I'd be interested in hearing more about. So are you using unRAID to install your Steam games?
  11. Hey Nick, that's not what we're complaining about. We're complaining that the container image itself is so large. 1 GB images are already unheard of. Over 5 GB? That's astronomical.
  12. I used to love Tdarr and even did some of the testing for it before it was released to CA on unRAID. This past weekend, I updated my container and started to get Docker disk space warnings. My jaw dropped when I saw the size of Tdarr after the latest image pull: Tdarr is now over 5.5 GB! And this is the Alpine version, not even the Ubuntu OS base. Even at that, we're now looking at an image that's larger than a full Ubuntu desktop OS install, GUI and all (see the image-size sketch below this list). Edit: Actually, it looks like the image uses an Ubuntu base by default, regardless of what Comm…
  13. Do we have an ETA on when unRAID will support NFSv4+? I've seen this request come up multiple times on here, and it looks like at one point Tom even "tentatively" committed to trying to "get this into the next -rc release": Unfortunately, that was over 3 years ago. Do we have any updates on this? I believe adding support for more recent NFS versions is important because it would likely resolve many of the NFS problems we see here on the forum (especially the "stale file handle" errors). I think that's why we also keep seeing this request come up o…
  14. I came up with an additional solution for using a Pi-hole docker container:
  15. Lol, it happened to me last night while I was adding my cell phone as a WireGuard peer at a coffee shop! Thankfully, I had a backup WireGuard tunnel on my unRAID development VM, but I figured I'd warn everyone else.
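
A short sketch of what I'm looking for with nfsstat -m in item 2 above. The output below is illustrative only (mount points, server names, and options are assumptions; only the general shape matters):

    # On the Ubuntu NFS client: list mounted NFS filesystems and their negotiated options
    nfsstat -m

    # Expect entries shaped roughly like this; the vers= field is the part I care about:
    # /mnt/unraid from tower:/mnt/user/share
    #  Flags: rw,relatime,vers=3,rsize=65536,wsize=65536,proto=tcp,...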
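
For item 5, this is the docker stats check I'm asking for (the container names in your output will be your own):

    # One-shot snapshot of per-container resource usage (no live refresh)
    docker stats --no-stream

    # Trim the output to container names and cumulative Block I/O (bytes read / written)
    docker stats --no-stream --format "table {{.Name}}\t{{.BlockIO}}"

A container whose Block I/O write figure dwarfs the others is the first place to look.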
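
The write rates quoted in items 6 and 7 come from watching the cache device. A rough sketch of how to do that from the shell, assuming iostat (from the sysstat package) is available and that nvme0n1 stands in for your actual cache drive:

    # Report device throughput in MB/s every 5 seconds for the cache drive
    iostat -d -m nvme0n1 5

    # If iostat isn't installed, the raw kernel counters are always there:
    grep nvme0n1 /proc/diskstats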
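
And for the Tdarr image size in item 12, this is roughly how the bloat shows up (the image name below is a placeholder; substitute whatever your Tdarr template actually pulls):

    # List local images with their sizes; a >5.5 GB entry stands out immediately
    docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"

    # Break an image down by layer to see which build steps contribute the bulk
    docker history <your-tdarr-image>:<tag>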