grigsby's Achievements





  1. I can't think of any good reason to do this. It's just digital hoarding. All the crap you're "collecting" is readily available and downloadable anytime you want it.
  2. Thank you for these directions. Super easy! All (34?) of my containers started up without a problem.
  3. InfluxDB 2.x doesn't work with these containers. Its API and authentication system are significantly different from 1.8's. I haven't really looked into fixing it, because it was just easier to stick with 1.8. In the InfluxDB docker container, change the repository to "influxdb:1.8.4" and things will work again.
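For anyone unsure where that string goes: the whole thing is the image repository plus tag. A minimal sketch (the tag-splitting below is just illustration; the repository value itself is from the post above):

```shell
# Pin the InfluxDB container to the last 1.x line instead of "latest",
# which now resolves to 2.x. In the Unraid docker template, this whole
# string goes in the "Repository" field.
repo="influxdb:1.8.4"

# Everything after the colon is the image tag Docker will pull.
tag="${repo#*:}"
echo "pinned tag: $tag"
```

The same value works on the command line as `docker pull influxdb:1.8.4`.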
  4. OMG. Thank you! I feel dumb now. That was driving me crazy.
  5. I disabled checking for OS updates, and the "Check for Updates" button re-appeared! But when I click on it, the popup comes up and says "plugin: checking unRAIDServer.plg ..." then I click on "Done," and it still only shows 6.8.3 available. 6.9.1 did not re-appear. Thanks for the suggestion! I thought it was going to work. 🙂
  6. Thank you. Maybe I'll just do a manual download. I don't have a "Check for Updates" button, either!
  7. While trying to isolate a networking issue, I downgraded from 6.9.1 to 6.8.3. Turns out that it's a firewall problem, not an Unraid problem. So I'd like to RE-upgrade to 6.9.1. But when I go into Tools -> Update OS, the only version I see now is 6.8.3. 6.9.1 is no longer offered. 😞 Sad! How can I get the Update OS tool to offer me 6.9.1 again? Thank you! Scott
  8. EDIT: I'm now pretty sure it's a problem with pfSense 2.5.0's NAT port forwarding, not Unraid. But still, if you have any ideas, I can still use the help! At this point I'm just going to try to downgrade pfSense to 2.4.5. Ugh.

     I previously wrote: ---------------snip-----------

     Hi all, I have a bunch of docker containers that were all working perfectly for a long time. (The typical stuff like Plex, Overseerr, geth, etc.) I had been running Unraid 6.8.3 and pfSense 2.4.x with port forwarding. Everything was peachy. A couple days ago I upgraded to pfSense 2.5.0 and Unraid 6.9.1, and now none of my port forwards work anymore. 😞

     When I check the packet states on pfSense, they show "NO_TRAFFIC:SINGLE" for UDP and "CLOSED:SYN_SENT" for TCP connections. From what I can figure out from searching for this problem, this means that the Unraid server is not responding to the packets correctly. The few answers I've found online seem to indicate that it has something to do with the server not having a correct default gateway or route for the answering packets.

     I have checked every setting I can think of. I am almost certain the problem is not with pfSense, since it is forwarding packets correctly. I'm almost sure it's that Unraid -- in particular, Unraid's docker and/or network configuration -- is not sending reply packets correctly.

     I have no idea what I should try next. If you have any ideas, please share! All of my docker services are now unavailable outside my network until I get the port forwarding problem figured out. 😞 Thank you! Scott
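For anyone else landing here with the same symptoms, the two pf state strings in that post can be read roughly as follows. This little decoder is just my paraphrase of what the states imply, not pfSense output, but both readings point the same way: the forwarded packet goes out and no reply ever comes back from the target host.

```shell
# Rough meaning of the two pf state pairs from the post above.
# (Interpretations are mine; check pf's own docs for the authoritative
# definitions of each state.)
explain_state() {
  case "$1" in
    NO_TRAFFIC:SINGLE) echo "UDP packet forwarded once; target never answered" ;;
    CLOSED:SYN_SENT)   echo "TCP SYN forwarded; no SYN-ACK came back" ;;
    *)                 echo "state not covered here" ;;
  esac
}

explain_state "CLOSED:SYN_SENT"
```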
  9. Not Dephcon, but I recognize the graph: it's grafana. To get cool visualizations like that, you'll need telegraf (data collection), influxdb (storage), and grafana (visualization). It's super fun if you're into tinkering with stuff like this and monitoring everything on your network. I've attached a few images here of what some of my dashboards look like. Mine aren't super cool (yet!), but they're always evolving. The top one is part of my pfsense firewall dashboard, the rest (disks and docker containers) are from my Unraid server. (Also, the delta-data disk usage numbers are totally wrong. I'm still trying to figure out how to make those work right. I'm not any kind of database or data visualization person, I'm just learning as I go and copying panels from other people who have posted theirs on the grafana repository.)
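If you're curious what actually flows between those three pieces: telegraf writes InfluxDB 1.x "line protocol" points over HTTP, which you can also hand-craft when testing the stack. A minimal sketch (the host, device, and measurement names here are made-up examples; port 8086 and the `/write` endpoint are InfluxDB 1.x defaults):

```shell
# Build one InfluxDB 1.x line-protocol point, shaped like what a disk
# metric from telegraf looks like:
#   measurement,tag=value[,tag=value] field=value timestamp(ns)
host="tower"; device="disk1"; used=42
ts=1609459200000000000   # example nanosecond timestamp
line="disk,host=${host},device=${device} used_percent=${used} ${ts}"
echo "$line"

# Against a live InfluxDB 1.8 instance you could post it for testing:
# curl -XPOST 'http://localhost:8086/write?db=telegraf' --data-binary "$line"
```

Grafana then just queries those measurements back out of InfluxDB to draw the panels.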
  10. I'm curious to know if these "workarounds" in the beta releases are true bugfixes. There's a difference between, "We identified the bug and have fixed it," and "We have not been able to identify the bug, but if we do these non-standard things with dockers/filesystems/etc., things seem to get a little better?" Sort of just throwing spaghetti at the wall and seeing what sticks. I'm definitely more interested in a true bugfix than some sort of poorly-defined workaround that just appears to make things a little better while the source of the problem remains unknown.
  11. Yes, the SMART report does correlate correctly with the excessive writes. I think TexasUnraid has done a lot of helpful testing, but some of his terminology might be a bit confusing. Basically it comes down to this:
      - SSD cache drive formatted as btrfs = huge (unacceptable) amounts of write operations (gigabytes every hour) by the loop2 device
      - SSD cache drive formatted as xfs = works normally

      I currently have my cache drive formatted as xfs (so my SSDs don't get trashed) and it's working normally. The problem with this arrangement is that you can't have a cache pool or redundancy with xfs-formatted drives, so I'm giving up redundancy to save wear on my drives. The ideal solution would be either:
      1. Fix the bug with cache+btrfs so that the drive writes are reduced to a normal level, and we can go back to having cache pools/redundancy, or
      2. Somehow make cache pools/RAID1 available with xfs-formatted cache drives.
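To reproduce the "gigabytes every hour" measurement yourself, you can sample the sectors-written counter for loop2 from /proc/diskstats an hour apart and take the difference. A sketch with made-up sample values (on a real server, replace the two assignments with the commented awk one-liner):

```shell
# Field 10 of each /proc/diskstats line is sectors written (512-byte
# sectors). On the server, sample it with:
#   awk '$3=="loop2" {print $10}' /proc/diskstats
before=1000000    # sectors written at the first sample (example value)
after=5000000     # sectors written one hour later (example value)

gb=$(awk -v b="$before" -v a="$after" \
  'BEGIN { printf "%.2f", (a - b) * 512 / 1024 / 1024 / 1024 }')
echo "loop2 wrote ${gb} GB in the last hour"
```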
  12. Well, I gotta say, LimeTech's response to this bug has been impressive -- in a not good way. This is a major, potentially catastrophic bug that could result in loss of data, time, and hardware/money that was first reported seven months ago, and the only two comments LimeTech makes about it are dismissing it as "tldr"? I first installed Unraid in May on a new server build and promptly purchased a license for $89. Obviously I don't have much history with Unraid or the company, but their total non-response to this bug report is disheartening.
  13. The bug was originally reported in 6.7.2. The thread title was changed to 6.8.3 when it was discovered that it still exists in the current release.