grigsby

Members
  • Posts: 22

  1. Man, I really struggled getting the nvidia driver plugin reinstalled after upgrading to 6.10.3. The main problem was that I'm an idiot. Also, maybe a slight problem with the plugin installer's user interface? I followed all the instructions here -- removed the plugin using 'rm' at the command line (sketched below), rebooted about 40 times, and kept trying to reinstall the plugin from Community Apps. The trouble was that the plugin installer status window would get to the line that says "Package nvidia-driver-2022.05.06.txz installed," and then the window would pause and do nothing for 30-60 seconds. At that point I thought the driver was installed, so I rebooted. I never waited for the plugin to actually finish installing, and I did this about 10 times. I never saw the *next* part that says: "WARNING - WARNING - WARNING... Don't close this window ... until the Done button is displayed!" It took soooo loooong for that text to appear that I never saw it, and I didn't figure out the process wasn't finished until I saw other people's screenshots here on the forum. I wonder what's happening between the "Package... installed" line and the "WARNING - WARNING" line that causes such a long delay? If there's some call there that could be moved or removed so the WARNING comes up immediately, it would have saved me a couple of hours of going around in circles and rebooting too soon. Of course, once I actually waited until the whole thing was *actually* finished and the Done button finally appeared, it all worked great.
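For anyone else in the same boat, the removal step I used was just deleting the plugin files from the flash drive. A rough sketch from memory -- the exact .plg filename may differ on your system, so verify with 'ls /boot/config/plugins' before deleting anything:

```
# Remove the plugin file so Unraid forgets the plugin on next boot
# (the filename is my best recollection; check your own flash drive)
rm /boot/config/plugins/nvidia-driver.plg
# Optionally clear the plugin's cached packages too
rm -r /boot/config/plugins/nvidia-driver
reboot
```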
  2. Hi all. I'm having a problem with the most recent homer docker container updates (https://hub.docker.com/r/b4bz/homer). The problem is described by others here: https://github.com/bastienwirtz/homer/issues/441 Basically, there was a change in the docker compose file, so the latest container doesn't work on Unraid. The container runs as the wrong user and group, so it doesn't have read/write permissions for the appdata folder. Adding the usual PUID and PGID environment variables doesn't work, I think because the compose file expects something in the form of "--user 99:100" (see the sketch below). Having the appdata folder owned by root didn't help either. I can't figure out how to update the Unraid docker template, or how to get the latest homer containers to work. For now, I've just set the tag to b4bz/homer:22.02.2, which was the last version that worked before the repo changed the format of the docker compose. Any ideas? Thank you!
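In case it's useful, here's what I believe the compose change amounts to, expressed as a plain docker run. This is a sketch, assuming Unraid's usual nobody/users IDs of 99:100 and the stock appdata path; the port and volume mappings are from the homer docs as I remember them:

```
# Force the container to run as Unraid's nobody:users (99:100).
# On Unraid, the equivalent is adding "--user 99:100" to the
# template's Extra Parameters field.
docker run -d \
  --name homer \
  --user 99:100 \
  -p 8080:8080 \
  -v /mnt/user/appdata/homer:/www/assets \
  b4bz/homer:latest
```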
  3. I can't think of any good reason to do this. It's just digital hoarding. All the crap you're "collecting" is readily available and downloadable anytime you want it.
  4. Thank you for these directions. Super easy! All (34?) of my containers started up without a problem.
  5. Influxdb 2.x doesn't work with these containers. The API and authentication system are significantly different from 1.8, and I haven't really looked into fixing it because it was just easier to stick with 1.8. In the Influxdb docker container, change the repository to "influxdb:1.8.4" and things will work again (example below).
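If you're setting this up outside the Unraid template, the equivalent pin looks like this (a sketch; the volume path just follows Unraid's usual appdata convention):

```
# Pin InfluxDB to the last 1.x tag these dashboards work with;
# 2.x changes the API/auth and breaks them.
docker run -d \
  --name influxdb \
  -p 8086:8086 \
  -v /mnt/user/appdata/influxdb:/var/lib/influxdb \
  influxdb:1.8.4
```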
  6. OMG. Thank you! I feel dumb now. That was driving me crazy.
  7. I disabled checking for OS updates, and the "Check for Updates" button reappeared! But when I click on it, the popup comes up and says "plugin: checking unRAIDServer.plg ...", then I click "Done," and it still only shows 6.8.3 available. 6.9.1 did not reappear. Thanks for the suggestion! I thought it was going to work. 🙂
  8. Thank you. Maybe I'll just do a manual download. I don't have a "Check for Updates" button, either!
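For anyone finding this later: the manual route, as I understand it from the docs, is just replacing the bz* files on the flash drive with the ones from the release zip. A sketch under those assumptions -- back up first, and check the zip's actual contents before copying:

```
# Back up the current OS files on the flash drive, then drop in
# the new release's bz* files. Paths assume the stock flash layout;
# the zip path is wherever you extracted the download.
mkdir -p /boot/previous
cp /boot/bz* /boot/previous/
cp /path/to/unRAIDServer-6.9.1/bz* /boot/
reboot
```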
  9. While trying to isolate a networking issue, I downgraded from 6.9.1 to 6.8.3. Turns out that it's a firewall problem, not an Unraid problem. So I'd like to RE-upgrade to 6.9.1. But when I go into Tools -> Update OS, the only version I see now is 6.8.3. 6.9.1 is no longer offered. 😞 Sad! How can I get the Update OS tool to offer me 6.9.1 again? Thank you! Scott
  10. EDIT: I'm now pretty sure it's a problem with pfSense 2.5.0's NAT port forwarding, not Unraid. But still, if you have any ideas, I can use the help! At this point I'm just going to try to downgrade pfSense to 2.4.5. Ugh. I previously wrote: ---------------snip----------- Hi all, I have a bunch of docker containers that were all working perfectly for a long time (the typical stuff like Plex, Overseerr, geth, etc.). I had been running Unraid 6.8.3 and pfSense 2.4.x with port forwarding, and everything was peachy. A couple of days ago I upgraded to pfSense 2.5.0 and Unraid 6.9.1, and now none of my port forwards work anymore. 😞 When I check the packet states on pfSense, they show "NO_TRAFFIC:SINGLE" for udp and "CLOSED:SYN_SENT" for tcp connections. From what I can figure out from searching, this means the Unraid server is not responding to the forwarded packets correctly. The few answers I've found online indicate it may have something to do with the server not having a correct default gateway or route for the reply packets. I have checked every setting I can think of (the basic route checks I ran are sketched below). I'm almost certain the problem is not with pfSense, since it is forwarding packets correctly; it seems to be Unraid -- in particular, Unraid's docker and/or network configuration -- that is not sending reply packets correctly. I have no idea what to try next, so if you have any ideas, please share! All of my docker services are unavailable outside my network until I get the port forwarding problem figured out. 😞 Thank you! Scott
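For reference, these are the sorts of checks I was running on the Unraid side to rule out a bad default gateway or route. Nothing Unraid-specific here, just standard iproute2/tcpdump commands; the interface name and port are examples, not my actual config:

```
# Confirm which gateway the server uses for reply packets
ip route show default
# List all routes, including docker's bridge networks
ip route show
# Watch whether forwarded packets actually arrive (adjust the
# interface and port for your setup; 32400 is Plex's default)
tcpdump -ni eth0 tcp port 32400
```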
  11. Not Dephcon, but I recognize the graph: it's Grafana. To get cool visualizations like that, you'll need Telegraf (data collection), InfluxDB (storage), and Grafana (visualization); a rough sketch of the stack is below. It's super fun if you're into tinkering with stuff like this and monitoring everything on your network. I've attached a few images here of what some of my dashboards look like. Mine aren't super cool (yet!), but they're always evolving. The top one is part of my pfSense firewall dashboard; the rest (disks and docker containers) are from my Unraid server. (Also, the delta-data disk usage numbers are totally wrong, and I'm still trying to figure out how to make those work right. I'm not any kind of database or data-visualization person; I'm just learning as I go and copying panels from other people who have posted theirs on the Grafana repository.)
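If you want to try the same stack, its overall shape in docker terms is something like this. A sketch with assumed container names and Unraid-style appdata paths; on Unraid you'd normally install each piece from Community Apps and then point them at each other:

```
# The classic TIG stack: telegraf collects, influxdb stores,
# grafana visualizes. Ports and paths here are assumptions.
docker run -d --name influxdb -p 8086:8086 \
  -v /mnt/user/appdata/influxdb:/var/lib/influxdb influxdb:1.8.4
docker run -d --name telegraf \
  -v /mnt/user/appdata/telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro \
  telegraf
docker run -d --name grafana -p 3000:3000 \
  -v /mnt/user/appdata/grafana:/var/lib/grafana grafana/grafana
```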
  12. I'm curious whether these "workarounds" in the beta releases are true bugfixes. There's a difference between "We identified the bug and have fixed it" and "We haven't been able to identify the bug, but if we do these non-standard things with docker/filesystems/etc., things seem to get a little better." Sort of throwing spaghetti at the wall and seeing what sticks. I'm definitely more interested in a true bugfix than in some poorly-defined workaround that just appears to make things a little better while the source of the problem remains unknown.
  13. Yes, the SMART report does correlate correctly with the excessive writes. I think TexasUnraid has done a lot of helpful testing, but some of his terminology might be a bit confusing. Basically it comes down to this:
     • SSD cache drive formatted as btrfs = huge (unacceptable) amounts of write operations (gigabytes every hour) by the loop2 device
     • SSD cache drive formatted as xfs = works normally
     I currently have my cache drive formatted as xfs (so my SSDs don't get trashed), and it's working normally. The problem with this arrangement is that you can't have a cache pool or redundancy with xfs-formatted drives, so I'm giving up redundancy to save wear on my drives. The ideal solution would be either:
     1. Fix the bug with cache+btrfs so that drive writes are reduced to a normal level and we can go back to having cache pools/redundancy, or
     2. Somehow make cache pools/RAID1 available with xfs-formatted cache drives
     (A quick way to measure the loop2 writes yourself is sketched below.)
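If you want to check whether your own cache is affected, the loop2 writes are easy to watch from the shell. A sketch using the standard /proc/diskstats interface (loop2 being the docker image's loop device matches what's reported in this thread, but verify with 'losetup' on your system):

```
# Field 10 of /proc/diskstats is sectors written (512 bytes each).
# Take two snapshots a minute apart and compare the totals.
awk '$3 == "loop2" {printf "%.1f MiB written\n", $10*512/1048576}' /proc/diskstats
sleep 60
awk '$3 == "loop2" {printf "%.1f MiB written\n", $10*512/1048576}' /proc/diskstats
```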
  14. Well, I gotta say, LimeTech's response to this bug has been impressive -- in a not-good way. This is a major, potentially catastrophic bug that could result in loss of data, time, and hardware/money. It was first reported seven months ago, and the only two comments LimeTech has made about it dismiss it as "tldr". I first installed Unraid in May on a new server build and promptly purchased a license for $89. Obviously I don't have much history with Unraid or the company, but their total non-response to this bug report is disheartening.