eric.frederich

Everything posted by eric.frederich

  1. Almost a year ago (Oct 28th 2022), I got hit with some PHP version error. I changed my image to use a fixed tag (linuxserver/nextcloud:24.0.6) and have been using that fine ever since. I feel like I shouldn't be running year-old software though. Is it fine to just change it to (linuxserver/nextcloud)... will it upgrade fine?
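Since Nextcloud itself only supports upgrading one major version at a time, jumping from a pinned 24.0.6 straight to the unpinned tag is the risky part; the usual advice is to step the pinned tag up one major at a time. A minimal sketch of that idea -- the specific tags are assumptions, so check Docker Hub for which ones actually exist:

```shell
# Hedged sketch: Nextcloud only supports upgrading one major version at a
# time, so rather than jumping to the unpinned image, step the tag upward.
next_major() {
  # given a pinned tag like 24.0.6, print the next major version number
  echo $(( ${1%%.*} + 1 ))
}
# For each step: retag, recreate the container, log in and let the built-in
# updater finish, then repeat until current. Hypothetical tags:
# docker pull linuxserver/nextcloud:25.0.13
# docker pull linuxserver/nextcloud:26.0.8
```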
  2. Just got a new laptop yesterday... what a dumb time for me to update this Docker image. I got hit with a PHP version error. Just changed my tag to "linuxserver/nextcloud:24.0.6" and thankfully it seems to have worked. I didn't get that message about mid-upgrade that someone else got.
  3. I've been unable to download stuff, even things like Ubuntu iso torrents. It'll make some initial connections to a peer or two then nothing. I updated my openvpn config file and crt from IPVanish, nothing. I've used the same openvpn config file and crt in a qtorrentvpn container and it is able to download torrents. No idea what is going on.
  4. Hi. I'm having difficulty running this on anything other than port 8080. The problem is that port 8080 is in use by another container of mine. I had to stop that container to even get this thing to start up. I have changed both Host Port 3 as well as the WEBUI_PORT and I cannot connect to the web UI. I am using the custom network proxynet (as shown in SpaceInvaderOne's YouTube videos) for running behind a SWAG (nginx) LetsEncrypt reverse proxy. Side note: the reason I'm trying this qtorrentvpn is because for some reason my deluge container stopped being able to download. I can't even download an Ubuntu ISO torrent. I updated my openvpn config files and crt files. No idea what's going on there. The same config files are working fine here on qtorrent. In my deluge container the torrents seem to start to download then immediately stop. They'll have a peer pop up then disappear.
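For what it's worth, the WebUI typically only moves off 8080 when the WEBUI_PORT variable and the published host port are changed together, to the same value. A sketch that derives both from one variable so they can't disagree (network name taken from the post, image name omitted, port 8081 an arbitrary free choice):

```shell
# Sketch, assuming the standard docker run flags: the WebUI only moves off
# 8080 if the published port and WEBUI_PORT are changed together.
qbt_port_args() {
  # emit matching -p and -e arguments so the two can never disagree
  printf -- '-p %s:%s -e WEBUI_PORT=%s' "$1" "$1" "$1"
}
# docker run -d --network=proxynet $(qbt_port_args 8081) <qtorrentvpn image>
```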
  5. Cool, thanks for the reply. Maybe you can clarify something for me then. It's my understanding that:
     - WireGuard is baked into the Linux kernel
     - Tailscale is built on top of WireGuard
     - The Dynamix WireGuard plugin for Unraid simply provides a web UI to manage the WireGuard already baked into the kernel
     Is all of that correct? If so, I'm curious how they don't conflict with each other.
  6. I currently have WireGuard working with UnRaid via the Dynamix WireGuard plugin. What do I need to do if I want to try out this TailScale? Can both run at the same time or do I need to uninstall the WireGuard plugin? I only have two clients, so I don't care if they're lost.
  7. Awesome work @HyperV, thanks. It worked for me. Where is that code maintained? I wanted to see if it was fixed in a newer version or if we should file a bug. All I found was this, which seems to be 5 years old and completely out of date. https://github.com/limetech/dynamix The DockerClient.php in that repo doesn't even have any Docker-Content-Digest.
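For context, the update check works by comparing the locally stored image digest with the Docker-Content-Digest header the registry returns for the manifest. A rough sketch of that flow, with the curl calls left as illustrative comments and only the header parsing actually exercised:

```shell
# Rough sketch: compare the local image digest against the
# Docker-Content-Digest header from the registry's manifest endpoint.
digest_from_header() {
  # pull the sha256 digest out of a "Docker-Content-Digest: ..." header line
  sed -n 's/^[Dd]ocker-[Cc]ontent-[Dd]igest: *//p' | tr -d '\r'
}
# token=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/nginx:pull" | ...)
# curl -sI -H "Authorization: Bearer $token" \
#   -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
#   https://registry-1.docker.io/v2/library/nginx/manifests/latest | digest_from_header
```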
  8. I'm having this issue as well. I'm also on 6.8.3... I haven't updated my images in 2 months. Now every one of them is showing "not available". I hopped on the #unraid IRC channel on Freenode and asked there too. It's pretty quiet there, but the one person who did check was also having the issue... again on 6.8.3. I wouldn't assume it's a problem with this specific version until we hear from someone on a different release saying "it works for me". That hasn't happened yet.
  9. I'm having this problem as well.... I constantly have to `umount` then `mount` the shares after getting stale NFS errors.
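A small helper for that umount/mount dance, assuming the share has an entry that a plain `mount <dir>` can find again; the mount point name below is just an example:

```shell
# Hedged helper for the stale-NFS-handle dance; the path is an example.
remount_stale() {
  # -l (lazy) detaches the mount even if processes still hold stale handles
  umount -l "$1" 2>/dev/null
  mount "$1"   # remounts using the existing fstab/Unraid entry
}
# remount_stale /mnt/remotes/tower_share
```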
  10. Thanks, I had someone recommend the WD Blue 3D. Neither the Amazon nor the Newegg pages mentioned cache. Had to find some PDF which mentioned it. https://documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/product/internal-drives/wd-blue-ssd/user-manual-wd-blue-3d-nand-sata-ssd.pdf The Crucial MX500 I guess has a known issue with Unraid. I think I may go with this one unless someone can recommend another good value for 500GB.
  11. Backstory: I had a Samsung 840 120G working fine in Unraid. Son needed an SSD in his laptop, so I ordered a Samsung 860 500G and I'd give him the 120. Now I get CRC errors. Apparently this is a known issue between AMD controllers and Samsung SSDs. My motherboard is an AMD 970 chipset. Reference: So... now I need a recommendation on a new drive for cache. It's difficult to find information on whether various drives have a DRAM cache or not. Even drives that have it don't advertise it up front, and it takes research for each drive. Is DRAM cache important? Many options I see... Western Digital, Kingston, SanDisk, Crucial, PNY, SK Hynix, ADATA, etc... What specific drives or brands should I go with? What drives should I definitely stay away from?
  12. I was just looking at the MX500. It's almost impossible to see on the specs whether these things have DRAM cache or not. It seems the Crucial has something called Momentum Cache but that uses your system RAM as a buffer and is only for Microsoft Windows.
  13. Thanks... I found some things on both the Samsung and AMD websites... none of them seem to acknowledge the issue. Is this only a problem on older AMD chipsets or even new ones? This is an old AMD 970 board. Would upgrading to a new Ryzen based system fix the issue? I can't justify spending money right now on a new system, but I'm curious for down the road. I just received the drive yesterday, so I should be able to return it to Amazon. From my research, it seemed Samsung EVO was the way to go for SSD with DRAM cache. What is the best 2nd choice?... what should I use instead of the Samsung EVO 860? EDIT: I know it's more of an opinion, but I see many options: Western Digital, Kingston, SanDisk, Crucial, PNY, SK Hynix, ADATA, etc... I can read reviews on Amazon but I'd like to know from an Unraid perspective if there are ones to use or ones to stay away from.
  14. Thanks. Can you point me to any documentation regarding this? Like, is there a list of SSDs or chipsets / controllers? The drive I upgraded from was a Samsung 840 120G, this is a Samsung 860 500G. Are the errors being caught and handled?... or should I replace the drive? This morning I see the count is now 21 after being idle last night. It seems to have a new error every 31 minutes or so.
  15. Just came across this: https://superuser.com/questions/1294158/what-may-cause-very-high-crc-errors-on-ssd-apart-from-bad-sata-cables-if-any Someone said could be issue with AMD SATA controller and the drive. He recommended disabling NCQ but had a Windows registry solution. Is it possible to disable NCQ on Unraid?... is it advisable?
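On Linux, the rough equivalent of that registry tweak is either setting the device's queue depth to 1 via sysfs (effective until reboot) or the libata.force=noncq kernel parameter; on Unraid, a kernel parameter would go on the "append" line of syslinux.cfg on the flash drive. A sketch, where the device name is an assumption:

```shell
# Linux equivalents of the Windows registry tweak (sdX is an assumption):
disable_ncq() {
  # queue_depth=1 effectively turns NCQ off for one device until reboot
  echo 1 > "/sys/block/$1/device/queue_depth"
}
# disable_ncq sdb
# To make it permanent, add libata.force=noncq to the "append" line in
# /boot/syslinux/syslinux.cfg on the Unraid flash drive.
```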
  16. Okay, just rebooted. Before bringing it down I had 12 errors; immediately upon bringing it back up it was 12, but now it has increased to 13 after less than 5 minutes. EDIT: by reboot I mean powered down, disconnected the drive, re-connected the drive, and powered back on.
  17. Yeah, I've read that. That's why I mentioned that it's using the same cable previously used with the drive it replaced. I have also heard that if it clicks, it's a good connection. I don't have a ton of cables lying around, and honestly it was working just fine with the old drive. I'll humor everyone and power it down tomorrow morning, and unplug/replug it back in. Are these numbers stored on the drive itself or in Unraid?... is there a way to reset it?
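To answer the storage question for anyone landing here: the counter is SMART attribute 199 (UDMA_CRC_Error_Count), kept on the drive itself and cumulative, so it can't be reset -- you can only watch whether it keeps growing. A sketch of pulling it out of smartctl output (the device name is an assumption):

```shell
# SMART attribute 199 lives on the drive and only ever counts up.
crc_count() {
  # pull the raw value (last column) from `smartctl -A` output
  awk '$2 == "UDMA_CRC_Error_Count" { print $NF }'
}
# smartctl -A /dev/sdX | crc_count    # device name is an assumption
```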
  18. I upgraded my cache drive today to a Samsung 860 EVO 500G. Before it was some other Samsung 120G. Now I'm getting "CRC error count" errors. This new drive replaced the old one.... same cable, same power connector, etc. Do I have a bad drive? The error count was 0 with the old drive. This thing has been up for just 2 hours and has 8 errors.
  19. Got this working, although very slowly. I only have 2 clients on my ZeroTier network: this Unraid Docker container and my cell phone. Browsing files via Total Commander did not work at all (some access denied error). Browsing via CX File Explorer worked but was basically unusable due to slowness. Also tried SSH via JuiceSSH. Again, basically unusable. It's not my mobile connection, because I have a jump host on my network and connecting over the internet to Unraid via my jump host works fine and is fast/responsive. My phone is a Pixel 2. It should be powerful enough, right?
  20. I'm exposing this to internet via the letsencrypt app also running on my Unraid server. I have configured a .htaccess file to enable http basic auth so it's not wide open. I ran into an issue because the name had upper case letters which don't play nice with Docker's DNS. https://stackoverflow.com/questions/55518144/using-variable-in-nginx-conf https://github.com/linuxserver/docker-letsencrypt/issues/287 Perhaps the name should change to gitlab-ce instead of GitLab-CE
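One workaround that avoids editing the template entirely is renaming the container so the name nginx resolves through Docker's embedded DNS is all lowercase; `docker rename` is a standard command, and the names below are taken from the post:

```shell
# Sketch: rename the container so its DNS name is lowercase.
lower() { printf '%s' "$1" | tr 'A-Z' 'a-z'; }
# docker rename GitLab-CE "$(lower GitLab-CE)"   # renames it to gitlab-ce
```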
  21. Hi, I recently set up my Unraid server to serve some containers over an nginx reverse proxy (see video below). I'm curious whether this GitLab application is safe to expose to the internet via something like "gitlab.mydomain.com". What could/should I do to protect it? I just installed it and signed in once, then created a user. I noticed that anyone could just register, and it didn't do any kind of email validation. Is there a way to disable registration? I want to manually create all users; it will be just a handful.
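Registration can be switched off in the Admin Area (Settings → Sign-up restrictions), and for the Omnibus-based image there is also a gitlab.rb setting -- the key name below is from the Omnibus docs and worth verifying against your GitLab version -- applied with `gitlab-ctl reconfigure`:

```
# /etc/gitlab/gitlab.rb (inside the container's config volume);
# key name per the Omnibus docs, verify for your version
gitlab_rails['gitlab_signup_enabled'] = false
# then: gitlab-ctl reconfigure
```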
  22. Yes, everything you said is understood and correct. On my home network I am able to use Plex by the domain address. Same from my phone. It works. What I was asking, though, is how to make it possible to access it by ip:port while at home and not have this traffic traverse the public internet. I would like to use my domain when I'm away and my IP while at home. I believe this can be solved two different ways:
      - Let Plex run on proxynet, and either modify the template to publish the ports or modify the Extra Parameters to include port mappings like -p 32400:32400
      - Let Plex run on host, and hard-code the Unraid IP address in the proxy config: i.e. change "proxy_pass http://$upstream_plex:32400;" to "proxy_pass http://10.10.1.99:32400;"
      I thought the best way to fix this would be to have the linuxserver guys change their template so that it would play nice with the LetsEncrypt container/configurations, which are also owned by them.
  23. This is how I understand how it works for me currently from the internet: plex.mydomain.com has a CNAME record which points to mydomain.duckdns.org which points to my public (mostly static) IP. Then, without getting into specifics of pfsense vs. Unifi, could you explain what it means to "setup DNS resolver". What should my local DNS resolver resolve plex.mydomain.com to? As I understand it while plex is running on the proxynet Docker network it is not accessible at all locally. I do not understand how DNS would solve the fact that it's inaccessible even by IP address.
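For the record, "setup DNS resolver" usually means a split-DNS host override: on the LAN, plex.mydomain.com resolves to the Unraid box's local IP, so traffic still enters through the LetsEncrypt container's published 443 (which is reachable even though Plex on proxynet publishes nothing) but never leaves the network. In dnsmasq syntax -- pfSense and Unbound express the same thing as a "host override" entry, and the IP is the post's example address:

```
# dnsmasq-style split-DNS override: on the LAN, the domain resolves to the
# box running the reverse proxy instead of the public IP
address=/plex.mydomain.com/10.10.1.99
```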
  24. Yeah. Turns out you don't need to rely on anyone else's infrastructure. If you have any machine on your home network which is accessible via SSH, just do something like this: `ssh -L 9000:10.10.1.99:80 home-computer` where 10.10.1.99 is the local IP of your Unraid server and home-computer is something you have set up in ~/.ssh/config to connect to your home machine. Then I can just point my browser at http://localhost:9000 and everything seems to work.
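The same forward can be made permanent in ~/.ssh/config so a bare `ssh home-computer` sets it up; HostName and User below are placeholders, since the post doesn't give them:

```
# ~/.ssh/config sketch -- HostName and User are placeholders
Host home-computer
    HostName <your public address or DDNS name>
    User <you>
    LocalForward 9000 10.10.1.99:80
```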
  25. See attachment... on a scale of 1 to 10, how bad of an idea is this? This exposes Unraid to the internet which is not normally the case. It adds https but is relying solely on whatever protections are built into Unraid. Seems to work but opening terminal windows (either to host or docker containers) doesn't work. Is there a better way to do this? I looked at Serveo but cannot wrap my head around how there is any security there. I think you're trusting someone else to be a man in the middle. unraid.subdomain.conf.sample