
Marshalleq

Members
  • Posts

    968
  • Joined

  • Last visited

Everything posted by Marshalleq

  1. @Jus I think it's important to add that it's extremely unlikely a new beta build or kernel update will lose data. By the time a kernel gets to Unraid, millions of people have probably used it, and anything serious would be known. And in this case, the current Unraid beta has been out for a very long time, so you're good. There's always a risk (hence the disclaimer), but it's a very, very low one. If you don't really understand this stuff, then I'd suggest using the prebuilt kernel mentioned above on any system that's in use.
  2. That's kind of the main reason I'm interested in this: beta builds. I was hoping this would auto-compile against those newer kernels. I like not having to rely on others to get an update.
  3. Actually, the Home Gadget Geeks interview on the front page of Unraid was an interview with Limetech - in it Limetech says they're really considering ZFS (or something along those lines). He pretty much indicated it's in the works, which is very exciting. It would be great to get an official version.
  4. Hey all, just thought I'd post that we are super lucky because we now have two methods of getting ZFS support in Unraid:
     1 - This plugin, which has served us well (and I have to say, the ZFS developer has been both super responsive and amazing).
     2 - A new and interesting alternative via the community kernel method here.
     There isn't any difference in ZFS capability; mainly you just don't have to wait for the developer to update the plugin when a new Unraid version comes out. Obviously that's not really a big deal for ZFS since the developer is super responsive, but I always feel bad for asking! This kernel also builds in support for Nvidia drivers and DVB drivers; Nvidia was at times hard to get updated for the latest Unraid release, so this works around that, which is especially nice for testing against beta versions. I'm running it to try it out and it works well for me so far, thought I should share. Thanks, Marshalleq.
  5. Hey all, just thought I'd post that we are super lucky because we now have two methods of getting Nvidia support in Unraid: this plugin, which has served us well, and now another method via a community kernel here. The two main differences with the new one are:
     1 - You can choose any Nvidia driver version you like (and change it whenever you like, or just set it to latest).
     2 - It supports future Unraid versions (including betas) without having to wait for others to get around to making it work.
     As a bonus it also includes the option of DVB drivers (previously you couldn't run both DVB and Nvidia kernels) and a ZFS option, which is pretty neat. I'm running it and it works well for me, just thought I should share. Thanks, Marshalleq.
  6. Hey, @ich777 thanks so much for the ZFS support. Now when new kernels come out I don't need to wait for any other devs to compile Nvidia / ZFS etc., which is fantastic! And the ZFS works the same as the other ZFS plugin, so I can switch back and forth if I want, but I'm probably just going to use your version now, thanks!
  7. Yes of course, add them to the top - no need to ask! Samba is here: https://github.com/samba-team/samba and the ZFS code that Steini uses is at https://github.com/Steini1984/unRAID6-ZFS ZFS is the main one I'm interested in right now, though. Steini's ZFS is for a plugin, so I'm not sure what that means, but I expect he'd be keen to help; he's great like that. I'm quite sure there's a huge win in this for him (not having to compile the code each time himself) and for us (not having to annoy him and wait for a compile for every beta version). This is a key advantage really: we're usually at the mercy of others when it comes to testing kernels on the RC path. Steini is very good, but others have refused at times. Thanks again, this docker is amazing!
  8. Finally! Amazing! Thank you! I dub thee the official community kernel! Items on my wishlist to include are:
     1 - The awesome ZFS plugin from @steini84 - he has previously published all the build scripts, and while he's always very accommodating about building a new version for us, it would be amazing to link the two.
     2 - NFS updates
     3 - Samba updates
     Tips for anyone first doing this that I didn't know:
     1 - The build process begins as soon as the docker starts (the docker image shows as stopped when the process is finished).
     2 - Use the logs. The whole process status is outlined by watching the logs (the button on the right of the docker).
     3 - The image is built into /mnt/cache/appdata/kernel/output-version by default. You need to copy this to /boot on your USB key manually, and you also need to delete or move it before any subsequent builds.
     4 - There is a backup copied to /mnt/cache/appdata/kernel/backup-version. I would copy this to another drive external to your Unraid box; that way you can easily copy it straight onto the Unraid USB if something goes wrong.
     As a guide, the whole process took about 10 minutes on my Threadripper 1950X (32 threads). The actual compilation of the kernel seemed to be about 1 minute, so clearly there's a lot else going on. Hope that helps someone.
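The copy-to-/boot and backup steps above can be sketched as a small shell helper. This is a hypothetical function, not part of the docker; the output-version and /boot paths come from the post, and the bz* filename pattern is an assumption about what the build produces - check your own output directory first.

```shell
# install_kernel OUTPUT_DIR BOOT_DIR BACKUP_DIR
# Back up the current bz* files, then copy the freshly built ones into place.
install_kernel() {
  out=$1 boot=$2 backup=$3
  mkdir -p "$backup"
  cp "$boot"/bz* "$backup"/ 2>/dev/null || true   # back up the running kernel first
  cp "$out"/bz* "$boot"/                          # then install the new build
  echo "installed $(ls "$out" | grep -c '^bz') files"
}

# Example (paths from the post; version suffix is whatever your build produced):
# install_kernel /mnt/cache/appdata/kernel/output-5.8.x /boot /mnt/disk1/kernel-backup
```

Running it against the real paths on the Unraid box itself is left commented out, since the directory names depend on your build.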
  9. All good and thanks so much! I'm so tired of cloud mail. Finally realised how to get around the lack of PTR on home ISPs. Amazing what happens when you sit down and actually work stuff out! Marshalleq
  10. Thanks - yeah, my original does say IMAP, but I recognise it's easy to overlook; you have a huge job responding to all these requests! Many thanks for the info, will check it out! Marshalleq
  11. I was more thinking along these lines: https://www.nginx.com/resources/wiki/start/topics/examples/imapproxyexample/ Apparently nginx needs to be compiled with special support for the mail directive.
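For anyone landing here later: the `mail` directive the posts above are asking about lives at the top level of nginx.conf, outside the `http` block, which is why a normal site conf doesn't cover it. A minimal sketch follows - it is not taken from the linked example, the hostname, ports and auth endpoint are placeholders, and it only works if nginx was compiled with the mail modules (which is the catch mentioned above).

```nginx
# Hypothetical mail-proxy fragment; requires nginx built with the mail modules.
mail {
    server_name mail.example.com;                      # placeholder hostname
    auth_http   http://127.0.0.1:9000/auth;            # your auth service (assumption)

    server {
        listen    143;
        protocol  imap;
    }
    server {
        listen    587;
        protocol  smtp;
        xclient   off;
    }
}
```

The auth_http endpoint is the part you have to supply yourself: nginx asks it which backend to proxy each login to.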
  12. Hi all, does anyone know if the Let's Encrypt container supports the mail directive? I'm trying to use it to proxy IMAP and SMTP. Many thanks.
  13. Yeah, I'm finding I'm just outgrowing the Unraid docker GUI. I need to create multi-image containers and such. Trying to install something as 5 separate containers when Unraid has little ability to offer any dependency mapping is a nightmare, especially during updates. As far as I know, it can't work. Docker compose apparently solves it. But for some reason it's been pulled from Nerd Pack; I'd certainly vote to have it back rather than have to use some bodgy script to keep it running.
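To illustrate the dependency mapping being asked for above, here's a minimal compose file. It's a hypothetical example - the service names, images and ports are placeholders - but `depends_on` is exactly the start-ordering the Unraid GUI can't express across separate containers.

```yaml
# Hypothetical two-service stack; images and ports are placeholders.
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: changeme
  app:
    image: myapp:latest      # hypothetical application image
    depends_on:
      - db                   # compose starts db before app
    ports:
      - "8080:8080"
```

`docker compose up -d` then brings both up in order, and updates treat the pair as one unit.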
  14. It does take a little while for that change to take effect. Basically, with the cloud on, Cloudflare proxies its own IP in front of your real IP, so if you ping your domain it will come back with a Cloudflare address, whereas with the cloud off a ping will come back with your real IP address. It would pay to test that on your client before confirming it doesn't work. I assume it's working internally OK? Also, I strongly recommend changing Unraid's 80 and 443 ports so that Let's Encrypt can use them. Things just work better and are more consistent, particularly when you're internal. Failing that, I'd suggest you share a little more of your config.
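The ping test above can be scripted. `check_proxy` is a hypothetical helper that just compares the DNS answer with the WAN address you already know; the sample IPs are documentation placeholders, and the live `dig` lookup is left commented out since the domain is yours to fill in.

```shell
# check_proxy RESOLVED_IP REAL_WAN_IP
# If the DNS answer differs from your known WAN IP, the orange cloud is still on.
check_proxy() {
  if [ "$1" = "$2" ]; then echo "direct"; else echo "proxied"; fi
}

# resolved=$(dig +short yourdomain.com | head -n1)   # live lookup for your domain
check_proxy "104.21.5.9"  "203.0.113.7"   # Cloudflare-style answer
check_proxy "203.0.113.7" "203.0.113.7"   # real IP answer
```

Remember the caveat from the post: DNS changes take a while to propagate, so "proxied" can linger for a bit after you turn the cloud off.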
  15. You gotta disable Cloudflare proxy (the cloud next to your domain). And don't use cnames.
  16. @dlandon thanks for your answer. I get where you're going with it, but in my case UD wasn't mounting the NFS share, so at first your response seems unhelpful, though technically it's correct. And this is the problem throughout this thread when this question is asked: no-one ever really answers it. So after my research I thought I'd help others by answering it here.
     Main point: as far as I know, NFS doesn't support username/password. Instead it will mount anything and handle permissions via a UID/GID match between the local and remote accounts. There seem to be two cases where this becomes impossible though:
     1 - When a 3rd party has assigned you an NFS share and hasn't used the mapall option.
     2 - When a 3rd party has assigned you an NFS share and hasn't shared which UID it's under.
     So in summary, a functioning NFS setup obviously relies on a functioning network and a correctly set up NFS export at the other end, which the 3rd party has now resolved for me; I effectively had both of those problems. And just to add one for good measure: even though I confirmed the NFS client connects on the firewall, if I connect across my firewall from a client I get "server <IPADDRESS> requires stronger authentication". As far as I know the firewall is completely opened up to this host and I am still to resolve this one, but it just goes to show that the error messages aren't always very accurate. Hope it's helpful to someone. Marshalleq
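The UID/GID point above can be shown locally without an NFS server: ownership is a number, not a name, and access follows a match between the file's owner ID and the caller's ID - which is exactly what classic NFS does across the wire. `uid_match` is a hypothetical helper for the demonstration.

```shell
# uid_match FILE - does the calling user's numeric UID match the file's owner UID?
# Same comparison an NFS server makes; names never enter into it.
uid_match() {
  owner=$(stat -c %u "$1" 2>/dev/null || stat -f %u "$1")  # GNU stat, else BSD stat
  if [ "$owner" = "$(id -u)" ]; then
    echo "uid match"
  else
    echo "uid mismatch"
  fi
}

f=$(mktemp)      # a file you create is owned by your own UID
uid_match "$f"
rm -f "$f"
```

On a mounted share, a "mismatch" against the export's owner UID is what shows up as mysterious permission errors, even though the mount itself succeeded.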
  17. Hi all, I'm trying to connect to a remote NFS share that has a specific username / password. I've seen this question asked multiple times, but each time someone says they don't know the answer, or completely ignore the question. I've been scouring this thread for 90 minutes so far and haven't found any answer. So, does anyone know how to use the unassigned devices plugin, to specify connecting to NFS with a username / password? Of course I could use fstab, but I'd prefer not, I have no idea if that survives a reboot or not either. And of course that would mean I have a plain text password showing. Many thanks, Marshalleq
  18. Oh, so this is not that? I guess I got mixed up somehow. My apologies.
  19. Hi all, I'm trying to connect to a remote NFS share that has a specific username / password. I've seen this question asked multiple times, but each time someone says they don't know the answer, or completely ignore the question. I've been scouring this thread for 90 minutes so far and haven't found any answer. So, does anyone know how to use the unassigned devices plugin, to specify connecting to NFS with a username / password? Of course I could use fstab, but I'd prefer not, I have no idea if that survives a reboot or not either. And of course that would mean I have a plain text password showing. Many thanks, Marshalleq
  20. So, giving a little back - here's how to get GitLab working with the Unraid letsencrypt/nginx reverse proxy and SSL. Obviously the Let's Encrypt container is covered elsewhere, so I'm not going into that. I wouldn't be surprised to find there are a few extra things to configure in NGINX to get everything working better, but anyway: once you have a domain, change the following settings in gitlab.rb and reconfigure, then point a standard nginx proxy config at it on port 80. There's an official proxy config to base it on here.
     Configure for reverse proxy by editing gitlab.rb in your docker config location. Find the following values and change them as below:
     nginx['listen_port'] = 80
     nginx['listen_https'] = false
     external_url 'https://gitlab.yourdomain.com'
     Add to the existing trusted proxies so that the logging doesn't all come from a single IP address (example):
     gitlab_rails['trusted_proxies'] = ['192.168.1.0/24', '172.18.0.0/24']
     Reconfigure as per the standard procedure, e.g.:
     # docker exec -it GitLab-CE bash
     # gitlab-ctl reconfigure
     That's all that's needed to get the page to come up, anyway. I assume Mattermost will be similar - hoping I can easily rename that to chat.domain.com
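Since gitlab.rb is plain Ruby, a later assignment should override an earlier one, so the edits above can simply be appended rather than hunted down in the file. `enable_reverse_proxy` is a hypothetical helper built on that assumption; the gitlab.rb path and the GitLab-CE container name in the commented usage are from the post and your setup may differ.

```shell
# enable_reverse_proxy GITLAB_RB_FILE
# Append the reverse-proxy settings from the post; last assignment wins in Ruby.
enable_reverse_proxy() {
  cat >> "$1" <<'EOF'
external_url 'https://gitlab.yourdomain.com'
nginx['listen_port'] = 80
nginx['listen_https'] = false
gitlab_rails['trusted_proxies'] = ['192.168.1.0/24', '172.18.0.0/24']
EOF
}

# Usage on the Unraid box (path is an assumption - check your appdata layout):
# enable_reverse_proxy /mnt/cache/appdata/gitlab-ce/config/gitlab.rb
# docker exec -it GitLab-CE gitlab-ctl reconfigure
```

Appending keeps the original (mostly commented) gitlab.rb intact, which makes it easy to see later what you changed.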
  21. Also, I see the Mattermost page has a good NGINX config for it; however, GitLab comes with NGINX built in. Does anyone have a functioning NGINX setup using the Let's Encrypt container? It's a lot to learn when you don't know how the internals of an omnibus container work. Further, I've noted that as soon as I add the external URL as https, it enables its own https stack, which I don't want. I assume I just set the external URL to plain http and let the Let's Encrypt container proxy it, like with everything else. However, I'd bet this external URL defines all sorts of outgoing addresses that need it to be accurate, and I'm wondering if it's even possible to do this. Here was me thinking this was going to be easy. Edit: Seems like I can use this - disable the bundled NGINX by setting, in /etc/gitlab/gitlab.rb: nginx['enable'] = false. Taken from here, which has links to external web server settings. I guess I'll give this a go.
  22. @frakman1 seems like we're both trying to do this, maybe we can help each other. I do run the awesome letsencrypt/nginx container for https, but I figure I should at least be able to get it to work on just an IP address; however, I can't even get the package to start after editing gitlab.rb and starting the reconfigure. Does yours start after adding it? I note that Unraid says it's started, but after a page refresh it's actually stopped - something to do with crashes not updating the GUI, I guess.