
Shobo

Members
  • Posts: 15
  • Joined
  • Last visited

Posts posted by Shobo

  1. I'm not quite sure if this is the appropriate place to post this, but thought it was as good as any to start.

    I'm trying to get this docker to work alongside a letsencrypt reverse proxy docker.

     

    I'm able to get everything working fine separately.

    Letsencrypt reverse proxy to a docker works great.

    Setting the docker's network to the privoxyvpn container works great.

    However, when I put them together, I only get 502 Bad Gateway errors when accessing through the reverse proxy (accessing through the local IP still works).

     

    Not sure what I'm missing.

    I've tried googling all over the place and have found posts from users saying they got it working, but they never explain what they did to get it to work.

     

    Any obvious steps I may have missed?

  2. I'm not quite sure if this is the appropriate place to post this, but thought it was as good as any to start.

    I'm trying to get this docker to work alongside a privoxyvpn docker.

     

    I'm able to get everything working fine separately.

    Letsencrypt reverse proxy to a docker works great.

    Setting the docker's network to the privoxyvpn container works great.

    However, when I put them together, I only get 502 Bad Gateway errors when accessing through the reverse proxy (accessing through the local IP still works).

     

    Not sure what I'm missing.

    I've tried googling all over the place and have found posts from users saying they got it working, but they never explain what they did to get it to work.

     

    Any obvious steps I may have missed?
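For anyone hitting the same 502s: one likely cause (my assumption, not confirmed in the thread) is that once a container is set to route through the VPN container's network (`--net=container:...`), it no longer has a network identity of its own — its port has to be published on the VPN container, and the reverse proxy has to target the VPN container's name, not the app's. A hedged nginx sketch; the container name `binhex-privoxyvpn`, port `8080`, and hostname are all placeholders:

```nginx
# Hypothetical proxy config — container name, port, and server_name are assumptions.
# The letsencrypt container and the VPN container must also sit on the same
# user-defined Docker network for the container name below to resolve.
server {
    listen 443 ssl;
    server_name app.example.com;

    location / {
        # The app shares the VPN container's network namespace, so its port
        # is reachable via the VPN container's name, not the app's own name.
        proxy_pass http://binhex-privoxyvpn:8080;
    }
}
```

The matching port mapping (e.g. `-p 8080:8080`) would then go on the privoxyvpn container itself rather than on the app container.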

  3. On 6/5/2020 at 2:20 PM, draeh said:

    This ^^^, and the need for granular permissions is why I use NFS over Virt and SMB.

     

    I can't wait for the follow-up blog post where the discussion turns to the 'stale file handle' issue seen on some VMs. For my most active VM it seems to go in cycles. It will run without issue for weeks and then sporadically it will have a day where it happens repeatedly. So far I haven't been able to pin down what is happening differently on those days.

     

    I get stale file handles just about daily. It's a bit of a pain but not the end of the world for my usage. 

    I use it as a development server so I just reboot or remount at the start of each day to keep it fresh. 

     

    I've tried a handful of things but nothing has helped.
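The start-of-each-day remount described above can be scripted rather than done by hand; a minimal sketch, assuming the share is NFS-mounted at a made-up mountpoint of /mnt/projects:

```
# crontab entry (hypothetical mountpoint and schedule): lazy-unmount and
# remount the share every morning at 06:00 to clear stale file handles
0 6 * * * umount -l /mnt/projects; mount /mnt/projects
```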

  4. I have an Ubuntu VM I want to use for development but I need the files to also be accessible outside of the VM.

     

    I have an Unraid user share set up with all of the project files and want to mount it within the VM.

     

    My first attempt was with the default 9p mount from the webui.
    This was painfully slow. Just awful.

     

    I then tried mounting manually with cifs.
    It was a bit tricky to get working because some npm packages I'm using require symlinks, but I was able to work around that with the mfsymlinks mount option.
    This was faster, but still slower than I'd like for developing on every single day.
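For reference, a cifs mount with the mfsymlinks workaround can be pinned in fstab; a sketch in which the server name, share, mountpoint, and credentials path are all assumptions:

```
# /etc/fstab — server, share, mountpoint, and credentials file are made-up examples
//tower/projects  /mnt/projects  cifs  credentials=/root/.smbcredentials,mfsymlinks,uid=1000,gid=1000  0  0
```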

    I just can't seem to find an efficient way to have data that can be accessed from clients on the network (like user shares) but also accessed within a VM running on Unraid.

     

    Would appreciate any help.

  5. Well then, that is great to know. 

    I'm quite new to Unraid so wasn't sure there was any way to do this. 

     

    1 hour ago, testdasi said:

    You would have better performance mounting smb shares instead. The best performance with NVMe is to create a custom smb config to access /mnt/cache/share (to bypass shfs).

    Absolutely no idea how to do this, but it'll give me a project to read into. 
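If I understand the suggestion, the "custom smb config" would be a small smb.conf fragment (on Unraid it can go under Settings → SMB → SMB Extras) exporting the cache path directly so clients skip the shfs layer. A sketch — the share name, path, and user are assumptions:

```
[projects-cache]
    path = /mnt/cache/projects
    browseable = yes
    writeable = yes
    valid users = shobo
```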

     

    1 hour ago, testdasi said:

    Alternatively mounting the NVMe with Unassigned Device will also bypass shfs and give you best performance.

    I can't do this because the NVMe is my cache drive.

     

    Thanks a bunch.

  6. I've done some additional tests and believe my issue doesn't have anything to do with the cache, but instead with mounting shares within my VM.

     

    First I tried various other shares and they all ran at approximately the same speed. 

     

    Then I made a copy of my VM disk, moved it into a share that also holds the project (on the array), and set the VM to mount that same share.

    Booted it up and tested compiling from the VM's filesystem - it was snappy as I'd expected.

    I then tried compiling from the mounted share (which again, is the same share the VM is running on) and it was upwards of 20x slower, like all previous tests.

     

    This makes me believe it isn't the cache at all but something with mounting Unraid shares on my VM that's causing the issue.

     

     

  7. I have a 500GB NVMe set up for my cache drive (btrfs) and I'm experiencing some extremely poor performance. 

     

    I first noticed it when I set up a development share on the cache drive for a project. I have a Linux VM set up (on a separate SSD mounted with Unassigned Devices) that mounts the share, and the performance of both serving a website and compiling node.js was abysmal.

     

    For testing purposes I copied the data from the share to the VM's local filesystem and the compiling performance was approximately 20x faster. Night and day.

    I even replaced the NVMe drive to rule out faulty hardware, but that made no difference.

     

    Now I'm still very much a beginner when it comes to Unraid - so it's very possible I just have something configured incorrectly or I'm just going about this all wrong. 

    Any help would be greatly appreciated.
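The local-versus-share gap described above can be reproduced with a quick small-file benchmark — the kind of workload node.js builds generate. A hedged sketch; both paths are assumptions, and the share line is left commented out until a share is actually mounted there:

```shell
#!/bin/sh
# Hedged micro-benchmark (paths are assumptions): time creating many small
# files in a given directory, reporting elapsed milliseconds.
bench() {
    dir=$1
    mkdir -p "$dir" || return 1
    start=$(date +%s%N)            # nanoseconds (GNU date)
    i=0
    while [ "$i" -lt 500 ]; do
        echo x > "$dir/f$i"
        i=$((i + 1))
    done
    end=$(date +%s%N)
    rm -rf "$dir"
    echo $(( (end - start) / 1000000 ))   # elapsed ms
}

local_ms=$(bench /tmp/bench.local)
echo "guest-local filesystem: ${local_ms} ms"
# Run the same against the mounted share to compare, e.g.:
# share_ms=$(bench /mnt/projects/bench.share)
```

Comparing the two numbers isolates whether the slowdown is the storage itself or the share protocol in between.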
