Posts posted by murkus

  1. This script has been successfully used with Unraid 6.12.4; change the port numbers as you see fit.
     

    DEFAULT_RPC="/etc/default/rpc"
    STATD_PORT=950
    LOCKD_PORT=4045
    MOUNTD_PORT=635
    
    nfs_config() (
        set -euo pipefail
        sed -i '
        s/^#RPC_MOUNTD_PORT=.*/RPC_MOUNTD_PORT='$MOUNTD_PORT'/;
        s/^#RPC_STATD_PORT=.*/RPC_STATD_PORT='$STATD_PORT'/;
        s/^#LOCKD_TCP_PORT=.*/LOCKD_TCP_PORT='$LOCKD_PORT'/;
        s/^#LOCKD_UDP_PORT=.*/LOCKD_UDP_PORT='$LOCKD_PORT'/;
        ' ${DEFAULT_RPC}
        /etc/rc.d/rc.rpc restart
        sleep 1
        /etc/rc.d/rc.nfsd restart
    )
    
    nfs_config
    if [[ $? -ne 0 ]]; then
        /usr/local/emhttp/webGui/scripts/notify -i warning -s "NFS config failed"
    fi
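    
    To verify, check which ports the daemons registered; the output lines below are illustrative:
    
    # confirm mountd, statd and lockd now sit on the fixed ports
    rpcinfo -p localhost | grep -E 'mountd|status|nlockmgr'
    #    100005   3   tcp    635  mountd     (RPC_MOUNTD_PORT)
    #    100024   1   udp    950  status     (RPC_STATD_PORT)
    #    100021   4   tcp   4045  nlockmgr   (LOCKD_TCP_PORT)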

     

  2. On 1/21/2024 at 11:58 AM, ideasman said:

    Thanks heaps for this. I've followed the guide here: https://unraid.net/de/blog/deploying-an-unraid-nfs-server, updated the script with the changes above, and forwarded the required ports between my VLANs in OPNsense, but I can see the client trying to access NFS on the Unraid server via ports outside the specified range (38983, 34691, etc.).

     

    Any tips or advice?

     

    Check the ports announced by the portmapper on Unraid with rpcinfo.

     

    The ports are shown as two octets; the port number is 256 * octet1 + octet2.
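    
    For example, take a hypothetical rpcinfo line in universal-address form:
    
    # 100021  4  tcp  0.0.0.0.15.205  nlockmgr
    # the last two octets encode the port: 256 * 15 + 205 = 4045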

     

    The script is outdated though. I will post an updated version.

  3. 2 minutes ago, dlandon said:

    The mount/unmount status is showing on the buttons now, where before they weren't.  When there is a lot of activity on UD with page refreshes (i.e. clicking buttons very quickly), the 'Mount' buttons can appear to be not working because button clicks are blocked while the page refreshes.  I also found the button clickable area to be somewhat of an issue and I've been trying to adjust that.  So the issue you are seeing is probably a combination of factors.  For the moment, try not to have multiple GUI sessions open to the Unraid 'Main' tab.  UD is updating even when sitting on any 'Main' tab.

     

    I'm pursuing some additional ideas on this.

     

    Remote servers are no longer pinged. UD checks whether the remote server has the appropriate port open for either SMB or NFS. If UD finds the port open, the server is considered on-line.

     

    I don't understand "changed again". Since day one, UD has pinged remote servers.
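    
    For reference, a manual equivalent of that check, assuming the standard ports (SMB 445, NFS 2049) and a placeholder hostname:
    
    # a server counts as on-line if its SMB or NFS port accepts a TCP connection
    nc -z -w 2 filer.sub.net 445  && echo "SMB reachable"
    nc -z -w 2 filer.sub.net 2049 && echo "NFS reachable"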

     

    It may well be that UD was pinging servers, but if servers didn't respond to the ECHO request, this had no consequences in the UI; the servers were not shown as "down". Later versions then marked a server as down when it was not reachable, and the mount wouldn't work. At that point I not only had to allow SMB and NFS through the firewall, but also ECHO requests. Now I have realized that missing ECHO responses no longer have any consequences in the UI, just as in the early versions.

     

    This is why I said "again".

     

    This is my perception and experience. You may disagree. Just explaining.

     

     

  4. Using version 2024.01.17, I am experiencing two things:

     

    (1) Some Mount buttons are somehow "blocked" or unresponsive (while shown in orange) for a while, and then suddenly become responsive. The group of unresponsive buttons changes over time, and I have not seen a pattern yet. This did not happen in earlier versions, although I cannot say with which version this behavior was introduced.

     

    (2) In earlier versions, file servers were shown with an "off" or grey indicator if they couldn't be pinged by the plugin. This doesn't seem to be the case with the version mentioned above. Has this behavior been changed again?



     

  5. On 1/10/2024 at 8:33 PM, VRx said:

    There is an additional change: there is currently no need to define the "Apache log path".
    Web interface logs now go to the container (docker) logs.

     

    Yes, I realized that. I have redirected the docker logs on all my hosts to a central syslog. The web UI logs are not really of interest to me in the central log repo, and I would need to suppress them. You probably have a reason why you think it is better to have them in the docker logs. I personally liked it the way it was before, but that's just me.
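    
    For context, a minimal sketch of that redirection, assuming a reachable syslog host (loghost.example and the port are placeholders):
    
    # /etc/docker/daemon.json: send all container logs to a central syslog
    cat > /etc/docker/daemon.json <<'EOF'
    {
      "log-driver": "syslog",
      "log-opts": { "syslog-address": "udp://loghost.example:514" }
    }
    EOF
    # restart the Docker daemon afterwards so new containers pick this up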

  6. @VRx

    Is there a chance that the pg_dump binary gets upgraded to 16.1 in the Bacula 13 Postgres containers?

     

    Currently, the catalog backup fails for everyone using a Postgres version higher than 14 with Bacula 13.

     

    07-Jan 23:45 bacula-dir JobId 18092: BeforeJob: pg_dump: error: server version: 16.1 (Debian 16.1-1.pgdg120+1); pg_dump version: 14.10 (Debian 14.10-1.pgdg110+1)
    07-Jan 23:45 bacula-dir JobId 18092: BeforeJob: pg_dump: error: aborting because of server version mismatch
    07-Jan 23:45 bacula-dir JobId 18092: Error: Runscript: BeforeJob returned non-zero status=1. ERR=Child exited with code 1
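    
    A quick way to see the mismatch from inside the container (hostname and credentials are placeholders):
    
    pg_dump --version                                        # client, e.g. 14.10
    psql -h db.example -U bacula -c 'SHOW server_version;'   # server, e.g. 16.1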

  7. If someone wants to use a more recent version of Postgres than 13, Baculum 11 (as provided in this container) will throw an error. Baculum 13 contains a fix:

     

    https://gitlab.bacula.org/bacula-community-edition/bacula-community/-/commit/e1389d3caf89875c0009930237ba59a1133f6cd6

     

    This fix also works with Baculum 11. You may manually edit the file in the container.
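    
    A generic way to hot-patch one file in a running container; the container name and file path are placeholders, and the change is lost when the container is recreated:
    
    docker cp baculum:/path/to/file ./file    # copy the file out of the container
    # ...apply the change from the commit above to ./file...
    docker cp ./file baculum:/path/to/file    # copy the patched file back
    docker restart baculum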

     

    Ideally, the fix would be incorporated by @VRx into updates of the container image.

     

    Just my 2 cents.

     

  8. I am seeing this for the first time today (I checked back through the last six months), although there are no new SMB exports or mounts, and Unraid was updated to the latest stable version a while ago.

     

    The time it started to appear in the logs seems to coincide with when a MacBook started backing up to Unraid using Time Machine (which uses an SMB disk share). The MacBook does this once a day, so I wonder why I didn't see this log entry earlier.

     

    The next day I saw that the flurry of these messages ended when a backup of an SMB share finished; that share is hosted on a TrueNAS VM and mounted on Unraid via Unassigned Devices in /mnt/remotes. This would mean the messages are actually coming from the SMB client on Unraid.

     

    No idea why this would happen now and not earlier, and no idea how to solve it.

     

  9. @VRx What is your strategy for providing images for the available major versions? Are you waiting for the second-next major version to appear before working on the next one? You currently work with 11; 13 has been available for some time, and 15 betas are being released. Will you look into 13 once 15 has been released (as non-beta)?

  10. On 9/21/2023 at 4:11 PM, dlandon said:

    UD does not make configuration file changes.  Removing and then re-adding a remote share is not that difficult.

    Breaking changes are generally unwelcome. Nobody said it is difficult; it is just not good UX, as it becomes tedious with a larger number of shares being mounted (e.g. for backup reasons). We may agree to differ in opinion.

  11. The torprivoxy container recently started to flood my log with these:

    [warn] Socks version 71 not recognized. (This port is not an HTTP proxy; did you want to use HTTPTunnelPort?)

     

    What does this mean?

    Why does it happen?

    What needs to be fixed?

     

  12. Yes, it DOES work. But by your rationale, you'd use the "search" definition and remove all the local domains mentioned there. You may then end up with ambiguous hostnames if the same host name exists in more than one of these domains.

     

    I personally don't think it brings a substantial performance improvement; you can prove me wrong, of course. I would still prefer the FQDN to be used if the user chooses to enter one. At the least, it should not refuse to use the FQDN if that is in the config file, as this is how it used to work for me, and I had to edit the config file manually... (faster than deleting and creating the shares again).

     

    The BUG part for me was that, after a reboot, UA refused to recognize the existing shares. It should at least have corrected those entries on its own instead of just saying nay. It was suboptimal UX.

  13. I daresay there is again a bug in the SMB config file handling (some may call it a feature). After rebooting the Unraid server, UA found all SMB and NFS share mounts that are in the same DNS domain to be invalid. The cause seems to be that the existing entries contain full domain names (FILER.SUB.NET), whereas when I create a new SMB/NFS mount, it will just use the hostname (FILER), even if I specify the server by its full domain name. I would actually prefer UA to use the exact name provided by the user instead of insisting on throwing away the rest of the FQDN.

     

    Note that I wrote this is only happening for servers that are in the same DNS domain. What I mean:

     

    UNRAID.SUB.NET -> DNS domain is SUB.NET
    
    FILER.SUB.NET -> is in the same DNS domain; UA wants it to be just FILER
    
    FILER2.OTHER.NET -> is NOT in the same DNS domain; UA accepts it as FILER2.OTHER.NET

     

    Does this have to do with the domain setting in /etc/resolv.conf somehow?

    Here, for the sake of the example:

    domain sub.net

    search sub.net other.net other2.net
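    
    A sketch of the suspected normalization, assuming the plugin strips the local domain taken from the resolv.conf "domain" entry (hypothetical; not the actual UA code):
    
    DOMAIN=$(awk '/^domain/ {print $2}' /etc/resolv.conf)    # -> sub.net
    # strip the local DNS domain case-insensitively, keeping only the host name
    echo "FILER.SUB.NET"    | sed -E "s/\.${DOMAIN}\$//I"    # -> FILER
    echo "FILER2.OTHER.NET" | sed -E "s/\.${DOMAIN}\$//I"    # -> unchanged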

     
