Posts posted by kjames2001

  1. On 8/3/2023 at 7:59 AM, KluthR said:

    Please try this:

     

    Open /usr/local/emhttp/plugins/appdata.backup/pages/content/settings.php. Scroll down to line 214, which says "<form id=abSettingsForm>". Just after that line, insert this:

    <input type="hidden" name="csrf_token" value="<?=_var($var,'csrf_token')?>">

    Save it, then reload the settings page and try to save. What happens now?

    Sorry for the late reply, just saw your answer.

     

    Just tried it, and everything is working now. Thanks a lot!

     

    Edit: Some settings didn't stick, like the individual app settings (don't stop container).

    Edit 2: Only XML files are in the backup folder. I know this is a known issue, just confirming it. Scratch that, it's OK.
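
    For reference, the quoted fix lands just after the form tag, so the patched region of settings.php should look roughly like this (a sketch; the form line and the hidden input are from the quote above, the comment is mine):

    <form id=abSettingsForm>
    <!-- inserted line: submits Unraid's CSRF token along with the form -->
    <input type="hidden" name="csrf_token" value="<?=_var($var,'csrf_token')?>">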

  2. On 7/25/2023 at 8:02 AM, KluthR said:

    Interesting, since I got another user (or was it you?) with the same issue recently. CSRF tokens are handled by Unraid as far as I know. I have to check this later.

    Kindly let me know if I can configure the plugin somehow, like a config.yml file somewhere? And please point me to a sample config, so that I can use it to back up my appdata. Feeling very insecure right now.

  3. 5 hours ago, KluthR said:

    Interesting, since I got another user (or was it you?) with the same issue recently. CSRF tokens are handled by Unraid as far as I know. I have to check this later.

    Cool, thanks. Looking forward to your update.

     

    NB: No, it's not me.

  4. Hi,

     

    I have just updated to the new plugin, but I cannot save any settings.

     

    Whenever I click the save button, I just get a blank page. If I check the plugin again, the settings I set were not saved.

     

    Please help. Thanks!

     

    Edit: My appdata folder is not on the cache drive; it's under /mnt/docker/appdata.

     

    When I try to change it in the plugin settings (before I can even click the save button), the page just redirects to a blank page again.

  5. On 4/24/2022 at 10:54 AM, Joshndroid said:

    Update 24/4/2022

    Have added Cypht to my Repo.

    I have been using Thunderbird in Docker to try and keep everything central; however, it is a bit clunky.

    I was searching around for an all in one IMAP webmail program.

    A lot of them are tied to a single webserver which is not what I wanted.

    There is only a Docker Compose file for this, and it appears to bundle a MariaDB in with it.

    With this container I managed to decouple it from MariaDB so that you can use your own... which of course needs to be set up before you are able to use this one correctly.

    I can't say I have had much time with this one.

    Just installed your Cypht Docker, thanks for the app!

     

    Add accounts using manual SMTP settings, because apparently quick add doesn't support 2FA.

     

    All added accounts passed authentication when pressing Test.

     

    However, I can't seem to see any of my mails. Under Everything and Unread, they all say "You don't have any data sources assigned to this page", and in the top right corner it also displays "last 6 weeks - 0 sources @ 100 each - 0 total".

     

    The log shows: An error occured when creating user "USERNAME"

     

    and: CRIT Supervisor is running as root.  Privileges were not dropped because no user is specified in the config file.  If you intend to run as root, you can set user=root in the config file to avoid this message.

     

    Kindly advise what to do to fix it. Thanks in advance!

  6. Thanks!

     

    I noticed that this official Docker points to version 6 of Seafile, and there's a bug preventing integration of OnlyOffice. This bug is fixed in version 8.0.3, and the latest Community Edition version is 9+. Is there any way we can have a Docker image for the latest CE version?

  7. On 6/26/2019 at 3:21 PM, PCwhale said:

    Sorry if this is a bit late but it is very possible, you can use the mount command that looks like this for unassigned devices:


    mount -t cifs //192.168.0.37/D /mnt/disks -o username=krys,password=*****,dir_mode=0777,file_mode=0777

    and then use the User Scripts plugin to automate running the script. I don't actually know how you can set a delay per se, but if you know your reboot schedule then using a custom cron time works just fine. I put my computer to rest every night automatically to save power, and setting a cron time 5-10 minutes after the normal boot time works just fine for all my 20-30 Docker instances. It's annoying that such a simple feature is not available on Unraid, but this is how I solved my issue with internal VMs hosting their own SMB shares for Dockers.

    Edit: Running the command over and over again does not cause any harm either, so you can just run it every 5 minutes if you really wanted to. Not sure if it temporarily breaks storage for Dockers, but in all the testing I have done the server picks up wherever it left off a second or two after realizing the storage location is available again.

    Thank you so much for this solution!

     

    For anyone who's looking for this and doesn't know it yet:

     

    You can schedule the script to run every time the array starts; just add 'sleep 180' before your command in the script to delay it 3 minutes after boot.
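
    A minimal example of such a User Scripts entry, reusing the mount command from the quote above (the IP, share name, and credentials are placeholders from that post; adjust them for your setup):

    #!/bin/bash
    # Wait 3 minutes after array start so the machine hosting the SMB share is up.
    sleep 180
    # Mount command from the quoted post; replace host, share, and credentials.
    mount -t cifs //192.168.0.37/D /mnt/disks -o username=krys,password=*****,dir_mode=0777,file_mode=0777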

  8. 10 minutes ago, guillelopez said:

     

    On Sublime I was using UTF-8, but yes, after restarting ContainerNursery the same config file works. It's just when saving it with the container running.

    I tried to edit and save it from the CLI with nano on the Unraid system, and that works fine as it should; no need to restart the container.

    So it's definitely something related to the text editor. I would like to know what text editor @kjames2001 uses.

     

    But for me this is not a big deal. Just restart the container or use nano on the CLI.

     

    1 hour ago, Echolot said:

    I just tried this using VSCode as the editor on Mac with the container running, and I can't reproduce this. Do you maybe use a different encoding than UTF-8? Does the same config file that produces this error work after a restart of the container?

     

    7 hours ago, Echolot said:

    I sadly can't recreate your issue; config reloading works just fine on my machines... Is there any more detail you can provide?

    I'm a noob, so I still use Notepad. lol

     

    Yeah, not a big deal, but it was super convenient, because as long as the file is saved after any change, the container updates its config.

  9. 17 hours ago, Echolot said:

    @guillelopez The fix for the bug you discovered, as well as the configurable listening port update, was just released two hours ago.

     

    I also added the guide I mentioned to the first post. Let me know if something's missing.

    Just checked out the update and fixed the config file. Everything works properly, except now the container won't auto-update when there's a config change. It used to auto-update the config when any changes were made.

  10. 10 hours ago, guillelopez said:

     

    That did the trick. I used a bridge network on ContainerNursery and mapped 80 to 8080, then used my Unraid IP as proxyHost. But I also needed to change the domain in config.yml to use http://filebrowser.rack:8080; with just http://filebrowser.rack there, ContainerNursery told me in the browser: "Proxy configuration is missing for http://filebrowser.rack:8080".

     

    So my config.yml looks like this:

    proxyHosts:
      - domain: filebrowser.rack:8080
        containerName: FileBrowser
        proxyHost: 192.168.1.9
        proxyPort: 85
        timeoutSeconds: 1800
      - domain: krusader.rack:8080
        containerName: Krusader
        proxyHost: 192.168.1.9
        proxyPort: 6080
        timeoutSeconds: 1800

     

    And ContainerNursery config:

    [screenshot: ContainerNursery Docker template configuration]

     

    Let me know if you have time to do more tests with other configurations, so we can find a perfect setup for Unraid.

    Thanks for your help and your really nice app.

    Thanks a lot for your tips. I use Pi-hole, so I just add the host names under Local DNS.
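
    For anyone doing the same, the entries would look roughly like this (a sketch assuming Pi-hole's Local DNS Records, stored in /etc/pihole/custom.list on Pi-hole v5+; 192.168.1.9 is the Unraid IP from the quoted config):

    # /etc/pihole/custom.list - hosts-file format: one "IP hostname" pair per line
    192.168.1.9 filebrowser.rack
    192.168.1.9 krusader.rack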

  11. 16 minutes ago, ich777 said:

    This should not be the case; if you enable host access, all the containers on br0 can "speak" to unRAID, and they should still work fine. I can think of no reason why they wouldn't work anymore with host access enabled.

    The other way around: if you disable host access, the containers on br0 can't talk to unRAID...

     

    Are you trying to get PiHole running?

    If so, you can also get PiHole running in host mode with the IP from unRAID, but please be aware that you need to make some tweaks to the Docker template and a few other modifications.

    It's cloudflared and ZeroTier that aren't working. I will try to change that setting again later; I can't stop Docker now.

     

    Edit: Just retried, and now everything works just fine. Thanks!

  12. Just now, ich777 said:

    Do you have CA Backup installed? Keep in mind, every time you restart PiHole, or strictly speaking stop it, the exporter will also stop.

     

    If you want to work around this, you have to change the setting for PiHole in CA Backup so that it doesn't stop (how to do it is in the tutorial).

    I have, and I set it up like in the tutorial, and it works. But now some of my Dockers stopped working after enabling host access to custom networks; I disabled this and they work again. Any idea how to fix this?

  13. Hi ich777, thanks for your hard work!

     

    I tried your guide and set up Prometheus and the Prometheus node exporter without any issues.

     

    However, when I set up the PiHole plugin, enter my PiHole IP, port 80, and API token, and click Change and Start, it runs for a while and then stops automatically.

     

    I get the following error:

     

    2021/07/11 01:30:52 Starting HTTP server
    2021/07/11 01:31:02 An error has occured during retrieving PI-Hole statistics Get "http://192.168.1.46:281/admin/api.php?summaryRaw&overTimeData&topItems&recentItems&getQueryTypes&getForwardDestinations&getQuerySources&jsonForceObject&auth=*****************************************": dial tcp 192.168.1.46:281: connect: connection refused
    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x40 pc=0x811daa]
    
    goroutine 37 [running]:
    github.com/eko/pihole-exporter/internal/pihole.(*Client).getStatistics(0xc0001663f0, 0xc000042790)
    /root/prometheus_pihole_exporter/pihole-exporter/internal/pihole/client.go:163 +0x2ea
    github.com/eko/pihole-exporter/internal/pihole.(*Client).Scrape(0xc0001663f0)
    /root/prometheus_pihole_exporter/pihole-exporter/internal/pihole/client.go:61 +0x88
    created by main.initPiHoleClient
    /root/prometheus_pihole_exporter/pihole-exporter/main.go:37 +0xc5

     

    192.168.1.46 is my PiHole IP, using port 80.

     

    Please help; I'm a newbie to Linux and have no idea what this is all about.

     

    Thanks again for all your hard work.

     

    Edit: Never mind, it just worked after I tried again. Maybe I entered the wrong port by mistake (the log above does show port 281 rather than 80).

    Again, thanks for your hard work!

     

    Edit 2: Some of my Dockers stopped working after enabling host access to custom networks; I disabled this and they work again. Any idea how to fix this?

  14. 23 hours ago, snowy00 said:

    I had the same issue; my mistake was that I only created a DNS record that was not used. You have to use a proper DNS record that is also set up in your reverse proxy.

     

    As I mentioned in the former post, it works now for me because Sonarr is set up in my reverse proxy with a custom certificate from Cloudflare.

    It doesn't work with a dummy DNS record, as I first configured something like tunnel.yourdomain.com.

     

    CNAME  yourdomain.com  UUID.cfargotunnel.com
    CNAME  sonarr          yourdomain.com

     

    
    ingress:
      - service: https://192.168.1.47:18443
        originRequest:
          originServerName: sonarr.yourdomain.com

     

    Thanks for filling in the missing link! I just got it working without even knowing how it worked. lol

  15. 4 hours ago, snowy00 said:

    Hello,

     

    I get the error below

     

    2021-06-07T17:15:06Z ERR error="Unable to reach the origin service. The service may be down or it may not be responding to traffic from cloudflared: x509: certificate is not valid for any names, but wanted to match ******.**" cfRay=hfsfhkfhkfh-FRA originService=https://192.168.178.42:4443

     

    When I use this config and disable TLSVerify, it works.

     

    tunnel: <my_UUID>
    credentials-file: /home/nonroot/.cloudflared/<my_UUID>.json
    
    ingress:
      - service: https://192.168.1.100:1443
        originRequest:
           noTLSVerify: true

     

     

    On the GitHub post it is mentioned to use host.my.domain, where host is a subdomain you have valid DNS records for. But what does that mean? Does someone have an example for me? I am not so familiar with DNS records.

    Thanks for the tip; tried it and it works.

     

    However, I somehow fixed this issue later by using:

    ingress:
      - service: https://192.168.1.47:18443
        originRequest:
          originServerName: sonarr.yourdomain.com

    i.e., using "sonarr.yourdomain.com" instead of "yourdomain.com" as the originServerName.

  16. 2 hours ago, takkkkkkk said:

    Anyone else getting an error like the one below? It seems like it's working fine, but I just get this error:

     

    2021-06-07T09:03:11Z ERR error="Unable to reach the origin service. The service may be down or it may not be responding to traffic from cloudflared: EOF" cfRay=XXXX-LAX originService=https://IP:PORT

     

    Same here; I deleted the tunnel.