martial

Members
  • Posts: 31
  • Joined

martial's Achievements

Noob (1/14)

Reputation: 5
Community Answers: 1

  1. One thing I have noticed: when I access the Dashboard directly through NPM, the issue can occur. When I access that same Dashboard from a CloudFlare Tunnel, I have not seen it happen yet.
  2. I see a new "Preview" version in the community apps; what is the use of this one versus the existing plugin?
  3. The error only seems to trigger for me when I leave a browser open on the Dashboard page (I have not seen it yet on the Docker or VM pages, for example), so as long as I close the browser tab, I do not get the empty Dashboard/Main. I can still SSH in and start the script manually if/when it happens, so no User Scripts entry is needed; I only run it when needed.
  4. Not sure if this request was for the script I use? If it is, I have put a copy on GitHub; here is a direct link to it: https://raw.githubusercontent.com/mmartial/unraid-study/main/varlog_nginx.sh
  5. Yes, I was looking for a simple pastebin solution, without a DB, that could handle both text and binaries ... and the simple word-combination URLs were useful. I tried it and started to wonder why the UI was missing some of the features from the webpage, so I checked.
  6. Happy to do so. Please bear with me; it might take a couple of days as I am out of town.
  7. I updated to the Unraid Connect plugin released today (2024-01-09) and signed in. I have two additional "extra origins" to add (Nginx Proxy Manager over WireGuard + CloudFlare Tunnel). I followed the instructions, "Provide a comma separated list of urls that are allowed to access the unraid-api (https://abc.myreverseproxy.com,https://xyz.rvrsprx.com,…)", accordingly (comma-separated, no spaces), but I get the red shield with "The CORS policy for the unraid-api does not allow access from the specified origin." asking me to add the value to the "extra origins" line, which I have already done. Am I doing this incorrectly?
  8. For people using Apprise and seeing a large amount of memory used by the tool: this is because "# Workers are relative to the number of CPUs provided by hosting server". The default formula is "multiprocessing.cpu_count() * 2 + 1", which, on high-CPU hosts, creates a lot of copies of the Apprise worker and therefore a lot of memory usage. This number can be limited by adding a new "Container Variable" named "APPRISE_WORKER_COUNT" and entering a value (a sketch is included after this list). Hoping this helps others.
  9. For people using microbin, I recommend changing the "AppData" directory within the container to "/app" so that the files are not stored only within the container (and would therefore not survive a container update). You can also add environment variables as described in https://microbin.eu/docs/installation-and-configuration/configuration/ to enable things like:
     <Config Name="MICROBIN_ENCRYPTION_CLIENT_SIDE" Target="MICROBIN_ENCRYPTION_CLIENT_SIDE" Default="false" Mode="" Description="Enables client-side encryption.&#13;&#10;https://microbin.eu/docs/installation-and-configuration/configuration/" Type="Variable" Display="always" Required="false" Mask="false">true</Config>
     <Config Name="MICROBIN_DISABLE_TELEMETRY" Target="MICROBIN_DISABLE_TELEMETRY" Default="false" Mode="" Description="Disables telemetry if set to true" Type="Variable" Display="always" Required="false" Mask="false">true</Config>
     <Config Name="MICROBIN_ENCRYPTION_SERVER_SIDE" Target="MICROBIN_ENCRYPTION_SERVER_SIDE" Default="false" Mode="" Description="Enables server-side encryption" Type="Variable" Display="always" Required="false" Mask="false">true</Config>
     <Config Name="MICROBIN_PRIVATE" Target="MICROBIN_PRIVATE" Default="true" Mode="" Description="Enables private pastas" Type="Variable" Display="always" Required="false" Mask="false">true</Config>
     <Config Name="MICROBIN_GC_DAYS" Target="MICROBIN_GC_DAYS" Default="90" Mode="" Description="Sets the garbage collector time limit. Pastas not accessed for N days are removed even if they are set to never expire. Default value: 90. To turn off GC: 0." Type="Variable" Display="always" Required="false" Mask="false">0</Config>
     <Config Name="MICROBIN_ENABLE_BURN_AFTER" Target="MICROBIN_ENABLE_BURN_AFTER" Default="0" Mode="" Description="Sets the default burn after setting on the main screen. Default value: 0. Available expiration options: 1, 10, 100, 1000, 10000, 0 (= no limit)" Type="Variable" Display="always" Required="false" Mask="false">0</Config>
     <Config Name="MICROBIN_PUBLIC_PATH" Target="MICROBIN_PUBLIC_PATH" Default="" Mode="" Description="Add the given public path prefix to all urls. This allows you to host MicroBin behind a reverse proxy on a subpath.&#13;&#10;You need to set the public path for QR code sharing to work." Type="Variable" Display="always" Required="false" Mask="false">https://yoururl/</Config>
     <Config Name="MICROBIN_QR" Target="MICROBIN_QR" Default="false" Mode="" Description="Enables generating QR codes for pastas.&#13;&#10;This feature requires the public path to also be set." Type="Variable" Display="always" Required="false" Mask="false">true</Config>
     <Config Name="MICROBIN_DEFAULT_EXPIRY" Target="MICROBIN_DEFAULT_EXPIRY" Default="24hour" Mode="" Description="Sets the default expiry time setting on the main screen. Default value: 24hour. Available expiration options: 1min, 10min, 1hour, 24hour, 1week, never" Type="Variable" Display="always" Required="false" Mask="false">never</Config>
     I am adding a copy of my modified XML file for those interested (a docker run equivalent is also sketched after this list). Hoping this helps some -- M
     Attachment: microbin.xml
  10. FYSA, Dozzle changed how it does authentication. https://github.com/amir20/dozzle/issues/2630 explains the simple changes needed to create a configuration file that replaces the environment parameters. You will need a new "Container Path" for /data, where you will put the "users.yml" file. You will also need to remove the "Username" and "Password" fields from the template for Dozzle to start (a sketch is included after this list).
  11. FYSA, CryptPad stopped using the Promascu dockerhub and created their own over the summer. They do not have a "latest" build, but I got the 5.6.0 version of CryptPad running by:
      - updating the "Repository" to "cryptpad/cryptpad:version-5.6.0"
      - changing the "Registry URL" to "https://hub.docker.com/r/cryptpad/cryptpad/"
      - adding a "CPAD_CONF" environment variable pointing to "/cryptpad/config/config.js"
      (a docker run equivalent is sketched after this list). Hoping this helps others.
  12. Got it happening again and used the script to clear the logs; df confirms that worked. Unfortunately, the Dashboard and Main pages were empty of content, so the next step was to restart nginx. I tried the following steps:
        pkill -9 nginx
        /etc/rc.d/rc.nginx start
      but the Dashboard and Main were still not updating with live data ... after a couple of minutes the data was showing up again. "/var/log/nginx/error.log" has many of those "nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory" entries, so use:
        pkill -9 nginx
        > /var/log/nginx/error.log
        > /var/log/nginx/error.log.1
        /etc/rc.d/rc.nginx start
      to get back to a more reasonably sized "df -h /var/log". I put all of this in a small shell script (attached to this post; a sketch is also included after this list), hopefully this can help others.
      Attachment: varlog_nginx.sh
  13. December 2023 update: Please note that a new version under a different location and repository is available. Please search for Jupyter-CTPO and Jupyter-TPO, added to the CA in early December 2023.
  14. FYSA: I am the person who built this container and also the original author of the previous version.
  15. Just a quick update: I now access the UI through a CloudFlare Tunnel rather than an Nginx Proxy Manager proxy, and I close the Dashboard tab after I am done using it. So far, the log has not shown any sign of growth.
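
For the Apprise note in item 8, here is a hedged sketch of capping the worker count. Only the APPRISE_WORKER_COUNT variable and the cpu_count() * 2 + 1 formula come from the post; the image name, port, and host path are assumptions, so substitute whatever your CA template uses. As a worked example of the default formula, a 16-core host would spawn 16 * 2 + 1 = 33 workers.

```bash
# Hedged sketch: cap Apprise's worker pool so a high-core-count host does not
# spawn cpu_count() * 2 + 1 workers. APPRISE_WORKER_COUNT comes from the post;
# the image name, port, and host path below are assumptions.
docker run -d \
  --name apprise-api \
  -p 8000:8000 \
  -e APPRISE_WORKER_COUNT=2 \
  -v /mnt/user/appdata/apprise-api:/config \
  lscr.io/linuxserver/apprise-api:latest
```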
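
The microbin template in item 9 translates roughly to the docker run below. This is a sketch under assumptions: the image name, host path, and port mapping are guesses (use whatever the template's Repository and port fields actually say); the /app mapping and the MICROBIN_* variables are the ones listed in the post and the linked configuration docs.

```bash
# Hedged docker run equivalent of the template edits in item 9.
# Image name, host path, and port are assumptions; the /app mapping and the
# MICROBIN_* variables come from the post.
docker run -d \
  --name microbin \
  -p 8080:8080 \
  -v /mnt/user/appdata/microbin:/app \
  -e MICROBIN_PRIVATE=true \
  -e MICROBIN_DISABLE_TELEMETRY=true \
  -e MICROBIN_ENCRYPTION_CLIENT_SIDE=true \
  -e MICROBIN_ENCRYPTION_SERVER_SIDE=true \
  -e MICROBIN_GC_DAYS=0 \
  -e MICROBIN_ENABLE_BURN_AFTER=0 \
  -e MICROBIN_DEFAULT_EXPIRY=never \
  -e MICROBIN_PUBLIC_PATH=https://yoururl/ \
  -e MICROBIN_QR=true \
  danielszabo99/microbin:latest
```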
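
For the Dozzle change in item 10, a hedged sketch follows. The /data mount and the users.yml filename come from the post; the host path is an assumption, and the `generate` helper and its flags are taken from my reading of the Dozzle docs, so verify the exact syntax and the users.yml schema against the linked issue before relying on this.

```bash
# Hedged sketch for file-based Dozzle authentication.
# /data and users.yml come from the post; the host path is an assumption.
mkdir -p /mnt/user/appdata/dozzle

# Assumption: recent Dozzle releases ship a "generate" helper that writes a
# users.yml entry with a hashed password -- check the linked issue/docs for
# the exact flags and file schema for your version.
docker run --rm amir20/dozzle generate \
  --name 'Admin' --email admin@example.com --password 'changeme' admin \
  > /mnt/user/appdata/dozzle/users.yml

# In the Unraid template: add a Container Path mapping
# /mnt/user/appdata/dozzle -> /data, remove the Username and Password
# variables, then restart the container.
```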
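
The CryptPad edits in item 11, expressed as a docker run under assumptions: the image tag and CPAD_CONF value are the ones given in the post, while the port and host paths are guesses and should be taken from your actual template.

```bash
# Hedged docker run equivalent of the template edits in item 11.
# Image tag and CPAD_CONF come from the post; port and host paths are assumptions.
docker run -d \
  --name cryptpad \
  -p 3000:3000 \
  -e CPAD_CONF=/cryptpad/config/config.js \
  -v /mnt/user/appdata/cryptpad/config:/cryptpad/config \
  -v /mnt/user/appdata/cryptpad/data:/cryptpad/data \
  cryptpad/cryptpad:version-5.6.0
```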
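
Finally, a minimal sketch of the kind of script described in item 12 (the actual varlog_nginx.sh is linked from item 4). The pkill, truncation, and rc.nginx commands are the ones quoted in the post; the before/after df reporting is an addition for convenience.

```bash
#!/bin/bash
# Minimal sketch of the workaround in item 12: stop nginx, truncate the
# error logs filled with "nchan: Out of shared memory ..." messages, then
# restart nginx. Commands are the ones quoted in the post; the df report
# around them is extra.
set -e

echo "Before:"
df -h /var/log

# Stop nginx so it releases the log files (ignore failure if it is not running)
pkill -9 nginx || true

# Truncate the oversized logs
: > /var/log/nginx/error.log
: > /var/log/nginx/error.log.1

# Restart nginx using the Unraid init script quoted in the post
/etc/rc.d/rc.nginx start

echo "After:"
df -h /var/log
```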