Posts posted by Ptolemyiv

  1. On 1/8/2024 at 5:29 AM, CopesaCola said:

    Hey guys, just getting started with unraid and loving it so far!

     

    I've set up my domain, Cloudflare, and nginx properly to use http for all of my dockers (I've also created SSL certificates for each of them from the SSL tab). What I am trying to do now is change them to https; the issue is that whenever I update the "scheme" from http to https, the site gives me a "too many redirects" error. I have a feeling there is something else I should be doing, but I have no clue where to start.

     

    Any help would be appreciated!

    This could be a LAN routing issue - try accessing the site externally (e.g. via mobile data) and see if you get the same "too many redirects" error.

     

    If not, you need to find somewhere in your router to configure a static DNS hostname. If, for example, your external domain is www.yoursite.co.uk and your nginx docker IP address is 192.168.1.2, you would map that domain to that IP - any internal requests for the domain are then routed directly to nginx, avoiding the infinite redirect loop you are experiencing. I use a Unifi UDM Pro router, and they finally added this feature last year, for instance..
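    On routers that run dnsmasq, that static mapping is a single config line; the domain and IP below are just the example values from above, so substitute your own:

```text
# dnsmasq example (e.g. in the router's dnsmasq config):
# resolve the external domain straight to the internal nginx IP
address=/www.yoursite.co.uk/192.168.1.2
```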

     

    (For external access you'll also need to forward incoming port 443 traffic on your router to the nginx IP.)

  2. I'm getting a certbot-dns-route53 error again in the logs and am unable to log in to the GUI (since the GUI itself relies on the SSL certificate!) - the log shows the following repeatedly:

     

    [app         ] [12/29/2023] [11:33:43 AM] [Global   ] › ✖  error     Command failed: pip install --no-cache-dir certbot-dns-route53==$(certbot --version | grep -Eo '[0-9](\.[0-9]+)+') 
    [app         ] The 'certbot_dns_route53.authenticator' plugin errored while loading: cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_' (/usr/lib/python3.10/site-packages/urllib3/util/ssl_.py). You may need to remove or update this plugin. The Certbot log will contain the full error details and this should be reported to the plugin developer.
    [app         ] Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /tmp/certbot-log-ul_q9vn7/log or re-run Certbot with -v for more details.
    [app         ] ERROR: Could not find a version that satisfies the requirement certbot-dns-route53== (from versions: 0.15.0.dev0, 0.15.0, 0.16.0, 0.17.0, 0.18.0, 0.18.1, 0.18.2, 0.19.0, 0.20.0, 0.21.0, 0.21.1, 0.22.0, 0.22.1, 0.22.2, 0.23.0, 0.24.0, 0.25.0, 0.25.1, 0.26.0, 0.26.1, 0.27.0, 0.27.1, 0.28.0, 0.29.0, 0.29.1, 0.30.0, 0.30.1, 0.30.2, 0.31.0, 0.32.0, 0.33.0, 0.33.1, 0.34.0, 0.34.1, 0.34.2, 0.35.0, 0.35.1, 0.36.0, 0.37.0, 0.37.1, 0.37.2, 0.38.0, 0.39.0, 0.40.0, 0.40.1, 1.0.0, 1.1.0, 1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.10.1, 1.11.0, 1.12.0, 1.13.0, 1.14.0, 1.15.0, 1.16.0, 1.17.0, 1.18.0, 1.19.0, 1.20.0, 1.21.0, 1.22.0, 1.23.0, 1.24.0, 1.25.0, 1.26.0, 1.27.0, 1.28.0, 1.29.0, 1.30.0, 1.31.0, 1.32.0, 2.0.0, 2.1.0, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 2.7.1, 2.7.2, 2.7.3, 2.7.4, 2.8.0)
    [app         ] ERROR: No matching distribution found for certbot-dns-route53==
    [app         ] [12/29/2023] [11:33:44 AM] [Migrate  ] › ℹ  info      Current database version: none

     

    Unfortunately the earlier fix doesn't seem to be working - does anyone know how to fix this once and for all? (This may be a recent update issue, since it has only just started recurring.)
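    For what it's worth, the empty `certbot-dns-route53==` in that log is telling: the plugin version is built by command substitution from `certbot --version`, so when certbot itself fails to start (the broken urllib3 import), the substitution yields an empty string and pip is asked to install an empty version. A quick sketch of what the substitution does when certbot is healthy (the version string here is just an example):

```shell
# The container runs, in effect:
#   pip install certbot-dns-route53==$(certbot --version | grep -Eo '[0-9](\.[0-9]+)+')
# Simulate a healthy `certbot --version` to see what the grep extracts:
version=$(echo "certbot 2.8.0" | grep -Eo '[0-9](\.[0-9]+)+')
echo "certbot-dns-route53==$version"
# When certbot errors out instead, $version is empty, producing the bare
# "certbot-dns-route53==" that pip rejects in the log above.
```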

     

    EDIT: So the only way I was able to fix this error was to run the following command and downgrade urllib3 manually:

    pip install 'urllib3<2'

     

    Nginx Proxy Manager then loaded but still failed to auto-renew the certificates - after this, I was able to renew them manually from the UI.

     

    Strangely, if I restart the container then the original error recurs and I have to execute the above command manually again...

     

    Anyone else encountering the same or can suggest a permanent fix? Many thanks
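    For what it's worth, one way to re-apply the workaround automatically after each container start - a sketch only, assuming the User Scripts plugin and that the container is named NginxProxyManager (check `docker ps` for your own name) - would be a user script along these lines:

```shell
#!/bin/bash
# Hypothetical Unraid User Script: re-apply the urllib3 downgrade
# inside the NPM container after each start.
# "NginxProxyManager" is an assumed container name - adjust to your own.
docker exec NginxProxyManager pip install 'urllib3<2'
```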

     

     

  3. On 9/6/2023 at 12:46 PM, JCM said:

    I have the exact same issue and I cannot log in.

    [app         ] [9/6/2023] [1:43:57 PM] [Migrate  ] › ℹ  info      Current database version: none
    [app         ] [9/6/2023] [1:43:58 PM] [Global   ] › ✖  error     Command failed: pip install --no-cache-dir certbot-dns-route53==$(certbot --version | grep -Eo '[0-9](\.[0-9]+)+') 
    [app         ] An unexpected error occurred:
    [app         ] pkg_resources.ContextualVersionConflict: (urllib3 2.0.4 (/usr/lib/python3.10/site-packages), Requirement.parse('urllib3<1.27,>=1.25.4'), {'botocore'})
    [app         ] Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /tmp/certbot-log-ka2_ng49/log or re-run Certbot with -v for more details.
    [app         ] ERROR: Could not find a version that satisfies the requirement certbot-dns-route53== (from versions: 0.15.0.dev0, 0.15.0, 0.16.0, 0.17.0, 0.18.0, 0.18.1, 0.18.2, 0.19.0, 0.20.0, 0.21.0, 0.21.1, 0.22.0, 0.22.1, 0.22.2, 0.23.0, 0.24.0, 0.25.0, 0.25.1, 0.26.0, 0.26.1, 0.27.0, 0.27.1, 0.28.0, 0.29.0, 0.29.1, 0.30.0, 0.30.1, 0.30.2, 0.31.0, 0.32.0, 0.33.0, 0.33.1, 0.34.0, 0.34.1, 0.34.2, 0.35.0, 0.35.1, 0.36.0, 0.37.0, 0.37.1, 0.37.2, 0.38.0, 0.39.0, 0.40.0, 0.40.1, 1.0.0, 1.1.0, 1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.10.1, 1.11.0, 1.12.0, 1.13.0, 1.14.0, 1.15.0, 1.16.0, 1.17.0, 1.18.0, 1.19.0, 1.20.0, 1.21.0, 1.22.0, 1.23.0, 1.24.0, 1.25.0, 1.26.0, 1.27.0, 1.28.0, 1.29.0, 1.30.0, 1.31.0, 1.32.0, 2.0.0, 2.1.0, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0)
    [app         ] ERROR: No matching distribution found for certbot-dns-route53==

     

     

    How did you kill the certbot process?

    I think I just went into the terminal and ran e.g.:

     

    ps -ef | grep certb

     

    Look for the process ID, which is the number immediately after the user, such as:

    root 1234 5100 ...

     

    To kill the process, type:

    kill 1234
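    The ps/kill steps above can also be strung into one pipeline. Here's a sketch that uses a harmless background `sleep` as a stand-in for the certbot process (the `[s]` bracket trick stops the grep from matching its own command line):

```shell
# Stand-in for the stuck certbot process:
sleep 300 &
expected=$!

# Find the PID (column 2 of `ps -ef`) and kill it, as in the steps above:
pid=$(ps -ef | grep '[s]leep 300' | awk '{print $2}' | head -n1)
kill "$pid"
wait "$pid" 2>/dev/null || true
echo "killed $pid"
```

    In practice, `pkill certbot` collapses the whole lookup-and-kill into a single command.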

     

     

  4. Been running Nginx Proxy Manager on unraid docker for a long time with limited issues - the only issue I had really encountered and never figured out was certificate auto-renewal never working..

     

    I received an email saying a certificate was expiring shortly and went to log in to the UI to renew it manually - the UI wouldn't go past the login page; it just refreshed without any sort of error message.

     

    Checking the logs, I noticed repeated errors related to "certbot-dns-route53" (how my own certificates are authenticated), with no valid version available and a failure to "pip install" the package.

     

    Remoting in via the terminal, I manually executed the pip install command, after which login via the UI immediately worked.. however, renewing the certificate still failed (the UI showing "Invalid error") until I manually killed the standard "certbot" process.

     

    Obviously this is only a temporary fix given it will reset whenever the container is rebuilt.

     

    I am guessing this may be a bug introduced with the recent update to v23.08.1? I didn't take screenshots at the time but can rebuild the container to replicate it if it's not easily identifiable..

  5. Yep, it would be fantastic if this were updated to the latest 2.14.01, because it fixes a number of issues with Hue + Tradfri switches, amongst others

     

    Edit:

    I just had to make 2 minor changes in the docker settings to upgrade and get the latest deconz-community docker:

     

    1) Under advanced view change repository from spaceinvader to:

    deconzcommunity/deconz

     

    2) Change appdata container path to:

    /opt/deCONZ

     

  6. 1 hour ago, itimpi said:

    No - only physical drives attached to the server at array start time are counted.

     

    There should be something in the logs indicating why you cannot mount.

    Sorry for the dumb question, but which logs should I check, and what is the easiest way of doing that? If I click the log icon to the right of the row for this share under Unassigned Devices, it is just blank.

     

    And thanks to you both for confirming the other question re the license.

  7. Hi all,

     

    I can't seem to work out why any remote SMB or NFS share mount (a Synology NAS I use for critical backups) will not show the Mount button enabled under the Unassigned Devices section of the Main page. If I click the log, nothing appears.

     

    It used to work, and when I add the share with username/password I get a "Success!" pop-up window.

     

    My best hunch is that accessing an external share perhaps counts towards my unraid disk limit? (I only bought the Basic version since I physically can't fit more than 4 drives + 2 SSDs in my case)

     

    If this is the case, that's fine and I will upgrade, but I wanted to check this was definitely the issue before paying more to potentially be disappointed! I did check the FAQ and saw it says that all drives that appear as Linux drives, other than the flash drive, will count, but I am not quite sure whether SMB/NFS shares appear as such (I assumed they were mounts within an existing share).

     

    Thanks for the guidance! Overall I am loving unraid.. it has been a bit of a game-changer coming from the underpowered Syno world..

  8. On 6/12/2021 at 1:36 PM, mattie112 said:

    You could check the location of the .htpasswd file (if you use one). That should not be readable by the web.

     

    Some generic pointers:

    https://stackoverflow.com/questions/6441578/how-secure-is-htaccess-password-protection/6442113

     

    Not sure if NPM has some kind of brute-force protection.

    Thanks - that is just if I want to provide basic login authentication to hosts myself, right? In which case, I do not use that.

     

  9. Can I ask a paranoid question - when I set this up originally, I could swear there was some message about needing to move the default passwords file out of the www folder or something? For the life of me I can't find any mention or reference to this elsewhere, so I'm not sure if I was just hallucinating or not...

     

    Point is - I have nginx working great with a mix of externally accessible and LAN-only sub-domains, but I want to ensure I haven't left some silly gaping security hole.. I have of course set up a strong password for nginx itself, and all the hosts have their own login access controls

     

    Thanks

  10. On 3/26/2019 at 6:43 PM, Ntouchable said:

    I managed to fix this eventually in the following way:

     

    1. Mounted Let's Encrypt config files inside the Plex docker config in the following manner: /letsencrypt = /mnt/user/data/letsencrypt/
    2. Plex settings in browser > Network > 
      1. Custom certificate location = /letsencrypt/keys/letsencrypt/privkey.pfx
      2. Custom certificate encryption key = /letsencrypt/keys/letsencrypt/privkey.pem
      3. Custom certificate domain = plex.XXXXXX.com
      4. Custom server access URLs = https://plex.XXXXXX.com:443
    3. In the file /mnt/appdata/letsencrypt/nginx/proxy-confs/plex.domain.conf, change the line "proxy_pass https://$upstream_plex:32400" to "proxy_pass https://UnRaidServerIP:32400"
    4. Optional - 301 redirect so that it forces https - go to /mnt/appdata/letsencrypt/nginx/site-confs/default and remove the "#" signs next to the code:

       server {
           listen 80;
           server_name _;
           return 301 https://$host$request_uri;
       }
    5. Optional - Plex UI in browser > Network > Secure connections = Required.

     

    Hope this helps someone out there.

     

    This worked a treat for me as well using Nginx Proxy Manager - the only tweaks were to update the docker config path, which I specifically limited to read-only on the certificate folder for the Plex certificate, for a little extra security.

     

    I also didn't need to do step 3, and Plex is now working remotely via my own domain (including via web + Android / TV apps).

     

    My main question (part curiosity, part to ensure there's no gaping security risk) is: what is the above actually implementing?

     

    Is it that we effectively have SSL encryption from REMOTE DEVICE <-> NGINX SERVER, which is then reverse proxied but encrypted again between NGINX SERVER <-> PLEX SERVER? (i.e. there is in essence one superfluous extra layer of encryption on the Plex server, in contrast to a typical https->http reverse proxy)

  11. On 5/10/2021 at 6:02 AM, ich777 said:

    Do you own a Plex Pass and have you enabled hardware transcoding in the App itself? From where did you try to play a movie?

    I got some reports that if you try it from the web client, in some instances it doesn't work when you start watching a movie and then skip around in the timeline (Android and all other platforms work fine).

     

    What card is this exactly? A 4GB card? I think you need at least a card with more than 4GB of VRAM because the DAG is about 4.24GB from the output of your logs.

     

    I would strongly recommend that you try it in another computer.

     

    Look here: Click

    Or this thread on the forums: Click

    Ok thanks - I will try to open up my mITX then and extract its beloved RTX 2070, which I got as a retrospective steal in April 2020!

     

    Re Plex: yes, I have Plex Pass and set it up according to your guide, and I was running a 4K video on my phone set to max 1080p quality - the UI showed this accurately but the video froze.

     

    Will keep you posted - it looks like I will need to find a cheap, basic stopgap Nvidia GPU in the interim (just for the Windows VM hosting Blue Iris).. something like a GT 710, I guess?? Blue Iris won't be hardware accelerated on that basis, but it's just to make the VM function.

     

    Thanks again

     

    UPDATE:

    So I ran memtestG80, which tests GPU VRAM - it works in 128 MB iterations and starts off fine, then suddenly throws a ton of errors... I got the same result from the OCCT testing software.

     

    Interestingly enough, the eBay seller immediately accepted my return... a pretty horrible situation for many, whereby it's impossible to buy anything but the most basic GPUs new! 

     

    UPDATE2:

    Am pleased to say I returned that card and got a refund.. then "splashed out" on a 1660, just about squeezed it into the case, and passthrough works immediately with no crashes on demos at all :)

     

    So much time wasted thanks to a lovely dodgy ebay seller!!

     

     

     

  12. 3 hours ago, ich777 said:

    This seems like a defective GPU but I can not tell for sure.

     

    Do you perhaps know someone who has a spare PC where you can put the GPU in?

     

    Also if you use the latest Nvidia driver you should not need a BIOS file for the GPU.

    Ok thanks - I am coming to the same conclusion. Re no need for a BIOS file: is that the Nvidia driver in Windows or elsewhere? And why is this, out of curiosity?

     

    --

     

    So I've installed your driver to try it out within a docker container - Plex transcoding appears to work initially (it shows 'hw'), but then freezes the moment I try to e.g. jump forward by 30 secs - there is no such issue with software transcoding.

     

    The PhoenixMiner log reads as follows - naively, I would take this to mean the GPU memory is a dud or some such, but I'm interested to get your wisdom (I should be able to claim under eBay/PayPal but obviously want to be as sure as possible):

     

    CUDA version: 11.0, CUDA runtime: 8.0
    OpenCL driver version: 20.20-1089974-ubuntu-20.04
    Available GPUs for mining:
    GPU1: NVIDIA GeForce GTX 1650 (pcie 1), CUDA cap. 7.5, 3.8 GB VRAM, 14 CUs
    Unable to load NVML
    Eth: the pool list contains 1 pool (1 from command-line)
    Eth: primary pool: asia1.ethermine.org:4444
    Starting GPU mining
    Eth: Connecting to ethash pool asia1.ethermine.org:4444 (proto: EthProxy)
    Eth: Connected to ethash pool asia1.ethermine.org:4444 (172.65.231.156)
    Eth: New job #79db64b9 from asia1.ethermine.org:4444; diff: 4295MH
    GPU1: Starting up... (0)
    GPU1: Generating ethash light cache for epoch #413
    Listening for CDM remote manager at port 5450 in read-only mode
    Eth: New job #26f4df4b from asia1.ethermine.org:4444; diff: 4295MH
    Eth: New job #d2b91bb9 from asia1.ethermine.org:4444; diff: 4295MH
    Light cache generated in 2.0 s (33.1 MB/s)
    GPU1: Allocating DAG (4.24) GB; good for epoch up to #415
    CUDA error in CudaProgram.cu:388 : out of memory (2)
    GPU1: CUDA memory: 3.82 GB total, 3.77 GB free
    GPU1 initMiner error: out of memory
    Fatal error detected. Restarting.
    
    *** 0:00 *** 5/9 23:30 **************************************
    Eth: Mining ETH on asia1.ethermine.org:4444 for 0:00
    Available GPUs for mining:
    GPU1: NVIDIA GeForce GTX 1650 (pcie 1), CUDA cap. 7.5, 3.8 GB VRAM, 14 CUs
    Current -gt 15
    Eth: Accepted shares 0 (0 stales), rejected shares 0 (0 stales)
    Eth: Incorrect shares 0 (0.00%), est. stales percentage 0.00%
    Eth: Average speed (5 min): 0.000 MH/s

     

  13. Ok, thanks very much - and the latter looks particularly impressive...

    Out of curiosity, have you ever seen a symptom like the one I am experiencing, whereby I have seemingly passed through my GPU (albeit UEFI boot only, with CSM enabled) and the Windows desktop operates fine, but the moment I run a game or demo, just that software freezes or crashes...

     

    I've tried a lot of things, including new VM templates, different driver versions, the TechPowerUp vBIOS and a direct BIOS dump (albeit the latter was 124KB vs 1MB for the hex-edited TechPowerUp one!)..

     

    I guess my main paranoia is if I got sold a dud on eBay or not..

  14. Just now, ich777 said:

    That depends; you can of course try, for example, the Phoenixminer from @lnxd, or you could try Jellyfin if you have some video files that you can transcode (please read the second post of this thread on how to do that, for example with Jellyfin, and add all the necessary parameters to the container).

    Thanks - I was hoping for something even simpler, like a retro 3D demo, but I can try one of these!

  15. Is there any docker container I can use after installing this Nvidia driver to test that my GPU is actually working (under moderate stress)?

     

    The reason I ask is that I bought a GTX 1650 off eBay (sold as fully working, open box), and whilst I can install a Windows VM and Nvidia drivers, and the desktop shows fine, any time I run a game or DirectX/OpenGL demo, the software immediately freezes / crashes back to the Windows desktop.

     

    I don't have another computer to try it on, hence I thought this could be a simple way to test without needing to worry about whether it's a passthrough issue or not..

     

    Thanks!

  16. Hi all,

    Just getting started with my Unraid server.. thanks of course to SpaceInvaderOne for the tremendous efforts (definitely deserving of a cut from LimeTech!)

     

    Everything has gone very well apart from a bizarre issue with GPU passthrough. The GPU is a simple Zotac GTX 1650 Low Profile, and whilst I can actually boot into Windows and operate the desktop fine, the moment I try to run any sort of DirectX game/test, the software immediately crashes back to the Windows desktop.. there is no error message whatsoever.

     

    I've tried switching from i440fx to Q35 and a lot of different BIOS settings without luck, and modified the XML to ensure the GFX + audio controllers are operating as a multi-function device, etc.

     

    My final thought was the vBIOS - I've been primarily using one from TechPowerUp (with the hex editor change) but figured dumping my own would be worth trying.. despite the VFIO bindings being set in the GUI (6.9.2), the script kept throwing the error others have been getting.

     

    So I figured I would try turning off the VFIO bindings, and lo and behold, the script apparently ran fine and dumped the ROM..

     

    However, the ROM is showing as only 129KB versus 1MB for the TechPowerUp download - opening it up, it seems to start with the 'correct' text but appears ominously small in size!!

     

    The other question was whether, to switch from i440fx to Q35, I could create a new VM template but still use the same vdisk? (I am using a 500GB vdisk on a 1TB unassigned-device SSD.)

     

    Thanks and any other ideas appreciated

  17. Hi - I am being adventurous and trying to build a somewhat powerful unraid server as a NAS, a Windows VM (Blue Iris), plus a bunch of dockers including Plex.

     

    Main hardware is a short-depth 4U rackmount case with an Intel i7 10700 and an Asus ROG Strix Z490-H mobo.

     

    I am planning to shuck 4x WD Elements 18TB hard drives and install 2 SSDs for cache / a Windows VM.

     

    A couple of basic questions as I continue to prep the hardware purchases (not easy in these times!):

     

    1) Will the shucked drives be SAS drives, and consequently do I need to buy a specific PCIe SAS controller for this motherboard? This lack of support seems odd to me, given SAS drives appear pretty common these days.. is there a generally recommended LSI card I should look for?

     

    2) SSDs - I am assuming it would be crazy not to have redundancy for the cache drive? This mobo only has 2 M.2 slots, though, so ideally I would get e.g. 2x 1TB SSDs and use them for a combination of cache drive and Windows VM partition - however, I understand this is not generally recommended? Any advice on how to accommodate this optimally would be much appreciated.

     

    I will post more details as and when I gather the hardware successfully together..

     

    Thanks for the advice!
