chiaramontef
Posts posted by chiaramontef
-
Awesome! Thank you!
-
Hi All,
Does anyone happen to have the .zip for version 6.9.2?
The site only goes back to 6.10.3, and 6.9.2 was the version I was running before things got messy. I thought I'd try something before reaching out for additional help.
Thanks,
Frank
-
I'm having an issue and I'm not really sure where to start. I'm not even 100% certain it's an issue with letsencrypt, which I run as a docker on UNRAID.
Anyway, here is my situation. I recently moved, and all my infrastructure (unifi) moved with me, including my ATT gateway. Everything was shut down at my home before I moved, and I just booted everything back up (about 2 1/2 weeks later) at the new location. I have everything up and running again, and all settings and configs are the same, including my WAN IP.
All my dockers are up and running and fully functional, except that I can no longer resolve any of my external applications that use letsencrypt with the nginx reverse proxy. I can ping and resolve my domain name fine, and can even access Plex externally, but nothing fronted by the letsencrypt docker seems to work: about 8 applications in all, with the same generic error. No status code, nothing, just a "domain name" refused to connect message followed by ERR_CONNECTION_REFUSED.
Has anyone experienced any similar issues before where everything just stopped working?
Thanks for any input.
-
No matter what I plug in to the MAIL_HOST field, I get the error below when trying to install. What am I missing?
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='malfurious-roundcube-postfixadmin' --net='bridge' -e TZ="America/New_York" -e HOST_OS="unRAID" -e 'MYSQL_HOST'='192.168.2.10' -e 'ROUND_USER'='roundcube' -e 'ROUND_PASS'='***' -e 'ROUND_DB'='roundcube' -e 'POST_USER'='postfix' -e 'POST_PASS'='***' -e 'POST_DB'='postfix' -e 'MAIL_HOST'='mail.***.com' -e 'ENABLE_IMAPS'='true' -e 'ENABLE_SMTPS'='true' -e 'DISABLE_INSTALLER'='false' -e 'ROUNDCUBE_PORT'='8888' -e 'POSTFIX_PORT'='8080' -e 'PASS_CRYPT'='' -p '8888:8888/tcp' -p '8080:8080/tcp' -v '/mnt/user/appdata/malfurious-roundcube-postfixadmin':'/enigma':'rw' --add-host mail.domain.com:xxx.xxx.xxx.xxx 'malfurious/roundcube-postfixadmin'
invalid argument "mail.domain.com:xxx.xxx.xxx.xxx" for --add-host: invalid IP address in add-host: "xxx.xxx.xxx.xxx"
See 'docker run --help'. The command failed.
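For context on that error: `--add-host` expects a `hostname:IP` pair with a literal IP address after the colon, so a redacted or placeholder value like `xxx.xxx.xxx.xxx` is rejected before the container ever starts. A small sketch (hostname and IP are made up) that mimics the same validation for a value you are about to paste into the template:

```shell
# Docker rejects --add-host values whose IP part is not a literal address.
# This mimics that check in shell before handing the value to docker run.
ADD_HOST="mail.domain.com:192.168.2.10"   # hypothetical mapping
ip="${ADD_HOST##*:}"
if echo "$ip" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'; then
  echo "ok: $ADD_HOST"
else
  echo "invalid IP in add-host: $ADD_HOST"
fi
```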
-
Great catch. That was a facepalm moment, thanks for the help!
-
Originally, I only had 443 and 32400, but I've since added 80 & 81 to troubleshoot the issue. No luck. Here are the screenshots you requested. Thanks for taking a look.
-
Can anyone help with this? I updated letsencrypt today and now nothing is working. No other changes have been made. It's as if everything is trying to go over 8443 (which is what my UNRAID server is set to, since there was a conflict between letsencrypt's 443 and UNRAID's 443). This is the error I'm getting:
2048 bit DH parameters present
SUBDOMAINS entered, processing
Sub-domains processed are: -d nextcloud.xxx.com -d office.xxx.com -d apps.xxx.com
E-mail address entered: [email protected]
http validation is selected
Generating new certificate
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for apps.xxx.com
http-01 challenge for xxx.com
http-01 challenge for nextcloud.xxx.com
http-01 challenge for office.xxx.com
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. apps.xxx.com (http-01): urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching https://xxx.unraid.net:8443/.well-known/acme-challenge/xxx: Invalid port in redirect target. Only ports 80 and 443 are supported, not 8443, office.xxx.com (http-01): urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching https://xxx.unraid.net:8443/.well-known/acme-challenge/xxx: Invalid port in redirect target. Only ports 80 and 443 are supported, not 8443, nextcloud.xxx.com (http-01): urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching https://xxx.unraid.net:8443/.well-known/acme-challenge/xxx: Invalid port in redirect target. Only ports 80 and 443 are supported, not 8443, xxx.com (http-01): urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching https://xxx.unraid.net:8443/.well-known/acme-challenge/xxx: Invalid port in redirect target. Only ports 80 and 443 are supported, not 8443
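The key detail above is the redirect: the http-01 validator will follow redirects, but only to ports 80 or 443, and something (most likely an nginx site config or a router rule) is redirecting the challenge URL to the unRAID web UI on 8443. A sketch of the CA's port rule, applied to the redirect target reported in the log:

```shell
# Reproduce the validator's rule: a redirect target may only use ports 80 or 443.
# The URL below is the redirect target from the failure message above.
LOCATION="https://xxx.unraid.net:8443/.well-known/acme-challenge/xxx"
port=$(echo "$LOCATION" | sed -E 's#^[a-z]+://[^/:]+:([0-9]+)/.*#\1#')
case "$port" in
  80|443) echo "redirect port $port: acceptable" ;;
  *)      echo "redirect port $port: rejected by the CA" ;;
esac
```

So the fix isn't on the certbot side: whatever answers port 80 for the domain has to either serve `/.well-known/acme-challenge/` directly or redirect it to something on 80/443, never to :8443.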
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: apps.xxx.com
Type: connection
Detail: Fetching
https://xxx.unraid.net:8443/.well-known/acme-challenge/xxx:
Invalid port in redirect target. Only ports 80 and 443 are
supported, not 8443
Domain: office.xxx.com
Type: connection
Detail: Fetching
https://xxx.unraid.net:8443/.well-known/acme-challenge/xxx:
Invalid port in redirect target. Only ports 80 and 443 are
supported, not 8443
Domain: nextcloud.xxx.com
Type: connection
Detail: Fetching
https://xxx.unraid.net:8443/.well-known/acme-challenge/xxx:
Invalid port in redirect target. Only ports 80 and 443 are
supported, not 8443
Domain: xxx.com
Type: connection
Detail: Fetching
https://xxx.unraid.net:8443/.well-known/acme-challenge/xxx:
Invalid port in redirect target. Only ports 80 and 443 are
supported, not 8443
-
Thanks Johnnie! I was able to mount the cache disk as read only. I'm copying my data now.
-
Also, it's probably important to note that I didn't preclear the SSD, thinking it wouldn't take that long...
-
Time is off, but these are current. Thanks!
-
I've been using unRAID successfully for years. Yesterday I added a second 128GB SSD to my cache, created a 2-disk cache pool, and started the array (the original disk was already formatted with btrfs). After some time I thought the server had hung, because nothing was coming online and any attempt at accessing the web page timed out, so I shut down the server. After booting everything back up, my SSDs were un-mountable. This is when I started to read the various posts, messages, and how-tos on the forum. I went in blindly and didn't realize it had actually created a RAID pool; I just thought it would allow me to manually use both SSDs.
Anyway, long story short, I've tried most of the suggestions on the forums to no avail. I've rebooted, removed the second disk, etc., but no matter what I do I get various messages saying that my cache pool is un-mountable because of "missing devices", "bound to btrfs pool", or no "file system". I simply couldn't get the original SSD to come online, and I do not want to format it.
Last night I decided to put both disks back in, start up the server, set the pool back to 2 disks, and start the array, hoping it would balance and come online. That was about 12 hours ago, and the website is still timing out and nothing is available yet.
Are there any suggestions on how I can get this working again? Should I just let it ride? Is there something I may have missed, or some commands I can run via SSH to see what is going on? I'm not sure how long a balance usually takes, but 12 hours seems long for a 128GB SSD.
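In case it helps anyone suggest next steps, here is a sketch of the read-only btrfs checks that show pool state over SSH (`/mnt/cache` is the assumed mount point, and each command is guarded so the script is harmless to run anywhere):

```shell
# Read-only btrfs diagnostics: show pool membership (flags missing devices)
# and whether a balance is actually in progress. Nothing here writes to disk.
for cmd in "btrfs filesystem show" "btrfs balance status /mnt/cache"; do
  if command -v btrfs >/dev/null 2>&1; then
    $cmd || echo "failed (pool not mounted?): $cmd"
  else
    echo "skipped (btrfs-progs not installed): $cmd"
  fi
done
echo "diagnostics done"
```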
Anything is appreciated.
Thank you!
Flash Problem Solved
in General Support
Just posting this so others might benefit.
My 36 hours of struggle getting my Unraid server to boot back up isn't the star of the show here, but I found the flash drive process extremely finicky and wanted to share what worked for me.
I'm currently running 6.9.2, but I don't think that's pertinent here. Some of the things that happened to me have been covered in other posts.
I found it hard to believe that all my drives were bad, but knowing my newest one was 8 years old, I decided to pick up some new ones. I wasn't experiencing any other hardware issues on my server, so I went to the store and picked up a few different varieties to make sure I was covered, keeping them all at 32GB or less (for FAT32): a Lexar (USB 2.0/32GB) and a SanDisk (I know the history here, but purchased it anyway; 32GB, USB 3.0).
I eventually got it working on the Lexar. Here is how:
What did help, and what I hope might help others, was Rufus 4.1. I know that has been covered to some degree in other posts, but what worked for me was a bit different than what I found.
Solution: I used my Lexar (32GB/USB 2.0) drive (port didn't matter) and Rufus 4.1 with the following settings:
Then I simply copied all the files from my most recent backup except for ldlinux.sys and ldlinux.c32.
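For anyone scripting that copy step, here is a sketch of the restore with the two bootloader files excluded. The temporary directories and the `network.cfg` file name are made up for the demo; in practice SRC would be the backup folder and DST the mounted flash drive:

```shell
# Demonstrate restoring a backup while skipping the bootloader files Rufus wrote.
# Temporary directories stand in for the real backup folder and flash drive.
SRC=$(mktemp -d); DST=$(mktemp -d)
touch "$SRC/ldlinux.sys" "$SRC/ldlinux.c32" "$SRC/network.cfg"   # demo files

for f in "$SRC"/*; do
  case "$(basename "$f")" in
    ldlinux.sys|ldlinux.c32) ;;          # leave the fresh bootloader alone
    *) cp -r "$f" "$DST"/ ;;
  esac
done

ls "$DST"   # only the config file should have been copied
```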
Put the drive back in my server and it booted right up without issue, right to the Unraid bootloader menu and then to my login prompt. No more "Bond0" messages, which were definitely a red herring, but at least a well-documented one.
At this point, I'm not messing with it for a little while, but I am interested to see if this method works on some of the other flash drives I had laying around.
Hope this helps!