Posts posted by DieFalse
-
Ok. So even though it read that it was not protected, I did get this:
sg_format failed: Data protect
So it seems there is some sort of protection after all; the question is how to clear it. I will try the RAID controller next, since I need to open the case for RAM removal anyway.
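For reference, a minimal sketch of how checking and clearing this could be scripted with sg3_utils, assuming the drives are blank SAS disks as in this thread; the clear_pi helper name is mine, and the format step is destructive:

```shell
# clear_pi: show a SAS drive's protection status, then reformat it with
# Protection Information (PI) disabled. DESTRUCTIVE - wipes the drive.
# Refuses to run without a device argument as a small safety guard.
clear_pi() {
    dev="$1"
    [ -n "$dev" ] || { echo "usage: clear_pi /dev/sdX" >&2; return 1; }
    # sg_readcap --long reports "prot_en=1" when PI (Type 1-3) is enabled
    sg_readcap --long "$dev"
    # Low-level format with PI turned off (--fmtpinfo=0)
    sg_format --format --fmtpinfo=0 "$dev"
}

# Hypothetical usage, matching the drive letters in this thread:
# clear_pi /dev/sdaa
```

The "sg_format failed: Data protect" error above is consistent with PI being enabled at the factory, which is common on Dell-sourced SAS drives.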
-
This does not seem to be the case: there is no PSID on the drive, and smartctl doesn't show protection:
root@GSA:~# smartctl -i /dev/sdaa
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.10.28-Unraid] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor: SEAGATE
Product: ST91000642SS
Revision: ASF9
Compliance: SPC-4
User Capacity: 1,000,204,886,016 bytes [1.00 TB]
Logical block size: 512 bytes
Rotation Rate: 7200 rpm
Form Factor: 2.5 inches
Logical Unit id: 0x5000c50055ca49d3
Serial number: 9XG36YA6
Device type: disk
Transport protocol: SAS (SPL-3)
Local Time is: Mon Jul 19 12:48:33 2021 CDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Temperature Warning: Enabled
-
6 hours ago, JorgeB said:
All disks are failing to read sector 0, see if they have some kind of protection/encryption enable.
Great callout - is there any way for me to check from within Unraid? They're new, so wiping them is OK too.
-
Diagnostics attached - I have 6 brand-new blank drives that are Dell Seagates. They won't attach, mount, or be selectable.
-
4 hours ago, Kira said:
you can probably create another docker and just change the name of docker and app folder
That would not be feasible, as the Docker container utilizes the host itself for networking; there are no ports or adapters configured. The config should be adjustable for additional domains per the cloudflared documentation, I just haven't tried it yet. I believe it would require a Business or paid Cloudflare plan, though. The only way I can see so far without a paid account (or without multiple daemons) is to create a CNAME on the one domain that points to the other.
The other alternative that appears to work is multiple containers with different names and appdata folders, as Kira mentioned. Given how lightweight the container is, this seems to be the absolute best way.
-
On 6/28/2021 at 11:11 PM, samba_69 said:
How do I use multiple domains?
I have this question also.
-
2 hours ago, trurl said:
Why aren't you fixing it?
Secondary server - hard to access - non-critical error.
-
The hardware memory issue only exists in the secondary server; the original issue existed in both servers.
The memory issue has been present for the last 6 version releases.
-
I get a timeout error - it never loads.
SSH / NFS work; however, file transfers themselves do not - they time out. I rolled back one of the two servers and it's still experiencing the issue. I can access the server every way except the webGUI from my primary PC.
Wireshark pcap didn't reveal anything.
-------
Strangely enough, today I can access and log in to both servers' webGUI, with NOTHING changed.
Diag from when it wasn't working
-
17 hours ago, SimonF said:
This was added in 6.9.2
I thought it was in the release notes. I'm wondering if for some reason it is blacklisting my desktop. Where can I check the Fail2Ban config to see what is blocked, or add a whitelist (i.e., my home subnet)?
-
Ok - so I have made some progress: I can access the webGUI from another device on the network. It seems my primary PC is blocked from accessing the webGUI. Is there a Fail2Ban or similar item that would blacklist my connection to ports 80/443?
-
Thanks.
I am having trouble pulling the diagnostics even with SCP. I will keep trying, and once I can gain physical access again tonight I will copy them manually as well.
curl to the host IP results in a timeout too.
root@GSA:/mnt/user/www# sha256sum /boot/bz*
7216239d48d9f276c65fd1bce5c80d513beadde63f125bbb48b97228f4e3db1c  /boot/bzfirmware
debc904556b518fc6ea2bf7c679b86d8b99ad978b321fad361c25d829ecb7460  /boot/bzfirmware.sha256
1a7dd82250acf93b711633bbf854cc90a03465bb32c3cec4d56a0355cfc10096  /boot/bzimage
b9098fd8dc1f1e3fa594a54864a1e0ede7c2d41d750564e8168b2ab406c3ec3f  /boot/bzimage.sha256
75be3470b4536272062f4673ef21726da1d54b7bde5e264254e5df77c87c40a0  /boot/bzmodules
9de395254b24ddb1c52c2d9f22e613567ef61659dab837777f41c25ae0bafa5b  /boot/bzmodules.sha256
7692d002882cc96760d5f1a98b23e4c8872f6b8d2233bfcdec7e6331802b0cf1  /boot/bzroot
9fa3228cebfdd48eb5d78f44a1272231e9d1e0944b54e08c18f2aa315b8e148f  /boot/bzroot-gui
52f7f3e9118f8b96db00ea8cbe795baf48bddb6ed2be08cf54af81e66ff17ab6  /boot/bzroot-gui.sha256
12ce4274dcb3f3422c1e0f9fcc37bc3f0aef9c834a19c25da06a21c2ce52303f  /boot/bzroot.sha256
root@GSA:/mnt/user/www# head -n 1 /boot/changes.txt
## Version 6.9.2 2021-04-07
-
I have SSH and direct server-connected GUI / terminal access. The web pages will not load on either. The only change was the upgrade to the latest stable release.
I am beginning this ticket to see if it's a known issue I may have missed in my search, or if any generic steps are available - i.e., how to check the default host webserver status, etc.
Diagnostics pending
-
Do not go with an R420 or Rx20 for your office if you need it to be quiet. These are made for datacenters and are not considered quiet.
Mine are installed in an area where I don't hear them, and yes, each has way more than 16 TB.
-
8 hours ago, sekrit said:
SO VERY SORRY to hijack a thread. But this is the only one which I could find mentioning "NX MODE". My Taichi manual doesn't describe it and I don't know what it's for. It's not been in one of my boards before (Nor has "PSS Support").
Would someone please share what "NX Mode" and "PSS Support" are, and recommended general settings for my Unraid build? (I essentially want to avoid any settings which will prevent functionality while I am setting up. I can dial them in more granularly later.)
NX = No Execute Mode
PSS = Performance Supported States (ACPI CPU P-state reporting)
You can safely have both off.
-
2 hours ago, CorneliousJD said:
thank you so much for continuing to reply and trying to help. I really do appreciate it very much!
So I added a few other /locations for testing and pretty much nothing works like that.
I can get some pages to load their title in the browser, but no contents, and I can get some to show their authentication pages but then fail to load once logged in, etc.
ALL of these services work fine on sub.domain.com however with no issues.
So it seems like it's trying to load the proper site, but for whatever reason having them at a /location vs a subdomain is breaking things.
I used to have a /plex location working in a SWAG/LetsEncrypt config, but it was pretty simple, so I'm not sure what I'm missing here.
Here's my old SWAG/LetsEncyrpt config
# PLEX CONTAINER
location /plex/ {
    proxy_pass http://10.0.0.10:32400/;
    include /config/nginx/SSO.conf;
}
if ($http_referer ~* /plex/) {
    rewrite ^/web/(.*) /plex/web/$1? redirect;
}
And SSO.conf was all of this
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_bind $server_addr;
proxy_buffers 32 4k;

# Timeout if the real server is dead
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;

# Advanced Proxy Config
send_timeout 5m;
proxy_read_timeout 240;
proxy_send_timeout 240;
proxy_connect_timeout 240;
proxy_hide_header X-Frame-Options;

# Basic Proxy Config
proxy_set_header Host $host:$server_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_redirect http:// $scheme://;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_no_cache $cookie_session;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
If I add all of that into the custom config for the location, then I'm still not getting anywhere, unfortunately.
Something really weird happens with the /plex location too, where sometimes it will try to load domain.com:4443/plex (where 4443 is the port NPM runs on in my internal network). Nothing should be configured to ever add port 4443 there, so I'm not sure why that's getting added either.
So weird.
Do you have Discord or some other online messenger? Can you PM me your info so I can troubleshoot directly with you? I feel we can solve this rapidly that way.
-
16 hours ago, CorneliousJD said:
How would I check/know if I have DNS rebinding allowed for plex.direct?
If you mean internally and NAT loopback, then yes that is enabled and working.
For what it's worth, I'm getting the same 401 unauthorized when testing via my phone off of WiFi.
I don't understand what would be different about our configs since there's almost zero config in NPM.
NAT loopback and DNS rebinding are completely different. Plex uses "HASH".plex.direct to create DNS entries that proxy to your server, and the domain.com/plex service uses this. You can verify this by visiting the /plex location and reviewing the certificate, which you will find is issued to plex.direct. I suspect something is interrupting the connection to the /plex (XML/plugins/API) interface, causing this issue.
Can you create another /anything location and point it to a known working interface (Sonarr/Radarr/NPM)? If it works, then the config is valid and NPM is creating the location properly, which would indicate that something is needed in the advanced config or on your router. If it does not work, it shows that NPM is not creating the location correctly.
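A quick way to compare locations side by side is to probe each one and record just the HTTP status code; a sketch (the domains in the usage comment are placeholders for yours):

```shell
# probe URL: print only the HTTP status code ("000" means no connection).
# -k skips certificate verification, handy while testing proxy setups.
probe() {
    curl -sk -o /dev/null -w '%{http_code}' "$1"
}

# Hypothetical usage - compare a working subdomain against a /location:
# probe https://plex.domain.com/     # e.g. 200 or 401
# probe https://domain.com/plex/     # 404 or 502 would point at NPM's config
```

Differing codes between the subdomain and the /location narrow down whether NPM is generating the location block at all.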
Notes:
DNS Rebinding
Some routers or modems have a feature known as “DNS rebinding protection”, some implementations of which can prevent an app from being able to connect to a Plex Media Server securely on the local network. For most users, this won’t be an issue, but some users of higher-end routers (or those provided by some ISPs) may run into problems.
Similarly, some DNS providers (including some ISPs) may have this feature.
DNS rebinding protection is meant as a security feature, to protect insecurely-designed devices on the local network against attacks. It provides no benefit for devices that are designed and configured correctly.
-
11 hours ago, Tucubanito07 said:
How would you rebuild it? Just in case it happens again I know what I can do to try to fix it.
Hi Tucubanito07,
The npm-01 host that had the corrupt PEM would need its "conf" file deleted from appdata. You can copy the conf to another folder and review it to recreate that proxy host. Once you delete that conf, Nginx Proxy Manager will load every host except the corrupted one (which can sometimes be more than one); you would then re-add that proxy host.
Example: npm-01 = jimmy.domain.com
Delete the conf (/etc/letsencrypt/renewal/npm-1.conf)
Load NPM
Review hosts for missing one or review the conf file for the missing host info and re-add.
However, if multiple hosts are affected, you will have to delete the others that appear in the log with the same error: nginx: [emerg] cannot load certificate "/etc/letsencrypt/live/npm-1/fullchain.pem": PEM_read_bio_X509_AUX() failed (SSL: error:0909006C:PEM routines:get_name:no start line:Expecting: TRUSTED CERTIFICATE)
Alternatively, you can go to each certificate folder and check the fullchainX.pem (X being whatever number it is in the dir) for validity:
openssl x509 -text -noout -in /etc/letsencrypt/live/npm-1/fullchain.pem
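To check every certificate at once instead of one at a time, the openssl check can be wrapped in a small loop; a sketch, assuming the letsencrypt archive layout NPM uses (the check_pems name is mine):

```shell
# check_pems DIR: run openssl's X.509 parser over every fullchain*.pem in
# the per-host folders under DIR and flag any file it cannot read
# (i.e. corrupted or blanked-out certificates).
check_pems() {
    for pem in "$1"/*/fullchain*.pem; do
        [ -e "$pem" ] || continue   # no matches: the glob stays literal
        if openssl x509 -noout -in "$pem" 2>/dev/null; then
            echo "OK      $pem"
        else
            echo "CORRUPT $pem"
        fi
    done
}

# Hypothetical usage against the NPM archive folder:
# check_pems /etc/letsencrypt/archive
```

Any line flagged CORRUPT corresponds to a host whose conf needs the delete-and-recreate treatment described above.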
-
4 hours ago, Tucubanito07 said:
So I did the ones you said and this is what I got. Seems to have the same files.
You're welcome. It appears your fullchain.pem somehow became corrupted (likely blanked out). Rebuilding it would fix this.
-
As I have said, I have mine configured and working. One thing I am thinking: /plex/ translates to a ".plex.direct" URL. Do you have DNS rebinding allowed for "plex.direct"? If not, only IP:32400/plex will work; if so, then domain.com/plex/ will work.
-
1 minute ago, Tucubanito07 said:
This is for one of the certs. Based on this, it seems to be permission issues, correct? How would I be able to fix it, or what permissions does it need?
ls -l /mnt/cache/appdata/NginxProxyManagerLive/letsencrypt/archive/npm-1/
total 16
-rw-r--r-- 1 nobody users 1838 Dec 14 11:21 cert5.pem
-rw-r--r-- 1 nobody users 1586 Dec 14 11:21 chain5.pem
-rw-r--r-- 1 nobody users 3424 Dec 14 11:21 fullchain5.pem
-rw------- 1 nobody users 1704 Dec 14 11:21 privkey5.pem
Check certs 6, 7, 12, 13, and 20, as those are the ones erroring. Are those files there? I suspect not, in which case you will have to delete those hosts and recreate them, or manually force the certs to regenerate.
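To see at a glance which of the numbered cert folders are missing their files, a small loop helps; a sketch (the missing_certs name is mine, and the numbers match the erroring hosts in this thread):

```shell
# missing_certs BASE N...: report which BASE/npm-N archive folders lack any
# fullchain*.pem - i.e. the hosts whose certs must be deleted and recreated.
missing_certs() {
    base="$1"; shift
    for n in "$@"; do
        if ls "$base/npm-$n"/fullchain*.pem >/dev/null 2>&1; then
            echo "npm-$n: present"
        else
            echo "npm-$n: MISSING"
        fi
    done
}

# Hypothetical usage for the hosts erroring in this thread:
# missing_certs /etc/letsencrypt/archive 6 7 12 13 20
```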
-
6 minutes ago, Tucubanito07 said:
If you mean the /ETC/ it does not exist. Do you happen to know the specific directory and i can supply the permission?
/mnt/cache/appdata/NginxProxyManager/letsencrypt/archive/npm-20
-
5 minutes ago, Tucubanito07 said:
This is what i see.
ls -l /mnt/user/appdata/NginxProxyManagerLive/letsencrypt/live/npm-1/
total 20
-rw-rw-rw- 1 nobody users 692 May 24 2020 README
lrwxrwxrwx 1 nobody users 29 Dec 14 11:21 cert.pem -> ../../archive/npm-1/cert5.pem
lrwxrwxrwx 1 nobody users 30 Dec 14 11:21 chain.pem -> ../../archive/npm-1/chain5.pem
lrwxrwxrwx 1 nobody users 34 Dec 14 11:21 fullchain.pem -> ../../archive/npm-1/fullchain5.pem
lrwxrwxrwx 1 nobody users 32 Dec 14 11:21 privkey.pem -> ../../archive/npm-1/privkey5.pem
Can you check the archive folder for the originals please?
-
5 minutes ago, Tucubanito07 said:
I tried the certbot renew --force-renewal and restarted the container, and still nothing. What is really weird is that nothing was done for this to happen. My webUI is not even working, which is also weird. Thank you for your help.
This is also what i see.
All renewal attempts failed. The following certs could not be renewed:
/etc/letsencrypt/live/npm-12/fullchain.pem (failure)
/etc/letsencrypt/live/npm-13/fullchain.pem (failure)
/etc/letsencrypt/live/npm-6/fullchain.pem (failure)
/etc/letsencrypt/live/npm-7/fullchain.pem (failure)
4 renew failure(s), 0 parse failure(s)
at ChildProcess.exithandler (child_process.js:303:12)
at ChildProcess.emit (events.js:315:20)
at maybeClose (internal/child_process.js:1021:16)
at Socket.<anonymous> (internal/child_process.js:443:11)
at Socket.emit (events.js:315:20)
at Pipe.<anonymous> (net.js:674:12)
[nginx] starting...
nginx: [emerg] cannot load certificate "/etc/letsencrypt/live/npm-20/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/npm-20/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
Have you checked "/etc/letsencrypt/live/npm-20/", or any of the /etc/letsencrypt/live locations, to see if the fullchain.pem is there? It seems the symlinking is broken for them.
Example:
drwxrwxrwx 1 nobody users 94 Dec 9 17:01 ./
drwx------ 1 nobody users 138 Dec 11 16:39 ../
-rw-rw-rw- 1 nobody users 692 Jul 30 14:01 README
lrwxrwxrwx 1 nobody users 29 Dec 9 17:01 cert.pem -> ../../archive/npm-1/cert3.pem
lrwxrwxrwx 1 nobody users 30 Dec 9 17:01 chain.pem -> ../../archive/npm-1/chain3.pem
lrwxrwxrwx 1 nobody users 34 Dec 9 17:01 fullchain.pem -> ../../archive/npm-1/fullchain3.pem
lrwxrwxrwx 1 nobody users 32 Dec 9 17:01 privkey.pem -> ../../archive/npm-1/privkey3.pem
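Dangling symlinks like these can be listed in one pass with find's -xtype test, which matches links whose target no longer exists; a sketch (the broken_links name is mine):

```shell
# broken_links DIR: print every dangling symlink under DIR.
# -xtype l matches a symlink only after following it, so it hits exactly
# the links whose target file is missing.
broken_links() {
    find "$1" -xtype l 2>/dev/null
}

# Hypothetical usage against the NPM live tree:
# broken_links /etc/letsencrypt/live
```

Any path it prints is a cert whose archive file was removed or renamed, which matches the BIO_new_file() errors above.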
Unable to mount 6 new 1TB drives
in Storage Devices and Controllers
Posted
@SimonF there is no PSID on the disk that I can find, unless it's under the "Tamper Evident Label", which I find weird; these are new replacement drives from Dell, so it's possible.
I did download sedutil to see if I could get any info, but can't run it on Unraid:
The Kernel flag libata.allow_tpm is not set correctly
Please see the readme note about setting the libata.allow_tpm
Invalid or unsupported disk /dev/sdz