
Posts posted by tTownTom

  1. 12 minutes ago, JorgeB said:

    If you don't have a backup or can't recover the current appdata, you will have to.

    ..sure hope those weekly backups actually did their job 🙃

     

    Would having multiple cache drives solve this in the future? Or is that not how cache pools work?

  2. Just now, JorgeB said:

    image.png

     

    This means the SSD failed, typical issue with multiple models based on the same controller, you will need to replace it.

    Thank you for your reply!

     

    When replacing it, will I have to set every container up from scratch? 😬

  3. Hi @JorgeB

     

    I turned off the server to check all connections, and they seemed fine. Before rebooting I thought I would create a disk image of the disk (as a backup), but I could not initialize the disk in Windows..

     

    Re-connecting the disk and booting up the server gives this result (which seems fine):
    image.thumb.png.c2d09de7dc8cdb46247340d27c8e4e78.png

     

    Starting the array however, results in this error: Unmountable: Unsupported or no file system

    image.thumb.png.67d565c39927d14b61a3e1fe49034a66.png

     

    Also:
    The disk is listed under "Historical Devices" as well
    image.thumb.png.3fe339a9596375d2c328ee0179d87316.png

     

    Under Fix Common Problems I get this error:
    image.thumb.png.793b339c294ea6e84168f1443ca800e5.png

    Where I fall into the latter category (If the disk is listed as being unmountable, and it has data on it, whatever you do do not hit the format button. Seek assistance HERE).

    and it also lists a bunch of warnings, all referencing "non existent pool cache".

     

     

    And so that's where I'm stuck right now.

    Any and all suggestions are very welcome ❤️ 

  4. I just updated to Unraid v6.12.4 and now a whole lot of my containers will not start.

     

    On server reboot I see a lot of these in the log:

    Oct 26 09:18:35 storeStep kernel: br-7f14e4841fe4: port 7(vethd64225f) entered blocking state
    
    Oct 26 09:18:35 storeStep kernel: br-7f14e4841fe4: port 7(vethd64225f) entered disabled state
    
    Oct 26 09:18:35 storeStep kernel: device vethd64225f entered promiscuous mode
    
    Oct 26 09:18:52 storeStep kernel: vethf21ecfb: renamed from eth0

     

    From the logs of some containers that won't start, it seems the file system is read only:

    Zigbee2MQTT:error 2023-10-26 08:14:14: Failed to write state to '/app/data/state.json' (EROFS: read-only file system, open '/app/data/state.json')

     

    Any idea what's going on here and how to fix it?

    storestep-diagnostics-20231026-0928.zip
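    Before replacing anything, a quick probe can confirm whether a mount really has gone read-only (a minimal sketch; that the affected appdata lives somewhere like /mnt/cache/appdata is an assumption):

```shell
#!/bin/sh
# Minimal sketch: probe a directory with a real write to see whether
# its filesystem has been remounted read-only (the cause of EROFS).
# The directory to test (e.g. /mnt/cache/appdata) is an assumption.
is_writable() {
    dir="$1"
    probe="$dir/.rw_probe_$$"
    if touch "$probe" 2>/dev/null; then
        rm -f "$probe"
        echo "writable"
    else
        echo "not writable"
    fi
}

is_writable /tmp
```

    A read-only remount is often the filesystem protecting itself after device errors, so a "not writable" result is a cue to check the device's SMART data next.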

  5. Just now, Frank1940 said:

    On what directory/directories?   Could you provide us with the exact command that you use?   (I am suspecting that we will be seeing a lot more of these types of issues...)

    The troubled directory was in this structure:
    /mnt/user/<SHARE>/<DIRECTORY>

    where everything within <DIRECTORY> was not accessible.

    I ran the following command:
    chmod -R 777 /mnt/user/<SHARE>/<DIRECTORY>
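    For reference, a less blunt alternative to chmod -R 777 is to give directories 775 and files 664 (which, as far as I know, matches what Unraid's New Permissions tool applies - treat that as an assumption):

```shell
#!/bin/sh
# Sketch: set Unraid-style default permissions instead of 777.
# Directories get 775 (so they stay traversable), files get 664
# (read/write for owner and group, read-only for others).
fix_perms() {
    target="$1"
    find "$target" -type d -exec chmod 775 {} +
    find "$target" -type f -exec chmod 664 {} +
}

# Hypothetical usage on the troubled share path:
# fix_perms "/mnt/user/<SHARE>/<DIRECTORY>"
```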

  6. 4 minutes ago, itimpi said:

    @tTownTom  have you checked the permissions at the Linux level on the share and the files in it?  There have been a number of reports of permissions being wrong for network access and it only showing up after upgrading.

     

    it might be worth posting your system’s diagnostics zip file to your next post in this thread to see if anyone can spot something there.

    Thanks for your suggestion!

     

    The directory had rw for all users.
    To test, I did a chmod -R 777 on the directory, now all the files are back 🤩
    I don't reckon 777 is an amazing solution, so I will be doing some more tests - but at least I can now access all files and folders again.

     

    Thank you!

  7. In one specific directory, all subfolders and files are gone when trying to access them from Explorer/Finder/Nextcloud; however, ls-ing into the directory from the command line shows that all the files and folders are still there, all 13 gigs. cp-ing all files and folders to another share took about 2 mins and gave the same result: I can ls to see the files/folders, but Explorer reports the folder is empty.

     

    I first encountered the issue after updating to UnraidOS 6.11.0 today.

    Not saying the update is to blame, it's just the only thing I can think of that's changed since I last accessed the folder using Explorer.

     

    Any pointers on how to access this content from anything other than the command line again?

     

    Diagnostics are attached:

    storestep-diagnostics-20220925-1200.zip
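    For anyone hitting the same thing: before reaching for chmod, it can help to record what Linux itself reports for the entries SMB refuses to show. A small sketch (the share path is a placeholder):

```shell
#!/bin/sh
# Sketch: list octal mode, owner:group, and name for everything
# directly under a directory, to spot entries a share user can't read.
inspect() {
    find "$1" -maxdepth 1 -exec stat -c '%a %U:%G %n' {} \;
}

# Hypothetical usage:
# inspect "/mnt/user/<SHARE>/<DIRECTORY>"
```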

  8. 12 hours ago, mgutt said:

    Try to delete your SSL certificate in NPM and obtain a new one.

    This solved the problem.

    Indeed, deleting and requesting new SSL certificates for all the proxies made them all work again.

    Thank you so much for that!

     

    Would you happen to know why this suddenly stopped working in the first place? Is there something I can do to not have it happen again?

     

    Cheers for all your help and time - I truly appreciate it :)

  9. 41 minutes ago, mgutt said:

    Remove the slash after index.html

    Thanks for your help!

    After removing the slash:
     

    [root@docker-eeb330dffb63:/app]# curl -sSL -D - http://10.<internalIP>:32400/web/index.html -o /dev/null
    HTTP/1.1 200 OK
    X-Plex-Protocol: 1.0
    Cache-Control: no-cache
    Accept-Ranges: bytes
    Connection: Keep-Alive
    Keep-Alive: timeout=20
    Content-Length: 9206
    Content-Type: text/html
    Date: Tue, 25 Jan 2022 17:44:14 GMT

    (This was the same for both NPM and UnRaid terminal).

     

    47 minutes ago, mgutt said:

    This should not happen. A domain inside the hosts file must work. Are you sure Windows did not add .txt or similar? Did you copy and paste the file to overwrite it? Windows usually does not allow editing this file directly.

    I did not copy/paste the file - thanks for the tip on that!

    I have now done so, and I am getting this error in Chrome, "ERR_HTTP2_PROTOCOL_ERROR":

    cpError.PNG.8c45024fe1c90dcb133671b6ff57da4e.PNG

  10. 21 minutes ago, mgutt said:

    Looks good. Please verify it through cmd and "ping plex.example.com". Maybe it returns an IPv6 instead of IPv4?! Finally this must work. You could even add a random domain into the hosts file with a random IP and ping should try to reach this ip.

    Pinging the domain entered into the host file gives this result:

    ping plex.<mydomain>.com
    Ping request could not find host plex.<mydomain>.com. Please check the name and try again.

     

    23 minutes ago, mgutt said:

    Ok, then back to the connection problem between NPM and the target container. You said it returns error 400.

     

    Is the target url the same as if you open the target container manually through your browser? What if you execute the same curl command through the unRAID WebTerminal?

    The url/ip is the same.

     

    Opening Plex in WebUI as an example - I copied the URL and put it into the curl command.
    Both the UnRaid terminal and the NPM terminal give the same result.

    From docker terminal:

    [root@docker-eeb330dffb63:/app]# curl -sSL -D - http://10.<internalIP>.19:32400/web/index.html/ -o /dev/null
    HTTP/1.1 404 Not Found
    X-Plex-Protocol: 1.0
    Content-Length: 85
    Content-Type: text/html
    Cache-Control: no-cache
    Date: Tue, 25 Jan 2022 12:25:10 GMT

    From UnRaid terminal:

    root@unRaid:~# curl -sSL -D - http://10.<internalIP>:32400/web/index.html/ -o /dev/null
    HTTP/1.1 404 Not Found
    X-Plex-Protocol: 1.0
    Content-Length: 85
    Content-Type: text/html
    Cache-Control: no-cache
    Date: Tue, 25 Jan 2022 12:25:36 GMT

    Screenshot from NPM proxy host setting:

    plexNPM.PNG.2b9723b244b757d4d992c4600f28a507.PNG
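    On the Linux side (e.g. from the Unraid terminal), the equivalent of that Windows ping test is getent, which consults /etc/hosts before DNS - so a correct hosts-file entry should resolve even with no public DNS record. A sketch, with localhost standing in for the real domain:

```shell
#!/bin/sh
# Sketch: resolve a hostname the way the OS does (/etc/hosts first,
# then DNS) and print the first address found, if any.
resolve_local() {
    getent hosts "$1" | awk '{print $1; exit}'
}

resolve_local localhost
```

    If this prints nothing for the domain, the hosts file is not being consulted at all, which points back at a file-name or edit-permission problem.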

  11. 9 hours ago, mgutt said:

    But you are able to open the target container directly (without NPM)?

    I am able to open the containers directly (UnRaid dash - click container icon - WebUI, and also by entering the IP:port of the container into the browser whilst on my local network).

     

    1 hour ago, mgutt said:

    This change needs up to 24 hours. Do not forget: Every client has a DNS cache.

     

    PS Cloudflare has a developer mode on the main page of the dashboard which allows bypassing Cloudflare.

     

    Did you verify it through the Terminal of your client? Did you edit the correct file? Usually this can't happen as the hosts file has the highest priority.

    I did put Cloudflare in Developer mode yesterday when I removed the DNS entry - however, today the changes must have taken effect, as I get an error message from Chrome when entering the URL:

    na.PNG.7397a01d63a66a1b113e3ee21691ecb0.PNG

     

    I'm on Windows. Made the changes to this file:

    C:\Windows\System32\drivers\etc\hosts

    I changed the file like so: 

    # Copyright (c) 1993-2009 Microsoft Corp.
    #
    # This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
    #
    # This file contains the mappings of IP addresses to host names. Each
    # entry should be kept on an individual line. The IP address should
    # be placed in the first column followed by the corresponding host name.
    # The IP address and the host name should be separated by at least one
    # space.
    #
    # Additionally, comments (such as these) may be inserted on individual
    # lines or following the machine name denoted by a '#' symbol.
    #
    # For example:
    #
    #      102.54.94.97     rhino.acme.com          # source server
    #       38.25.63.10     x.acme.com              # x client host
    
    # localhost name resolution is handled within DNS itself.
    #	127.0.0.1       localhost
    #	::1             localhost
    109.<myExternalIP>.184   plex.<myDomain>.com

    There is also a "hosts.ics" file in the same folder (etc) which I did not change...

     

    Entering my external IP into the browser sends me to the default NPM page:

    npm.PNG.62ec01219f682948209f42d4b2e0ea2d.PNG

    But entering the urls of the proxy hosts throws the errors.

     

    Thanks for your time looking into this :D

  12. Hi @mgutt, thanks for getting back to me on this issue!

     

    For some reason, adding the IP and URL to the hosts file does nothing :/ Even removing the DNS entry from Cloudflare and putting the account in Developer mode just changes the error message from Cloudflare when entering the URL...


    After running the curl command mentioned in "4.) Does NPM reach your target container?" I get this response:

    HTTP/1.1 400 Bad Request
    Server: nginx
    Date: Mon, 24 Jan 2022 17:07:36 GMT
    Content-Type: text/html
    Content-Length: 248
    Connection: close
    Referrer-Policy: no-referrer
    X-Content-Type-Options: nosniff
    X-Download-Options: noopen
    X-Frame-Options: SAMEORIGIN
    X-Permitted-Cross-Domain-Policies: none
    X-Robots-Tag: none
    X-XSS-Protection: 1; mode=block

     

    I find it strange that this just suddenly became a problem after months of using NPM with no issues at all :/

  13. NginX Proxy Manager stopped working.


    Hi!
    I've been using NPM flawlessly for a while - yet suddenly it stopped working. 
    I get an Error 520 from Cloudflare when I try to access my services: 


    "There is an unknown connection issue between Cloudflare and the origin web server. As a result, the web page can not be displayed."



    Putting my IP into the browser I reach the default NPM screen:


     

    "Congratulations!
    You've successfully started the Nginx Proxy Manager.

    If you're seeing this site then you're trying to access a host that isn't set up yet.

    Log in to the Admin panel to get started."

     


    I can still log in to NPM and see all my proxies and SSLs.. 

    Any idea what might have happened here?

  14. On 1/11/2022 at 4:14 PM, tTownTom said:

    Thanks for your reply, @mattie112

    I've tried a few things now, including what you suggested.

     

    I also made sure to purge the Cloudflare cache and turn on developer mode in Cloudflare, which lets one "see changes to your origin server in realtime", just to be sure 😛 

     

    In an incognito window, to also bypass my browser's cache, I first disabled my router's port forwarding to Nginx and then tried to load the IP. I got a connection timed out error - with the error still up, I enabled the port forwarding, and the error page changed over to the Nginx default page ("Congratulations..")

    In my mind this proves an issue with the Nginx setup. Would you agree?

    UPDATE:
    I deleted the docker and installed the Nginx-Proxy-Manager-Official docker instead.

    Now everything works. No idea as to why. But hey..!

  15. Thanks for your reply, @mattie112

    I've tried a few things now, including what you suggested.

     

    I also made sure to purge the Cloudflare cache and turn on developer mode in Cloudflare, which lets one "see changes to your origin server in realtime", just to be sure 😛 

     

    In an incognito window, to also bypass my browser's cache, I first disabled my router's port forwarding to Nginx and then tried to load the IP. I got a connection timed out error - with the error still up, I enabled the port forwarding, and the error page changed over to the Nginx default page ("Congratulations..")

    In my mind this proves an issue with the Nginx setup. Would you agree?

  16. 13 minutes ago, mattie112 said:

    Perhaps your "congratulations" page is cached by Cloudflare? I believe it can do that for static websites. You could for example check the port forwarding on your router, disable that and if the page still loads then you are 100% not serving that page.

     

    Also in the commandline you can do "docker ps" to see all containers running, this includes all the ports they listen on so perhaps that might give some insight.

    Thanks!

    I closed the port, and the "Congratulations" page still showed, so it would seem Cloudflare is indeed caching.

    I also ran "docker ps" and there is indeed only one instance of Nginx running.

     

    I still don't understand why it's suddenly stopped working, though =/ Everything seems fine, it just does not work anymore..

  17. TWO INSTANCES OF NGINX PROXY MANAGER RUNNING AT THE SAME TIME?

     

    Hi,
    I've been using Nginx Proxy Manager for a while, and it's worked great!

     

    Yesterday, however, I was trying to access Plex and I was presented the Cloudflare 520 Error: "unknown connection issue between Cloudflare and the origin web server."

    cloudflare520.PNG.8af9088f64e4497818547f14f868f122.PNG

     

    Checking Cloudflare DNS-settings everything looks right.

    When I enter my public IP in the browser, I am presented with the Nginx Proxy Manager default page: "Congratulations! You've successfully started the Nginx Proxy Manager."

     

    If I open the WebUI for Nginx Proxy Manager from my Unraid dashboard everything seems fine.

     

    If I stop the Nginx Proxy Manager docker and enter my public IP in the browser, however, I am still presented with the Nginx Default Page - as if there are multiple instances of the docker running, and one is not configured..?

    dockerOff.thumb.PNG.8fd2e7c2b26c76f92b0e15a0bde6ea3d.PNG

     

    I've restarted the docker with no change in results, and I've restarted the Unraid server with no change in results.

    Any ideas as to what to do before I go ahead and delete the docker and try re-installing it?

     

    Cheers

  18. 1 hour ago, UhClem said:

    Benson is correct.

    (I "guessed" wrong, based on assumptions about the on-screen message. [The term "PM" triggered "port multiplier" when there aren't any; there are 4 ASM106x controller chips.] Also, I didn't realize the OP only had 2 drives connected--I thought the others were "missing" and grasped for a reason.)

    That's my bad, I should have been clear about that.

    4 hours ago, Benson said:

    You may try as below: in the BIOS, disable any SATA ports from this card that have no hard disk connected

     

    Chris Gaukroger 2009-06-28 19:06:03 UTC

    A temporary workaround for me (with GA-EP45-DS5) was: - disable the extra SATA ports not being used. Remember the ROM BIOS settings in case you disable the ports you are using! In ROM BIOS set Onboard SATA/IDE Device to Disable Now quick boots OK!

     

    If above not work, could you try 3 things

     

    1) Set longer boot pause in BIOS

    2) Disable PCIe storage oprom in BIOS

    3) Turn off IOMMU in Unraid

    Thanks for your suggestions, @Benson!
    I've gone through all of them, and unfortunately none seems to work.

     

    So I've bit the bullet and ordered a new card.

    The LSI SAS 9207-8i seemed to have good user feedback and is said to work out of the box. It costs a fortune in Norway, but it was a fraction of the price over at eBay. So I guess I'll try that once it arrives.

     

    Thank you all for helping :)
