
Boyturtle


Posts posted by Boyturtle

  1. On 6/9/2021 at 9:58 PM, Djoss said:

    Is the issue you have about a "Local destination" ?

    I get very slow write speeds to a local disk (call it disk A); it typically writes at 1GB/hr (similar to the upload speed I get to Crashplan Central). I have another disk that is used for the same backup sets as disk A (call it disk B). Data is written to both disks, and they are alternated and stored offsite when not in use. Disk A takes about 2 days to sync when I add and mount it in Unraid; disk B takes about 12-15 hours to do the same operation. I'm not sure of the write speeds of disk B, but it is substantially quicker than disk A. Both disks are 4TB, though disk B is newer, having recently replaced a faulty (clicking) drive that was the same model and batch as disk A. The data stored on disk B is only about 10% less than the data stored on disk A. Disk A is not showing any SMART errors, but I understand those tests are not always reliable.
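To separate the disk itself from the CrashPlan pipeline, a raw write test from the Unraid terminal might help. This is just a sketch; the TARGET path is a placeholder for wherever disk A (or B) is actually mounted.

```shell
# Hypothetical target; point TARGET at the disk's mount point
# (e.g. /mnt/disks/DiskA) to test that disk instead of /tmp.
TARGET="${TARGET:-/tmp}"

# Write 64 MiB and fsync before reporting, so the page cache doesn't
# inflate the number; dd prints the throughput on its last line.
dd if=/dev/zero of="$TARGET/speedtest.bin" bs=1M count=64 conv=fsync 2>&1 | tail -n 1

# Clean up the test file.
rm -f "$TARGET/speedtest.bin"
```

Running this against both disks would show whether disk A's raw write speed is the bottleneck rather than the CrashPlan container.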

  2. 2 hours ago, Djoss said:

    Instead of using java mx, there is a setting in the container's configuration that can be used: "Maximum Memory". 

    I've had Maximum Memory in the container settings set to 6144M for more than a year now. Is there a way to check whether the docker is using this setting correctly?

    I've also noticed that the /appdata/CrashPlanPRO/cache folder is very large at 5GB+ (with /appdata/CrashPlanPRO/cache/42 at about 2.5GB). Is this normal?
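To see where that 5GB is going, a quick sketch (the appdata path is an assumption based on the folder names above):

```shell
# Hypothetical appdata path; adjust CACHE_DIR to your actual mapping.
CACHE_DIR="${CACHE_DIR:-/mnt/user/appdata/CrashPlanPRO/cache}"

if [ -d "$CACHE_DIR" ]; then
  # Largest cache subfolders first.
  du -sh "$CACHE_DIR"/* 2>/dev/null | sort -rh | head -n 5
else
  echo "cache dir not found: $CACHE_DIR"
fi
```

As for the memory cap, something like `docker inspect -f '{{.HostConfig.Memory}}' <container name>` should print the limit in bytes (6442450944 for 6144M) if the setting took effect; the container name is whatever yours is called in Unraid.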

  3. Writing speeds to external USB3 hard drives seem to have become very slow just recently. Typically, Crashplan is writing at about 1GB/hour and is having difficulty keeping up with the data that is being written. I get a similar speed uploading to Crashplan Central too.

    I'm not sure if it's related, but one of the backup disks in the rotation cycle still shows as present after it has been unmounted and removed.

    Any suggestions as to what might be wrong?

  4. Apologies for not making everything clearer, but I'm struggling to find the correct terminology to express my problem.

    6 hours ago, mattie112 said:

    "When I click the link to launch the shortcut, the browser times out; this happens if the cert is applied or not"

     

    What link/shortcut? You cannot access the NPM control panel? Or you cannot access one of the services proxied?

    I can access the NPM control panel but am unable to connect to some proxied services; I can connect to the internet-facing services where I have created CNAME records with the DNS provider, but not to the non-internet-facing services that are also proxied.

    7 hours ago, mattie112 said:

    What is proxynet? A docker network on your Unraid?

    Proxynet is a bridge docker network created on Unraid. It has a subnet of 172.18.0.0/16 and a gateway of 172.18.0.1. All the proxied services run on this network.

    7 hours ago, mattie112 said:

    And are the DNS servers your Unraid uses correct? If it doesn't resolve it is probably not using your pfSense as DNS server.

    In Settings > Network Settings, DNS is pointing to pfSense. When I ping krusader.boyturtle from the desktop, it resolves to 172.18.0.1, the gateway for the docker network (proxynet), but it gets no further and I have 100% packet loss.

  5. I’m not sure if this belongs here or somewhere else. I need some help getting NPM to create short local URLs for various internal self-hosted services running on proxynet, using split DNS (on pfSense) and local certs. I want this for convenience, so I can type krusader.boyturtle in the address bar and not have to remember ipadd:port combinations. I also want this so I don’t have to deal with the annoying browser non-secure warnings. This is all for services that are not internet facing, so CNAME and ANAME records are not involved.

     

    I have created self-signed certs and keys, added the short local domain and host to the Host Overrides in the DNS Resolver in pfSense, imported those certs into NPM, and then added the host to NPM using the created/imported cert. When I click the link to launch the shortcut, the browser times out; this happens whether or not the cert is applied. When pinging krusader.boyturtle from my desktop (outside of proxynet), it resolves to the docker0 IP address but gets no further. When pinging from another docker's CLI, it does not resolve at all (I get "Name or service not known").

     

    What am I missing to get around this, or have I hit a feature of the Unraid docker system that doesn’t allow this traffic through in this manner?
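One possible explanation, offered as an assumption rather than a diagnosis: a LAN client has no route into the 172.18.0.0/16 bridge subnet, so a host override that resolves to the docker gateway (172.18.0.1) will always time out. The override would instead need to resolve to the Unraid box's own LAN IP, where NPM publishes ports 80/443, and NPM then proxies onward over proxynet. A pfSense Host Override along these lines (the IP address is a hypothetical example):

```
Host:       krusader
Domain:     boyturtle
IP address: 192.168.1.10    # the Unraid host's LAN IP (example), NOT 172.18.0.1
```

With that in place, the browser reaches NPM on the host, and NPM's proxy host entry for krusader.boyturtle forwards to the krusader container and port on proxynet.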

  6. 10 hours ago, mattie112 said:

    If your DNS is resolving to cloudflare are you 100% sure the '/config/letsencrypt-acme-challenge' is also forwarded correctly? Letsencrypt needs to contact your server (over http) to verify the signature.

    Bingo, that sorted it, thanks. I had an old rule blocking port 80 sitting at the bottom of my rules list that I'd missed and forgotten to disable. I've unblocked it and Let's Encrypt can now issue the certificate 😊

  7. 5 hours ago, mattie112 said:

    And you can resolve your domain correctly? Both ipv4 and ipv6? It looks like it is doing a http challenge but that fails so most likely letsencrypt cannot reach your domain to verify.

    I have set up a DDNS A record and several CNAME records off the back of that on Cloudflare, including root and www. I am able to ping them all just fine. Is there anything else I should be looking at?

    FWIW, I am able to add an origin certificate and use that for the hosts, but I would like to get the Let's Encrypt certificate working, in case I need it at some point in the future.

    When I try to add the Let's Encrypt certificate, I get this error within Nginx Proxy Manager:

    Error: Command failed: /usr/bin/certbot certonly --non-interactive --config "/etc/letsencrypt.ini" --cert-name "npm-16" --agree-tos --email "[email protected]" --preferred-challenges "dns,http" --domains "domain.uk" 
    Saving debug log to /var/log/letsencrypt/letsencrypt.log
    Plugins selected: Authenticator webroot, Installer None
    Obtaining a new certificate
    Performing the following challenges:
    http-01 challenge for domain.uk
    Using the webroot path /config/letsencrypt-acme-challenge for all unmatched domains.
    Waiting for verification...
    Challenge failed for domain domain.uk
    http-01 challenge for domain.uk
    Cleaning up challenges
    Some challenges have failed.
    
        at ChildProcess.exithandler (child_process.js:308:12)
        at ChildProcess.emit (events.js:314:20)
        at maybeClose (internal/child_process.js:1022:16)
        at Process.ChildProcess._handle.onexit (internal/child_process.js:287:5)

     

  8. When adding a new host, I am unable to get the SSL certificate from Let's Encrypt. Output from logs

     

    [4/8/2021] [9:44:11 PM] [Nginx ] › ℹ info Reloading Nginx
    [4/8/2021] [9:44:11 PM] [SSL ] › ℹ info Requesting Let'sEncrypt certificates for Cert #3: sub.domain.uk
    [4/8/2021] [9:44:15 PM] [Nginx ] › ℹ info Reloading Nginx
    [4/8/2021] [9:44:15 PM] [Express ] › ⚠ warning Command failed: /usr/bin/certbot certonly --non-interactive --config "/etc/letsencrypt.ini" --cert-name "npm-3" --agree-tos --email "[email protected]" --preferred-challenges "dns,http" --domains "sub.domain.uk"
    Saving debug log to /var/log/letsencrypt/letsencrypt.log
    Plugins selected: Authenticator webroot, Installer None
    Obtaining a new certificate
    Performing the following challenges:
    http-01 challenge for sub.domain.uk
    Using the webroot path /config/letsencrypt-acme-challenge for all unmatched domains.
    Waiting for verification...
    Challenge failed for domain sub.domain.uk
    http-01 challenge for sub.domain.uk
    Cleaning up challenges
    Some challenges have failed.

    The output states --preferred-challenges "dns,http", though I purposely left the DNS challenge disabled when adding the host.

    At first I thought the install was corrupt, so I uninstalled the docker and deleted the folder in appdata, before reinstalling it. It still behaves the same way.

    Any suggestions on how to fix this?
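One way to narrow this down, sketched here with the webroot path taken from the certbot log and a placeholder domain: drop a token file into the challenge webroot and try to fetch it over plain HTTP from outside the LAN. If that fetch fails, the problem is reachability (DNS, port forward, or a firewall rule) rather than certbot itself.

```shell
# Webroot path from the certbot log above; falls back to a temp dir
# if that path doesn't exist on the machine running this.
WEBROOT="/config/letsencrypt-acme-challenge"
[ -d "$WEBROOT" ] || WEBROOT="$(mktemp -d)"

# Create a uniquely named token file in the challenge path.
TOKEN="probe-$$"
mkdir -p "$WEBROOT/.well-known/acme-challenge"
echo ok > "$WEBROOT/.well-known/acme-challenge/$TOKEN"
echo "token placed: $WEBROOT/.well-known/acme-challenge/$TOKEN"

# From OUTSIDE the LAN (e.g. a phone on mobile data), this must print "ok":
#   curl http://sub.domain.uk/.well-known/acme-challenge/<token>
```

Let's Encrypt validates http-01 over port 80, so the fetch has to work over plain http://, not https://.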

  9. [Resolved] I'm having an issue with this plugin; local access works, but I am unable to get remote access working. When I set it to remote access (having checked that the port forwarding is working correctly), I get "access unavailable" and I lose connection to the mothership.

    After I have lost connection to the mothership, running unraid.net-api restart does not bring it back, though it does bring back the local connection, usually after a few minutes. The only way to get the connection to the mothership back is to reinstall the plugin.

    I'm using a pfSense box and have the DNS resolver set to

    server:
    private-domain: "unraid.net"
    server:
    private-domain: local.ip.address

    Pinging myhashnumber.unraid.net resolves to the local.ip.address listed above, but pinging www.myhashnumber.unraid.net returns "Name or service not known".

    Skimming through this thread, I see that this is likely a DNS issue, but I couldn't find any further info on getting it resolved; any pointers would be much appreciated.
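To see exactly what the resolver hands back for each form of the name, a small sketch (the hostnames are placeholders for the real hash):

```shell
# Placeholder hostnames; substitute the real hash from the plugin.
for h in myhashnumber.unraid.net www.myhashnumber.unraid.net; do
  if addr="$(getent hosts "$h")"; then
    # getent prints "address  name"; keep just the address.
    echo "$h -> $(echo "$addr" | awk '{print $1}')"
  else
    echo "$h: no record"
  fi
done
```

If only the bare name gets an answer, the override in the resolver is likely matching just that exact label, which would fit the symptom described above.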

  10. 55 minutes ago, dlandon said:

    Delete the /flash/config/plugins/unassigned.disks/unassigned.devices.cfg file, then click on the 'Refresh Disks and Configuration" icon (double arrows in the upper right) and see if it doesn't clear up the automount.

    That seems to have sorted it. Hopefully this will resolve the issue of the external drives not unmounting when they should; time will tell over the next couple of weeks when I do the backup disk swap-out. Thanks for your assistance and persistence 🙂

  11. On 2/12/2021 at 11:57 AM, dlandon said:

    It doesn't appear to be related solely to unmounting of the disk.  I recommend the following:

    • Remove the preclear plugin. Done
    • Set the drive in question to not automount. All UD disks show Automount as on when hovering over the settings button, but when clicking the button, it shows as off. It doesn't matter whether the setting is off or on; the disk mounts automatically
    • Remove the drive in question. USB drives should be unmounted and removed from the system when you reboot. Ok
    • Reboot the server.
    • Plug the USB disk back in and mount it manually.
    • Check the log for errors before unmounting it.

    I suspect that the drive has some issues.  Post diagnostics again if the log has errors.

     

    Why are you using EXT4 on the disk? No particular reason, it was just convenient. Would I be better off using another fs? The disks are used for backup.

     

  12. Hi folks

    When I swap over external backup HDDs and click the UNMOUNT button on the Main tab, Unraid doesn't always unmount the drive correctly. While the system still seems to work without issue, I get multiple (2-4/minute) errors in the syslog (EXT4-fs error (device sda1): ext4_find_entry:1455: inode #2: comm pool-16-thread-: reading directory lblock 0), which rapidly fills it up; this is probably because the backup docker is still continuously trying to access the disk. Running the umount command in a terminal fixes this, but I would really like a proper fix. Can anyone suggest anything?
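As a workaround sketch until there's a proper fix (the mount point is an assumption; adjust it to the backup disk's path): check what still holds the filesystem before unmounting, and stop the backup container first so nothing reopens it.

```shell
# Hypothetical mount point for the backup disk; adjust to yours.
MP="${MP:-/mnt/disks/backup1}"

if mountpoint -q "$MP"; then
  # Show processes still using the filesystem (likely the backup docker).
  fuser -vm "$MP"
  # Try a normal unmount first, falling back to a lazy one.
  umount "$MP" || umount -l "$MP"
else
  echo "$MP is not mounted"
fi
```

Stopping the container (docker stop <name>) before clicking UNMOUNT, and starting it again after the swap, should avoid the open file handles that leave the mount busy in the first place.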

  13. On 12/17/2020 at 9:47 PM, skois said:

    Go to your /data folder; there should be a folder updater-randomstring, and inside backups should be your files. Copy them to /config/www/nextcloud/

    Do a backup before you do anything, though

    Unfortunately, NC wasn't happy at all and I couldn't even get into console properly. I just restored appdata from CA Backup/Restore and it is now back up, but in the earlier version. I will retry the update over the Christmas break, when I have a bit of spare time. Thanks for your input.

  14. 2 minutes ago, skois said:

    try running "occ maintenance:repair" or "updater.phar", depending on what you want.
    In this docker you don't need the sudo part.

    Apologies, I should have said that I have done this already and I get the same outcome.

    Could not open input file: /config/www/nextcloud/occ, as it is totally missing. The contents of /config/www/nextcloud are only:

    drwxrwxr-x 1 abc abc 284 Dec 17 18:12 apps
    drwxrwxr-x 1 abc abc  64 Dec 17 18:12 config
    -rw-rw-r-- 1 abc abc  57 Dec 17 18:12 index.php
    -rw-rw-r-- 1 abc abc  57 Dec 17 18:12 public.php
    -rw-rw-r-- 1 abc abc  57 Dec 17 18:12 remote.php
    -rw-rw-r-- 1 abc abc  57 Dec 17 18:12 status.php
    drwxrwxr-x 1 abc abc   0 Dec 17 18:12 themes
    drwxrwxr-x 1 abc abc  42 Feb 12  2020 updater

     

    I've run the find command and I can't find occ anywhere in the docker

     

  15. I am having a problem upgrading to v20.04; it would not work from within the webui. I then tried to run sudo -u abc php updater.phar from the /config/www/nextcloud/updater/ folder, but I get to step 6, Extracting, and it fails, giving me the following error


    ...PHP Warning:  require(/config/www/nextcloud/updater/../version.php): failed to open stream: No such file or directory in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 658
    PHP Fatal error:  require(): Failed opening required '/config/www/nextcloud/updater/../version.php' (include_path='.:/usr/share/php7') in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 658

    After aborting the update, I am unable to access NC at all; the webui shows "Update in process".

    I have tried to run sudo -u abc php /config/www/nextcloud/occ maintenance:repair, but it seems I have no file called occ in /config/www/nextcloud, so something has gone very awry.

    Any suggestions, if not for the upgrade, at least to get me back up and running on the older version?
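If it helps, the stuck "Update in process" screen is typically the web updater's progress marker: it records its state in a .step file inside an updater-<instanceid> folder under the data directory, and removing that file lets the webui load again. A hedged sketch, where the /data path is an assumption based on this container's usual mapping:

```shell
# Data directory as commonly mapped in this container; adjust if yours differs.
DATADIR="${DATADIR:-/data}"

# The web updater keeps its progress in <datadir>/updater-<instanceid>/.step.
STEP_FILE="$(find "$DATADIR" -maxdepth 2 -type f -name .step -path '*updater-*' 2>/dev/null | head -n 1)"
if [ -n "$STEP_FILE" ]; then
  echo "removing $STEP_FILE"
  rm -f "$STEP_FILE"
else
  echo "no .step file found under $DATADIR"
fi
```

The missing occ is a separate problem: it normally sits at the top of the server folder alongside index.php, and could be copied back from the matching release tarball, though restoring the whole folder from a backup (as suggested above) is the safer route.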

  16. Every now and again, my syslog fills up with these lines:


    Dec 17 18:35:59 Speedy kernel: EXT4-fs error (device sdd1): ext4_find_entry:1455: inode #2: comm Code42Service: reading directory lblock 0
    Dec 17 18:35:59 Speedy kernel: EXT4-fs error (device sdd1): ext4_find_entry:1455: inode #2: comm Code42Service: reading directory lblock 0
    Dec 17 18:35:59 Speedy kernel: EXT4-fs error (device sdd1): ext4_find_entry:1455: inode #2: comm Code42Service: reading directory lblock 0
    Dec 17 18:35:59 Speedy kernel: EXT4-fs error (device sdd1): ext4_find_entry:1455: inode #2: comm Code42Service: reading directory lblock 0
    Dec 17 18:36:09 Speedy kernel: EXT4-fs error (device sdd1): ext4_find_entry:1455: inode #2: comm MdePeer1WeDftWk: reading directory lblock 0
    Dec 17 18:36:09 Speedy kernel: EXT4-fs error (device sdd1): ext4_find_entry:1455: inode #2: comm MdePeer1WeDftWk: reading directory lblock 0
    Dec 17 18:36:09 Speedy kernel: EXT4-fs error (device sdd1): ext4_find_entry:1455: inode #2: comm MdePeer1WeDftWk: reading directory lblock 0
    Dec 17 18:36:09 Speedy kernel: EXT4-fs error (device sdd1): ext4_find_entry:1455: inode #2: comm MdePeer1WeDftWk: reading directory lblock 0
    Dec 17 18:36:59 Speedy kernel: EXT4-fs error (device sdd1): ext4_find_entry:1455: inode #2: comm W1544923155_Mai: reading directory lblock 0
    Dec 17 18:36:59 Speedy kernel: EXT4-fs error (device sdd1): ext4_find_entry:1455: inode #2: comm W1544923155_Mai: reading directory lblock 0
    Dec 17 18:36:59 Speedy kernel: EXT4-fs error (device sdd1): ext4_find_entry:1455: inode #2: comm W510768909_Main: reading directory lblock 0
    Dec 17 18:36:59 Speedy kernel: EXT4-fs error (device sdd1): ext4_find_entry:1455: inode #2: comm W510768909_Main: reading directory lblock 0
    Dec 17 18:36:59 Speedy kernel: EXT4-fs error (device sdd1): ext4_find_entry:1455: inode #2: comm pool-16-thread-: reading directory lblock 0
    Dec 17 18:36:59 Speedy kernel: EXT4-fs error (device sdd1): ext4_find_entry:1455: inode #2: comm pool-16-thread-: reading directory lblock 0
    Dec 17 18:37:59 Speedy kernel: EXT4-fs error (device sdd1): ext4_find_entry:1455: inode #2: comm pool-16-thread-: reading directory lblock 0
    Dec 17 18:37:59 Speedy kernel: EXT4-fs error (device sdd1): ext4_find_entry:1455: inode #2: comm pool-16-thread-: reading directory lblock 0

    When I look at the drives I have after this error occurs, I note that there is nothing mounted at sdd1, so it looks like the server is hunting for a mount that is no longer present.

    This has happened previously; restarting the server cleared the problem, and on that occasion all was well for 2 weeks or so before the problem returned. This time, the problem appears to have started after I swapped out the backup hard drives that I use for the Crashplan (Code42) docker 3 days ago (and as such, I am unsure whether I should be posting here or in the Crashplan support pages). To swap out the disks, I turn off the docker, unmount the disks, remove them, swap them, and restart the docker. The backup process normally carries on with the replacement disk. I normally swap out disks twice a week, so I don't understand why this hasn't happened on a previous swap.

    I'd appreciate any pointers please. If I need to post this over at the Crashplan docker support site, I will do so.

    speedy-diagnostics-20201217-1847.zip
