psm321

Posts posted by psm321

  1. 33 minutes ago, EDACerton said:

    1. The "core" of Unraid (the main array) isn't open source, and there aren't any "equals" to it in the OSS world (there are things that are similar, but nothing that has the whole feature set to my knowledge).

    That depends on what you consider "core"... The unraid kernel driver (which is what I would call "the main array") is in fact open source.  But I guess shfs is a major component for people too...

    I'm pretty concerned that there's been no answer on the privacy issues...

  2. My servers are currently on 6.7/6.8 and I'm looking to upgrade to the latest release.  However, some of the changes I've read about concern me, so I want to retain the ability to go back if I can't adjust to them.  I understand I'd have to back up my configurations and such -- I just wanted to double-check that the actual unraid driver's on-disk format/superblock isn't going to get upgraded to something the older version can't read.

  3. So I've read various pages and see that the recommended method to replace a data drive with a larger one is to simply unassign the old one, assign the new one, and let it rebuild onto it.  My question, out of curiosity, is how the filesystem gets grown (since there don't seem to be any instructions for it and people seem to say that it "just works").  I would assume that the kernel-level unraid md driver only literally rebuilds the disk image.  Does emhttpd add some magic on top of that to edit the partition table and grow the fs after the rebuild is done, or something?  I just want to understand exactly what I'm doing before I proceed (I'm a nerd!).
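
    To make the question concrete, the manual equivalent I'm picturing is something along these lines (device names are placeholders, and this is just my guess at the steps, not what emhttpd actually runs):

    # Rough sketch only -- NOT necessarily what emhttpd does.
    # Assume /dev/sdX is the rebuilt, larger data disk, mounted as XFS at /mnt/disk1.
    parted /dev/sdX resizepart 1 100%   # grow partition 1 to span the whole disk
    xfs_growfs /mnt/disk1               # grow the XFS filesystem to fill the partition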

  4. I am not by any means an expert at unraid or data recovery -- only replying because nobody has.  If somebody more experienced comes along you should probably listen to them instead of me!

     

    If you're comfortable with the shell, you could try this script:

     

    https://gist.github.com/Changaco/45f8d171027ea2655d74

     

    Note: I haven't tried it and am no expert.

     

    General data recovery rules (which you already seem familiar with, but just to reiterate): stop using (writing to) the device that the file was deleted from, and make sure to not try to recover to that device.

     

    For this script in particular, you'll also need to make sure the drive is unmounted.  Stopping the array is probably the simplest way to do that (though technically you risk things being written during the stopping process).

  5. Also, if you're like me and get the "bright idea" to give your vdisks serial numbers in the XML to distinguish them instead of using different sizes and upsetting your OCD, don't be like me and put spaces in the serial number or you'll waste a few hours figuring out that unraid doesn't like that.  Probably best to avoid other special characters too and just stick with alphanumerics.
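
    For reference, the element I mean is the <serial> child of the vdisk's <disk> definition in the VM's XML; the value here is made up:

    <!-- made-up serial: keep it plain alphanumeric, no spaces -->
    <serial>vdisk2data</serial>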

  6. Just a quick note in case somebody else gets stuck on this: I was having trouble getting my VM to see the USB stick, which was plugged into a USB3.0 port.  I had to change the emulated controller from the default ehci USB2.0 to be an xhci USB3.0 one (either nec or qemu works, though a quick google seems to show qemu is preferred) to get it to work.
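
    Concretely, in the VM's XML the USB controller ends up looking something like this (index and other attributes may differ in your template; 'nec-xhci' also works):

    <controller type='usb' index='0' model='qemu-xhci'/>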

  7. Yes, I realize now that it was a stupid thing to do, but I set md_sync_thresh and md_sync_window to 0 while trying to experiment with why a file move was slow.  But now I can't change those values or stop the parity check, because all the various waits etc. in the code are looking for pending counts to be < 0.  Just wondering if anybody has ideas for getting out of this other than doing an unclean shutdown.

  8. 8 minutes ago, CHBMB said:

     

    There's a way to do so, but it's unsupported, so once you do so, you're on your own.

     

    Scripts in this directory are run at startup

     

    So you could map like this

     

    
    -v '/mnt/cache/appdata/letsencrypt/60-customscript:/etc/cont-init.d/60-customscript'

    Scripts in that directory are run in numerical/alphabetical order.

    Perfect, thanks!

  9. On 3/22/2018 at 11:40 PM, psm321 said:

    Are there by chance any hooks in the startup of the container to run a custom user script?  I need to patch the certbot rfc2136 plugin with some hacks to get it to work with my DNS provider (I don't understand my hacks well enough to actually submit upstream).  I figure I'll probably need to build my own container layer on top, but wanted to check before going ahead with that.

    I just wanted to bump this once before working on a custom container as I'm coming up on renewal.  Sorry if I missed a reply somewhere, didn't find one in a quick search.

  10. 7 hours ago, L0rdRaiden said:

    I have the same problem, I can access with SSH but if I open the web terminal I get this error in the log

     

    Mar 30 22:27:58 MediaCenter login[30658]: ILLEGAL ROOT LOGIN on '/dev/pts/9'
    Mar 30 22:28:13 MediaCenter login[30785]: ILLEGAL ROOT LOGIN on '/dev/pts/8'

     

    Same issue in chrome and firefox, all I did between working and not working was to create a win 10 VM

    You have too many ptys open.  Apparently each qemu (VM) uses one up to provide a serial console to the VM.  You likely have some combination of 8 total VMs + web terminals + preclears running.  Other than closing some of them, the other option would be to allow more ptys to be used for a root login by appending lines like

    pts/8
    pts/9
    pts/10
    pts/11

    etc. into /etc/securetty.
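
    If you want to add a handful in one go from the console, something like this works (pick whatever range you need):

    # allow root logins on pts/8 through pts/15
    for i in $(seq 8 15); do echo "pts/$i" >> /etc/securetty; done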

     

    Removing /etc/securetty entirely also accomplishes this, but I don't claim to understand the security implications (if any) of doing this.

     

  11. 53 minutes ago, aptalca said:

     

    Please correct me if I'm wrong, but I believe that is hardcoded into the dns plugins. Not sure if the plugins provide any options to be entered into the cfg files.

     

    The documentation for certbot and the plugins is pretty awful. I had to go through the source code to figure out the wildcard options

    I'm on my phone so I didn't check all of them, but at least some of the plugins have an individual option for it.  The one I used has it and so does the nsone one.

     

    https://certbot-dns-rfc2136.readthedocs.io/en/latest/

     

    https://certbot.eff.org/docs/using.html#dns-plugins

     

    err, command-line options, not config file options afaict.  That would be too convenient :)

  12. 1 hour ago, fivestones said:

    I read something above about a problem with letsencrypt and some other newer TLD. I'm using the "im" TLD. Maybe this is part of the problem? But it works fine if I'm doing it with specific domains/subdomains, and only fails with wildcard.

     

    I exec'd into the docker and had a look at /var/log/letsencrypt/letsencrypt.log. It's pretty long and I'm not sure what I'm looking for to diagnose this. I see at the end where it lists the same incorrect TXT record being found when it does the acme-challenge. Maybe there is something in this file that would be helpful to help figure out why this is failing?

    I would personally suggest trying a longer propagation delay.  I was having similar issues with 30 seconds.  Unfortunately, AFAIK there's no container variable for this -- I ended up editing something inside the container.
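
    If it helps: for the certbot DNS plugins the delay is a command-line flag.  For the rfc2136 plugin I used, it looks like the line below; the other plugins have an analogous --dns-<plugin>-propagation-seconds option.

    --dns-rfc2136-propagation-seconds 120   # seconds to wait for DNS to propagate before validation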

  13. 4 hours ago, kaiguy said:

    Thanks for this!

     

    The only reference to port 443 I could find was:

     

    
    tcp6    0   0 :::443          :::*         LISTEN      5814/docker-proxy

    Which I believe would be the LE container. Not sure why it won't let me start the container with 443... especially since it's been working fine for months.

     

    Perhaps it's running and unRAID lost track of it?  (not sure if that's possible, I'm new to all this).

     

    docker ps | grep "0.0.0.0:443"

    should tell you which container is using it.

  14. 8 hours ago, kaiguy said:

    I'm suddenly running into a problem where it appears the LetsEncrypt container won't load because 443 is already in use. But I don't even have SSL enabled on my server, and prior to disabling it, I set the HTTPS port to 444. No changes to my server config, but I did have an unclean shutdown and a parity check is running (though that really shouldn't have any effect).

     

    
    /usr/bin/docker: Error response from daemon: driver failed programming external connectivity on endpoint letsencrypt (7c3343119f45bcf4276a0xxxxxxxxxf6791f5be978ae5): Bind for 0.0.0.0:443 failed: port is already allocated.

     

    I can't figure this one out! Any thoughts? THANKS!

    netstat -nptl

    (I believe it's in nerd tools) should show you what's listening on that port.

  15. Are there by chance any hooks in the startup of the container to run a custom user script?  I need to patch the certbot rfc2136 plugin with some hacks to get it to work with my DNS provider (I don't understand my hacks well enough to actually submit upstream).  I figure I'll probably need to build my own container layer on top, but wanted to check before going ahead with that.

  16. 58 minutes ago, smdion said:

    Check again in the optional settings: 

     

    https://hub.docker.com/r/linuxserver/letsencrypt/

     

    If its not in your template, you can manually add variables in unRAID.

    I had in fact added STAGING myself after reading the Docker Hub docs (it wasn't in the template, including under additional settings).  I was reporting that that variable no longer works: --server was added to the certbot command line inside the container a few days ago to support ACMEv2, and --staging (which STAGING sets) does not work with --server.

  17. Not sure if this is on-topic here since STAGING isn't exposed in the unraid template, but the --staging and --server parameters to certbot don't seem to work together (even when I manually edit the server URL to be the staging v2 one).  I'm working around this by removing $STGNG from the certbot line in 50-config and setting the --server URL to the v2 staging endpoint.

     

    --server value conflicts with --staging
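
    (For anyone doing the same, the v2 staging directory URL to point --server at is:)

    --server https://acme-staging-v02.api.letsencrypt.org/directory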

     

  18. Thank you for this!

     

    Could you please consider removing these lines from nginx/site-confs/default?

      error_page 403 /core/templates/403.php;
      error_page 404 /core/templates/404.php;

    These cause user creation with a weak password to fail silently (the button appears to do nothing) instead of showing an error.

     

    See:

    https://github.com/nextcloud/server/issues/3847#issuecomment-287740126

    https://github.com/nextcloud/server/pull/2004#issuecomment-291007260

     

    Thanks!