psm321

Everything posted by psm321

  1. That depends on what you consider "core"... The unraid kernel driver (which is what I would call "the main array") is in fact open source. But I guess shfs is a major component for people too... I'm pretty concerned that there's been no answer on the privacy issues...
  2. Thanks! Appreciate the heads-up on cache pools too!
  3. My servers are currently on 6.7/6.8 and I'm looking to upgrade to latest. However, some of the changes I've read about concern me, so I want to retain the ability to go back if I cannot adjust to them. I understand I'd have to back up my configurations and such -- just wanted to double-check that the actual unraid driver on-disk format/superblock isn't going to get upgraded to something the older version can't read.
  4. Did you ever figure out where to find raid6_gen_syndrome?
  5. Thanks! I see that it does xfs_growfs on every disk it mounts, and runs sgdisk to create a new partition table on some subset of disks -- most likely newly-added ones. A rough sketch of that sequence is below.
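     (Disk and mount names here are just examples, and the exact sgdisk flags are my guess at what emhttpd does rather than something I pulled out of it:)
        sgdisk -o -n 1:32K:0 /dev/sdX   # recreate the partition table with one partition spanning the whole (now larger) disk
        xfs_growfs /mnt/disk1           # grow the mounted XFS filesystem to fill the partition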
  6. So I've read various pages and see that the recommended method to replace a data drive with a larger one is to simply unassign the old one, assign the new one, and let it rebuild onto it. My question, out of curiosity, is how the filesystem is grown (since there don't seem to be any instructions for it and people seem to say that it "just works"). I would assume that the kernel-level unraid md driver only literally rebuilds the disk image. Does emhttpd throw some magic on top of that that edits the partition table and grows the fs after the rebuild is done or something? Just want to understand exactly what I'm doing before I proceed (I'm a nerd!)
  7. Ok, thanks to some fun kernel hackery, I got myself out of the situation so no longer urgently need an answer. Still curious if there was a different way to go about it though.
  8. I am not by any means an expert at unraid or data recovery -- only replying because nobody has. If somebody more experienced comes along, you should probably listen to them instead of me! If you're comfortable with the shell, you could try this script: https://gist.github.com/Changaco/45f8d171027ea2655d74 Note: I haven't tried it and am no expert. General data recovery rules (which you already seem familiar with, but just to reiterate): stop using (writing to) the device that the file was deleted from, and make sure not to try to recover to that device. For this script in particular, you'll also need to make sure the drive is unmounted. Stopping the array is probably the simplest way to do that (though technically you risk things being written during the stopping process).
  9. Also, if you're like me and get the "bright idea" to give your vdisks serial numbers in the XML to distinguish them instead of using different sizes and upsetting your OCD, don't be like me and put spaces in the serial number or you'll waste a few hours figuring out that unraid doesn't like that. Probably best to avoid other special characters too and just stick with alphanumerics.
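     For reference, the serial lives inside the <disk> element of the VM's libvirt XML, something like this ("vdisk1" is just an example value -- stick to plain alphanumerics):
        <disk type='file' device='disk'>
          ...
          <serial>vdisk1</serial>
        </disk>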
  10. Just a quick note in case somebody else gets stuck on this: I was having trouble getting my VM to see the USB stick, which was plugged into a USB 3.0 port. I had to change the emulated controller from the default ehci (USB 2.0) one to an xhci (USB 3.0) one (either nec or qemu works, though a quick google seems to show qemu is preferred) to get it to work. The XML change is sketched below.
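     (A sketch of the controller swap in the VM's XML; the index and ports values are my assumption and may differ on your setup:)
        <!-- before: the default emulated USB 2.0 controller -->
        <controller type='usb' index='0' model='ehci'/>
        <!-- after: USB 3.0 ('nec-xhci' also works) -->
        <controller type='usb' index='0' model='qemu-xhci' ports='15'/>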
  11. Yes, I realize now that it was stupid to do, but I set md_sync_thresh and md_sync_window to 0 while trying to experiment with why a file move was slow. But now I can't change those values or stop the parity check, because all the various waits etc. in the code are checking for pending counts to be < md_sync_thresh -- i.e. < 0, which can never happen. Just wondering if anybody has ideas for getting out of it other than doing an unclean shutdown.
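     For context, the values got set via Unraid's mdcmd interface, something like this (which is exactly what not to do):
        mdcmd set md_sync_thresh 0   # waits block until pending < thresh, and pending can never be < 0
        mdcmd set md_sync_window 0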
  12. I just wanted to bump this once before working on a custom container, as I'm coming up on renewal. Sorry if I missed a reply somewhere; I didn't find one in a quick search.
  13. For anyone else reading this, I did end up ordering an ASM1142 card ( https://www.amazon.com/gp/product/B00XKEBYYE ) and it works fine natively.
  14. You have too many ptys open. Apparently each qemu (VM) uses one up to provide a serial console to the VM. You likely have some combo of 8 total VMs + web terminals + preclears running. Other than closing some of them, the other option would be to allow more ptys to be used for a root login by appending lines like pts/8, pts/9, pts/10, etc. (one per line) to /etc/securetty; a one-liner for this is below. Removing /etc/securetty entirely also accomplishes this, but I don't claim to understand the security implications (if any) of doing that.
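     Something like this appends the extra entries (the 8-15 range is arbitrary; size it to however many VMs/terminals you run):
        for n in $(seq 8 15); do echo "pts/$n" >> /etc/securetty; done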
  15. I'm on my phone so I didn't check all of them, but at least some of the plugins have an individual option for it. The one I used has it, and so does the nsone one. https://certbot-dns-rfc2136.readthedocs.io/en/latest/ https://certbot.eff.org/docs/using.html#dns-plugins Err, command-line options, not config file options AFAICT. That would be too convenient.
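     For example, the rfc2136 plugin's options on the certbot command line look like this (the credentials path, delay, and domain are placeholders):
        certbot certonly --dns-rfc2136 \
          --dns-rfc2136-credentials /path/to/rfc2136.ini \
          --dns-rfc2136-propagation-seconds 120 \
          -d example.com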
  16. I would personally suggest trying a longer propagation delay; I was having similar issues with 30 seconds. Unfortunately, AFAIK there's no container variable for this -- I ended up editing something inside the container.
  17. Perhaps it's running and unRAID lost track of it? (Not sure if that's possible; I'm new to all this.) docker ps | grep "0.0.0.0:443" should tell you which container is using it.
  18. netstat -nptl (I believe it's in nerd tools) should show you what's listening on that port.
  19. Are there by chance any hooks in the startup of the container to run a custom user script? I need to patch the certbot rfc2136 plugin with some hacks to get it to work with my DNS provider (I don't understand my hacks well enough to actually submit upstream). I figure I'll probably need to build my own container layer on top, but wanted to check before going ahead with that.
  20. I had in fact added STAGING myself after reading the dockerhub docs (it wasn't in the template, including under additional settings). I was reporting that that variable no longer works: --server was added to the certbot command line inside the container a few days ago to support ACMEv2, and --staging (which STAGING sets) does not work with --server.
  21. Not sure if this is on-topic here since STAGING isn't exposed in the unraid template, but the --staging and --server parameters to certbot don't seem to work together -- even when I manually edit the server URL to be the staging v2 one, certbot fails with "--server value conflicts with --staging". I'm working around this by removing $STGNG from the certbot line in 50-config and setting the --server URL to the staging v2 one.
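     In other words, the working invocation drops --staging entirely and passes only --server with the standard ACMEv2 staging endpoint, roughly:
        certbot ... --server https://acme-staging-v02.api.letsencrypt.org/directory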
  22. Thank you for this! Could you please consider removing these lines from nginx/site-confs/default? error_page 403 /core/templates/403.php; error_page 404 /core/templates/404.php; These cause user creation with a weak password to fail silently (the button appears to do nothing) instead of showing an error. See: https://github.com/nextcloud/server/issues/3847#issuecomment-287740126 https://github.com/nextcloud/server/pull/2004#issuecomment-291007260 Thanks!
  23. Never mind, I was right the first time. lsof shows that the preclears are in fact using up ptys.
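     (For anyone checking the same thing, this shows which processes are holding each pseudo-terminal open:)
        lsof /dev/pts/*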
  24. Thanks. It seems to be working fine so far; I'll see if it finishes. I just checked with who and they're not using up ptys, so it's not related to the web terminal problem at least.