Everything posted by KptnKMan

  1. Hi all, I've been trying to get a script working to manage backups and send an email with an attachment at the end. I'm using Mailgun as my email sender, and have configured the Notification settings for it in the GUI. After much googling and reading, I see that there are 2 (kind of 3) methods:

     - use ssmtp (which invokes the sendmail emulation), e.g.: ssmtp [email protected] < /mnt/user/ftp/test_attachment.txt
     - use sendmail directly, e.g.: sendmail [email protected] < /mnt/user/ftp/test_attachment.txt
     - use the recommended notify script (without email): /usr/local/emhttp/webGui/scripts/notify -i warning -s "my subject" -d "some description"

     I am also trying to get the email subsystem of notify to work, without success:

     /usr/local/emhttp/webGui/scripts/notify smtp-init -i normal -s "Subject" -d "Something" -m "Message"

     Email notifications in the unRAID GUI work when I send a test mail there; I receive the mail in my Gmail. The problem is that:

     - I can't get notify to send any email at all. I need an example if possible.
     - I can't get ssmtp/sendmail to work, as it sends the email as root and doesn't work, just like in this old thread. I've tried configuring the revaliases, but I understand this should be avoided.

     Does anyone know the proper way to send an email using the CLI, and would you be able to provide an example? Thanks for any help.

     Edit: When I use sendmail/ssmtp, I get this response basically every time:

     [<-] 220 ak47 ESMTP ready
     [->] EHLO blaster
     [<-] 250 SMTPUTF8
     [->] AUTH LOGIN
     [<-] 334 VXNlcm5hbWU6
     [->] Ymxhc3RlckBzdHJha2VyLm1l
     [<-] 334 UGFzc3dvcmQ6
     [<-] 235 2.0.0 OK
     [->] MAIL FROM:<[email protected]>
     [<-] 250 Sender address accepted
     [->] RCPT TO:<[email protected]>
     [<-] 250 Recipient address accepted
     [->] DATA
     [<-] 354 Continue
     [->] Received: by blaster (sSMTP sendmail emulation); Wed, 11 Mar 2020 01:00:14 +0100
     [->] From: "Console and webGui login account" <[email protected]>
     [->] Date: Wed, 11 Mar 2020 01:00:14 +0100
     [->]
     [->] .
     [<-] 250 Great success
     [->] QUIT
     [<-] 221 See you later. Yours truly, Mailgun

     The email sends as `[email protected]` and has no subject or body; it goes to my spam as an empty email with the sender `Console and webGui login account <[email protected]>`. Help, anyone?
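     For anyone hitting the same empty-mail symptom: the transcript above shows that only Received:, From: and Date: headers reach Mailgun, with no To:, no Subject: and no body, which is why the message arrives blank and lands in spam. A minimal sketch of feeding ssmtp a complete message instead (the recipient, From: address and body file are placeholders; it assumes the same /etc/ssmtp/ssmtp.conf that the GUI test mail already uses):

     ```bash
     #!/bin/bash
     # Sketch: build a full message (headers, blank line, then body) and pipe it to ssmtp.
     # RECIPIENT, FROM and BODYFILE are placeholders - substitute your own values.
     RECIPIENT="you@example.com"
     FROM="unRAID server <server@example.com>"
     BODYFILE="/mnt/user/ftp/test_attachment.txt"   # plain text only; a real attachment needs MIME encoding

     {
       printf 'To: %s\n' "$RECIPIENT"
       printf 'From: %s\n' "$FROM"
       printf 'Subject: Backup report %s\n' "$(date +%F)"
       printf '\n'
       cat "$BODYFILE"
     } | ssmtp "$RECIPIENT"
     ```

     If the sender still shows up as root, sSMTP's FromLineOverride=YES option in ssmtp.conf is the usual way to let the From: header through (that is from sSMTP documentation in general, not verified on unRAID). The notify route should also work with the flags already shown in the post (-i, -s, -d, -m), with the caveat that it only emails if the email agent is enabled for that importance level under Settings > Notifications; that last detail is my reading of the settings page rather than something confirmed in this thread.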
  2. That's great advice, thanks. I'm also using the CA User Scripts plugin for this, so good to know. I've had it in my mind to sit down and get a more robust solution like this in place to replace my current simple copy. I'm no slouch at shell scripts, so I'm going to see if I can write some scripts to accomplish these tasks. Thanks again.
  3. This is amazing @Hoopster, thanks for putting this up. I've been looking at improving my backup process between servers, and integrating pfSense backups into unRAID. Along with the CA User Scripts plugin, I'm going to give this a try. I realise this is quite a while after your post; have you added anything to it since, or is it still working as-is?
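     A rough sketch of the kind of pull script this could start from, runnable as a CA User Scripts job. It assumes key-based SSH access to the firewall and that the config lives at /cf/conf/config.xml (the usual pfSense location, but worth checking on your install); the hostname and destination share are placeholders:

     ```bash
     #!/bin/bash
     # Sketch: pull the pfSense config onto an unRAID share, keeping dated copies.
     # PFSENSE_HOST and DEST are placeholders - adjust for your network and shares.
     PFSENSE_HOST="root@pfsense.lan"
     DEST="/mnt/user/backups/pfsense"

     mkdir -p "$DEST"

     # Copy the live config; /cf/conf/config.xml is pfSense's standard config path.
     scp "$PFSENSE_HOST:/cf/conf/config.xml" "$DEST/config-$(date +%F).xml" || exit 1

     # Keep the 30 newest copies and drop the rest.
     ls -1t "$DEST"/config-*.xml | tail -n +31 | xargs -r rm --
     ```

     Scheduling it through the plugin's custom cron option would keep the copies current without any manual steps.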
  4. That's good to know. Thanks, @johnnie.black. I was thinking I might start the copy job and it'll be done by the time I wake up tomorrow.
  5. I have a couple of questions that I'm unsure about at the moment; reading the FAQ and googling doesn't seem to have turned anything up. I'm ready to copy my data from my (erroneous) 2x 3TB disks into my array, but I'm wondering about the best way to do so. I have about 2.2TB of data from the 2x 3TB disks, and my array looks like this:

     - Do I simply copy the data over to the share /mnt/user/X and let unRAID deal with where to put the data?
     - Do I copy the data into one of the /mnt/diskX areas? Like the respective 3TB disks into /mnt/disk3 and /mnt/disk4?
     - Do I need to wait until the current reconstruction is complete before beginning to copy data back in?
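     A sketch of the kind of copy the first option would involve (writing to the user share and letting unRAID's allocation method pick the destination disk). The Unassigned Devices mount point and the share name are placeholders:

     ```bash
     # Sketch: copy from an Unassigned Devices mount into a user share.
     # /mnt/disks/old3tb_1 and the share name "data" are placeholders.
     # Mixing /mnt/diskX and /mnt/user paths for the same files in one operation
     # is generally discouraged on unRAID, so this sticks to the user share side.
     rsync -avh --progress /mnt/disks/old3tb_1/ /mnt/user/data/
     ```

     Running the same command a second time is a cheap way to confirm nothing was missed or changed mid-copy.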
  6. Thanks again for the assistance; it looks like the parity rebuild went without issues on the new drives. Today I decided I would swap in the (previously parity) 2x 6TB drives to replace the (remaining) 2x 3TB drives. I wanted to test my hot-swap capabilities, so I did this without restarting. I also wanted to use the double parity to rebuild both drives at once, reducing the total rebuild time. A few things, for others who might want to try this:

     - Stopped the array
     - Pulled the 2x 3TB drives (#3 and #4)
     - Inserted the first 6TB disk (replacing #3), and waited a few minutes for the drive to come online
     - Checked the UI, confirmed the 6TB disk, and assigned it to #3
     - Inserted the second 6TB disk (replacing #4), and waited a few minutes for the drive to come online
     - Checked the UI, confirmed the 6TB disk, and assigned it to #4
     - Started the array

     Looks like things are rebuilding properly, and my data is still accessible while it's emulated. Great.
  7. Looks like everything went well. Everything has been reassigned, I removed the disks in Step 7, and I swapped in the new 2x 8TB parity disks. I checked all my drive logs before beginning, and saw that one of my other 3TB disks is showing some strange log errors, but no write errors. I decided to remove that disk at the same time as the erroneous drive, but it means that I won't have the free space to copy everything in. At least this way I will have all new drives and the same capacity in the end. So it looks like once the parity has finished rebuilding, I'll need to swap the 2x 6TB disks in before copying my data back in.
  8. Thanks, it's great to know I'm looking at this correctly. Gonna start with this and see how I get on. Edit: I'll report back with my results.
  9. Either way, I got no help when I REALLY needed it, tbh. For me at least, this only highlights that this mailing list is unreliable. Sure, maybe it's not arriving in people's inboxes, and we can pass it off as that, but it makes me wonder how many people have sent mails to this list that never even showed up. I want to thank you for your help though, I really appreciate it.
  10. My new drives arrived (yay!). Really hoping someone has some time to maybe point me in the right direction? Am I making any mistakes here?
  11. I can see that too, but I followed the exact same process for all my mails, following the exact instructions for using the mailing list. Is it not odd that I received my own message from their mailing list (as I replied to it here), but it doesn't show up otherwise in their archives? ¯\_(ツ)_/¯
  12. Here is my own response to my second mail asking for help. No responses. Not sure where my first one is; I can't seem to find it. I followed all the bot and mailing list instructions exactly. Oh well. https://lore.kernel.org/linux-btrfs/CAMry8Zs8omAJGqyJWL=O5=pKBq5yhq1+tnKvS9OFEooZNsv-GQ@mail.gmail.com/ Looking at the archive, I can still see many unanswered mails. Edit: Anyway, I'm past that and learned my lesson. Happy to not use btrfs anymore. I do have a more current issue that I would really appreciate help with resolving, if anyone has time:
  13. Hi everyone, Firstly, apologies for the verbosity here; I want to get this correct. I'm looking for some advice as I'd like to make sure I've got this right, and would not like to mess up my array (again). In my system I have:

     - 1x 1TB cache
     - 2x 6TB parity disks (new in 2019)
     - 2x 6TB array disks (new in 2019)
     - 4x 3TB array disks (carried over from my old server)
     - a few extra Unassigned Disks

     I've currently got a couple of things happening:

     - 1 of the old 3TB drives in my array has recently started to show errors.
     - 2x new 8TB WD Red drives are arriving today for my main unRAID system, ordered to replace and upgrade.

     So I'd like to (I think this is probably the correct order to do things):

     - Remove that faulty 3TB array disk, and not replace it with anything (effectively shrinking the array).
     - Swap in the 2x 8TB disks as my parity, replacing the 2x 6TB disks there now.
     - Swap the 2x 6TB current-parity disks into my array, replacing 2 of the 3TB disks.

     I've read up on this article on removing disks and shrinking arrays: https://wiki.unraid.net/Shrink_array

     There seem to be a few options here, one of them being "Remove Drives Then Rebuild Parity". This seems interesting, as I would prefer not to rebuild parity twice if possible. I'd like to find out:

     - Would it be possible to remove the 3TB disk and replace the parity disks at the same time, in step 7?
     - Once everything is running, can I just copy the contents from the removed disk, as an Unassigned Disk, onto the newly mounted array?
     - After that, once everything settles (parity rebuilt and data copied in), would I only be left with swapping in the 2x 6TB disks as normal?

     Thanks for any advice.
  14. Thanks to everyone who helped me with this issue. After a few weeks of messing around, I eventually gave up, took the config loss, and started over with new settings. For the record, I tried to politely reach out to the btrfs mailing list multiple times, and had no response back. I wouldn't recommend trying that channel, as I subscribed and saw many people having issues with btrfs and no responses. And I mean a lot of people. So I've gone to a single 1TB NVMe cache and nightly backups, thanks for that tip. It's been working great, although I haven't tested a restore yet. Gonna do a dry run sometime soon. For the record also, my new cache is on xfs and I don't think I'll be touching btrfs again. It's far too buggy and its bugs have left me with a sour taste. A warning to future people who may stumble upon this: keep backups and avoid btrfs. A heartfelt thanks to everyone that helped me. 🙂
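     On the restore dry run mentioned above, a quick way to preview one is rsync's --dry-run mode. This is only a sketch; it assumes the nightly job drops its copies under a backups share, and both paths are placeholders:

     ```bash
     # Sketch: preview what restoring last night's appdata backup would change,
     # without writing anything. Paths are placeholders for wherever the nightly
     # job actually puts its copies.
     SRC="/mnt/user/backups/appdata/latest/"
     DST="/mnt/user/appdata/"

     # --dry-run (-n) lists the files that would be transferred or deleted.
     rsync -avh --delete --dry-run "$SRC" "$DST"
     ```

     Dropping --dry-run (with Docker stopped first) would perform the actual restore.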
  15. I only recently found out about this particular plugin; before that I had a script I made myself. Going to be using this going forward, I think. Is there an easy place to find plugins?
  16. Thanks, I'm reaching out to the mailing list to see if there's anything.
  17. Well, that sucks. I really enjoy using Unraid, but I'm getting frustrated with losing all my appdata every other month because of an update or bug. It always seems to happen at the worst moment. Anyway... Attached, with the array started; I've not formatted the first NVMe yet. Thanks for taking a look. blaster-diagnostics-20200128-0950.zip
  18. I'm in need of some serious help, if anyone has time to help me out. At this point, I've managed to get both disks installed back in the original server and put them into the cache pool. However, the cache pool seems unmountable, and I need help troubleshooting. I'm also warned that Unraid wants to format the disk that was previously removed: Will this delete all the data and permanently lose everything? Any advice on how to proceed?
  19. Thanks for the reply. The array was created in 6.7.0/6.7.1; the server is running 6.8.1 now. I saw this mentioned in another thread. Does this mean there is no recovery data? Here is a log of the commands in the FAQ:

      Linux 4.19.94-Unraid.
      root@blaster:~# mkdir /bt
      root@blaster:~# mount -o usebackuproot,ro /dev/nvme1
      nvme1      nvme1n1    nvme1n1p1
      root@blaster:~# mount -o usebackuproot,ro /dev/nvme1n1p1 /bt
      mount: /bt: wrong fs type, bad option, bad superblock on /dev/nvme1n1p1, missing codepage or helper program, or other error.
      root@blaster:~# mount -o degraded,usebackuproot,ro /dev/nvme1n1p1 /bt
      mount: /bt: wrong fs type, bad option, bad superblock on /dev/nvme1n1p1, missing codepage or helper program, or other error.
      root@blaster:~# mount -o degraded,usebackuproot,ro /dev/nvme0n1p1 /bt
      mount: /bt: wrong fs type, bad option, bad superblock on /dev/nvme0n1p1, missing codepage or helper program, or other error.
      root@blaster:~# mount -o ro,notreelog,nologreplay /dev/nvme1
      nvme1      nvme1n1    nvme1n1p1
      root@blaster:~# mount -o ro,notreelog,nologreplay /dev/nvme1n1p1 /bt
      mount: /bt: wrong fs type, bad option, bad superblock on /dev/nvme1n1p1, missing codepage or helper program, or other error.
      root@blaster:~# /dev/nvme1n1p1 /bt
      -bash: /dev/nvme1n1p1: Permission denied
      root@blaster:~# btrfs restore -v /dev/nvme1n1p1 /bt
      bad tree block 479137857536, bytenr mismatch, want=479137857536, have=0
      Couldn't setup device tree
      Could not open root, trying backup super
      bad tree block 479137857536, bytenr mismatch, want=479137857536, have=0
      Couldn't setup device tree
      Could not open root, trying backup super
      ERROR: superblock bytenr 274877906944 is larger than device size 250059317248
      Could not open root, trying backup super
      root@blaster:~# btrfs restore -vi /dev/nvme1n1p1 /bt
      bad tree block 479137857536, bytenr mismatch, want=479137857536, have=0
      Couldn't setup device tree
      Could not open root, trying backup super
      bad tree block 479137857536, bytenr mismatch, want=479137857536, have=0
      Couldn't setup device tree
      Could not open root, trying backup super
      ERROR: superblock bytenr 274877906944 is larger than device size 250059317248
      Could not open root, trying backup super
      root@blaster:~# btrfs check --repair /dev/nvme1n1p1
      enabling repair mode
      WARNING: Do not use --repair unless you are advised to do so by a developer or an experienced user, and then only after having accepted that no fsck can successfully repair all types of filesystem corruption. Eg. some software or hardware bugs can fatally damage a volume. The operation will start in 10 seconds. Use Ctrl-C to stop it. 10 9 8 7 6 5 4 3 2 1
      Starting repair.
      Opening filesystem to check...
      bad tree block 479137857536, bytenr mismatch, want=479137857536, have=0
      Couldn't setup device tree
      ERROR: cannot open file system
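      Since the errors above show both the primary superblock and the automatically tried backups failing, the remaining btrfs-progs options I'm aware of look roughly like the sketch below. This is only a sketch; I'm not certain any of these would have succeeded on this pool, and the device path is simply taken from the log above:

      ```bash
      # Sketch of further recovery attempts with btrfs-progs. Whether they help
      # depends on how badly the metadata is damaged. Everything here is read-only
      # with respect to the source device except super-recover, which rewrites a
      # bad primary superblock from a good mirror and should be tried last.

      DEV=/dev/nvme1n1p1

      # Dump all superblock copies to see whether any mirror is intact.
      btrfs inspect-internal dump-super -a "$DEV"

      # Search for older tree roots that restore can be pointed at.
      btrfs-find-root "$DEV"

      # Try restoring files using one of the root bytenrs reported above
      # (123456789 is a placeholder for a value from btrfs-find-root).
      btrfs restore -v -t 123456789 "$DEV" /bt

      # Last resort before giving up and rebuilding the pool.
      btrfs rescue super-recover -v "$DEV"
      ```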
  20. I seem to be having bad luck with Unraid. Today one of my NVMe cache disks seemingly failed, and I sourced a larger replacement SSD for it. I read the docs, shut down the array, started it without the bad cache disk, replaced the disk, added it to the cache pool in place of the missing broken disk, and waited. At that point I saw that the cache pool was empty, without any data. Not sure why this happened, so I'm assuming that my cache is broken. Great. My 'failed' NVMe seems to be mysteriously working again, and I've put it back in the server to try to recover data from it, but it appears without a filesystem. I've been trying the steps from here: Using those steps, I tried to mount the disk to copy data from it, but I get errors saying I can't mount the FS. Can anyone help me recover the files off the remaining old cache disk that I can access?
  21. At this point, I'm wondering if I should cut my losses here and just back up my USB, reformat the USB and start over as a "new server", then import my array disks. Is that possible?
  22. I'm afraid not, it's very strange. It seems like the Docker UI is somehow only half connected to the config. When I changed the port, I noticed that the Web UI link did not update; I fixed that, but it still doesn't work when pointed at the correct port. My VMs start fine, and the dockers can start, but the docker networking seems to be all weird.
  23. Switched them back to bridge, and they are still inaccessible. The new port shows up in the UI, but nothing is accessible on those ports. I emptied my browser cache, etc., which I've seen suggested in other threads. I'm starting to think that this installation is borked.
  24. The Docker log for the container is repeating:

      2019-10-12 21:19:49,920 DEBG 'start' stderr output:
      No protocol specified
      tint2: could not open display!

      2019-10-12 21:19:59,926 DEBG 'start' stdout output:
      [info] tint2 not running

      2019-10-12 21:19:59,935 DEBG 'start' stderr output:
      No protocol specified
      tint2: could not open display!

      2019-10-12 21:20:09,939 DEBG 'start' stdout output:
      [info] tint2 not running

      2019-10-12 21:20:09,946 DEBG 'start' stderr output:
      No protocol specified
      tint2: could not open display!