sketchy

Everything posted by sketchy

  1. I too had tried all CA with no connections, though disabling the VPN and then re-enabling it has fixed whatever the issue was...
  2. I can see the above entries in my log, so I have to assume that any issue I have is related to PIA rather than the container?
  3. I was connected to Sweden but torrents would not download. Then, I think following an update to the container, I lost the web UI. I swapped to the Czech Republic as per @rikdegraaff's suggestion above. The web UI returned after about 10-15 minutes. There were a lot of entries similar to the below in the log, and eventually I got connected, I'd guess after 2-3 cycles through the 12 connection attempts: 2020-08-25 17:21:54,237 DEBG 'start-script' stdout output: [warn] Exit code '52' from curl != 0 or no response body received [info] 12 retries left [info] Retrying in 10 secs... I swapped the Czech endpoint for Berlin and then for Toronto, both PIA port forwarding endpoints. With these endpoints the VPN established and the web UI started far quicker. However, my torrent for Ubuntu 20.04 appears idle and does not download. Any ideas?
  4. Aha, good spot @itimpi. That may be what the parsing error was complaining about. I'll put another asterisk on the end for the 'day of week' requirement and see how it likes that! Edit: looks like that cleared the issue (see the cron sketch after this list). Cheers @itimpi
  5. Looking for some advice on a script I've added (which I believe runs successfully), as I'm unsure where the parsing issue is. Seeing a lot of the below in syslog: Aug 23 16:48:01 Tower crond[1440]: failed parsing crontab for user root: /usr/local/emhttp/plugins/user.scripts/startCustom.php /boot/config/plugins/user.scripts/scripts/copy_pibackup_to_Backup_share/script > /dev/null 2>&1 This is the script: #!/bin/bash /usr/bin/rsync -aPX /mnt/cache/pibackup.img /mnt/user/Backup/ Name of user script, description and custom cron: Cheers
  6. Wanted to thank @ken-ji, this method works nicely. I'm sure anyone wanting to follow your instructions will know this already, but I thought I'd just point out that "* modify /boot/config/sshd_config to set the following line" should read "* modify /boot/config/ssh/sshd_config to set the following line", I think?
  7. Perfect explanation, thank you @trurl, I understand now.
  8. I think the reason I chose the approach I did was because I assumed that, after shrinking the array using the "Remove Drives Then Rebuild Parity" method, I'd need to rebuild parity onto the existing parity disk rather than using the freed-up disk. If I've now understood correctly, I can free up a disk and use that disk to rebuild parity under one 'New Config'. Excellent.
  9. Ahh, I see, I think. Because I had to rebuild parity anyway, it would've been quicker to use the "Remove Drives Then Rebuild Parity" method, rather than clearing the drive before removing it and then rebuilding parity (as I did).
  10. I took advantage of not needing to rebuild parity during the process of shrinking the array. Since I am repurposing that freed-up data disk as a replacement for the failing parity disk, I am forced to rebuild parity anyway?
  11. In case this helps others: I received guidance in another thread to free up a data disk and use it in place of the failing parity disk.
  12. I have successfully removed the data disk from the config and reassigned that disk as the replacement for the failing parity disk. The 'Parity sync / Data rebuild' process has now started. I think it's brilliant that a break-fix procedure like this can be completed without needing to physically go to the server, or even reboot! Admittedly I will need to take it down to remove the failing disk. Either way, a true testament to the capability of Unraid. Thanks for the guidance. I'll update once the parity has been rebuilt and mark this as solved.
  13. I should've read the script; I just found the line: echo -e "\nA message saying \"error writing ... no space left\" is expected, NOT an error.\n"
  14. dd command complete: dd bs=1M if=/dev/zero of=/dev/md6 status=progress 3000578867200 bytes (3.0 TB, 2.7 TiB) copied, 75736 s, 39.6 MB/s dd: error writing '/dev/md6': No space left on device 2861589+0 records in 2861588+0 records out 3000592928768 bytes (3.0 TB, 2.7 TiB) copied, 75736.5 s, 39.6 MB/s Before I stop the array and set the New Config, is the 'No space left on device' an acceptable response? It shows as 3TB copied, so I'm guessing this is fine and no further records can be written (see the dd sketch after this list).
  15. Apologies. In future I will do just that. I decided to use the 'Clear Drive Then Remove Drive' method. 'dd' command in progress, updates later. Thanks for the help so far, guys.
  16. Thanks. Apologies for my garbled/over-complicated post!
  17. Thanks for the reply @johnnie.black @trurl. Another thread indicates that the parity disk is failing, so I agree it would be pointless to rebuild parity before replacing it. I need to free up a data disk in my array in order to repurpose it as a parity drive. The final steps of that shrinking-array procedure (via "Remove Drives Then Rebuild Parity") mean my parity is not accurate and needs rebuilding. Can I swap out the data disk I freed up and use it as the new parity drive before parity is rebuilt? Specifically, after step 7, shut the server down, swap out the old parity disk for the new one, then power back up and rebuild the parity of the new configuration (one less drive) onto the new parity disk? This kills two birds with one stone: the necessary parity rebuild of the 'remove drives then rebuild' procedure, and the physical replacement of my failing parity drive. I'm just a little unsure of how acceptable this would be, as I am deviating from the recommended procedure. Alternatively, as I'm only removing one drive from the array, I could use the 'Clear Drive Then Remove Drive' method, maintaining the current parity, and then, following that procedure, replace the parity drive with the drive I freed up. @johnnie.black Since your reply, I'm at a point now where I have finished mirroring (rsync) the contents of my 3TB disk to a 2TB disk. As far as I know, the parity drive has not failed during this process. This 3TB data drive is the one I plan to repurpose as the parity drive.
  18. I have a failing parity drive. Can I free up a data disk using the "Remove Drives Then Rebuild Parity" method, but when rebuilding the parity for this new configuration (one less drive), use the old data drive I freed up to replace the failing parity drive? It's probably not advisable to rebuild parity onto my failing parity drive and then replace the parity drive? The drive I plan to free up is the same size as my existing parity drive, 3TB. I am currently moving the data off of this 3TB drive onto a smaller 2TB drive, so I could also do the "Clear Drive Then Remove Drive" method, preserving parity, and then perform the parity drive replacement. Not sure which is the best approach, really?
  19. Thank you @johnnie.black for checking the SMART report. I will replace this disk.
  20. My parity disk is showing as having read errors in Fix Common Problems, and errors under the Main page. In the system log: Aug 13 15:34:43 Tower kernel: print_req_error: I/O error, dev sdg, sector 4983868568 Aug 13 15:34:43 Tower kernel: md: disk0 read error, sector=4983868504 I'd guess read errors at a couple of hundred sectors. The disk has not been disabled, and both short and extended SMART tests have passed (disk SMART report attached). I have performed two parity checks, the last one today, finding 0 errors. Attached is also the full diagnostics. Am I right in saying that these bad sectors are likely to increase in number and that I should start thinking about replacing this disk (see the smartctl sketch after this list)? tower-smart-20180813-2018.zip tower-diagnostics-20180813-2019.zip
  21. Understood! The healthy speed bump I have now will save me days. Thanks very much.
  22. Crikey, you spotted that quickly! You're right, I am using -z, force of habit. I have removed that option from the command and now see a massive improvement: the transfer is now showing high 20s to low 30s MB/s (see the rsync sketch after this list). Thank you @johnnie.black. Think I can squeeze any further improvement? tower-diagnostics-20180804-0927.zip
  23. I'm doing a large disk-to-disk rsync (running through the process of changing disk filesystem) and seeing slow write speeds of ~4 to 10 MB/s. Looking for some guidance on identifying what is causing such slow performance and whether it can be improved. For the purpose of the long rsync task I switched "Tunable (md_write_method):" to "reconstruct write". Please see diagnostics attached. Cheers tower-diagnostics-20180804-0904.zip
  24. Perfect. Makes sense, @itimpi. Thank you. Especially for the speedy reply!
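
Regarding the crontab parsing error in posts 4 and 5: a minimal sketch of what a complete cron entry looks like once the fifth 'day of week' field is in place, assuming the original custom schedule only had four fields. The schedule shown (daily at 03:00) is illustrative, not taken from the actual script settings.

    # Five fields: minute hour day-of-month month day-of-week, then the command.
    # A four-field schedule (missing day-of-week) is what crond rejects with
    # "failed parsing crontab". This example runs the backup copy daily at 03:00.
    0 3 * * * /usr/local/emhttp/plugins/user.scripts/startCustom.php /boot/config/plugins/user.scripts/scripts/copy_pibackup_to_Backup_share/script > /dev/null 2>&1

In the User Scripts plugin only the five-field schedule normally goes in the custom cron box; the plugin assembles the full crontab line itself.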
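
For the 'No space left on device' question in post 14: a short sketch of why that message is expected when zeroing the whole md device, plus one way to confirm the bytes written match the device size. The device name follows the post; treat this as illustrative rather than part of the official clear-drive script.

    # dd writes zeros until the block device is full, so it always finishes with
    # "No space left on device"; that is the normal stop condition, not a failure.
    dd bs=1M if=/dev/zero of=/dev/md6 status=progress

    # Optional sanity check: the device size in bytes should match the final
    # "bytes copied" figure reported by dd (3000592928768 in this case).
    blockdev --getsize64 /dev/md6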
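
For the read errors in post 20: a sketch of the smartctl commands typically used to gather the same information from the command line, assuming the parity disk is /dev/sdg as in the syslog. The attributes worth watching on a disk developing bad sectors are Reallocated_Sector_Ct, Current_Pending_Sector and Offline_Uncorrectable.

    # Full SMART report (attributes, error log, self-test history) for the parity disk.
    smartctl -a /dev/sdg

    # Start short and extended self-tests; review the results with -a afterwards.
    smartctl -t short /dev/sdg
    smartctl -t long /dev/sdg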
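
For the slow disk-to-disk rsync in posts 22 and 23: -z compresses data in transit, which only helps across a slow network link; on a local disk-to-disk copy it just burns CPU and caps throughput. A minimal sketch of the local copy without it, with hypothetical source and destination disk paths.

    # Archive mode, keep partial files and show progress, preserve extended
    # attributes; no -z for a local disk-to-disk copy. Paths are illustrative.
    rsync -aPX /mnt/disk1/ /mnt/disk2/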