sketchy

Members
  • Posts: 56

  1. I too had tried all CA with no connection, though disabling the VPN and then re-enabling it has fixed whatever the issue was...
  2. I can see the above entries in my log, so I have to assume that any issue I have is related to PIA rather than the container?
  3. I was connected to Sweden but torrents would not download. Then, I think following an update to the container, I lost the web UI. I swapped to the Czech Republic as per @rikdegraaff's suggestion above. The web UI returned after about 10-15 minutes. There were a lot of entries similar to the below in the log; eventually I got connected, after I'd guess 2-3 cycles through the 12 connection attempts (a sketch of this kind of retry loop follows after this list):

         2020-08-25 17:21:54,237 DEBG 'start-script' stdout output:
         [warn] Exit code '52' from curl != 0 or no response body received
         [info] 12 retries left
         [info] Retrying in 10 secs...

     I swapped the Czech endpoint for Berlin and then Toronto, both PIA port-forwarding endpoints. With these endpoints the VPN established and the web UI started far quicker. However, my torrent for Ubuntu 20.04 appears idle and does not download. Any ideas?
  4. Aha, good spot @itimpi. That may be what the parsing error was complaining about. I'll put another asterisk on the end for the 'day of week' requirement and see how it likes that (see the cron-format sketch after this list). Edit: Looks like that cleared the issue. Cheers @itimpi.
  5. Looking for some advice on a script I've added (which I believe runs successfully), as I'm unsure where the parsing issue is. I'm seeing a lot of the below in syslog:

         Aug 23 16:48:01 Tower crond[1440]: failed parsing crontab for user root: /usr/local/emhttp/plugins/user.scripts/startCustom.php /boot/config/plugins/user.scripts/scripts/copy_pibackup_to_Backup_share/script > /dev/null 2>&1

     This is the script:

         #!/bin/bash
         /usr/bin/rsync -aPX /mnt/cache/pibackup.img /mnt/user/Backup/

     Name of user script, description and custom cron: [attachment not shown]. Cheers.
  6. Wanted to thank @ken-ji: this method works nicely. I'm sure anyone wanting to follow your instructions will know this already, but I thought I'd just point out that "* modify /boot/config/sshd_config to set the following line" should read "* modify /boot/config/ssh/sshd_config to set the following line", I think?
  7. Perfect explanation, thank you @trurl. I understand now.
  8. I think the reason I chose the approach I did was because I assumed that, after shrinking the array using the "Remove Drives Then Rebuild Parity" method, I'd need to rebuild parity onto the existing parity disk rather than onto the freed-up disk. If I've now understood correctly, I can free up a disk and use that disk to rebuild parity under one 'New Config'. Excellent.
  9. Ahh, I see, I think. Because I had to rebuild parity anyway, it would've been quicker to use the "Remove Drives Then Rebuild Parity" method, rather than clearing the drive before removing it and then rebuilding parity (as I did).
  10. I took advantage of not needing to rebuild parity during the process of shrinking the array. Since I am repurposing that freed-up data disk as a replacement for the failing parity disk, I am forced to rebuild parity anyway?
  11. In case this helps others: I received guidance in another thread to free up a data disk and use it in place of the failing parity disk.
  12. I have successfully removed the data disk from the config and reassigned that disk as the replacement for the failing parity disk. The 'Parity sync / Data rebuild' process has now started. I think it's brilliant that a break-fix procedure like this can be completed without needing to physically go to the server, or even reboot! Admittedly, I will need to take it down to remove the failing disk. Either way, a true testament to the capability of Unraid. Thanks for the guidance. I'll update once the parity has been rebuilt and mark this solved.
  13. I should've read the script; I just found the line:

          echo -e "\nA message saying \"error writing ... no space left\" is expected, NOT an error.\n"
  14. dd command complete:

          dd bs=1M if=/dev/zero of=/dev/md6 status=progress
          3000578867200 bytes (3.0 TB, 2.7 TiB) copied, 75736 s, 39.6 MB/s
          dd: error writing '/dev/md6': No space left on device
          2861589+0 records in
          2861588+0 records out
          3000592928768 bytes (3.0 TB, 2.7 TiB) copied, 75736.5 s, 39.6 MB/s

      Before I stop the array and set the new config, is the 'No space left on device' an acceptable response? It shows 3.0 TB copied, so I'm guessing this is fine and no further records can be written (see the size-check sketch after this list).
  15. Apologies. In future I will do just that. I decided to use the clear-drive-then-remove method. The 'dd' command is in progress; updates to follow. Thanks for the help so far, guys.
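
A note on the retry behaviour quoted in post 3: curl exit code 52 means the server sent an empty reply, and the log suggests the container's start script polls an endpoint and retries a fixed number of times before giving up. The following is a minimal bash sketch of that kind of check, not the container's actual code; the URL, retry count, and delay are illustrative assumptions taken from the log output.

    #!/bin/bash
    # Hypothetical connectivity check: poll an endpoint through the VPN tunnel,
    # retrying up to 12 times with a 10-second pause, mirroring the log above.
    url="https://www.example.com"   # placeholder, not the container's real check URL
    retries=12
    while [ "$retries" -gt 0 ]; do
        curl --silent --fail --max-time 10 "$url" > /dev/null
        rc=$?
        if [ "$rc" -eq 0 ]; then
            echo "[info] endpoint reachable"
            exit 0
        fi
        retries=$((retries - 1))
        echo "[warn] Exit code '$rc' from curl != 0 or no response body received"
        echo "[info] $retries retries left"
        echo "[info] Retrying in 10 secs..."
        sleep 10
    done
    echo "[crit] endpoint unreachable, giving up"
    exit 1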
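On the crontab issue in posts 4 and 5: a cron schedule needs five time fields (minute, hour, day of month, month, day of week) before the command, so a four-field entry fails to parse, which matches the fix of appending another asterisk. A hedged illustration using the command path from the syslog line in post 5; the "0 3 * * *" schedule (03:00 daily) is an assumption, not the poster's actual setting.

    # min hour dom month dow  command
    # With only four time fields crond reports "failed parsing crontab";
    # the trailing asterisk for day-of-week is required.
    0 3 * * * /usr/local/emhttp/plugins/user.scripts/startCustom.php /boot/config/plugins/user.scripts/scripts/copy_pibackup_to_Backup_share/script > /dev/null 2>&1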
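On the dd output in post 14: 'No space left on device' is the expected end state when zeroing a whole block device, since dd simply writes until the device is full. As a sketch of how one might double-check (device name taken from the post; run as root):

    # Report the device size in bytes; it should match the total dd said it copied.
    blockdev --getsize64 /dev/md6

    # Optional full verification: cmp reads both streams byte-for-byte and prints
    # "cmp: EOF on /dev/md6" if the device contains only zero bytes.
    cmp /dev/md6 /dev/zero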