hawihoney

Members
  • Posts: 3,513
  • Joined
  • Last visited
  • Days Won: 7

Everything posted by hawihoney

  1. Oh, rsync runs every night - but I call it for every share I take a backup from: Pictures, Projects, Programs, Documents, ... last time I counted, 12 shares. Roughly one million files from a long digital life. Some shares are backed up once per night, some once per week, and some once per month. Is there a better way? TIA
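The per-share nightly rsync described above can be sketched as a small loop. This is a minimal sketch, not the poster's actual script: the share names and the `root@tower2` backup target are assumptions, and `DRY_RUN=1` lets you preview the commands before anything is copied.

```shell
#!/bin/bash
# Sketch of a per-share backup loop. Share names and the target host
# are assumptions -- adjust both to your setup.
# Set DRY_RUN=1 to only print the rsync commands instead of running them.
TARGET="root@tower2:/mnt/user/backup"

backup_share() {
    local share="$1"
    # -a preserves permissions and times, --delete mirrors deletions.
    local cmd=(rsync -a --delete "/mnt/user/$share/" "$TARGET/$share/")
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "${cmd[@]}"
    else
        "${cmd[@]}"
    fi
}

# Run from a nightly User Scripts entry, e.g.:
# for share in Pictures Projects Programs Documents; do backup_share "$share"; done
```

Shares with weekly or monthly cadence could simply live in separate User Scripts entries with their own cron schedules.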
  2. Thanks for your answer. Ah, I use the User Scripts plugin to rsync backups from one server to a second unRAID server in our house. Can I suppress these messages or rotate the logs on my own? The syslog is several hundred KB currently. That's not much, but it leads to over 70% fullness. Can I increase the syslog threshold? How? Thanks in advance.
  3. Here's the diagnostics. I can't see what causes the 73%. Yesterday it was at 51%. Any ideas? Thanks in advance. tower-diagnostics-20180723-1343.zip
  4. I need to restart my two servers every three weeks because the dashboard shows the log (out of 'flash / log / docker') is running full (100%). There seems to be no auto-rotation for logs or anything like that. How can I avoid the required reboots? And finally, how can I find out what's eating up the log space? Many thanks in advance.
  5. Could you please add a red star behind the label 'Watch Directory:'? This entry field must be filled; creation of the docker will fail if it is left empty. It took me some time to find that error. Thanks.
  6. For two days I have been trying to get this docker working. I always receive the same error:

     [telly] [info] booting telly v0.5
     [telly] [info] Reading m3u file /mnt/user/Daten/Plex/Test.m3u...
     [telly] [error] unable to read m3u file, error below
     [telly] [error] m3u files need to have specific formats, see the github page for more information
     [telly] [error] future versions of telly will attempt to parse this better
     panic: Unable to open playlist file
     goroutine 1 [running]:
     main.main()
     /go/src/app/main.go:193 +0x1b62

     The only thing regarding the m3u file I found on GitHub is a missing comma after the '-1', but I do have those commas in there. Looking at the source code (link below), telly should produce more specific output. Is it possible that this docker has not been updated? https://github.com/tombowditch/telly-m3u-parser/blob/master/m3u.go What's wrong with my attempts? What's the exact m3u format this docker needs? Many thanks in advance. These are the variants I tried:

     #EXTM3U
     #EXTINF:-1,Das Erste HD
     http://xxx.xxx.xxx.xxx:xxxx/folder/folder/file.ts

     #EXTM3U
     #EXTINF:1,Das Erste HD
     http://xxx.xxx.xxx.xxx:xxxx/folder/folder/file.ts

     #EXTM3U
     #EXTINF:-1,Das Erste HD
     rtp://xxx.xxx.xxx.xxx@:[email protected]:xxxx
  7. Now working. Thanks. Had to disable and re-enable the packages to fetch the new releases.
  8. What version? unRAID? 6.5.1. make throws that error simply when called without any parameters. I think make and libunistring from the DevPack don't match: make needs something from libunistring that the installed libunistring no longer provides. I'm a Linux noob, but if I had to guess I would bet that libunistring is from a newer package/release than make.
  9. Since the latest unRAID release I have had problems with DevPack modules. make, for instance, shows the following error. According to the screenshot attached to this post, make and libunistring are installed and up-to-date. What's wrong with my installation? Thanks in advance.

     root@Tower:~# make
     make: error while loading shared libraries: libunistring.so.0: cannot open shared object file: No such file or directory
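A quick way to see which shared libraries a binary fails to resolve is to filter ldd output. A small helper sketch, not specific to make:

```shell
#!/bin/bash
# Print any shared libraries a binary cannot resolve ("not found"
# entries in ldd output); empty output means all dependencies resolve.
missing_libs() {
    ldd "$1" 2>/dev/null | grep "not found"
}

# e.g.: missing_libs "$(which make)"
# Output like "libunistring.so.0 => not found" pinpoints the mismatch.
```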
  10. I need a little help with the plugin settings. Which folders/shares go where? This is my current folder structure:

      /mnt/cache/system/appdata/
      /mnt/cache/system/appdata/letsencrypt/
      /mnt/cache/system/appdata/mariadb/
      /mnt/cache/system/appdata/nextcloud/
      /mnt/cache/system/domains/
      /mnt/cache/system/ISOs/
      /mnt/cache/system/docker.img
      /mnt/cache/system/libvirt.img

      These are the possible folder/share entry fields:

      Appdata Share (Source): /mnt/cache/system/appdata/
      Destination Share: /mnt/user/Data/unRAID/Tower/appdata_backup/
      USB Backup Destination: /mnt/user/Data/unRAID/Tower/boot_backup/
      libvirt.img Destination: /mnt/user/Data/unRAID/Tower/domains_backup/

      I found my folder structure in some of the unRAID video tutorials, but I don't know how the CA Backup/Restore plugin will find the desired folders/files. There's an entry field for libvirt.img but none for docker.img. There's an entry field for the appdata (docker) folder but none for the domains (VM) folder. Are these entry fields missing? Or how do I back up my folder structure with CA Backup/Restore? Thanks in advance.
  11. I'm looking for some help with maintenance scripts that should run via cron in this NextCloud docker. E.g.: How do I set up cron jobs within this docker? I would like to export calendar ICS files to my array daily, the same for contact VCFs, and finally a mysqldump of the NextCloud DB. How can I do that? Somebody willing to share some knowledge? Help is highly appreciated.
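One alternative to cron inside the container is to drive the backups from the unRAID host (e.g. a User Scripts cron entry) with docker exec. This is only a sketch: the container name "mariadb", the database name/credentials, the domain, and the user/calendar names are all assumptions, not values from the post.

```shell
#!/bin/bash
# Sketch: nightly NextCloud backups driven from the unRAID host
# instead of cron inside the container. Container name, credentials,
# domain, and calendar name are placeholders -- substitute your own.
BACKUP_DIR=/mnt/user/Data/backups/nextcloud

backup_db() {
    mkdir -p "$BACKUP_DIR"
    # Dump the NextCloud database from inside the MariaDB container.
    docker exec mariadb mysqldump -u nextcloud -psecret nextcloud \
        > "$BACKUP_DIR/nextcloud-$(date +%F).sql"
}

export_calendar() {
    mkdir -p "$BACKUP_DIR"
    # NextCloud serves an ICS export when "?export" is appended to a
    # CalDAV calendar URL; user and calendar name are examples.
    curl -su admin:secret \
        "https://cloud.example.org/remote.php/dav/calendars/admin/personal?export" \
        -o "$BACKUP_DIR/personal-$(date +%F).ics"
}
```

Contact VCFs can be fetched the same way from the addressbooks path under remote.php/dav.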
  12. Yes, follow that doc to the letter. As I said, I had to repeat the installation because I made one wrong assumption.
  13. No, the first password for root is part of the MariaDB docker settings. Then I added a new user/password during the creation of the NextCloud database. When you start NextCloud for the first time, you add the admin user/password for NextCloud itself. So I use three passwords. It's not necessary, but I did it that way. ***Edit*** I did use that installation description. Had to do it twice because I changed my.cnf instead of custom.cnf. It was an old habit. https://blog.linuxserver.io/2017/05/10/installing-nextcloud-on-unraid-with-letsencrypt-reverse-proxy/
  14. Just a remark on your questions: I went through DuckDNS, LetsEncrypt, NextCloud, MariaDB, and Plex within the last two weeks. One thing I changed in the end was the handling of my existing files in NextCloud. I created the NextCloud user share and pointed /data to it in the docker settings, as written in your post. But I also added the folder where all my existing data was stored as an additional path. I experienced lots of permission annoyances because I have a ton of existing scripts and tools that I use from the command line or Windows AND NextCloud. So finally I added my existing data as "External Storage" and use it as such in NextCloud. That way I can go my old way and the new NextCloud way. In the end I'm newly impressed by unRAID and its capabilities - and that after 10 years of usage (first unRAID server bought in 2008).
  15. Could it be that easy? Wow, worked immediately, out of the box. I have plex1.t***.duckdns.org and plex2.t***.duckdns.org now. Thanks a million. One last question - more Plex related: If I remove the port forwarding of 3240x from my router, Plex tells me about a missing direct connection. I mean, what is that 3240x port used for if the connection works over 443? This one puzzles me a bit.
  16. Two unRAID servers, a Plex docker on each machine, DuckDNS and LetsEncrypt on the first machine - how do I do that? Below is my current configuration. Because I can open ports 80/443 to one single machine only, I create redirections in the nginx default conf. My questions:

      - Is this OK/safe, or is there a better way?
      - Plex on the second machine reports indirect connections only. Is there a way around that?
      - Please have a look at my proxy_pass settings. I use https there. Is this OK?

      Many thanks in advance.

      Router:
      port 80 (extern) --> port 81 (intern)
      port 443 (extern) --> port 444 (intern)

      DuckDNS subdomains:
      t***1.duckdns.org
      t***2.duckdns.org

      DuckDNS container (on first unRAID machine):
      SUBDOMAINS: t***1,t***2

      LetsEncrypt container (on first unRAID machine):
      Email: h***[email protected]
      Domainname: duckdns.org
      Subdomain(s): t***1
      Only subdomains: true

      Plex network settings (on first machine):
      External URL: https://t***1.duckdns.org/plex01/ --> working perfectly

      Plex network settings (on second machine):
      External URL: https://t***1.duckdns.org/plex02/ --> working indirectly

      nginx/site-confs/default (first machine is 192.168.178.35, second machine is 192.168.178.34):
upstream backend {
    server 192.168.178.35:19999;
    keepalive 64;
}

server {
    listen 443 ssl default_server;
    listen 80 default_server;

    root /config/www;
    index index.html index.htm index.php;
    server_name _;

    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
    ssl_dhparam /config/nginx/dhparams.pem;
    ssl_ciphers '***';
    ssl_prefer_server_ciphers on;

    client_max_body_size 0;

    location = / {
        return 301 /;
    }

    location /web {
        # serve the CSS code
        proxy_pass https://192.168.178.35:32400;
    }

    location /plex01 {
        # proxy request to plex server
        auth_basic "Restricted";
        auth_basic_user_file /config/nginx/.htpasswd;
        include /config/nginx/proxy.conf;
        proxy_pass https://192.168.178.35:32400/web;
    }

    location /plex02 {
        # proxy request to plex server
        auth_basic "Restricted";
        auth_basic_user_file /config/nginx/.htpasswd;
        include /config/nginx/proxy.conf;
        proxy_pass https://192.168.178.34:32400/web;
    }

    location ~ /netdata/(?<ndpath>.*) {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend/$ndpath$is_args$args;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }
}
  17. I'm not sure if my failing drive was a parity drive before, but since its SMART values are OK, I bet it was. I don't swap data drives if SMART values are OK, but I do swap parity drives with perfect SMART values if I need a bigger one.
  18. Yes, wipefs did the trick:

      - Double check that the drive is not part of the array.
      - Double check the correct device name.
      - Issue "wipefs /dev/xxx" (replace xxx with the correct device) without any additional parameters to check the current filesystem. This step is not necessary; it's just to verify that it reports a non-unRAID filesystem.
      - Finally, "wipefs --all /dev/xxx" wipes the disk. It takes a second or so.

      After that I could attach the disk to UD immediately.
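The steps above can be wrapped in a single helper; it works on a device node or, for a harmless dry run, on a plain image file. The function name is just an illustration.

```shell
#!/bin/bash
# The wipefs steps above as one helper. Pass the whole device
# (e.g. /dev/sdg), never a partition, and triple-check the name first!
wipe_disk() {
    local dev="$1"
    wipefs "$dev"           # inspect: list current filesystem signatures
    wipefs --all "$dev"     # wipe: erase all signatures (takes a second)
}
```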
  19. I tried this twice now. It doesn't work for me. After a reboot, when starting the first rsync between both machines, I'm always greeted by this message:

      root@Tower2:~# rsync -avPX --delete-during --protect-args -e ssh "[email protected]:/mnt/user/xyz/" /mnt/user/xyz/
      The authenticity of host '192.168.178.35 (192.168.178.35)' can't be established.
      ECDSA key fingerprint is SHA256:SSFXwWXedKMmxBao0vvheifFEfIoiiQl5rtfPuZ8x3w.
      Are you sure you want to continue connecting (yes/no)?

      These are the steps I did, twice:

      On Tower:
      # ssh-keygen -t rsa -b 2048 -f /boot/config/ssh/tower_key
      (press Enter twice for an empty passphrase)

      On Tower2:
      # ssh-keygen -t rsa -b 2048 -f /boot/config/ssh/tower2_key
      (press Enter twice for an empty passphrase)

      On Tower:
      # scp /boot/config/ssh/tower_key.pub root@tower2:/boot/config/ssh

      On Tower2:
      # scp /boot/config/ssh/tower2_key.pub root@tower:/boot/config/ssh

      On Tower:
      mkdir -p /root/.ssh
      cp /boot/config/ssh/tower_key /root/.ssh/id_rsa
      cat /boot/config/ssh/tower2_key.pub > /root/.ssh/authorized_keys
      chmod g-rwx,o-rwx -R /root/.ssh

      On Tower2:
      mkdir -p /root/.ssh
      cp /boot/config/ssh/tower2_key /root/.ssh/id_rsa
      cat /boot/config/ssh/tower_key.pub > /root/.ssh/authorized_keys
      chmod g-rwx,o-rwx -R /root/.ssh

      The same two blocks are added to /boot/config/go on Tower and Tower2 respectively, so the keys are restored after a reboot.

      I really would like to get this running. Many thanks in advance.
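For what it's worth, the "authenticity of host" prompt is not about the key pair at all: it means the peer is missing from /root/.ssh/known_hosts, which lives in RAM on unRAID and is lost on reboot. A hedged sketch along the lines of the setup above (the flash path follows the post's scheme; the IP is the peer machine's):

```shell
#!/bin/bash
# Sketch: record the peer's SSH host key once and restore it from the
# flash drive on later boots, so the first rsync never prompts.
# Path and IP are assumptions following the scheme in the post.
seed_known_hosts() {
    local host="$1"
    mkdir -p /root/.ssh
    if [ -f /boot/config/ssh/known_hosts ]; then
        # Restore the copy saved on flash (call this from the go file).
        cp /boot/config/ssh/known_hosts /root/.ssh/known_hosts
    else
        # First run: record the host key non-interactively, then
        # persist it on the flash drive.
        ssh-keyscan "$host" >> /root/.ssh/known_hosts
        cp /root/.ssh/known_hosts /boot/config/ssh/known_hosts
    fi
}

# e.g. on Tower2: seed_known_hosts 192.168.178.35
```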
  20. I don't know if this is a Docker or an Unassigned Devices thing. I receive "permission denied" for all sorts of files and directories. If I look at directories and files from within the docker, user "abc" and group "abc" have permission. Outside the docker container it's "nobody:users". The docker container is running as "99:100" and all directories but "/config" are set to "Slave/RW". What else do I need to do? Here's the docker view vs. the unRAID view:

      root@Tower:/mnt/disks/UA01/nzbget# docker exec -it nzbget ls -lisa /nzbget/intermediate
      total 12
      4297828992  0 drwxrwxrwx 3 abc abc   73 Mar 26 10:38 .
              99  0 drwxrwxrwx 7 abc abc  124 Mar 27 07:59 ..
      2147483745 12 drwxrwxrwx 2 abc abc 8192 Mar 26 11:03 'Test.#17489'

      root@Tower:/mnt/disks/UA01/nzbget# ls -lisa /mnt/disks/UA01/nzbget/intermediate/
      total 12
      4297828992  0 drwxrwxrwx 3 nobody users   73 Mar 26 10:38 ./
              99  0 drwxrwxrwx 7 nobody users  124 Mar 27 07:59 ../
      2147483745 12 drwxrwxrwx 2 nobody users 8192 Mar 26 11:03 Test.#17489/

      Attached you'll find the important settings. If you need diagnostics, I can attach them too. Any help is highly appreciated.
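One common remedy for mismatches like the one above is to align host-side ownership with the container's 99:100 (nobody:users) mapping. A sketch only, assuming the path from the post and that the data belongs exclusively to this container:

```shell
#!/bin/bash
# Sketch: give the container's 99:100 user full access on the host side.
# Run only on data this container owns; needs root for the chown.
fix_perms() {
    local dir="$1"
    chown -R 99:100 "$dir" 2>/dev/null   # nobody:users; ignore errors if not root
    chmod -R u+rwX,g+rwX "$dir"          # rw on files, rwx on directories
}

# e.g.: fix_perms /mnt/disks/UA01/nzbget
```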
  21. Thanks for posting. I can't see custom br0 on my docker details - it's custom eth0 only. What do I need to do?
  22. This is the output of wipefs. DOS? On a 3TB drive? The last time I used DOS was with 1.44 MB diskettes. Really weird. Linux 4.14.26-unRAID.

      root@Tower:~# wipefs /dev/sdg
      DEVICE OFFSET TYPE UUID LABEL
      sdg    0x1fe  dos
      root@Tower:~#
  23. This drive was part of an unRAID machine. I replaced it with a 6TB drive. I don't know what filesystem it had (ReiserFS, XFS, BTRFS, or parity). Do I issue wipefs for the partition (sdg1) or the drive (sdg)? Thanks for helping.