
Hoopster

Everything posted by Hoopster

  1. What deuxcolors says to do here is worth trying; it certainly cannot hurt anything. However, according to the post he quoted, the extra mapping of /tmp to /tmp supposedly addresses the inability to play any video with EAC3 audio, whereas you stated you cannot stream anything to any device. Your problem is not limited to content with EAC3 audio, is that correct?
  2. So far the server has never failed to power on and start the unRAID array through IPMI. I understand why your "server alive" checks were in the script, and once I move mine offsite in the future, I will likely add them as well. For now, and so I would have less to troubleshoot in the beginning, I took them out. I simply wait 3 minutes after the IPMI call and assume the server is up. It is almost always up within 2 minutes, and the script waits for three. Fingers crossed that it stays this way. Thanks for your original work on this.
  3. I have two servers called MediaNAS (main server) and BackupNAS (backup server). I do a one-way backup of all new/changed files from MediaNAS to BackupNAS. My script does the following:

     1. power on BackupNAS via IPMI
     2. set up email headers
     3. back up shares from MediaNAS to BackupNAS
     4. record everything in log files
     5. email me the backup summary and logs
     6. power down the backup server via IPMI

     You can get rid of the email stuff and the IPMI stuff (although the logs are nice even if you don't email them to yourself) and modify the rsync lines to your liking (eliminating ssh if you like) as a starting point. Here is my modified script, which eliminates all of the original poster's "check to see if the server is up" logic and backs up share to share instead of disk to disk:

     #!/bin/bash
     #description=This script backs up shares on MediaNAS to BackupNAS
     #arrayStarted=true

     echo "Starting Sync to BackupNAS"
     echo "Starting Sync $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log

     # Power on BackupNAS
     ipmitool -I lan -H 192.168.1.16 -U admin -P xxxxxx chassis power on

     # Wait for 3 minutes
     echo "Waiting for BackupNAS to power up..."
     sleep 3m
     echo "Host is up"
     sleep 10s

     # Set up email header
     echo To: [email protected] >> /boot/logs/cronlogs/BackupNAS_Summary.log
     echo From: [email protected] >> /boot/logs/cronlogs/BackupNAS_Summary.log
     echo Subject: MediaNAS to BackupNAS rsync summary >> /boot/logs/cronlogs/BackupNAS_Summary.log
     echo >> /boot/logs/cronlogs/BackupNAS_Summary.log

     # Backup Pictures share
     echo "Copying new files to Pictures share ===== $(date)"
     echo "Copying new files to Pictures share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
     echo "Copying new files to Pictures share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Pictures.log
     rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/Pictures/ [email protected]:/mnt/user/Pictures/ >> /boot/logs/cronlogs/BackupNAS_Pictures.log

     # Backup Videos share
     echo "Copying new files to Videos share ===== $(date)"
     echo "Copying new files to Videos share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
     echo "Copying new files to Videos share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Videos.log
     rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/Videos/ [email protected]:/mnt/user/Videos/ >> /boot/logs/cronlogs/BackupNAS_Videos.log

     # Backup Movies share
     echo "Copying new files to Movies share ===== $(date)"
     echo "Copying new files to Movies share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
     echo "Copying new files to Movies share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Movies.log
     rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/Movies/ [email protected]:/mnt/user/Movies/ >> /boot/logs/cronlogs/BackupNAS_Movies.log

     # Backup TVShows share
     echo "Copying new files to TVShows share ===== $(date)"
     echo "Copying new files to TVShows share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
     echo "Copying new files to TVShows share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_TVShows.log
     rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/TVShows/ [email protected]:/mnt/user/TVShows/ >> /boot/logs/cronlogs/BackupNAS_TVShows.log

     # Backup Documents share
     echo "Copying new files to Documents share ===== $(date)"
     echo "Copying new files to Documents share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
     echo "Copying new files to Documents share ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Documents.log
     rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/Documents/ [email protected]:/mnt/user/Documents/ >> /boot/logs/cronlogs/BackupNAS_Documents.log

     echo "moving to end ===== $(date)"
     echo "moving to end ===== $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log

     # Add in the summaries
     cd /boot/logs/cronlogs/
     echo ===== > Pictures.log
     echo ===== > Videos.log
     echo ===== > Movies.log
     echo ===== > TVShows.log
     echo ===== > Documents.log
     echo Pictures >> Pictures.log
     echo Videos >> Videos.log
     echo Movies >> Movies.log
     echo TVShows >> TVShows.log
     echo Documents >> Documents.log
     tac BackupNAS_Pictures.log | sed '/^Number of files: /q' | tac >> Pictures.log
     tac BackupNAS_Videos.log | sed '/^Number of files: /q' | tac >> Videos.log
     tac BackupNAS_Movies.log | sed '/^Number of files: /q' | tac >> Movies.log
     tac BackupNAS_TVShows.log | sed '/^Number of files: /q' | tac >> TVShows.log
     tac BackupNAS_Documents.log | sed '/^Number of files: /q' | tac >> Documents.log

     # Now add all the other logs to the end of this email summary
     cat BackupNAS_Summary.log Pictures.log Videos.log Movies.log TVShows.log Documents.log > allshares.log
     zip BackupNAS BackupNAS_*.log

     # Send email summary of results
     ssmtp [email protected] < /boot/logs/cronlogs/allshares.log
     cd /boot/logs/cronlogs
     mv BackupNAS.zip "$(date +%Y%m%d_%H%M)_BackupNAS.zip"
     rm *.log

     # Power off BackupNAS gracefully
     sleep 30s
     ipmitool -I lan -H 192.168.1.16 -U admin -P xxxxxxx chassis power soft
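The summary-extraction trick in the script above (tac piped to sed piped back to tac) pulls everything from the last matching line to the end of a log. A minimal self-contained sketch, using a throwaway file in /tmp as a stand-in for a real rsync log:

```shell
# Build a tiny stand-in for an rsync --stats log (file path is just for demonstration)
printf 'verbose transfer lines...\nNumber of files: 5\nTotal file size: 1.2M\nspeedup is 25.0\n' > /tmp/demo_rsync.log

# tac reverses the file, sed prints up to and including the first match then quits,
# and the second tac restores original order, leaving only the trailing stats block.
tac /tmp/demo_rsync.log | sed '/^Number of files: /q' | tac
# prints the three lines starting at "Number of files: 5"
```

This works because rsync's --stats block begins with "Number of files:" and runs to the end of the output, so grabbing the tail from the last such line isolates the summary.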
  4. Both my servers are on the same LAN. The principles and rsync commands are the same regardless of where the servers are located. I did it via ssh because my servers' root logins are password protected and I wanted this all to run without user input (no password prompt). I also plan to eventually move the backup server offsite, and I want it to be secure over the Internet when that happens. If you have two always-on servers, want to run the backup manually, or have no password on the unRAID server login, it will be less complicated. Since my backup runs unattended once a week at 1am on Monday, I have all the email logic in my script as well to tell me what happened. It took me a few days to work through this as I had to learn a lot about how rsync and ssh work. Most of my issues were ssh related, so, if that does not matter to you, this is really not that difficult to script.
  5. rsync is included in unRAID. I am using rsync to backup between two unRAID servers via ssh. It's all automated and even powers the backup server on/off via IPMI when it's time for a backup. Even though my backups are between two unRAID servers, the same rsync principles/syntax should apply between Synology and unRAID. Here's the discussion that got me started (with plenty of my own comments as I worked through it).
  6. So this was an EAC3 audio fix only. Perhaps I don't have many videos with EAC3 audio.
  7. Hmm, interesting that you had to add a /tmp to /tmp mapping in addition to the /transcode to /tmp mapping. I have only the latter, with things configured as I explained previously, and transcodes are definitely happening in RAM. No additional /tmp to /tmp mapping was needed, and I am specifying /transcode as the path in Plex.
  8. My LAN is 192.168.1.x (the USG/Gateway is 192.168.1.1) and the public IP/WAN address of the USG is, of course, something very different. The 10.21.10.x network is not anything I have configured; it must have been assigned as an internal network by the Deluge install. Glad it is working for you now. Strange that you had to change the 58846 port before it worked. Not likely, but was some other docker using that port?
  9. this is over my head. can you provide more detail please? thank you!

     deuxcolors is referring to mapping a container path to a host path for transcoding purposes and then specifying the container path in the Plex server transcoder settings. /tmp is in RAM on the unRAID server. He mapped /tmp to /tmp. In my case I have /transcode mapped to /tmp, but the concept is the same. Edit the Plex docker settings and do the following:

     1. Edit the appropriate host path variable (or add a new one) for your transcode mapping.
     2. Enter the desired container path name (I used /transcode, deuxcolors used /tmp) and enter a host path of /tmp if you want to transcode to RAM.
     3. Click the Save button in Edit Configuration.
     4. Click the Apply button in the Docker Edit screen.
     5. Open the Plex docker WebUI.
     6. Click on Settings.
     7. Select Server Settings.
     8. Select Transcoder.
     9. Enter the name of the container path you specified in step 2 as the Transcoder temporary directory. deuxcolors mapped /tmp to /tmp, so his entry here is /tmp. This tells Plex to transcode videos to the /tmp directory on the host, which is in RAM. This could also be done to a directory on your cache drive by specifying the host path as something like /mnt/user/cache/plex/transcode in step 2. The important thing is that in the Plex transcoder settings you specify the name of the container path variable as the transcoder location.
     10. Click the Save Changes button.
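For reference, the mapping those GUI steps create is equivalent to a docker run volume flag. A rough sketch only; the container name and image are examples, not a statement of anyone's exact setup:

```shell
# Map host /tmp (RAM-backed on unRAID) to /transcode inside the container.
# /transcode is then entered as the Transcoder temporary directory in Plex.
docker run -d --name plex-example \
  -v /tmp:/transcode \
  linuxserver/plex
```

The -v host:container order is the part people commonly get backwards; the host side comes first.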
  10. I am using DelugeVPN and connect without issue. For comparison, here are my settings (none of which I modified; they are all the defaults). Note: Incoming Ports: "Use Random Ports" is not checked, and it is not necessary to set it to the port variables specified in the docker setup. Outgoing Ports: "Use Random Ports" is selected, but no ports are specified. You have nothing in the Network Interface field, and I suspect this is a problem. These are default docker settings; I modified nothing. I also have a Ubiquiti USG router, and it is not necessary to set any port forwarding rules for Deluge. I have set some port forwarding rules for OpenVPN and Plex, but nothing for Deluge. I don't know if this will help, but it comes from my working setup and there are several differences. Note: I do not have an IP address assigned to this docker; it is using the same IP as the unRAID server.
  11. I use ipmitool in my backup script to power on my backup server. After the backup is complete, the script calls ipmitool again to gracefully shut down the server until the next backup runs.
  12. No need to apologize, we all learn by doing. I learned this the first time I assigned an IP address to a docker and had to do some searching to find out why I could not ping or ssh to the host. FYI, in case you want to isolate your dockers on their own network and have communication between dockers (but not with the unRAID host), here is a good guide for how to do it:
  13. That's as designed. Dockers assigned their own IP address on macvlan are isolated from the host. There are several posts in the forums from bonienl and others about this. There is a way around that, but, by default, there is no communication with the host once dockers have their own IPs on macvlan.
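The usual way around the macvlan isolation mentioned above is a host-side macvlan "shim" interface with a static route to the container. This is a sketch under stated assumptions: the parent interface (br0), the shim address (192.168.1.250), and the docker's IP (192.168.1.200) are all placeholders you would replace with your own values:

```shell
# Create a macvlan interface on the host attached to the same parent link (names/IPs assumed)
ip link add shim0 link br0 type macvlan mode bridge
ip addr add 192.168.1.250/32 dev shim0
ip link set shim0 up

# Route traffic destined for the docker's IP through the shim instead of the parent,
# so host-to-container packets bypass the macvlan host-isolation rule
ip route add 192.168.1.200/32 dev shim0
```

This works because macvlan blocks traffic between the parent interface and its children, but two macvlan children on the same parent can talk to each other. The commands do not persist across a reboot unless added to a startup script.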
  14. How about IPMITool? I am loading it in my go file, but if it is in the NerdPack I can clean up the go file. I believe there are quite a few unRAID users with IPMI-capable motherboards. Here is the Slackware-specific version (1.8.13) I am running: https://slackware.pkgs.org/14.2/slackonly-x86_64/ipmitool-1.8.13-x86_64-1_slonly.txz.html And here is the latest SourceForge version (1.8.18): https://sourceforge.net/projects/ipmitool/ If this has already been considered and rejected for whatever reason, I understand and will just continue to load it in the go file.
  15. I created a test domain at noip.com and manually set it to an incorrect public IP address. When generated.conf was created, the correct public IP address was set. Restarting the container did not force an update, whether by issuing a restart command or by stopping and then starting the container. The only way I could force the IP address update was to remove the docker, delete the generated.conf file (leaving noip.conf in place), and reinstall the docker. Docker removal is necessary because a new docker of the same name and appdata path cannot be created until the prior one is deleted. This is true, of course, whether it is installed via Community Applications or the docker run command line; I did it both ways. My conclusion is that an update can be forced only by creating a new generated.conf file based on noip.conf. Unfortunately, the only way I could do that was by uninstalling the docker container, deleting the existing generated.conf file, and reinstalling the docker container.
  16. Yes. I figured no-ip wasn't giving you a lot of control through their script. However, I did notice that creating generated.conf immediately forced an update. Perhaps restarting the docker does the same. I gave up and deleted the no-ip domain that was also being updated by the other router in Mexico. I now have a different domain registered so I can be assured nothing other than my current router is attempting to update it. However, I will manually change the IP address at noip.com to something bogus, restart the no-ip docker and see if that forces an IP address update. When I get a chance to try this, I will post an update with the results.
  17. Everything functions fine with the docker when it is assigned its own IP address. In my case some call traces are generated as a result. As a test, I removed the IP address assignment on the docker last night and the call traces went away. Previously, they were occurring every 4-5 hours. However, even with the call traces, everything system wide seemed to function fine. The only observable difference so far has been the inability to ssh into the controller via its assigned IP address. I suspect proper VLAN and routing definition in the controller could resolve that if it was needed. SSH into the USG, switches, APs still worked. I will likely go back to the assigned IP address on the docker once I figure out the call traces. I don't know if others have seen this with IP addresses assigned to dockers, but, I could see no negative side effects as a result. However, it does make me a bit nervous that they are occurring.
  18. This discussion thread explains why, after assigning an IP address to the UniFi docker, there is no communication between the host (unRAID) and the docker. Apparently, it is a by-design security measure of the macvlan implementation. There may be a way around it with vLANS and static routes defined in the UniFi controller.
  19. /mnt/cache/appdata/unifi contains three folders: /data, /logs, and /run. /data would seem to be the most likely candidate for what you are looking for. /mnt/cache/appdata/unifi and /var/lib/docker/containers/{gibberish} are not identical. UniFi and other dockers and plugins are installed from the Apps (Community Applications) tab in the unRAID GUI. There may be more than one docker (as is the case with UniFi) for the same app, so pick the container you prefer. I usually stick with Linuxserver.io containers where possible. My UniFi docker is from LSIO.
  20. There is no /var/lib/unifi path. I can get to /var/lib/docker/containers. The folders under this path are long strings representing each docker container. One of those is the UniFi docker. I checked the advanced view in the UniFi docker edit page to see which string was the correct one and was able to access this folder. It is a very long string of letters and numbers; that was fun to type. In this folder I see various .json files as well as .conf files, hosts and hostname, etc. So, it is the UniFi docker folder.
  21. I tried PuTTY from both my laptop and desktop machines. Of course, they are on the same LAN and subnet as the unRAID/UniFi docker. Both failed. If I turn off WiFi on my iPhone and try Cloud Access to the UniFi controller via the IP address assigned to the UniFi docker, it works. I know that is not an SSH test, but I was curious, and it does work.
  22. Now that you mention it, I had never tried it. SSH from the unRAID web terminal results in a "no route to host" error and PuTTY gets a "connection refused" error. So, no, I cannot SSH into the controller app. There is more tweaking to be done.
  23. One more refinement was necessary. Everything worked perfectly until I rebooted my main server. It is the server from which I am copying new files to the backup server. Generally, it is running 24x7 but, I upgraded it to 6.4.0 stable which required a few reboots to upgrade and to resolve a couple of minor issues. After reboot, the rsync operations failed. Although there was never a prompt for a password, my backup server was an unknown host. After confirming the host, a new known_hosts file was created in /root/.ssh which I copied to /boot/config/ssh. I then modified the go file and added an additional line to those documented in ken-ji's post which copies known_hosts from /boot/config/ssh to /root/.ssh on reboot. This solved the problem and the rsync commands function properly after a main server reboot.
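The go-file refinement described above amounts to restoring the persistent SSH files from flash at every boot. A sketch of what such go-file lines could look like; this follows the paths named in the post, but the exact set of files copied in ken-ji's original instructions may differ:

```shell
# Appended to /boot/config/go: restore SSH files from flash on every boot
# (the /boot/config/ssh location is where the post says the files were staged)
mkdir -p /root/.ssh
chmod 700 /root/.ssh
cp /boot/config/ssh/known_hosts /root/.ssh/known_hosts
chmod 600 /root/.ssh/known_hosts
```

Since /root lives in RAM on unRAID and is wiped on reboot, anything under /root/.ssh must be re-created from the flash drive this way or it will be lost.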
  24. I assigned a separate IP address to my Unifi docker yesterday by editing the docker, setting network type to br0, and specifying the desired IP address. This caused the docker to reload. All ports were left at the defaults. When I accessed the WebUI, for about a minute the dashboard page showed pending on all my devices, but, it sorted itself out quickly and reactivated all devices. All I did was change the IP address in the docker and the UniFi controller took care of the rest when restarted. The docker shows up as a wired client in UniFi (on the same switch port as the unRAID server, of course) and I gave it an alias to identify it as the docker in the client list. It's all a very simple process.
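Outside the unRAID GUI, an equivalent assignment can be expressed with docker run flags. A sketch only; the container name, image, and address are illustrative placeholders, and br0 is assumed to exist as a docker network on the host as it does on unRAID:

```shell
# Attach the container to the br0 network with a fixed LAN address
docker run -d --name unifi-example \
  --network br0 --ip 192.168.1.201 \
  linuxserver/unifi
```

The --ip flag only works on user-defined networks that have a subnet configured, which is why unRAID's br0 setup handles this for you in the GUI.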
  25. I followed these instructions to the letter on my main and backup servers. After running the indicated commands, modifying the go file, and executing these commands on each server, moving files between the two servers is now possible without a password prompt. After rebooting my backup server, all the appropriate SSH files persist and the backup from main to backup server runs without a password prompt. Success! Thanks to @ken-ji and @tr0910 for this information. Combining Ken-Ji's instructions above, the intermediate tests in the original post, and tr0910's sample script, automating rsync backup via ssh works great. Now it's on to refining my backup script and automating it via the User Scripts plugin. The unRAID community, as always, comes through again.
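For anyone following along, passwordless ssh generally boils down to a key pair plus an authorized_keys entry on the remote side. A generic sketch, not ken-ji's exact steps; the /tmp file path here is a throwaway for demonstration, not the real /root/.ssh location:

```shell
# Generate an RSA key pair with no passphrase (demo path, not /root/.ssh)
ssh-keygen -t rsa -b 4096 -N "" -f /tmp/demo_id_rsa -q

# The public key is what gets appended to ~/.ssh/authorized_keys on the backup server;
# the private key is what rsync's -e "ssh -i ..." option points at.
cat /tmp/demo_id_rsa.pub
```

With the public key installed remotely, ssh (and therefore rsync over ssh) authenticates with the key and never prompts for a password.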