tr0910

Members
  • Posts: 1449
  • Joined
  • Last visited
Everything posted by tr0910

  1. Just remember that unRaid is not Windows. With unRaid being Linux based, you don't have the same Win32 attack surface, and you don't have a persistent boot disk. unRaid runs in RAM, so many infections require only a reboot to clean up. With Windows you are correct, the only safe thing to do is burn it. But think of it this way: (1) Just fix my computer, please, I can't afford to lose anything. (2) OK, I am willing to reformat. (3) I burnt it up to charcoal, then smashed everything with a sledgehammer. (4) Didn't just burn it; burnt the house and everything within 10 miles just to be certain. Getting people to step 2 is a big win, and removes most of the typical Windows gremlins. Step 3 is for Edward Snowden, the drug lords, and Hillary Clinton.
  2. Cloudberry is an interesting option and would handle the encryption at destination as desired. rsync is great, but the encryption requirement will be difficult to implement. I wonder how Cloudberry scales? You seem interested in this; test it out and tell us how it works.
  3. Never put your unRaid server in the DMZ. It totally depends on your password: if you had a secure password and the exposure was only a few hours, you might be OK. But the trouble is, can you now trust your server? This reminds me of the several times my daughter infected her laptop with a virus. The only safe way forward was to use the Windows disk to reformat and reinstall Windows on the laptop, and she lost everything. It was painful, but it was good medicine. Now she is very careful. Sent from my chisel, carved into granite
  4. In my case, the source data is on an XFS disk, and the destination disk is an XFS Encrypted disk connected via unassigned devices. rsync is only looking at the date, time, and file size in my example, so no issue. However, once the XFS Encrypted drive is mounted with the correct keyfile, its contents are readable by anybody at the destination. If you want your files to be unreadable at the destination, you need to make them unreadable before sending. In your example, are you sending files to, say, Google Cloud, and you don't trust Google Cloud not to snoop in your data? Me, I don't care. I own the servers on each end, and they are located in my offices at both ends. I trust myself.... You could zip up the files with a strong password at source before sending; then rsync will be sending zipped, password-protected files. Your issue will be maintaining the zip process so that if a file changes at source, your zip process knows, rezips it, and makes it ready for resend. Example: you back up the spreadsheet "myBankAccounts.xls" and it gets password protected in a zip file and sent to the destination. But later you edit this file and change your bank balances, and you need it backed up with the newly modified data. In my examples, rsync takes care of this via the "rsync -avuX" switches. The important one is "u", which tells rsync to skip any file that is already newer at the destination, so only files changed at source get resent. Crashplan had encryption during backup built in, and I ran it for a while for server-to-server backups; it didn't scale well, and many ended their use when Crashplan discontinued its Linux client.
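The tricky part described above (keeping the encrypted copies in step with edits at source) can be sketched with a staging directory and the shell's -nt (newer-than) file test. This is only an illustration: the paths are made-up demo paths, and the encrypt function here just copies the file; swap in your real gpg -c or password-protected zip command.

```shell
#!/bin/bash
# Sketch: maintain an encrypted staging tree next to the real source tree,
# re-encrypting only files whose source copy is newer. rsync then sends the
# staging tree, so the destination only ever sees encrypted files.
# The encrypt() below is a stand-in for something like:
#   gpg --batch --passphrase-file /root/.pw -c -o "$2" "$1"
set -e
src_dir=$(mktemp -d)     # stands in for your real document tree
stage_dir=$(mktemp -d)   # what rsync would actually send

# Demo data
mkdir -p "$src_dir/sub"
echo "bank balances v1" > "$src_dir/sub/myBankAccounts.xls"

encrypt() { cp -p "$1" "$2"; }   # placeholder: copy, preserving mtime

stage() {
    find "$src_dir" -type f | while read -r f; do
        out="$stage_dir/${f#"$src_dir"/}"
        mkdir -p "$(dirname "$out")"
        # (re)encrypt only when the staged copy is missing or stale
        if [ ! -e "$out" ] || [ "$f" -nt "$out" ]; then
            encrypt "$f" "$out"
        fi
    done
}

stage   # first run encrypts everything
# After editing a file at source, run stage again: only that file is redone.
# Then: rsync -avu "$stage_dir/" user@dest:backup/  sends only what changed.
```

The point of the -nt test is exactly the "rezip when the spreadsheet changes" problem: the second stage run leaves untouched files alone, so rsync sees unchanged timestamps and skips them.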
  5. We are all waiting for somebody to take our hacked-together solution, pare it down to the bare essentials, and create a better version. I agree with Hoopster, it just works. All the consumer-friendly solutions have their issues. I've added a bit more scripting to mine so that it creates nicely formatted emails summarizing the results of the backup. This is where you can spend a lot of time making the output look pretty. Below are the email output and the code that generates a backup between USA and China. It is actually amazing how fast this can transfer data. And no VPN is being used here at all; it's not required for the actual transfer.

     Subject: China Web 6 Back 0 Doc 2
     ===============================
     ##### USA WebBackups ##### Sat Feb 16 04:40:02 CST 2019
     ===============================
     receiving incremental file list
     ===============================
     ##### China Backups ##### Sat Feb 16 04:40:15 CST 2019
     ===============================
     sending incremental file list
     Number of files: 72,026 (reg: 66,554, dir: 5,472)
     Number of created files: 0
     Number of deleted files: 0
     Number of regular files transferred: 0
     Total file size: 85,547,872,771 bytes
     Total transferred file size: 0 bytes
     Literal data: 0 bytes
     Matched data: 0 bytes
     File list size: 2,490,176
     File list generation time: 9.385 seconds
     File list transfer time: 0.000 seconds
     Total bytes sent: 6,848,560
     Total bytes received: 6,619
     sent 6,848,560 bytes  received 6,619 bytes  95,876.63 bytes/sec
     total size is 85,547,872,771  speedup is 12,479.31
     ===============================
     ##### China Documents ##### Sat Feb 16 04:41:26 CST 2019
     ===============================
     sending incremental file list
     (List of files transferred shows up here. Removed for security reasons.)
     Number of files: 81,837 (reg: 74,447, dir: 7,390)
     Number of created files: 2 (reg: 2)
     Number of deleted files: 0
     Number of regular files transferred: 2
     Total file size: 707,160,237,755 bytes
     Total transferred file size: 12,445 bytes
     Literal data: 12,445 bytes
     Matched data: 0 bytes
     File list size: 851,903
     File list generation time: 0.004 seconds
     File list transfer time: 0.000 seconds
     Total bytes sent: 7,629,047
     Total bytes received: 8,864
     sent 7,629,047 bytes  received 8,864 bytes  116,609.33 bytes/sec
     total size is 707,160,237,755  speedup is 92,585.56
     ===============================
     ##### Finished ##### Sat Feb 16 04:42:31 CST 2019
     ===============================

     #!/bin/bash
     # This creates a backup of documents from China to USA, and sends web backup files from USA to China.
     # A summary email is sent with a header listing backup activity (number of files sent each way).
     #
     # Note: IP addresses are hard coded as duckdns was having problems from China.
     echo "Starting Sync between servers USA and China"

     # Set up email header
     echo To: [email protected] > /tmp/ChinaS1summary.log
     echo From: [email protected] >> /tmp/ChinaS1summary.log
     echo Subject: Backup rsync summary >> /tmp/ChinaS1summary.log
     echo >> /tmp/ChinaS1summary.log

     # Backup Disk 1, getting files from yesterday
     echo "##### USA WebBackups ##### `date`"
     echo "===============================" >> /tmp/ChinaS1summary.log
     echo "##### USA WebBackups ##### `date`" >> /tmp/ChinaS1summary.log
     echo "===============================" >> /tmp/ChinaS1summary.log
     dte=$(date -d "yesterday 13:00 " '+%Y-%m-%d')
     src="[email protected]:/mnt/disk1/downloads/ftp_dump/usa_newtheme/backup_"$dte"*.gz"
     dtetoday=$(date -d "today 13:00 " '+%Y-%m-%d')
     srctoday="[email protected]:/mnt/disk1/downloads/ftp_dump/usa_newtheme/backup_"$dtetoday"*.gz"
     dest="/mnt/disks/SS_1695/downloads/ftp_dump/usa_newtheme/"
     #dest="/mnt/disks/ST4_30A6/downloads/ftp_dump/usa_newtheme/"
     rsync -avuX --stats -e "ssh -i /root/.ssh/China-rsync-key -T -o Compression=no -x -p39456" $src $dest > /tmp/Chinayesterday.log

     echo "##### China Backups ##### `date`"
     echo "===============================" >> /tmp/Chinayesterday.log
     echo "##### China Backups ##### `date`" >> /tmp/Chinayesterday.log
     echo "===============================" >> /tmp/Chinayesterday.log
     # Rsync to USA from China: backups, excluding some stuff like Software
     rsync -avuX --stats --exclude=Software --exclude=FirefoxProfiles --exclude=hp7135US -e "ssh -i /root/.ssh/China-rsync-key -T -o Compression=no -x -p39456" /mnt/user/D2/Backups/ [email protected]:/mnt/user/Backups/ > /tmp/Chinabackups.log

     echo "##### China Documents ##### `date`"
     echo "===============================" >> /tmp/Chinabackups.log
     echo "##### China Documents ##### `date`" >> /tmp/Chinabackups.log
     echo "===============================" >> /tmp/Chinabackups.log
     # China\Documents
     rsync -avuX --stats --exclude=.* -e "ssh -i /root/.ssh/China-rsync-key -T -o Compression=no -x -p39456" /mnt/user/Documents/ [email protected]:/mnt/user/Documents/ > /tmp/Chinadocuments.log

     echo "##### Finished generating summaries ##### `date`"
     echo "===============================" >> /tmp/Chinadocuments.log
     echo "##### Finished ##### `date`" >> /tmp/Chinadocuments.log
     echo "===============================" >> /tmp/Chinadocuments.log

     # Create the summaries, stripping out the detailed lists of files transferred
     cd /tmp/
     tac Chinayesterday.log | sed '/^Number of files: /q' | tac > yesterday.log
     tac Chinadocuments.log | sed '/^Number of files: /q' | tac > documents.log
     tac Chinabackups.log | sed '/^Number of files: /q' | tac > backups.log
     backups=$(sed -n '/Number of created files: /p' /tmp/backups.log | cut -d' ' -f7 | rev | cut -c 2- | rev)
     docs=$(sed -n '/Number of created files: /p' /tmp/documents.log | cut -d' ' -f7 | rev | cut -c 2- | rev)
     web=$(sed -n '/Number of created files: /p' /tmp/yesterday.log | cut -d' ' -f7 | rev | cut -c 2- | rev)
     echo "China Web "$web" Back "$backups" Doc "$docs

     # Now add all the other logs to the end of this email summary
     cat ChinaS1summary.log yesterday.log backups.log documents.log > diskallsum.log
     cat ChinaS1summary.log Chinayesterday.log Chinabackups.log Chinadocuments.log > diskall.log

     # Adjust the email subject to reflect the results of the backup run
     # (the pattern must match the Subject: line written above)
     subject=`echo "China Web "${web}" Back "${backups}" Doc "${docs}`
     sed 's@Backup rsync summary@'"$subject"'@' /tmp/diskall.log > /tmp/diskallfinal.log
     zip China diskallfinal.log

     # Send the summary email and archive the zipped log
     ssmtp [email protected] < /tmp/diskallfinal.log
     cd /tmp
     mv China.zip /boot/logs/cronlogs/"`date +%Y%m%d_%H%M`_China.zip"
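One idiom in the script above deserves a note: `tac file | sed '/pattern/q' | tac` keeps everything from the last matching line to the end of the file, which is how the per-job summaries are cut out of the full rsync logs (dropping the long file list above the stats). The `cut | rev | cut | rev` chain then pulls the created-files count out of its stats line. A standalone demonstration with a fake log:

```shell
#!/bin/bash
# tac reverses the log, sed prints up to (and including) its first match,
# which is the LAST match of the original file, then tac restores order.
set -e
cd "$(mktemp -d)"
printf '%s\n' \
  "sending incremental file list" \
  "some/transferred/file.txt" \
  "Number of files: 10 (reg: 8, dir: 2)" \
  "Number of created files: 1 (reg: 1)" \
  "Total bytes sent: 1,234" > demo.log

tac demo.log | sed '/^Number of files: /q' | tac > summary.log

# Field 7 of "Number of created files: 1 (reg: 1)" is "1)"; rev/cut/rev
# strips the trailing ")" to leave the bare count.
created=$(sed -n '/Number of created files: /p' summary.log \
          | cut -d' ' -f7 | rev | cut -c 2- | rev)
echo "created=$created"
```

Here summary.log ends up holding only the three stats lines; the transferred-file names above them are gone, which is what keeps the email header short.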
  6. I just downloaded and tested it. It seems to behave somewhat like rsnapshot. So yes, the 2nd and 3rd backups and onward are really only hard links to the unchanged files. This means that as long as you are only modifying a few small files between backups, you get the benefit of a complete versioned backup with only a tiny bit of additional disk space used for the versioning. I would not use this for backing up a complete server, but for backing up documents or source code that has significant potential for revisions, where you would like to keep a copy of all the revisions just in case. A straight rsync copy is more relevant for backing up media files that never change. For simple backups without versioning, I use the following... rsync -avu /from/some/source /to/some/dest I haven't tested your linked rsync_time_backup extensively yet, so I can't really comment further on it. Perhaps your problems are related to scale. Did you test with a small 1-3GB folder first to see how it behaves?
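These tools get their space savings from hard links: each new backup directory links unchanged files to the previous backup, so identical files share one copy on disk. rsync does this with --link-dest; the effect itself can be shown with plain GNU cp -al (a sketch of the mechanism, not of any particular tool's internals):

```shell
#!/bin/bash
# backup.1 is a real copy; backup.2 hard-links everything from backup.1,
# so unchanged files share an inode and cost almost no extra space.
# rsync --link-dest automates this: unchanged files get linked against the
# previous backup, changed files get transferred fresh.
set -e
cd "$(mktemp -d)"
mkdir -p src backup.1
echo "v1" > src/notes.txt
cp -a src/. backup.1/      # first "backup": a full copy
cp -al backup.1 backup.2   # second "backup": hard links only
stat -c '%h' backup.2/notes.txt   # link count is 2: shared with backup.1
```

Deleting backup.1 later does not harm backup.2; the file's data survives as long as any one backup still links to it, which is what makes pruning old versions safe.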
  7. Check your backup unRaid flash drive for the pro.key file. This is all you need from it. Save that file in a safe place, then install a fresh unRaid on that 2nd flash. Depending on the age of your backup of the current unRaid flash drive, you can use parts of it to recover your server. Do you have a printout of the unRaid array screen listing your drives and which ones are parity, cache, data, etc.? Sent from my chisel, carved into granite
  8. What you are trying to accomplish is pushing the borders of the normal expertise you will find here. Because of that, you will find that not all of your questions will be answered. I have one server with dual Intel Xeon 2670s and 96GB RAM. This server works best for flexible VM work as you are proposing. I have 5 Win 10 VMs on this server and create and destroy many more on an ad hoc basis. But I don't pass through any video. I find passthrough really isn't flexible for my needs; requiring 1 video card for each VM just kills it for me. I find Microsoft RDP best for connecting to these VMs, in spite of what grid runner suggests. This is OK for normal browsing and office work. Try it and see if it works for you. The beauty is that dedicated video cabling is not required, and dedicated GPUs are not required. The downside is that video is limited for gaming, streaming and other GPU intensive stuff. Basically, just follow grid runner's first video about Windows 10 VMs and ignore video 2. Then connect to the VM with Windows RDP from a laptop and test it out. This is flexible VM at its best. Sent from my chisel, carved into granite
  9. I started using Resilio but after a time just got rsync working. Resilio would sometimes work and sometimes not. No experience with the others. Sent from my chisel, carved into granite
  10. Yes, get ready for a steep learning curve. The unRaid experts are quite willing to help in their area of expertise, and this is quite deep in the area of local data hoarding. However, when venturing beyond local operations, expect to do some pioneering on your own. I too would love to have snapshots working over large distances, but I already have something working. I presently use rsync over ssh to back up servers from the USA to other countries, including behind the Great Firewall of China. This testifies to the resilience of rsync over ssh. It just works. See
  11. Interesting, have you used it this way? International data connections are hit and miss; I wonder how resilient this might be? What if I only want to snapshot /mnt/disk1/sometopfolder/somesubfolder? Is that OK? Not all data deserves this level of protection; I would only snapshot a very small fraction of my data.
  12. @johnnie.black That link's Method 2 suggests both source and destination need to be on the same volume for snapshots. This won't work across the world, will it? Method 1 uses cp. Is there any way this can be accomplished with rsync? It seems more resilient to bad internet connections. I want to snapshot between USA and Asia.
  13. @johnnie.black I would love to try this across the world sometime, totally different continents. I have rsync backups over ssh working fine, but some things would be better snapshotted. I used to have rsnapshot on old RFS drives using code from https://rsnapshot.org but that was in the old v5 days. How is btrfs different? Some of my disks are still XFS and some are BTRFS. Do I need both ends of the snapshot converted to BTRFS in order to do this?
  14. Peter's Open-VPN plugin has a known problem with 6.7. Nothing to do but wait for a 6.7-compatible update, as some legacy crypto code is no longer included in 6.7.
  15. We await your fix at your convenience. The docker is OK, but Peter's plugin is better; his works even if the array is stopped. If Peter's Open-VPN plugin is important to you, wait a bit before installing 6.7. It will get fixed.
  16. I was just about to reboot my server after upgrading to 6.7rc2 when I saw this. OpenVPN is required for me. I will postpone this reboot now, possibly for a long time........
  17. Right now encryption uses only one key for the entire array, as well as for encrypted unassigned devices. This is painful, as you cannot plug in an encrypted unassigned device with a different key. The limiting factor is that the unRaid GUI only supports one key for the entire array, including all plugged-in unassigned devices. The underlying technology is well able to support multiple keys, but the GUI needs to add support for this. Sent from my chisel, carved into granite
  18. As long as it is a horrible image of your cat, you are good. Just don't use some cute image that has been shared a million times. Unraid will use the first 8MB of the file and ignore the rest. Sent from my chisel, carved into granite
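Given that only the first 8MB of the chosen file acts as the key, two images that agree in that prefix act as the same key, no matter what comes after. A quick coreutils-only way to see this, using made-up file names as stand-ins for real image files:

```shell
#!/bin/bash
# Anything past the first 8 MiB of the keyfile is ignored, so hashing that
# prefix tells you what the file actually contributes as key material.
set -e
cd "$(mktemp -d)"
head -c $((10 * 1024 * 1024)) /dev/urandom > cat.jpg   # fake 10 MiB photo
cp cat.jpg cat_retouched.jpg
echo "metadata edited past the 8 MiB mark" >> cat_retouched.jpg

key_a=$(head -c $((8 * 1024 * 1024)) cat.jpg | md5sum | cut -d' ' -f1)
key_b=$(head -c $((8 * 1024 * 1024)) cat_retouched.jpg | md5sum | cut -d' ' -f1)
echo "$key_a"
echo "$key_b"
# The two hashes match: the appended bytes never enter the key.
```

The flip side is the same check warns you when an edit DID touch the first 8 MiB, in which case the edited image would no longer unlock your disks.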
  19. Mine does this but I use ipmi for power control Sent from my chisel, carved into granite
  20. I look forward to these kinds of tools becoming more easily accessible. However, you and I are in the minority. Unraid's biggest user group is data hoarders, who don't really care much about security. I'm glad Tom keeps the product fresh and fully patched. Never hurts to ask; if he can do it easily, it may happen. Sent from my chisel, carved into granite
  21. Did you try to create a WireGuard VM? It doesn't have to be Slackware. That should be trivial; Unraid KVM makes it easy. Adding it to the base OS is not trivial. Sent from my chisel, carved into granite
  22. If you don't get a reply in a few hours, something is blocked. Sadly, spam filters ensure email is no longer a certain way to send and receive things anymore. Do you have an alternate email you can try? Typically you should get a response in less than 1 day. Sent from my chisel, carved into granite
  23. Old slow and steady. That's what these drives are. Totally reliable but very slow. Sent from my chisel, carved into granite
  24. If you never tested the box when you put it away, it is hard to know; it might have been DOA. But it sounds like it might be worth your while to pull the motherboard and check for loose screws and other nasty things that could be causing a short. That, and power connections. Those old industrial systems are tough as nails and don't give up easily. Love 'em for the purpose you are suggesting: they spend most of their time powered down, so noise isn't an issue. Sent from my chisel, carved into granite
  25. And the reading is from your UPS. What UPS are you using? That may be why it isn't directly comparable to others. Sent from my chisel, carved into granite