bigjme

Members

  • Posts: 351
  • Joined
  • Last visited


  • Gender: Undisclosed


bigjme's Achievements

Contributor (5/14)

Reputation: 16

  1. Thanks bastl, mine was a little more generic as this is currently running on non-Unraid hardware. I'm aiming it more towards my remote backup server, which uses SSH to back up my remote systems, so my current script could be configured to run on most things remotely, for example. That script will certainly be useful to read through, however.
  2. So it's been a long time since I looked at this previously, but as I'm bored over the holidays I thought I'd look at a way to do this a little differently. Instead of full backups, which take too much time and data, I looked at implementing an approach closer to what I think Windows replication would use.

I have not been using this script full time; it's just a testing script for now with no error checks etc., but if anyone wants to comment or elaborate on it they are more than welcome to. Testing wise, I have run this many times, shutting down and restarting the VM after various runs, even mounting the backup image to the VM, and it all seems fine and works OK. I have yet to bother to export the XML file, as that seems like the easiest part in reality.

In short, this is how the script works:

  • Loop through the VMs listed at the top, then loop through each drive assigned to each one
  • Make the folder for the backup to be stored in, if needed
  • See if overlay 1 or 2 exists; if not, it is an initial backup:
    • Create overlay 1 and back up the initial large base image
    • Shrink the image back down a little using qemu-img, then move the shrunken version to be the main image
  • If overlay 1 or 2 exists, then handle things differently:
    • Create a new snapshot (overlay) using a secondary snapshot name (state1 and state2 switch each run)
    • Copy the previously used snapshot to the backup location as the changes since the last backup
    • Merge the old snapshot into the base image
    • Rebase the partial image in the backup folder onto the main image taken on the first run
    • Commit the changes to the base image so the base backup file is now complete
    • Remove the old snapshot file
  • Shrink the image down again to keep the sizes small, as snapshot commits do swell the underlying image if it is sparse

The reason I went down this route is that because you're only copying the changes since the last backup, the network load, and thus the time to complete, is extremely low. As you're also block committing a previous snapshot, it should avoid errors in the commit. This should make the process a little more reliable and considerably faster than copying the base each time.

Now onto the downside. As you're using an overlay image at all times after the first commit, this runs the risk of getting fairly large if the backup isn't run often while the VM is changing a lot. The upside is that on my test 80GB CentOS install (12GB used), the script does a full backup and image shrink in 25 seconds (600MB disk change), so this could be run very often in a replication-style system (6 seconds for a few hundred MB of changes without the image shrink).

Again, this is just me having a play and using various tools for fun to see if this was possible at all, but so far it seems to be working, and any feedback from those with more knowledge than me would be appreciated. I must reiterate, however: this is being used for testing and is not production ready in any way. Use it at your own risk.

To test, simply change the initial BaseBackupFolder, OverlayPath, and VMNames:

#!/bin/bash

echo "Starting backup system..."

BaseBackupFolder="/mdev1/VM Backup/"
OverlayPath="/mdev1/VM Overlays/"
VMNames=( "Cent8Backup" )

arraylength=${#VMNames[@]}
echo "Found ${arraylength} VMs that need backing up.."
echo " "

# Handle backup
for (( ii=0; ii<${arraylength}; ii++ )); do
    BackupPath="${BaseBackupFolder}${VMNames[$ii]}"

    # read the disk targets and file paths into arrays, one entry per disk
    mapfile -t DriveLetter < <(virsh domblklist "${VMNames[$ii]}" --details | grep ^file | grep disk | awk -F' {2,}' '{print $3}')
    mapfile -t FilePaths < <(virsh domblklist "${VMNames[$ii]}" --details | grep ^file | grep disk | awk -F' {2,}' '{print $4}')
    TotalDrives=${#DriveLetter[@]}

    echo "VM Name: ${VMNames[$ii]}"
    echo "Drives: ${TotalDrives}"
    echo " "

    for (( iii=0; iii<${TotalDrives}; iii++ )); do
        DiskName=$(basename -- "${FilePaths[$iii]}")   # currently unused
        OverlayDestination="${OverlayPath}${VMNames[$ii]}-${DriveLetter[$iii]}-overlay1.qcow2"
        BackupTarget="${BackupPath}/${VMNames[$ii]}-${DriveLetter[$iii]}-main.qcow2"
        echo "Backing up: ${DriveLetter[$iii]}"

        # make sure the backup folder exists
        mkdir -p "${BackupPath}"

        if [[ ! -f "${OverlayPath}${VMNames[$ii]}-${DriveLetter[$iii]}-overlay1.qcow2" && ! -f "${OverlayPath}${VMNames[$ii]}-${DriveLetter[$iii]}-overlay2.qcow2" ]]; then
            echo "Initial overlay not found, creating first backup"
            echo "File Path: ${FilePaths[$iii]} -> ${BackupTarget}"
            echo "Overlay Path: ${OverlayDestination}"

            # snapshot onto overlay 1, then copy the now-static base image out
            virsh snapshot-create-as --domain "${VMNames[$ii]}" guest-state1 --diskspec "${DriveLetter[$iii]}",file="${OverlayDestination}" --disk-only --atomic --no-metadata
            rsync -avh --info=progress2 --sparse "${FilePaths[$iii]}" "${BackupTarget}"
        else
            echo "Initial overlay found, running sequential backup"
            if [ ! -f "${OverlayPath}${VMNames[$ii]}-${DriveLetter[$iii]}-overlay2.qcow2" ]; then
                # overlay 1 is live: snapshot onto overlay 2, copy overlay 1 as the delta
                FilePaths[$iii]="${OverlayPath}${VMNames[$ii]}-${DriveLetter[$iii]}-overlay1.qcow2"
                OverlayDestination="${OverlayPath}${VMNames[$ii]}-${DriveLetter[$iii]}-overlay2.qcow2"
                BackupTarget="${BackupPath}/${VMNames[$ii]}-${DriveLetter[$iii]}-partial.qcow2"
                echo "File Path: ${FilePaths[$iii]} -> ${BackupTarget}"
                echo "Overlay Path: ${OverlayDestination}"

                virsh snapshot-create-as --domain "${VMNames[$ii]}" guest-state2 --diskspec "${DriveLetter[$iii]}",file="${OverlayDestination}" --disk-only --atomic --no-metadata
                rsync -avh --info=progress2 --sparse "${FilePaths[$iii]}" "${BackupTarget}"

                # merge the old overlay back into the running base, then fold the
                # copied delta into the backup's main image
                virsh blockcommit "${VMNames[$ii]}" "${DriveLetter[$iii]}" --top "${FilePaths[$iii]}" --verbose --wait
                rm -f "${FilePaths[$iii]}"
                qemu-img rebase -f qcow2 -u -b "${BackupPath}/${VMNames[$ii]}-${DriveLetter[$iii]}-main.qcow2" "${BackupTarget}"
                qemu-img commit "${BackupTarget}"
                rm -f "${BackupTarget}"
            else
                # overlay 2 is live: snapshot onto overlay 1, copy overlay 2 as the delta
                FilePaths[$iii]="${OverlayPath}${VMNames[$ii]}-${DriveLetter[$iii]}-overlay2.qcow2"
                OverlayDestination="${OverlayPath}${VMNames[$ii]}-${DriveLetter[$iii]}-overlay1.qcow2"
                BackupTarget="${BackupPath}/${VMNames[$ii]}-${DriveLetter[$iii]}-partial.qcow2"
                echo "File Path: ${FilePaths[$iii]} -> ${BackupTarget}"
                echo "Overlay Path: ${OverlayDestination}"

                virsh snapshot-create-as --domain "${VMNames[$ii]}" guest-state1 --diskspec "${DriveLetter[$iii]}",file="${OverlayDestination}" --disk-only --atomic --no-metadata
                rsync -avh --info=progress2 --sparse "${FilePaths[$iii]}" "${BackupTarget}"

                virsh blockcommit "${VMNames[$ii]}" "${DriveLetter[$iii]}" --top "${FilePaths[$iii]}" --verbose --wait
                rm -f "${FilePaths[$iii]}"
                qemu-img rebase -f qcow2 -u -b "${BackupPath}/${VMNames[$ii]}-${DriveLetter[$iii]}-main.qcow2" "${BackupTarget}"
                qemu-img commit "${BackupTarget}"
                rm -f "${BackupTarget}"
            fi
        fi

        echo "shrinking backup image"
        # qemu-img convert re-sparsifies the image; swap the shrunken copy into place
        qemu-img convert -O qcow2 "${BackupPath}/${VMNames[$ii]}-${DriveLetter[$iii]}-main.qcow2" "${BackupPath}/${VMNames[$ii]}-${DriveLetter[$iii]}-main.qcow2.shrunk"
        rm -f "${BackupPath}/${VMNames[$ii]}-${DriveLetter[$iii]}-main.qcow2"
        mv "${BackupPath}/${VMNames[$ii]}-${DriveLetter[$iii]}-main.qcow2.shrunk" "${BackupPath}/${VMNames[$ii]}-${DriveLetter[$iii]}-main.qcow2"
        echo "image shrunk"

        # uncomment the following to completely flatten. Remember to delete the running overlay afterwards
        # virsh blockcommit "${VMNames[$ii]}" "${DriveLetter[$iii]}" --active --verbose --pivot
        echo " "
    done
done

It uses the same principle as I previously discussed, whereby we go from:

  1. Base -> Overlay1
  2. Base -> Overlay1 -> Overlay2
  3. Base -> Overlay2
  4. Base -> Overlay2 -> Overlay1
  5. Start again at point 1

At all points, the file we copy is the middle overlay you can see in steps 2 and 4. (A quick way to sanity-check the resulting backup is noted in the edit below.)

Regards, Jamie
  3. I didn't, no, as my VMs were stored across my array and cache drives, and they were all formatted as XFS.
  4. I'm afraid not. In some instances the overlay images would fail to merge back in, so the backup would then fail on the next occurrence.
  5. Hey Saarg, I didn't think it would be, but thank you for confirming. Regards, Jamie
  6. Hi All,

So I did an update to the docker around an hour ago, so it's on :latest with no further updates available, and the startup logs have now started to show this:

[cont-init.d] executing container initialization scripts...
[cont-init.d] 10-adduser: executing...
usermod: no changes

-------------------------------------
          _         ()
         | |  ___   _    __
         | | / __| | |  /  \
         | | \__ \ | | | () |
         |_| |___/ |_|  \__/

Brought to you by linuxserver.io
We gratefully accept donations at:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------
User uid: 911
User gid: 911
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 99-custom-scripts: executing...
[custom-init] no custom scripts found exiting...
[cont-init.d] 99-custom-scripts: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)
nginx: [error] lua_load_resty_core failed to load the resty.core module from https://github.com/openresty/lua-resty-core; ensure you are using an OpenResty release from https://openresty.org/en/download.html (rc: 2, reason: module 'resty.core' not found:
    no field package.preload['resty.core']
    no file './resty/core.lua'
    no file '/usr/share/luajit-2.1.0-beta3/resty/core.lua'
    no file '/usr/local/share/lua/5.1/resty/core.lua'
    no file '/usr/local/share/lua/5.1/resty/core/init.lua'
    no file '/usr/share/lua/5.1/resty/core.lua'
    no file '/usr/share/lua/5.1/resty/core/init.lua'
    no file '/usr/share/lua/common/resty/core.lua'
    no file '/usr/share/lua/common/resty/core/init.lua'
    no file './resty/core.so'
    no file '/usr/local/lib/lua/5.1/resty/core.so'
    no file '/usr/lib/lua/5.1/resty/core.so'
    no file '/usr/local/lib/lua/5.1/loadall.so'
    no file './resty.so'
    no file '/usr/local/lib/lua/5.1/resty.so'
    no file '/usr/lib/lua/5.1/resty.so'
    no file '/usr/local/lib/lua/5.1/loadall.so')

I've never seen these before, and it's happening on all my dockers using this image, so it doesn't seem to be a configuration fault or anything else.

Having a quick google, this doesn't seem to be a huge issue, but it is something worth mentioning in case it does have an effect on anything I'm unaware of.

Regards, Jamie
  7. I installed the User Scripts plugin to Unraid, added a new script named Fix Gpu Passthrough, and set the content to:

#!/bin/bash
echo 0 > /sys/class/vtconsole/vtcon0/bind
echo 0 > /sys/class/vtconsole/vtcon1/bind
echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind

I then set it to run "At first array start only" and just rebooted. It will then run those commands automatically when the array starts, which occurs prior to any VMs being booted, so it happens early enough to cause no issues.
  8. Attached are the docker settings I used. The registry edits do 2 things (I haven't used this in a while, so I'm unsure if they're still needed or not):

  • Alter the appearance settings for Wine so the Blue Iris interfaces look correct. Without it, certain setting boxes didn't render properly (close, minimise, fullscreen)
  • Tell Blue Iris to minimise to the system tray. I found that if you kept Blue Iris open or tried to minimise it, it would often crash by default as Wine would get bogged down. This allows it to be minimised and re-opened without the crashes, and stops the docker from rendering the video preview in the background

It's been a while since I went through any of this; I turned the docker off after posting the registry fixes as I wasn't sure how to automate them.

I'm not sure exactly where Wine keeps the Windows registries stored in the system itself, but I believe, as long as the C drive is mapped like in the above screenshot, it should persist everything even with updates (see the edit below for a quick way to check).

Jamie
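Edit: one thing worth knowing here - Wine normally keeps its registry hives as plain-text files (system.reg, user.reg, userdef.reg) in the root of the Wine prefix, not under drive_c, so whether the tweaks persist depends on the whole prefix sitting inside a mapped volume. A quick check, assuming (and this path is a guess) the prefix lives under the container's /config mapping:

# if these hive files sit inside a volume mapped to the host,
# the registry edits should survive container updates
ls -l /config/wine/*.reg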
  9. So as I'm using more and more dockers, I'm finding my docker screen getting a little messy (with 23 dockers). So my request is for some sort of docker grouping or categorisation in the docker GUI. While it won't affect many users, it may be useful to be able to group dockers by what they're for and tidy up the interface. For example, with my setup I'd want these groups:

  • Home Media Services
  • Web Services
  • CCTV
  • Utils
  • Not Needed (I store old useful dockers here)

It would be amazing to see something like this, potentially with a user-creatable grouping list so we can make as many as needed? Even if it just acted as a filter, that would be amazing.

Jamie
  10. OK, I did not think of this at all... Safe to say this would be a much easier approach than what I was thinking of, as it's more scalable. Looks like I'm going to need to learn some docker, potentially! Thanks both for your replies. As I say, it was a bit of an odd one, and one I didn't really expect a response to.
  11. This is a bit of an odd one, so forgive me, and I know it may sound totally mad. For years I've hosted various servers and websites on an Unraid server. It's my job, so I host test sites and everything else when I'm working remotely.

I have a custom reverse proxy coded and working in nginx, and a number of nginx backend servers. This allows me to route different websites to different instances of nginx entirely. Sometimes I get stuck with older websites, or those that have bugs with certain PHP versions, and I need to change it.

Right now the nginx docker ships with PHP 7.1.17, which is great. My question is: is there any way to change the PHP version without manually removing and installing a new one each load? I was thinking perhaps a set of stable PHP versions all installed at the same time, but with FastCGI on different ports. You could then change the FastCGI port used by nginx and therefore change the PHP version used. The most commonly selectable options I've seen are: 4.4, 5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 7.0, 7.1, 7.2.

My idea is to use dockers instead of something like MultiPHP so everything is truly isolated. I know the idea behind it is mad and likely not usable by many, but it's worth a shot. (A rough sketch of the port idea is below.)
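To illustrate the port-per-version idea, here is a minimal sketch. It assumes the official php:*-fpm images from Docker Hub (which only go back to around 5.6, so the older versions in my list would need custom images); the host IP, ports, and web root path are placeholders:

# one php-fpm container per version, each exposing FastCGI on its own host port
docker run -d --name php71 -p 9071:9000 -v /mnt/user/appdata/www:/var/www/html php:7.1-fpm
docker run -d --name php56 -p 9056:9000 -v /mnt/user/appdata/www:/var/www/html php:5.6-fpm

# each site's nginx config then picks its PHP version purely by port, e.g.:
#   fastcgi_pass 192.168.1.10:9071;   # this site runs on PHP 7.1
#   fastcgi_pass 192.168.1.10:9056;   # this one stays on PHP 5.6

Switching a site's PHP version would then just be a one-line nginx change and a reload, with each interpreter fully isolated in its own container.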
  12. I managed to get it to run and work using these settings. I couldn't figure out how to map the config folder for Kerberos itself, however, so I'm assuming an update would wipe the settings within.

One thing I did notice was it seems very memory heavy. A single 3MP camera was causing the docker to pull at least 3GB of memory (climbing quickly) and 2 full system cores on my 5960x. Safe to say my Windows 10 VM running Blue Iris pulls around 2GB with a full Windows install, and a tiny 3% of its 4 allocated cores, in comparison.

Regards, Jamie
  13. Hi All,

So my Unraid server uses an Adaptec HBA, and sadly I'm unable to monitor it right now as I'm using Unraid - primarily it's just to check on the HBA temperature, as I have no other way to check it. I was wondering if there was anyone with a built docker file for this, or anyone who would be able to create one?

I know this topic was brought up around 2 years ago and there were issues with having the software compiled into the docker; I believe it was decided the software would have to be downloaded and installed on docker run, which is fine. I've found the following 2 dockers that do this, but they're both on GitHub, so I've got no idea how to use them (see the edit below for a generic way to build them):

https://github.com/Fish2/docker-storman
https://github.com/nheinemans/docker-storman

I'm not sure if there are still licensing issues I'm not aware of, or it may be the case that this simply won't work at all, but it would perhaps be useful for anyone on here running Adaptec, maybe now that they're a lot more supported?

Jamie
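Edit: from what I can tell, repos like these are normally built straight from the Dockerfile, so something like the following might be a starting point. This is untested with either repo, and the container name and any ports or device access StorMan needs are guesses that would need checking against the repo's README:

git clone https://github.com/Fish2/docker-storman
cd docker-storman
docker build -t storman .
# run it; StorMan will likely need access to the HBA, so extra flags
# (e.g. --privileged or specific --device mappings) may be required
docker run -d --name storman storman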
  14. Ok, so for anyone following this or interested: this may be becoming more of a general virsh support issue than anything, as I have an idea of what may work for achieving at least some form of easier backups, and having had to trash a VM recently this is becoming a more prominent issue for me.

Please note, I have not tried rolling back a VM to a snapshot yet, so don't use the above as serious backups until it is tested more.

Now, as I'm fairly new to virsh and there are no doubt some gurus on here, this is my new current script:

#!/bin/bash

# Declare backup sources
sources=( "/mnt/cache/VMImages/vm1/" "/mnt/disk1/Disk1Test/VM Image/vm2/" )
targets=( "/mnt/disks/Disk_1/VMImages/vm1" "/mnt/disks/Disk_1/VMImages/vm2" )
vmname=( "vm1" "vm2" )
arraylength=${#sources[@]}

# Declare backup drives
deviceuuid=( "6ed5043c-14ee-41f2-903d-d201ec50d39f" )
devicemount=( "/mnt/disks/Disk_1" )
devicelength=${#deviceuuid[@]}

# Mount drives
for (( ii=1; ii<${devicelength}+1; ii++ )); do
    if grep -qs "${devicemount[$ii-1]}" /proc/mounts; then
        echo "${devicemount[$ii-1]}" " - mounted"
    else
        echo "${devicemount[$ii-1]}" " - not mounted"
        mkdir "${devicemount[$ii-1]}"
        echo "${devicemount[$ii-1]}" " - created mount path"
        mount -t xfs UUID="${deviceuuid[$ii-1]}" "${devicemount[$ii-1]}"
        if [ $? -eq 0 ]; then
            echo "${devicemount[$ii-1]}" " - mount success!"
        else
            echo "${devicemount[$ii-1]}" " - mount failed!"
            exit 1
        fi
    fi
done

# Handle backup
for (( ii=1; ii<${arraylength}+1; ii++ )); do
    echo "Starting backup of" "${sources[$ii-1]}" " to " "${targets[$ii-1]}"
    mkdir -p "${targets[$ii-1]}"
    #virsh domblklist "${vmname[$ii-1]}"

    # snapshot the VM onto a temporary overlay so the base image is static while rsync runs
    virsh snapshot-create-as --domain "${vmname[$ii-1]}" $(date '+%Y-%m-%d-%H-%M-%S') --diskspec hdc,file="/mnt/user/ArrayVDisks/TempVms/overlays/${vmname[$ii-1]}.qcow2" --disk-only --atomic

    # first run copies sparse to keep the file small; later runs update it in place
    files=( $(find "${targets[$ii-1]}" -name "*.qcow2") )
    if [ ${#files[@]} -gt 0 ]; then
        echo "Running incremental backup - setting as inplace"
        rsync -ahv --delete --progress --inplace "${sources[$ii-1]}" "${targets[$ii-1]}"
    else
        echo "Running first backup - setting as sparse"
        rsync -ahv --delete --progress --sparse "${sources[$ii-1]}" "${targets[$ii-1]}"
    fi

    # merge the overlay back into the base and remove it
    virsh blockcommit "${vmname[$ii-1]}" hdc --active --verbose --pivot
    rm "/mnt/user/ArrayVDisks/TempVms/overlays/${vmname[$ii-1]}.qcow2"
    #virsh snapshot-list "${vmname[$ii-1]}"
done

# Unmount drives
for (( ii=1; ii<${devicelength}+1; ii++ )); do
    if grep -qs "${devicemount[$ii-1]}" /proc/mounts; then
        fuser -k "${devicemount[$ii-1]}"
        umount "${devicemount[$ii-1]}"
        echo "${devicemount[$ii-1]}" " - unmounted"
        rmdir "${devicemount[$ii-1]}"
        echo "${devicemount[$ii-1]}" " - removed mount path"
    fi
done

So, this mounts the drive I specify to a mount point of my choosing, then loops through the VMs as needed. It creates a snapshot for each VM and checks whether that VM has been backed up before. If it hasn't, rsync does a sparse image backup to preserve the space used, rather than copying it at the full image size. If it has, it does an in-place backup, which just updates the sparse image, again keeping the files small.

This so far seems to work fine: I have a 30GB qcow2 VM that's using 5.6GB, and 3 backup runs later it's still only 5.6GB, as nothing in the VM ever really changes, although it is running and there is no noticeable difference to the VM while this is all running.

Once the image is copied, the system commits the overlay image back into the base, removes the overlay image, and does the next VM; once it's done, it unmounts all the drives as needed. This is done so I can actually back up my VMs to multiple drives if needed - I use this for important documents on my array as well.

So that's all working fine, but there is still that dreaded rsync that copies the entire image over every time, which isn't ideal, but is fine while I'm doing local backups.

So I had a thought earlier, but I can't figure out for the life of me if it's even remotely possible or not. From what I have read, the command "virsh snapshot-create-as" is capable of creating a snapshot backing chain with multiple layers when the "--reuse-external" parameter is passed. An example is below.

A brand new image is created and looks like this:

base.img

After 1 snapshot this looks like this:

base.img -> snapshot1

After 2 snapshots this looks like this:

base.img -> snapshot1 -> snapshot2

So my thought was: what happens if, when we create snapshot1, we copy the base.img in our backup script? When we create snapshot2, we copy snapshot1. We then commit snapshot1 back into base.img using blockcommit with "--shallow --keep-relative", which leaves:

base.img -> snapshot2

The next backup run then creates snapshot1 based on snapshot2, then commits snapshot2 as above, and we loop like this: one run creates snapshot1, the next creates snapshot2, and so on. The file you then transfer would be the "middle" snapshot, and would in essence be the difference between the snapshots, resulting in a much smaller file to be copied. In my mind that would work up to this point (although I know far too little to say if it's plausible).

Now the main issue I see comes in. In your backup destination you now have a load of snapshot overlays and a base file, but how on earth would you get those overlays to commit back into the main file? The overlay and base files would be aware of entirely different backing chains to each other, so I'm not sure how to possibly maintain this. (One possible qemu-img approach is sketched in the edit below.)

My hope is that this thread may become more of a rambling of ideas in order to aid someone else in coming up with a good idea. At work we use Windows Hyper-V with active replication (so if one server dies, server 2 is only so far behind and can be spun up in its place); I would love to be able to do something similar with my home Unraid boxes and KVM.

I'm aware this post is long and the topic may be drifting, so perhaps it may be worth moving it elsewhere? Either way, hopefully the scripts above may help someone.

Regards, Jamie
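Edit: thinking about the commit-back problem above a little more, the backup destination side might be manageable with qemu-img alone, since an unsafe rebase only rewrites the overlay's backing-file pointer without touching the data. A sketch, untested, using /backups/vm1 as a made-up destination path:

# point the copied overlay at the backup copy of the base image
# (-u is the unsafe rebase: it only rewrites the backing-file header)
qemu-img rebase -f qcow2 -u -b /backups/vm1/base.img /backups/vm1/snapshot1

# fold the overlay's changes into the backup base, then drop the overlay
qemu-img commit /backups/vm1/snapshot1
rm /backups/vm1/snapshot1

That way the backup copy never needs to know about the chain on the live host; each copied "middle" snapshot just gets re-pointed at the backup base before being committed.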
  15. What trouble is it having, spants? Permissions, or simply getting a mount point in the docker?