Posts posted by dja

  1. 1 hour ago, MorphiousGX said:

    @dja Thank you so much. OK, a couple of questions: do I delete the other VM (the original)? And why can't I replace the files in the original one?

    I'm not sure I understand what you are trying to do. The point of the backup is to restore your VM should the working version stop working or otherwise become unavailable.

  2. HOW TO RESTORE!!!!


    Option 1- Script:
    Use the great script from @petchav, many thanks! See the video below for a guide on how to use this. 

     


    Option 2- Manual restore:

    You will need:
    1.  Your backup .img file (after extraction)
    2.  Your backed-up XML file
    3.  Your backed-up .fd file

    Step 1- In terminal, extract the .img file from the .zst backup. Example below; replace the file name with your own .zst file.

    zstd -d -C --no-check 20211114_0300_vdisk1.img.zst


    You will likely get the error below IF you run the command above without --no-check

     Decoding error (36) : Restored data doesn't match checksum
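
    If you want to know in advance whether your archive will hit this, zstd can test the file without extracting it. A quick sketch (use your own file name):

    zstd -t 20211114_0300_vdisk1.img.zst

    If the test reports the same checksum error, decompress with --no-check as shown above.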


    *Note: the --no-check option MAY NOT be supported by Unraid. If it doesn't work in the Unraid terminal, try Cygwin on Windows and run it there: https://www.cygwin.com. Place your .zst file in the c:\Cygwin folder. (Windows can now also run a Linux environment, but that's beyond the scope of this guide!) Copy the backup to the local machine first or it will take forever!

    ALSO- you can back up WITHOUT compression and save yourself some grief!!! (See options)

     

    Step 2- Place the .img file back into the directory it was backed up from. Look in the backed-up XML file for the line that gives the vdisk location and ensure that YOUR path AND files exist (a quick way to find it is shown below).
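
    The disk path lives in the <source file=.../> element of the VM's XML, so a grep will pull it out. A minimal sketch, assuming your backed-up XML is named 20211114_0300_Windows 10.xml (adjust to your own file):

    # show the vdisk path(s) recorded in the backed-up XML
    grep "source file" "20211114_0300_Windows 10.xml"
    # output will look something like:
    #     <source file='/mnt/user/domains/Windows 10/vdisk1.img'/>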

    Step 3- Place the backed-up .fd file back at the following path (the file name may vary):

    /etc/libvirt/qemu/nvram/4a2b120f-0ea9-846a-6e11-f097002e442d_VARS-pure-efi.fd
    
    You may need to remove the 14 character timestamp at the start of the filename!!
    
    For example, remove 20211205_0300_ 
    20211205_0300_4a2b120f-0ea9-846a-6e11-f097002e442d_VARS-pure-efi.fd
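
    For example, here is a one-line sketch that copies the file back into place and drops the timestamp prefix in the same step. The backup folder shown is only illustrative; use wherever your backups actually land:

    cp "/mnt/user/Backup/VM_Backups/Windows 10/20211205_0300_4a2b120f-0ea9-846a-6e11-f097002e442d_VARS-pure-efi.fd" \
       "/etc/libvirt/qemu/nvram/4a2b120f-0ea9-846a-6e11-f097002e442d_VARS-pure-efi.fd"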

    Step 4- Verify ALL files are where they were BEFORE the backup. Look at the XML file for file locations.
    Step 5- Open your .xml file. Copy the contents. Don't mess with it!

    Step 6- Create a new VM in Unraid. When asked, it does NOT matter what type (Win11, 10, etc.), just pick one. Once you do that and BEFORE hitting "Create", you will be on the VM options page. Use the Form View / XML View toggle at the top right of the screen to switch to XML view. Select all contents and delete. Paste in the contents of the backed-up XML. Select Create and away you go!
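
    If you are comfortable in the terminal, an alternative sketch (standard libvirt, though I would still double-check the result in the VMs tab) is to define the VM straight from the backed-up XML instead of pasting it into a new one. Assuming your backup sits at the illustrative path below:

    virsh define "/mnt/user/Backup/VM_Backups/Windows 10/20211114_0300_Windows 10.xml"

    The VM name is taken from the <name> element inside the XML, so it should appear under that name in the VMs tab.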

     

  3. 4 hours ago, MorphiousGX said:

    I am new here and thank you for the great plugin. 

    I searched here for a way to restore images. I have one machine I am backing up (to test it out) and it is a Windows VM.
    My back up has 3 files:

    1. VARS-pure-efi.fd

    2. vdisk.img

    3. .xml file

     

    What's the best and safest way to restore the image to the original VM? I saw in a post that I have to create a new vm. I don't want to do that because I do have a passthrough GPU and I don't want to redo it every time I do a restore.

     

    Or maybe there is a built in feature to restore that I am not seeing?  I have the latest version of the tool. 

    I posted in the thread how to restore, maybe a couple pages back.

     

  4. 1 hour ago, Elmojo said:

    I wouldn't have a clue how to get to that share in terminal. I can do it through my regular windows explorer no problem, and I can get to it via MC by browsing to the disk it physically resides on, but beyond that I'm a bit lost, sorry.

    I looked at the logs, and it's weird. The actual log files show that there was an error, but not what it was specifically. I've attached them for your perusal.  If I open the [show log] in the GUI, then it shows mostly the same info, except that I see the actual error I asked about. No idea why they would be different.

    Here's a clip of the relevant part of the on-screen log, showing the error and surrounding info...

    2021-12-06 02:00:02 information: can_backup_vm flag is y. starting backup of BlueIris Server (W10) configuration, nvram, and vdisk(s).
    sending incremental file list
    BlueIris Server (W10).xml
    
    sent 6,671 bytes received 35 bytes 13,412.00 bytes/sec
    total size is 6,550 speedup is 0.98
    2021-12-06 02:00:02 information: copy of BlueIris Server (W10).xml to /mnt/user/Backup/xVM Backups/BlueIris Server (W10)/20211206_0200_BlueIris Server (W10).xml complete.
    sending incremental file list
    1f335446-855e-7049-961e-043582ac2630_VARS-pure-efi.fd
    
    sent 131,247 bytes received 35 bytes 262,564.00 bytes/sec
    total size is 131,072 speedup is 1.00
    2021-12-06 02:00:02 information: copy of /etc/libvirt/qemu/nvram/1f335446-855e-7049-961e-043582ac2630_VARS-pure-efi.fd to /mnt/user/Backup/xVM Backups/BlueIris Server (W10)/20211206_0200_1f335446-855e-7049-961e-043582ac2630_VARS-pure-efi.fd complete.
    2021-12-06 02:00:02 information: able to perform snapshot for disk /mnt/cache/domains/Windows 10/vdisk1.img on BlueIris Server (W10). use_snapshots is 1. vm_state is running. vdisk_type is raw
    2021-12-06 02:00:02 information: qemu agent found. enabling quiesce on snapshot.
    error: internal error: missing storage backend for 'file' storage
    
    2021-12-06 02:00:02 failure: snapshot command failed on vdisk1.snap for BlueIris Server (W10).
    2021-12-06 02:00:02 failure: snapshot_fallback is 0. skipping backup for BlueIris Server (W10) to prevent data loss. no cleanup will be performed for this vm.

     

    Attached: 20211206_0200_unraid-vmbackup_error.log (8.97 kB), 20211206_0200_unraid-vmbackup.log (8.97 kB)

    I think the issue may be that you have it shutting down your VM and it is NOT shutting down. Notice the 2nd line below in logging. I may be reading the log message incorrectly though.

    Did you install the Virtio drivers on the VM? That is the easiest thing to check/do first and may resolve it. I would install that and allow the backup to run without shutting down the VM.  
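
    One quick way to confirm the guest agent inside the VM is actually reachable (just a sketch; use your VM's name exactly as it appears in virsh list):

    virsh qemu-agent-command "BlueIris Server (W10)" '{"execute":"guest-ping"}'
    # a healthy agent answers with: {"return":{}}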

    Also- where is your VM image (disk) file stored? See message below-
     

    2021-12-06 02:00:02 failure: snapshot command failed on vdisk1.snap for BlueIris Server (W10).
    2021-12-06 02:00:02 failure: snapshot_fallback is 0. skipping backup for BlueIris Server (W10) to prevent data loss. no cleanup will be performed for this vm.

     

  5. Just now, Elmojo said:

    Thanks for the quick reply!

    I have a share on my array called "xVM Backups"

    It's at /mnt/user/Backup/xVM Backups/

    Is that wrong?

    That sounds right if the path exists. Can you navigate to the target in terminal?

    Also, can you post the log file? (Forgot to ask!) You may need to enable that (3rd tab, 'other settings'). If you don't have it enabled now, you may need to run the backup again and then attach that file here. I've got a Win10 VM running with Virtio drivers installed and it works well, so you should be able to back up. Do you have Virtio drivers installed for the VM?

  6. 6 minutes ago, Elmojo said:

    I've set up to allow snapshots on my Win10 VM that's running my cameras.

    I think I've config'd per the instructions, but when the backup runs, the script fails, and the log shows the following error:

    "error: internal error: missing storage backend for 'file' storage"

    Any idea what I need to change?

    It works fine if I allow it to shut down the VM first, but I'd prefer to have it take a snapshot instead, if that will work.

    It sounds like the target for your backup is not quite right. Where are you backing up to? 

  7. On 11/25/2021 at 12:40 PM, Roudy said:

     

    Have you done a reboot since adding the uid/gid? Can you do an "ls -l" inside your "GoogleDrive" folder and paste the results?

    Yes, I did. No luck there. I did ls -l with both root and my Unraid user account, and they both show identical listings. That has been the pattern all along: navigation and directory listing work fine; it is only creating and deleting that fail.

  8. 14 hours ago, Roudy said:

     

    Add the below to the mount script and it will mount it with nobody:users permissions.

     

    # create rclone mount

        --uid 99 \
        --gid 100 \

    Thanks @Roudy! Maybe I am missing something, as I'm still getting the error. My mount script is below. I used fusermount -uz to remove the existing mount and also killed the script before re-running. Does it need a reboot, possibly?

     

    Edit- it turns out my nobody:users IDs are not 99 and 100; they are 98 and 99. I ran
    getent group  | grep no
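
    (For what it's worth, a more direct check is something like the sketch below, assuming the stock nobody/users accounts:)

    id nobody           # numeric uid/gid of the nobody account
    getent group users  # numeric gid of the users group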

     

    Still no dice though. Getting same error. 

     

    ######################
    #### Mount Script ####
    ######################
    ## Version 0.96.9.3 ##
    ######################
    
    ####### EDIT ONLY THESE SETTINGS #######
    
    # INSTRUCTIONS
    # 1. Change the name of the rclone remote and shares to match your setup
    # 2. NOTE: enter RcloneRemoteName WITHOUT ':'
    # 3. Optional: include custom command and bind mount settings
    # 4. Optional: include extra folders in mergerfs mount
    
    # REQUIRED SETTINGS
    RcloneRemoteName="GoogleDrive" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
    RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
    RcloneMountDirCacheTime="720h" # rclone dir cache time
    LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
    RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
    RcloneCacheMaxSize="400G" # Maximum size of rclone cache
    RcloneCacheMaxAge="336h" # Maximum age of cache files
    MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
    #DockerStart="" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
    MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount
    
    # Note: Again - remember to NOT use ':' in your remote name above
    
    # OPTIONAL SETTINGS
    
    # Add extra paths to mergerfs mount in addition to LocalFilesShare
    LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
    LocalFilesShare3="ignore"
    LocalFilesShare4="ignore"
    
    # Add extra commands or filters
    Command1="--rc"
    Command2=""
    Command3=""
    Command4=""
    Command5=""
    Command6=""
    Command7=""
    Command8=""
    
    CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
    RCloneMountIP="192.168.1.41" # My unraid IP is 172.30.12.2 so I create another similar IP address
    NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
    VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them
    
    
    
    
    ####### END SETTINGS #######
    
    ###############################################################################
    #####   DO NOT EDIT ANYTHING BELOW UNLESS YOU KNOW WHAT YOU ARE DOING   #######
    ###############################################################################
    
    ####### Preparing mount location variables #######
    RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location for rclone mount
    LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName" # Location for local files to be merged with rclone mount
    MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName" # Rclone data folder location
    
    ####### create directories for rclone mount and mergerfs mounts #######
    mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName # for script files
    mkdir -p $RcloneCacheShare/cache/$RcloneRemoteName # for cache files
    if [[  $LocalFilesShare == 'ignore' ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Not creating local folders as requested."
    	LocalFilesLocation="/tmp/$RcloneRemoteName"
    	eval mkdir -p $LocalFilesLocation
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Creating local folders."
    	eval mkdir -p $LocalFilesLocation/"$MountFolders"
    fi
    mkdir -p $RcloneMountLocation
    
    if [[  $MergerfsMountShare == 'ignore' ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Not creating MergerFS folders as requested."
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Creating MergerFS folders."
    	mkdir -p $MergerFSMountLocation
    fi
    
    
    #######  Check if script is already running  #######
    echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
    echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
    if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
    	exit
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    	touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    fi
    
    ####### Checking have connectivity #######
    
    echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if online"
    ping -q -c2 google.com > /dev/null # -q quiet, -c number of pings to perform
    if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
    	echo "$(date "+%d.%m.%Y %T") PASSED: *** Internet online"
    else
    	echo "$(date "+%d.%m.%Y %T") FAIL: *** No connectivity.  Will try again on next run"
    	rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    	exit
    fi
    
    #######  Create Rclone Mount  #######
    
    # Check If Rclone Mount Already Created
    if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
    # Creating mountcheck file in case it doesn't already exist
    	echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
    	touch mountcheck
    	rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
    # Check bind option
    	if [[  $CreateBindMount == 'Y' ]]; then
    		echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
    		ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
    		if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
    			echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
    		else
    			echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
    			ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
    		fi
    		echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
    	else
    		RCloneMountIP=""
    		echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
    	fi
    # create rclone mount
    	rclone mount \
    	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    	--allow-other \
    	--umask 000 \
    	--dir-cache-time $RcloneMountDirCacheTime \
    	--log-level INFO \
    	--poll-interval 15s \
    	--cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
    	--vfs-cache-mode full \
    	--vfs-cache-max-size $RcloneCacheMaxSize \
    	--vfs-cache-max-age $RcloneCacheMaxAge \
    	--bind=$RCloneMountIP \
    	--uid 99 \
    	--gid 100 \
    	$RcloneRemoteName: $RcloneMountLocation &
    
    # Check if Mount Successful
    	echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 5 seconds"
    # slight pause to give mount time to finalise
    	sleep 5
    	echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
    	if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    		echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
    	else
    		echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems.  Stopping dockers"
    		docker stop $DockerStart
    		rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    		exit
    	fi
    fi
    
    ####### Start MergerFS Mount #######
    
    if [[  $MergerfsMountShare == 'ignore' ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Not creating mergerfs mount as requested."
    else
    	if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
    		echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount in place."
    	else
    # check if mergerfs already installed
    		if [[ -f "/bin/mergerfs" ]]; then
    			echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed, proceeding to create mergerfs mount"
    		else
    # Build mergerfs binary
    			echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
    			mkdir -p /mnt/user/appdata/other/rclone/mergerfs
    			docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
    			mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
    # check if mergerfs install successful
    			echo "$(date "+%d.%m.%Y %T") INFO: *sleeping for 5 seconds"
    			sleep 5
    			if [[ -f "/bin/mergerfs" ]]; then
    				echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs installed successfully, proceeding to create mergerfs mount."
    			else
    				echo "$(date "+%d.%m.%Y %T") ERROR: Mergerfs not installed successfully.  Please check for errors.  Exiting."
    				rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    				exit
    			fi
    		fi
    # Create mergerfs mount
    		echo "$(date "+%d.%m.%Y %T") INFO: Creating ${RcloneRemoteName} mergerfs mount."
    # Extra Mergerfs folders
    		if [[  $LocalFilesShare2 != 'ignore' ]]; then
    			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare2} to ${RcloneRemoteName} mergerfs mount."
    			LocalFilesShare2=":$LocalFilesShare2"
    		else
    			LocalFilesShare2=""
    		fi
    		if [[  $LocalFilesShare3 != 'ignore' ]]; then
    			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare3} to ${RcloneRemoteName} mergerfs mount."
    			LocalFilesShare3=":$LocalFilesShare3"
    		else
    			LocalFilesShare3=""
    		fi
    		if [[  $LocalFilesShare4 != 'ignore' ]]; then
    			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare4} to ${RcloneRemoteName} mergerfs mount."
    			LocalFilesShare4=":$LocalFilesShare4"
    		else
    			LocalFilesShare4=""
    		fi
    # make sure mergerfs mount point is empty
    		mv $MergerFSMountLocation $LocalFilesLocation
    		mkdir -p $MergerFSMountLocation
    # mergerfs mount command
    		mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
    # check if mergerfs mount successful
    		echo "$(date "+%d.%m.%Y %T") INFO: Checking if ${RcloneRemoteName} mergerfs mount created."
    		if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
    			echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount created."
    		else
    			echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mergerfs mount failed.  Stopping dockers."
    			docker stop $DockerStart
    			rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    			exit
    		fi
    	fi
    fi
    
    ####### Starting Dockers That Need Mergerfs Mount To Work Properly #######
    
    # only start dockers once
    if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started" ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: dockers already started."
    else
    # Check CA Appdata plugin not backing up or restoring
    	if [ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ] ; then
    		echo "$(date "+%d.%m.%Y %T") INFO: Appdata Backup plugin running - not starting dockers."
    	else
    		touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started
    		echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
    		docker start $DockerStart
    	fi
    fi
    
    rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    echo "$(date "+%d.%m.%Y %T") INFO: Script complete"
    
    exit

     

    log.zip

  9. Having an issue with SMB shares. Apologies if this isn't supported.

     

    I am able to SSH into Unraid and delete, upload, change, and create files on the mounted rclone drive (via WinSCP or a PuTTY shell), but when connected from a Windows device over the SMB share I can navigate and download, while deleting, renaming, or creating fails. Oddly, on create it throws an error but still creates the file (?). Am I missing something? Is this supported, and do I need to adjust anything?

     

    The one difference I know of- SSH is using root vs. the user I have for logging in. (Which has permissions set to access r/w) 


    *Edit- so this does appear to be a root vs. normal user issue. How do I change my existing rclone mount to use another user without starting over? 



  10. HOW TO RESTORE!!!!
    For anyone this might help: if you try to restore the .img file from the .zst compressed backup, you may receive an error of -
     Decoding error (36) : Restored data doesn't match checksum

     

    To get around this, run the following (replace with your file name):

    zstd -d -C --no-check 20211114_0300_vdisk1.img.zst

    Where 20211114_0300_vdisk1.img.zst is the file name you replace with your own. If this doesn't work in terminal, try Cygwin on Windows and run it there: https://www.cygwin.com/

    Place your .zst file in the c:\Cygwin folder. Much easier! 

    For some reason, every VM backed up with zst selected is created like this and throws an error (for me). They ALL restore without issue (for me) doing this.  

    PS, you'll need to create a custom VM once your .img file is restored. Paste the contents of the associated backed-up XML file in. LOOK AT YOUR XML FILE closely!! Ensure the .img file is in the same folder as was/is spec'd in the XML, and also ensure the following are present:

    (File names may vary, look at backed up files) 

     

    /usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd
    /etc/libvirt/qemu/nvram/4a2b120f-0ea9-846a-6e11-f097002e442d_VARS-pure-efi.fd
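
    A quick sanity check for both (just a sketch; swap in your own nvram file name):

    ls -l /usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd
    ls -l /etc/libvirt/qemu/nvram/4a2b120f-0ea9-846a-6e11-f097002e442d_VARS-pure-efi.fd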
     

    For what it's worth, I have disabled zst compression and just have it back up the .img file!! No need to take a chance. Hope this helps!

     

  11. 2 hours ago, zspearmint said:

    Hmmm…Remote access operates separately from the API. If you've configured things correctly on your side it should work.

     

    Is the local access URL https://<YOUR_HASH>.unraid.net?

    Yes, I am seeing that locally and when I try to use it remotely. It's not an urgent thing, but when I was 2000 miles away and couldn't get into my system remotely (due to my mistakes with VPN) it would have been handy! :) Thanks for the continued help. 

  12. 19 hours ago, zspearmint said:

    Apologies y'all are seeing issues as well. If one of the following doesn't fix your issue then we'll have a fix out soon that may require a plugin update.

    • open a terminal window & try unraid-api restart
    • sign out and in again
    • remove & install plugin
    • reboot

    @dja thanks for your detailed video. This specific bug is related to the API overwriting the cached user details after page load. I think option 3 should fix it.

    Removing and adding the plugin did work to get me signed in, but I still got the API error. I had to reboot. This seems to happen every 30-40 days or so. Is remote access supposed to work as well? After reboot I get the online status and 'all good' from the /myservers page...but when clicking local access nothing ever loads. It just sits and spins. I do have a custom port forwarded to 443.

    Mark me down as another who can't back up on 6.10.0-rc1...same errors as above. It was working until rc1.

     

    Please update :) 

     

    And thanks for the work on this plugin!

    2021-10-11 10:43:42 Start logging to log file.
    2021-10-11 10:43:42 information: send_notifications is 1. notifications will be sent.
    2021-10-11 10:43:42 information: only_send_error_notifications is 0. normal notifications will be sent if send_notifications is enabled.
    2021-10-11 10:43:42 information: keep_log_file is 1. log files will be kept.
    2021-10-11 10:43:42 information: number_of_log_files_to_keep is 1. this is probably a sufficient number of log files to keep.
    2021-10-11 10:43:42 information: enable_vm_log_file is 0. vm specific logs will not be created.
    2021-10-11 10:43:42 information: backup_all_vms is 1. vms_to_backup will be ignored. all vms will be backed up.
    2021-10-11 10:43:42 information: use_snapshots is 1. vms will be backed up using snapshots if possible.
    2021-10-11 10:43:42 information: kill_vm_if_cant_shutdown is 0. vms will not be forced to shutdown if a clean shutdown can not be detected.
    2021-10-11 10:43:42 information: set_vm_to_original_state is 1. vms will be set to their original state after backup.
    2021-10-11 10:43:42 information: number_of_days_to_keep_backups is 0. backups will be kept indefinitely. be sure to set number_of_backups_to_keep to keep backups storage usage down.
    2021-10-11 10:43:42 information: number_of_backups_to_keep is 3. this is probably a sufficient number of backups to keep.
    2021-10-11 10:43:42 information: inline_zstd_compress is 1. vdisk images will be inline compressed but will not be compared afterwards or post compressed.
    2021-10-11 10:43:42 information: zstd_level is 3.
    2021-10-11 10:43:42 information: zstd_threads is 2.
    2021-10-11 10:43:42 information: snapshot_extension is snap. continuing.
    2021-10-11 10:43:42 information: snaphot extension not found in vdisk_extensions_to_skip. extension was added.
    2021-10-11 10:43:42 information: snapshot_fallback is 0. snapshots will fallback to standard backups.
    2021-10-11 10:43:42 information: pause_vms is 0. vms will be shutdown for standard backups.
    2021-10-11 10:43:42 information: enable_reconstruct_write is 0. reconstruct write will not be enabled by this script.
    2021-10-11 10:43:42 information: backup_xml is 1. vms will have their xml configurations backed up.
    2021-10-11 10:43:42 information: backup_nvram is 1. vms will have their nvram backed up.
    2021-10-11 10:43:42 information: backup_vdisks is 1. vms will have their vdisks backed up.
    2021-10-11 10:43:42 information: start_vm_after_backup is 0. vms will not be started following successful backup.
    2021-10-11 10:43:42 information: start_vm_after_failure is 0. vms will not be started following an unsuccessful backup.
    2021-10-11 10:43:42 information: disable_delta_sync is 0. rsync will be used to perform delta sync backups.
    2021-10-11 10:43:42 information: rsync_only is 0. cp will be used when applicable.
    2021-10-11 10:43:42 information: actually_copy_files is 1. files will be copied.
    2021-10-11 10:43:42 information: clean_shutdown_checks is 20. this is probably a sufficient number of shutdown checks.
    2021-10-11 10:43:42 information: seconds_to_wait is 30. this is probably a sufficient number of seconds to wait between shutdown checks.
    2021-10-11 10:43:42 information: keep_error_log_file is 1. error log files will be kept.
    2021-10-11 10:43:42 information: number_of_error_log_files_to_keep is 10. this is probably a sufficient error number of log files to keep.
    2021-10-11 10:43:42 information: started attempt to backup Windows 10 to /mnt/user/UNRAID/YALE-FORTY-FIVE/YALE-FORTY-FIVE/BACKUP/VM_Backups
    2021-10-11 10:43:42 information: Windows 10 can be found on the system. attempting backup.
    2021-10-11 10:43:42 information: removing old local Windows 10.xml.
    2021-10-11 10:43:42 information: creating local Windows 10.xml to work with during backup.
    2021-10-11 10:43:42 information: /mnt/user/UNRAID/YALE-FORTY-FIVE/YALE-FORTY-FIVE/BACKUP/VM_Backups/Windows 10 exists. continuing.
    2021-10-11 10:43:42 information: skip_vm_shutdown is false and use_snapshots is 1. skipping vm shutdown procedure. Windows 10 is running. can_backup_vm set to y.
    2021-10-11 10:43:42 information: actually_copy_files is 1.
    2021-10-11 10:43:42 information: can_backup_vm flag is y. starting backup of Windows 10 configuration, nvram, and vdisk(s).
    2021-10-11 10:43:42 information: copy of Windows 10.xml to /mnt/user/UNRAID/YALE-FORTY-FIVE/YALE-FORTY-FIVE/BACKUP/VM_Backups/Windows 10/20211011_1043_Windows 10.xml complete.
    2021-10-11 10:43:42 information: copy of /etc/libvirt/qemu/nvram/4a2b120f-0ea9-846a-6e11-f097002e442d_VARS-pure-efi.fd to /mnt/user/UNRAID/YALE-FORTY-FIVE/YALE-FORTY-FIVE/BACKUP/VM_Backups/Windows 10/20211011_1043_4a2b120f-0ea9-846a-6e11-f097002e442d_VARS-pure-efi.fd complete.
    2021-10-11 10:43:42 information: able to perform snapshot for disk /mnt/cache/domains/UNRAID_VM1/vdisk1.img on Windows 10. use_snapshots is 1. vm_state is running. vdisk_type is raw
    2021-10-11 10:43:42 information: qemu agent found. enabling quiesce on snapshot.
    2021-10-11 10:43:42 failure: snapshot command failed on vdisk1.snap for Windows 10.
    2021-10-11 10:43:42 failure: snapshot_fallback is 0. skipping backup for Windows 10 to prevent data loss. no cleanup will be performed for this vm.
    2021-10-11 10:43:42 information: finished attempt to backup Windows 10 to /mnt/user/UNRAID/YALE-FORTY-FIVE/YALE-FORTY-FIVE/BACKUP/VM_Backups.
    2021-10-11 10:43:42 information: cleaning out logs over 1.
    2021-10-11 10:43:42 information: removed '/mnt/user/UNRAID/YALE-FORTY-FIVE/YALE-FORTY-FIVE/BACKUP/VM_Backups/logs/20211011_1042_unraid-vmbackup.log'.
    2021-10-11 10:43:42 information: cleaning out error logs over 10.
    2021-10-11 10:43:42 information: did not find any error log files to remove.
    2021-10-11 10:43:42 warning: errors found. creating error log file.
    2021-10-11 10:43:42 Stop logging to error log file.

     

  14. 5 minutes ago, trurl said:

    Corrupt docker.img

     

    Why have you given 60GB to docker.img? Have you had problems filling it? 20G is usually more than enough and making it larger won't fix problems filling it, it will only make it take longer to fill. The usual cause of filling docker.img is an application writing to a path that isn't mapped.

     

    Read all this:

    https://wiki.unraid.net/Manual/Troubleshooting#Docker

    Thanks @trurl

    I have Unifi devices set to record verbose logging and (perhaps wrongly) thought that was what was filling things up, since it would require a higher amount of storage. How can I purge the logs? There is about 25 GB on my cache being eaten up by each device's logging. I have since turned that logging down to minimal.

  15. 2 hours ago, jonathanm said:

    Yes, I exploit this for a bit of security through obscurity. I have multiple file management containers with different levels of array access, all running on the same port. The least dangerous container is set to auto start. None of the more dangerous containers will start while the neutered one is running, so gaining full access with Krusader means shutting down one container and manually starting the other. They all answer on the same port, so it all looks the same from the client end.

    That makes sense, and I get it- but for IPs it should be easy to throw a ping out before applying the change and, if a reply is received, warn the user that it may be in use ("Do you want to continue?").
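
    Just a rough sketch of the idea (not Unraid's actual code; the IP below stands in for whatever address the user just typed):

    ping -c 1 -W 1 192.168.1.50 > /dev/null && echo "Warning: 192.168.1.50 already answers pings. Continue anyway?"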

  16. 15 minutes ago, dheg said:

      

     

    I fixed it, one of the dockers (Heimdall) was using port 443, you put me on the right path!

    Instead of messing with the command line, I tried first to start in safe mode, the webgui loaded without issues.

    I changed the docker port to 443 and rebooted, everything is fine now.

     

    Thanks!

    Nice! Glad it worked! 

    I love Unraid, but if I *had* to complain about anything, it would be config issues like this, which can creep up when you accidentally set something the wrong way and it conflicts with and breaks something else. Maybe docker can get some error/conflict checking when assigning network ports/IPs in the future. I know...we should be more careful! :)

  17. 4 minutes ago, dheg said:

     

    Thanks for this, it's not yet sorted, I was hoping for some clues.

    The only thing I can think of is that I have assigned those ports to docker.

    Do you know how can I check docker from the command line?

    Well, I would disable docker and reboot, then see if that clears it up. You might try this to get the GUI back while disabling the service:
    /etc/rc.d/rc.docker stop

    You could then do-
    /etc/rc.d/rc.nginx-fpm restart
    /etc/rc.d/rc.php-fpm restart

    See if your GUI comes back, then disable or otherwise investigate. *Disclaimer: not a pro. I don't think this will hurt anything- but I'm putting it out there! :)

  18. Did you get this sorted?

    I'm seeing this repeated quite a bit near the end of the logging (line 3231 in the syslog).
    Something to look at, possibly.
     

    Jul 25 22:06:08 gaia root: Starting Nginx server daemon...
    Jul 25 22:06:08 gaia nginx: 2021/07/25 22:06:08 [emerg] 19819#19819: bind() to 0.0.0.0:443 failed (98: Address already in use)
    Jul 25 22:06:08 gaia nginx: 2021/07/25 22:06:08 [emerg] 19819#19819: bind() to [::]:443 failed (98: Address already in use)
    Jul 25 22:06:08 gaia root: nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use)
    Jul 25 22:06:08 gaia root: nginx: [emerg] bind() to [::]:443 failed (98: Address already in use)
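
    If it helps, you can see what is already sitting on port 443 with something like this (sketch; run it from the Unraid terminal):

    netstat -tlnp | grep ':443'
    # or, if lsof is available on your box:
    lsof -i :443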

     
