Everything posted by IceBoosteR

  1. Didn't solve the issue. After spending days on the restore, I just installed a fresh W10 version and moved my files and software over. It took way too much time, but there was no option left. -Closed-
  2. Okay, now I am totally out of ideas. I even restored the VM with Active Backup for Business from Synology (yes, long story!), but I am still stuck in the freakin' shell.
     Edit: Alright, several restores later, I can also confirm that the repair option from a Windows 10 installer does not help. I mounted the disk in GParted and see two filesystems: "system-reserved" and the actual data. Looks like there is no EFI partition there whatsoever? Question is whether I can add one somewhere (I have 1 MB of space left) or something like that.
     Edit 2: Just tried again to use an OVA export, following this video: and also this: https://blog.ricosharp.com/posts/2019/Converting-ova-file-to-qcow2 (a conversion sketch follows after this list). No luck. It looks like it is unable to boot anything from that stupid vdisk. I really don't want to buy another Windows license and install all the software again. Yikes, I really thought handling VMs was going to be easy.
  3. Maybe, to use online snapshots, you would need to use qcow2 instead of RAW. That would be my first guess (a snapshot sketch follows after this list).
  4. Hey @JTok, I just started using this plugin. Awesome work. I was going to check whether I should start writing a small script for myself, using snapshots for my VMs, but then I found your plugin. Good stuff, really.
     I found out that "Number of days to keep backups" must be 0, or greater than or equal to 7. Values 1-6 do not work; they are "not in the specified format". Going through the code, I found line 1419:
         elif [ "$number_of_days_to_keep_backups" -ge 7 ] && [ "$number_of_days_to_keep_backups" -le 180 ]; then
     Why have you configured 7 as the minimum? It makes sense for backups, really, but imagine someone like me who wants to test it first, see whether files actually get deleted after one day, and then let it run automatically. Something you may want to consider changing?
     Also, I think you are missing an exception when deleting potential error logs. I mean, you have a proper exception so the script does not fail, but the logfile reports it as an error, whereas I think it is fine (or are you actually expecting to find a file at this point?):
         find: '/mnt/default_ssd_cache/Backup/Zeus/VMs/logs/*unraid-vmbackup_error.log': No such file or directory
         2021-07-02 22:31:19 information: did not find any error log files to remove.
     Line 2785:
         deleted_files=$(find "$backup_location/$log_file_subfolder"*unraid-vmbackup.log -type f -printf '%T@\t%p\n' | sort -t $'\t' -gr | tail -n +$log_files_plus_1 | cut -d $'\t' -f 2- | xargs -d '\n' -r rm -fv --)
     (A sketch of how the missing-file message could be avoided follows after this list.)
     Then, unfortunately, after taking the backup, the copy of the file failed according to the script's output:
         2021-07-02 22:46:33 failure: copy of /mnt/user/VMs/Athene-Server/vdisk1.img to /mnt/default_ssd_cache/Backup/Zeus/VMs/Athene-Server/20210702_2243_vdisk1.img.zst failed.
     But I can confirm the files have been created:
         root@Zeus-Server:/mnt/default_ssd_cache/Backup/Zeus/VMs/Athene-Server# ls -lisa
         total 30203084
         523493 0 drwxrwxrwx 1 root root 308 Jul 2 22:46 ./
         522862 0 drwxrwxrwx 1 root root 60 Jul 2 22:43 ../
         525076 128 -rw-rw-rw- 1 root users 131072 Jul 2 22:46 20210702_2243_76070b23-2217-0c51-808e-9be6c25b8f0e_VARS-pure-efi.fd
         525075 8 -rw-rw-rw- 1 root root 7194 Jul 2 22:46 20210702_2243_Athene-Server.xml
         525009 4225000 -rwxrwxrwx 1 root users 4326397656 Jul 2 22:46 20210702_2243_vdisk1.img.zst*
         525038 25977948 -rwxrwxrwx 1 root users 26601417527 Jul 2 22:46 20210702_2243_vdisk2.img.zst*
     Looking into the script, lines 203-228 of the default script, you are doing the copy and relying on the exit code of rsync and/or copy. As there is no way to debug this, I can't really see where the issue is. Can you suggest how I can help with this? I already cleaned up the folders, just to make sure it is really not a permission issue or an old snapshot.
     BTW, are you running incremental snapshots, or full snapshots every time?
     And, for those who want/need to use snapshots, I think the deletion is a problem in general. See, I have now chosen my SSD cache for backups, assuming Mover will move them later. You mentioned that the backup "PATH" must not be /mnt/user*, so you can only choose the cache or a specific disk. The latter can cause problems if the disk is full, etc. So the question here is: what happens to the files that get moved by Mover, since their path is then no longer "/mnt/mycache"? What is preventing you from using "user" or "user0" in the first place?
     Unsure if this format of feedback suits you; otherwise I can create pull requests on Git. Best, IceBoosteR
  5. Hi everyone, I am writing this post after spending the last day trying to get things to work, but I failed, so I need some help from you guys. Long story short, I migrated from DSM (Synology OS) to Unraid. On DSM I had some VMs running. Before I migrated, I saved the VMs completely:
     - as a RAW disk
     - as an OVA export
     - as an OVA export with VMware compatibility
     Regardless of what I do, I can't boot up my VMs. I was able to convert the VMware-compatible file into a RAW file, as well as into a qcow2 disk. The normal OVA file could not be converted into any format; the conversion threw an "invalid whatever" error. But even from the RAW disks that I have (Synology Virtual Machine Manager is basically KVM), I wasn't able to get any setup to boot. I basically tried every. single. option. in the Unraid VM manager, in all different styles and combinations: I changed the BIOS, the disk location, the disk format, how the disk is accessed (virtio, SATA, IDE...) aka the vDisk Bus, and much more. In all cases the machine came up and was able to "see" a disk, but was unable to mount or boot from it. SeaBIOS says something like "not able to boot from disk" ... "no bootable disk found". With OVMF I am always dropped into the interactive shell. It shows BLK devices, and also when I exit the shell, the BIOS can see "QEMU virtual disk". So from there I did a lot of things right, but not enough to boot from it. The interactive shell really refused any command (or I used the wrong ones?).
     I googled through the internet but found no really good help for my case, so I need your help. The only thing I did find was on this site (note: German) https://www.bjoerns-techblog.de/2019/02/migration-von-synology-vms/ that DSM uses /dev/sda as the default disk, whereas KVM accesses it via /dev/vda. But I was not able to edit anything in any GRUB config, or at least I did not find it. Instead, I tried the XML config and changed the disk target (default: target dev='hdc' bus='virtio'/>) to something else, but without success (a disk-bus sketch follows after this list). So my guess is that the disk is there but can't be mounted (wrong file layout or something), or the actual drivers are missing. Booting up fresh installs of W10 and Ubuntu worked like a charm. In this case, I am trying to get a W10 VM to run. Any help is much appreciated. -IceBoosteR
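
A minimal sketch of the OVA-to-qcow2 conversion discussed in posts 2 and 5, assuming the export is a standard OVA (which is just a tar archive containing the VMDK disk); the file names here are placeholders, since the real export names are not given in the posts.

    # An .ova is a tar archive; unpack it to get at the .vmdk disk image
    tar -xvf exported-vm.ova
    # Convert the extracted VMDK (or a RAW export, with -f raw) to qcow2 for KVM/Unraid
    qemu-img convert -p -f vmdk -O qcow2 exported-vm-disk1.vmdk vdisk1.qcow2
    # Sanity-check the result
    qemu-img info vdisk1.qcow2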
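
For the online-snapshot guess in post 3, a rough sketch of converting a RAW vdisk to qcow2 and taking a disk-only snapshot with virsh; the VM name and paths are placeholders, and whether the backup plugin itself handles qcow2 snapshots is a separate question.

    # Convert the RAW image to qcow2 (do this while the VM is shut down)
    qemu-img convert -p -f raw -O qcow2 /mnt/user/VMs/MyVM/vdisk1.img /mnt/user/VMs/MyVM/vdisk1.qcow2
    # After pointing the VM at the qcow2 file, take an external, disk-only snapshot while it runs
    virsh snapshot-create-as MyVM pre-backup --disk-only --atomic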
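
On the error-log cleanup noise from post 4, one possible way to skip the "No such file or directory" message is to expand the glob with nullglob before removing anything. This is only a suggestion sketched against the variable names quoted from the script ($backup_location, $log_file_subfolder); it leaves out the plugin's retention and sorting logic.

    # Only attempt the cleanup if matching error logs actually exist
    shopt -s nullglob
    error_logs=( "$backup_location/$log_file_subfolder"*unraid-vmbackup_error.log )
    if [ "${#error_logs[@]}" -gt 0 ]; then
        rm -fv -- "${error_logs[@]}"
    else
        echo "information: did not find any error log files to remove."
    fi
    shopt -u nullglob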
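
For the disk-target experiment in post 5, a hypothetical libvirt <disk> block (edited via virsh edit or the Unraid XML view) that switches the vDisk bus from virtio to SATA, so a Windows guest without virtio drivers can still find its boot disk; the source path and dev name are placeholders, not taken from the actual VM.

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Win10/vdisk1.img'/>
      <target dev='sda' bus='sata'/>
      <boot order='1'/>
    </disk>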