SirCadian

Members
  • Posts

    47
  • Joined

  • Last visited

  • Reputation

    11

  1. Afraid I don't. I think I may have identified the issue, though. Docker wouldn't start after I posted my last update; it looks like the docker.img file was corrupt. I deleted it and re-added all my containers. I still saw some CPU spikes that seem to be related to the Dynamix Cache Directories plugin (high 'find' and 'shfs' CPU usage), so I've disabled it. Things seem stable now. I'll live with it for a while and see if the issue is resolved. If not, I'll grab a diagnostics and an issue timestamp and post them here. Thanks for your help so far.
  2. Yes, although I had stopped Docker and the VMs at that point, as that was the only way I could get the diags. I can try to get everything up and running again and grab some diags then, if that will help?
  3. I use a user script to keep a persistent syslog (a sketch of that kind of script follows after this list). It's still an issue; I can't seem to run anything on the server without it becoming unstable. I've attached the diags. If you need earlier syslogs than the one in the diag zipfile, I can attach those. Thanks for looking at this. falcon-diagnostics-20240626-1557.zip
  4. My Unraid server has been running fine for a few years now, but I ran into problems yesterday: the server has become increasingly unresponsive. The GUI pages take a long time (minutes) to load or won't load at all, and the terminal won't load. I've tried to identify the issue but am no clearer as to what is causing it. I'm seeing IOWAIT kick in periodically, and shfs frequently takes up to 75% CPU. Things I have tried include:
     - Rebooting the server.
     - Disabling all the dockers.
     - Disabling all the VMs.
     At one point I had all dockers and VMs stopped and a parity check running. I tried to grab a diagnostics and it failed while trying to download the 132k zipfile. I managed to cancel the parity check and then re-ran the diagnostics; it completed the second time and downloaded. I'm currently leaving dockers and VMs disabled whilst I try to run a parity check, but I've no idea what to try next, and I'm surprised the issue is happening even with dockers and VMs disabled. Any suggestions as to what I can try to diagnose and fix this? (A simple CPU/iowait logging loop is sketched after this list.)
  5. Excellent. Thanks for the help. Much appreciated.
  6. Thanks. I've just taken another backup, as I made some changes to one of my VMs recently. I might create a user script to take a weekly backup of it (a rough sketch follows after this list). Do all the VMs need to be stopped for me to back it up? It looks like the file isn't locked, so I should just be able to rsync it to my backups location?
  7. I have a couple of quick questions that I can't find answers to in the forums. I use the vmbackup plugin to back up my VMs; it backs up a .fd, .qcow2/.img and .xml file for each VM. I have one manual backup of libvirt.img from some time ago. Does this file change regularly, and should I be running a scheduled backup of it? Is there anything else I need to back up in order to be able to restore my VMs? Thanks!
  8. OK. Last night's backup went through as expected:
     2024-02-07 06:45:58.617 INFO BACKUP_STATS Files: 65 total, 81,341M bytes; 65 new, 81,341M bytes
     2024-02-07 06:45:58.617 INFO BACKUP_STATS File chunks: 12739 total, 81,341M bytes; 792 new, 4,772M bytes, 3,009M bytes uploaded
     2024-02-07 06:45:58.617 INFO BACKUP_STATS Metadata chunks: 4 total, 943K bytes; 4 new, 943K bytes, 723K bytes uploaded
     2024-02-07 06:45:58.617 INFO BACKUP_STATS All chunks: 12743 total, 81,342M bytes; 796 new, 4,773M bytes, 3,010M bytes uploaded
     2024-02-07 06:45:58.617 INFO BACKUP_STATS Total running time: 00:15:55
     Duplicacy is definitely de-duplicating and only uploading changed chunks; last night's upload was around 4% of the total backup size. Duplicacy also compresses the uncompressed data on upload, so it takes ~50% less space in the remote bucket. I'll likely still only run this once a week, as I have copies held both locally on the array and on my local PC (once daily, a user script waits until it sees Unraid mount a remote share on my PC and then copies the local backups across; a sketch of that kind of script follows after this list). Now I just need to have a think about VM backup, particularly my Home Assistant instance...I really don't fancy having to rebuild that from scratch in the event of a catastrophic failure.
  9. Not testing untarred vs tarred, but testing that the tar files get deduped properly (see the post above). It looks to work, but I want to double-check.
  10. I'm in the process of testing that now and will let you know. Local uncompressed backups are about double the size of the compressed backups. Duplicacy has built-in compression, which reduces the remote backup size down to around half the size of the local backups. It seems to have de-duplicated properly, as it only transferred around 10% of the appdata backup chunks during the backup process. I want to run another backup overnight to double-check this, though.
  11. I didn't; I had just assumed Duplicacy wouldn't look inside the tar files. After your comment I dug around in the Duplicacy forums and came across this post. Thanks for the pointer; I'll give things a whirl without compression and see how it goes.
  12. I'm running Unraid 6.12.6. I've been a happy user of Unraid for a few years now. I'm thinking about re-organising my data and had a couple of questions:
      1. If I create a user share and map it to a number of disks, do those disks spin up individually rather than as a group when a file is accessed? i.e. I have usershare1 allocated to disk1 & disk2, and I have the Dynamix Cache Directories plugin installed and set to cache usershare1. If I read a file through usershare1 that is physically on disk2, will only disk2 spin up?
      2. Do hard links work for user shares? i.e. Can a hard link be created on disk1 in usershare1 that links to a file on disk2 in the same user share? I understand that I don't get to decide which disks these reside on...I'm just trying to understand whether hard links will only work intermittently in a user share context (depending on whether Unraid decides to locate the hard link and the file on the same disk). Thanks in advance for any help.
      -----------------------------------
      Update
      I found a good explainer on hard links and how they work here: https://www.redhat.com/sysadmin/linking-linux-explained I used it to run a quick experiment in the terminal to look at if/how hard links work in an Unraid user share context (the commands used are sketched after this list). Hard links do seem to work in user shares. If you attempt to create a hard link to a file in a user share, the link is created on the same physical disk as the file, even if your allocation method would ordinarily put new files on a different disk. e.g. I have a "Backups" share set to high-water allocation between disk1 and disk3; disk1 is 89% full and disk3 is 17% full, so all new files in the share are normally allocated to disk3. I created a file directly on disk1 (by creating it in /mnt/disk1/Backups) and then created a hard link to that file through the user share (by creating it in /mnt/user/Backups). The hard link winds up on disk1, not disk3. As hard links take very little space (they report the same size as the file they link to, but in practice a hard link is just another directory entry pointing at the same inode), this makes sense...they're not going to fill up the disk.
      -------------------------------------
      Update 2
      I ran some tests to work out spin up/down behaviour. I spun down disk2 & disk3 and did a recursive directory listing of disk2 in a terminal (ls -alR /mnt/disk2). Neither disk spun up, so I'm guessing that Dynamix Cache Directories is doing its job. I then accessed a file that is physically located on disk2 via a user share that is allocated to both disk2 and disk3. Only disk2 spun up; disk3 stayed in standby. Hopefully this will be of use further down the line to someone else with similar questions.
  13. I've been trying this. The current version of Appdata Backup creates a separate tar file for each container's appdata, and it also backs up the container config XML as a separate file. So, for example, for Nextcloud I have:
      nextcloud.tar.gz
      my-nextcloud.xml
      I have similar file pairs for every container. You'll also have a timestamped flash-backup tar file if you've selected the option to do a flash backup. A word of warning regarding rclone (I assume you mean rclone rather than rsync?) to cloud: as the tar files are subtly different each time you take a backup, every tar file is uploaded every time you rclone, and the benefit of de-duplicating and only synchronising changes is lost. For me this means approximately 70GB of upload every time I run the remote copy to the cloud. I've settled on a remote copy to an SMB share on another local machine for now. What I'd like to see in the future is the ability to back up into separate directories without tar/gzip, so that tools like rclone and Duplicacy can synchronise only the small changed files up to the cloud rather than entire appdata tar files.
  14. Thanks, I'll keep an eye out for updates. In the meantime, if I change to stopping all containers, backing up and then restarting all containers...do PreBackup or PostBackup run in the window while the containers are stopped? I'm assuming not.
  15. A few quick questions about scripts:
      - Do pre-run and post-run execute before/after Appdata Backup does anything at all, i.e. right at the start and end of the whole backup process?
      - Are pre-backup and post-backup run per docker container, so if I have 10 containers they will run 10 times?
      - Are pre-backup/post-backup run after a container has been stopped and before it is restarted?
      - Are any parameters passed to the scripts? It would be useful to have the current backup directory name and the current container directory name passed as parameters. Currently I work out the current backup directory name by just grabbing the most recently created directory, like this: BACKUPDIR=$(ls -td /mnt/user/Backups/appdatabackup/* | head -1), but that feels like a bit of a kludge.
      Essentially, I'm trying to get a working remote backup of appdata to Backblaze using Duplicacy to de-duplicate. At the moment, Appdata Backup wraps everything up into per-container tar files. This is fine (preferable, even) for local backup, but because each tar file probably contains at least one changed file, every tar file differs from the previous backup and the entire contents of appdata (~70GB for me) gets uploaded every time I run the backup. I plan to use a pre-backup script (assuming it runs after each container has been stopped) to copy the container's appdata directory to a local backup location and then use a scheduled Duplicacy backup to send the files to Backblaze (a rough sketch of such a pre-backup script follows below). That should give me local tar backups from Appdata Backup and a nice versioned, de-duplicated backup on Backblaze that makes efficient use of my cloud storage and network connection.
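
Sketch for post 3 above: a minimal version of the kind of user script mentioned there for keeping a persistent syslog. The destination path is an assumption, not the poster's actual script.

#!/bin/bash
# Hypothetical persistent-syslog user script: copy the in-RAM syslog to the
# array so it survives a reboot. Destination path is an assumption.
DEST=/mnt/user/system/syslog

mkdir -p "$DEST"
# Keep one rolling copy per day; /var/log/syslog lives in RAM and is lost on reboot.
cp /var/log/syslog "$DEST/syslog-$(date +%Y%m%d).txt"

# Prune copies older than 30 days.
find "$DEST" -name 'syslog-*.txt' -mtime +30 -delete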
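
Sketch for post 4 above: not a fix, just a simple logging loop (log path assumed) that snapshots iowait and the shfs/find processes once a minute, so CPU spikes can be matched against syslog timestamps later.

#!/bin/bash
# Hypothetical diagnostic loop; the log path is an assumption.
LOG=/mnt/user/system/cpu-snapshots.log

while true; do
    SNAP=$(top -bn1)
    {
        date
        # The "%Cpu(s)" summary line from top includes the iowait ("wa") figure.
        echo "$SNAP" | head -5
        # shfs is the FUSE process behind /mnt/user; 'find' is the process noted in post 1.
        echo "$SNAP" | grep -E 'shfs|find'
    } >> "$LOG"
    sleep 60
done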
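
Sketch for post 6 above: a rough weekly user script for copying libvirt.img to a backups share. Both paths are assumptions, and whether VMs must be stopped first is exactly the open question in that post, so this assumes it runs while the VMs are idle.

#!/bin/bash
# Hypothetical weekly backup of libvirt.img. Paths are assumptions;
# check Settings > VM Manager for the actual libvirt storage location.
SRC=/mnt/user/system/libvirt/libvirt.img
DEST=/mnt/user/Backups/libvirt

mkdir -p "$DEST"
# Copying while VMs/libvirt are quiet avoids capturing a half-written image.
rsync -a "$SRC" "$DEST/libvirt-$(date +%Y%m%d).img"

# Keep only the last 8 weekly copies.
ls -t "$DEST"/libvirt-*.img 2>/dev/null | tail -n +9 | xargs -r rm --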
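
Sketch for post 8 above: one way the "wait until the remote share is mounted, then copy the local backups across" user script could look. The mount point and source path are assumptions.

#!/bin/bash
# Hypothetical wait-then-copy script; paths are assumptions.
MOUNT=/mnt/remotes/MYPC_Backups
SRC=/mnt/user/Backups/appdatabackup/

# Poll for up to 10 minutes for the remote share to appear.
for _ in $(seq 1 60); do
    mountpoint -q "$MOUNT" && break
    sleep 10
done

if mountpoint -q "$MOUNT"; then
    rsync -a --delete "$SRC" "$MOUNT/appdatabackup/"
else
    echo "Remote share not mounted; skipping copy." >&2
fi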
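
Sketch for the first update in post 12 above: terminal commands that reproduce the hard-link experiment. The share name and disk numbers follow that example; the file names are made up.

#!/bin/bash
# Create a file directly on disk1, then hard-link it through the user share.
touch /mnt/disk1/Backups/original.bin
ln /mnt/user/Backups/original.bin /mnt/user/Backups/hardlink.bin

# Inspect the disk paths directly: both names should share one inode on disk1,
# and nothing should have landed on disk3 despite high-water allocation.
ls -li /mnt/disk1/Backups/
ls -li /mnt/disk3/Backups/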
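
Sketch for post 15 above: a rough shape for the pre-backup idea described there, assuming no parameters are passed to the script (the open question in that post), so it simply mirrors the whole appdata tree into an uncompressed directory for Duplicacy to de-duplicate. Paths are assumptions.

#!/bin/bash
# Hypothetical pre-backup script: mirror appdata (uncompressed) so Duplicacy
# only has to upload the files that actually changed. Paths are assumptions.
SRC=/mnt/user/appdata/
MIRROR=/mnt/user/Backups/appdata-mirror/

mkdir -p "$MIRROR"
# --delete keeps the mirror an exact copy, so removed files do not linger.
rsync -a --delete "$SRC" "$MIRROR"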