saitoh183 Posted August 22
Backup has been running since the 17th and abort isn't working... how can I kill it without restarting the server, if possible? It is done, just not stopping.
KluthR (Author) Posted August 22
26 minutes ago, saitoh183 said: Backup is still running
Interesting. Please open a terminal and enter „ps aux | grep backup“. I can't imagine what should run this long after the „Done“ message…
saitoh183 Posted August 22
1 hour ago, KluthR said: Interesting. Please open a terminal and enter „ps aux | grep backup“. I can't imagine what should run this long after the „Done“ message…
KluthR (Author) Posted August 22
Please check /tmp/appdata.backup/running. This file should contain a number. Run „ps aux | grep *number*“.
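In other words, roughly the following (assuming the number in that file really is the PID of the backup script):
PID=$(cat /tmp/appdata.backup/running)   # the number recorded by the plugin
ps aux | grep "$PID" | grep -v grep      # does that process still exist?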
Vitek Posted August 22
How do I make the plugin stop creating backup XML files for containers that have already been deleted and no longer exist? In the backup folder I can see plenty of my-*.xml files that point to containers that no longer exist.
TrivariAlthea Posted August 22
On 8/19/2024 at 9:44 PM, KluthR said: The one which is set as destination: /mnt/cloud1/appdata. However: are you sure that the filesystem is ok? The error message is very clear. What does Unraid show in its log screen? Please try without compression (or use the single-core option, not multicore).
Will check once I get home, completely missed your response as I didn't get a notification... Will also try the single-core, no-compression approach.
On 8/19/2024 at 10:47 PM, Kilrah said: The filesystem is "virtual" since it's an rclone mount. Maybe it chokes on files taking time to be created, too big, connection lost, whatever changed on the cloud provider side... I'd rather back up to a local destination, then sync that to the rclone mount.
Hi, would you mind providing a way of doing so? I always worked with rclone only in terms of a direct mount. Thanks a lot!
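(A minimal sketch of the "back up locally, then sync" approach Kilrah describes, assuming the plugin's destination is changed to a local share such as /mnt/user/backups/appdata and the rclone remote is named pcloud — the paths and remote name here are only placeholders, not tested:
rclone sync /mnt/user/backups/appdata pcloud:appdata-backups --progress   # push the finished local backup to the cloud
This could be run after the backup completes, for example from a scheduled User Scripts job.)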
Kilrah Posted August 22
1 hour ago, Vitek said: How do I make the plugin stop creating backup XML files for containers that have already been deleted and no longer exist? In the backup folder I can see plenty of my-*.xml files that point to containers that no longer exist.
Go to Apps → Previous Apps and remove the ones you don't care about anymore.
saitoh183 Posted August 22
11 hours ago, KluthR said: Please check /tmp/appdata.backup/running. This file should contain a number. Run „ps aux | grep *number*“.
[screenshot attached]
bmartino1 Posted August 22
FYI, with Unraid version 7 we may need to back up additional folders for VM recovery and data. With the introduction of snapshots, we will need to copy and back up the snapshot and snapshotdb folders to keep for recovery.
sumsh Posted August 23
Is it possible to use this tool to update but NOT back up a Docker container? For example, Cloudflared has no configuration files, so it generates warnings on backup. I'd like to keep it updated, but there is no need to back it up.
ytddewqf Posted August 24
Good morning,
Whilst reviewing my appdata backup, I noticed that it was backing up historic Docker XML files going back years, most of which I no longer use. This led me to locate my "previously installed apps" list under Apps (never even knew these were there), and I removed them all. However, after another manual backup, the old XML files remain. I've tried scrubbing the Docker image and looking around my system to see where else these old Docker templates could be stored, but I've not had much luck. Any advice would be greatly appreciated.
itimpi Posted August 24
The templates are stored on the flash drive under config/plugins/dockerMan/templates-user/
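(From a terminal that would be the following, assuming the usual /boot mount point for the flash drive; the template file name here is just an example:
ls /boot/config/plugins/dockerMan/templates-user/                        # list all stored user templates
rm /boot/config/plugins/dockerMan/templates-user/my-OldContainer.xml     # delete a template for a container you no longer use
)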
ytddewqf Posted August 24
10 minutes ago, itimpi said: The templates are stored on the flash drive under config/plugins/dockerMan/templates-user/
Thank you kindly.
KluthR (Author) Posted August 25
On 8/22/2024 at 7:16 PM, saitoh183 said: [screenshot]
Hmm. The process is dead but the system still lists it (not in the screenshot). Delete the „running“ file for now.
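(Presumably that just means:
rm /tmp/appdata.backup/running
so the plugin no longer thinks a backup is still in progress.)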
KluthR (Author) Posted August 25
On 8/23/2024 at 3:38 PM, sumsh said: Is it possible to use this tool to update but NOT back up a Docker container? For example, Cloudflared has no configuration files, so it generates warnings on backup. I'd like to keep it updated, but there is no need to back it up.
I would say: yes. Open the container's advanced settings. I believe there is an option to skip just the backup.
KluthR (Author) Posted August 25
On 7/21/2024 at 2:30 PM, Zotarios said: Is there a way in bash to know if the script is running?
There is a /tmp/appdata.backup/running file which gets removed when the script is no longer running.
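(So a simple bash check, based only on that file, could look roughly like:
if [ -f /tmp/appdata.backup/running ]; then
    echo "appdata.backup is running (PID $(cat /tmp/appdata.backup/running))"
else
    echo "appdata.backup is not running"
fi
)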
KluthR (Author) Posted August 25
On 7/22/2024 at 7:57 AM, JUST-Plex said: Where does the original archive come from?
I call Unraid's backup method and save the original archive then. It's Unraid's way of making the flash backup.
TrivariAlthea Posted August 25
On 8/22/2024 at 2:11 PM, TrivariAlthea said: Will check once I get home, completely missed your response as I didn't get a notification... Will also try the single-core, no-compression approach. Hi, would you mind providing a way of doing so? I always worked with rclone only in terms of a direct mount. Thanks a lot!
@Vitek also, I would agree with you on the fact that it is failing because of delay/network speeds, but it makes little sense because it worked for months before... According to my chat with pcloud, there were no changes on their side, and I am not aware of any changes on my side either.
@KluthR tried with no compression and the input/output error is now in a different container. In terms of the Unraid log itself, here is what was popping up during the backup process:

Aug 25 20:18:46 MS-01 kernel: ata1: EH complete
Aug 25 20:18:49 MS-01 kernel: ata1.00: exception Emask 0x0 SAct 0x6c0001c0 SErr 0x0 action 0x0
Aug 25 20:18:49 MS-01 kernel: ata1.00: irq_stat 0x40000008
Aug 25 20:18:49 MS-01 kernel: ata1.00: failed command: READ FPDMA QUEUED
Aug 25 20:18:49 MS-01 kernel: ata1.00: cmd 60/a0:d0:f0:02:70/03:00:0e:00:00/40 tag 26 ncq dma 475136 in
Aug 25 20:18:49 MS-01 kernel: res 41/40:00:cf:04:70/00:00:0e:00:00/40 Emask 0x409 (media error) <F>
Aug 25 20:18:49 MS-01 kernel: ata1.00: status: { DRDY ERR }
Aug 25 20:18:49 MS-01 kernel: ata1.00: error: { UNC }
Aug 25 20:18:49 MS-01 kernel: ata1.00: configured for UDMA/100
Aug 25 20:18:49 MS-01 kernel: sd 1:0:0:0: [sdb] tag#26 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=DRIVER_OK cmd_age=1s
Aug 25 20:18:49 MS-01 kernel: sd 1:0:0:0: [sdb] tag#26 Sense Key : 0x3 [current]
Aug 25 20:18:49 MS-01 kernel: sd 1:0:0:0: [sdb] tag#26 ASC=0x11 ASCQ=0x4
Aug 25 20:18:49 MS-01 kernel: sd 1:0:0:0: [sdb] tag#26 CDB: opcode=0x28 28 00 0e 70 02 f0 00 03 a0 00
Aug 25 20:18:49 MS-01 kernel: I/O error, dev sdb, sector 242222287 op 0x0:(READ) flags 0x0 phys_seg 57 prio class 2
Aug 25 20:18:49 MS-01 kernel: md: disk1 read error, sector=242222216
Aug 25 20:18:49 MS-01 kernel: md: disk1 read error, sector=242222224
Aug 25 20:18:49 MS-01 kernel: md: disk1 read error, sector=242222232
[... identical "md: disk1 read error" lines for every eighth sector up to sector=242222664 ...]
Aug 25 20:18:49 MS-01 kernel: ata1: EH complete
Aug 25 20:21:57 MS-01 kernel: veth5cc4831: renamed from eth0
Aug 25 20:22:00 MS-01 kernel: eth0: renamed from veth50c9307
Aug 25 20:22:42 MS-01 flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
Aug 25 20:24:05 MS-01 kernel: docker0: port 6(vethea6cb41) entered disabled state
Aug 25 20:24:05 MS-01 kernel: vethce9f960: renamed from eth0
Aug 25 20:24:05 MS-01 kernel: docker0: port 6(vethea6cb41) entered disabled state
Aug 25 20:24:05 MS-01 kernel: device vethea6cb41 left promiscuous mode
Aug 25 20:24:05 MS-01 kernel: docker0: port 6(vethea6cb41) entered disabled state
Aug 25 20:24:10 MS-01 kernel: docker0: port 6(veth92d2af9) entered blocking state
Aug 25 20:24:10 MS-01 kernel: docker0: port 6(veth92d2af9) entered disabled state
Aug 25 20:24:10 MS-01 kernel: device veth92d2af9 entered promiscuous mode
Aug 25 20:24:10 MS-01 kernel: eth0: renamed from vetha835d21
Aug 25 20:24:10 MS-01 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth92d2af9: link becomes ready
Aug 25 20:24:10 MS-01 kernel: docker0: port 6(veth92d2af9) entered blocking state
Aug 25 20:24:10 MS-01 kernel: docker0: port 6(veth92d2af9) entered forwarding state

Also, when the Unraid host stops being reachable after the backup process stops, it displays this in the log:

Aug 25 20:25:00 MS-01 winbindd[10145]: [2024/08/25 20:25:00.472105, 0] ../../source3/winbindd/winbindd_samr.c:71(open_internal_samr_conn)
Aug 25 20:25:00 MS-01 winbindd[10145]: open_internal_samr_conn: Could not connect to samr pipe: NT_STATUS_CONNECTION_DISCONNECTED
Aug 25 20:25:01 MS-01 winbindd[10145]: [2024/08/25 20:25:01.641314, 0] ../../source3/winbindd/winbindd_samr.c:71(open_internal_samr_conn)
Aug 25 20:25:01 MS-01 winbindd[10145]: open_internal_samr_conn: Could not connect to samr pipe: NT_STATUS_CONNECTION_DISCONNECTED

Just in case, I also shared debug logs, the id is: `a9ae0219-8896-467e-8649-36b1f89f27d4`
d3m3zs Posted August 25
Hi @KluthR, I am getting this error:

[26.08.2024 00:31:52][ℹ️][Main] 👋 WELCOME TO APPDATA.BACKUP!! :D
[26.08.2024 00:31:52][ℹ️][Main] Backing up from: /mnt/user/docker
[26.08.2024 00:31:52][ℹ️][Main] Backing up to: /mnt/user/data/backup_containers/ab_20240826_003152
[26.08.2024 00:31:52][❌][Main] Cannot create destination folder!
[26.08.2024 00:31:56][ℹ️][Main] Checking retention...
[26.08.2024 00:31:56][ℹ️][Main] DONE! Thanks for using this plugin and have a safe day ;)
[26.08.2024 00:31:56][ℹ️][Main] ❤️ Debug log id = 5c5a43ea-cc18-4bf3-9cdc-af25581ed09a

Could you take a look?
KluthR (Author) Posted August 26
The error seems clear: check whether the destination is writeable.
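(A quick manual test from a terminal, using the destination path from the log above, might be:
ls -ld /mnt/user/data/backup_containers    # owner and permissions of the destination
touch /mnt/user/data/backup_containers/.writetest && rm /mnt/user/data/backup_containers/.writetest && echo "writeable"
)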
Kilrah Posted August 26
11 hours ago, TrivariAlthea said: tried with no compression and the input/output error is now in a different container. In terms of the Unraid log itself, here is what was popping up during the backup process:
That shows you have serious problems with your disk1: either a bad connection, or the drive is failing and unable to read some sectors, presumably the data that was supposed to be backed up. Unrelated to the backup plugin.
TrivariAlthea Posted August 26
6 hours ago, Kilrah said: That shows you have serious problems with your disk1: either a bad connection, or the drive is failing and unable to read some sectors, presumably the data that was supposed to be backed up. Unrelated to the backup plugin.
Correct, it seems that the SMART checks are failing because of old age. Will buy a new drive and replace it. After that is done I will come back here and update whether that solved the issue!
jwiener3 Posted August 26
On 8/19/2024 at 9:09 AM, jwiener3 said: I am having an issue where I am getting an error about the destination being unavailable or not writeable when running the backup to an NFS share. This was working before and stopped, and if I go to the CLI I can go to the destination directory and create folders and files. I have tried re-entering both the source and destination directories and it hasn't helped. Does this process run under different privileges? I have submitted the debug with the following ID, 05996226-dd22-42e0-b7e1-5ff3453cd482, but it looks empty. How can I troubleshoot this?

[19.08.2024 08:59:19][ℹ️][Main] 👋 WELCOME TO APPDATA.BACKUP!! :D
[19.08.2024 08:59:19][❌][Main] Destination is unavailable or not writeable!
[19.08.2024 08:59:19][ℹ️][Main] Checking retention...
[19.08.2024 08:59:19][ℹ️][Main] DONE! Thanks for using this plugin and have a safe day ;)

Checking on this to see if anyone has a suggestion on how I can troubleshoot the credentials the backup is using. I do not have any issues creating files and folders from the CLI of Unraid.
d3m3zs Posted August 26
16 hours ago, KluthR said: The error seems clear: check whether the destination is writeable.
Thanks. I didn't touch anything and it stopped working; I just ran
chmod -R 777 /mnt/user/data/backup_containers
and now it seems fixed.
TrivariAlthea Posted August 27
11 hours ago, TrivariAlthea said: Correct, it seems that the SMART checks are failing because of old age. Will buy a new drive and replace it. After that is done I will come back here and update whether that solved the issue!
So, I fixed the whole rclone issue by changing the cache dir ("--cache-dir="): by default it caches to RAM, and the total size of my existing backups overfilled the RAM. Changing it to a share fixed the issue I was having altogether. (Caching, at least for writes, is required with pcloud)... The drive issues were not related to it, although I have already ordered a replacement hard drive to fix that too. Thanks for all the help with troubleshooting!
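(For reference, an rclone mount that keeps its write cache on a share instead of RAM could look roughly like this; the remote name, mount point and cache path are placeholders, not the poster's exact command:
rclone mount pcloud:backups /mnt/cloud1 --vfs-cache-mode writes --cache-dir /mnt/user/system/rclone-cache
)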