Deadringers

Members · 16 posts

  1. Hey, I'm monitoring the eth0 interface with Zabbix, but it's not adding up with what Unraid is reporting. Zabbix is reporting a few Kbps, while Unraid is reporting 100+ Mbps. Unraid is correct, as I was watching a film through the Plex container, which uses this interface. Is there something I need to change? Screenshots are attached showing what the Unraid web UI reports and what Zabbix reports.
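     The usual culprit for this kind of mismatch is units and preprocessing: the Unraid dashboard shows bits per second, while a raw Zabbix net.if.in/net.if.out item returns a byte counter that still needs a "Change per second" step (and a multiply-by-8 if you want bits). As a sanity check independent of both tools, here is a minimal sketch that samples the kernel's own counters for eth0 (interface name assumed from the post):
        # Sample eth0's receive counter twice and convert to Mbit/s.
        # Adjust IFACE if your interface is named differently.
        IFACE=eth0
        RX1=$(cat /sys/class/net/$IFACE/statistics/rx_bytes)
        sleep 10
        RX2=$(cat /sys/class/net/$IFACE/statistics/rx_bytes)
        echo "$(( (RX2 - RX1) * 8 / 10 / 1000000 )) Mbit/s received (10s average)"
     If that figure matches the Unraid dashboard while Plex is streaming, the Zabbix item is most likely graphing raw counter values, or bytes rather than per-second bits.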
  2. Hey @KluthR, thanks for your work on this and the new app! I updated to the latest Unraid (6.12.6) and installed the new app as requested. However, my backups are not working any more and there are 3 issues I'm unsure about.
     1: The app is complaining about backing up external volumes. I'm not sure exactly what this means.
     2: I think it's related to number 1: it's also complaining about removing container mappings because they are "source paths", but again I'm unsure what this means exactly.
     3: Perhaps related to 1 and 2, it's saying that the container doesn't have any volume to back up. But if it removed a mapping due to the "source path" issue, then I guess that is the problem to fix?
     I've "shared my debug log" with you; the ID is: 939bd04b-535a-47e5-85b0-4eb58aeef6df
     [12.02.2024 10:21:19][ℹ️][Main] 👋 WELCOME TO APPDATA.BACKUP!! :D
     [12.02.2024 10:21:19][ℹ️][Main] Backing up from: /mnt/user/appdata/CloudBerryBackup, /mnt/user/appdata/pihole, /mnt/user/appdata/qbittorrent
     [12.02.2024 10:21:19][ℹ️][Main] Backing up to: /mnt/user/Unraid_App_Data_Backups/ab_20240212_102119
     [12.02.2024 10:21:19][ℹ️][Main] Selected containers: CloudBerryBackup, pihole, qbittorrent
     [12.02.2024 10:21:19][ℹ️][Main] Saving container XML files...
     [12.02.2024 10:21:19][ℹ️][Main] Method: Stop all container before continuing.
     [12.02.2024 10:21:19][ℹ️][qbittorrent] Stopping qbittorrent... done! (took 6 seconds)
     [12.02.2024 10:21:25][ℹ️][pihole] Stopping pihole... done! (took 5 seconds)
     [12.02.2024 10:21:30][ℹ️][CloudBerryBackup] Stopping CloudBerryBackup... done! (took 1 seconds)
     [12.02.2024 10:21:31][ℹ️][Main] Starting backup for containers
     [12.02.2024 10:21:31][ℹ️][qbittorrent] Removing container mapping "/mnt/user/appdata/qbittorrent" because it is a source path (exact match)!
     [12.02.2024 10:21:31][ℹ️][qbittorrent] Should NOT backup external volumes, sanitizing them...
     [12.02.2024 10:21:31][⚠️][qbittorrent] qbittorrent does not have any volume to back up! Skipping. Please consider ignoring this container.
     [12.02.2024 10:21:31][ℹ️][pihole] Should NOT backup external volumes, sanitizing them...
     [12.02.2024 10:21:31][ℹ️][pihole] Calculated volumes to back up: /mnt/user/appdata/pihole/pihole, /mnt/user/appdata/pihole/dnsmasq.d
     [12.02.2024 10:21:31][ℹ️][pihole] Backing up pihole...
     [12.02.2024 10:22:32][ℹ️][pihole] Backup created without issues
     [12.02.2024 10:22:32][ℹ️][pihole] Verifying backup...
     [12.02.2024 10:22:42][ℹ️][CloudBerryBackup] Removing container mapping "/mnt/user/appdata/CloudBerryBackup" because it is a source path (exact match)!
     [12.02.2024 10:22:42][ℹ️][CloudBerryBackup] Should NOT backup external volumes, sanitizing them...
     [12.02.2024 10:22:42][⚠️][CloudBerryBackup] CloudBerryBackup does not have any volume to back up! Skipping. Please consider ignoring this container.
     [12.02.2024 10:22:42][ℹ️][Main] Set containers to previous state
     [12.02.2024 10:22:42][ℹ️][CloudBerryBackup] Starting CloudBerryBackup... (try #1) done!
     [12.02.2024 10:22:44][ℹ️][pihole] Starting pihole... (try #1) done!
     [12.02.2024 10:22:46][ℹ️][qbittorrent] Starting qbittorrent... (try #1) done!
     [12.02.2024 10:22:49][ℹ️][Main] Checking retention...
     [12.02.2024 10:22:49][ℹ️][Main] DONE! Thanks for using this plugin and have a safe day ;)
     [12.02.2024 10:22:49][ℹ️][Main] ❤️
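     For what it's worth, the log suggests the plugin filters each container's mappings against the configured source path: a mapping that is exactly the source path (e.g. /mnt/user/appdata/qbittorrent) gets dropped, and with "backup external volumes" disabled anything outside appdata is dropped too, which can leave a container with nothing to back up. A hedged way to see which host paths each container actually maps (the plugin's own filtering logic may differ) is:
        # List host-path -> container-path mappings for the containers named in
        # the log above, using docker inspect's Go-template output.
        for c in qbittorrent pihole CloudBerryBackup; do
          echo "== $c =="
          docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' "$c"
        done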
  3. You can all ignore me... I fixed it, and it was an issue on my switch, not the Unraid device!
  4. I've installed an Intel dual gigabit PCIe card today, which I planned to pass through to a pfSense VM. At first boot, one of the ports on the card took over as the "main" or "default" port, so I then used the "Interface Rules" page to assign my onboard NIC as eth0. Since then, nothing works at all with regard to Unraid and network access. It has the IP/gateway configured, but it's seemingly not "using" the ethernet port. I've booted pfSense into GUI mode and pinged 8.8.8.8 while taking a packet capture on the switch, and I see zero traffic from that Unraid box on the wire as far as my switch is concerned. If I do a traceroute from the GUI on the Unraid box, it fails and the only hop is its own IP. It "feels" like this Unraid box has not correctly initiated its own network connection, and is unable to use its own port now...
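     One hedged first step from the local console (since the box has no network): confirm which physical port actually ended up as eth0 after the interface-rules change, and whether it has link. The commands below are standard Linux tools; the network-rules.cfg path is my assumption for where Unraid stores the MAC-to-ethX mapping.
        ip -br link show                          # interfaces, MACs and link state at a glance
        ethtool eth0 | grep -i 'link detected'    # is there physical link on eth0?
        cat /boot/config/network-rules.cfg        # MAC-to-ethX assignments (path assumed)
     If eth0 now points at the MAC of an unplugged port on the new card, that alone would explain the zero traffic on the wire.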
  5. Hey all, I've found that the mover script is not running correctly on its schedule. It hasn't moved any files for over 1.5 months, as far as I can tell. However, when I manually ran it today, it moved all the files no problem. A screenshot of my settings is below - can someone please help me understand if I've done something wrong?
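     Since a manual run works, one hedged thing to check is whether the schedule ever actually fires: look for a mover entry in cron and for mover lines in the syslog around the scheduled time. The /etc/cron.d location is an assumption about where the schedule gets written.
        grep -ri mover /etc/cron.d/ 2>/dev/null     # is a mover schedule present in cron?
        grep -i mover /var/log/syslog | tail -n 20  # did any scheduled runs log anything?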
  6. Hey all, so I understand that the "load" number in Linux can give you an indication of how busy the CPU is, e.g. a load of 1 on a 1-CPU machine would be kinda like saying you're at 100% CPU utilization. But on my system I'm seeing a few conflicting things:
     - htop is reporting less than 15% total CPU usage
     - htop is also reporting a load average of 9-10, which on my 4 core / 8 thread CPU should be 100% usage
     - the Unraid "Dashboard" is showing 60-80% CPU usage
     None of it is quite adding up for me. Screenshots below:
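     One piece that may reconcile the numbers: the Linux load average counts runnable tasks plus tasks stuck in uninterruptible (usually disk I/O) sleep, so a load of 9-10 with only 15% CPU often means processes are waiting on storage rather than burning CPU. A quick, hedged way to check is:
        ps -eo state,pid,comm | awk '$1 == "D"'   # tasks in uninterruptible (I/O) sleep
        top -bn1 | grep -i '%cpu'                 # the "wa" field is time spent in iowait
        uptime                                    # the three load averages for reference
     If the D-state list is long or "wa" is high, the load is coming from I/O wait, which htop's CPU bars and the Unraid dashboard count differently.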
  7. I see this was updated, thank you very much! It's now able to back up hundreds of GB to Backblaze without failing like it did before... It only took CloudBerry since November to fix this.
  8. Hey @Djoss - FYI, they have a new version out now: 4.1. If you could please update this, that would be awesome.
  9. Hey @Djoss, I raised a bug* with CloudBerry and they've fixed it. They advised: Now I notice on your GitLab that the CloudBerry image you use is: So just to confirm, when they release the fix in 4.1, you'll update the Docker image (at some point) to use that? Thanks for all your work on this!
     * The bug relates to how they back up to B2: CloudBerry was not following the correct procedure when receiving a 503 or 500 response from B2. Documentation here: https://www.backblaze.com/blog/b2-503-500-server-error/#:~:text=The second 503 is a,back to the dispatch server.
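     For anyone following along, the gist of that Backblaze article is that a 500/503 during an upload should not simply be retried against the same upload URL: the client is supposed to request a fresh upload URL and retry, ideally with backoff. A minimal sketch of that loop is below; get_upload_url and upload_part are hypothetical placeholders standing in for the real b2_get_upload_url / b2_upload_file calls, not anything CloudBerry exposes.
        # Placeholder retry loop per the B2 guidance: on failure, back off and
        # fetch a brand-new upload URL before trying again.
        # $UPLOAD_URL and $FILE are assumed to have been set earlier.
        attempt=0
        until upload_part "$UPLOAD_URL" "$FILE"; do
          attempt=$((attempt + 1))
          if [ "$attempt" -ge 5 ]; then
            echo "giving up after $attempt attempts" >&2
            exit 1
          fi
          sleep $((2 ** attempt))          # exponential backoff
          UPLOAD_URL=$(get_upload_url)     # 500/503 => request a new upload URL
        done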
  10. Hi @Djoss, thanks for your time on this so far! I'm struggling with permissions in CloudBerry on restore. The container is able to read from my storage and back up those files without issue, but when I do a restore it fails with a permissions error. I tried to give all permissions with the following, but no luck:
     chmod a+rwx /mnt/disk2/UNRAID_Backups/
     Running the container with the "Privileged" switch set to on did not help either. This is the location where I store my appdata files using the app "CA Backup / Restore Appdata". So "CA Backup / Restore Appdata" writes to that directory, CloudBerry then backs it up to Backblaze, but CloudBerry is unable to then restore those files due to a permissions error. Any ideas?
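     A hedged guess: chmod without -R only changes the top-level folder, and the container process runs as a specific UID/GID that may not match the files underneath or the restore target (on Unraid the usual share convention is nobody:users, i.e. 99:100). Comparing ownership with the user the container runs as might narrow it down; the container name below is assumed from this thread, and the id call assumes the image ships that tool.
        ls -ln /mnt/disk2/UNRAID_Backups/ | head   # numeric UID/GID of the backed-up files
        docker exec CloudBerryBackup id            # which UID/GID the container runs as
        # If they don't line up, something like the following may help (check first!):
        # chown -R 99:100 /mnt/disk2/UNRAID_Backups/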
  11. Is anyone successfully using this Docker container to back up their files/folders to something like Backblaze? I came from a QNAP NAS which had a built-in app that worked flawlessly over my (then) much smaller internet connection (40 down / 5 up). So as you can imagine, backing up anything took ages... but it did get there, and incremental backups were excellent! I now have 1,000 down and 50 up, but my experience with this product is just terrible. Here is my experience so far:
     - Backup seems to upload at my full 50 Mbps upload... and that's where the good news ends.
     - It errors frequently with useless logs such as: error 1003, contact support...
     - Support have asked me to do multiple tests and database refreshes, but nothing has fixed it despite 20+ emails with them. They said Backblaze gave a "time out" at some point, but I've set it to retry 10 times over 10 seconds... so I can't believe it's that. They also only mentioned it in passing rather than pressing the issue, so I guess it wasn't the problem.
     - It gave me an error that I had reached the 5TB limit, despite my data set being only ~600 GB. Over the course of 2 weeks it took up over 1.7TB on Backblaze, yet the amount of data I had selected did not change from the initial 600GB. I figured out what this was*... but Support brushed it aside, saying "look, it works on my lab machine with 5 files... can you test again please?"
     * From the logs, and looking at the "detailed report" it generates, it was clear that it would: upload the files from the beginning, fail with a generic error message, start again from the beginning, fail with a generic error... and repeat, etc. So basically it uploads the same few hundred GB of data and then fails. I'm at my wits' end here on how to just back up my data from Unraid to Backblaze... it shouldn't be this hard, right!?