RinxKninks

Members
  • Posts: 18
  • Joined
  • Last visited
Everything posted by RinxKninks

  1. Plain & simple: I started using borgmatic, writing to rotating removable hard drives, each containing its own repository. Within the same borgmatic "run" I'd also like to copy data that is already deduplicated etc. onto these target disks, without running it through the borgmatic backup process again. So I'm thinking about rsnapshot. But the borgmatic docker doesn't have access to the rsnapshot binary on the Unraid system - which I understand - nor can I simply copy rsnapshot into the borgmatic docker and use it there. So do you have an idea how to use rsnapshot as an after_backup hook in the borgmatic config? (A rough sketch of what I have in mind follows below.)
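     What I have in mind is something like this - only a sketch, assuming the Unraid host allows SSH key login for root (the IP and key path are made up) and that rsnapshot plus a config under /boot/custom/etc exist on the host; the hook would then run rsnapshot on the host instead of needing the binary inside the container:

         # excerpt from the borgmatic config.yaml
         hooks:
             after_backup:
                 # run rsnapshot on the Unraid host via SSH, because the
                 # binary is not available inside the borgmatic container
                 - ssh -i /root/.ssh/unraid_host_key root@192.168.1.2 'rsnapshot -c /boot/custom/etc/rsnapshot-data.conf daily'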
  2. NerdPack is gone, and after a while I found NerdTools to replace it on UR 6.11.x. Yet I also decided to get into and use borgmatic.
  3. So I'm on the same page here, yet the freshly installed docker of the Nextcloud Official Image (!) initializes on "br-0", and if I switch it to "bridge" - which I need so that NginxProxyManager can handle it - I run into the missing network settings problem. How did you solve yours? ___ My "solution": Since NPM (nginx proxy manager) in my Unraid setup cannot reach the docker in the br-0 IP range, I either had to change "Host access to custom networks" under Settings > Docker in Unraid (6.11.1) - that option is not changeable on my machine even with all dockers switched off, and as I understand it I'd have to install more network interfaces, which would mean higher power consumption and who knows what else - or change the docker. So using the recommended "Nextcloud Official" docker image did not work for me: it only worked on br-0, but then without NPM, and in bridge mode I got no IP. [Switching the official MariaDB docker between br-0 with a fixed IP and bridge mode was no problem.] Now I use knex666's Nextcloud image; NC and MariaDB work, yet my beloved internal Collabora doesn't. But that is another story to solve. (A sketch of what I'd still like to try is below.)
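     What I'd still like to try - just a sketch, assuming NPM runs in the default bridge network and that npm and nextcloud are the actual container names - is putting both containers on a shared user-defined docker network from the Unraid shell, so NPM can reach Nextcloud by name without touching "Host access to custom networks":

         # create a user-defined bridge network both containers can join
         docker network create proxy-net
         # attach the already running containers (names are my guesses)
         docker network connect proxy-net npm
         docker network connect proxy-net nextcloud
         # NPM should then be able to proxy to http://nextcloud:80 by container name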
  4. Just did this successfully... However, under UR 6.11.1 the network br0 (10.x.x.x) gets entered instead of the internal docker network 172.x.x.x:port, which resulted in a regular IP in the LAN. I could connect that to the official MariaDB container, but the NginxProxyManager can apparently only forward within 172.x.x.x, the range it sits in itself, and the official NC container cannot be switched via bridge to the internal UR docker LAN / after starting, the Port Mappings column stays empty. Any idea how I can change that?
  5. VM Backup running repeatedly, v0.2.3 - 2021.03.11. It seems to me like the VM Backup is running multiple times in the backup queue. I think this started before upgrading to UR 6.10.3. Before, it took about 45-60 minutes to back up the few VMs; now it does so multiple times and takes maybe 4.5 hours. Now I remember: before updating Unraid I tried to initiate a manual backup and it didn't start, then over a couple of minutes I hit the backup button multiple times, and that seems to have carried over into the scheduled behaviour. Where might I see where those (redundant?) backup tasks are stored, since this behaviour survives an Unraid restart? (What I have checked so far is sketched below.) p.s. as the last user posted: Great job - I've been using your script for a while and it saves me a lot of time. Thanks for that!
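     What I have checked so far - just my own guesswork, assuming the schedule ends up in the usual cron locations on Unraid and that the plugin's files live under /boot/config/plugins (the name pattern in the grep is a guess):

         # look for duplicate schedule entries that would survive a reboot
         crontab -l
         ls -l /etc/cron.d/
         grep -ril "vmbackup" /etc/cron.d/ /boot/config/plugins/ 2>/dev/null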
  6. Oops - thanks - I'll delete mine if this works, so that others looking for PS won't be directed here.
  7. Just got this running with these settings on ESXi - no user authentication though:
         Name: esxi_back
         NFS-Server: 192.168.123.123
         NFS-Share: /mnt/user/esxi_back
         NFS-Version: NFS3
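     For the record, the same datastore can also be added from the ESXi shell - a sketch with the values above, assuming SSH access to the ESXi host is enabled:

         # add the Unraid NFSv3 export as a datastore
         esxcli storage nfs add --host 192.168.123.123 --share /mnt/user/esxi_back --volume-name esxi_back
         # verify it mounted
         esxcli storage nfs list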
  8. Hi, I am used to backing up "Data" to different rotating backup disks. Since I moved to Unraid a month ago, I am now aiming to automate this on Unraid instead of putting the backup HDD into my work PC, starting a backup script (calling different rsnapshot configs) and waiting for the backups to run. The (otherwise fine) rsnapshot docker definitely does not cover my scenario ( https://forums.unraid.net/topic/97258-support-linuxserverio-rsnapshot/ ). On the input side I run different rsnapshot configs, each used at a different interval, like: data (install, files, pictures); media (rarely changes, so this one is done about every two months); serverdata (results of the appData backup, the unRAID VM Backup script and other archives - this one doesn't contain much differential data, so it's not the ideal candidate for rsnapshot, but for now it's part of this construct). The output goes to rotating disks of different sizes: each rsnapshot.conf (e.g. rsnapshot-data.conf) looks for a target directory that may or may not exist on the mounted backup medium. If there is no target for "rsnapshot-media", that run is skipped and the next config, e.g. rsnapshot-media.conf, tries to find its directories: snapshot_root /mnt/disks/red8/rsnap-media/ with no_create_root 1. Some disks contain one or more of the above rsnapshot target directories, and the disks are mounted on the Unraid server. Problem 1: "Unassigned Devices" lets me mount disks, but it creates a different (target) directory for each attached disk (in my case never more than one disk at a time), based on the name of the disk. How can I mount changing disks (attached to the same /dev/sdd) to the same mount directory? (I could give all disks the same name, but that wouldn't be a nice solution.) Problem 2: The plugin "User Scripts" does not work for the rsnapshots (they don't run through, not even as a background process), so I don't want to dig into why and would rather run the rsnapshots as cron jobs. I put the configs into /boot/custom/etc, but I cannot make the script itself executable. As I understand it, this is a place where such self-introduced extensions survive Unraid updates? What do I have to comply with? (A sketch of the wrapper I have in mind follows below.)
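     This is roughly the wrapper I have in mind - only a sketch: the fixed mount point /mnt/backup_rotate and the /dev/sdd1 partition are assumptions for my setup, the snapshot_root lines in the configs would then have to point below that mount point instead of /mnt/disks/<diskname>, and calling the script via bash sidesteps the fact that files on the FAT flash drive cannot be made executable:

         #!/bin/bash
         # /boot/custom/bin/rotating-backup.sh
         # run from cron, e.g.:  0 3 * * * bash /boot/custom/bin/rotating-backup.sh

         MOUNTPOINT=/mnt/backup_rotate     # fixed mount dir, independent of the disk label
         DEVICE=/dev/sdd1                  # whichever rotating disk is currently attached

         mkdir -p "$MOUNTPOINT"
         mount "$DEVICE" "$MOUNTPOINT" || exit 1

         # with no_create_root 1 each config simply refuses to run when its
         # snapshot_root is not present on the currently attached disk;
         # "daily" must match the interval names used in the configs
         for cfg in /boot/custom/etc/rsnapshot-data.conf \
                    /boot/custom/etc/rsnapshot-media.conf \
                    /boot/custom/etc/rsnapshot-serverdata.conf; do
             rsnapshot -c "$cfg" daily
         done

         umount "$MOUNTPOINT"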
  9. Don't know if your question has been answered, but https://rdiff-backup.net/ states: ..."In August 2019 Eric Lavarde with the support of Otto Kekäläinen from Seravo and Patrik Dufresne from Minarca took over, completed the Python 3 rewrite and finally released rdiff-backup 2.0 in March 2020."
  10. Thanks - googled it and understood it as "time the CPU could go on with jobs but is waiting for - in this case - hard disks to do their physical reading and writing business".
  11. I am wondering a little why, while copying lots of music (within Unraid) using mc on the shell, the CPU load under the Dashboard tab shows a combined load of 70% (4 cores combined), while System Stats shows the CPU load at around 20% at the same time. One thing is: why do the two CPU readings differ that much, and second: does the SATA driver (whatever handles this in the kernel) have such poor performance, meaning it burns lots of electricity while moving data? I copy via the shell on Unraid from the SATA disk array (1 parity + 1 data disk) to a SATA-mounted EXT4 HDD (attached to a, how do you say, internal "swap cage"). All devices are plugged into the onboard controller of the Intel C246 chipset. While not copying files, the CPU load shown under the Dashboard tab typically varies between 8%-15%. Machine: ThinkSystem ST50, Processor: Xeon E-2224G / Chipset Intel C246, 40GB RAM. CPU load while copying files from the array (1 HDD + 1 parity) to a SATA HDD
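     My guess (and only a guess) is that the Dashboard counts iowait as load while System Stats doesn't; this is how I'd check it on the shell while a copy is running:

         # per-CPU breakdown: "wa" (iowait) is time spent waiting for the disks,
         # not actual work being done by the CPU
         top -b -n 1 | head -n 15
         # or sampled once per second; watch the "wa" column
         vmstat 1 5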
  12. Fine: now updating Community Applications works again
  13. German in Germany here, getting this on 6.9.1:
         plugin: updating: community.applications.plg
         Cleaning Up Old Versions
         Fixing pinned apps
         Setting up cron for background notifications
         plugin: downloading: https://raw.githubusercontent.com/Squidly271/community.applications/master/archive/community.applications-2021.03.10-x86_64-1.txz ... failed (Invalid URL / Server error response)
         plugin: https://raw.githubusercontent.com/Squidly271/community.applications/master/archive/community.applications-2021.03.10-x86_64-1.txz download failure (Invalid URL / Server error response)
      Other than that (not being able to update the community plugin and, I suppose, others as well - checking for docker updates took really long) the system works fine. Can't ping, but should UR be able to do so? Linux 5.10.21-Unraid.
         root@Unraid:~# ping https://raw.githubusercontent.com/
         ping: https://raw.githubusercontent.com/: Name or service not known
         root@Unraid:~# ping https://heise.de
         ping: https://heise.de: Name or service not known
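     I realise now that ping wants a bare hostname, not an https:// URL, so the two commands above would fail even with working DNS; a cleaner test of name resolution would be:

         ping -c 3 raw.githubusercontent.com
         ping -c 3 heise.de
         # or a direct lookup to see whether DNS works at all
         nslookup raw.githubusercontent.com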
  14. My VMs lost my br0 in the update from 6.9.0 to 6.9.1, and setting the MTU & restarting did the trick. The logged error was: unraid Cannot get interface MTU on 'br0': No such device
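     In case someone hits the same thing, a rough shell-side sketch of the check (assuming the bridge should exist and that 1500 is the right MTU for the LAN; where exactly to set it in the GUI I'll leave aside):

         # confirm the bridge libvirt is complaining about actually exists
         ip link show br0
         # give it an explicit MTU (1500 assumed as the LAN default)
         ip link set dev br0 mtu 1500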