Marc Heyer

Members
  • Posts: 10

  1. No way your server alone is drawing 120 W idle with the array powered down. I have a system with a 7700K and it draws 80-100 W at most while doing a parity check. Is there anything else connected to the power meter? How do you measure the power draw?
  2. Oh okay, thank you. I had assumed "fast method" referred to the docker start command. Makes more sense now. I will look up the syslog.
  3. Hello, I'm using the script to back up my Docker appdata with great success. It's very nice to have everything in one place. But I'm still having trouble with some of my containers not getting started when the snapshot is taken. What does "fast method" mean in ("echo "Start containers (fast method):")? How can I try another method of starting my containers? Maybe that could solve my problem. I looked up the Docker command documentation, but I only found "docker start container", and that is the method the script already uses (a rough idea of an alternative is sketched after this list). Greetings Marc
  4. Yes, in the terminal just type: powertop --auto-tune (see the sketch after this list for keeping it across reboots)
  5. Great, thank you very much. Another small thing I noticed is that the script is not able to start some of my containers when they are in the network of another container (I have a container with built-in VPN, and 4 other containers are in the VPN network and route their traffic through it). Is this something I can fix, or is this a limitation of the script? The CA Backup and Restore plugin starts these containers fine (a sketch of a possible start order is after this list). This message appears 4 times: Error response from daemon: cannot join network of a non running container: 0a846d99759136a6fa326f96a37f84c3d02d76eda7a1b5eb2bba8d0e1e0baef4 and the last line: Error: failed to start containers: 19ad2717be4d, 3d6775704304, 9288c9002b3e, edd9b4ad4544
  6. How does V1.5 behave when the appdata folder is not named appdata? My appdata path is "/mnt/gamma/docker". I ran the script with this as my appdata path, and I think it ran through despite giving the error that "/mnt/*/appdata" could not be accessed. From the log: "ls: cannot access '/mnt/*/appdata': No such file or directory Created snapshot of /mnt/gamma/docker to /mnt/gamma/.docker_snapshot Start containers (fast method):" I think it worked anyway; the files are there and the containers started. Is it safe this way, or should I edit something (see the sketch after this list)?
  7. I tried it, but modified the command to delete the oldest daily backup, because the one you picked was already gone. And it ran for just a couple of seconds, maybe 10 seconds or so. The target is a remote Unraid XFS share. I found another user on Reddit who does it the same way, and his script runs much shorter. Maybe I'll try from scratch sometime, when I have the time and access to the machine.
  8. No, doesn't seem like it. I attached the output of one share below. Nothing with "slow" in this one or any of the others. The script ran 15 hours today 😅
  9. I don't think so. It lists them all, but transfers almost none. For example, my Nextcloud data share is listed with 142,000 files, but created 0, deleted 0, transferred 0. Total bytes sent 3.75M, received 24.84K, speedup 18,959.32. The other shares list far fewer files than that, and the speedup is 784,059,192.20 on a 200 GB share. I have an NFS share mounted from the remote server to transfer to. Every time I check the logs, it is at the clean-up stage of one of the shares. Edit: almost forgot: Thank you for your reply!
  10. Hello, and thank you for your script. I'm using it to back up some important shares to a remote server through a WireGuard connection. My connection supports 50 Mbps up and the remote server has 300 Mbps down (fiber). I can see that my server is using the full bandwidth of 50 Mbps when the script transfers files. The contents of the shares don't change much on a day-to-day basis, but the script runs upwards of 10 hours a day, even when the log says it transfers almost no data per share. Maybe I don't fully understand the concept of this rsync script, but I thought that when the files don't change it just creates a hardlink (a sketch of that idea is after this list). If I read the logs correctly, the most time-consuming part is the "managing" of folders etc., not the transfer itself. The remote machine only has a Celeron J4125 (its only purpose is the backup); would an upgrade help?
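
Regarding the "fast method" question in item 3: the backup script's internals aren't shown here, so this is only a minimal sketch of one alternative, starting containers one at a time with a short pause instead of handing them all to a single docker start call. The container names are placeholders.

```bash
#!/bin/bash
# Sketch only: start containers one by one with a short pause between them,
# instead of one bulk "docker start" call. Names below are placeholders.
containers=("vpn" "nextcloud" "mariadb" "jellyfin")

for name in "${containers[@]}"; do
  docker start "$name"
  sleep 5   # give each container a moment before the next one is started
done
```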
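For item 4: powertop --auto-tune does not survive a reboot, so one common approach on Unraid (an assumption here, not something from the thread) is to add the command to the go file. The sketch assumes powertop is already installed, for example via the NerdTools plugin.

```bash
# Apply the power tuning once in the current session:
powertop --auto-tune

# Re-apply it automatically on every boot by appending it to Unraid's go file
# (assumes powertop is installed and available in the PATH at boot):
echo "powertop --auto-tune" >> /boot/config/go
```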
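For the error in item 5: containers attached to another container's network can only start once that container is actually running, so a possible workaround is to start the VPN container first and wait for it. This is only a sketch; the container names are placeholders, and how it would fit into the backup script is unknown.

```bash
#!/bin/bash
# Sketch: start the VPN container first, wait until it is running,
# then start the containers that share its network. Names are placeholders.
vpn="vpn"
dependents=("container1" "container2" "container3" "container4")

docker start "$vpn"

# Poll until the VPN container reports a running state
until [ "$(docker inspect -f '{{.State.Running}}' "$vpn" 2>/dev/null)" = "true" ]; do
  sleep 2
done

docker start "${dependents[@]}"
```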
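For item 6: the log line suggests the script also checks the default /mnt/*/appdata glob, which simply does not exist on this system. The script's real variable names are unknown; this sketch only illustrates the idea of preferring a user-supplied path and falling back to the default glob.

```bash
#!/bin/bash
# Sketch only: prefer a custom appdata path, fall back to /mnt/*/appdata.
# "appdata_path" is a hypothetical variable, not the script's real one.
appdata_path="/mnt/gamma/docker"

if [ -d "$appdata_path" ]; then
  echo "Using appdata at $appdata_path"
else
  for dir in /mnt/*/appdata; do
    [ -d "$dir" ] && echo "Found appdata at $dir"
  done
fi
```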
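For item 10: the usual hardlink pattern behind incremental rsync backups looks roughly like the sketch below (paths and share names are examples, not the script's actual layout). Unchanged files are hardlinked against the previous backup instead of being copied again, but rsync still has to list and compare every file on both sides, which is why shares with very many small files can take hours per run even when almost nothing is transferred, especially onto an NFS mount.

```bash
#!/bin/bash
# Sketch of an incremental rsync backup using hardlinks (--link-dest).
# Paths are examples; the real script's layout is unknown.
src="/mnt/user/nextcloud/"
dest_base="/mnt/remotes/backupserver/nextcloud"
today="$(date +%Y-%m-%d)"

# Files unchanged compared to the previous backup are hardlinked,
# so they cost no bandwidth and almost no extra disk space.
rsync -a --stats --link-dest="$dest_base/last" "$src" "$dest_base/$today"

# Point "last" at the newest backup for the next run
ln -sfn "$dest_base/$today" "$dest_base/last"
```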