b0m541

Everything posted by b0m541

  1. Thank you for the write-up! I still have some questions regarding multiple users. From the description above it looks like the users will be created inside the container - correct? Do we put the passwords in cleartext into the conf files, and the container then converts them into hashed and salted form when it creates the users? Do we also need to create the same users on the unRAID host side, with the same UID and GID? If not, do we need to avoid collisions with UIDs and GIDs on the unRAID host when choosing values for TM_UID and TM_GID? And do encrypted backups work for you now?
  2. So it actually is an issue with 6.10.3? Has this been acknowledged by Limetech as something to be fixed? And yes, the container sounds like the best option for me in the meantime.
  3. Greetings, I had Time Machine running round-robin against a Synology DSM and unRAID 6.9.x. That worked fine; both file servers were used in an alternating fashion. A while ago I upgraded from 6.9.x to 6.10.3, and now I realize that the macOS Time Machine backups only work with the Synology DSM. When it is the unRAID share's turn (and its setup has not been changed), the Mac finds the share and opens files (cf. the File Activity plugin), but then does not write any backup. I see no error message in the UI and haven't delved into the macOS logs yet (see the commands below). I then removed the old backup from the share and started over on the macOS side. The initial full (huge) backup was taken, but the incremental backups after that don't happen, even when triggered manually. I am not saying this is a problem with unRAID 6.10.3, but I can't rule it out. Any ideas what is wrong here? Thank you for your time!
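     For reference, once I get to the macOS side, something along these lines should surface the Time Machine client messages (assuming a recent macOS with unified logging; the two-hour window is just an example):
         log show --last 2h --info --predicate 'subsystem == "com.apple.TimeMachine"'
         tmutil status
     tmutil status only tells me whether a backup is currently running; the log output is where an error about the unRAID destination would show up.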
  4. I looked in Community Apps. Do you mean something else by "CA" (not Community Apps)? This is what it says there (only a description for the previous update, not for the one from yesterday): So in this case, since there is no mention, are you saying these are updates to the base system or minor app updates? Did I understand you correctly?
  5. @VRx, a quick question before I consider installing today's update: what's new in today's container update? (There was no description in the template history.)
  6. I wouldn't worry about that too much. Just take a look at this thread - no one is really interacting except for the two of us. Of course I understand that this is a voluntary effort with limited resources, and as long as vchanger is on the roadmap and not always pushed to the end of the list... I hope it is coming soon. I have currently disabled my tier-2 backups because of this.
  7. Yeah, that's more of a "hack" and is rather discouraged in the documentation. vchanger is actually more flexible and has a lot more options; I would think it brings in more than one would get by just setting symlinks manually. Sure, I can't force you to put vchanger in, but it would still be an addition requested and considered worthwhile by one of the more responsive users in this thread (me). I am actually wondering how many users the container has. I can see 5000+ downloads, but no actual activity here except people asking whether this container supports LTO. Not that it would matter, I'm just curious. Oh, and thanks for the quick fix of the permission issue.
  8. With disk volumes there simply is no alternative to vchanger if not everything fits on one device. This is relevant for people who keep Bacula volumes on disks that are not assigned to the array.
  9. The latest version of the container seems to no longer use UID 101 for the volume files it accesses. Since previous container versions wrote volumes with UID 101, the new container is unable to read and append to the volumes they created. This is not a big problem in itself, as Bacula simply creates a new volume when it cannot append to an older one, but it does lead to warning messages in the log that might irritate the user:
     Warning: mount.c:216 Open of File device "redacted" (/mnt/redacted) Volume "redacted" failed:
     ERR=file_dev.c:189 Could not open(/mnt/redacted,OPEN_READ_WRITE,0640): ERR=Permission denied
     Marking Volume "redacted" Read-Only in Catalog.
     Created new Volume="redacted", Pool="redacted", MediaType="redacted" in catalog.
     Still, I would think such a change is something we should strive to avoid. I am also not sure what actual improvement the latest container version brings. I was hoping it would include vchanger; alas, it does not. That would have been desirable.
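     For what it's worth, a quick way to confirm the mismatch and a possible workaround (the path below is a placeholder for the redacted volume directory, and 99:100 - unRAID's default nobody:users - is only an example, since I have not checked which UID the new container actually writes with):
         ls -ln /mnt/user/bacula/volumes/              # existing volumes show numeric owner 101
         chown -R 99:100 /mnt/user/bacula/volumes/     # re-own to whatever the new container expects
     Treat this as a sketch rather than a verified fix.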
  10. Oh OK, I did not have the big picture that there are actually two interrelated settings in different places:
     Parity Check Tuning: "Pause array operations while mover is running"
     Mover Tuning: "Let scheduled mover run during a parity check / rebuild"
     Because I want the mover to run daily, but not while a parity check is actively running, I will set them as follows:
     Parity Check Tuning: "Pause array operations while mover is running" = Yes
     Mover Tuning: "Let scheduled mover run during a parity check / rebuild" = Yes
     Correct? I am happy that this is clear to me now. To help others avoid this confusion, maybe the help texts of both tuning plugins could be amended for these options to point out that there is an interrelated option in the other plugin and which combinations lead to which result. Thanks for the clarification.
  11. In an earlier response you wrote "This feature is intended to work the other way around and pause a parity check if it detects mover running." So why would it prevent the mover from starting? I had "Let scheduled mover run during a parity check / rebuild" set to "No" and have now set it to "Yes". This does not enable the "Move" button under Main > Array Operation, but the "Move Now" button under Settings > Scheduler > Mover Settings can be clicked and the script actually runs. That means I need to set "Let scheduled mover run during a parity check / rebuild" back to "No" after the mover has run through, otherwise the mover might be scheduled during a parity check while the check is NOT paused. Derived from this I have a proposal for improvement: Settings > Scheduler > Mover Settings > Move Now seems to execute /usr/local/sbin/mover, which appears to be a wrapper for Parity Check Tuning; hence this script is aware whether the current parity check is really running or paused. Would it be possible to ensure the following there?
     - If the parity check is running (and not paused) and "Let scheduled mover run during a parity check / rebuild" is set to "No", do NOT start the real mover script.
     - If the parity check is paused and "Let scheduled mover run during a parity check / rebuild" is set to "No", do start the real mover script.
     A rough sketch of that logic follows below. This way the user does not need to toggle "Let scheduled mover run during a parity check / rebuild" just to be able to start the mover manually while the parity check is paused. Technically this would be a small change. Are you willing to make it?
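     To make the proposal concrete, this is roughly the check I have in mind inside the wrapper (pseudo-shell: the two helper functions are placeholders for however the plugin already determines the check and pause state, ALLOW_MOVER stands for the "Let scheduled mover run during a parity check / rebuild" setting, and mover.old for the original mover script):
         if parity_check_active && ! parity_check_paused && [ "$ALLOW_MOVER" = "no" ]; then
             echo "Parity check running (not paused) - not starting mover"
         else
             /usr/local/sbin/mover.old    # hand off to the real mover
         fi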
  12. No, I have not. Which one can be used safely? It seems that /usr/local/sbin/mover is the wrapper for Mover Tuning. Is it safe to start /usr/local/sbin/mover.old?
  13. I see. The button Settings > Scheduler > Move Now is not disabled, but when pressing it the mover still won't run. From the log:
     emhttpd: shcmd: /usr/local/sbin/mover &> /dev/null &
     root: Parity Check / rebuild in progress. Not running mover
     Clearly the mover script also checks whether a parity check is in progress and does not know about it being paused. I think this is a considerable problem: with Parity Check Tuning the overall check may take a week or more, and during this time the mover will not be able to free the cache. To solve this it would probably be necessary to talk to the core developers, e.g. to introduce a "paused" state for parity checks and to allow the mover to operate while a parity check is in the paused state.
  14. I don't think I made it clear that this does not work as proposed - see the attached screenshot. This is what I meant. How can I start the mover?
  15. Could it make sense to be able to run the mover while the parity check is paused? Currently this does not seem to be possible. I have set "Pause array operations while mover is running" to "Yes". I am not sure what exactly "array operations" means - does it mean "parity check"? I think it's not great to run the mover while the parity check runs, but it might be OK while the parity check is paused. Is this possible with the current version of the plugin? If not, does the idea make sense at all, and would you put it on your roadmap?
  16. Before installing the db-backup container I'd like to understand whether it actually meets my needs. I have a number of Debian VMs that are running DBMSes in containers, e.g. postgres, mysql, mariadb.
     - Does db-backup allow me to create, collect and store DB dumps from each container in each VM?
     - Does it merely collect and store the dumps, or does it also create the DB dumps before collecting them? (A sketch of what I currently do by hand follows below.)
     I did read the documentation, also on GitHub, but it is not clear to me how to set this up. Do I just need one instance of db-backup to collect DB dumps from all containers in all VMs, or do I need as many instances of db-backup as there are containers from which to collect dumps?
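     To make concrete what I mean by "creating" a dump, this is what I currently run by hand inside each VM for the containerised DBMSes (container, database and path names are just examples):
         docker exec my-postgres pg_dump -U postgres mydb > /srv/dumps/mydb.sql
         docker exec my-mariadb sh -c 'mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases' > /srv/dumps/mariadb-all.sql
     The question is essentially whether db-backup takes over this step as well, or only picks up dump files that already exist.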
  17. Actually the git repository already contains vchanger 1.0.3:
     git clone https://git.code.sf.net/p/vchanger/code vchanger
     Update: the downloads have also been updated to show 1.0.3.
  18. I found someone who is successfully using vchanger 1.0.2 with Bacula 11, so implementing vchanger in the container should be feasible.
  19. I will check whether vchanger 1.0.2 should work with Bacula 11. As long as the autochanger commands haven't changed it should, I guess, but I will come back with a more definitive answer.
  20. I have a request for the next version of this container: please include vchanger. I tried to build it from the sources myself within the container, but there is (understandably) no compiler (a rough build sketch follows below). https://sourceforge.net/projects/vchanger/files/?source=navbar Why would we want this in the container? There is a very good but also quite lengthy explanation in the source archive. TL;DR: for backups to the unRAID array this is not a necessary tool, as the array can be scaled up transparently from Bacula's perspective. But if you have another storage tier for backups that is not on the array, e.g. on external USB storage, you run into a nasty limitation: Bacula cannot handle more than one logical device per job. If you write all jobs to the same device it will fill up at some point, and you have no way to scale it up; you can only recycle volumes at that point. vchanger makes using fixed and removable storage very flexible and comfortable. It is simply necessary if one does not want to put all jobs on the array (e.g. one has multi-tiered backup layers). Would you be willing to include vchanger in the container?
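     For reference, what I tried inside the container boils down to the usual autotools build, roughly like this on a Debian-based image (the version in the file name is just an example, and I am assuming build-essential and uuid-dev cover the dependencies - which is exactly what the image does not ship):
         apt-get update && apt-get install -y build-essential uuid-dev
         tar xzf vchanger-1.0.3.tar.gz && cd vchanger-1.0.3
         ./configure && make && make install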
  21. I migrated from 6.9.x to 6.10.x - so far Bacula/Baculum works happily.
  22. The problem with Baculum not starting properly still happens now and then. I looked in the logs the way you proposed and just find a lot of these:
     [Thu May 12 07:05:02.667379 2022] [mpm_prefork:notice] [pid 148] AH00163: Apache/2.4.53 (Debian) configured -- resuming normal operations
     [Thu May 12 07:05:02.667730 2022] [core:notice] [pid 148] AH00094: Command line: '/usr/sbin/apache2 -D FOREGROUND'
     AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.18. Set the 'ServerName' directive globally to suppress this message
     [Fri May 13 00:22:16.190020 2022] [core:warn] [pid 149] AH00098: pid file /var/run/apache2/apache2.pid overwritten -- Unclean shutdown of previous Apache run?
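     The AH00558 line at least is easy to silence by setting ServerName globally inside the container and restarting it (the hostname here is just an example; this only removes the warning and does not explain the unclean shutdowns):
         echo "ServerName baculum.local" >> /etc/apache2/apache2.conf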
  23. I found the culprit: I was still using the older gfjardin preclear plugin. I removed that and installed your UD preclear plugin, and it is running the clear now. One thing I noticed: both the Preview Progress and the Preclear Disk Log complain continuously:
     Preview Progress (continuous repetition): tput: unknown terminal "screen"
     Preclear Disk Log (continuous repetition): Jun 21 02:41:34 preclear_disk_redacted: tput: unknown terminal "screen"
     I installed screen manually via the Nerd Pack and stopped and resumed the preclear; now the Preview Progress and the Preclear Disk Log work as expected. Not sure why screen wasn't installed by the plugin, though.
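     For anyone hitting the same tput errors: they come from a missing terminfo entry for "screen", which a quick check like this on the unRAID host confirms:
         infocmp screen >/dev/null 2>&1 && echo "screen terminfo present" || echo "screen terminfo missing"
     That would explain why installing screen via the Nerd Pack made the Preview Progress and the log display behave again.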