VRx
Moderators
Posts: 108
Everything posted by VRx

  1. I would have to test it. The sources indicate the last update was in May 2020, and at the beginning of 2021 Bacula 11 was released. The conclusion is that this solution will not necessarily work with Bacula 11, and, as I wrote above, the container with Bacula 9.x will no longer be developed. Also remember that this is a containerized solution, where mounting physical disks is more complicated. Currently, due to the recent problems with the Bacula repository, I am testing a new way of building the container. In the next version I plan to add support for sending email notifications; that is almost ready, but because of the repository problems I had to deal with the CD process first. Then I will be able to work on a test implementation of the vchanger.
  2. A healthcheck has been added for new users doing a fresh install. I sometimes get reports about errors in the docker log saying that bind is not running; this situation can occur and is not a bad thing. After the initial configuration is created via the GUI, this message is no longer logged. For this purpose a healthcheck has been added that signals whether the docker status is healthy or not. If it shows healthy, you don't have to worry about errors in the docker logs.
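     If you want to see what the healthcheck reports from the Unraid shell, here is a minimal sketch (the container name is a placeholder; use whatever name docker ps shows for this container):
     # Placeholder container name; substitute the one shown by `docker ps`.
     docker inspect --format '{{.State.Health.Status}}' CONTAINER_NAME
     # Prints "starting", "healthy" or "unhealthy".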
  3. New version pushed!
     * fixed bugs with UnraidOS >6.9.2
     @rwysocki_bones thanks for the report and your time
  4. I tried to run a few tests on Unraid 6.10, but there are so many bugs in various applications on that version that I went back to 6.9.2. Some of the problematic applications are ones I need for everyday use of the server. In this situation I would have to spend a lot of time diagnosing and fixing problems, but that is not my responsibility; apparently the other developers are not interested in their applications. After initial testing, it looks like Bacula-Server should work after the Unraid update.
  5. I tried to run a few tests on Unraid 6.10, but there are so many bugs in various applications on that version that I went back to 6.9.2. Some of the problematic applications are ones I need for everyday use of the server. In this situation I would have to spend a lot of time diagnosing and fixing problems, but that is not my responsibility; apparently the other developers are not interested in their applications. There could be some issues with file privileges (for example the rndc key), but correcting them manually should help.
  6. Try setting rwx for all (777) on the bind9 folder.
  7. @rwysocki_bones
     chown root:root /etc/bind/*
     chmod 664 /etc/bind/*
     Then try to run named from the container console:
     /usr/sbin/named -4 -c /etc/bind/named.conf -L /var/log/named.log
     and then check /var/log/named.log
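     If named starts cleanly, a quick sanity check could look like this minimal sketch (paths as in the commands above):
     # Check whether the named process stays running (the brackets keep grep from matching itself).
     ps aux | grep '[n]amed'
     # Show the end of the log file that the -L option points named at.
     tail -n 50 /var/log/named.log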
  8. @rwysocki_bones paste the ownership and permissions from the container console: ls -la /etc/bind. Did you create a new config via the WebUI, as I asked in the previous post?
  9. @rwysocki_bones go to the docker console and execute:
     /usr/sbin/named -4 -c /etc/bind/named.conf -L /var/log/named.log
     Wait about 1 minute, check with the ps aux command whether named is running, and then paste the contents of /var/log/named.log here. These files are created automatically when you initiate the first configuration from the WebUI (see the attached screenshot). If you cleared all the config, first start the bind container, go to the WebUI, Create Primary Configuration File, and then check whether the 'named' process is running. If it is running, send me a PM; I will try to help you copy the configuration, or perhaps configure one of them as the master and the second as a slave DNS (in that case you will only have to edit the zone configuration on the master). !!! You can't copy only some of the config files and leave the others out !!!
  10. @rwysocki_bones If you cleared the config, this is possible as long as there is no named.conf file. Try starting the container and then logging in to the Web UI; there should be a message about the need to create a configuration. After that, supervisor should start without errors.
  11. End of development for the Bacula 9.6 image versions! Older versions of the client work well with the newest server version. The last update of the Bacula 9.6 code was in December 2020.
  12. Open your Unraid shell and run this command:
     docker exec $(docker ps | grep "pwa666/bacula-server" | awk '{print $1}') cat /var/log/apache2/error.log
     Copy and paste the output here.
  13. @b0m541 Did your mover touch the Baculum config path or the Apache log path?
  14. Every time Auto Update Applications updates this container, Unraid reports that the gitlab container was not stopped properly.
  15. Today I successfully migrated my test DSM VM from 6.2 to 7.0. Open a shell on Unraid and attach to the serial console of the VM running DSM 7:
     virsh console VM_NAME
     You should see the xpenology shell; you can probably log in with your administrator account. Check /var/log/synoupdate.log: are there any errors? Check the network configuration: is there any IP assigned? For me e1000e did not work properly, the link was flapping up and down. I added the e1000 extension and set an e1000 virtual network card. (With e1000, the down/up speed shown in the GUI looks correct, but with e1000e and DSM 6.2 it showed impossibly high download and upload speeds.) The other thing that was wrong in my config was the USB device model. After I set it to 'qemu-xhci', everything works very well.
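     For reference, a minimal sketch of where those two settings live, assuming the VM is a libvirt/QEMU machine edited with virsh; VM_NAME and the bridge name are placeholders, and the exact XML in your template may differ:
     # Open the VM definition for editing (placeholder VM name).
     virsh edit VM_NAME
     # The NIC model and the USB controller model are the parts I changed:
     #   <interface type='bridge'>
     #     <source bridge='br0'/>      (placeholder bridge)
     #     <model type='e1000'/>       (instead of e1000e)
     #   </interface>
     #   <controller type='usb' model='qemu-xhci'/>
     # Then restart the VM and reattach to its serial console:
     virsh console VM_NAME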
  16. @live4soccer7 some people on the xpenology forum are bragging about running DSM7 on UnRAID, but as you can see, they're hiding out here https://xpenology.com/forum/search/?q=unraid&quick=1&type=forums_topic&item=53817
  17. Director (left menu) > Configure Director (upper menu) > JobDefs > Edit
  18. This is probably the problem. Maybe there is a way to manually trigger the UD ping to the remote server, perhaps via the rc.unassigned script. I don't think there is any need to make the automatic background checks more frequent, or for any other complicated solution to this.
  19. It responds to ping and is online when the backup starts. As I said, the Synology can be online for a few hours, but when I log in to the WebUI it is not shown on the Main tab; it shows up after a few seconds. If I log in and try to start the script without opening the Main tab and waiting until it shows the Synology SMB share, the script fails. The Synology starts at about 04:00 AM, the backup starts at 04:40 and fails, and when I log in to the Unraid UI at about 09:00 AM it is still failing. But if I switch to the Main tab and wait until the remote Samba share shows up, it is OK.
  20. Is it the same as the ups login entries in the syslog?
  21. I have some scripts (User.Scripts plugin) that back up some data from the Unraid server to a remote SMB share. The remote share is on a Synology. The Synology shuts down after the backups are done and powers on automatically before the backup cron schedule. I'm using
     /usr/local/sbin/rc.unassigned mount
     /usr/local/sbin/rc.unassigned umount
     to mount and unmount the share before and after the backup. But most of the time rc.unassigned cannot mount the share. I found that when I log in to the Unraid WebUI, my SMB share is not on the Main tab, although it is still present in the configuration files. Sometimes it is present and I can run the script manually right away, but often I have to wait a few seconds until it shows up on the Main tab, and then I can run the script. This is not caused by the Synology powering on too late; it wakes up about 20 minutes before the cron schedule. When the script cannot mount the share it exits with a message. A few hours later I log in to the WebUI, there is no Samba share, but after watching the Main tab for a few seconds it shows up. In the script I added a ping check and a DNS check before mounting.
     Yesterday was good:
     Jan 25 04:40:18 UnraidStation unassigned.devices: Mounting Remote SMB/NFS Share '//SynologyStation/backup'...
     Jan 25 04:40:18 UnraidStation unassigned.devices: Mount SMB share '//SynologyStation/backup' using SMB default protocol.
     Jan 25 04:40:18 UnraidStation unassigned.devices: Mount SMB command: /sbin/mount -t 'cifs' -o rw,noserverino,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777,uid=99,gid=100,credentials='/tmp/unassigned.devices/credentials_backup' '//SynologyStation/backup' '/mnt/remotes/backup_disk1'
     Jan 25 04:40:18 UnraidStation kernel: CIFS: Attempting to mount //SynologyStation/backup
     Jan 25 04:40:19 UnraidStation unassigned.devices: Successfully mounted '//SynologyStation/backup' on '/mnt/remotes/backup_disk1/'.
     Jan 25 04:40:19 UnraidStation unassigned.devices: Adding SMB share 'backup_disk1'.
     Jan 25 04:42:01 UnraidStation unassigned.devices: Removing Remote SMB/NFS share '//SynologyStation/backup'...
     Jan 25 04:42:01 UnraidStation unassigned.devices: Unmounting Remote SMB/NFS Share '//SynologyStation/backup'...
     Jan 25 04:42:01 UnraidStation unassigned.devices: Synching file system on '/mnt/remotes/backup_disk1'.
     Jan 25 04:42:01 UnraidStation unassigned.devices: Unmount cmd: /sbin/umount -t cifs '//SynologyStation/backup' 2>&1
     Jan 25 04:42:01 UnraidStation unassigned.devices: Successfully unmounted 'backup'
     Jan 25 04:42:01 UnraidStation unassigned.devices: Removing SMB share 'backup_disk1'
     But most of the time it looks like this:
     Jan 26 04:40:05 UnraidStation unassigned.devices: Mounting Remote SMB/NFS Share '//SynologyStation/backup'...
     Jan 26 04:40:05 UnraidStation unassigned.devices: Remote SMB/NFS server 'SynologyStation' is offline and share '//SynologyStation/backup' cannot be mounted.
     The same two lines then repeat roughly every 10 seconds, at 04:40:16, 04:40:26, 04:40:37, 04:40:47, 04:40:58, 04:41:09, 04:41:19, 04:41:30, 04:41:40 and 04:41:51. After this loop it exits.
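     For reference, a minimal sketch of the kind of check-and-retry I mean in the script, assuming the rc.unassigned calls shown above; the share argument format, mount point, and retry limits are assumptions and may need adjusting for your setup:
     #!/bin/bash
     # Sketch: wait for the NAS to answer ping, then retry the UD mount a few times.
     # SHARE, MOUNTPOINT and the retry counts are placeholders.
     SHARE='//SynologyStation/backup'
     MOUNTPOINT='/mnt/remotes/backup_disk1'
     # Wait up to ~5 minutes for the Synology to respond to ping.
     for i in $(seq 1 30); do
         ping -c 1 -W 2 SynologyStation >/dev/null 2>&1 && break
         sleep 10
     done
     # Retry the Unassigned Devices mount until the mount point actually appears.
     for i in $(seq 1 10); do
         /usr/local/sbin/rc.unassigned mount "$SHARE"   # argument format is an assumption
         mountpoint -q "$MOUNTPOINT" && break
         sleep 15
     done
     if ! mountpoint -q "$MOUNTPOINT"; then
         echo "Remote share still not mounted, giving up." >&2
         exit 1
     fi
     # ... backup commands here ...
     /usr/local/sbin/rc.unassigned umount "$SHARE"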
  22. sSMTP[17864]: Sent mail for [email protected] (221 2.0.0 Bye) uid=0 username=xxx outbytes=759
     What does this mean? Who receives this? What is being sent, and why?
  23. I'm sorry, but no. I will try to build a new image with the drivers included in about 2 weeks (right now I am in the middle of a big reconfiguration of my lab environment). Important! Last week I built a smaller image based on debian-slim, but there are some problems with that image on Unraid: 1. "all texts has @@ at both the start and end" - https://github.com/vrx-666/bacula-server/issues/1 2. I noticed an error when the container starts for the first time after the base image change. Today I published an update resolving the first issue, but it is simply a return to the standard Debian image, so the second problem may occur again with this update. The simple resolution is to delete the container and install it again with the same paths, ports, IP, user and password. Nothing should disappear: no backups, settings or history. I'm very sorry for my poor automated testing.