murkus Posted May 29

On 5/27/2024 at 9:46 PM, VRx said: Today's image has the amazon cloud driver added.

Thanks, is that a different driver than the S3 driver that has been around for some years now? That's fine too, I just need to look into how to use it, as I have been configuring the S3 driver so far.
MikePiko Posted June 21

On 4/19/2024 at 11:15 AM, VRx said: which version did You use? postgresql vs sqlite3, and 11 or 13, maybe 15?

I can confirm the error from the French user. I installed a fresh Unraid yesterday and installed the Bacula Server (default sqlite version). If you change the admin password to something else, you end up with a broken webpage after login: "Error code: 100 Message: Problem with connection to remote host. cURL error 0: ." If you delete the docker container and the remaining files in appdata, then install it fresh again with the default settings (no password change), everything works well.
VRx (Author) Posted June 27

On 5/29/2024 at 10:10 PM, murkus said: is that a different driver than the S3 driver that has been around for some years now?

Yes, it is different. I can find the manual if you need it; I found it previously for testing, but I'm not using cloud backup.

On 6/21/2024 at 7:18 PM, MikePiko said: if you change the admin password to something else you will end up with a broken webpage

I must test it one more time. I thought I had already fixed that.
murkus Posted July 5 (edited)

On 6/27/2024 at 10:30 PM, VRx said: Yes, it is different. I can find the manual if you need it; I found it previously for testing, but I'm not using cloud backup. I must test it one more time.

Thanks, I have updated to your updated version. Since I am not using AWS S3 but my own minio instance, I have set the HostName accordingly in the Cloud resource. It seems the driver is ignoring the HostName field. Can you confirm this?

`Invalid endpoint: https://s3..amazonaws.com Unknown error during program execvp`

To work for me, it would need to use the Cloud > HostName I have configured as the URL.

I also played with the aws driver script in the plugins folder. It became clear that it uses the python script "aws" to access S3. I have my ca-certificates.crt mapped into the container (because I am using my internal CA to sign internal certificates), but python isn't using it. I found descriptions of how this can be achieved, but it would be better initialized by the container itself (as pip isn't installed by default):

pip config set global.cert /etc/ssl/certs/ca-certificates.crt
conda config --set ssl_verify /etc/ssl/certs/ca-certificates.crt

Could you kindly add something like this, so that python will use the certificates at /etc/ssl/certs/ca-certificates.crt for SSL verification?

Bottom line: there are at least 2 items not working atm:
(1) hand-over of the variables to the driver script (i.e. including Cloud > HostName)
(2) SSL verification using own CA certificates

Edited July 5 by murkus: added a lot more info
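A generic alternative to the pip/conda commands above (my assumption, not something the container provides): CPython's ssl module honours the SSL_CERT_FILE environment variable, so exporting it inside the container points stock Python at the mapped bundle without pip being installed at all:

```shell
# Assumption: the container's python is a stock CPython linked against
# OpenSSL, which honours SSL_CERT_FILE for default certificate verification.
export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
# Show which CA file Python will actually use:
python3 -c 'import ssl; print(ssl.get_default_verify_paths().cafile)'
```

Tools that bundle their own certifi store (the AWS CLI among them) may still ignore this, which is why the AWS_CA_BUNDLE variable mentioned later in the thread is the more targeted fix.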
VRx (Author) Posted July 6

15 hours ago, murkus said: `Invalid endpoint: https://s3..amazonaws.com Unknown error during program execvp`

Here are the docs I used for the testing configuration: https://docs.baculasystems.com/BEDedicatedBackupSolutions/StorageBackend/cloud/cloud-backup.html#cloud-accounts

You must configure it like the ceph endpoint in the example. I've also found somewhere that for cloud storage other than amazon you must use UriStyle = Path; the default is VirtualHost.
murkus Posted July 6 (edited)

3 hours ago, VRx said: for cloud storage other than amazon you must use UriStyle = Path; the default is VirtualHost.

Thanks for your quick response. I was already using Path, not VirtualHost. The trick is to use BlobEndpoint in addition to HostName; then it will actually use that endpoint.

If you get this: `[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006) Child exited with code 1`

(1) map the host file /etc/ssl/certs/ca-certificates.crt to the same location in the container
(2) set the container variable AWS_CA_BUNDLE to this exact path

Thanks for your continued effort, highly appreciated!

Edited July 6 by murkus
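Putting the two findings together, a Cloud resource for bacula-sd.conf along these lines should work for a non-AWS, S3-compatible endpoint. Every name and value below is illustrative, not taken from an actual working config:

```
Cloud {
  Name = MinioCloud
  Driver = "Amazon"                                # the new amazon cloud driver
  HostName = "minio.example.lan:9000"
  BlobEndpoint = "https://minio.example.lan:9000"  # the endpoint actually used
  BucketName = "bacula-volumes"
  AccessKey = "minio-access-key"
  SecretKey = "minio-secret-key"
  Protocol = HTTPS
  UriStyle = Path                                  # default VirtualHost is AWS-only
}
```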
murkus Posted July 24 (edited)

So far I have successfully been making backup copies to minIO S3 using copy jobs. Each volume is a folder on minIO and contains an individual, numbered file called part for each job in the volume. This works fine; the part files are all around 5 GB.

Now, for the first time, I was making copies of volumes to a different pool on minIO. The upload of the part files failed consistently, but the error doesn't seem to make sense:

```
part.2 state=done retry=1/10 size=193.0 GB duration=4994s msg=/opt/bacula/plugins/aws_cloud_driver upload minio01-vol-8388 part.2. upload failed: - to s3://bacula-tier3-01/minio01-vol-8388/part.2 An error occurred (InvalidArgument) when calling the UploadPart operation: Part number must be an integer between 1 and 10000, inclusive Child exited with code 2
bacula-sd JobId 23057: minio01-vol-8388/part.1 state=done size=297 B duration=3s
```

The part number 2 is clearly between 1 and 10000, so the problem is probably something else, but I have no idea what. The part files that failed to upload are large, around 200-400 GB. Any ideas?

Edited July 24 by murkus
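A plausible explanation (an assumption, not confirmed in the thread): the aws_cloud_driver hands the whole part file to the AWS CLI, whose default multipart chunk size is 8 MiB. S3 allows at most 10000 chunks per multipart upload, so the largest object the defaults can handle is about 78 GiB; a 193 GB part file overflows the internal chunk counter past 10000, which surfaces as the misleading "Part number must be an integer between 1 and 10000" error. A quick sanity check of that limit:

```shell
# S3 multipart uploads are limited to 10000 parts; the AWS CLI splits
# large files into 8 MiB chunks by default, so the default ceiling is:
max_parts=10000
chunk_mib=8
echo "$(( max_parts * chunk_mib / 1024 )) GiB"   # prints: 78 GiB
```

If that is the cause, raising the chunk size inside the container, e.g. `aws configure set default.s3.multipart_chunksize 64MB` (64 MiB x 10000 is roughly 625 GiB), should cover 200-400 GB part files; this is hedged on the driver actually honouring the shared AWS CLI config.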