[Support] ich777 - Application Dockers


ich777

Recommended Posts

8 minutes ago, Goldmaster said:

I am syncing just over 2TB of stuff and had to leave overnight. I came back to check the progress and saw this after leaving luckyBackup running for several hours.

luckyBackup uses rsync as its backend; so to speak, it is more of a fancy frontend for rsync.

 

9 minutes ago, Goldmaster said:

Crashed

Is it possible that you have many many many small files in those 2TB?

Usually this is an indication that the container crashed because it ran out of RAM.

 

10 minutes ago, Goldmaster said:

I'm wondering whether to switch to DirSyncPro, as that can scan, then compare, similar to GoodSync.

You can, but it would basically be the same if you have lots of files to sync, and from my experience DirSyncPro can use a lot of RAM when you have many small files.

 

You can, however, create smaller sync chunks in luckyBackup on a per-folder basis to circumvent this.
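Under the hood, such per-folder chunks amount to running one rsync job per top-level share instead of a single giant one. A minimal sketch, with hypothetical share names and backup paths (the loop only echoes the commands; remove the echo to actually run them):

```shell
# Sketch: one rsync job per share instead of one huge sync.
# SRC, DEST and the share names are example values - adjust to your setup.
SRC=/mnt/user
DEST=/mnt/disks/backup

for share in photos music documents; do
    # echo prints the command instead of running it; remove it to sync for real.
    echo rsync -a --delete --dry-run "$SRC/$share/" "$DEST/$share/"
done
```

Splitting the work this way keeps each rsync run's in-memory file list small, which is exactly why it helps when RAM is tight.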

 

I'm using luckyBackup to sync my whole server and have had no issues so far.

Link to comment
4 hours ago, ich777 said:

luckyBackup uses rsync as its backend

Yeah, thank you. I think I just need to be patient. The trouble is that I can't leave my server running overnight, as it's next to where I sleep. So I am wondering if I could somehow resume building the file list? It's not practical to leave it running all the time. The s3 sleep plugin doesn't put the server to sleep, only shuts it down.

Link to comment
2 minutes ago, Goldmaster said:

So I am wondering if I could somehow resume building the file list?

I don't think so because rsync isn't made for that since the files could change on the source and that would be pretty bad.

 

3 minutes ago, Goldmaster said:

It's not practical to leave it running all the time. The s3 sleep plugin doesn't put the server to sleep, only shuts it down.

May I ask how often you sync? 2TB usually shouldn't take more than a day, so it should finish and shut off fine in the evening; or at least split your syncs into separate folders and run them on a schedule.

Even DirSyncPro can't do that...

Link to comment
1 hour ago, ich777 said:

May I ask how often do you sync?

I haven't synced for quite some time. I had a 2TB drive that was full and couldn't take any more. The temporary solution was to exclude less important folders, until I could not do much else. I have now got an 8TB drive, moved the copied stuff over from the old 2TB drive, and let it sync.

Link to comment
23 minutes ago, Goldmaster said:

I have now got an 8TB drive, moved the copied stuff over from the old 2TB drive, and let it sync.

But the file list generation shouldn't take that long; how many files are you backing up?

Even my Nextcloud folder (which is currently huge, with a lot of pictures and so on) doesn't take much longer than about 20 minutes to generate the file list.

 

23 minutes ago, Goldmaster said:

I haven't synced for quite some time.

But then one night without shutting down the server should do the job just fine so that everything finishes, or am I wrong? I think you won't have to let it run overnight anymore once this huge sync is finished, correct?

Link to comment
1 hour ago, ich777 said:

how many files are you backing up?

It's currently at 31,779,000 files and still going. I'm fairly certain something is wrong, but the Docker mappings and all that are correct. It's running in dry run anyhow, and the /mnt/ which the container sees is actually /mnt/user/ and is set to read-only.

1 hour ago, ich777 said:

one night without shutting down the server should do the job just fine


Well, if I didn't have that bug I mentioned earlier, then it would have done the job fine.

I might leave it, and if it gets to when I go to bed, I will have to turn the server off (the s3 sleep plugin doesn't work).

Link to comment
3 minutes ago, Goldmaster said:

There's no way it could run out of RAM; my signature shows I have 128GB of RAM.

This was just a guess, I really don't know what the cause of this issue was because I don't know what you are backing up.

 

4 minutes ago, Goldmaster said:

It's currently at 31,779,000 files and still going. I'm fairly certain something is wrong, but the Docker mappings and all that are correct. It's running in dry run anyhow, and the /mnt/ which the container sees is actually /mnt/user/ and is set to read-only.

It is never a good idea to sync everything blindly, because you are most certainly backing up files that aren't necessary. Anyway, let it run and report back when it's finished creating the file list.

Link to comment

Sorry to bother you ich, I'm sure I'm doing something wrong. For the life of me, I can't get the scheduler to work.

 

I've created a task for each of the folders I want backed up using the "default" profile. If I click RUN, it works just fine.

image.thumb.png.409ee2ee78b6eeece41f99b265e48ba0.png

 

I'm trying to set up a schedule for it to run all the tasks in the default profile every Friday at 1am.

image.thumb.png.268a8944479a61b09c643e7e02ad5b17.png

 

I click OK, and then cronIT!!, and then I check in on Friday morning and it looks like nothing happened. What am I doing wrong?

Here's what the crontab looks like
image.png.6c9c906ce77ec90a839bdb8c4d1af097.png

Link to comment
18 minutes ago, Kilrah said:

You're not ticking "Console mode".

Is that really all I was doing wrong?

 

When a scheduled backup is running, does it still show in the Information Window? Maybe it's just going on behind the scenes?

I set a backup to run a few minutes ago, and I can hear some movement from the drives on my backup server, but I don't see anything happening in luckyBackup.

Link to comment
16 minutes ago, danimal86 said:

Is that really all I was doing wrong?

Yes. The description also mentions that.

 

17 minutes ago, danimal86 said:

When a scheduled backup is running, does it still show in the Information Window? Maybe it's just going on behind the scenes?

Behind the scenes.

 

17 minutes ago, danimal86 said:

I set a backup to run a few minutes ago, and I can hear some movement from the drives on my backup server, but I don't see anything happening in luckyBackup.

This is the default behaviour, because luckyBackup is designed as a desktop application; the schedules only work when you use console mode.
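For reference, a cron entry with console mode enabled looks roughly like the line below. The binary path, flag, and profile path are assumptions based on luckyBackup defaults; check what cronIT actually writes to your crontab:

```shell
# Hypothetical crontab entry: run the "default" profile every Friday at 1am
# in console mode (no GUI). Verify the path and flags in your own crontab.
0 1 * * 5 /usr/bin/luckybackup --no-questions /root/.luckyBackup/profiles/default.profile
```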

Link to comment
8 hours ago, ich777 said:

let it run and report back when it's finished creating the file list.

This is crazy: 16 hours later and this is as far as it gets.

image.thumb.png.815d4b7f181c07f15964f9beb772a1cb.png

I will have to cancel; I'm sure something is wrong. I'm backing up 2.98TB of content.

This is the number of files and folders in my /mnt/user folder:
357417115_Screenshot2023-03-03122448.thumb.png.76f3621933d90de00fc8260ccc004dc4.png

 

The only thing I can do is see how the backup performs on a second, nearly full 2TB portable drive that I have, using the last profile I used for it. If it takes much less time, I could restore the appdata settings, as I must have removed a folder I had excluded. But again, none of this makes any logical sense; it seems to go on forever. I have not changed Docker settings, only moved from the original 2TB drive onto the new 8TB drive.

Link to comment
4 minutes ago, Kilrah said:

You probably have soft/hard links in there that refer to each other and cause an infinite loop.

That's absurd, as I know I don't have any hard or soft links. I haven't changed anything in the Docker config. The only thing I had changed was removing previously excluded directories, as I now have a bigger backup drive. I have gone about adding files normally. I could exclude hard and soft links if possible.

Still a little amusing in a way, but I did think something was stuck in some kind of loop, scanning itself.

Link to comment
7 hours ago, Goldmaster said:

Still a little amusing in a way, but I did think something was stuck in some kind of loop, scanning itself.

Please keep in mind that if you used rsync directly it would be completely the same.

 

Have you tried splitting your sync into smaller syncs yet? So to speak, maybe create one sync per share.

Link to comment
4 hours ago, ich777 said:

Have you tried splitting your sync into smaller syncs yet? So to speak, maybe create one sync per share.

Using my previous settings, it seems to now take around 10 minutes total time.

image.thumb.png.219d3290cab453c092ec46a20a4b6d82.png

 

What I will do is remove one excluded folder, and it should take just over the same amount of time.

It did dawn on me that I used a Docker folder in the past, rather than an image file. After removing excluded folders (namely the system folder, which contains both the docker folder and the docker img file), I think luckyBackup was trying to back up the docker folder, hence the long time. I will remove one excluded folder at a time and see.
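For reference, at the rsync level (which luckyBackup's exclude fields map onto) skipping that folder looks roughly like the command built below; the paths are illustrative, not your exact mappings:

```shell
# Build the rsync command as a string so it can be reviewed before running.
# system/docker/ is the illustrative relative path of the old Docker folder;
# --exclude patterns are matched relative to the source directory.
CMD="rsync -a --dry-run --exclude=system/docker/ /mnt/user/ /mnt/disks/backup/"
echo "$CMD"
```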

  • Like 1
Link to comment
5 hours ago, Kilrah said:

Yep, definitely exclude that. And if you moved back to using an image you should remove the folder.

This has solved the problem; I will remove the old docker folder. I did try just deleting it in Krusader, but it seemed to not do anything.

 

I guess run sudo rm -rf /mnt/user/docker/docker/ or something?

Link to comment
On 5/16/2022 at 1:43 AM, ich777 said:

You can change that by creating a variable like this:

Imagen.png.f4618a270d663c32f28507d4f8d43f71.png

 

But that wouldn't change much, even if it stopped working. I really can't think of what causes this on your system, because it seems you're the only one getting that error.

I'm also experiencing problems with the JDownloader container folders: even though it runs as nobody with 000, the permissions of the created files come out as drwxrwx---.

Link to comment
4 hours ago, NEURO said:

I'm also experiencing problems with the JDownloader container folders: even though it runs as nobody with 000, the permissions of the created files come out as drwxrwx---.

Please set these permissions to 777

 

It can often be the case that the downloaded files are archived with the permissions you've written (770), and after unpacking the files the permissions are not changed; this always depends on how the files were archived.
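A minimal sketch of opening such permissions up recursively. It demonstrates on a temporary directory, so point DIR at your real download folder instead; 777 is what's suggested above, though a more restrictive mode may also do:

```shell
# Demo on a throwaway directory; replace DIR with your download folder.
DIR=$(mktemp -d)
mkdir -p "$DIR/unpacked"
chmod 770 "$DIR/unpacked"    # simulate the archive's restrictive 770 mode

# Recursively set everything under DIR to 777, as suggested above.
find "$DIR" -exec chmod 777 {} +
```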

Link to comment
11 hours ago, ich777 said:

Please set these permissions to 777

 

It can often be the case that the downloaded files are archived with the permissions you've written (770), and after unpacking the permissions are not changed; this always depends on how the files were archived.

Change 000 to 777 here?

Captura de pantalla 2023-03-05 120709.png

Link to comment

@ich777 I am on Unraid 6.11.5 and had PhotoPrism running with the standard SQLite database. That was just to try the basic functionality. However, I have a large photo collection, and the PhotoPrism documentation says it is better to run it on a MySQL-type database such as MariaDB.

 

So I removed the original PhotoPrism container including its container image, then cleaned appdata and deleted the photoprism subfolder, to make a completely new installation of PhotoPrism as well as MariaDB (I used the one from linuxserver).

Sidenote: I already have the mariadb-original container from @mgutt installed and working with Nextcloud and NGINX proxy. I thought it would be safe to use a different MariaDB container, so as not to mix things up...

 

All containers are in bridge mode on my Unraid server.

To prevent conflicts I changed the port of the MariaDB container for PhotoPrism to 3307, as 3306 is already in use by the mariadb-original container, which I use for Nextcloud.

I also changed all the path variables so I get a different appdata path for the new PhotoPrism MariaDB.

 

After everything was installed using the Unraid webGUI, MariaDB and PhotoPrism were shown in the Unraid Docker list.

I checked the logs of the two containers: no faults, no errors, no warnings; it looked normal.

Then I wanted to start the Photoprism webUI and Firefox browser on my Ubuntu PC showed this:

image.thumb.png.caaaa7119c56f1ba1e1b2eae75e32908.png

here is the docker list:

image.thumb.png.39d478492414e1277ab654955d5e4778.png

 

What do I need to do to connect the MariaDB container (in the list it is named mariadb-PP) correctly to PhotoPrism?

Is there anything special I need to do in addition to carefully following the advice in the container templates (such as correct passwords, etc.)?

 

How do I configure MariaDB correctly with PhotoPrism?

Haven't found any good advice yet here in the forum.

Thank you for help!

 

 

Link to comment
4 minutes ago, ullibelgie said:

What do I need to do, to connect mariadb container (in the list it is named mariadb-PP) correctly to Photoprism;

Configure it as instructed:

grafik.png.2f830074a054ece4ba64aa6fb4f342ff.png

 

I really don't know what you've configured so far, and whether you have already created a database and a user with a password in the newly installed MariaDB container for PhotoPrism (never use the root account for an application in MariaDB).

I also don't think that you have no errors in the logs, because otherwise you would be able to connect to the PhotoPrism container.

 

Here is also the documentation from PhotoPrism for MariaDB, and over here you can find how to create a user with a password, create a database, and grant access to it (please don't use localhost, because otherwise you can't connect to it).
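The user/database setup the linked documentation walks through boils down to a few SQL statements. A sketch with placeholder names and password; the script only prints the SQL so you can paste it into the MariaDB console (e.g. via docker exec -it mariadb-PP mysql -uroot -p). Note the '%' host instead of localhost, per the advice above:

```shell
# Placeholder database name, user, and password - replace before use.
# The script only prints the statements; paste them into the MariaDB console.
SQL="CREATE DATABASE IF NOT EXISTS photoprism CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER IF NOT EXISTS 'photoprism'@'%' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON photoprism.* TO 'photoprism'@'%';
FLUSH PRIVILEGES;"
printf '%s\n' "$SQL"
```

Then enter that database name, user, and password in the PhotoPrism template so the two containers can talk to each other.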

Link to comment
