aptalca

(Support) Aptalca's docker templates


Make sure your docker.img file is not full, and that your cache drive (or wherever the configuration files are) isn't full either.
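A quick way to check both from the console (paths assume the stock unraid layout, with the docker image loop-mounted at /var/lib/docker):

```shell
# check free space on the cache drive and inside the docker image;
# 100% in the Use% column means that filesystem is full
df -h /mnt/cache
df -h /var/lib/docker
```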


Cache drive is fine:

 

root@Tower:~# df -h /mnt/cache

Filesystem      Size  Used Avail Use% Mounted on

/dev/sdh1      224G  103G  121G  46% /mnt/cache

 

I don't believe it's full:

 

Label: none  uuid: ac4f2842-7bc8-4d78-80b1-0d6a37468ef7

Total devices 1 FS bytes used 949.55MiB

devid    1 size 10.00GiB used 4.04GiB path /dev/loop1

 

/dev/loop1      10485760    1069572    7451036  13% /var/lib/docker

root@Tower:/mnt/cache# btrfs filesystem df /var/lib/docker

Data, single: total=2.01GiB, used=886.60MiB

System, DUP: total=8.00MiB, used=16.00KiB

System, single: total=4.00MiB, used=0.00B

Metadata, DUP: total=1.00GiB, used=62.94MiB

Metadata, single: total=8.00MiB, used=0.00B

GlobalReserve, single: total=32.00MiB, used=0.00B

 

 

 


Your first log shows:

using existing mysql database

 

But your second shows:

moving mysql to config folder

 

Did you delete the mysql folder in your config folder in between?

 

Somehow your install is screwed up. You should try reinstalling the container. To do that, click on the container's name in the unraid webgui to edit it, change nothing, and hit save. It should reinstall with the same settings.

 

Based on the line from the second log, you may have lost your mysql database, because it is only supposed to be copied on first install, if there is no existing mysql database in the config folder. You might have to set up your cameras again.

 

Let me know how that goes.


Thanks,

 

No I didn't delete any mysql folder.

 

I tried the reinstall as you stated and that appeared to resolve it:-

 

*** Running /etc/my_init.d/firstrun.sh...

apache.conf already exists

zm.conf already exists

moving mysql to config folder

using existing data directory

creating symbolink links

setting the correct local time

 

Current default time zone: 'Europe/London'

Local time is now: Fri Aug 14 15:01:45 BST 2015.

Universal Time is now: Fri Aug 14 14:01:45 UTC 2015.

 

increasing shared memory

starting services

* Starting MySQL database server mysqld

...done.

* Checking for tables which need an upgrade, are corrupt or were

not closed cleanly.

* Starting web server apache2

AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.7. Set the 'ServerName' directive globally to suppress this message

*

Starting ZoneMinder: success

 

*** Running /etc/rc.local...

*** Booting runit daemon...

*** Runit started as PID 675

Aug 14 15:02:07 ccbc02026a2d syslog-ng[682]: syslog-ng starting up; version='3.5.3'

Aug 14 15:02:07 ccbc02026a2d zmupdate[670]: INF [Checking for updates]

Aug 14 15:02:08 ccbc02026a2d zmupdate[670]: INF [Got version: '1.28.1']

 

However, I cannot connect to the WebUI; it just presents an index of /zm with events & tools folders.

 

I'll try deleting everything and setting it up again.

 

 


OK, deleting the Zoneminder container & image didn't help. I ended up starting from scratch: disabled dockers, removed its .img and started again (yours is the first docker I have tried, so not a big deal).

 

All OK now, hopefully won't happen again.


OK, deleting the Zoneminder container & image didn't help. I ended up starting from scratch: disabled dockers, removed its .img and started again (yours is the first docker I have tried, so not a big deal).

 

All OK now, hopefully won't happen again.

 

Is your local config folder in a cache-only share?


Well it died again overnight in the same fashion as before and won't restart  :(

 

Think I'll go back to a different solution; dockers don't seem very successful for me.


I am REALLY not an expert... but have you tried scrubbing your cache drive? It's like an error checking/repairing utility for BTRFS drives. Go to the Main screen where all drives are listed and click on the word "cache" where your cache drive is listed. On the page that opens there is a button that says "Scrub". Run that, refresh the screen, and it will show if there are errors. If you have errors, you will need to remove the -r in the text box and run it again.

 

If you get uncorrectable errors, you need to back up the data on your cache drive, re-format it and rebuild again.
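For reference, the same scrub can be run from the console (this assumes a btrfs cache mounted at /mnt/cache; the commands are standard btrfs-progs):

```shell
btrfs scrub start -r /mnt/cache   # -r: read-only pass, only reports errors
btrfs scrub status /mnt/cache     # shows corrected / uncorrectable error counts
btrfs scrub start /mnt/cache      # without -r, correctable errors are repaired
```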

 

Also check the SMART report on the same page below to see if there are issues with the HDD.

 

A bad drive caused some headaches for me...


Updated JDownloader2 and OpenRemote containers

 

OpenRemote now runs as user nobody instead of root

 

JDownloader2 is now using hurricanehernandez's xrdp v1.3 base. Clipboard should now work. In order to copy/paste to and from the container's gui window, use the menu opened with ctrl-alt-shift. Whatever you copy within the container should show up in the box in that side menu. Whatever you put into that box will be available in the container's clipboard. Pretty cool, really.


My cache drive is xfs; I cannot find any scrub option for that.

 

I thought the cache drive had to be BTRFS in order for Docker to work... can anyone please confirm? You are storing the Docker.img file on the cache drive, right?


My cache drive is xfs; I cannot find any scrub option for that.

 

I thought the cache drive had to be BTRFS in order for Docker to work... can anyone please confirm? You are storing the Docker.img file on the cache drive, right?

I have my Docker.img stored on an XFS cache disk. It was ReiserFS before I converted the array and cache drives to all XFS. As far as I know, only the data format inside the .img file has to be BTRFS. Only when you use a Cache Pool do the cache drives have to be BTRFS, as the pooling function is part of BTRFS.


My cache drive is xfs; I cannot find any scrub option for that.

 

I thought the cache drive had to be BTRFS in order for Docker to work... can anyone please confirm? You are storing the Docker.img file on the cache drive, right?

 

That used to be the case in earlier v6 betas, where you needed to format the drive as btrfs to host docker images. But then they created the docker.img file. That image file itself is formatted as btrfs, so you no longer need the whole cache drive to be btrfs. All docker images are now stored inside that btrfs-formatted image file.
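A rough sketch of the mechanism (sizes and paths here are illustrative, and the mkfs/mount steps need root, so they are commented out):

```shell
truncate -s 1G /tmp/docker_demo.img    # sparse file; grows as blocks are written
stat -c %s /tmp/docker_demo.img        # apparent size: 1073741824 bytes
# mkfs.btrfs /tmp/docker_demo.img      # the file itself gets a btrfs filesystem
# mount -o loop /tmp/docker_demo.img /var/lib/docker
```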


Well it died again overnight in the same fashion as before and won't restart  :(

 

Think I'll go back to a different solution; dockers don't seem very successful for me.

 

You can check your drive for errors. You can also recreate your docker.img. You can also do a memtest (an option during unraid boot)

 

Something is messing up your docker installation, I don't know what. Please double check that your config folder is on the cache drive in a cache-only share. Also make sure that you entered that location in the container settings as /mnt/cache/blahblah rather than /mnt/user/blahblah (I don't know why, but many people have reported problems with using the user share path).

 

Many of us have been using these containers for a long time and they have been very stable.


Hi, I ran a SMART test on my cache drive (SSD) and it was fine. I'm using it for recordings from my PVR and for my Windows VMs, so it gets a lot of use, and they have all been OK and 100% stable.

 

I deleted the docker.img and recreated as per previous update but it didn't help.

 

My config folder is on /mnt/cache/docker/<configs> solely on the cache drive.

 

I agree something is messing up my docker but have no idea what  :-[


Hi, I ran a SMART test on my cache drive (SSD) and it was fine. I'm using it for recordings from my PVR and for my Windows VMs, so it gets a lot of use, and they have all been OK and 100% stable.

 

I deleted the docker.img and recreated as per previous update but it didn't help.

 

My config folder is on /mnt/cache/docker/<configs> solely on the cache drive.

 

I agree something is messing up my docker but have no idea what  :-[

 

Since you mentioned running VMs, is it possible that you might be running out of RAM?


JDownloader2 looks interesting so I tried it out. But you can't paste anything into it, so I am trying to figure out what the point of this docker is. Is there some trick I am not aware of?


JDownloader2 looks interesting so I tried it out. But you can't paste anything into it, so I am trying to figure out what the point of this docker is. Is there some trick I am not aware of?

 

2 options:

 

1) Clipboard works through the side menu you can open with ctrl-alt-shift. Whatever you paste into that box appears in the container's clipboard. Whatever you copy inside the container appears in that box.

 

2) JDownloader2 has a great remote web interface you can set up in its settings. Then you go to my.jdownloader.org from any browser to access your jdownloader interface. No need to copy paste into the container gui


AmazonEcho Home Automation Bridge container has been updated.

 

Maximum memory is now set for java so it no longer ties up unnecessary RAM (by default, java reserves a certain amount of ram based on the total available, and it can end up hogging a large chunk in high-memory environments; on my server it tied up about 1GB out of 8GB for no reason, when the same app can easily run in 300MB on an rpi).

 

Also the new version allows for custom version installs. Just pass a new environment variable under advanced settings: VERSION=0.X.X

If you update and it stops working for you, you can easily go back to the previous version.


Hi, I would like to make a request regarding your JDownloader docker:

 

I've read in this thread that it is possible to change the umask of the docker container:

http://lime-technology.com/forum/index.php?topic=33922.45

 

The problem I have is that I want to move, rename or otherwise handle files after they are downloaded, as another user via smb share. This is forbidden, as I am not the user "nobody".

Do you know if there is a setting to tell jDownloader to use another umask while extracting files?
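For background, umask only affects the mode a file gets at creation time; here is a generic shell demo of the mechanism (nothing jdownloader-specific):

```shell
# umask bits are masked out of the default creation mode (666 for plain files)
rm -f /tmp/umask_a /tmp/umask_b
( umask 022; touch /tmp/umask_a )   # 666 - 022 -> 644, group read-only
( umask 002; touch /tmp/umask_b )   # 666 - 002 -> 664, group writable
stat -c '%a' /tmp/umask_a /tmp/umask_b   # prints 644 then 664
```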


Hi, I would like to make a request regarding your JDownloader docker:

 

I've read in this thread that it is possible to change the umask of the docker container:

http://lime-technology.com/forum/index.php?topic=33922.45

 

The problem I have is that I want to move, rename or otherwise handle files after they are downloaded, as another user via smb share. This is forbidden, as I am not the user "nobody".

Do you know if there is a setting to tell jDownloader to use another umask while extracting files?

 

Sorry, I just saw this.

 

The problem is not really the umask, but the fact that jdownloader sets the permissions as read-only for the group.

 

In unraid, files are supposed to belong to nobody:users, and all smb users also belong to that same users group. As long as the files have write permission for the group, all smb users should be able to modify them.

 

The issue with JDownloader is that it gives write permission to only the main user, and not the group. That's why smb users cannot move or delete.

 

I just realized this and I'm looking into it.
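As an interim workaround, group write access can be granted by hand; a minimal demo in /tmp (the real path would be your jdownloader download folder, and the chown to unraid's nobody:users convention needs root, so it is shown commented):

```shell
mkdir -p /tmp/jd_demo && touch /tmp/jd_demo/file.bin
chmod 644 /tmp/jd_demo/file.bin      # what jdownloader leaves: rw-r--r--
chmod -R g+w /tmp/jd_demo            # grant the group write access
stat -c '%A' /tmp/jd_demo/file.bin   # now -rw-rw-r--
# chown -R nobody:users /tmp/jd_demo # unraid ownership convention; needs root
```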


I helped myself by adding this to the go file:

 

echo "* * * * * /usr/local/emhttp/plugins/dynamix/scripts/newperms /mnt/cache/entpackt/ 1> /dev/null" >> /var/spool/cron/crontabs/root

 

This points to my jdownloader folder where stuff is extracted to.


I'm glad you figured out the newperms script; that was going to be my suggestion, and thanks for sharing the go file command for others' reference.

 

I tried to come up with ways to fix the docker container but couldn't think of a good method, as it allows users to pick any location as the download location. Giving the user the ability to change the umask would complicate things, and it wouldn't solve the issue of other smb users not being able to access the files.

 

I think your solution is the best way to go about it until JDownloader2 adds the ability to set downloaded file permissions.

