unRAID Server Release 6.0-rc5-x86_64 Available



This may be unrelated to rc5, but I remember seeing this in one of the beta releases: a plugin update (community repositories) failed due to no space in /var/log. How do I fix this?

 

Filesystem      Size  Used Avail Use% Mounted on
tmpfs           128M  128M     0 100% /var/log
/dev/sda1        30G  6.8G   23G  24% /boot
/dev/md1        2.8T  1.9T  903G  68% /mnt/disk1
...
..
/dev/sdp1       676G  488G  185G  73% /mnt/cache
shfs             32T   21T   12T  64% /mnt/user0
shfs             33T   21T   12T  64% /mnt/user
/dev/loop0       20G  5.0G   14G  27% /var/lib/docker
/dev/loop1      1.4M  116K  1.2M   9% /etc/libvirt

http://lime-technology.com/forum/index.php?topic=31752.msg374338#msg374338

 

Thank you!
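For what it's worth, the usual fix for a full /var/log tmpfs is to find and truncate the offending log rather than reboot. Here's a runnable sketch of the idea using a scratch directory to stand in for /var/log (on the real server you'd point du at /var/log itself):

```shell
# Scratch directory standing in for /var/log; substitute /var/log on a live server.
LOGDIR=$(mktemp -d)
head -c 1048576 /dev/zero > "$LOGDIR/syslog"   # fake 1 MiB runaway log
du -k "$LOGDIR"/* | sort -rn | head -3         # identify the space hogs
: > "$LOGDIR/syslog"                           # truncate in place; daemons keep their file handle open
du -k "$LOGDIR/syslog"                         # back to 0
```

On unRAID the tmpfs can also be grown until the next reboot (e.g. `mount -o remount,size=256m /var/log`), but trimming the runaway log is the better first step.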

There is one maxed out mount... I think it's my docker.img? I'll have to check which instance ate up all the space...

 

root@archive:~# df
Filesystem       1K-blocks        Used  Available Use% Mounted on
tmpfs               131072        2180     128892   2% /var/log
/dev/sda1          7744512      472280    7272232   7% /boot
/dev/md1        3906899292  3341603232  565296060  86% /mnt/disk1
....
/dev/sdj1       1953514552   331689940 1620799676  17% /mnt/cache
shfs           62493446056 54849197136 7644248920  88% /mnt/user0
shfs           64446960608 55180887076 9265048596  86% /mnt/user
/dev/loop0        15728640    15523888     118608 100% /var/lib/docker
/dev/loop1            1843          80       1553   5% /etc/libvirt

 

Edit: urgh, it seems that recreating/re-adding my docker instances from my previous templates is faster than trying to understand how btrfs layering works and the thousands of subvolumes in my docker.img.

 

For other newb users like me who've never gone through this before: recreating a new docker container from your past template is a breeze. All your settings are saved in the template, including directory and port mappings.
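One thing that makes docker.img confusing when it fills: it's a sparse loopback file, so its 20G apparent size, the blocks it actually occupies on the cache drive, and the btrfs filesystem inside it are all tracked separately. A tiny runnable illustration of apparent vs. actual size for a sparse file (mktemp stands in for docker.img; loop-mounting the real thing needs root):

```shell
img=$(mktemp)              # stand-in for /mnt/cache/docker.img
truncate -s 20M "$img"     # sparse "image": apparent size 20 MiB
stat -c %s "$img"          # apparent size in bytes: 20971520
du -k "$img"               # blocks actually allocated: 0 until data is written
```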

Link to comment

I'm not sure if this is anything or not, just throwing out some observations. As I was reviewing my syslog, I saw the following entries, which had me wondering if maybe there isn't some sort of race condition or order-of-operations issue between:

  • 'docker',
  • 'dnsmasq'
  • '/etc/resolv.conf' file population

 

It looks like Docker starts before /etc/resolv.conf is populated. A possible reason my dockers work is that I set up a read-only mapping of '/etc/resolv.conf', so when it gets modified on the unRAID host the update automagically happens in the docker container.

You might be onto something...

 

According to this doc:

https://docs.docker.com/articles/networking/

 

Docker already does exactly what you describe, although it looks to me like it's mounted rw.  Can you post the output of:

 

docker exec <container> mount

 

of one of your containers?

 

 

 

P.S. I'd also be grateful if you changed your sig  ;)

 

There is some information in the doc limetech linked that could explain the docker dns issue:

 

You might wonder what happens when the host machine's /etc/resolv.conf file changes. The docker daemon has a file change notifier active which will watch for changes to the host DNS configuration.

...

When the host file changes, all stopped containers which have a matching resolv.conf to the host will be updated immediately to this newest host configuration. Containers which are running when the host configuration changes will need to stop and start to pick up the host changes ...

 

If a container starts before dhcpd has updated the host resolv.conf, then the container will have a stale version of resolv.conf. A container stop and start will then pick up the changes.

 

Note: For containers which were created prior to the implementation of the /etc/resolv.conf update feature in Docker 1.5.0: those containers will not receive updates when the host resolv.conf file changes. Only containers created with Docker 1.5.0 and above will utilize this auto-update feature.
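The quoted rule (running containers keep their stale copy; a stop/start re-syncs from the host) can be mimicked in a few lines of shell. This is a toy model with plain files standing in for the host and container copies, not Docker's actual code:

```shell
host=$(mktemp); cont=$(mktemp)
echo "nameserver 192.168.1.1" > "$host"        # host resolv.conf at container creation
cp "$host" "$cont"                             # container receives a copy
running=1
echo "nameserver 8.8.8.8" > "$host"            # host DNS changes while "container" runs
if [ "$running" -eq 0 ]; then cp "$host" "$cont"; fi   # running: daemon does not propagate
running=0                                      # container stop...
cp "$host" "$cont"                             # ...and start picks up the host change
cat "$cont"                                    # now shows 8.8.8.8
```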

 

I'm guessing sparklyballs has a bunch of sparkly new containers created after Docker 1.5.0. I haven't seen any problems, but my containers are 3 months old, created before unRAID shipped with Docker 1.5.0 (I think it first went public in unRAID 6.0beta15, Apr 17).

Link to comment


 

Nice breakdown.

 

All 3 of my tweaked dockers were created 3 months ago, likely using Docker 1.4.1 inside the  Boot2Docker environment.

Link to comment


 

 

For sure that's it, and it explains why we can't reproduce.  Thanks man, I don't know how we missed that note.

Link to comment

Just noticed some really slow transitions & other weirdness in the Docker tab...

 

First restart after upgrading to rc5 hung for about 5 minutes or so (I had a 20s delay in the go script).  emhttp never started, the server was not reachable, and no dockers were running.  Since the array wasn't mounted, I did a restart from putty (telnet was working), and everything came up as per normal.

 

When I went into plex to make sure it was running, I noticed there was an update on the plex-pass track, so I went to edit my plex install with the current version number (I'm forcing upgrades using the version variable in the docker config).  It took about 25-30 seconds for the edit screen to come up. Once I submitted my changes, the screen that normally shows the changed bits downloading for the docker came back saying 0 bytes loaded, with the Done at the bottom. I clicked the log button and watched the log be populated by the removal and reinstallation of the docker (albeit very slowly).  This is all different behavior from what I had in RC4 and previous versions.

 

Link to comment

Just performed an update from rc4 to rc5 using the built-in webui. Rebooted and noticed that all of the docker containers except two have been deleted; I can see the docker images (now marked as orphaned).

 

If I do a docker ps -a it ties up with what the webui is showing me, namely that there are now only two containers remaining. The single Windows 8.1 VM I have defined seems to be unaffected and started as per normal.

 

Let me know, guys, if you want a copy of the diagnostics zip.

 

One other thing to note that has been a bug for a while with my setup is the dashboard memory size/installed: it's still reporting "16384 MB (max. 8 GB)". I have 16 GB installed, so I'm not sure where "max. 8 GB" is coming from, but it's incorrect.

 

Off to re-create my docker containers :-) Oh my god, just spotted my post count is the devil's number, scary :-)

 

Edit: quick screenshot of the carnage: ndoguh.jpg

Link to comment

Just upgraded to RC5 and all went very smoothly.    First time I've used the built-in "Update" button, and it's definitely nice to have this so well integrated.

 

All the share/disk status indicators are now working perfectly [as the release notes indicate they should  :) ].

 

Link to comment

... You can see the complete change log in the webGui Plugins page: after 'Check for Updates', click the blue Info button next to the version.

 

Just noticed this post => That is a SLICK feature!!  Between the built-in Update button and the nifty display of the release notes, this is a BIG improvement over the old download/unzip/copy-to-flash/read-the-text-file process.

 

Link to comment

Info screen:

 

xjS3BlA.png

 

 

 

Clicking More takes me to this...

 

 

vkYCdWp.png

 

 

The mouseover on the More button says:

 

unraid-nas/Tools/SystemProfiler

Do you have dynamix system info installed?

 

 

WJyKA6G.png

 

I used to have dynamix stats until a few weeks ago.

 

 

root@Unraid-Nas:/boot/config/plugins# ls
DockerSearch.plg*            docker.categorize/      dynamix.cache.dirs/      dynamix.system.temp.plg*
NerdPack/                    docker.categorize.plg*  dynamix.cache.dirs.plg*  ipmitool/
NerdPack.plg*                dockerMan/              dynamix.kvm.manager/     ipmitool.plg*
community.applications/      docker_search/          dynamix.plg*
community.applications.plg*  dynamix/                dynamix.system.temp/
root@Unraid-Nas:/boot/config/plugins# cd dynamiz
-bash: cd: dynamiz: No such file or directory
root@Unraid-Nas:/boot/config/plugins# cd dynamix
root@Unraid-Nas:/boot/config/plugins/dynamix# ls
docker-update.cron*         dynamix.cfg*   monitor.ini*  notifications/      plugin-check.cron*  users/
dynamix-2015.01.21.tar.gz*  monitor.cron*  mover.cron*   parity-check.cron*  status-check.cron*
root@Unraid-Nas:/boot/config/plugins/dynamix#

Link to comment

I used to have dynamix stats until a few weeks ago.

The More button takes you to the screen for it.  Obviously a bug, since the button shouldn't be there if the plugin isn't installed.

 

root@Unraid-Nas:/boot/config/plugins/dynamix# cat dynamix.cfg
[display]
date="%A, %d-%m-%Y"
time="%R"
number=",."
scale="-1"
align="right"
tabs="0"
text="1"
view=""
total="1"
spin="1"
usage="0"
icons="1"
banner=""
theme="white"
unit="C"
hot="45"
max="55"
poll="0"
refresh="-1000"
critical="90"
warning="70"
sysinfo="/Tools/SystemProfiler"

Link to comment

Memory reporting is still askew in the System Information popup:

8jTxl3G.png

 

You can check this info yourself at the command line: run dmidecode.  DMI info is stored in the BIOS, and manufacturers have never placed a priority on its accuracy or completeness.  One thing that might be fixable, and which you can check, is whether the Max Installed figure is coming from the right number.
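To see where a bogus "max. 8 GB" could come from, compare DMI type 16's "Maximum Capacity" against the sum of the type 17 module sizes. Live dmidecode output needs root, so here is the arithmetic on a captured sample fragment (the sample values are hypothetical, chosen to mirror the 16 GB-installed / 8 GB-max mismatch reported above):

```shell
# Hypothetical dmidecode fragment: type 16 (Maximum Capacity) and type 17 (module sizes).
dmi='	Maximum Capacity: 8 GB
	Size: 8192 MB
	Size: 8192 MB'
installed=$(printf '%s\n' "$dmi" | awk '/Size:/ {sum += $2} END {print sum}')
bios_max=$(printf '%s\n' "$dmi" | awk '/Maximum Capacity:/ {print $3 * 1024}')
echo "installed=${installed} MB, BIOS max=${bios_max} MB"   # 16384 vs 8192: the DMI table is wrong
```

On the real box, `dmidecode -t 16` and `dmidecode -t 17` (as root) produce the corresponding records; if type 16 is wrong in the BIOS table, no software fix will help.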

 

I suggest adding an asterisk next to each data item coming from the DMI, with a note at the bottom indicating something like "*  Info from DMI, may be unreliable".

 

My own DMI table is corrupted; it always shows "Invalid entry length (0). DMI table is broken! Stop."  Since Epox is long gone, there's no hope of a new BIOS, so no point in me reporting it.

 

If you have the System Info plugin, almost all of the info comes from the DMI.  However, Ethernet information comes from ethtool eth0, and I suggest adding the info from ethtool -i eth0 and maybe ifconfig to it (see this).

Link to comment

 

You can check this info yourself at the command line, run dmidecode.  DMI info is stored in the BIOS, and manufacturers have never placed a priority on its accuracy or completeness.  One thing that might be fixable and you can check, is whether the Max Installed is coming from the right number.

 

 

That was done here:  http://lime-technology.com/forum/index.php?topic=40368.msg379824#msg379824

 

 

Link to comment

Having some difficulty adding a new disk.  I changed the slots on the main page, but no other dropdowns come up to add the disk!  Am I missing something?

Post a screenshot.

 

Not related, but something you might clean up:

Jun  8 20:34:34 Archive logger: plugin: installing: /boot/config/plugins/dynamix.plg
Jun  8 20:34:34 Archive logger: plugin: not installing older version
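That "not installing older version" message is just a version-string comparison; since dynamix versions are dates (e.g. 2015.01.21), something as simple as sort -V reproduces the check. A sketch with hypothetical version strings, not the actual plugin-manager code:

```shell
installed="2015.01.21"; candidate="2014.12.01"   # hypothetical versions
newest=$(printf '%s\n%s\n' "$installed" "$candidate" | sort -V | tail -n 1)
if [ "$newest" = "$installed" ] && [ "$candidate" != "$installed" ]; then
  echo "plugin: not installing older version"
fi
```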

Link to comment

Still no change with the docker DNS issue...

 

Mr. Balls, please try these commands and post the output:

 

cat /etc/resolv.conf
cat /etc/hosts
hostname

 

Then pick one of your containers with no dns and type:

 

docker exec <container-name> cat /etc/resolv.conf
docker exec <container-name> cat /etc/hosts
docker exec <container-name> cat /etc/hostname

replacing <container-name> with the actual container name.

 

Give me a little while; I've cycled them bridge to host and host to bridge and they're all active.

 

Just working on something on the server.

 

Here are the results of the above, run on a docker currently experiencing the DNS issue.

 

root@GianGi:~# cat /etc/resolv.conf                                                                                                
# Generated by dhcpcd from br0
# /etc/resolv.conf.head can replace this line
domain fios-router.home
nameserver 192.168.1.1
# /etc/resolv.conf.tail can replace this line
root@GianGi:~# cat /etc/hosts
# Generated
127.0.0.1	GianGi localhost
root@GianGi:~# hostname
GianGi
root@GianGi:~# docker exec NZBGet cat /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4
root@GianGi:~# docker exec NZBGet cat /etc/hosts
172.17.0.2	dae7d1bb71d5
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
root@GianGi:~# docker exec NZBGet cat /etc/hostname
dae7d1bb71d5
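The output above shows exactly the stale-copy case: the host has 192.168.1.1 from dhcpcd, while the container still carries 8.8.8.8/8.8.4.4, which Docker falls back to when the host's resolv.conf has no usable nameservers at container start. Staleness can be checked mechanically by diffing the two copies; a runnable sketch with stand-in files (on a live box you'd diff /etc/resolv.conf against `docker exec <container-name> cat /etc/resolv.conf`):

```shell
host=$(mktemp); cont=$(mktemp)
printf 'nameserver 192.168.1.1\n' > "$host"                   # host copy
printf 'nameserver 8.8.8.8\nnameserver 8.8.4.4\n' > "$cont"   # container's fallback copy
if ! diff -q "$host" "$cont" > /dev/null; then
  echo "stale: container resolv.conf differs from host; stop/start the container"
fi
```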

Link to comment

Here are some before-and-afters... It doesn't matter what I set the slots to (14, 18, 20); the same number remains.  Also, though I didn't get a shot of this, if I change the number of cache slots, they don't change either.  And I'll get to cleaning that up!

 

Works for me; tried IE, FF, Chrome.  What browser are you using?

Link to comment
This topic is now closed to further replies.