unRAID Server Release 6.2.0-rc1 Available



fuse/shfs does support symlinks but not hard links.

 

Are you sure? I don't know a lot about FUSE and I've just heard about SHFS. I have read that SHFS stopped development in 2004 and has been superseded by SSHFS. In my searching for a solution or workaround I've seen numerous pages mentioning hard link support for FUSE and SSHFS. If that's the case replacing SHFS with SSHFS (if possible) should allow hard linking.

 

'shfs' is the Limetech-proprietary FUSE-based user share file system; it has nothing to do with other projects out there that might also be named 'shfs'.
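
If you want to see the difference yourself, a quick console test makes it visible (the 'test' share and the file names are just examples):

# on the fuse/shfs user share: the symlink works, the hard link is refused
touch /mnt/user/test/original
ln -s /mnt/user/test/original /mnt/user/test/symlink    # succeeds
ln    /mnt/user/test/original /mnt/user/test/hardlink   # fails on shfs

# directly on a disk share, bypassing shfs, the same hard link succeeds
touch /mnt/disk1/test/original2
ln /mnt/disk1/test/original2 /mnt/disk1/test/hardlink2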

Link to comment

A warning out there for the new 'preferred' setting for 'Use cache disk': do not use it if your share is bigger than the cache disk. It will try to copy the whole share to the cache disk, filling it up. If the mover runs, it will try to move data from cache and you basically get a loop, moving stuff off and back.

 

I thought I'd put it out there.

 

No, it won't move back and forth unless you change the setting.  If cache fills up, it just doesn't move the file(s).

 

So what you are saying is:

 

1) Mover ignores data stored on a cache drive when 'Use cache disk' is set to 'preferred'.

2) unRAID will stop moving data as soon as the cache drive fills up.

3) Mover will not start moving data unless 'Use cache disk' is changed to 'Yes'.

 

Note: When #1 is set, unRAID still attempts to copy to the cache drive even though it's full (which leads to system log notifications about that). If, while unRAID is attempting to copy files over to the cache drive, you change the setting from 'preferred' to 'yes', unRAID doesn't flush the queue and still attempts to copy the files over. The scheduled 'Mover' task kicks in and you've got yourself a situation.

 

Link to comment

After upgrading to RC1, when I go to install a Docker container using a template that I have not used before, all the VARIABLE sections are missing the actual variable names?

 

Is this by design??

 

I've attached 2 images... one is from RC1 and shows no variable names, just the default value... and the other shows what it used to look like.

 

When I edit existing docker configs I see the correct layout... so this is ONLY happening when I'm installing something new that has never been installed.

Not the behavior I'm seeing. Which container are you adding?

 

I see it specifically when I add deluge-vpn, sagetv-server-java-8, and pretty much anything that I add from the "Apps" tab.  But as I said, if I edit an existing container, or try to add from the My Templates section... it shows up correctly.

 

 

Link to comment

I also uninstalled the CA plugin, so I just have stock unRAID.  If I manually add my template repo

https://github.com/stuckless/sagetv-dockers/tree/master/unRAID/stuckless-sagetv

 

And select something from there... I don't get any variable names... just default values...

 

I have 2 machines that are running unRAID Basic, and I updated both today to the RC1, and this is the behaviour that I'm seeing on both :(

 

I guess I need to see if there is a way I can downgrade to another version and wait for RC2.

 

EDIT:  I downgraded to beta-23 and I get the proper display for the docker configuration.  Odd that no-one else experienced this, but I experienced it on 2 different machines :(

[screenshots attached: the RC1 dialog showing no variable names vs. the previous correct layout]

Link to comment

Personally, I strive to keep disk usage below 90% because anything above that could degrade performance for writes on certain filesystems.

 

I believe this is a new recommendation? Can you expand on this in the context of RFS and XFS? Happy to start a new thread if needed as it sounds like quite an important consideration.

 

He prefaced his opinion with "Personally" so I don't think he meant that this was an official recommendation, or even a generally held recommendation, although I believe it may be.  Someone who works at LimeTech is allowed to have personal opinions, right?  ;)

 

I agree with him that it's always wise to leave some free space, especially if it's ReiserFS involved.  I'm not sure we have enough evidence to say the same for XFS.  XFS does seem to handle a full disk better than ReiserFS, at least so far.  We do know ReiserFS can sometimes have difficulty when too full.  "Personally", I like to keep at least 50GB free, on any drive with any format.

Link to comment

Personally, I strive to keep disk usage below 90% because anything above that could degrade performance for writes on certain filesystems.

 

I believe this is a new recommendation? Can you expand on this in the context of RFS and XFS? Happy to start a new thread if needed as it sounds like quite an important consideration.

 

He prefaced his opinion with "Personally" so I don't think he meant that this was an official recommendation, or even a generally held recommendation, although I believe it may be.  Someone who works at LimeTech is allowed to have personal opinions, right?  ;)

 

I agree with him that it's always wise to leave some free space, especially if it's ReiserFS involved.  I'm not sure we have enough evidence to say the same for XFS.  XFS does seem to handle a full disk better than ReiserFS, at least so far.  We do know ReiserFS can sometimes have difficulty when too full.  "Personally", I like to keep at least 50GB free, on any drive with any format.

 

As a disk gets filled up, it gets trickier to find suitable regions of unused space regardless of filesystem (although ReiserFS is notoriously bad). The time it takes is related to the proportion of the space free rather than the raw amount, hence the classical 10% rule of thumb.

 

This is of course way less of an issue with the typical archive scenario of many unRAID users (including me) where the disk is filled once and then mostly read from. I keep some space free in case things get written to the “wrong” drive due to split levels.
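
For anyone wanting to keep an eye on fill levels, a one-liner along these lines does the trick from the console (the 90 threshold is just the rule of thumb above):

# print the df header plus any array disk at 90% use or more
df -h /mnt/disk* | awk 'NR==1 || $5+0 >= 90'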

Link to comment

Agree it's useful to have a bit of free space if you're still writing to a drive.

 

I've got 3 servers (plus a test one that I change around a good bit) => of the 3 primary servers, one is RFS (my oldest media server) ... with 12 of the 14 drives 100% full (< 1GB of free space), and the other 2 have a modest amount of free space and are the only ones still written to.  There's no read penalty from full drives, so this works just fine.

 

My other 2 servers are both XFS -- one has a LOT of free space; the other has a few fairly full drives (> 99%), but they still have 30GB or so free, and I've seen NO degradation of writes to these drives.  XFS is clearly MUCH better than Reiser in terms of writes to full drives.

 

My 6.2 test unit is all XFS ... I'll "play" a bit with it to see if this behavior is any different with 6.2, but I doubt that it will be, as the only real difference is the 2nd parity drive, which shouldn't have any impact on the file system behavior.

 

By the way, since this IS the 6.2 RC thread, I'll note that I just upgraded to the RC and all is working just fine  :)

Link to comment

I upgraded from 6.1.9 to 6.2.0-rc1 this afternoon, and I had a couple of issues:

 

First, I guess eth0 and eth1 were swapped somehow... my system started using a different MAC address, which meant it got a different DHCP lease from the router.  Definitely made me thankful that the IP address was echoed to the console :)

 

The other issue was that none of the new default shares were created at first; I think they finally appeared after I started my first VM.  From there it just took time to recreate the docker.img file in its new location, move files to the new isos and domains shares, re-download all my dockers, and redefine my VMs to use the new locations.  I suppose all of that wasn't technically necessary, but I figured I might as well get on board with the new setup.
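
For what it's worth, if the NICs ever swap again, a quick way to see from the console which MAC ended up on which interface:

# list every ethN interface with its current MAC address
for nic in /sys/class/net/eth*; do
    echo "$(basename "$nic"): $(cat "$nic/address")"
done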

 

 

I don't get any variable names... just default values...

 

I am seeing the same thing, except it happens with all of my existing dockers too -- for instance, on the main edit screen it shows:

  Key 1: 99

  Key 2: 100

But I have to hit the edit button to find out that Key 1 is "PUID" and key 2 is "PGID".  I didn't grab a new screenshot because they look just like the ones stuckless already provided here:

  https://lime-technology.com/forum/index.php?topic=50240.msg482620#msg482620

 

Here are the relevant portions of one of my user template xml files, in case something obvious is wrong:

<Environment>
  <Variable>
    <Value>99</Value>
    <Name>PUID</Name>
    <Mode/>
  </Variable>
  <Variable>
    <Value>100</Value>
    <Name>PGID</Name>
    <Mode/>
  </Variable>
</Environment>

<Config Name="Key 1" Target="PUID" Default="99" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">99</Config>
<Config Name="Key 2" Target="PGID" Default="100" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">100</Config>  

Link to comment

I don't get any variable names... just default values...

 

I am seeing the same thing, except it happens with all of my existing dockers too -- for instance, on the main edit screen it shows:

  Key 1: 99

  Key 2: 100

But I have to hit the edit button to find out that Key 1 is "PUID" and key 2 is "PGID".  I didn't grab a new screenshot because they look just like the ones stuckless already provided here:

  https://lime-technology.com/forum/index.php?topic=50240.msg482620#msg482620

 

Here are the relevant portions of one of my user template xml files, in case something obvious is wrong:

<Environment>
  <Variable>
    <Value>99</Value>
    <Name>PUID</Name>
    <Mode/>
  </Variable>
  <Variable>
    <Value>100</Value>
    <Name>PGID</Name>
    <Mode/>
  </Variable>
</Environment>

<Config Name="Key 1" Target="PUID" Default="99" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">99</Config>
<Config Name="Key 2" Target="PGID" Default="100" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">100</Config>  

 

Ah, so the issue is that the Description is empty.  Was something supposed to update the description to something like:

 

<Config Name="Key 1" Target="PUID" Default="99" Mode="" Description="Environment Variable: PUID" ...

Link to comment

A warning out there for the new 'preferred' setting for 'Use cache disk': do not use it if your share is bigger than the cache disk. It will try to copy the whole share to the cache disk, filling it up. If the mover runs, it will try to move data from cache and you basically get a loop, moving stuff off and back.

 

I thought I'd put it out there.

 

No, it won't move back and forth unless you change the setting.  If cache fills up, it just doesn't move the file(s).

 

So what you are saying is:

 

1) Mover ignores data stored on a cache drive when 'Use cache disk' is set to 'preferred'.

2) unRAID will stop moving data as soon as the cache drive fills up.

3) Mover will not start moving data unless 'Use cache disk' is changed to 'Yes'.

 

1) Yes

2) Not exactly.  If the copy operation of a particular file fails, the partial copy on the target is deleted but the script keeps running.  For example, say there's 500MB free and it's copying a 1GB file - that will fail, but if the next file to be copied is smaller than 500MB that will succeed.

3) Files are possibly moved from array to cache for 'Prefer', and from cache to array for 'Yes'.  The mover will skip any shares with setting 'Only' or 'No'.

 

Note: When #1 is set, unRAID still attempts to copy to the cache drive even though it's full (which leads to system log notifications about that).

Yes, bear that in mind when assigning cache setting 'Prefer'.  BTW with btrfs cache pool it's not uncommon to see very large cache disks.

 

If, while unRAID is attempting to copy files over to the cache drive, you change the setting from 'preferred' to 'yes', unRAID doesn't flush the queue and still attempts to copy the files over.

Once the 'mover' script decides it can move data for a share, it completes the operation on that share without checking if the config setting changed during the transfer.

The scheduled 'Mover' task kicks in and you've got yourself a situation.

Don't do that unless you know the implications.  ;)

 

You can look at the mover script: /usr/local/sbin/mover and see exactly what's going on.
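
For the curious, the per-share behaviour described above boils down to something like the sketch below. To be clear, this is not the actual script (read /usr/local/sbin/mover for that): the .cfg location is the usual unRAID 6 one, and rsync here just stands in for the real script's own file walking.

#!/bin/bash
# Illustration of the mover's per-share direction/skip rules -- NOT the real script.
for cfg in /boot/config/shares/*.cfg; do
    share=$(basename "$cfg" .cfg)
    use_cache=$(sed -n 's/^shareUseCache="\(.*\)"/\1/p' "$cfg")
    case "$use_cache" in
        yes)    src="/mnt/cache/$share"; dst="/mnt/user0/$share" ;;  # cache -> array
        prefer) src="/mnt/user0/$share"; dst="/mnt/cache/$share" ;;  # array -> cache
        *)      continue ;;  # shares set to 'only' or 'no' are skipped entirely
    esac
    [ -d "$src" ] || continue
    # a file that fails to copy (e.g. the target is full) is dropped and the
    # loop keeps going, so smaller files can still succeed afterwards
    rsync -a --remove-source-files "$src/" "$dst/"
done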

Link to comment

Agree it's useful to have a bit of free space if you're still writing to a drive.

 

I've got 3 servers (plus a test one that I change around a good bit) => of the 3 primary servers, one is RFS (my oldest media server) ... with 12 of the 14 drives 100% full (< 1GB of free space), and the other 2 have a modest amount of free space and are the only ones still written to.  There's no read penalty from full drives, so this works just fine.

 

My other 2 servers are both XFS -- one has a LOT of free space; the other has a few fairly full drives (> 99%), but they still have 30GB or so free, and I've seen NO degradation of writes to these drives.  XFS is clearly MUCH better than Reiser in terms of writes to full drives.

 

My 6.2 test unit is all XFS ... I'll "play" a bit with it to see if this behavior is any different with 6.2, but I doubt that it will be, as the only real difference is the 2nd parity drive, which shouldn't have any impact on the file system behavior.

 

By the way, since this IS the 6.2 RC thread, I'll note that I just upgraded to the RC and all is working just fine  :)

 

garycase, gubbgnutten and RobJ, thanks for the replies, and I have to say I agree with what you are saying. This is one reason why I quoted and queried the original recommendation: to see if we could move this to a firmer, non-"personal" recommendation.

 

A 10% reservation, if adhered to, could represent two complete drives' worth of space on a fully populated unRAID server, which seems quite high. Equally, I haven't seen any slowdown with XFS at high fill levels, but as for most here, my data on these drives is relatively static.

 

However, in among these assumptions are real use cases where a larger reservation would make a real-world difference, and I would like to nail that down to the point where a couple of paragraphs in the manual could explain it, or, even better, the GUI could feed it back to the user. Should we fork this thread? Do we have enough interest to resolve it?

Link to comment

I don't get any variable names... just default values...

 

I am seeing the same thing, except it happens with all of my existing dockers too -- for instance, on the main edit screen it shows:

  Key 1: 99

  Key 2: 100

But I have to hit the edit button to find out that Key 1 is "PUID" and key 2 is "PGID".  I didn't grab a new screenshot because they look just like the ones stuckless already provided here:

  https://lime-technology.com/forum/index.php?topic=50240.msg482620#msg482620

 

Here are the relevant portions of one of my user template xml files, in case something obvious is wrong:

<Environment>
  <Variable>
    <Value>99</Value>
    <Name>PUID</Name>
    <Mode/>
  </Variable>
  <Variable>
    <Value>100</Value>
    <Name>PGID</Name>
    <Mode/>
  </Variable>
</Environment>

<Config Name="Key 1" Target="PUID" Default="99" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">99</Config>
<Config Name="Key 2" Target="PGID" Default="100" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">100</Config>  

 

Ah, so the issue is that the Description is empty.  Was something supposed to update the description to something like:

 

<Config Name="Key 1" Target="PUID" Default="99" Mode="" Description="Environment Variable: PUID" ...

What's going on here is that it is indeed a bug in 6.2RC1.

 

Previously dockerMan would take the name of the variable that was listed on a v1 template (99.9% of the ones present) and then move it over to the v2 section (present on only 0.1% of apps, within CA), and then display the v2 section.  Now it's not doing that.

 

Just another item on my list of why I believe that moving to v2 was a mistake.

Link to comment

 

- docker: fix update to always request manifest.v2 information

 

Updates Always Available (and subsequent pulls of 0B) are still happening on Dolphin (aptalca/docker-dolphin:latest).  Happens on a virgin docker.img file.
Link to comment

 

Previously dockerMan would take the name of the variable that was listed on a v1 template (99.9% of the ones present) and then move it over to the v2 section (present on only 0.1% of apps, within CA), and then display the v2 section.  Now it's not doing that.

 

Just another item on my list of why I believe that moving to v2 was a mistake.

 

Is the v2 template stuff documented somewhere?  When I was creating my templates, I ended up just finding existing unRAID templates and figuring it out from there... it would be nice if the unRAID web UI provided a quick link to the docker template structure.  Are the v2 templates a CA thing or an unRAID thing?

Link to comment

 

Previously dockerMan would take the name of the variable that was listed on a v1 template (99.9% of the ones present) and then move it over to the v2 section (present on only 0.1% of apps, within CA), and then display the v2 section.  Now it's not doing that.

 

Just another item on my list of why I believe that moving to v2 was a mistake.

 

Is the v2 template stuff documented somewhere?  When I was creating my templates, I ended up just finding existing unRAID templates and figuring it out from there... it would be nice if the unRAID web UI provided a quick link to the docker template structure.  Are the v2 templates a CA thing or an unRAID thing?

It is documented here: http://lime-technology.com/wiki/index.php/DockerTemplateSchema  However, the absolute best and easiest way to create the xml file is to just add a container via the docker tab, fill out the appropriate entries, then hit save (turn on authoring mode within docker settings), followed by a copy/paste - never delete anything from the resulting template.  Any other way is going to give you a world of trouble.

 

The only time you need to manually edit the resulting xml is to handle the tags listed here:  http://lime-technology.com/forum/index.php?topic=40299.0

 

And it is an unRAID thing, and one which I have been vocal about: moving to it, rather than merely expanding the v1 specification, was a mistake.
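
Incidentally, once authoring mode is on and you've hit save, the generated xml lands on the flash, which is the easiest place to copy it from (this is the usual dockerMan location, and my-*.xml is how saved user templates are typically named):

# list your saved user templates
ls /boot/config/plugins/dockerMan/templates-user/my-*.xml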

Link to comment

A warning out there for the new 'preferred' setting for 'Use cache disk': do not use it if your share is bigger than the cache disk. It will try to copy the whole share to the cache disk, filling it up. If the mover runs, it will try to move data from cache and you basically get a loop, moving stuff off and back.

 

I thought I'd put it out there.

 

No, it won't move back and forth unless you change the setting.  If cache fills up, it just doesn't move the file(s).

 

So what you are saying is:

 

1) Mover ignores data stored on a cache drive when 'Use cache disk' is set to 'preferred'.

2) unRAID will stop moving data as soon as the cache drive fills up.

3) Mover will not start moving data unless 'Use cache disk' is changed to 'Yes'.

 

1) Yes

2) Not exactly.  If the copy operation of a particular file fails, the partial copy on the target is deleted but the script keeps running.  For example, say there's 500MB free and it's copying a 1GB file - that will fail, but if the next file to be copied is smaller than 500MB that will succeed.

3) Files are possibly moved from array to cache for 'Prefer', and from cache to array for 'Yes'.  The mover will skip any shares with setting 'Only' or 'No'.

 

Note: When #1 is set, unRAID still attempts to copy to the cache drive even though it's full (which leads to system log notifications about that).

Yes, bear that in mind when assigning cache setting 'Prefer'.  BTW with btrfs cache pool it's not uncommon to see very large cache disks.

 

If, while unRAID is attempting to copy files over to the cache drive, you change the setting from 'preferred' to 'yes', unRAID doesn't flush the queue and still attempts to copy the files over.

Once the 'mover' script decides it can move data for a share, it completes the operation on that share without checking if the config setting changed during the transfer.

The scheduled 'Mover' task kicks in and you've got yourself a situation.

Don't do that unless you know the implications.  ;)

 

You can look at the mover script: /usr/local/sbin/mover and see exactly what's going on.

 

Thank you :)

Link to comment

It is documented here: http://lime-technology.com/wiki/index.php/DockerTemplateSchema  However, the absolute best and easiest way to create the xml file is to just add a container via the docker tab, fill out the appropriate entries, then hit save (turn on authoring mode within docker settings), followed by a copy/paste - never delete anything from the resulting template.  Any other way is going to give you a world of trouble.

 

The only time you need to manually edit the resulting xml is to handle the tags listed here:  http://lime-technology.com/forum/index.php?topic=40299.0

 

And it is an unRAID thing, and one which I have been vocal about: moving to it, rather than merely expanding the v1 specification, was a mistake.

 

Does unRAID 6.1.9 generate v2 My-Templates, or will it have issues upgrading from 6.1.9 to 6.2 RC1 as well?

Link to comment

It is documented here: http://lime-technology.com/wiki/index.php/DockerTemplateSchema  However, the absolute best and easiest way to create the xml file is to just add a container via the docker tab, fill out the appropriate entries, then hit save (turn on authoring mode within docker settings), followed by a copy/paste - never delete anything from the resulting template.  Any other way is going to give you a world of trouble.

 

The only time you need to manually edit the resulting xml is to handle the tags listed here:  http://lime-technology.com/forum/index.php?topic=40299.0

 

And it is an unRAID thing, and one which I have been vocal about: moving to it, rather than merely expanding the v1 specification, was a mistake.

 

Does unRAID 6.1.9 generate v2 My-Templates, or will it have issues upgrading from 6.1.9 to 6.2 RC1 as well?

6.1.x generates v1 templates.  6.2 generates templates that are compatible with both v1 and v2 (i.e. they will work under 6.1).  However, until a dynamix webUI update is available to fix the variable-naming bug under RC1, you're pretty much either guessing what values to put in there, or having to toss docker into authoring mode, look at the template you're adding, and adjust the entries accordingly.

 

Fortunately this bug only appears to affect environment variables, which a minority (although a substantial one) of apps use.

 

The whole problem here is adding a new app (or using default values for a previous one) under 6.2RC1 (the betas worked correctly).  99% of the templates out there (doesn't matter how you add them - either through repositories or through CA) are v1 templates and if they happen to use environment variables, you will be in a guessing game as to what goes where.  If the template you're adding happens to be a v2 template (I can only think of 3-4 of them) then it will indeed work correctly.

 

I.e.: if you happen to add CrashPlan through CA (a v2 template using env variables), it will work correctly on 6.1.9 and 6.2.

If you happen to add any of the VPN templates (all v1 and using env variables), it will work correctly on 6.1.9 and any of the 6.2 betas, but not 6.2RC1.
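
Until that webUI fix arrives, another workaround that avoids authoring mode is to read the variable names straight out of the template xml from the console (same templates-user location as above; the filename here is an example):

# print each <Name>/<Value> pair from a template's <Environment> section
# (note: the very first <Name> in the file is the container's own name)
grep -E '<(Name|Value)>' /boot/config/plugins/dockerMan/templates-user/my-ContainerName.xml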

Link to comment

It is documented here: http://lime-technology.com/wiki/index.php/DockerTemplateSchema  However, the absolute best and easiest way to create the xml file is to just add a container via the docker tab, fill out the appropriate entries, then hit save (turn on authoring mode within docker settings), followed by a copy/paste - never delete anything from the resulting template.  Any other way is going to give you a world of trouble.

 

The only time you need to manually edit the resulting xml is to handle the tags listed here:  http://lime-technology.com/forum/index.php?topic=40299.0

 

And it is an unRAID thing, and one which I have been vocal about: moving to it, rather than merely expanding the v1 specification, was a mistake.

 

Does unRAID 6.1.9 generate v2 My-Templates, or will it have issues upgrading from 6.1.9 to 6.2 RC1 as well?

6.1.x generates v1 templates.  6.2 generates templates that are compatible with both v1 and v2 (i.e. they will work under 6.1).  However, until a dynamix webUI update is available to fix the variable-naming bug under RC1, you're pretty much either guessing what values to put in there, or having to toss docker into authoring mode, look at the template you're adding, and adjust the entries accordingly.

 

Fortunately this bug only appears to affect environment variables, which a minority (although a substantial one) of apps use.

 

The whole problem here is adding a new app (or using default values for a previous one) under 6.2RC1 (the betas worked correctly).  99% of the templates out there (doesn't matter how you add them - either through repositories or through CA) are v1 templates and if they happen to use environment variables, you will be in a guessing game as to what goes where.

 

Thanks for the info. I have a few docker containers that I kept private because I couldn't dedicate the time for public support, so I'll have to look through the dockerfiles to see if they're impacted. I might just have to wait until RC2 to update.

Link to comment

It is documented here: http://lime-technology.com/wiki/index.php/DockerTemplateSchema  However, the absolute best and easiest way to create the xml file is to just add a container via the docker tab, fill out the appropriate entries, then hit save (turn on authoring mode within docker settings), followed by a copy/paste - never delete anything from the resulting template.  Any other way is going to give you a world of trouble.

 

The only time you need to manually edit the resulting xml is to handle the tags listed here:  http://lime-technology.com/forum/index.php?topic=40299.0

 

And it is an unRAID thing, and one which I have been vocal about: moving to it, rather than merely expanding the v1 specification, was a mistake.

 

Does unRAID 6.1.9 generate v2 My-Templates, or will it have issues upgrading from 6.1.9 to 6.2 RC1 as well?

6.1.x generates v1 templates.  6.2 generates templates that are compatible with both v1 and v2 (i.e. they will work under 6.1).  However, until a dynamix webUI update is available to fix the variable-naming bug under RC1, you're pretty much either guessing what values to put in there, or having to toss docker into authoring mode, look at the template you're adding, and adjust the entries accordingly.

 

Fortunately this bug only appears to affect environment variables, which a minority (although a substantial one) of apps use.

 

The whole problem here is adding a new app (or using default values for a previous one) under 6.2RC1 (the betas worked correctly).  99% of the templates out there (doesn't matter how you add them - either through repositories or through CA) are v1 templates and if they happen to use environment variables, you will be in a guessing game as to what goes where.

 

Thanks for the info. I have a few docker containers that I kept private because I couldn't dedicate the time for public support, so I'll have to look through the dockerfiles to see if they're impacted. I might just have to wait until RC2 to update.

If you've already added them via a 6.2 beta then the my-templates will be correct.  If you're still on 6.1 then yeah, wait until the webUI is updated.  Beyond this bug I don't see any show-stoppers with this release.
Link to comment

After upgrading to RC1, when I go to install a Docker container using a template that I have not used before, all the VARIABLE sections are missing the actual variable names?

 

Is this by design??

 

I've attached 2 images... one is from RC1 and shows no variable names, just the default value... and the other shows what it used to look like.

 

When I edit existing docker configs I see the correct layout... so this is ONLY happening when I'm installing something new that has never been installed.

 

This is a bug and will be fixed in the next release.

Link to comment

 

- docker: fix update to always request manifest.v2 information

 

Updates Always Available (and subsequent pulls of 0B) are still happening on Dolphin (aptalca/docker-dolphin:latest).  Happens on a virgin docker.img file.

 

Hmmm, not able to reproduce here.  I added this container on a test machine here (with rc1) after which it showed 'up-to-date'.  Did a Check for Updates but still remained 'up-to-date'.

 

I then did a 'docker pull aptalca/docker-dolphin:latest' from the terminal to verify:

latest: Pulling from aptalca/docker-dolphin
a3ed95caeb02: Already exists
3b1d42cd9af9: Already exists
d2ff49536f4d: Already exists
2dcca790c489: Already exists
ae857e8dd13c: Already exists
19cb749cf27e: Already exists
e28a0cb79a9f: Already exists
3fa6f956b718: Already exists
a89aff67a3de: Already exists
59b44b3d197b: Already exists
6c31adf2fea1: Already exists
7d4cf029adfb: Already exists
Digest: sha256:9cc5f3d41b09b915a2024eef870c23f219d6036d9bc01aa03cab6ce3cbf6a08a
Status: Image is up to date for aptalca/docker-dolphin:latest

 

Finally, I verified the digest from docker pull above to what's stored in /var/lib/docker/unraid-update-status.json for that image and it all checks out:

...
    "aptalca/docker-dolphin:latest": {
        "local": "sha256:9cc5f3d41b09b915a2024eef870c23f219d6036d9bc01aa03cab6ce3cbf6a08a",
        "remote": "sha256:9cc5f3d41b09b915a2024eef870c23f219d6036d9bc01aa03cab6ce3cbf6a08a",
        "status": "true"
    }
...

 

Are you seeing a different sha256 from docker pull for this container?
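
For anyone else checking, the locally stored digest can also be read straight from docker without pulling (standard docker CLI; the image name is the one reported above):

# print the local image's repo digest to compare against the 'remote'
# value in /var/lib/docker/unraid-update-status.json
docker inspect --format '{{index .RepoDigests 0}}' aptalca/docker-dolphin:latest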

Link to comment

 

- docker: fix update to always request manifest.v2 information

 

Updates Always Available (and subsequent pulls of 0B) are still happening on Dolphin (aptalca/docker-dolphin:latest).  Happens on a virgin docker.img file.

 

Hmmm, not able to reproduce here.  I added this container on a test machine here (with rc1) after which it showed 'up-to-date'.  Did a Check for Updates but still remained 'up-to-date'.

 

I then did a 'docker pull aptalca/docker-dolphin:latest' from the terminal to verify:

latest: Pulling from aptalca/docker-dolphin
a3ed95caeb02: Already exists
3b1d42cd9af9: Already exists
d2ff49536f4d: Already exists
2dcca790c489: Already exists
ae857e8dd13c: Already exists
19cb749cf27e: Already exists
e28a0cb79a9f: Already exists
3fa6f956b718: Already exists
a89aff67a3de: Already exists
59b44b3d197b: Already exists
6c31adf2fea1: Already exists
7d4cf029adfb: Already exists
Digest: sha256:9cc5f3d41b09b915a2024eef870c23f219d6036d9bc01aa03cab6ce3cbf6a08a
Status: Image is up to date for aptalca/docker-dolphin:latest

 

Finally, I verified the digest from docker pull above to what's stored in /var/lib/docker/unraid-update-status.json for that image and it all checks out:

...
    "aptalca/docker-dolphin:latest": {
        "local": "sha256:9cc5f3d41b09b915a2024eef870c23f219d6036d9bc01aa03cab6ce3cbf6a08a",
        "remote": "sha256:9cc5f3d41b09b915a2024eef870c23f219d6036d9bc01aa03cab6ce3cbf6a08a",
        "status": "true"
    }
...

 

Are you seeing a different sha256 from docker pull for this container?

Actually, this is what I was seeing:
    "aptalca/docker-dolphin:latest": {
        "local": null,
        "remote": "sha256:9cc5f3d41b09b915a2024eef870c23f219d6036d9bc01aa03cab6ce3cbf6a08a",
        "status": "undef"
    }

 

But I looked very closely at the my* template after you couldn't reproduce it, and the problem was that somehow there were some trailing spaces in the repository entry.  It would pull correctly, but that messed up the updates.

 

Might not be a bad idea to do trims on all the entries in dockerMan.
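
Until then, a quick way to spot the same problem in other templates (same templates-user path assumption as earlier):

# flag any <Repository> entry with stray whitespace before the closing tag
grep -nE '<Repository>.*[[:space:]]+</Repository>' /boot/config/plugins/dockerMan/templates-user/*.xml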

Link to comment

Docker is throwing me this error:

 

[screenshot of the Docker error attached]

 

Not sure if this is related to this rc1 but I have no problem accessing the Internet from other machines.

It's not 6.2.  Maybe try setting static DNS addresses in Network Settings (and also ensure the gateway is correct).
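
A quick way to confirm it's name resolution rather than general connectivity, from the console (8.8.8.8 is just a well-known public address; registry-1.docker.io is Docker Hub's registry host):

ping -c 2 8.8.8.8                  # raw connectivity: should succeed
ping -c 2 registry-1.docker.io     # DNS lookup: fails if resolution is broken
cat /etc/resolv.conf               # shows which DNS servers are in use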
Link to comment
This topic is now closed to further replies.