Upgrade from 6.1 problem



Hi,

I am trying to upgrade from v6.1.

I followed the instructions.

Copied the target URL.

However, the following is the log message that is returned....

[I do not know if it is worth mentioning.... I have been experiencing long delays in the rendering of the web pages... I had hoped that upgrading would solve this problem...]

 

[screenshot: log message returned by the upgrade attempt]

 

 

Edit: I just checked... I am unable to hit the server from my Windows Explorer.

Pinging the server gives this:

 

[screenshot: ping results]


Wow.... this looks like work..... 🙂

So... as I said, right now connectivity to the drives is down....

....

Was this caused by the attempt to upgrade?

- if so, should I restore the copy I took of the flash drive immediately prior to the upgrade attempt?

- would a hard reboot do anything to help?

===UPDATE / EDIT: I just rebooted... I have the webGUI back... through my phone browser...

 

So... I will work through the instructions..

 

Thanks for your help in advance

2 hours ago, Fishypops said:

Wow.... this looks like work.....

 Well worth it. Nobody can support that old version since nobody is running it.

 

I'm not sure if Community Applications was around back then. In any case, once you get the OS upgraded install the Community Applications plugin and it will help you choose and install all other plugins and dockers you want.

 

There are also some new features. Be sure to set up Notifications to alert you immediately by email or another agent as soon as Unraid detects a problem. Possibly you already have some problems you don't know about.


To recap on where I am...

 

Situation:

  • I am running version 6.1.9
  • I have the WEBGUI accessible
  • I have access to the Unraid mapped drives via explorer
  • I want to upgrade to the latest version
  • As per the previous post, the repository for the 6.1.x upgrade is not available.
  • I read through the links....

I don't see the solution in the posts / links

 

What do I do now?

 

I am trying....

 

 

 


The basic NAS function will work fine.  All of your data will be safe and the shares will work as they do now.  Many plugins, Dockers and VMs can be problematic.  It is best to remove ALL of them!

 

Begin by shutting down your server.  Remove the flash drive and make a complete copy/backup of its contents onto your PC.  (This is your safety net.)  Now install the flash drive again and reboot the server.
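If it is easier, the same safety copy can also be taken from the server console before shutting down — a rough sketch, assuming the flash is mounted at /boot (as it normally is on Unraid) and that /mnt/user/backups is an example share you have created to hold it:

```bash
# Copy the whole flash drive (mounted at /boot on Unraid) into a dated
# folder on the array; the destination share name here is only an example.
mkdir -p /mnt/user/backups/flash-$(date +%Y%m%d)
rsync -a /boot/ /mnt/user/backups/flash-$(date +%Y%m%d)/
```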

 

Basically, you need to uninstall all plugins, Dockers and VMs.  You would then install the most recent version using the new repository that I referred you to in the second post of this thread.  This will/should give you your basic NAS setup with the server name, IP address, passwords, and the other settings that you had with 6.1.9.
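For reference, the console equivalent of installing the update from the new repository is roughly the following — a sketch only, assuming the `plugin` helper that ships with Unraid 6.x and using the stable-branch URL quoted later in this thread:

```bash
# Fetch and install the Unraid OS update plugin from the stable branch,
# then reboot so the new version takes effect. Run from the console or SSH.
plugin install https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg
reboot
```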

 

Follow @trurl's instructions that he spelled out in this thread. 

 

If you have trouble or problems, post back in this thread with the issues and attach a Diagnostics file to the post.  Tools  >>>  Diagnostics

12 hours ago, Frank1940 said:

The basic NAS function will work fine.  All of your data will be safe and the shares will work as they do now.  Many plugins, Dockers and VMs can be problematic.  It is best to remove ALL of them!

Begin by shutting down your server.  Remove the flash drive and make a complete copy/backup of its contents onto your PC.  (This is your safety net.)  Now install the flash drive again and reboot the server.

Basically, you need to uninstall all plugins, Dockers and VMs.  You would then install the most recent version using the new repository that I referred you to in the second post of this thread.  This will/should give you your basic NAS setup with the server name, IP address, passwords, and the other settings that you had with 6.1.9.

Follow @trurl's instructions that he spelled out in this thread.

If you have trouble or problems, post back in this thread with the issues and attach a Diagnostics file to the post.  Tools  >>>  Diagnostics

Hurray !!!!!   🙂   It worked .... all that trepidation about nothing !!!!!

FYI.  I did exactly what you said to do.... removed plugins (none of which I use anyway)

Rebooted (I already had a recent copy of the flash drive).

Inserted the link into the Plugins page

(https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg)

Executed

Rebooted

Hay viola !!     [  🙂 - i know ;-)... ]

 

Brilliant.  Very happy.  Thanks very much to both of you.

 

 

  • 3 weeks later...

Hi.....

Since I performed the parity swap, my file transfer (write) speed to the Unraid server is painfully slow (700 kB/s). In addition I am experiencing frequent buffering when playing media files from my nVidia media player. Nothing else has changed in my setup. Previously I enjoyed multi-MB/s speeds.

Is there something else I should have done?

Attached is my diagnostic log.

supernova-diagnostics-20190110-2204.zip

Thanks again in advance.
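One quick way to narrow down whether the bottleneck is the array itself or the network path is a local write test on the server console — a rough sketch, with /mnt/user/Media standing in for whichever share is slow:

```bash
# Write 1 GiB directly on the server, bypassing SMB and the network.
# If this is fast while network copies crawl, suspect the network side.
dd if=/dev/zero of=/mnt/user/Media/ddtest.bin bs=1M count=1024 conv=fdatasync
rm /mnt/user/Media/ddtest.bin
```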


May or may not be related:

Jan 10 03:40:01 supernova root: mover: cache not present, or only cache present

Odds on, you've got a docker application that has one of its path mappings  set to be /mnt/cache/....  Since you don't have a cache drive, this is going to wind up being stored in RAM.
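One way to check for that from the console — a rough sketch that lists each container's host-side mount sources and flags anything under /mnt/cache:

```bash
# Print "container: host paths" for every container, then show only the
# ones that bind something under /mnt/cache (which lands in RAM with no cache drive).
docker ps -aq | xargs -r docker inspect \
  --format '{{.Name}}: {{range .Mounts}}{{.Source}} {{end}}' | grep '/mnt/cache'
```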

 

 

Actually a good thing, because I noticed this problem with some templates a while ago (specifying /mnt/cache explicitly on the template for the /config), and had been meaning to think about how I want to have CA handle that for those users that don't have a cache drive.

5 hours ago, Squid said:

Actually a good thing, because I noticed this problem with some templates a while ago (specifying /mnt/cache explicitly on the template for the /config), and had been meaning to think about how I want to have CA handle that for those users that don't have a cache drive.

Perhaps a simple solution would be to transform the /mnt/cache to /mnt/user and issue a warning that this has happened when pressing Apply on the docker?  Alternatively this could be deemed an error that has to be corrected before proceeding.  That seems a minimal impact change.

 

Not sure what the correct approach would be if the cache drive is missing at runtime, as that may well mean the cache drive has dropped offline (or failed to mount due to file system corruption), and in such a case one would not want such a transform to happen.  Probably better to not even start the docker in such a case (and have an appropriate error displayed if the user tries to manually start the container).  However I can see this might be more difficult to implement sensibly.  Not sure if this could be made even more generic by saying the mount point must exist under /mnt/disk or /mnt/disks so it catches other locations becoming unexpectedly unavailable.  However this can get complicated by cases where one does want to write to RAM, for reasons like a temporary work area for transcodes.
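Purely as an illustration of the kind of pre-start check being suggested — a sketch, not how Unraid actually launches containers; the script name and behaviour are assumptions:

```bash
#!/bin/bash
# guarded-start.sh <container>: refuse to start a container if any of its
# host-side bind paths is missing (e.g. a cache drive that failed to mount).
name="$1"
missing=0
while read -r src; do
  [ -n "$src" ] || continue
  if [ ! -e "$src" ]; then
    echo "ERROR: host path '$src' for container '$name' does not exist" >&2
    missing=1
  fi
done < <(docker inspect --format '{{range .Mounts}}{{println .Source}}{{end}}' "$name")

[ "$missing" -eq 0 ] && docker start "$name"
```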

3 hours ago, itimpi said:

Probably better to not even start the docker in such a case (and have an appropriate error displayed if the user tries to manually start the container).  However I can see this might be more difficult to implement sensibly.  Not sure if this could be made even more generic by saying the mount point must exist under /mnt/disk or /mnt/disks so it catches other locations becoming unexpectedly unavailable.  However this can get complicated by cases where one does want to write to RAM, for reasons like a temporary work area for transcodes.

It can only be handled at the CA level when installing.  After installation, if the folder(s) do not exist, docker itself, outside of Unraid's control, will automatically create the folders.

8 minutes ago, Squid said:

It can only be handled at the CA level when installing.  After installation, if the folder(s) do not exist, docker itself, outside of Unraid's control, will automatically create the folders.

I agree.  A check that CA can carry out is that the mount point exists at the point of installing (or editing) a container.  That, I would think, is as far as CA itself can go.  It might be a good idea to introduce into such checks an option to explicitly say that it is not necessary for a particular path to exist, to allow for cases such as when RAM really is wanted as a location.

 

However that does not stop the Unraid code that launches a docker container from carrying out checks before actually issuing the docker command to start a container.  That does put it outside the remit of CA and would be in Limetech-maintained code, I would think (I am not sure exactly how the docker containers get launched at that level).  Do you think that raising a feature request for this might be a good idea?


There was some discussion somewhere about this recently (can't remember, though, if it was private).  But I think that it gets rather complicated and hard to implement prior to the docker daemon starting, since the daemon doesn't restart containers via the same docker run commands that you see when adding.

40 minutes ago, Squid said:

There was some discussion somewhere about this recently (can't remember, though, if it was private).  But I think that it gets rather complicated and hard to implement prior to the docker daemon starting, since the daemon doesn't restart containers via the same docker run commands that you see when adding.

I think it must have been private as I do not remember seeing it.  However I can definitely see this being more complicated.

 

However if CA can add some simple checks when adding/editing containers, it would at least pick up most cases where the user (perhaps inadvertently) configured a container to use RAM without realising it.


The biggest offender was @binhex, whose default mappings for /data specified /mnt/cache.  After I noticed the problem, he changed all of his templates.  There is only the odd template still out there that does this (for anything other than /config, which is a special case, as the template system itself changes the /config mappings to match whatever the user has set as the default appdata location).


I have attached a ping log for the server.... The first ping is from before I rebooted the server (whilst the file was taking forever to copy, I wanted to check the ping response time)..... After I rebooted, I performed the subsequent pings.... As you can see, after the reboot the ping response was terrible....

Any idea why this would be?

 

Thanks

 

[screenshot: ping results before and after reboot]
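If it helps to narrow this down, checking the NIC's negotiated speed/duplex on the server and running a longer ping from a client would be a reasonable next step — a sketch, with the interface name eth0 and the target address as assumptions:

```bash
# On the server: confirm the NIC negotiated the expected speed and duplex.
ethtool eth0 | grep -E 'Speed|Duplex'
# From a client: a longer ping run to look for sustained latency or loss
# (replace the address below with the server's actual IP or hostname).
ping -c 50 192.168.1.100
```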

