
Downgrade steps to 5.0 from 6.0


markro1


How do I go about downgrading from 6.0.1 back to 5.0 (stable)?  I have had issues all over the place: drives disappearing, shares unresponsive or disappearing, the webGUI unresponsive.  It has been a massive pain.  I have had to reset or power down my system 10 times in 4 days because of all the crashes/issues.  If I hadn't been an unRAID user for the past 5 years, I would be looking elsewhere.  All my shares are ReiserFS.  I just want to go back to my safe and happy place.

 

If all my shares are ReiserFS and I have a copy of my flash directory from before I upgraded to 6, can I just copy the flash directory back over?

 

Thanks,

 

Mark

Link to comment


Your experience is definitely not the norm. Can you explain how you did the upgrade: was it a clean install, or did you copy over your existing v5 settings/files?

 

Link to comment

Hi All - thanks for the reply.

 

I used the 'wget' command to fetch the unRAIDServer.plg file and copied over the config files per the instructions, which seemed to work fine.  I came from 5.0.6.
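For reference, the upgrade was essentially this from a telnet session (the wget URL is a placeholder for the one in the v6 release announcement, and installplg is my best recollection of the command the instructions used):

cd /boot
wget <unRAIDServer.plg URL from the release announcement>   # fetch the OS upgrade plugin to the flash drive
installplg /boot/unRAIDServer.plg                           # install it; config files were copied over separately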

 

I had a drive go red ball after the initial upgrade, but a reboot fixed it.

During another reboot it red balled again, so I played around with drive slots and cables, and eventually it came back, or at least it has stayed up persistently.

I had the same thing happen to another drive slot; I am ignoring that one for now since it doesn't have any data on it, and I may just remove it from the array once I downgrade.

I have also had issues with permissions and not being able to access my Public shares, so I ran the New Permissions script a few times and then rebooted, which seemed to fix it.  I thought it was strange that a reboot was required.

I am also running a VM on another PC, but it uses the unRAID server to host its VMDKs (on the cache drive, not moved by the mover) and to access the unRAID shares where most of my network files are stored.  As soon as I start downloading an NZB with SABnzbd in the VM (VMware Player), I lose access to the unRAID webGUI and the disk shares, and I have to hard power down the unRAID server to get it back.  I have access to the telnet console, but the powerdown script won't work.

 

I have had zero systemic issues on 5 or 4 over the years, but I seem to have pressed my luck with 6.  I love unRAID but need to roll back for stability's sake.  The server is actually in the hung state right now: I have no webGUI and no shares, but I do have telnet access.  I telnetted in and made a copy of my syslog so it wouldn't be deleted on reboot.  I also discovered that the shares come in and out of responsiveness, meaning you browse to \\server\share and it hangs Explorer for 10-15 minutes, but then the share shows up for a minute or two and then is gone again.
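In case it helps anyone else in the same state, the syslog copy was just this from the telnet session (the destination filename is my own choice):

cp /var/log/syslog /boot/syslog_markro1.txt   # /boot is the flash drive, so the copy survives the reboot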

 

I felt I should explain my situation a little better and post a syslog since it may help the team discover the bugs. 

 

I think after a reboot to clear up the GUI and shares, I should run reiserfsck because of all the dirty powerdowns I have had.
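From what I have read, the check should be run with the array started in maintenance mode, against the md devices so that parity stays in sync; something like this for each data disk (the disk number is an example):

reiserfsck --check /dev/md1   # read-only check of disk1; only escalate to repair options if it tells you to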

 

Thanks All.

 

Mark

syslog_markro1.zip

Link to comment

I'm very sorry for all the trouble you have had.  A few comments -

 

* Red balls - if you get a drive red balled, the OS installed should be irrelevant, which means that if you are having red ball problems, they could continue no matter what OS you install or run.  The only possible difference I can think of is a software driver problem (for the disk controller); that would mean the newer 64-bit driver is buggier than the older 32-bit driver, which is remotely possible but extremely unlikely.  (A SMART report can help rule the drives themselves in or out; see the commands after this list.)

 

* New Permissions - you should not have had to reboot.  That said, I don't know whether FUSE presents the actual file permissions on User Share accesses or only copies them at User Share creation time; if it's the latter, you would have to restart the array to see corrected permissions on User Share files.  That would be the same whether on v5 or v6.  Now that you've run it, the permissions are good for v5 too.  (A sketch of what the tool does is after this list.)

 

* Powerdown - if the GUI was hanging, then the built-in powerdown would not work either.  You have to install the Powerdown plugin, which should always work, no matter what's happening.  (Usage is shown after this list.)

 

* Config files - you might check the 'Files on flash drive' section of the upgrade guide, to compare with what you copied.  (A quick way to compare is shown after this list.)
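A few commands that may help with the above, all run from a telnet session.  The device names, disk numbers, and paths below are examples only, so adjust them for your system.

For a red-balled drive, a SMART report can help tell a failing disk from a cabling problem:

smartctl -a /dev/sdb   # watch Reallocated_Sector_Ct (failing disk) and UDMA_CRC_Error_Count (usually cabling)

My recollection of what the New Permissions tool boils down to per disk (a sketch from memory, not the exact script):

chown -R nobody:users /mnt/disk1            # unRAID's standard owner for share files
chmod -R u-x,go-rwx,go+u,ugo+X /mnt/disk1   # normalize to the standard share permissions

With the Powerdown plugin installed, you can get a clean shutdown even when the GUI is hung (as I understand it, the plugin also saves the syslog to flash before shutting down):

powerdown

And to compare the current flash config with a v5 backup (the backup path is made up for this example):

diff -rq /boot/config /mnt/disk1/backups/v5_flash/config   # lists files that differ or exist on only one side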

Link to comment

So I downgraded to version 5 smoothly.  But as soon as I started up my VM and started downloading NZBs... bang, I lost all access to the server.  I couldn't even ping it, let alone telnet to it.  I captured a screen from the attached console and it shows a bunch of ReiserFS I/O call traces.  At this point it isn't a v6 issue; it may be some sort of corruption in the file system.  Looks like I am going to FSCK until I am SICK of it.

unraid5_crash_console.jpg

Link to comment

I ran Memtest for a few passes and that was error free. 

Then I ran dosfsck on the SD card, and it came up with a bunch of differences.

I told it to repair them and it seemed to work.
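For anyone following along, the check and repair were along these lines, with the card attached to another Linux box (sdX1 stands in for whatever partition the card shows up as):

dosfsck -v -n /dev/sdX1   # -n: check only, just report the differences
dosfsck -v -r /dev/sdX1   # -r: interactively repair the differences it found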

Then I rebooted, but it would not boot into unRAID.

I replaced the unRAID SD card with another one, formatted it, and restored my unRAID 5 config files.

After the array came up I put it in maintenance mode and let it rebuild parity, just because it hadn't had a chance to yet and, with all the issues, it seemed like a good idea.

Plus it would stress the drives, and I wanted to exercise them a bit to see if they were OK.

 

Parity check finished - Last checked on Thu Jul 9 14:33:05 2015 PDT (today), finding 9595317 errors.

> Duration: 16 hours, 47 minutes, 38 seconds. Average speed: 66.2 MB/sec

9.5 million parity errors seems pretty high...

 

After the parity check finished I started running reiserfsck on the disks. One of my drives had some errors that needed fixing, which I did using the --rebuild-tree option. I reran reiserfsck on it and it came up clean.
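For the record, the sequence was roughly this (md2 stands in for the disk that had the errors):

reiserfsck --check /dev/md2          # flagged corruption and recommended the rebuild
reiserfsck --rebuild-tree /dev/md2   # last-resort repair that rebuilds the whole filesystem tree
reiserfsck --check /dev/md2          # second pass came up clean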

 

Since the errors/crashes I was having seemed to happen during heavy writes, I copied a few large files to some shares, including some to the cache drive.  Nothing seemed to go wrong...

 

I will update when I get brave enough to start using the VM again and downloading NZBs.

 

Thanks for all the help so far.

 

Mark


Link to comment

I ran Memtest for a few passes and that was error free. 

Good idea.

 

After the array came up I put it in maintenance mode and let it rebuild parity, just because it hadn't had a chance to yet and, with all the issues, it seemed like a good idea.

Plus it would stress the drives, and I wanted to exercise them a bit to see if they were OK.

 

Parity check finished - Last checked on Thu Jul 9 14:33:05 2015 PDT (today), finding 9595317 errors.

9.5 million parity errors seems pretty high...

I just want to be sure I understand what happened, as you say you rebuilt parity and then reported the result of a parity check.  Did you actually just run a parity check, or did you rebuild parity and then run a parity check?  That error count seems to imply a parity check only.  (Or there's something drastically wrong!)

 

After the parity check finished I started running reiserfsck on the disks. One of my drives had some errors that needed fixing, which I did using the --rebuild-tree option. I reran reiserfsck on it and it came up clean.

The --rebuild-tree option is a somewhat radical fix.  You don't want to run it unless you HAVE to.  However, your crash pic indicates that the Reiser file system must have choked on something really bizarre on that drive.

Link to comment

Thanks for the responses - I ran a parity check and chose the option "Write corrections to parity disk", which I guess is different from a rebuild.  I have since run 2 parity checks with that option enabled and found zero errors.  Is the only way to do a parity rebuild to go to Utils, do a New Config, and then reassign the drives?  Is this the next step I should take?  I was planning on running it for a while - I am a little chicken to try to reproduce the problem.

 

Thanks,

 

Mark

Link to comment

I ran a parity check and chose the option "Write corrections to parity disk", which I guess is different from a rebuild.  I have since run 2 parity checks with that option enabled and found zero errors.

Good.

 

Is the only way to do a parity rebuild to go to Utils, do a New Config, and then reassign the drives?  Is this the next step I should take?

If you only have a few drives you can use New Config, but I think it's easier to just unassign the parity drive, start and stop the array (to make it forget the old parity drive), and then assign the parity drive again.  When you next start the array, it should build parity.

Link to comment
