unRAID Server Release 6.0-beta13-x86_64 Available


limetech


Well that's good then! My docker image has been copied to the array and back to the cache drive on multiple occasions, so I think I'm OK there.

 

Also:

 

1. Thanks for moving the Start/Stop/Clear Statistics button back to the same page as the drive stats; that was driving me up the wall.

2. Please put back the checkboxes to start/stop dockers all in one go. The only way I can see to start/stop them now is one by one from the Dashboard!?

Link to comment

Well that's good then! My docker image has been copied to the array and back to the cache drive on multiple occasions, so I think I'm OK there.

 

No you are definitely not ok.  If your docker image lives on a btrfs device right now, you are subject to the bug...period.  If you want to fix it, you have to move it to a non-btrfs formatted device, then create a folder on the btrfs device, run the command I referenced in the other post to set nodatacow on the folder, then copy the docker image back to the btrfs device in that folder.  Now the nodatacow flag will be set.

 

If you move that file again to a non-btrfs device and then back again, the nodatacow flag will NOT persist.
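For anyone wanting to script that sequence, a minimal sketch follows.  The exact command referenced in the other post is not quoted here; chattr +C (which sets the btrfs NOCOW attribute) is assumed, and all paths are placeholders:

    mv /mnt/cache/docker.img /mnt/disk1/docker.img   # move the image off the btrfs device
    mkdir /mnt/cache/docker                          # create a new folder on the btrfs device
    chattr +C /mnt/cache/docker                      # assumed command: set nodatacow on the empty folder
    cp /mnt/disk1/docker.img /mnt/cache/docker/      # files copied into a +C folder inherit the NOCOW flag

Flagging the folder while it is still empty matters, because the attribute is only inherited by files created inside it afterwards.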

 

Also:

 

1. Thanks for moving the Start/Stop/Clear Statistics button back to the same page as the drive stats; that was driving me up the wall.

 

Lol, no problem.

 

2. Please put back the checkboxes to start/stop dockers all in one go. The only way I can see to start/stop them now is one by one from the Dashboard!?

 

You can stop / start dockers from the docker page itself by clicking on the icon in the table (it works the same as the dashboard).  We are not bringing back the start / stop checkboxes.  The two-click process to start / stop is activated through a context menu which is pulled up when you click the app icon.  If you want to start several containers in one swoop, toggle your autostarts, stop docker, then start docker again.
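For completeness, containers can also be started or stopped in bulk from a terminal with the stock docker CLI.  This is only a hedged alternative to the webGui flow described above; the container names below are placeholders:

    docker stop $(docker ps -q)            # stop every running container
    docker start sabnzbd sickbeard plex    # start a chosen subset by name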

 

How often are you wanting to start / stop just a select number of containers at the same time?  I would think this is pretty rare.

Link to comment

Thank you. I don't think I need BTRFS for the cache drive; that happened when I inserted it as a cache drive. I backed up the files. How can I reformat / unformat it with some terminal commands? I'm absolutely fine with XFS for the cache drive.

 

 

I was wondering if the problem was BTRFS-related!  There have been reports that those who encountered problems with BTRFS-formatted cache drives have not fixed the issue by going back to B12, which is why Limetech are meant to be rushing out a B14 to fix issues in this area.  However, whether this will help with your issue I have no idea.

 

You mention that your cache disk is BTRFS formatted?  Is there a reason for this (e.g. drive pooling, TRIM support for SSD) or is it just a legacy of the fact that at one point it was a requirement for docker support?  If it was the latter, then it might be worth switching to XFS, as that seems to be the most stable option.

 

One recovery path might be to see if you can mount the cache disk outside the array to copy off any files you need.  Once that is done you could remove the current partitions and add it back to unRAID as a cache disk to be reformatted, and, having done that, copy back any files you want to keep there.
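A minimal sketch of that recovery path, assuming the cache device shows up as /dev/sdX with the data on partition 1 and that disk1 has room for the copy (device name and destination path are placeholders only):

    mkdir -p /mnt/rescue
    mount -o ro /dev/sdX1 /mnt/rescue                  # read-only, so nothing on the cache disk is altered
    rsync -av /mnt/rescue/ /mnt/disk1/cache-backup/    # copy off anything you need
    umount /mnt/rescue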

Link to comment

Thank you. I don't think I need BTRFS for the cache drive; that happened when I inserted it as a cache drive. I backed up the files. How can I reformat / unformat it with some terminal commands? I'm absolutely fine with XFS for the cache drive.

 

 

I was wondering if the problem was BTRFS-related!  There have been reports that those who encountered problems with BTRFS-formatted cache drives have not fixed the issue by going back to B12, which is why Limetech are meant to be rushing out a B14 to fix issues in this area.  However, whether this will help with your issue I have no idea.

 

You mention that your cache disk is BTRFS formatted?  Is there a reason for this (e.g. drive pooling, TRIM support for SSD) or is it just a legacy of the fact that at one point it was a requirement for docker support?  If it was the latter, then it might be worth switching to XFS, as that seems to be the most stable option.

 

One recovery path might be to see if you can mount the cache disk outside the array to copy off any files you need.  Once that is done you could remove the current partitions and add it back to unRAID as a cache disk to be reformatted, and, having done that, copy back any files you want to keep there.

I think you want to avoid doing it from the command line, since that may be what causes the issue. See the last paragraph of this post.

 

I think you can just stop the array, click on the cache disk to get to its configuration page, select the new format, and when you start the array unRAID will reformat it.

Link to comment

Well that's good then! My docker image has been copied to the array and back to the cache drive on multiple occasions, so I think I'm OK there.

 

No you are definitely not ok.  If your docker image lives on a btrfs device right now, you are subject to the bug...period.  If you want to fix it, you have to move it to a non-btrfs formatted device, then create a folder on the btrfs device, run the command I referenced in the other post to set nodatacow on the folder, then copy the docker image back to the btrfs device in that folder.  Now the nodatacow flag will be set.

 

If you move that file again to a non-btrfs device and then back again, the nodatacow flag will NOT persist.

 

 

Right - now I understand. I'll sort this out when I next restart the server.

 

 

2. Please put back the checkboxes to start/stop dockers all in one go. The only way I can see to start/stop them now is one by one from the Dashboard!?

 

You can stop / start dockers from the docker page itself by clicking on the icon in the table (it works the same as the dashboard).  We are not bringing back the start / stop checkboxes.  The two-click process to start / stop is activated through a context menu which is pulled up when you click the app icon.  If you want to start several containers in one swoop, toggle your autostarts, stop docker, then start docker again.

 

How often are you wanting to start / stop just a select number of containers at the same time?  I would think this is pretty rare.

 

I don't have autostart on because the SSH plugin I use needs to be stopped and then started for my customisations to take effect (SSH key-only auth). The problem is that, with the way my dockers are set up, when SSH starts again it complains that port 22 is already in use. Therefore I don't use autostart: if I reboot my server I restart SSH and then start my dockers.

 

If you can think of a solution for that little issue, then I'll turn autostart on again and live happily knowing it is working 100% :)

Link to comment

Well that's good then! My docker image has been copied to the array and back to the cache drive on multiple occasions, so I think I'm OK there.

 

No you are definitely not ok.  If your docker image lives on a btrfs device right now, you are subject to the bug...period.  If you want to fix it, you have to move it to a non-btrfs formatted device, then create a folder on the btrfs device, run the command I referenced in the other post to set nodatacow on the folder, then copy the docker image back to the btrfs device in that folder.  Now the nodatacow flag will be set.

 

If you move that file again to a non-btrfs device and then back again, the nodatacow flag will NOT persist.

 

Right - now I understand. I'll sort this out when I next restart the server.

 

 

2. Please put back the checkboxes to start/stop dockers all in one go. The only way I can see to start/stop them now is one by one from the Dashboard!?

 

You can stop / start dockers from the docker page itself by clicking on the icon in the table (it works the same as the dashboard).  We are not bringing back the start / stop checkboxes.  The two-click process to start / stop is activated through a context menu which is pulled up when you click the app icon.  If you want to start several containers in one swoop, toggle your autostarts, stop docker, then start docker again.

 

How often are you wanting to start / stop just a select number of containers at the same time?  I would think this is pretty rare.

 

I don't have autostart on because the SSH plugin I use needs to be stopped and then started for my customisations to take effect (SSH key-only auth). The problem is that, with the way my dockers are set up, when SSH starts again it complains that port 22 is already in use. Therefore I don't use autostart: if I reboot my server I restart SSH and then start my dockers.

 

If you can think of a solution for that little issue, then I'll turn autostart on again and live happily knowing it is working 100% :)

 

I am interested in what the SSH plugin you're using gives you over the SSH service that's enabled by default on unRAID.  The root of your issue stems from that customization.  Assuming that's a non-replaceable solution (your SSH plugin), I'm still not sure why your containers have anything to do with SSH.  With docker exec, SSH access into containers isn't really necessary.
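As a quick illustration of the docker exec point (the container name is a placeholder):

    docker exec -it mycontainer /bin/bash   # interactive shell inside a running container, no SSH needed
    docker exec mycontainer ps aux          # or run a one-off command non-interactively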

 

To avoid derailing this thread, can you send me a PM describing your setup more so I can better understand?

Link to comment

No you are definitely not ok.  If your docker image lives on a btrfs device right now, you are subject to the bug...period.  If you want to fix it, you have to move it to a non-btrfs formatted device, then create a folder on the btrfs device, run the command I referenced in the other post to set nodatacow on the folder, then copy the docker image back to the btrfs device in that folder.  Now the nodatacow flag will be set.

 

If you move that file again to a non-btrfs device and then back again, the nodatacow flag will NOT persist.

 

Hi, I was reading that thread and I saw that my cache drive is BTRFS.  My docker.img has been on it for a while; in fact, the cache is BTRFS because unRAID 6 previously required a BTRFS partition for Docker (before it used an image) and I never reformatted it to XFS.

 

Are you saying that I should reformat my cache to XFS and put my data back on it, or I'll have issues down the road?

 

 

Link to comment

I updated to beta13 today through the webGui.  After the reboot, my cache drive pool is also showing unformatted.  I assume I have lost the syslog since I rebooted (as instructed by the webGui after the update).  From reading this thread, it sounds like this is being addressed in beta14.  Since that is not available, is there a working method to go back to beta12, or to extract my data from the pool and reformat?

 

SanDisk_SDSSDH2064G_125150403747 (sdt) * 64.0 GB Unformatted 90 1 0 btrfs

SanDisk_SDSSDH2064G_125150404363 (sdj) * 64.0 GB - - 492 2 0 btrfs

Total Pool of two disks * 0 B 0 B 0 B 582 3 0

 

 

 

Link to comment

I updated to beta13 today through the webGui.  After the reboot, my cache drive pool is also showing unformatted.  I assume I have lost the syslog since I rebooted (as instructed by the webGui after the update).  From reading this thread, it sounds like this is being addressed in beta14.  Since that is not available, is there a working method to go back to beta12, or to extract my data from the pool and reformat?

 

SanDisk_SDSSDH2064G_125150403747 (sdt) * 64.0 GB Unformatted 90 1 0 btrfs

SanDisk_SDSSDH2064G_125150404363 (sdj) * 64.0 GB - - 492 2 0 btrfs

Total Pool of two disks * 0 B 0 B 0 B 582 3 0

If you haven't rebooted since the reboot that booted -beta13, I'd like to see your system log.

Link to comment

I only rebooted once after the install asked for the reboot.  I have attached my syslog for your review.

 

Thanks in advance!

 

Yup, here's the culprit:

 

Feb 18 15:17:06 Iron emhttp: writing MBR on disk (sdj) with partition 1 offset 64, erased: 0

 

To recover you will have to know how you partitioned the device to begin with.  More info about this will be posted with -beta14 announcement.

Link to comment

I believe I used the instructions here: 

 

  Command Line Method

 

  Lime Technology - unRAID Server Community » Company » Announcements » unRAID 6 Beta 6:  Btrfs Quick-Start Guide

 

  http://lime-technology.com/forum/index.php?topic=33806.0

 

As long as you don't try to rebuild the file system, or otherwise mess with the drive, when -beta14 comes out (or after downgrading back to -beta12) you should be able to type that exact same 'sgdisk' command to restore the partition table.
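That guide is not quoted here, so treat the following only as an illustration of the general shape of such a command: recreating a single partition that starts at sector 64 (matching the "offset 64" in the syslog line above) on a placeholder device /dev/sdX:

    sgdisk -o -a 8 -n 1:32K:0 /dev/sdX   # new GPT, 8-sector alignment, partition 1 from 32 KiB (sector 64) to end of disk

The exact command from the Btrfs Quick-Start Guide should be used in preference to this sketch if they differ.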

Link to comment

For the second time today, I've lost contact with the server.  Everything was fine, then I tried to access a share from Windows, and it's not available.

 

I can't get in from PuTTY or Windows.  I've also lost all control of the Windows VM that I was working with just 2 minutes earlier.

 

I have no syslog, as the system has frozen.  I had to hard boot earlier today, and the parity check is a little over half finished after almost 9 hours (since I hard-booted this morning).

 

Cache drive is XFS, and I've not had any other troubles that I'm aware of.  System had been working great for over a month on beta12, prior to upgrading.

 

I think I need to go back to beta12.  If there is anything you want me to check prior to reverting, please let me know.

 

thanks.

Link to comment

As long as you don't try to rebuild the file system, or otherwise mess with the drive, when -beta14 comes out (or after downgrading back to -beta12) you should be able to type that exact same 'sgdisk' command to restore the partition table.

 

This should work even though I am using a cache pool?

Link to comment

For the second time today, I've lost contact with the server.  Everything was fine, then I tried to access a share from Windows, and it's not available.

 

I can't get in from PuTTY or Windows.  I've also lost all control of the Windows VM that I was working with just 2 minutes earlier.

 

I have no syslog, as the system has frozen.  I had to hard boot earlier today, and the parity check is a little over half finished after almost 9 hours (since I hard-booted this morning).

 

Cache drive is XFS, and I've not had any other troubles that I'm aware of.  System had been working great for over a month on beta12, prior to upgrading.

 

I think I need to go back to beta12.  If there is anything you want me to check prior to reverting, please let me know.

 

thanks.

 

Do you have a console/IPMI hooked up?  If so, do you see an exception reported with the phrase "intel_pstate" mentioned anywhere, as in:

http://lime-technology.com/forum/index.php?topic=38193.msg354253#msg354253

 

Link to comment

But I forgot we rebuilt apcupsd package to be more secure by making a slight change in /etc/apc*.conf to have it listen only on localhost instead of all interfaces.

 

I hope you are not talking about restricting the "Network Information Server" from listening on the LAN port.  I currently have unRAID as the server and my PC and router as clients, as they all use the same UPS.  I want unRAID to shut down if there is a power outage.  My server runs 24/7/365 and, with a directly connected USB cable, is not relying on another PC to initiate a shutdown.

 

If my router gets corrupted, I can always reload it.  If my PC is on, I am probably here and can manually shut it down.  So I want to keep using unRAID as the server rather than as a client.  If it is merely a matter of editing a configuration file, I can do that, but I would prefer to have it work out of the box as it does today and as the source does.  If it has to be set as a default, how about a config setting on the plugin to enable listening on other interfaces?

 

Yes, that's the change we made to the apcupsd package: the NIS service now listens only on 127.0.0.1 instead of 0.0.0.0 by default.  (Config file: /etc/apcupsd/apcupsd.conf.)

 

We'll probably make this a config option in the webGui, but our intention was to lock this down by default.

 

This needs to be reverted to 0.0.0.0, because I use my unRAID server as the apcupsd host as well.
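For reference, a hedged sketch of the directive under discussion in /etc/apcupsd/apcupsd.conf (the exact file shipped with -beta13 is assumed, not quoted):

    grep -E '^(NISIP|NISPORT)' /etc/apcupsd/apcupsd.conf
    # With the new locked-down default (localhost only) this should show:
    #   NISIP 127.0.0.1
    #   NISPORT 3551
    # To let LAN clients such as a PC or router reach the NIS again, the
    # directive would need to read:
    #   NISIP 0.0.0.0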

Link to comment

For the second time today, I've lost contact with the server.  Everything was fine, then I tried to access a share from Windows, and it's not available.

 

I can't get in from PuTTY or Windows.  I've also lost all control of the Windows VM that I was working with just 2 minutes earlier.

 

I have no syslog, as the system has frozen.  I had to hard boot earlier today, and the parity check is a little over half finished after almost 9 hours (since I hard-booted this morning).

 

Cache drive is XFS, and I've not had any other troubles that I'm aware of.  System had been working great for over a month on beta12, prior to upgrading.

 

I think I need to go back to beta12.  If there is anything you want me to check prior to reverting, please let me know.

 

thanks.

 

Do you have a console/IPMI hooked up?  If so, do you see an exception reported with the phrase "intel_pstate" mentioned anywhere, as in:

http://lime-technology.com/forum/index.php?topic=38193.msg354253#msg354253

 

I do have a console connected, but as soon as I do PCI/video passthrough, it kills the reporting/information from the console, so I can't see anything beyond the original boot information :(

Link to comment

Now that dynamix is built into beta13, what do I need to do when I go back to beta12?  Do I need to 'uninstall' dynamix before reverting, or after?

 

I've uninstalled apcupsd, since it's only for beta13, and also the powerdown plugin, since it has been updated since beta13.

 

What else do i need to uninstall/revert as part of going back to beta12?

Link to comment

Now that dynamix is built into beta13, what do I need to do when I go back to beta12?  Do I need to 'uninstall' dynamix before reverting, or after?

 

I've uninstalled apcupsd, since it's only for beta13, and also the powerdown plugin, since it has been updated since beta13.

 

What else do i need to uninstall/revert as part of going back to beta12?

 

There is a link in the OP of the apcupsd thread for a b12 version of the apcupsd plugin.

 

http://lime-technology.com/forum/index.php?topic=34994.0

 

The powerdown plugin has not been changed for b13.  I only added the release notes.

Link to comment
