unRAID Server Release 6.2.0-beta18 Available



Although this may well be true of the way that LimeTech has elected to implement dual parity, it is not necessarily so with all dual parity schemes. Some of them take the disk position into account as part of the calculation of the second parity and CAN identify which disk has the error if a single bit goes wrong, then apply the correct auto-correction.

 

Not exactly.  Refer to this paper:

https://www.kernel.org/pub/linux/kernel/people/hpa/raid6.pdf

 

Look at section 4. There it talks about the possibility of using the algebra to detect which disk is wrong in a single-disk corruption case. However, note this:

 

Finally, as a word of caution it should be noted that RAID-6 by itself cannot (in the general case) even detect, never mind recover from, dual-disk corruption. If two disks are corrupt in the same byte positions, the above algorithm will (again, in the general case) introduce additional data corruption by corrupting a third drive.

 

For this reason, we have not added this feature. Perhaps it could be added in the future.
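
For the curious, here is a hedged sketch of the section-4 algebra (using the paper's notation; this is not a description of unRAID's actual code). With P = D0 xor D1 xor ... xor Dn-1 and Q = g^0·D0 xor g^1·D1 xor ... xor g^(n-1)·Dn-1 computed over GF(2^8), corrupting a single unknown disk z by some value X gives observed syndromes P' = P xor X and Q' = Q xor g^z·X. Dividing the two differences in the field yields (Q xor Q') / (P xor P') = g^z, so z = log_g of that quotient. The identification only works when exactly one disk is corrupt, which is why the dual-disk caveat quoted above rules out safe auto-correction in the general case.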

 

Having said that, the ability to re-order data disks without invalidating parity is a good plus for the simpler scheme that LimeTech seems to have adopted.

 

This is absolutely not true. I think this is documented somewhere, but, when you have both parity disks, you absolutely cannot rearrange the data disks without invalidating Parity2 (i.e., Q). unRAID does detect this case and won't let you start the array. If you have to rearrange data disks then you have to also unassign Parity2 first. If you have no Parity2 disk to start with, this does not apply.

Link to comment

I got it working by switching the NVMe disk and the HDD around between the Cache and Disk slots and letting unRAID reformat.

But at the moment I have the following error:

"SG_IO: questionable sense data, results may be incorrect"

 

This is a known issue: the Linux 'smartmontools' package (the smartctl command) does not yet support extracting SMART data from NVMe devices. The most you can get is the temperature.
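
As a hedged aside (assuming the nvme-cli package can be installed on this beta, which I have not verified), the drive's full SMART page can be read directly with:

nvme smart-log /dev/nvme0

until smartctl gains NVMe support.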

Link to comment

Just curious regarding 2nd parity.

Before, if I wanted a bigger drive, I upgraded my parity drive and then added the bigger drive. Will I have to replace 2 parity drives now to install 1 bigger data drive? Do both parity drives have to be bigger than the data drive, or just 1?

 

Right, both parity drives need to be as large or larger than all data drives (but the two parity drives can be different sizes).
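
A quick worked example of that rule: with a 6TB parity and a 4TB parity2, the largest data drive you could add is 4TB, since every data drive has to fit under the smaller of the two parity drives.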

 

Another limitation in 6.2-beta: we haven't implemented the 'swap disable' feature yet.

Link to comment

So I have found a couple of "bugs" on the VM side of the new version.

 

1. Selecting "View XML" for a VM that is already running crashes the VM and the web GUI. The desktop client is still running and can be used, but only via the basic command line. After this I was also unable to edit my VM, getting an error that "jamie.xml.new" already exists.

2. Editing my Windows 10 VM using the normal "Edit" button removes the primary VM image entirely from the page, and I have to manually set it back to the correct path (this started to happen after point 1 as well).

3. The crash in point 1 also seems to be triggered by installing Avast on a Windows 10 VM; I have replicated this issue on 4 different installs at different update versions.

 

I have also found that some Windows 10 ISOs that work with SeaBIOS do not get picked up by OVMF at all.

 

I am now into my 12th install of Windows 10 today and have yet to have one that is fully usable (the one I did get usable was destroyed by what I mentioned in point 1).

 

I'm going to keep seeing what I can find and hopefully find fixes for my current issues.

 

Regards,

Jamie

Link to comment

 

Having said that, the ability to re-order data disks without invalidating parity is a good plus for the simpler scheme that LimeTech seems to have adopted.

 

This is absolutely not true. I think this is documented somewhere, but, when you have both parity disks, you absolutely cannot rearrange the data disks without invalidating Parity2 (i.e., Q). unRAID does detect this case and won't let you start the array. If you have to rearrange data disks then you have to also unassign Parity2 first. If you have no Parity2 disk to start with, this does not apply.

 

Mmm.. excuse my ignorance, but what does "rearranging disks" mean? Does that mean that a disk in the array cannot be moved to another SATA connection without invalidating parity, for example? If so, is there not a risk that adding a new SATA card will "reshuffle" drive assignments and invalidate parity?

Link to comment

Mmm.. excuse my ignorance, but what does "rearranging disks" mean? Does that mean that a disk in the array cannot be moved to another SATA connection without invalidating parity, for example? If so, is there not a risk that adding a new SATA card will "reshuffle" drive assignments and invalidate parity?

 

It means that the disk slot assignments cannot be rearranged like you can with single parity; you can still change controllers.

Link to comment

Mmm.. excuse my ignorance, but what does "rearranging disks" mean? Does that mean that a disk in the array cannot be moved to another SATA connection without invalidating parity, for example? If so, is there not a risk that adding a new SATA card will "reshuffle" drive assignments and invalidate parity?

 

It means that the disk slot assignments cannot be rearranged like you can with single parity; you can still change controllers.

 

Ah.... so what is now "disk1" cannot be made "disk2"?

 

Ok... That is of no importance..

Link to comment

 

Having said that, the ability to re-order data disks without invalidating parity is a good plus for the simpler scheme that LimeTech seems to have adopted.

 

This is absolutely not true. I think this is documented somewhere, but, when you have both parity disks, you absolutely cannot rearrange the data disks without invalidating Parity2 (i.e., Q). unRAID does detect this case and won't let you start the array. If you have to rearrange data disks then you have to also unassign Parity2 first. If you have no Parity2 disk to start with, this does not apply.

 

Mmm.. excuse my ignorance, but what does "rearranging disks" mean? Does that mean that a disk in the array cannot be moved to another SATA connection without invalidating parity, for example? If so, is there not a risk that adding a new SATA card will "reshuffle" drive assignments and invalidate parity?

Pretty sure that it means the slot assignments within unRAID. Shuffling drives around physically and/or changing a controller card will not shuffle the disk assignments, because unRAID keeps track of assignments by serial number and not by Linux's sdX device names.
Link to comment

Mmm.. excuse my ignorance, but what does "rearranging disks" mean? Does that mean that a disk in the array cannot be moved to another SATA connection without invalidating parity, for example? If so, is there not a risk that adding a new SATA card will "reshuffle" drive assignments and invalidate parity?

 

It means that the disk slot assignments cannot be rearranged like you can with single parity; you can still change controllers.

 

Ah.... so what is now "disk1" cannot be made "disk2"?

 

Ok... That is of no importance..

 

Exactly right.

Link to comment

So for whatever reason I seem to only be able to keep my VMs running for around an hour before they lock my server up.

 

My VMs are on my cache drive, which is an NVMe device. Sadly it manages to crash Windows into an unrecoverable state (unable to restore, or overwrite it for a boot fix).

 

I'm not sure why this is happening, and it seems to be entirely random now; I can be on the internet, installing anti-virus (AVG does not cause the issue I mentioned earlier) or installing software.

 

I'm going to keep trying to see if I can find a pattern, but so far it seems rather random in general.

 

Regards,

Jamie

Link to comment

I looked through all of the posts and it doesn't appear that anyone has asked this yet.

 

If you are using SeaBIOS + GPU passthrough, will starting up the VM also take down the WebGUI now? Or has that problem been fixed for both the console and WebGUI?

 

I'd test it myself but can't take down the system that supports IOMMU atm.

Link to comment

This is absolutely not true. I think this is documented somewhere, but, when you have both parity disks, you absolutely cannot rearrange the data disks without invalidating Parity2 (i.e., Q). unRAID does detect this case and won't let you start the array. If you have to rearrange data disks then you have to also unassign Parity2 first. If you have no Parity2 disk to start with, this does not apply.

 

I was almost sure I had tested this yesterday but wanted to confirm before posting: v6.2 won't let you rearrange disk slots even when using single parity, e.g., swapping the disk1 and disk2 positions gives you "Too many wrong and/or missing disks!". It should be possible like on v6.1, correct?

 

Link to comment

This is absolutely not true. I think this is documented somewhere, but, when you have both parity disks, you absolutely cannot rearrange the data disks without invalidating Parity2 (i.e., Q). unRAID does detect this case and won't let you start the array. If you have to rearrange data disks then you have to also unassign Parity2 first. If you have no Parity2 disk to start with, this does not apply.

 

I was almost sure I had tested this yesterday but wanted to confirm before posting: v6.2 won't let you rearrange disk slots even when using single parity, e.g., swapping the disk1 and disk2 positions gives you "Too many wrong and/or missing disks!". It should be possible like on v6.1, correct?

Yes, for 6.1 that is permitted. I'll look at that for 6.2.

 

One more refinement: in 6.1, or in 6.2 with only P and one data drive, only Q and one data drive, or P + Q + 1 data drive, the array is effectively a RAID1 (3-way in the case of P+Q+D); that is, the data on all the involved devices is identical.
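
To see why, take the single-data-drive case: P is the XOR of just that one disk, so P = D0, and Q = g^0·D0 = D0 since g^0 = 1 in the field, so parity, parity2 and the data disk hold byte-for-byte identical contents.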

Link to comment

Hi all.

 

I installed 6.2beta18 yesterday, added a 2nd parity drive and ran a parity sync. From there everything was fine; dockers and plugins seem to work. I had some performance issues with Samba, so I decided to stop the array and reboot. The array wouldn't stop even after 15 minutes; apparently it was still unmounting user shares, but I couldn't see any machines that were using it.

So I forced a reboot from SSH and it rebooted. Now it's booted, I can get in via SSH and the flash drive is also shared on the network, but the web interface will not open. Any ideas?

Link to comment

 

 

Found a couple of things different to 6.1.9

 

1) I have an update available for the Nerdpack plugin (shown under Settings), however the update button doesn't do anything. (lshw-B.02.17-x86_64-1_SBo_LT.txz)

 

2) I have some lines in my go script to install a python program:

- it needs to install PIP, for which I use https://pip.pypa.io/en/stable/installing

- That program gives an error "ImportError: cannot import name HTTPSHandler"

 

* I have seen reference to Python needing openssl here: http://stackoverflow.com/questions/32054580/httpshandler-error-while-installing-pip-with-python-2-7-9

 

I think lshw is included in 6.2. I'm working on an update for Nerdpack; some packages won't work with 6.2. You need Python 2.7.11. I updated the Virtual Machine Wake On LAN plugin and the Speedtest plugin to use this version.

https://github.com/dmacias72/unRAID-plugins/raw/master/source/packages/python-2.7.11-x86_64-2.txz

Here are also pip and pysetuptools. You can do pip install --upgrade pip to get the latest.

https://github.com/dmacias72/unRAID-plugins/raw/master/source/packages/pip-7.1.2-x86_64-7_slack.txz

https://github.com/dmacias72/unRAID-plugins/raw/master/source/packages/pysetuptools-18.2-x86_64-2_slack.txz

Link to comment

Pretty excited about this!

 

NOTE:  If you do NOT have a cache device, you will need to add those three shares manually if you wish to utilize apps or VMs on unRAID.

 

This confuses me a bit. I don't use a cache disk, and run docker from a mounted SSD. What do I need to do to add these shares? Are they SMB Shares? User Shares?

 

Has anyone without a cache disk tried the upgrade?  Any further details on the above?  My docker lives outside the array on a separately-mounted disk.  I don't really want or need a cache drive, but don't want docker on the array, either.  Has support for that configuration been eliminated?

 

No, this should work just fine. I removed my cache drive from the array, which gave me a drive called sdc. I mounted sdc1 to /mnt/diskA via SSH. From there I just went into the Docker settings and told it to create a docker image under /mnt/diskA, then enabled Docker and set up a docker image. For VMs it was a bit odd at first, as I think I had the Libvirt storage location on the array but the default VM location on diskA; the VM tab didn't show up until I moved the Libvirt storage location to diskA as well. Once I did that I was able to click one of the new template buttons, select the ISO, and it would automatically place my vDisk on diskA for me. The VM started up and booted to a CD to install just fine.

 

After a reboot I just had to recreate the mount point /mnt/diskA, run mount /dev/sdc1 /mnt/diskA, and the system had no issues detecting the data and running Docker and VMs again. I'm guessing you would probably put those in a startup script to make it work correctly each time; see the sketch below.
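
A minimal sketch of those startup lines, reusing the /dev/sdc1 device and /mnt/diskA mount point from the post above (note this assumes the device keeps the same sdX name across boots; mounting by label or UUID would be more robust):

# added to the go script (sketch only)
mkdir -p /mnt/diskA
mount /dev/sdc1 /mnt/diskA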

Link to comment

Before I attempt to convert my two unassigned SSD/zfs pool drives to a cache pool - are we allowed to set the raid mode and have it stick? I want to combine the 2 240G drives into a single unprotected 480G pool.

 

Thanks

Myk

 

Yes, you can post-configure your cache pool as raid0.

 

After assigning both SSDs to your cache pool and starting the array, you can click on the first Cache disk and Balance with the following options for raid0:

-dconvert=raid0 -mconvert=raid0

 

Mar 13 17:32:55 Tower kernel: BTRFS error (device sdd1): balance will reduce metadata integrity, use force if you want this

 

 

What is the format to add the force option?

 

Myk

Link to comment

Mar 13 17:32:55 Tower kernel: BTRFS error (device sdd1): balance will reduce metadata integrity, use force if you want this

 

 

What is the format to add the force option?

 

Myk

 

You can use instead:

 

-dconvert=raid0 -mconvert=raid1

 

This will convert your data to raid0 but leave the metadata in raid1; metadata takes up very little space and the added redundancy can be useful.
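
For completeness: if you really do want raid0 metadata, the 'force' the kernel message refers to is the -f flag to balance start on the command line, roughly as below (a sketch; it assumes the pool is mounted at /mnt/cache, and I don't know whether the GUI's Balance box passes -f through):

btrfs balance start -f -dconvert=raid0 -mconvert=raid0 /mnt/cache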

Link to comment

 

 

Found a couple of things different to 6.1.9

 

1) I have an update available for the Nerdpack plugin (shown under Settings), however the update button doesn't do anything. (lshw-B.02.17-x86_64-1_SBo_LT.txz)

 

2) I have some lines in my go script to install a python program:

- it needs to install PIP, for which I use https://pip.pypa.io/en/stable/installing

- That program gives an error "ImportError: cannot import name HTTPSHandler"

 

* I have seen reference to Python needing openssl here: http://stackoverflow.com/questions/32054580/httpshandler-error-while-installing-pip-with-python-2-7-9

 

I think lshw is included in 6.2. I'm working on an update for Nerdpack; some packages won't work with 6.2. You need Python 2.7.11. I updated the Virtual Machine Wake On LAN plugin and the Speedtest plugin to use this version.

https://github.com/dmacias72/unRAID-plugins/raw/master/source/packages/python-2.7.11-x86_64-2.txz

Here are also pip and pysetuptools. You can do pip install --upgrade pip to get the latest.

https://github.com/dmacias72/unRAID-plugins/raw/master/source/packages/pip-7.1.2-x86_64-7_slack.txz

https://github.com/dmacias72/unRAID-plugins/raw/master/source/packages/pysetuptools-18.2-x86_64-2_slack.txz

 

Thanks for the info! I got around the PIP problem by writing a function in Node-RED - it was for reading values from my heating controller.

 

Link to comment

Ok, update to my previous post: my unRAID box wasn't allowing access to the web interface because auto-start of the array was enabled. I disabled that and rebooted again.

The machine seems to be locking up when it tries to start the array; maybe lockup is not the right word, but it's certainly taking far longer than the normal 1 minute to start the array. Maybe the 2nd parity drive is breaking things?

Link to comment

Ok, update to my previous post: my unRAID box wasn't allowing access to the web interface because auto-start of the array was enabled. I disabled that and rebooted again.

The machine seems to be locking up when it tries to start the array; maybe lockup is not the right word, but it's certainly taking far longer than the normal 1 minute to start the array. Maybe the 2nd parity drive is breaking things?

 

My vote is a filesystem issue on one of the disks when mounting. Can you get a syslog after trying to start? If not, try this, then attach the syslog.

Link to comment

Ok, update to my previous post: my unRAID box wasn't allowing access to the web interface because auto-start of the array was enabled. I disabled that and rebooted again.

The machine seems to be locking up when it tries to start the array; maybe lockup is not the right word, but it's certainly taking far longer than the normal 1 minute to start the array. Maybe the 2nd parity drive is breaking things?

 

My vote is a filesystem issue on one of the disks when mounting. Can you get a syslog after trying to start? If not, try this, then attach the syslog.

 

Thanks mate.

 

Syslog below, uploaded to Dropbox due to size (200KB).

https://www.dropbox.com/s/xsth0oblt2c10hg/syslog.txt?dl=0

Link to comment

Mar 13 17:32:55 Tower kernel: BTRFS error (device sdd1): balance will reduce metadata integrity, use force if you want this

 

 

What is the format to add the force option?

 

Myk

 

You can use instead:

 

-dconvert=raid0 -mconvert=raid1

 

This will convert your data to raid0 but leave the metadata in raid1; metadata takes up very little space and the added redundancy can be useful.

 

Ok, that worked. Note though - you have to stop and restart the array for the cache pool size to change.

 

Thanks

Myk

 

Link to comment

Ok, update to my previous post: my unRAID box wasn't allowing access to the web interface because auto-start of the array was enabled. I disabled that and rebooted again.

The machine seems to be locking up when it tries to start the array; maybe lockup is not the right word, but it's certainly taking far longer than the normal 1 minute to start the array. Maybe the 2nd parity drive is breaking things?

 

My vote is a filesystem issue on one of the disks when mounting. Can you get a syslog after trying to start? If not, try this, then attach the syslog.

 

Thanks mate.

 

Syslog below, uploaded to Dropbox due to size (200KB).

https://www.dropbox.com/s/0m73mgxwr7m8ii5/syslog?dl=0

 

Yep, disk8 has filesystem issues, see https://lime-technology.com/wiki/index.php/Check_Disk_Filesystems#Drives_formatted_with_XFS

 

It should fix it, but if not, it's better to start a new support topic since it's not 6.2-beta related.
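
(For reference, the linked procedure boils down to starting the array in maintenance mode and running the repair tool against the disk's md device, so parity stays in sync; for disk8 that would look something like the line below, but follow the wiki for the exact current steps.)

xfs_repair -v /dev/md8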

 

 

Link to comment

Ok, update to my previous post: my unRAID box wasn't allowing access to the web interface because auto-start of the array was enabled. I disabled that and rebooted again.

The machine seems to be locking up when it tries to start the array; maybe lockup is not the right word, but it's certainly taking far longer than the normal 1 minute to start the array. Maybe the 2nd parity drive is breaking things?

 

My vote is a filesystem issue on one of the disks when mounting. Can you get a syslog after trying to start? If not, try this, then attach the syslog.

 

Thanks mate.

 

Syslog below, uploaded to Dropbox due to size (200KB).

https://www.dropbox.com/s/0m73mgxwr7m8ii5/syslog?dl=0

 

Yep, disk8 has filesystem issues, see https://lime-technology.com/wiki/index.php/Check_Disk_Filesystems#Drives_formatted_with_XFS

 

It should fix it, but if not, it's better to start a new support topic since it's not 6.2-beta related.

 

I'll give that a go and start a separate thread if issues arise from this. Thank you for your help.

Link to comment