unRAID Server Release 5.0-rc11 Available


limetech


Hi,

 

I upgraded from a perfect rc8 install to rc11 which is not quite so perfect.

 

The issue I am having is that my user shares are no longer accessible.  From the syslog (attached) it appears it is something to do with "Transport endpoint is not connected".

 

Same issue reported here - http://lime-technology.com/forum/index.php?topic=25928.0

 

Note I also have Plex server running.

 

Any help appreciated.

syslog21022013.txt

Link to comment

Update: "There are no issues introduced between RC8 and RC11; things have only been fixed."

To be fair, that is not a true statement (e.g. the shfs/mover errors that didn't exist for me prior to RC11).

But one should try updating to the latest RC and report to Tom, so he knows how many and what type of issue(s) are being experienced.

And if the issue(s) are show-stoppers, they can downgrade after reporting in.

 

 

 

 

Link to comment

I wonder if other people experience considerably lower parity-check speeds with rc11 compared to rc5?

 

If you have a look at http://lime-technology.com/forum/index.php?topic=25890.0 , you will realise that the answer is 'Yes'.

 

However, there doesn't seem to be an explanation or a fix yet.  It just seems that the SAS2LP-MV8 doesn't work well on rc11.

I didn't know about that thread, thanks. I've rebooted into rc11 with a PCI latency of 32 instead of 64 to see if that makes any difference. It does not. Maybe there are other settings to look at? The SAS2LP itself is running at 5Gbps.

 

It looks like it's only parity checking that is slower; copying to/from the array over the network is the same as with rc5. No complaints there.

Link to comment

Has something changed regarding support for the Areca ARC 1200 controller in RC10 and RC11? My RAID0/RAID1 configuration stops working at random since RC10, and now RC11, resulting in losing my cache and parity volume/disk. This worked on RC8a for several months without any issue. I will investigate further; I'd just like to know.

 

Running bare metal now, NOT virtualized.

Link to comment

Since there is no comment coming from Limetech about this issue, I would emphasize AGAIN that rc11 has a big problem with slow parity speeds on X9SCM boards, probably with SAS2LP cards. Please comment.

e.g. parity speed using rc5 starts at 140+ MB/s and averages out at about 100 MB/s, while on rc11 it STARTS at about 50 MB/s... this is not good. I've tried both using stock, bare-metal versions (so no plugins or SimpleFeatures etc., just unRAID).

 

Please solve.

Link to comment

Since there is no comment coming from Limetech about this issue, I would emphasize AGAIN that rc11 has a big problem with slow parity speeds on X9SCM boards, probably with SAS2LP cards. Please comment.

e.g. parity speed using rc5 starts at 140+ MB/s and averages out at about 100 MB/s, while on rc11 it STARTS at about 50 MB/s... this is not good. I've tried both using stock, bare-metal versions (so no plugins or SimpleFeatures etc., just unRAID).

 

Please solve.

Did you compare the total time it takes to complete a parity check? That gives a better quantification than a few initial readings.
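A rough way to turn an average speed into a total check time, since the check has to read the full parity disk (the 4 TB size and the 100 MB/s average below are hypothetical figures, not measurements from this thread):

```shell
# Rough parity-check duration estimate from an average read speed.
# A 4 TB parity disk (decimal MB, as drives are sold) at 100 MB/s average:
awk 'BEGIN {
  size_mb = 4 * 1000 * 1000   # 4 TB expressed in MB
  speed   = 100               # assumed average speed, MB/s
  printf "%.1f hours\n", size_mb / speed / 3600
}'
```

By the same arithmetic, a drop to a 50 MB/s average would double the total time, which is why comparing completed check durations is more telling than the speed shown in the first few minutes.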

 

I agree that (in my case too) rc11 is slower than older versions, but the total duration isn't so much longer that I'd call it a big problem...

 

To improve parity-check speed, I modified the tunables (made all the numbers 10 times bigger).
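For anyone wanting to try the same: the tunables in question live on the Settings → Disk Settings page, and can also be set from the console with mdcmd. The lines below are only a sketch of the "defaults times ten" change the poster describes; the tunable names and the default values shown are assumptions, so check your own Disk Settings page for the actual names and current values before changing anything.

```shell
# Illustrative only: unRAID 5 md driver tunables set to assumed defaults x10.
# Verify names and current values on the Disk Settings page first.
/root/mdcmd set md_num_stripes 12800   # default believed to be 1280
/root/mdcmd set md_write_limit 7680    # default believed to be 768
/root/mdcmd set md_sync_window 3840    # default believed to be 384
```

Settings made this way do not survive a reboot unless also saved via the web GUI.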

 

Link to comment

Nope, I don't even know what that is. I just replace both binaries on the flash, reboot into either rc5 or rc11, and start a parity check. Nothing more. Both versions use the same settings, same hardware, same BIOS, same everything; NOTHING is changed while switching other than the two binaries.
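The swap being described is just copying a kernel/rootfs pair onto the flash and rebooting. A harmless sketch of that A/B procedure, using a scratch directory as a stand-in for the flash (bzimage/bzroot are the usual unRAID boot files; the rc5/rc11 subdirectory layout here is illustrative, not anything unRAID requires):

```shell
# Demonstrate the A/B swap with a scratch directory standing in for /boot.
FLASH=$(mktemp -d)                    # on a real server this would be /boot
mkdir -p "$FLASH/rc5" "$FLASH/rc11"
touch "$FLASH/rc5/bzimage"  "$FLASH/rc5/bzroot"    # stand-ins for the real
touch "$FLASH/rc11/bzimage" "$FLASH/rc11/bzroot"   # release binaries

# Pick a version, copy its pair into place, then (on real hardware) reboot:
cp "$FLASH/rc5/bzimage" "$FLASH/rc5/bzroot" "$FLASH/"
ls "$FLASH"
```

Keeping both pairs on the flash like this makes it cheap to flip back and forth between releases for comparison runs.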

Link to comment

Has something changed regarding support for the Areca ARC 1200 controller in RC10 and RC11? My RAID0/RAID1 configuration stops working at random since RC10, and now RC11, resulting in losing my cache and parity volume/disk. This worked on RC8a for several months without any issue. I will investigate further; I'd just like to know.

 

Two things to check, although it might be better to move this to a separate thread in the unRAID OS 5.0-rc forum.

 

Obtain an RC8a syslog for comparison with an RC10 or RC11 syslog, to see what version of Areca support each uses. Then take a look at, or provide a copy of, the current syslog from when the drives drop, to try to characterize what is actually happening/failing.

Link to comment

Since there is no comment coming from Limetech about this issue, I would emphasize AGAIN that rc11 has a big problem with slow parity speeds on X9SCM boards, probably with SAS2LP cards. Please comment.

e.g. parity speed using rc5 starts at 140+ MB/s and averages out at about 100 MB/s, while on rc11 it STARTS at about 50 MB/s... this is not good. I've tried both using stock, bare-metal versions (so no plugins or SimpleFeatures etc., just unRAID).

 

Please solve.

 

I hesitate to answer this, because you are clearly upset, and not likely to care for my response, and for that I apologize.  But I'm afraid most of us probably don't think this is a big problem, when it only involves a very few users on a specific hardware combination (not yet fully categorized either), and does not involve data integrity or normal data access.  It only involves a slower parity check, something that is not a time critical operation, something that most run at night and really don't care how long it takes.  That some users do care is fine, but they also should realize there is nothing fundamentally wrong, just a little time lost.

 

I know you want this solved, but right now the only real solution (beyond the given workarounds and some tunables tweaking) would be to declare "don't use that motherboard (combination?) with UnRAID if you don't like longer parity checks".  That's not a desirable declaration, especially when everything seems to work safely, just more slowly on some operations.  In the past, we would mention in the wiki that certain hardware worked with UnRAID, but with certain limitations (replace the NIC, don't use the xyz ports, etc).

 

I personally, and I believe others too, would like to see v5.0 finalized, then move on to the many other items that have been waiting.  This would become one of those items, knowing that it is possible that it may become one of those issues that is never completely solved.  UnRAID has already lost a year or two, because of hardware and driver issues, basically Linux issues, not UnRAID software issues.

Link to comment

I respect your take on this, but it's simply not to be blamed on the hardware used. The hardware is not changing; only the unRAID version is... rc5 performs, rc11 does not. My guess is this is another sign of the structural hardware/software compatibility issues unRAID seems to be suffering from. Until things like this are fixed, this should not be released as stable; it's as simple as that. If there are huge differences for the worse between RCs like this, something is seriously wrong.

 

Furthermore, the hardware used is not some cheap budget kit or old stuff lying around; it is expensive, high-end server-grade hardware. If there were a system unRAID should run PERFECTLY on, it would be something like this...

Link to comment
Since there is no comment coming from Limetech about this issue, I would emphasize AGAIN that rc11 has a big problem with slow parity speeds on X9SCM boards, probably with SAS2LP cards. Please comment.

e.g. parity speed using rc5 starts at 140+ MB/s and averages out at about 100 MB/s, while on rc11 it STARTS at about 50 MB/s... this is not good. I've tried both using stock, bare-metal versions (so no plugins or SimpleFeatures etc., just unRAID).

 

Please solve.

 

Don't want to suggest the obvious or go all off-topic, but how are individual drive reads and writes on your system w/rc11? Best would be to see raw r/w speeds of the same drives connected via onboard vs SAS2LP ports, say via dd.

 

And if you already have threads going trying to track this down then feel free to ignore me completely. :)

Link to comment

Don't want to suggest the obvious or go all off-topic, but how are individual drive reads and writes on your system w/rc11? Best would be to see raw r/w speeds of the same drives connected via onboard vs SAS2LP ports, say via dd.

I will test that, but drive speeds should be the same under rc5 or rc11, don't you think? I mean, if I find that the speeds under rc11 are slower, what does that tell us? It just makes my point. Also, I experience no difference in TRANSFER speed to or from the array over the network between rc5 and rc11...

Link to comment

Write speeds on anything above RC5 are significantly slower than on RC5, due to a kernel change needed to fix other issues. This slowness seems to be limited to newer hardware (for example, my motherboard) and is related to the amount of memory in the system (assigned to the unRAID process in the case of virtualisation).

There are a few ways to significantly reduce this effect, up to a very workable situation. Check out the relevant threads.

 

Even though I am personally affected by this issue, I seriously advise everyone involved to stop seeing this as a blocker for release, because it will kill off unRAID in the end. There will ALWAYS be situations where specific hardware causes issues; not everyone can be pleased.

 

This is why I would really like up-to-date hardware advice from Tom. At some point I was looking for new hardware and found myself doing a lot of forum background reading; I would have liked to be able to just buy recommended building blocks.

 

On the other hand, my sixth sense tells me that Tom is at the moment working on a 64-bit unRAID alpha release, and that is exactly what I think is needed, so let's keep each other happy by not nagging about this and let Tom do his thing :-)

Link to comment

Disk speeds on rc5; the slowest is sdi at 117 MB/s.

sdf is the parity disk (the first test). All drives are the same 4TB Hitachi (see sig).

 

TESTS ARE DONE WITH AN ACTIVE, ONLINE ARRAY!!!

 

RC5 disk speeds

root@UNRAID:~# hdparm -tT /dev/sdf

/dev/sdf:
Timing cached reads:   8218 MB in  2.00 seconds = 4113.64 MB/sec
Timing buffered disk reads: 428 MB in  3.01 seconds = 142.12 MB/sec
root@UNRAID:~# hdparm -tT /dev/sde

/dev/sde:
Timing cached reads:   8348 MB in  2.00 seconds = 4179.27 MB/sec
Timing buffered disk reads: 482 MB in  3.01 seconds = 160.38 MB/sec
root@UNRAID:~# hdparm -tT /dev/sdd

/dev/sdd:
Timing cached reads:   8472 MB in  2.00 seconds = 4241.50 MB/sec
Timing buffered disk reads: 486 MB in  3.01 seconds = 161.65 MB/sec
root@UNRAID:~# hdparm -tT /dev/sdc

/dev/sdc:
Timing cached reads:   8488 MB in  2.00 seconds = 4249.54 MB/sec
Timing buffered disk reads: 490 MB in  3.01 seconds = 162.75 MB/sec
root@UNRAID:~# hdparm -tT /dev/sdi

/dev/sdi:
Timing cached reads:   8416 MB in  2.00 seconds = 4213.25 MB/sec
Timing buffered disk reads: 352 MB in  3.00 seconds = 117.31 MB/sec
root@UNRAID:~# hdparm -tT /dev/sdh

/dev/sdh:
Timing cached reads:   8040 MB in  2.00 seconds = 4025.07 MB/sec
Timing buffered disk reads: 474 MB in  3.01 seconds = 157.44 MB/sec
root@UNRAID:~# hdparm -tT /dev/sdg

/dev/sdg:
Timing cached reads:   8438 MB in  2.00 seconds = 4224.20 MB/sec
Timing buffered disk reads: 496 MB in  3.01 seconds = 164.81 MB/sec

 

Disk speeds RC11

root@UNRAID:~# hdparm -tT /dev/sdf

/dev/sdf:
Timing cached reads:   8256 MB in  2.00 seconds = 4135.54 MB/sec
Timing buffered disk reads: 476 MB in  3.01 seconds = 158.38 MB/sec
root@UNRAID:~# hdparm -tT /dev/sde

/dev/sde:
Timing cached reads:   7668 MB in  2.00 seconds = 3838.49 MB/sec
Timing buffered disk reads: 482 MB in  3.01 seconds = 160.09 MB/sec
root@UNRAID:~# hdparm -tT /dev/sdd

/dev/sdd:
Timing cached reads:   8004 MB in  2.00 seconds = 4006.79 MB/sec
Timing buffered disk reads: 486 MB in  3.01 seconds = 161.55 MB/sec
root@UNRAID:~# hdparm -tT /dev/sdc

/dev/sdc:
Timing cached reads:   7868 MB in  2.00 seconds = 3938.93 MB/sec
Timing buffered disk reads: 490 MB in  3.01 seconds = 162.97 MB/sec
root@UNRAID:~# hdparm -tT /dev/sdi

/dev/sdi:
Timing cached reads:   8114 MB in  2.00 seconds = 4061.32 MB/sec
Timing buffered disk reads: 352 MB in  3.01 seconds = 117.08 MB/sec
root@UNRAID:~# hdparm -tT /dev/sdh

/dev/sdh:
Timing cached reads:   7428 MB in  2.00 seconds = 3718.17 MB/sec
Timing buffered disk reads: 464 MB in  3.08 seconds = 150.85 MB/sec
root@UNRAID:~# hdparm -tT /dev/sdg

/dev/sdg:
Timing cached reads:   7158 MB in  2.00 seconds = 3583.00 MB/sec
Timing buffered disk reads: 496 MB in  3.01 seconds = 164.75 MB/sec

 

So, basically no difference in results. The sdi drive is a bit slower; I will look into that, but it is the same on both versions.

Link to comment

Ok.

 

rc5

Linux 3.0.35-unRAID.
root@UNRAID:~# dd  if=/dev/zero  of=/mnt/disk1/testfile  bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 21.2893 s, 49.3 MB/s

 

rc11

Linux 3.4.26-unRAID.
root@UNRAID:~# dd  if=/dev/zero  of=/mnt/disk1/testfile  bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 25.5272 s, 41.1 MB/s

 

So about the same, I guess. A tad slower perhaps.
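For what it's worth, the two dd runs above can be compared directly; taking the 49.3 and 41.1 MB/s figures as posted, the drop works out to:

```shell
# Percentage slowdown between the two dd runs quoted above (rc5: 49.3 MB/s,
# rc11: 41.1 MB/s).
awk 'BEGIN { printf "rc11 is about %.0f%% slower than rc5\n", (1 - 41.1/49.3) * 100 }'
```

So "a tad slower" is roughly a sixth of the rc5 write speed, in line with the averaged measurements posted further down.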

Link to comment

My measurements (RC5 is faster than RC11 and RC8)...

 

RC11 (linux 3.4.26)

2TB disk

1048576000 bytes (1.0 GB) copied, 31.9798 s, 32.8 MB/s

1048576000 bytes (1.0 GB) copied, 24.0046 s, 43.7 MB/s

1048576000 bytes (1.0 GB) copied, 25.8743 s, 40.5 MB/s

1048576000 bytes (1.0 GB) copied, 23.9629 s, 43.8 MB/s

1048576000 bytes (1.0 GB) copied, 26.0995 s, 40.2 MB/s

Average: 40.2 MB/s

 

3TB disk

1048576000 bytes (1.0 GB) copied, 23.3049 s, 45.0 MB/s

1048576000 bytes (1.0 GB) copied, 21.9795 s, 47.7 MB/s

1048576000 bytes (1.0 GB) copied, 24.5101 s, 42.8 MB/s

1048576000 bytes (1.0 GB) copied, 23.0835 s, 45.4 MB/s

1048576000 bytes (1.0 GB) copied, 24.0225 s, 43.6 MB/s

Average: 44.9 MB/s

 

RC5 (linux 3.0.35)

2TB disk

1048576000 bytes (1.0 GB) copied, 21.7467 s, 48.2 MB/s

1048576000 bytes (1.0 GB) copied, 23.7081 s, 44.2 MB/s

1048576000 bytes (1.0 GB) copied, 21.8871 s, 47.9 MB/s

1048576000 bytes (1.0 GB) copied, 23.6609 s, 44.3 MB/s

1048576000 bytes (1.0 GB) copied, 21.0582 s, 49.8 MB/s

Average: 46.9 MB/s (17% faster)

 

3TB disk

1048576000 bytes (1.0 GB) copied, 19.8749 s, 52.8 MB/s

1048576000 bytes (1.0 GB) copied, 21.2992 s, 49.2 MB/s

1048576000 bytes (1.0 GB) copied, 17.6082 s, 59.6 MB/s

1048576000 bytes (1.0 GB) copied, 20.5398 s, 51.1 MB/s

1048576000 bytes (1.0 GB) copied, 18.7175 s, 56.0 MB/s

Average: 53.7 MB/s (20% faster)

 

RC8 (linux 3.4.11)

2TB disk

1048576000 bytes (1.0 GB) copied, 25.8166 s, 40.6 MB/s

1048576000 bytes (1.0 GB) copied, 23.4188 s, 44.8 MB/s

1048576000 bytes (1.0 GB) copied, 26.2175 s, 40.0 MB/s

1048576000 bytes (1.0 GB) copied, 24.1458 s, 43.4 MB/s

1048576000 bytes (1.0 GB) copied, 26.4477 s, 39.6 MB/s

Average: 41.7 MB/s

 

3TB disk

1048576000 bytes (1.0 GB) copied, 23.7752 s, 44.1 MB/s

1048576000 bytes (1.0 GB) copied, 22.1375 s, 47.4 MB/s

1048576000 bytes (1.0 GB) copied, 23.5523 s, 44.5 MB/s

1048576000 bytes (1.0 GB) copied, 22.8074 s, 46.0 MB/s

1048576000 bytes (1.0 GB) copied, 24.4973 s, 42.8 MB/s

Average: 45.0 MB/s
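The averages above are straightforward to reproduce from the quoted dd figures; for example, the RC5 / 3TB run:

```shell
# Recompute the RC5 / 3TB average from the five dd figures quoted above.
printf '%s\n' 52.8 49.2 59.6 51.1 56.0 |
  awk '{ sum += $1; n++ } END { printf "Average: %.1f MB/s\n", sum / n }'
```

The same one-liner, fed the other four sets of figures, reproduces the remaining averages and the ~17%/~20% RC5-vs-RC11 differences stated above.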

Link to comment
