Release Information



... the main reasons for the 5.0 release is >2TB support. This part has been working since beta1.

 

Are you sure about that?

 

Which part? That this was one of (my) main reasons for wanting the 5.0 release, or that it was working? Maybe I was wrong in stating that it worked from beta1, but on the forums there are lots of people running betaXX with >2TB drives without problems.

It was 5.0b8, if I remember correctly.

 

OK; in any case, it has been working for more than a year ;)


 

I know some feel nothing should be introduced, but... even Tom came back a few times asking how many drives we would want. Just about everyone came back with 24, which is not unreasonable if it can be done. My personal thought is that 5.0 needs to be more than what 4.7 was, to keep up with the times.

 

 

So you complain, vehemently, in the rc1 thread about how long this is taking (page 3), yet you want feature creep.

 

There have been many cool ideas: network bonding, multiple unRAID servers displayed as one, an additional parity drive, etc. These could all wait until 5.1; what has been mentioned here is nothing new.

 

As for everything else, you don't know what you're talking about. I will leave it at that, as a favor to someone.

 


Don't know if this is the right place to post, but I have been running RC-3 since its release on an HP N40L with 3TB drives, successfully; it is rock solid for my configuration, and the parity check is speedy. I also have another server with 2TB drives running 4.7 without problems. Besides the >2TB drive support and a better UI, which I like, what else is significantly different between the 4.7 final and the 5 RC-3 release?

 

Thanks

O2G


Upgraded to RC3 yesterday from 4.7

Had to run the Upgrade Permission script from the command line because the web interface was timing out.
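If anyone else needs it, this is roughly how I ran it from a telnet/SSH session (an assumption on my part that the utility ships as newperms under /usr/local/sbin on 5.0 builds; adjust the path if your build differs, and the share name below is just a placeholder):

/usr/local/sbin/newperms

# or limit it to one share to keep the run short:
/usr/local/sbin/newperms /mnt/user/Movies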

 

I went to delete a directory in one of my shares and Windows gave me an error "You do not have rights to delete directory owned by nobody"

 

I know Samba half decently and looked for the logs but could not find them to post.
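For what it's worth, on a stock Slackware-style build Samba's logs would normally live under /var/log/samba/, though unRAID folds most daemon output into the main syslog, so that may be the quicker place to look (an assumption on my part):

ls -l /var/log/samba/
grep -i smbd /var/log/syslog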

 

I only have one user set up, and that is the root user. Everything else is at its defaults.

 

(BTW, the speed of the new release is super. Parity checks and copies are noticeably faster! Can't wait until we can do some add-on development more stably.)


All this talk of NFS problems has me confused.  I am not seeing any NFS problems.  Perhaps I just don't know where to look.  I am only using NFS for rsyncing between servers to make mirror images on a weekly basis.  (My servers are running 5b14 and 5rc3 and the backup servers live offsite.)  What NFS problems should I be seeing with this setup?

 

NFS is used as follows:

# Create mount points (idempotent with -p, since this runs weekly)
mkdir -p /mnt/s1disk1
mkdir -p /mnt/s1disk2
mkdir -p /mnt/s1disk3

# Mount each of server1's data disks over NFS
mount -t nfs server1:/mnt/disk1/ /mnt/s1disk1
mount -t nfs server1:/mnt/disk2/ /mnt/s1disk2
mount -t nfs server1:/mnt/disk3/ /mnt/s1disk3

# Mirror each remote disk onto the matching local disk, logging the stats
rsync -av --stats --progress /mnt/s1disk1/ /mnt/disk1/  >> /boot/logs/cronlogs/t1disk1.log
rsync -av --stats --progress /mnt/s1disk2/ /mnt/disk2/  >> /boot/logs/cronlogs/t1disk2.log
rsync -av --stats --progress /mnt/s1disk3/ /mnt/disk3/  >> /boot/logs/cronlogs/t1disk3.log
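(Since this runs unattended from cron, it may be worth unmounting afterwards so stale NFS handles don't linger between weekly runs; a small addition, assuming the same mount points:)

umount /mnt/s1disk1 /mnt/s1disk2 /mnt/s1disk3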


I can confirm that both spin-down issues appear to be fixed in this RC. I started my server, initiated a parity check, stopped it after about 5 minutes, rebooted and set the hardware clock back 15 minutes, booted and started a parity check, aborted it after about 7 minutes, rebooted and set the hardware clock ahead 15 minutes, and booted and started a parity check. At no time did any spin-downs occur, and the time was corrected perfectly. This of course is not a definitive test, since it covers a rather short time period, but others also seem to report success, with no adverse reports at all (so far). I'll edit the UnRAID Bug Tracking trial wiki page accordingly.
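For anyone wanting to reproduce the clock-skew part of the test, it boils down to something like this from the console before each reboot (a sketch; the 15-minute offsets are arbitrary, and I'm assuming the full util-linux hwclock and GNU date are available, as on a stock build):

hwclock --set --date="$(date -d '15 minutes ago')"   # skew the hardware clock back
reboot
# ...run the parity check, then skew it forward and repeat:
hwclock --set --date="$(date -d '15 minutes')"
reboot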

 

On a personal note: for a few minutes, my parity speed seemed as fast as always, but within the first 3 to 7 minutes the hard disk errors began to multiply, with long hangs (about 40 seconds each), so I had to abort the parity checks very early. The errors are associated with the same drive as in my earlier report. I started a long SMART test, and it stopped rather early with read errors; reallocated sectors went from 0 to 4, current pending sectors rose to 103, and raw_read_error_rate dropped drastically, to exactly the threshold value. I started another long SMART test, and it too finished very early, with another 4 reallocated sectors and another severe drop in raw_read_error_rate. The drive is now failing SMART, and appears to be failing rapidly, so I won't access it again except in an emergency. If the long test had continued over the rest of the drive, I suspect the 8 reallocated sectors would be much, much higher!  :'(  It was a great drive too (SAMSUNG HD501LJ): extremely reliable, quick, and cool. Now I need to replace it, but without the benefit of a recent parity check! Lessons learned? I clearly underestimated the seriousness of the drive errors (timeouts on Read DMAs); I thought they did not really matter. I should have realized that the errors indicated the drive was taking a very long time to read a number of sectors, and that should have been a red flag.
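(For anyone following along, the checks above amount to commands like these, with /dev/sdX standing in for the failing drive:)

smartctl -t long /dev/sdX    # start a long self-test
smartctl -H /dev/sdX         # overall SMART health verdict
smartctl -A /dev/sdX         # attributes: Reallocated_Sector_Ct, Current_Pending_Sector, Raw_Read_Error_Rate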


Will post full logs if needed; I just never noticed this before. Is it normal, when the array is offline, for the syslog to show a load/unload/inventory routine every 5 seconds? After starting the array, all is good.

 

May 11 19:58:36 Tower emhttp: Device inventory:
May 11 19:58:36 Tower emhttp: WDC_WD30EZRX-00MMMB0_WD-WCAWZ1155791 (sda) 2930266584
...
May 11 19:58:36 Tower kernel: mdcmd (1): import 0 8,32 2930266532 Hitachi_HDS5C3030ALA630_MJ1321YNG0E2EA
May 11 19:58:36 Tower kernel: md: import disk0: [8,32] (sdc) Hitachi_HDS5C3030ALA630_MJ1321YNG0E2EA size: 2930266532
May 11 19:58:36 Tower kernel: mdcmd (2): import 1 8,0 2930266532 WDC_WD30EZRX-00MMMB0_WD-WCAWZ1155791
...
May 11 19:58:36 Tower emhttp: shcmd (11): /usr/local/sbin/emhttp_event driver_loaded
May 11 19:58:36 Tower emhttp_event: driver_loaded
May 11 19:58:37 Tower emhttp: shcmd (12): rmmod md-mod |& logger
May 11 19:58:37 Tower emhttp: shcmd (13): modprobe md-mod super=/boot/config/super.dat slots=6 |& logger
May 11 19:58:37 Tower emhttp: shcmd (14): udevadm settle
May 11 19:58:37 Tower kernel: md: unRAID driver removed
May 11 19:58:37 Tower kernel: md: unRAID driver 2.1.3 installed
May 11 19:58:37 Tower emhttp: Device inventory:
May 11 19:58:37 Tower emhttp: WDC_WD30EZRX-00MMMB0_WD-WCAWZ1155791 (sda) 2930266584
...
May 11 19:58:37 Tower kernel: mdcmd (1): import 0 8,32 2930266532 Hitachi_HDS5C3030ALA630_MJ1321YNG0E2EA
May 11 19:58:37 Tower kernel: md: import disk0: [8,32] (sdc) Hitachi_HDS5C3030ALA630_MJ1321YNG0E2EA size: 2930266532
...
May 11 19:58:37 Tower emhttp: shcmd (15): /usr/local/sbin/emhttp_event driver_loaded
May 11 19:58:37 Tower emhttp_event: driver_loaded
May 11 19:58:42 Tower emhttp: shcmd (16): rmmod md-mod |& logger
May 11 19:58:42 Tower emhttp: shcmd (17): modprobe md-mod super=/boot/config/super.dat slots=6 |& logger
May 11 19:58:42 Tower kernel: md: unRAID driver removed
May 11 19:58:42 Tower emhttp: shcmd (18): udevadm settle
May 11 19:58:42 Tower emhttp: Device inventory:
May 11 19:58:42 Tower emhttp: WDC_WD30EZRX-00MMMB0_WD-WCAWZ1155791 (sda) 2930266584

 

I've noticed it too in my syslogs.  As near as I can tell, it seems to be associated with each refresh of the Webgui while the array is offline.  If I stop doing anything with the Webgui, the syslog messages stop too.
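An easy way to confirm the correlation is to watch the log live while refreshing (assuming the standard syslog location):

tail -f /var/log/syslog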


Finally had a chance to upgrade from rc2-test to rc3, and the buffering thing still exists. There is nothing in the log (no lines at all) during the transfer. b14 still gives consistent write speeds while the RCs give peaks and troughs. They complete in roughly the same time.

 

Will try to isolate whether it's a disk-related buffer or a networking-related buffer.
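One way to separate the two would be to take the network out of the picture and write to the array locally; if the same peaks and troughs show up, the buffer is on the disk side. A sketch (the target path and file name are placeholders; conv=fdatasync forces dd to report the real on-disk rate rather than the cache-absorbed one):

dd if=/dev/zero of=/mnt/disk1/ddtest.bin bs=1M count=2048 conv=fdatasync
rm /mnt/disk1/ddtest.bin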

 

If 'they complete in roughly the same time', are you sure there is a problem?  It really sounds like it is exactly the same transfer, except that one is averaging the speed over a much longer window than the other.  The RC 'peaks and troughs' are probably a more accurate picture of what is happening.

 

Put another way: if both were trying to transfer at maximum speed, but one slowed way down periodically, then we would expect its transfer time to be much longer, but it is not. The alternative scenario is that b14 transfers at a steady but moderate speed, while the RCs transfer in bursts with pauses. In either case, it looks like the destination is the bottleneck and can only receive so much data at a time, whether it gets it in bunches or evenly spaced out.
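A quick back-of-the-envelope check makes the point (numbers invented for illustration): 60 GB moved in 20 minutes averages 50 MB/s whether it arrives as a steady 50 MB/s or as 100 MB/s bursts alternating with equal pauses; only the sampling window of the speed display changes what you see.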

 

I could be wrong, but I don't think there is a serious issue here, just a cosmetic one.


Besides the >2TB drive support and a better UI, which I like, what else is significantly different between the 4.7 final and the 5 RC-3 release?

 

The 5 series is more of a rewrite to better support the future, much of which is underlying and not necessarily visible yet. For more detail, see the Release Notes wiki page.


Besides the >2TB drive support and a better UI, which I like, what else is significantly different between the 4.7 final and the 5 RC-3 release?

 

The 5 series is more of a rewrite to better support the future, much of which is underlying and not necessarily visible yet. For more detail, see the Release Notes wiki page.

 

Thanks. I read through the release notes from 4.7 through the various betas, but because the betas changed so drastically and were all over the place due to difficulties with the controllers and NIC drivers, I just wanted another opinion. I guess I'll wait to update until the final version 5 release.


Will post full logs if needed; I just never noticed this before. Is it normal, when the array is offline, for the syslog to show a load/unload/inventory routine every 5 seconds? After starting the array, all is good.

 

May 11 19:58:36 Tower emhttp: Device inventory:
May 11 19:58:36 Tower emhttp: WDC_WD30EZRX-00MMMB0_WD-WCAWZ1155791 (sda) 2930266584
...
May 11 19:58:42 Tower emhttp: Device inventory:
May 11 19:58:42 Tower emhttp: WDC_WD30EZRX-00MMMB0_WD-WCAWZ1155791 (sda) 2930266584

 

I've noticed it too in my syslogs.  As near as I can tell, it seems to be associated with each refresh of the Webgui while the array is offline.  If I stop doing anything with the Webgui, the syslog messages stop too.

 

I've noticed this as well. If I uninstall SimpleFeatures it doesn't seem to do it.


If 'they complete in roughly the same time', are you sure there is a problem?  It really sounds like it is exactly the same transfer, except that one is averaging the speed over a much longer window than the other.  The RC 'peaks and troughs' are probably a more accurate picture of what is happening.

 

Put another way: if both were trying to transfer at maximum speed, but one slowed way down periodically, then we would expect its transfer time to be much longer, but it is not. The alternative scenario is that b14 transfers at a steady but moderate speed, while the RCs transfer in bursts with pauses. In either case, it looks like the destination is the bottleneck and can only receive so much data at a time, whether it gets it in bunches or evenly spaced out.

 

I could be wrong, but I don't think there is a serious issue here, just a cosmetic one.

 

 

It could be nothing more than some optimization that removed a lot of processing or seek-time overhead. Or, if it's caused by a huge buffer in the networking layer, then it's not much of a problem.

 

If it's in the disk I/O system and a buffer is causing the issue, it would mean that at any point in time there could be significant differences between what is on the data disk and what is on the parity disk. If the process comes unstuck while there is an imbalance (power failure, segfault, disk failure, etc.), that raises the likelihood of data corruption.

 

I raised the question because the behavior is quite different between the releases, and I know that excessive buffering can cause issues.
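If the suspect is ordinary kernel write-back caching (an assumption on my part, not something confirmed for these releases), the amount of dirty data in flight is easy to watch:

grep -E 'Dirty|Writeback' /proc/meminfo             # written-but-unflushed data right now
sysctl vm.dirty_background_ratio vm.dirty_ratio     # the knobs that bound it, as % of RAM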


Upgraded from 5B14 to 5RC3; all is well except for repeated messages in the syslog:

 

 

May 13 07:50:57 Tower kernel: scsi_verify_blk_ioctl: 40 callbacks suppressed (Drive related)
May 13 07:50:57 Tower kernel: hdparm: sending ioctl 2285 to a partition!
May 13 07:50:57 Tower last message repeated 3 times
May 13 07:51:01 Tower crond[1222]: failed parsing crontab for user root: cron=""  (Minor Issues)
May 13 07:51:14 Tower kernel: hdparm: sending ioctl 2285 to a partition!
May 13 07:51:14 Tower kernel: hdparm: sending ioctl 2285 to a partition!
May 13 07:51:14 Tower kernel: smartctl: sending ioctl 2285 to a partition!
May 13 07:51:14 Tower last message repeated 7 times
May 13 07:51:45 Tower kernel: scsi_verify_blk_ioctl: 40 callbacks suppressed (Drive related)
May 13 07:51:45 Tower kernel: hdparm: sending ioctl 2285 to a partition!
May 13 07:52:01 Tower last message repeated 5 times
May 13 07:52:01 Tower kernel: smartctl: sending ioctl 2285 to a partition!
May 13 07:52:01 Tower last message repeated 7 times
May 13 07:52:32 Tower kernel: scsi_verify_blk_ioctl: 40 callbacks suppressed (Drive related)
May 13 07:52:32 Tower kernel: hdparm: sending ioctl 2285 to a partition!
May 13 07:52:46 Tower last message repeated 5 times
May 13 07:52:46 Tower kernel: smartctl: sending ioctl 2285 to a partition!
May 13 07:52:47 Tower last message repeated 7 times
May 13 07:53:17 Tower kernel: scsi_verify_blk_ioctl: 40 callbacks suppressed (Drive related)
May 13 07:53:17 Tower kernel: hdparm: sending ioctl 2285 to a partition!
May 13 07:53:32 Tower last message repeated 5 times
May 13 07:53:32 Tower kernel: smartctl: sending ioctl 2285 to a partition!
May 13 07:53:32 Tower last message repeated 7 times
May 13 07:54:02 Tower kernel: scsi_verify_blk_ioctl: 40 callbacks suppressed (Drive related)
May 13 07:54:02 Tower kernel: hdparm: sending ioctl 2285 to a partition!
May 13 07:54:16 Tower last message repeated 5 times
May 13 07:54:16 Tower kernel: smartctl: sending ioctl 2285 to a partition!
May 13 07:54:16 Tower last message repeated 7 times


Upgraded from 5B14 to 5RC3; all is well except for repeated messages in the syslog:

 

 

May 13 07:50:57 Tower kernel: scsi_verify_blk_ioctl: 40 callbacks suppressed (Drive related)
May 13 07:50:57 Tower kernel: hdparm: sending ioctl 2285 to a partition!
May 13 07:50:57 Tower last message repeated 3 times

 

Disable your additional add-ons or upgrade them to their latest versions. I think this has been tracked back to a misbehaving add-on: SimpleFeatures' unraid_notify is the culprit.
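(For context, the kernel logs that warning when a tool sends a drive-level ioctl to a partition device rather than the whole disk. If you want to check whether an add-on is doing this, the difference looks like the following; /dev/sda is just an example device:)

hdparm -C /dev/sda      # fine: whole-disk device
smartctl -A /dev/sda    # fine
hdparm -C /dev/sda1     # triggers "sending ioctl 2285 to a partition!"
smartctl -A /dev/sda1   # likewise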


Hello;

Just upgraded my aging server from 4.7 to RC3. Clean install; I moved over just the config files. Everything came up without any problem, all my antique hardware was recognized, and the array started.

I've run the permissions script, re-added the users, and re-assigned them to the user shares with the appropriate permissions.

 

I notice that all the disks are exported along with the user shares.  I don't see any way to prevent sharing the disks (1...n) and share only the media folders. Can someone clarify?

(In the previous version each of the sharing options was more granular and placed in the same menu/screen: flash, disk (SMB), disk (NFS), user shares. Now it is not that obvious. Did I miss it somehow?)

 

Thank you

