unRAID Server Release 5.0-beta6 Available



This has gone quiet for two days. That's very, very unusual!!!

 

Is there a beta7 coming out soon?

 

Fixing something like this that only appears to happen on certain hardware can be incredibly time consuming and frustrating.

 

 

It is kind of like having to support IE6 for new web development (i.e. time consuming and frustrating).

Link to comment

What's kind of horrible about this one is that once the issue occurs on a setup, the issue itself, and even the steps taken to alleviate it (reconstructing the MBR on sector 63), prevent you from reproducing it.

 

What needs to be done, when the MBR is identified as not meeting unRAID standards, is to save a raw dump of the original MBR so you can see what doesn't match before proceeding. Only once you have some evidence of the before picture can you understand what has occurred.
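A minimal sketch of what that "save the raw dump first" step could look like, using dd. (A 1 MiB file stands in for the real /dev/sdX here so the commands are safe to run anywhere; on a live system you would point DISK at the affected device.)

```shell
# Stand-in for the real device (/dev/sdX): a zeroed 1 MiB image.
DISK=./fake-disk.img
dd if=/dev/zero of="$DISK" bs=1M count=1 2>/dev/null

# Save the raw 512-byte MBR (sector 0) BEFORE any repair is attempted.
dd if="$DISK" of=mbr_backup.bin bs=512 count=1 2>/dev/null

# Inspect the evidence: partition entries start at byte offset 446, and a
# valid MBR ends with the 0x55 0xAA signature at bytes 510-511.
od -A d -t x1 mbr_backup.bin | tail -n 3
```

With the backup in hand, the before and after MBRs can be compared byte by byte (e.g. cmp -l mbr_backup.bin mbr_after.bin) to see exactly what the repair changed.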

 

Link to comment

Has anyone added a pre-cleared disk to a 5.0b6 array?  I just added one and instead of the usual formatting process, it went straight to clearing again.  I am very sure that the disk was pre-cleared as I label them all right after I finish the 3 passes via Joe L's script.

 

Regards,  Peter

Link to comment

Before v5, I was using v4.7.  In that version I seem to remember there was a text box where I could enter the security settings for sharing via SMB over my network.  Now the settings are via dropdown boxes.  I'd think this should work just fine, but I can't seem to get my unRAID shares visible to my TIVX, though I can browse all my shares on my Windows computers.  Now that I'm running v5, is there something new I need to do to make my unRAID shares visible on the network?

 

...

 

Ok, I now realize that I wasn't using SMB to access my movies from the TIVX, but rather NFS.  I figured out how to add NFS rules in the v5 interface, and I added the rule *(ro).
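For context, a GUI rule of *(ro) should correspond to an /etc/exports entry along these lines (the share path here is just an example, not necessarily your share name):

```shell
# Hypothetical /etc/exports line for a share exported read-only to everyone:
#   /mnt/user/Movies *(ro)
#
# After changing rules, these re-export and show what the server actually
# advertises -- useful for checking whether the TIVX should be able to see it:
#   exportfs -ra
#   showmount -e <server-ip>
```

If showmount lists the share but the player still shows nothing, the problem is more likely on the client's mount options than in the export rule.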

 

The problem is... I can't remember if this is all I had for an NFS rule before I installed unraid v5.  So, I don't know if my issue is related to running v5 or if it is because I did not restore my NFS rules properly.

 

Now, my TIVX does not give a network error any longer but it doesn't display any files or directories either. 

 

Any suggestions?

Link to comment

Has anyone added a pre-cleared disk to a 5.0b6 array?  I just added one and instead of the usual formatting process, it went straight to clearing again.  I am very sure that the disk was pre-cleared as I label them all right after I finish the 3 passes via Joe L's script.

 

Regards,  Peter

 

This is fixed in the next beta.

Link to comment

Without wishing to lull anyone into a false sense of security (because, undoubtedly, there is a serious issue in b6 which appears to affect some, but not all, users), I can say that I have been running b6 for 5 days. It has been serving files to my two media players (probably for 12 hours each day) and has had new files added without problem.

 

As I understand it, the fault only presents itself on the first start with b6 installed (possibly because, at this stage, all the drives are unassigned).  I also understand that, as long as a prescribed set of instructions is followed, the data on the disk can be recovered.  It also seems that, once repaired, the problem cannot be recreated.

 

I'm sure that much effort is being put into tracking down the precise circumstances in which this problem occurs, together with a lot of head-scratching.  I'm guessing that there is a fair amount of private correspondence between Tom and those who have been affected.  The shortage of news on progress suggests that the precise cause has still not been identified.

 

Now, it seems to me that it must be something in the disk's history which determines whether it is going to be affected.  Hopefully, most, if not all, who have installed b6 have some technical skill and keep some record or memory of their disk history.

 

Is there anything to be gained from opening up the dialogue to all who have installed b6, with a view to collating histories?  In particular, interest must centre around the method of clearing and formatting the disk (dates and program versions used), and any less-than-usual circumstances, such as HPA-related activity.

Link to comment

I'll copy this post over from the other thread to include my drives exact history.

 

All my drives were purchased brand new and ran through 3 cycles of PRECLEAR. I did have 1 RMA drive, which was run through 5 cycles of PRECLEAR (one of the Seagate 5900rpm).

 

The WD EADS drives (sector 63 aligned) were not new to being in an unRAID array, and neither were the Seagate 5900rpm drives (sector 64 aligned). The WD drives have only ever been used on a Jetway motherboard and the widely used MSI H55 motherboard, while the Seagates have only ever been connected to the IBM BR10i LSI1068E-based controller; the other drives were purely on motherboard SATA ports. None of the drives has ever been connected to anything Gigabyte, and the logs never showed HPA or anything odd.

 

I never did any manipulation of the drives. The WDs have been running for 19 months or so in the unRAID 4.5 betas, then unRAID 4.5, then unRAID 5.0 betas 1 and 2, then 5.0 beta 3, then beta 4, then beta 5b (skipped beta 5a), then beta 6.

 

The Seagates have been running in unRAID 5.0 beta 4, then beta 5b (skipped beta 5a), then beta 6.

 

I never did any manipulation of the drives other than to fix the MBR issue that unRAID 5.0 beta 6 caused.

Link to comment

When I build beta6 from the source, I get a segfault when it starts.

 

strace:

 

open("/proc/mdcmd", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
time(NULL)                              = 1299437756
send(4, "<11>Mar  6 13:55:56 emhttp: read_"..., 85, MSG_NOSIGNAL) = 85
time(NULL)                              = 1299437756
send(4, "<14>Mar  6 13:55:56 emhttp: diskI"..., 46, MSG_NOSIGNAL) = 46
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
+++ killed by SIGSEGV +++

 

I'm using the same system that I was able to compile and run beta4 on with no problems.

 

Any suggestions?

 

Link to comment
  • 1 month later...

When I build beta6 from the source, I get a segfault when it starts.

 

strace:

 

open("/proc/mdcmd", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
time(NULL)                              = 1299437756
send(4, "<11>Mar  6 13:55:56 emhttp: read_"..., 85, MSG_NOSIGNAL) = 85
time(NULL)                              = 1299437756
send(4, "<14>Mar  6 13:55:56 emhttp: diskI"..., 46, MSG_NOSIGNAL) = 46
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
+++ killed by SIGSEGV +++

 

I'm using the same system that I was able to compile and run beta4 on with no problems.

 

Any suggestions?

 

 

I have the same issue on 6a. Same output on strace. Ever fix this Bubba?

Link to comment

Did you include the unRAID kernel drivers in your source tree? Did you specify them as kernel modules?

 

I copied the kernel drivers from these instructions. Then in make menuconfig, I have enabled and *'d the following:

 

File Systems ---> (*)Ext2, (*)Ext3 and (*)ReiserFS support

File Systems ---> DOS/FAT/NT Filesystems ---> (*) NTFS file system Support

Device Drivers ---> (*) Serial ATA and Parallel ATA Drivers ---> (*) AHCI Support

Device Drivers ---> SCSI device support --> (*) SCSI disk support

Device Drivers ---> SCSI device support --> (*) SCSI CDROM support

Device Drivers ---> (*) Multi-device support (unRAID) ---> (*) RAID support

Plus the remaining defaults.
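For reference, those menuconfig choices map roughly to the .config symbols below (option names as in mainline kernels of that era; the "Multi-device support (unRAID)" entry is Tom's patched md driver, so treat this as a sketch rather than an authoritative list):

```shell
# Expected built-in (=y) settings in /usr/src/linux/.config:
#   CONFIG_EXT2_FS=y
#   CONFIG_EXT3_FS=y
#   CONFIG_REISERFS_FS=y
#   CONFIG_NTFS_FS=y
#   CONFIG_SATA_AHCI=y
#   CONFIG_BLK_DEV_SD=y      # SCSI disk support
#   CONFIG_BLK_DEV_SR=y      # SCSI CDROM support
#   CONFIG_BLK_DEV_MD=y      # md/RAID support -- '*' (built in), not 'M'
#
# A quick way to verify, run from the kernel source tree:
#   grep -E '^CONFIG_(EXT2_FS|EXT3_FS|REISERFS_FS|NTFS_FS|SATA_AHCI|BLK_DEV_SD|BLK_DEV_SR|BLK_DEV_MD)=' .config
```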

 

 

The kernel boots fine, but emhttp still segfaults. strace shows the same issue of /proc/mdcmd not being found. I tried running /root/mdcmd status (which normally spits out a bunch of variables about the status of your drives), and it just says the same thing: /proc/mdcmd not found.

 

Give me a few minutes and I will post my build logs.
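As an aside, /proc/mdcmd is the control node the unRAID md driver creates when it initializes, so ENOENT there means the driver never registered even though the kernel booted. A quick sanity check (a sketch; the echo strings are my own, not unRAID output):

```shell
# If the unRAID md driver registered, its /proc control node exists and
# emhttp's open("/proc/mdcmd", ...) would succeed instead of ENOENT.
if [ -e /proc/mdcmd ]; then
    echo "md driver registered"
else
    echo "md driver missing"
fi
```

If it reports missing on a booted build, the driver either wasn't compiled in or failed during init (dmesg output from early boot would be the next place to look).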

Link to comment

Hmm, there may be a build error I have been overlooking...

 

  CHK     include/generated/compile.h
  UPD     include/generated/compile.h
  CC      init/version.o
  LD      init/built-in.o
  LD      .tmp_vmlinux1
init/built-in.o: In function `md_setup_drive':
do_mounts_md.c:(.init.text+0xfc7): undefined reference to `mdp_major'
fs/built-in.o: In function `rescan_partitions':
(.text+0x3849e): undefined reference to `md_autodetect_dev'
make: *** [.tmp_vmlinux1] Error 1
root@tower:/usr/src/linux# 

 

Any ideas?
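For anyone who hits the same link failure: mdp_major and md_autodetect_dev are defined in drivers/md/md.c, so undefined references from init/do_mounts_md.c and rescan_partitions mean the kernel was configured expecting md built in, but no md object actually supplied the symbols. That typically happens when the unRAID replacement md.c never made it into drivers/md/, or when md ended up as a module (=m) instead of built in. A hedged check, sketched against a sample .config (in a real tree you would run the grep on /usr/src/linux/.config and look for md.o):

```shell
# Sample .config standing in for /usr/src/linux/.config so the commands run
# anywhere; on a real tree, grep the tree's own .config instead.
printf 'CONFIG_BLK_DEV_MD=y\nCONFIG_MD_AUTODETECT=y\n' > sample.config

# 1) md must be built in (=y, shown as '*' in menuconfig), not a module (=m):
grep -E '^CONFIG_(BLK_DEV_MD|MD_AUTODETECT)=' sample.config

# 2) and the (patched) driver must actually have been compiled:
#      ls -l drivers/md/md.o
# If md.o is missing after a build, the unRAID md.c was never copied in.
```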

Link to comment

I took time out today and updated from 4.5.6 -> 4.7 -> 5.0b6.

 

Everything went smoothly with a clean log and all devices detected as expected.

 

The only thing I couldn't figure out is how to prevent all the disks from being shared. I remember doing this way back when I set up 4.5.6 but have since forgotten - surely a testament to unRAID's stability as well as my forgetfulness! :-) Found it on the drives page, which is logical, I guess.

 

 

Link to comment

Has anyone added a pre-cleared disk to a 5.0b6 array?  I just added one and instead of the usual formatting process, it went straight to clearing again.  I am very sure that the disk was pre-cleared as I label them all right after I finish the 3 passes via Joe L's script.

 

Regards,  Peter

 

I've just had the same with 1x 2TB parity and 9x 1.5TB drives. Very annoying, as I'm now re-preclearing them in an attempt to try again.

 

 

Link to comment
