unRAID Server Release 5.0-beta13 Available


limetech


The `/root/mkmbr` that's included in 5.0-beta13 doesn't work with 3TB drives, right?

Will it be updated for 3TB?  Or maybe a similar tool that does the same job for 3TB drives?

Right, I forgot about that utility.  It was created to help deal with "4K-aligned" hard drives during that code transition period.  Why do you still want to use it?


Also included is network bonding support.  So far I have only enabled it in the kernel and included the 'ifenslave' command.  GUI support is still in the works, but some users have been able to get it working.  Here's an overview:

http://www.sgvulcan.com/network-interface-bonding-in-slackware-version-13-1/
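For the curious, the manual setup described in that overview boils down to a few commands. This is only a sketch: the bond mode, IP address, and NIC names below are illustrative assumptions, not unRAID defaults.

```shell
# Hypothetical active-backup bond of two NICs, as on Slackware 13.x
modprobe bonding mode=active-backup miimon=100          # load the bonding driver
ifconfig bond0 192.168.1.100 netmask 255.255.255.0 up   # bring up the bond interface
ifenslave bond0 eth0 eth1                               # enslave the physical NICs
```

Until GUI support lands, commands like these would have to go in the go script or be run by hand after each boot.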

Does this allow me to combine server 1 that has the Share "Movies" with a second server that also has the share "Movies" to appear as a combined share \\SomeServer\Movies\?

No, but what do you think about that feature (of combining shares from separate servers like that)?

 

I'd also like to see this. While I don't have two servers currently, I see myself with a 2nd build in the future.


No, but what do you think about that feature (of combining shares from separate servers like that)?

 

I would love to see this... 3TB drives still cost too much more than 2TB drives for me, and once my server fills up this is exactly the type of solution I'd like to see...

 

Plus it would be cool to have a "mini-server" that was small enough to take out of the house, but joined the server at home, to keep the most vital data on...

 

Anyways, happy to see movement on 5.0, thanks limetech!

Joining them on the local network would be cool and feasible... doing it over an Internet connection from outside the local network... that would be asking for a support nightmare.


Works like a champ on my bench box.... including r8169.

 

Samba throughput (large file copy) is improved 15 to 20%.

 

Graph is a 2.5GB file copy to an unprotected disk (no parity).... a solid 480 Mbps.

 

Looks great. Do you have some tweaks in your smb-extra file? If so, could you share those settings?

 

FYI: I get about 25MB/sec from Windows 7 to unRAID (with the parity disk on).


Possible issue or possible coincidence..

 

A drive redballed on me within 30 minutes of running beta13. It came up doing a parity check on first boot, so I let it run.

<snip>

The Power-Off_Retract_Count bothers me, but it is not fatal. It just points to a possible backplane connection issue.

Power-off retracts occur when the drive retracts the heads as an emergency measure because power was unexpectedly lost.  If the drive was being written to when it lost power, it would be red-balled.

 

I think it is, as you said, a back-plane or wiring issue, not a beta13 issue.
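For anyone who wants to keep an eye on that attribute themselves, something like this works (a sketch: /dev/sdX is a placeholder for your actual device, and it assumes smartmontools is installed):

```shell
# Print the raw Power-Off_Retract_Count (SMART attribute 192) for a drive.
# /dev/sdX is a placeholder -- substitute your real device node.
smartctl -A /dev/sdX | awk '$1 == 192 { print $NF }'
```

If that number climbs in lockstep with Load_Cycle_Count, the drive may simply be counting normal head parks on spin-down rather than actual emergency power losses.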


I upgraded to b13 and am now having severe problems. I never had any issues with the earlier betas. Please find my most recent syslog enclosed. After rebooting, I noticed a DISK NOT VALID error on the parity disk. All slots, disk numbers and locations were correct. I could log in to the GUI and saw the notice "new parity disk installed", though I did not install anything new. It then started to rebuild parity. Now I cannot log in to the GUI page or to unMenu. The only connection I have is via the shell. What should I do now? Force a reboot, or just wait a few hours and see if it can rebuild parity? Any advice would be very much appreciated, since my Unix knowledge is next to nil.

 

ps: went back to v12a. Now parity is rebuilding (1%); however, the parity disk is red-balled with "MBR-unaligned". Disk1 appears to be unformatted. It was functioning before the upgrade attempt. What should I do now? :(

syslog.txt


My 6 drives are connected to the onboard LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03). Within 30-45 minutes of running unRAID 5.0b13 I was getting a ton of errors similar to what I see in aht961's syslog, and had a drive or two redball on me.

 

Reverting back to the Linux 3.0.3 kernel, while keeping the 5.0b13 emhttp/shfs, Samba 3.6.1 and the prior drive config, worked like a charm to get the array back online. The smartctl and reiserfs checks show everything to be perfectly fine.

 

From what I can tell, there are some nasty issues with certain hardware that need to be resolved in the Linux 3.1.0 kernel.

 

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
 1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
 3 Spin_Up_Time            0x0027   166   146   021    Pre-fail  Always       -       8658
 4 Start_Stop_Count        0x0032   099   099   000    Old_age   Always       -       1382
 5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
 7 Seek_Error_Rate         0x002e   100   253   000    Old_age   Always       -       0
 9 Power_On_Hours          0x0032   073   073   000    Old_age   Always       -       19747
10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       100
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       15
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       1340
194 Temperature_Celsius     0x0022   122   110   000    Old_age   Always       -       30
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%     19739         -

 

Here's a quick blurb from the 4+ MB log file from when all hell started breaking loose:

 

Oct 28 22:29:27 reaver kernel: sd 6:0:2:0: [sdf] Device not ready

Oct 28 22:29:27 reaver kernel: sd 6:0:2:0: [sdf]  Result: hostbyte=0x00 driverbyte=0x08

Oct 28 22:29:27 reaver kernel: sd 6:0:2:0: [sdf]  Sense Key : 0x2 [current]

Oct 28 22:29:27 reaver kernel: sd 6:0:2:0: [sdf]  ASC=0x4 ASCQ=0x2

Oct 28 22:29:27 reaver kernel: sd 6:0:2:0: [sdf] CDB: cdb[0]=0x28: 28 00 dc a9 ba 37 00 00 08 00

Oct 28 22:29:27 reaver kernel: sd 6:0:4:0: [sdh] Device not ready

Oct 28 22:29:27 reaver kernel: sd 6:0:4:0: [sdh]  Result: hostbyte=0x00 driverbyte=0x08

Oct 28 22:29:27 reaver kernel: sd 6:0:4:0: [sdh]  Sense Key : 0x2 [current]

Oct 28 22:29:27 reaver kernel: sd 6:0:4:0: [sdh]  ASC=0x4 ASCQ=0x2

Oct 28 22:29:27 reaver kernel: sd 6:0:4:0: [sdh] CDB: cdb[0]=0x28: 28 00 dc a9 ba 38 00 00 08 00

Oct 28 22:29:27 reaver kernel: sd 6:0:5:0: [sdi] Device not ready

Oct 28 22:29:27 reaver kernel: sd 6:0:5:0: [sdi]  Result: hostbyte=0x00 driverbyte=0x08

Oct 28 22:29:27 reaver kernel: sd 6:0:5:0: [sdi]  Sense Key : 0x2 [current]

Oct 28 22:29:27 reaver kernel: sd 6:0:5:0: [sdi]  ASC=0x4 ASCQ=0x2

Oct 28 22:29:27 reaver kernel: sd 6:0:5:0: [sdi] CDB: cdb[0]=0x28: 28 00 dc a9 ba 37 00 00 08 00

Oct 28 22:29:27 reaver kernel: sd 6:0:1:0: [sde] Device not ready

Oct 28 22:29:27 reaver kernel: sd 6:0:1:0: [sde]  Result: hostbyte=0x00 driverbyte=0x08

Oct 28 22:29:27 reaver kernel: sd 6:0:1:0: [sde]  Sense Key : 0x2 [current]

Oct 28 22:29:27 reaver kernel: sd 6:0:1:0: [sde]  ASC=0x4 ASCQ=0x2

Oct 28 22:29:27 reaver kernel: sd 6:0:1:0: [sde] CDB: cdb[0]=0x28: 28 00 dc a9 ba 38 00 00 08 00

Oct 28 22:29:27 reaver kernel: sd 6:0:3:0: [sdg] Device not ready

Oct 28 22:29:27 reaver kernel: sd 6:0:3:0: [sdg]  Result: hostbyte=0x00 driverbyte=0x08

Oct 28 22:29:27 reaver kernel: sd 6:0:3:0: [sdg]  Sense Key : 0x2 [current]

Oct 28 22:29:27 reaver kernel: sd 6:0:3:0: [sdg]  ASC=0x4 ASCQ=0x2

Oct 28 22:29:27 reaver kernel: sd 6:0:3:0: [sdg] CDB: cdb[0]=0x28: 28 00 dc a9 ba 38 00 00 08 00

Oct 28 22:29:27 reaver kernel: REISERFS (device md2): Remounting filesystem read-only

Oct 28 22:29:27 reaver kernel: sd 6:0:2:0: [sdf] Device not ready

Oct 28 22:29:27 reaver kernel: sd 6:0:2:0: [sdf]  Result: hostbyte=0x00 driverbyte=0x08

Oct 28 22:29:27 reaver kernel: sd 6:0:2:0: [sdf]  Sense Key : 0x2 [current]

Oct 28 22:29:27 reaver kernel: sd 6:0:2:0: [sdf]  ASC=0x4 ASCQ=0x2

Oct 28 22:29:27 reaver kernel: sd 6:0:2:0: [sdf] CDB: cdb[0]=0x28: 28 00 32 2f 77 ff 00 00 08 00

Oct 28 22:29:27 reaver kernel: sd 6:0:4:0: [sdh] Device not ready

Oct 28 22:29:27 reaver kernel: sd 6:0:4:0: [sdh]  Result: hostbyte=0x00 driverbyte=0x08

Oct 28 22:29:27 reaver kernel: sd 6:0:4:0: [sdh]  Sense Key : 0x2 [current]

Oct 28 22:29:27 reaver kernel: sd 6:0:4:0: [sdh]  ASC=0x4 ASCQ=0x2

Oct 28 22:29:27 reaver kernel: sd 6:0:4:0: [sdh] CDB: cdb[0]=0x28: 28 00 32 2f 78 00 00 00 08 00

Oct 28 22:29:27 reaver kernel: sd 6:0:5:0: [sdi] Device not ready

Oct 28 22:29:27 reaver kernel: sd 6:0:5:0: [sdi]  Result: hostbyte=0x00 driverbyte=0x08

Oct 28 22:29:27 reaver kernel: sd 6:0:5:0: [sdi]  Sense Key : 0x2 [current]

Oct 28 22:29:27 reaver kernel: sd 6:0:5:0: [sdi]  ASC=0x4 ASCQ=0x2

Oct 28 22:29:27 reaver kernel: sd 6:0:5:0: [sdi] CDB: cdb[0]=0x28: 28 00 32 2f 77 ff 00 00 08 00

Oct 28 22:29:27 reaver kernel: sd 6:0:1:0: [sde] Device not ready

Oct 28 22:29:27 reaver kernel: sd 6:0:1:0: [sde]  Result: hostbyte=0x00 driverbyte=0x08

Oct 28 22:29:27 reaver kernel: sd 6:0:1:0: [sde]  Sense Key : 0x2 [current]

Oct 28 22:29:27 reaver kernel: sd 6:0:1:0: [sde]  ASC=0x4 ASCQ=0x2

Oct 28 22:29:27 reaver kernel: sd 6:0:1:0: [sde] CDB: cdb[0]=0x28: 28 00 32 2f 78 00 00 00 08 00

Oct 28 22:29:27 reaver kernel: sd 6:0:3:0: [sdg] Device not ready

Oct 28 22:29:27 reaver kernel: sd 6:0:3:0: [sdg]  Result: hostbyte=0x00 driverbyte=0x08

Oct 28 22:29:27 reaver kernel: sd 6:0:3:0: [sdg]  Sense Key : 0x2 [current]

Oct 28 22:29:27 reaver kernel: sd 6:0:3:0: [sdg]  ASC=0x4 ASCQ=0x2

Oct 28 22:29:27 reaver kernel: sd 6:0:3:0: [sdg] CDB: cdb[0]=0x28: 28 00 32 2f 78 00 00 00 08 00

Oct 28 22:29:27 reaver kernel: end_request: I/O error, dev sdf, sector 3702110775

Oct 28 22:29:27 reaver kernel: md: disk2 read error

Oct 28 22:29:27 reaver kernel: handle_stripe read error: 3702110712/2, count: 1

Oct 28 22:29:27 reaver kernel: end_request: I/O error, dev sdh, sector 3702110776

Oct 28 22:29:27 reaver kernel: end_request: I/O error, dev sdi, sector 3702110775

Oct 28 22:29:27 reaver kernel: end_request: I/O error, dev sde, sector 3702110776

Oct 28 22:29:27 reaver kernel: end_request: I/O error, dev sdg, sector 3702110776

Oct 28 22:29:27 reaver kernel: md: disk0 read error

Oct 28 22:29:27 reaver kernel: handle_stripe read error: 3702110712/0, count: 1

Oct 28 22:29:27 reaver kernel: md: disk1 read error

Oct 28 22:29:27 reaver kernel: handle_stripe read error: 3702110712/1, count: 1

Oct 28 22:29:27 reaver kernel: md: disk4 read error

Oct 28 22:29:27 reaver kernel: handle_stripe read error: 3702110712/4, count: 1

Oct 28 22:29:27 reaver kernel: md: disk5 read error

Oct 28 22:29:27 reaver kernel: handle_stripe read error: 3702110712/5, count: 1

Oct 28 22:29:27 reaver kernel: REISERFS warning: reiserfs-5090 is_tree_node: node level 0 does not match to the expected one 2

Oct 28 22:29:27 reaver kernel: REISERFS error (device md2): vs-5150 search_by_key: invalid format found in block 462763839. Fsck?

Oct 28 22:29:27 reaver kernel: REISERFS error (device md2): vs-13070 reiserfs_read_locked_inode: i/o failure occurred trying to find stat data of [1087 1127 0x0 SD]

Oct 28 22:29:27 reaver kernel: end_request: I/O error, dev sdf, sector 841971711

Oct 28 22:29:27 reaver kernel: md: disk2 read error

Oct 28 22:29:27 reaver kernel: handle_stripe read error: 841971648/2, count: 1

Oct 28 22:29:27 reaver kernel: end_request: I/O error, dev sdh, sector 841971712

Oct 28 22:29:27 reaver kernel: end_request: I/O error, dev sdi, sector 841971711

Oct 28 22:29:27 reaver kernel: end_request: I/O error, dev sde, sector 841971712

Oct 28 22:29:27 reaver kernel: end_request: I/O error, dev sdg, sector 841971712

Oct 28 22:29:27 reaver kernel: md: disk0 read error

Oct 28 22:29:27 reaver kernel: handle_stripe read error: 841971648/0, count: 1

Oct 28 22:29:27 reaver kernel: md: disk1 read error

Oct 28 22:29:27 reaver kernel: handle_stripe read error: 841971648/1, count: 1

Oct 28 22:29:27 reaver kernel: md: disk4 read error

 


ps: went back to v12a. ....What should I do now? :(

 

I did a clean reinstall of my entire flash boot disk from my v12a backup and rebooted via the shell. After that, it was up and running. All data and disks are OK, with no disk (unaligned or unformatted) errors. All shares (AFP and SMB) are online and functioning :) The system is now rebuilding parity with no sync errors.

 

This was a very good lesson for me. I should not have tried a beta as soon as it became available. At least with my 1.5-year-old hardware, v13 had severe issues.

 

// edited to fix typos


Getting 25MB/s parity speeds when I got 65-70MB/s on beta 12a.

 

Anyone else? I'd post a syslog but there are no errors.

 

EDIT: Confirmed going back to beta 12a fixed this.

 

CPU: Intel E5200

Motherboard: Supermicro MBD-X7SBE

Bios: 1.2a

Memory: 4GB G.Skill (DDR2-800 5-5-5-15)

Data Drives: 14x 2TB Western Digital Green (WD20EARS), 6x 3TB Western Digital Green (WD30EZRX)

Parity Drive: 1x 3TB Western Digital Green (WD30EZRX)

Cache Drive: 750GB WD SE16

Sata Cards: 3x SUPERMICRO AOC-SAT2-MV8

Power Supply: 850W Corsair HX850W

Case: Norco 4224

Do you have a Realtek NIC?  Please boot -beta13 and log in via console.  Then copy the system log to the flash using this command, then post here:

 

cp /var/log/syslog /boot/syslog.txt

 

OK, I tried again today and it worked! The server booted up and I can access it in my browser now. Phew!!! Now to see if I still have the power-down issue.


I copied the files over to my stick and rebooted, and nada! I could not connect to the server. Tried several times and still got the same result. Copied beta 11 back and I can connect just fine!

Any ideas?

Do you have a Realtek NIC?  Please boot -beta13 and log in via console.  Then copy the system log to the flash using this command, then post here:

 

cp /var/log/syslog /boot/syslog.txt

 

Here's the system log you asked for.

system_log_Oct_29_2011.txt


Possible issue or possible coincidence..

 

A drive redballed on me within 30 minutes of running beta13. It came up doing a parity check on first boot, so I let it run.

<snip>

The Power-Off_Retract_Count bothers me, but it is not fatal. It just points to a possible backplane connection issue.

Power-off retracts occur when the drive retracts the heads as an emergency measure because power was unexpectedly lost.  If the drive was being written to when it lost power, it would be red-balled.

 

I think it is, as you said, a back-plane or wiring issue, not a beta13 issue.

 

On all of my Hitachi drives in unRAID, the Power-Off_Retract_Count and the Load_Cycle_Count match. I think there is a connection there.

I tore the server apart; no mechanical errors could be found.

I swapped M1015s, backplanes, SAS cables and disk5. I also added power connectors to the second row of power headers just to be safe.

 

I am 92% through the data rebuild with what looks to me to be a clean syslog.

 

After it is done, I am going to expand the array with some more 3TB drives. I'll make sure to put one on the backplane port of the drive that redballed.

 

Also, as an experiment, I have one M1015 with P10 firmware and one with P11, since someone was wondering about seg faults in the newer betas.


My 6 drives are connected to the onboard LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03). Within 30-45 minutes of running unRAID 5.0b13 I was getting a ton of errors similar to what I see in aht961's syslog, and had a drive or two redball on me.

I had similar errors in this post on the previous page. http://lime-technology.com/forum/index.php?topic=16125.msg149267#msg149267

There is a syslog there for you to look at; see if they look similar.

The drives having issues are on M1015s with P10 firmware.

I am not sure of the cause yet. My drive looked fine, as did the hardware. I replaced the redballed drive and put the suspect drive into a 12a server for further investigation.

The rebuild is completely clean so far at 92%.


In b12a I was getting 80MB/s transfer speeds and 90-125MB/s parity check speeds for the first 3-4 weeks, then it started crashing. I then had to go back to b11, speeds dropped to 8-12MB/s, and there is absolutely NO improvement with b13. Still crazy slow!

I thought my original unraid was slow at 25-40MB/s. Now I am starting to think it isn't so bad after all.

I sure hope this gets worked out soon; I really liked the 80MB/s transfers. I am copying a 165.38GB file (yes, that is correct, it is a complete drive backup that I have to take to a client's to restore his PC) from my unRAID server to an external hard drive, and in about 2 hours I have not seen it go over 8.8MB/s.

 

Another thing I noticed that had never happened before.

I wanted to stop the array to remove the cache drive, and it took more than 24 minutes to stop.

 

Getting 25MB/s parity speeds when I got 65-70MB/s on beta 12a.

 

Anyone else? I'd post a syslog but there are no errors.

 

EDIT: Confirmed going back to beta 12a fixed this.

 

CPU: Intel E5200

Motherboard: Supermicro MBD-X7SBE

Bios: 1.2a

Memory: 4GB G.Skill (DDR2-800 5-5-5-15)

Data Drives: 14x 2TB Western Digital Green (WD20EARS), 6x 3TB Western Digital Green (WD30EZRX)

Parity Drive: 1x 3TB Western Digital Green (WD30EZRX)

Cache Drive: 750GB WD SE16

Sata Cards: 3x SUPERMICRO AOC-SAT2-MV8

Power Supply: 850W Corsair HX850W

Case: Norco 4224


Johnm, your error log shows very similar, if not identical, errors to the other two (mine and aht961's) I looked through. It seems like, out of nowhere, the drive cuts out and then produces a metric ton of read errors.

 

From Johnm's log:

 

Oct 28 19:03:14 Goliath kernel: mdcmd (39): spindown 4

Oct 28 19:03:14 Goliath kernel: mdcmd (40): spindown 5

Oct 28 19:03:17 Goliath kernel: sd 1:0:1:0: [sdc] Device not ready

Oct 28 19:03:17 Goliath kernel: sd 1:0:1:0: [sdc]  Result: hostbyte=0x00 driverbyte=0x08

Oct 28 19:03:17 Goliath kernel: sd 1:0:1:0: [sdc]  Sense Key : 0x2 [current]

Oct 28 19:03:17 Goliath kernel: sd 1:0:1:0: [sdc]  ASC=0x4 ASCQ=0x2

Oct 28 19:03:17 Goliath kernel: sd 1:0:1:0: [sdc] CDB: cdb[0]=0x28: 28 00 12 9d af d0 00 01 00 00

Oct 28 19:03:17 Goliath kernel: end_request: I/O error, dev sdc, sector 312324048

Oct 28 19:03:17 Goliath kernel: sd 1:0:1:0: [sdc] Device not ready

Oct 28 19:03:17 Goliath kernel: sd 1:0:1:0: [sdc]  Result: hostbyte=0x00 driverbyte=0x08

Oct 28 19:03:17 Goliath kernel: sd 1:0:1:0: [sdc]  Sense Key : 0x2 [current]

Oct 28 19:03:17 Goliath kernel: sd 1:0:1:0: [sdc]  ASC=0x4 ASCQ=0x2

Oct 28 19:03:17 Goliath kernel: sd 1:0:1:0: [sdc] CDB: cdb[0]=0x28: 28 00 12 9d b0 d0 00 02 00 00

Oct 28 19:03:17 Goliath kernel: end_request: I/O error, dev sdc, sector 312324304

Oct 28 19:03:17 Goliath kernel: sd 1:0:1:0: [sdc] Device not ready

Oct 28 19:03:17 Goliath kernel: sd 1:0:1:0: [sdc]  Result: hostbyte=0x00 driverbyte=0x08

Oct 28 19:03:17 Goliath kernel: sd 1:0:1:0: [sdc]  Sense Key : 0x2 [current]

Oct 28 19:03:17 Goliath kernel: sd 1:0:1:0: [sdc]  ASC=0x4 ASCQ=0x2

Oct 28 19:03:17 Goliath kernel: sd 1:0:1:0: [sdc] CDB: cdb[0]=0x28: 28 00 12 9d b2 d0 00 02 00 00

Oct 28 19:03:17 Goliath kernel: end_request: I/O error, dev sdc, sector 312324816

Oct 28 19:03:17 Goliath kernel: sd 1:0:1:0: [sdc] Device not ready

Oct 28 19:03:17 Goliath kernel: sd 1:0:1:0: [sdc]  Result: hostbyte=0x00 driverbyte=0x08

Oct 28 19:03:17 Goliath kernel: sd 1:0:1:0: [sdc]  Sense Key : 0x2 [current]

Oct 28 19:03:17 Goliath kernel: sd 1:0:1:0: [sdc]  ASC=0x4 ASCQ=0x2

Oct 28 19:03:17 Goliath kernel: sd 1:0:1:0: [sdc] CDB: cdb[0]=0x28: 28 00 12 9d b4 d0 00 02 00 00

Oct 28 19:03:17 Goliath kernel: end_request: I/O error, dev sdc, sector 312325328

Oct 28 19:03:17 Goliath kernel: md: disk5 read error

Oct 28 19:03:17 Goliath kernel: handle_stripe read error: 312323984/5, count: 1

Oct 28 19:03:17 Goliath kernel: md: disk5 read error

Oct 28 19:03:17 Goliath kernel: handle_stripe read error: 312323992/5, count: 1

Oct 28 19:03:17 Goliath kernel: md: disk5 read error

Oct 28 19:03:17 Goliath kernel: handle_stripe read error: 312324000/5, count: 1

Oct 28 19:03:17 Goliath kernel: md: disk5 read error

Oct 28 19:03:17 Goliath kernel: handle_stripe read error: 312324008/5, count: 1

Oct 28 19:03:17 Goliath kernel: md: disk5 read error

Oct 28 19:03:17 Goliath kernel: handle_stripe read error: 312324016/5, count: 1

Oct 28 19:03:17 Goliath kernel: md: disk5 read error

Oct 28 19:03:17 Goliath kernel: handle_stripe read error: 312324024/5, count: 1

Oct 28 19:03:17 Goliath kernel: md: disk5 read error

Oct 28 19:03:17 Goliath kernel: handle_stripe read error: 312324032/5, count: 1

Oct 28 19:03:17 Goliath kernel: md: disk5 read error

Oct 28 19:03:17 Goliath kernel: handle_stripe read error: 312324040/5, count: 1

Oct 28 19:03:17 Goliath kernel: md: disk5 read error

Oct 28 19:03:17 Goliath kernel: handle_stripe read error: 312324048/5, count: 1

Oct 28 19:03:17 Goliath kernel: md: disk5 read error

Oct 28 19:03:17 Goliath kernel: handle_stripe read error: 312324056/5, count: 1

Oct 28 19:03:17 Goliath kernel: md: disk5 read error

Oct 28 19:03:17 Goliath kernel: handle_stripe read error: 312324064/5, count: 1

Oct 28 19:03:17 Goliath kernel: md: disk5 read error

Oct 28 19:03:17 Goliath kernel: handle_stripe read error: 312324072/5, count: 1

Oct 28 19:03:17 Goliath kernel: md: disk5 read error

Oct 28 19:03:17 Goliath kernel: handle_stripe read error: 312324080/5, count: 1

Oct 28 19:03:17 Goliath kernel: md: disk5 read error

Oct 28 19:03:17 Goliath kernel: handle_stripe read error: 312324088/5, count: 1

Oct 28 19:03:17 Goliath kernel: md: disk5 read error

Oct 28 19:03:17 Goliath kernel: handle_stripe read error: 312324096/5, count: 1

Oct 28 19:03:17 Goliath kernel: md: disk5 read error

Oct 28 19:03:17 Goliath kernel: handle_stripe read error: 312324104/5, count: 1

Oct 28 19:03:17 Goliath kernel: md: disk5 read error

Oct 28 19:03:17 Goliath kernel: handle_stripe read error: 312324112/5, count: 1


I had read problems with disk1, which was shown as unformatted. Here is the SMART report for this disk, if it helps. Until upgrading to v13, I had no disk errors. After trying to upgrade, I immediately had 396 disk errors. The disk involved is a WD Caviar Green (WD20EADS), 2TB. The parity disk is the same make. Could it be something related to this particular model of HDD?

smart.txt


Getting 25MB/s parity speeds when I got 65-70MB/s on beta 12a.

 

Anyone else? I'd post a syslog but there are no errors.

 

EDIT: Confirmed going back to beta 12a fixed this.

 

CPU: Intel E5200

Motherboard: Supermicro MBD-X7SBE

Bios: 1.2a

Memory: 4GB G.Skill (DDR2-800 5-5-5-15)

Data Drives: 14x 2TB Western Digital Green (WD20EARS), 6x 3TB Western Digital Green (WD30EZRX)

Parity Drive: 1x 3TB Western Digital Green (WD30EZRX)

Cache Drive: 750GB WD SE16

Sata Cards: 3x SUPERMICRO AOC-SAT2-MV8

Power Supply: 850W Corsair HX850W

Case: Norco 4224

 

Just a quick update on the above issue. Did a USB wipe and a fresh install of beta13. Same issues, no addons, no tweaks. Going back to 12a.


 

 

I found a problem with my network (just need to figure out how to correct it).

In system information it shows that I have 100Mb/s full duplex; it should be 1000Mb/s full duplex.

I was going to attach a syslog, but I am having trouble finding it. (The only one I can find has just 6 lines, and I know that can't be it.)
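A quick way to confirm what the NIC actually negotiated is ethtool (a sketch: eth0 is a placeholder for the interface name, and it assumes ethtool is present on the system):

```shell
# Show the negotiated link speed and duplex for an interface (eth0 is a placeholder)
ethtool eth0 | grep -E 'Speed|Duplex'
```

If it reports 100Mb/s on gigabit hardware, the usual suspects are the cable (gigabit needs all eight conductors intact) or the switch port, rather than the driver.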

