
Problems with system stability


drmaq


Hi everyone, I am new to using Unraid, so any help/advice would be appreciated.

I have set up my Unraid server with 11 drives in it:

  1. 4 x ST3000DM001-1CH166 drives
  2. 3 x ST3300651NS drives
  3. 2 x Samsung SSD 850 EVO drives
  4. 2 x ST3250312AS drives

My parity drives are one ST3000DM001 and one ST3300551NS.

The problems that I have been having are:

  1. The parity check takes days to complete, and more recently my install seems to crash.
  2. The parity check seems to find errors in the 100,000s.
  3. Transfer rates now peak at 4 MB/s instead of the 90-100 MB/s I usually got before adding the ST3300651NS drives.
  4. Slow performance of Ubuntu 18.04 VMs.
  5. I have also seen an error in the logs that I have not been able to find more info on. See below:
Jun 19 08:38:21 GREENBOX kernel: EDAC MC1: 32654 CE error on CPU#1Channel#1_DIMM#0 (channel:1 slot:0 page:0x0 offset:0x0 grain:8 syndrome:0x0)
Jun 19 08:38:22 GREENBOX kernel: EDAC MC1: 120 CE error on CPU#1Channel#1_DIMM#0 (channel:1 slot:0 page:0x0 offset:0x0 grain:8 syndrome:0x0)
Jun 19 08:38:23 GREENBOX kernel: EDAC MC1: 84 CE error on CPU#1Channel#1_DIMM#0 (channel:1 slot:0 page:0x0 offset:0x0 grain:8 syndrome:0x0)
Jun 19 08:38:24 GREENBOX kernel: br0: received packet on bond0 with own address as source address (addr:00:25:90:38:8e:fe, vlan:0)
Jun 19 08:38:24 GREENBOX kernel: br0: received packet on bond0 with own address as source address (addr:00:25:90:38:8e:fe, vlan:0)
Jun 19 08:38:24 GREENBOX kernel: EDAC MC1: 41 CE error on CPU#1Channel#1_DIMM#0 (channel:1 slot:0 page:0x0 offset:0x0 grain:8 syndrome:0x0)
Jun 19 08:38:25 GREENBOX root: error: /plugins/preclear.disk/Preclear.php: wrong csrf_token
Jun 19 08:38:25 GREENBOX kernel: EDAC MC1: 32565 CE error on CPU#1Channel#1_DIMM#0 (channel:1 slot:0 page:0x0 offset:0x0 grain:8 syndrome:0x0)
Jun 19 08:38:26 GREENBOX root: error: /plugins/preclear.disk/Preclear.php: wrong csrf_token
Jun 19 08:38:26 GREENBOX kernel: EDAC MC1: 32752 CE error on CPU#1Channel#1_DIMM#0 (channel:1 slot:0 page:0x0 offset:0x0 grain:8 syndrome:0x0)
Jun 19 08:38:27 GREENBOX kernel: EDAC MC1: 178 CE error on CPU#1Channel#1_DIMM#0 (channel:1 slot:0 page:0x0 offset:0x0 grain:8 syndrome:0x0)
Jun 19 08:38:28 GREENBOX kernel: EDAC MC1: 32561 CE error on CPU#1Channel#1_DIMM#0 (channel:1 slot:0 page:0x0 offset:0x0 grain:8 syndrome:0x0)
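
For anyone else searching for this message: the EDAC "CE error" lines report correctable ECC memory errors, and the counts here point at CPU#1 Channel#1 DIMM#0. As a minimal sketch for keeping an eye on the running totals the kernel exposes (assuming an EDAC-enabled kernel with the usual /sys/devices/system/edac/mc layout; attribute names can vary by kernel version):

#!/usr/bin/env python3
"""Dump EDAC correctable/uncorrectable error counters from sysfs.

Assumption: the kernel exposes EDAC counters under /sys/devices/system/edac/mc;
the exact attributes present can differ between kernel versions.
"""
from pathlib import Path

EDAC_ROOT = Path("/sys/devices/system/edac/mc")

def read_count(path: Path) -> str:
    """Return the counter value, or '?' if the attribute is missing."""
    try:
        return path.read_text().strip()
    except OSError:
        return "?"

def main() -> None:
    if not EDAC_ROOT.exists():
        print("No EDAC sysfs tree found; EDAC may not be enabled on this kernel.")
        return
    for mc in sorted(EDAC_ROOT.glob("mc*")):
        # Per-controller totals: correctable (ce) and uncorrectable (ue) errors.
        print(f"{mc.name}: CE={read_count(mc / 'ce_count')} "
              f"UE={read_count(mc / 'ue_count')}")
        # Per-DIMM counters, where the kernel provides them.
        for dimm in sorted(mc.glob("dimm*")):
            print(f"  {dimm.name}: CE={read_count(dimm / 'dimm_ce_count')} "
                  f"UE={read_count(dimm / 'dimm_ue_count')}")

if __name__ == "__main__":
    main()

Whatever the counters say, the usual next steps are a Memtest86+ pass from the Unraid boot menu and reseating or replacing the flagged DIMM, since sustained correctable errors often precede uncorrectable ones.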

 

greenbox-diagnostics-20180619-1946.zip

Link to comment
3 hours ago, drmaq said:

Is it better to have one model of drive as a mirrored cache, or is it OK to mix drive models?

One reason why I chose Unraid is its flexibility. In my system I have various disks: 2.5-inch/500GB data drives with two 3.5-inch/1.5TB parity disks, and one 2.5-inch/250GB Samsung EVO SSD for cache.

 

Quote

Cache

[...]

With a single cache drive, data captured there is at risk, as a parity drive only protects the array, not the cache. However, you can build a cache with multiple drives both to increase your cache capacity as well as to add protection for that data. The grouping of multiple drives in a cache is referred to as building a cache pool. The unRAID cache pool is created through a unique twist on traditional RAID 1, using a BTRFS feature that provides both the data redundancy of RAID 1 plus the capacity expansion of RAID 0.

Source: http://lime-technology.com/wiki/UnRAID_Manual_6#Cache

According to the above, the cache disks can be used in RAID 1 or RAID 0 mode: RAID 1 adds redundancy, while RAID 0 gives you a bigger cache pool.

In the RAID 1 case, the write speed will be the speed of the slower of the two cache disks, so it is recommended to use disks with similar performance; it does not matter whether they are the same model/brand or not.

In the RAID 0 case, if the cache disks have different write performance, you will get the speed of whichever cache disk the data is currently being written to.
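
To make the trade-off concrete, here is a small illustrative sketch of a two-disk pool; the sizes and speeds are made-up example numbers, not measurements of any particular drives:

# Illustrative only: rough capacity/write-speed expectations for a two-disk
# btrfs cache pool in RAID 1 vs RAID 0 profiles. Disk sizes and speeds below
# are hypothetical example values.

def pool_estimate(sizes_gb, write_speeds_mbs, profile):
    """Return (usable capacity in GB, expected sequential write speed in MB/s)."""
    if profile == "raid1":
        # Every block is mirrored, so a two-disk pool is limited to the
        # smaller disk, and writes wait for the slower member.
        return min(sizes_gb), min(write_speeds_mbs)
    if profile == "raid0":
        # Data is spread across members: all the space is usable, and the
        # speed you see depends on which disk the current data lands on.
        return sum(sizes_gb), f"{min(write_speeds_mbs)}-{max(write_speeds_mbs)}"
    raise ValueError("unknown profile")

# Example: a 250 GB SSD (520 MB/s) paired with a 500 GB SSD (450 MB/s).
for profile in ("raid1", "raid0"):
    cap, speed = pool_estimate([250, 500], [520, 450], profile)
    print(f"{profile}: ~{cap} GB usable, ~{speed} MB/s writes")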

Link to comment
1 hour ago, hernandito said:

 

I detest Seagate drives. Over the years I have had to replace all but one (2TB) of my Seagate drives due to failures. If 8TB models were on sale for $10, I would not touch them. IMHO, failing drives are some of life's most stressful moments.

 

 

 

I have used lots of Seagate drives that have worked very well. The important thing is that many manufacturers have had specific models or batches that worked rather badly, and anyone who picked up a number of those has then seen a huge overrepresentation of problems.

 

So I have had one specific model of Seagate external drive that overheated; all other models of Seagate external drives have worked very well.

 

I did have quite a lot of the IBM "DeathStar" drives - but my drives never spent time idling, so they never suffered the firmware bug that resulted in the heads drowning in the lubricant.

 

I have had a number of WD drives that killed themselves because of extremely aggressive head parking, which made them unusable in Linux unless you knew/remembered to reconfigure them with hdparm.
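
For reference, this is the kind of reconfiguration meant here, sketched with Python's subprocess for consistency with the other examples. It assumes the hdparm utility is installed, the drive honours APM, and that /dev/sdX and level 254 are placeholders to adjust for your own system:

# Sketch: raise the APM level on a drive so the heads stop parking every few
# seconds. Needs root. Assumes hdparm is installed and the drive supports APM;
# "/dev/sdX" and the level 254 are example values only.
import subprocess

def set_apm(device: str, level: int = 254) -> None:
    """Show the current APM setting, then apply a less aggressive one.

    Level 254 keeps APM enabled at maximum performance; 255 disables APM
    entirely on drives that allow it.
    """
    # -B with no value prints the current APM level.
    subprocess.run(["hdparm", "-B", device], check=True)
    # -B <level> sets it; this does not persist across reboots, so it is
    # usually re-applied from a boot script (e.g. the Unraid "go" file).
    subprocess.run(["hdparm", "-B", str(level), device], check=True)

if __name__ == "__main__":
    set_apm("/dev/sdX")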

 

But in the end, they all have decent quality, with the exception of their specific hardware or firmware goofs. I have accumulated hundreds of years of combined spinning time on Seagate drives without any issues, excepting the specific external drives that overheated.

Link to comment

Remember, even the most horrendous failure rates mean there are THOUSANDS of units of that specific drive model that worked perfectly normally, or even exceeded expectations. You can't take a backward-looking failure statistic for an old model and formulate even a rudimentary risk profile for a single unit from a currently manufactured batch. There are so many other factors in drive failure, mostly related to handling after the drive left the factory, that it is useless to buy a drive based on past performance of the brand.

 

Buy based on the current price per TB, including the connection cost: the HBA and PSU connections and their power consumption, plus physical slot space. Brand and model are pretty useless criteria when deciding what drive to buy for the typical Unraid user. Only buy single units, never multiple from the same vendor at the same time. Purchase when you actually need the space and don't overbuy before you really need it; warranties run from the purchase date, and shelving a unit because you got a good deal seldom pays off long term.
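
As a toy illustration of that price-per-TB-including-connection-cost idea (all figures below are hypothetical, not real prices):

# Toy comparison of effective cost per TB, folding in the per-slot cost of
# connecting a drive (HBA port share, cabling, power, physical bay).
# All numbers are hypothetical examples.

def cost_per_tb(drive_price, capacity_tb, slot_cost=0.0):
    """Effective $/TB once the cost of the slot/port it occupies is included."""
    return (drive_price + slot_cost) / capacity_tb

# e.g. a $25 share of an HBA port plus cabling/power per occupied bay
SLOT_COST = 25.0

candidates = {
    "4TB drive @ $90": (90, 4),
    "8TB drive @ $170": (170, 8),
    "12TB drive @ $280": (280, 12),
}

for name, (price, tb) in candidates.items():
    print(f"{name}: ${cost_per_tb(price, tb, SLOT_COST):.2f}/TB")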

 

Plan your purchases to allow old units to stay usable as offline backup.

Link to comment

Seagate drives in the 1TB-3TB range were pretty awful IMO, but newer drives (8TB+) have fared much better. Seagate was sued over their drive failure rates, and I believe they cleaned up their act as a result.

 

I still prefer HGST drives when they are price competitive, but cheap shucked 8TB Seagate and WD drives (<$200 USD) have held up well for me.

Link to comment

Archived

This topic is now archived and is closed to further replies.
