Areca "Main" Temperature Support in Unraid 6.4


bsim

Recommended Posts

I haven't really been able to find a fix for this. From my research, the temperature of a drive on the "Main" page is hard-coded in emhttp, rather than using each disk's individual SMART settings.

I can get the SMART data pulled correctly for each drive in its individual disk properties, but I can't get the "Main" temperature to change. Has anyone figured out how to change these temperatures? Is this an issue Limetech would ever tackle?

Link to comment

Areca support was an effort by BubbaQ from way back in the 4.x days.  It has always been a hack on top of unRAID to make Areca's non-standard way of doing things understandable to unRAID.  @bonienl put together the SCSI devices plugin, which simplified this significantly, but two things remain unresolved. 

 

1. Temperatures on the main page always show as 30 degrees Celsius regardless of the real disk temperature.  The real temperature is available on the individual disk's page.

2. Preclear is limited by the lack of support for the initial and post-run SMART reports.  I always preclear on a motherboard or other standard SATA connection.

 

Areca users are a small group, and most of the development has been by people who have controllers that they want to work. 

Link to comment

I've gone through some of the emhttp code (main page temps) and have found a few locations the infamous "30" degrees comes from (it looks like a default when no value is returned)... I considered going through and attempting something, but without a larger overhaul of how unRAID handles SMART from all the different controllers, I may just keep doing about the same thing as you: a single motherboard-connected bay in my 24-bay rack just for preclear SMART info collection. It just sucks that most of the alarms/notifications available in basic unRAID aren't available for my ARC-1280ML 24-port with 2GB... I was excited to get it for only $70 on eBay! The onboard Areca web interface for the controller is pretty clean; it just sucks not having the alarm functions supported in unRAID.

 

Reading a few of the posts on Areca support, it looked promising, but it may have fallen by the wayside for the unRAID devs.

Link to comment

The work by BubbaQ, adapted into the plugin, results in the drive naming (model/serial number) being set correctly, so a disk mounted on the Areca is named exactly the same as if it were plugged into any other card or motherboard port.

 

But getting the temperature depends on getting a valid SMART report. I do believe there is a way to configure that for Areca drives in the Dynamix web GUI, and hopefully if you do configure it, temps will show correctly. @bonienl may be able to confirm and point you in the right direction.

 

By the way, the ARC-1280 is a PCIe 1.1 card in an x8 package. You can hook up 7 (maybe 8) spinners and get good performance in parity checks, but adding more drives will start to constrain throughput. (The I/O is only constrained when all disks are running in parallel, so adding UD devices to the controller beyond the 7 is fine.) But true 24-drive operation is a pipe dream unless your use case only touches a few at a time. The Areca cards do support creating a RAID0 parity, which you can do nicely with the 1280 card. Such a parity made up of, say, two 4TB 7200 RPM drives would provide a very fast 8TB parity drive, capable of over 330 MB/sec, which can help with write speeds to the array.

 

A PCIe 1.x card has half the bandwidth of a PCIe 2.0 card. A PCIe 2.0 x8 card would therefore support twice as many spinners (14 or 15) running in parallel. The best choices for a 16-port card are the LSI SAS9201-16i and the LSI SAS9201-16e. The -16e is quite reasonably priced but is set up for externally mounted drives. The cables could be routed inside the case and used for internal drives, but they use a totally different connector than most SAS cards. There is an Areca card (1203-8i) that is a 2.0 card with an x4 connector. It supports 7 or 8 spinners in an x4 slot, and it supports RAID0 parity. I have one of these and am quite happy with it. Somewhat outrageously priced, but I found one on eBay for $100 and went for it.

 

PCIe 3.0 cards have double the bandwidth of 2.0, meaning in theory an x8 card could support about 30 spinning drives. I don't know of a controller at anywhere near a reasonable price that goes that high, although the SAS9205-24i might be one to watch for if you can find it.
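
The arithmetic behind those drive counts is simple division. A rough sketch: the per-lane rates are the usual post-encoding PCIe figures, 180 MB/sec is an assumed outer-track spinner rate, and real controllers sustain well under the raw link rate, which is why the practical counts above are lower than this division suggests.

```shell
#!/bin/sh
# Back-of-the-envelope usable PCIe bandwidth per x8 slot (MB/s).
# Practical drive counts are lower: controllers rarely sustain the link rate.
GEN1_LANE=250    # PCIe 1.x: 2.5 GT/s with 8b/10b encoding
GEN2_LANE=500    # PCIe 2.0: 5.0 GT/s with 8b/10b encoding
GEN3_LANE=985    # PCIe 3.0: 8.0 GT/s with 128b/130b encoding
DRIVE=180        # assumed outer-track sequential rate of one spinner

for lane in $GEN1_LANE $GEN2_LANE $GEN3_LANE; do
    slot=$((lane * 8))
    echo "x8 slot: ${slot} MB/s -> at most $((slot / DRIVE)) drives at ${DRIVE} MB/s"
done
```

That yields a theoretical ceiling of 11 / 22 / 43 drives per generation; derate for controller overhead and you land near the 7-8, 14-15, and ~30 figures above.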

 

Cheers! 

 

(#ssdindex - Areca / PCIe bandwidth)

Link to comment

Note that smartctl takes special command-line input for accessing devices using an areca card.

I don't have an areca card myself, so I can't test.

 

But the documentation claims:

  smartctl --all --device=areca,3 /dev/sg2
          (Prints all SMART info for 3rd ATA disk on Areca RAID controller)

 

Link to comment

I suggest reading this thread. It is focused on a spurious log message that comes from the Areca drives, but there is a lot of information, including a link to a post I wrote long ago about how to do the Areca SMART reports. Tom even chimes in later on.

 

Link to comment

Yes, I understand the limitations of the 24-port card, but with the exception of local server use, anything beyond the gigabit network connection's throughput to it is moot. The parity check for a 70TB server isn't going to be a quick one anyway, and using only spinners (except a direct-to-motherboard btrfs SSD mirror cache pool), I don't expect it ever to be. I regularly do parity checks at the beginning of each month, which may cause a bit of a slowdown during those delightful few days. This isn't a high-availability system; it's only running a few VMs off the cache set, streaming, and backups. I have Cat6a run in many places at this point, so if I ever decide to jump to a 10Gb network, I may do something with the card in the future.

 

I understand all the nuances of using smartctl and its command line, and yes, under each specific disk, if you put in the controller slot and the controller address, you can get the SMART listing for each disk. The SMART defaults page for all the disks only helps with inputting the controller address (which, by the way, changed from sg25 to sg24 between 6.3 and 6.4). Enabling this does not help with any system alerts (especially for temp) or any preclear reporting. emhttp is the source of many of the main SMART config interests, and it doesn't use the drive config options under each drive... it is hard-coded to issue generic smartctl commands without the special areca, 3ware, etc. switches. The most obvious limit of this is that the main page will always be stuck at 30°C regardless of the actual temps.

 

The bad or missing sense data is a pain in the butt on the console, but 6.4 may have changed some of that (I'm not sure, because all of my drives have to be manually reset to the new controller address... fun, fun).

Link to comment
10 minutes ago, bsim said:

Yes, I understand the limitations of the 24-port card, but with the exception of local server use, anything beyond the gigabit network connection's throughput to it is moot. The parity check for a 70TB server isn't going to be a quick one anyway, and using only spinners (except a direct-to-motherboard btrfs SSD mirror cache pool), I don't expect it ever to be. I regularly do parity checks at the beginning of each month, which may cause a bit of a slowdown during those delightful few days. This isn't a high-availability system; it's only running a few VMs off the cache set, streaming, and backups. I have Cat6a run in many places at this point, so if I ever decide to jump to a 10Gb network, I may do something with the card in the future.

 

I understand all the nuances of using smartctl and its command line, and yes, under each specific disk, if you put in the controller slot and the controller address, you can get the SMART listing for each disk. The SMART defaults page for all the disks only helps with inputting the controller address (which, by the way, changed from sg25 to sg24 between 6.3 and 6.4). Enabling this does not help with any system alerts (especially for temp) or any preclear reporting. emhttp is the source of many of the main SMART config interests, and it doesn't use the drive config options under each drive... it is hard-coded to issue generic smartctl commands without the special areca, 3ware, etc. switches. The most obvious limit of this is that the main page will always be stuck at 30°C regardless of the actual temps.

 

Loading this controller with 24 drives would mean the max average speed each of the connected drives could deliver is about 65 MB/sec. Since the drives are probably able to deliver close to 180 MB/sec at the start, trailing off to about 65 MB/sec at the end (inner cylinders), you are slowed down from start to finish. In theory the speed should be consistent throughout, because even on the inner cylinders it is always, or almost always, constrained by bandwidth and not individual disk speed.

 

I estimate about a 24-30 hour parity check with 5TB parity. If you upped that parity to 8TB, you're looking at 38-46 hours.
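
The floor on those numbers is just parity size divided by the saturated per-drive rate; processing overhead and other activity push a real check well above that floor. A sketch, using the 65 MB/sec figure from above:

```shell
#!/bin/sh
# Lower bound on parity check time: parity size / average per-drive rate
# when the controller is saturated (~65 MB/s with all 24 drives active).
# Overhead pushes real checks above this floor (24-30 h / 38-46 h quoted).
RATE=65                                  # MB/s per drive, bandwidth-limited
for size_tb in 5 8; do
    secs=$((size_tb * 1000000 / RATE))   # 1 TB ~ 1,000,000 MB
    echo "${size_tb}TB parity: at least $((secs / 3600)) hours"
done
```

That gives roughly 21 hours for 5TB and 34 hours for 8TB as absolute minimums, consistent with the longer real-world estimates above.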


But if you are OK with the parity check speeds, for normal use it should be fine. You are typically only doing I/O to a few drives in parallel.

 

Did you set the Areca controller in your config and turn off spin-down on the Areca-connected drives? It wasn't clear from that other thread whether that stops the sense data log messages. (The drives will still spin down based on the Areca-specified timeout.) unRAID doesn't have the ability to spin them down; in fact, I could not find ANY tool that would spin them down.

 

Link to comment

Uhh... if you can find a spinning consumer drive that dumps 180 MB/s, I would love to see it, and anyone running a 70TB SSD array wouldn't be interested in using my sub-$100 drive controller.

 

My Areca spin-downs are disabled on the controller and in unRAID. I'm thinking you have my thread confused with a different one; I'm not having the sense data logging problem, as I have given up hope of unRAID supporting non-generic SMART data natively through the emhttp interface.

My previous reply at the end was directed at another user who had not read the entire thread and commented on the sense data issue, which is caused by emhttp's generic native commands being issued to a proprietary controller without the special smartctl switches.

 

Please read the thread and stop hijacking the conversation; simple mathematics and my usage habits tell me I'm OK with my array speeds.

 

My issue, as originally stated, was getting the main page (emhttp) to report the correct SMART temperature instead of the default 30 degrees.

Link to comment
7 minutes ago, bsim said:

My issue, as originally stated, was getting the main page (emhttp) to report the correct SMART temperature instead of the default 30 degrees.

 

I'd request it in the feature requests area then. Or as a defect. General support is pointing you to the fact that this does not exist today. I thought Dynamix had a way to specify smart parameters, but apparently I was mistaken.

 

7 minutes ago, bsim said:

Please read the thread and stop hijacking the conversation; simple mathematics and my usage habits tell me I'm OK with my array speeds.

 

In my role as moderator I do try to help connect the dots between questions and existing information that might be relevant and of value. Sorry you saw it as hijacking. 

 

13 minutes ago, bsim said:

Uhh... if you can find a spinning consumer drive that dumps 180 MB/s, I would love to see it, and anyone running a 70TB SSD array wouldn't be interested in using my sub-$100 drive controller.

 

Spinners max out at about 200 MB/sec; 180 MB/sec is a bit below that. This is the high-speed sequential read mode that a parity check uses. Run the disk speed test tool. But even a parity check can't get quite that fast, as it is processing the data and running a lot of I/O in parallel.

 

Good luck. Over and out.

Link to comment
21 minutes ago, bsim said:

Please read the thread and stop hijacking the conversation; simple mathematics and my usage habits tell me I'm OK with my array speeds.

 

I hear you loud and clear. I have rolled back my suggested manual hack to /usr/local/emhttp/webGui/include/DeviceList.php, which would most probably have supplied you with the SMART data for your Areca controller. I will make sure I don't offend you by accidentally hijacking any further threads you create.

Link to comment

Sorry if I seemed a bit brusque... just frustrated... SSD, I love the knowledge sharing, but for me speed has always taken a back seat to problems. I've been in network engineering/consulting for a couple of decades, so I understand the internal bus limitations very well. But I've always found that there is a sweet spot between cost and speed.

 

The two sides of my issue with unRAID usually come down to two types of users/administrators. One group sees the customized SMART page per disk and the overall disk SMART defaults page and considers the problem solved. This group usually doesn't have the controllers in question and doesn't see the glaring issues deep inside the core of unRAID.

The second group (which usually has to deal with the controller in question every day) has to jump through hoops to use a controller that would cost several hundred to several thousand dollars correctly through the native unRAID interface. I'm guessing the couple of guys who wrote the interface for the custom SMART data were in this group.

 

From other posts, it seems that Tom sees the issue with emhttp, but from my own digging around in emhttp's code, it looks like the fix would probably require quite a few rewrites in the base code. I'm thinking the base code was written the way it was because, at the time, smartctl (which unRAID relies on heavily) did not support custom controller configurations/command switches. A few guys stepped in (probably having to deal with their own controller issues) and wrote the special handling pages for SMART controllers that were now supported by the new smartctl command-line options. But unRAID hasn't gone back to internalize that third-party work so that the three or four big problems are addressed. Hence the core support issues.

 

I'm guessing these problems haven't been internalized because SMART has always been treated as a secondary issue that tends to be a bit hokey pokey; a nice-to-have, rather than a signal (full of false positives and potentially false negatives) of potential future issues.

 

The current issues with core unRAID special controller support, from what I've researched:

 

1. The main page temperatures do not show current temps for any non-generic SMART controllers

2. Generic SMART commands are issued to non-generic SMART controllers (causing console errors with sense data)

3. Preclear is not able to record correct SMART data to determine if there is a potential problem

4. The unRAID Dashboard does not show whether potential SMART signals are flagged for a specific drive (red errors for reallocated sectors, for instance)

5. SMART error warnings are not flagged in unRAID notifications

 

What I wrote in https://lime-technology.com/forums/topic/56820-bad-missing-sense-key-scsi-data/ was an attempt to start a discussion about future full unRAID support for non-generic controllers, since smartctl has supported them since something like version 5.3 (unRAID 6.4 now runs 6.5).

 

Link to comment

The GUI does support specific controllers. From the Main page, click on the desired disk and look at the SMART settings.

image.thumb.png.fed0678e22a74315aa0863e8573b480a.png

 

Each disk needs to get the correct parameters (index and device name).

With the correct settings in place, you will be able to generate SMART reports, monitor temperature, and receive notifications when disk temperature exceeds thresholds.

 

Link to comment

The dashboard and the main page (emhttp) are the primary places where SMART is non-functional. I'm not going to enter each drive's page to verify its temperature when I have a dashboard/main page that displays the data front and center, incorrectly.

 

As of the last version of unRAID, I never received any notifications of pending sectors, even though I had two drives jump to huge error numbers, with all of the drives having the correct SMART data pulled via their individual drive settings. The separate SMART drive pages work with the custom smartctl settings, but the notifications never triggered... has something about them changed in 6.4?

 

Do the notification thresholds on the SMART page carry over to the notifications on the dashboard status page (red icon for problems)?

 

Also, it looks like the second half of the SMART information page pulls some sort of corrupt data?

 

correct temp in smart.jpg

Drive Custom Info.jpg

Dashboard Statistics.jpg

Main Incorrect temp.jpg

Link to comment

The first box after specifying areca would be the port on the Areca card the drive is on; the first number counts, the second one you ignore. It's a pain in the butt to connect with the actual drive... you have to pull a manual SMART report and correlate with serials... I use this for each of the 24 slots:

 

echo -n "01" ;smartctl --all --device=areca,1 /dev/sg24|grep "Serial Number:"|cut -d ':' -f 2

echo -n "02" ;smartctl --all --device=areca,2 /dev/sg24|grep "Serial Number:"|cut -d ':' -f 2

...

 

The second box would contain the address from lsscsi -g | grep "Areca".
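
If you're scripting this, the per-slot lines above can be folded into one loop that also pulls the temperature attribute in the same pass. A sketch, assuming the same /dev/sg24 node as above (yours will differ) and drives that report attribute 194 as Temperature_Celsius:

```shell
#!/bin/sh
# Map each Areca port to its drive serial and current temperature.
# /dev/sg24 and the 24-port count match this system; adjust to yours.
for port in $(seq 1 24); do
    out=$(smartctl --all --device=areca,$port /dev/sg24)
    serial=$(printf '%s\n' "$out" | awk -F': *' '/Serial Number:/{print $2}')
    temp=$(printf '%s\n' "$out" | awk '/Temperature_Celsius/{print $10}')
    printf 'port %02d  serial %s  temp %s C\n' "$port" "$serial" "$temp"
done
```

The awk field positions match smartctl's standard attribute table layout; vendor-specific raw value formats may need adjusting.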

Link to comment
13 minutes ago, bsim said:

The first box after specifying areca would be the port on the Areca card the drive is on; the first number counts, the second one you ignore. It's a pain in the butt to connect with the actual drive... you have to pull a manual SMART report and correlate with serials... I use this for each of the 24 slots:

 

echo -n "01" ;smartctl --all --device=areca,1 /dev/sg24|grep "Serial Number:"|cut -d ':' -f 2

echo -n "02" ;smartctl --all --device=areca,2 /dev/sg24|grep "Serial Number:"|cut -d ':' -f 2

...

 

The second box would contain the address from lsscsi -g | grep "Areca".

 

Yea - I had the /dev/sg3 and the areca number 6 in my command.

 

But it looks like there has been an update since rc6, as I have 4 boxes, not 2. And putting the 6 in the right place didn't help. I'll try again after I upgrade.

 

I am deleting my earlier post so as not to confuse.

 

I actually still run my old unmenu / myMain GUI. It has all of this automated. :)  So I get my SMART reports and temperatures automatically. With a few CLI commands you can figure all of it out. I generate a little txt file for each Areca drive at boot with its parameters so I don't have to figure it out each time. But I did notice that with my latest Areca card (ARC-1203-8i) the output of a few of the commands changed, and I had to rejigger my parser. 

Link to comment
8 hours ago, bsim said:

I think there were 3 or 4 boxes in 6.3.5 as well, but only two were pertinent... now with 6.4 only the two boxes are there.

 

6.3.5 and 6.4.0 use different versions of smartctl. The settings fields are based on the latest smartctl documentation.

Edited by bonienl
Link to comment
  • 2 months later...

Guy

 

I am new to unRAID, having been a QNAP person for over 10 years.  So far I am very impressed with the platform and its manageability.

    

I've been reading for hours to figure out if there is some hack/way to get the Areca cards to show temps and SMART info on the main screen.  Has anyone gotten this to work? 

 

I have 2 Areca 1882i (8-port) cards connected to 6 & 4 TB spinners, with 2 parity drives hanging off the main board. 20 drives total.  

 

I can get the SMART reports via the CLI, but having them available from the main menu would be helpful. 

 

Any help would be appreciated.

 

Thanks

Link to comment
On 4/6/2018 at 7:02 PM, chris0583 said:

Guy

 

I am new to unRAID, having been a QNAP person for over 10 years.  So far I am very impressed with the platform and its manageability.

    

I've been reading for hours to figure out if there is some hack/way to get the Areca cards to show temps and SMART info on the main screen.  Has anyone gotten this to work? 

 

I have 2 Areca 1882i (8-port) cards connected to 6 & 4 TB spinners, with 2 parity drives hanging off the main board. 20 drives total.  

 

I can get the SMART reports via the CLI, but having them available from the main menu would be helpful. 

 

Any help would be appreciated.

 

Thanks

 

In the disk settings, set the controller type options:

 

5acb7806101a6_Capture-ArecaConfigSettings.thumb.PNG.d2bbe5b5387f59b3b28ea652b73c34a0.PNG

 

Link to comment

Yes, I have all that set.

 

The Arecas are sg9 and sg21, and values 1-8 are all set for all the disks.  I can get SMART reports by clicking on each disk.   

 

I was looking to get them to display on the main dashboard.  All the disks hanging off the controllers are reporting 86°F. 
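
Worth noting: 86°F is consistent with the hard-coded 30°C default discussed earlier in this thread, just converted by a Fahrenheit display setting. A quick check of the conversion:

```shell
#!/bin/sh
# Celsius to Fahrenheit: F = C * 9/5 + 32.
# The hard-coded 30 C default shows up as 86 F on a Fahrenheit display.
C=30
echo "$((C * 9 / 5 + 32)) F"   # prints "86 F"
```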

 

image.png.35477c85ff9e6cd03a5f8af7eecdc62e.png

Link to comment
