Dynamix - Web GUI



Hi

 

I've noticed recently that the CPU utilization on the main Dashboard seems to be wrong.

 

Something causes it to climb to 100% and just sit there, and yet if I go to System Stats and look at the live CPU Load graph, it will show something completely different and typically under 50%.

 

Right now System Stats shows CPU load sitting at between 20% and 40% but CPU utilization on the Dashboard is locked at 100%.

 

RAM usage on the Dashboard appears to be correct and consistent with what System Stats says though.

 

I rebooted the unRAID server, and my PC, then did a few things and watched CPU utilization start low, then climb to 100% fairly quickly and just sit there.

 

Sometimes it does drop off a bit, to around 75% or so, and then it climbs back to 100% again.

 

It looks almost as if it has become confused about where 0% baseline is and it now thinks 75% is 0% or something.
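For reference, dashboard-style CPU utilization is normally computed from the delta between two `/proc/stat` samples; if the "previous" sample (the baseline) goes stale instead of being updated on every poll, the busy delta keeps growing and the gauge creeps toward 100%. A minimal sketch of the correct delta calculation (not Dynamix's actual code):

```shell
# Sketch: compute CPU utilization % from two /proc/stat samples.
# The key point: each reading must be diffed against the PREVIOUS
# sample; reusing a stale baseline makes the value creep to 100%.
read_cpu() {
  # /proc/stat fields: cpu user nice system idle iowait irq softirq steal
  # busy = user+nice+system+irq+softirq, idle = idle+iowait
  awk '/^cpu /{print $2+$3+$4+$7+$8, $5+$6}' /proc/stat
}
set -- $(read_cpu); busy1=$1; idle1=$2
sleep 1
set -- $(read_cpu); busy2=$1; idle2=$2
total=$(( (busy2 - busy1) + (idle2 - idle1) ))
pct=$(( 100 * (busy2 - busy1) / total ))
echo "CPU ${pct}%"
```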

Exactly the same issue I reported a couple of days ago. Glad to know I'm not the only one.

 

Can you post your motherboard and CPU details?

 

Something I just noticed:

I'm running Firefox (currently 32.1), and have multiple tabs open in a variety of groups. FF just crashed, and when I restarted and restored my session, Dynamix displayed the CPU utilization correctly (I assume - it was 10-40% and moving around). After going to some other groups to look at things there, I returned to my Dynamix tab, and the CPU was maxed at 100% again.

 

On the other hand, maybe that doesn't have anything to do with it. I launched a fresh copy of Chrome (37.0), pulled up the page, and the CPU was pegged at 100%.

 

If I could just get FF to stop crashing, and using so much memory & CPU on my desktop, it would be nice... Anything you can do to fix that, bonienl?  ;D


I'll have a look into the reporting of the CPU usage. Currently I am working on the V6 version, which is quite a rework, but doable. No ETA yet.

 

It must be a lot of work, there's a lot of GUI changes between v5 and v6. Looking forward to it, the web GUI feels so empty without Dynamix!


I'll have a look into the reporting of the CPU usage. Currently I am working on the V6 version, which is quite a rework, but doable. No ETA yet.

 

I appreciate that the future is where you're focusing - makes sense.

 

Another restart of FF, and I watched the Dashboard for a minute. CPU% started at 19 and sat there for about 5-7 seconds. It increased to 42%, then sat there for another 5-7 seconds. It then jumped to 92% and sat for about 5-7 seconds, then climbed to 100% and there it sits.

 

Almost looks as if the 'new' percentage is being added to the 'displayed' percentage instead of replacing it. However, that seems unlikely, since it works for so many people while so few of us are having issues.
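To illustrate that suspicion (a purely hypothetical sketch, not taken from the Dynamix source): if a gauge accumulates each new reading instead of replacing the displayed value, it saturates at 100% within a few refreshes.

```shell
# Hypothetical gauge update: accumulating readings instead of
# replacing them pins the display at 100% within a few refreshes.
displayed=0
for reading in 19 23 31 27 22; do
  displayed=$(( displayed + reading ))        # buggy: should be displayed=$reading
  [ "$displayed" -gt 100 ] && displayed=100   # gauge clamps at 100%
  echo "reading=${reading}% displayed=${displayed}%"
done
```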

 

Other than being a slight annoyance in an otherwise spectacular package, I'm not hugely concerned about it - if I really need to know CPU utilization, I can go to the stats page or fire up telnet and do a top. This is just to give you a bit more observed detail in the hopes that you'll wake up at 3am some day soon with an "Ah ha!" moment and have a fix.

 

Thanks!

FreeMan


This is just to give you a bit more observed detail in the hopes that you'll wake up at 3am some day soon with an "Ah ha!" moment and have a fix.

 

Your 3am is 9am for me, which would give me plenty of time to sleep and come up with that *bright* "ah ha" moment :)

 

As part of the Dynamix v6 development I created a newer (better!?) way of measuring CPU load on the dashboard page. I'll port that back to the v5 version as well. I have good hopes it solves the issues reported here!

 


bonienl, I have this issue with Dynamix and cache_dirs. Errors are coming up. Please see the attached screenshot. Thanks

 

Any ideas on this, bonienl, or should I just uninstall cache_dirs from the control panel and install Joe's script manually? Thanks

 

The latest version of "Dynamix Cache Dirs" downloads the cache_dirs script made available here on the forum. The file is unzipped and placed in the /usr/local/sbin folder.

 

This approach ensures that the latest available version of cache_dirs is always used. Dynamix builds a GUI wrapper around it, which allows the user to enter parameters using the browser. Upon execution these are passed as the actual CLI options for cache_dirs.
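As a rough illustration of such a wrapper (the variable names are invented, and the flags are examples rather than a definitive list of cache_dirs options): GUI-entered settings become ordinary CLI flags.

```shell
# Hypothetical GUI wrapper: translate browser-entered settings into
# cache_dirs command-line flags. Variable names are made up for
# illustration; only non-empty settings become options.
gui_depth=9999        # scan depth entered in the GUI
gui_excluded="appdata" # folder to exclude, entered in the GUI

opts=""
[ -n "$gui_depth" ]    && opts="$opts -d $gui_depth"
[ -n "$gui_excluded" ] && opts="$opts -e $gui_excluded"
echo "would run: /usr/local/sbin/cache_dirs$opts"
```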

 

Looking at the source code of cache_dirs I see it does a hard coded search in /mnt/disk* and /mnt/cache, which explains the warning messages you get. These are harmless, but you may want to ask Joe to make a correction (e.g. test if Cache drive exists).
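The suggested correction could be as simple as guarding the hard-coded paths before scanning them (a sketch of the idea, not the actual cache_dirs code):

```shell
# Sketch of the suggested fix: only scan paths that actually exist,
# instead of hard-coding /mnt/disk* and /mnt/cache unconditionally.
scan_list=""
for dir in /mnt/disk* /mnt/cache; do
  [ -d "$dir" ] && scan_list="$scan_list $dir"
done
echo "directories to cache:${scan_list:- (none found)}"
```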

 


Just updated to 2.2.9 and now I have this error on http://tower/Settings

 

Parse error: syntax error, unexpected $end in /usr/local/emhttp/plugins/webGui/template.php(365) : eval()'d code on line 1

 

Another thing: after the update I could see the "Changes" button of "WebGui" on http://tower/Dynamix, but now it's gone  :(

 

I was too quick with putting things on github, my bad.

 

You need to telnet into the server, manually remove the PLG and TXZ files, and reload the updated version. Here is a guide:

 

cd /boot/plugins

rm dynamix.webGui-2.2.9-noarch-bergware.plg

rm dynamix.webGui-2.2.9-i486-1.txz

wget --no-check-certificate https://raw.github.com/bergware/dynamix/master/plugins/dynamix.webGui-2.2.9-noarch-bergware.plg

installplg dynamix.webGui-2.2.9-noarch-bergware.plg

 

 


Ok, thanks.

 

Just a little question: I don't have a parity disk, and my problem is that Dynamix always reminds me of errors about it. How can I disable it?

 

 

Alert: Parity disk in error state ()

 

I didn't take into account the "no parity" situation, it is always seen as an error.

 

I'll add a check for this in the upcoming v2.2.10.
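Such a check boils down to treating "no parity assigned" as a distinct state rather than an error. A hedged sketch (the state strings here are invented for illustration, not the actual unRAID variable values):

```shell
# Hypothetical check: distinguish "no parity disk assigned" from a
# real parity error. State strings are invented for illustration,
# not actual unRAID values.
parity_alert() {
  state="$1"
  case "$state" in
    "")      echo "notice: no parity disk assigned (data unprotected)" ;;
    DISK_OK) echo "ok: parity disk healthy" ;;
    *)       echo "alert: parity disk in error state ($state)" ;;
  esac
}
parity_alert ""        # no parity configured: a notice, not an alert
parity_alert DISK_OK
parity_alert DISK_DSBL
```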

 

Out of curiosity: why no parity disk? It leaves your data unprotected.


I just got hit by the "Hot disk" pop up storm, and as I was clicking through all the warnings, a thought occurred to me:

40C seems to be a reasonable warning threshold for most disks, but I've got one that regularly runs above that when the ambient temperature is higher. (I turned the AC off in the house last week as it cooled down outside, but it got warm overnight, the temperature inside the server increased, and that one disk hit 40.)

 

Instead of popping up every minute with a warning, would it be possible, when a warning hits, to store the "last" values for the alarmed disk, then continue to check every minute, and only pop up another warning if any value changes (or another disk gets hot)? For example, I had about 20 minutes of 40C readings on disk 3; then it went up to 41C for a bit, dropped below 40, and went above again for a bit. Instead of a warning for every minute it was over, I propose a warning when it's first detected at 40C, another when it hits 41C, another at 40C (it changed from the "last" value), then a green box indicating that it's now at 39C and "safe".

 

The advantages of this are that I would have known about the temp issue, I would have been able to easily identify the change points, and I would have had only 6-8 boxes to clear, not 40-50.
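The proposal amounts to alerting on change rather than on every poll. A minimal sketch of that logic (the threshold and names are made up, and the temperature sequence simulates the one described above):

```shell
# Sketch of the change-only warning logic proposed above: poll every
# minute, but only emit a pop-up when the reading differs from the
# last *alerted* value, plus an "ok" when it drops below the threshold.
threshold=40
last_alerted=""
check_temp() {
  temp="$1"
  if [ "$temp" -ge "$threshold" ]; then
    if [ "$temp" != "$last_alerted" ]; then
      echo "warn: disk at ${temp}C"
      last_alerted="$temp"
    fi
  elif [ -n "$last_alerted" ]; then
    echo "ok: disk back to ${temp}C"
    last_alerted=""
  fi
}
# Simulate the sequence described above: long stretch at 40, a bump
# to 41, a dip below, and a brief return above the threshold.
for t in 40 40 40 41 41 39 40 39; do check_temp "$t"; done
```

Eight polls produce only five pop-ups (warn 40, warn 41, ok 39, warn 40, ok 39) instead of one per minute.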

 

One for the idea hopper, though immediate implementation wouldn't be turned down.  ;)


Your 3am is 9am for me, which would give me plenty of time to sleep and come up with that *bright* "ah ha" moment :)

 

3 my time, 3 your time, whichever works out best for you. ;)

 

As part of the Dynamix v6 development I created a newer (better!?) way of measuring CPU load on the dashboard page. I'll port that back to the v5 version as well. I have good hopes it solves the issues reported here!

 

I appreciate the effort. Looking forward to the 'fixed' release to see if that helps.


I just got hit by the "Hot disk" pop up storm, and as I was clicking through all the warnings, a thought occurred to me:

40C seems to be a reasonable warning threshold for most disks, but I've got one that regularly runs above that when the ambient temperature is higher. (I turned the AC off in the house last week as it cooled down outside, but it got warm overnight, the temperature inside the server increased, and that one disk hit 40.)

 

Instead of popping up every minute with a warning, would it be possible, when a warning hits, to store the "last" values for the alarmed disk, then continue to check every minute, and only pop up another warning if any value changes (or another disk gets hot)? For example, I had about 20 minutes of 40C readings on disk 3; then it went up to 41C for a bit, dropped below 40, and went above again for a bit. Instead of a warning for every minute it was over, I propose a warning when it's first detected at 40C, another when it hits 41C, another at 40C (it changed from the "last" value), then a green box indicating that it's now at 39C and "safe".

 

The advantages of this are that I would have known about the temp issue, I would have been able to easily identify the change points, and I would have had only 6-8 boxes to clear, not 40-50.

 

One for the idea hopper, though immediate implementation wouldn't be turned down.  ;)

 

Yeah, I need to look into this ... had exactly the same issue of one drive running a bit hotter than the rest and a screen full of warnings  ;D

 

Since I pulled v2.2.9 and am currently working on fixes for v2.2.10, I'll take this request along.

 

  • Squid locked this topic
This topic is now closed to further replies.