Posts posted by zoggy
-
Just to document: I've been seeing "eth0: failed to renew DHCP, rebinding" messages appear in 6.7+.
Looking back, it started with 6.7.0-rc8; rc7 and older did not throw the message. It seems to be cosmetic at least.
6.7.0-rc6 == good // dhcpcd: version 7.1.1
6.7.0-rc7 == good // no dhcpcd changes, log's last entry was: May 5 12:32:34
6.7.0-rc8 == bad // dhcpcd: version 7.2.0
May 6 10:02:35 husky dhcpcd[1603]: eth0: failed to renew DHCP, rebinding
Looking at the source, I don't see anything that stands out:
https://github.com/rsmarples/dhcpcd/compare/dhcpcd-7.1.1...dhcpcd-7.2.0
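For anyone wanting to confirm the same pattern on their own box, counting the message is easy; here's a self-contained sketch using sample log lines (hostname, PID, and timestamps are illustrative, not from a real capture — on a live system you'd grep /var/log/syslog instead of the variable):

```shell
# Sample syslog lines (illustrative only) showing the rebind message appearing.
log='May  5 12:32:34 husky dhcpcd[1603]: eth0: carrier acquired
May  6 10:02:35 husky dhcpcd[1603]: eth0: failed to renew DHCP, rebinding
May  7 06:14:12 husky dhcpcd[1603]: eth0: failed to renew DHCP, rebinding'

# Count occurrences of the rebind message in the captured lines.
printf '%s\n' "$log" | grep -c 'failed to renew DHCP, rebinding'   # prints 2
```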
-
any chance we can get dhcpcd updated?
-
Just to note, I upgraded from 6.7.0 -> 6.7.1-rc1 -> 6.7.1-rc2 without problems.
The 6.7.0 cosmetic bug is still shown every ~20 hours:
Quote
Jun 11 09:27:38 husky dhcpcd[1582]: eth0: failed to renew DHCP, rebinding
-
13 minutes ago, Frank1940 said:
Same result. I know this must be frustrating for you...
EDIT: Perhaps the link has changed with the new permission.
ok, here we go: https://openbenchmarking.org/result/1906037-HV-190603PTS41,1906033-HV-190603PTS92
-
Whew, okay. I have an Intel i3-3220 CPU and wanted to see how much performance I could get back by disabling the mitigations as noted.
I upgraded to 6.7.1-rc1, spun up the Phoronix Test Suite in a Docker container, and focused on the CPU tests -- https://openbenchmarking.org/suite/pts/cpu
The array was running but no activity was ongoing, and no other dockers were active.
A full test-suite cycle took about 3 hours per run; each test ran 3 times and the deviation is noted.
I ran the first set as-is with the mitigations in place, then rebooted with a syslinux.cfg modification to disable the mitigations (some still apply due to the microcode in use) and re-ran the same tests to compare.
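For reference, one common way to do that syslinux.cfg modification is the kernel's combined `mitigations=off` switch (backported to the 4.19 stable series; the exact label stanza below is a sketch — your boot entry text and initrd line may differ):

```text
label Unraid OS
  kernel /bzimage
  append mitigations=off initrd=/bzroot
```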
results:
https://openbenchmarking.org/result/1906037-HV-190603PTS41,1906033-HV-190603PTS92
You can see a 2-14% increase on various things.
The ctx-clock micro-benchmark, which measures context-switching overhead, shows the biggest impact since Spectre/Meltdown.
That is why it is the most drastic result reported, as it targets that specific area... an 87% difference!
Hope this helps those who are curious.
-
Anyone else seeing this in 6.7.0 every ~5 hours:
Quote
dhcpcd[1589]: eth0: failed to renew DHCP, rebinding
I do DHCP with static assignments from my router. Nothing has changed there recently (uptime on the router was 85 days); I rebooted it yesterday thinking it may have been the source, but checked today and am still seeing the messages.
Looking back at my logs, it looks like this started with 6.7.0-rc8,
as my logs from 6.7.0-rc7 and older do not have any of these entries.
The network is integrated into the mobo; it's a Realtek:
Quote
kernel: r8169 0000:04:00.0 eth0: RTL8168evl/8111evl, d4:3d:7e:4d:e4:89, XID 2c900800, IRQ 26
It looks to be cosmetic, as nothing shows any impact from it.
Looking at the Unraid 6.7.0 release notes, I see:
dhcpcd: version 7.2.0
I'm guessing this is the root cause.
Just FYI, the latest is 7.2.2.
-
2 minutes ago, saarg said:
If you read a few post above yours, you would have known.
The logical place to look is the security forum section about this issue, which had nothing, hence why I posted there and noted it.
-
Sorry for the cross-post, but I came here looking to see if the CVE was already addressed.
-
Official site:
“ZombieLoad,” as it’s called, is a side-channel attack targeting Intel chips, allowing hackers to effectively exploit design flaws rather than injecting malicious code. Intel said ZombieLoad is made up of four bugs, which the researchers reported to the chip maker just a month ago.
Almost every computer with an Intel chip dating back to 2011 is affected by the vulnerabilities. AMD and ARM chips are not said to be vulnerable, unlike with earlier side-channel attacks.
additional info:
https://techcrunch.com/2019/05/14/zombieload-flaw-intel-processors/
and patches: https://techcrunch.com/2019/05/14/intel-chip-flaws-patches-released/
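To check whether the kernel you're running reports ZombieLoad/MDS mitigation status, patched kernels expose it under sysfs (the path below is the standard one; kernels that predate the May 2019 patches simply don't have the file):

```shell
# MDS ("ZombieLoad") status file exists only on kernels carrying the MDS patches.
f=/sys/devices/system/cpu/vulnerabilities/mds
if [ -r "$f" ]; then
  cat "$f"    # e.g. "Mitigation: Clear CPU buffers; SMT vulnerable" (varies by CPU/microcode)
else
  echo "kernel predates MDS reporting"
fi
```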
-
22 minutes ago, rpj2 said:
Fix Common problems was one of the first things I installed. I've never seen a warning about the marvel controller from it.
I can confirm you will get a Marvell warning from the plugin (although a bit vague):
Quote
May 9 17:37:20 husky root: Fix Common Problems: Warning: Marvel Hard Drive Controller Installed
I get it because I have a Supermicro SAS2LP-MV8 controller (based on the Marvell 9480 host controller). "Marvell" never shows up in my syslog anywhere.
As others have noted, some Marvell controllers are fine (mine has never been a problem and is still fine on 6.7.0), but it is more peace of mind to use another brand for reliability.
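My guess (an assumption about how the plugin checks, not verified from its source) is that it matches the PCI device vendor rather than syslog text, which would explain why "Marvell" never appearing in the log doesn't matter. A self-contained illustration with a sample `lspci`-style line for a Marvell-based HBA (device string and revision are illustrative):

```shell
# Sample lspci output line for a Marvell-based controller (illustrative only).
sample='02:00.0 RAID bus controller: Marvell Technology Group Ltd. 88SE9480 SAS/SATA 6Gb/s RAID controller (rev c3)'

# A case-insensitive vendor match flags it regardless of what syslog says.
printf '%s\n' "$sample" | grep -i -c 'marvell'   # prints 1
```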
-
What is different between 6.7.0 final and 6.7.0-rc8?
-
to add additional information that is not as dry:
-
Or keep everything as-is, and give the user a way to toggle what is shown in that array info box at the top: warnings+errors, errors, none.
-
In 6.7.0-rc1 the 'array' box looks like:
The array info block at the top shows 5 utilization warnings (1 is critical, 4 are warnings per the thresholds).
It would be nice if the warning message provided context or went with the actual drive, like:
(using the already-provided classes: orange-text / red-text)
You do trade some readability to add that context, but I don't see it as much different from what is already being done with the temp colors.
-
@bonienl for the USB flash backup in 6.7, will it only do git (and can I use a private repo on GitHub)?
-
3 minutes ago, Cessquill said:
yes
Then that is most likely your issue. I recall seeing somewhere that it's not playing nice with the latest version of Unraid and needs an update.
-
13 minutes ago, Cessquill said:
Have rebooted since parity check, but current syslog has a call trace at 3.40 this morning, closely followed by an out of memory error. Looks like I've got some investigating to do...
Do you use the cache_dirs plugin?
-
I wish the parity history page listed the Unraid version that the check was run on.
-
typo in changelog:
Quote
OOT Intel 10gbps netowrk drivers: ixgbe-5.5.1, ixgbevf-4.5.1
should be network
-
With active streams and the latest 6.6 UI, it's easy to accidentally cancel a stream when you go to click on the 'top' image.
-
16 minutes ago, DZMM said:
@bonienl is nerd-pack incompatible?
I saw people on Reddit mention that dev-pack is incompatible with 6.6 right now.
-
As a crude hack, I can just 'inspect' the source when needed:
Also, thanks to:
I just increased the critical level to 92% on that disk to make things happy until I can decide where to shuffle some things around.
Now I see why you can't just do a flat %, since it's not just global but global + disk-specific.
-
1 hour ago, bonienl said:
1. When you use colored bars (see Settings -> Display Settings -> Used / Free columns) the color green, yellow or red indicates the utilization
2. You can also install the Dynamix System Stats plugin, which gives a graphical representation of the disk utilization.
I happen to be using the colored bars, but the issue is that I really don't know what the percentages are... I would assume red = 90%+?
I was just saying it would be nice if the bars showed the % as a tooltip (x% free or used), or toggled how they are presented when you use the read/write toggle in the top right.
-
Quote
Disk 4 is low on space (91%)
I got a notification about low disk space on a drive. I removed some files from that drive and went to the GUI to see if it's under the threshold now.
I found that I can't easily see what the % used for a drive is.
Any chance you can make it show % used when toggling the read/write display, or add a tooltip to show the %?
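As a sanity check on where a "91%" figure comes from, the math is just used/size; a back-of-the-envelope sketch with hypothetical numbers for a ~4TB data disk (the values are made up for illustration):

```shell
# Hypothetical used/total figures in KB for a ~4TB disk (illustrative only).
used_kb=3640000000
size_kb=4000000000

# Integer percentage, the same rounding a status bar would plausibly show.
pct=$(( used_kb * 100 / size_kb ))
echo "${pct}% used"   # prints "91% used"
```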
6.7.2 -- Stopped. Unclean shutdown detected.
in General Support
Posted · Edited by zoggy
Using latest version, 6.7.2
Unraid was powered off when the UPS died. I started it back up and it did its corrective parity check.
It finished with no errors.
A day later the monthly parity check fired off. I canceled it because I had done one less than 24 hours earlier.
Today I got my new UPS battery, and I needed to shut down my server to hook it back up to the UPS.
Upon stopping the array, I'm now greeted with "Stopped. Unclean shutdown detected."
It looks like the parity check it did on startup didn't clear the startup flag, so now it thinks it was still an unclean shutdown?
Then yesterday I aborted the monthly check:
Thus the exit status from this looks like it is wrongly being used to determine whether the shutdown was clean or not...