
zoggy

Posts posted by zoggy

  1. Using latest version, 6.7.2

     

Unraid was powered off when the UPS died. I started it back up and it ran its correcting parity check.

    Finished with no errors.

    A day later the monthly parity check fires off. I cancel that because I did one less than 24 hours ago.

    Today I get my new UPS battery and I need to shut down my server to hook it back up on the ups.

    Upon stopping the array now I'm greeted with "Stopped. Unclean shutdown detected.".

     

Looks like the parity check it did on startup didn't clear the startup flag, so now it thinks it was still an unclean shutdown?


     

    Quote

     

    Aug 26 22:53:33 husky emhttpd: unclean shutdown detected

    ...

    Aug 26 22:56:19 husky kernel: mdcmd (64): check correct

    Aug 26 22:56:19 husky kernel: md: recovery thread: check P ...

    Aug 26 22:56:19 husky kernel: md: using 1536k window, over a total of 7814026532 blocks.

    ...

    Aug 27 19:07:21 husky kernel: md: sync done. time=72662sec

    Aug 27 19:07:21 husky kernel: md: recovery thread: exit status: 0

Then yesterday I aborted the monthly check:

    Quote

    Aug 29 01:30:01 husky kernel: mdcmd (98): check

    Aug 29 01:30:01 husky kernel: md: recovery thread: check P ...

    Aug 29 01:30:01 husky kernel: md: using 1536k window, over a total of 7814026532 blocks.

    Aug 29 03:20:02 husky emhttpd: req (3): startState=STARTED&file=&csrf_token=***************&cmdNoCheck=Cancel

    Aug 29 03:20:02 husky kernel: mdcmd (99): nocheck Cancel

    Aug 29 03:20:02 husky kernel: md: recovery thread: exit status: -4

So it looks like the exit status from this cancelled check is wrongly being used to decide whether the shutdown was clean or not...
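To illustrate the suspicion (this is just a sketch, not actual Unraid code): if the stop logic only consults the exit status of the *most recent* check, a cancelled monthly check (-4) would mask the earlier correcting check that finished with status 0.

```shell
# Hypothetical sketch of the suspected flag logic -- not real Unraid code.
flag_after_check() {
  # $1 = recovery thread exit status from the last mdcmd check
  if [ "$1" -eq 0 ]; then
    echo "clean"     # check completed: unclean-shutdown flag cleared
  else
    echo "unclean"   # any other status (including a cancel) leaves it set
  fi
}

flag_after_check 0    # the post-UPS correcting check -> clean
flag_after_check -4   # the cancelled monthly check   -> unclean
```

With logic like this, the startup check would have cleared the flag, but cancelling the monthly check re-marks the array as unclean, which matches the behavior seen above.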

     

  2. Just to document, I've been seeing "eth0: failed to renew DHCP, rebinding" messages appear in 6.7+
Looking back, it started with 6.7.0-rc8; rc7 and older did not throw the message. It seems to be cosmetic at least.

     

    6.7.0-rc6 == good // dhcpcd: version 7.1.1
    6.7.0-rc7 == good // no dhcpcd changes, logs last entry was: May  5 12:32:34
    6.7.0-rc8 == bad  // dhcpcd: version 7.2.0

    May  6 10:02:35 husky dhcpcd[1603]: eth0: failed to renew DHCP, rebinding

     

Looking at the source, I don't see anything that stands out:

    https://github.com/rsmarples/dhcpcd/compare/dhcpcd-7.1.1...dhcpcd-7.2.0

     

Whew, okay. I have an Intel i3-3220 CPU and wanted to see how much performance I could get back by disabling the mitigations as noted.

I upgraded to 6.7.1-rc1, spun up the Phoronix Test Suite in a Docker container, and focused on the CPU tests -- https://openbenchmarking.org/suite/pts/cpu

    The array was running but no activity was ongoing, and no other dockers were active.

A test-suite cycle took about 3 hours per run; each test ran 3 times and the deviation is noted.

I ran the first set as-is with the mitigations in place, then rebooted with a syslinux cfg modification to disable the mitigations (some remain due to the microcode in use) and re-ran the same tests to compare.
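For reference, the syslinux change was along these lines (a sketch only; the exact flags depend on your kernel version -- `mitigations=off` requires a kernel new enough to support it, otherwise you have to list the individual flags such as `nopti nospectre_v1 nospectre_v2`):

```
label Unraid OS
  kernel /bzimage
  append mitigations=off initrd=/bzroot
```

After rebooting you can verify what is still active with `grep . /sys/devices/system/cpu/vulnerabilities/*`.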

     

    results:

    https://openbenchmarking.org/result/1906037-HV-190603PTS41,1906033-HV-190603PTS92

     

You can see a 2-14% increase on various things.

The ctx-clock micro-benchmark, which measures context-switching overhead, shows the big impact since Spectre/Meltdown.

That's why it is the most drastic result reported, as it targets that specific area... an 87% difference!

     

Hope this helps those who are curious.

     

     

    • Like 2
Anyone else seeing this in 6.7.0 every 5 hours:

    Quote

    dhcpcd[1589]: eth0: failed to renew DHCP, rebinding

     

I do DHCP with static assignments from my router. Nothing has changed there recently (uptime on the router was 85 days); I rebooted this yesterday thinking it may have been the source, but checked today and am still seeing the messages.

     

Looking back at my logs, it looks like it started with 6.7.0-rc8,

as my logs from 6.7.0-rc7 and older do not have any of these entries.

     

The network is integrated into the mobo; it's a Realtek:

    Quote

    kernel: r8169 0000:04:00.0 eth0: RTL8168evl/8111evl, d4:3d:7e:4d:e4:89, XID 2c900800, IRQ 26

     

Looks to be cosmetic, as nothing shows any impact from it.

     

Looking at the Unraid 6.7.0 release notes, I see:

dhcpcd: version 7.2.0

I'm guessing this is the root cause.

     

Just FYI, the latest is 7.2.2.
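A quick way to confirm the ~5-hour cadence is just counting the message in the syslog. A small sketch (the sample lines here are illustrative; pipe your real `/var/log/syslog` through it):

```shell
# Count the dhcpcd rebinding messages in a log stream,
# e.g.:  cat /var/log/syslog | count_rebinds
count_rebinds() {
  grep -c 'failed to renew DHCP, rebinding'
}

printf '%s\n' \
  'May  6 10:02:35 husky dhcpcd[1603]: eth0: failed to renew DHCP, rebinding' \
  'May  6 15:02:36 husky dhcpcd[1603]: eth0: failed to renew DHCP, rebinding' \
  'May  6 15:02:40 husky kernel: unrelated entry' | count_rebinds
```

Comparing the timestamps of the matches will show whether the interval lines up with the DHCP lease time handed out by the router.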

Official site:

    https://zombieloadattack.com/

     

    “ZombieLoad,” as it’s called, is a side-channel attack targeting Intel chips, allowing hackers to effectively exploit design flaws rather than injecting malicious code. Intel said ZombieLoad is made up of four bugs, which the researchers reported to the chip maker just a month ago.

Almost every computer with an Intel chip dating back to 2011 is affected by the vulnerabilities. Unlike earlier side-channel attacks, AMD and ARM chips are not said to be vulnerable.

     

    additional info:

    https://techcrunch.com/2019/05/14/zombieload-flaw-intel-processors/

    and patches: https://techcrunch.com/2019/05/14/intel-chip-flaws-patches-released/
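On a patched kernel you can check your MDS (ZombieLoad) exposure directly from sysfs. The file only exists on kernels that already carry the MDS fixes, so this sketch falls back to a message otherwise:

```shell
# Report the kernel's MDS (ZombieLoad) mitigation status, if available.
mds_status() {
  f=/sys/devices/system/cpu/vulnerabilities/mds
  if [ -r "$f" ]; then
    cat "$f"    # e.g. "Mitigation: Clear CPU buffers; SMT vulnerable"
  else
    echo "not reported (kernel predates the MDS sysfs entry)"
  fi
}

mds_status
```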

     

     

     

  6. 22 minutes ago, rpj2 said:

    Fix Common problems was one of the first things I installed. I've never seen a warning about the marvel controller from it.

     

I can confirm you will get a Marvell warning from the plugin (although it's a bit vague):

    Quote

    May 9 17:37:20 husky root: Fix Common Problems: Warning: Marvel Hard Drive Controller Installed

I get it because I have a Supermicro SAS2LP-MV8 controller (based on the Marvell 9480 host controller). "Marvell" never shows up in my syslog anywhere.

As others have noted, some Marvell controllers are fine (mine has never been a problem and is still fine on 6.7.0), but it is more a matter of peace of mind to use another brand for reliability.
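Since the plugin's warning doesn't say which device tripped it, a quick lspci grep shows exactly which Marvell-based controller is present (with a fallback message if none is found):

```shell
# List any Marvell-based PCI devices; -nn adds vendor:device IDs
# (Marvell uses vendor IDs 1b4b for newer parts and 11ab for older ones).
lspci -nn | grep -i 'marvell' || echo "no Marvell controller found"
```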

In 6.7.0-rc1 the 'Array' box looks like:

     

    firefox_2019-01-22_15-48-42.png

     

    the array info block at the top shows 5 utilization warnings (1 is critical, 4 are warning per thresholds).

     

It would be nice if the warning message provided context or was attached to the actual drive, like:

    firefox_2019-01-22_15-57-22.png

     

(This uses the already-provided classes: orange-text / red-text.)

You do trade some readability to add that context, but I don't see it as much different from what is already being done with the temp colors.

     

     

  8. 1 hour ago, bonienl said:

    1. When you use colored bars (see Settings -> Display Settings -> Used / Free columns) the color green, yellow or red indicates the utilization

    2. You can also install the Dynamix System Stats plugin, which gives a graphical representation of the disk utlization.

I happen to be using the colored bars, but the issue is that I really don't know what the percentages are... I would assume red = 90%+?

     

    firefox_2018-09-24_14-56-56.png

     

I was just saying it would be nice if the bars showed a percentage as a tooltip (x% free or used), or just toggled how they are presented when you use the read/write toggle in the top right.
