zoggy

Members
  • Content Count: 569
  • Joined

  • Last visited

Community Reputation: 15 Good

About zoggy

  • Rank: Advanced Member


  1. This past time I did grab diags (see post #2).
  2. I'm not debating that. It was an unclean shutdown, and on boot it said so, correctly. Then when it finished the parity check and brought the array online, it cleared the flag. So far, all good. Then a day later the monthly parity check started; I aborted it since parity had been checked just a day prior. At this point the UI was showing an unclean shutdown again, which in my eyes is wrong. It made me think that if I tried to start the array again it would force me to do another parity check. However, I found that rebooting the box 'fixed' the UI's belief that the shutdown was unclean, and I could start the array just fine again.
  3. The first time, yes, but the unclean status should have been wiped once the check completed (which it did; it just doesn't look that way if you abort a subsequent parity check).
  4. Went ahead and just powered down, booted up, and started the array. No warning or anything about an unclean shutdown, and no requirement to run a parity check. So it looks like it was just a bug that it still said unclean shutdown after I aborted the parity check that followed the unclean one.
  5. Attaching diags: husky-diagnostics-20190829-1958.zip
  6. Using the latest version, 6.7.2. Unraid was powered off when the UPS died. I started it back up and it did its correcting parity check, which finished with no errors. A day later the monthly parity check fired off; I cancelled it because I had done one less than 24 hours earlier. Today I got my new UPS battery and needed to shut down my server to hook it back up to the UPS. Upon stopping the array I'm now greeted with "Stopped. Unclean shutdown detected." It looks like the parity check it did on startup didn't clear the startup flag, so it thinks there was still an unclean shutdown? Then again, yesterday I aborted the monthly check, so the exit status of that check looks like it is wrongly being used to decide whether the shutdown was clean (see the sketch after this list).
  7. Just to document, I've been seeing "eth0: failed to renew DHCP, rebinding" messages appear in 6.7+. Looking back, it started with 6.7.0-rc8; rc7 and older did not throw the message. Seems to be cosmetic at least.
     6.7.0-rc6 == good // dhcpcd: version 7.1.1
     6.7.0-rc7 == good // no dhcpcd changes; the log's last entry was May 5 12:32:34
     6.7.0-rc8 == bad  // dhcpcd: version 7.2.0; May 6 10:02:35 husky dhcpcd[1603]: eth0: failed to renew DHCP, rebinding
     Looking at the source, I don't see anything that stands out: https://github.com/rsmarples/dhcpcd/compare/dhcpcd-7.1.1...dhcpcd-7.2.0 (see the renew/rebind sketch after this list)
  8. any chance we can get dhcpcd updated?
  9. Just to note: upgraded from 6.7.0 -> 6.7.1-rc1 -> 6.7.1-rc2 without problems. The 6.7.0 cosmetic bug still shows every 20 hours.
  10. ok, here we go: https://openbenchmarking.org/result/1906037-HV-190603PTS41,1906033-HV-190603PTS92
  11. Whew, okay: I have an Intel i3-3220 CPU and wanted to see how much performance I could get back by disabling the mitigations as noted. I upgraded to 6.7.1-rc1 and spun up the Phoronix Test Suite in a Docker container, focusing on the CPU tests: https://openbenchmarking.org/suite/pts/cpu The array was running but no activity was ongoing, and no other Dockers were active. A test-suite cycle took about 3 hours per run; each test ran 3 times and the deviation is noted. I ran the first set as-is with the mitigations in place, then rebooted with a syslinux cfg modification to disable the mitigations (some remain due to the microcode in use) and re-ran the same tests to compare (the boot-parameter change is sketched after this list). Results: https://openbenchmarking.org/result/1906037-HV-190603PTS41,1906033-HV-190603PTS92 You can see a 2-14% increase on various things. The ctx-clock micro-benchmark, which measures context-switching overhead, shows the biggest impact since Spectre/Meltdown mitigations target exactly that area, which is why it is the most drastic result reported: an 87% difference! Hope this helps those who are curious.
  12. Anyone else seeing this in 6.7.0 every 5 hours? I do DHCP with static assignments from my router; nothing has changed there recently (uptime on the router was 85 days). I rebooted it yesterday thinking it may have been the source, but checked today and am still seeing the messages. Looking back at my logs, it started with 6.7.0-rc8, as my logs from 6.7.0-rc7 and older do not have any of the entries. The network is integrated into the mobo; it's a Realtek. It looks to be cosmetic, as nothing shows any impact from it. Looking at the Unraid 6.7.0 release notes I see: dhcpcd: version 7.2.0. Guessing this is the root cause. Just FYI, latest is 7.2.2.
  13. The logical place to go look is the security forum section; it had nothing about this issue, hence why I posted there and noted it.
  14. @limetech sorry for the cross-post, but I came here looking to see whether the CVE had already been addressed.
  15. Official site: https://zombieloadattack.com/ “ZombieLoad,” as it's called, is a side-channel attack targeting Intel chips, allowing hackers to effectively exploit design flaws rather than inject malicious code. Intel said ZombieLoad is made up of four bugs, which the researchers reported to the chip maker just a month ago. Almost every computer with an Intel chip dating back to 2011 is affected by the vulnerabilities. Unlike with earlier side-channel attacks, AMD and ARM chips are not said to be vulnerable. Additional info: https://techcrunch.com/2019/05/14/zombieload-flaw-intel-processors/ and patches: https://techcrunch.com/2019/05/14/intel-chip-flaws-patches-released/ (a quick check of the kernel's reported MDS status is sketched after this list).
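
Sketch for the unclean-shutdown reports above (posts 2-6): the behavior suggests the UI flag is being derived from the most recent parity check's exit status instead of being cleared once the post-shutdown check completes. A minimal, hypothetical Python model of that suspicion follows; the names and structure are made up for illustration, not Unraid's actual code.

    # Hypothetical model of the suspected unclean-shutdown flag bug;
    # class and field names are illustrative only.
    class Array:
        def __init__(self):
            self.unclean = False  # the flag the UI reports

        def boot(self, clean_shutdown: bool):
            # Correct behavior on boot: power loss marks the array unclean.
            if not clean_shutdown:
                self.unclean = True

        def parity_check_finished(self, completed: bool):
            # Suspected bug: the flag tracks the *last* check's exit status,
            # so aborting a later routine check re-flags the array.
            self.unclean = not completed
            # Expected behavior would only ever clear the flag here:
            # if completed: self.unclean = False

    a = Array()
    a.boot(clean_shutdown=False)       # UPS died -> flagged unclean, correct
    a.parity_check_finished(True)      # startup check completes -> cleared
    a.parity_check_finished(False)     # monthly check aborted -> re-flagged
    print(a.unclean)                   # True: matches the UI behavior seen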
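For post 11, the syslinux cfg modification mentioned amounts to adding a kernel boot parameter; on kernels carrying the relevant patches, mitigations=off disables the Spectre/Meltdown-class mitigations in one switch. The stanza below is only a sketch of a typical Unraid syslinux.cfg entry; verify it against your own file before editing.

    label Unraid OS
      menu default
      kernel /bzimage
      append mitigations=off initrd=/bzroot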
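For posts 7 and 12, the message is consistent with ordinary DHCP client behavior under RFC 2131: at T1 (by default half the lease time) the client unicasts a renewal to its server, and if that gets no reply it falls back to a broadcast rebind at T2 (7/8 of the lease), which normally succeeds, which is why the message looks cosmetic. The "every 5 hours" cadence would match T1 on a 10-hour lease. An illustrative Python sketch of the timing, not dhcpcd's actual code:

    # Illustrative DHCP renew/rebind timing using RFC 2131 defaults;
    # this is not dhcpcd's implementation.
    def lease_timers(lease_seconds: float):
        t1 = lease_seconds * 0.5    # RENEWING: unicast to the leasing server
        t2 = lease_seconds * 0.875  # REBINDING: broadcast to any server
        return t1, t2

    def on_t1(renew_answered: bool) -> str:
        if renew_answered:
            return "lease renewed, timers reset"
        # This is the point where a client like dhcpcd would log something
        # like "eth0: failed to renew DHCP, rebinding".
        return "no reply to unicast renewal, entering REBINDING"

    t1, t2 = lease_timers(10 * 3600)   # a 10-hour lease, for example
    print(f"renew at {t1 / 3600:.1f} h, rebind at {t2 / 3600:.1f} h")
    print(on_t1(renew_answered=False))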
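For post 15, kernels carrying the MDS patches report their status through sysfs, so you can check what your box says before and after updating. A quick Python check using the kernel's standard vulnerabilities interface:

    # Print the kernel's reported MDS (ZombieLoad) mitigation status.
    from pathlib import Path

    mds = Path("/sys/devices/system/cpu/vulnerabilities/mds")
    if mds.exists():
        print("mds:", mds.read_text().strip())
    else:
        print("kernel predates the MDS patches; status unknown")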