zoggy

Everything posted by zoggy

  1. Using the latest version, 6.7.2. Unraid was powered off when my UPS died. I started it back up and it ran its correcting parity check, which finished with no errors. A day later the monthly parity check fired off; I cancelled it since I had done one less than 24 hours ago. Today I got my new UPS battery and needed to shut down the server to hook it back up to the UPS. Upon stopping the array I'm now greeted with "Stopped. Unclean shutdown detected." It looks like the parity check it did on startup didn't clear the startup flag, so it still thinks it was an unclean shutdown? Then again, yesterday I aborted the monthly check: so the exit status from that aborted check looks like it is wrongly being used to decide whether the shutdown was clean or not...
  2. Just to document: I've been seeing "eth0: failed to renew DHCP, rebinding" messages appear in 6.7+. Looking back, it started with 6.7.0-rc8; rc7 and older did not throw the message. Seems to be cosmetic at least.
     6.7.0-rc6 == good // dhcpcd: version 7.1.1
     6.7.0-rc7 == good // no dhcpcd changes; log's last entry was May 5 12:32:34
     6.7.0-rc8 == bad  // dhcpcd: version 7.2.0; May 6 10:02:35 husky dhcpcd[1603]: eth0: failed to renew DHCP, rebinding
     Looking at the source, I don't see anything that stands out: https://github.com/rsmarples/dhcpcd/compare/dhcpcd-7.1.1...dhcpcd-7.2.0
  3. any chance we can get dhcpcd updated?
  4. just to note, I upgraded from 6.7.0 -> 6.7.1-rc1 -> 6.7.1-rc2 without problems. The 6.7.0 cosmetic bug still shows up every 20 hours:
  5. ok, here we go: https://openbenchmarking.org/result/1906037-HV-190603PTS41,1906033-HV-190603PTS92
  6. whew, okay. I have an Intel i3-3220 CPU and wanted to see how much performance I could get back by disabling the mitigations as noted. I upgraded to 6.7.1-rc1 and spun up the Phoronix Test Suite in a Docker container, focusing on the CPU tests -- https://openbenchmarking.org/suite/pts/cpu The array was running but no activity was ongoing, and no other dockers were active. A test suite cycle took about 3 hours per run; each test ran 3 times and the deviation is noted. I ran the first set as-is with the mitigations in place, then rebooted with the syslinux cfg modified to disable the mitigations (some remain due to the microcode in use) and re-ran the same tests to compare. results: https://openbenchmarking.org/result/1906037-HV-190603PTS41,1906033-HV-190603PTS92 You can see a 2-14% increase on various things. The ctx-clock micro-benchmark, which looks at context-switching overhead, shows the biggest impact, since the Spectre/Meltdown mitigations target that specific area: 87% difference! hope this helps for those curious
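     For reference, the syslinux cfg change mentioned above is a one-line kernel parameter edit. A minimal sketch of what /boot/syslinux/syslinux.cfg would look like, assuming a kernel new enough to honor the combined `mitigations=off` switch (older kernels need individual flags such as `nopti` and `nospectre_v2` instead):

     ```
     label Unraid OS
       menu default
       kernel /bzimage
       append mitigations=off initrd=/bzroot
     ```

     Note that, as mentioned above, some mitigations applied by the CPU microcode itself remain active regardless of this flag.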
  7. anyone else seeing this in 6.7.0 every 5 hours? I do DHCP with static assignments from my router; nothing has changed there recently (uptime on the router was 85 days). I rebooted it yesterday thinking it may have been the source, but checked today and am still seeing the messages. Looking back at my logs, it started on 6.7.0-rc8, as my logs from 6.7.0-rc7 and older do not have any of the entries. The network is integrated into the mobo; it's a Realtek. It looks to be cosmetic, as nothing shows any impact from it. Looking at the unraid 6.7.0 release notes I see: dhcpcd: version 7.2.0. Guessing this is the root cause. just fyi, latest is 7.2.2
  8. the logical place to look is the security forum section about this issue, which had nothing; hence why I posted there and noted it.
  9. @limetech sorry for the cross post, but I came here looking to see if the CVE was already addressed.
  10. official site: https://zombieloadattack.com/ "ZombieLoad," as it's called, is a side-channel attack targeting Intel chips, allowing hackers to effectively exploit design flaws rather than inject malicious code. Intel said ZombieLoad is made up of four bugs, which the researchers reported to the chip maker just a month ago. Almost every computer with an Intel chip dating back to 2011 is affected by the vulnerabilities. AMD and ARM chips are not said to be vulnerable, unlike with earlier side-channel attacks. additional info: https://techcrunch.com/2019/05/14/zombieload-flaw-intel-processors/ and patches: https://techcrunch.com/2019/05/14/intel-chip-flaws-patches-released/
  11. I can confirm you will get a Marvell warning from the plugin (although a bit vague). I get it because I have a Supermicro SAS2LP-MV8 controller (based on the Marvell 9480 host controller). "Marvell" never shows up in my syslog anywhere. As others have noted, some Marvell controllers are fine (mine has never been a problem and is still fine on 6.7.0), but it is more peace of mind to use another brand for reliability.
  12. what is different from 6.7.0 final and 6.7.0-rc8?
  13. just curious if this RC is going to be final.. as 6.7.0rc started in jan
  14. you can just ssh to your unraid box and run "ifconfig eth0" for the info. The unraid GUI dashboard shows the info as well, but it's more tedious, as you have to change the drop-down from general info > counters > errors to see everything.
  15. from your test, it looks like you did TCP testing and saw the problem. So now you could try UDP and see if it also has the issue. But regardless, you should now packet capture and see what happens at the dips. You can use Wireshark and do an IO graph to make it easy to align and search. example: Side question: what MTU are you using, and have you checked your NIC stats to make sure you don't have runts/giants and/or errors?
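     To illustrate what the IO graph buys you, here is a hypothetical sketch (the function name and sample data are made up, not from any capture) that bins packet samples into one-second buckets and flags buckets well below the median throughput, which is roughly the dip-hunting you'd do by eye in Wireshark:

     ```javascript
     // Bucket (timestampSeconds, bytes) samples into bins of binSize seconds,
     // then flag any bin whose byte total falls below half the median bin --
     // a crude stand-in for eyeballing dips on a Wireshark IO graph.
     function findDips(samples, binSize = 1) {
       const bins = new Map();
       for (const [ts, bytes] of samples) {
         const bin = Math.floor(ts / binSize);
         bins.set(bin, (bins.get(bin) || 0) + bytes);
       }
       const totals = [...bins.values()].sort((a, b) => a - b);
       const median = totals[Math.floor(totals.length / 2)];
       return [...bins.entries()]
         .filter(([, bytes]) => bytes < median / 2)
         .map(([bin]) => bin * binSize);
     }

     // Made-up sample: steady ~1100 bytes/s with a dip in the third second.
     const samples = [[0.1, 1000], [0.5, 1200], [1.2, 1100], [2.3, 100], [3.4, 1050]];
     console.log(findDips(samples)); // flags the bin starting at 2s
     ```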
  16. you should load up iperf3 / ethr and test internally from within your own network. Eliminate as many networking elements and unknown variables as you can. You may find out your ISP just has bad peering.. or your firewall rules are the cause.. etc
  17. updated from previous rc. so far no fireworks
  18. to add additional information that is not as dry: https://www.bleepingcomputer.com/news/security/runc-vulnerability-gives-attackers-root-access-on-docker-kubernetes-hosts/
  19. mover is fine for me:
      # grep mover /var/log/syslog
      Jan 26 08:05:01 husky root: mover: started
      Jan 26 08:14:54 husky root: mover: finished
      Jan 27 01:13:42 husky emhttpd: shcmd (4862): /usr/local/sbin/mover |& logger &
      Jan 27 01:13:42 husky root: mover: started
      Jan 27 01:15:52 husky root: mover: finished
  20. main announcement thread needs to be updated to point to this RC instead of rc1
  21. man, you have a lot of stuff going on your box.. what are the specs? (curious what's needed for all those dockers/VMs/etc.) I finally got to the part in the video where you show the dashboard, and I can see the specs there. I also agree about the blue color when the switch is toggled on (ex: dockers).
  22. yes, but when you aren't on the dashboard that banner info box is nice to have. So really the redundant one is on the dashboard, but it does make things nice when you want to take a screenshot: you don't have to include the whole window just to capture the stats. You can collapse the server info box and leave it like that, where it only takes up a little room.
  23. backend returns:
      <tr><td class='green-text'>Online</td><td class='green-text'>100.0 Percent</td><td class='green-text'>61.9 Minutes</td><td class='green-text'>865 Watts</td><td class='green-text'>86 Watts</td><td class='green-text'>10.0 Percent</td></tr>
      which the webgui parses with:
      $('#ups_loadpct').html(data[5]+(data[5].search('-')==-1 ? '%':'')+(data[4].search('-')==-1 ? ' ('+data[4].replace(' Watts','W')+')':''));
      but as you can see from my sample data, there is no '-' in it. If I replace that line to just do data[5] + data[4], then it appears just fine. In case you need it:
      # /sbin/apcaccess | egrep "STATUS|BCHARGE|TIMELEFT|NOMPOWER|LOADPCT"
      STATUS   : ONLINE
      LOADPCT  : 9.0 Percent
      BCHARGE  : 100.0 Percent
      TIMELEFT : 61.9 Minutes
      NOMPOWER : 865 Watts
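      To make the point standalone, here is a hypothetical sketch (`formatUpsLoad` is a name I made up; this is not the actual webgui code) of formatting that handles the unit strings apcaccess actually reports, rather than testing for a '-' that never appears in the data:

      ```javascript
      // apcaccess reports values like "10.0 Percent" and "865 Watts",
      // so just rewrite the unit words instead of checking for '-'.
      function formatUpsLoad(loadPct, nomPower) {
        return loadPct.replace(' Percent', '%') +
               ' (' + nomPower.replace(' Watts', 'W') + ')';
      }

      console.log(formatUpsLoad('10.0 Percent', '865 Watts')); // "10.0% (865W)"
      ```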
  24. that's shocking to hear... I figured everyone would be using a UPS on their NAS in this day and age
  25. or keep everything as is, and give the user a way to toggle what is shown in that array info box at the top: warnings+errors, errors, or none.