Frank1940 Posted December 27, 2019
2 minutes ago, ssb201 said: "Upgraded my server (from 6.6.6) last week and ran into two issues:"
If you are expecting anyone to respond in a release thread, you need to include the Diagnostics file: Tools >>> Diagnostics. (If you see this within thirty minutes of your initial posting, include it in that post.)
trurl Posted December 27, 2019
19 minutes ago, Frank1940 said: "(If you see this in the next thirty minutes after your initial posting, include in that post.)"
I always recommend putting any new information in a new post. That is why I always say: Tools - Diagnostics, attach complete diagnostics zip file to your NEXT post. New information is easier to find in a new post, and the thread will show us there are new posts to be read. If you just attach it to a previous post, the thread won't show anything unread, so it might get skipped over when we are looking for new things to read and help with.
ssb201 Posted December 27, 2019
I did not think to upload the diagnostics because I assumed it would not be that interesting, since the logs are completely full of the failed spindown. Here are the diagnostics: tower-diagnostics-20191226-2145.zip
Interstellar Posted December 27, 2019
Before I open a bug topic, can people using Safari check whether opening a sub-window (e.g. the log) asks you to log in again?
Safari: I get the login window on every pop-up (e.g. the 'Log Summary' button), which when you log in just takes you back to the home 'Dashboard' page...
Chrome: Works as intended, I assume (everything works as per pre-6.8).
I use Safari 99.999% of the time, so this is a minor annoyance. (This happens on both my iMac and a newly installed MBP, so it isn't a cache issue.)
ghost82 Posted December 27, 2019
42 minutes ago, Interstellar said: "can people using Safari check that when you open a sub-window (e.g. the log) that it asks you to login again?"
Confirmed with Safari 13.0.4. Firefox 71.0 doesn't show this issue. Not sure, but it seems to be a bug in Safari rather than Unraid.
bonienl Posted December 27, 2019
42 minutes ago, Interstellar said: "Just before I open a bug topic, can people using Safari check"
This is a known issue when using Safari; any other browser works properly. No solution yet...
wgstarks Posted December 27, 2019
For me, once I log in I can navigate back to the tab I was trying to access without having to log in again.
jpowell8672 Posted December 27, 2019
Upgraded from 6.7.2 to 6.8 with only one problem, the pfSense VM hang issue described here:
I was able to fix it with @bastl's workaround. Thanks
nraygun Posted December 27, 2019
On 12/26/2019 at 10:41 AM, bonienl said: "Browser issue. Usually corrupted browser cache."
Ugh. After some experimenting, I found the culprit: the MyEtherWallet Chrome extension. Toggling it makes the issue appear/disappear. Removing it. Not sure why that extension would mess with web pages at all. I cleared a bunch of stuff in my system too. Lesson learned: next time, start with extensions.
TechBLT Posted December 28, 2019
On 12/13/2019 at 8:18 AM, wgstarks said: "Hurrah. I can get a terminal window in Safari now. Outstanding. 👍👍👍👍"
What version of macOS are you using and what version of Safari? I am running Catalina and Safari 13.0.4 and I can't get a terminal window to open in Safari. Thanks!
wgstarks Posted December 28, 2019
4 hours ago, TechBLT said: "What version of MacOS are you using and what version of Safari?"
unRAID 6.8
macOS 10.15.2
Safari 13.0.4 (15608.4.9.1.3)
Frank1940 Posted December 28, 2019
4 hours ago, TechBLT said: "I am running Catalina and Safari 13.0.4 and I can't get a terminal window to open in Safari."
Make sure that you have whitelisted the Unraid GUI address in any ad blockers and popup managers.
ken-ji Posted December 29, 2019
Tried upgrading a really old system running 6.7.2 with SATA Silicon Image port multipliers. The whole system bombs out with read and write "errors" on the port multipliers until a disk is disabled. Rolled back without issues, other than an uneventful disk rebuild. I guess it's time to try upgrading this system. Posting the diagnostics for reference.
tower-diagnostics-20191229-0353.zip <- 6.7.2
tower-diagnostics-20191228-1024.zip <- 6.8.0
Dazog Posted December 29, 2019
https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.4-4.19-4.14-MCE-Fix-TR
Looks like the next point release for 6.8 will have the new Threadrippers working properly, for those that have them.
EGOvoruhk Posted December 30, 2019
Upgraded to 6.8, and about 8 hours later both my parity drives popped up as disabled simultaneously after some normal usage. Curious if there's anything in the logs that may hint at why they would both drop at the same time.
Note: Prior to the 6.8 upgrade I had run a full parity check with zero errors. Then I upgraded my 2x8TB dual parity with a new 10TB drive (now 1x8TB, 1x10TB) and the parity sync passed with zero errors; then I upgraded a 4TB data drive with the retired 8TB parity drive, and that sync also passed with zero errors. Then I made a backup of my flash and upgraded to 6.8 without issues. All my VMs and Dockers were running, and everything seemed normal until later, when both parity drives popped up with red Xs.
I shut down all VMs/Dockers, ran a SMART test on both parity drives and they came back fine, grabbed my diagnostics, stopped the array, and powered down.
void-diagnostics-20191228-1756.zip
BRiT Posted December 30, 2019
Problem begins with Disk 29, then Disk 0: read and write issues.
Dec 28 17:35:19 Void kernel: mpt2sas_cm0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
### [PREVIOUS LINE REPEATED 6 TIMES] ###
Dec 28 17:35:19 Void kernel: sd 8:0:0:0: [sdb] tag#6204 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
Dec 28 17:35:19 Void kernel: sd 8:0:0:0: [sdb] tag#6204 Sense Key : 0x2 [current]
Dec 28 17:35:19 Void kernel: sd 8:0:0:0: [sdb] tag#6204 ASC=0x4 ASCQ=0x0
Dec 28 17:35:19 Void kernel: sd 8:0:0:0: [sdb] tag#6204 CDB: opcode=0x8a 8a 00 00 00 00 02 35 7e e8 c0 00 00 04 00 00 00
Dec 28 17:35:19 Void kernel: print_req_error: I/O error, dev sdb, sector 9487444160
Dec 28 17:35:19 Void kernel: md: disk29 write error, sector=9487444096
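For readers wanting to do the same triage on their own server: errors like these can be pulled out of the syslog with a single grep over the relevant kernel messages. A minimal sketch — to keep it self-contained it filters a small sample file built from the excerpt above rather than the real log; on an actual Unraid box you would point grep at /var/log/syslog instead.

```shell
# Build a tiny sample log (lines taken from the excerpt above) so the
# filter can be demonstrated without touching a live system.
cat > /tmp/sample-syslog.txt <<'EOF'
Dec 28 17:35:19 Void kernel: mpt2sas_cm0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Dec 28 17:35:19 Void kernel: print_req_error: I/O error, dev sdb, sector 9487444160
Dec 28 17:35:19 Void kernel: md: disk29 write error, sector=9487444096
Dec 28 17:35:20 Void kernel: md: recovery thread: exit status: 0
EOF

# Match block-device I/O errors and md read/write errors per array disk.
# On a real server, replace /tmp/sample-syslog.txt with /var/log/syslog.
grep -E 'I/O error|disk[0-9]+ (read|write) error' /tmp/sample-syslog.txt
```

This prints only the two error lines from the sample, which is usually enough to see which array slots (and which sd devices) are involved before digging into the full diagnostics zip.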
EGOvoruhk Posted December 30, 2019
50 minutes ago, BRiT said: "Problem begins with Disk 29 then Disk 0, reads and write issues."
Those drives were sitting physically untouched in a rackmount server for days (Disk0 never actually being physically touched at all, as that was the 8TB parity that was left intact), and went through 2 full checks. I'm curious why both would fail at the exact same time (within 3 seconds, per the log). Could it be indicative of a different issue? They're connected via SFF-8087 fanout cables, and the controller/SFF-8087 end was never unplugged, and hasn't been for over a year, so it shouldn't be a seating issue. They also passed SMART tests after the failure without ever being touched, so obviously not a cabling issue.
Just wondering where I should be focusing my attention. One drive would make sense; both simultaneously throws me for a loop.
JorgeB Posted December 30, 2019
25 minutes ago, EGOvoruhk said: "One drive would make sense, both simultaneously throws me for a loop"
Difficult to say if it will help, but you could update the LSI firmware to the latest, 20.00.07.00.
laterdaze Posted December 31, 2019
https://hardforum.com/threads/understand-mpt2sas-kernel-messages.1828901/
Seems like an identical problem.
Edrikk Posted January 2, 2020
Fully realize that this is more of a usability/user experience request than anything else: @bonienl, is it possible to have the new login screen put focus on the username field when it loads? Currently one has to click/tab into the username field to start the process... It's an extra click that doesn't seem necessary.
PS. I'm on Windows 10, and have sanity-checked the behavior on Firefox and Chrome.
Aceriz Posted January 3, 2020
I upgraded a week or two ago from 6.7.2 to 6.8. No problems prior to the update (other than low server space, which I have rectified; new hard drives are in the process of preclearing). The problem is that I have been getting an error message saying the syslog is filling quickly (went to 70% within 2 days). While trying to figure this out I have pulled the syslog and diagnostics; see attached. I am not sure what it means or what I should do. As I read through the syslog I keep seeing the line below show up. It has various CPUs listed, not just "7", so I'm not sure what is causing the problem.
" CPU: 33 PID: 13783 Comm: CPU 7/KVM Tainted: G W 4.19.88-Unraid #1 "
I have been running preclear on a new drive I am planning to install in the server; it should be finished in the next day, so I can reset and post new diagnostics once done, in case that clears up anything coming from that process. Any help would be appreciated, as I am not sure exactly what is causing the problem. If any additional information is needed or there are things to test, please just let me know and I will do them.
rizznetunraid-diagnostics-20200103-1115.zip
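For anyone hitting the same "log is filling" warning: before uploading diagnostics it can help to see how full the log filesystem actually is (Unraid keeps /var/log on a small tmpfs, which is why 70% can arrive quickly) and which file is growing. A minimal sketch, assuming a standard Linux layout:

```shell
# Show how full the filesystem backing /var/log is (the Use% column is
# what the Unraid warning is about).
df -h /var/log

# List the five largest entries under /var/log so the fastest-growing
# log file stands out; errors for unreadable files are discarded.
du -ah /var/log 2>/dev/null | sort -rh | head -n 5
```

If one file (usually syslog itself) dominates, `tail -f` on it while the problem workload runs will show which messages are repeating.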
Squid Posted January 3, 2020
47 minutes ago, Aceriz said: "I have been getting a error message saying Syslog is filling quickly. (Went to 70% within 2 days)"
Constant errors about the VM failing to allocate memory for itself. It may be that preclear is running amok.
Aceriz Posted January 4, 2020
3 hours ago, Squid said: "Constant errors about the VM failing to allocate memory for itself. May be possible that it's preclear running amok."
Well, hopefully that is all it is. The last preclear should be finished by the end of the night. Then I will reset the server, see what happens at that point, and post a new diagnostic.
CarlosCo Posted January 6, 2020
On 12/13/2019 at 12:41 PM, FlynDice said: "After upgrading to 6.8 I'm now getting this noVNC error while trying to access both vms I run.
noVNC encountered an error: Uncaught SyntaxError: The requested module './util/browser.js' does not provide an export named 'dragThreshold'
http://192.168.1.4/plugins/dynamix.vm.manager/vnc.html?autoconnect=true&host=192.168.1.4&port=&path=/wsproxy/5700/:0:0
SyntaxError: The requested module './util/browser.js' does not provide an export named 'dragThreshold'
I can access the vms just fine with NoMachine and Remmina but not the VNC Remote menu option in Unraid."
Same issue here.
tower-diagnostics-20200105-2307.zip
itimpi Posted January 6, 2020
2 hours ago, CarlosCo said: "Same issue here"
What browser are you using (and what version)? noVNC has always seemed a bit susceptible to not working correctly with all browsers. I have personally often had problems with noVNC in the past, so I always have a free-standing VNC client as a fall-back.