ak-47 Posted February 19, 2017 I can't seem to start my VMs. I updated to 6.3 and all subsequent updates thereafter (currently on 6.3.2) and have yet to successfully get my VMs to start. I keep getting a "Permission denied" error on both my previously running VMs and on any new VM I create. Directory permissions are 0777 with nobody:users ownership, and the vdisk is 0777 with root:root. I have tried changing the permissions around to see if that has any effect, but no luck. I've attached screenshots of the errors I get when powering on and a diagnostics report, and also supplied a link to the libvirt log files. Thanks for any help! http://pastebin.com/u/aklein84
phbigred Posted February 19, 2017 SSH in and type virsh list. Find your VM that's stuck and type virsh destroy (vm_name); that will kill the VM so it can be reset/fixed for versioning, or just brought back online.
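Those recovery steps can be sketched as a short script. This is only a sketch, not an official Unraid tool: virsh is libvirt's standard CLI, the VM name in the usage example is hypothetical, and VIRSH is overridable so the logic can be exercised without a live libvirt host.

```shell
#!/bin/bash
# Force-stop a stuck libvirt VM, per the steps above.
# VIRSH is overridable for testing; defaults to the real virsh binary.
VIRSH=${VIRSH:-virsh}

force_stop_vm() {
  local vm=$1
  "$VIRSH" list --all        # show every defined VM and its state
  "$VIRSH" destroy "$vm"     # hard power-off; does NOT delete the VM definition
}
```

Usage (as root, with a hypothetical VM name): `force_stop_vm Windows10`. Note that `virsh destroy` only kills the running instance; the VM can be started again afterwards.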
ak-47 Posted February 19, 2017 phbigred said: Ssh in and type virsh list. Find your vm that's stuck and type virsh destroy (vm_name) […] No dice.
danioj Posted February 19, 2017 Just upgraded fine. No issues.
Vr2Io Posted February 19, 2017 Quoting the earlier exchange: "Disks are being told to spin down but don't do it; only when the 'spin down' button is used do they do it. This has happened in all 6.3.x releases but not 6.2.4." "Are you absolutely positive the disks are still physically spinning? I suspect there is a disconnect between the display and reality. Can you hear the drives spinning? I'm thinking the disks are actually spun up, given they are reporting their temps etc." "Other than physically inspecting the drives, is there any other way to confirm? Otherwise I'll inspect the drives physically tonight, but based on UPS draw I'd say they're spinning." FYR: the log only shows spin-down by the system and won't show spin-up by the system, so the log just means the disks have spun down at some point. The last time I had a spin-down issue was 6.2 beta 18, and no problem after. http://lime-technology.com/forum/index.php?topic=47626.msg456399#msg456399
itimpi Posted February 19, 2017 There is also the fact that the GUI can lag the true physical state of the drives by a significant amount when the disks spin down because of a timeout rather than explicit user action (I believe the delay is affected by the Settings -> Disk Settings -> Tunable (poll_attributes) value). This delay is not there when a manual spin-down is done.
RobJ Posted February 19, 2017 Other than physically inspecting the drives, is there any other way to confirm? At a terminal command prompt, use the following command to obtain the actual spin state of a drive: hdparm -C /dev/sdX (replace sdX with the correct drive symbol from the Main screen). If it returns "active/idle", then it's spinning. If it returns "standby", then it's spun down. I'm sure someone has a little script that reports the spin states of all attached drives.
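A minimal version of such a script might look like the following. This is a sketch, not an official tool; it assumes hdparm's usual "-C" output format, and HDPARM is overridable so the parsing can be checked without real disks.

```shell
#!/bin/bash
# Print the spin state of each given drive using hdparm -C.
# HDPARM is overridable for testing; defaults to the real hdparm binary.
HDPARM=${HDPARM:-hdparm}

spin_report() {
  local dev state
  for dev in "$@"; do
    # hdparm -C prints a line like " drive state is:  active/idle"
    # or " drive state is:  standby"; take the last field.
    state=$("$HDPARM" -C "$dev" 2>/dev/null | awk '/drive state/ {print $NF}')
    printf '%s: %s\n' "$dev" "${state:-unknown}"
  done
}
```

Usage (as root): `spin_report /dev/sd?` to cover all attached SATA/SCSI devices at once.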
garycase Posted February 19, 2017 As I noted earlier, I upgraded two of my servers to 6.3.2 with no issues. This upgrade DID, however, add even more time to parity checks. I know it's irrelevant, but it's nevertheless a bit frustrating to see these times keep getting longer. The system shown below is a dual core Pentium E6300 -- an older setup (one of the main UnRAID servers sold by Limetech several years ago) with a PassMark of 1761. It does NOT show 100% utilization during the checks (although it tends to stay at 50% or a bit over) ... but this used to take under 12 hours (v5), then grew to a very consistent 13:19 with v6.1; then 13:58 with 6.3.1, and now takes 14:10 with 6.3.2. All of these times are with NO GUI active during the checks -- I start the check; let it run overnight; and then look at the GUI the next day to see how long they took. Just curious if those with higher-powered systems are seeing this same increase in times (I suspect not -- it's probably just due to the older CPU and/or only 2 cores).
Frank1940 Posted February 19, 2017 garycase said: As I noted earlier, I upgraded two of my servers to 6.3.2 with no issues. This upgrade DID, however, add even more time to parity checks. […] Single or Dual parity? I will run checks on both of mine tonight and report tomorrow morning.
Hoopster Posted February 20, 2017 garycase said: As I noted earlier, I upgraded two of my servers to 6.3.2 with no issues. This upgrade DID, however, add even more time to parity checks. […] My last parity check was with version 6.3.1 four days ago. It was 4 minutes faster than parity checks run with 6.2.4. I have always seen very consistent parity check times on all versions of unRAID 6.x.x. I have been through three CPUs (two 4-core i5s and the current 4-core/8-thread Xeon) and three different motherboards in that time. Parity checks have always been within 2-3 minutes of the prior time. This is the biggest variation I have seen in a while, and it is 4 minutes faster. I have yet to run a parity check with 6.3.2.
miniwalks Posted February 20, 2017 RobJ said: At a terminal command prompt, use the following command to obtain the actual spin state of the drive: hdparm -C /dev/sdX […] Thanks for that RobJ. Unfortunately, all the drives are reporting as active/idle when this is occurring. I have tried using inotifywait -mr on /mnt/user/ and get nothing back except for SickRage touching its DB, which is on the cache drive only. Ideas?
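For reference, that kind of watch can be run with a bounded duration so it doesn't monitor forever. A sketch, assuming the inotify-tools package is installed; the directory and duration are whatever you choose to pass in.

```shell
#!/bin/bash
# Watch a directory tree for events that could keep array disks awake.
watch_share() {
  local dir=$1 secs=${2:-60}
  command -v inotifywait >/dev/null 2>&1 || {
    echo "inotify-tools not installed" >&2
    return 1
  }
  # -m: monitor continuously, -r: recurse into subdirectories,
  # -e: only report these event types; timeout bounds the run.
  timeout "$secs" inotifywait -mr -e access,open,modify "$dir"
}
```

Usage: `watch_share /mnt/user/ 60` prints each access/open/modify event under /mnt/user/ for 60 seconds.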
lionelhutz Posted February 21, 2017 Does this release do anything that might address this issue? https://forums.lime-technology.com/topic/54059-unraid-unresponsive-shfs-100-cpu/
BRiT Posted February 21, 2017 5 minutes ago, lionelhutz said: Does this release do anything that might address this issue? Wow, your text is so tiny, it's like the Toy Poodle of text sizes.
Frank1940 Posted February 21, 2017 On 2/19/2017 at 6:45 PM, Frank1940 said: Single or Dual parity? I will run checks on both of mine tonight and report tomorrow morning. Here are the results of my two servers; spec's for both are listed below. (Since this is the new BBS software, I am not quite sure how it will look, so I apologize if there is an issue. I sure do miss the preview feature of the old software...) The top screenshot is for the dual-parity Test-Bed server; the bottom screenshot is for the single-parity Media server. (The new software hides the descriptive file name!)
ufopinball Posted February 21, 2017 Upgraded both Cortex and Lab to 6.3.2 via the webgui and had no issues whatsoever. Next Parity Check will be on March 1st, will let you know if I run into anything funky.
Drift King Posted February 21, 2017 Upgraded both of my servers a couple of days ago with no issues... Thanks!
perhansen Posted February 21, 2017 Upgraded main server from 6.3.0 and my backup from 6.3.1. Both went with no issues. All Dockers and VMs running again. Thanks guys!
Mettbrot Posted February 21, 2017 On 19.2.2017 at 7:48 PM, limetech said: We've been monitoring the kernel change logs for mention of this, but nothing has appeared yet. Have you seen anywhere someone has posted a patch? @limetech I have scouted some of the bug trackers that mention this issue. This seems to be the main one: https://bugzilla.kernel.org/show_bug.cgi?id=191891 Another USB device: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=853894 About an audio adapter: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=852749 Another (older) one here: https://bugzilla.kernel.org/show_bug.cgi?id=109521 The same problem with a USB LAN controller: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=852556 I found the following two patches: https://patchwork.kernel.org/patch/9534751/ and https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=852749 (this is the one about the audio adapter, though). I hope this helps.
bakes82 Posted February 22, 2017 I still get Kernel Panic with this version.
limetech Posted February 22, 2017 45 minutes ago, bakes82 said: I still get Kernel Panic with this version. Please open a Defect Report on this so we can track it.
skoub Posted February 22, 2017 Hi everyone. Since the update to 6.3.2, I can't access my shares anymore from any device on my network: - Windows 10 can't access the shares - Android phone - Linux box Telnet from my Windows 10 machine works perfectly, and I can navigate to the unRAID dashboard with my browser. I had this problem with the 6.2.x branches too; the last version that works without any problems for me is 6.1.9. Any ideas? tower-diagnostics-20170221-2041.zip
jonp Posted February 22, 2017 1 minute ago, skoub said: Since the update to 6.3.2, i can't access my shares anymore from any device on my network. […] Have you tried browsing by IP? If IP works but hostname does not for accessing shares, let us know.
skoub Posted February 22, 2017 9 minutes ago, jonp said: Have you tried browsing by IP? If IP works but hostname does not for accessing shares, let us know. I tried to browse by IP and hostname without success. I rebooted my computer just to be sure before doing the test.
jonp Posted February 22, 2017 1 hour ago, skoub said: Since the update to 6.3.2, i can't access my shares anymore from any device on my network. […] Nothing in your logs stands out. Can you access the webGui for the server in a browser (http://hostname or http://ipaddress)? I don't believe anyone else has reported this issue either. So are you stating that on 6.1.9 everything works, then if you upgrade the server to any newer version (6.2.x or 6.3.x) SMB access stops working, and if you drop back down to 6.1.9 everything comes back again?
mgworek Posted February 22, 2017 I went into more detail on the 6.3.0 board, but wanted to mention that 6.3.2 is booting for me now and I'm no longer getting KPs at start as I was with earlier 6.3.x. Thanks!