methanoid Posted January 18, 2016

I have a Win10 VM (set up as per the Wiki, so no power management issues) on top of unRAID. I came down this morning and the screen was blank (as if the VM had timed out) and moving the mouse made no difference. It was "dead". But the VM hadn't passed control back to the console, so no keyboard and mouse there either. I tried logging into the WebUI from my phone (no fun on a 4in screen) and got no response. No choice but to reboot. So I have now failed to create parity twice... and I couldn't capture any logs either. Why does parity creation seem to lock the machine? The hardware has been completely stable in "other products".
methanoid (Author) Posted January 18, 2016

I may have solved my own problem... I had one disk at 98% full. I didn't see that unRAID defaults ALL hard disks to:

Warning disk utilization level (%): 70
Critical disk utilization level (%): 90

So presumably at 90% it barfs during parity creation? Why would a default setting mean you cannot use the capacity (or even close to it) of the drive? For a 5TB drive that means I am expected to leave 500GB empty? And would I want a warning at 70% of capacity, when 1.5TB is still free? I have changed the values on each HDD and set the parity drive's "critical" level 1% higher than the data drives'. Fingers crossed that was the issue. Parity sync running again... see what happens in 14 hours or so.
itimpi Posted January 18, 2016

The disk utilization levels have nothing to do with parity. The levels are merely used to decide what color coding should be used in the GUI, and when notifications should be sent about disk usage. They do not stop you exceeding the values set. With the 6.1.7 release you can set a global (i.e. default) value for these settings, and also, if wanted, override the default at the individual drive level. Unfortunately this means that changing those values will have no effect on your parity issue.
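To illustrate the point itimpi is making, the threshold logic can be sketched roughly like this: the warning/critical percentages only classify a disk's usage for display and notification purposes, they never block writes. This is a simplified illustration, not unRAID's actual code; the 70/90 values are the defaults mentioned above.

```shell
#!/bin/sh
# Sketch of how utilization thresholds classify a disk. Purely
# illustrative: unRAID's real implementation differs, and nothing
# here prevents a disk from filling past either level.
WARN=70
CRIT=90

classify() {
    pct=$1
    if [ "$pct" -ge "$CRIT" ]; then
        echo critical    # red in the GUI, notification sent
    elif [ "$pct" -ge "$WARN" ]; then
        echo warning     # orange in the GUI
    else
        echo normal      # green in the GUI
    fi
}

classify 65   # -> normal
classify 75   # -> warning
classify 98   # -> critical
```

A 98% full disk therefore shows as "critical" and triggers a notification, but parity sync reads it exactly the same as any other disk.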
methanoid (Author) Posted January 18, 2016

Damn... OK, I'm stumped then!
trurl Posted January 18, 2016

How many disks do you have? What is the exact model of your power supply? Have you done a memtest?
methanoid (Author) Posted January 18, 2016

8 HDDs. 550W SilverStone Gold PSU (SST-55G). Never any issues before with "other" products. It's just barfed again. Will have to try tests.

EDIT: Memtest passed fine. Have rebooted and disabled all Dockers; just unRAID and the Win10 VM now. If that fails, I'll try unRAID on its own...
methanoid (Author) Posted January 18, 2016

Still barfing... removed the GTX 970 and fitted a crappy HD 5450 (that should eliminate any question of PSU draw!). I wonder... I have been watching YouTube when it's gone all dead. Could it be to do with audio issues? No ACS Override on?
methanoid (Author) Posted January 18, 2016

Still barfing... went without ANY VMs or Dockers, and it barfs within around 10 minutes of trying to do a parity check. Can't grab logs because the machine is locked completely. Disabling the parity check and running the VM again, to see if it locks at all or if it's the creation of parity that is the problem. Does LT do any support, or do they rely completely on the forum doing the job for them?
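One hedged workaround for the "can't grab logs because the machine is locked" problem: since unRAID keeps its syslog in RAM, a hard lock loses it. A script run from cron or a background loop could periodically snapshot the log to persistent storage. The paths below are assumptions (`/var/log/syslog` for the in-RAM log, `/boot` for the flash drive) and the interval is arbitrary; adjust for your setup.

```shell
#!/bin/sh
# Periodically copy the in-RAM syslog to persistent storage so that
# some log survives a hard lockup. Paths are assumptions, not
# guaranteed unRAID internals.
SRC=/var/log/syslog
DST=/boot/logs/syslog.snapshot

snapshot_log() {
    src=$1; dst=$2
    mkdir -p "$(dirname "$dst")"   # ensure the target directory exists
    cp "$src" "$dst"               # overwrite with the latest copy
    sync                           # flush to the device so it survives a crash
}

# Example background loop, e.g. started from the go file:
# while true; do snapshot_log "$SRC" "$DST"; sleep 60; done
```

After the next lockup, the snapshot on the flash drive should cover events up to roughly the last interval before the hang.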
bonienl Posted January 18, 2016

See http://lime-technology.com/services/
methanoid (Author) Posted January 18, 2016

And there was me thinking a paid product (repackaging free open-source stuff like KVM and Docker) might actually mean some support... Silly me!
itimpi Posted January 18, 2016

The normal route for support from LimeTech is via email, not via the forums (which are all about community support). The paid services (which are a recent addition) are for those who want a higher level of interaction than email provides.
methanoid (Author) Posted January 18, 2016

Cool, thanks. I was rather worried that it looked like "pay" or "rely on the community" (good though it is).
Archived
This topic is now archived and is closed to further replies.