Every reboot causes Unraid to do a parity check and says unsafe shutdown



What I meant by "the diagnostic files never showed anything":

 

As you've seen, it says "unclean shutdown detected",

 

but that didn't really tell you anything.

It didn't tell you specifically which exact file caused this, or which program or application.

 

Windows, for example, will do a memory dump, and when you send it in they know exactly what file caused the crash.

 

But "unclean shutdown detected" still didn't help me.

It didn't say "this file caused the problem" or "these 2 files caused the problem", so how do I fix it?

That's what I meant in the partial quote you took from what I wrote: I was saying that I guess you can't find the problem from the first 2 diagnostic files.

 

I guess my dyslexia really messes things up for you.

Sorry about that.

 

So I guess I need to do what I was saying: run 1 VM at a time, then do a reboot.

 

Also, that's why I asked whether VM Backup failing to back up a couple of VMs could be linked to the problem. But then again, I deleted a few VMs.

 

What I could do is:

1. Turn off all VMs, then do a reboot.

2. Turn on 1 VM, reboot, and repeat that for the 16 VMs I have installed.

3. Run VM Backup again. But it's not creating proper logs: it says there is no error, then a split second later says there was an error reported saving to the log file, but there is no log file etc., and maybe that's causing it.

 

I'll give all 3 a try tomorrow, starting with 1, then 3, then 2.
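The VM parts of that plan can be driven from the Unraid console with `virsh`, since Unraid's VM Manager runs on libvirt. This is only a sketch; the VM name in the last line is a placeholder for whatever your VMs are actually called.

```shell
# List all defined VMs and their current state
virsh list --all

# Step 1: cleanly shut down every running VM before the reboot test
for vm in $(virsh list --name); do
    virsh shutdown "$vm"
done

# Step 2: after each reboot, start a single VM at a time, e.g.:
# virsh start "Windows10"    # hypothetical VM name - substitute your own
done 2>/dev/null
```

Shutting down via `virsh shutdown` sends an ACPI request, so the guest still needs to respond to it; a hung guest may need `virsh destroy` (hard power-off) instead.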

 

As I'm stumped.

 

And sorry once again about my dyslexia and not explaining things properly.

 

Link to post

OK, try another test.

 

Disable Docker and VM Manager in Settings.

Shutdown without stopping the array.

See if you still get an unclean shutdown on next reboot.

 

Link to post

So shutting down VM Manager and disabling Docker, then rebooting Unraid with the array up, has no effect.

It still causes a parity check. I guess it's something to do with the array?

I thought it was VM Backup, but that issue says it creates a log that I can never find; that's where I mentioned it, in the VM Backup issues thread.

So I guess VMs and Dockers aren't the problem.

 

Here is the diagnostic file too.

 

unraid vmbackup fail.PNG

unraid3.PNG

tower-diagnostics-20210411-0822.zip

Link to post

Just a check - did the parity check actually start? It is always possible that the Parity Tuning plugin incorrectly detected an unclean shutdown and the notification is misleading. The plugin does not start the check itself - it just assumes Unraid is going to do so after an unclean shutdown and puts up the notification to indicate why.
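One way to tell whether the md driver really started a check, rather than the plugin just saying so, is to look in the syslog after the reboot. These greps are a generic sketch; the exact log wording can differ between Unraid versions:

```shell
# Parity check start/stop events come through the md driver (mdcmd)
grep -i "mdcmd" /var/log/syslog | tail

# Unclean-shutdown detection and parity messages
grep -iE "parity|unclean" /var/log/syslog | tail
```

If neither grep shows a check starting around the boot time, the parity check notification was likely cosmetic.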

Link to post
Posted (edited)

It was running. I also tried to fix my go file; I figured maybe it's the problem, since Fix Common Problems flagged it? So I remmed out my go file. I tried reading the documentation about it; I guess Unraid does everything that's remmed out automatically?

 

 

# Copy SSH files back to /root/.ssh folder and set permissions for files
	#mkdir -p /root/.ssh
	#cp /boot/config/ssh/tower_root /root/.ssh/id_rsa
	##cat /boot/config/ssh/*.pub >> /root/.ssh/authorized_keys
	##cat /boot/config/ssh/mitchsserver_root.pub >> /root/.ssh/authorized_keys
	##cat /boot/config/ssh/backupserver_root.pub >> /root/.ssh/authorized_keys
	##cat /boot/config/ssh/mitchflix_root.pub >> /root/.ssh/authorized_keys
	##cat /boot/config/ssh/pumppi_root.pub >> /root/.ssh/authorized_keys
#	ssh-keyscan mitchflix >> /root/.ssh/known_hosts     #MitchFlix At Home
#	ssh-keyscan 192.168.0.9 >> /root/.ssh/known_hosts   #MitchFlix At Home
#	ssh-keyscan 192.168.0.8 >> /root/.ssh/known_hosts   #Mitch Server At Home
#	ssh-keyscan 192.168.1.8 >> /root/.ssh/known_hosts   #Mitch Server
#	ssh-keyscan 192.168.1.9 >> /root/.ssh/known_hosts   #Mitch Flix
#	ssh-keyscan backupserver >> /root/.ssh/known_hosts  #Backupserver
#	ssh-keyscan 192.168.0.4 >> /root/.ssh/known_hosts   #Backupserver
#	ssh-keyscan 192.168.0.12 >> /root/.ssh/known_hosts  #Raspberry Pi
#	chmod g-rwx,o-rwx -R /root/.ssh

 

 

 

I then did a reboot, and the parity check didn't start back up, but it was running, as in the screenshot. The screenshot is from after the reboot. Is the go file causing the problems? At the time of my reboot the parity check was at about 2.2% done.

 

unraid4.PNG

Edited by comet424
Link to post

Maybe after all this you should do a correcting parity check just to make sure your parity has no sync errors so it would be able to correctly rebuild a failed disk.

 

Then you can go back to testing.

Link to post
Posted (edited)

How do I do that? Is the correcting parity check what I've been doing? I've done 4 in the past week that take 25 hours to complete.

Is that a correcting parity check, or is a correcting parity check something else?

So far no failed disks, nothing.

But tell me what you mean by a correcting parity check and I'll go from there.

But if a correcting parity check is what I do after every reboot, where it starts a parity check, I'm already doing those.

Edited by comet424
Link to post

So I've been trying to find this correcting parity check in Settings etc., but I don't have anything, at least I'm not seeing it.

I only have the Parity Check on the Main page that is in the image.

So then I can try doing a correcting parity check instead of a standard parity check on every reboot.

 

unraid11.PNG

Link to post

Whether scheduled parity checks are correcting or not is set under Settings -> Scheduler. For manual checks it is set by the Correcting checkbox next to the Check button on the Main page. The automatic checks after an unclean shutdown are always non-correcting.
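For reference, the same manual checks can also be started from the console with Unraid's `mdcmd` (assuming the stock path `/usr/local/sbin/mdcmd` is on `PATH`); whether a bare `check` defaults to correcting can vary by version, so the option is spelled out here:

```shell
# Correcting check: writes fixes for any parity sync errors found
mdcmd check CORRECT

# Non-correcting check: only counts errors
# (this is what the automatic post-unclean-shutdown check does)
mdcmd check NOCORRECT

# Cancel a running check
mdcmd nocheck
```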

Link to post
Posted (edited)

Oh OK, so the automatic parity check after a reboot corrects nothing then?

As for scheduled parity checks, I have them set to run every 3 months.

To do this parity correcting, that's what that checkbox is for, correct? So I just click Start...

...and then report back here for the next step.

How come the other parity checks show no errors?

So it's 2:02pm and I just clicked Check, so it should be done around 3pm tomorrow.

Edited by comet424
Link to post

What I noticed from those screenshots: I'm not getting over 200MB/s.

But all my drives are 7200rpm. Is Unraid slow at times? I know I read that FreeNAS has faster data transfers than Unraid.

But I like Unraid, it's user friendly. But I noticed, shouldn't I be getting over 200MB/s always?

 

Link to post
4 hours ago, comet424 said:

What I noticed from those screenshots: I'm not getting over 200MB/s.

But all my drives are 7200rpm. Is Unraid slow at times? I know I read that FreeNAS has faster data transfers than Unraid.

But I like Unraid, it's user friendly. But I noticed, shouldn't I be getting over 200MB/s always?

 

The outer tracks of a disk give maximum performance, and speed decreases toward the inner tracks.

Those are average speed figures, not a record of max/min. BTW, it looks normal and seems quite good.

There are many factors that affect it, e.g. any other load on the array, number of disks, whether different disk capacities are mixed, etc.

Below is a simple 1 data + 1 parity setup result; it gets the best speed, starting at 210MB/s with an average of 169MB/s. 6TB completes in less than 10 hours.

 

image.png
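As a quick sanity check on those figures, a 6TB check averaging 169MB/s should indeed finish just under 10 hours:

```python
def parity_check_hours(capacity_tb: float, avg_mb_per_s: float) -> float:
    """Estimate parity-check duration from capacity and average speed.

    Uses decimal units (1 TB = 10**12 bytes, 1 MB = 10**6 bytes),
    which is how drive vendors and Unraid's speed display count.
    """
    seconds = capacity_tb * 1e12 / (avg_mb_per_s * 1e6)
    return seconds / 3600


print(round(parity_check_hours(6, 169), 2))  # -> 9.86 (hours), under 10
```

The check always reads every sector of the largest disk, so total time scales with the biggest drive's capacity, not with how full the array is.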

Edited by Vr2Io
Link to post

Ah OK, so what I get is fine. I wasn't 100% sure; I figured I should get 200+ anywhere on the drive lol.

Well, you learn something new every day. I guess that's why people do RAID 0 (or is it RAID 1?) to get the maximum speed.

I just can't afford enough drives to do it.

Also, a reason I like Unraid is that I don't need drives that are all the same size, bought all at the same time. I can add 1 at a time.

 

Link to post

Alright, the correcting parity check is done; here is the image.

So what's the next step to test why reboots cause parity checking? And anything to do with why VM Backup, as you can see, won't back up, from the image a couple of posts ago?

uraid.PNG

Link to post

So I did 2 diagnostic files. Also, is it normal that it takes a long time to mount disks? Is it that the bigger the HDs, the longer it takes? Or the fuller they are, the longer it takes?

 

So what I did was:

1. Uninstalled VM Backup
Disabled VM Manager
Shut down the array
Restarted the array
Took a diagnostic

Then I did:

2. Re-enabled VM Manager
Started 4 of my VMs
Waited a minute
Shut down the array
Restarted the array
Took a diagnostic

Hopefully you see something. And no parity check started, but no reboot either.

 

tower-diagnostics-20210414-1405.zip tower-diagnostics-20210414-1359.zip

Link to post
6 minutes ago, comet424 said:

And is it normal that it takes a long time to mount disks?

Btrfs disks can take a few seconds each to mount; the larger the filesystem, the longer it can take.

 

P.S. there's some data corruption detected on disk3:

 

Apr 11 08:58:58 Tower kernel: BTRFS info (device md3): bdev /dev/md3 errs: wr 0, rd 0, flush 0, corrupt 2580, gen 0

 

This can be old or new. If it's old and already fixed you should clear the stats; if it's new, run a scrub.
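Both of those actions can be done from the console. In this diagnostic, disk3's btrfs device is `/dev/md3` and its mount point is `/mnt/disk3`; adjust the paths for your own array:

```shell
# Run a scrub on disk3 to verify checksums and surface any new corruption
btrfs scrub start /mnt/disk3

# Watch the scrub's progress and error counts
btrfs scrub status /mnt/disk3

# Once satisfied the errors are old/fixed, zero the device error counters
btrfs device stats -z /dev/md3
```

Note that on a parity-protected array a scrub can detect corruption but cannot repair it (no btrfs redundancy on a single-device filesystem); repair means restoring the affected files from backup.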

 

 

 

 

Link to post
