Report Comments posted by PeteAsking

  1. 2 hours ago, trurl said:

    Without diagnostics no way to know for sure but maybe corruption with your system share unless you did something to the paths these vdisks were configured to use.

    No corruption. Everything is very clean. I had upgraded just a few days before to the previous beta without any issues whatsoever. Unfortunately we don't live in an ideal world where I can spend hours getting logs and digging around wondering why all my VMs were blanked by Unraid. Unraid literally runs my DNS and firewall as a VM, so no VMs = no internet = lots of unhappy people in my house, and no Google or anything, just me fixing it on my own, so when this happens I just fix it as fast as possible. It took me about 30 minutes, then I went to sleep at 11:30PM. The alternative is that I just don't post anything at all when I have a problem, and then you don't even know what to look out for. I'm just saying what happened to me; maybe you discover it happening to lots of people, I don't know. It's up to Limetech whether they want to fix it, as it impacts their business, not mine.

    In my case it was fixable: you just remake a VM template and link the disk that is on the array to the new template. Sorry if that's not acceptable, but I don't have hours and hours to spend figuring out why Unraid would delete all my VM XML files and dockers for no apparent reason. Adding the dockers back was easier, as I just clicked "add docker" or whatever and chose the dockers that previously existed from the list it remembered, and it automatically used the docker files on the disks; I didn't have to do anything, and it's unlikely that was corrupted since that process is fairly automatic when you re-add a docker. I had just a Minecraft server and a UniFi controller, which both came back without any configuration changes whatsoever, so no doubt the default settings were still remembered. It just decided to delete them.

     

    P

  2. Hmm, well that's kind of impossible since I use the system, so I can't really leave it in safe mode for days on the off chance it locks up again. Thanks anyway; if what I posted is not useful I will just move on to the next thing. Not every crash reported can be useful, I guess. Just life, not really going to lose sleep over it. I will do the disk removal instead of worrying about this :)

     

    If it happens again I will try to grab a diagnostics right after, like you suggest. Can't really do the safe mode thing though.

  3. 3 minutes ago, trurl said:

    Some people have reported lockups when trying to get their BOINC docker tuned just right. Does it lockup if you don't run that docker?

    Oh, no idea. I had never run it before yesterday. I don't even know if it is a bug with the Unraid beta or not; that's kind of why I'm asking. Also, since I had to reset, any logs are lost as it's all run from RAM. I couldn't access anything, so there was no way to get any logs.
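
    For next time I might set up something so the log survives a reset, e.g. a cron entry that copies the syslog to the flash drive every few minutes. This is just a rough idea; the destination path is an example, not anything Unraid sets up for you.

        # Example cron entry: copy the in-RAM syslog to the flash drive every
        # 10 minutes so something survives a hard reset. The destination
        # directory is an example and needs to exist on the flash first.
        */10 * * * * cp /var/log/syslog /boot/logs/syslog.txt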

  4. Hi, I had a complete lockup last night. I have been making changes to the system, so I will detail what I can.

     

    Yesterday I replaced one disk with a larger one. The process was pretty standard: stopped all dockers/VMs.

    Shut down Unraid.

    Changed the disk.

    Started Unraid and assigned the replacement.

    Waited for the rebuild.

    Did a parity check. I was using all my VMs etc. while this happened, fine through the day. No errors.

    Shut down Unraid again and checked that it booted fine.

    Did another parity check to be safe. No errors again.

     

    Created the boinc-rdp docker to use and assigned it 3GB of RAM and 50% CPU (a rough docker run equivalent is sketched just below).

    I think that was about all the changes I made.
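
    For reference, the limits I gave the BOINC container are roughly equivalent to these plain docker run flags. This is just a sketch; the image name is a placeholder and not necessarily what my template used.

        # Rough docker run equivalent of the template limits:
        # --memory caps the container at 3GB of RAM, --cpus at about half the cores.
        # The image name is a placeholder, not necessarily the one my template used.
        docker run -d --name boinc \
          --memory=3g \
          --cpus="$(( $(nproc) / 2 ))" \
          linuxserver/boinc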

     

    I continued using the VMs and services Unraid provides for a few hours during the day and eventually went to sleep. When I woke up, all services were unavailable. Unraid could not be pinged either.

    On the actual console screen it showed error messages that would dump to the screen every few minutes. I could press return and log in, but nothing happened when I typed "shutdown -h now": it accepted the command and said the system was going down now, but nothing actually happened. After a few more minutes of the errors being dumped to the screen, I took a photo and had to hard reset it (see image).
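
    Note to self for next time: since the shell was still taking input, I could probably have tried the kernel's magic SysRq trigger instead of a hard reset. This assumes Unraid leaves /proc/sys/kernel/sysrq enabled, which I haven't checked.

        # Emergency sync, remount read-only, then reboot via magic SysRq.
        # This bypasses init, so it can work when "shutdown -h now" hangs.
        # Assumes sysrq is enabled in /proc/sys/kernel/sysrq.
        echo s > /proc/sysrq-trigger   # sync all filesystems
        echo u > /proc/sysrq-trigger   # remount filesystems read-only
        echo b > /proc/sysrq-trigger   # reboot immediately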

     

    It has come back up and a parity check is now at 40% with 1 sync error found.
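
    Handy if the web UI ever goes away again: I believe you can watch parity check progress from the console by polling the md driver directly. I'm assuming mdcmd lives at /usr/local/sbin/mdcmd; worth double-checking.

        # Dump the raw md driver status and pick out the resync / parity-check
        # progress counters. The path to mdcmd is an assumption on my part.
        /usr/local/sbin/mdcmd status | grep -i resync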

     

    In addition to this crash, I also want to remove a disk from the array today. Since I replaced a disk with a larger one yesterday, I don't need this additional disk any more. I have already copied everything that was on it to the replacement drive in the array, which has enough capacity to hold both the old disk's contents and the contents of the disk I now want to remove.

     

    Are the steps as follows (taken from the FAQ)?

    Stop the array by pressing "Stop" on the management interface.

    Un-assign the drive on the Devices page, then return to the unRAID Main page.

    Select the 'Utils' tab.

    Choose "New Config".

    Agree and create a new config.

    Reassign all of the drives you wish to keep in the array.

    Start the array and let parity rebuild.

     

    Got to keep forging ahead despite the crashes huh?

     

    Let me know your thoughts. My fault or Unraid's? I do abuse the system quite a bit. I don't treat it with much respect; I kind of just beat it with a stick until it stops working, then try not to batter it so hard until it's been stable for a while. I do make a lot of changes to it generally speaking, but it was working fine on the beta until last night, when I changed the disk and made the new docker. I believe CPU would have averaged around 60-70% and memory would have been at about 40% all of last night (16GB system). I am now running at 93% RAM and about 80% CPU, as I had to spin up a VM to do something and the parity check is running. It hasn't crashed again under this high load.

     

    Happy to hear thoughts on this, and any advice on removing the disk would help.

     

    P

    UnraidCrash.jpg

  5. OK, so this is what I did to upgrade. It wasn't too bad, actually. These steps seem alright if you want a way to go back to your old setup quickly.

     

    I logged in, shut down all my VMs etc., and then stopped the array.

    I then set the array autostart to NO (so it won't autostart).

    Took a backup of the flash image at this point so I had a zip just in case (I never used it).

    Powered off.

    Cloned the original key to a new key with Clonezilla (there's a rough dd alternative sketched at the end of these steps if you prefer the command line).

    Powered up the system with the new key.

    Noted that everything boots and is looking fine, but registration says invalid; Unraid says I can click "replace key" if I want to, and it will fix my key and blacklist the original USB key. As I don't want to do this unless I have to revert, I power down the system and just put this cloned key in a drawer. I can now boot it any time and use the replace key option if I want to quickly fall back to the old 6.8.3 system.

    I now put the original key back in and boot. Don't start the array yet; first upgrade the system to the Unraid beta image and set the array to autostart, then reboot.

    Now the system boots with 6.9 Beta 1 and the array starts automatically.
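
    If you prefer the command line to Clonezilla, a raw dd copy of the key should do the same thing. The device names below are placeholders; check the real ones with lsblk first, since dd will happily overwrite the wrong disk.

        # Clone the original boot key (/dev/sdX) onto the new key (/dev/sdY).
        # Both device names are placeholders - verify with lsblk before running.
        dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync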

     

    So far I have just started my 3 VMs and checked they are working, as well as a couple of dockers. I thought I may as well run a parity check, so I will let that run overnight, then copy some large files to the array tomorrow and see if it's all working, and I will report back any issues. I can't really tell any difference from the limited testing so far, but maybe some stuff will appear over the next few days. I did check some obvious settings here and there, but they all seem exactly what I remember them being from when it was on 6.8.3, so it looks like nothing obvious has changed so far. I guess I have to wait and see.

     

    Kind regards,

    P

     

  6. 2 minutes ago, trurl said:

    You posted again while I was typing, so hopefully you will reconsider.

     

    One of the things that will happen when you transfer the license is blacklisting the original flash.

    It sounds like something someone can fix if I email them, so if it comes to it I will probably just do that. Paying customer after all, so they should be able to sort out my license. Sorry, but customer service is expected to a degree, and dealing with a request like that is only fair.

  7. 4 minutes ago, Frank1940 said:

    Your logic is fairly straight forward.  But I would caution you to carefully read the actual release post and make sure that there are no really new features being introduced (or in this case, some different, or missing, drivers).  From what I read this is a very vanilla beta release.  It looks like the standard Unraid code base with a new kernel.  So unless there is a 'gotcha' in the kernel (I seem to recall that there have been some in the  past), you should be safe.  Just wait a few days and see what the guys with the Ryzen systems (who are the ones asking/begging for this kernel update) have to say. 

    Cool, I'm going to just go for it later tonight and deal with any issues that appear. I will clone to a new key and boot it, then fix any licensing issues and do the update, and if anything bad happens I will probably be able to resolve it. I am interested to see how btrfs handles it, as I use that on all my disks.

  8. I know you guys said this:

     

    "Important: Beta code is not fully tested and not feature-complete.  We recommend running on test servers only!"

     

    But let's be real, it's Linux. How dangerous is it really for me to upgrade my system? I mean, people recommend against Debian testing, but it's super stable in my own experience. Can you eventually upgrade to the stable release from the beta? I'm happy to give the beta a go on my live home setup. It's just a home setup at the end of the day, with a few VMs, dockers, and shares on a small NAS. It's not like these mega systems people are setting up with Threadrippers and 128GB of RAM. I can always clone the USB key to a new one and upgrade the cloned key. Would that break my license key if I did it? At the end of the day some testing would be good, and I can test on a simple home setup if it's semi-safe to do that.

     

    P