  • [6.10.3] Boot hangs and random freezes after upgrading to 6.11.X


    apmiller
    • Annoyance

    In early October I upgraded to 6.11.3 and have had nothing but issues since. Once 6.11.3 was installed, the server would either hang while booting or freeze 3-5 minutes after I could log in. I attempted to roll back to 6.10.3 using a backup of my USB stick, but the machine would not see it as bootable. I then created a new 6.10.3 stick from scratch and copied over the directories from my backup zip file. I was able to get back into 6.10.3, but I now get numerous out-of-memory errors and have to either restart the server or kill and restart my VMs/dockers. Last night I tried 6.11.5 and had the same problem: the server would hang while booting or freeze quickly. I've again rolled back to 6.10.3 but now expect the memory errors to return. Attached are system diagnostics from the past month or so.

     

    MB:  Supermicro X11SSH-F-O
     

    serenity-diagnostics-20221016-2223.zip serenity-diagnostics-20221023-1848.zip serenity-diagnostics-20221030-1121.zip serenity-diagnostics-20221122-1641.zip




    User Feedback

    Recommended Comments

    Quote

    In early October I upgraded to 6.11.3 and have nothing but issues since

    What version were you running before that?

    Quote

    roll back to 6.10.3

    This sounds as if you were running 6.10.3 before. Was 6.10.3 running OK before the upgrade?

    Link to comment
    16 hours ago, apmiller said:

    I'm 90% sure it was 6.10.3 from the start, but there's a chance I was on 6.11.1.

    Whichever version it was, was it running OK before the upgrade?

    Link to comment

    System was rock solid until this bout of issues. After the first restore attempt in early October I noticed appdata was on two different pools; that wasn't intentional.

    Link to comment
    root@Serenity:~# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    rootfs          7.8G  788M  7.0G  10% /
    tmpfs            32M  772K   32M   3% /run
    /dev/sda1       3.8G  379M  3.4G  10% /boot
    overlay         7.8G  788M  7.0G  10% /lib/firmware
    overlay         7.8G  788M  7.0G  10% /lib/modules
    devtmpfs        8.0M     0  8.0M   0% /dev
    tmpfs           7.8G     0  7.8G   0% /dev/shm
    cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
    tmpfs           128M  332K  128M   1% /var/log
    tmpfs           1.0M     0  1.0M   0% /mnt/disks
    tmpfs           1.0M     0  1.0M   0% /mnt/remotes
    tmpfs           1.0M     0  1.0M   0% /mnt/rootshare
    /dev/md1        3.7T  3.2T  491G  87% /mnt/disk1
    /dev/md2        1.9T  1.4T  465G  76% /mnt/disk2
    /dev/md3        1.9T  1.4T  494G  74% /mnt/disk3
    /dev/md4        1.9T  1.4T  465G  76% /mnt/disk4
    /dev/md5        7.3T  522G  6.8T   8% /mnt/disk5
    /dev/md6        3.7T  2.4T  1.3T  65% /mnt/disk6
    /dev/md7        7.3T   52G  7.3T   1% /mnt/disk7
    /dev/md8        7.3T  1.6T  5.7T  22% /mnt/disk8
    /dev/md9        7.3T   52G  7.3T   1% /mnt/disk9
    /dev/nvme0n1p1  477G   39G  438G   9% /mnt/nvmecache
    /dev/sdn1       239G   38G  201G  16% /mnt/ssdcache
    shfs             42T   12T   31T  29% /mnt/user0
    shfs             42T   12T   31T  29% /mnt/user
    /dev/sdc1       699G  607G   92G  87% /mnt/disks/CAMDATA
    /dev/loop2      1.0G  4.4M  904M   1% /etc/libvirt
    overlay         477G   39G  438G   9% /var/lib/docker/overlay2/67b522cda6cedb9f678d8313f15612618f456bcf7c705de2f5eafe0d476e30a6/merged
    overlay         477G   39G  438G   9% /var/lib/docker/overlay2/177ad1508d01f8626bdb646a8900b9609f74997a7ecf64ac46db015724bb93b1/merged

     

    Link to comment

    I asked for those command-line results because I'm wondering whether you have something misconfigured that is filling rootfs. Actually, all we really need to see is the result of

    df -h /

    These latest diagnostics show a little more used than the result you posted, and the other diagnostics showed as much as 16% of rootfs used. That's not necessarily a problem unless it gets close to 100%. The OS lives in rootfs (which is in RAM), and if rootfs fills up, the OS has no room to work.
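    A quick sketch of the kind of check being described above, for anyone who wants to keep an eye on rootfs over time (nothing Unraid-specific; the 90% threshold is just an example value, not an official limit):

    ```shell
    #!/bin/sh
    # Warn when rootfs usage crosses a threshold.
    # THRESHOLD=90 is an arbitrary example, not an official limit.
    THRESHOLD=90
    # -P forces POSIX single-line output; field 5 is the Use% column.
    USED=$(df -P / | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
    if [ "$USED" -ge "$THRESHOLD" ]; then
        echo "WARNING: rootfs is ${USED}% full"
    else
        echo "rootfs usage OK at ${USED}%"
    fi
    ```

    Something like this could be run from cron to catch a slowly filling rootfs before it reaches 100%.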

     

    Quote

     created a new 6.10.3 from scratch and copied over the directories from my backup zip file

    Which directories? All you need is the contents of config folder from flash to get your configuration back.
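    For anyone repeating that restore, a minimal sketch of copying only the config folder onto a freshly created stick. The paths flash-backup.zip and /mnt/newflash are placeholders for your own backup file and the new stick's mount point:

    ```shell
    #!/bin/sh
    # Extract only the config folder from the flash backup
    # (flash-backup.zip is a placeholder name for your backup file).
    unzip -o flash-backup.zip 'config/*' -d /tmp/flash-restore
    # Copy its contents over the stock config/ on the new stick
    # (/mnt/newflash is a placeholder for where the new stick is mounted).
    cp -r /tmp/flash-restore/config/. /mnt/newflash/config/
    ```

    Copying only config/ brings back the array assignments, shares, and settings without dragging along anything from the old OS files.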

     

    Disable Docker and VM Manager in Settings, reboot in SAFE mode, and see if it is stable like that.

     

    Link to comment

    System has been stable since my last post. Still on 6.10.3 and now a bit leery of even trying 6.11.X. Here is the output that was requested:

     

    root@Serenity:~# df -h /
    Filesystem      Size  Used Avail Use% Mounted on
    rootfs          7.8G  859M  6.9G  11% /

     

    Link to comment




