JohnBee

Members
  • Posts: 24
  • Joined

  • Last visited

  • Member Title: Enthusiast

  • Gender: Male
  • Location: Earth
  • Personal Text: "Creativity is intelligence having fun." - Albert Einstein

JohnBee's Achievements

Noob (1/14)

Reputation: 0

Community Answers (1)

  1. I would like to update this thread and report that I have identified the cause of this behaviour and found a suitable solution. The first thing to note is that this behaviour seems specific to reassigning the relevant paths and directories to alternative devices, such as NVMe or SSD drives. The second is that the Docker and VM services will continue to point to their original paths until they are stopped or cycled. If this is not done, the Docker and VM services can report false information as a result of polling the original paths and folders. The solution, at this time, therefore appears to be cycling both the Docker and VM services after the changes to their respective devices are made. Hope this helps, and thanks to JorgeB for following up on this. PS. It would seem that the ideal practice is to cycle said services following device changes.
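The "cycle the services" fix described above can be sketched as a short shell dry run. The `/etc/rc.d/rc.docker` and `/etc/rc.d/rc.libvirt` paths are the usual Unraid service scripts, but they are an assumption here, so the snippet only prints the stop/start sequence for review rather than executing anything:

```shell
# Dry-run sketch: print the service-cycle sequence to run after
# repointing the appdata/domains/system shares. The script paths are
# assumptions; verify they exist on your Unraid install first.
DOCKER_RC=/etc/rc.d/rc.docker    # Docker service script (assumed path)
LIBVIRT_RC=/etc/rc.d/rc.libvirt  # VM/libvirt service script (assumed path)

PLAN="$DOCKER_RC stop
$DOCKER_RC start
$LIBVIRT_RC stop
$LIBVIRT_RC start"

echo "$PLAN"   # review, then run each line on the server itself
```

Toggling Docker and VM Manager off and on from the Settings pages of the web UI should achieve the same re-read of the updated paths.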
  2. Here is what I have found. [Scenario 1] If, upon installing Unraid, the appdata, domains and system shares (cache pool settings) are set to the NVMe drive prior to everything else, the Docker and VM images will drop from the management panels following a restart of the array or of the Unraid server. [Scenario 2] If the Docker and VM images are created prior to changing appdata, domains and system cache to the NVMe drive, the Dockers and VMs will not drop from their respective management panels following array restarts or a restart of the Unraid server. That said, it would appear that there are discrepancies in setting up and/or changing the system shares (appdata, domains and system) before versus after specific server activities, though I have yet to investigate this further...
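One way to tell which of the two scenarios a server is actually in is to ask which device currently backs each system share. A minimal sketch, assuming the standard `/mnt/user` user-share mounts; on any other machine the paths simply report as missing:

```shell
# Print the filesystem/device backing each system share, to confirm
# whether appdata/domains/system really moved to the NVMe pool.
# /mnt/user/... are the standard Unraid user-share mounts (assumption).
OUT=""
for p in /mnt/user/appdata /mnt/user/domains /mnt/user/system; do
  if [ -d "$p" ]; then
    dev=$(df -P "$p" | awk 'NR==2 {print $1}')   # device column of df
    OUT="$OUT$p -> $dev
"
  else
    OUT="$OUT$p -> (missing)
"
  fi
done
printf '%s' "$OUT"
```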
  3. I made progress on this after reinstalling Unraid from scratch and starting with a clean slate. To the best of my abilities, it would seem that the issue is linked to assigning the Docker and VM image files to a separate NVMe drive, which in turn leads to the system dropping them from their respective management panels. In addition, and equally perplexing, I can only reproduce this once; after that, both the Dockers and VMs are retained regardless of whether the array is cycled or Unraid is restarted, which seems rather odd to say the least. Therefore, I would like to cover the recommended method of provisioning an NVMe drive for Docker and VM images, so as to determine whether this can be avoided, as well as whether this is a genuine bug. Thanks for your continued participation on this, btw. I do appreciate the help.
  4. Here's another apparent discovery regarding this issue. If I restore the VM by mounting the image within a newly created shell, the VM will now persist through array restarts, shutdowns, reboots, etc. That said, I am no closer to identifying the culprit behind this most intriguing behavior.
  5. This seems like very odd behavior. The question now is: why would Unraid conclude that the images or storage had been deleted following a stop and start of the array? That said, do you have any tips on how I might close in on this particular bug? I would add that this is a test server, and all of the disks can be deleted, formatted, etc. NB. I was able to reproduce this by rebooting the NAS as well; all VMs and Dockers were removed from the tabs.
  6. The weird part is that this is a fresh install, so after initial configuration I proceeded to create a Docker, then stopped and started the array, and the Docker disappeared from the manager window. The thing is, this was happening in my previous install as well, which prompted me to scrap the server configuration and start over. Really not sure what could cause this, and it's got me stumped. I guess my only recourse is to nuke the disks and see if that makes a difference, though all of the metrics say the disks are at 100%, which is weird.
  7. I would add that this behavior also extends to Dockers, and persists across reinstallation of my Unraid OS USB. The only thing left is the drives, which would appear to be the culprits in this behavior.
  8. I have since reinstalled, but found the same issue. Here's the diag file: bertnas-diagnostics-20230524-1211.zip
  9. Having installed a fresh Unraid server, I noticed the screen is now blank during the creation of Dockers and VMs. I know this was not the case before, and I have not changed anything in terms of hardware, so I'm wondering if anyone knows what might cause this? PS. The screen remains blank until the conclusion of the installation process, at which point the completed script output appears with the [done] button...
  10. I managed to recreate this. After re-creating a VM, I stopped and started the array and found the VM gone from the manager listing. That said, the .img and relevant data appear to be intact, though I cannot for the life of me figure out why the VM keeps disappearing from the list.
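The observation that the .img file survives even though the panel entry vanishes can be checked directly on disk. A sketch assuming the default Unraid image locations (docker.img under the system share, vdisk1.img under each VM's folder in domains); adjust to your own layout:

```shell
# Confirm the image files still exist on disk even when the management
# panels no longer list them. Paths are the usual defaults (assumption).
# Note: if no VM folders exist, the unexpanded glob simply reports MISSING.
OUT=""
for img in /mnt/user/system/docker/docker.img /mnt/user/domains/*/vdisk1.img; do
  if [ -e "$img" ]; then
    OUT="$OUT$img: present
"
  else
    OUT="$OUT$img: MISSING
"
  fi
done
printf '%s' "$OUT"
```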
  11. NB. This particular bug is specific to Version 6.11.5. Problem: after restarting Unraid or the array, some (or all) Docker and VM images are dropped from their respective management panels. After a little investigating, I discovered that this behaviour is specific to assigning the system shares (appdata, domains and system) to an NVMe drive prior to and following the initial Unraid server setup. After that, upon creating Docker and VM images, a restart of the array or of the Unraid server will result in the disappearance of said images from their respective management panels.
  12. Moving a share folder under rc6 leads to 'Share 'appdata' has been deleted', which seems like odd behavior. I tested this on a fresh install and found the same behavior.
  13. Hi, and thanks for answering. The answer is yes: I have run memtest86, with 0 errors. Here is my list of things tried:
     1. Tried changing RAM: different brands, 32GB vs 64GB
     2. Tried different USB installation sticks: 16GB, 32GB, 64GB, each a different brand
     3. Tried removing and/or swapping HDD drives and/or the NVMe (cache) drive
     4. Disabled C-states in the BIOS
     5. Ensured all forms of OC/boost/XMP profiles were turned off
     Having said that, I managed (by sheer miracle) to install the latest OS update. I say miracle, as the system usually freezes up before I can do anything, yet this time it was able to complete the update. The OS is now on the latest RC5 and, lo and behold, the system has been up longer than I have ever seen, and it is actually building the parity drive. So I guess there's that. That said, why isn't there a way to collect logs and identify the cause of these incessant lockups? Do we have to pay for a license to get support, or is this the extent of the support given for this particular software?
  14. Sorry for the confusion. I thought the issue was resolved, as I had found errors on one of my drives, but it appears that was not the case, as Unraid is now frozen again. Needless to say, my experience with Unraid is not proving to be a positive one at all.
  15. I cannot seem to build an array without the system freezing up or shutting down with kernel panics. That said, is there any way to identify or troubleshoot this issue when the system freezes up before I can even navigate the UI?
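On the "no way to collect logs" point raised in the last few posts: Unraid keeps its syslog in RAM, so a hard lockup wipes the evidence. The built-in route is Settings > Syslog Server, which (to the best of my recollection) includes a mirror-to-flash option; as a manual fallback, the live log can be copied to the flash drive by hand. A dry-run sketch with assumed paths:

```shell
# Dry-run sketch: build (but only print) a command that copies the live
# syslog to the flash drive, so it survives a hard lockup or panic.
# /var/log/syslog and /boot are assumed standard Unraid locations.
STAMP=$(date +%Y%m%d-%H%M%S)
CMD="cp /var/log/syslog /boot/logs/syslog-$STAMP.txt"
echo "$CMD"   # review, then run it (after mkdir -p /boot/logs) on the server
```

A log snapshot taken just before a risky operation (array stop/start, update) gives something to diff against when the box next freezes.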