rvijay007

Members
  • Posts: 44
  • Joined

  • Last visited

  • Gender: Undisclosed

rvijay007's Achievements

Rookie (2/14)

2 Reputation

  1. My server is on the exception list and the blocker doesn't show as processing on the server's address/webUI.
  2. I have moved the system share to the Cache. One thing I noticed is that my cache is reporting 90.4GB used, but I have a VM that alone is 400+GB, as shown in the screenshots below (see the size-check sketch after this list). Is this a clue as to what may be going on, since the values aren't even being reported correctly, or is this a different issue?
  3. Looks like other users are experiencing this issue as well, as seen in this new Reddit thread:
  4. No, it doesn't. Steps performed (spelled out as shell commands after this list):
     1. Clicked the button to explore the disk contents in the webUI
     2. Clicked back to go back to the Main screen
     3. Blank disks, as in the original post photo
     4. Since the webUI terminal doesn't work, I ssh'd into the server and typed touch /mnt/disk1/junk
     5. Checked the webUI; still nothing appears
     6. Confirmed over ssh that the file exists, then deleted the file
     7. The webUI still doesn't show anything on Main
  5. Anytime I explore a disk's contents within the webUI and then go back to the Main tab, none of my disks show up, and my unRAID becomes more unstable. Trying to open the webUI Terminal doesn't launch it (white screen), but I can navigate to other tabs. Trying to download diagnostics just seems to hang the UI, though I can hit Esc to quit the window. I can still launch VMs and connect to shares via SMB, so things are working. Manually invoking the Mover never seems to move any files, whereas hitting the "Move" button always used to invoke the Mover with older versions of unRAID (the equivalent CLI checks are sketched after this list). The UI issue resolves when I reboot/shut down the server and restart it, but it reoccurs as soon as I explore any disk. SMARTCTL doesn't show any issues on any of the disks. Given 10-20 minutes, the disks finally refresh and show up, but the issue immediately happens again if I explore into any disk. Does anyone know what is going on here? Thanks in advance! alexandria-diagnostics-20240325-1247.zip
  6. It's been up to date (v2024.01.11.1434), but I see my server has been up longer than the latest version has been out. Will reboot - thanks!
  7. Received this error today within Fix Common Problems, but I am completely unsure why as I haven't received errors on my system in a long time. I've attached my diagnostics; can anyone help me? alexandria-diagnostics-20240119-1055.zip
  8. Did you rerun the macinabox script after changing the VM definition with your new logical cores?
  9. Hi all, I set up a Mojave VM using macinabox, and it all works with the base installation. However, VNC is slow, so I thought I could pass through my Intel HD Graphics 530. I replaced VNC with this iGPU, but it gets stuck in boot loops. I tried adding it as a second GPU as well so I could see via VNC what was going on, and saw that it just gets stuck in boot loops, saying error. Does anyone know how I can successfully pass through my Intel HD Graphics GPU (the usual PCI/IOMMU checks are sketched after this list), so that screen sharing into the Mac from my other laptops is a better experience? Thanks
  10. I don’t have a UPS. I believe it’s set to auto boot, since the computer always restarts. Are there any unRAID plugins to monitor CPU temperature and/or power usage throughout the box? (A shell-level alternative is sketched after this list.)
  11. Do you know which potential issues that lead to this sort of behavior don't get logged? Basically, by subtracting out the issues that you know would have been logged, the absence of a log entry could narrow things down to other potential causes.
  12. I’m not sure I understand. By what you are saying, my VMs should never have worked due to the RAM definition, but they were always working concurrently until I put the second GPU in and added it to the VM definition. If I took the second GPU definition out, the VMs continued to work concurrently. Why did the VMs ever work concurrently based on what you wrote?
  13. I should have plenty of RAM - 64GB in the system, one VM that defines 8GB and another that defines 32GB. Nothing else is running that requires intense RAM usage, so I'm not sure what is occurring.
  14. Thanks everyone. Though I don't really understand why, the issue seemed to resolve itself when I changed the Ubuntu VM definition to use the same amount of Max Memory as Initial Memory. That is, I changed it from Initial 16GB / Max 32GB to Initial 16GB / Max 16GB (the equivalent virsh commands are sketched after this list). There is 64GB of RAM in my box, and those were the only two VMs running, with nothing else of note using RAM, so there should have been plenty. I'm not entirely sure why that was blocking concurrent GPU access, and as mentioned earlier, I could make the VMs run concurrently if I removed the GPU definition from only one of the VMs, even with different Initial/Max memory specifications on the Ubuntu VM. The CPU core definition didn't make a difference; both VMs could share cores/threads, and they still work concurrently after the Initial/Max memory change. Does anyone know why the memory specification allowed the system to work? Thankful to the community for helping me through this issue!
  15. It happened again about an hour ago. I've captured the syslog file and attached it here. I think the reboot occurred at or around this timestamp: Jan 7 19:48:57 (a grep sketch for pulling the surrounding lines follows below). Hopefully there is something in here that will prove useful in debugging the issue. syslog.log
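
Regarding post 2 above (cache reporting 90.4GB used despite a 400GB+ vdisk): a minimal check over ssh, assuming the vdisk lives under a path like /mnt/cache/domains/MyVM/vdisk1.img (illustrative, not the actual path), to compare the file's apparent size against the blocks actually allocated. A sparse or thin-provisioned image can explain the gap.

    # Reported filesystem usage on the cache pool
    df -h /mnt/cache

    # Apparent size vs. blocks actually allocated for the vdisk
    # (a sparse 400GB image may only occupy tens of GB on disk)
    ls -lh /mnt/cache/domains/MyVM/vdisk1.img
    du -h  /mnt/cache/domains/MyVM/vdisk1.img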
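
Regarding post 4 above: the write test described there, spelled out as shell commands. This assumes ssh access and that /mnt/disk1 is the array disk being checked.

    # Create a test file directly on the array disk
    touch /mnt/disk1/junk

    # Confirm it exists from the shell even if the webUI shows nothing
    ls -l /mnt/disk1/junk

    # Clean up
    rm /mnt/disk1/junk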
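
Regarding post 5 above: when the webUI terminal and diagnostics download hang, the same checks can usually be run over ssh. A sketch assuming a stock unRAID install; the mover script location and the behaviour of the diagnostics command can vary between releases, and the disk device name is a placeholder.

    # Invoke the mover manually (path on stock installs; may differ by release)
    /usr/local/sbin/mover

    # Watch recent syslog entries for mover activity
    tail -n 50 /var/log/syslog

    # Collect diagnostics from the CLI (typically written under /boot/logs)
    diagnostics

    # SMART health summary for one disk (replace sdb with the device to check)
    smartctl -H /dev/sdb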
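
Regarding post 9 above: before binding the Intel HD Graphics 530 to a VM, it is worth confirming its PCI address and IOMMU group, since an iGPU often shares a group with other devices. A sketch of the usual checks; device addresses will differ per system.

    # Find the iGPU's PCI address and vendor:device IDs
    lspci -nn | grep -iE 'vga|display'

    # List every device by IOMMU group to see what the iGPU is grouped with
    for d in /sys/kernel/iommu_groups/*/devices/*; do
        n=${d#*/iommu_groups/}; n=${n%%/*}
        printf 'IOMMU group %s: ' "$n"
        lspci -nns "${d##*/}"
    done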
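
Regarding post 10 above: CPU temperatures can be read from the shell with the lm-sensors tools, and community plugins (e.g. the Dynamix system temperature plugin) surface the same readings in the webUI. A sketch, assuming the sensors tools are installed on the box.

    # One-time detection of available sensor chips (interactive; answer the prompts)
    sensors-detect

    # Read current CPU/motherboard temperatures
    sensors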
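
Regarding post 14 above: the Initial/Max memory fields correspond to the currentMemory and memory elements of the libvirt domain XML, so the same change can be inspected and made with virsh. A sketch assuming the VM is named Ubuntu (placeholder) and a libvirt version that accepts scaled sizes like 16G; the webUI fields accomplish the same thing.

    # Show the memory elements in the domain XML
    virsh dumpxml Ubuntu | grep -i memory

    # Set maximum and initial memory to the same value, persisted in the config
    virsh setmaxmem Ubuntu 16G --config
    virsh setmem    Ubuntu 16G --config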
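
Regarding post 15 above: a quick way to pull the lines around the suspected reboot time out of the captured syslog. The timestamp comes from that post; note that syslog pads single-digit days with an extra space.

    # Show 20 lines of context around the suspected reboot at Jan 7 19:48
    grep -n -C 20 'Jan  7 19:48' syslog.log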