Cirdan

Members
  • Posts

    9
  • Joined

  • Last visited


Cirdan's Achievements

Noob

Noob (1/14)

1

Reputation

  1. I'm trying to mount one of my shares in an Ubuntu 18.04 server VM. Unfortunately, adding it via fstab causes my system to go into emergency mode on each boot. My share (/mnt/user/hemlock) has been added to the VM with tag 'hemlock'. Running 'sudo mount -a' works correctly and shows no errors, and 'sudo mount -t 9p hemlock /home/<user>/hemlock' also works. However, on boot-up I consistently see this: Pressing Control-D and continuing seems to work anyway, but having that prompt is still an issue on a server. I cannot find anything related to mounting in 'journalctl -xb', but perhaps I am looking for the wrong thing. Here is my fstab entry: hemlock /home/<user>/hemlock 9p trans=virtio,version=9p2000.L,nobootwait,rw,_netdev 0 0 Thank you for any help, and please let me know if you need any other information.
  2. I just tried starting the VMs with lower amounts of memory. It doesn't seem to make a difference even when only 2GB of my 16GB are allocated. In that scenario the usage is only shown to be around 21%. I'm including a diagnostics zip from those most recent tests. lindon-diagnostics-20180709-0634.zip
  3. All of my Linux VMs are now waiting at the TianoCore splash screen for about a minute before continuing to boot. They seem to function correctly once they boot. I initially thought it was an issue with my regular Kubuntu VM, but it is happening to an Arch VM that I just created today as well. It also does not occur when connecting over VNC, only with GPU passthrough of my R7 370. It does not occur with the Windows 7 VM. No errors show up in the unRAID VM logs, so I am unsure what to do at this point. I just set up the Arch installation and have only installed grub and the mesa drivers via pacman. Thanks for any help! lindon-diagnostics-20180708-1509.zip
  4. This afternoon I tried to set up a Debian VM on my unRAID server. I used the Debian preset, attached the 9.4.0 iso, and added a 32GB disk; pretty standard operation. I also attached my AMD R7 370 as a graphics card. However, upon boot-up, the OS would freeze every time I selected "install" or "graphical install". I changed over to VNC and was able to install it fine that way, but even now I cannot start the VM with an external GPU. It works fine with VNC but stops right after the BIOS splash screen at this message: Loading Linux 4.9.0-6-amd64 ... Loading initial ramdisk ... unRAID doesn't seem to have any errors and continues to show the VM running as usual. It shows this behavior even in recovery mode. At this point I'm not really sure what to do. Any suggestions?
  5. I reinstalled and entirely updated Windows 7 without the card installed. During this time there were no issues with crashes. I then reinstalled and reattached the USB card. Windows actually recognized the card and was able to install a driver from Windows Update. However, no actual USB devices were recognized, and upon reboot it goes back to having an extremely laggy mouse and audio, still without the USB support. I also noticed that when the card is installed, the computer hangs at BIOS for a minute or two before booting into unRAID. Could this mean that I need to configure something in BIOS, or is there a hardware issue with the card? Again, I would very much appreciate help.
  6. I recently set up a Windows 7 VM and have been having two issues using it. I'm not sure if they are related, but this post has led me to believe that they could be, so I am creating a single thread. I am running Windows 7 Enterprise (Windows Update has done all its magic) and unRAID 6.4.0. I have included logs and information from a run where both errors occurred. My first issue is that unRAID's VM manager crashes when I shut down the VM. It does not crash all of the time; I'd guess around 70%. When this happens, the WebGUI and other services continue to work; however, the VMs tab is inaccessible and I cannot start VMs from the Dashboard. Restarting the server fixes the issue until I shut down Windows 7 again. Secondly, Windows 7 will not detect my USB PCIe card. There is absolutely no mention of it in Device Manager. Part of my flash configuration is shown below, and I have checked the box in the VM editor. The one indication that something is happening is that the VM becomes very stuttery when the card is attached. Audio is choppy and the mouse lags.
     label unRAID OS
       menu default
       kernel /bzimage
       append vfio-pci.ids=1b73:1100 initrd=/bzroot
     label unRAID OS GUI Mode
       kernel /bzimage
       append vfio-pci.ids=1b73:1100 initrd=/bzroot,/bzroot-gui
     I would greatly appreciate any help with either of these issues. Please let me know if you need any more information. lindon-diagnostics-20180126-0659.zip
  7. My unRAID box recently started taking a long time to start the WebGUI and auto-start the VM. I had made no hardware or software changes at the time. It used to be functional within ~2-3 minutes, but it now takes ~8 minutes for the Windows VM to POST and start working. The CLI output shows the login screen within the usual amount of time and, after the VM finally boots, everything functions correctly. The WebGUI is inaccessible during this time. I'm running 6.3.5 and have attached my logs below. Any help is appreciated! lindon-diagnostics-20180109-1758.zip
  8. That does seem to have been the issue. Works fine in a USB 2.0 port. Is there a way to prevent this while keeping the drive in USB 3.0? I know there would be little to no performance improvement; I'm simply curious what exactly is causing this and how it could be fixed. Either way, thank you Squid!
  9. I recently had my motherboard repaired and rebuilt my server with the replacement yesterday. I reformatted and installed unRAID onto the flash drive, but the contents of the other disks were not changed at all. The server functions as normal on startup and successfully initializes my autorun dockers (currently just Plex) and VMs (Windows 10). These, and the shares, continue to function as normal; however, the configuration through the WebGUI seems to lose track of certain paths on the array. Within a few minutes, I can no longer access the VMs and Docker tabs, and they are disabled in settings. Other configuration files are also unavailable and Community Applications does not load. Both settings pages, for VMs and dockers, claim that certain paths do not exist and therefore cannot be enabled. The VMs and dockers that were started, however, continue working as expected. Fix Common Problems says that it detected an unclean shutdown and recommends a UPS. This is the case despite there having been no unclean shutdown, as far as I know. The issue remains even after rebooting or shutting down as normal from the unRAID GUI. FCP also has some errors (in the application, not from scanning) about files that are not found. I have not made many changes to the configuration yet since it is a new flash; I have added a PCI device in the flash configuration. Could this have something to do with the new flash not communicating properly with existing app data? Should I reset permissions? Help is appreciated. I have attached the complete diagnostics zip below. It was taken only a few minutes ago and the issue is currently present on the server. Please let me know if you require more information. lindon-diagnostics-20171218-0643.zip
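Regarding the fstab entry in post 1: a minimal sketch of a revised line, assuming the culprit is the 'nobootwait' option. That option dates from the Upstart era; systemd (used by Ubuntu 18.04) does not recognize it, and a 9p mount that is attempted before the virtio transport is ready then drops the boot into emergency mode. 'nofail' is the systemd-recognized option that lets the boot continue if the mount fails.

```
# /etc/fstab — sketch only; same tag and mount point as the original entry.
# 'nobootwait' (Upstart-era, not understood by systemd) replaced with
# 'nofail' so a failed mount no longer halts the boot in emergency mode.
hemlock /home/<user>/hemlock 9p trans=virtio,version=9p2000.L,nofail,rw,_netdev 0 0
```

After editing, 'sudo mount -a' should still mount the share cleanly, and a reboot should no longer stop at the emergency-mode prompt.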
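As a quick sanity check on the flash configuration quoted in post 6, the sketch below counts the vfio-pci stub entries to confirm both boot labels carry the same 'vfio-pci.ids=1b73:1100' kernel argument. It writes a temp copy of the config so the check is self-contained; on the server itself the file would typically be /boot/syslinux/syslinux.cfg (an assumption about the flash layout).

```shell
# Write a sample copy of the flash config quoted in the post.
# (/tmp stands in for /boot/syslinux/syslinux.cfg on the real server.)
cat > /tmp/syslinux.cfg <<'EOF'
label unRAID OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=1b73:1100 initrd=/bzroot
label unRAID OS GUI Mode
  kernel /bzimage
  append vfio-pci.ids=1b73:1100 initrd=/bzroot,/bzroot-gui
EOF

# Count the lines carrying the stub entry — should print 2,
# one per 'append' line, so both boot modes isolate the card.
grep -c 'vfio-pci.ids=1b73:1100' /tmp/syslinux.cfg
```

If the count is anything other than one per boot label, one of the modes would boot without the controller stubbed, which could explain the card behaving differently across reboots.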