User01

Members
  • Posts: 11
  • Joined

User01's Achievements

Noob (1/14)

Reputation: 1

  1. Hi @halestorm - Did you manage to find a root cause in your setup? Thanks,
  2. Hi Marc, Are you trying to join the UNRAID server to the domain? If so, I noticed something similar setting up a test server the other day. Under "SMB Settings", do you have the option "Enable SMB", and are you able to select "Yes (Active Directory)"? And when you do that and click on Apply, does the "Active Directory Settings" section appear below? (Note the array has to be stopped.) Hope that helps. (A small join-status check sketch follows after this list.)
  3. Thanks ChatNoir, however I think I might have worked out what is happening, at least in my scenario. I suspect that Unraid (v6.9.2 in this scenario) triggers a stop of services when the array is started and someone clicks on the "Join" button in the SMB Settings > Active Directory Settings section:

        16:23:59 Server emhttpd: Stopping services...
        16:23:59 Server emhttpd: shcmd (212): /etc/rc.d/rc.libvirt stop

     To date, I have only ever used that button when the array was stopped (along with Libvirt and Docker in a stopped state). In my scenario, clicking on Join also causes Unassigned Devices to dismount any drives it has mounted:

        16:24:09 Server unassigned.devices: Unmounting All Devices...

     My DC VM is located on a drive mounted with Unassigned Devices, hence the domain join does not work (Libvirt is stopped, stopping the VM), and the drive holding the VM is now dismounted, so the VM cannot be started again anyway. Prior to today, I had a second DC which was not hosted in Unraid, so I must only have started hitting this issue once that second DC failed. Reading up in the forums, it seems like you cannot host your only DC as a VM in Unraid and also have Unraid join the domain through that DC (a bit of chicken and egg, I guess). For now, I am going to use a temporary DC VM until I can get the second DC back. I think the only thing I would need to close this off is confirmation that, with the array started, clicking on Join under SMB Settings will trigger the VM and Docker services to stop and trigger Unassigned Devices to dismount drives (a small syslog-scan sketch to check for that sequence follows after this list). Thanks in advance!
  4. Hi, Resurrecting an old thread here. I just noticed that I am experiencing the same/similar behavior as the OP. Some key items I noticed in my logs when I click on the Join button (the UNRAID server was previously joined to the domain):

        16:23:57 Server sshd[31346]: Received signal 15; terminating.
        ....
        16:23:59 Server emhttpd: Stopping services...
        16:23:59 Server emhttpd: shcmd (212): /etc/rc.d/rc.libvirt stop
        16:24:00 Server root: Domain {GUID} is being shutdown
        16:24:01 Server root: Waiting on VMs to shutdown.
        16:24:01 Server root: Stopping libvirtd...
        16:24:04 Server root: Stopping virtlogd...
        16:24:05 Server root: Stopping virtlockd...
        16:24:06 Server emhttpd: shcmd (213): umount /etc/libvirt
        .....
        16:24:07 Server root: stopping dockerd ...
        16:24:08 Server root: waiting for docker to die ...
        .....
        16:24:09 Server unassigned.devices: Unmounting All Devices...
        16:24:09 Server unassigned.devices: Unmounting partition 'nvme0n1p1' at mountpoint '/mnt/disks/SSD1'...
        16:24:09 Server unassigned.devices: Synching file system on '/mnt/disks/SSD1'.
        16:24:09 Server unassigned.devices: Unmount cmd: /sbin/umount '/dev/nvme0n1p1' 2>&1
        16:24:09 Server kernel: XFS (nvme0n1p1): Unmounting Filesystem
        16:24:09 Server unassigned.devices: Successfully unmounted 'nvme0n1p1'
        16:24:09 Server emhttpd: shcmd (217): /etc/rc.d/rc.samba stop
        ....

     Within this environment, the only AD DC left is running as a VM within UNRAID. With the DC VM started, I can go to the SMB Settings page, find the UNRAID server listed as not joined, and click on Join (domain). At that point the above happens. Unfortunately, I think this is triggering Unassigned Devices to unmount an SSD, which in turn causes the DC VM to go down. It may have done this previously, but I haven't noticed UD dismounting drives before when clicking on Join domain. Any ideas/thoughts would be welcome. Thanks!
  5. Hi, I didn't follow it up. Just tried with 6.8.3 and still experiencing the same behavior. I just manually boot if I stop and start the array at the moment.
  6. Hi, I get the same experience as L0rdRaiden using Unraid 6.8.3 and Hyper-V 2019, with nested virtualisation enabled. Windows Server 2019 boots, and I can install the Hyper-V role successfully; the system then reboots to complete the role installation. It looks like a crash occurs and the system boots to the recovery screen (asking for language). Any thoughts would be appreciated. Thanks.
  7. Hi, I have the CP1500EPFCLCD, which is slightly different to that model. The UPS is connected via USB to the UNRAID host and the native UPS functionality works well. I have some other hosts (running apcupsd) using the UNRAID server as the master and they all communicate well with UNRAID (a small status-query sketch follows after this list). I have also set the "Turn off UPS after shutdown" option to Yes. On my UPS, this starts a 60-minute shutdown timer on the UPS when it is sent the shutdown signal. In my tests, the UPS usually has enough capacity left to run for those 60 minutes, as very little load is applied at that point, and it then shuts down gracefully. When power is restored, it wakes up and starts to provide power to all connected devices. Hope this helps. Regards,
  8. Hi, 6.8.2, no -i br0, and experiencing the same issue. Stopping and starting the array seemed to stop it spiking a CPU for me. Would be good to know the root cause. Thanks,
  9. Hi, I noticed that the VM is autostarting, but only after a full system reboot. If the array is stopped and started, the VM does not autostart. Is this how the autostart functionality is meant to work? Thanks,
  10. Hi, I'm also experiencing a similar issue. I'm running 6.8.1 and setting a VM to autostart for the first time. I also checked out the autostart directory (/var/libvirt/qemu/autostart/), found links in there (they looked valid) and removed them, reset the "autostart" option, and found new links were created in the autostart folder. However, the VM service still does not start the VMs. I checked the libvirt log and could only find the following error: "error : qemuMonitorIO:619 : internal error: End of file from qemu monitor". Using Krusader, I had a look at the link files in the autostart folder; the "Link to:" field lists the source path but also has " - (broken)" stated next to the link. The link points to the correct path for the file, so I am not sure whether this is an issue or not (a small symlink-check sketch follows after this list). Note: the VMs start manually without any issues, and other services are starting without any issues. The VMs are hosted on a disk mounted using Unassigned Devices. I cannot see any errors in the main log. I am attempting to trigger autostart by stopping the array and then starting it again. Anyway, any thoughts/recommendations would be welcome. TIA!
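
For the Active Directory question in post 2 above: if it is unclear whether the join actually succeeded, the state can also be cross-checked from a shell on the Unraid host. A minimal sketch, assuming the standard Samba tools (net and wbinfo) are available there, which they normally are when SMB/AD support is enabled; this is not part of the Unraid web UI, just a way to confirm what the SMB Settings page reports.

    #!/usr/bin/env python3
    # Cross-check the server's Active Directory join state using Samba tools.
    # Assumption: `net` and `wbinfo` are on the PATH of the Unraid host.
    import subprocess

    def run(cmd):
        # Run a command and return its exit code plus combined output.
        proc = subprocess.run(cmd, capture_output=True, text=True)
        return proc.returncode, (proc.stdout + proc.stderr).strip()

    if __name__ == "__main__":
        # `net ads testjoin` exits 0 when the machine account is valid.
        code, out = run(["net", "ads", "testjoin"])
        print(f"net ads testjoin -> exit {code}: {out}")

        # `wbinfo -t` verifies the machine trust secret via winbind.
        code, out = run(["wbinfo", "-t"])
        print(f"wbinfo -t -> exit {code}: {out}")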
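
For posts 3 and 4 above: one way to confirm the suspected sequence (emhttpd stopping services, libvirt and Docker going down, then Unassigned Devices unmounting its drives) is to scan the syslog for the markers that appear in the excerpts quoted there. A minimal sketch; the log path and marker strings are assumptions taken from those excerpts and may need adjusting on another system.

    #!/usr/bin/env python3
    # Scan the Unraid syslog for the event chain seen after clicking "Join".
    # Assumption: the syslog lives at /var/log/syslog and contains the same
    # marker strings as the excerpts quoted in posts 3 and 4 above.

    LOG_PATH = "/var/log/syslog"
    MARKERS = [
        "emhttpd: Stopping services",
        "rc.libvirt stop",
        "stopping dockerd",
        "unassigned.devices: Unmounting All Devices",
        "rc.samba stop",
    ]

    def main():
        with open(LOG_PATH, errors="replace") as fh:
            for line in fh:
                # Print any line matching an expected marker, keeping its
                # timestamp so the ordering of events is visible.
                if any(marker in line for marker in MARKERS):
                    print(line.rstrip())

    if __name__ == "__main__":
        main()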
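
For the UPS setup in post 7 above: the other hosts talk to the Unraid master over apcupsd's network (NIS) interface, and that same interface can be polled directly to verify the master is reachable. A minimal sketch, assuming the usual NIS port 3551 and a placeholder hostname; it speaks the same length-prefixed protocol that apcaccess and the slave hosts use.

    #!/usr/bin/env python3
    # Poll UPS status from an apcupsd master over its NIS port (TCP 3551).
    # Assumptions: the master exposes the standard apcupsd network interface,
    # and "unraid-server" is a placeholder hostname for your setup.
    import socket
    import struct

    HOST, PORT = "unraid-server", 3551

    def recv_exact(sock, size):
        # Read exactly `size` bytes from the socket.
        data = b""
        while len(data) < size:
            chunk = sock.recv(size - len(data))
            if not chunk:
                raise ConnectionError("connection closed early")
            data += chunk
        return data

    def query_status(host, port):
        with socket.create_connection((host, port), timeout=5) as sock:
            # Each message is a 2-byte big-endian length followed by the payload.
            cmd = b"status"
            sock.sendall(struct.pack(">H", len(cmd)) + cmd)
            records = []
            while True:
                size = struct.unpack(">H", recv_exact(sock, 2))[0]
                if size == 0:  # a zero-length record ends the reply
                    break
                records.append(recv_exact(sock, size).decode(errors="replace").strip())
            return records

    if __name__ == "__main__":
        for line in query_status(HOST, PORT):
            print(line)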
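
For the autostart problem in post 10 above: a quick way to double-check what Krusader reports is to list the links in the autostart directory and see whether each one actually resolves. A minimal sketch; the directory path is the one quoted in that post, and on other systems the equivalent directory may live elsewhere (for example under /etc/libvirt/qemu/autostart).

    #!/usr/bin/env python3
    # List libvirt VM autostart symlinks and report whether each one resolves.
    # Assumption: the autostart directory path quoted in post 10 above.
    import os

    AUTOSTART_DIR = "/var/libvirt/qemu/autostart/"

    def main():
        for name in sorted(os.listdir(AUTOSTART_DIR)):
            link = os.path.join(AUTOSTART_DIR, name)
            if not os.path.islink(link):
                print(f"{name}: not a symlink")
                continue
            target = os.readlink(link)
            # os.path.exists() follows the link, so a dangling link reports False.
            status = "ok" if os.path.exists(link) else "BROKEN (target missing)"
            print(f"{name} -> {target} [{status}]")

    if __name__ == "__main__":
        main()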