nivrem

Members • 8 posts

  1. Great! Parity was definitely valid before, as a check was done a few days ago, and the new drive was precleared using the Unassigned Devices plugin over USB while still in the enclosure. Thanks very much indeed!
  2. Hey, thanks for the reply. I understand I'll be able to add the precleared drive without clearing it again; I'm just wondering if I can stop the parity sync on the current drives early, once the sync has progressed beyond the size of the largest removed drive.
  3. I have a single 18TB parity drive, and the largest drive in the array is 8TB. I have removed a 1TB and a 2TB drive and set parity to rebuild. I have a precleared 16TB drive I'd like to add to the array, and our ISP have indicated that our internet connection will be down today, so I'd like to install the drive while the internet is unavailable to minimize downtime. Is it safe to wait until the parity rebuild gets above 8TB, then stop the sync, add the new precleared drive and tell Unraid that parity is valid? My understanding is everything >8TB will be zeros on parity anyway (there's a sketch of this after the post list).
  4. I moved from a docker image to a docker directory, which is on my 1TB cache drive. The docker directory is about 40GB, which is fine. However, I have Mover Tuning set to move files off the cache above 75%, which means the drive is often around 70% full. Sadly, Unraid takes this to mean that my docker image is over 70% full, and the "Docker image high disk utilization" notification triggers repeatedly even though there are around 300GB free on the drive (the second sketch after this list illustrates the mismatch). Is there any way to disable this notification, or to raise the threshold at which it triggers? I have looked in Docker settings with Docker stopped, and in Settings > Notification Settings. I guess I could change Mover Tuning to move at a lower threshold, but I would rather keep it as is if possible. Many thanks for any help!
  5. Oh my god, after tearing my hair out for hours with this, it turned out to be Windows Fast Startup. I tried passing the drive through to a Linux VM and noticed it had the NTFS unclean-shutdown error. Turning off Fast Startup on both Windows VMs seems to have fixed the issue! Presumably Fast Startup hibernates the kernel session (including the mounted NTFS state), so each VM resumed with its own cached view of the filesystem instead of reading what was actually on the disk.
  6. Diagnostics, in case they're useful: flan-diagnostics-20210910-1242.zip
  7. Here is the XML for one of the VMs. The other is identical apart from the location of the primary vdisk. They usually have a GPU passed through, but I've turned that off for testing.

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm'>
       <name>win10testing</name>
       <uuid>e6cad086-eabe-bff6-b908-45e60df64fbb</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>13107200</memory>
       <currentMemory unit='KiB'>13107200</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>5</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='1'/>
         <vcpupin vcpu='1' cpuset='2'/>
         <vcpupin vcpu='2' cpuset='3'/>
         <vcpupin vcpu='3' cpuset='4'/>
         <vcpupin vcpu='4' cpuset='5'/>
       </cputune>
       <os>
         <type arch='x86_64' machine='pc-i440fx-5.1'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/e6cad086-eabe-bff6-b908-45e60df64fbb_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough' check='none' migratable='on'>
         <topology sockets='1' dies='1' cores='5' threads='1'/>
         <cache mode='passthrough'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/domains/win10testing/vdisk1.img'/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
         </disk>
         <disk type='block' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source dev='/dev/disk/by-id/nvme-PM991_NVMe_Samsung_256GB_S50ANF1N283258'/>
           <target dev='hdd' bus='scsi'/>
           <address type='drive' controller='0' bus='0' target='0' unit='3'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/virtio-win-0.1.196.iso'/>
           <target dev='hdb' bus='ide'/>
           <readonly/>
           <address type='drive' controller='0' bus='0' target='0' unit='1'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='scsi' index='0' model='virtio-scsi'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
         </controller>
         <controller type='pci' index='0' model='pci-root'/>
         <controller type='ide' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:87:d5:d2'/>
           <source bridge='br0'/>
           <model type='virtio-net'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
         </interface>
         <serial type='pty'>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
         </serial>
         <console type='pty'>
           <target type='serial' port='0'/>
         </console>
         <channel type='unix'>
           <target type='virtio' name='org.qemu.guest_agent.0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <address type='usb' bus='0' port='1'/>
         </input>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
           <listen type='address' address='0.0.0.0'/>
         </graphics>
         <video>
           <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
         </video>
         <memballoon model='virtio'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
         </memballoon>
       </devices>
     </domain>
  8. I'm experiencing something I'm finding very confusing: I have a 256GB NVMe as an unassigned device which is passed to a Windows 10 VM as a manual vdisk using its path (/dev/disk/by-id/nvme-PM991_NVMe_Samsung_256GB_S50ANF1N283258, vDisk Bus: SCSI). This is great, and it works very well for a single VM.

     However, I would also like to pass the same drive through to a second Windows 10 VM (note: the two are never running at the same time). Then things get weird. Files I create on the drive via VM 1 are not visible on VM 2, and vice versa. When I shut down the VM and mount the device via Unassigned Devices, some of the files are there, but only the ones created by one of the VMs. I then tried repartitioning and formatting the drive via Unassigned Devices (I tried both NTFS and exFAT), and when booting the VM the old files are *still* there. The drive is definitely reformatted: in the case of formatting it as exFAT, it shows as exFAT in Unraid (and empty) but as NTFS in Windows (containing files). The file I used during testing was a 3GB video file, and I'm able to play it in Windows despite having repartitioned and formatted the drive with a different filesystem, and despite both Unraid and the other VM showing the drive as empty.

     I guess there are two issues: 1) Why are files created by VM 1 not visible to VM 2, and vice versa? 2) Why are files still visible to the VM that created them even after a repartition and format in Unraid? Am I missing something fundamental about how disks work? My hunch is that it's something to do with file allocation tables being created separately by Windows, but if I'm honest I don't really understand how that works, so it could be nonsense.

     As a side note, the reason I'm trying to set up two VMs this way is so I can have two separate gaming VMs that share storage for Steam/other launcher libraries. Any help would be appreciated, even if it's advice on how to achieve my goals via some other method.
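
A minimal sketch of the parity arithmetic behind posts 2-3 above, assuming Unraid-style single (XOR) parity. The block counts stand in for terabytes and are purely illustrative; this is a toy model of the reasoning, not Unraid's actual implementation, so stopping a sync early still deserves whatever caution the replies advised.

from functools import reduce
import random

# Toy model of single (XOR) parity. Sizes are in "blocks" rather than TB
# to keep the example tiny: an 18-block parity drive covers data drives of
# 8, 6 and 4 blocks. Positions past a drive's end contribute zeros.

def parity(drives, parity_size):
    """XOR parity across all drives, each zero-padded to parity_size."""
    padded = [d + [0] * (parity_size - len(d)) for d in drives]
    return [reduce(lambda a, b: a ^ b, col) for col in zip(*padded)]

random.seed(0)
data = [[random.randrange(256) for _ in range(n)] for n in (8, 6, 4)]
p = parity(data, 18)

# Beyond the largest data drive (8 blocks), parity is all zeros...
assert all(b == 0 for b in p[8:])

# ...so XOR-ing in a fully zeroed (precleared) 16-block drive leaves
# every parity block unchanged:
assert parity(data + [[0] * 16], 18) == p
print("parity unchanged after adding the zeroed drive")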
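
And a minimal sketch of the mismatch described in post 4, assuming the notification is keyed to the utilization of the filesystem backing the docker directory rather than to the directory's own size; the path is hypothetical and the assumption is inferred from the post, not confirmed against Unraid's source.

import os
import shutil

# Hypothetical path, for illustration only.
DOCKER_DIR = "/mnt/cache/system/docker"

def fs_percent_used(path):
    """Percent used of the filesystem containing path (what the alert appears to see)."""
    usage = shutil.disk_usage(path)
    return 100 * usage.used / usage.total

def dir_size_gb(path):
    """Bytes actually consumed by the directory tree (what matters here), in GB."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:  # skip files that vanish mid-walk
                pass
    return total / 1e9

# On a 1TB cache held around 70% full by mover tuning, the filesystem
# figure crosses the alert threshold even though the docker directory
# itself is only ~40GB.
print(f"filesystem backing docker dir: {fs_percent_used(DOCKER_DIR):.0f}% used")
print(f"docker directory itself:       {dir_size_gb(DOCKER_DIR):.1f} GB")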