jkexbx
  1. So I've had some interesting learnings, and found something workable.

     Those commands as a pair are intended for the array to be mounted. If the array is in maintenance mode and you try to run umount /mnt/disk# you receive "no mount point specified". That means the only reason you would ever need umount /mnt/disk# is if the array was mounted.

     If you start the array in maintenance mode you can run just the zero command, dd bs=1M if=/dev/zero of=/dev/md1 status=progress, and the write speed is as expected. You can also exit the terminal and the command will stop after 10 minutes or so. When the array is remounted, the disks show "Unmountable: Unsupported or no file system". When you then run the zero command on that unmountable disk with the array running, it runs at the expected speed. While the command is running, the scheduled parity check won't run.

     All this means that when the array is mounted and you run umount /mnt/disk#, something happens that leaves the disk almost, but not quite, unmounted. The zero command does run, just slowly, and it does not terminate. You cannot stop the array because the zero command is running with no way to kill it.

     My procedure that ended up being reasonable (a scripted sketch of the zeroing step follows this post):
     - Verify disks are empty
     - Stop array
     - Start array in maintenance mode
     - Run dd bs=1M if=/dev/zero of=/dev/md1 status=progress
     - Wait for the first 20G of data to be zeroed
     - Close the command window
     - Wait until disk activity goes to zero (around 50G of writes)
     - Repeat with the next disk
     - Reboot (you could also stop the array and then start it, but for me that results in a graphical glitch)
     - Start array normally
     - See the disks you're trying to zero show as unmountable
     - Run dd bs=1M if=/dev/zero of=/dev/md1 status=progress on the disks one by one
     - Remove disks from array
     - Reset configuration

     What's funny about this is I still don't ever need to use the umount command. I've done at least a partial parity check and have 0 sync errors. Running the zero command for a short while removes the drive formatting, so it's no longer mountable. All with dual parity. Of course you can always just keep it in maintenance mode, but if you're trying to zero a dozen disks like me, that means your server is down for a week.
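     A minimal sketch of the maintenance-mode part of that procedure as a script. It assumes the array has already been started in maintenance mode; the md device numbers are examples, and it uses count= so dd stops itself after roughly 20G instead of you watching the progress and closing the window:

     #!/bin/bash
     # Zero the start of each (already verified empty) data disk while the
     # array is in maintenance mode, letting writes drain before moving on.
     for md in 1 2 3; do
       echo "Zeroing /dev/md${md}..."
       # 20480 x 1M blocks is roughly the first 20G of the device
       dd bs=1M count=20480 if=/dev/zero of=/dev/md${md} status=progress
       sync  # wait for buffered writes to reach the disk before the next one
     done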
  2. Yeah, all the above has been set up and tested previously. Just trying to avoid the headache of restoring data. I just don't want to take any needless risk, and it's surprising that removing a disk while keeping parity intact is such a difficult thing to do.
  3. Definitely simpler, but incredibly risky. If any data disk dies during the rebuild, you don't have valid parity left. I'm trying to maintain parity while removing data disks.
  4. Per the Unraid Docs, that's how you zero a disk to remove it from an array. The script is broken in a couple of different ways, so I avoid it now. Do you know if there's a new way to zero a disk? It's supposed to cause a parity update. I'd expect it to run at 50 MB/s like it does after the hard reboot. The problem is that something happening with the umount causes it to run at 400 KB/s.
  5. Thanks for taking a look, but it's not related to other drives' activity. I can kill all Dockers, VMs, and scripts, and it will still continue at the same speed. There is some additional interaction going on that causes a consistently slow write speed of 400-500 KB/s. I believe it's related to the umount not working as expected. If you need me to run through the sequence and download logs at a specific time, I can, but I'd like to have something else to try first, because the only way out after running the first dd command is to hard power down the server. No matter what I do, I can't get the server to do a clean shutdown.
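     For reference, this is how I've been confirming the residual write activity is on the disk itself rather than coming from Dockers or VMs; standard tools, and md1 is just an example device:

     # Watch the raw kernel I/O counters for md1; field 10 of
     # /proc/diskstats is total sectors written, so if it keeps
     # climbing at a steady trickle, something is still writing.
     watch -n1 'grep -w md1 /proc/diskstats'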
  6. The commands to zero a disk are:

     umount /mnt/disk15
     dd bs=1M if=/dev/zero of=/dev/md15 status=progress

     When I run the dd command, the progress runs at 400 KB/s with no way to kill it, and I end up following the steps below to successfully zero a disk. I have had the dd command run fine after an umount in the past. Does anyone know if there's a specific trick to it?

     - Run the umount command
     - The disk size changes to a random number
     - I then run the dd command, and progress runs at 400 KB/s
     - I fail to kill the running dd using every method available (see the sketch after this post)
     - I stop the array, but it fails
     - I hard shutdown the server
     - It comes back up
     - I try to mount the array, but it's in some half-broken state where VMs can start, but the GUI still allows you to modify disks
     - I then do another reboot through the GUI
     - Everything comes back normally
     - The disks I'm trying to zero show "Unmountable: wrong or no file system"
     - I run dd again on them and it runs fine at a normal 80 MB/s

     tower-diagnostics-20231223-1749.zip
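     For completeness, these are the standard things to try when attempting to kill the dd (the pid and disk number are examples; fuser and lsof happen to be available on my box, yours may vary):

     # Find the dd process writing to the md device
     pgrep -af 'dd.*of=/dev/md15'

     # GNU dd prints its current stats on SIGUSR1 (sanity check it's alive)
     kill -USR1 <pid>

     # Graceful, then forceful
     kill <pid>
     kill -9 <pid>

     # See what is still holding the mount point open
     fuser -vm /mnt/disk15
     lsof +f -- /mnt/disk15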
  7. I'd mentioned that post previously. Following it still doesn't allow Unraid to stop the array. I suspect the problem is related to the dd command running at root level. No matter what I do, the disk I run the command on continues to see activity at 400 KB/s.
  8. Do you mean this command for unmounting?

     root@Unraid-Server:~# fusermount -uz /mnt/cache/
  9. Are you referring to this thread? Or do you mean an alternative to using the umount /mnt/disk8 command? I've found that command extremely unreliable when followed by dd bs=1M if=/dev/zero of=/dev/md8 status=progress. What happens currently:

     - Run the umount command
     - The disk size changes to a random number
     - I then run the dd command, and progress runs at 400 KB/s
     - I fail to kill the running dd using every method available
     - I stop the array, but it fails
     - I hard shutdown the server
     - It comes back up
     - I try to mount the array, but it's in some half-broken state where VMs can start, but the GUI still allows you to modify disks
     - I then do another reboot through the GUI
     - Everything comes back normally
     - The disks I'm trying to zero show "Unmountable: wrong or no file system"
     - I run dd again on them and it runs fine

     I did use the command fine previously, and the only thing different this time is that I first mounted the disks over SMB to double-check they were empty. Once the current command finishes, I'll try that again. I'm not sure how it makes a difference, but it's the only thing that changed between when it worked and now (see the console check after this post).
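     Since the only difference was checking the disks over SMB, next time I'll verify emptiness from the console instead so the share never gets touched; a quick check, with the disk number as an example:

     # No output means the disk is empty (ignoring the mount point itself)
     find /mnt/disk8 -mindepth 1 | head

     # Or count what's left
     find /mnt/disk8 -mindepth 1 | wc -l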
  10. I'm trying to just use the commands, and sometimes after I run the umount command and then the dd command, the progress runs at ~400 KB/s with no way to stop it. I end up having to reboot. Has anyone run into this before and found a way to unmount reliably? I found the same issue occurred with the script (a small guard I've been testing follows this post).

      umount /mnt/disk8
      dd bs=1M if=/dev/zero of=/dev/md8 status=progress
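      In case it's useful, the guard just checks that the unmount actually took before dd starts; mountpoint is part of util-linux, and the disk number is an example:

      #!/bin/bash
      # Refuse to start zeroing while the disk is still mounted
      umount /mnt/disk8
      if mountpoint -q /mnt/disk8; then
        echo "disk8 is still mounted; not starting dd" >&2
        exit 1
      fi
      dd bs=1M if=/dev/zero of=/dev/md8 status=progress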
  11. Thanks for the script! I'm trying to modify it so that I can create a restore point where my configuration is good, plus a script to restore to that point every day (cron sketch after this post). That way, when I modify my SMB configuration to work on something, I don't have to worry about resetting my configuration back to normal once I'm done. Copying and creating the backups seems to be working fine, but I'm struggling to get the backup to reapply.

      ### Backup script with timestamp for smb-shares.conf
      #!/bin/bash
      timestamp=$(date +"%Y%m%d%H%M%S")
      # Keep a fixed-name copy for the restore script plus a timestamped archive copy
      if cp /etc/samba/smb-shares.conf /etc/samba/smb-shares.conf.bak && \
         cp /etc/samba/smb-shares.conf "/mnt/user/Unraid Backups/SMB/smb-shares.conf_${timestamp}.bak"; then
        echo "$(date) - Backup successful." >> /var/log/smb_backup_restore.log
      else
        echo "$(date) - Backup failed." >> /var/log/smb_backup_restore.log
      fi

      ### Restore script for smb-shares.conf with config reload
      #!/bin/bash
      # Bail out if there is no backup to restore from
      if [ ! -f /etc/samba/smb-shares.conf.bak ]; then
        echo "$(date) - Backup file not found. Restore failed." >> /var/log/smb_backup_restore.log
        exit 1
      fi
      if cp /etc/samba/smb-shares.conf.bak /etc/samba/smb-shares.conf; then
        echo "$(date) - Restore successful." >> /var/log/smb_backup_restore.log
        # Tell all running Samba daemons to re-read their configuration
        smbcontrol all reload-config
      else
        echo "$(date) - Restore failed." >> /var/log/smb_backup_restore.log
      fi
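      For the daily reset, the plan is a cron entry along these lines; the script path is just an example of where the restore script might live, not anything Unraid-specific:

      # Reapply the known-good SMB config every morning at 04:00
      0 4 * * * /boot/config/scripts/restore-smb-shares.sh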
  12. Rebooting multiple times and in maintenance mode seems to have fixed it. I still can't access the individual disks, but that's for another day.
  13. I tried closing every single window on every device and opening a single new window in incognito on Chrome. It still incorrectly shows the array as stopped. One thing I noticed is I can access all my shares on the network, but I can't access the individual disks.
  14. Currently Unraid thinks my array isn't mounted, even though it is. I have VMs, Dockers, and can access shares fine, but everything in the UI shows that the array isn't mounted.

      - Under VMs, the message "Array must be started to view virtual machines" shows right above the VM that shows as started.
      - Docker says the same thing, but shows the running containers.
      - Under Main, the device slots are dropdowns like they are when specifying which drive goes to which slot, but the array is clearly running.

      I have a UPS, but I still think it ended up doing an unclean shutdown. I've rebooted the server, cleared all data in the browser, and it still behaves the same. Screenshots attached. Ignore disk 4 in the logs, that just started today. tower-diagnostics-20230529-2059.zip
  15. I'm experiencing a Code 43 error when I pass my Quadro P4000 through to a Windows VM. I'd gotten it working by putting the VM to sleep, then installing the drivers, but that no longer works. I've tried just about everything I can find listed. A copy of my config is below. I'll take all the help I can get. Thanks!

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' id='14'>
       <name>Windows</name>
       <uuid>4669039d-9b7e-f16b-e369-7b195d949688</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>20971520</memory>
       <currentMemory unit='KiB'>20971520</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>10</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='30'/>
         <vcpupin vcpu='1' cpuset='31'/>
         <vcpupin vcpu='2' cpuset='32'/>
         <vcpupin vcpu='3' cpuset='33'/>
         <vcpupin vcpu='4' cpuset='34'/>
         <vcpupin vcpu='5' cpuset='35'/>
         <vcpupin vcpu='6' cpuset='36'/>
         <vcpupin vcpu='7' cpuset='37'/>
         <vcpupin vcpu='8' cpuset='38'/>
         <vcpupin vcpu='9' cpuset='39'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-i440fx-5.1'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/4669039d-9b7e-f16b-e369-7b195d949688_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
         <hyperv>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
           <vendor_id state='on' value='none'/>
         </hyperv>
       </features>
       <cpu mode='host-passthrough' check='none' migratable='on'>
         <topology sockets='1' dies='1' cores='5' threads='2'/>
         <cache mode='passthrough'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='hypervclock' present='yes'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/domains/Windows/vdisk1.img' index='3'/>
           <backingStore/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <alias name='virtio-disk2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/Windows.iso' index='2'/>
           <backingStore/>
           <target dev='hda' bus='ide'/>
           <readonly/>
           <boot order='2'/>
           <alias name='ide0-0-0'/>
           <address type='drive' controller='0' bus='0' target='0' unit='0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/virtio-win-0.1.190-1.iso' index='1'/>
           <backingStore/>
           <target dev='hdb' bus='ide'/>
           <readonly/>
           <alias name='ide0-0-1'/>
           <address type='drive' controller='0' bus='0' target='0' unit='1'/>
         </disk>
         <controller type='pci' index='0' model='pci-root'>
           <alias name='pci.0'/>
         </controller>
         <controller type='ide' index='0'>
           <alias name='ide'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <alias name='virtio-serial0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
         </controller>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <alias name='usb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <alias name='usb'/>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <alias name='usb'/>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <alias name='usb'/>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:a2:38:6a'/>
           <source bridge='br0'/>
           <target dev='vnet0'/>
           <model type='virtio-net'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
         </interface>
         <serial type='pty'>
           <source path='/dev/pts/0'/>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
           <alias name='serial0'/>
         </serial>
         <console type='pty' tty='/dev/pts/0'>
           <source path='/dev/pts/0'/>
           <target type='serial' port='0'/>
           <alias name='serial0'/>
         </console>
         <channel type='unix'>
           <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-14-Windows/org.qemu.guest_agent.0'/>
           <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
           <alias name='channel0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <alias name='input0'/>
           <address type='usb' bus='0' port='1'/>
         </input>
         <input type='mouse' bus='ps2'>
           <alias name='input1'/>
         </input>
         <input type='keyboard' bus='ps2'>
           <alias name='input2'/>
         </input>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev0'/>
           <rom file='/mnt/user/isos/QuadroP4000.rom'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
           </source>
           <alias name='hostdev1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
       <seclabel type='dynamic' model='dac' relabel='yes'>
         <label>+0:+100</label>
         <imagelabel>+0:+100</imagelabel>
       </seclabel>
     </domain>
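     For anyone finding this later: the two tweaks most commonly suggested for NVIDIA Code 43 with passthrough are a dummy hyperv vendor_id string and hiding the KVM signature from the guest. The XML above has value='none' as literal text and no <kvm> block, so this is only a sketch of what that section might look like, not a confirmed fix:

     <features>
       <acpi/>
       <apic/>
       <hyperv>
         <relaxed state='on'/>
         <vapic state='on'/>
         <spinlocks state='on' retries='8191'/>
         <!-- any short dummy string; replaces value='none' -->
         <vendor_id state='on' value='0123456789ab'/>
       </hyperv>
       <!-- hide the KVM hypervisor signature from the guest -->
       <kvm>
         <hidden state='on'/>
       </kvm>
     </features>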