
WobbleBobble2

Members
  • Posts

    37
  • Joined

  • Last visited



  1. I am currently running this script, but it is speed-limited by the read rate of my slowest drive, which is currently about 5 MB/s. At this rate it will take 19 days, which is unacceptably long. Is there any way to interrupt/cancel this script without completing the full clear? Would that impact parity?
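     One possible way to check on the clear while it runs (an untested sketch, assuming the script is driving a plain GNU dd, as the script output quoted in post 4 below suggests): GNU dd prints its transfer statistics to stderr when it receives SIGUSR1, so the running process can be asked for a progress report without interrupting it.

          # Identify the dd process doing the clearing (the PID shown is hypothetical).
          pgrep -a dd
          # Ask that dd to report bytes copied so far; this does not stop it.
          kill -USR1 12345

     As for cancelling: a plain kill would stop the clear mid-write. Since the writes go through the parity-protected md device, parity should remain consistent with whatever has been written so far, but the disk would be only partially zeroed.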
  2. Can you add some detail on how you were able to do a New Config? I also have 5 disks I'm trying to remove, all of which were zeroed and 4 of which were never used. I want to safely remove the 4 and zero the 5th, but I'm not sure how, and the Unraid docs don't provide any guidance.
  3. Shoot, I already used the script on 2 drives. Any chance you can take a look at the output in my earlier post (disk 6) and help me understand what happened? How is it possible it completed so fast? Could it be because the disk was just precleared? Is there any way for me to confirm whether the zeroing completed successfully, other than the script's message saying it finished correctly?
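     One hedged way to verify (a sketch, assuming disk 6 is still in the array and exposed as /dev/md6, as in the script output quoted below): read the whole device back and compare it against a stream of zeros. cmp stops at the first differing byte, so on a fully zeroed device it runs to the end and reports only "cmp: EOF on /dev/md6".

          # Compare the device against an endless stream of zeros.
          # A fully zeroed disk produces no "differ" line, only the EOF notice.
          cmp /dev/md6 /dev/zero

     Note this reads the entire 8TB, so at normal read speeds it would take many hours.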
  4. Ok, I just tried running the script and I am getting some weird results. The whole clearing only took like 30 seconds:

     Script location: /tmp/user.scripts/tmpScripts/clear an array drive/script
     Note
     *** Clear an unRAID array data drive *** v1.4
     Checking all array data drives (may need to spin them up) ...
     Found a marked and empty drive to clear:
     * Parity will be preserved throughout.
     * Clearing while updating Parity takes a VERY long time!
     * The progress of the clearing will not be visible until it's done!
     * When complete, Disk 6 will be ready for removal from array.
     * Commands to be executed:
     ***** You have 60 seconds to cancel this script (click the red X, top right)
     Unmounting Disk 6 ...
     Clearing Disk 6 ...
     dd: error writing '/dev/md6': No space left on device
     9+0 records in
     8+0 records out
     8388608 bytes (8.4 MB, 8.0 MiB) copied, 0.00190806 s, 4.4 GB/s
     A message saying "error writing ... no space left" is expected, NOT an error.
     Unless errors appeared, the drive is now cleared!
     Because the drive is now unmountable, the array should be stopped, and the drive removed (or reformatted).

     And now disk 6 says it's only 16GB instead of 8TB. Also, there is nothing on disk 6 - the "clear-me" folder has been removed.
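     For what it's worth, the dd output above shows only 8 MiB written before "No space left on device", which would be consistent with the kernel's md device for disk 6 reporting a tiny size rather than 8TB. A quick hedged check (assuming disk 6 is /dev/md6, per the script output):

          # Print the size, in bytes, that the kernel currently reports
          # for the parity-protected device behind disk 6.
          blockdev --getsize64 /dev/md6

     If that prints something far smaller than 8TB, the problem is with the device the script wrote to, not with dd itself.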
  5. Hey Jorge, I found this thread from a year ago which says the script doesn't work out of the box. You had commented about making a change to a command. Do I still need to make that change?
  6. Thanks so much for the quick reply. This is a bummer, but I will try one 8TB drive first and see how it goes.
  7. Hello! I had listed my old NAS with drives on eBay for a few months with no takers and gave up on it ever selling, so I pulled the 5x 8TB drives out, cleared them, and put them into my new Unraid server. Clearing is done and they are now mounted in the array. I have already used Unbalance to remove all data from them, so all 5 are completely empty, and all 5 have been removed from global shares and all individual shares. Literally a day after doing this, someone bought my old NAS, so now I need to remove those 5x 8TB drives.

     I am risk averse and really do not want to do the "remove drives then rebuild parity" option here: https://docs.unraid.net/legacy/FAQ/shrink-array/#for-unraid-v62-and-later
     That option would mean I would have no parity protection for a very long and intensive rebuild process, which would be super scary. Option 2, "Clear Drive Then Remove Drive", would be fine except I can only do 1 drive at a time, and 5x 8TB drives will likely take a month. Way too long to expect my eBay buyer to wait. Is there any way for me to simultaneously zero all 5 disks at once without losing parity? Any help would be hugely appreciated!

     I tried to attach diagnostics to this post, but my diagnostics tab keeps crashing before the download starts (something to deal with in another thread, I suppose). Other basic stats on the array:
     Dual parity
     5 disks to keep, 14-20TB
     5x 8TB disks to remove
     12500k
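     For anyone weighing the same question: the clear-drive script's own output (quoted in post 4 above) shows its core operation is a dd of zeros to the parity-maintaining /dev/mdX device, and writes through /dev/mdX keep parity in sync. In principle, the same write could be launched against several md devices at once. A minimal, untested sketch, assuming slots 2-6 are the disks to clear and each has already been emptied and unmounted (the slot numbers are hypothetical - triple-check yours):

          # DANGER: wipes every listed array slot. Writing through /dev/mdX
          # updates parity as it goes, so the array stays protected.
          for n in 2 3 4 5 6; do
              dd if=/dev/zero of=/dev/md$n bs=1M &
          done
          wait   # block until every background dd has finished

     One caveat: all five streams funnel their updates through the same parity drives, so parallel clearing may be limited by parity write contention rather than finishing five times faster.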
  8. I had already done step 1, but not 3rd-party cookies, which I do generally disable in most browsers. In Chrome I added an exception to allow 3rd-party cookies for my server IP and .local address. But I'll also give Brave a try! I already have Chrome, Edge, Safari, Firefox & Opera, so one more can't hurt.
  9. Ok after a reboot I can confirm the logs and terminal are working again! Will wait a few more days to confirm this solution works on an ongoing basis.
  10. I pasted this command into the terminal, but it seems NGINX failed to restart, so I can no longer access the UI from the browser. I'll do a reboot later when I've finished some tasks queued on the server.
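     (For reference: on stock Unraid the web UI's nginx appears to be managed through a Slackware-style rc script - an assumption about this setup, not confirmed from the post - so checking and restarting it from a surviving terminal session would look roughly like this:)

          # Check whether the web UI's nginx is still running, then restart it.
          # Paths assume a stock Unraid install.
          /etc/rc.d/rc.nginx status
          /etc/rc.d/rc.nginx restart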
  11. Oh man thanks so much for figuring this out! I just restarted NGINX, fingers crossed!
  12. [removed while still confirming solution with giganode] HUGE shout out to Giganode for guiding a newb like me through this setup!!
  13. Thank you for helping here! Ok, I tried the following but nothing worked:
     1. Clearing cookies in Chrome
     2. Chrome incognito
     3. Safari cleared cookies
     4. Safari Private
     5. Firefox cleared cookies
     6. Firefox Private
     I also disabled all my ad blockers and disabled any in-browser pop-up blocking / site protection I could find. But obviously, if it were one of these issues, the problem wouldn't follow me across browsers. Here's the list of previous steps taken that were also unsuccessful:
  14. Oh no - this is back! And restarting my laptop isn't fixing it anymore. I just get the constant "reconnecting" message again (see below). I am able to use the terminal & logs fine with a different Mac laptop, though. Also, I should add that this issue, whatever is causing it, also prevents me from connecting to VMs via VNC. I just get a "Failed to connect to server" error in VNC. I've also attached diagnostics, although I imagine they won't be relevant since this is clearly a client-side issue. Lastly, one other thing I've tried is switching from my 10GbE wired Ethernet connection to the router over to WiFi. I thought maybe there was some issue with the 10GbE connection, but WiFi did not resolve the issue. hal9000-diagnostics-20240222-0951.zip
  15. Thanks so much again for helping! The dev just replied here! Yes, they are bound to VFIO at boot. I thought I needed to do that to pass through the iGPU, but I guess not. Ok, I just unbound both of them and rebooted, but unfortunately got the exact same behavior. I've attached the VM logs below and updated diagnostics.

     2024-02-20 22:30:23.339+0000: starting up libvirt version: 8.7.0, qemu version: 7.2.0, kernel: 6.1.64-Unraid, hostname: HAL9000
     LC_ALL=C \
     PATH=/bin:/sbin:/usr/bin:/usr/sbin \
     HOME='/var/lib/libvirt/qemu/domain-1-Windows 10' \
     XDG_DATA_HOME='/var/lib/libvirt/qemu/domain-1-Windows 10/.local/share' \
     XDG_CACHE_HOME='/var/lib/libvirt/qemu/domain-1-Windows 10/.cache' \
     XDG_CONFIG_HOME='/var/lib/libvirt/qemu/domain-1-Windows 10/.config' \
     /usr/local/sbin/qemu \
     -name 'guest=Windows 10,debug-threads=on' \
     -S \
     -object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-1-Windows 10/master-key.aes"}' \
     -blockdev '{"driver":"file","filename":"/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \
     -blockdev '{"driver":"file","filename":"/etc/libvirt/qemu/nvram/9ff111d8-9ac0-34f8-4fdf-cbc8b866a6fa_VARS-pure-efi.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
     -machine pc-i440fx-7.2,usb=off,dump-guest-core=off,mem-merge=off,memory-backend=pc.ram,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \
     -accel kvm \
     -cpu host,migratable=on,hv-time=on,hv-relaxed=on,hv-vapic=on,hv-spinlocks=0x1fff,hv-vendor-id=none,host-cache-info=on,l3-cache=off \
     -m 16384 \
     -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":17179869184}' \
     -overcommit mem-lock=off \
     -smp 8,sockets=1,dies=1,cores=4,threads=2 \
     -uuid 9ff111d8-9ac0-34f8-4fdf-cbc8b866a6fa \
     -display none \
     -no-user-config \
     -nodefaults \
     -chardev socket,id=charmonitor,fd=35,server=on,wait=off \
     -mon chardev=charmonitor,id=monitor,mode=control \
     -rtc base=localtime \
     -no-hpet \
     -no-shutdown \
     -boot strict=on \
     -device '{"driver":"pci-bridge","chassis_nr":1,"id":"pci.1","bus":"pci.0","addr":"0x2"}' \
     -device '{"driver":"pci-bridge","chassis_nr":2,"id":"pci.2","bus":"pci.0","addr":"0x3"}' \
     -device '{"driver":"pci-bridge","chassis_nr":3,"id":"pci.3","bus":"pci.0","addr":"0x6"}' \
     -device '{"driver":"pci-bridge","chassis_nr":4,"id":"pci.4","bus":"pci.0","addr":"0x8"}' \
     -device '{"driver":"pci-bridge","chassis_nr":5,"id":"pci.5","bus":"pci.0","addr":"0x9"}' \
     -device '{"driver":"pci-bridge","chassis_nr":6,"id":"pci.6","bus":"pci.0","addr":"0xa"}' \
     -device '{"driver":"ich9-usb-ehci1","id":"usb","bus":"pci.0","addr":"0x7.0x7"}' \
     -device '{"driver":"ich9-usb-uhci1","masterbus":"usb.0","firstport":0,"bus":"pci.0","multifunction":true,"addr":"0x7"}' \
     -device '{"driver":"ich9-usb-uhci2","masterbus":"usb.0","firstport":2,"bus":"pci.0","addr":"0x7.0x1"}' \
     -device '{"driver":"ich9-usb-uhci3","masterbus":"usb.0","firstport":4,"bus":"pci.0","addr":"0x7.0x2"}' \
     -device '{"driver":"ahci","id":"sata0","bus":"pci.0","addr":"0x4"}' \
     -device '{"driver":"virtio-serial-pci","id":"virtio-serial0","bus":"pci.0","addr":"0x5"}' \
     -blockdev '{"driver":"file","filename":"/mnt/user/domains/Windows 10/vdisk1.img","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
     -device '{"driver":"virtio-blk-pci","bus":"pci.0","addr":"0xc","drive":"libvirt-3-format","id":"virtio-disk2","bootindex":1,"write-cache":"on","serial":"vdisk1"}' \
     -blockdev '{"driver":"file","filename":"/mnt/user/isos/Win10_22H2_English_x64v1.iso","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-2-format","read-only":true,"driver":"raw","file":"libvirt-2-storage"}' \
     -device '{"driver":"ide-cd","bus":"sata0.0","drive":"libvirt-2-format","id":"sata0-0-0","bootindex":2}' \
     -blockdev '{"driver":"file","filename":"/mnt/user/isos/virtio-win-0.1.240-1.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \
     -device '{"driver":"ide-cd","bus":"sata0.1","drive":"libvirt-1-format","id":"sata0-0-1"}' \
     -netdev tap,fd=36,id=hostnet0 \
     -device '{"driver":"virtio-net","netdev":"hostnet0","id":"net0","mac":"52:54:00:73:76:08","bus":"pci.0","addr":"0xb"}' \
     -chardev pty,id=charserial0 \
     -device '{"driver":"isa-serial","chardev":"charserial0","id":"serial0","index":0}' \
     -chardev socket,id=charchannel0,fd=34,server=on,wait=off \
     -device '{"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"}' \
     -device '{"driver":"usb-tablet","id":"input0","bus":"usb.0","port":"1"}' \
     -audiodev '{"id":"audio1","driver":"none"}' \
     -device '{"driver":"vfio-pci","host":"0000:00:02.2","id":"hostdev0","bus":"pci.6","addr":"0x10"}' \
     -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
     -msg timestamp=on
     char device redirected to /dev/pts/0 (label charserial0)
     2024-02-20T22:30:25.723211Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
     2024-02-20T22:30:25.723258Z qemu-system-x86_64: vfio_dma_map(0x14a2a4c48800, 0x381800000000, 0x20000000, 0x14a282a00000) = -2 (No such file or directory)
     2024-02-20T22:30:25.778823Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
     2024-02-20T22:30:25.778839Z qemu-system-x86_64: vfio_dma_map(0x14a2a4c48800, 0x381800000000, 0x20000000, 0x14a282a00000) = -22 (Invalid argument)
     2024-02-20T22:30:27.473292Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
     2024-02-20T22:30:27.473349Z qemu-system-x86_64: vfio_dma_map(0x14a2a4c48800, 0x381800000000, 0x20000000, 0x14a282a00000) = -22 (Invalid argument)
     2024-02-20T22:30:27.517960Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
     2024-02-20T22:30:27.518004Z qemu-system-x86_64: vfio_dma_map(0x14a2a4c48800, 0x381800000000, 0x20000000, 0x14a282a00000) = -22 (Invalid argument)
     2024-02-20T22:30:27.591884Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
     2024-02-20T22:30:27.591939Z qemu-system-x86_64: vfio_dma_map(0x14a2a4c48800, 0x381800000000, 0x20000000, 0x14a282a00000) = -22 (Invalid argument)
     2024-02-20T22:30:27.644339Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
     2024-02-20T22:30:27.644361Z qemu-system-x86_64: vfio_dma_map(0x14a2a4c48800, 0x381800000000, 0x20000000, 0x14a282a00000) = -22 (Invalid argument)
     2024-02-20T22:30:29.429010Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
     2024-02-20T22:30:29.429040Z qemu-system-x86_64: vfio_dma_map(0x14a2a4c48800, 0x381800000000, 0x20000000, 0x14a282a00000) = -22 (Invalid argument)

     I'm just referring to the following lines from the Unraid logs. I'm not a dev and have very little Linux experience, so my interpretation could be totally wrong, but I saw references to 02.0 but not 02.1 or 02.2:

     Feb 20 12:21:32 HAL9000 kernel: i915 0000:00:02.0: VF1 FLR
     Feb 20 12:21:58 HAL9000 kernel: i915 0000:00:02.0: VF2 FLR
     Feb 20 12:21:58 HAL9000 kernel: i915 0000:00:02.0: VF2 FLR

     Yes, I have tried waiting at least 30 minutes, but the VMs never recover. Thank you again for responding here! I was about to post in your support thread, but you beat me to it!!! hal9000-diagnostics-20240220-1430.zip
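     A quick way to double-check the binding state after a reboot (a sketch; the PCI addresses are taken from the logs above): lspci with -k shows which kernel driver currently owns each iGPU function, which should reveal whether 00:02.1 and 00:02.2 went back to i915 or are still held by vfio-pci.

          # Show each iGPU function; the "Kernel driver in use:" line tells
          # you whether i915 or vfio-pci currently owns it.
          lspci -nnk -s 00:02.0
          lspci -nnk -s 00:02.1
          lspci -nnk -s 00:02.2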