RS7588

Members
  • Posts: 23
  • Joined
  • Last visited
  • Reputation: 1

  1. root@void:~# xfs_repair -v /dev/sde1
     Phase 1 - find and verify superblock...
     bad primary superblock - bad magic number !!!
     attempting to find secondary superblock...
     .found candidate secondary superblock...
     verified secondary superblock...
     writing modified primary superblock
             - block cache size set to 6166120 entries
     sb realtime bitmap inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 129
     resetting superblock realtime bitmap inode pointer to 129
     sb realtime summary inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 130
     resetting superblock realtime summary inode pointer to 130
     Phase 2 - using internal log
             - zero log...
     zero_log: head block 236210 tail block 236210
             - scan filesystem freespace and inode maps...
     sb_icount 0, counted 506304
     sb_ifree 0, counted 84912
     sb_fdblocks 244071381, counted 174348275
             - found root inode chunk
     Phase 3 - for each AG...
             - scan and clear agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 1
             - agno = 3
             - agno = 0
             - agno = 2
     clearing reflink flag on inode 1611998035
     clearing reflink flag on inode 1612335074
     clearing reflink flag on inode 1075665249
     clearing reflink flag on inode 937479
     clearing reflink flag on inode 1612335078
     clearing reflink flag on inode 1075736762
     clearing reflink flag on inode 937481
     clearing reflink flag on inode 581027892
     clearing reflink flag on inode 1612701675
     clearing reflink flag on inode 937483
     clearing reflink flag on inode 937485
     clearing reflink flag on inode 1075767383
     clearing reflink flag on inode 581101113
     clearing reflink flag on inode 1075767397
     clearing reflink flag on inode 1075769280
     clearing reflink flag on inode 1613499577
     clearing reflink flag on inode 1613499579
     clearing reflink flag on inode 1613499581
     clearing reflink flag on inode 581101115
     clearing reflink flag on inode 581101117
     clearing reflink flag on inode 581101118
     clearing reflink flag on inode 937487
     clearing reflink flag on inode 1613499583
     clearing reflink flag on inode 1613503680
     clearing reflink flag on inode 1201332
     clearing reflink flag on inode 1401909
     Phase 5 - rebuild AG headers and trees...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - reset superblock...
     Phase 6 - check inode connectivity...
             - resetting contents of realtime bitmap and summary inodes
             - traversing filesystem ...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     Note - stripe unit (0) and width (0) were copied from a backup superblock.
     Please reset with mount -o sunit=<value>,swidth=<value> if necessary

             XFS_REPAIR Summary    Tue Apr 18 12:25:59 2023

     Phase      Start           End             Duration
     Phase 1:   04/18 12:25:56  04/18 12:25:56
     Phase 2:   04/18 12:25:56  04/18 12:25:56
     Phase 3:   04/18 12:25:56  04/18 12:25:57  1 second
     Phase 4:   04/18 12:25:57  04/18 12:25:58  1 second
     Phase 5:   04/18 12:25:58  04/18 12:25:58
     Phase 6:   04/18 12:25:58  04/18 12:25:59  1 second
     Phase 7:   04/18 12:25:59  04/18 12:25:59

     Total run time: 3 seconds
     done

     Is this next? mount -o sunit=0,swidth=0
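     For reference, since the repair note only asks for a sunit/swidth reset "if necessary", one read-only way I could check the current stripe geometry once the disk mounts is xfs_info. The mount point below is a guess on my part; Unraid normally mounts array disks under /mnt/diskN after the array starts.

        # Read-only check of the stripe geometry; substitute the disk's real /mnt/diskN path.
        xfs_info /mnt/disk1 | grep -E 'sunit|swidth'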
  2. Yes, I did. For kicks and giggles . . . I will stop the array, make sure the drive is set to xfs and start the array again. void-diagnostics-20230418-0726.zip
  3. Thanks for your time, @JorgeB! I am fairly certain it was btrfs, but now that you have me thinking about it . . . I'm questioning my memory 🙃. Anyway, here is the output you requested:
     root@void:~# blkid
     /dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" BLOCK_SIZE="512" TYPE="vfat"
     /dev/loop1: TYPE="squashfs"
     /dev/sdb1: UUID="b05e35d3-1bfb-4be9-b396-471c733eaba5" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="1433ad33-cf37-44eb-bd36-c83981f24c2f"
     /dev/loop0: TYPE="squashfs"
     /dev/sde1: UUID="0a83b1a8-338d-4de9-a744-8c5cbb7a28d0" BLOCK_SIZE="512" TYPE="xfs"
     /dev/sdc1: UUID="d424188f-bf7b-490c-a130-0b7db11b6546" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="70116823-ebca-4d02-bf75-7ef2d941149d"
     /dev/md2: UUID="b05e35d3-1bfb-4be9-b396-471c733eaba5" BLOCK_SIZE="512" TYPE="xfs"
     /dev/md1: UUID="d424188f-bf7b-490c-a130-0b7db11b6546" BLOCK_SIZE="512" TYPE="xfs"
  4. Hello everyone! Hopefully someone can offer me some advice . . . I was on a mission today to remove a 1TB SSD from Array Devices and add it to my cache pool alongside a 1TB NVMe. I roughly followed this video from @SpaceInvaderOne to "safely shrink" my array. He offers two methods in the video and I opted for the latter method despite not even having any parity disks. Don't ask me why...I don't know lol.
     I shut down Docker and the VMs, used unBalance to export the data to the other disks, and then ran a script he provided via link to zero out that SSD. I stopped the array, ran New Config with "Preserve current assignments" set to "All", unassigned that SSD, and started the array.
     This is where I deviated from the steps in his video . . . The SSD was now an unassigned device, and instead of shutting down the server to pull the drive, I figured it was now safe to add it to the cache pool. At first, both SSDs in the cache pool said they couldn't be mounted because they had no UUID. I stopped the array once again, unassigned that 1TB SSD and started the array. My thought here was that at least my original cache drive would mount and I could carry on with a working server, surviving another day to try adding that second SSD to the cache at a later time. Wrong! The original cache disk was still saying it didn't have a UUID.
     Instead of realizing that I had to change the cache slots back to 1, I changed the file system from btrfs to auto (I thought I had seen that somewhere once). That didn't work, so I changed it back to btrfs. Now I'm noticing that the drive is saying "Unmountable: Wrong or no file system". Despite saying that the drive is unmountable, it still appears in my cache pool instead of under unassigned devices.
     I briefly read through the documentation for handling Unmountable Disks and saw that it is recommended to scrub rather than repair a btrfs drive, but I can't. Even with the array started, Unraid tells me that "Scrub is only available when the array is Started". I'm going to wait for some feedback before proceeding to screw anything else up.
     void-diagnostics-20230416-0157.zip
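     While I wait, this is the read-only check I'm thinking of running from the console, just to see whether the original cache SSD still carries a btrfs signature and UUID. The device name is a placeholder; I'd substitute whatever the GUI shows for that drive. Neither command writes anything to disk.

        # List any btrfs filesystems the kernel can still see, with their UUIDs and member devices.
        btrfs filesystem show
        # Placeholder device name; shows what signature (if any) blkid still detects on the cache SSD.
        blkid /dev/nvme0n1p1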
  5. Have you looked at this? Hasn't been updated since 2016 . . . It would appear that this guy got Space Engineers to work with Docker + WINE. I'm not smart enough to tinker and find out.
  6. Thanks for the reply. I had originally ticked the check box and then hit the "Install Selected Applications" button up top. Hit the download icon and that did the trick this time lol. I don't think there is a difference between those two options, so you're probably right about there being a connectivity problem for a moment.
  7. So the extent of my technical knowledge of Unraid is pretty limited. I point and click around the GUI to get what I need done. I updated to 6.9 just a few moments ago and then proceeded to start updating my Docker containers. First one up was Plex. I missed the exact message, but the update failed and it said that the image was removed. I headed over to the Apps tab and clicked on "Previous Apps" to reinstall the Plex container by LinuxServer. Failed - it couldn't start because it couldn't find the image (surprise). Hindsight tells me I should have made sure the containers were up to date prior to updating the OS. Surely there is a solution to reinstalling Plex without having to rebuild the library and all the metadata . . . right?
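     If I understand it right, the library and metadata live in the appdata share rather than in the image itself, so re-pulling the image shouldn't touch them. A rough sketch of what I mean from the console; the repository name and appdata path are my assumptions, not something I've confirmed for my setup:

        # Re-pull the image that the failed update removed (repository name assumed).
        docker pull linuxserver/plex
        # Confirm the existing library/metadata folder is still in place (path is an example).
        ls /mnt/user/appdata/plex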
  8. My plex server had stopped so I went poking around and saw two things - "libusb_init failed" and after saving/exiting the settings screen something about unknown nvidia command or run time. Now my plex docker shows up as an orphaned image and I only have an option to remove it when I click on it. Um...Can anyone point me in the right direction here?
  9. I have a Windows 10 LTSC VM (my only VM right now) that appears to stop at random intervals. Sometimes it may stay online for an hour and other times several. I don't know what to make of the logs, but every time this happens I see the same last few lines.
     -no-user-config \
     -nodefaults \
     -chardev socket,id=charmonitor,fd=33,server,nowait \
     -mon chardev=charmonitor,id=monitor,mode=control \
     -rtc base=localtime \
     -no-hpet \
     -no-shutdown \
     -boot strict=on \
     -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x7.0x7 \
     -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x7 \
     -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x7.0x1 \
     -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x7.0x2 \
     -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 \
     -blockdev '{"driver":"file","filename":"/mnt/user/domains/Windows 10 LTSC/vdisk1.img","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
     -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=libvirt-2-format,id=virtio-disk2,bootindex=1,write-cache=on \
     -blockdev '{"driver":"file","filename":"/mnt/user/isos/virtio-win-0.1.173-2.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \
     -device ide-cd,bus=ide.0,unit=1,drive=libvirt-1-format,id=ide0-0-1 \
     -netdev tap,fd=35,id=hostnet0,vhost=on,vhostfd=36 \
     -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:13:66:c4,bus=pci.0,addr=0x3 \
     -chardev pty,id=charserial0 \
     -device isa-serial,chardev=charserial0,id=serial0 \
     -chardev socket,id=charchannel0,fd=38,server,nowait \
     -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
     -device usb-tablet,id=input0,bus=usb.0,port=1 \
     -vnc 0.0.0.0:0,websocket=5700 \
     -k en-us \
     -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 \
     -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 \
     -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
     -msg timestamp=on
     2020-02-15 05:43:23.981+0000: Domain id=10 is tainted: high-privileges
     2020-02-15 05:43:23.981+0000: Domain id=10 is tainted: host-cpu
     char device redirected to /dev/pts/0 (label charserial0)
     2020-02-15T05:44:32.620347Z qemu-system-x86_64: warning: guest updated active QH
     2020-02-15T05:50:56.215776Z qemu-system-x86_64: warning: guest updated active QH
     2020-02-15T05:52:44.950399Z qemu-system-x86_64: warning: guest updated active QH
     2020-02-15T06:43:38.892744Z qemu-system-x86_64: terminating on signal 15 from pid 5608 (/usr/sbin/libvirtd)
     2020-02-15 06:43:39.122+0000: shutting down, reason=shutdown
     void-diagnostics-20200215-0230.zip
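     The only concrete clue I can see is that last timestamped line: QEMU was terminated on signal 15 by libvirtd rather than crashing on its own. In case it helps, this is where I've been looking; the paths are the standard libvirt/syslog locations and the log file name is my guess based on the VM name:

        # Tail of this VM's libvirt log (file name assumed to match the domain name).
        tail -n 50 "/var/log/libvirt/qemu/Windows 10 LTSC.log"
        # Host syslog entries around the shutdown, to see what asked libvirtd to stop the domain.
        grep -iE 'libvirt|qemu' /var/log/syslog | tail -n 50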
  10. That did the trick! My long-lost br0 came back and now I am able to assign IPs to VMs on my main network! Thanks dude!!!!
  11. That's what I was thinking. There had to be something I fiddled with in all my excitement when I was first setting up my server. I had also, at the same time, made some changes to my home network. I use Ubiquiti gear and pretty much did a reset on all my hardware to get it paired with the new controller I installed on my unRAID server. Anyway . . . What will happen to my currently installed Dockers after nuking the network settings? I'm guessing that would be accomplished by deleting the files you mentioned above? -Thanks!
  12. I have not, but I will try I guess. Thanks for the tip!