RS7588

Everything posted by RS7588

  1. root@void:~# xfs_repair -v /dev/sde1
     Phase 1 - find and verify superblock...
     bad primary superblock - bad magic number !!!
     attempting to find secondary superblock...
     .found candidate secondary superblock...
     verified secondary superblock...
     writing modified primary superblock
             - block cache size set to 6166120 entries
     sb realtime bitmap inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 129
     resetting superblock realtime bitmap inode pointer to 129
     sb realtime summary inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 130
     resetting superblock realtime summary inode pointer to 130
     Phase 2 - using internal log
             - zero log...
     zero_log: head block 236210 tail block 236210
             - scan filesystem freespace and inode maps...
     sb_icount 0, counted 506304
     sb_ifree 0, counted 84912
     sb_fdblocks 244071381, counted 174348275
             - found root inode chunk
     Phase 3 - for each AG...
             - scan and clear agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 1
             - agno = 3
             - agno = 0
             - agno = 2
     clearing reflink flag on inode 1611998035
     clearing reflink flag on inode 1612335074
     clearing reflink flag on inode 1075665249
     clearing reflink flag on inode 937479
     clearing reflink flag on inode 1612335078
     clearing reflink flag on inode 1075736762
     clearing reflink flag on inode 937481
     clearing reflink flag on inode 581027892
     clearing reflink flag on inode 1612701675
     clearing reflink flag on inode 937483
     clearing reflink flag on inode 937485
     clearing reflink flag on inode 1075767383
     clearing reflink flag on inode 581101113
     clearing reflink flag on inode 1075767397
     clearing reflink flag on inode 1075769280
     clearing reflink flag on inode 1613499577
     clearing reflink flag on inode 1613499579
     clearing reflink flag on inode 1613499581
     clearing reflink flag on inode 581101115
     clearing reflink flag on inode 581101117
     clearing reflink flag on inode 581101118
     clearing reflink flag on inode 937487
     clearing reflink flag on inode 1613499583
     clearing reflink flag on inode 1613503680
     clearing reflink flag on inode 1201332
     clearing reflink flag on inode 1401909
     Phase 5 - rebuild AG headers and trees...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - reset superblock...
     Phase 6 - check inode connectivity...
             - resetting contents of realtime bitmap and summary inodes
             - traversing filesystem ...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     Note - stripe unit (0) and width (0) were copied from a backup superblock.
     Please reset with mount -o sunit=<value>,swidth=<value> if necessary

             XFS_REPAIR Summary    Tue Apr 18 12:25:59 2023

     Phase           Start           End             Duration
     Phase 1:        04/18 12:25:56  04/18 12:25:56
     Phase 2:        04/18 12:25:56  04/18 12:25:56
     Phase 3:        04/18 12:25:56  04/18 12:25:57  1 second
     Phase 4:        04/18 12:25:57  04/18 12:25:58  1 second
     Phase 5:        04/18 12:25:58  04/18 12:25:58
     Phase 6:        04/18 12:25:58  04/18 12:25:59  1 second
     Phase 7:        04/18 12:25:59  04/18 12:25:59

     Total run time: 3 seconds
     done

     Is this next? mount -o sunit=0,swidth=0
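     If I'm reading that note right, the command would take roughly this shape. The mount point here is a placeholder, and since sunit/swidth of 0 just means "no RAID stripe geometry", my understanding is this step may not even be necessary in my case:

         # Sketch only: /mnt/test is a hypothetical mount point, not where Unraid
         # would normally mount this device, and the 0 values are from xfs_repair's note.
         mount -o sunit=0,swidth=0 /dev/sde1 /mnt/test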
  2. Yes, I did. For kicks and giggles . . . I will stop the array, make sure the drive is set to xfs and start the array again. void-diagnostics-20230418-0726.zip
  3. Thanks for your time, @JorgeB! I am fairly certain it was btrfs, but now that you have me thinking about it . . . I'm questioning my memory 🙃. Anyway, here is the output you requested:

     root@void:~# blkid
     /dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" BLOCK_SIZE="512" TYPE="vfat"
     /dev/loop1: TYPE="squashfs"
     /dev/sdb1: UUID="b05e35d3-1bfb-4be9-b396-471c733eaba5" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="1433ad33-cf37-44eb-bd36-c83981f24c2f"
     /dev/loop0: TYPE="squashfs"
     /dev/sde1: UUID="0a83b1a8-338d-4de9-a744-8c5cbb7a28d0" BLOCK_SIZE="512" TYPE="xfs"
     /dev/sdc1: UUID="d424188f-bf7b-490c-a130-0b7db11b6546" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="70116823-ebca-4d02-bf75-7ef2d941149d"
     /dev/md2: UUID="b05e35d3-1bfb-4be9-b396-471c733eaba5" BLOCK_SIZE="512" TYPE="xfs"
     /dev/md1: UUID="d424188f-bf7b-490c-a130-0b7db11b6546" BLOCK_SIZE="512" TYPE="xfs"
  4. Hello everyone! Hopefully someone can offer me some advice . . . I was on a mission today to remove a 1TB SSD from Array Devices and add it to my cache pool alongside a 1TB NVMe. I roughly followed this video from @SpaceInvaderOne to "safely shrink" my array. He offers two methods in the video and I opted for the latter method despite not even having any parity disks. Don't ask me why...I don't know lol.

     I shut down Docker and my VM, used unBalance to export the data to the other disks, and then ran a script he provided via link to zero out that SSD. I stopped the array and ran New Config with "Preserve current assignments" set to "All". Unassigned that SSD. Started the array.

     This is where I deviated from the steps in his video . . . The SSD was now an unassigned device, and instead of shutting down the server to pull the drive, I figured it was now safe to add it to the cache pool. At first, both SSDs in the cache pool said they couldn't be mounted because they had no uuid. I stopped the array once again, unassigned that 1TB SSD, and started the array. My thought here was that at least my original cache drive would mount and I could carry on with a working server, surviving another day to try adding that second SSD to the cache at a later time. Wrong! The original cache disk was still saying it didn't have a uuid.

     Instead of realizing that I had to change the cache slots back to 1, I changed the file system from btrfs to auto (I thought I had seen that somewhere once). That didn't work, so I changed it back to btrfs. Now I'm noticing that the drive is saying "Unmountable: Wrong or no file system". Despite saying that the drive is unmountable, it still appears in my cache pool instead of under unassigned devices.

     I briefly read through the documentation for handling Unmountable Disks and saw that it is recommended to scrub rather than repair a btrfs drive, but I can't. Even with the array started, Unraid tells me that "Scrub is only available when the array is Started". I'm going to wait for some feedback before proceeding to screw anything else up. void-diagnostics-20230416-0157.zip
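     For what it's worth, my understanding is that the scrub the docs point to is just the btrfs CLI underneath. A sketch only, assuming the pool were actually mountable at /mnt/cache (which mine currently isn't):

         # Sketch, not something I can run yet: assumes the pool mounts at /mnt/cache.
         btrfs scrub start -B /mnt/cache   # -B stays in the foreground and prints a summary
         btrfs scrub status /mnt/cache     # check progress/results of a background scrub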
  5. Have you looked at this? Hasn't been updated since 2016 . . . It would appear that this guy got Space Engineers to work with Docker + WINE. I'm not smart enough to tinker and find out.
  6. Thanks for the reply. I had originally ticked the check box and then hit the "Install Selected Applications" button up top. This time I hit the download icon instead, and that did the trick lol. I don't think there is a difference between those two options, so you're probably right about there being a connectivity problem for a moment.
  7. So the extent of my technical knowledge of Unraid is pretty limited. I point and click around the GUI to get what I need done. I updated to 6.9 just a few moments ago and then proceeded to start updating my Docker containers. First one up was Plex. I missed the exact message, but the update failed and it said that the image was removed. I headed over to the Apps tab and clicked on "Previous Apps" to reinstall the Plex container by LinuxServer. Failed - it couldn't start because it couldn't find the image (surprise). Hindsight tells me I should have made sure the containers were up to date prior to updating the OS. Surely there is a solution for reinstalling Plex without having to rebuild the library and all the metadata . . . right?
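     From what I've read, the library and metadata don't live inside the image at all; they sit in appdata and should survive a reinstall as long as the /config mapping stays the same. A quick way to check they're still on disk (the path assumes the default appdata share and the LinuxServer layout; adjust to the actual /config mapping):

         # Hypothetical path: default Unraid appdata share + LinuxServer Plex /config layout
         ls "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/"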
  8. My plex server had stopped so I went poking around and saw two things - "libusb_init failed" and after saving/exiting the settings screen something about unknown nvidia command or run time. Now my plex docker shows up as an orphaned image and I only have an option to remove it when I click on it. Um...Can anyone point me in the right direction here?
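     In case it helps anyone diagnose, a simple probe to see whether the nvidia runtime is registered with Docker at all (just a check, not a fix):

         # Lists the runtimes Docker knows about; "nvidia" should appear in the
         # Runtimes line if the Nvidia plugin/runtime is properly installed.
         docker info 2>/dev/null | grep -i runtime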
  9. I have a Windows 10 LTSC VM (my only VM right now) that appears to stop at random intervals. Sometimes it may stay online for an hour and other times several. I don't know what to make of the logs, but every time this happens I see the same last few lines.

     -no-user-config \
     -nodefaults \
     -chardev socket,id=charmonitor,fd=33,server,nowait \
     -mon chardev=charmonitor,id=monitor,mode=control \
     -rtc base=localtime \
     -no-hpet \
     -no-shutdown \
     -boot strict=on \
     -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x7.0x7 \
     -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x7 \
     -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x7.0x1 \
     -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x7.0x2 \
     -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 \
     -blockdev '{"driver":"file","filename":"/mnt/user/domains/Windows 10 LTSC/vdisk1.img","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
     -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=libvirt-2-format,id=virtio-disk2,bootindex=1,write-cache=on \
     -blockdev '{"driver":"file","filename":"/mnt/user/isos/virtio-win-0.1.173-2.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \
     -device ide-cd,bus=ide.0,unit=1,drive=libvirt-1-format,id=ide0-0-1 \
     -netdev tap,fd=35,id=hostnet0,vhost=on,vhostfd=36 \
     -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:13:66:c4,bus=pci.0,addr=0x3 \
     -chardev pty,id=charserial0 \
     -device isa-serial,chardev=charserial0,id=serial0 \
     -chardev socket,id=charchannel0,fd=38,server,nowait \
     -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
     -device usb-tablet,id=input0,bus=usb.0,port=1 \
     -vnc 0.0.0.0:0,websocket=5700 \
     -k en-us \
     -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 \
     -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 \
     -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
     -msg timestamp=on
     2020-02-15 05:43:23.981+0000: Domain id=10 is tainted: high-privileges
     2020-02-15 05:43:23.981+0000: Domain id=10 is tainted: host-cpu
     char device redirected to /dev/pts/0 (label charserial0)
     2020-02-15T05:44:32.620347Z qemu-system-x86_64: warning: guest updated active QH
     2020-02-15T05:50:56.215776Z qemu-system-x86_64: warning: guest updated active QH
     2020-02-15T05:52:44.950399Z qemu-system-x86_64: warning: guest updated active QH
     2020-02-15T06:43:38.892744Z qemu-system-x86_64: terminating on signal 15 from pid 5608 (/usr/sbin/libvirtd)
     2020-02-15 06:43:39.122+0000: shutting down, reason=shutdown

     void-diagnostics-20200215-0230.zip
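     One thing I did notice: "terminating on signal 15 from pid 5608 (/usr/sbin/libvirtd)" means libvirtd itself asked QEMU to shut down, so the VM doesn't look like it's crashing on its own. If anyone wants to dig, this is where I would look next (log locations are my assumption of the standard libvirt layout):

         # Assumed standard libvirt log path; the file name matches the VM name.
         tail -n 50 "/var/log/libvirt/qemu/Windows 10 LTSC.log"
         grep -i "Windows 10 LTSC" /var/log/syslog   # correlate with host-side events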
  10. That did the trick! My long lost br0 came back and now I am able to assign VMs IPs on my main network! Thanks dude!!!!
  11. That's what I was thinking. There had to be something I fiddled with in all my excitement when I was first setting up my server. I had also, at the same time, made some changes to my home network. I use Ubiquiti gear and pretty much did a reset on all my hardware to get it paired with the new controller I installed on my unRAID server. Anyway . . . What will happen to my currently installed Dockers after nuking the network settings? I'm guessing that would be accomplished by deleting the files you mentioned above? -Thanks!
  12. I have not, but I will try I guess. Thanks for the tip!
  13. First off let me state that I am brand new to unRAID and I have very basic, if any, knowledge of this kind of stuff. When I first went on the adventure of setting up my unRAID server several days ago, I remember seeing "br0" somewhere. I don't even know if this is an important detail, but I don't see "br0" anywhere in the network settings now. I didn't know what it was and didn't care, since everything I was attempting to do worked just fine and still does...for the most part. I run several dockers with no issues, but today I set out to install a Windows 10 VM that I was hoping to access from my network (assigned a private IP by my gateway) and also use as a host for a Wreckfest game server accessible over the internet for my friends. Nope... I've poked around the forums and wiki a BUNCH and have read that I need to create a public bridge in Network settings, but I haven't found any details on exactly how to do this. Help? Thanks in advance for any effort here to help a noob.
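     From what I've pieced together so far, a bridge like br0 is just a plain Linux bridge underneath, and Unraid builds it itself when "Enable bridging" is set to Yes in Settings > Network Settings. So this is only a sketch of the idea, not something to type in (the NIC name eth0 is my assumption):

         # Illustration only: roughly what Unraid does for you when bridging is enabled.
         ip link add name br0 type bridge   # create the bridge device
         ip link set eth0 master br0        # attach the physical NIC to it
         ip link set br0 up                 # bring the bridge up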
  14. UD was up to date at the time. Anyway, I have an update. Even though I was finally able to add a share to UD by manually entering its path, I could not mount it! I had the idea to create another user on my desktop (local, NOT Microsoft account sign-in) and make sure it had access to the shares I wanted to access via my server. I used the credentials of that user in UD and I had no issues! All the shares on my desktop computer loaded up in UD after hitting the button and I was able to easily add/mount all of them. I'm not exactly sure what was going on, but I have a hunch it has something to do with Microsoft account sign-in vs. local user sign-in on Windows 10.
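     For anyone who wants the command-line route, creating that local user on Windows is roughly this from an elevated Command Prompt (the username and password are placeholders; I actually used the Settings GUI):

         :: Placeholder credentials; after creating the user, grant it access on
         :: each shared folder via the folder's Sharing / Security tabs.
         net user udshareuser SomeP@ssword /add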
  15. UPDATE: So manually typing the network path to the shares has worked for me instead of clicking the button to load available shares!
  16. I've gone ahead and shared all the folders in the desired drive, but UD still does not show any shares. I could see all the hard drives in my desktop computer listed under shares in UD prior to changing my password. New diag . . . void-diagnostics-20200208-2258.zip
  17. Hmm. Okay. Thanks for the speedy reply. I've restarted my desktop after the password change, but maybe I should restart my unRAID server as well. Gotta run to work, but I'll post diagnostics here in the meantime just in case there is something I'm missing. void-diagnostics-20200208-1304.zip
  18. Hi there. I'm new to unRAID and I've just installed this plugin. I'm having issues with mounting a SMB share from my Windows 10 desktop. I use my Microsoft account instead of a local account on my PC. When I first attempted to "search for servers" I was able to find my desktop and sign in using my MS email/password. All disks...or shares...loaded up just fine in the drop-down, but I was unable to add any. The window just closed and the selected share was not in the UD list.

      After a few minutes of digging I tried some things that were mentioned in this thread - I forced SMBv1, but had no luck. I then changed my MS account password to have no special characters, but still had no luck, as shares on my desktop no longer loaded after entering the new password in UD. Lastly, I switched the setting to force SMBv1 back off, but I still can't load any of the shares on my desktop.

      I've seen repeated replies about removing a share and then trying to re-add it after making changes such as forcing SMBv1 or removing special characters from the password. I'm assuming that is done in the terminal using some commands, but I don't know how to do that. Thanks in advance for any help here!
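      From my digging so far, it looks like the plugin does a CIFS mount under the hood, so the same thing can be tested by hand from the Unraid terminal. A sketch with placeholder names (DESKTOP, MyShare, and WINUSER are not my real values):

          # Placeholders throughout; swap vers=3.0 for vers=1.0 only to test SMBv1.
          mkdir -p /mnt/disks/testshare
          mount -t cifs //DESKTOP/MyShare /mnt/disks/testshare -o username=WINUSER,vers=3.0
          umount /mnt/disks/testshare     # clean up after the test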