dzyuba86

Members · 60 posts

Everything posted by dzyuba86

  1. Finally resolved. Converting the .img into a passthrough disk worked.
  2. *UPDATE* Converting the .img to a physical disk. If that doesn't work, I'm just making the VM fresh from scratch. All my files are in cloud backup anyway, so I'm not losing anything but time.
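For the conversion the post above describes, the usual approach is a block-for-block copy of the raw .img onto the physical disk with dd. The target device name below is a placeholder, not something from the thread; the runnable part uses throwaway files as a stand-in so nothing real gets overwritten.

```shell
# The real operation would look like this (the .img path is the one from the
# thread; /dev/sdX is a placeholder -- triple-check it, dd overwrites the target):
#
#   dd if="/mnt/user/domains/Windows 11/vdisk1.img" of=/dev/sdX bs=4M status=progress conv=fsync
#
# Safe demonstration: copy a small stand-in image onto a stand-in "disk" file
# and verify the copy is byte-identical.
truncate -s 8M vdisk1.img                                  # pretend vdisk
printf 'MBR-and-partitions' | dd of=vdisk1.img conv=notrunc 2>/dev/null
dd if=vdisk1.img of=target-disk.img bs=1M conv=fsync 2>/dev/null
cmp -s vdisk1.img target-disk.img && echo "copy verified"  # prints "copy verified"
```

After the copy, the new VM's disk can point at the physical device instead of a vdisk image, which is what finally resolved the thread.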
  3. I even uninstalled a few things to bring the size down to less than 400GB used in Windows, but it's still showing as over 900GB.
  4. I'm making a new VM, but I don't get an option to do a disk passthrough.
  5. So I did that, but even after running a defrag three times, the .img file is still much too large, and the VM freezes mid-boot.
  6. Update: using MC (Midnight Commander) via terminal to copy to a different drive.
  7. Another newbie question. What is the most efficient way to do this?
  8. I have an 88TB array. I think I'll be good.
  9. Or can I move the VM to the array, boot from the array, defrag, then move back to the SSD?
  10. The VM won't boot as the VM drive has 0 free space. Is there anything I can do besides getting a larger drive to fix this, or do I need to re-create the VM from scratch?
  11. The vdisk was set to the same size as the drive, but now it's not booting because the drive has 0 free space. Maybe I need a better setup and should just recreate the VM from scratch?
  12. I've gone into the scheduler and run TRIM, but got back no space on the VM drive. I'm out of ideas here and need this working as soon as possible.
  13. I'm trying to find a way to run a command to unmap blocks on my existing vm.img and can't seem to find the proper command. But shouldn't it be doing this automatically already?
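On the unmap question above: guest-side TRIM only reaches a raw .img if the virtual disk is attached with discard enabled (in libvirt/QEMU terms, `discard='unmap'` on the disk; whether the Unraid VM template sets that is an assumption to verify). A host-side alternative is to zero free space inside the guest first (e.g. Windows' `sdelete -z`, an assumed tool choice) and then punch holes in the image while the VM is shut down. A sketch, with the real command commented out and a throwaway file standing in:

```shell
# Real command (path from the thread; run only while the VM is shut down):
#
#   fallocate --dig-holes "/mnt/user/domains/Windows 11/vdisk1.img"
#
# Demonstration: a file full of literal zeros shrinks on disk once its
# zero-filled blocks are deallocated as holes.
dd if=/dev/zero of=demo.img bs=1M count=16 2>/dev/null
du -k demo.img                       # ~16384 KiB allocated before
fallocate --dig-holes demo.img       # deallocate zero-filled blocks
du -k demo.img                       # allocation drops; apparent size is unchanged
```

Note that hole-punching needs filesystem support (XFS, ext4, Btrfs all have it), and it only reclaims regions that actually contain zeros, which is why zeroing free space in the guest comes first.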
  14. If it helps, I have auto-TRIM and compression enabled.
  15. It's set to Primary vDisk Location: Auto, /mnt/user/domains/Windows 11/vdisk1.img. The .img is on its own SSD used only for VM purposes. Not sure if that answers the question.
  16. I have a Windows 11 VM on a dedicated 1TB SSD. For some reason the VM keeps freezing, and free space shows 16KB on the drive under Shares; in the VM, however, it's only showing 630GB of used space. Why is the file size so much larger than the used size, why isn't it shrinking automatically, and is there a way to shrink/compress the file manually?
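What's likely behind the mismatch in the post above: a raw vdisk is typically a sparse file, so its apparent size stays at the provisioned capacity while allocated blocks only ever grow as the guest writes; deleting files inside Windows does not shrink the image unless TRIM/discard reaches it. A small illustration of apparent vs allocated size (file name is made up):

```shell
# A sparse file reports a large apparent size while allocating almost nothing.
truncate -s 1G sparse.img                           # 1 GiB apparent, ~0 allocated
dd if=/dev/urandom of=sparse.img bs=1M count=4 conv=notrunc 2>/dev/null
echo "apparent:  $(( $(stat -c %s sparse.img) / 1048576 )) MiB"   # 1024 MiB
echo "allocated: $(( $(stat -c %b sparse.img) * 512 / 1024 )) KiB"
```

This is why a vdisk provisioned at the full size of a 1TB SSD can "fill" the drive even though Windows reports far less used: every block the guest has ever touched stays allocated on the host until it is explicitly discarded or hole-punched.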
  17. I ran a non-correcting check as you suggested and it came up with 0 errors. Also, my Telegram bot now says my health passes instead of fails. So I'm going to assume this is resolved and it can be flagged as solved. Thanks.
  18. Should I be worried about this? For clarity and back story: I had a drive fail, so I replaced it. A week later another drive failed, so I replaced that one as well (these are my old 3TB drives I've been replacing). This month, when it ran its monthly parity check, it took a very long time and found this many errors. Just wondering if there is any cause for alarm, or should I leave it be?
  19. So I'm looking at adding VM capability to my Unraid server to double as a part-time gaming rig for light-to-medium gaming. I currently have: Asus TUF Gaming X570-Plus motherboard, Ryzen 5950X CPU, 16GB RAM (will add more), Nvidia Quadro P2200 GPU. Can I add an RTX card and use that for passthrough to VMs, and keep the Quadro for Plex transcoding only, since the motherboard supports dual GPUs? I'm planning to add a 2.5" SSD for VM use only; the M.2 drives I have are for cache and downloads. Thanks in advance.
  20. Been seeing these more often than usual and not sure why they are popping up. plexnas-diagnostics-20230220-0952.zip
  21. Last night I was doing some Docker updates and going through Sonarr to manually find missing episodes for some older shows. This morning I woke up to a powered-down server. I have auto power-on set in BIOS after a power loss, and no settings to power off anything; nor did I initiate a shutdown sequence. I attached the diag log and hope someone can find why it powered itself down, as I still haven't learned my way around these logs, so I'm not even sure where to look for the cause of this issue. Thank you. plexnas-diagnostics-20221217-0959.zip
  22. 4 hours in now. No errors found yet. It's on pass #4 now; I doubt it'll find anything at this point. I think my only option is to reformat the drive, lose the data, and have Radarr/Sonarr re-acquire what I lose on that drive. I really have no other idea what I can do now. Also thinking that the 8TB drive that was in that slot is still good, so I'll use it to replace disk 6, which is showing old-age pre-fail errors.
  23. This is the disk 6 repair output:

      Phase 1 - find and verify superblock...
      Phase 2 - using internal log
              - zero log...
              - scan filesystem freespace and inode maps...
              - found root inode chunk
      Phase 3 - for each AG...
              - scan and clear agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
              - check for inodes claiming duplicate blocks...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
      Phase 5 - rebuild AG headers and trees...
              - reset superblock...
      Phase 6 - check inode connectivity...
              - resetting contents of realtime bitmap and summary inodes
              - traversing filesystem ...
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      Phase 7 - verify and correct link counts...
      done
  24. Disk 6 claims to have 158 errors on the "Main" dashboard. Disk 7 shows a green light now but still says unmountable: no file system present.
  25. Phase 1 - find and verify superblock...
      Phase 2 - using internal log
              - zero log...
              - scan filesystem freespace and inode maps...
      clearing needsrepair flag and regenerating metadata
      sb_icount 64, counted 32
      sb_ifree 61, counted 29
      sb_fdblocks 1952984849, counted 1952984853
              - found root inode chunk
      Phase 3 - for each AG...
              - scan and clear agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - agno = 4
              - agno = 5
              - agno = 6
              - agno = 7
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
              - check for inodes claiming duplicate blocks...
              - agno = 0
              - agno = 1
              - agno = 4
              - agno = 5
              - agno = 6
              - agno = 3
              - agno = 2
              - agno = 7
      Phase 5 - rebuild AG headers and trees...
              - reset superblock...
      Phase 6 - check inode connectivity...
              - resetting contents of realtime bitmap and summary inodes
              - traversing filesystem ...
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      Phase 7 - verify and correct link counts...
      SB summary counter sanity check failed
      Metadata corruption detected at 0x47a15b, xfs_sb block 0x0/0x200
      libxfs_bwrite: write verifier failed on xfs_sb bno 0x0/0x1
      SB summary counter sanity check failed
      Metadata corruption detected at 0x47a15b, xfs_sb block 0x0/0x200
      libxfs_bwrite: write verifier failed on xfs_sb bno 0x0/0x1
      xfs_repair: Releasing dirty buffer to free list!
      xfs_repair: Refusing to write a corrupt buffer to the data device!
      xfs_repair: Lost a write to the data device!
      fatal error -- File system metadata writeout failed, err=117. Re-run xfs_repair.