eruk2007

Members
  • Posts: 12

eruk2007's Achievements

Noob (1/14)
Reputation: 1

  1. Ah right... seems like I misinterpreted a bit there, my bad. In that case the old Disk3 might not even be bad... gonna have to investigate that further and maybe return it to the array at some point, thanks for clarifying that. Still not a bad thing that I bought the new HDD, seeing that it was on sale for about $80 for the 4TB and I needed to expand for a project in a few months anyway. So, new Disk3 now has a mountable filesystem after an actual xfs_repair run (I thought you were only supposed to execute an actual repair if the check came up with some sort of error message). Thanks for your help!
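     In case someone finds this later: roughly the commands involved, run from the console with the array started in Maintenance Mode. /dev/md3 is what Disk3 maps to on my box (running the repair against the md device keeps parity in sync); adjust the number for your disk.
        # read-only check first: reports problems but modifies nothing
        xfs_repair -n /dev/md3
        # the actual repair
        xfs_repair /dev/md3
        # if it refuses because of a dirty log, -L zeroes the log (may lose the last few metadata updates)
        # xfs_repair -L /dev/md3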
  2. Here's the output of the xfs_repair check (-n) from unRAID Maintenance Mode... doesn't seem to detect a whole lot, right?
     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
             - zero log...
     ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
             - scan filesystem freespace and inode maps...
     sb_fdblocks 79808839, counted 71851136
             - found root inode chunk
     Phase 3 - for each AG...
             - scan (but don't clear) agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
     No modify flag set, skipping phase 5
     Phase 6 - check inode connectivity...
             - traversing filesystem ...
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify link counts...
     No modify flag set, skipping filesystem flush and exiting.
  3. Hey, thanks for the reply. Makes total sense that it would just rebuild a corrupt filesystem... never thought of that... anyway: I attached a SMART report of old Disk3 that I downloaded before removing it from the system (possibly this was also pre-OS-upgrade, not sure though). That's what you wanted from the drive being connected as an unassigned device, right? I determined the drive to be faulty because I saw the high number of reported read errors and seek errors, along with the write errors in the syslog (sadly I didn't export that prior to the reboot). None of my current drives show any SMART errors in the Dashboard, and even old Disk3 didn't show anything in the Dashboard... "raw_read_error_rate" doesn't seem to be monitored by unRAID. I also tried the xfs_repair on the emulated drive prior to the drive swap and it didn't find any problem (after all, the emulated drive *was* mounted), so I assumed the filesystem wasn't the problem... apparently that was wrong. Gonna try the xfs_repair check on the rebuilt drive in a few moments... ST2000VN004-2E4164_Z521V7MX-20210920-1235.txt
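     For completeness, this is roughly how the report was pulled from the console while the old disk sat there as an unassigned device (sdX and the output path are just placeholders; the device letter will differ on your system):
        # full SMART report: identity, attributes, error log, self-test log
        smartctl -a /dev/sdX > /boot/smart-old-disk3.txt
        # optionally start a long self-test and check back later with -a
        smartctl -t long /dev/sdX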
  4. Hi guys, I just stumbled upon something strange in unRAID and thought you might be able to help me out: A few days ago I logged into the WebGUI for the first time in a while and found a drive disabled with write errors. After some research I determined the drive to be faulty and ordered a new one to replace it (the drive sadly was out of warranty by a few months).
     This morning I installed the new drive in the server and removed the old one. I then started the server, stopped the array and put the replacement drive in place of the failed drive using the dropdown. I started the array, it began rebuilding the drive, and I left for work. Now that I've returned, I see the rebuild has finished, but Disk3 still shows "unmountable: not mounted" with the "Format unmountable drives" option lower down on the page. But on the "# of writes" counter you can clearly see that the drive has been written to a lot (see attached image). I am pretty sure that following through with that format would make me lose basically any data that was on old Disk3, right? What went wrong here? Was it user error? If so, what did I do wrong? Can I fix it now? (Old Disk3 still exists, and I think I should be able to pull the data from it if necessary, so not a huge loss... but still... what went wrong here?)
     A few details:
     - Old Disk3 was 2TB in size, new Disk3 is 4TB. Shouldn't be a problem imo, because that's a supported feature of unRAID and it's the same size as the parity drives now...
     - During the "Disk3 disabled" condition, I upgraded from unRAID 6.8.3 to 6.9.2... maybe that wasn't the best thing to do, but why would that have an impact like this? Is this even a possible side effect, or am I just correlating things that don't have anything to do with each other at this point?
     gamingnas-diagnostics-20210923-0236.zip
  5. Oh wow it worked. Thanks! Didn't know it would be that easy...
  6. Hi, so I recently decided to replace the Windows 10 VM on my unRAID server with a new one, as the old one just got worse with every Windows update. At first I wanted to "dual boot and then boot from the same drive" as Spaceinvader One outlined in a recent video. Technically it did work, but here comes the catch: I can use the VM for a few minutes after boot, after which the passed-through NVMe SSD "freezes" and just becomes inaccessible. Windows behaves as if the boot medium had suddenly crashed while running: everything you can see (which at that point resides in RAM) works fine and buttery smooth, but even clicking on something as simple as the Start menu doesn't do anything besides the animation. I then decided to try (with the identical second SSD in my system) just installing Windows through the VM and calling it a day, because I thought the whole dual-boot thing didn't work right: it didn't help, same behaviour. Even "not stubbing, but only passing the SSD through with the managed='yes' attribute in the XML" didn't help. If I manage to open Windows Task Manager before the drive freezes, everything looks normal. But after the "magic 10 minutes" it shows the disk's "active time" at 100% with simultaneous 0 kB read and write. It does show some "inverse spikes" from time to time, lasting less than a second, during which drive I/O seems possible (Start would open at this instant if previously pressed, for example, after maybe 3 minutes of waiting, mind you). Attached are diagnostics and XML. Has anyone even the slightest clue where I could start with elimination testing? Any help is greatly appreciated. gamingnas-diagnostics-20181018-2337.zip Windows10Dump.xml
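     For my own elimination testing I started by checking, from the unRAID console, which IOMMU group the NVMe controller lands in and whether it shares that group with other devices (this is the usual sysfs loop, nothing specific to my box):
        # list every PCI device, grouped by IOMMU group
        for g in /sys/kernel/iommu_groups/*; do
          echo "IOMMU group ${g##*/}:"
          for d in "$g"/devices/*; do
            lspci -nns "${d##*/}"
          done
        done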
  7. I just realised that the size computation tool for each share on the Shares tab also gives me this problem. If I add up all the "sizes" that the shares occupy on the cache pool, it comes to about 175GB... not anywhere near the 250GB cache pool size... does this classify as an unRAID bug, or did I just misconfigure something essential?
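     In case it matters, this is roughly how I'm double-checking from the console; du only shows apparent per-folder usage and won't account for btrfs metadata or files that are deleted but still held open, so these numbers may not add up to the pool total either:
        # apparent size of each top-level folder on the cache pool
        du -sh /mnt/cache/*
        # files that were deleted but are still held open (their space isn't freed yet)
        lsof +L1 2>/dev/null | grep /mnt/cache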
  8. Hmm, I see, but how do I know what's on my cache pool then? Everything that is set to be on the cache is: a 140G vdisk for Win10, a 20G vdisk for Debian, and appdata (with Plex and some other stuff). I realise that a Plex database can be big... but not ~90GB... or can it? What else is there that I am not counting?
  9. I hope this helps. gamingnas-diagnostics-20180528-1547.zip
  10. Hi, I have 2 Samsung 960 Evo NVMe drives (256GB each) installed, both set up as BTRFS RAID1 in the cache disk pool. I have "appdata", "domains" (1 Debian VM) and a share for my Windows VM set to be permanently located on the cache. Lately my Windows VM keeps going into the "paused" state, and I read here in the forum that that is probably caused by a full cache pool. Fair enough: when I look at the Main tab of the unRAID UI, 249GB are used and only 1.08GB remain... I guess that's too little for it to work right. But when I look at /mnt/cache with e.g. Dolphin and let Dolphin calculate the size of the whole /mnt/cache folder, it only gets to about 160G... where did my remaining space go? Thanks for any tips.
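      A sketch of what I'd run from the unRAID console to see where the space actually went; btrfs' own accounting includes metadata and the RAID1 profile, which Dolphin's folder sizes don't show:
        # overall allocation vs. actually used, per data/metadata profile
        btrfs filesystem usage /mnt/cache
        # shorter breakdown of data / metadata / system chunks
        btrfs filesystem df /mnt/cache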
  11. I don't even know what the MSI interrupts program is, so no, probably not. Yes, it seems like cache is set to 'writeback'... what does that mean exactly? (I'll try it with 'none' shortly.) XML:
     <domain type='kvm' id='3'>
       <name>Windows 10</name>
       <uuid>610603a1-6bb8-96b4-1b98-0f88da96484c</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>12582912</memory>
       <currentMemory unit='KiB'>12582912</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>8</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='2'/>
         <vcpupin vcpu='1' cpuset='8'/>
         <vcpupin vcpu='2' cpuset='3'/>
         <vcpupin vcpu='3' cpuset='9'/>
         <vcpupin vcpu='4' cpuset='4'/>
         <vcpupin vcpu='5' cpuset='10'/>
         <vcpupin vcpu='6' cpuset='5'/>
         <vcpupin vcpu='7' cpuset='11'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-i440fx-2.10'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/610603a1-6bb8-96b4-1b98-0f88da96484c_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
         <hyperv>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
           <vendor_id state='on' value='none'/>
         </hyperv>
       </features>
       <cpu mode='host-passthrough' check='none'>
         <topology sockets='1' cores='4' threads='2'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='hypervclock' present='yes'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/Windows/Windows 10/vdisk1.img'/>
           <backingStore/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <alias name='virtio-disk2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
         </disk>
         <controller type='usb' index='0' model='nec-xhci'>
           <alias name='usb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
         </controller>
         <controller type='pci' index='0' model='pci-root'>
           <alias name='pci.0'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <alias name='virtio-serial0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:8f:26:de'/>
           <source bridge='br0'/>
           <target dev='vnet1'/>
           <model type='virtio'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
         </interface>
         <serial type='pty'>
           <source path='/dev/pts/1'/>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
           <alias name='serial0'/>
         </serial>
         <console type='pty' tty='/dev/pts/1'>
           <source path='/dev/pts/1'/>
           <target type='serial' port='0'/>
           <alias name='serial0'/>
         </console>
         <channel type='unix'>
           <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-3-Windows 10/org.qemu.guest_agent.0'/>
           <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
           <alias name='channel0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='mouse' bus='ps2'>
           <alias name='input0'/>
         </input>
         <input type='keyboard' bus='ps2'>
           <alias name='input1'/>
         </input>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x17' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x65' slot='0x00' function='0x1'/>
           </source>
           <alias name='hostdev1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev3'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'>
           <source>
             <vendor id='0x046d'/>
             <product id='0xc335'/>
             <address bus='1' device='2'/>
           </source>
           <alias name='hostdev4'/>
           <address type='usb' bus='0' port='1'/>
         </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'>
           <source>
             <vendor id='0x1b1c'/>
             <product id='0x1c00'/>
             <address bus='1' device='3'/>
           </source>
           <alias name='hostdev5'/>
           <address type='usb' bus='0' port='2'/>
         </hostdev>
         <memballoon model='virtio'>
           <alias name='balloon0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
         </memballoon>
       </devices>
       <seclabel type='dynamic' model='dac' relabel='yes'>
         <label>+0:+100</label>
         <imagelabel>+0:+100</imagelabel>
       </seclabel>
     </domain>
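     For reference, the change I'm going to test is just the cache attribute on the vdisk's <driver> line; a minimal sketch of how I'd do it from the console (VM name as it appears in my XML above; virsh edit simply opens libvirt's copy in an editor):
        # open the domain definition in an editor
        virsh edit "Windows 10"
        # then change the vdisk line
        #   <driver name='qemu' type='raw' cache='writeback'/>
        # to
        #   <driver name='qemu' type='raw' cache='none'/>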
  12. Hi, I'm a relatively new unRAID user, trying to remove the last bugs and set everything up the way I want it on my first server, and I need some help with a Windows 10 VM issue. I have a Windows 10 and a Debian VM constantly running, as well as one or two Dockers (Plex, No-IP client). The Windows 10 VM is my main home computer, and I passed a GPU, some USB devices and of course some CPU cores through to it. Every so often the whole Windows 10 system constantly hitches/freezes: the displayed framerate of the whole system drops to what I'd guess is 1 or 2 FPS, especially when I move the mouse cursor, and strangely enough it's completely fine again a few minutes (or even seconds) later. During these "hitching sessions" I really can't find anything that's abnormal. It doesn't matter what I'm doing, and most of the time the GPU is pretty much idle, so I don't think that's the problem. I also can't think of a reason for the CPU to be the problem here, as it is mostly in the 10-30% usage range when this happens, and I made sure that the cores the VM uses aren't used by the other VM or the Dockers (with --cpuset-cpus=...). The problem even occurs when all other VMs and Dockers are stopped and nothing else is running on the system. Did I misconfigure something in the unRAID OS, or what else could be the problem? Any ideas? I don't really know which log files and so on I could provide to solve this, so if anything could help, please let me know what it is and where to get it.
      System config:
      - Mobo: Asus WS X299 PRO
      - CPU: Intel i7-8700X (6-core)
      - GPU: GTX 1050
      - Some random HDDs connected over Mobo SATA
      - 2 NVMe M.2 drives (cache; Win10 is also on the cache)
      - ...a few unimportant accessories
      So, I hope some of you can help me with this. Thanks in advance even if you don't.
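      If anyone wants to sanity-check the pinning side with me, this is what I'd look at first from the unRAID console (just confirming the topology and that the pinned cores really are hyperthread pairs, nothing more):
        # logical CPU to core/socket mapping
        lscpu -e
        # which logical CPUs share a physical core (one line per CPU)
        cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list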