Devy


Posts posted by Devy

  1. Hello Community,

     

    I've got some issues with the performance of my Windows 10 gaming VM. Everything runs fine until I move my mouse, and for some reason it's sometimes better and sometimes worse after a VM restart.

     

    I tested it in CoD Modern Warfare while spectating a teammate: when I move my mouse while spectating, my FPS drops from about 170 to 100 (and yes, it is noticeable and choppy).

    If I lower the polling rate of the mouse, though, it gets much better or doesn't lag at all.

     

    I currently pass through the Zeppelin USB 3.0 controller. I also have ACS Override enabled, but I don't really need it anymore, since I currently only run one VM.

     

    I currently give my VM 7 of the 8 cores of my Ryzen 2700X, and all the cores the VM uses are isolated.
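
    For reference, the isolation matching the pinning in the XML below would look roughly like this in the syslinux.cfg append line (a sketch; Unraid 6.8-era syntax, with host threads 2-15 isolated and 0-1 left for Unraid and the emulator thread):

        append isolcpus=2-15 initrd=/bzroot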

     

    Here is my XML:

     

    <?xml version='1.0' encoding='UTF-8'?>
    <domain type='kvm' id='3'>
      <name>Main v2</name>
      <uuid>fbc8ffc0-64b2-bc67-2016-db589542361a</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
      </metadata>
      <memory unit='KiB'>16777216</memory>
      <currentMemory unit='KiB'>16777216</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>14</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='8'/>
        <vcpupin vcpu='1' cpuset='9'/>
        <vcpupin vcpu='2' cpuset='2'/>
        <vcpupin vcpu='3' cpuset='10'/>
        <vcpupin vcpu='4' cpuset='3'/>
        <vcpupin vcpu='5' cpuset='11'/>
        <vcpupin vcpu='6' cpuset='4'/>
        <vcpupin vcpu='7' cpuset='12'/>
        <vcpupin vcpu='8' cpuset='5'/>
        <vcpupin vcpu='9' cpuset='13'/>
        <vcpupin vcpu='10' cpuset='6'/>
        <vcpupin vcpu='11' cpuset='14'/>
        <vcpupin vcpu='12' cpuset='7'/>
        <vcpupin vcpu='13' cpuset='15'/>
        <emulatorpin cpuset='0'/>
      </cputune>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/fbc8ffc0-64b2-bc67-2016-db589542361a_VARS-pure-efi.fd</nvram>
        <smbios mode='host'/>
      </os>
      <features>
        <acpi/>
        <apic/>
        <hyperv>
          <relaxed state='on'/>
          <vapic state='on'/>
          <spinlocks state='on' retries='8191'/>
          <vendor_id state='on' value='FckYouNVIDIA'/>
        </hyperv>
        <kvm>
          <hidden state='on'/>
        </kvm>
        <ioapic driver='kvm'/>
      </features>
      <cpu mode='host-passthrough' check='none'>
        <topology sockets='1' cores='14' threads='1'/>
        <cache mode='passthrough'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='pit' tickpolicy='delay'/>
        <timer name='rtc' tickpolicy='catchup' track='guest'/>
        <timer name='hpet' present='no'/>
        <timer name='tsc' present='yes' mode='native'/>
        <timer name='hypervclock' present='yes'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/meowshare/isos/windows10business1909.iso' index='3'/>
          <backingStore/>
          <target dev='hda' bus='sata'/>
          <readonly/>
          <boot order='2'/>
          <alias name='sata0-0-0'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/meowshare/isos/virtio-win-0.1.160-1.iso' index='2'/>
          <backingStore/>
          <target dev='hdb' bus='sata'/>
          <readonly/>
          <alias name='sata0-0-1'/>
          <address type='drive' controller='0' bus='0' target='0' unit='1'/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/disks/nvmessd/Main v2/vdisk1.img' index='1'/>
          <backingStore/>
          <target dev='hdc' bus='sata'/>
          <boot order='1'/>
          <alias name='sata0-0-2'/>
          <address type='drive' controller='0' bus='0' target='0' unit='2'/>
        </disk>
        <controller type='pci' index='0' model='pcie-root'>
          <alias name='pcie.0'/>
        </controller>
        <controller type='pci' index='1' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='1' port='0x8'/>
          <alias name='pci.1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
        </controller>
        <controller type='pci' index='2' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='2' port='0x9'/>
          <alias name='pci.2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
        </controller>
        <controller type='pci' index='3' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='3' port='0xa'/>
          <alias name='pci.3'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
        </controller>
        <controller type='pci' index='4' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='4' port='0xb'/>
          <alias name='pci.4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
        </controller>
        <controller type='pci' index='5' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='5' port='0xc'/>
          <alias name='pci.5'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
        </controller>
        <controller type='pci' index='6' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='6' port='0xd'/>
          <alias name='pci.6'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <alias name='virtio-serial0'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </controller>
        <controller type='sata' index='0'>
          <alias name='ide'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <controller type='usb' index='0' model='ich9-ehci1'>
          <alias name='usb'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci1'>
          <alias name='usb'/>
          <master startport='0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci2'>
          <alias name='usb'/>
          <master startport='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci3'>
          <alias name='usb'/>
          <master startport='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:bc:6d:40'/>
          <source bridge='br0'/>
          <target dev='vnet1'/>
          <model type='e1000-82545em'/>
          <alias name='net0'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </interface>
        <serial type='pty'>
          <source path='/dev/pts/1'/>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
          <alias name='serial0'/>
        </serial>
        <console type='pty' tty='/dev/pts/1'>
          <source path='/dev/pts/1'/>
          <target type='serial' port='0'/>
          <alias name='serial0'/>
        </console>
        <channel type='unix'>
          <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-3-Main v2/org.qemu.guest_agent.0'/>
          <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
          <alias name='channel0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='mouse' bus='ps2'>
          <alias name='input0'/>
        </input>
        <input type='keyboard' bus='ps2'>
          <alias name='input1'/>
        </input>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
          </source>
          <alias name='hostdev0'/>
          <rom file='/mnt/disks/nvmessd/5700XT.rom'/>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x0b' slot='0x00' function='0x1'/>
          </source>
          <alias name='hostdev1'/>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x0d' slot='0x00' function='0x3'/>
          </source>
          <alias name='hostdev2'/>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
        </hostdev>
        <memballoon model='none'/>
      </devices>
      <seclabel type='dynamic' model='dac' relabel='yes'>
        <label>+0:+100</label>
        <imagelabel>+0:+100</imagelabel>
      </seclabel>
    </domain>

     

     

  2. Hello dear Community,

     

    I'm currently trying to migrate Mailcow from my VM directly onto Unraid, using the docker-compose plugin from Nerd Tools.

     

    I migrated everything as stated in the Mailcow documentation (the volumes and the folder with the docker-compose file).

    However, the only thing that doesn't want to start is MySQL, and the volume folders don't exist on my Unraid system.

    Do I have to recreate them on my Unraid server?

     

     

    Or do I have to do a clean install? Since I only have one mail account, I would just export my mails, contacts and calendar and reconfigure it, but maybe someone has a better solution.
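
    For reference, the rough move sequence I followed (a sketch only, assuming the stock mailcow-dockerized layout and example paths; mailcow's own helper-scripts/backup_and_restore.sh is the documented route):

        # on the old VM, with mailcow stopped
        cd /opt/mailcow-dockerized
        docker-compose down
        # copy the compose folder plus the named volumes (mailcowdockerized_*) to the new host;
        # Docker should be stopped on both ends while copying /var/lib/docker/volumes
        rsync -a /opt/mailcow-dockerized/ root@unraid:/mnt/user/appdata/mailcow-dockerized/
        rsync -a /var/lib/docker/volumes/ root@unraid:/var/lib/docker/volumes/
        # on Unraid, from the copied folder
        docker-compose pull
        docker-compose up -d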

    Bildschirmfoto 2020-02-04 um 16.33.46.png

  3. Hello Community,

     

    Would this UPS be enough?

     

    I have a gaming rig, and I was planning to buy a UPS. Even though it rarely ever happens, I did get some issues from a power outage a few months back, so I wanted to ask whether this would suffice or whether there is a better-value product.

     

    CyberPower Value Series 1200 VA

    The thing is, I think I need over 700 W, because the power supply in my PC is a 700 W unit. There are some UPSes that cost so much money, and I'm not sure I need those; all I want is to give my server a really good chance to shut down cleanly.
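
    (My own back-of-the-envelope on the sizing, for whoever answers: UPSes are rated in VA, and this value series typically has a power factor of about 0.6, so 1200 VA ≈ 720 W of real output. Also, a PC with a 700 W PSU rarely draws the full 700 W; the measured draw at the wall under load is what actually matters.)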

  4. Thank you for the reply. Did anything else change with the final 6.8 release? On 6.8-rc7 my GPU ran for days without a crash or anything at all,

    but under 6.8 (non-beta) I lose the output once a day and can't get it back without rebooting the server; none of that happened on the rc7 build.

    Maybe I should just return the GPU and get an Nvidia card :( I mean, slightly less performance for 400€, but also far fewer issues.

  5. Hello. After trying to find some clues, I couldn't find any. Is there any information on why the Navi reset patch was dropped? As one of the few unlucky owners of a 5700 XT, it's kind of sad to have to restart the complete Unraid server from time to time just because the GPU felt like it.

     

    Obviously this isn't a problem directly related to Unraid, but I just wanted to know, as the patch was included in earlier 6.8 betas.

  6. Okay, I tried that, and in fact only 100 GB are missing in total. I did this on the new disk and it looks pretty good so far; the isos folder is missing, plus one Docker folder, but there can't be much else missing.

    I mean, it's a bit annoying to reconfigure the proxy manager and dump some GPU BIOSes again, but that's nothing compared to losing everything.

     

    Would another program get any more out of this? Like the UFS Explorer I'm reading about in some posts?

     

  7. Hello. Just to make sure, here are my previous steps:

     

    - Rebooted and noticed that the file system was suddenly unmountable

    - Stopped the array

    - Started in Maintenance Mode
    - Ran xfs_repair with the -n flag to check the file system, but it reported a lot of errors (commands sketched below)

    - Got another disk, which I plugged in with another SATA cable on another SATA port (the old one is still plugged in)

    - In Maintenance Mode I ran a data rebuild that lasted the whole night, about 7 hours; in the morning it was done, but the disk was still unmountable
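
    For reference, the xfs_repair commands (a sketch of what I ran and what I understand the usual escalation to be; in Maintenance Mode the array device for disk 2 should be /dev/md2, which keeps parity in sync):

        xfs_repair -n /dev/md2    # -n: check only, report problems, change nothing
        xfs_repair /dev/md2       # actual repair attempt
        xfs_repair -L /dev/md2    # last resort: zeroes the log first; can lose recently written metadata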

     

    Disk log from the new disk:



    Dec 3 22:45:01 unrawr kernel: ata3: SATA max UDMA/133 abar m131072@0xfcd80000 port 0xfcd80200 irq 53
    Dec 3 22:45:01 unrawr kernel: ata3: SATA link down (SStatus 0 SControl 300)
    Dec 3 23:35:28 unrawr kernel: ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
    Dec 3 23:35:28 unrawr kernel: ata3.00: ATA-9: WDC WD30EFRX-68EUZN0, WD-WCC4N5ZA4578, 82.00A82, max UDMA/133
    Dec 3 23:35:28 unrawr kernel: ata3.00: 5860533168 sectors, multi 0: LBA48 NCQ (depth 32), AA
    Dec 3 23:35:28 unrawr kernel: ata3.00: configured for UDMA/133
    Dec 3 23:35:28 unrawr kernel: sd 3:0:0:0: [sdf] 5860533168 512-byte logical blocks: (3.00 TB/2.73 TiB)
    Dec 3 23:35:28 unrawr kernel: sd 3:0:0:0: [sdf] 4096-byte physical blocks
    Dec 3 23:35:28 unrawr kernel: sd 3:0:0:0: [sdf] Write Protect is off
    Dec 3 23:35:28 unrawr kernel: sd 3:0:0:0: [sdf] Mode Sense: 00 3a 00 00
    Dec 3 23:35:28 unrawr kernel: sd 3:0:0:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    Dec 3 23:35:29 unrawr kernel: sdf: sdf1
    Dec 3 23:35:29 unrawr kernel: sd 3:0:0:0: [sdf] Attached SCSI disk
    Dec 3 23:36:50 unrawr emhttpd: WDC_WD30EFRX-68EUZN0_WD-WCC4N5ZA4578 (sdf) 512 5860533168
    Dec 3 23:37:11 unrawr emhttpd: WDC_WD30EFRX-68EUZN0_WD-WCC4N5ZA4578 (sdf) 512 5860533168
    Dec 3 23:37:11 unrawr kernel: mdcmd (3): import 2 sdf 64 2930266532 0 WDC_WD30EFRX-68EUZN0_WD-WCC4N5ZA4578
    Dec 3 23:37:11 unrawr kernel: md: import disk2: (sdf) WDC_WD30EFRX-68EUZN0_WD-WCC4N5ZA4578 size: 2930266532
    Dec 3 23:37:26 unrawr emhttpd: WDC_WD30EFRX-68EUZN0_WD-WCC4N5ZA4578 (sdf) 512 5860533168
    Dec 3 23:37:26 unrawr kernel: mdcmd (3): import 2 sdf 64 2930266532 0 WDC_WD30EFRX-68EUZN0_WD-WCC4N5ZA4578
    Dec 3 23:37:26 unrawr kernel: md: import disk2: (sdf) WDC_WD30EFRX-68EUZN0_WD-WCC4N5ZA4578 size: 2930266532
    Dec 3 23:37:32 unrawr emhttpd: WDC_WD30EFRX-68EUZN0_WD-WCC4N5ZA4578 (sdf) 512 5860533168
    Dec 3 23:37:38 unrawr emhttpd: WDC_WD30EFRX-68EUZN0_WD-WCC4N5ZA4578 (sdf) 512 5860533168
    Dec 3 23:37:38 unrawr kernel: mdcmd (2): import 1 sdf 64 2930266532 0 WDC_WD30EFRX-68EUZN0_WD-WCC4N5ZA4578
    Dec 3 23:37:38 unrawr kernel: md: import disk1: (sdf) WDC_WD30EFRX-68EUZN0_WD-WCC4N5ZA4578 size: 2930266532
    Dec 3 23:38:06 unrawr emhttpd: WDC_WD30EFRX-68EUZN0_WD-WCC4N5ZA4578 (sdf) 512 5860533168
    Dec 3 23:38:06 unrawr kernel: mdcmd (2): import 1 sdf 64 2930266532 0 WDC_WD30EFRX-68EUZN0_WD-WCC4N5ZA4578
    Dec 3 23:38:06 unrawr kernel: md: import disk1: (sdf) WDC_WD30EFRX-68EUZN0_WD-WCC4N5ZA4578 size: 2930266532
    Dec 3 23:38:43 unrawr emhttpd: WDC_WD30EFRX-68EUZN0_WD-WCC4N5ZA4578 (sdf) 512 5860533168
    Dec 3 23:38:43 unrawr kernel: mdcmd (2): import 1 sdf 64 2930266532 0 WDC_WD30EFRX-68EUZN0_WD-WCC4N5ZA4578
    Dec 3 23:38:43 unrawr kernel: md: import disk1: (sdf) WDC_WD30EFRX-68EUZN0_WD-WCC4N5ZA4578 size: 2930266532

     

    Now I'm really unsure what to do next.

     

    The disk log from the parity drive:

     



    Dec 3 22:45:01 unrawr kernel: ata1: SATA max UDMA/133 abar m131072@0xfcd80000 port 0xfcd80100 irq 53
    Dec 3 22:45:01 unrawr kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
    Dec 3 22:45:01 unrawr kernel: ata1.00: ATA-9: WDC WD40EFRX-68WT0N0, WD-WCC4E3YCCVZY, 82.00A82, max UDMA/133
    Dec 3 22:45:01 unrawr kernel: ata1.00: 7814037168 sectors, multi 16: LBA48 NCQ (depth 32), AA
    Dec 3 22:45:01 unrawr kernel: ata1.00: configured for UDMA/133
    Dec 3 22:45:01 unrawr kernel: sd 1:0:0:0: [sdb] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
    Dec 3 22:45:01 unrawr kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks
    Dec 3 22:45:01 unrawr kernel: sd 1:0:0:0: [sdb] Write Protect is off
    Dec 3 22:45:01 unrawr kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
    Dec 3 22:45:01 unrawr kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    Dec 3 22:45:01 unrawr kernel: sdb: sdb1
    Dec 3 22:45:01 unrawr kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
    Dec 3 22:45:09 unrawr emhttpd: WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY (sdb) 512 7814037168
    Dec 3 22:45:09 unrawr kernel: mdcmd (1): import 0 sdb 64 3907018532 0 WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY
    Dec 3 22:45:09 unrawr kernel: md: import disk0: (sdb) WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY size: 3907018532
    Dec 3 22:45:46 unrawr emhttpd: WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY (sdb) 512 7814037168
    Dec 3 22:45:46 unrawr kernel: mdcmd (1): import 0 sdb 64 3907018532 0 WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY
    Dec 3 22:45:46 unrawr kernel: md: import disk0: (sdb) WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY size: 3907018532
    Dec 3 22:51:15 unrawr emhttpd: WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY (sdb) 512 7814037168
    Dec 3 22:51:15 unrawr kernel: mdcmd (1): import 0 sdb 64 3907018532 0 WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY
    Dec 3 22:51:15 unrawr kernel: md: import disk0: (sdb) WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY size: 3907018532
    Dec 3 23:08:13 unrawr emhttpd: WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY (sdb) 512 7814037168
    Dec 3 23:08:13 unrawr kernel: mdcmd (1): import 0 sdb 64 3907018532 0 WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY
    Dec 3 23:08:13 unrawr kernel: md: import disk0: (sdb) WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY size: 3907018532
    Dec 3 23:35:23 unrawr kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
    Dec 3 23:35:23 unrawr kernel: ata1.00: configured for UDMA/133
    Dec 3 23:36:50 unrawr emhttpd: WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY (sdb) 512 7814037168
    Dec 3 23:36:50 unrawr kernel: mdcmd (1): import 0 sdb 64 3907018532 0 WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY
    Dec 3 23:36:50 unrawr kernel: md: import disk0: (sdb) WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY size: 3907018532
    Dec 3 23:37:11 unrawr emhttpd: WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY (sdb) 512 7814037168
    Dec 3 23:37:11 unrawr kernel: mdcmd (1): import 0 sdb 64 3907018532 0 WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY
    Dec 3 23:37:11 unrawr kernel: md: import disk0: (sdb) WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY size: 3907018532
    Dec 3 23:37:26 unrawr emhttpd: WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY (sdb) 512 7814037168
    Dec 3 23:37:26 unrawr kernel: mdcmd (1): import 0 sdb 64 3907018532 0 WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY
    Dec 3 23:37:26 unrawr kernel: md: import disk0: (sdb) WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY size: 3907018532
    Dec 3 23:37:32 unrawr emhttpd: WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY (sdb) 512 7814037168
    Dec 3 23:37:32 unrawr kernel: mdcmd (1): import 0 sdb 64 3907018532 0 WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY
    Dec 3 23:37:32 unrawr kernel: md: import disk0: (sdb) WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY size: 3907018532
    Dec 3 23:37:38 unrawr emhttpd: WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY (sdb) 512 7814037168
    Dec 3 23:37:38 unrawr kernel: mdcmd (1): import 0 sdb 64 3907018532 0 WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY
    Dec 3 23:37:38 unrawr kernel: md: import disk0: (sdb) WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY size: 3907018532
    Dec 3 23:38:06 unrawr emhttpd: WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY (sdb) 512 7814037168
    Dec 3 23:38:06 unrawr kernel: mdcmd (1): import 0 sdb 64 3907018532 0 WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY
    Dec 3 23:38:06 unrawr kernel: md: import disk0: (sdb) WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY size: 3907018532
    Dec 3 23:38:43 unrawr emhttpd: WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY (sdb) 512 7814037168
    Dec 3 23:38:43 unrawr kernel: mdcmd (1): import 0 sdb 64 3907018532 0 WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY
    Dec 3 23:38:43 unrawr kernel: md: import disk0: (sdb) WDC_WD40EFRX-68WT0N0_WD-WCC4E3YCCVZY size: 3907018532

     

     

    The unassigned devices that I have all work without any issue. I still want to try not to lose all of the data, and I'm a bit clueless about what steps to take next.

    This is the current state: the one in Unassigned Devices is the old disk that gave me all the trouble, and the one above, sdd, is the one I used to replace it.

     

    Bildschirmfoto 2019-12-04 um 09.22.35.png

  8. I was running Unraid with just one HDD and one parity disk, and today I rebooted to install a new GPU.

    After the reboot everything was gone, so I checked the Main tab.

    My disk 1, a WDC drive, showed "unmountable file system". Now I pretty much only have the parity disk, and I could install another one, but would it recover the files from parity if I do? Because I think after the array started, it tried to do a parity check against the unmountable drive.

    I'm really scared now. I have backups of the most important things, but it would still hurt, and it would cost me weeks to rebuild everything exactly as it was.

     

    The problem in the past with the cache drive was kind of meh, but that was my fault for not having a UPS.

    But I'm really confused by this suddenly unmountable drive.

    (I also can't mount it via the Unassigned Devices plugin.)

     

     

    Here is a log:


    Dec 3 23:02:30 unrawr kernel: CPU: 0 PID: 9500 Comm: mount Tainted: P O 5.3.12-Unraid #1
    Dec 3 23:02:30 unrawr kernel: Hardware name: System manufacturer System Product Name/PRIME X470-PRO, BIOS 5406 11/13/2019
    Dec 3 23:02:30 unrawr kernel: Call Trace:
    Dec 3 23:02:30 unrawr kernel: dump_stack+0x67/0x83
    Dec 3 23:02:30 unrawr kernel: xfs_trans_cancel+0x55/0xcd [xfs]
    Dec 3 23:02:30 unrawr kernel: xfs_efi_recover+0x195/0x1db [xfs]
    Dec 3 23:02:30 unrawr kernel: xlog_recover_process_efi+0x2d/0x3f [xfs]
    Dec 3 23:02:30 unrawr kernel: xlog_recover_process_intents+0xd1/0x1b2 [xfs]
    Dec 3 23:02:30 unrawr kernel: xlog_recover_finish+0x14/0x85 [xfs]
    Dec 3 23:02:30 unrawr kernel: xfs_log_mount_finish+0x5a/0xc3 [xfs]
    Dec 3 23:02:30 unrawr kernel: xfs_mountfs+0x50c/0x700 [xfs]
    Dec 3 23:02:30 unrawr kernel: ? xfs_mru_cache_create+0x11c/0x13c [xfs]
    Dec 3 23:02:30 unrawr kernel: xfs_fs_fill_super+0x498/0x56d [xfs]
    Dec 3 23:02:30 unrawr kernel: mount_bdev+0x134/0x183
    Dec 3 23:02:30 unrawr kernel: ? xfs_test_remount_options+0x54/0x54 [xfs]
    Dec 3 23:02:30 unrawr kernel: legacy_get_tree+0x22/0x3b
    Dec 3 23:02:30 unrawr kernel: vfs_get_tree+0x1d/0xc1
    Dec 3 23:02:30 unrawr kernel: do_mount+0x6b6/0x76f
    Dec 3 23:02:30 unrawr kernel: ? memdup_user+0x3a/0x57
    Dec 3 23:02:30 unrawr kernel: ksys_mount+0x71/0x99
    Dec 3 23:02:30 unrawr kernel: __x64_sys_mount+0x1c/0x1f
    Dec 3 23:02:30 unrawr kernel: do_syscall_64+0x57/0xfd
    Dec 3 23:02:30 unrawr kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9
    Dec 3 23:02:30 unrawr kernel: RIP: 0033:0x14da2583225a
    Dec 3 23:02:30 unrawr kernel: Code: 48 8b 0d 39 7c 0c 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 06 7c 0c 00 f7 d8 64 89 01 48
    Dec 3 23:02:30 unrawr kernel: RSP: 002b:00007ffc89170018 EFLAGS: 00000206 ORIG_RAX: 00000000000000a5
    Dec 3 23:02:30 unrawr kernel: RAX: ffffffffffffffda RBX: 000014da259b9f64 RCX: 000014da2583225a
    Dec 3 23:02:30 unrawr kernel: RDX: 000000000040d500 RSI: 000000000040d580 RDI: 000000000040d560
    Dec 3 23:02:30 unrawr kernel: RBP: 000000000040d2f0 R08: 0000000000000000 R09: 000000000040d535
    Dec 3 23:02:30 unrawr kernel: R10: 0000000000000c00 R11: 0000000000000206 R12: 0000000000000000
    Dec 3 23:02:30 unrawr kernel: R13: 000000000040d560 R14: 000000000040d500 R15: 000000000040d2f0
    Dec 3 23:02:30 unrawr kernel: XFS (sde1): xfs_do_force_shutdown(0x8) called from line 1049 of file fs/xfs/xfs_trans.c. Return address = 000000000336dac2
    Dec 3 23:02:30 unrawr kernel: XFS (sde1): Corruption of in-memory data detected. Shutting down filesystem
    Dec 3 23:02:30 unrawr kernel: XFS (sde1): Please unmount the filesystem and rectify the problem(s)
    Dec 3 23:02:30 unrawr kernel: XFS (sde1): Failed to recover intents
    Dec 3 23:02:30 unrawr kernel: XFS (sde1): log mount finish failed
    Dec 3 23:02:31 unrawr unassigned.devices: Mount of '/dev/sde1' failed. Error message: mount: /mnt/disks/WDC_WD30EFRX-68EUZN0_WD-WCC4N5ZA4ARL: mount(2) system call failed: Structure needs cleaning.
    Dec 3 23:02:31 unrawr unassigned.devices: Partition 'WDC_WD30EFRX-68EUZN0_WD-WCC4N5ZA4ARL' could not be mounted...
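
    In case someone finds this later: as far as I understand it, "Structure needs cleaning" means XFS wants a repair, and the usual salvage order is roughly this (a sketch, not expert advice; /dev/sde1 is taken from the log above, the mountpoint is just an example):

        mkdir -p /mnt/rescue
        mount -o ro,norecovery /dev/sde1 /mnt/rescue   # read-only, skip log replay; copy data off first
        umount /mnt/rescue
        xfs_repair /dev/sde1                           # then attempt the actual repair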

     

  9. 13 hours ago, Squid said:

     

    I did this. I tried scrub now, which runs for a bit and then on both disks ends with 272 unfixable errors or so; recover also just leads to an error.

    I really need a better way. The most annoying part is the mails: because I use Mailcow, I still had to run a VM just for that, with one week of mails missing. Maybe I should increase the backup frequency.

    What annoys me the most is that I was feeling a little bit safe because of the RAID 1 cache pool. Little did I know that it's useless in this case: if one drive gets corrupted, it just takes the files on the other disk with it :D

  10. 1 hour ago, Squid said:

    You have corruption on your cache pool.  Unfortunately, I am not the appropriate person to help you with file system repairs on BTRFS

     

    Buy a UPS

    I will do so, thanks.

     

    One thing that's a bit weird for me: what's the next logical step? Add another cache drive? And here I was feeling a little bit secure :( Well, it still shows 168 GB of 250 GB in use. Is there any chance to save a VM that was on there, without using the week-old backup?

     

    I was feeling safe with the cache pool, but I guess I was wrong :(

    Is there any way to still have some hope?

     

    Or what would be the next logical step to get at least everything else up and running again?

     

    If one of my two RAID 1 SSDs fails, does my whole cache disappear, or just the affected files?
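
    For anyone else in this spot: the read-only salvage tool I've since been pointed at is btrfs restore (a sketch; the device and destination are examples, and the destination must be a different, healthy filesystem):

        btrfs restore -D -v /dev/sdX1 /mnt/rescue   # dry run: list what could be recovered, write nothing
        btrfs restore -v /dev/sdX1 /mnt/rescue      # copy the files out without writing to the damaged pool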

  11. Hello Everyone,

     

    Today I had a power loss, and after rebooting, Unraid looked fine at first. My array started, so I wanted to bring my VM back online, but suddenly

     

     

    libvirt failed to start and Docker failed to start; when checking the settings, it told me it couldn't find the directory.

     

    It was similar for all my shares: the Shares tab is completely empty. I can still browse the shares I created, and all the files are in them, but the system share is completely empty.

     

    What is the best way to go from here? I've included the diagnostics zip.

     

     

    unrawr-diagnostics-20191117-1241.zip

  12. Hello, so I want to move my whole home setup into one "little" box:

     

    running Unraid on it as network storage, but also my Debian mail server and, for example, two Windows VMs, or one Mac/Linux VM and one Windows VM.

     

    Due to some limitations of my current setup, I'm looking for hardware that offers the following:

     

    • the ability to pass each of my two GPUs through to a VM of its own
    • two USB controllers in total that I can pass through, one per VM, so each VM has its own USB controller
    • enough cores for my setup: 1 for Unraid, 2 for my Debian VM, 4 for VM1 and 5 or so for VM2 (so at least 12 cores)


    The tricky part I've figured out is that I need a mainboard and CPU combination that offers good IOMMU groups, and that information isn't easy to find (for me); the check I've been using is sketched below.

    What breaks my current setup is mainly that there is no option for a second USB controller and no way to pass through a second GPU with my Prime X470-Pro and Ryzen 7 2700X.
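
    For anyone suggesting boards, this is the usual way to inspect the grouping on a candidate system (a shell sketch; it needs the IOMMU enabled in the BIOS and on the kernel command line):

        # print every IOMMU group and the PCI devices in it
        for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}:"
            for d in "$g"/devices/*; do
                echo "    $(lspci -nns "${d##*/}")"
            done
        done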

  13. Hello everyone. So far I'm a happy user, but there is one thing that annoys me a bit, and even after trying a lot of things I just don't know what to do :(

     

    Problem:

     

    I have a docker-compose setup in a Debian VM, and back on the old server

    (same setup, but without Unraid, connecting to a normal NAS via NFS) I didn't have a single issue.

     

    The problems occur in Nextcloud and Emby, which both access the NFS share. It happens either after a reboot or after downloading a huge game to a different network share on Windows that isn't even on the same disk.

     

    My workaround so far, after a server reboot or whenever the error occurs, has been to just restart the Docker containers.

     

    Here is my fstab line, which also worked flawlessly on boot until I switched to Unraid:

    10.0.0.2:/mnt/user/cloud  /mnt/cloud   nfs      rw,sync,hard,intr  0     0
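
    (A variant I'm considering, assuming the Debian VM runs systemd: a systemd automount, so the share is mounted on first access and re-established after the server comes back. Untested on my side; note that intr has been a no-op on modern kernels anyway.)

        10.0.0.2:/mnt/user/cloud  /mnt/cloud   nfs      rw,hard,noauto,x-systemd.automount,x-systemd.idle-timeout=60  0     0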

     

    Emby also uses the same folder as Nextcloud for music, videos and so on.

     

    Is there something I need to know when using NFS? Maybe it would work if someone has a better way of mounting it. However, I can't use SMB, as Nextcloud requires specific per-user share settings and rwx permissions on the folders.

     

     

     

    EDIT:

     

    I have a second question: how many parity drives do you use? I currently use 2 parity and 2 data drives, to have roughly the same situation as with my old RAID 6 NAS.

     

    However, I'm not really sure whether the additional parity drive slows down transfer speeds, for example. I was even thinking of using a normal HDD as a cache disk, just so moving files around doesn't get so slow from time to time.

  14. Hello, I did that. I googled a bit more, but still couldn't find a solution; my server pretty much has no internet.

    It has an ens192 interface, and that's the only one.

     

    I also changed the ethernet model from virtio to "e1000-82545em", and in lspci the ethernet controller is listed as an Intel Corporation controller, which should be fine, I guess.

     

     

    However: if I ping something, it says "connect: Network is unreachable". It's a Debian VM.

     

     

    EDIT: I solved it. networkctl was the magic word: my network interface wasn't in "interfaces" after moving the VM, so I had to add it manually (roughly the stanza sketched below). Everything works perfectly now.
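
    For anyone hitting the same thing, roughly what I had to add (a sketch; DHCP assumed, adjust for static addressing):

        # /etc/network/interfaces
        allow-hotplug ens192
        iface ens192 inet dhcp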

     

  15. I actually measured the watts drawn by my old build, and so far it looks pretty good compared to my old NAS, which also draws quite a bit. I have three options now:

     

    1: Use my second PC for it entirely and just get some new HDD screws.

     

    2: Use my main PC (which I would like) headless, giving one GPU to my Linux system and one to my rarely used SSD.

    It would have two minus points:

    - only space for 2 internal HDDs (I would probably keep the data drives internal and put the backup/safety drives in external enclosures, if that's possible)

    - a little less CPU power for gaming, since I would not just run my Linux desktop next to Windows like now, but also my datastore and my little Debian server.

     

    3: Accept the performance loss as before, which comes with running a datastore and a small Linux server, but get a new PC case to mount all the HDDs internally.

     

  16. That is awesome to hear. The only negative is that my main PC case only supports two 3.5" hard drives (I have a small Fractal Meshify C),

     

    which makes that kind of hard. I mean, I still have the second PC, which is a normal tower with about 6 slots for 3.5" HDDs, but the idea of just using my main PC sounds kind of good. It's just unfortunate that when I bought the PC I already had a NAS, so I only wanted a smaller case with just enough space for normal SSDs next to my NVMe drive.

  17. Well, I actually do have an old PC with an i7-3770 that still works, and 16 GB RAM, but it would probably draw around 100 W instead of my 13 W server plus the 25 W NAS, so running it would roughly multiply that part of my bill by 2.5 (100 W versus about 38 W combined). Obviously it would be more powerful, but I don't know what performance I'd actually gain, since my current server idles at about 2% CPU most of the time and easily handles everything I do: Nextcloud, media streaming and so on.

  18. Hello,

    I recently switched from Windows to Linux and have a KVM setup to play games.

     

    I also have a home server and a pretty old NAS. Recently a friend's NAS broke and it was a mess, so now I'm thinking about moving from my home server with Windows Server and Hyper-V (Debian VM) to Unraid, but I have a few questions.

     

    1: Is ECC memory really that important?

     

    2: Is it an okay option to connect the NAS volumes over USB 3 from external enclosures? My "server" is a small office PC that doesn't draw much in general, but it only has space for two 2.5" drives.

     

    *The reason I don't want to turn my main PC into the Unraid machine is that I would first of all need a third GPU, because I always run Linux and Windows side by side, and the case also doesn't have much space for 3.5" HDDs.

     

    3: Can I connect to the VMs via Virtual Machine Manager to manage them?

     

    4: Does Unraid come with a web interface like a normal NAS or ESXi install? I don't want to always hook up a monitor to my server.

     

    5: Does docker-compose work with Unraid? Except for Apache and Certbot, pretty much everything I run is set up from two docker-compose files that manage Mailcow, Nextcloud etc. Or is it recommended to just create a Debian VM like before, instead of running the Docker things directly in Unraid?