Checkm4te

Posts posted by Checkm4te

  1. So over the past weeks I managed to back up my data as mentioned above, but it seems it was just a display error: after I finally started my array yesterday to restore the data, the drives were shown as normal and everything was still there. I still recommend doing a full backup of your data/server.

    [screenshot]

     

    I followed this video from SpaceInvader One.

     

    After that I set my PCIe ACS override to Downstream:
    [screenshot of the PCIe ACS override setting]
    and after a reboot all my IOMMU groups look good and my VM starts again.
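
    For reference, as far as I can tell that setting just adds a kernel parameter to the boot line on the flash drive. A minimal sketch of what the append line in /boot/syslinux/syslinux.cfg ends up looking like (the rest of the line depends on your setup):

        append pcie_acs_override=downstream initrd=/bzroot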

     

    Thank you harshl!

     

     

  2. Hey, thank you for the information. I must have overlooked that video from him while I was researching.

     

    I started with his first suggestion,

    to add "vfio-pci.ids=10de:17c8,10de:0fb0,1d6a:d107"

     

    After I rebooted my server, I think I now have a much bigger problem:

    [screenshot]

     

    It shows my cache drives as new drives.

    I tried removing the line I added and rebooting the system, but it still says both of them are new drives.

     

    And, stupidly, I didn't do a backup of my cache files in the past month, so:

    is there any possibility to access my files on them?

    I haven't started the array yet.

    The pool was btrfs RAID1.

     

    Edit: I see them in Unassigned Devices:

    [screenshot of Unassigned Devices]

     

    I also still have the two old cache drives I swapped out at the weekend.
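
    Would it be safe to try a read-only mount of the pool from the console to check whether the data is still readable? A sketch of what I have in mind (the device path and mount point are just placeholders based on what Unassigned Devices shows for one of my NVMe drives):

        mkdir -p /mnt/cachecheck
        mount -o ro,degraded /dev/nvme0n1p1 /mnt/cachecheck   # read-only; degraded in case btrfs only sees one RAID1 member
        ls /mnt/cachecheck
        umount /mnt/cachecheck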

  3. Hey, I have a problem with my Windows 10 VM.

     

    System overview:
    Unraid OS version 6.8.3 (2020-03-05)
    CPU: Intel Xeon E3-1246 v3
    MB: ASRock Z97 Extreme v6
    GPU: Gainward Nvidia 980 Ti GS

     

    A situation summary:

    I shut down my server for a few hardware upgrades: two NVMe SSDs for the cache and a 10GbE card.

    I also used the opportunity to do a BIOS upgrade.

    After that I started Unraid and my array and followed this guide to change my cache drives.

    That worked fine for me. Then, when I wanted to start my Windows 10 VM, I noticed that the GPU I have installed didn't show up in the VM settings.

     

    I searched and found out that I need to enable IOMMU passthrough, in my case Intel VT-d, in my BIOS.

    Since the BIOS settings got reset by the upgrade, I checked everything again and booted the system up again.

     

    My error:

    Now the GPU showed up in my VM again and everything looked good so far,

    but when I start the VM the following error occurs:

     

    internal error: qemu unexpectedly closed the monitor: 2020-06-16T17:56:01.149672Z qemu-system-x86_64: -device vfio-pci,host=0000:01:00.0,id=hostdev0,x-vga=on,bus=pci.0,addr=0x5: vfio 0000:01:00.0: group 1 is not viable Please ensure all devices within the iommu_group are bound to their vfio bus driver.

     

    So I checked the IOMMU groups, and I have no idea why it groups itself like that:

     

    [screenshot of the IOMMU groups]
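
    (If it helps, the same grouping can also be listed from the console with a small loop over sysfs; a sketch:)

        # print every IOMMU group and the PCI devices it contains
        for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}:"
            for d in "$g"/devices/*; do
                lspci -nns "${d##*/}"
            done
        done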

     

    VM settings

    [screenshot of the VM settings]

     

    So I did some research and found this:

     

    But I didn't really understand the solution, so I wanted to ask what I should do:

    Enable the PCIe ACS override (and if so, which option)?

    Or do the vfio-bind (and if so, how)?


  4. The Array works again!

    It worked exactly as you said.

     

    In short:

    Start the array in maintenance mode,

    go into the corrupted disk's menu, and

    start xfs_repair with options:        <- blank for repair

    If an error occurs, follow the help text and run

    xfs_repair with options: -L

    and after that, again:

    start xfs_repair with options:        <- blank for repair

     

    that worked for me.
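
    For reference, the console equivalent of those GUI steps would look roughly like this (a sketch only: the array has to be in maintenance mode, and md1 stands for whichever disk number is affected):

        xfs_repair /dev/md1        # plain repair (what the blank options field runs)
        xfs_repair -L /dev/md1     # only if the log-replay error comes up; -L zeroes the log
        xfs_repair /dev/md1        # then run the plain repair once more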

     

    Thank you very much everyone!

  5. I first started xfs_repair with -n as the option; after that I started the repair again with the option field blank, as described in the help.

    This gave me the following error report:

    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
            - zero log...
    ERROR: The filesystem has valuable metadata changes in a log which needs to
    be replayed.  Mount the filesystem to replay the log, and unmount it before
    re-running xfs_repair.  If you are unable to mount the filesystem, then use
    the -L option to destroy the log and attempt a repair.
    Note that destroying the log may cause corruption -- please attempt a mount
    of the filesystem before doing this.

    So, as I can't mount my disk, I should run xfs_repair with the option -L, am I right?
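
    Or, before that, would it make sense to try a manual mount once, as the message suggests, so the log can replay? Something like this is what I have in mind (a sketch; /dev/md2 and the mount point are just my guesses for disk 2 with the array in maintenance mode):

        mkdir -p /mnt/test
        mount /dev/md2 /mnt/test    # if this succeeds, the log gets replayed
        umount /mnt/test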

  6. Here is the required zip file again.

    I tried to start the array in maintenance mode, and that works.

    Should I try to start the array without disk 2 and/or disk 7 to check whether one of these disks is causing the error?

     

  7. First of all, I'm from Germany, so I hope you can follow my English.

     

    My problem:

    I can't start my array anymore.

    I reboot the server, and when I hit the Start button in the web interface it ends up in a loading loop.

    When the array isn't started, Unraid and the web interface work just fine.

    The first time, I was able to reboot the server with PuTTY; after that

    I couldn't shut down or reboot the server over the console in PuTTY and had to do a manual forced shutdown with the power button.

    The console only works when it isn't stuck in the loop.

     

    I have now forced the shutdown twice, so Unraid wants to start a parity check when I start the array.

     

    What I did before the error occurred:

    I disabled disk 2 in my media share and enabled read & write for my user.

    I moved ~300 GB of media files (mkv, mp3, etc.) via Windows Explorer from disk 2 (2 TB HDD) to disk 7 (4 TB HDD).

    Then the copy process aborted due to an "unexpected network error".

    I don't know what caused it; maybe the Windows backup that started during the copy process. It also stopped due to an "unexpected network error".

    My media share contains disks (2), 4, 6 & 7,

    my backup share disks 3 & 5,

    so there should be no conflict between the disks.

     

    What still works:

    I can log in via PuTTY as root (console sketch below).

    I can restart the server, and the web interface starts.

    I can ping my Unraid server at 192.168.xxx.y.
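
    Since the console works, I can also pull logs from there. A sketch of what I would run (assuming the built-in diagnostics command is available on this version):

        # watch the system log while the array start hangs
        tail -f /var/log/syslog
        # and, if available, generate the diagnostics zip for the forum (lands in /boot/logs)
        diagnostics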

     

    My specs:
    Unraid Server Plus v6.3.3
    Intel Xeon E3-1246 v3
    32 GB RAM
    2x 480 GB SanDisk Ultra II SSD as cache
    7 differently sized HDDs, all Seagate except one WD
    1x Seagate 4 TB parity
    2x LAN, set up as backup

     

     

     
