Posts posted by blewstar

  1. While troubleshooting the cache drive I unassigned the wrong one and wiped the drive. It may be possible to recover the data, but there wasn't much on it, so I ordered new SSDs. I believe this is the solution. The drive is 5 years old and ran hot, so failure was that much more likely. I will update this thread once everything is back up; just making sure there are no other issues with updating to the latest version of Unraid.

  2. Sorry,

     

    root@Blewstar:~# fdisk -l /dev/nvme0n1
    Disk /dev/nvme0n1: 953.87 GiB, 1024209543168 bytes, 2000409264 sectors
    Disk model: SBX                                     
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    root@Blewstar:~# fdisk -l /dev/nvme1n1
    Disk /dev/nvme1n1: 953.87 GiB, 1024209543168 bytes, 2000409264 sectors
    Disk model: SBX                                     
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    root@Blewstar:~# 

  3. Well, you burn and learn. It seems that data is gone. Thanks for your help. I have ordered new SSDs and will build the shares and Dockers back up. Most of my data is on the array, so it shouldn't take me long. It taught me to review documentation and always make backups. Which backup option covers shares, including appdata? I thought I read flash in one place, but perhaps that was an old thread.

     

    root@Blewstar:~# fdisk -1 /dec/nvme0n1
    fdisk: invalid option -- '1'
    Try 'fdisk --help' for more information.
    root@Blewstar:~# fdisk -1 /dev/nvme0n1
    fdisk: invalid option -- '1'
    Try 'fdisk --help' for more information.
    root@Blewstar:~# fdisk -1 /dev/nvme1n1
    fdisk: invalid option -- '1'
    Try 'fdisk --help' for more information.
    root@Blewstar:~# fdisk help

    Welcome to fdisk (util-linux 2.38.1).
    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.

    fdisk: cannot open help: No such file or directory
    root@Blewstar:~# 

  4. Hi,

     

    I have been troubleshooting shares that disappeared until I rebooted. It turned out that one of the two cache drives was read-only. While troubleshooting this I unassigned the drives while switching them. I did not format them, but I followed information to remove them, start the array so Unraid would forget them, and then reassign them. After several hours of reading the forums I am at a loss as to how to restore them. I have very little experience with Unraid, having only set it up and forgotten about it until I updated from version 6.9.2 to current. I think that is what caused my drive to go out. I've been chasing my tail ever since. Any help is appreciated.

     

    image.thumb.png.449a2f6839510f5114800afe6f88afe4.png

  5. Thought I would update this issue with some information on what I have done so far. With a good friend of mine who is more knowledgeable with Unraid, we started from scratch.

     

    1. Made sure that the VM has the right binds set up. This also included looking at the IOMMU grouping, without separating the GPU into its own group.

    2. Disabled the VM and Docker services.

    3. Checked Settings > Fix Common Problems (app) for errors. Found several duplicate files on both the array and cache; moved the files to the cache and deleted the extras. There were versions of libvirt (spelling?) on both.

    4. Checked shares and noticed that the appdata share was also being removed.

    5. Logs were checked, but for now I'm doing the above and seeing if the issue persists.

    6. Ran Fix Common Problems again and have a new error, which will be the next thing to look at.

     

    Error: Unable to write to cache. Drive mounted read-only or completely full. Begin Investigation Here:

     

    I have a mirrored cache of 1TB NVMe drives. I am beginning to think that the primary NVMe is not working (stuck in read-only). My plan is to swap them on the motherboard; if the issue follows the drive, that NVMe needs to be replaced. I will wait for my friend before doing this. The cache is not full from what I can see, so the issue is the physical drive.
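A quick way to test the read-only suspicion before swapping hardware is to probe the mount point for writability from the terminal. The snippet uses /tmp as a placeholder so it is self-contained; on the server TARGET would be /mnt/cache (an assumption about the mount path):

```shell
# Probe a filesystem for writability: a read-only or completely full
# filesystem will fail the touch. TARGET=/tmp is a placeholder; on the
# server this would be /mnt/cache.
TARGET=/tmp
if touch "$TARGET/.rw-probe" 2>/dev/null; then
  rm -f "$TARGET/.rw-probe"
  echo "$TARGET is writable"
else
  echo "$TARGET is read-only or full"
fi
```

If /mnt/cache fails the probe while df still shows free space, that points at a read-only mount rather than a full pool.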

     

    Will update every 3 days or so 

     

    blewstar-syslog-20231231-2232.zip

  6. Hello everyone,

     

    I updated from version 6.9.2 to the latest version. My VM is crashing, and I have noticed that my "Domain" share is being removed. However, after a reboot it comes back. I can use the VM for hours, then walk away and it crashes. When I try to start the VM before rebooting, I receive the errors:

     

    internal error: qemu unexpectedly closed the monitor: 2023-12-26T20:43:57.775837Z qemu-system-x86_64: Could not open '/etc/libvirt/qemu/nvram/30595530-640e-414c-535d-3361c45c54a0_VARS-pure-efi.fd': Read-only file system

     

    With the following error I just stayed at the login screen:

     

    internal error: process exited while connecting to monitor: 2023-12-27T01:48:33.861571Z qemu-system-x86_64: Could not open '/etc/libvirt/qemu/nvram/30595530-640e-414c-535d-3361c45c54a0_VARS-pure-efi.fd': Read-only file system

     

    I believe this is referring to a full cache or disk. I really don't know how to properly read the storage to determine where the problem is. There was no issue before the update, so I'm scratching my head here. I have since removed all VM vdisks except the one I am using, freeing up about 120 GB. The issue is still there.
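On reading the storage: df is the quickest way to see whether a filesystem is full. The root path below is a placeholder so the snippet runs anywhere; on the server the interesting paths would be /mnt/cache and /mnt/user (assumed mount points):

```shell
# Print used% and available space for a filesystem.
# "/" is a placeholder; on Unraid check /mnt/cache and /mnt/user instead.
df -hP / | awk 'NR==2 {print "used: " $5 ", free: " $4}'
```

A pool at or near 100% used, or one mounted read-only, would both produce the qemu "Read-only file system" error above.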

     

    Could I ask the community's help on which screenshots or tools to use to provide the information needed for troubleshooting? I'm not sure if I should use Paint to capture the screen for Main, the Domains share, the VM edit/XML, etc.

     

    Thank you for any help!

     

     

     

  7. Hi,

     

    It seems I have made a mistake, and I'm not sure what the process is to correct it. I added an SSD (Evo, unassigned) and thought I had tied it to my VM correctly. This SSD stores games, and I want to keep it separate from the VM. For the last couple of weeks my performance has suffered. I looked in the VM and the SSD is a vdisk. I know Unraid controls the vdisk, which is not what I wanted.

     

    Steps I think I should take:

     

    1. Copy the data from the SSD to the array.

    2. Wipe the SSD so I can remount it (and figure out how I created a vdisk so I don't do it again).

    3. Do I remove the SSD from the VM by deleting its XML entry?
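On step 3: yes, the vdisk appears in the VM's XML as a `<disk>` element, and deleting that whole element (with the VM stopped) detaches the vdisk without touching its data. A sketch of what the entry typically looks like; the source path, target dev, and cache mode here are assumptions for illustration, not the actual VM's values:

```xml
<!-- Hypothetical vdisk entry; the real source path and target dev will differ.
     Removing this whole <disk>...</disk> block detaches the vdisk from the VM. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/mnt/user/domains/Games/vdisk2.img'/>
  <target dev='hdc' bus='virtio'/>
</disk>
```

Once detached, the SSD can be mounted with the Unassigned Devices plugin and shared to the VM over the network, or passed through as a physical device, depending on the goal.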

     

    Appreciate any help,

     

    Blewstar

  8. Hello,

     

    I have been trying to remove unallocated space from a VM using an elevated command line. I am getting the error -bash: cd: too many arguments. The directory I am trying to get to is /mnt/user/domains/Windows 10 Virtual Desktop/. I can get to the domains directory with no issue. I have tried quotes around Windows 10 Virtual Desktop, and spaces, and anything else I can think of. When I list the directory it shows as: Windows\ 10/  Windows\ 10\ Virtual\ Desktop/

     

    I tried underscores to tie the Windows 10 Virtual Desktop name together, but no dice. I have little experience with file names and tried to find this on Google. I have copied and pasted the listed result and still get the same error.
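The backslashes in the listing are just the shell escaping the spaces; the fix is to either quote the whole path or keep a backslash before each space, not both. A self-contained demo using a placeholder directory under /tmp standing in for /mnt/user/domains:

```shell
# Directories with spaces need quoting or escaping.
mkdir -p "/tmp/domains/Windows 10 Virtual Desktop"

cd "/tmp/domains/Windows 10 Virtual Desktop"     # quoted form
pwd
cd /tmp/domains/Windows\ 10\ Virtual\ Desktop    # escaped form, same directory
pwd
```

On the server the same applies: cd "/mnt/user/domains/Windows 10 Virtual Desktop" works, and tab completion inserts the backslashes automatically.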

     

    Am stuck, any ideas?

     

    With thanks

  9. Hi all, 

     

    I think I have a good grasp on what I need from reading the pages here, but I have a few questions to make sure.

     

    I am looking at adding 8 SATA ports for expansion, including one for an optical drive. My Win 10 VM recognizes the optical drive but crashes when opening a data disc, for instance when installing audio drivers for an Audigy Rx sound card that was just installed. I will be using this optical drive for burning Blu-rays for the Plex server. From additional threads I read, an expansion card resolved this issue for others. Currently I have just two 4TB HGST drives with room to add 6 more, so it makes sense to get a card now for future growth.

     

    1. Purchasing an LSI 9211-8i: does this come in IT mode? And IT mode removes the RAID component, correct?

    2. When purchasing cables, should I avoid the ones that have the metal latches? I need the SATA end at the HDD to be a 90-degree down-angle connector. I could use a recommendation here for which cables to use with the expansion board.

    3. Is this board plug-and-play? It doesn't matter if I need to configure it, as long as I have directions to follow.

    4. Which vendor is recommended to avoid counterfeits? I'd rather pay more for quality than go cheap.

    5. Anything else I need to know?

    6. Does adding this to the PCIe lanes impact the 16 lanes for the GPU?

     

    Would this be the right card with IT mode already installed?

    https://www.amazon.com/LSI-Controller-LSI00301-9207-8i-Internal/dp/B008J49G9A/ref=olp_product_details?_encoding=UTF8&me=

     

    Thanks!

  10. Hi, I too am having issues with onboard audio that cannot be added to a Win 10 VM. I can select it in the VM, but when launching the VM I get an error. I have an ASRock Z370 Taichi board with an Intel 8700K processor. I followed the instructions and am still not able to use it. Is this possible with my setup? Do I need to buy a sound card? I have a 5.1 surround system I would like to use for gaming. I used to have a USB controller passthrough before I did the PCIe ACS Override, if that could have caused an issue. I will need to do that again unless this VM will recognize new hardware (will need to test this).

     

    Execution error

    internal error: qemu unexpectedly closed the monitor: 2018-07-23T18:58:38.067861Z qemu-system-x86_64: -device vfio-pci,host=00:1f.3,id=hostdev2,bus=pci.0,addr=0x9: vfio error: 0000:00:1f.3: group 11 is not viable Please ensure all devices within the iommu_group are bound to their vfio bus driver.

     

    IOMMU group 11: [8086:a2c9] 00:1f.0 ISA bridge: Intel Corporation Device a2c9
      [8086:a2a1] 00:1f.2 Memory controller: Intel Corporation 200 Series PCH PMC
      [8086:a2f0] 00:1f.3 Audio device: Intel Corporation 200 Series PCH HD Audio
      [8086:a2a3] 00:1f.4 SMBus: Intel Corporation 200 Series PCH SMBus Controller

     

    default menu.c32
    menu title Lime Technology, Inc.
    prompt 0
    timeout 50
    label unRAID OS
      menu default
      kernel /bzimage
      append pcie_acs_override=downstream vfio-pci.ids=8086:a2f0
    modprobe.blacklist=i2c_i801,i2c_smbus initrd=/bzroot
    label unRAID OS GUI Mode
      kernel /bzimage
      append pcie_acs_override=downstream initrd=/bzroot,/bzroot-gui
    label unRAID OS Safe Mode (no plugins, no GUI)
      kernel /bzimage
      append pcie_acs_override=downstream initrd=/bzroot unraidsafemode
    label unRAID OS GUI Safe Mode (no plugins)
      kernel /bzimage
      append pcie_acs_override=downstream initrd=/bzroot,/bzroot-gui unraidsafemode
    label Memtest86+
      kernel /memtest
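If the goal is to split the audio device out of group 11, one variant sometimes tried is the multifunction form of the ACS override. This is an assumption worth testing rather than a known fix; audio devices hanging off the PCH often cannot be isolated from their group at all. The main append line would become something like:

```
append pcie_acs_override=downstream,multifunction vfio-pci.ids=8086:a2f0
modprobe.blacklist=i2c_i801,i2c_smbus initrd=/bzroot
```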

     

    Did I make a mistake somewhere? Please let me know, and thank you!
