JQNE

Members
  • Posts: 55
  • Joined

  • Last visited

Everything posted by JQNE

  1. SOLUTION! I noticed my last parity check was reported as being done 15 years ago, so I checked my server's time and date. I corrected the time and date in Settings, and the next day everything was working correctly: app updates working, OS update working. (A quick way to verify the clock from the console is sketched after this list.)
  2. Yes indeed. I don't see the same symbols, but in the GUI terminal...
  3. Thank you for the clarification. I tried with these settings, and I also restarted the tower before checking. No luck... In CA I still get the same error message... Could there be a problem with my router? I can ping, for example, google.com.
  4. No, I have not. Could you point me to a guide or tutorial on how to do it, please?
  5. Hi! Thank you @Frank1940 for these instructions. I was able to update to version 6.9.0 and Unraid is running happily. BUT... I still have the same problem. Community Applications reports "Download of appfeed failed". I can't update plugins and I can't update the OS (version 6.9.1 is available), so I think the old Unraid version was not the problem. Version 6.9.0 includes the new Update Assistant test, and that test also suggested I should be OK to update... I also reset my router to factory settings. No help. Any other ideas? Or do I have to start thinking about a new build... unraid-diagnostics-20211211-1233.zip
  6. I would be happy to update, but it seems that I have lost the ability to update... I have not been paying much attention to my server or keeping it updated; it has been working very nicely as my local network storage.
  7. Thank you for this idea. As far as I can tell, everything is okay and the internet is working. Still, I can't connect to Community Applications, and "Check for Updates" under Update OS does nothing. Any other ideas on the network side? Could this be a bug in Unraid OS?
  8. Community Applications is not working: "Download of appfeed failed". The Update OS check doesn't find anything either. I'm currently running 6.8.3. Local connections are working. Any ideas? (A few connectivity checks that can be run from the server console are sketched after this list.)
  9. Hi! Is there any way to spin down the server's hard disks when running bare-metal Windows? I love the silence when using the VM on Unraid, running from the SSD with the HDDs spun down. On bare-metal Windows the HDDs produce an annoying hum the whole time... The HDDs are needed only as Unraid vault drives.
  10. As I understood it, it is not a vdisk. It uses Clover to boot from the NVMe drive, but isn't booting from NVMe now supported by Unraid? When I boot that VM, the NVMe drive disappears from the Unassigned Devices list, so I think it is passed through to the VM. I will start by copying my VM image to the array and then start tinkering with the NVMe...
  11. Before I try that: is it possible to increase my Win10 partition on the NVMe drive? I've tried to find some instructions but came up really short.
  12. No. I'm not sure what command I could use to create that...
  13. SOLVED! In the XML the address pointing at the NVMe controller's IOMMU group was wrong. I changed that and everything works fine!
  14. Thanks for the advice! I created a new VM and set it to boot with a custom Clover that boots Windows from the NVMe drive. That had been working nicely for me. Now the VM only boots to the Clover boot manager main screen and sits there; in the bottom right corner there is the number 3974. The VM is not frozen, and I can access the Clover setup menus. I set up this Clover NVMe boot the same way as in SIO's video. And to be clear, my system had been working and booting from NVMe without problems. What could be the problem? unraid-diagnostics-20190817-1354.zip
  15. This part is my GPU. On the address type line it says slot='0x05'. Would it be okay to change that to the GPU's slot number, which I can look up under System Devices? Or could something bad happen?
  16. I have looked at my XML for a while and I can't figure this out. Here is the VM start error:

        Execution error
        Device 0000:05:00.0 not found: could not access /sys/bus/pci/devices/0000:05:00.0/config: No such file or directory

      I added a PCIe card to my rig and I think it has moved my USB controller from 05 to 06; I have checked that the USB controller is now at 06. My VM runs from the NVMe drive. Here is my XML (a sketch for locating the device's new PCI address is included after this list):

        <?xml version='1.0' encoding='UTF-8'?>
        <domain type='kvm'>
          <name>Joonas Windows 10</name>
          <uuid>xxxxxxxx-48c7-bb18-431a-8b07bd737733</uuid>
          <description>Gaming</description>
          <metadata>
            <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
          </metadata>
          <memory unit='KiB'>12582912</memory>
          <currentMemory unit='KiB'>12582912</currentMemory>
          <memoryBacking>
            <nosharepages/>
          </memoryBacking>
          <vcpu placement='static'>4</vcpu>
          <cputune>
            <vcpupin vcpu='0' cpuset='2'/>
            <vcpupin vcpu='1' cpuset='3'/>
            <vcpupin vcpu='2' cpuset='4'/>
            <vcpupin vcpu='3' cpuset='5'/>
          </cputune>
          <os>
            <type arch='x86_64' machine='pc-i440fx-3.0'>hvm</type>
            <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
            <nvram>/etc/libvirt/qemu/nvram/xxxxxxxx-48c7-bb18-431a-8b07bd737733_VARS-pure-efi.fd</nvram>
          </os>
          <features>
            <acpi/>
            <apic/>
            <hyperv>
              <relaxed state='on'/>
              <vapic state='on'/>
              <spinlocks state='on' retries='8191'/>
              <vendor_id state='on' value='none'/>
            </hyperv>
          </features>
          <cpu mode='host-passthrough' check='none'>
            <topology sockets='1' cores='4' threads='1'/>
          </cpu>
          <clock offset='localtime'>
            <timer name='hypervclock' present='yes'/>
            <timer name='hpet' present='no'/>
          </clock>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>restart</on_crash>
          <devices>
            <emulator>/usr/local/sbin/qemu</emulator>
            <disk type='file' device='disk'>
              <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
              <source file='/mnt/cache/vdisk_ssd/Joonas Windows 10/vdisk1.img'/>
              <target dev='hdc' bus='scsi'/>
              <boot order='1'/>
              <address type='drive' controller='0' bus='0' target='0' unit='2'/>
            </disk>
            <disk type='file' device='cdrom'>
              <driver name='qemu' type='raw'/>
              <source file='/mnt/user/isos/virtio-win-0.1.160-1.iso'/>
              <target dev='hdb' bus='ide'/>
              <readonly/>
              <address type='drive' controller='0' bus='0' target='0' unit='1'/>
            </disk>
            <controller type='usb' index='0' model='ich9-ehci1'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
            </controller>
            <controller type='usb' index='0' model='ich9-uhci1'>
              <master startport='0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
            </controller>
            <controller type='usb' index='0' model='ich9-uhci2'>
              <master startport='2'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
            </controller>
            <controller type='usb' index='0' model='ich9-uhci3'>
              <master startport='4'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
            </controller>
            <controller type='scsi' index='0' model='virtio-scsi'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
            </controller>
            <controller type='ide' index='0'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
            </controller>
            <controller type='virtio-serial' index='0'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
            </controller>
            <controller type='pci' index='0' model='pci-root'/>
            <interface type='bridge'>
              <mac address='52:54:00:25:2c:0c'/>
              <source bridge='br0'/>
              <model type='virtio'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
            </interface>
            <serial type='pty'>
              <target type='isa-serial' port='0'>
                <model name='isa-serial'/>
              </target>
            </serial>
            <console type='pty'>
              <target type='serial' port='0'/>
            </console>
            <channel type='unix'>
              <target type='virtio' name='org.qemu.guest_agent.0'/>
              <address type='virtio-serial' controller='0' bus='0' port='1'/>
            </channel>
            <input type='tablet' bus='usb'>
              <address type='usb' bus='0' port='1'/>
            </input>
            <input type='mouse' bus='ps2'/>
            <input type='keyboard' bus='ps2'/>
            <hostdev mode='subsystem' type='pci' managed='yes'>
              <driver name='vfio'/>
              <source>
                <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
              </source>
              <rom file='/mnt/user/Setup_files/GTX_file/asus1070.dump'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
            </hostdev>
            <hostdev mode='subsystem' type='pci' managed='yes'>
              <driver name='vfio'/>
              <source>
                <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
              </source>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
            </hostdev>
            <hostdev mode='subsystem' type='pci' managed='yes'>
              <driver name='vfio'/>
              <source>
                <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
              </source>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
            </hostdev>
            <memballoon model='none'/>
          </devices>
        </domain>

      unraid-diagnostics-20190806-2028.zip
  17. Re-enabling did work. Thanks for the advice!
  18. I swapped cables with the Blu-ray drive. Here are both diags as well. What do you mean by "rebuild on top"? unraid-diagnostics-20190725-1926.zip unraid-smart-20190725-1924.zip
  19. Here are also the full diags: unraid-diagnostics-20190725-1817.zip
  20. I ran a SMART short test afterwards, and it completed without an error state. 10089 power-on hours (1y, 1m, 24d, 9h) is a really good amount of time for an old, cheap laptop HDD... I think I also checked the cables. (The smartctl commands for running such a test are sketched after this list.) unraid-smart-20190724-2137.zip
  21. A couple of days ago I installed a new expansion card (LSI SAS9211-8i) in my Unraid server, with one new drive attached to it, and it worked fine. I precleared the new drive and then added it to my array. Right after that, another disk went into an error state, so I installed a new drive, precleared it, removed the old drive and rebuilt onto the new one. Then I installed yet another new drive and precleared it, intending to add it as a second parity. I started the array normally, but straight away I got another drive in an error state, so I never got to assign the new drive as parity 2; I just wanted to get the array running first to see that everything was working normally. Now I'm scared to do anything, and I have stopped the array. Would it be wise to replace the errored drive with the new precleared drive, or to add it as a second parity? Could this just be two random drive failures in a row, or can you see something bad in my diagnostics? Thanks in advance! I'm very concerned that there is something really wrong with my server... unraid-diagnostics-20190724-2103.zip
  22. Whether the drive is mounted or not, I get an error like this over the SSH connection, so it doesn't work (a corrected command is sketched after this list):

        root@Unraid:/mnt# mkfs.xfs nvme0n1p3
        Error accessing specified device nvme0n1p3: No such file or directory
        Usage: mkfs.xfs
        /* blocksize */         [-b size=num]
        /* metadata */          [-m crc=0|1,finobt=0|1,uuid=xxx,rmapbt=0|1,reflink=0|1]
        /* data subvol */       [-d agcount=n,agsize=n,file,name=xxx,size=num,
                                    (sunit=value,swidth=value|su=num,sw=num|noalign),
                                    sectsize=num
        /* force overwrite */   [-f]
        /* inode size */        [-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,
                                    projid32bit=0|1,sparse=0|1]
        /* no discard */        [-K]
        /* log subvol */        [-l agnum=n,internal,size=num,logdev=xxx,version=n
                                    sunit=value|su=num,sectsize=num,lazy-count=0|1]
        /* label */             [-L label (maximum 12 characters)]
        /* naming */            [-n size=num,version=2|ci,ftype=0|1]
        /* no-op info only */   [-N]
        /* prototype file */    [-p fname]
        /* quiet */             [-q]
        /* realtime subvol */   [-r extsize=num,size=num,rtdev=xxx]
        /* sectorsize */        [-s size=num]
        /* version */           [-V]
                devicename
        <devicename> is required unless -d name=xxx is given.
        <num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),
              xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
        <value> is xxx (512 byte blocks).

      When the NVMe drive is mounted I can see two of its partitions in mc under /mnt/disks/, but the third one is not shown. In SIO's video the GUI support is there. Why was it taken out...?
  23. Thanks! No errors when running the fstrim command, and the Dynamix SSD TRIM plugin is installed and running daily. Still only 35 MB/s write speeds to the cache. (The fstrim check is sketched after this list.)
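
A few command-line sketches related to the posts above follow; they are illustrative only. For the system-clock issue behind post 1, a minimal check from the Unraid console, assuming the usual date, hwclock, and curl binaries are present:

    # Show the current system date and time; a clock that is years off
    # breaks HTTPS certificate validation, which is exactly what the
    # appfeed download and the OS update check depend on.
    date

    # Show what the hardware (BIOS) clock reports, for comparison.
    hwclock --show

    # Any HTTPS request will complain if the clock is outside the
    # certificate's validity window; the exact site does not matter.
    curl -v https://unraid.net -o /dev/null 2>&1 | grep -iE 'date|certificate|expire'

The permanent fix is the one described in post 1: correct the date and time (or point the server at an NTP source) under Settings > Date and Time.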
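
For the appfeed failures in posts 5-8, a rough connectivity checklist, assuming ping, nslookup, and curl are available on the console (they normally are on Unraid, but that is an assumption here); the hostnames are placeholders, not the real CA feed URL:

    # Raw reachability -- the posts already confirm this works.
    ping -c 3 google.com

    # Name resolution -- CA can fail even with working connectivity
    # if DNS is broken on the server itself.
    nslookup google.com

    # An HTTPS fetch with verbose output shows DNS, TLS handshake and
    # HTTP status in one go; substitute the feed URL from the CA error.
    curl -v https://example.com/feed.json -o /dev/null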
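
For the passthrough error in post 16 (and the fix reported in post 13), a sketch of how the host-side PCI address can be re-checked after a new card shifts the bus numbering; the grep pattern is only an example:

    # List PCI devices with their current bus addresses. A controller
    # that used to live at 05:00.0 may now show up as 06:00.0.
    lspci -nn | grep -i usb

    # Update the VM definition so the <source><address .../> inside the
    # affected <hostdev> matches what lspci reports now.
    virsh edit "Joonas Windows 10"

Note that each hostdev carries two addresses: the one inside <source> is the real device on the host and must track the physical bus layout, while the outer <address type='pci' .../> (the slot='0x05' asked about in post 15) only determines where the device appears inside the guest.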
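
For the SMART short test mentioned in post 20, the equivalent smartctl commands; /dev/sdX stands in for the actual drive:

    # Start a short self-test (non-destructive, takes a few minutes).
    smartctl -t short /dev/sdX

    # Once it finishes, print the attributes and the self-test log;
    # Power_On_Hours and the test result appear in this output.
    smartctl -a /dev/sdX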
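
For the mkfs.xfs error in post 22, the most likely cause is that the command was given a bare partition name instead of a device path; a sketch, using the partition number from the post:

    # Confirm the partition exists and check its exact name.
    lsblk /dev/nvme0n1

    # mkfs.xfs expects a device node; note the /dev/ prefix.
    # WARNING: this destroys any data on the partition.
    mkfs.xfs /dev/nvme0n1p3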
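
For the trim check in post 23, the verbose form of fstrim reports how much space was actually trimmed on the cache mount; /mnt/cache is the standard Unraid cache path:

    # -v prints the number of bytes trimmed on this pass; if the device
    # or controller does not support discard, fstrim exits with an error
    # instead, which the post says did not happen.
    fstrim -v /mnt/cache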