mathieuNls

Members
  • Posts: 7

mathieuNls's Achievements: Noob (1/14)

Reputation: 0

  1. For whatever reason, I thought the cache pool was a RAID 0 config; it makes sense now. The biggest drive is marked unmountable when used alone in the cache pool. I'll save the cache data to one of the HDDs, format the 480 GB drive, and put the cache data back on it, then use only that drive in the cache pool. Hopefully that will do the trick and I'll be able to see my Docker containers & VMs again ...
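
     A minimal sketch of the save / format / restore step described above, assuming the pool is mounted at /mnt/cache and /mnt/disk1 is the HDD chosen to hold the temporary copy (both paths are illustrative, not taken from the post):

         # Stop the Docker and VM services from the GUI first, then copy the
         # cache contents to an array disk:
         rsync -avh --progress /mnt/cache/ /mnt/disk1/cache_backup/

         # After formatting the 480 GB device and re-adding it as the only
         # cache slot, copy everything back:
         rsync -avh --progress /mnt/disk1/cache_backup/ /mnt/cache/
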
  2. I added a second drive to my cache pool a few days before the power loss. It shows a total size of 360 GB even though the disks add up to 740 GB, and it also says there are 120 GB free :x. Are all the disks in a cache pool supposed to be the same size? Well, that's acceptable. The newly downloaded Docker containers should be able to use the data from the old ones, right? Also, do you think the cache pool problem is also behind the VM problem?

     operation failed: unable to find any master var store for loader: /usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd
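
     The apparent size mismatch is easier to read directly from btrfs: a multi-device unRAID cache pool typically uses a mirrored data profile, so usable space ends up well below the raw total. A quick check, assuming the pool is mounted at /mnt/cache:

         # Per-device allocation and the RAID profile currently in use on the pool:
         btrfs filesystem usage /mnt/cache

         # A more compact view of the data/metadata/system profiles:
         btrfs filesystem df /mnt/cache
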
  3. Done. How can I know which one is corrupted? Definitely.....
  4. Hi, After a power loss, I lost my VMs & Docker containers. My Dashboard tab says "No apps available to show" even with "show all apps" ticked. When I start the array I get the following error:

     btrfs error (device loop0) in cleanup_transaction:1850: errno=-5 IO failure (Error while writing out transaction)

     The data and shares look OK, but when I try to create a new VM I get:

     operation failed: unable to find any master var store for loader: /usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd

     I have already tried replacing the files on my USB stick with clean ones, to no effect. Thanks. EDIT: I've attached the diagnostics as suggested: tower-diagnostics-20170912-0807.zip
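
     A small check that may help narrow down which image the loop0 error refers to; on unRAID the Docker and libvirt images are loopback-mounted, and losetup shows the mapping. The image paths below are the usual defaults, not confirmed from the post:

         # List loop devices and the backing files they map to:
         losetup -a

         # Inspect the images at their default locations (adjust if yours were moved):
         ls -lh /mnt/user/system/docker/docker.img /mnt/user/system/libvirt/libvirt.img
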
  5. Hi, I used to be able to access the \\TOWER SMB shares from my W10 VM (build 16273), but it doesn't work anymore. I've tried setting the shares to 'Secure', but the same error appears. I've tried various solutions such as https://serverfault.com/questions/720332/cannot-connect-to-linux-samba-share-from-windows-10 and http://getadmx.com/?Category=Windows_10_2016&Policy=Microsoft.Policies.LanmanWorkstation%3A%3APol_EnableInsecureGuestLogons to no avail. Any ideas? Thanks, M.
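
     Recent Windows 10 builds refuse guest access to SMB shares by default, which is what the linked LanmanWorkstation policy controls; it can also help to confirm what Samba itself is exporting. A hedged sketch from the unRAID console (smbclient ships with Samba but may not be present on every build; 'youruser' is a placeholder):

         # Dump the effective Samba configuration and check the share's guest/security settings:
         testparm -s

         # List the shares locally as an authenticated user to rule out a server-side problem:
         smbclient -L //localhost -U youruser
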
  6. Hi, I've followed most of the advice in this entry and still fall short of native performance by 16% on an i7-6700K without OC. For the CPU single-thread test I score 393 (average of three runs with nothing else running), while the reference for my processor is 474 according to CPU-Z. The CPU multi-thread test is irrelevant since not all cores are assigned to the VM. Do any of you manage to get a more negligible loss (~5%)? My config:

     <name>Windows 10</name>
     <uuid>d560712f-74e7-e728-d16e-7f42e6209349</uuid>
     <metadata>
       <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
     </metadata>
     <memory unit='KiB'>20971520</memory>
     <currentMemory unit='KiB'>20971520</currentMemory>
     <memoryBacking>
       <nosharepages/>
       <locked/>
     </memoryBacking>
     <vcpu placement='static'>6</vcpu>
     <cputune>
       <vcpupin vcpu='0' cpuset='3'/>
       <vcpupin vcpu='1' cpuset='7'/>
       <vcpupin vcpu='2' cpuset='2'/>
       <vcpupin vcpu='3' cpuset='6'/>
       <vcpupin vcpu='4' cpuset='1'/>
       <vcpupin vcpu='5' cpuset='5'/>
       <emulatorpin cpuset='0,4'/>
     </cputune>
     <resource>
       <partition>/machine</partition>
     </resource>
     <os>
       <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
       <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
       <nvram>/etc/libvirt/qemu/nvram/d560712f-74e7-e728-d16e-7f42e6209349_VARS-pure-efi.fd</nvram>
     </os>
     <features>
       <acpi/>
       <apic/>
       <hyperv>
         <relaxed state='on'/>
         <vapic state='on'/>
         <spinlocks state='on' retries='8191'/>
         <vendor id='none'/>
       </hyperv>
     </features>
     <cpu mode='host-passthrough'>
       <topology sockets='1' cores='3' threads='2'/>
     </cpu>
     <clock offset='localtime'>
       <timer name='hypervclock' present='yes'/>
       <timer name='hpet' present='no'/>
     </clock>
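
     One thing worth double-checking with a pinning layout like the one above is that each vcpupin pair really lands on hyperthread siblings of the same physical core. A quick check from the unRAID console; the sibling pairs shown in the comment are typical for this CPU but should be verified rather than assumed:

         # Show which logical CPUs share a physical core; on an i7-6700K the siblings
         # are typically 0-4, 1-5, 2-6, 3-7, which matches the 3/7, 2/6, 1/5 pairs above.
         lscpu -e=CPU,CORE,SOCKET

         # The same information straight from sysfs:
         cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list
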
  7. Hi, I've been using unRAID (6.3.2) for a few months and, so far, it's all love. Recently I bought a graphics card (RX 480) in order to build myself a casual gaming VM, as featured in so many places (LTT and so on). Something I didn't notice while doing productive work (Docker containers + an IDE on a Debian VM) is that I seem to lose a lot of CPU performance while in a VM, 16% to be precise. Indeed, for the CPU single-thread test I score 393 (average of three runs with nothing else running), while the reference for my processor is 474 according to CPU-Z. The CPU multi-thread test is irrelevant since not all cores are assigned to the VM. I've followed other posts in this forum to improve W10 performance and came up with the following configuration:

     <name>Windows 10</name>
     <uuid>d560712f-74e7-e728-d16e-7f42e6209349</uuid>
     <metadata>
       <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
     </metadata>
     <memory unit='KiB'>20971520</memory>
     <currentMemory unit='KiB'>20971520</currentMemory>
     <memoryBacking>
       <nosharepages/>
       <locked/>
     </memoryBacking>
     <vcpu placement='static'>6</vcpu>
     <cputune>
       <vcpupin vcpu='0' cpuset='3'/>
       <vcpupin vcpu='1' cpuset='7'/>
       <vcpupin vcpu='2' cpuset='2'/>
       <vcpupin vcpu='3' cpuset='6'/>
       <vcpupin vcpu='4' cpuset='1'/>
       <vcpupin vcpu='5' cpuset='5'/>
       <emulatorpin cpuset='0,4'/>
     </cputune>
     <resource>
       <partition>/machine</partition>
     </resource>
     <os>
       <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
       <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
       <nvram>/etc/libvirt/qemu/nvram/d560712f-74e7-e728-d16e-7f42e6209349_VARS-pure-efi.fd</nvram>
     </os>
     <features>
       <acpi/>
       <apic/>
       <hyperv>
         <relaxed state='on'/>
         <vapic state='on'/>
         <spinlocks state='on' retries='8191'/>
         <vendor id='none'/>
       </hyperv>
     </features>
     <cpu mode='host-passthrough'>
       <topology sockets='1' cores='3' threads='2'/>
     </cpu>
     <clock offset='localtime'>
       <timer name='hypervclock' present='yes'/>
       <timer name='hpet' present='no'/>
     </clock>

     Is there something you would change in order to get a more negligible loss (~5%)? Thanks, M.
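
     To confirm at runtime that the pinning in this XML is actually being applied, the libvirt CLI can report the effective placement. A small sketch, assuming the domain is named "Windows 10" as in the XML above and is currently running:

         # Show the pinning currently in effect for each vCPU of the running domain:
         virsh vcpupin "Windows 10"

         # Pinning of the emulator threads (should match cpuset='0,4' above):
         virsh emulatorpin "Windows 10"

         # Per-vCPU state and the physical CPU each one is running on right now:
         virsh vcpuinfo "Windows 10"
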