
xxsxx47

Members
  • Content Count

    19
  • Joined

  • Last visited

Community Reputation

2 Neutral

About xxsxx47

  • Rank
    Member


  1. I solved it. For anyone having this trouble: all I did was edit the proxy's file under "\appdata\NginxProxyManager\nginx\proxy_host" and add the following lines after the line "location / {":
     add_header Content-Security-Policy "upgrade-insecure-requests";
     add_header X-Frame-Options "SAMEORIGIN" always;
     add_header X-XSS-Protection "1; mode=block" always;
     add_header X-Content-Type-Options "nosniff" always;
     add_header X-UA-Compatible "IE=Edge" always;
     add_header Cache-Control "no-transform" always;
     add_header Referrer-Policy "same-origin" always;
     add_header Feature-Policy "autoplay 'none'; camera 'none'" always;
     then restart the Docker container for Nginx Proxy Manager.
     --------------------------------------------------
     Docker: CalibreWeb from linuxserver. Could anyone help me, please? Is there any danger if this is left alone? Everything is working for me, but I tried https://securityheaders.com to see if everything is right in the security department and got red marks. Is there any way to fix them? > I tried adding "add_header X-Frame-Options "SAMEORIGIN";" but it didn't do anything.
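Put together, the edited proxy_host file would look roughly like this (a sketch only: the server_name, listen, and proxy_pass values are placeholders, not taken from the actual generated file):

```nginx
# Sketch of an Nginx Proxy Manager proxy_host file after the edit.
# server_name, listen, and proxy_pass are placeholders for illustration.
server {
  listen 443 ssl;
  server_name calibre.example.com;

  location / {
    # security headers added after "location / {" as described above
    add_header Content-Security-Policy "upgrade-insecure-requests";
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-UA-Compatible "IE=Edge" always;
    add_header Cache-Control "no-transform" always;
    add_header Referrer-Policy "same-origin" always;
    add_header Feature-Policy "autoplay 'none'; camera 'none'" always;

    proxy_pass http://192.168.1.10:8083;  # placeholder CalibreWeb upstream
  }
}
```

Note that `add_header` directives inside a `location` block replace, rather than extend, any headers inherited from the `server` block, which is one reason a single added header may appear to "not do anything".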
  2. I already did the MSI fix, but it didn't solve it for me; the only fix that worked was pinning the CPU. I will try different CPU-pinning combinations that suit both gaming and watching in Plex. Thanks, testdasi, for the help.
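For reference, the "MSI fix" mentioned here is normally applied with the MSI_util tool or by setting one registry value on the passed-through GPU's audio device. This .reg sketch shows the general shape; the device instance path is a placeholder you would read from Device Manager, not a real value:

```reg
Windows Registry Editor Version 5.00

; Sketch: <device-instance-path> is a placeholder (e.g. starts with VEN_10DE&DEV_...)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\<device-instance-path>\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties]
"MSISupported"=dword:00000001
```

Setting `MSISupported` to 1 switches the device from line-based interrupts to message-signaled interrupts, which is what commonly cures audio crackling in passthrough VMs.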
  3. So, reporting back: after just letting the Windows VM use all the cores, without pinning or isolating them, the game finally started to play nicely, like I'm on bare metal. But now the old problem returned: static, crackling sound. I can play, but watching in Plex is very bad; the only fix is pinning and isolating the CPUs. So I don't know whether I should upgrade, or wait until I reach the point where I can't play anymore.
  4. If you need any more info, please tell me. I played FFXV yesterday on high settings and it was OK, with still a little lag in the cutscenes and battles. So would it be better to upgrade to the new Ryzen and give 5 to 6 cores to the VM to fix the lag in games?
  5. Hi everyone, it's been a while, and I just want to say Unraid is the best ever. My current setup:
     CPU: Intel Core i5-4690K
     Motherboard: MSI Z97 Gaming 5
     GPU: EVGA GeForce RTX 2070 8 GB Black
     RAM: 16 GB
     NVMe: Intel 660p Series 1.02 TB
     I am thinking of upgrading to the following, or tweaking the old parts to run better:
     CPU: AMD Ryzen 7 3700X 3.6 GHz 8-Core Processor
     Motherboard: Gigabyte X570 AORUS ELITE ATX AM4
     RAM: TEAMGROUP T-Force Dark Pro DDR4 16 GB kit (2 x 8 GB) 3200 MHz (PC4 25600) CL14
     plus two NVMe drives: the Samsung 970 for the OS and the Intel 660p for the games.
     So I am running Unraid with a Windows 10 gaming VM, but I just feel the gaming could be better. I gave the VM 3 cores and 10 GB of RAM, plus the NVMe as an Unassigned Device holding the Windows 10 OS and the games. When I play Assassin's Creed Odyssey on high settings, the FPS is around 35, with some lag here and there and slow loading screens, but it's playable. The 3 cores are always at 100% while the GPU sits at 43 to 52%. Any advice or tips to help stabilize things and play games better? Here are the XML and pictures of the VM template, CPU pinning, and main dashboard. Thank you very much for your help.
<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='1'>
  <name>Windows 10 - Next</name>
  <uuid>8574667b-b35b-7fa5-2f06-7c1b4fe43c2f</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>10485760</memory>
  <currentMemory unit='KiB'>10485760</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>3</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='2'/>
    <vcpupin vcpu='2' cpuset='3'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-4.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/8574667b-b35b-7fa5-2f06-7c1b4fe43c2f_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='3' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/disks/INTEL_SSDPEKNW010T8_BTNH938428UY1P0B/Windows 10 - Next/vdisk1.img' index='3'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/Windows/Windows 10 - 1903.iso' index='2'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/Drivers/virtio-win-0.1.160-1.iso' index='1'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:ba:ac:23'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-1-Windows 10 - Next/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <rom file='/mnt/user/isos/Bios/2070.dump'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x2'/>
      </source>
      <alias name='hostdev3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x3'/>
      </source>
      <alias name='hostdev4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x045e'/>
        <product id='0x028f'/>
        <address bus='3' device='5'/>
      </source>
      <alias name='hostdev5'/>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x045e'/>
        <product id='0x0719'/>
        <address bus='3' device='6'/>
      </source>
      <alias name='hostdev6'/>
      <address type='usb' bus='0' port='3'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x04d9'/>
        <product id='0x1702'/>
        <address bus='3' device='3'/>
      </source>
      <alias name='hostdev7'/>
      <address type='usb' bus='0' port='4'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1038'/>
        <product id='0x1702'/>
        <address bus='3' device='2'/>
      </source>
      <alias name='hostdev8'/>
      <address type='usb' bus='0' port='5'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1058'/>
        <product id='0x0820'/>
        <address bus='4' device='3'/>
      </source>
      <alias name='hostdev9'/>
      <address type='usb' bus='0' port='6'/>
    </hostdev>
    <hub type='usb'>
      <alias name='hub0'/>
      <address type='usb' bus='0' port='1'/>
    </hub>
    <memballoon model='none'/>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>
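One common tweak for a 4-core host like this (a sketch, not the poster's actual config: the cpuset choices are assumptions) is to keep host CPU 0 for Unraid, pin the three vCPUs to CPUs 1 through 3 as the XML already does, and additionally pin the QEMU emulator thread so it does not preempt the vCPUs:

```xml
<!-- Sketch: same vcpupin layout as the posted XML, plus an emulatorpin
     that keeps QEMU's I/O/emulator thread on host CPU 0 (an assumption
     about which core is least loaded, not taken from the original). -->
<cputune>
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='2'/>
  <vcpupin vcpu='2' cpuset='3'/>
  <emulatorpin cpuset='0'/>
</cputune>
```

Without `<emulatorpin>`, the emulator thread can land on the same cores as the pinned vCPUs, which is one plausible cause of the 100% core usage and stutter described above.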
  6. I can confirm that if I disable WireGuard I can access the AdGuard web page, or I can just leave WireGuard on and turn off "Local gateway uses NAT". Thank you very much, everything is working.
  7. But it works if the VM is down. Maybe the error appeared because I changed the network type to bridge; that error showed up in the diagnostics file.
  8. Hi there. I can't reach the AdGuardHome Docker web page while the VM is on; even though the Docker page says it is working, the network doesn't work. If I turn the VM off, everything works. Is it because of the network type br0? tower-diagnostics-20191018-1409.zip
  9. 1- I have only one data drive, 8 TB, that I want to reuse in the new setup. 2- I now have two parity drives, 2x 10 TB; I didn't preclear them. The array will look like this: two 10 TB parity drives, and three data drives: 1x 10 TB (new drive) and 2x 8 TB (one old data drive, and the old parity drive). So I started the upgrade operation yesterday following this guide: I did the New Config, kept only the cache and data drive assignments, moved the old parity drive to the data drives as the second one (didn't format it yet), added the third drive as the 10 TB along with both new parity drives, and started the parity check. It will take 1 day and 7 hours (only 9 hours remaining now; the speed goes up and down between 57.9 and 129 MB/s), and I'm at 78% done. But I'm worried about "Sync errors corrected: 1099825108"; is it normal to see that after an upgrade like this? Here are the diagnostics file and screenshots; hope it helps. tower-diagnostics-20190909-1716.zip
  10. Hello everyone. I am getting 3x 10 TB drives and planning to make two of them parity drives and one an array drive, and to move the old parity drive into the array. Are the steps I need to follow correct for my case?
  11. Update 5, final: OK, everything is fixed. After Googling, YouTube, and the forums, I fixed it. What solved it was adding this line: ( pci-stub.ids=10de:1aec,10de:1aed ). Now I'm in love with Unraid.
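For context, on Unraid that kernel parameter goes on the `append` line of /boot/syslinux/syslinux.cfg. This is a sketch of a typical default boot entry with the line added; the label name and surrounding lines are assumptions about a stock config, not this server's actual file:

```
label Unraid OS
  menu default
  kernel /bzimage
  append pci-stub.ids=10de:1aec,10de:1aed initrd=/bzroot
```

`pci-stub.ids` claims the listed vendor:device IDs at boot so no host driver grabs them, leaving them free to be handed to the VM.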
  12. Update 4: I just watched Linus's video and connected from my laptop. I enabled only the iGPU in the BIOS, connected the cable to the integrated GPU, and set everything up. Now I get this type of error:
      internal error: qemu unexpectedly closed the monitor: 2019-08-21T17:19:13.915791Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,id=hostdev0,bus=pci.0,addr=0x6: vfio 0000:01:00.0: group 13 is not viable Please ensure all devices within the iommu_group are bound to their vfio bus driver.
      Here are my IOMMU groups:
      IOMMU group 0: [8086:0c00] 00:00.0 Host bridge: Intel Corporation 4th Gen Core Processor DRAM Controller (rev 06)
      IOMMU group 1: [8086:0c01] 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06)
      IOMMU group 2: [8086:0412] 00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller (rev 06)
      IOMMU group 3: [8086:0c0c] 00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
      IOMMU group 4: [8086:8cb1] 00:14.0 USB controller: Intel Corporation 9 Series Chipset Family USB xHCI Controller
      IOMMU group 5: [8086:8cba] 00:16.0 Communication controller: Intel Corporation 9 Series Chipset Family ME Interface #1
      IOMMU group 6: [8086:8cad] 00:1a.0 USB controller: Intel Corporation 9 Series Chipset Family USB EHCI Controller #2
      IOMMU group 7: [8086:8ca0] 00:1b.0 Audio device: Intel Corporation 9 Series Chipset Family HD Audio Controller
      IOMMU group 8: [8086:8c90] 00:1c.0 PCI bridge: Intel Corporation 9 Series Chipset Family PCI Express Root Port 1 (rev d0)
      IOMMU group 9: [8086:8c96] 00:1c.3 PCI bridge: Intel Corporation 9 Series Chipset Family PCI Express Root Port 4 (rev d0)
      IOMMU group 10: [8086:8c9e] 00:1c.7 PCI bridge: Intel Corporation 9 Series Chipset Family PCI Express Root Port 8 (rev d0)
      IOMMU group 11: [8086:8ca6] 00:1d.0 USB controller: Intel Corporation 9 Series Chipset Family USB EHCI Controller #1
      IOMMU group 12:
        [8086:8cc4] 00:1f.0 ISA bridge: Intel Corporation Z97 Chipset LPC Controller
        [8086:8c82] 00:1f.2 SATA controller: Intel Corporation 9 Series Chipset Family SATA Controller [AHCI Mode]
        [8086:8ca2] 00:1f.3 SMBus: Intel Corporation 9 Series Chipset Family SMBus Controller
      IOMMU group 13:
        [10de:1f02] 01:00.0 VGA compatible controller: NVIDIA Corporation TU106 [GeForce RTX 2070] (rev a1)
        [10de:10f9] 01:00.1 Audio device: NVIDIA Corporation TU106 High Definition Audio Controller (rev a1)
        [10de:1ada] 01:00.2 USB controller: NVIDIA Corporation TU106 USB 3.1 Host Controller (rev a1)
        [10de:1adb] 01:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU106 USB Type-C Port Policy Controller (rev a1)
      IOMMU group 14: [144d:a808] 02:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981
      IOMMU group 15: [1969:e091] 03:00.0 Ethernet controller: Qualcomm Atheros Killer E220x Gigabit Ethernet Controller (rev 13)
      IOMMU group 16: [1102:0012] 04:00.0 Audio device: Creative Labs Sound Core3D [Sound Blaster Recon3D / Z-Series] (rev 01)
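The "group 13 is not viable" error means that all four functions of the RTX 2070 (01:00.0 through 01:00.3) sit in one IOMMU group and must be claimed for VFIO together, not just the VGA function. A sketch of the kernel-command-line approach, using the four device IDs taken from the group listing above (the surrounding `append` line layout is an assumption about a typical Unraid syslinux.cfg):

```
append pci-stub.ids=10de:1f02,10de:10f9,10de:1ada,10de:1adb initrd=/bzroot
```

With all four IDs stubbed at boot, every device in group 13 can be bound to vfio-pci and the group becomes viable for passthrough.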
  13. OK, I tried the video and did it step by step; still the same problem. The VMs page doesn't appear anymore and I need to restart the server. I have only the EVGA 2070 Black GPU; is there a problem with this model and Unraid or VMs? VNC works, and the Intel integrated GPU just shows a green light on start; I don't know how to connect to it. Hope anyone can help me with this problem.