plxmediasvr

Members
  • Posts

    8
  • Joined

  • Last visited


  1. Unable to use Custom SSL. If you click on SSL Certificates and then choose Custom rather than Let's Encrypt, it does absolutely nothing. The app is broken on two separate Unraid servers. To verify it's not on my end or the app backend itself, I have done the following: restarted; uninstalled and reinstalled; uninstalled again; removed the app XML from the flash drive; deleted the /appdata installation; reinstalled a third time; uninstalled and installed the other NGINX Proxy Manager (by jc21), same thing; pulled out a NUC, created a brand-new flash drive, and paid another $129 for a license; set Unraid to use 8008 and 8443, set it to Auto, and pulled down a Let's Encrypt SSL cert for the new USB now that 80/443 were not in use; went into the router and changed the IP to the new USB; rinse and repeat of all the steps above. Custom was working, and then I did a Docker update and it broke. But these were different computers, different flash drives, different apps (Nginx Proxy Manager + Nginx Proxy Manager Official). I have HSTS on, set to 1 year for all my domains, meaning I am unable to change back to port 80, grey the lock out on Cloudflare, and use Let's Encrypt. I had been using Cloudflare with custom domain certs provided by Cloudflare, using SSL: Full (Strict), until last night.
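     One console check worth making before blaming the container: Nginx Proxy Manager also appears to do nothing when the uploaded certificate and private key don't actually match. A sketch of verifying the pair (filenames are placeholders; the demo generates a throwaway self-signed pair rather than touching real certs):

     ```shell
     # Create a throwaway self-signed pair just for this demo (substitute
     # your real Cloudflare cert/key when checking for the actual issue):
     openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
       -subj "/CN=example.test" -keyout privkey.pem -out fullchain.pem 2>/dev/null

     # The public key derived from the certificate must be identical to the
     # one derived from the private key; otherwise the upload is useless:
     cert_pub=$(openssl x509 -noout -pubkey -in fullchain.pem)
     key_pub=$(openssl pkey -in privkey.pem -pubout 2>/dev/null)
     [ "$cert_pub" = "$key_pub" ] && echo "cert and key match" || echo "MISMATCH"
     ```

     If the real pair mismatches, re-download the origin certificate from Cloudflare and try the Custom upload again.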
  3. I need help majorly bad. I'm to the point where I am done with Unraid and will go back to plain old Windows. Unraid 6.8.3. Specs: Threadripper 3960X 24/48, 256GB DDR4; Parity: 2x 16TB Seagate Exos; Array: 11x 16TB Seagate Exos; Cache: 4x 2TB FireCuda 520 RAID0 (8TB); GPU0: RTX 4000; GPU1: Radeon Pro W5700.
     I am trying to move files from one hard drive to the other, and it works for 10 to 25GB and then Krusader says "stalled". I have even tried creating a VM with virtio everything enabled; the Red Hat LAN shows 100Gb/s, and it'll start at 250MB/s and then drop back to 30MB/s. Occasionally it will go to 200 again, but it stays at 30MB/s about 80% of the time. To transfer 100GB it starts out estimating 7 minutes; sometime after 10 to 25GB it slows down, the estimate changes to 1.5 hours, and then Krusader shows "stalled".
     I have thus far also tried:
     - All files/folders/shares/drives in question set to root 777, just to rule that out.
     - Changed the destination share to use cache first and let the mover move it. Didn't change anything; the transfer still failed out after 25GB.
     - Tips and Tweaks plugin set for Performance, Ryzen enhancements enabled.
     - Changed the RAM dirty ratios from 10%/20% to 1%/2%, since we are dealing with 256GB; Krusader stalled at 5GB transferred. So then I changed it to 50%/80%; nothing changed except Krusader went back to stalling at 25GB. The RAM settings were put back to the default 10%/20%.
     - Global Share Direct IO enabled.
     - Tunable scheduler set to SFQ/BFQ, as they are Exos drives, to rule out mq-deadline.
     - Tunable MD queue limit set to 85% so IO has the ability to nearly max the drives out.
     - Stripes set to 4096, though my understanding is that's only for parity, not drive-to-drive transfers.
     - Tunable set to reconstruct write (turbo mode).
     - Pointed Krusader at /mnt/cache/appdata, along with all apps, instead of /mnt/user/appdata.
     - Changed Docker to /mnt/cache/appdata instead of /mnt/user/appdata.
     - Stopped and disabled all other Docker containers except Krusader.
     - Since I have cores to play with, sitting at 48 threads, I can go against the standard and pin Docker cores without affecting CPU usage, guaranteeing CPU isn't the issue: Krusader pinned at 20 cores / 48 threads, and yes, they are properly paired (4/28, 5/29, ..., 23/47, etc.).
     - Ran the generic tunables test, even though that still seems to point at parity; my numbers did not change from what the test stated, and if the numbers don't fluctuate there's no need to run the full one-or-two-day test.
     When I first started Unraid, it took nearly 5 weeks to transfer 80TB, when in Windows it would have taken only a week or so at 12 to 16TB per 24 hours at 250MB/s. Now that the data is on the drives, Unraid itself works: Sonarr/Radarr/SABnzbd/Plex/HandBrake all run with no speed issues. It's only when I want to physically move the data around. What am I missing here? How does this expensive a rig have these kinds of problems? In Windows, on these exact same drives, I can push 250-260MB/s continuous transfer speeds. Even my PCIe 4.0 drives don't transfer at 4,000MB/s; the average I get is 600MB/s total (150MB/s x 4) doing tasks. That goes to 8,000MB/s (2,000 x 4) running a BTRFS scrub, but 600MB/s daily use and 8,000MB/s during a scrub is a far cry from the 16,000MB/s I should be getting from four RAID-ed Gen4 NVMe drives.
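     One step the attempts above don't cover: bypass Krusader and the user-share layer entirely and move the data disk-to-disk from the console. Copying through /mnt/user funnels every block through the shfs FUSE process, a frequent cause of exactly this kind of stall; direct /mnt/diskX paths do not. A sketch (the demo uses temp directories standing in for the disk mounts; never mix /mnt/user and /mnt/diskX paths for the same files in one command):

     ```shell
     # Stand-ins for /mnt/disk1/Media and /mnt/disk2/Media:
     src=$(mktemp -d)
     dst=$(mktemp -d)
     echo "sample" > "$src/file.mkv"

     # -a preserves attributes; -v and --progress show per-file progress:
     rsync -av --progress "$src/" "$dst/"

     # Verify before deleting anything from the source:
     cmp "$src/file.mkv" "$dst/file.mkv" && echo "copy verified"
     ```

     On the real server this would be `rsync -av --progress /mnt/disk1/Media/ /mnt/disk2/Media/`, run over SSH or the Unraid console, removing the source only after the copy verifies.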
  4. I am still relatively new to all of this. I installed W10 and everything installed correctly with: 1st GPU: VNC QXL; 2nd GPU: nothing. I installed the virtio drivers and restarted the computer, then plugged a DisplayPort cable into GPU 2: 1st GPU: VNC QXL; 2nd GPU: Radeon Pro W5500 + ROM BIOS. AMD found the card, installed the drivers, and everything worked fine. I restarted the computer and changed it to: 1st GPU: Radeon Pro W5500 + ROM BIOS; 2nd GPU: nothing. I hit save, and the changes never save, but only for the GPU settings; everything else saves just fine. It reverts to GPU1: VNC Cirrus; GPU2: Radeon Pro W5500 + ROM BIOS. The same thing happened when using the Radeon Pro W5700.
     M/B: Gigabyte Technology Co., Ltd. TRX40 DESIGNARE Version x.x - s/n: Default string
     BIOS: American Megatrends Inc. Version F4c. Dated: 03/05/2020
     CPU: AMD Ryzen Threadripper 3960X 24-Core @ 3800 MHz
     HVM: Enabled IOMMU: Enabled
     Cache: 1536 KiB, 12288 KiB, 131072 KiB
     Memory: 256 GiB DDR4 (max. installable capacity 512 GiB)
     Network: eth0: 1000 Mbps, full duplex, mtu 1500; eth1: 1000 Mbps, full duplex, mtu 1500
     Kernel: Linux 5.7.8-Unraid x86_64
     OpenSSL: 1.1.1g
     <?xml version='1.0' encoding='UTF-8'?> <domain type='kvm'> <name>Windows 10</name> <uuid>a6681270-6975-90ce-b3cd-ccdac34fe976</uuid> <description>Windows 10</description> <metadata> <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/> </metadata> <memory unit='KiB'>8388608</memory> <currentMemory unit='KiB'>8388608</currentMemory> <memoryBacking> <nosharepages/> </memoryBacking> <vcpu placement='static'>16</vcpu> <cputune> <vcpupin vcpu='0' cpuset='2'/> <vcpupin vcpu='1' cpuset='26'/> <vcpupin vcpu='2' cpuset='3'/> <vcpupin vcpu='3' cpuset='27'/> <vcpupin vcpu='4' cpuset='4'/> <vcpupin vcpu='5' cpuset='28'/> <vcpupin vcpu='6' cpuset='5'/> <vcpupin vcpu='7' cpuset='29'/> <vcpupin vcpu='8' cpuset='6'/> <vcpupin vcpu='9' cpuset='30'/> <vcpupin vcpu='10' cpuset='7'/> <vcpupin vcpu='11' cpuset='31'/> <vcpupin vcpu='12'
cpuset='8'/> <vcpupin vcpu='13' cpuset='32'/> <vcpupin vcpu='14' cpuset='9'/> <vcpupin vcpu='15' cpuset='33'/> </cputune> <os> <type arch='x86_64' machine='pc-i440fx-5.0'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader> <nvram>/etc/libvirt/qemu/nvram/a6681270-6975-90ce-b3cd-ccdac34fe976_VARS-pure-efi.fd</nvram> </os> <features> <acpi/> <apic/> <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vendor_id state='on' value='none'/> </hyperv> </features> <cpu mode='host-passthrough' check='none'> <topology sockets='1' dies='1' cores='8' threads='2'/> <cache mode='passthrough'/> <feature policy='require' name='topoext'/> </cpu> <clock offset='localtime'> <timer name='hypervclock' present='yes'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/local/sbin/qemu</emulator> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/vdisks/.iso-images/W10X64.2004.ENU.JUN2020.ISO'/> <target dev='hda' bus='sata'/> <readonly/> <boot order='2'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/vdisks/virtio-win-0.1.173-2.iso'/> <target dev='hdb' bus='sata'/> <readonly/> <address type='drive' controller='0' bus='0' target='0' unit='1'/> </disk> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> <source file='/mnt/system_files/vms/personal/windows/plx-svr/os.img'/> <target dev='hdc' bus='sata'/> <boot order='1'/> <address type='drive' controller='0' bus='0' target='0' unit='2'/> </disk> <controller type='pci' index='0' model='pci-root'/> <controller type='sata' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </controller> <controller type='virtio-serial' 
index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </controller> <controller type='usb' index='0' model='ich9-ehci1'> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <master startport='0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/> </controller> <controller type='usb' index='0' model='ich9-uhci2'> <master startport='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/> </controller> <controller type='usb' index='0' model='ich9-uhci3'> <master startport='4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/> </controller> <interface type='bridge'> <mac address='52:54:00:d7:57:25'/> <source bridge='br0'/> <model type='virtio-net'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> <serial type='pty'> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input type='tablet' bus='usb'> <address type='usb' bus='0' port='1'/> </input> <input type='mouse' bus='ps2'/> <input type='keyboard' bus='ps2'/> <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'> <listen type='address' address='0.0.0.0'/> </graphics> <video> <model type='cirrus' vram='16384' heads='1' primary='yes'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x23' slot='0x00' function='0x0'/> </source> <rom file='/mnt/system_files/vdisks/.vbios/amd/AMD RADEON PRO W5500.rom'/> <address type='pci' 
domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x23' slot='0x00' function='0x1'/> </source> <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> </hostdev> <memballoon model='none'/> </devices> <qemu:commandline> <qemu:arg value='-cpu'/> <qemu:arg value='host,topoext=on,invtsc=on,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vpindex,hv-synic,hv-stimer,hv-reset,hv-frequencies,host-cache-info=on,l3-cache=off,-amd-stibp'/> </qemu:commandline> </domain>
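     A workaround often suggested for this symptom (hypothetical for this exact board, so treat it as an experiment): skip the form view for the GPU fields and edit the VM's XML view directly, since the form regenerates the emulated video device on save, which is consistent with the settings reverting. Disabling the emulated display so the passed-through card becomes the primary adapter looks like this in libvirt XML:

     ```xml
     <!-- Edit via the VM's XML view (or `virsh edit "Windows 10"`).         -->
     <!-- Replace the emulated Cirrus <video> device with model type 'none'  -->
     <!-- so the vfio hostdev (the W5500) becomes the primary display.       -->
     <video>
       <model type='none'/>
     </video>
     ```

     Note that with no emulated display, VNC graphics stop working, so keep the physical monitor on the card while testing.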
  5. On the parity: ahh, OK, I kind of assumed and overreached there. After watching hundreds of videos on this, the suggestion was to not enable cache or parity until after you are done transferring data. Had I written this later tonight, after I moved the parity drives from Unassigned Devices and finally into parity slots 1 and 2, I would have found out that they cannot be encrypted.
  6. ...a clarification. I thought AMD's Remote Workstation was where Hyper-V would allow you to see the GPU; I was clearly wrong. What I found out later was that it actually meant the host OS was able to see the GPU through RDP. And that did work.
  7. Yes, as the username does say, I use it for Plex, and unfortunately, yes, I did have to go out and buy yet another GPU for the Docker containers and HW transcoding. I settled on the RTX 4000 due to the transcode streams, with NVIDIA capping them to just 2 streams on the non-Quadros.
     The GPU issues are actually what led me to Unraid. I was running 10 Pro for Workstations with Hyper-V enabled. I wanted to use GPU passthrough, and the only thing I could find was that you had to have vGPU or AMD's Remote Workstation. I dropped $1,500 into the W5700 and the W5500 and then, after it was too late, realized that AMD Remote Workstation doesn't work in Windows 10, and unfortunately, at the time, they didn't have Server 2019 support for the card. (At the time of writing, Server 2019 is supported now.) So, with no Server 2019 support, I felt as if Team Red messed me over. A month later I looked, and all of a sudden Server was supported, only after they announced the Radeon Pro VII. Upon further inspection, it turned out I still couldn't do it, because you had to have a Remote Workstation CAL for each Hyper-V instance or some such as an additional service. Then I looked into vGPU: only supported on the RTX 6000 and 8000, but also the same thing. So with all this hardware and no way to use it, I figured why not go Unraid. Linus and a few other YouTube influencers convinced me, as it seems like this is the best option, even over TrueNAS Core or whatever is coming out soon. Still don't know about Docker vs. jail apps, but all the same.
     And no, unfortunately, not rich; wish I was. I had a client paying me to run RDP sessions for his company, and the money that came in was just enough to drop on the server. A week after it was built, he cancelled. Oh boy, I was so mad. So now I have a glorified media server. But hey, in the end, I finally get to use a server for more than just one OS, with graphics! I haven't even set anything up yet; transferring 70TB over the network at 112MB/s took forever. I'm finally done. I sure am happy that last year I took my 200TB and pared it down to 70TB; it would have been another week or two before that finished transferring.
  8. Case: Fractal Design Define 7 XL E-ATX, dark tinted tempered glass
     Motherboard: GIGABYTE TRX40 DESIGNARE sTRX4
     Power Supply: CORSAIR AXi Series AX1200i
     Processor: AMD Ryzen Threadripper 3960X sTRX4 24/48
     Memory: CORSAIR Vengeance LPX 256GB (8 x 32GB) DDR4 3200
     AIO Cooler: CORSAIR Hydro Series H115i RGB PLATINUM
     Case Fans: 4x CORSAIR QL Series iCUE QL120 RGB
     Case Fans: 3x CORSAIR QL Series iCUE QL140 RGB
     AIC SATA Controller: 6-port COOLMOON SATA3 6Gb/s
     AIC Gen4 NVMe: Gigabyte GC-4XM2G4 (4x4x4x4 Gen4)
     PCIe Adapter: FLEX VRC-25 Fractal riser cable (RTX 4000)
     GPU0: AMD Radeon Pro W5700
     GPU1: AMD Radeon Pro W5500
     GPU2: PNY NVIDIA Quadro RTX 4000
     Parity Disks: 32TB
     HDD00: Seagate Exos X16 16TB Enterprise SATA 6Gb/s
     HDD01: Seagate Exos X16 16TB Enterprise SATA 6Gb/s
     Array Disks (XFS, encrypted): 192TB
     HDD02: Seagate Exos X16 16TB Enterprise SATA 6Gb/s
     HDD03: Seagate Exos X16 16TB Enterprise SATA 6Gb/s
     HDD04: Seagate Exos X16 16TB Enterprise SATA 6Gb/s
     HDD05: Seagate Exos X16 16TB Enterprise SATA 6Gb/s
     HDD06: Seagate Exos X16 16TB Enterprise SATA 6Gb/s
     HDD07: Seagate Exos X16 16TB Enterprise SATA 6Gb/s
     HDD08: Seagate Exos X16 16TB Enterprise SATA 6Gb/s
     HDD09: Seagate Exos X16 16TB Enterprise SATA 6Gb/s
     HDD10: Seagate Exos X16 16TB Enterprise SATA 6Gb/s
     HDD11: Seagate Exos X16 16TB Enterprise SATA 6Gb/s
     HDD12: Seagate Exos X16 16TB Enterprise SATA 6Gb/s
     HDD13: Seagate Exos X16 16TB Enterprise SATA 6Gb/s
     HDD14: Seagate Exos X16 16TB Enterprise SATA 6Gb/s
     Cache Pool 1 (BTRFS, encrypted): 8TB (4TB RAID1)
     NVMe00: Seagate FireCuda 520 M.2 2280 2TB PCIe Gen4 x4
     NVMe01: Seagate FireCuda 520 M.2 2280 2TB PCIe Gen4 x4
     NVMe02: Seagate FireCuda 520 M.2 2280 2TB PCIe Gen4 x4
     NVMe03: Seagate FireCuda 520 M.2 2280 2TB PCIe Gen4 x4
     Cache Pool 2 (BTRFS, encrypted): 4TB (2TB RAID1)
     NVMe04: Corsair Force MP600 M.2 2280 2TB PCIe Gen4 x4
     NVMe05: Corsair Force MP600 M.2 2280 2TB PCIe Gen4 x4