
All Activity


  1. Past hour
  2. Thanks. Turning off privileged mode seems to have done it.
  3. The youtube-dl-server by kmb32123 is great; I installed it via Community Applications. But the odd modified/created dates it sets on downloaded videos are driving me nuts. There is an option, "--no-mtime", that forces the script to use the actual download date and time as the modified/created date. I have been trying to apply this option in various ways. The last method I tried was creating a youtube-dl.conf and mounting it into the container, but it did not work. Any idea how I can actually get this option set and working?
     root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='youtube-dl-server' --net='bridge' -e TZ="Asia/Calcutta" -e HOST_OS="Unraid" -p '9090:8080/tcp' -v '/mnt/user/4TB/Downloads/Yt_Dls/':'/youtube-dl':'rw' -v '/mnt/user/appdata/youtubdl-config/youtube-dl.conf':'/youtube-dl.conf':'ro' 'nbr23/youtube-dl-server'
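     One possible direction, as a minimal sketch assuming the bundled youtube-dl honors its standard config locations (not verified against the nbr23/youtube-dl-server image):

       # contents of /mnt/user/appdata/youtubdl-config/youtube-dl.conf
       # youtube-dl options go one per line; --no-mtime keeps the download time as mtime
       --no-mtime

     youtube-dl itself looks for a system-wide config at /etc/youtube-dl.conf (and a per-user one at ~/.config/youtube-dl/config), so a file mounted at /youtube-dl.conf in the container root may never be read. Remapping the volume to '/mnt/user/appdata/youtubdl-config/youtube-dl.conf':'/etc/youtube-dl.conf':'ro' is worth a try.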
  4. Good day everyone. I am mostly happy with my Unraid setup, but I am having issues while gaming. First, I know my hardware is a bit old, but these games ran great before I moved to Unraid. My issue is poor performance even on the lowest settings in games that previously ran on high settings or better. Take Guild Wars 2: on the lowest settings I can almost get a smooth-ish experience (25-30 fps, dropping to 15-20 at times). I don't appear to have a CPU/RAM/GPU bottleneck according to the Windows VM; the GPU is barely touched. I have attached my VM's XML file and the diagnostics.
     M/B: MSI X99A SLI PLUS (MS-7885)
     BIOS: American Megatrends Inc., Version 1.E0, dated 06/15/2018
     CPU: Intel® Core™ i7-5820K @ 3.30GHz
     HVM: Enabled
     IOMMU: Enabled
     Cache: 384 KiB, 1536 KiB, 15360 KiB
     Memory: 32 GiB DDR4 (max. installable capacity 512 GiB)
     GPU slot 1: GeForce GT 710 (for Unraid boot)
     GPU slot 2: GeForce GTX 980 Ti (passed through to the VM, along with its built-in sound card)
     Any help is appreciated, as I am starting to seriously consider switching back to Windows on bare metal. beast-diagnostics-20200706-0113.zip Win10 VM.xml
  5. I think the problem is that the sound is using the Nvidia Audio Controller, which isn't a PCI device, so the MSI fix can't be applied to it. I don't have enough knowledge of how the audio devices are bound, or where to look.
  6. Hello, I've just switched from Let's Encrypt to using my own certs, signed by my own CA. Everything is working with the public domains, but the local IP still appears to be registered with Unraid and is pulling the Let's Encrypt cert from unraid.net. How can one deprovision their private IP and clean up the link with Unraid's Let's Encrypt? Thanks
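     A sketch of a fallback if the webUI route doesn't surface it, assuming 6.8-era Unraid keeps the provisioned unraid.net bundle on the flash drive (directory, filename, and rc script are assumptions, so inspect before deleting):

       # inspect what is provisioned, then remove the unraid.net bundle
       ls /boot/config/ssl/certs/
       rm /boot/config/ssl/certs/certificate_bundle.pem   # filename assumed; check the ls output first
       /etc/rc.d/rc.nginx restart                         # restart the webUI's nginx to pick up the change

     Settings > Management Access also has a Certificate section that should offer a way to delete the provisioned cert from the GUI.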
  7. Today
  8. Hello, how nice it is to find a French section! My background is fairly simple. I had a basic Plex and Homebridge server on Windows 10. That lasted 4 years; a few weeks ago I decided it was high time to move up a level! So I installed OpenMediaVault, but I'm struggling to warm to it. Since then I've been reading up on Unraid, and I admit I'm very enthusiastic! I'm waiting a few more days before migrating everything; I want to soak up as much knowledge as possible and avoid doing anything silly...
  9. Hello @Will9560, thank you very much for your reply; I understand much better now. I've been browsing the forum for hours and it's a gold mine of information. A few more questions came to me as I read, though:
     - What happens if the parity disk fails? Do I just replace it and it gets rebuilt?
     - If I put 2 SSDs in RAID 1 in the cache, I have to use the BTRFS file system, right?
     - I read that it's necessary to pre-clear disks; is that still the case?
     - Is the SSD cache included in parity?
     - Some people recommend putting Plex's metadata on an SSD separate from the cache; is that essential?
     Thanks again. And thank you for the welcome @SpencerJ!
  10. Oh, I did not think about that... Yes, I have an iPhone / iPad / Apple TV, and I share the Wi-Fi with my sister when she visits. I know the answer now
  11. How much would shipping be? I'd pay for it.
  12. Do you have a link to a guide on how you got HandBrake to use the iGPU for transcoding? Also, I finally got my Unraid server to use the iGPU for Plex transcoding. I was having issues because I had a couple of graphics cards installed. Is it possible to use the iGPU for Plex and keep those cards plugged in for protein folding with FAH, and maybe as a passthrough GPU in a VM? I tried everything I could think of, and the only way I was able to get it working was removing the GPUs. My setup is an Asus Z170-AR with an i3-7350K. Any help would be appreciated. Thanks.
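     For reference, the commonly cited way to expose the iGPU on 6.8-era Unraid is a couple of lines in the go file (a sketch of the usual steps, not from this post; how it coexists with discrete cards may depend on BIOS settings such as primary display and multi-monitor):

       # append to /boot/config/go so it runs at every boot
       modprobe i915            # load the Intel iGPU kernel driver
       chmod -R 777 /dev/dri    # let containers access the render node

     Then add /dev/dri as a device in the Plex container template (e.g. the extra parameter --device=/dev/dri) and enable hardware transcoding in Plex.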
  13. I like the improvements to the cache. I'm wondering if there will be new features coming to the Virtual Machines. ^.^
  14. Do you have any IoT devices on your network? Any smartphones or tablets? Do you allow guests in your home to use your network?
  15. @ghost82 I got the Big Sur installer to boot with the latest master of OpenCore and Lilu, using the stock OVMF files.
  16. Is there someone who will be carrying this plugin forward? As of the new Unraid 6.9.0, this plugin is having major issues for me. When I run a backup, it breaks the libvirtd service, so it cannot discover the VMs. They are still running, but I then have to shut down all the running VMs from within the guests, use lsof to close any remaining open files, and reboot the entire server to get the libvirtd service back up and manage my VMs again. I have snapshots enabled and qemu-guest-agent installed on all the VMs. Sometimes it gets through 1 or 2 VMs, sometimes none. If a snapshot has been taken when it fails, it also doesn't remove the snap disk from the VM's XML, so after a reboot I have to manually edit the XML to get the VM working again (since the snap disk no longer exists). I see nothing in /var/log/libvirt/libvirtd.log. Here are the 3 last backup logs from one of my scheduled configs: 20200705_0000_unraid-vmbackup_error.log 20200706_0551_unraid-vmbackup_error.log 20200703_1700_unraid-vmbackup_error.log
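     A hedged sketch of the manual cleanup when a snapshot overlay is left in the XML (the VM name and disk target below are placeholders; virsh blockcommit only works while the overlay file still exists, otherwise virsh edit is the fallback):

       # if the snap file still exists, merge it back into the base image
       virsh blockcommit "Windows 10" hdc --active --verbose --pivot
       # if the snap file is already gone, point the disk back at the base image by hand
       virsh edit "Windows 10"

     This doesn't address why libvirtd breaks, only the leftover XML.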
  17. I'm only in the phase of building the hardware, so I can't dig through all the software yet, but I'm trying to prepare. Anyway, my question: I thought the RGB control was part of the BIOS. When I look through the Asus X99 Deluxe II manual, the BIOS section shows settings for the Aura control. Am I confused about that? Once I saw that in the manual, appearing to be part of the BIOS, I thought I was going to be fine running the lights under Unraid. All of the BIOS settings will work correctly, won't they? Thank you for any info you have!
  18. Just saw this on another thread, don't know if it is relevant or not:
  19. /mnt/user/appdata from the command line on the server, or just open the appdata folder on your server from a PC (you may need to navigate to it by IP address if it is not showing up in Windows File Explorer) if you have the appdata folder exported with public security. If exported but not public, you can navigate to //[servername]/appdata in Windows File Explorer; you will have to provide login credentials if you have made the share secure.
  20. Oh wow, this is really cool! I was looking for something like this a few months ago and didn't know it was under development, let alone released by now! I tried to install it, but it gives me an error and the Docker container cannot start. I will share the log later. It seems related to "binding" devices (e.g., passthrough of a GPU to a VM). Does this prevent the container from running?
  21. According to your diagnostics, your appdata folder isn't shared (exported) on the network. In the Unraid webUI, go to Shares - User Shares, click on the appdata share to get to its page. There you can configure the settings for the appdata share, including whether or not to share it on the network.
  22. First of all, thanks for creating this plugin. This is excellent! I have a question about how the binding works. I noticed that my GPU is bound (a tick is shown in the plugin for the GPU, but not for the audio device):
     Group 36
       65:00.0 10de:1c03 VGA compatible controller: NVIDIA Corporation GP106 [GeForce GTX 1060 6GB] (rev a1)
       65:00.1 10de:10f1 Audio device: NVIDIA Corporation GP106 High Definition Audio Controller (rev a1)
     I pass this card through to a Windows VM. I don't remember setting this tick. Is it needed / helpful?
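     One way to check what the tick actually did on the host (standard lspci usage, not specific to this plugin): the "Kernel driver in use" line shows vfio-pci for a bound device, and the normal driver (nvidia, snd_hda_intel) or nothing otherwise.

       lspci -nnk -s 65:00.0   # the VGA function
       lspci -nnk -s 65:00.1   # the audio function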
  23. I saw reports of this a while back, but I thought it was fixed. I have it set to run every 90 days, but it's now running every night on Version 6.8.3. hardhome-diagnostics-20200705-2047.zip
  24. Just wondering if anyone has had the same issue. I'm passing audio through to a Windows 10 VM (I've done the MSI tricks as well), but it's not getting all the sample rates and bit depths. I have a GTX 1050 and a 1650 Super, and both have the same issue, so I feel it's a setting somewhere. When I run everything bare metal it works fine. I'm not sure which part isn't passing this information through correctly.
     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' id='4'>
       <name>Gaming VM</name>
       <uuid>1b5d9111-4e15-eea3-bb19-1c697a076bd3</uuid>
       <description>Windows 10 Gaming</description>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>16777216</memory>
       <currentMemory unit='KiB'>16777216</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>6</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='1'/>
         <vcpupin vcpu='1' cpuset='7'/>
         <vcpupin vcpu='2' cpuset='2'/>
         <vcpupin vcpu='3' cpuset='8'/>
         <vcpupin vcpu='4' cpuset='3'/>
         <vcpupin vcpu='5' cpuset='9'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/1b5d9111-4e15-eea3-bb19-1c697a076bd3_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
         <hyperv>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
           <vendor_id state='on' value='none'/>
         </hyperv>
       </features>
       <cpu mode='custom' match='exact' check='full'>
         <model fallback='forbid'>EPYC-IBPB</model>
         <vendor>AMD</vendor>
         <topology sockets='1' dies='1' cores='3' threads='2'/>
         <feature policy='require' name='x2apic'/>
         <feature policy='require' name='tsc-deadline'/>
         <feature policy='require' name='hypervisor'/>
         <feature policy='require' name='tsc_adjust'/>
         <feature policy='require' name='clwb'/>
         <feature policy='require' name='umip'/>
         <feature policy='require' name='stibp'/>
         <feature policy='require' name='arch-capabilities'/>
         <feature policy='require' name='ssbd'/>
         <feature policy='require' name='xsaves'/>
         <feature policy='require' name='cmp_legacy'/>
         <feature policy='require' name='perfctr_core'/>
         <feature policy='require' name='clzero'/>
         <feature policy='require' name='wbnoinvd'/>
         <feature policy='require' name='amd-ssbd'/>
         <feature policy='require' name='virt-ssbd'/>
         <feature policy='require' name='rdctl-no'/>
         <feature policy='require' name='skip-l1dfl-vmentry'/>
         <feature policy='require' name='mds-no'/>
         <feature policy='require' name='pschange-mc-no'/>
         <feature policy='disable' name='monitor'/>
         <feature policy='require' name='topoext'/>
         <feature policy='disable' name='svm'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='hypervclock' present='yes'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='block' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source dev='/dev/disk/by-id/nvme-Samsung_SSD_970_EVO_250GB_S465NX0KB29872L' index='1'/>
           <backingStore/>
           <target dev='hdc' bus='sata'/>
           <boot order='1'/>
           <alias name='sata0-0-2'/>
           <address type='drive' controller='0' bus='0' target='0' unit='2'/>
         </disk>
         <controller type='pci' index='0' model='pcie-root'>
           <alias name='pcie.0'/>
         </controller>
         <controller type='pci' index='1' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='1' port='0x8'/>
           <alias name='pci.1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
         </controller>
         <controller type='pci' index='2' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='2' port='0x9'/>
           <alias name='pci.2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='pci' index='3' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='3' port='0xa'/>
           <alias name='pci.3'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
         </controller>
         <controller type='pci' index='4' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='4' port='0xb'/>
           <alias name='pci.4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
         </controller>
         <controller type='pci' index='5' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='5' port='0xc'/>
           <alias name='pci.5'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
         </controller>
         <controller type='pci' index='6' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='6' port='0xd'/>
           <alias name='pci.6'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
         </controller>
         <controller type='pci' index='7' model='pcie-to-pci-bridge'>
           <model name='pcie-pci-bridge'/>
           <alias name='pci.7'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <alias name='virtio-serial0'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
         </controller>
         <controller type='sata' index='0'>
           <alias name='ide'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='usb' index='0' model='qemu-xhci' ports='15'>
           <alias name='usb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:bb:42:cd'/>
           <source bridge='br0'/>
           <target dev='vnet0'/>
           <model type='virtio-net'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x07' slot='0x01' function='0x0'/>
         </interface>
         <serial type='pty'>
           <source path='/dev/pts/0'/>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
           <alias name='serial0'/>
         </serial>
         <console type='pty' tty='/dev/pts/0'>
           <source path='/dev/pts/0'/>
           <target type='serial' port='0'/>
           <alias name='serial0'/>
         </console>
         <channel type='unix'>
           <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-4-Gaming VM/org.qemu.guest_agent.0'/>
           <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
           <alias name='channel0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <alias name='input0'/>
           <address type='usb' bus='0' port='1'/>
         </input>
         <input type='mouse' bus='ps2'>
           <alias name='input1'/>
         </input>
         <input type='keyboard' bus='ps2'>
           <alias name='input2'/>
         </input>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev0'/>
           <rom file='/mnt/user/isos/TU116.rom'/>
           <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x0b' slot='0x00' function='0x1'/>
           </source>
           <alias name='hostdev1'/>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x0b' slot='0x00' function='0x2'/>
           </source>
           <alias name='hostdev2'/>
           <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x0b' slot='0x00' function='0x3'/>
           </source>
           <alias name='hostdev3'/>
           <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'>
           <source>
             <vendor id='0x046d'/>
             <product id='0xc52b'/>
             <address bus='1' device='2'/>
           </source>
           <alias name='hostdev4'/>
           <address type='usb' bus='0' port='2'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
       <seclabel type='dynamic' model='dac' relabel='yes'>
         <label>+0:+100</label>
         <imagelabel>+0:+100</imagelabel>
       </seclabel>
     </domain>
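     A host-side sanity check (generic lspci usage, assuming the card sits at bus 0b as in the XML above): list every function of the card and its driver, and confirm the guest actually has MSI enabled on the audio function once the Windows-side MSI tweak is applied.

       lspci -nnk -s 0b:00               # all functions of the GPU; drivers should show vfio-pci
       lspci -vv -s 0b:00.1 | grep MSI   # 'Enable+' here reflects the guest's MSI state

     If those look right, the missing sample rates are probably negotiated between the Nvidia driver and the display's EDID rather than anything in the XML, but that is speculation.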
  25. I currently have 3 SAS2LP controllers that tie into my Norco 4224 backplanes, but the new motherboard I have only has 3 x16 slots and 1 x1 PCIe slot. I am hoping to add a 10Gb NIC, and possibly a transcoding video card down the road. I have bought a RES25V240, which I've seen I don't even need to mount in a PCIe slot (which is kind of cool), but I am wondering what sort of throughput I am going to see if I use it. Given that I still need 1 HBA, I am assuming I'd remove 2 of the SAS2LPs and have the remaining one with a direct connection to one backplane and another connection to the RES25V240, with its other 5 ports connecting to the other backplanes. Essentially I want to understand the following:
     1) Am I going to significantly impact performance with 5 backplanes going through the one card? I currently get around 95-98 MB/s for parity checks and don't know if this will really affect that; and if I have 8-10 people streaming off different disks, am I going to bottleneck at all?
     2) Is there a significant difference between running 1 SAS2LP with 5 connections through the RES25V240 vs running 2 SAS2LP cards both feeding a connection into the RES25V240 and only using 4 of its connections to backplanes?
     3) Given that the backplanes are all 6Gb/s SAS, is there any value in buying a 12Gb/s SAS controller with 6 ports and running SFF-8643 to SFF-8087 converter cables? I've had one vendor tell me this could cause issues (i.e. frying backplanes), but I don't know if that's true, or whether I'd see any improved performance vs the SAS expander. If so, this would also set me up for replacing the case down the road, as I could get one with 12Gb/s SAS (though these are really expensive).
     4) Would I even notice much difference between 6Gb/s SAS and 12Gb/s SAS with all WD Red drives? I know this gets into a throughput question, and I've seen some comments on other threads, but I'm still not clear on whether there is a significant gain in going to 12Gb/s SAS (enough to justify a $500 card and a $1200-$1500 case down the road).
     Any input or thoughts would be appreciated.
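     Some back-of-the-envelope numbers for question 1 (assuming a single x4 SFF-8087 uplink between the SAS2LP and the expander; drive counts are illustrative):

       6 Gb/s per lane x 4 lanes = 24 Gb/s raw
       after 8b/10b encoding: 4 x ~600 MB/s = ~2400 MB/s usable uplink
       ~2400 MB/s / 20 drives reading at once = ~120 MB/s per drive

     That is roughly in line with the 95-98 MB/s parity-check speeds quoted above, so a single uplink sits near the break-even point; dual-linking (two uplinks into the expander, as in question 2) would double the uplink budget. Streaming to 8-10 users is far below these numbers either way.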