dboris

Everything posted by dboris

  1. Allow me to add that I was having an issue, and this topic is the one I found by googling the error: "unraid custom docker network <insert your network name> not found". In my case the solution was partly given here. The issue: my custom Docker network had capital letters in its name (e.g. "CustomNetwork"). While Docker recognised it in "docker network ls", I noticed the network's name was shown without capitals in the "Network" column of Unraid's GUI Docker page. The solution: delete the network and recreate it without capitals, as sketched below. "docker network rm <insert your network name>", then recreate it and reassign it to the containers (since the network is identified internally by ID, not by name).
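     For reference, the full sequence looks something like this ("customnetwork" and "my-container" stand in for your own names):

         docker network ls                                    # confirm how Docker actually spells the name
         docker network rm CustomNetwork                      # remove the old, capitalised network (disconnect containers first)
         docker network create customnetwork                  # recreate it in lowercase
         docker network connect customnetwork my-container    # reattach each container, or re-select the network in its template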
  2. Hello, I have a remote server at a friend's place. I originally tried to give a VM access to more ports. I think I just had to assign the main interface to the VM rather than the virtual bridge. Instead of doing that, I edited the bonding members in the network settings. Now I can't access the server via SSH or the GUI. I have limited access to the server: only through my friend. I'm looking for a way to edit these settings through the terminal, since it's my only entry point; I'm guessing reverting that change will restore the server. Here's a screenshot to illustrate the setting I remember having changed before hitting a no-access wall. I also include a diagnostic. Don't hesitate to point me somewhere if it's already been discussed; I was unable to find such information, and I now wonder how I could edit basically any setting from the command line if I ever hit such issues later on. I tried booting with no plugins and no GUI; still no SSH. Thanks a lot in advance for the tips. server-diagnostics-20220612-1257.zip
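     Edit: for anyone else stuck like this, Unraid keeps these settings in network.cfg on the flash drive, so a reset from the terminal should look roughly like this (hedged: untested on my side, and the keys in the file may differ between Unraid versions):

         cp /boot/config/network.cfg /boot/config/network.cfg.bak   # keep a copy of the broken config
         rm /boot/config/network.cfg                                # without a saved config, Unraid falls back to defaults (DHCP on eth0)
         reboot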
  3. After rebooting my remote server, I realised I couldn't access my tunnel anymore. Thank god I had TeamViewer running on a VM and was able to recover it.
  4. TL;DR: https://github.com/jellyfin/jellyfin/issues/4338 It seems that stutters are normal for 4K HEVC files.
     _______
     The path is "/dev/dri/renderD128". Originally I stupidly set it in Jellyfin to "/dev/dri/", but rapidly noticed decoding wasn't working. I set it back to "/dev/dri/renderD128"; I only got "results" once it was edited back. In the Docker settings it's still "/dev/dri/", indeed.
     _______
     I applied what you recommended at the end of your post. I still get playback stutter: 20% GPU load and 40% CPU load when displaying an HEVC HDR movie. As a reference, 0% GPU load and 20% CPU load with no playback. So it seems to work, with no change compared to the previous configuration. I still get the same error message when enabling HW encoding; HW decoding works, however. I still have playback stutter.
     _______
     Previously it took me time to answer because I had an issue: the Jellyfin container was responding, H264 was playing back, but not H265. I couldn't restart the container; it was always displayed as turned on in the Unraid GUI. The log was showing that the container was killed (capture included). At first I thought it was a random issue, but I encountered that bug again. It seems the GPU crashed: the GPU info, when crashed, displayed nothing, and sometimes "100%" usage (capture included). I tried disabling Docker instead of restarting, but was stuck with "Please wait... starting up containers", so I restarted the Unraid server. Both times it triggered a rebuild of the array, so it doesn't look like a clean shutdown. If you think it's valuable to try to identify what causes it, I can try tinkering further.
     _______
     For the non-HDR content, see the included screenshot using the jellyfish samples listed above: I get 20% GPU and 25% CPU with non-HDR HEVC. Same results as with HDR.
     _______
     Overall, I get it working, but playback isn't smooth; no hardware encoding. It still stutters at least once a minute, even when playing locally and from an NVMe. Without HW acceleration I still get occasional stutters, and I see no CPU/GPU/drive overload when the stutters happen. As you can see, there are still the "codec exceed limit" and "codec not supported" messages; I shared a screenshot of the log. When launching a video with HW decoding enabled, I see a spike in GPU usage before getting the error message. It's probably trying to play back before crashing. A driver issue? I'm making wild guesses.
     _______
     I tried hosting a Jellyfin server on my MacBook M1 and playing back a movie from my server's NVMe over HLS: still stutters. Very strange. Once again, I can play the same file in VLC without any issue; skipping through the file (over SMB via wifi) is nearly instant, while it takes 10s to load for each click through Jellyfin. I tried hosting the movies from a hard drive connected directly to my MacBook M1. Still stutters... What the. I feel I lost a considerable amount of time trying to fix something that is broken at the source.
     _______
     Tried Jellyfin on an Intel 8700K with a 1080 Ti: no issues, playback works much better, even when streaming content over wifi with a bad wifi card. It seems that CPU-only decoding is a very bad idea. I also took the time to test performance with the "GPU encoding" option disabled. Well, disabling it gives noticeably worse playback performance and considerably increases the lag/delay when seeking through video files. Considering that this option doesn't work on my 4750U (I can't play back files with it enabled), it can possibly explain all the issues I'm facing! Maybe later driver updates will make it work for me? It looks like other users don't have such issues with desktop AMD CPUs.
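     If anyone wants to check whether this is a driver limitation rather than a Jellyfin bug, listing the VAAPI profiles the APU actually exposes should settle it. A sketch, assuming the container is named "jellyfin" and that vainfo is available inside it (it isn't part of stock Unraid):

         docker exec -it jellyfin vainfo --display drm --device /dev/dri/renderD128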
  5. Wow, thanks for your fast answer. Glad to provide information; sorry for the delay. I passed through /dev/dri in the Docker options. I also ran these commands and rebooted:
     mkdir /boot/config/modprobe.d
     touch /boot/config/modprobe.d/amdgpu.conf
     On the playback page, everything is ticked except "10-bit HEVC decoding". I tried ticking the options one by one, checking whether playback worked each time. Ticking that one gives me the message: "This client is not compatible with the media and the server is not sending a compatible format." I get around 60-70% CPU usage without "10-bit HEVC decoding" enabled when playing back an HEVC 10-bit file, and around 35% with it ticked. I've attached:
     - A capture during HEVC playback.
     - The output of "lsmod".
     - The output of "ls -la /dev/dri".
     lsmod.rtf
  6. Using a 4750U (laptop). Success with HEVC decoding, but hardware encoding won't work if the option is ticked: "This client is not compatible with the media and the server is not sending a compatible format." And even with it unticked, playback constantly stops every 10-15s. Probably caused by the lack of encoding, since when playing back from another laptop (MacBook M1), streaming through SMB, I get no issues at all. Finally, here's a link with 4K H.264 and HEVC samples to easily test performance between codecs: https://jell.yfish.us/
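     To double-check which codec and bit depth a given sample really uses before testing, something like this should work (the container name and media path are just examples; jellyfin-ffmpeg bundles ffprobe):

         docker exec -it jellyfin /usr/lib/jellyfin-ffmpeg/ffprobe -v error \
             -select_streams v:0 \
             -show_entries stream=codec_name,profile,pix_fmt \
             -of default=noprint_wrappers=1 \
             "/media/jellyfish-120-mbps-4k-uhd-hevc-10bit.mkv"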
  7. https://forums.plex.tv/t/got-hw-transcoding-to-work-with-libva-vaapi-on-ryzen-apu-ryzen-7-4700u/676546/216
  8. Curious as well, with the new one coming.
  9. Exciting. I'm planning to switch to a laptop. Since I already have a 10 Gbit/s-capable router, being able to use a USB 5 Gbit/s Ethernet adapter would be a huge plus. Interested in any updates.
  10. For me the solution was to enable "list IOMMU" in the motherboard BIOS. I forgot to re-enable it after resetting the BIOS options to defaults.
  11. Exactly, normally you would do it from your bootable USB key.
  12. In a few words: Machine Type Q35, VNC driver CIRRUS.
     Download NiceHash OS from: https://www.nicehash.com/download-center
     Download and install OSFMount: https://www.osforensics.com/tools/mount-disk-images.html
     On a Windows VM:
     - Extract the NHOS archive with WinRAR or 7-Zip.
     - With OSFMount, select the extracted .img file and mount the DDOS3.31+FAT16 partition.
     - Uncheck "read-only drive".
     - Open the mounted partition in the explorer and edit configuration.txt, adding your BTC address (from your NiceHash account) and a worker name (as you like).
     - Save the file and unmount the partition.
     - Mount the .img file again and make sure your changes have been saved.
     - You can now move the .img file into your domains/VM disks folder.
     Create a new VM with the Ubuntu template. Make sure to have these settings:
     - At least 2GB of RAM.
     - Don't waste CPU cores on it, as more brings no benefit.
     Mandatory:
     - Of course, point to the edited .img file.
     - Machine Type Q35.
     - Change the VNC driver to CIRRUS.
     - Add your NVIDIA GPU as a secondary GPU.
     You should be all set :). You can control the GPU OC settings from the web interface. Your NVIDIA GPU will get no more rest.
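     Alternatively, if you'd rather skip the Windows VM and OSFMount entirely, the same edit can in principle be done from the Unraid terminal with a loop device (a sketch; the image path is an example, and I'm assuming the FAT16 partition is the first one on the image):

         losetup -fP --show /mnt/user/isos/nhos.img   # prints e.g. /dev/loop2; -P exposes partitions as /dev/loop2p1, ...
         mkdir -p /mnt/nhos
         mount /dev/loop2p1 /mnt/nhos
         nano /mnt/nhos/configuration.txt             # add your BTC address and worker name
         umount /mnt/nhos
         losetup -d /dev/loop2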
  13. I don't care about translation; what I care about is keyboard layout. When accessing the UI directly from the primary GPU (not the webui), it requires me to mentally switch from an AZERTY layout to QWERTY. 🤪 I have seen posts about keyboard layout but never managed to successfully change the default QWERTY layout. I'm surprised there are even translations without keyboard layouts. Please add that functionality, as it has been a source of frustration every time I had no VM/GUI open but needed to type commands or edit an XML. 😪
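     In the meantime, the workaround I'd try from the console is loading a French keymap by hand (hedged: I'm assuming Unraid's Slackware base ships loadkeys and the usual keymap files; check with "which loadkeys" first):

         loadkeys fr-latin1                              # switch the local console to AZERTY for this session
         echo "loadkeys fr-latin1" >> /boot/config/go    # re-apply it at every boot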
  14. Thanks for the update! I'll get it thanks to you!
  15. Brother with a 5700XT waiting for a smarter brother to show us the way.
  16. No, but after struggling for a day I wanted to use my computer instead of still trying to fix it. Those are the results I got by swapping parts of the XML for what you gave. You wanted me to avoid copy-pasting, but you shouldn't expect any Linux competencies from me beyond copy-pasting; I can't do much more than test and report. I edited this as I thought you wanted me to (?):
     <numatune>
       <memory mode='strict' nodeset='0,1'/>
       <memnode cellid='0' mode='strict' nodeset='0'/>
       <memnode cellid='1' mode='strict' nodeset='1'/>
     </numatune>
     I have no idea what to think about the AIDA results, however: much better L2 write speeds, but worse L1. I now get 4885 on Cinebench, so on paper it looks a tiny bit better.
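     If it helps, there's apparently a way to check what libvirt actually applied without re-opening the XML (the VM name below is just an example):

         virsh numatune "Windows 10"    # prints the numa_mode and numa_nodeset currently in effect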
  17. I noticed it was working on some other VMs, so after reinstalling Windows to rule out software bugs, and after a good night of sleep, I messed around with the XMLs and ended up solving my audio problem while retaining the L3 performance. I get 4793 on Cinebench. So, with a 1950X, this is what worked:
     <numatune>
       <memory mode='interleave' nodeset='0-1'/>
     </numatune>
     <resource>
       <partition>/machine</partition>
     </resource>
     <os>
       <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
       <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
       <nvram>/etc/libvirt/qemu/nvram/xxx_VARS-pure-efi.fd</nvram>
     </os>
     <features>
       <acpi/>
       <apic/>
       <hyperv>
         <relaxed state='on'/>
         <vapic state='on'/>
         <spinlocks state='on' retries='8191'/>
         <vendor_id state='on' value='none'/>
       </hyperv>
     </features>
     <cpu mode='custom' match='exact' check='full'>
       <model fallback='forbid'>EPYC-IBPB</model>
       <topology sockets='1' cores='8' threads='2'/>
       <feature policy='require' name='topoext'/>
       <feature policy='disable' name='monitor'/>
       <feature policy='require' name='x2apic'/>
       <feature policy='require' name='hypervisor'/>
       <feature policy='disable' name='svm'/>
       <numa>
         <cell id='0' cpus='0-7' memory='16777216' unit='KiB'/>
         <cell id='1' cpus='8-15' memory='16777216' unit='KiB'/>
       </numa>
     </cpu>
  18. I spent multiple weekends trying to problem-solve my Unraid system. So I'm glad it took you 10 minutes to read the topic, but it took me a few hours to do the benchmarks, test, and confirm. Each Windows reboot requires a system reboot, as I have a 5700XT. This fixed my latency issues but somehow made my Realtek sound card crash (the Windows audio service consumes 15% of the CPU and freezes the Windows audio settings). It still works bare metal on the same SSD, no problem. Don't you think I went through some other topics already? As of today I just spent a straight 12 hours problem-solving Unraid. Did your 2150 posts take you only 10 minutes too? Don't you think I can rightfully express my regret at not checking this topic twice, without you trying to make yourself shine over me by calling me lazy? Thanks for your forum contribution, thumbs down for your behaviour. 👎
  19. Not everyone can afford to spend countless hours reading every topic on you-name-it-forum.
  20. Oh, my, god. I came across this topic a few times but I was too lazy to read it all. This information REALLY needs to be condensed/communicated (or integrated into the OS?) by Unraid's dev team. It would have saved me so much time. A bit pissed off at not having been properly informed; I had just been told that Threadrippers were bad for gaming on Unraid... after having bought a Pro key. Meh. I run a 1950X + 5700XT + 64GB. Note that I use OVMF for the 5700XT. I was getting 750GB/sec bare metal on L3 and 45GB/sec in the VM. I now get a solid 312GB/sec, but as I understand it, that's close to bare metal because I'm now using half the Threadripper.
     Some tips: I enabled NUMA by booting into bare-metal Windows and setting the Threadripper to gaming mode with Ryzen Master. The option may now be hidden in the BIOS, as AMD asked motherboard manufacturers to hide it. I had no improvements (results were worse) without activating NUMA in the XML. I was previously running 12 cores for the VM, but I'm fine with only 8 if I get stability. It applies not only to gaming but also to video editing, where you never really know if your system is performing at its best. I'm redownloading Modern Warfare; it's a good benchmark, as it was UNPLAYABLE before (system stutter every few seconds). I'll update the post to confirm whether or not it fixed the lags I was getting. Here's the XML (again) and screenshots of the tests I did before achieving what I was expecting to get, thanks to you guys, who took the time to collect and share that precious information!
     <domain type='kvm' id='1'>
       <name>Windows 10 RX</name>
       <uuid>XXXXXXXXXXXXXXXX</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>33554432</memory>
       <currentMemory unit='KiB'>33554432</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>16</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='16'/>
         <vcpupin vcpu='1' cpuset='17'/>
         <vcpupin vcpu='2' cpuset='18'/>
         <vcpupin vcpu='3' cpuset='19'/>
         <vcpupin vcpu='4' cpuset='20'/>
         <vcpupin vcpu='5' cpuset='21'/>
         <vcpupin vcpu='6' cpuset='22'/>
         <vcpupin vcpu='7' cpuset='23'/>
         <vcpupin vcpu='8' cpuset='24'/>
         <vcpupin vcpu='9' cpuset='25'/>
         <vcpupin vcpu='10' cpuset='26'/>
         <vcpupin vcpu='11' cpuset='27'/>
         <vcpupin vcpu='12' cpuset='28'/>
         <vcpupin vcpu='13' cpuset='29'/>
         <vcpupin vcpu='14' cpuset='30'/>
         <vcpupin vcpu='15' cpuset='31'/>
       </cputune>
       <numatune>
         <memory mode='interleave' nodeset='0-1'/>
       </numatune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-q35-4.0'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/XXXXXXXXXXXXXXXXXXXXXXXX_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
         <hyperv>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
           <vendor_id state='on' value='none'/>
         </hyperv>
       </features>
       <cpu mode='custom' match='exact' check='full'>
         <model fallback='forbid'>EPYC-IBPB</model>
         <topology sockets='1' cores='8' threads='2'/>
         <feature policy='require' name='topoext'/>
         <feature policy='disable' name='monitor'/>
         <feature policy='require' name='x2apic'/>
         <feature policy='require' name='hypervisor'/>
         <feature policy='disable' name='svm'/>
         <numa>
           <cell id='0' cpus='0-7' memory='16777216' unit='KiB'/>
           <cell id='1' cpus='8-15' memory='16777216' unit='KiB'/>
         </numa>
       </cpu>
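     One more check worth doing: confirm the host really exposes two NUMA nodes after switching to gaming mode, otherwise the <numa> cells in the XML describe a topology that doesn't exist. These two commands (standard Linux tooling, nothing Unraid-specific) should do it:

         lscpu | grep -i numa                          # should report "NUMA node(s): 2" on a 1950X in gaming mode
         cat /sys/devices/system/node/node*/cpulist    # which host CPUs belong to each node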
  21. Hello. I was using the auto-snapshot script and noticed that znapzend wouldn't create snapshots anymore once the dataset contained a snapshot made by the auto-snapshot script. I guess the two kinds of snapshots conflict:
     [Sat Mar 21 06:43:17 2020] [debug] sending snapshots from zSSD/PROJECTS to zHDD/BACKUP_Projects
     cannot restore to zHDD/BACKUP_Projects@zfs-auto-snap_01-2020-03-21-0540: destination already exists
     [Sat Mar 21 06:49:22 2020] [info] starting work on backupSet zSSD/PROJECTS
     [Sat Mar 21 06:49:22 2020] [debug] sending snapshots from zSSD/PROJECTS to zHDD/BACKUP_Projects
     cannot restore to zHDD/BACKUP_Projects@zfs-auto-snap_01-2020-03-21-0540: destination already exists
     warning: cannot send 'zSSD/PROJECTS@2020-03-21-064445': Broken pipe
     warning: cannot send 'zSSD/PROJECTS@2020-03-21-064626': Broken pipe
     warning: cannot send 'zSSD/PROJECTS@2020-03-21-064921': Broken pipe
     cannot send 'zSSD/PROJECTS': I/O error
     I would like to move to znapzend; however, it doesn't seem to support shadow copies. What I like about znapzend:
     - Ease of use when setting snapshot occurrence and retention rules.
     - Robustness, by replicating datasets between different pools... I lost a pool yesterday, and that's why I tried znapzend.
     But shadow copy is an option I had with the zfs-auto-snapshot script that I'm not willing to lose.
     Edit: In short, is there a way to back up ONLY to the external drive/pool? I guess it's still more efficient than a dumb rsync (or is it?).
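     Edit 2: if it helps anyone hitting the same "destination already exists" error, I believe removing the conflicting auto-snap snapshots on the destination should let znapzend's send/receive go through again (dataset and snapshot names taken from my log above; destroying a snapshot is irreversible, so double-check the name):

         zfs list -H -t snapshot -o name -r zHDD/BACKUP_Projects | grep zfs-auto-snap
         zfs destroy zHDD/BACKUP_Projects@zfs-auto-snap_01-2020-03-21-0540    # the snapshot named in the error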
  22. Regarding problems with the 5700XT, once it works "reliably", I hope someone will be able to produce a full tutorial. 🥰 I have already spent about 4 full weekends setting up and understanding Unraid... I can't afford the luxury of setting up OSX. So yes, you were lucky or talented if it took you only one day.
  23. I tried with OpenCore and the qcow2 image shared in this post. I manage to boot when using VNC, but never reach the login page with the 5700XT passed through. If someone has managed to make it work, hints would be greatly appreciated.
  24. As promised, here's the SysDevs. Ideally I would pass through the whole group 14, but it also holds the server's single gigabit NIC, ahaha. The VM ran all day; I confirm I got no errors in the log. Not related, but I had instability in Modern Warfare, and disabling ACS override didn't help. Server_SysDevs.pdf
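     In case the PDF is hard to read, this is the standard snippet for dumping the IOMMU groups from the terminal; it shows exactly what shares group 14 with the NIC:

         for g in /sys/kernel/iommu_groups/*; do
             echo "IOMMU group ${g##*/}:"
             for d in "$g"/devices/*; do
                 echo -e "\t$(lspci -nns "${d##*/}")"
             done
         done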