Leaderboard

Popular Content

Showing content with the highest reputation on 02/25/22 in all areas

  1. 2 points
2. ***Update***: Apologies, it seems there was an update to the Unraid forums which removed the carriage returns in my code blocks. This was causing people to get errors when typing commands verbatim. I've fixed the code blocks below and all should be Plexing perfectly now.
===========
Granted this has been covered in a few other posts, but I just wanted to lay it out with a little bit of layout and structure. Special thanks to [mention=9167]Hoopster[/mention], whose post(s) I took this from.

What is Plex Hardware Acceleration?
When streaming media from Plex, a few things are happening. Plex will check the following against the device trying to play the media:
- Media is stored in a compatible file container
- Media is encoded in a compatible bitrate
- Media is encoded with compatible codecs
- Media is a compatible resolution
- Bandwidth is sufficient

If all of the above are met, Plex will Direct Play, i.e. send the media directly to the client without changing it. This is great in most cases as there will be very little, if any, overhead on your CPU. This should be okay in most cases, but you may be accessing Plex remotely or on a device that is having difficulty with the source media. You could either manually convert each file or get Plex to transcode the file on the fly into another format to be played.

A simple example: your source file is stored in 1080p. You're away from home and you have a crappy internet connection. Playing the file in 1080p takes up too much bandwidth, so to get a better experience you can watch your media in glorious 240p without stuttering / buffering on your little mobile device by getting Plex to transcode the file first. This is because a 240p file requires considerably less bandwidth than a 1080p file. The issue is that, depending on which format you're transcoding from and to, this can absolutely pin all your CPU cores at 100%, which means you're gonna have a bad time. Fortunately, Intel CPUs have a little thing called Quick Sync, which is their native hardware encoding and decoding core. This can dramatically reduce the CPU overhead required for transcoding, and Plex can leverage it using its Hardware Acceleration feature.

How Do I Know If I'm Transcoding?
You can see how media is being served by first playing something on a device. Log into Plex and go to Settings > Status > Now Playing. As you can see, this file is being Direct Played, so there's no transcoding happening. If you see (throttled) it's a good sign; it just means that your Plex Media Server is able to perform the transcode faster than necessary.

To initiate some transcoding, go to where your media is playing and click Settings > Quality > Show All > choose a quality that isn't the default one. If you head back to the Now Playing section in Plex you will see that the stream is now being transcoded. I have Quick Sync enabled, hence the "(hw)", which stands for, you guessed it, Hardware. "(hw)" will not be shown if Quick Sync isn't being used in transcoding.

Prerequisites
1. A Plex Pass - required if you want Plex Hardware Acceleration. Test to see if your system is capable before buying a Plex Pass.
2. An Intel CPU that has Quick Sync capability - search for your CPU using Intel ARK.
3. A compatible motherboard - you will need to enable the iGPU in your motherboard BIOS. In some cases this may require you to have the HDMI output plugged in and connected to a monitor in order for it to be active.
If you find that this is the case on your setup, you can buy a dummy HDMI plug that tricks your unRAID box into thinking something is plugged in. Some machines like the HP MicroServer Gen8 have iLO / IPMI, which allows the server to be monitored / managed remotely. Unfortunately this means the server has two GPUs, and ALL GPU output from the server passes through the ancient Matrox GPU. So as far as any OS is concerned, even though the Intel CPU supports Quick Sync, the Matrox one doesn't. =/ You'd have better luck using the new unRAID Nvidia plugin.

Check Your Setup
If your config meets all of the above requirements, give these commands a shot; you should know straight away if you can use Hardware Acceleration. Log in to your unRAID box using the GUI and open a terminal window, or SSH into your box if that's your thing. Type:

cd /dev/dri
ls

If you see an output like the one above, your unRAID box has Quick Sync enabled. The two items we're interested in specifically are card0 and renderD128. If you can't see them, not to worry, type this:

modprobe i915

There should be no return or errors in the output. Now run again:

cd /dev/dri
ls

You should see the expected items, i.e. card0 and renderD128.

Give Your Container Access
Lastly, we need to give our container access to the Quick Sync device. I am going to passive-aggressively mention that they are indeed called containers and not dockers. Dockers is a boots and pants manufacturer and has nothing to do with virtualization or software development, yet. Okay, rant over. We need to do this because the Docker host and its underlying containers don't have access to anything on unRAID unless you give it to them. This is done via Paths, Ports, Variables, Labels, or in this case Devices. We want to provide our Plex container with access to one of the devices on our unRAID box. We need to change the relevant permissions on our Quick Sync device, which we do by typing into the terminal window:

chmod -R 777 /dev/dri

Once that's done, head over to the Docker tab and click on your Plex container. Scroll to the bottom, click on Add another Path, Port, Variable, select Device from the drop-down and enter the following:

Name: /dev/dri
Value: /dev/dri

Click Save followed by Apply. Log back into Plex and navigate to Settings > Transcoder. Click the button to SHOW ADVANCED and enable "Use hardware acceleration where available". You can now do the same test we did above by playing a stream, changing its quality to something that isn't its original format, and checking the Now Playing section to see if Hardware Acceleration is enabled. If you see "(hw)", congrats! You're using Quick Sync and hardware acceleration [emoji4]

Persist Your Config
On reboot, unRAID will not run those commands again unless we put them in our go file. So when ready, type into the terminal:

nano /boot/config/go

Add the following lines to the bottom of the go file:

modprobe i915
chmod -R 777 /dev/dri

Press Ctrl+X, followed by Y, to save your go file. And you should be golden!
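For reference only: the Device entry added through the Unraid GUI above is the GUI equivalent of Docker's --device flag. A minimal sketch, assuming a hypothetical container name and the official plexinc/pms-docker image; the volume paths are illustrative, not from this guide:

# Expose the iGPU device node (/dev/dri) to the Plex container
docker run -d \
  --name=plex \
  --device=/dev/dri:/dev/dri \
  -v /mnt/user/appdata/plex:/config \
  plexinc/pms-docker

Unraid's Docker template essentially builds a command like this for you when you click Apply, which is why adding the Device in the GUI is all that's needed.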
    1 point
3. All the support needed for the apps I put in the CA plugins. Any questions or other issues can be asked here; I will answer all of them when I can. (Keep in mind I am French, UTC+01:00 Paris.) Please read the description of each docker and the variables that you install (some dockers need special variables to run). For better and faster support I recommend you join the Discord support server! The basic password and username for the WebUI are in the docker description. Little FAQ ✅ » Yes, I am open to DMs with your ideas for dockers to add. Support Discord: Invitation Wiki/Setup tutorial: Wiki Forum: Forum
    1 point
4. As my earlier post noted, my main server was due for an upgrade - it'll probably keep running for now so as not to overload the backup server, due to all the VMs I have running for work-related testing and such, but the replacement is finally becoming a reality: it's a Supermicro CSE-743-1200B-SQ, with a 1200 watt Platinum power supply and 8 bays built in, plus a 5-bay CSE-M35TQB in place of the three 5.25" bays, all designed to run at less than 27 dB, and able to be either run as a tower or rack mounted (it'll spend the next 3 months in tower form... seems getting rails for this thing requires first sending a carrier pigeon to Hermes, Hermes then tasks Zeus with forging them in the fires of the gods from unobtainium, who then ships them when he's done doing... well, Greek stuff). My first 700-series chassis is still doing work, still with its original X8SAX motherboard, and I see no reason to fix something that isn't broken!

While having a bunch of drives is great, the idea here is to have two gaming VMs and run Plex, Nextcloud, Home Assistant, Frigate, and numerous others. All of that takes a ton of IO. Enter the motherboard: this motherboard is a friggin monster - but importantly, at least to me, its design syncs up perfectly with the chassis, so all the power monitoring, fan modulation, and LED/management functions can be controlled via built-in out-of-band management. The M12SWA is currently paired with a 3955WX; given how close we are to the next-gen Threadripper release, I'm going to wait that out for now and then decide whether to upgrade to next gen's mid-range (whether it be a 5975WX or whatever the case may be), or otherwise. For now, the VMs will be 4-core / 8-thread to match the CCDs, leaving the rest to Docker. Down the line, they'll likely be either 8 cores each, or one 8 and one 4, depending on what the need is. The lighter of the two is going to house an always-on emulation VM with a 1650S, which will play all our games on screens throughout the house (or wherever) via Moonlight/Parsec/whatever.

It slots perfectly in the chassis: But cable management is going to be a meeessssss: That ketchup and mustard is hurting my friggin eyes. I'm going to have to wrap those with something.

More to come on this one - the plan for now is to throw in 128GB of ECC 3200, 4 NVMe drives, an RTX 2070, a GTX 1650S, a quad 10Gb NIC (Chelsio, since this thing comes with the stupid Aquantia NIC which has no SR-IOV support), and a quad 1Gb NIC (since the Intel NIC they included ALSO doesn't support SR-IOV... ugh), leaving one slot for potentially adding either tinker-type toys or an external SAS HBA if I somehow eventually run out of room. There are custom boards out there that combine the X540 and i350 chipsets onto one board, but I may instead consolidate this down to a single X550 or one of those fancy X700 Intel-based boards... We'll see.
    1 point
5. Wow. I'll take 10 as well. With prepayment, though, I'd be pretty nervous ^^
    1 point
6. /mnt/user includes files on both the cache and the array. User shares are simply the top-level folders on array disks and pools. If there are multiple copies of a file, the one on the lowest-numbered disk is the one shown in the user share, and cache is considered lower than the array. It's not really a question of the age of the files, but in this case keeping the newer one seems a reasonable choice.
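To make that overlay concrete, a minimal sketch from the terminal (the share and file names are made up for illustration):

ls /mnt/cache/Media    # copy still sitting on the cache pool
ls /mnt/disk1/Media    # copy already on array disk 1
ls /mnt/user/Media     # merged user share view: for a duplicate filename,
                       # the cache copy is the one shown, because cache
                       # sorts below disk1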
    1 point
7. Not sure, I haven't tested that; it was just a simple test to show core speeds. Normally I only have P-cores in the VM and leave the E-cores to Unraid.
    1 point
8. I just tried to use these drives as parity whilst doing a data rebuild, and it failed to read the parity back, most likely (at a guess) because that protection was enabled. For anyone seeing this in the future who has the same drives and issues: follow the steps here BEFORE you start using the drives, whether for data or parity. Since removing the protection I can now see the operations on the drive.
    1 point
  9. @JorgeB Thank you for all your help these last few days! Everything seems to be happy and stable now! You've undoubtedly saved me hours if not days of troubleshooting and headache, so again, thank you for the help!
    1 point
  10. Yes, then you just need to assign the disk with the changed name.
    1 point
11. That's why I'm happy to pay via PayPal: https://www.paypal.com/de/webapps/mpp/refunded-returns
    1 point
12. So I just did a bit of testing here (quickly threw together a machine from spare parts and let UNRAID loose on it with a trial license). The result is pretty simple: IT WORKS :-))) but only because my switch here is set up for various bonding configurations and detects them automatically. So I didn't have to configure anything on the switch, just enabled bonding in UNRAID (with mode 802.3ad). But your manual says NetGear can NOT do that...
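A quick way to confirm from the Unraid side which bonding mode was actually negotiated, assuming the bond interface uses the usual default name bond0:

cat /proc/net/bonding/bond0
# Look for "Bonding Mode: IEEE 802.3ad Dynamic link aggregation" and, per slave,
# the 802.3ad aggregator info to confirm LACP is actually active.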
    1 point
  13. Thanks. The update is going to ignore anything that parses out correctly as JSON
    1 point
  14. I have been testing the latest internal build, and can report the issue is solved. Please wait until rc3 is released.
    1 point
15. Which NIC does your motherboard have? My MSI Z690 has the following and doesn't work on rc2. I am running an internal release of rc3 and can confirm the current kernel in rc3 supports it.

05:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V (rev 03)

root@computenode:~# lspci -vs 05:00.0
05:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V (rev 03)
        Subsystem: Micro-Star International Co., Ltd. [MSI] Device 7d25
        Flags: bus master, fast devsel, latency 0, IRQ 18, IOMMU group 17
        Memory at 50e00000 (32-bit, non-prefetchable) [size=1M]
        Memory at 50f00000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number d8-bb-c1-ff-ff-8c-c9-b0
        Capabilities: [1c0] Latency Tolerance Reporting
        Capabilities: [1f0] Precision Time Measurement
        Capabilities: [1e0] L1 PM Substates
        Kernel driver in use: igc
        Kernel modules: igc
    1 point
16. Hi @titust1. This is running an internal rc3 release; I can look at doing the same on 6.10-rc2 if you would like to see the results. The kernel is now 5.15.24. This was pinning all the P-cores to a Win10 VM and running a CPU-Z stress test. If I add the E-cores it does impact the P-cores. This is idle.
    1 point
  17. 1 point
  18. 1 point
19. OLD TUTORIAL/INFO For any old tutorials / small help, before making a post here please check out my WIKI, where there are a lot of questions / answers and help. Thanks!
    1 point
20. Ayyéé! Sorry, I totally missed your message in my notification box. But yeah, now it works with your update! Seems I was just missing some dependencies! Thanks for the help and support provided!
    1 point
21. I had to do several configurations to get it working properly, especially on my corporate notebook running on my LAN, which doesn't use the Windows DNS (and consequently doesn't use my router's DNS). Some things I remember: 1) I changed my internet connection from a residential to a business plan. This allows my router to accept connections on port 443. 2) I contracted a fixed IP for my internet connection. 3) I registered a domain at Registro.br pointing to my fixed IP (and created an entry for Nextcloud there). 4) I set up a reverse NAT config on my router (pfSense) to forward everything arriving on port 443 of the public IP to the private IP of my SWAG container (the Unraid reverse proxy container). 5) I struggled quite a bit with the SWAG .conf file, but afterwards it worked just fine. 6) I set up an alias in my router's DNS so that when someone looks up the name registered at Registro.br it resolves internally to my private IP, not the public one. This saves network traffic when an internal client accesses Nextcloud; without it, everything resolves to the public IP even locally.
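One way to check that the split-DNS alias from step 6 is doing its job is to compare the answer from the internal resolver against a public one (the domain and addresses below are placeholders, not from the post):

dig +short nextcloud.example.com @192.168.1.1   # router / pfSense resolver: should return SWAG's private IP
dig +short nextcloud.example.com @1.1.1.1       # public resolver: returns the fixed public IP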
    1 point
22. So, I think this may be related to me enabling ip6tables in Docker using this Docker daemon.json:

{
  "experimental": true,
  "ip6tables": true
}

Today IPv6 stopped working for VMs again. Disabling my custom daemon.json and rebooting the Unraid server (which didn't help the last time I worked on this) fixed IPv6 in VMs again. Kind of annoying, since I utilize this to have correct IPv6 port forwarding for my PiHole, but at least I now have something concrete to test.

Edit: This is definitely the case. To test this I stopped Docker with /etc/rc.d/rc.docker stop, put the above daemon.json at /etc/docker/daemon.json, started Docker with /etc/rc.d/rc.docker start and rebooted my VM. After the reboot, the VM couldn't get an IPv6 address via SLAAC again.
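For anyone wanting to reproduce that test, here are the steps above collected as commands (the heredoc is just one convenient way to put the file in place):

/etc/rc.d/rc.docker stop
cat > /etc/docker/daemon.json << 'EOF'
{ "experimental": true, "ip6tables": true }
EOF
/etc/rc.d/rc.docker start
# then reboot the VM and check whether it still receives an IPv6 address via SLAAC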
    1 point
23. Hello @jj1987, that tip was worth its weight in gold! I've checked four times since the restart and the time has been correct every time - thanks. Best regards, Andreas. PS: great forum😀
    1 point
24. A shot in the dark: add hpet=disabled to the syslinux config?! Intel does have known problems with it, and if the internal timer is running wrong, that might also explain the time drift.
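In case it helps, the kernel options go on the append line of the boot entry on the flash drive; a sketch of what the edited default entry might look like, using the parameter exactly as suggested above (the actual contents of syslinux.cfg vary per system, so treat this as illustrative and keep your existing options):

# /boot/syslinux/syslinux.cfg
label Unraid OS
  menu default
  kernel /bzimage
  append hpet=disabled initrd=/bzroot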
    1 point
  25. What do you all think of my new build? Decided to spruce up the Torrent case due to the sheer amount of space inside the case. Just waiting for a new GPU to stick in the top slot for my gaming streaming VM. The RGB panels I got from ColdZero (SSD RGB covers). 4 fit perfectly vertically. Spec: Gigabyte X570s, Ryzen 3950X, 128GB Mem, 3xSSD, 2xnvme, WX4100 GPU, GT1030 GPU, BeQuiet Fans, Fractal Torrent case, Noctua CPU cooler and fans.
    1 point
26. Always a work in progress.
Fractal Design Define 7 XL
Asus X570
Ryzen 5950X
Vengeance 2x8 GB RAM
Quadro P2200
Random 750 W PSU
Evo 860 500GB cache
IronWolf 10TB double parity
Mix and match of shucked drives = 44TB storage. Max capacity 20 3.5" drives.
    1 point
27. You can change which device is eth0: scroll down in your network settings to "interface rules" and change the MAC address for interface eth0 to that of the PCIe card.
    1 point
28. Hi. I would very much like to see the amount of data being moved (in GB), the estimated remaining time, and the percentage done for mover jobs in the GUI on the dashboard page. It would help me plan downtime if I for some reason need to shut down my server. The current indicator does not give much more info than "mover is running".
    1 point