Leaderboard

Popular Content

Showing content with the highest reputation on 04/08/21 in all areas

  1. This release contains bug fixes and minor improvements. To upgrade: first create a backup of your USB flash boot device (Main/Flash/Flash Backup). If you are running any 6.4 or later release, click 'Check for Updates' on the Tools/Update OS page. If you are running a pre-6.4 release, click 'Check for Updates' on the Plugins page. If the above doesn't work, navigate to Plugins/Install Plugin, select/copy/paste this plugin URL and click Install: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg Bugs: if you discover a bug or other issue in this release, please open a Stable Releases Bug Report. Thank you to all Moderators, Community Developers and Community Members for reporting bugs, providing information and posting workarounds. Please remember to make a flash backup!
Edit: FYI - we included some code to further limit brute-force login attempts; however, fundamental changes to certain default settings will be made starting with the 6.10 release. Unraid OS has come a long way since it was originally conceived as a simple home NAS on a trusted LAN. It used to be that all protocols/shares/etc. were "open", "enabled" or "public" by default, and if someone was interested in locking things down they would do so on a case-by-case basis. In addition, it wasn't so hard to tell users what to do, because there weren't that many things that had to be done. Let's call this approach convenience over security. Now we are a more sophisticated NAS, application and VM platform, and I think it's obvious we need to take the opposite approach: security over convenience. What we have to do is lock everything down by default, and then instruct users how to unlock things. For example: force the user to define a root password upon first webGUI access; make all shares not exported by default; disable SMBv1, ssh, telnet, ftp and nfs by default (some are already disabled by default); provide a UI for ssh that lets users upload a public key, with a checkbox to enable keyboard password authentication; etc. We have already begun the 6.10 cycle and should have a -beta1 available early next week (hopefully).
    12 points
  2. It's hard to release it in the USA and around the world at the same time. Someone is always sleeping. Also, @limetech got us the latest kernel the same day it was released. It's hard to give the plugin devs a heads-up when the linux kernel wasn't even released before they went to bed.
    7 points
  3. @limetech Upgraded and no problems here, checked the version of runc and it's all good, so thanks for the inclusion of the latest version of Docker, you guys have saved me a lot of additional support, much appreciated!
    4 points
  4. The local syslog server requires the server to be working properly and may miss things when the system hangs unexpectedly. The mirror function simply copies everything simultaneously to syslog and flash, and can catch more in case the system hangs. Of course I am expecting everything to work and no more call traces.
    2 points
  5. You can also, as said, add this to your syslinux.conf file so that you don't have to do anything else or create a file if you installed the Intel-GPU-TOP plugin. But I was wrong above; you have to do it in this format: i915.force_probe=4c8b Simply append this to your syslinux.conf file (Main -> click on 'Flash', append it like that and click Apply at the bottom); it is then applied when the Intel-GPU-TOP plugin loads the drivers, so there is no need to create the file with those contents. From what I know this is a problem with Plex, and you can only solve it if you run a custom script, but you have to run it on every update of the container, otherwise it will stop working again. I would post an issue on the Plex forums about that.
    2 points
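The syslinux.conf edit described above can be sketched as a small shell snippet. The `append` line layout and the `4c8b` device id come from the post; the temp-file scaffolding and the `sed` one-liner are illustrative assumptions (on a real Unraid box you would edit the file on the flash drive via Main -> Flash instead):

```shell
# Hedged sketch: add i915.force_probe=4c8b to the kernel 'append' line.
# Shown against a temporary copy so it can run anywhere.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
label Unraid OS
  kernel /bzimage
  append initrd=/bzroot
EOF
# add the parameter only if it is not already present
grep -q 'i915.force_probe' "$cfg" || \
  sed -i 's|^  append initrd=/bzroot|& i915.force_probe=4c8b|' "$cfg"
grep '^  append' "$cfg"   # -> "  append initrd=/bzroot i915.force_probe=4c8b"
```

The `grep -q ... ||` guard makes the edit idempotent, so running it twice will not append the parameter a second time.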
  6. Hi Newbie, for me it runs as stable as it possibly can. I myself run a 5700xt in the reference design. Negative experiences always get highlighted the most; best form your own picture with your own setup and simply give it a try. I myself haven't heard anything bad from the unraid forum so far. In some cases the vendor-reset actually was the long-awaited solution, since for some people the by-now rather old navi-patch didn't fix all problems. By the way, my name is one line further up 😜
    2 points
  7. That will be your solution. For it you'll have to compile a custom kernel with @ich777's "unraid kernel helper" to integrate the vendor-reset. If you have installed other additional drivers via the community apps, those have to be compiled in as well, and the corresponding plugins have to be uninstalled before the reboot. For a how-to, please check the corresponding thread. If you run into problems, he will surely be glad to help you! I'm currently very tied up with work, so further answers from me may take a while 😜 Have fun once everything is set up
    2 points
  8. Big Navi cards should work ootb, as they fully support FLR as specified in the PCIe specifications. Strange... But as I read here, there seems to be an issue with the vbios. @trig229 you can also try to download your vbios from techpowerup and use that instead.
    2 points
  9. We're working on a design that lets driver plugins be automatically updated when we issue a release.
    2 points
  10. You should disable this behavior via clover or opencore. It's enough if the screen turns off. Is your card supported in macos? Is anyone running a mac with exactly your card? Are you using the vendor-reset? If not, you should!! Correct. I have always configured via the xml editor, since otherwise relevant parameters get lost when saving.
    2 points
  11. Done, container is already rebuilt and uploaded to Docker Hub.
    2 points
  12. Please try to append 'force_probe=4c8a' to your syslinux.config and reboot (if you do it like that, then please remove the contents of the i915.conf file). Please don't double post; you can also mention me here. Your i915.conf file has to be empty, or at least have only the middle line in it; the first and the last line are wrong.
    2 points
  13. No problem, here to help... I will keep you updated when everything is sorted out and the source is available.
    2 points
  14. The specific macvlan issue is discussed here. The specific kernel fix is described here; it comes down to broadcast messages not being properly handled.
    2 points
  15. Builds for 6.9.2 have been added (2.0.0 and 2.0.4 if you have enabled "unstable" builds) Thanks to @ich777 the process is now automated! When a new unRAID version is released ZFS is built and uploaded automatically. Thanks a lot to @ich777 for this awesome addition!
    2 points
  16. Keep in mind he's probably sleeping.
    2 points
  17. Not to be the one to complain, but we need to turn from reactive to proactive. I genuinely appreciate the support and the dev work put in here, but couldn't this have been anticipated and communicated to the developer ahead of time? If we are trying to bridge the gap between core product devs and community devs, this could be avoided. In either case, no harm no foul. The system is running and we can wait for the fix.
    2 points
  18. @ich777 will update them when he awakes. He is on the other side of the world.
    2 points
  19. ***Update***: Apologies, it seems there was an update to the Unraid forums which removed the carriage returns in my code blocks. This was causing people to get errors when typing commands verbatim. I've fixed the code blocks below and all should be Plexing perfectly now. =========== Granted this has been covered in a few other posts, but I just wanted to have it with a little bit of layout and structure. Special thanks to [mention=9167]Hoopster[/mention] whose post(s) I took this from. What is Plex Hardware Acceleration? When streaming media from Plex, a few things are happening. Plex will check against the device trying to play the media: the media is stored in a compatible file container; the media is encoded at a compatible bitrate; the media is encoded with compatible codecs; the media is a compatible resolution; bandwidth is sufficient. If all of the above is met, Plex will Direct Play, i.e. send the media directly to the client without it being changed. This is great in most cases as there will be very little, if any, overhead on your CPU. This should be okay in most cases, but you may be accessing Plex remotely or on a device that is having difficulty with the source media. You could either manually convert each file, or get Plex to transcode the file on the fly into another format to be played. A simple example: your source file is stored in 1080p. You're away from home and you have a crappy internet connection. Playing the file in 1080p is taking up too much bandwidth, so to get a better experience you can watch your media in glorious 240p without stuttering/buffering on your little mobile device by getting Plex to transcode the file first. This is because a 240p file requires considerably less bandwidth compared to a 1080p file. The issue is that, depending on which format you're transcoding from and to, this can absolutely pin all your CPU cores at 100%, which means you're gonna have a bad time.
Fortunately Intel CPUs have a little thing called Quick Sync, which is their native hardware encoding and decoding core. This can dramatically reduce the CPU overhead required for transcoding, and Plex can leverage this using their Hardware Acceleration feature. How Do I Know If I'm Transcoding? You're able to see how media is being served by first playing something on a device. Log into Plex and go to Settings > Status > Now Playing. As you can see, this file is being direct played, so there's no transcoding happening. If you see (throttled) it's a good sign; it just means that your Plex Media Server is able to perform the transcode faster than is necessary. To initiate some transcoding, go to where your media is playing and click on Settings > Quality > Show All > choose a quality that isn't the default one. If you head back to the Now Playing section in Plex you will see that the stream is now being transcoded. I have Quick Sync enabled, hence the "(hw)" which stands for, you guessed it, Hardware. "(hw)" will not be shown if Quick Sync isn't being used in transcoding. Prerequisites: 1. A Plex Pass - if you require Plex Hardware Acceleration, test to see if your system is capable before buying a Plex Pass. 2. An Intel CPU that has Quick Sync capability - search for your CPU using Intel ARK. 3. A compatible motherboard. You will need to enable the iGPU in your motherboard BIOS. In some cases this may require you to have the HDMI output plugged in and connected to a monitor in order for it to be active. If you find that this is the case on your setup, you can buy a dummy HDMI doo-dad that tricks your unRAID box into thinking that something is plugged in. Some machines like the HP MicroServer Gen8 have iLO/IPMI, which allows the server to be monitored/managed remotely. Unfortunately this means that the server has 2 GPUs, and ALL GPU output from the server is passed through the ancient Matrox GPU.
So as far as any OS is concerned, even though the Intel CPU supports Quick Sync, the Matrox one doesn't. =/ You'd have better luck using the new unRAID Nvidia plugin. Check Your Setup. If your config meets all of the above requirements, give these commands a shot; you should know straight away if you can use Hardware Acceleration. Log in to your unRAID box using the GUI and open a terminal window, or SSH into your box if that's your thing. Type: cd /dev/dri ls If you see an output like the one above, your unRAID box has Quick Sync enabled. The two items we're interested in specifically are card0 and renderD128. If you can't see them, not to worry; type this: modprobe i915 There should be no return or errors in the output. Now again run: cd /dev/dri ls You should see the expected items, i.e. card0 and renderD128. Give Your Container Access. Lastly we need to give our container access to the Quick Sync device. I am going to passive-aggressively mention that they are indeed called containers and not dockers. Dockers are a manufacturer of boots and pants and have nothing to do with virtualization or software development, yet. Okay, rant over. We need to do this because the Docker host and its underlying containers don't have access to anything on unRAID unless you give it to them. This is done via Paths, Ports, Variables, Labels or, in this case, Devices. We want to provide our Plex container with access to one of the devices on our unRAID box. We need to change the relevant permissions on our Quick Sync device, which we do by typing into the terminal window: chmod -R 777 /dev/dri Once that's done, head over to the Docker tab and click on your Plex container. Scroll to the bottom, click on Add another Path, Port, Variable, select Device from the drop-down and enter the following: Name: /dev/dri Value: /dev/dri Click Save followed by Apply. Log back into Plex and navigate to Settings > Transcoder.
Click on the button to SHOW ADVANCED and enable "Use hardware acceleration where available". You can now do the same test we did above by playing a stream, changing its quality to something that isn't its original format and checking the Now Playing section to see if Hardware Acceleration is enabled. If you see "(hw)", congrats! You're using Quick Sync and hardware acceleration [emoji4] Persist Your Config. On reboot, unRAID will not run those commands again unless we put them in our go file. So when ready, type into the terminal: nano /boot/config/go Add the following lines to the bottom of the go file: modprobe i915 chmod -R 777 /dev/dri Press Ctrl+X, followed by Y, to save your go file. And you should be golden!
    1 point
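The go-file step above can be sketched as follows. The two persisted commands come straight from the post; the temp file is an illustrative stand-in for the real /boot/config/go so the sketch can run anywhere:

```shell
# Sketch: persist the modprobe/chmod commands by appending them to the
# go file. A temp file stands in for the real /boot/config/go here.
go=$(mktemp)
printf '#!/bin/bash\n' > "$go"
cat >> "$go" <<'EOF'
modprobe i915
chmod -R 777 /dev/dri
EOF
cat "$go"
```

On a real box you would edit /boot/config/go directly (e.g. with nano, as the post shows) rather than creating a temp file.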
  20. Having any trouble setting up an Unraid 6.9 Capture, Encoding, and Streaming Server? Are you a streamer who uses Unraid? Let us know all about it here!
    1 point
  21. If you had used rm -rf /mnt/homebase then only that folder would be gone. You probably did something like rm -rf /mnt But it wouldn't hurt to post your diagnostics. At this point, your best recovery option is to pull the drives, attach them to a Windows box and try out UFS Explorer to recover the data. (It has a free option that lets you see what can be recovered, but the full version runs something like $80.)
    1 point
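The difference between the two commands in the post can be demonstrated safely against a throwaway directory. The homebase/disk1 names mirror the post; the mktemp scaffolding is illustrative so nothing real is deleted:

```shell
# Safe demonstration of a targeted rm -rf vs. wiping the whole tree,
# using a throwaway directory instead of the real /mnt.
base=$(mktemp -d)
mkdir -p "$base/mnt/homebase" "$base/mnt/disk1"
touch "$base/mnt/disk1/keep.txt"
rm -rf "$base/mnt/homebase"        # targeted: only that folder is gone
survivors=$(ls "$base/mnt")
echo "left after targeted rm: $survivors"
rm -rf "$base/mnt"                 # untargeted: the whole tree is gone
```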
  22. So I restored my flash drive from this morning's backup, rebooted, and I'm now able to reach my server. Thank goodness for backups!
    1 point
  23. @xXx The file should look like this, except change it to 4c8b for your system.
    1 point
  24. With the drop of 6.9.2, I'm going to go ahead and toggle host access to custom networks later today and see if I immediately get a hang. I know my experience is slightly different than others in this thread, but it is the single setting that causes my kernel panics so I think it could be related. Will probably have an update within the next 24 hours.
    1 point
  25. Quick OT: 60 MB/s upload.... what a dream. I recently got to upgrade to a 5 MB/s upload and now I feel slow again
    1 point
  26. Do you have access to your network shares or any other apps that are hosted on it? That would be a good indicator of whether the server is actually running at all. I had to hook a monitor and keyboard up to mine. If you can send the command "/etc/rc.d/rc.nginx start", it should either start the web GUI (NGINX) or tell you something else.
    1 point
  27. Just to confirm, you have the following:
      root@computenode:~# cat /boot/config/modprobe.d/i915.conf
      options i915 force_probe=4c8a
      root@computenode:~#
      I don't have a new-gen CPU, but I don't get any errors.
      root@computenode:~# modprobe -c | grep 4c8a
      options i915 force_probe=4c8a
    1 point
  28. Oof, I was looking at my problem for something like 24h non-stop, and now, after creating a topic, I solved it myself within half an hour. So for anyone who might stumble across this, here's what did the trick for me: the configuration was correct, my graphics card and the audio were stubbed, BUT I had not changed my BIOS configuration to boot from the iGPU instead of the PCIe device.
    1 point
  29. In the meantime I've managed to solve it; the MTU had been set wrong. I had 4000 set on the Mac and the server runs with 1500. Now everything works again 🙂
    1 point
  30. Awesome! And thanks for your time taken to explain it all to me, it's appreciated.
    1 point
  31. "UDMA_CRC_Error_Count" usually has to do with the physical connection (i.e. plugs, cables, sockets), not directly with the device itself. The error counter is also not reset once the actual problem (a loose connector or similar) has been fixed. So you have to keep an eye on the number to see whether you have found the real problem. Whether that is enough for a warranty replacement? If bought from Amazon, probably yes; their customer service is extremely accommodating and, when in doubt, doesn't have excessive technical understanding either. But that doesn't solve your actual problem. So press the cable firmly into the sockets at both ends again, possibly use a different cable, or switch to another SATA port on the mainboard. And check each time whether the number changes or not. It's a time-consuming process (unless you notice right away while wiggling the cable that a connection was loose there). Auntie Edit: I just see that "Initial" was already at "2", so it hasn't even increased. In that case, first just keep an eye on whether the counter changes at all.
    1 point
  32. Try this:
      #!/bin/bash
      currentIP="$(wget -qO- ipecho.net/plain)"
      filepath="/mnt/cache/myip.txt"
      if [ ! "$(cat $filepath | tail -5 | grep ${currentIP})" ]; then
          echo "$(date +'%Y-%m-%d') ${currentIP}" >> ${filepath}
      fi
      Quick explanation: the if grabs the last 5 entries of your myip.txt file via tail and then checks via grep whether the current IP is found among them, and the whole thing is inverted by the ! at the start. In other words: if none of the last 5 entries in your myip.txt equals the current IP, append the current IP to myip.txt. (The last 5 entries because it can well happen that after a week or a few days you get the same IP again that you already had, and that one would then not be written if you simply searched blindly for the IP; you can also replace the 5 with 3 or so if you want to check even fewer, or if your IP only rarely changes.) In my script I also prepend the date so that you can see when it was; a time of day seems a bit over the top here. Hope this makes sense. EDIT: You can additionally write to the syslog, just so you can see what it is doing...
      currentIP="$(wget -qO- ipecho.net/plain)"
      filepath="/mnt/cache/myip.txt"
      if [ ! "$(cat $filepath | tail -5 | grep ${currentIP})" ]; then
          logger "Writing current public IP: ${currentIP} to myip.txt"
          echo "$(date +'%Y-%m-%d') ${currentIP}" >> ${filepath}
      else
          logger "Nothing to do, current public IP: ${currentIP} hasn't changed."
      fi
    1 point
  33. Please read this for more information (I try my best to build the packages as quickly as possible, but this time something prevented the automatic upload of the packages):
    1 point
  34. @limetech Thank you for the update. Two servers updated from 6.9.1 without apparent issues. This was about as painless as it gets! ☺️
    1 point
  35. It will be because new kernel versions haven't been created for the new release yet in his repo.
    1 point
  36. @hugenbdd took over this plugin a while ago. The thanks on the current iterations go to him.
    1 point
  37. With a doubled angle bracket (>>) you can append text to a file: https://stackoverflow.com/a/11162510/318765 And you only get the public IP by downloading a website that answers with the IP. There are several of those: https://stackoverflow.com/questions/14594151/methods-to-detect-public-ip-address-in-bash https://www.cyberciti.biz/faq/how-to-find-my-public-ip-address-from-command-line-on-a-linux/#:~:text=Use 3rd party web-sites to get your IP Example: http://checkip.amazonaws.com/ For example, you could now assign the IP to a variable: server_ip="$(curl checkip.amazonaws.com)" And then you put the contents of the variable into a file: echo $server_ip >> "/mnt/cache/myip.txt" Test the two commands in the web terminal, and if they work, put them into a script and have it run e.g. every 5 minutes. By the way, with this command you can print the contents of a file: cat "/mnt/cache/myip.txt"
    1 point
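The two commands above can be combined into one small script. The IP is stubbed with a fixed documentation address here so the sketch runs without network access; in real use you would keep the curl lookup from the post:

```shell
# Sketch combining the curl + append commands from the post.
# The IP is stubbed so the example runs offline.
server_ip="203.0.113.7"   # real use: server_ip="$(curl -s checkip.amazonaws.com)"
ip_file=$(mktemp)         # real use: /mnt/cache/myip.txt
echo "$(date +'%Y-%m-%d') $server_ip" >> "$ip_file"
cat "$ip_file"
```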
  38. The problem here is that I have to compile QEMU from source and I don't know how exactly QEMU is compiled for Unraid, and since I don't want to break something I can't do this easily... I tried it once and it compiled just fine, but some VMs on my dev machine wouldn't fire up after installing my custom QEMU build, since I'm pretty sure I didn't compile it with the right flags and options to work with Unraid. I'm also looking for a way to compile QEMU with GL support, since I'm currently looking into Intel GVT-g, and this has really big potential for HW-accelerated VMs with vGPUs (for example iGPUs). Currently the VMs run with HW-accelerated video if you use them over RDP or Parsec, and you can feel the difference that they have a "real" graphics card attached to them, but the GL support for QEMU is missing, and so I can't get SPICE with MESA EGL, QEMU Display, etc. to work. Eventually @eschultz can help here. 😃 I think they have been planning to release ARM support for quite some time, if I'm not mistaken, but I think some heavy modifications to the GUI are needed; otherwise this is all command-line only or, in terms of Unraid, XML-editing only. The build options for QEMU would be really nice, but I also understand that they (as a company) don't give them to us, since you can do very bad things with them, and if something isn't working, supporting it can be very "frustrating" if the user doesn't tell them in the first place that something custom is installed. I will definitely look into QEMU again, but currently my time is very limited... EDIT: Eventually I should rename the container from "Unraid-Kernel-Helper" to "Something-For-Everyone"...
    1 point
  39. I am having issues with the latest Docker image and I cannot access the application. If I look in the log file located at '/config/supervisord.log' then I see the following message:- The JAVA_HOME environment variable is not defined correctly. This environment variable is needed to run this program. NB: JAVA_HOME should point to a JDK, not a JRE. Q. What does it mean and how can I fix it? A. See Q10 from the following link for the solution:- https://github.com/binhex/documentation/blob/master/docker/faq/general.md EDIT - unRAID 6.9.2 has just been released; this includes the latest version of Docker, which in turn includes the latest version of runc, so if you are seeing the message above then the simplest solution is to upgrade to v6.9.2
    1 point
  40. You can also specify a port like this:
    1 point
  41. I have now integrated 'sendemail' into the container and updated it in a few other ways (please be sure to update the container on the Docker page before trying this). To send a mail, configure luckyBackup like this: Please notice that I have appended '-xu' (SMTP username) and '-xp' (SMTP user password) to the arguments to actually be able to send an email, because my mail server, like I think most mail servers nowadays, requires authentication, and it works without a problem (if you enter 'sendemail' in the command field, click on the green arrow on the right side and it will fill in all the necessary arguments in the arguments line, without the two arguments mentioned above). Please report back if this is also OK for you, because I don't have an easy way of sending a mail through a Docker container from Unraid.
    1 point
  42. Yes and no... I didn't add it because you can actually make things worse if you don't know what you are doing... I have many other variables integrated in containers that are not visible to users, because these are "in case someone needs this for whatever reason" variables for me... No one has ever requested this, but I will look into it ASAP.
    1 point
  43. I was running the preview of v3 for some weeks without problems. Recently the preview tag got deprecated, so I removed it. Suddenly I was back on v2 and all settings were gone. I changed the tag specifically to "latest" and with a force update I had v3 installed with all settings restored. Weird, but it worked.
    1 point
  44. Thank you for your authoritative answer!
    1 point
  45. WebAPI Plugin for Organizrv2 Ok, so let's figure this out if we can. I've looked over the forums and the general consensus is that the WebGUI add-plugin feature doesn't and won't work. Meh, whatever. But to manually add it: Download WebAPI-0.4.0-py3.7.egg (or one of the variants) Place it in (assuming SpaceInvader One's setup guide) appdata/binhex-delugevpn/plugins/ Restart the Connection Manager/Daemon .... Profit. However, these instructions don't seem to work for me, or for a bunch of folks in this thread. Vital info: Deluge 2.0.4.dev38 binhex/arch-delugevpn (no version specified, updated today) Python 3.9.2 (default, Feb 20 2021, 18:40:11) unRAID 6.9.1 I've tried these eggs: deluge_webapi-0.4.0-py3.6.egg from https://pypi.org/project/deluge-webapi/ WebAPI-0.4.0-py3.6.egg from https://github.com/idlesign/deluge-webapi/tree/master/dist WebAPI-0.4.0-py3.7.egg from https://github.com/idlesign/deluge-webapi/tree/master/dist WebAPI-0.3.2-py2.7.egg from https://github.com/idlesign/deluge-webapi/tree/master/dist WebAPI-0.3.1-py2.7.egg from https://github.com/idlesign/deluge-webapi/tree/master/dist None of these seem to work, either by loading them or by dropping them into the directory and restarting the docker. Manually expanding the egg files and adding the folders to the plugins/ dir also doesn't seem to do anything.
Most places also remind us to update binhex-delugevpn/core.conf to include the plugin. Note "WebAPI" added to enabled_plugins:
    "download_location_paths_list": [],
    "enabled_plugins": [
        "LabelPlus",
        "AutoAdd",
        "Scheduler",
        "WebAPI"
    ],
    "enc_in_policy": 1,
    "enc_level": 1,
    "enc_out_policy": 1,
    "enc_prefer_rc4": true,
    "geoip_db_location": "/usr/share/GeoIP/GeoIP.dat",
    "ignore_limits_on_local_network": false,
    "info_sent": 0.0,
Some posts have mentioned restarting the daemon to initialize the plugin; however, 1) wouldn't it no longer be activated after a docker restart?, and 2) I don't know about anyone else, but if I select the daemon in Connection Manager and hit "Stop Daemon" I get an error message window that just says "An error Occurred", so... womp womp. Meanwhile, back at Google... My old buddies at OpenMediaVault (it was good at the time, but I'm soooooo glad I made the switch, omg) are also struggling with adding plugins to deluge. I found a few posts that outline getting AutoRemovePlus-0.6.2-py3.7.egg to work, so I followed those (more or less the same as above, adding the egg to the plugins dir) to make sure it wasn't just the WebAPI egg itself. Sadly, no joy. I know it's a few years old now, but SpaceInvader One's plugin vid also doesn't help with this. dev.deluge-torrent.org appears to have gone offline while I was typing this, so that's not ideal. But if anyone has an idea of how to enable plugins I'd appreciate some pointers. If nothing else, perhaps I could request @binhex include the WebAPI plugin in the build? Given the number of requests it seems like it might be a well-received addition ¯\_(ツ)_/¯ ---------------------------------------- WORKING! ---------------------------------------- Right, welp, got it working through good, old-fashioned luck. There's a github support thread here: https://github.com/idlesign/deluge-webapi/issues/27 that got me going on the right path.
Basically, you need to download the plugin linked here: https://github.com/idlesign/deluge-webapi/files/4458994/WebAPI-0.4.0-py3.8.zip And then rename the file to "WebAPI-0.3.9-py3.9.egg", then copy it into your config/appdata/binhex-delugevpn/plugins/ directory. Also expand it as if it were a zip file; the folder should be named "WebAPI-0.3.9-py3.9" by the expanding software. I don't know if it's the egg, the folder, or both that deluge wants to see, but I also don't care because it's working. You do need to add "WebAPI" to the binhex-delugevpn/core.conf as seen above. Make sure to mind your commas! Now reboot the docker and, hopefully, you'll see WebAPI available in Settings > Plugins and can activate it. Once activated it will have a settings item in the left list; click on that and check Enable CORS. Now go back over to Organizrv2 and in the Deluge Home settings enter the [ip]:[port] of your deluge instance using the password that you use for the webUI (as best as I can tell, this doesn't work if you don't have a pw set). Hope this helps folks in the future
    1 point
  46. Workaround: "Enable Tone mapping" with Unraid 6.9.1 and an Nvidia decoder I figured I'd write this for posterity in case anyone else wants to set it up. My System: Unraid 6.9.1 (latest stable) I3770K 32GB RAM GTX 1060 6GB Latest Jellyfin / linuxserver docker Everything worked as expected but when I "Enable Tone mapping" and then try to watch a file with HDR data I get a player error of "This client isn't compatible with the media and the server isn't sending a compatible media format." (no error when "Enable Tone mapping" is not checked, no error if watching a file without HDR data). I did some searching prior to posting and did find posts like this one on Reddit. Following that post I found the expected directory doesn't exist on the Docker I used (latest as of 3/26/2021 for Unraid from linuxserver). The following fixed it: mkdir -p /etc/OpenCL/vendors echo 'libnvidia-opencl.so.1' > /etc/OpenCL/vendors/nvidia.icd As soon as that was done, the files that failed to play were able to play and tone mapping was evident compared to when the option was turned off when viewed via transcoding. Don't know if it makes sense to add those directories and entry by default, but if not it might be a good thing to append to the nvidia install hints.
    1 point
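The two-line OpenCL fix above can be sketched against a temporary prefix so it can be tried anywhere without root access; inside the container the real path is /etc/OpenCL/vendors/nvidia.icd, exactly as the post shows:

```shell
# Sketch of the OpenCL ICD registration fix, run against a temp prefix
# instead of the container's real /etc so no root access is needed.
prefix=$(mktemp -d)
mkdir -p "$prefix/etc/OpenCL/vendors"
echo 'libnvidia-opencl.so.1' > "$prefix/etc/OpenCL/vendors/nvidia.icd"
cat "$prefix/etc/OpenCL/vendors/nvidia.icd"   # -> libnvidia-opencl.so.1
```

The .icd file simply names the vendor library for the OpenCL loader to pick up, which is why creating it is enough to make the Nvidia device visible to the transcoder.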
  47. Using the plex version of SQLITE3 is mentioned here - it worked fine from the container console via the Unraid webgui (remember to kill the running plex task):- https://forums.plex.tv/t/hoping-for-help-with-db-corruption/701344 Cheers, -jj-
    1 point
  48. Sometimes it's the simple things you miss. A quick sg_format to remove the protection worked, and my array is now rebuilding. Thanks Johnnie
    1 point