Leaderboard

Popular Content

Showing content with the highest reputation on 01/01/21 in Posts

  1. The Ultimate UNRAID Dashboard Version 1.5 is here! This is another HUGE 😁 update adding integrated Plex monitoring via Varken/Tautulli. This update is loosely derived from the official Varken dashboard, but I stripped it down to the bolts, modded the crap out of it, and streamlined it with a straight Plex focus. Honestly, the only code that still remains from their official dash is the single geo-mapping graph, as it is not actually an editable panel, but rather straight JSON code. I want to say thank you to that team for providing a great baseline to start from, and for all of their previous work! The UUD Version 1.5 adds 50 new panels within 3 new sections. I have placed these strategically within the UUD right below the Overwatch section, as this is the second data set I would want to see, right after my overall server health. As always, with greater features comes a greater need for plugins and dependencies. I have provided links and resources below to help you along.
     New dependencies:
     Install guides/tutorials: https://github.com/Boerderij/Varken https://wiki.cajun.pro/books/varken/chapter/installation https://dev.maxmind.com/geoip/geoip2/geolite2/
     Dockers: Varken (install with default setup / follow the current project install guide), Tautulli (install with default setup / follow the current project install guide)
     Docker AppData: Varken config (follow the Varken install guide)
     New Grafana data source: "Varken"
     New Grafana plugins: Pie Chart Panel (run the following command in the Grafana Docker: grafana-cli plugins install grafana-piechart-panel), World Map (run the following command in the Grafana Docker: grafana-cli plugins install grafana-worldmap-panel)
     Third party: FREE GeoLite2 license registration (follow the Varken install guide). Without this, the MAP WILL NOT WORK.
     Please note: this release is an example tailored to MY Plex setup/library. The intent here is that you will take this and modify it for your Plex library/setup. You have everything you require to template new panels and to add new media sections as needed!
     Highlights:
     Real-time Plex monitoring: extremely detailed breakdown of all current streams; current number of streams; internal and external streaming bandwidth breakdown; stream origination (geo location) with an interactive map; streaming types; streaming devices; detailed user monitoring; current library statistics broken out by library section.
     Plex library growth: library growth over time (day/week/month/year). Currently templated for the following media sections: TV Shows, Movies, Documentary TV Shows, Documentary Movies, Anime Shows, Music. You can add more...
     Historical Plex monitoring: heat maps to see your overall streaming saturation (last day/week/month/year); device types (last month); stream types (last month); media types (last month); media streaming qualities (last month); stream log (last week, with a limit of the last 1,000 entries for performance reasons); the log captures all streaming activity at 10-minute intervals.
     Screenshots (with personal info redacted): I am very pleased that I could still get this out to you all in 2020 (my time), so hopefully this will ease us into a better 2021! As always, I'm here if you need me. ENJOY and Happy New Year! See post number 1 for the new Version 1.5 JSON file!
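     The two Grafana plugin installs mentioned in the dependencies can be run from the Unraid console against the Grafana container. A minimal sketch, assuming the container is named "grafana" (adjust to your setup):

```shell
# Assumption: the Grafana Docker container is named "grafana".
docker exec grafana grafana-cli plugins install grafana-piechart-panel
docker exec grafana grafana-cli plugins install grafana-worldmap-panel
docker restart grafana   # plugins are loaded at startup
```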
    3 points
  2. I have a fix for the file system not showing 'precleared' in UD. I also implemented the temperature display and the reads and writes during a preclear. @gfjardim I see the problem with the blinking of the preclear status line in UD. UD refreshes the page every 3 seconds, while you are updating the status periodically as if UD did not refresh itself; the two are clashing. The best bet is to let UD do the page update and have preclear just provide the information. Also take a look at the /usr/local/state/devs.ini file in 6.9 RC2. Unraid is providing disk information for Unassigned Devices there. I use it for the disk spin, reads, writes, and temperature information; I don't query the disk. UD keeps this file current when a hot-plug event occurs.
    2 points
  3. Microsoft, Apple and Google all add/remove things in their OSes to ensure security and safety, and can/will make old apps obsolete if they are not updated for the newest release. You are not being charged to update your OS to a newer version, and you get all the benefits of the update as well. You have stated you have reasons for not updating, but given there is no upside to staying on an older release other than personal choice, that isn't a valid reason to badger a developer into going out of their way to accommodate you.
    2 points
  4. Hi all! In search of a way to get my iCloud pictures on my array and keep them in sync with iCloud (pull only). I ended up writing a docker template for the boredazfcuk/icloudpd docker container. This is my first template so bear with me. If someone has suggestions or comments about it please let me know! The template is located here. https://github.com/Womabre/unraid-docker-templates Known issues - Your iCloud password cannot contain special characters (I know of * and & to be problematic).
    1 point
  5. Hi everyone! Unraid is great, and like many others I am using my Unraid server for Plex (and of course other things). So I would really like to collect all the tweaks and "hacks" done by others to increase the performance of large media servers doing transcoding. First, just to get the "normal" recommendations listed: appdata on the cache drive (as fast as possible: SSD/M.2); HW encoding using the Unraid Nvidia plugin + a GPU; structure of media in each folder / optimize files for transcoding? Specific Unraid tweaks: moving transcoding to RAM (update: a better guide for doing this); DB optimization over time (mostly Plex, not Emby): do people with large collections run this even if the DB is not corrupt, and should it load much faster afterwards? https://support.plex.tv/articles/201100678-repair-a-corrupt-database/ https://hub.docker.com/r/nouchka/sqlite3/dockerfile/ Move the sqlite3 DB file into a ramdisk? Link from Reddit; I guess it would just be a matter of copying to /temp? (If you already have your appdata on M.2, it might not make a big difference.) Use a direct /cache/ path: big thanks to @mgutt for sharing this! (If you have the space, then adding appdata/docker.img really makes a difference!) What other things or tips do you have to speed things up? Speed up the UI? Has anyone tried moving the DB to RAM, and did it help? Looking forward to getting some input from the power users! 👍 and updating this post with new things! Thanks for a really great forum with so many helpful people. (I placed this post here because it should not be about the Dockers but about the things around them and Unraid; please move it if you find a better place for it!)
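     For the "transcoding to RAM" tweak mentioned above, one common approach is to map Plex's transcode directory onto RAM-backed storage on the host. This is a sketch, not a full template; the container name and host path are assumptions, and /config and /transcode are the volumes used by the official plexinc/pms-docker image:

```shell
# Sketch: point Plex's transcoder at RAM-backed /tmp on the Unraid host.
# Then in Plex: Settings > Transcoder > "Transcoder temporary directory" = /transcode
docker run -d --name plex \
  -v /tmp/plex-transcode:/transcode \
  -v /mnt/cache/appdata/plex:/config \
  plexinc/pms-docker
```

     On an Unraid install the same mapping can be added as an extra path in the Docker template instead of a docker run command.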
    1 point
  6. I have updated the OpenVPN Server & Client plugins for the V6.2+ plugin manager. Post all issues you have in this thread!! To install: go to the 'Extensions' page, enter the links below into the 'Install Extension' line, and then click the 'Install' button.
     Client: https://raw.githubusercontent.com/petersm1/openvpn_client_x64/master/openvpn_client_x64.plg
     Unpack your provider certificate/files to /boot/openvpn (create that folder if it does not exist); there can now be several ovpn files. More info about client installation: http://lime-technology.com/forum/index.php?topic=19439.0
     Server: https://raw.githubusercontent.com/petersm1/openvpnserver/master/openvpn_server_x64.plg
     Post-installation for the OpenVPN server:
     1: Change and save settings in Cert and Misc Settings.
     2: Install Easy-rsa.
     3: Generate the server certificate. This takes several minutes.
     4: Set and save the server config; even if you use the default settings, you need to save.
     5: Generate clients.
     6: Start the server.
     7: When you have created the client file, preferably an "inline file" (an inline file is a single *.ovpn), on the server, you can email the config (with file extension .ovpn) as an attachment from an email account on your computer (or webmail) to the email address set up in the Mail app on iOS. In the Mail app, open the email and open the .ovpn file, then choose to open it with OpenVPN. If you did it right, OpenVPN opens and you can tap the + icon next to your config to import it. Now you can simply slide Off to On and your VPN connects.
     Make sure the OpenVPN server is accessible from the internet. That means you need to open the server ports on the router and forward those ports to the fixed LAN IP of the unRAID server. More info: http://lime-technology.com/forum/index.php?topic=28557.0
     If you appreciate my work on my plugins. //Peter
    1 point
  7. Yep. I assume the built in authentication with a strong password to be secure enough. Do you have evidence or hearsay to the contrary?
    1 point
  8. Since all pre-clear does is read and write to the disk, if it caused the disk to fail then it was almost certainly going to fail shortly anyway. It might be worth seeing if you can get any SMART information from the disk, and if you can, whether it passes the extended SMART test. Either of those failing means the disk will need replacing.
    1 point
  9. Thanks Binhex! I'd really appreciate that. Just sent you a beer from the UK. Happy new year!
    1 point
  10. That's really strange; that's the first report I've got that it's not working or has stopped working. Please keep an eye on it and contact me again if it happens the next time; maybe there was some other "bug" within Unraid that caused the error. BTW, please post in the plugin support subforum for DVB if this happens again. Also, Happy New Year!
    1 point
  11. You can get much more than that with SATA disks; see turbo write. I get between 100 and 200MB/s+, depending on whether I'm writing to the outer or inner sectors of the disks. SAS disks are not going to make much (if any) difference with Unraid.
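     The "turbo write" mode mentioned above (reconstruct write) is normally set in the webGUI under Settings > Disk Settings > Tunable (md_write_method). As a hedged sketch, my understanding is it can also be flipped from the console via Unraid's mdcmd; the numeric values here are assumptions, so verify them against your release:

```shell
# Sketch: toggle the array write method from the console.
# Values are assumptions; the supported setting is in Settings > Disk Settings.
/usr/local/sbin/mdcmd set md_write_method 1   # reconstruct write ("turbo write")
/usr/local/sbin/mdcmd set md_write_method 0   # read/modify/write (default)
```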
    1 point
  12. I think you can just delete the file and restart the container, and Plex will download it again. The process should not flash but stay visible until it has finished transcoding. Why not start with a fresh new Plex container and appdata, to be sure there are no leftovers from the guide and your attempts? Then follow the instructions on the first page of the Nvidia driver thread.
    1 point
  13. For me it was the Dolby Atmos Home Theater upgrade. Happy New Year!
    1 point
  14. It does, because when the folder is empty, the mover moves the folder to the HDD, and the HDD then has to spin up so that Papermerge can scan the folder. If the empty folder instead stays permanently on the SSD, that does not happen. Which is also correct, as long as the appdata share is set to "prefer", "only" or "yes". That is the whole point: the share is cached. If the share is set to "Prefer", it uses the cache exclusively, unless the SSD fills up. With "Only" it uses the cache exclusively. And with "Yes" it uses the cache until the mover becomes active and moves the files to the array. Through /mnt/user/appdata you always see the sum of /mnt/cache/appdata, /mnt/disk1/appdata, /mnt/disk2/appdata, and so on. It is only a "virtual" path.
    1 point
  15. My personal opinion is that the USB is more reliable than the drives. I've got a total of 24 hard drives between two servers, and my pair of flash drives have outlasted every one of them. The USB is only used as a boot device. It is used extremely little (only at boot, and only to store any settings changes). At the end of the day, most users will actually read or write to it maybe once a month at best. You're never actually out of luck: you can always set up a trial key on a new flash drive, assign the drives accordingly, and you're basically off to the races. That being said, some people do have trouble with the flash drives. I believe Limetech has implied that a better solution for replacement etc. is coming soon(tm).
    1 point
  16. Definitely a bad choice for Unraid. Ideally, parity operations happen in parallel. Any write to the parity array will read and write both parity and the data disk to be written, or read all disks and then write parity and the data disk, depending on how you have it set. And rebuilds and parity checks will read all disks. If each disk has a separate connection, all of this can happen on all disks at the same time. If all of the disks are trying to use a single connection to the computer, things are going to go very slowly, since it has to work with each disk separately.
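     The point about a single shared connection can be made concrete with some back-of-the-envelope arithmetic; the numbers below are illustrative, not from the post:

```shell
# Rough time to read every disk once (as a parity check does) with four
# 3TB disks at 150 MB/s each.
disks=4; size_gb=3000; speed_mb_s=150
total_mb=$(( disks * size_gb * 1024 ))

# Dedicated connections: all disks are read in parallel (aggregate 600 MB/s).
parallel_s=$(( total_mb / (disks * speed_mb_s) ))

# One shared connection: the link caps the aggregate at one disk's speed.
shared_s=$(( total_mb / speed_mb_s ))

echo "parallel: ${parallel_s}s  shared: ${shared_s}s"
```

     With these numbers the parallel check takes roughly 5.7 hours, while the same check squeezed through one shared link takes roughly 22.8 hours, four times as long.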
    1 point
  17. That was something needed a year ago to get hardware decoding working. Hardware encoding has been working without any need to rename or create files. So revert your changes.
    1 point
  18. [Qemu 5.2] -- Pay attention to addresses in your config.plist for devices
     When Qemu 5.2 becomes available for Unraid (I'm still on 6.8.3; qemu 5.2 should be inside 6.9rc2 (?), or for sure in 6.9 stable), pay attention to the addresses of your devices in your config.plist. I just tried Qemu 5.2 on another machine and noticed that the layout-id for built-in audio was not correctly applied, resulting in no audio detected once inside macOS, though the boot chime still worked at boot (bootloader level). I then noticed that the built-in=true property of my en0 ethernet was also not applied inside macOS. Qemu 5.2 has probably changed the way it manages addresses; it's a bug fix, so it's ok. I had noticed a wrong address assignment when playing with the boot chime under qemu 5.1 and lower: with Qemu 5.1 and lower, the macOS address for audio was PciRoot(0x1)/Pci(0x2,0x0), but the bootloader detected audio at PciRoot(0x0)/Pci(0x2,0x0). With Qemu 5.2, the right address of the PciRoot is now (0x0).
     As an example, this was my old (qemu 5.1 and lower) config.plist for the device injection:
     <key>DeviceProperties</key>
     <dict>
       <key>Add</key>
       <dict>
         <key>PciRoot(0x1)/Pci(0x1,0x0)/Pci(0x0,0x0)/Pci(0x1,0x0)</key>
         <dict>
           <key>built-in</key>
           <data>AQ==</data>
         </dict>
         <key>PciRoot(0x1)/Pci(0x2,0x0)</key>
         <dict>
           <key>layout-id</key>
           <data>BwAAAA==</data>
         </dict>
         <key>PciRoot(0x1)/Pci(0x1F,0x0)</key>
         <dict>
           <key>compatible</key>
           <string>pci8086,2916</string>
           <key>device-id</key>
           <data>FikA</data>
           <key>name</key>
           <string>pci8086,2916</string>
         </dict>
       </dict>
     ....
     The first block is for en0 built-in, the second block is for the audio layout, and the third block is for the LPC. As you can see, for all devices the PciRoot was PciRoot(0x1). In the same config.plist, for the boot chime I had:
     <key>Audio</key>
     <dict>
       <key>AudioCodec</key>
       <integer>0</integer>
       <key>AudioDevice</key>
       <string>PciRoot(0x0)/Pci(0x2,0x0)</string>
     ....
     with PciRoot(0x0): again, this was needed instead of 0x1, because the bootloader didn't see the audio device at PciRoot(0x1).
     Now, with Qemu 5.2, the same snippet is replaced with:
     <key>DeviceProperties</key>
     <dict>
       <key>Add</key>
       <dict>
         <key>PciRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x0)/Pci(0x1,0x0)</key>
         <dict>
           <key>built-in</key>
           <data>AQ==</data>
         </dict>
         <key>PciRoot(0x0)/Pci(0x2,0x0)</key>
         <dict>
           <key>layout-id</key>
           <data>BwAAAA==</data>
         </dict>
         <key>PciRoot(0x0)/Pci(0x1F,0x0)</key>
         <dict>
           <key>compatible</key>
           <string>pci8086,2916</string>
           <key>device-id</key>
           <data>FikA</data>
           <key>name</key>
           <string>pci8086,2916</string>
         </dict>
       </dict>
     ....
     with PciRoot(0x0) as the root address. So pay attention to the addresses, as this may apply to all the addresses you have in your config.plist.
    1 point
  19. If you have modified any config files, you have probably messed something up. There is no reason to modify any config files to get hardware transcoding working. You could try disabling both transcoding options, restarting the container, and then enabling them again and restarting the container.
    1 point
  20. Data drives in unRAID are separate disks, each with its own independent file system. No RAID. User share configurations determine how files are allocated across the independent disks. Shares can be limited to one disk, span all disks, or anything in between. The allocation method and split levels help determine how full disks get before spilling over to other disks, and at what point content folders should be split and allowed to spill over to the other disks the share is configured to use.
    1 point
  21. Unraid IS NOT RAID. Best if you don't even attempt to use a RAID controller with Unraid. Which controller is it? Can it be flashed to IT mode?
    1 point
  22. Yes, because it saves energy, and the performance is not limited by the bridge controller of the adapter. As an example, ASMedia has low energy consumption but also low performance, which is sufficient for 2 HDDs. LSI has huge performance but draws more power than the entire board, incl. the CPU, at idle. My suggestions:
     - Fujitsu D3644-B (ECC, 6x SATA)
     - Gigabyte C246M-WU4 (ECC, 8x SATA, two M.2)
     - Asus WS C246M Pro (ECC, 8x SATA, two M.2)
     - Gigabyte W480M Vision V (8x SATA, two M.2, 2.5G LAN, but sadly no ECC RAM with an i3, only with a Xeon)
     Note: i5/i7/i9 never support ECC RAM.
    1 point
  23. Some comments:
     - Using 2 RAM modules is better than using 4.
     - If the add-on GPU is not for gaming, Plex, or passthrough (just a local console), I might go with X570 (ASRock Rack has a similar MB) and a 4000-series APU, i.e. the 4750G (or use the IPMI VGA, without an iGPU/GPU).
     If 8 cores (without 16 threads) is acceptable, then I recommend the Intel 9700K route + C246 (8 SATA ports). It depends on your needs and preferences.
    1 point
  24. I also got that error. I deleted the custom_ovmf folder and started over. I would also set it up using the default configuration (it will be slow, but it will work), and then once the OS is installed and working, you can edit the hardware, edit the helper script, and run it again. That's what I've just done; trying to give the machine more cores and running the helper before the initial setup was done did not work for me.
    1 point
  25. Install the NginxProxy Docker on your Unraid server. You will then need a dynamic DNS, e.g. DuckDNS. In NginxProxy you can generate a certificate for the DNS name and forward it to the Nextcloud port. In the Fritz!Box, create a port forward from 443 to the NginxProxy; the port is listed there under the Docker settings, I believe 18443. Regards
    1 point
  26. I could tell from watching this saga play out that a huge misunderstanding and a lack of communication were at the root of it. The LSIO guys were clearly blindsided after a tremendous effort on the part of those mentioned to provide functionality in unRAID that was far more than just a niche solution. I think Tom unwittingly underestimated both the effort put forth by others and the widespread appeal of the previous solution to the community, and went about releasing a native solution without fully understanding the potential impact and without communicating his intentions beforehand. When 6.9 was first released in beta, Jon P. was on a podcast in which it was definitely stated that, at some point, Limetech intended to integrate Nvidia drivers into unRAID. The problem is not that this, in fact, happened; rather, the issue is with how it was done and communicated (not well). Were I in the shoes of the LSIO guys (and I never will be, as I do not have that skill), I can understand why they reacted the way they did. It was a true blindside that appeared to diminish and disrespect their effort. Tom has acknowledged this. I hope Tom's mea culpa and acknowledgement can be accepted, as I do believe the fallout was 100% unintentional. Owning up to a mistake like this and accepting responsibility is not easy, but it was necessary, and I am pleased to see he was willing to do it.
    1 point
  27. I found another tweak by accident: direct disk access (bypassing Unraid's SHFS). Usually you set your Plex docker paths as follows: /mnt/user/Sharename. For example, this path for your movies: /mnt/user/Movies, and this path for your AppData Config Path (which contains the thumbnails, the frequently updated database file, etc.): /mnt/user/appdata/Plex-Media-Server. But instead, you should use this as your Config Path: /mnt/cache/appdata/Plex-Media-Server. By that you bypass Unraid's overhead (SHFS) and write directly to the cache disk.
     Requirements:
     1.) Create a backup of your appdata folder! You use this tweak at your own risk!
     2.) Before changing a path to direct disk access, you need to stop the container and wait at least 1 minute, or even better, execute this command to be sure that all data has been written from RAM to the drives: sync; echo 1 > /proc/sys/vm/drop_caches If you are changing the path of multiple containers, do this every time after you stop a container, before changing its path!
     3.) This works only if appdata is already located on your SSD, which happens only if you used the cache modes "prefer" or "only".
     4.) To be sure that your Plex files are only on your SSD, open "Shares" and press "Compute" for your appdata share. It shows whether your data is located only on the SSD, or on the SSD and a disk. If it is on a disk too, you must stop the docker engine, execute the mover, and recheck through "Compute" after the mover has finished its work. You can not change the path to direct SSD access as long as files are scattered, or you will probably lose data!
     5.) And you should set a minimum free space for your SSD cache in your Global Share Settings. This setting is only valid for shared access paths and is ignored by the new direct access path. This means it reserves up to 100GB for your Plex container, no matter how many other processes are writing files to your SSD.
     What's the benefit? After setting the appdata config path to direct access, I had a tremendous speed gain while loading covers, using the search function, updating metadata, etc. And it is even higher if you have a low-power CPU, as SHFS produces a high load on single cores.
     Shouldn't I update all paths to direct access? Maybe you are now thinking about changing your movies path as well to allow direct disk access. I don't recommend that, because you would need to add multiple paths for your movies, TV shows, etc., as they are usually spread across multiple disks, like: /mnt/disk1/Movies /mnt/disk2/Movies /mnt/disk3/Movies ... And if you move movies from one disk to another, add new disks, etc., this will probably cause errors inside Plex. Furthermore, it complicates moving to a different server that might use a different disk order or a smaller number of bigger disks. In short: leave the other shared access paths as they are.
     Does this tweak work for other containers? Yes. It even works for VM and docker.img paths. But pay attention to the requirements (create a backup, flush the Linux write cache, check your file locations, etc.) before applying a direct access path. And consider whether it might be more useful to stay with the shared access path. The general rule is: if a share uses multiple disks, do not change its path to direct access.
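     The switch-over procedure described above can be sketched as a console session; the container name "plex" is an assumption, and the drop_caches write requires root:

```shell
# Sketch of the direct-access switch-over for a single container.
docker stop plex                            # stop the container first
sync; echo 1 > /proc/sys/vm/drop_caches     # flush the Linux write cache
# Now edit the container template and change the Config path:
#   from /mnt/user/appdata/Plex-Media-Server   (shared access via SHFS)
#   to   /mnt/cache/appdata/Plex-Media-Server  (direct cache access)
docker start plex
```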
    1 point
  28. The easiest way to solve this is to:
     1. Disable the docker and VM services under Settings.
     2. Set the appdata and system shares to Use Cache = Prefer if they are not already set to that.
     3. Start the mover from the Main tab. This will move files that should be on the cache, from the array to the cache, to optimise performance.
     4. When the mover finishes, re-enable the docker and VM services.
    1 point
  29. This definitely needs to happen. It has been promised for over 3 years. Or, at least, the Doc needs to be changed to reflect that we still only have NFSv3 and NFSv4 cannot be enabled. I tried to edit the wiki, but I don't have permissions. buzz
    1 point
  30. +1, I also request to add NFS v4 support in unraid
    1 point