Leaderboard

Popular Content

Showing content with the highest reputation on 09/28/21 in all areas

  1. I've pushed an update to the "Remux Video Files" plugin - it can now handle transcoding streams that are not compatible with the destination container. For example, if you have a wmv and you want it converted to an mp4, it will need to be transcoded; the plugin defaults to h264/aac. If a stream type cannot be made compatible, it will simply be removed. This will only affect people coming from unusual video containers: if you have an mkv file in h265/ac3 and you run it with this plugin to remux to mp4, it will not transcode the video stream, because the plugin knows that mp4 can handle h265, so that stream is simply copied. If you are running into issues with a codec not being compatible with the container you want to move to, please add this plugin to your plugin flow BEFORE any other FFmpeg tasks - e.g. before audio sorting, h265 encoding, etc. (a rough sketch of the two cases follows below). cc: @Aerodb @EdwinZelf
    2 points
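For anyone curious what the plugin described above actually does, here is a rough hand-rolled equivalent of the two cases, assuming a stock ffmpeg build; the filenames and codecs are placeholders, not the plugin's exact commands:

    # Case 1: codecs already compatible with mp4 (e.g. h265/ac3 inside an mkv) - streams are simply copied
    ffmpeg -i input.mkv -c copy output.mp4

    # Case 2: codecs not compatible with mp4 (e.g. an old wmv) - transcode to the plugin's defaults, h264/aac
    ffmpeg -i input.wmv -c:v libx264 -c:a aac output.mp4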
  2. This was/is the idea that made me create PlexRipper in the first place, and I'm happy someone else sees it as a cool thing as well! However, a lot still needs to happen to get to this goal - things like searching, a separate API for Radarr/Sonarr, etc. - but it is definitely on the wishlist! My vision is: you as the user search for something in Radarr/Sonarr, PlexRipper searches through all available servers and returns the results, and based on the settings in Radarr/Sonarr it downloads the media to a folder and moves it to its destination - all without ever opening PlexRipper. This would make creating your own media collection so much simpler: no more searching through torrents/Usenet, and Plex servers tend to have better quality content than torrents/Usenet, since people only keep the good quality media. Also no DMCA requests removing old or popular content from the public, and Plex server owners can back up their collection to their friends without setting up complex pipelines. I wish I had a Hyperbolic Time Chamber and could work on this 24/7
    2 points
  3. Hey folks, we are actively troubleshooting an issue with My Servers. Your systems will show as offline / access unavailable in the dashboard until we can get this resolved. Thanks. Sorry for the trouble today, things are looking stable and we'll be keeping an eye on it. If you are currently offline you can open a webterminal and type `unraid-api restart` to reconnect.
    2 points
  4. Since I was interested in the topic, I took a few SSDs and ran some benchmarks in the array. The script I used, run once for each disk:

    #!/bin/bash
    for n in {1..380}; do
      head -c 10000000000 /dev/zero | openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt > /mnt/disk1/file$( printf %03d "$n" ).bin
    done

In the first test I wrote to all SSDs at the same time, which is probably unrealistic in everyday use, but it shows how important the performance of the parity is. Then I deleted everything and repeated the test. This is where the downside comes into play that Unraid does not run TRIM in the array, and that the parity cannot be trimmed at all (since the parity is written 100% with raw data, not files): the parity now slows down the whole array. This negative effect is less pronounced with very good SSDs, e.g. from the enterprise segment, because they have a (large) over-provisioning area that is not visible to the user and that the garbage collection uses to buffer and clean up data. If you wait for this garbage collection (the SSD controller does it automatically during idle phases; it cannot be triggered manually), the SSDs actually become fast again. As expected, there is no performance drop at all when reading from an SSD with the following command:

    dd if=/mnt/disk1/file001.bin of=/dev/null bs=128k iflag=count_bytes count=10G

I then also wrote to only a single SSD, so that only one parity is calculated instead of several in parallel, but even then the parallel reading and writing is the limit. NVMe drives would of course be much faster here; there are models that reach almost 2000 MB/s with parallel reading and writing: https://www.computerbase.de/2021-04/wd-black-sn850-blue-sn550-test/2/#:~:text=Kopieren auf der SSD However, as long as you only write to one SSD in the array, you can avoid parallel reading and writing by enabling "Reconstruct Write" in the Disk Settings, which you should do with SSDs anyway, since unlike HDDs they switch to standby within milliseconds. The performance is now much better, because the parity is only written to. It can be even better if there is no slow SSD in the array, because with Reconstruct Write the weakest link in the array determines the performance. Since a TRIM can destroy the parity, TRIM is by the way disabled for array disks:

    root@Tower:~# /sbin/fstrim -v /mnt/disk1
    fstrim: /mnt/disk1: the discard operation is not supported
    root@Tower:~# /sbin/fstrim -v /mnt/disk2
    fstrim: /mnt/disk2: the discard operation is not supported

In my case, for example, running TRIM would be very bad, because after a TRIM my SSDs would return all trimmed sectors as zeros, so the parity, which is derived from the sectors of all participating disks, would immediately be broken:

    hdparm -I /dev/sde | grep TRIM
       * Data Set Management TRIM supported (limit 8 blocks)
       * Deterministic read ZEROs after TRIM

"Deterministic read ZEROs after TRIM" is called RZAT. What you would need instead is DRAT, "Deterministic read data after TRIM". Then the SSD would keep returning the same data after a TRIM and the parity would stay valid.
Although you could of course consider running a correcting parity check after an RZAT TRIM, during those hours there is obviously the risk of having no fault tolerance. At the moment I am trying to figure out how to run a TRIM anyway (without having to stop the array and mount manually); I will post a screenshot of that as well. I would also like to test what happens if you overwrite the sectors of deleted files with zeros beforehand; in theory the parity should then stay intact (a rough sketch of that idea follows below). Maybe someone has a tip for me here.
    1 point
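Regarding the zero-fill idea at the end of the post: a minimal sketch of how that experiment could look, assuming disk1 is the SSD in question; the filename is made up and this is untested here:

    # Fill all free space on disk1 with zeros, then remove the filler file again.
    # dd stops with "No space left on device" once the disk is full - that is expected.
    # The freed sectors now contain zeros, which is what an RZAT SSD reports after TRIM,
    # so in theory the parity should stay valid.
    dd if=/dev/zero of=/mnt/disk1/zerofill.bin bs=1M conv=fsync status=progress
    rm /mnt/disk1/zerofill.bin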
  5. Well, I converted a few files, and so far so good! Reduced file size, converting audio to AAC, creating a stereo clone, checking the container size post-conversion. I'll need more files to convert to fully test whether it's rejecting based on file size, whether the audio streams are all working, changing containers, etc. Thanks a ton man
    1 point
  6. Yeah, this must be something with the authentication. I'll try again tomorrow. Thanks.
    1 point
  7. Yes. But there will still be dragons. If you want to do that, it would be really helpful. There will likely be issues with the container compatibility that is configured. I did my best, but I'm not an expert when it comes to which codecs are supported by which containers. What I decided to do with this plugin was create a compatibility file for the containers: https://github.com/Josh5/unmanic.plugin.video_remuxer/blob/master/lib/containers.json If you remux from mkv to mp4, the mp4 container does not list subrip as a compatible subtitle in "codec_names", so the plugin will convert it with the default encoder (for mp4 that is mov_text; see the example after this item). This will still cause issues with image-based subtitles, and I'm not sure what else at this point. So I would love feedback if people are getting errors in the ffmpeg command - that will help me build this JSON file and form a better compatibility list.
    1 point
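As a concrete illustration of the subrip -> mov_text case mentioned above, the resulting conversion is roughly equivalent to this (a sketch with placeholder filenames, not the plugin's exact command line):

    # Copy video and audio untouched, but re-encode text subtitles to mov_text,
    # because the mp4 container does not accept subrip/srt streams as-is.
    ffmpeg -i input.mkv -map 0 -c:v copy -c:a copy -c:s mov_text output.mp4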
  8. I've been trying to figure this out, so I thought I would post what I have so far: There is a Discord forum for fusionIO, here. This tutorial gave me some info; it sounds like you have to get the source code and build a new kernel. This is the GitHub source code link, here. You need to build a custom kernel for Unraid, here (Unraid uses Slackware, I believe). My thought was to build a Slackware VM, here, figure this out, and build the kernel. (Could I then copy the kernel to the USB boot drive? Don't know yet.) Be forewarned, Slackware is a bit of a different animal - CLI only, and there was a lot of prep to convert (deb) packages to be usable. There are a lot of gaps in the instructions, as I am primarily an Ubuntu user. Googling how to get fusionIO to work with Unraid, you might come across threads about drivers and paths, which have been requested and released. I think we need a GitHub project for fusionIO + Unraid so that people can contribute to this effort (a rough module-build sketch follows below). As of this post, there are still lots of fusionIO boards on eBay. They are not the fastest, but for the home-brew server crowd this is a great, high-endurance drive I would love to get working. Hope this helps somebody
    1 point
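If the matching kernel headers/source for the running Unraid kernel can be obtained, a lighter-weight alternative to rebuilding the whole kernel is building just the driver as an out-of-tree module. A rough sketch, assuming the fusionIO source tree from the link above ships a standard kbuild-style Makefile (directory and module names are hypothetical):

    # The headers/build tree used below must match the running kernel exactly
    uname -r
    cd iomemory-vsl-source                                   # hypothetical checkout of the driver source
    make -C /lib/modules/$(uname -r)/build M=$(pwd) modules  # build the module out of tree
    insmod ./iomemory-vsl.ko                                 # module name assumed; check the build output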
  9. Thank you for the hard work. The connection is finally going through and I'm seeing myself online and accessible through the internet now. Actually, it seems as if the my-servers page retrieves the connection information a lot faster than it used to! This is great! I'll mark this thread as closed.
    1 point
  10. Sorry for the trouble, you should be able to get back online now with `unraid-api restart`
    1 point
  11. This, in principle. Note that it is the last folder before a cutover to another disk, not just the last folder on the whole drive.
    1 point
  12. Fantastic! Installed and working. However, I do get these errors when I run it, but they don't seem to affect functionality:

    nvtop: /lib64/libtinfo.so.6: no version information available (required by nvtop)
    nvtop: /lib64/libncursesw.so.6: no version information available (required by nvtop)

Thank you so much!
    1 point
  13. Thank you for your great container. There have been shows my friends have that I can’t find. This helps a lot. Thanks again.
    1 point
  14. Worked for me when I selected Mode 8. I initially selected Mode 4 and it didn't work. Nice work!
    1 point
  15. What version of php are you using?
    1 point
  16. To answer my own question: see the WireGuard quickstart. WireGuard doesn't seem to work with proxied connections.
    1 point
  17. This plugin is ready for testing in my development plugin repo... https://github.com/Josh5/unmanic-plugins It works by reading the tags to detect if a stream is a stereo clone of another stream. The tags generated will be "[Stereo]" if there is no title for that stream, or "English [Stereo]" if there was a title for the stream (an example of inspecting those tags is shown below). This plugin should also be compatible with detecting if a file was already processed by the legacy version of Unmanic. You can configure options like bitrate etc. in the advanced options (just like all the other ffmpeg plugins). I have added an example of how to do this. Let me know if you have any issues.
    1 point
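If you want to check whether a file already carries a "[Stereo]" tagged stream before running the plugin, an ffprobe call along these lines lists the audio streams with their channel counts and title tags (the filename is a placeholder):

    ffprobe -v error -select_streams a \
      -show_entries stream=index,codec_name,channels:stream_tags=title \
      -of default=noprint_wrappers=1 input.mkv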
  18. Hi @AlainF, this is because unRAID runs its own DNS server using dnsmasq; I believe it's used for VMs (a quick way to confirm is shown below). Details on how to disable it are here:
    1 point
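A quick way to confirm that dnsmasq is the process holding port 53 before changing anything is to check what is listening on it from a server terminal:

    # Show the processes bound to DNS port 53 (dnsmasq is typically started for the VM/libvirt network)
    ss -lntup | grep ':53 '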
  19. ...I suspect a problem with name resolution, since the external DNS is gone as well. Possibly the hairpin NAT then doesn't work properly either, or simply takes forever. But that a local IP is not reachable directly (e.g. via ping), as with your HA Docker, would be unusual (unless the app is busy with itself for the same reason - what does the host's CPU load say?). In short: drop everything that needs address/name-to-IP resolution, switch applications to direct IP access, or set up a local DNS cache (a quick test is sketched below).
    1 point
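Translated into a quick test: comparing direct-IP access with name resolution usually shows immediately whether DNS is the bottleneck (the IP and hostname below are placeholders for your own):

    # If the ping answers instantly but the lookup hangs or takes seconds,
    # name resolution is the problem, not the network itself.
    ping -c 3 192.168.178.10
    time nslookup homeassistant.local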
  20. ...ah, ok... nice... but with a price tag 3 times higher than the commodity 6-bay version (or at the same level as the 4U case itself) 🤔
    1 point
  21. Hi CorneliousJD, just want to say keep up the great work! I just finished scanning thousands of raw photos and I love how simple, fast and responsive the whole thing is! This really is a great solution for photographers. In regards to future features, I'm wondering if there's anything planned for the ability to tag photos and create users who can view only those tags - for example, tagging family photos so that family members only get access to photos with the "family" tag? Having this feature would really make sharing and managing user accounts much easier. Thank you!
    1 point
  22. Seems like this is possible from what I've read. If you gift me the game I can look into it and create a dedicated server if possible, but as said above, this should be totally doable.
    1 point
  23. Not right now, I played around a bit with RSS feeds so your books could be used in podcast players, but it lost priority. We have discussions on new features, the design and everything else on github. Your feedback and help with anything from testing to design would be greatly appreciated!
    1 point
  24. Per @Jaster's suggestion, how many individual servers are you running?
    1 point
  25. Hi @ich777, I asked a while back about the possibility of you creating a container for Survive the Nights - assuming you still don't have it, would you be open to seeing if it's possible if I were to gift you the game? The developers have introduced a server option in-game, and from a few things I've read I believe it would be Linux based - although I know very little about these things! No problem if not, it's just the type of game I think would be loads better with a dedicated server! Cheers....
    1 point
  26. Fantastic work and thanks so much for creating this! I'll check out your android app on my CalyxOS pixel, but I was also wondering ... are there any existing (maybe even paid) iPhone apps that would play nicely with the audiobookshelf?
    1 point
  27. Hi all, what are the best practices for preclearing a drive in unRAID? I.e. which operation to choose, and how many cycles to run? Would you use different options for new drives as opposed to 2nd-hand drives? So far I have been using "Erase and Clear the Disk", running it for 3 cycles. Is this sufficient, or is it overkill? The reason I ask is that I have just run the preclear on a 6TB disk, and 1 cycle of "Erase and Clear the Disk" took 48 hours ... TIA
    1 point
  28. Sort of, more than 10 years now and the forum became a major hobby for me.
    1 point
  29. Is gold plated ok? Congratulations @binhex!
    1 point
  30. I fixed it myself. I changed the permissions for /mnt/user/appdata/Zoneminder/mysql and the files inside, restarted the Docker container, and it looks OK now (a hypothetical example of the commands follows below).
    1 point
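For anyone hitting the same thing: the fix described above boils down to handing the MySQL data directory back to the user the database runs as inside the container. A hypothetical sketch - the actual UID/GID depends on the container, so check first:

    # Compare ownership against a file the container created itself
    ls -ln /mnt/user/appdata/Zoneminder/mysql
    # Then reset ownership recursively to that UID/GID (999:999 is only an example value)
    chown -R 999:999 /mnt/user/appdata/Zoneminder/mysql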
  31. I reinstalled Windows 10 and started adding one application at a time, then running iperf3. I got to Norton Anti-Virus and boom... speed down to 3.5 Gbps (even though I thought I had ruled it out previously because I had disabled it). I started digging deeper into Norton (I didn't want to leave my PC unprotected). The exact culprit was "Intrusion Prevention" (Settings -> Firewall -> Intrusion and Browser Protection -> Intrusion Prevention). As soon as I turned it off... bam! 9.87 Gbps.
P.S. Their customer service reps are idiots. I called before figuring out the root cause and told the guy what was going on.
Him: Well, what do you expect your subscription to do?
Me: Not slow down my network traffic.
Him: Well, we don't provide internet, it's an antivirus software.
Me: Yes, a SECURITY suite that includes a firewall with network and browsing protection.
Him: I don't know how we can help with your speed.
... I hung up.
    1 point
  32. Please open a terminal window and type this: unraid-api restart When the API restarts it will hopefully make a connection and then from the My Servers Dashboard you should have options for "Local access" or "Remote access" instead of "Access unavailable"
    1 point
  33. I believe the frequency is determined by Settings -> Disk Settings -> Tunable (poll_attributes), which has a default of 1800 seconds (30 minutes). I normally change it to something shorter, like 300 seconds.
    1 point
  34. It would be great to have this feature integrated. I have a lot of smaller disks (sub-4TB) that I'm working on removing from the array, and I have no need to replace every single disk with a larger one. What I would love to see is the possibility to mark disks for removal, so that over the next few days the data on those disks is reallocated to the free space on the other disks. Then you could just remove the disk from the array.
    1 point
  35. With this guide, Plex uses your RAM while transcoding, which prevents wearing out your SSD. Edit the Plex container and enable the "Advanced View". Add this to "Extra Parameters" and hit "Apply":

    --mount type=tmpfs,destination=/tmp,tmpfs-size=4000000000

Side note: if you dislike permanent writes to your SSD, add " --no-healthcheck ", too. Now open Plex -> Settings -> Transcoder and change the path to "/tmp". If you like to verify it's working, open the Plex container's console and enter this command while a transcode is running (a one-line check from the host is shown below as well):

    df -h

Transcoding to the RAM disk works if "Use%" of /tmp is not "0%":

    Filesystem      Size  Used Avail Use% Mounted on
    tmpfs           3.8G  193M  3.7G   5% /tmp

After some time it fills up to nearly 100%:

    tmpfs           3.8G  3.7G  164M  97% /tmp

And then Plex purges the folder automatically:

    tmpfs           3.8G  1.3G  3.5G  33% /tmp

If you stop the movie, Plex deletes everything:

    tmpfs           3.8G  3.8G     0   0% /tmp

With this method Plex never uses more than 4GB of RAM, which is important, as fully utilizing your RAM can cause unexpected server behaviour.
    1 point
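As a small shortcut, the same check can also be run from the host without opening the container console (the container name "Plex" is whatever yours is called):

    # Show usage of the tmpfs transcode directory inside the running Plex container
    docker exec Plex df -h /tmp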
  36. Option 2 seems easiest. I created the following script and set it to run every 5 minutes using the "User Scripts" plugin (an equivalent cron entry is shown below):

    #!/bin/bash
    docker exec -u www-data Nextcloud php -f /var/www/html/cron.php
    exit 0

Thanks!
    1 point
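For reference, the same job expressed as a plain cron entry, in case someone prefers the system crontab over the User Scripts plugin (container name and path as in the script above):

    # Run Nextcloud's background jobs every 5 minutes as the www-data user
    */5 * * * * docker exec -u www-data Nextcloud php -f /var/www/html/cron.php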