Leaderboard

Popular Content

Showing content with the highest reputation since 07/25/21 in all areas

  1. Not sure where to post this but man this unRaid stuff just WORKS, rock solid. Almost a year now with two installs and they're both absolutely ROCK SOLID. Love the simplicity and stability of this setup. I would say money well spent. Keep up the good work gents.
    5 points
  2. I disagree. Recently since my last upgrade to a private beta release one of the lights on my garage door opener has started flickering. I refuse to believe that it's a coincidence.
    4 points
  3. See title of this thread.
    3 points
  4. When installing or editing a container, if there are any conflicts (or another error occurs that prevents starting), it will appear in the docker run command which is listed. If you're just starting an already installed container and a conflict / error keeps it from starting, you'll see "Server Execution Error"; at that point you can easily see the actual issue by editing the container, making a change, undoing the change, and then hitting Apply. As an aside, 6.10 checks whether a port is already in use when you change any of the ports in the UI, and won't let you apply the changes.
    3 points
  5. Good things come to those who wait. May I present my new server. The hardware was detected immediately, including both NVMe drives. After a BIOS update the processor was also recognized correctly, and everything now runs as it should. I still have a few small things to do for the initial setup, but most of it is already running, and I've also finished the data migration. At this point, many thanks for the help to @mgutt, @ich777 , @Morrtin and everyone else who helped.
    3 points
  6. I fear the day that the community succeeds and the Limetech staff won't survive these relentless attacks
    2 points
  7. Thanks, I somehow could have figured that out myself 🙂
    2 points
  8. Hi. Backups created with DB-Backup can also be restored from the console. The 2 MB limit in PHPMyAdmin is normal and has been known for years. To restore the backup, simply create an empty database in the new MariaDB Docker with the same name as the backed-up DB. Then open the console of the MariaDB Docker and run the following command (with your own values substituted, of course; the backup file must be in the directory from which the command is run): cat BACKUPFILE.sql | docker exec -i mariadb /usr/bin/mysql
    2 points
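The restore flow described in the post above can be sketched as a short command sequence. This is a minimal sketch, not the author's exact commands: it assumes a running MariaDB container named `mariadb`, a root password in `$MYSQL_ROOT_PASSWORD`, and a hypothetical database name `mydb` — substitute your own values.

```shell
# 1. Create an empty database with the same name as the backed-up DB
#    ("mydb" is a placeholder here).
docker exec -i mariadb mysql -uroot -p"$MYSQL_ROOT_PASSWORD" \
  -e "CREATE DATABASE IF NOT EXISTS mydb;"

# 2. Pipe the dump file (must be in the current directory) into the
#    mysql client inside the container.
cat BACKUPFILE.sql | docker exec -i mariadb mysql -uroot -p"$MYSQL_ROOT_PASSWORD" mydb
```

Piping through `docker exec -i` sidesteps the 2 MB upload limit in PHPMyAdmin entirely, since the dump never goes through the web UI.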
  9. As soon as you build the images with the gnif patch you can't install any plugins with drivers included. Just a little note: when 6.10.0 drops you won't have to build a custom kernel, and you'll be able to install everything via plugins, even the gnif vendor-reset.
    2 points
  10. I bought the power supply by chance because it was listed at Mindfactory for €59 due to a pricing error. Two days later the TweakPC review came out. Glad I got a bargain. There's not much to say about it: quiet and well built.
    2 points
  11. Same error on my 6.9.2 box after the latest container update. Looks like there is already a GitHub issue targeting a fix in Glances 3.2.3 release: https://github.com/nicolargo/glances/issues/1905. Workaround would be to add a version tag to the container config like this:
    2 points
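The version-tag example referenced above ("like this:") did not survive the copy. As a general sketch, pinning a Docker image to an older release tag looks like the following — the specific tag `3.2.1` is an assumption here, so check which tags are actually published for `nicolargo/glances` before using it:

```shell
# Hypothetical example: pin the Glances container to a fixed release tag
# instead of the implicit "latest", so container updates don't pull the
# broken build. Replace 3.2.1 with a tag that actually exists.
docker pull nicolargo/glances:3.2.1
```

In the Unraid container config this corresponds to appending the tag to the "Repository" field (e.g. `nicolargo/glances:3.2.1`) instead of leaving it untagged.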
  12. I would definitely go with an Nvidia card too; it's still better for passthrough. And GVT-g for gaming? Nope, not suitable — it simply doesn't have enough power. For a remote VM as a snappy office machine it's perfect, and as a desktop with a USB-HDMI adapter it's also fine, but for gaming it will quickly give up. GVT-g is limited and isn't seen natively the way iGPU passthrough is ... a game would run (just slowly) ... since you have the AMD GPU, just test with that as an alternative ... before buying a new one; there are plenty that
    2 points
  13. Works just fine in my Debian Bullseye container, but I recommend doing it with my Debian Buster container: Of course you have to install python3 and all dependencies manually, and afterwards install Tartube with dpkg (to gain root privileges in my container you have to type "su" and then the specified password for your root user). Since Tartube creates a directory for all its files in ~/.config/tartube, the Tartube config is persistent. The only thing I would do is create a user.sh like @Ford Prefect mentioned above, so that the container checks on every start
    2 points
  14. ...looks like there is no docker available anywhere. The app comes in a .deb flavour, so you should be able to install it in a GUI Docker. Did you try the Debian Buster from @ich777? You should be able to download and install the "tartube.deb" semi-automatically with a user.sh script every time the docker starts, as per the hint/instructions given in the docker, see: If you want to install some other application you can do that by creating a user.sh and mounting it to the container at /opt/scripts/user.sh (a standard bash script should do the trick). [...] Storage Note: Al
    2 points
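The user.sh approach described in the two posts above can be sketched roughly as follows. This is a hypothetical script, not one from the thread: the download URL is a placeholder you would replace with the real tartube .deb location, and the dependency list is only indicative.

```shell
#!/bin/bash
# Hypothetical /opt/scripts/user.sh — mounted into the container and run
# on every start, as described in the posts above.
set -e

DEB_URL="https://example.com/path/to/tartube.deb"   # placeholder URL
DEB_FILE="/tmp/tartube.deb"

# Only install if tartube isn't already present, so restarts stay fast.
if ! dpkg -s tartube >/dev/null 2>&1; then
    wget -O "$DEB_FILE" "$DEB_URL"
    apt-get update
    apt-get install -y python3              # plus any other dependencies
    dpkg -i "$DEB_FILE" || apt-get install -f -y   # pull in missing deps
fi
```

Since the config lives in ~/.config/tartube on a persistent mount, only the package itself needs reinstalling when the container is recreated.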
  15. I am happy to report that sas2flash -o -e 5 worked perfectly. Thanks for the help.
    2 points
  16. Yes, this one is a bit strange, I'll need to dig deeper to figure out what's the issue
    2 points
  17. @bat2o if it works externally and you try to access your nextcloud from the same LAN with your DuckDNS URL then maybe DNS-Rebind Protection / DNS Hairpinning (nat loopback) on your router/firewall can help you.
    1 point
  18. OK, the given path has been deleted. Unraid was prompting again to update or install the plugin, so I did a restart again to avoid that, and it seems to work now. I was missing the reboot after the folder delete; I did not know this was required. Thanks again, enjoy your weekend.
    1 point
  19. That's expected, as mentioned above, and if the disk was rebuilt on top of the old one there's not much you can do now; you should have cancelled the rebuild as soon as there were errors on a different disk. To move forward you need to fix the filesystem on those disks. For the disks with a reiserfs that mention this, that's what you need to do: run it again with --rebuild-tree. It's going to take a few hours for each disk.
    1 point
  20. You can't have multiple disks in the pool unless using btrfs. 6.9+ allows multiple pools though. I have a "fast" pool as xfs for dockers and a "cache" pool as 2x btrfs raid1 for redundancy.
    1 point
  21. Thanks @alturismo. As I said, I suspect this is a Plex issue, but because I had the update and the hardware change on the same day, I wanted to rule out an Unraid/plugin issue. I tried using TVHeadend and wasn't able to get it working; however, I was able to see all 4 tuners in there. I will head over to the Plex forums and see if anyone has an idea there. Thanks.
    1 point
  22. Awesome. I was going to buy a new motherboard but found an 8 port card with the LSI chip. Thanks.
    1 point
  23. Problem solved, I am a noob and blind. Thanks for making it clear
    1 point
  24. You should definitely try turning off XMP until your server is in a stable state. The problem with any overclocking is that it is impossible to predict with any certainty when it might cause a failure, regardless of the tests you do in advance. It seems that servers are more prone to this sort of issue than desktops, but I suspect it is just more noticeable since they tend to be left running 24x7.
    1 point
  25. Reiserfsck checks can take a long time (many hours), particularly with large disks, with the only sign of activity being the fact that the disk is still being read.
    1 point
  26. I hate silence from tech companies. At least give some BS canned answer like "we are looking into the issue". When you email support they say go to the forums and check.......... 🙄
    1 point
  27. You have to look at the two separately; no single mechanism will work universally. I'm on vacation right now, so just briefly: Plex already does all the work with its scanners. The XML files from the Plex Web API contain all the information. I pull that out at night with dozens of scripts and interpret it — searching for changed metadata and so on. I haven't looked for the MusicBrainz ID yet. I would grab the MusicBrainz ID of an album from the MusicBrainz website and search the Plex database dump for it. Maybe it's in there.
    1 point
  28. That's exactly the way to go. Remember to run it as a post-execution step on every start of the container. Cheers.
    1 point
  29. The diagnostics you posted 3 hours ago in this post: are 5 and a half hours old and you have rebooted since then. I reviewed the thread and I see those diagnostics are actually older than the diagnostics you posted 4 hours ago here: That is why I was having a problem understanding your current situation.
    1 point
  30. That worked wonderfully — thanks again. This thread can be closed.
    1 point
  31. See if this applies to you: https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/ See also here: https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/
    1 point
  32. OK so after a few days I can confirm everything is still running as expected which is fantastic. One thing I did notice, is that running the scheduled job using User Scripts meant that snapshots weren't working correctly - the old ones weren't being deleted. However within the container the scheduled tasks all execute AND take care of removing old snapshots which is ideal. Anyway, I'm just glad it's all working after a few days
    1 point
  33. FIXED — a bloody BIOS update saved the day! Could've saved me a day of headache. Still have no idea what was causing it, though.
    1 point
  35. V2 uses socket 1155 and V4 socket 1150, so they are entirely different generations — a poor naming choice by Intel. Is v4 actually a requirement for GVT-g? As I said, I would consider a generation change anyway. Something along these lines: https://www.ebay-kleinanzeigen.de/s-anzeige/gaming-pc-gtx-1050ti-4g/1796960802-228-760 Then sell the old gear and maybe you come out around €200?! A stronger / more efficient GPU included.
    1 point
  36. You are amazing! I've been waiting for something like this for a while. So easy to set up. Great work!
    1 point
  37. Thanks — I've been testing it for a few weeks now and honestly it's great. Every day at 04:00 it shuts down all my running Dockers, performs the full backup, and then starts up all the services that were shut down. Thanks for the reply.
    1 point
  38. As a general rule: there are security holes everywhere and everything can be hacked. The question is whether overcoming the hurdles is worth the effort for the attacker — and those hurdles are exactly what you can raise. From top to bottom, the most secure way to access Plex would be: get a smart switch and create VLANs, i.e. virtually separated networks with their own IP address ranges. Plex and the clients would go in one VLAN and Unraid in another. That would guarantee that only the admin could reach Unraid itself. Plex itself and the clients could still
    1 point
  39. That is a 6.8.3 issue due to a change at the remote end. It is fixed in 6.9.2 (although if for some reason you cannot upgrade to that release there is a workaround posted in the forums).
    1 point
  40. Would be best to post in the relevant support thread for the Thunderbird app (click its icon, select Support)
    1 point
  41. Yes, the purpose of the Remote Access feature is to give you remote access to the webgui. Check out the docs for more details: https://wiki.unraid.net/My_Servers Do not put Unraid directly on the Internet or in your router's DMZ. But yes, you can forward a port to the https webgui now. You should use a high port number (not 443) and you need to have a good root password. Recent versions of Unraid will lock out intruders after three bad login attempts. For more security best practices see https://unraid.net/blog/unraid-server-security-best-practices
    1 point
  42. https://xkcd.com/936/
    1 point
  43. Yea, I'm having a lot of super weird occurrences after the most recent update also. The UI/Dashboard for unmanic becomes pretty much unresponsive, set_mempolicy: Operation not permitted shows up in my logs, and I have one file that just keeps failing: [h264 @ 0x5605116f0680] SEI type 195 size 888 truncated at 48 [h264 @ 0x5605116f0680] SEI type 170 size 2032 truncated at 928 [h264 @ 0x5605116f0680] SEI type 81 size 1920 truncated at 32 [h264 @ 0x5605116f0680] SEI type 195 size 888 truncated at 47 [h264 @ 0x5605116f0680] SEI type 170 size 2032 truncated at 927 [h264 @ 0x5605116f0680] SE
    1 point
  44. Looks like there was a change, and the retention period must now be a multiple of the index period, so 168h * 4 = 672h.
    1 point
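The arithmetic above is easy to sanity-check: a retention value is only valid when it divides evenly by the index period. A quick shell sketch (values taken from the post):

```shell
# Retention must be an exact multiple of the index period.
index_period_h=168                        # 7-day index period, in hours
retention_h=$((index_period_h * 4))       # 4 index periods
echo "$retention_h"                       # prints 672

# The modulo check is what "must be a multiple of" means in practice.
if [ $((retention_h % index_period_h)) -eq 0 ]; then
  echo "retention is a valid multiple"
fi
```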
  45. pfsense has DNS Rebinding on by default, that prevents you from getting a local IP address when you do a lookup on yourpersonalhash.unraid.net You need to disable DNS Rebinding in pfsense for the unraid.net domain. I don't have a lot of experience for this, but if you turn on help for the Settings -> Management Access page it says: pfSense: If you are using pfSense internal DNS resolver service, you can add these Custom Option lines: server: private-domain: "unraid.net"
    1 point
  46. Sorry, I forgot to post my result: in the end it became the CyberPower CP900EPFCLCD 900VA/540W. For me it offered the best price/performance ratio and (according to reviews) an acceptable self-consumption (supposedly around 2-6 W). My setup: Unraid server (ca. 50 W), Ubuntu server (ca. 10 W), Fritzbox, Raspberry Pi 3B, Raspberry Zero, 8-port switch. Runtime according to the UPS is about 40 minutes — enough for me to ride out a short power outage without everything shutting down immediately. The UPS is connected to the Unraid server via USB. From the CA I have
    1 point
  47. Yes, it is needed, as /transcode will be created even if it is not mapped through the container's settings. So I will hopefully update my guide for the last time ^^ EDIT: Done.
    1 point
  48. I finally got around to making some brackets for my 8TBs and the 804 case; here are some pictures in case anyone else needs them for reference. I originally planned to put the brackets inside the drive holder, but the fit was too tight, so they went on the outside — that's why I lined them with red felt, to avoid any marks or heat transfer from the drives. On the holder nearest the rear fans the bolts were too long and clashed with the fan, so luckily I only have 4x 8TBs and mounted them in the other holder.
    1 point