Leaderboard

Popular Content

Showing content with the highest reputation on 03/08/22 in Posts

  1. Since the old guard is hardly obtainable anymore, and the 11th gen was, at least for UNRAID, rather uninteresting, that could actually change now with the 12th gen. Intel has officially confirmed that the 12th-gen Core processors support ECC: https://www.hardwareluxx.de/index.php/news/hardware/prozessoren/58264-alder-lake-auf-w680-mainboard-ermoeglicht-ecc-fehlerkorrektur.html And ASRock Rack has also introduced an mATX board that looks usable at first glance: IMB-X1314. With a non-K processor, this could become a reasonably efficient base. Now UNRAID just has to handle Alder Lake properly, and a way has to be found to use the iGPU as flexibly as before (perhaps via SR-IOV?). EDIT: According to Intel ARK, this seems to apply from the i5 12500 upwards ("F" processors are apparently excluded as well).
    2 points
  2. I would love to see options to flag my cache volumes for compression and deduplication. I can understand why it's not worthwhile for array volumes, but on cache, where it's all Dockers and VMs, it would save a ton of drive space. Even if you only allowed/recommended it for SSDs, many people would benefit, especially given the state of the supply chain.
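     In the meantime, transparent compression can already be switched on by hand on a btrfs cache pool; a minimal sketch, assuming the pool is mounted at /mnt/cache (the mount point and the zstd choice are illustrative, not a built-in Unraid option):

       # Remount the pool so newly written files get compressed with zstd
       mount -o remount,compress=zstd /mnt/cache
       # Force-compress the existing data in place (can take a while)
       btrfs filesystem defragment -r -czstd /mnt/cache

     Deduplication would still need an out-of-band tool such as duperemove on top of this, which is part of why a built-in, supported option would be welcome.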
    1 point
  3. Search shows zero mentions of CVE-2022-0847 on the forums, so I'm starting a new thread. This is a privilege escalation vulnerability introduced in Linux kernel 5.8. It is fixed in 5.16.11, 5.15.25, and 5.10.102. Unraid OS 6.9.2 runs kernel 5.10.28, so the current release of Unraid is vulnerable. Can we get a patch for this? Resources: https://dirtypipe.cm4all.com/ https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0847 https://arstechnica.com/information-technology/2022/03/linux-has-been-bitten-by-its-most-high-severity-vulnerability-in-years/
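     A rough way to check whether a given box is exposed, based on kernel version alone; a minimal sketch comparing against 5.10.102, the fixed release of the 5.10 series that Unraid 6.9.2 ships:

       #!/bin/bash
       # CVE-2022-0847 affects kernels from 5.8 up to the fixed releases
       cur=$(uname -r | cut -d- -f1)
       fixed=5.10.102
       if [ "$(printf '%s\n' "$fixed" "$cur" | sort -V | head -n1)" = "$fixed" ]; then
         echo "kernel $cur: at or above $fixed, patched"
       else
         echo "kernel $cur: below $fixed, vulnerable to Dirty Pipe"
       fi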
    1 point
  4. You are right, my bad. I had linuxserver's container installed, not the official one. Thanks
    1 point
  5. I forgot. You must also change this value to your HTTP value: otherwise, when you click on WebUI in Unraid, it'll direct you to whatever value is there by default, which is 8080 I think. I do run openHAB as host; I simply run it on another port, as you can see. In fact, port 9999 used to be the default port; they changed it somewhere around version 2.5, I think. Running it as host also makes it available for network scans and things like that. Technically, when you run as host, you don't need to define the port redirections like I did, because all ports are mapped automatically. I prefer having them specified anyway; this way, when you create another Docker container for another service, you can check all the ports already assigned under "show docker allocations".
    1 point
  6. The -e openha.... is an extra parameter. When you do that, you change what the service is listening on. But then, in the Docker template, you must add new port forwardings for these ports, because the default ones have a different port on the host side. So delete the HTTP ports that are already there and create the new ones one by one. As for the reverse proxy, you can install SWAG, which already has everything set up for reverse proxying. But openHAB shouldn't be open to the outside; for that, either use a VPN or openHAB Cloud, which is free (that's what I use; this way, nothing is open).
    1 point
  7. Ability to rearrange GUI elements between columns. Currently they can only be rearranged within a column. Ideally, the best solution would be to rearrange each element individually. Running on a portrait monitor, I currently lose half of the info.
    1 point
  8. 1 point
  9. Reporting back, to eat my hat and learn the lesson. My issue was related to... a change in my VPN provider's certificate. Sorry binhex, and also, thanks for this Docker. I'm going to give rTorrent a shot next, and may transition, after reading some of your past comments on what the de facto "best" client to use on Unraid is.
    1 point
  10. Hello, you don't have to modify the templates. What is the network configuration of the apps? If you are using Host, that means it will use the same IP as the Unraid system (which is what I use).

      Then, here is what you need to do. Under "HTTP Port" and "HTTPS Port", you enter the port that you want the server to listen on; this will be the port used to access it. Docker itself listens on a port, but this is the "port forward" mapping between Docker and your real NIC. This way, the service can listen on 8080, but you can expose it to outside requests on 8888. You'll also have the LSP Port and SSH Port to modify if needed.

      Another thing you can do, if you want (like me) matching ports inside the Docker and outside, is add this to your extra parameters: -e OPENHAB_HTTP_PORT=8888 -e OPENHAB_HTTPS_PORT=8443 This tells openHAB inside the Docker to listen on different ports. If you do that, you must delete the ports on the template page and recreate them (because they expect the service to listen on the default ports). If you add a port to the Docker configuration, don't forget there's a host port (the port the service inside the Docker listens on) and a value (the port used to access it from the outside). It's good practice to have the same port on each side, but it's not required (and not always possible). So if you use the -e command, you must change the host port value to the one you entered; otherwise, you just change the value port and keep the host port at the default the service listens on.

      As for the host path, this is 100% specific to your installation; it's simply where the data is saved. This template is probably from an openHAB 2 installation and wasn't changed during the update process. Hope that helps?
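      For reference, the same setup as a plain docker run invocation; a minimal sketch, assuming the official openhab/openhab image and bridge networking (image name, ports and the appdata path are illustrative, and other openHAB volumes are omitted):

        docker run -d --name openhab \
          -e OPENHAB_HTTP_PORT=8888 \
          -e OPENHAB_HTTPS_PORT=8443 \
          -p 8888:8888 -p 8443:8443 \
          -v /mnt/user/appdata/openhab:/openhab/userdata \
          openhab/openhab

      Note how the container-side port of each -p mapping matches the -e values; that is exactly the port adjustment described above.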
    1 point
  11. Yep, exactly the same issue with the official one. Maybe this is somehow related? There was one reply in this thread with an i7-4xxx where it worked (page 3-4, I don't remember), so I guess it should work. Update: Indeed it was related, and I was now able to fix transcoding. So, if anyone has the same problem, follow these steps: The only problem is that I couldn't downgrade ffmpeg in your Docker container; it only worked in the official one.
    1 point
  12. Unraid OS never tries to shrink the size of a file system (well, there is an exception). What we do upon every mount is try to expand the size of each file system. For the devices in the unRAID array, this handles the case where a smaller device has been replaced with a larger one. If the file system cannot be expanded because the device is the same, or a new device is the same size, then the operation will of course fail, and this is reported in the system log; entirely expected. Another case where we need to explicitly expand a file system is following replacement of a smaller device with a larger one in a btrfs pool. In this case a 'btrfs filesystem resize' is executed. Now that I think of it, the code does not actually prevent you from replacing a larger device with a smaller one in a btrfs pool, so technically that would be a case where a file system is shrunk.
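      A minimal sketch of what such an expand step looks like when done by hand, assuming a btrfs pool mounted at /mnt/cache (the mount point is illustrative):

        # Grow the file system to fill the underlying device
        btrfs filesystem resize max /mnt/cache
        # Confirm the new size
        btrfs filesystem show /mnt/cache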
    1 point
  13. That's also what I suspect; the diags would confirm it.
    1 point
  14. That's metadata corruption; for this, it's best to back up and re-format the pool.
    1 point
  15. I posted recently that the Fujitsu board mentioned here was available. Of course it's already sold out again. That's also why I bought it back then, together with an i3 9100 and matching RAM. Most NAS in this world run without ECC, and there don't seem to be all that many problems. According to the manual, the ASRock BIOS seems to offer extensive power-saving settings and to support the relevant modes. I would expect that with Powertop you can at least reach the targeted 20 W idle.
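      For reference, a minimal sketch of applying Powertop's tunables at boot on Unraid, assuming powertop has been installed separately (it is not part of the stock image) and using the go file as an illustrative hook:

        # /boot/config/go
        # Apply all power-saving tunables suggested by powertop
        powertop --auto-tune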
    1 point
  16. You do realize that ... It scares me that people might read this and even take it at face value ... So if you're going to criticize, maybe include a bit more background on what you actually did; then people can certainly, or at least possibly, help you.
    1 point
  17. I am using an MB998SP-B ( https://www.icydock.com/goods.php?id=192 ) and I like the build quality. I think they also have 6-drive variants.
    1 point
  18. Checksum errors mean btrfs is detecting data corruption. Make sure your RAM is not overclocked, since that is a known source of data corruption with Ryzen/Threadripper, and/or run memtest.
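      To see where the errors are accumulating, the per-device counters can be inspected; a minimal sketch, assuming the pool is mounted at /mnt/cache (the mount point is illustrative):

        # corruption_errs is the checksum error counter per device
        btrfs device stats /mnt/cache
        # Re-verify all data and metadata checksums
        btrfs scrub start /mnt/cache
        btrfs scrub status /mnt/cache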
    1 point
  19. Opinions differ on this. As long as no problem occurs, ECC is completely unimportant. But if an error goes unnoticed for a long time and silently alters or mis-stores everything that is written, then all data from the moment the fault appears is affected. There are cheap consumer PCs on which no error was ever noticed over their whole service life, which you can interpret as no error ever having occurred. But then there are always people complaining that their PC crashed. That doesn't necessarily have to be a RAM problem, but it could be. Without having read any figures on this, my off-the-cuff guess is that 99.x% of all systems have no such RAM problems, or it simply goes unnoticed. For my purposes, though, I'd also like to be protected against the perhaps-less-than-1%. I'd rather have a detected and logged RAM error, and possibly a halted system, than have the data I consider important shredded piece by piece for months in the worst case. If that goes on long enough that my backups also inherit the mangled data, it's too late for everything.

      Currently I work around this on my non-ECC-capable (Windows) systems by additionally creating a checksum of every relevant file by hand and sporadically verifying the files on two PC systems. My Unraid build, though, is meant to become more power-efficient, which is why I want to protect it against simple RAM errors with ECC right away. And yes, even that doesn't protect against everything; there is no such thing as 100% safety. If a RAM error strikes in the wrong place and, for example, flips every x-th bit of a few GB, wrong data can be written because it was already altered in RAM. Parity is then calculated from that wrong data, so the file written to the array is wrong and the parity disk would also reconstruct the wrong data.

      Yes. Congratulations: you have recognized the current dilemma. Hardware procurement for something like Unraid is currently a choice between supply shortages, high prices, excessive power consumption, etc. The perfect pick is hard right now. For my i3-8100 I had chosen the Fujitsu D3644-B. Some time ago Kontron took over that board, but even it is not easy to get. Because I want more compute power, I had to bite the bullet and went for a new CPU, board and RAM, but that combination is currently causing problems. Another new CPU, board and RAM are on the way or already delivered, and soon I'll have to cross-swap and test everything. This will probably end up being one of the most expensive systems I've built in the last five years. What I'm saying is: it just isn't easy right now, and you can still get unlucky.

      Do that and simply keep an eye on the data. If no problems show up after a few months, it will probably be fine. *Principle of hope* It always comes down to how important the data is to you.
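      The manual checksum workaround mentioned above can be done with standard tools; a minimal sketch, assuming sha256sum and an illustrative share path:

        # Build a checksum list for every file in the share
        cd /mnt/user/important
        find . -type f -exec sha256sum {} + > /boot/checksums.sha256
        # Later, or on a second machine: re-verify and show only failures
        sha256sum -c /boot/checksums.sha256 | grep -v ': OK$'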
    1 point
  20. Read up on what it is (there are already quite a few posts about ECC here in the Unraid forum search, too) and then decide for yourself ... ? Yes, no, maybe, ... definitely ...
    1 point
  21. I just clicked together exactly the same configuration. Once you have everything assembled, I'd be very interested in the power consumption. Or to ask it another way: roughly what range of power consumption can be expected (without HDDs)? I'm planning on 24/7 operation...
    1 point
  22. Thanks ich777 for compiling these modules for all of us
    1 point
  23. @brin & @Kacper & @usmaple & @thecode: Here is a package for 6.9.2: battery-5.10.28-Unraid-1.txz with the two modules. Place it somewhere on your server, navigate to that folder in a terminal, and to install it simply type in:

        installpkg battery-5.10.28-Unraid-1.txz
        depmod -a

      (this has to be done on every reboot, or simply put it somewhere on your server and add the above commands to your go file; keep in mind that you may have to add the path to the file)

      !!!THIS PACKAGE WORKS ONLY ON UNRAID 6.9.2!!!

      After that, change the scripts to the following, because the modules are installed directly to the libraries directory; this also makes sure that the script is compatible with newer Unraid releases (read below):

        FILE=/sys/class/power_supply/BAT0/capacity
        if [ -f "$FILE" ]; then
          capacity=`cat "$FILE"`
        else
          echo "$FILE does not exist. Trying to enable battery module"
          logger "$FILE does not exist. Trying to enable battery module"
          modprobe battery
        fi

        FILE2=/sys/class/power_supply/AC/online
        if [ -f "$FILE2" ]; then
          online=`cat "$FILE2"`
        else
          echo "$FILE2 does not exist. Trying to enable ac module"
          logger "$FILE2 does not exist. Trying to enable ac module"
          modprobe ac
        fi

        # online is expected to be 0 or 1 - ac on or off
        # capacity is expected to be 0-100 - percent of battery charge
        # for testing, change the numbers to 101 and 0 inside the if statement
        # for normal operation, use 30 and 1 as the constant values to compare against
        if [ -n "$capacity" ] && [ -n "$online" ] && [[ "$capacity" -lt "30" ]] && [[ "$online" -ne "1" ]]; then
          echo "$(date) Battery below safe margin. Shutting down system."
          logger "Battery below safe margin. Shutting down the system."
          # Unraid shutdown command here
          /sbin/poweroff
        fi

      From what I see, the two modules that are needed for this script to work are included in the next release of the Unraid RC series, so you won't need the package anymore; the script that I've modified above should be enough and will work for the next Unraid RC series releases and even the stable releases afterwards.
    1 point
  24. Yes, I know about this issue, and I haven't found a way around it yet... Will look into it again.
    1 point
  25. Hello! Thanks a lot for the reply 😉 In fact I still couldn't get it to work despite your suggestions... I uninstalled the container for the umpteenth time, grabbed jlesage's jdownloader2, and, miracle, it worked on the first try 🤩 And yet I pointed it at the same downloads directory... Anyway, it works! Thanks again for taking the time to look into it!
    1 point
  26. Ah, I missed the fact that they are all tied to a single controller. In that case we must conclude that this specific drive (ST33000650SS) does not interpret the spin-down commands as expected. Unfortunately, there is no "standard behavior" regarding this. Newer Seagate drives tend to do the right thing; quite a few older ones do not. Your drive seems to have been pulled from an Adaptec system, which may or may not have something to do with this issue. There's a slight chance that updating the drive's firmware will help, but I can't guarantee it will (or that you won't end up with a bricked drive), so weigh your risk. If you try this, you may want to check out this. I have not tried it, and it seems to be specifically not intended for Adaptec OEM drives, but who knows.
    1 point
  27. I don't use this particular Plex Docker container or the XMLTV guide; however, I think your problem is that you are trying to use an unRAID file system path inside a Docker container. The container has no concept of unRAID paths. You probably need to do a volume mapping in the container setup and map something like /guide to /media/downloads/EPG Data, and then specify /guide in the XMLTV Guide location. I am not sure if that setup wants a path including the xml file name or just a path to the folder that contains the xml file; that would alter your volume mapping depending on which is needed. Something like the mapping sketched below is needed. Another potential issue is the spaces in the folder path (EPG Data). I don't know how the container will handle that. I eliminate spaces in unRAID folder names just because I know they can cause problems at the command line and maybe in Docker containers/plugins.
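      A minimal sketch of such a mapping as a docker run flag; the image name is the official Plex one, the paths are illustrative, and the quoting matters because of the space in the path:

        # Map the host folder holding the EPG xml to /guide inside the container
        docker run -d --name plex \
          -v "/media/downloads/EPG Data":/guide \
          plexinc/pms-docker
        # Then set the XMLTV Guide location to /guide (or /guide/yourfile.xml,
        # if a full file path is expected)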
    1 point
  28. https://github.com/bbilly1/tubearchivist#redis-on-a-custom-port
    1 point
  29. I as well would like a way to manage snapshots natively in Unraid. Just as an FYI, for those who don't know: you can use virsh from the CLI, or install virt-manager on another computer, to manage VMs and/or snapshots. Not as simple, but perfectly usable.
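      A minimal sketch of the virsh route; "Windows10" is an illustrative VM name:

        # Create a named snapshot of the VM
        virsh snapshot-create-as Windows10 pre-update "before Windows update"
        # List existing snapshots
        virsh snapshot-list Windows10
        # Roll back later
        virsh snapshot-revert Windows10 pre-update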
    1 point
  30. What do you all think of my new build? I decided to spruce up the Torrent case due to the sheer amount of space inside it. Just waiting on a new GPU to stick in the top slot for my gaming/streaming VM. The RGB panels I got from ColdZero (SSD RGB covers); 4 fit perfectly vertically. Spec: Gigabyte X570S, Ryzen 3950X, 128GB RAM, 3x SSD, 2x NVMe, WX4100 GPU, GT1030 GPU, BeQuiet fans, Fractal Torrent case, Noctua CPU cooler and fans.
    1 point
  31. IMPORTANT - Log4j Vulnerability

      There has been a report of a user running an unpatched version of Minecraft Java that was also exposed to the internet. This resulted in the log4j vulnerability being exploited and the user suffering a hack against the files exposed to the container (located in /config). All users, PLEASE keep in mind: if you wish to expose Minecraft servers to the internet, then you must either:-

      1. Ensure you are running a Minecraft jar based on Minecraft v1.18.1 or later (recommended), or
      2. Mitigate the vulnerability by following this document:- https://help.minecraft.net/hc/en-us/articles/4416199399693-Security-Vulnerability-in-Minecraft-Java-Edition

      Note:- binhex/minecraftserver users - I have done my best to automatically patch Minecraft according to the document linked above, by detecting the version of the minecraft jar and patching accordingly. However, it is up to the user to ensure the patching is correct and that there are no new vulnerabilities.
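      For recent releases, the mitigation in the linked document boils down to a JVM flag; a minimal sketch, assuming Minecraft 1.17 or newer and an illustrative jar name and heap size (1.16.5 and older need the log4j configuration-file approach from the document instead):

        # Disable log4j message lookups (the Log4Shell vector) at startup
        java -Xmx2G -Dlog4j2.formatMsgNoLookups=true -jar minecraft_server.jar nogui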
    1 point
  32. SOLUTION AT THE END OF MY POST

      I did my own little testing, and it seems like only the last line in mover_ignore.txt gets evaluated. This is for directories, at least in my use case. Not sure what happens if you sprinkle files into mover_ignore.txt.

      My test setup:

        test_folder/
        ├── test/
        │   ├── test.txt
        ├── test2/
        │   ├── test2.txt
        ├── test3/
        │   ├── test3.txt

      Test command:

        find "/mnt/downloads/Media/Downloads/test_folder" -depth -type f | grep -vFf '/mnt/user/appdata/mover_ignore.txt'

      Single top-level directory in mover_ignore.txt:

        /mnt/downloads/Media/Downloads/test_folder/

      No files found, as expected.

      Multiple directories in mover_ignore.txt, including the parent directory test_folder/, with test3/ on the last line:

        /mnt/downloads/Media/Downloads/test_folder/
        /mnt/downloads/Media/Downloads/test_folder/test/
        /mnt/downloads/Media/Downloads/test_folder/test2/
        /mnt/downloads/Media/Downloads/test_folder/test3/

      Files found in the output:

        /mnt/downloads/Media/Downloads/test_folder/test/test.txt
        /mnt/downloads/Media/Downloads/test_folder/test2/test2.txt

      EDIT: Figured it out using this page. Having my mover_ignore.txt saved with CRLF broke the patterns on the preceding lines, so only the last pattern in the file would work. The solution was to save the file with LF as the end-of-line sequence.
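      A minimal sketch for spotting and stripping the offending CRLF endings from the shell, using the path from the post:

        # "with CRLF line terminators" in the output means the file is affected
        file /mnt/user/appdata/mover_ignore.txt
        # Strip the carriage returns
        tr -d '\r' < /mnt/user/appdata/mover_ignore.txt > /tmp/mi.txt \
          && mv /tmp/mi.txt /mnt/user/appdata/mover_ignore.txt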
    1 point
  33. I experienced the same error in my logs and found a fix in this thread: https://forums.unraid.net/topic/111212-swag-zertifikat-l%C3%A4uft-aus/ that sorted the issue for me. The fix for me was to comment out line 19 in /config/nginx/proxy-confs/onlyoffice.subdomain.conf; after a restart, all seems fine again. PS: use Chrome to translate the thread into English. Good luck. 🙂
    1 point
  34. No. The archive bit is changed on the source, not on the target. Reading this thread reminded me that I noticed this back in the last century when using XCOPY. I believe ZIP did something like that too. As soon as the tools have an archive option, they fiddle with the source. How it is today I don't know, but I'd bet nothing has changed.
    0 points