Leaderboard
Popular Content
Showing content with the highest reputation on 01/24/23 in all areas
-
Hello, I came across a small issue regarding the version status of an image that apparently was in OCI format. Unraid wasn't able to get the manifest information file because of wrong headers, so checking for updates showed "Not available" instead.

The Docker image is the linuxGSM container and the fix is really simple. This is for Unraid version 6.11.5, but it will work even for older versions if you find the corresponding line in that file. SSH into the Unraid server and, in the file /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php, change line 448 to this:

$header = ['Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json,application/vnd.oci.image.index.v1+json'];

The version check worked after that. I suppose this change will be removed upon server restart, but it would be nice if you could include it in the next Unraid update 😊 Thanks9 points
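For anyone curious what that header change does in practice, here is a small shell sketch (the repository name library/alpine and the Docker Hub endpoints are illustrative examples, not from the post) that builds the same Accept list the patched line sends, covering both Docker v2.2 manifests and the OCI image index:

```shell
#!/bin/sh
# Build the same Accept header the patched DockerClient.php line sends:
# Docker v2.2 manifest list, Docker v2.2 manifest, and the OCI image index.
ACCEPT="application/vnd.docker.distribution.manifest.list.v2+json"
ACCEPT="$ACCEPT,application/vnd.docker.distribution.manifest.v2+json"
ACCEPT="$ACCEPT,application/vnd.oci.image.index.v1+json"
echo "$ACCEPT"

# With network access you could verify a manifest request against Docker Hub
# (example repository, hypothetical, substitute your own):
#   TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/alpine:pull" \
#     | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')
#   curl -sI -H "Authorization: Bearer $TOKEN" -H "Accept: $ACCEPT" \
#     https://registry-1.docker.io/v2/library/alpine/manifests/latest
```

Without the OCI entry in the Accept list, a registry serving an OCI index has no acceptable media type to respond with, which is why the manifest fetch failed.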
-
I have included your update for the next Unraid version. Thanks2 points
-
Good catch, I also saw there has been an issue since June 2022. I hope you don't mind, but I opened the MR and also added some references. Thank you for figuring it out!2 points
-
And as you can tell by how I can't even write on a forum, it's a miracle that I got all that done on my own. Not knowing anything about servers, I picked this up 2 weeks ago; I've never seen one before in my life2 points
-
The Unraid webgui is actually open source; since you found the solution, if you are interested you can submit a PR here: https://github.com/limetech/webgui 2 points
-
Hey everyone, head over to the Plugins tab and check for updates. My Servers plugin version 2023.01.23.1223 is now available, which should resolve many of the issues folks are reporting. This release includes major architectural changes that will greatly improve the stability of My Servers; we highly encourage everyone to update.

## 2023.01.23.1223

### This version resolves:
- My Servers client (Unraid API) not reliably connecting to My Servers Cloud on some systems
- Server name not being shown in the upper right corner of the webgui
- Cryptic "Unexpected Token" messages when using a misconfigured URL
- DNS checks causing delays during boot if the network wasn't available
- Some flash backup Permission Denied errors

### This version adds:
- Internal changes to greatly improve connection stability to My Servers Cloud
- More efficient internal plugin state tracking for reduced flash writes
- PHP 8 compatibility for upcoming Unraid OS 6.12

2 points
-
I've updated the OP and the Unraid template with instructions for configuring ownership. Thanks @dreadu and @SOULV1CE for figuring that out2 points
-
Dear Unraid Community, I wanted to take a moment to bid farewell to Eric Schultz @eschultz, who is leaving the company for other adventures. Eric has been an invaluable member of our team and played a critical role in the growth and success of the company and Unraid. He has always been willing to go above and beyond to ensure that our technology and systems run smoothly and efficiently. On behalf of everyone at the company, I would like to thank Eric for all his hard work, dedication, and contributions to the team and company. All the best, Spencer1 point
-
You'll have to hope that it is larger than the WD Re; even a few bytes are enough. But you'll see then whether unRAID accepts it.1 point
-
UnRAID will put them in the correct slot based on their serial numbers. That said, take a pic of their locations anyway.1 point
-
The submitted change in the GitHub PR shows that additional manifest formats will now be accepted. The relevant documentation regarding the formats is in the code comments. Presumably some of the Docker projects have changed the format which was causing the original code to fail.1 point
-
What I've investigated so far, out of curiosity (and I may be wrong, because I only learned this today): - although the GitHub registry works with both Registry v2.2 and the Open Container Initiative (OCI) format, it seems some Docker images only come in the Open Container format (and I don't know why or how this works) - the webgui sends the Accept header only for Registry 2.2 (see the docs) - the change in this thread adds the Accept header for the Open Container Initiative format (see the docs) That was my 2 cents, hope it helps to understand1 point
-
Ok- I *THINK* this should fix things. For some reason, rich text pasting was enabled (maybe a system change from a recent forum update?) but I changed it to "Paste as Plain text" which I believe was the default before. Please let me know if this issue persists.1 point
-
Hi I want to clear a drive as per the instructions in the below tutorial I want to use the script to clear the drive However, the links in step 8 are broken and I can't find the script via search in the forums. Is someone able to repair the links? https://wiki.unraid.net/Shrink_array1 point
-
Set the value to zero first, then see whether that helps. After that you could try a value larger than 12000 and smaller than 45000 to disable PS4. And yes, the screenshot is correct.1 point
-
I'm back. After rebooting my server for the umpteenth time, I was lucky enough to see that famous message telling me a new version of "MyServers" was available. Installed it after uninstalling the old version. Fingers crossed, and I hope everything will hold ... answer in a day or two, I want to believe.1 point
-
Thanks for this. You saved me some time. The PR is already accepted and the fix is applied on github.1 point
-
Yes and no. The reason it is the way it is for the legacy driver is that I only compile the latest legacy driver available at the release of a new Unraid version. In this case, Nvidia driver version 470.141.03 was the latest legacy driver when Unraid 6.11.5 was released on November 3rd 2022: (BTW, 5.19.17 is the kernel version from Unraid 6.11.5, and this version allows the plugin to identify exactly which Unraid version a user is running and which driver needs to be downloaded, since the driver depends on the kernel version.) This driver should run just fine with all cards that need the legacy driver. On the other hand, for stable Unraid versions I compile every new Nvidia driver (for recent cards) that is released during the life cycle of a specific Unraid version; as you can see for 6.11.5, there are a lot since November 3rd: If you want to use this card for a VM, please uninstall this plugin; this plugin is only meant for using your Nvidia card in one, or even multiple, Docker container(s). See the first post of this thread:1 point
-
What is wrong with driver version 470.141.03? From what I see in your Diagnostics, the driver is loaded and your card is recognized just fine. BTW, if you want to use this card for hardware transcoding, that's a pretty bad choice because it doesn't even support h265 (HEVC). The screenshot that you've posted above is for another Unraid version too...1 point
-
Possibly a sign that the RAM is dying: https://linustechtips.com/topic/1321751-ecc-error-every-314-seconds-on-ryzen-5800x-and-micron-ecc-sodimm/ Maybe contact the RAM manufacturer, ask for a replacement, and point to the logs.1 point
-
WOO HOO! You are my hero! First time I've seen this 100% correct since I started on it a week ago Sunday. I had at one point added "disable_xconfig=true" to the config file. It ended up displaying output that was squished down and extended too wide for the monitor (see picture). I had found this in a bug report thread for 6.9. I then had a bunch of issues undoing it (setting it to 'false', removing it entirely), so I ended up rebuilding my flash from an earlier backup. (That's a whole other frustrating story.) So I decided not to go down that path again. Thank you, once again, for your help. Not just for this, but for all you do for the Unraid community!1 point
-
As I said, entering "32G" instead of "32" is also sufficient. We've covered this topic before.1 point
-
Fantastic, thank you @JorgeB, this has resolved my issue. Appreciate your support here very much!1 point
-
It just makes no sense: the database is now 289 MB instead of 220 MB, but everywhere in HA I can see that, as I requested, the history data is no longer being collected: And I think this is where the problem lies. It doesn't clean up at all, it just stops collecting anything: Shouldn't the recorder also delete the historical data now?!

EDIT1: According to the documentation, you can trigger a "Purge" manually via the developer tools, which I did as follows: Only that doesn't change the result. I suspect it's because "apply filter" doesn't exist at all. Because when I switch to YAML mode, all you see is this: I mean, which filters is it supposed to apply!? It certainly doesn't pick up the "exclude" part of my recorder rule. And I'm definitely not going to use the other options like "Days to keep" etc., because then it would presumably delete everything older than that globally. I only want certain things gone, not everything.

EDIT2: OK, now I have the solution. You don't use "Purge" as described in that thread on the HA forum, but "Purge Entities". Unfortunately, HA doesn't take over the existing rules from the recorder. Instead, you have to enter them manually again here, which I did as follows: Now, for example, the temperature chart is completely empty as expected: But since the database is still 289 MB, I finished by running "Purge" once more with the repack option selected: Now the database has shrunk to 236 MB. Since I expected considerably more, I'll analyze the DB again in the next few days and see which values appear in large numbers this time.1 point
-
They must have had a tidy up! Will do. Note that I don't use this anymore, as I am running Adguard on my Opnsense firewall instead.1 point
-
No, I would expect it to be pretty quick. You can do it manually by editing config/ident.cfg on the flash drive, finding the USE_SSL line and setting it to:

USE_SSL="no"

Then run:

/etc/rc.d/rc.nginx reload

If that fails for some reason, a reboot will clear it up.1 point
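The two manual steps above can be scripted. This is a sketch that demonstrates the edit on a throwaway stand-in file; on a real server the file lives on the flash drive (assumed path /boot/config/ident.cfg) and you would follow up with the rc.nginx reload:

```shell
#!/bin/sh
# Demo of the USE_SSL edit against a temporary stand-in for config/ident.cfg.
CFG=$(mktemp)
printf 'NAME="Tower"\nUSE_SSL="yes"\n' > "$CFG"

# The actual edit: force USE_SSL to "no"
sed -i 's/^USE_SSL=.*/USE_SSL="no"/' "$CFG"

grep '^USE_SSL=' "$CFG"    # prints: USE_SSL="no"
rm -f "$CFG"

# On the real server, follow up with:
#   /etc/rc.d/rc.nginx reload
```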
-
Thank you JorgeB for the link. I followed the instructions and it resolved my issue. Thanks again. Cheers.1 point
-
This is by design since your passwords are not saved in My Servers.1 point
-
If you are using SSL, see if this helps: https://unraid.net/blog/ssl-certificate-update-2 1 point
-
Confirming this worked for me too. Not sure I needed to replace both, but I did anyway, and Swag and Nextcloud are both back up and running. For noobs like me, here's what I did:
1. Stop the Swag container
2. Go to the /mnt/user/appdata/swag/nginx folder
3. Rename your ssl.conf to ssl.conf.old and nginx.conf to nginx.conf.old (just in case we need to restore them)
4. Copy ssl.conf.sample to ssl.conf and nginx.conf.sample to nginx.conf
5. Start the container and you should be good.1 point
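The five steps can also be done in one go from a terminal. This sketch runs steps 2 to 4 against a temporary directory with empty stand-in files; on a real system the directory is /mnt/user/appdata/swag/nginx, and you would stop and start the Swag container around it:

```shell
#!/bin/sh
# Demo of the rename/copy steps in a throwaway directory with stand-in files.
NGINX_DIR=$(mktemp -d)    # real path: /mnt/user/appdata/swag/nginx
touch "$NGINX_DIR/ssl.conf" "$NGINX_DIR/nginx.conf" \
      "$NGINX_DIR/ssl.conf.sample" "$NGINX_DIR/nginx.conf.sample"

cd "$NGINX_DIR"
mv ssl.conf ssl.conf.old          # keep the old configs, just in case
mv nginx.conf nginx.conf.old
cp ssl.conf.sample ssl.conf       # start fresh from the shipped samples
cp nginx.conf.sample nginx.conf
ls -1
```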
-
Hi All. Just updated Swag and am now getting this: nginx: [emerg] "stream" directive is not allowed here in /etc/nginx/conf.d/stream.conf:3 Does anyone know how I can solve this? Until then I cannot access anything from outside my network. Thanks.1 point
-
Yes, you need to go and set up storage in iDRAC. From memory, you need to go to the storage area for the physical disks and mark them as not for RAID. They should show up after that.1 point
-
I had that too a few months ago. Two or three containers were supposedly "not available". I switched from the Basic to the Advanced view and did a "force update". Then they were "available" again. I can't say what exactly the problem was; it hasn't happened again so far...1 point
-
Hi, you have to create the directory with the "install" command for the non-root user 1001; it's the only way I found to execute the container with non-root rights. With this container I didn't find any way to use another non-root user/group like 99/100. To create the directories before the container installation, with the correct owner, group, and permissions, in a terminal:

sudo install -d -m 0755 -o 1001 -g 0 redis
cd redis/
sudo install -d -m 0755 -o 1001 -g 0 bitnami

With these commands you will not have any errors after the container installation. It is now more secure than using 777 like in previous unraid versions.1 point
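To double-check the result afterwards, you can inspect the mode with stat. A sketch, run against a temporary directory here since the -o/-g flags need root on the real paths:

```shell
#!/bin/sh
# Verify what "install -d -m 0755" produces; demoed in a temp dir.
DIR=$(mktemp -d)
install -d -m 0755 "$DIR/redis"          # add -o 1001 -g 0 when run as root
install -d -m 0755 "$DIR/redis/bitnami"

stat -c '%a %n' "$DIR/redis"             # prints: 755 <tempdir>/redis
```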
-
Weatherflow2mqtt is a template based around the work of briis/hass-weatherflow2mqtt. You will want to use mosquitto or another MQTT broker.1 point
-
Just a perhaps unrelated followup, but this is the first Google hit if you google "unraid sh high CPU". I did what @BRiT said and it looks like sh was triggered by atd (a scheduled job). It also had inotifywait (which watches for updated files) in the same process tree. This led me to the conclusion that the plugin "Dynamix Cache Dirs" may be involved somehow. I deactivated that plugin (since my problems with spinning up disks were solved) and the high CPU stopped. This was just a reminder for me to deactivate plugins I don't currently use. This may help someone else coming to this seemingly dead thread. :)1 point
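For reference, a sketch of the kind of commands you can use to trace what spawned a busy "sh" process (these are standard procps tools, nothing Unraid-specific):

```shell
#!/bin/sh
# List the busiest processes with their parent PIDs, highest CPU first.
ps -eo pid,ppid,pcpu,comm --sort=-pcpu | head -n 10

# To see what spawned a particular sh (atd, cron, a plugin, ...),
# the forest view shows the whole ancestry:
#   ps -ef --forest | less
```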
-
By default, /storage is configured read-only. You can change this by editing the "Storage" setting (make sure to switch to the Advanced View in the container's settings).1 point
-
I had this issue as well recently. I found the only way to get rid of them was to go to the actual disk where the files reside and delete them from there, so /mnt/diskx/Films/ instead of /mnt/user/Films/. When I did this I was able to delete the files successfully. Good luck1 point
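A sketch of how you might hunt the path down on every individual disk at once (the TARGET path is a hypothetical example; the rm is left commented out so you can check what was found first):

```shell
#!/bin/sh
# Look for a stubborn path on each individual array disk instead of the
# merged /mnt/user view. TARGET is a hypothetical example path.
TARGET="Films/Example Movie (2020)"
for d in /mnt/disk*; do
    if [ -e "$d/$TARGET" ]; then
        echo "found on $d"
        # rm -rf "$d/$TARGET"    # uncomment once you're sure
    fi
done
```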