Leaderboard

Popular Content

Showing content with the highest reputation since 04/29/21 in all areas

  1. New repository is: vaultwarden/server:latest. Change it in the Docker settings: stop the container, rename the repository to vaultwarden/server, hit Apply, and start the container. That's it. Don't forget to go to unRAID Settings >> click on Fix Common Problems (if the scan doesn't start automatically, click RESCAN) and you will receive a notification to apply a fix for the *.xml file change. I just went through this procedure and can verify everything went smoothly (a plain docker CLI sketch of the same switch follows below).
    9 points
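     For anyone running Vaultwarden with the docker CLI rather than the unRAID template, a minimal sketch of the same repository switch; the container name, appdata path, and port mapping here are assumptions, not taken from the post:
        # stop and remove the old container (data lives in the mounted volume, so it survives)
        docker stop vaultwarden && docker rm vaultwarden
        # pull the new repository and recreate the container with the same volume and port mapping
        docker pull vaultwarden/server:latest
        docker run -d --name vaultwarden \
          -v /mnt/user/appdata/vaultwarden:/data \
          -p 8080:80 \
          vaultwarden/server:latest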
  2. @TGFNAS The antics aren't the problem. The jokes and so on are all fine too. But the way the information is conveyed... pure nerd-speak, with some of the facts twisted and everything only touched on very briefly. I mean, it's no problem for me, but how many people can actually follow along? It's like Albert Einstein explaining the theory of relativity in a TikTok video and doing a mic drop at the end 🤯
    6 points
  3. We will be making a bigger announcement soon, but for now 🔊
    5 points
  4. Hi, I also wanted to briefly introduce my Unraid server, or rather my rack. The build is as follows: - AMD Ryzen 3800x - ASRock Rack X570D4U-2L2T - 64GB DDR4-3200 ECC - Alpenföhn Brocken Eco (not in the picture) - Fractal Define R5. The following drives are installed (not all of them are visible in the photo, since it's an older picture): - 1x 12 TB as parity - 2x 8TB - 2x 4TB. Docker: - Nextcloud - NginxProxyManager - database - PiHole - Plex - 2x Minecraft servers via MineOS - a few other small Docker containers. VM:
    5 points
  5. That 'asshole' would be me. Please keep in mind that outside of what I do on Ombi I have a demanding full-time job, a child (a 10-month-old), and a wife. All of my free time is pretty much dedicated to working on the product. V4 was released because it's more stable than v3 and I needed to release it at some point or I never would have. If you are not happy with it, then I suggest you stick to v3, and if you want the voting feature ported faster, then submit a PR or contribute in some other way.
    4 points
  6. This thread is meant to replace the now outdated old one about recommended controllers. These are some controllers known to be generally reliable with Unraid: 2 ports: Asmedia ASM1061/62 (PCIe 2.0 x1) or JMicron JMB582 (PCIe 3.0 x1); 4 ports: Asmedia ASM1064 (PCIe 3.0 x1) or ASM1164 (PCIe 3.0 x4 physical, x2 electrical, though I've also seen some models using just x1); 5 ports: JMicron JMB585 (PCIe 3.0 x4 physical, x2 electrical). These JMB controllers are available in various SATA/M.2 configurations; just some examples:
    4 points
  7. Shortly after coming to unRaid I became aware of TGF's channel, probably thanks to the YouTube algorithm. 😉 In my opinion the videos draw attention to unRaid and its features and make people curious. I then got the in-depth information elsewhere. But I think that is exactly the intention. The videos are meant to entertain. And they do.
    3 points
  8. Unraid 6.9.x is out! I wish everyone a nice week. Have fun.
    3 points
  9. My response, not the official Limetech response. Every single user of CA has had to acknowledge two distinct popups regarding applications prior to any installation being allowed: a general one and another specifically geared towards plugins, which link to https://forums.unraid.net/topic/87144-ca-application-policies-notes/?tab=comments#comment-809115 and https://forums.unraid.net/topic/87144-ca-application-policies-notes/?tab=comments#comment-817555 respectively and which also detail some (but nowhere near all) of the security procedures in place. The lists within CA are not j
    3 points
  10. You're definitely not the assholes here....
    3 points
  11. So I've been reading for ages, but there are so many pages here. My issue is that when the array is stopped, the mergerfs mount is not unmounted, causing Unraid to keep retrying indefinitely to unmount shares; I have to manually kill the PIDs for the rclone mounts to get the array to stop. Is there a fix for this? I run ps -ef | grep /mnt/user to find the PIDs, then kill PID to kill each one. I tried adding fusermount -uz /mnt/user to the cleanup script set to run at array stop, and that kills all the mounts, but I'm not sure that's the best way to do it (and it didn't actually work on a reboot; see the sketch below this item).
    3 points
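     A minimal sketch of that kind of cleanup script, run at array stop (for example via the User Scripts plugin); the mount point is a hypothetical placeholder, and the fusermount -uz call is the one mentioned in the post above:
        #!/bin/bash
        # hypothetical mergerfs mount point; substitute the actual path used by your rclone scripts
        MOUNT_POINT="/mnt/user/mount_mergerfs"

        # find the rclone/mergerfs processes holding mounts under /mnt/user and stop them
        ps -ef | grep '/mnt/user' | grep -E 'rclone|mergerfs' | grep -v grep | awk '{print $2}' | \
        while read -r pid; do
            kill "$pid"
        done

        # lazy-unmount whatever is still attached so the array can stop
        fusermount -uz "$MOUNT_POINT" 2>/dev/null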
  12. 3 points
  13. Thanks for the thorough response. I, and the 10479 people who will ask after me, VERY MUCH appreciate it :-)
    3 points
  14. This is resolved. It was an issue with RAM. The new memory is in and the server is running smoothly and happily again.
    2 points
  15. License is only validated at array start, so as long as you don't stop the array you should be fine to wait until normal USA west coast business hours. @SpencerJ
    2 points
  16. You should be able to just put the RSS URL in your podcast app. Works for me. https://feeds.buzzsprout.com/1746902.rss
    2 points
  17. Personally, and I do mean personally, I think podcasts should be a mixed bag of topics: possibly reviews of plugins, Dockers, future content, and so on. I think if you stick with one particular subject you might entertain some and bore others.
    2 points
  18. 2 points
  19. Did I find the video good as such? Nope, but I do find the video quite "interesting" as a way of getting curious, sure... and better this kind of Unraid advertising than no Unraid advertising at all, and you have to be "provocative" to spark interest... that's just part of it, unfortunately... with a purely factual, technical approach you don't get far these days... And yes, the bit about the array pools was off the mark, too quick, without... but hey, there are worse things.
    2 points
  20. Sure, I try to read along and write here when time allows! Which statement is wrong, though? XFS arrays still have no RAID (I wouldn't touch btrfs, which is why I didn't go into it any further). Ah, found it! Yes, that was phrased misleadingly, that's on me. Since you can throw HDDs into a pool and create multiple pools, I should simply have said multi-pools also work with HDDs... *hatofshame*
    2 points
  21. Indeed! Stay tuned 🙂
    2 points
  22. Hey, please be nice or please go elsewhere. Thanks
    2 points
  23. Only 8GB of RAM in the server, and Unifi sometimes floods the entire RAM. I've had that happen before too.
    2 points
  24. @n3cron That only means that Nvidia, after 7 or 8 years, now finally officially allows passing their graphics cards through from a host operating system to a VM again, without workarounds such as having to specify the BIOS file in the VM template, and that Windows finally no longer throws error Code 43. It does not mean that you can use the graphics card for Unraid and for the VM at the same time. As far as I know it is already possible, with workarounds, to pass an Nvidia graphics card that is being used by Unraid through to a VM, but then you have to
    2 points
  25. Hi All, I'd like to report that my server has been up for nearly 6 days now and the last parity check finished. As of right now I'm only running 5 Docker containers, all of which use only the bridge network. As for the other Docker containers that I was running prior to the OS upgrade to 6.9.2, I'm in the process of migrating them to a Docker for Windows environment. This server has an Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz, 3600 MHz, 8 core(s), 16 logical processor(s) with 64.0 GB of RAM. Much more suitable hardware than my main storage array. I just
    2 points
  26. That works too. I'm simply thrilled with DigitalDevices, and their support via Github or mail is also great.
    2 points
  27. Regardless of the status of the drive, you should really set up notifications so that you are alerted to issues as they happen, not whenever you happen to look at the interface. You got "lucky" that the issue only concerned the parity drive and nothing bad happened on another drive. You might lose data the next time.
    2 points
  28. Original comment thread where the idea was suggested by reddit user /u/neoKushan: https://old.reddit.com/r/unRAID/comments/mlcbk5/would_anyone_be_interested_in_a_detailed_guide_on/gtl8cbl/ The ultimate goal of this feature would be to create a 1:1 mapping between unraid docker templates and docker-compose files. This would allow users to edit the docker as either a compose file or a template, and backing up and keeping revision control of the template would be simpler, as it would simply be a docker-compose file. I believe the first step in doing so is changing the unraid t
    2 points
  29. Cross-posting here for greater user awareness since this was a major issue: on 6.9.2 I was unable to perform a dual-drive data rebuild and had to roll back to 6.8.3. I know a dual-drive rebuild is pretty rare, and I don't know whether it gets sufficiently tested in pre-release stages. I wanted to make sure that users know that, at least on my hardware config, this is borked on 6.9.2. Also, it seems the infamous Seagate Ironwolf drive disablement issue may have affected my server, as both of my 8TB Ironwolf drives were disabled by Unraid 6.9.2. I got incredibly lu
    2 points
  30. Being new to Unraid (and to teaching people in general), I try to lay out my problem-solving steps so others can learn. I can say I have solved the issue above. Midnight Commander is a similar "program" to Krusader; essentially MC is a built-in Krusader. Some will say Krusader is faster, others say there's no difference, but it's a way to move/delete/copy/etc. files from one share to another or one disk to another (don't do disk to share or vice versa, I've been told). Midnight Commander is accessed by SSH'ing into the server (a short command sketch follows below). I use Putty but when you log in via SSH
    2 points
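     A minimal sketch of that workflow; the hostname and file paths are placeholders:
        # SSH into the server (PuTTY on Windows, or any ssh client)
        ssh root@tower
        # launch Midnight Commander
        mc
        # the same moves can be done straight from the shell; stay disk-to-disk or share-to-share,
        # never mix /mnt/diskX and /mnt/user paths in a single operation
        mv /mnt/disk1/Movies/example.mkv /mnt/disk2/Movies/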
  31. Hello, I deleted @Hoddl's upload. Please anonymize sensitive info and reupload 🙂 Thanks @i-B4se
    2 points
  32. ....make parallel recordings or use PiP. You can also put a multiswitch in between in the living room... those can also be cascaded, if they are the right ones. Edit: https://de.wikipedia.org/wiki/Unicable ...something to read. Sent from my SM-G960F using Tapatalk
    2 points
  33. Awesome project! Perhaps I am a bit blind but how do I add additional users? I do not have the "+ Add" option here.
    2 points
  34. EDIT: There is a workaround I found on the Git repository. Basically the author of the speedtest docker needs to rebuild it. Until then, you can rebuild it yourself and point it to your own local repository (a rough sketch follows below). See this link for instructions. Same issue as the others have reported regarding the SpeedTest docker. It seems one of the parameters no longer allows NULL values? Debug Logging Output: Loading Configuration File config.ini Configuration Successfully Loaded 2021-04-19 16:00:09,787 - DEBUG: Testing connection to InfluxDb using provided crede
    2 points
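     A rough sketch of what "rebuild it yourself and point it to your own local repository" can look like; the clone URL and image tag below are placeholders, and the linked instructions on the Git repository remain the authoritative steps:
        # clone the project and build the image locally (URL and tag are placeholders)
        git clone https://github.com/example/speedtest-for-influxdb.git
        cd speedtest-for-influxdb
        docker build -t local/speedtest:latest .
        # then edit the container in Unraid and change the Repository field to local/speedtest:latest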
  35. Big News from NVIDIA Just a few hours ago, NVIDIA added an article to its support knowledge base regarding GPU passthrough support for Windows VMs. While we've supported this functionality for some time, it was done without official support from the vendor themselves. This move by NVIDIA to announce official support for this feature is a huge step in the right direction for all of our VM pass through users. This should also help instill confidence in users that wish to pass through these GPUs to virtual machines without the worry that a future driver update would break this functiona
    2 points
  36. @awediohead not sure why discord isn't letting you chat? I'll look into it. To answer your question, if the port is already in use by a different app, just use a random one, one digit up or down if you like (see the sketch below). It can't be the same one because the other app is using it, and unraid won't let you anyway.
    1 point
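     For context, only the host side of the port mapping has to change; a minimal sketch, assuming a hypothetical app whose web UI listens on 8080 inside the container:
        # host port 8080 is already taken by another app, so map a neighbouring free port instead
        docker run -d --name example-app -p 8081:8080 example/image
        # the container still listens on 8080 internally; the UI is then reached at http://SERVER_IP:8081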
  37. One plan would be not to install the parity drives until you have populated the array with the data from the old server. (That will increase the write speed to the array by a factor of two or more!) I would then consider just copying the data over the network. It sounds like you have less than 12TB of data, which means the copy time would be about 36-40 hours (if my math is correct! a quick check is sketched below).
    1 point
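     A quick back-of-the-envelope check of the 36-40 hour figure, assuming roughly 85-95 MB/s sustained over gigabit Ethernet:
        # hours = bytes / (MB/s * 1e6) / 3600
        awk 'BEGIN { for (speed = 85; speed <= 95; speed += 10) printf "%d MB/s -> %.1f hours\n", speed, 12e12 / (speed * 1e6) / 3600 }'
        # 85 MB/s -> 39.2 hours
        # 95 MB/s -> 35.1 hours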
  38. And it is working again. Thank you for your time and effort !
    1 point
  39. Edit the Telegraf Docker container. Select advanced view in the top right. Scroll down to the Post Arguments line. Put the commands you want to run on this line (a hypothetical example is sketched below). And Bob's my uncle.
    1 point
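     A hypothetical example of what could go on that Post Arguments line, assuming an Alpine-based telegraf image and that you want an extra tool such as smartmontools available before telegraf starts (none of this is taken from the post):
        # Post Arguments are appended after the image name in the generated docker run command
        /bin/sh -c "apk add --no-cache smartmontools && telegraf"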
  40. You can't change the UUID because both devices are still part of the same pool. Do this: stop the array; if Docker/VM services are using the cache pool, disable them; unassign all pool devices (from both pools); start the array to make Unraid "forget" the current pool config; stop the array; reassign both devices to the same pool (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any pool device); start the array; the pool should mount normally and you can start Docker/VM services if they were stopped before; now see here to remove one of the devices from
    1 point
  41. Ok. Thanks again and ..... "Guten Appetit"
    1 point
  42. The note about mcelog not supporting the processor is a misnomer. It's telling you (in very bad English) that it's using the edac_mce_amd package instead. You shouldn't. Yes.
    1 point
  43. OK, I checked out syslog.txt from the diagnostics and included relevant log entries below. It looks like the hardware error occurred on April 17th:
      Apr 17 21:30:26 Tower kernel: mce: [Hardware Error]: Machine check events logged
      Apr 17 21:30:26 Tower kernel: [Hardware Error]: Corrected error, no action required.
      Apr 17 21:30:26 Tower kernel: [Hardware Error]: CPU:0 (17:71:0) MC27_STATUS[-|CE|MiscV|-|-|-|-|SyndV|-]: 0x982000000002080b
      Apr 17 21:30:26 Tower kernel: [Hardware Error]: IPID: 0x0001002e00000500, Syndrome: 0x000000005a020001
      Apr 17 21:30:26 Tower kernel: [Hardwa
    1 point
  44. Big thanks to those who helped out in my absence (@creativity404). I went on holiday and was expecting to have an internet connection, but I didn't. @Creativity404 Are you still having trouble yourself, or did you get to the bottom of it? Sadly the XMRig data folder is tiny; if you don't want to increase your docker image size, what I can do is install the drivers and use a utility like ncdu to find which locations are suitable to create volumes for (a rough sketch follows below). What driver version are you trying to use? Nvidia, right? The container is already based on Ubuntu, but
    1 point
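     A rough sketch of poking around inside the running container to see which directories are worth mapping as volumes; the container name is a placeholder:
        # open a shell inside the running container
        docker exec -it xmrig /bin/bash
        # ncdu gives an interactive per-directory size breakdown (Ubuntu-based image, per the post)
        apt-get update && apt-get install -y ncdu
        ncdu /
        # or, without installing anything extra:
        du -xh -d1 / 2>/dev/null | sort -h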
  45. First thing to confirm is that your system is as stable as it seems. Speaking as a long-timer, a system can seem fine and still have subtle issues. If you're still interested in making sure things are good, it's best to start with a diagnostics.zip file. It's generally the first thing you'll get pinged for here, and it's typically a good starting point. Important things we're looking at are kernel logs and docker logs, both of which are going to be invaluable in tracing issues. That'll be on Tools, Diagnostics, Download. Drag and drop that onto your next reply, and someone (I WILL eventually respond if y
    1 point
  46. 1 point