Leaderboard

Popular Content

Showing content with the highest reputation on 04/14/21 in all areas

  1. I wanted to show my current setup here. I built my server into a 10-inch rack; the HDDs and motherboard are mounted on simple rack shelves. Hardware: MB: Gigabyte C246N-WU2, CPU: Xeon E-2146G with the boxed cooler from the i3-9100 (which was installed before), RAM: 64GB ECC, PSU: Corsair SF450 Platinum, LAN: 10G QNAP card, HDD: 126TB consisting of 1x 18TB Ultrastar (parity) and 7x 18TB WD Elements (Ultrastar white label), Cache: 1TB WD 750N NVMe M.2 SSD, UPS: AEG Protect NAS lying sideways on rubber feet. I chose "Thoth" as the server name, since he was the Egyptian god of wisdom. That sometimes tempts people to call it "Thot". ^^ With all 8 HDDs spun down, idle power draw is 23W. Disk overview: When uploading directly to the disks I get over 90 MB/s, which I owe to the fast HDDs. On the cache, 1 GB/s is of course no problem. And thanks to a 50% RAM cache, the first 30GB also go to the HDDs at 1 GB/s. Only Unraid offers this combination of performance and low power consumption 😍 I also run an Unraid backup server at an off-site location. It uses an ASRock J5005 and is built as compact and cheap as possible; I fitted an additional HDD cage into a Bitfenix ITX case so it can hold 9 HDDs.
    3 points
  2. Today's blog follows a couple of students' educational journey with Unraid in their classroom: https://unraid.net/blog/unraid-in-the-classroom If you are an educator and would like to teach with Unraid in the classroom, please reach out to me directly, as we would love to support this educational program at your place of instruction!
    3 points
  3. I've made the correction and moved the ruleset definition before the include. Note: To make this work, you have to delete the files /config/rsyslog.conf and /config/rsyslog.cfg from the USB device, reboot the server, and reconfigure the syslog service in the GUI again.
    2 points
  4. I am using flash drives that I have had for 5 years plus with no problems. My experience is that if you stick with USB2 drives and avoid the tiny form factor ones they DO tend to be reliable.
    2 points
  5. Hopefully @giganode doesn't have too much to do, so he can answer here soon, since he is the guy who can help with the AMD cards.
    2 points
  6. This release contains bug fixes and minor improvements. To upgrade: First create a backup of your USB flash boot device: Main/Flash/Flash Backup. If you are running any 6.4 or later release, click 'Check for Updates' on the Tools/Update OS page. If you are running a pre-6.4 release, click 'Check for Updates' on the Plugins page. If the above doesn't work, navigate to Plugins/Install Plugin, select/copy/paste this plugin URL and click Install: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg Bugs: If you discover a bug or other issue in this release, please open a Stable Releases Bug Report. Thank you to all Moderators, Community Developers and Community Members for reporting bugs, providing information and posting workarounds. Please remember to make a flash backup! Edit: FYI - we included some code to further limit brute-force login attempts; however, fundamental changes to certain default settings will be made starting with the 6.10 release. Unraid OS has come a long way since it was originally conceived as a simple home NAS on a trusted LAN. It used to be that all protocols/shares/etc. were by default "open" or "enabled" or "public", and if someone was interested in locking things down they would go do so on a case-by-case basis. In addition, it wasn't so hard to tell users what to do because there weren't that many things that had to be done. Let's call this approach convenience over security. Now, we are a more sophisticated NAS, application and VM platform. I think it's obvious we need to take the opposite approach: security over convenience. What we have to do is lock everything down by default, and then instruct users how to unlock things. For example: Force the user to define a root password upon first webGUI access. Make all shares not exported by default. Disable SMBv1, ssh, telnet, ftp, nfs by default (some are already disabled by default). 
Provide UI for ssh that lets them upload a public key and a checkbox to enable keyboard password authentication. Etc. We have already begun the 6.10 cycle and should have a -beta1 available early next week (hopefully).
    1 point
  7. I have one that has run 24x7 for years, as have many others. It gets restarted for OS and hardware upgrades and when changing disk configuration, but other than that, it runs 24x7. With my electricity costs, running for a month at idle would cost about $2. I figure that is my real cost of running it 24x7, as other uses of the server cost what they cost to use it as intended. Like Kizer, all my disks spin down after a period of inactivity (1 hour in my case) and all dockers and VMs are on SSDs.
    1 point
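The $2/month figure above is easy to sanity-check. A quick sketch, assuming an idle draw of 20W and an electricity rate of $0.13/kWh (both values are invented for illustration; the poster's actual numbers aren't given):

```python
# Back-of-the-envelope idle running cost for a 24x7 server.
# idle_watts and price_per_kwh are assumed values, not the poster's.

def monthly_cost(idle_watts: float, price_per_kwh: float,
                 hours: float = 24 * 30) -> float:
    """Cost of a constant power draw over one month (720 hours)."""
    kwh = idle_watts / 1000 * hours
    return kwh * price_per_kwh

print(f"~${monthly_cost(20, 0.13):.2f}/month")  # roughly $2 at these values
```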
  8. Your biggest gotchas with either system are going to be transcoding for Plex, if you need that, and resources for the VM if it is constantly running. For software (CPU-based) transcoding, you need 2000 PassMarks per 1080p encoded stream. That is the Plex recommendation. Both CPUs are in the 6300 - 6700 PassMark range, which means two simultaneous streams is your limit, as you also need about 2000 for unRAID overhead operation. If a VM is running at the same time, you could find yourself in a resource crunch. For hardware (GPU-based) transcoding, you need either an integrated GPU or a PCIe Nvidia graphics card. The i7 3700 has a very limited integrated GPU which will basically give you H.264 encoding but not much else. The i3-9100F would be a much better option, up to and including HEVC 10-bit 4K video, except for that "F": it means the CPU has no integrated graphics. The non-F version of that processor is a great transcoder for Plex. It looks like with either option, you are going to have to go with a supported discrete Nvidia GPU if you need or want hardware transcoding. You may not need transcoding if everything is in a format that can direct play locally, which is recommended. Playing to mobile devices or remotely often requires transcoding. The other consideration with either system is how active the VM will be. You really need at least 2c/2t assigned to a very active VM. The i3 does not support hyperthreading and has only 4 cores. The i7 3700 has 4c/4t, so you have a bit more headroom there. For an active VM you usually want at least 4GB RAM assigned to the VM, and 8GB or more is better depending on intended use. With VMs, you also need to consider things such as video (passthrough GPU or just VNC video), keyboard and mouse setup, USB ports or other things you need to pass through. The IOMMU groupings of your motherboard will make that either easy or very difficult. My guess is that the MSI will have better IOMMU groups, but I do not know that for sure. 
For the docker containers you have listed (with the Plex caveats noted above), either system will work well for NAS and docker use. The VM need complicates it a little, depending on your intended uses, whether it is an occasional-use VM or an always-on heavy-use VM, and your hardware passthrough needs. Both CPUs support VT-x/VT-d for virtualization, so assuming the motherboards do as well, you are good to go with VMs.
    1 point
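The PassMark rule of thumb above can be turned into a quick calculation. The scores below are the ballpark figures quoted in the post, and the helper function is mine:

```python
# Capacity estimate from the rule of thumb in the post: ~2000 PassMark
# per simultaneous 1080p software transcode, plus ~2000 kept back for
# unRAID overhead. CPU scores are the ballpark range quoted above.

STREAM_COST = 2000   # PassMark per 1080p transcode (Plex guideline)
OS_OVERHEAD = 2000   # headroom reserved for unRAID itself

def max_streams(cpu_passmark: int) -> int:
    """How many simultaneous 1080p software transcodes the CPU supports."""
    return max(0, (cpu_passmark - OS_OVERHEAD) // STREAM_COST)

for cpu, score in [("i7 3700", 6300), ("i3-9100F", 6700)]:
    print(f"{cpu} (~{score}): {max_streams(score)} simultaneous streams")
```

Both CPUs land on two streams, matching the post's conclusion.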
  9. I have flash drives in use since the unRAID 4.7 days... more than 10 years ago. One is a USB 1.0 drive and the other is a USB 2.0 drive; both are running fine and haven't given me any problems. If you are losing USB drives at one per year, that seems quite excessive for the amount of writes that should normally be going to a flash drive.
    1 point
  10. I’m having the exact same issue and was about to get on here to ask. I’ve deleted the default.conf and restarted the container many times. I’ve cleared browser cache, opened new private browsers. In viewing the new default.conf, all the new lines appear to be correct as per a comparison posted above of the old and new lines in the conf. Also running 21.0.1 Waiting with bated breath for the LXC gods to share their awesome wisdom and expertise!
    1 point
  11. I was able to rescue the most important data. Thank you for your help guys!!
    1 point
  12. 1) Make sure your current config is NOT set to auto-start the array.
      2) Note the position of each drive as listed in unRAID; cross-reference with the serial number on each disk if needed. TAKE SPECIAL NOTE OF PARITY!
      3) Move the disks and flash drive to the new machine and boot.
      4) Your array will most likely be all wrong. Do a new config, assign the disks as they were before, and ESPECIALLY THE PARITY!
      5) If you did it right, you can tag the parity as VALID and go on with your life. Otherwise, you can rebuild parity, but only AFTER you verify all your data drives are present and populated.
    1 point
  13. Try a different endpoint; France looks to be dead right now.
    1 point
  14. IPMI will allow out-of-band management if you want it.
    1 point
  15. Very unlikely to happen, as Apple itself has dropped AFP from the latest macOS, Big Sur. You should be able to set up Time Machine to point to an SMB share.
    1 point
  16. That will be why you cannot access it now: with LAN_NETWORK defined as '10.10.20.0/24', your IP will now be in a different range and thus blocked. This might have been fixed by switching endpoint; it definitely looks to be working now, so I think your issue is that you are blocked on your VPN range because LAN_NETWORK does not include it (see previous comment above). Wait until you get home and try it on your LAN, or alternatively add your VPN range to LAN_NETWORK (use a comma to separate the networks), restart the container, and try accessing the web UI again. No, not true: debug just gives more verbose output; it does not stop the web UI from running.
    1 point
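The blocking behaviour described above boils down to a CIDR membership test. A minimal sketch of the idea (not the container's actual code; `is_allowed` is an invented helper) using Python's ipaddress module:

```python
# Mimics an allow-list check against the comma-separated LAN_NETWORK
# value: a client IP outside every listed network gets blocked.
import ipaddress

def is_allowed(client_ip: str, lan_networks: str) -> bool:
    """lan_networks is the comma-separated value of LAN_NETWORK."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in ipaddress.ip_network(net.strip())
               for net in lan_networks.split(","))

print(is_allowed("10.10.20.5", "10.10.20.0/24"))            # True: on the LAN
print(is_allowed("10.8.0.2", "10.10.20.0/24"))              # False: VPN range blocked
print(is_allowed("10.8.0.2", "10.10.20.0/24,10.8.0.0/24"))  # True once VPN range added
```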
  17. That is very interesting, I haven't seen that yet. I've had this setup for my Other for about a year and she has yet to have any issues like that when streaming. I'm not 100% sure which versions of software she is running because my production stuff is different than my test stuff. I'll definitely be keeping a look out for that in the future and maybe pitch in on some forums when we experience some issues. Thanks for the heads up this is the first I have heard of it.
    1 point
  18. It's fine, honestly. I didn't want it to come across as narky; I just wanted to prevent any further 'I just upgraded Unraid and now it's broke' type posts, that's all 🙂
    1 point
  19. The problem is eth117 / br117, which I suspect is caused by a stale setting in the network config file. A simple fix is to delete the whole config file, or edit it yourself.
    1 point
  20. I managed to get my FiveM server up and running using the info in this thread to set up txAdmin. I was having issues getting the recipe to run for the base ESX default recipe; I had to add the mariadb and phpMyAdmin docker containers. It appears to be saving and working currently, but it's 6AM and I'm calling it a night. If anyone else has set up an ESX roleplaying server, I'd like to know how you did it and what issues you encountered. I currently can't figure out how to update txAdmin; it says I'm using an outdated version, but it works, so I guess the annoying red text will just be there for now lol.
    1 point
  21. No idea... but start reading from here: It seems @giganode solved the same issue by changing the refresh rate.
    1 point
  22. Can you try a force update of the container and try it again if it stops working after some time? I have updated the container and it should hopefully work now.
    1 point
  23. You are a goddamn hero. Thank you! I seem to be fully up and running again.
    1 point
  24. This is it yes, it's marked as beta for the application, not the packaging as a docker image. Sent from my CLT-L09 using Tapatalk
    1 point
  25. So the drives finished formatting overnight and now the smartctl -i /dev/sdX shows no more Type 2 formatting. Will now proceed with parity build and see if errors come up again.
    1 point
  26. You can do this yourself with a 'user.sh' somewhere on your server (but I don't recommend putting it into the root of your CSGO server directory), and the container will check if the library is installed on every start/restart of the container. To do that: Create a file somewhere on your server named 'csgo.sh' and put the following in it (please note the '-y' on the third line, so that this library will be installed automatically):
      #!/bin/bash
      apt-get update
      apt-get -y install lib32z1
      Mount the script in the Docker template from your server to the container at '/opt/scripts/user.sh': click on "Add", then click on "Apply". Now the container will check on every start if the library is installed. Hope this is something you can work with and does the job for you.
    1 point
  27. As far as I know you're not at RAID anything. Why do you think you're at RAID 0? Your 2 data disks are independent and provide no redundancy. And I'm not suggesting RAID anything. Unraid IS NOT RAID. Even after you add parity you won't be at RAID anything, at least not technically. Unraid allows a parity disk that lets you rebuild a missing disk from parity and all the other disks, but the implementation is different from any traditional RAID. Unlike RAID, Unraid allows you to mix different sized disks in the array, and you can easily add disks without rebuilding the array. Also, each data disk in the array is an independent filesystem that can be read all by itself without the other disks. This also means no striping, of course, so Unraid won't be as fast as striped RAID.
    1 point
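For anyone curious how a single parity disk can rebuild a missing data disk, here is a simplified XOR illustration. Unraid's real implementation differs in the details, but the principle for single parity is the same:

```python
# Single parity is the XOR of all data disks, so any ONE missing disk
# can be reconstructed from parity plus the remaining disks.
# Three-byte "disks" keep the example tiny.

disk1 = bytes([0x12, 0x34, 0x56])
disk2 = bytes([0xAB, 0xCD, 0xEF])
parity = bytes(a ^ b for a, b in zip(disk1, disk2))

# Lose disk2, then rebuild it from parity and the surviving disk:
rebuilt = bytes(p ^ a for p, a in zip(parity, disk1))
assert rebuilt == disk2
print("rebuilt disk2 matches the original")
```

Note this also shows why each Unraid data disk stays an independent filesystem: parity sits on its own disk rather than being striped across the array.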
  28. I am suggesting that parity is never a substitute for backups. But parity is still a good idea. Don't cache the initial data load. Never try to cache more than cache can hold. It is impossible to move from cache to array as fast as you can write to cache. If you keep all the source data until after you get it all copied to your Unraid server, you could wait until then to add parity since the copy would go faster without parity. Then, after making sure you have another copy of anything important and irreplaceable, you could reuse the disks with the source data in your new server. You get to decide what qualifies as important and irreplaceable.
    1 point
  29. I found the solution to this particular problem: I had to enable "Host access to custom networks" in the Advanced Docker settings and enable the desired Subnet.
    1 point
  30. For now forget about increasing or evaluating security and GET THOSE THINGS BACKED UP. Once you have offline backup of items you can't afford to lose, then start dealing with the rest of this. Once you have a backup strategy in place, then you can start evaluating open ports and examining the security of each of the answering services. Of course all this assumes you haven't put your server in the DMZ or anything like that.
    1 point
  31. You're welcome, glad it works now. Have a nice day!
    1 point
  32. It's in the beta. Switch your repository to emby/embyserver:beta or wait for the next release.
    1 point
  33. Don't you care about power consumption? Because that thing will be an unparalleled power hog. For Docker and 1G LAN, that CPU's performance is the lower limit; virtually all current CPUs far exceed it. Do you need an iGPU for Plex? Because that CPU wouldn't even have one. You presumably don't feel like spending €500? That's roughly the range of my recommendation for you: https://geizhals.de/?cat=WL-1881351 Or the budget version for €330: https://geizhals.de/?cat=WL-1928373 This hardware offers many upgrade options: a second M.2 via an adapter card, a powerful Xeon processor, up to 8 HDDs (depending on the case), etc. Though for the case I would always go with the Deep Silence 4, since the drives are much easier to install. It could be done even cheaper, but then only without ECC RAM. Here are the CPUs compared: https://www.cpubenchmark.net/compare/Intel-Pentium-Gold-G5400-vs-Intel-i3-9100-vs-Intel-Xeon-E5-4640/3248vs3479vs1224 What matters is not the overall score but the Single Thread Rating, and there the Xeon E5 4640 is very weak.
    1 point
  34. ...what motherboard is that? 2x SATA3 and 2x SATA2, and only one M.2 of unknown specification... at least if the richest man in the world is to be trusted. Do you have a good smoke detector and insurance? Quite honestly... a used brand-name Dell, Fujitsu, Lenovo or HP, even a DDR3 one if need be, will do the job, if nothing better can be found. What is your budget, excluding drives? Edit: or rather, what exactly do you already have on the table?
    1 point
  35. I got an HBA last week from this guy: https://www.ebay.fr/usr/bd-xl?_trksid=p2047675.l2559 It got there pretty fast from the Netherlands (to France). It's just a sample of one, but that is better than nothing.
    1 point
  36. No, see Q28: https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
    1 point
  37. But how do I do that in Unraid with Plex in docker? Is there an easy way? Edit: I'm using this docker (are there any differences between the dockers; is there one I should use instead?): https://hub.docker.com/r/linuxserver/plex/ When changing from "latest" to "1.22.0.4163" in VERSION on the "edit docker page" in Unraid and pressing Apply, I'm still on version 1.22.2.4282 in app.plex.tv when I check from a browser. Edit2: Is it version-1.22.0.4163-d8c4875dd I should write instead? Or do I need to specify more? Edit3: 1.22.0.4163-d8c4875dd was the exact line. Now I'm down to the working version. When Plex has fixed the problem, what is the best version to be on then: docker, latest, or public? /Söder
    1 point
  38. Change the endpoint; see Q28 for how to do this: https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
    1 point
  39. DEVELOPER UPDATE: 😂 But for real guys, I'm going to be stepping away from the UUD for the foreseeable future. I have a lot going on in my personal life (divorce among other stuff) and I just need a break. This thing is getting too large to support by myself. And it is getting BIG. Maybe too big for one dash. I have plenty of ideas for 1.7, but I'm not even sure if you guys will want/use them. Not to mention the updates that would be required to support InfluxDB 2.X. At this point, it is big enough to have most of what people need, but adaptable enough for people to create custom panels to add (mods). Maybe I'll revisit this in a few weeks/months and see where my head is at. It has been an enjoyable ride and I appreciate ALL of your support/contributions since September of 2020. That being said, @LTM and I (mostly him LOL) were working on a FULL documentation website. Hey man, please feel free to host/release/introduce that effort here on the official forum. I give you my full blessing to take on the "support documentation/Wiki" mantle, if you still want it. I appreciate your efforts in this area. If LTM is still down, you guys are going to be impressed! I wanted to say a huge THANK YOU to @GilbN for his original dash, which 1.0-1.2 was based on, and ALL of his help/guidance/assistance over the last few months. It has truly been a great and pleasurable experience working with you man! Finally, I want to say a huge thanks to the UNRAID community and its leadership @SpencerJ @limetech. You guys supported and shared my work with the masses, and I am forever grateful! I am an UNRAIDer 4 LIFE! THANKS EVERYONE!
    1 point
  40. @giganode @saber1 butter me up and call me toast 😅 I got it working. For NVRAM I selected iMacPro1,1, and that also filled in values in SMBIOS for me. The decisive point, though, was that I had to read out the PCI name of the network card with Hackintool and enter it under "DeviceProperties". There, enter the key "built-in" with the value "01" and the type "DATA", restart the Hackintosh, and now the login worked. And if they don't come to take me away tomorrow, they apparently don't mind either 😁😆
    1 point
  41. How to Setup Nextcloud on unRAID for your Own Personal Cloud Storage https://www.youtube.com/watch?v=fUPmVZ9CgtM How to Setup and Configure a Reverse Proxy on unRAID with LetsEncrypt & NGINX (use Swag) https://www.youtube.com/watch?v=I0lhZc25Sro If you use Swag, you no longer need to follow this last video: How to Migrate from Letsencrypt to the New Swag Container https://www.youtube.com/watch?v=qnEuHKdf7N0
    1 point
  42. I would suggest backing up the database, or copying it into a new database, and then experimenting with that. So: install the other Nextcloud container, stop it, and then copy the web files and shares into the new container's directory. In the last step, adjust the database settings in the new container's config.php so that it connects to the copy, then finally start it and see what happens. As long as you only work with copies of the data, nothing can go wrong. PS: I use Beyond Compare to compare directories and files. That way you can see whether the installations differ significantly anywhere, although with the same version they really shouldn't.
    1 point
  43. Something is wrong with your flash drive, it may be defective. Reboot, if the problem persists please post a screenshot so we can see the full message.
    1 point
  44. Don't know if this has been mentioned yet, but Quick Sync stops working after version 1.22.0.4163. I tested this across multiple containers, each with the same results.
    1 point
  45. Good afternoon everyone. I currently use Untangle as my home router; the only issue is that it does not have an avahi/mDNS feature built into it. This creates an issue with smart home devices, as this is a critical service for them to function. Does anyone know of a way to use Unraid, preferably a docker, as the mDNS forwarder/server for the network, to keep the devices off the main network but still allow stuff like that to work?
    1 point
  46. To rebuild to the same disk:
      1. Stop the array
      2. Unassign that disk
      3. Start the array with the disk unassigned
      4. Stop the array
      5. Reassign that disk
      6. Start the array with the disk reassigned to begin the rebuild
    1 point
  47. I was trying to do this recently and think I may have found a different way that is a little more foolproof. I wanted to move my appdata share from "disk1" to the "cache." What I did was this: 1. Shutdown the Plex Docker. 2. Change the "appdata" user share cache setting from "no" to "Prefer." (From help docs for Prefer: When the mover is invoked, files and subdirectories are transferred off the array and onto Cache disk/pool.) 3. Run the Mover. 4. Change the "appdata" user share cache setting from "Prefer" to "Only." That seemed to do what was explained here, and avoided using the command line.
    1 point
  48. Note to self: info here: https://packaging.python.org/tutorials/installing-packages/#ensure-you-can-run-pip-from-the-command-line
      pip does not work from scratch; first run:
      python -m ensurepip --default-pip
      Then optionally upgrade pip using:
      pip install --upgrade pip
      And then just install the package from https://pypi.org/project/requests/ using:
      pip install requests
    1 point