Leaderboard

Popular Content

Showing content with the highest reputation on 01/09/21 in all areas

  1. Hello everyone. After receiving a lot of help here, I'd like to give a little something back. My main requirement was the case: it had to fit into a 19" rack. The new Fractal Design R5 fits exactly into a 19" rack if you peel the rubber off the feet and then unscrew the feet with the screws underneath the rubber. Unfortunately, in this server cabinet I can no longer reach the front USB ports, even if I pull the case further out. So I decided to attach the USB stick internally to the board via a pin header.

For lack of money I went with used hardware: M/B: Gigabyte Technology Co., Ltd. H97-D3H-CF, CPU: Intel® Xeon® CPU E3-1231 v3, Memory: 16 GiB DDR3 Kingston HyperX, and some Scythe tower cooler, for a total of 110 euros. On top of that came a USB pin header, a few cables, and a USB stick for about 25 euros altogether. My existing 4TB Seagate IronWolf serves as the parity drive, and as data disks I use two of the much-maligned WD Red 4TB EFAX drives. Since my existing 240GB Samsung cache SSD was not only too old but also far too small, I replaced it with a 1TB Crucial MX500. Together with the be quiet! Pure Power 11 500W power supply and the Unraid Basic license, the complete NAS comes to about 630 euros. Subtracting the hard drives, the NAS base system is just under 350 euros, considerably less than a ready-made NAS that wouldn't be as capable.

Since I have a bit to do with server hardware (bare metal) through my job, the server was placed on a sliding rail. That way, thanks to the wonderful case and suitable cables, you can swap the cache SSD in a pinch without having to heave the server out, because the side panel doesn't need to be screwed on. My rack isn't perfect yet, but it works well so far. Looks will come later.

The server runs very well so far. There is plenty of power, so I can use Nextcloud and stream a movie to my TV via Plex at the same time. Don't be confused by the displayed temperatures; my basement isn't all that warm. With the hard drives spun up, the server draws just under 50 watts according to my power meter. As soon as the server goes idle and the HDDs spin down, the meter shows 33 watts. That works out to yearly costs of 86 euros at idle, so I reckon on operating costs of about 100 euros per year. That's not nothing, but compared with what 8TB of storage would cost anywhere else, I can live with it.

Finally, I'd like to thank everyone who offered tips here, above all @mgutt and @Ford Prefect, but also the many others who posted and provided food for thought; they helped a great deal. Without you it would have been much harder to get into the topic. 👌 And to all newcomers I'd recommend one little tool: Heimdall. Really wonderful. Regards, Martin
    2 points
  2. With the release of Unraid 6.8 comes support for WireGuard VPN connections. At the moment the GUI part is offered as a separate plugin, but it will be integrated into Unraid in the future. This approach allows for quick updates and enhancements without dependency on Unraid version releases. People starting with WireGuard should read the quick-start guide written by @ljm42; please use his topic only to ask questions about using and setting up WireGuard. The GUI has online help as well; please have a look at that too. Use this topic to report any issues, bugs, or proposed enhancements for the WireGuard functionality. This way things stay grouped together. Thanks
    1 point
  3. Application Name: SWAG - Secure Web Application Gateway
Application Site: https://docs.linuxserver.io/general/swag
Docker Hub: https://hub.docker.com/r/linuxserver/swag
Github: https://github.com/linuxserver/docker-swag
Please post any questions/issues relating to this Docker container in this thread. If you are not using Unraid (and you should be!) then please do not post here; instead head to linuxserver.io to see how to get support. PS: This image was previously named "letsencrypt"; however, due to a trademark-related issue, it was rebranded SWAG and is being published in new repos. In order to migrate to the new image, all you need to do (at a minimum) is open the container settings and change the "Repository" field from "linuxserver/letsencrypt" to "linuxserver/swag". If you prefer, you can change the container name to "swag" as well, although that is not required. As long as you keep the environment vars the same and the "/config" folder mount the same, all the settings will be picked up by the new container. Please see here for more detailed instructions: https://github.com/linuxserver/docker-swag/blob/master/README.md#migrating-from-the-old-linuxserverletsencrypt-image
    1 point
  4. PLEASE - PLEASE - PLEASE EVERYONE POSTING IN THIS THREAD: IF YOU POST YOUR XML FOR THE VM HERE, PLEASE REMOVE/OBSCURE THE OSK KEY AT THE BOTTOM. IT IS AGAINST THE RULES OF THE FORUM FOR THE OSK KEY TO BE POSTED.... THANK YOU. The first Macinabox has now been replaced with a newer version, as below. Original Macinabox, October 2019: no longer supported. New Macinabox: added to CA on December 09 2020. Please watch this video for how to use the container; it is not obvious from just installing the container. It is really important to delete the old Macinabox, especially its template, else the old and new templates combine. Whilst this won't break Macinabox, you will have old variables in the template that are no longer used. I recommend removing the old Macinabox appdata as well.
    1 point
  5. Thanks for doing the plugin and instructions.
    1 point
  6. The drivers are compiled and provided by limetech, I simply created a plugin and the instructions page so that the installation is as easy as possible.
    1 point
  7. If it is caused by one or more Docker containers, you can check which ones are using the most RAM by turning on the Advanced View in the upper right corner of the Docker tab. You can also limit how much RAM a Docker container uses by adding the --memory= parameter to the container's Extra Parameters. Here is an example of one of my containers, which I have limited to a maximum of 4GB RAM:
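(The original screenshot is not preserved here, so the following is a minimal sketch of the setting; the 4GB cap comes from the post, the container name is just an illustration. In the container's Edit view, enable Advanced View and set:

    Extra Parameters: --memory=4G

This is equivalent to the docker run flag:

    docker run -d --name=binhex-krusader --memory=4G ...

With the limit in place, processes inside the container are OOM-killed by the kernel rather than being allowed to exceed 4GB.)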
    1 point
  8. Here's a video tutorial for the UUD! Not sure of Nate's forum handle, but thank you!
    1 point
  9. Thank you very much for your help.
    1 point
  10. I will look into this ASAP, weekends are usually family time for me...
    1 point
  11. Everything is working now - thanks a lot for the quick response!
    1 point
  12. They introduced a new dependency; I've added it now and it should start just fine. Please force an update of the container on your Docker page in Unraid.
    1 point
  13. You can copy panels from here https://grafana.com/grafana/dashboards/10914
    1 point
  14. I think it's because the symbolic link needs a trailing '/' to indicate it's a directory. i.e. /mnt/disks/shareName/
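(A minimal illustration of the suggestion; the link name and destination are hypothetical, and the point is the trailing slash on the target:

    ln -s /mnt/disks/shareName/ linkName

The slash is stored verbatim in the link, signalling to whatever consumes it that the target is a directory.)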
    1 point
  15. I see your 3 and raise you my 21. I manage a whole raft of sites, I'm not about to upgrade without good reason.
    1 point
  16. Unfortunately, without diagnostics from RC2, it will be hard to determine why...
    1 point
  17. It won't be passive anyway because of the HDDs, and you could passively cool the G5400's 30W if needed, but that adds no real value. I have the ASRock J5005 in my backup NAS. There it made "sense", since I simply had no room for a cooler 😅 The fan size installed in that case is really nasty, if I may say so ^^ You can't go above 2U? This one costs the same and should be of considerably better quality (4U): https://www.silverstonetek.com/product.php?pid=331&area=de There are also accessories for mounting it in a rack. That said, Martin's solution even allows more convenient access to the HDDs: https://forums.unraid.net/topic/101154-meine-19-spar-serverkonfiguration/
    1 point
  18. Hm. What I didn't like so much about the Fantec cases: as far as I can see, the SATA/SAS backplane + SFF-8087 requires an adapter cable. I would replace the three 40mm fans with Noctua ones right from the start. Then there's this restriction: ATX power supply with 80mm fan, or mini-redundant PS/2. That, together with the reports of mediocre build quality and poor hot-swap trays, kept me from buying one. I would have leaned toward a Chenbro server chassis, but couldn't find one in my price range (100€) and solved it differently: https://www.ebay.de/itm/SUPERMICRO-CSE-825-CHASSIS-2U-PSU-560W-80-BACKPLANE/173159294058?_trkparms=aid%3D1110006%26algo%3DHOMESPLICE.SIM%26ao%3D1%26asc%3D20131231084308%26meid%3D30c7b8aebb23402a81e17fce269e4cd6%26pid%3D100010%26rk%3D7%26rkt%3D12%26mehot%3Dpp%26sd%3D133627704017%26itm%3D173159294058%26pmt%3D0%26noa%3D1%26pg%3D2047675%26algv%3DDefaultOrganicWithAblationExplorer%26brand%3DSupermicro&_trksid=p2047675.c100010.m2109 Regards, Martin
    1 point
  19. Took care of it; permissions are back to root:root. Thanks @dmacias! Interesting that you were still able to use your SSH keys; that's what identified the issue for me. I was receiving an error saying my keys were ignored due to bad permissions on /etc.
    1 point
  20. You will need to enable physical passthrough of the device or NICs. There is a video: ...and here's the plugin available to help with doing this:
    1 point
  21. 1 point
  22. Hello, during the night the container was automatically updated from 7.6.21-ls25 to 7.6.21-ls26. Since then, the existing GPU slot is disabled, with the following message in the log:

08:22:15:WARNING:FS01:No CUDA or OpenCL 1.2+ support detected for GPU slot 01: gpu:43:0 GP106GL [Quadro P2000] [MED-XN71] 3935. Disabling.

The server had been folding for at least 10 days, and nothing else has changed in the setup (Unraid 6.9.0-beta35 with Nvidia Driver). Output of nvidia-smi:

Sat Jan  9 09:26:32 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.45.01    Driver Version: 455.45.01    CUDA Version: 11.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Quadro P2000        Off  | 00000000:2B:00.0 Off |                  N/A |
| 64%   35C    P0    16W /  75W |      0MiB /  5059MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

So I think this release has an issue, probably far beyond my technical skills... Does anybody know how, as a workaround before a fix, I could downgrade from 7.6.21-ls26 to 7.6.21-ls25, which was perfectly fine? Thanks in advance for the support.

Edit: I got support on the linuxserver.io Discord channel; problem temporarily solved. Thanks again! The container has been updated with new Nvidia binaries, and it seems it is no longer compatible with my installed drivers (455.45.01). I don't know if it would be OK with the regular drivers proposed by the Nvidia Driver plugin, i.e. 455.38 in 6.9.0-beta35. For sure, the best solution would be to have the latest stable Nvidia drivers (v460.32.03) in Unraid, as the latest version of the container is fully compatible with them. The workaround to downgrade to the previous version of the container is to edit the template repository from "linuxserver/foldingathome" to "linuxserver/foldingathome:7.6.21-ls25". Folding again, which is what matters most for the moment! But this example clearly raises the problem of regular updates of Nvidia drivers by Limetech once 6.9.0 is stable... Until then, I'll stick to the ls25 version of the container.

Edit 2: As I had posted an issue on GitHub, the developers offered me a dev build to test. I did, and it worked fine, so I suppose a new version of the container will soon be publicly available and others should not have the same issue. Thanks for the responsiveness to @aptalca and the linuxserver team!
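(For reference, pinning the tag in the template is the GUI counterpart of pulling the pinned image from a shell, e.g.:

    docker pull linuxserver/foldingathome:7.6.21-ls25

Leaving the tag off, or using :latest, would resume tracking the newest release.)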
    1 point
  23. That is an error generated within the Parity Check Tuning plugin's code that should only be possible if there is something wrong with the /etc/unraid-version file on the system (which is used to check what version of Unraid is currently running). Since the contents of that file are built into Unraid, its contents should always be correctly formatted. That implies to me that there might be some RAM issue that has led to that file being corrupted, especially as the rest of the logged message looks like gibberish. As a check, I would be interested if you could start up a console session, run the command

    cat /etc/unraid-version

and post the results. I would expect it to be something of the form version="6.8.3"
    1 point
  24. Looks like he has 2 dockers that were updated 2 days ago
    1 point
  25. This doesn't seem to be the correct support thread for that. You can go directly to the correct support thread for any of your dockers by clicking on its icon and selecting Support.
    1 point
  26. Trurl, we seem to be completely out of the woods! Let me know if I can help you in any way! Patreon account or what have you!! You the best!
    1 point
  27. Use UD to mount an ISO and it will be shared in the VM.
    1 point
  28. Anyone know which Plex folders can be excluded in the CA Backup / Restore application? I heard that you don't need to back up the entire Plex appdata directory. It takes quite a while to do a backup now, and I suspect that the Plex files are responsible.
    1 point
  29. Ok, just went through the thread. I will start from scratch with the key loaded onto the USB. Thanks a lot!
    1 point
  30. This does work with SWAG as-is on my machine. There may be an issue with your port setup.
    1 point
  31. No, both stay. The only thing gone is the ES machine learning (hook processing).
    1 point
  32. 2021-01-08 now released. Should fix update issues with UnRAID 6.9. Thanks @mlapaglia for the contribution.
    1 point
  33. The following is based on a request by the OP @Waffle. I'll be editing this post throughout the day as I complete my thoughts, so until I put a big "DONE" at the top, consider it an incomplete work in progress. I just don't want to lose my work by accident.

Copying all data from the current array to new disks as Unassigned Devices. "Tools" used (hyperlinks): Community Applications, Unassigned Devices and UD+, binhex - krusader. When I make reference to any of the above, use the hyperlinks to get specific instructions on how to install or manage them. They are a treasure trove of information.

We'll need Docker to work, so if you haven't already, go to your 'SETTINGS' tab and pick 'DOCKER' under System Settings. Make sure it's enabled. You'll need to install Community Applications first. Once you have that, everything else is installable from there by searching for it by name.

With all of them installed, and assuming you have plugged in drives that are NOT part of the array, you should be able to see drives listed in 'Unassigned Devices'. If Format is greyed out, you'll need to go into UD's settings and enable 'Destructive Mode'. Click 'FORMAT' to format the disk. Make sure to pick XFS (it's the default), as you're turning these new disks into an array later. Follow the prompts to complete formatting. 'MOUNT' the disk by pressing the button. Repeat as needed for all of the Unassigned Devices you have listed that need formatting.

Let's get ready to copy some data. You'll have the krusader Docker available on the 'DOCKER' tab. If it's not started, click on its icon and pick 'Start' in the menu. Once it's running, same menu, click 'WebUI'. Krusader is just a GUI file manager. It's easy enough to use, but take note it only has a single folder 'tree' on the left; make sure you have clicked into the correct target window if you want to change your left and right views, otherwise you'll wonder what you've done wrong. Also, the 'noVNC' call-out on the left has a right-pointing chevron; click it. It'll get that menu mostly out of the way.

The folder you'll care about is the 'HostMedia' folder. You'll find everything important in there. Here, we need to make a decision. The 'disk#' folders are the individual data disks in your array; you can access your data through them, but only for the specific disk in question. The 'user' folder contains all of your shares; from there you can access your data without regard to which disk it's actually on. *IF* you have a known failing/problematic disk in the array, I'd suggest going directly to that disk to copy the data on it. The 'disks' folder contains all of the Unassigned Devices drives you've added/formatted above.

From this point on it's just a matter of using the krusader file manager to get your source and target set up, and clicking 'F5 Copy'.
    1 point
  34. All disks are mounted, including the emulated disk2. The emulated data is exactly what will be rebuilt, so it looks pretty good. To rebuild to the same disk:
1. Stop array
2. Unassign disk
3. Start array with the disk unassigned
4. Stop array
5. Reassign disk
6. Start array to begin rebuild
During the rebuild, you should see lots of Writes to the rebuilding disk, lots of Reads on all other disks, and zeros in the Errors column for all disks. I would estimate 4TB to take 8-12 hours. Using the array during that time won't break anything, but it will make everything go slower.
    1 point
  35. Another really cool thing happened today. @SpencerJ has featured the UUD as the very FIRST recipient of the "Best of the Forum" blog.🥇 I wanted to personally thank him for writing up this article and for his continued support of our work. You can check out the blog post here. ☺️ https://unraid.net/blog/ultimate-unraid-dashboard
    1 point
  36. Hey, yeah sorry, I've wanted to write this down for quite some time. The goal was to view and control Dockers and VMs from within Grafana: status of containers/VMs, display of the container/VM icons, and enabling/disabling containers/VMs.

Short version: install the Unraid API plugin, install the Grafana JSON API plugin, install the Grafana Dynamic Image plugin.

Long version: To get this info, I use the Electric Brain Unraid API. Luckily, this Unraid addon creates a JSON API which we can use. There is nothing special about the API: just install and set it up.

The second part is to show the data from the Unraid API in Grafana. For that I'm using the Grafana plugin grafana-json-datasource:
https://grafana.com/grafana/plugins/marcusolsson-json-datasource?pg=plugins&plcmt=featured-undefined&src=grafana_footer
https://github.com/marcusolsson/grafana-json-datasource?utm_source=grafana_add_ds
Install this data source plugin in Grafana and create a new data source. Enter your Unraid IP with the port of the API and append /api/getServers, like http://192.168.x.x:3005/api/getServers

From here, we just have to display the Unraid API data in some nice way. Examples of how to use the newly created JSON data source:

Get all VMs:
$.['servers']['192.168.2.254']['vm']['details'][*].name
Get the status of these VMs:
$.['servers']['192.168.2.254']['vm']['details'][*].status
Get all Docker containers:
$.['servers']['192.168.2.254']['docker']['details']['containers'][*].name
$.['servers']['192.168.2.254']['docker']['details']['containers'][*].status
$.['servers']['192.168.2.254']['docker']['details']['containers'][*].imageUrl
If you only want to see the running containers:
$.['servers']['192.168.2.254']['docker']['details']['containers'][?(@.status== 'started')].name

From here we can make a panel which shows only the running containers as icons. For this we need another Grafana plugin, called Dynamic Image Panel:
https://grafana.com/grafana/plugins/dalvany-image-panel?pg=plugins&plcmt=featured-undefined&src=grafana_footer
We again use our JSON data source, searching for all started containers, and get the image URL plus the container name as the tooltip. You have to add your Unraid URL as the base URL.
$.['servers']['192.168.2.254']['docker']['details']['containers'][?(@.status== 'started')].imageUrl
$.['servers']['192.168.2.254']['docker']['details']['containers'][?(@.status== 'started')].name

Just ask if there are any questions left.
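(Not from the post, but a quick way to sanity-check the data source before building panels is to query the endpoint directly from a shell, with the IP and port as configured above:

    curl http://192.168.x.x:3005/api/getServers

If the plugin is running, this returns the JSON document that the JSONPath expressions above select from.)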
    1 point
  37. And in any case, parity is not a substitute for a backup plan. You must always have another copy of everything important and irreplaceable. Parity can help recover from a failed disk, but there are plenty of more common ways to lose data, including user error.
    1 point
  38. Probably. The way Unraid uses parity protection, your data is only as safe as your weakest drive. All the sectors of all the drives are needed when reconstructing a failed drive, so having a known bad drive in the array, even though it's still "working" at the moment, is a bad idea. Imagine this scenario: you purchase a pair of brand new drives, one for parity, one for data1. You decide that your gaggle of old drives are good enough, they haven't completely died yet, and what are the chances of two dying at the same time, right? Parity will allow you to rebuild to a new drive if one of the old ones fails, so you feel safe. Until one of your brand new drives decides to fail, and that one weak old drive decides it can't handle all the stress of rebuilding, so you've just lost both drives' worth of data. The parity check is a good tool for keeping up with the health of your drives; if something feels off, like it did with that failing drive in place, you need to figure out why. If a parity check won't complete error-free in a timely fashion, a rebuild of a failed drive won't either. Also... not all drive issues are really disk failures; MANY times it's connections or power.
    1 point
  39. If you are running the latest Unraid 6.9.0 rc release then the Parity Check Tuning plugin can now restart array operations after a shutdown/reboot or array stop/start.
    1 point
  40. To remove a drive you first need to stop the array, then go to Tools - New Config to set it up, then remove the drive and start the array. Whether the removal is permanent or temporary, the array loses the data stored on the removed drive as well as the parity data covering the other drives, so parity must be rebuilt; otherwise, if another drive fails while the parity data is incomplete, recovery may be impossible. As for removing a drive while the array is running, you would presumably have to spin the drive down and hot-swap it, but doing so will keep reporting array errors until you reset the array or insert a drive whose capacity is no smaller than the original. The newly inserted drive will then have the missing data of the original drive rebuilt onto it.
    1 point
  41. Want to say thanks again for CA Backup / Restore Appdata, as it just saved me a lot of time. I changed the location of my appdata folder to a different hard drive, forgot one setting in Plex, and lost my whole config last night. Thankfully I slept on it and remembered, "hey, I have a backup." This app just saved me a few hours, and it has saved me a few hours a few other times too. Thank you, craigr
    1 point
  42. If your Pi-hole isn't your DNS server, then the peer DNS server should be your router or something else (the default gateway). In this case, try putting 10.100.1.1 rather than 10.100.1.2 as in the picture above. Don't forget to scan your code again (if you're using your mobile device) when you change this setting.
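(For orientation, a minimal sketch of where that value lands in the peer's WireGuard config; the 10.100.1.1 address comes from the post, while the key and tunnel address are placeholders:

    [Interface]
    PrivateKey = <peer-private-key>
    Address = 10.100.1.2/32
    DNS = 10.100.1.1

Re-exporting the config, or rescanning the QR code as noted above, applies the change on the mobile device.)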
    1 point
  43. The backup does not include the docker.img (binary files), as it's very easily recreated and the apps reinstalled, and it would automatically add 20+ gig to the backup set for no benefit.
    1 point
  44. Map an external port to the Zoneminder docker port 80.
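(A hedged illustration; host port 8080 below is an arbitrary free port, not something from the post. In the container template, add a port mapping from the host port to container port 80, equivalent to the docker run flag:

    -p 8080:80

The Zoneminder web UI would then be reachable at http://<server-ip>:8080.)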
    1 point
  45. Trying to get this to run together with Home Assistant. When trying to access the camera feeds via HA, it fails. The issue seems to lie in the deactivated HTTP access. Is there an easy way to reactivate HTTP access?
    1 point
  46. OK, the issue appears to be solved by disabling discovery. I did, however, notice that a radio I am running at https://darkone.network/radio/ was mounting an SMB share using ver 1.0 (SMB1), and maybe this was the cause? I changed the mount to 3.0, but I have not yet re-enabled discovery as it is not something I need. Perhaps the SMB 1.0 mount was causing trouble? Anyway, issue resolved.
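(For anyone making the same change, a sketch of forcing the protocol version on a Linux CIFS mount; the server, share, username, and mount point are placeholders:

    mount -t cifs -o vers=3.0,username=myuser //server/share /mnt/share

SMB1 is long deprecated, so pinning vers=3.0 is generally the safer choice where the server supports it.)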
    1 point
  47. Netdata
    1 point
  48. You need to set up a persistent name for the USB device based on its USB ID. There is a tutorial for doing that here: http://hintshop.ludvig.co.nz/show/persistent-names-usb-serial-devices/ You will also probably need to figure out how to get Unraid to regenerate the udev rules on every reboot.
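(As an illustration of the kind of udev rule that tutorial describes; the vendor/product IDs and symlink name are placeholders, and on Unraid the rule would need to be re-created on each boot, e.g. from the flash drive's go script, since /etc is not persistent:

    # /etc/udev/rules.d/99-usb-serial.rules
    SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", SYMLINK+="ttyUSB-mydevice"

After reloading the rules, /dev/ttyUSB-mydevice can be passed to the VM instead of a ttyUSB number that may change between boots.)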
    1 point
  49. Feature request: Instead of one giant tarball, could this app use separate tarballs for each folder in appdata? That way it would be much easier to restore a specific app's data (manually) or even pull a specific file, since most of them could be opened with untar GUIs. Plex is the major culprit with its gargantuan folder.
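(To make the request concrete, a rough shell sketch of per-folder archives; the paths assume the stock /mnt/user/appdata location and a hypothetical destination folder, and this illustrates the idea rather than anything the app currently does:

    for d in /mnt/user/appdata/*/; do
      name=$(basename "$d")
      tar -czf "/mnt/user/backups/$name.tar.gz" -C /mnt/user/appdata "$name"
    done

Each app could then be restored independently from its own $name.tar.gz.)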
    1 point
  50. Hi, I have made a video guide about how to set up Radarr. It shows how to prepare your existing movie catalogue to be compatible with Radarr, how to install the container, and finally how to configure and use Radarr. Hope you find it useful!
    1 point