Leaderboard

Popular Content

Showing content with the highest reputation on 05/01/21 in all areas

  1. The new repository is vaultwarden/server:latest. Change it in the Docker settings: stop the container, rename the repository to vaultwarden/server, hit Apply, and start the container. That's it. Don't forget to go to unRAID Settings >> click on Fix Common Problems (if the scan doesn't start automatically, click RESCAN) and you will receive a notification to apply a fix for the *.xml file change. I just went through this procedure and can verify everything went smoothly.
    6 points
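     The same swap can also be done from a shell; this is only a rough sketch assuming a typical setup (container name, host port, and appdata path are placeholders, and the data survives because it lives in the mapped /data folder, not in the container). On unRAID the Docker page performs the equivalent steps for you:

        # stop and remove the container built from the old repository
        docker stop vaultwarden
        docker rm vaultwarden
        # pull the new repository and recreate the container with the same mappings as before
        docker pull vaultwarden/server:latest
        docker run -d --name vaultwarden \
          -v /mnt/user/appdata/vaultwarden:/data \
          -p 8080:80 \
          vaultwarden/server:latest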
  2. Hi, I just wanted to briefly introduce my Unraid server / rack. The following is installed:
     - AMD Ryzen 3800x
     - ASRock Rack X570D4U-2L2T
     - 64GB DDR4-3200 ECC
     - Alpenföhn Brocken Eco
     - Fractal Define R5 -> IPC 4U-40248
     The following drives are installed (not all of them are visible in the photo, since it is an older picture):
     - 1x 16 TB as parity
     - 1x 14 TB
     - 1x 12 TB
     - 2x 4 TB
     Docker:
     - Nextcloud
     - NginxProxyManager
     - database
     - PiHole
     - Plex
     - paperless-ng
     - makeMKV
     - grocy
     - BarcodeBuddy
     - 2x Minecraft servers via MineOS
     - Ark server
     - a few other small Docker containers
     VMs:
     - 1x Windows for a DayZ server
     - 1x Windows for miscellaneous use
     Rack:
     - Dell R210II for pfSense (Noctua fan installed) - standard firewall settings - OpenVPN server, so I can reach my home network from my mobile devices while on the road - a VPN provider, which several Docker containers are routed through - pfBlockerNG, which isn't running perfectly yet
     - Fritzbox 7490 for WiFi
     - Smart-UPS 1500
     - Mikrotik CSS326-24G-2S+
     - QNAP TS-253Be-4G -> serves as backup; it boots every two days and Unraid pushes the data to the QNAP via rsync --> new backup server
     - a small shelf for external drives and other stuff
     Planned:
     - a second Mikrotik for 10Gbit - CRS305-1G-4S+IN or CRS309-1G-8S+IN, I haven't decided yet
     - an external backup solution; I'm still undecided whether a second server should sit at someone else's place or whether it goes into the cloud
     The whole rack draws about 100 watts according to the APC and 110 watts according to the TP-Link socket. I can't say what the server alone draws, since I haven't measured it. The Dell R210II will get a different CPU (E3-1220L v2), since the currently installed E3-1220 (V1) is far too oversized; that will save a few more watts as well.
     EDIT 03.07.2021: This is what it looks like now:
     -------------------------------------OLD------------------------------------------------------------------------------
     Thanks to everyone for the support and advice! Great support here. Regards
    3 points
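     The rsync push to the QNAP described above could look roughly like this; the share name, the destination path, and the QNAP hostname are made-up placeholders:

        # mirror the backup share from the Unraid array to the QNAP over SSH;
        # -a preserves permissions/timestamps, --delete removes files that were deleted on the source
        rsync -av --delete /mnt/user/backup/ admin@qnap-backup:/share/UnraidBackup/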
  3. It's down for everyone I think.
    3 points
  4. Thanks for the thorough response. Me and the 10479 people that will ask after me VERY MUCH appreciate it :-)
    3 points
  5. This thread is meant to replace the now outdated old one about recommended controllers. These are some controllers known to be generally reliable with Unraid:
     Note: RAID controllers are not recommended for Unraid, and this includes all LSI MegaRAID models. That doesn't mean they cannot be used, but there can be various issues because of it, like no SMART info and/or temps being displayed, disks not being recognized by Unraid if the controller is replaced with a different model, and in some cases the partitions can become invalid, requiring rebuilding all the disks.
     2 ports: Asmedia ASM1061/62 (PCIe 2.0 x1) or JMicron JMB582 (PCIe 3.0 x1)
     4 ports: Asmedia ASM1064 (PCIe 3.0 x1) or ASM1164 (PCIe 3.0 x4 physical, x2 electrical, though I've also seen some models using just x1)
     5 ports: JMicron JMB585 (PCIe 3.0 x4 physical, x2 electrical). These JMB controllers are available in various different SATA/M.2 configurations, just some examples:
     6 ports: Asmedia ASM1166 (PCIe 3.0 x4 physical, x2 electrical) *
     * There have been some reports that some of these need a firmware update for stability and/or PCIe ASPM support, see here for instructions. These exist with both x4 (x2 electrical) and x1 PCIe interfaces; for some use cases the PCIe x1 may be a good option, e.g., if you don't have larger slots available, though bandwidth will be limited:
     8 ports: any LSI with a SAS2008/2308/3008/3408/3808 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, 9400-8i, 9500-8i, etc., and clones like the Dell H200/H310 and IBM M1015; these latter ones need to be crossflashed. (Most of these require an x8 or x16 slot; older models like the 9201-8i and 9211-8i are PCIe 2.0, newer models like the 9207-8i, 9300-8i and newer are PCIe 3.0.)
     For these, when not using a backplane, you need SAS to SATA breakout cables: SFF-8087 to SATA for SAS2 models, SFF-8643 to SATA for SAS3 models. Keep in mind that they need to be forward breakout cables (reverse breakout cables look the same but won't work; as the name implies they work in the reverse direction, with SATA on the board/HBA and the miniSAS on a backplane). Sometimes they are also called Mini SAS (SFF-8xxx Host) to 4X SATA (Target), which is the same as forward breakout.
     If more ports are needed you can use multiple controllers, controllers with more ports (there are 16 and 24 port LSI HBAs, like the 9201-16i, 9305-16i, 9305-24i, etc.), or use one LSI HBA connected to a SAS expander, like the Intel RES2SV240 or HP SAS expander.
     P.S. Avoid SATA port multipliers with Unraid, and also avoid any Marvell controller. For some performance numbers on most of these, see below:
    1 point
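     To check which SATA/SAS controller Unraid actually sees and which kernel driver it bound (useful before and after swapping cards), something like this from the console should work; LSI HBAs flashed to IT mode typically show mpt2sas or mpt3sas as the driver in use:

        # list storage controllers with their vendor/device IDs
        lspci -nn | grep -Ei 'sata|sas|raid'
        # show the kernel driver in use for each of them
        lspci -k | grep -EiA3 'sata|sas|raid'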
  6. The solution should be to update Unraid, or if that's not possible, install a newer version of 'runc' yourself (search and you'll find a how-to somewhere around here by binhex)
    1 point
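     To see which runc version is currently in place before deciding whether an update is needed, a quick check like this should do (the exact output varies by Unraid/Docker version):

        # runc reports its own version
        runc --version
        # docker also reports the runc build it is using
        docker info | grep -i runc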
  7. What do we learn from this? Read the instructions first ^^
    1 point
  8. Hello again. After trying everything possible, I opened Vaultwarden in a different browser. Great, it works there. I have now cleared the browser cache in my Firefox and it works again there as well. Small cause, big effect. Thank you for your support.
    1 point
  9. That is also stated in the Docker template: if br0 is used, ports 80 and 443 must be used.
    1 point
  10. Unless there is a longer log elsewhere, the one in the UI doesn't go back far enough. But those were the last 3 lines to be transferred to disk 7, which currently has 49.2 KB free. The source disk still has 36.8 GB on it belonging to those 3 lines, so I'm assuming that was the issue. Mover was disabled, but I guess something else was writing to that disk during the transfers. All lines before (disk 7) were green, and the lines after those 3 (disk 12) were also green checks. One piece of feedback I've been meaning to give for a while: a select/deselect-all checkbox would be great. I'm currently working on clearing off eight 4 TB drives to move them into a pool instead of the main array, and each time it requires selecting the source and shares, then unselecting over 20 destination disks. The auto-select-all is great when all/most of your disks are <70% full, but otherwise a quick unselect-all would be nice to have. Either way, thanks for the tool.
    1 point
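     A quick way to confirm the destination really ran out of space during the move is to check free space per array disk from the console; the disk numbers here are just the ones mentioned above:

        # show free space on the individual array disks involved in the transfer
        df -h /mnt/disk7 /mnt/disk12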
  11. Ok, I'll try doing a BIOS update first to see if there are any bugs that an update might fix. I appreciate your help on this! If I ever figure it out I'll let you know.
    1 point
  12. Agreed, this does work correctly. It did disappear from my Docker list, but once the xml was fixed, it came back. No data seems to have been lost.
    1 point
  13. Can confirm that changing the repo and then running Fix Common Problems plugin works with no issue. I did run into a problem with the extension on my browser. I had to log out and back in before it would sync again.
    1 point
  14. Ok, so if you want to avoid using the SSD cache, you can just add a path that mounts the media folder onto a non-cache share of unraid. Create a new share on unraid and set the use of cache to "no", like this, including the disks you want to use to store the NVR data. Be aware that this is going to increase the reads/writes on that disk a lot, so I recommend using a proper disk for the job (a WD Purple series or similar, https://amzn.to/3e6UIjC (referral link)). You should configure all your shares except the NVR one not to use this disk, and use it only to store the NVR data. Then go to your hass container and edit it. You need to add a folder path and point the /media/frigate folder of the container to the non-cache share, something like this. Apply the change and all new media created in the /media/frigate folder in hass will be stored in that folder on unraid, skipping the cache disks.
    1 point
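     For anyone doing the same mapping outside the unRAID template editor, a minimal docker run equivalent would look roughly like this; the image tag, container name, and share name 'nvr' are assumptions, and the key part is the second -v mapping pointing /media/frigate at the cache-free share:

        # run Home Assistant with its config in appdata and recordings going to the non-cache share
        docker run -d --name homeassistant --net=host \
          -v /mnt/user/appdata/homeassistant:/config \
          -v /mnt/user/nvr:/media/frigate \
          homeassistant/home-assistant:stable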
  15. 1 point
  16. Yes, the context path was added back to your airsonic template. Remove it and it will work again.
    1 point
  17. Ok. Thanks again and ..... "Guten Appetit"
    1 point
  18. The website lists 2 GB as the base. In reality it's more like 4 GB. I was giving my own thoughts. But once you add any apps, 8 GB becomes the base.
    1 point
  19. I got the following alert from Fix Common Problems. I did not have mcelog installed at the time I received this message. It's installed now, although I haven't received the notification again since I installed. I have attached my diagnostics for further assistance. Thanks in advance! Your server has detected hardware errors. You should install mcelog via the NerdPack plugin, post your diagnostics and ask for assistance on the unRaid forums. The output of mcelog (if installed) has been logged EDIT: Updated Diagnostic report with mce log
    1 point
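     If you want to look at the underlying events yourself rather than waiting for Fix Common Problems, the raw machine-check messages land in the kernel log; the exact wording differs between CPUs, but something like this from the console will show them:

        # machine check events are logged by the kernel
        dmesg | grep -i "machine check"
        # the same lines also end up in the syslog
        grep -i "hardware error" /var/log/syslog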
  20. Take that one. I now have both at home, and more is always better in terms of expandability. The small one quickly became too little for me, but I've already found another use for it.
    1 point
  21. Please report such issues! I can't test every single driver version, but to my knowledge every driver version works just fine, at least with Unraid 6.9.2 (I also stick to the latest production branch, but at least I try to test every version).
    1 point
  22. Oh boy, tough to remember now, I didn’t spend much time on the issue. I think after adding the GPUID Plex would encounter an execution error when trying to start. I think it was v460.65 but hard to remember now, it was the latest at the time. Someone else had the same issue and rolling back to 455.45 fixed it so I just did that and didn’t troubleshoot further, which worked. Thanks, good to know that was more of an anomaly than something to plan for. I’ll just stick to the prod branch for now. Thanks!
    1 point
  23. If I remember correctly, with your radeontop plugin installed the driver is no longer blacklisted either. Correct me if I've misunderstood something here. In principle you should remove radeontop again, otherwise the card will keep being initialized by Unraid. Since you only want to use it for VMs, it's best to "manage" the RX only from within the VMs. If you can do everything in macOS without errors, your configuration is definitely correct. You have to disable macOS's automatic switch to standby after a certain amount of time; in a macOS VM that only leads to problems, as you noticed yourself. My general tip is to always keep your RX inside a running VM; that should keep the power consumption to a minimum. It may sound strange, but a GPU that isn't fully initialized always draws more than one that is initialized inside a VM (even if an additional OS is running). On my system, macOS is more economical in that respect than Win10...
    1 point
  24. Then you only need Intel-GPU-TOP; if you use the AMD card in the VM you don't need Radeon-TOP. Unraid does ship with everything, but you still have to activate the modules by hand, or at least register them somewhere so they get activated at boot. If you've already built yourself a custom image, I would enable the Intel-GPU-TOP option in the Kernel-Helper right away, then you don't have to do anything by hand afterwards. For the other part @giganode is the right person, since I don't really know my way around the AMD cards and don't own one myself (I always have to borrow one when I change something in the tools).
    1 point
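     As an illustration of the "activate the modules by hand / register them for boot" part: one common approach on Unraid is to load the module once manually and then append the same command to the go file on the flash drive so it happens on every boot. A rough sketch, using the Intel iGPU module i915 as the example (adjust to whatever module you actually need):

        # load the Intel iGPU module right now
        modprobe i915
        # make it persistent across reboots by adding the command to the go file
        echo "modprobe i915" >> /boot/config/go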
  25. Do you know which folder hass is storing all the media in?
    1 point
  26. I'm not the developer of the app, I'm just maintaining the template of the container for Unraid. The modified addon is maintained by another guy, so ask him whether he is going to keep maintaining it. https://github.com/pdecat
    1 point
  27. Thanks for the responses. Installed the Nvidia driver plugin and configured HW transcoding with Plex. Followed this guide for assistance. No issues so far....
    1 point
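     A quick way to confirm Plex is actually using the GPU once the driver plugin is installed is to watch the card from the Unraid console while a transcode is running (the output format varies slightly between driver versions):

        # show GPU utilisation and the processes using it; a running HW transcode shows up as a Plex Transcoder process
        nvidia-smi
        # or refresh the view every two seconds
        watch -n 2 nvidia-smi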
  28. Every version should work just fine. I can tell for sure that 465.24.02 and also 460.73.01 are working just fine, since I use them with Emby and that is not very different. What wasn't working? Some people rely on the Production branch and some on the New Feature branch; regardless of which branch you choose it should work just fine, but keep in mind that the New Feature branch can have various issues/bugs, though I've never experienced any. I only display the latest 8 drivers; if there are more than 8 drivers, the older ones are simply not displayed so as not to overload the plugin GUI.
    1 point
  29. Ok, I checked syslog.txt from the diagnostics and included the relevant log entries below. It looks like the hardware error occurred on April 17th:
     Apr 17 21:30:26 Tower kernel: mce: [Hardware Error]: Machine check events logged
     Apr 17 21:30:26 Tower kernel: [Hardware Error]: Corrected error, no action required.
     Apr 17 21:30:26 Tower kernel: [Hardware Error]: CPU:0 (17:71:0) MC27_STATUS[-|CE|MiscV|-|-|-|-|SyndV|-]: 0x982000000002080b
     Apr 17 21:30:26 Tower kernel: [Hardware Error]: IPID: 0x0001002e00000500, Syndrome: 0x000000005a020001
     Apr 17 21:30:26 Tower kernel: [Hardware Error]: Power, Interrupts, etc. Extended Error Code: 2
     Apr 17 21:30:26 Tower kernel: [Hardware Error]: Power, Interrupts, etc. Error: Error on GMI link.
     Apr 17 21:30:26 Tower kernel: [Hardware Error]: cache level: L3/GEN, mem/io: IO, mem-tx: GEN, part-proc: SRC (no timeout)
     FCP logs this every day when it runs the daily scan:
     Apr 18 04:40:08 Tower root: Fix Common Problems: Error: Machine Check Events detected on your server
     Apr 18 04:40:08 Tower root: mcelog not installed
     On April 22nd I installed mcelog. From April 23rd to present, FCP logs it slightly differently on its daily scan:
     Apr 23 04:40:08 Tower root: Fix Common Problems: Error: Machine Check Events detected on your server
     Apr 23 04:40:08 Tower root: mcelog: ERROR: AMD Processor family 23: mcelog does not support this processor. Please use the edac_mce_amd module instead.
     Apr 23 04:40:08 Tower root: CPU is unsupported
     This thread I found explains why the log is slightly different once mcelog was installed (mcelog doesn't work with AMD). My questions are as follows:
     Is that initial error anything to worry about if it only occurred once?
     Do I keep getting the FCP popup notification about a hardware error, even after I told it to ignore it, because that error from the 17th still exists in the system log and FCP keeps detecting it when it runs the daily scan?
     If I just reboot the server, and the error from the 17th doesn't happen again, will the FCP popup notification stop, because the syslog is erased on reboot and will no longer contain that error from the 17th?
    1 point
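     Since mcelog doesn't support this AMD CPU family, the kernel-side decoder that the error message mentions is the alternative. A sketch of loading it and checking for decoded output (the module name and log locations are standard, but behaviour can differ between kernel versions):

        # load the AMD MCE decoder module the mcelog error points to
        modprobe edac_mce_amd
        # decoded machine check details then show up in the kernel log
        dmesg | grep -iE "mce|hardware error"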
  30. Bumping this. I edited OP with a new diagnostic report since I got the error notification again after I installed mcelog. Any insight would be greatly appreciated. Thanks!
    1 point
  31. For future reference, the syslog since the last reboot is already included in diagnostics. The only time it might be useful to have a separate syslog is if you have the Syslog Server set up to save syslogs, so you can get us a syslog from after a crash.
     Apr 30 11:00:01 Infinity kernel: mdcmd (36): check
     Apr 30 11:00:01 Infinity kernel: md: recovery thread: check P ...
     Apr 30 11:00:14 Infinity emhttpd: read SMART /dev/sdb
     Apr 30 11:00:14 Infinity emhttpd: read SMART /dev/sdc
     Apr 30 11:00:20 Infinity flash_backup: adding task: php /usr/local/emhttp/plugins/dynamix.unraid.net/include/UpdateFlashBackup.php update
     Apr 30 11:00:35 Infinity webGUI: Successful login user root from 192.168.86.49
     Apr 30 11:07:50 Infinity kernel: mdcmd (37): nocheck Cancel
     Apr 30 11:07:50 Infinity kernel: md: recovery thread: exit status: -4
     That is so close to on the hour that I wonder if it isn't a scheduled check. And then you cancel it several minutes later. What do you get from the command line with this?
     crontab -l
     and this?
     cat /etc/cron.d/root
    1 point
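     For reading the output of those two commands: Unraid puts its scheduled jobs into /etc/cron.d/root, and each line uses the usual five cron fields (minute, hour, day of month, month, day of week) followed by the command to run. A hypothetical entry that would match an 11:00 start looks like this; the actual command in your file may differ:

        # min hour dom month dow  command    (0 11 * * * = every day at 11:00)
        0 11 * * * /usr/local/sbin/mdcmd check &> /dev/null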
  32. You don't need a custom kernel, just the Nvidia driver plugin, to use either of the cards with docker containers.
    1 point
  33. Go to "setting" page > "Management Access" > set "Use SSL/TLS:" to No. There also have detail help.
    1 point
  34. No worries! I had a look into it reinstalling every time, by the way; it looks like it does that with the CUDA driver, so I'll patch that. I think setting a volume on the host to install to would require multiple volumes. I could get around this using symbolic links, but it's a little hacky, so I'll think about it for a while before implementing it. I'll @guillelopez you when I've worked out a way to do it and implemented it.
    1 point
  35. So, everything has finally arrived, and only a second 16GB RAM stick is still to come. Is there anything else I should set in the BIOS? The initialization is currently running, and with the 9700, 16GB RAM and 4 HDDs I'm at about 45 watts. I can live with that, but maybe I can still optimize a bit. Best regards, Martin
    1 point
  36. I am not sure if there is a better way, but I just figured this out, in case you or @master_of_pants have not yet. Or for anyone else...
     - Log in to your admin account and access the 'System Settings' page
     - The first section says 'Invite Links' with a button underneath
     - Click the button for 'Show Links'
     - Click the '+' next to the words 'Invite Links List'
     - If you know the username you want to add you can type it in, or select only the user group, link expiration date, and space.
     - From the list view, simply right-click on the word 'link' next to the user you just created and send that link in an email or text message, etc.
    1 point
  37. ....here is some more info as well, but also without any guarantee: https://www.hardwareluxx.de/community/threads/10gbit-homenetzwerk.807277/page-151#post-28324739
    1 point
  38. These drivers are supposed to work with Win10 x64. https://support.hpe.com/hpesc/public/swd/detail?swItemId=MTX_4d26cc3176a645189ec454e2ff#tab4 But no guarantee.
    1 point
  39. Hello, I had those too. The Windows 10 boot took up to 10 minutes. I now have the Mellanox X3 in, no problems. The NC550SFP+ cards are now running in a Windows Server 2019 machine without any problems. Regards
    1 point
  40. @Marc_G2 Looks like it cannot find the rootfs. Syslinux should look similar to this as a minimum; you may have additional options after the bzroot.
     default menu.c32
     menu title Lime Technology, Inc.
     prompt 0
     timeout 50
     label Unraid OS
       menu default
       kernel /bzimage
       append initrd=/bzroot
     label Unraid OS GUI Mode
       kernel /bzimage
       append initrd=/bzroot,/bzroot-gui
     label Unraid OS Safe Mode (no plugins, no GUI)
       kernel /bzimage
       append initrd=/bzroot unraidsafemode
     label Unraid OS GUI Safe Mode (no plugins)
       kernel /bzimage
       append initrd=/bzroot,/bzroot-gui unraidsafemode
     label Memtest86+
       kernel /memtest
     Or click on flash within the GUI. Ignore the additional entries I have.
    1 point
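     If it's easier to check from a terminal than via the GUI flash page, the same file lives on the boot flash; the path below is the standard Unraid location:

        # show the current boot configuration on the flash drive
        cat /boot/syslinux/syslinux.cfg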
  41. No problem. Not a knock on one community or the other, but I just converted from FreeNAS to Unraid within the last month. I'll say the Unraid community seems much more helpful and patient. That, combined with the features and GUI/functionality of Unraid, makes me wish I had never used FreeNAS. As to Krusader not working, I am not quite sure. Even with Dockers enabled, I just could not figure out how to find information stored disk by disk. I could find my shares but could not find the disks. If I had been able to find the disks in Krusader, then it would have worked fine (and I searched around for quite a bit of time).
    1 point
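     On the "could not find the disks" point: a container only sees what is mapped into it, so the usual fix is to add a path in the Krusader template that maps the whole /mnt tree (which contains /mnt/disk1, /mnt/disk2, ... as well as /mnt/user) into the container. A sketch, with the in-container mount point purely as an example:

        # extra path for the Krusader template, expressed as a docker flag:
        #   host path /mnt exposes both the individual disks and the user shares
        -v /mnt:/media/unraid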
  42. Apps - Previous Apps. Hit the download button
    1 point
  43. I can confirm that the NetApp PM80xx series cards do not work with the 0.1.40 driver on version 6.9 and above. I have just set up a new system with a DS4243 and the NetApp card and could not see the drives at all. As soon as I downgraded to 6.8.3, everything was back and visible.
    1 point
  44. EDIT: There is a workaround I found on the GIT repository. Basically the author of the speedtest docker needs to rebuild it. Until then, you can rebuild it yourself and point it to your own local repository. See this link for instructions.
     Same issue as the others have reported regarding the SpeedTest docker. It seems one of the parameters no longer allows NULL values?
     Debug Logging Output:
     Loading Configuration File config.ini
     Configuration Successfully Loaded
     2021-04-19 16:00:09,787 - DEBUG: Testing connection to InfluxDb using provided credentials
     2021-04-19 16:00:09,789 - DEBUG: Successful connection to InfluxDb
     2021-04-19 16:00:09,789 - INFO: Starting Speed Test For Server None
     2021-04-19 16:00:09,797 - DEBUG: Setting up SpeedTest.net client
     Traceback (most recent call last):
       File "/src/influxspeedtest.py", line 8, in <module>
         collector.run()
       File "/src/influxspeedtest/InfluxdbSpeedtest.py", line 171, in run
         self.run_speed_test()
       File "/src/influxspeedtest/InfluxdbSpeedtest.py", line 119, in run_speed_test
         self.setup_speedtest(server)
       File "/src/influxspeedtest/InfluxdbSpeedtest.py", line 71, in setup_speedtest
         self.speedtest = speedtest.Speedtest()
       File "/usr/local/lib/python3.7/site-packages/speedtest.py", line 1091, in __init__
         self.get_config()
       File "/usr/local/lib/python3.7/site-packages/speedtest.py", line 1174, in get_config
         map(int, server_config['ignoreids'].split(','))
     ValueError: invalid literal for int() with base 10: ''
    1 point
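     A rough outline of the "rebuild it yourself and point it at your own local repository" workaround; the repository URL and image tag below are placeholders only, use the actual repository from the linked instructions:

        # clone the docker project (placeholder URL), build a local image, then set the
        # Unraid template's Repository field to the local tag instead of the upstream image
        git clone https://github.com/example/speedtest-for-influxdb.git
        cd speedtest-for-influxdb
        docker build -t local/speedtest-for-influxdb .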
  45. I don't think so, because they also said: and also: I think they will not support the feature anytime in the near future... But this is sort of a first step in the right direction.
    1 point
  46. Thanks for the replies. They were most helpful. For the curious, I know how my server became vulnerable. About three weeks ago I had a major network issue (it completely broke), and long story short, I ended up removing my firewall (Ubiquiti USG) because it stopped working. I saw this as an opportunity to install pfSense onto my server as a VM. However, life, work, and health kept me from completing this in a timely manner. Four days ago I got my server (which had been offline) back online in preparation to install pfSense, hoping for some upcoming free time, and yesterday is when my server was breached. Apparently, I never turned the firewall on my ISP modem back on. To conclude: the rest of the devices on my network are fine, the network is secure again, the IP address has been changed, and I'm looking forward to combing through my server, connecting it back to the network, and installing pfSense.
    1 point
  47. I've been working on both 1.4 and 1.5 simultaneously. They will still be released separately, but the code overlaps in some areas, so I had to figure some of it out now. Goal is for a super clean and refined Varken/Tautulli/Plex Dash which will be integrated directly into UUD, sporting some of the same falconexe style/customizations (like working growth trending) found in the UUD. @Stupifier Thought You Would Appreciate This Sneak Peek...
    1 point
  48. @testdasi You may want to consider this. I’m tentatively planning on adding Varken/Plex panels/stat tracking to the Ultimate UNRAID Dashboard (UUD) in version 1.5. Not a guarantee until I get into it, but if I can integrate some/all of it, that would be cool.
    1 point
  49. This docker is not secure as-is for outside access. Digging around, so far I found that logging was not enabled, so I enabled it on the template under Advanced, then Extra Parameters:
     -e LOG_FILE=/data/bitwarden.log -e LOG_LEVEL=warn -e EXTENDED_LOGGING=true
     and now it logs to the /data/bitwarden.log file. Now I can't execute fail2ban, so maybe it's not installed either, because it's not where the link you sent shows it to be. I am not that familiar with docker, honestly, so I wouldn't know where to begin with that. I love this app and thanks for getting it for us; worst-case scenario I can have it log to letsencrypt and configure a jail for it in there.
    1 point
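     On the fail2ban point: fail2ban normally runs on the host or in its own container rather than inside the Vaultwarden container, which would explain not being able to execute it there. As a very rough sketch only, assuming the commonly shared Vaultwarden filter pattern (verify the regex against the actual lines in your bitwarden.log before relying on it), a jail could be wired up with two small files:

        # /etc/fail2ban/filter.d/vaultwarden.local  (failregex is an assumption; check your log wording)
        [Definition]
        failregex = ^.*Username or password is incorrect\. Try again\. IP: <HOST>\..*$

        # /etc/fail2ban/jail.d/vaultwarden.local  (logpath is an example host-side path to the mapped /data)
        [vaultwarden]
        enabled  = true
        filter   = vaultwarden
        logpath  = /mnt/user/appdata/bitwarden/bitwarden.log
        maxretry = 5
        bantime  = 3600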
  50. 1 point