Leaderboard

Popular Content

Showing content with the highest reputation since 10/28/21 in Posts

  1. Hello everyone! I've been meaning to present my server here for ages - I felt it was long overdue, but somehow I never really found the time. The server consists of the following components:
     Case: NZXT H2 Classic (front panel removed for better airflow)
     Additional HDD cage: ICY Dock MB074SP-B (soon to be swapped for an MB074SP-1B with hot-swap)
     CPU: Intel Core i5-10600
     CPU cooler: Noctua NH-U14S
     Motherboard: ASUS Z490-E GAMING
     RAM: 4x Corsair Vengeance LPX 16GB DDR4 @2666MT/s C16
     PSU: Corsair RM850x
     Add-on cards: Mellanox ConnectX3 CX311A-XCAT 10Gbit/s SFP+ NIC, 2x DigitalDevices Cine C/T v6 dual tuner TV cards, Dell Perc H310 (LSI 9240-8i) in HBA mode, Coral Dual Edge TPU (sadly only one core usable, since it is connected via PCIe x1 only), Nvidia T400 2GB
     Storage: 2x Samsung 970 Evo Plus 1TB ZFS mirror (appdata, Docker, libvirt, ...), 2x Crucial MX500 1TB as cache pool (Nextcloud data directory, unRAID cache, ...), 1x M2 NVMe Transcend 128GB (passed through via VirtIO to a Debian VM for building the Docker containers), 6x WD Reds/White Labels for the array with one parity (Debian aptitude mirror, mirrors of various operating systems, private container registry, media, ...), 1x industrial Samsung SSD 128GB (passed through via VirtIO to a VM for building the plugin packages for unRAID), 1x WD Red as Unassigned Device (Nextcloud external storage, backups, non-critical data, ...)
     Boot stick(s): 1x Transcend JetFlash 600 Extreme-Speed 32GB USB 2.0 (unRAID), 1x SanDisk 16GB Cruzer Blade USB 2.0 (passed through to an unRAID VM)
     The server also hosts a Git repo, Jenkins, and, as mentioned above, a Debian VM & an unRAID VM. All of my Docker containers are built locally on this server and then pushed to DockerHub as well as to a private registry on the server itself (better safe than sorry). As mentioned above, there is also an unRAID VM that is booted whenever a new unRAID version is detected; it is then automatically updated to the new version. After that, the build process for the various plugins starts, and after a successful build they are uploaded to the corresponding repository on GitHub. There is also an extra routine that boots the unRAID VM whenever a new version of ZFS, CoreFreq, or the Nvidia driver is found, compiles those packages for the current unRAID release, and uploads them. Currently, a build run triggered by a new unRAID version compiles the following: ZFS package @steini84, USB Serial package @SimonF, USB IP package @SimonF, NCT 6687 package, Nvidia driver package, DigitalDevices package, LibreELEC package, TBS-OS package, Coral TPU package, Firewire package, CoreFreq AMD package, CoreFreq Intel package, AMD Vendor Reset package, HPSAHBA package, Sound package (no release planned yet). A build run takes roughly 35 to 45 minutes, depending on how many Nvidia driver versions have to be built, since at least two - in future three - must now be built per release: Production branch, New Feature branch, Beta branch (only if one exists), and 470.82.00 (the last driver version that supports the 600 and 700 series). The build process is fully automated and starts no later than 15 minutes after a new unRAID version is released. A note on power consumption: during a build run the system draws on average around 180 watts for those 35 to 45 minutes; I've added a picture of the utilization at the very bottom... 🙈 Just to explain: these packages have to be compiled/built for every unRAID version because the modules they rely on depend on the kernel of that particular unRAID version. At boot, the plugins detect a kernel version change, download the packages matching the running kernel, and install them right away during startup (see the sketch below). This is one reason why I'm against virtualized firewalls on unRAID, or ad blockers that also cover unRAID: downloading the packages at unRAID startup is then impossible, because there is no internet connection yet, or (in the case of ad blockers) the DNS server isn't available yet. I'm currently considering upgrading the server to an i9-10850K to shorten the build runs further, but since that CPU is hard to get at the moment and not exactly cheap, that will have to wait. I hope you enjoyed the server presentation and the short look behind the scenes at how things run on my server. Here are a few more pictures: utilization during a build run, always between 90 and 100%:
    10 points
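     Illustrative only - a minimal bash sketch of the boot-time, kernel-dependent package check described above (the URL, package name, and cache path are hypothetical, not the actual plugin code):

        #!/bin/bash
        # Sketch: fetch and install the package built for the running kernel.
        KERNEL="$(uname -r)"                  # e.g. 5.14.15-Unraid
        PKG="example-${KERNEL}.txz"           # one package per kernel version
        CACHE="/boot/config/plugins/example"  # hypothetical plugin cache dir

        # Download only if no package for this kernel is cached yet -
        # this is the step that fails if DNS/internet isn't up at boot
        if [ ! -f "${CACHE}/${PKG}" ]; then
          mkdir -p "${CACHE}"
          wget -q -O "${CACHE}/${PKG}" \
            "https://example.com/packages/${KERNEL}/${PKG}" || exit 1
        fi
        installpkg "${CACHE}/${PKG}"          # Slackware package install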
  2. Hey Unraid Community! For the first time ever, we're running a Cyber Monday Sale: 20% off Unraid Pro and Pro Upgrades! If you're planning a new build soon or want to purchase a key for a friend or family member, do it this Monday, 11/29/21 - 24 hours only, from 12:01 a.m. to 11:59 p.m. PST! No server installation required for purchase. For full details, head over to unraid.net/cybermonday
    8 points
  3. I have created a file manager plugin, which I will release when the next Unraid 6.10 version comes out. This plugin extends the already present Browse function of Unraid with file management operations, such as copy, move, rename, delete and download. Operations can be performed on folders and/or files, and objects can be selected using a selection box on the left (in case multiple objects need to be copied or moved, for example) or by clicking on a selection popup to perform an operation on a single object. All operations need to be confirmed before proceeding; this should avoid accidental mistakes. The file manager gives direct access to all resources on the array and pools and should be handled with care. Below are two screenshots to give a first impression. Once released, more info will be given in the plugins section.
    8 points
  4. Just FYI, the new 5.0.6 update for Satisfactory borks the server; it will not start. It's been reported on their bug tracker already, so avoid restarting your server until they issue a new patch. Satisfactory Q&A (satisfactorygame.com) EDIT: NEVERMIND. ISSUE IS FIXED WITH -multihome=0.0.0.0 in the docker settings under "game parameters". Add this to fix your servers! The issue was caused by CoffeeStain adding IPv6 support.
    6 points
  5. 16 at home, counting a couple of test servers, plus a small server at work. Do I win?
    5 points
  6. In the attached pictures you can see an Unraid server that consists of three chassis. The actual server (top) consists of:
     ------------------------------------------
     1x Supermicro SC846E16 chassis
     1x Supermicro BPN-SAS2-EL1 backplane
     1x Supermicro X12SCA-F mainboard
     1x Intel Xeon W-1290P CPU
     1x Noctua NH-U9S cooler
     4x Samsung 32 GB M391A4G43AB1-CVF RAM --> 128 GB
     1x LSI 9300-8i HBA, connected to the internal backplane with two cables
     2x LSI 9300-8e HBA, each connected with two external cables to a JBOD/DAS expansion
     2x Samsung 970 EVO Plus 1 TB PCIe NVMe M.2 SSD
     24x hard drives with dual parity - mostly Toshiba
     Each of the two JBOD/DAS expansions consists of:
     ---------------------------------------------------
     1x Supermicro SC846E16 chassis
     1x Supermicro BPN-SAS2-EL1 backplane
     1x Supermicro CSE-PTJBOD-CB2 power board
     1x SFF-8088/SFF-8087 slot sheet
     2x SFF-8644/SFF-8088 cables (to one of the two LSI 9300-8e HBAs in the server)
     24x hard drives with dual parity - mostly Toshiba
     Each JBOD/DAS expansion runs as its own Unraid VM on the server. Each Unraid VM gets one of the two LSI 9300-8e HBAs passed through, along with its own Unraid USB license stick. Each Unraid VM is assigned 16 GB of RAM. Every Unraid VM has access to all CPUs - there is no CPU isolation. All hard drives of the JBOD/DAS expansions are mounted in the main server via SMB (see the sketch below), so the server has access to all drives in all three chassis. There are no user shares; everything runs on disk shares. Since I still consider BTRFS - ahem - dangerous for my purposes, I don't run a RAID cache/pool on the two 1 TB PCIe NVMe M.2 SSDs. The Docker and VM subsystems run on a single XFS pool that I replicate to the second SSD regularly myself. The system is rock solid. It has been through several incarnations already. Unraid, and the Unraid array in particular, has my full trust. All I'm missing is support for multiple Unraid arrays. Then I could remove the two LSI 9300-8e and use the expanders built into the backplanes instead. In theory that already works - I have tested it - but the performance over VirtIO was unbearable and pitiful. Front view: Rear view (cables are not my strong suit): The interior of one of the two JBOD/DAS units. On the left, the two cables from the responsible HBA come in; in the middle, the power board: Here is the link to the old server - it no longer exists: https://forums.unraid.net/topic/78165-how-many-tbs-is-your-unraid-server/?do=findComment&comment=797795
    4 points
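     Illustrative only - a bash sketch of mounting a guest Unraid VM's disk shares into the host via SMB, as described above (hostname, share names, and credentials are hypothetical):

        #!/bin/bash
        # Mount each disk share exported by one of the JBOD Unraid VMs
        VMHOST="unraid-jbod1"            # hypothetical name of the Unraid VM
        MNT="/mnt/remotes/${VMHOST}"

        for disk in disk1 disk2 disk3; do
          mkdir -p "${MNT}/${disk}"
          mount -t cifs "//${VMHOST}/${disk}" "${MNT}/${disk}" \
            -o username=user,password=secret,iocharset=utf8,vers=3.0
        done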
  7. In episode 12, @jonp covers the recent announcement by Facebook regarding "The Metaverse." Is this really a good idea and is Facebook the right organization to launch it? Could this be the beginning of an Orwellian future? In addition, Jon also provides a public service announcement regarding Linux driver support of hardware and how some users with specific gear may need to hold back on updating Unraid to the latest release.
    4 points
  8. Good day! Machinaris v0.6.3 is now available. Changes include: Staicoin - cross-farming support for this blockchain fork. Chia - update to version 1.2.11; see their changelog for details. NOTE: If you encounter a v0.6.3 upgrade error, you are likely using a customized appdata location. Please visit the Docker tab of the Unraid UI, select each Machinaris container (one by one), edit its Config, Show More Settings (at the bottom), find mnemonic_path and Edit it to point to the location where you are storing the original mnemonic.txt. Apologies again for any inconvenience to existing users.
    4 points
  9. I'm going to: update to 1.8.x, and expose the allowManaged, allowGlobal, and allowDefault options in the unRAID UI. But no specific timelines - when I get some free time on a weekend.
    4 points
  12. Pre-6.10 release: if someone wants to try out Unraid with a Trial key they have to give us an email address. When someone makes a key purchase they have to give us an email address. Hence we already have an email/key database, i.e., "accounts". The primary purpose of the UPC is to make these accounts accessible to users. We leveraged the IPS (forum) member capabilities to do this. That is why we enabled 2-factor authentication for the forum. When you "sign up" via a server we create an association between the account screen name/email address and the particular key GUID. If you already have a forum account then "sign in" will do the same association. This lets us simplify a great deal of support having to do with keys. You also get a nice "dashboard" (the My Servers menu option) that shows all your servers. In order to show this dashboard, the sign-up/sign-in process will upload your server name and description. This is so we can present a link to your webGUI on your LAN. But of course this is a local IP address and only works if you click from a browser running on a PC on the same LAN. We don't send up any other info, and the code that does the sending is open source in the webGUI - you can examine it and see this. If you don't want to use your forum account to associate with your keys, then just create a different forum account. Yes, having "accounts" will open the door for us to provide "cloud based" services. For example, you can install our My Servers plugin and get real-time status of your server presented on the dashboard, as well as automatic flash configuration backup. If you don't want this, don't install the plugin. If you don't want your server to appear "signed in", then sign out. For those who think they will never sign in and are disturbed by having a "sign in" link in the header - well, we will consider cosmetic changes. No doubt some of you will have more questions and want more details. So let's do this: go ahead and fire away, but please ask only one question or ask for only one clarification per post, and I'll try to answer them all until we're all exhausted.
    3 points
  13. Good day! Machinaris v0.6.5 is now available. Changes include: Cryptodoge - cross-farming support for this blockchain fork. Docker images now roughly 1/3 the size of previous releases. Shared base image further decreases download size for forks. API endpoint /metrics/prometheus exposes plotting statistics. Thanks to @Nold360 for the contribution! Windows deployments now support automatically mounting remote plot shares (such as on a NAS) using CIFS in-container. On Wallets page, display total wallet balance including cold wallet address amounts.
    3 points
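     For reference, the new plotting-statistics endpoint can be queried like any Prometheus target - a minimal sketch, assuming Machinaris' default WebUI port of 8926 (adjust to your container's port mapping):

        # Scrape the plotting statistics exposed by Machinaris
        curl -s http://localhost:8926/metrics/prometheus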
  14. If they start enforcing the min 5 requirement (they don't now), then I think that might be the solution - users pairing up in groups of 5.
    3 points
  15. Compose Manager Beta Release! This plugin installs docker compose 2.0.1 and compose switch 1.0.2. Use "docker compose" or "docker-compose" from the command line. See https://docs.docker.com/compose/cli-command/ for additional details. Install via Community Applications. Future work: a simple unRAID web-GUI integration is in the works.
    3 points
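     A minimal sketch of using the plugin once installed; the stack below is just an example, not part of the plugin. Save as docker-compose.yml:

        # docker-compose.yml - a throwaway example stack
        services:
          web:
            image: nginx:alpine
            ports:
              - "8080:80"

     Then, from the same directory, both invocations work:

        docker compose up -d      # compose v2 syntax
        docker-compose ps         # legacy syntax via compose-switch
        docker compose down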
  16. I keep noticing that the Unraid cache gets confused with a classic read/write cache. Originally, the Unraid cache was merely a write cache (see the link below). It has since been extended into fast storage for Docker containers and VM images. What it still is not, however, is a classic read cache. https://wiki.unraid.net/Manual/Overview#Cache
    3 points
  17. Hello everyone. Tailscale for unraid has become rather more popular than I ever imagined. When I started this it was in the great tradition of scratching my own itch - wanting to access my server over tailscale. Since then there have been over 250,000 downloads, there are tutorials on youtube, and increasing numbers of requests for new features and support. So I think it might be time to open this up a little bit more, and so I have a few asks. Firstly, if you want a new feature, or think you have found a bug, please don't post them here - or at least not only here; please create an issue on github if at all possible. https://github.com/deasmi/unraid-tailscale Secondly, I'm just one person, and while this is a relatively simple thing - it's really just packaging tailscale - using it can get more complicated as new features are always being added. So if you'd like to get involved as a developer or tester for future things, please let me know by sending me a DM along with how you'd like to help. Thank you, Dean
    3 points
  18. Buy a bigger house. Problem solved
    3 points
  19. ZFS Master 2021.11.09a is live with a few changes, check it out: 2021.11.09a - Add - List of current Datasets at Dataset Creation - Add - Option to export a Pool - Fix - Compatibility with the RC version of unRAID
    3 points
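     For context, the stock ZFS CLI equivalents of the two operations this release adds UI support for (pool/dataset names are examples):

        zfs create tank/backups   # create a dataset, as the creation dialog does
        zfs list -r tank          # list current datasets, as the dialog now shows
        zpool export tank         # the new "export a Pool" option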
  20. There are ongoing issues with PIA DNS; you can try setting NAME_SERVERS to the following (removes PIA DNS): 84.200.69.80,37.235.1.174,1.1.1.1,37.235.1.177,84.200.70.40,1.0.0.1
    3 points
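     If you run the container from the command line instead of editing the unRAID template, the equivalent would be along these lines - a fragment only, with the usual VPN variables and volume mappings omitted (container/image shown as an example):

        docker run -d --name=binhex-delugevpn \
          --cap-add=NET_ADMIN \
          -e NAME_SERVERS="84.200.69.80,37.235.1.174,1.1.1.1,37.235.1.177,84.200.70.40,1.0.0.1" \
          binhex/arch-delugevpn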
  21. I have a 12600K and DDR5 mem arriving next week and will do some testing.
    3 points
  22. Exactly. Seemed to me to be completely pointless to have a GUI when no matter what you had to go to dockerhub to find out everything you needed to add in anyways. So I forced you to do that. Unfortunately, most people don't understand that change and thought I was taking something away from them. Which I'm not. But, I have given in to pressure and it will get re-added on the next release.
    3 points
  23. I think the glaring issue is that this thread seems to imply that the unraid user interface, or the server itself, should be hardened against external attacks. That would mean that unraid itself is exposed to the external network/internet, which basically just shouldn't be the case. This is a big, clear, red "don't do that." Instead, use a reverse proxy to expose services running on the unraid server to the outside world. As far as exposing access to unraid itself, if you absolutely must, I would use something like Apache Guacamole with 2FA. This way the server itself is never exposed to the outside world, and your interface to it is protected with 2FA. I don't think developing a secure remote access implementation is within the scope of unraid. I don't think the WebUI has been scrutinized with penetration testing, and I don't think a system with only a root account should ever be exposed to the internet directly.
    3 points
  24. I've re-written the vpn setup guide to be more up to date and hopefully a bit clearer for new users, let me know what you think guys, any feedback is welcome, as are PR's:- https://github.com/binhex/documentation/blob/master/docker/guides/vpn.md
    3 points
  25. My suggestion, and I think one of the most important updates to VMs in Unraid:
    3 points
  26. EDIT: Fixed this for now by changing the repo to: dyonr/qbittorrentvpn:alpha
     Just updated today, and the GUI isn't coming up. Docker log shows below. Note that the PID is not displayed.
        2021-11-01 13:58:10.792792 [INFO] A group with PGID 100 already exists in /etc/group, nothing to do.
        2021-11-01 13:58:10.812855 [INFO] An user with PUID 99 already exists in /etc/passwd, nothing to do.
        2021-11-01 13:58:10.831970 [INFO] UMASK defined as '002'
        2021-11-01 13:58:10.853118 [INFO] Starting qBittorrent daemon... Logging to /config/qBittorrent/data/logs/qbittorrent.log.
        2021-11-01 13:58:11.880629 [INFO] Started qBittorrent daemon successfully...
        2021-11-01 13:58:11.902356 [INFO] qBittorrent PID:
        2021-11-01 13:58:12.179287 [INFO] Network is up
     The qbittorrent.log shows this at the end, which might be the issue:
        /usr/local/bin/qbittorrent-nox: error while loading shared libraries: libQt5Sql.so.5: cannot open shared object file: No such file or directory
        /usr/local/bin/qbittorrent-nox: error while loading shared libraries: libQt5Sql.so.5: cannot open shared object file: No such file or directory
    3 points
  27. The latest version of the official telegraf container no longer runs as root, which explains the issue. Here I used "telegraf:latest" (i.e. 1.20.3 currently), with apt-get update and install commands in the Post Argument, and ofc that can't work without root privileges. Pulling "telegraf:1.20.2" instead (previous version running as root) solved the issue, but it's only a temporary workaround ...
    3 points
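     A sketch of the temporary workaround described above - pinning the last tag that still runs as root. In the unRAID template this means changing the Repository field; the CLI equivalent is:

        docker pull telegraf:1.20.2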
  28. Very bad idea. You should look up what that actually does. There are even explanations available in German.
    3 points
  29. Summary: Support thread for ich777 Gameserver Dockers (Counter-Strike: Source & Counter-Strike: GO, Team Fortress 2, ArmA III, ... - complete list in the second post) Application: SteamCMD DockerHub: https://hub.docker.com/r/ich777/steamcmd All dockers are easy to set up and highly customizable; all dockers are tested with the standard configuration (port forwarding, ...) to make sure they are reachable and show up in the server list from the "outside". The default password for the gameservers, if enabled, is: Docker If there is an admin password, the default password is: adminDocker Please read the description of each docker and the variables that you install (some dockers need special variables to run). If you like my work, please consider donating for further requests of game servers where I don't own the game. The Steam username and password are only needed in templates where the two fields are marked as required with the red * Created a Steam Group: https://steamcommunity.com/groups/dockersforunraid If you like my work, please consider making a donation
    2 points
  30. You know the drill - "Go PRO or go home"
    2 points
  31. The "master" case is more your typical unraid build except I added an external SFF-8088 adapter card that accepted the external cables from the slave and made the connections internally to the SAS card. Still the same old motherboard in the master. Finally, I put all of the drives in the drive trays, connected the two boxes together, and spun up UNRAID. I put the two machines on a cheap set of wheels (I can't believe this thing could take the weight, but it did) https://www.amazon.ca/gp/product/B08R1F9S17/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1 It's tucked away under a desk in the corner, easy to clean under, etc. I'm using negative air pressure in the case for cooling with all air coming in through the drive bays. I plan on building a screen I can place over the front of the two cases to catch dust. I'll post pictures of that when it's done. So that's how you spend a bunch of money on your unraid setup without adding any more drive space! Thanks for reading, and happy compulsive designing. Ben IMG_0493.MOV
    2 points
  32. Looking across the whole forum, especially the Simplified Chinese section, there is hardly any comparable solution, and what does exist is fairly outdated. This guide is based on UNRAID 6.10.0-rc2; 6.9.2 actually works too, but only 6.10.0-rc2 has full support for Windows 11, so if you need to install Windows 11 I recommend upgrading to 6.10.0-rc2. Prerequisites: Intel VT-x and VT-d enabled in the BIOS (on AMD these are called AMD-V and IOMMU), the iGPU set as the primary display device (i.e. the BIOS outputs through the integrated graphics by default), and at least one monitor connected to the motherboard's video output (a dummy plug also works - the point is that the system detects a display).
     1. Passing through the iGPU
     Strictly speaking there is no required order between iGPU and dGPU passthrough; if you're not confident about the dGPU, you can do that one first. The iGPU is probably the hardest part, so I suggest tackling it first (if it fails, you can give up early, haha).
     (1) Go to MAIN → Flash → Syslinux configuration → Unraid OS (the green box on the right) and replace the contents with:
        kernel /bzimage video=efifb:off vfio-pci.ids=8086:3185,8086:3198 disable_vga=1 modprobe.blacklist=i915,snd_hda_intel,snd_sof_pci,mei_me,snd_hda_codec_hdmi,snd_hda_codec_realtek append initrd=/bzroot
     The vfio-pci.ids=8086:3185,8086:3198 part differs between CPUs and motherboards; the values above are for mine. On some CPUs the second device is the audio output - it doesn't matter much. The cost of getting this wrong is low; worst case, you try again. See step (3) for how to look the IDs up (and the sketch below).
     (2) Go to SETTINGS → VM Manager → ADVANCED VIEW and set the following properties: PCIe ACS override: Downstream; VFIO allow unsafe interrupts: Yes.
     (3) Go to TOOLS → System Devices and tick the devices corresponding to the vfio-pci.ids you just entered.
     (4) Reboot the host and create a new VM with the following requirements: Windows works best with i440fx, Linux with Q35 - choose the latest version of either. Keep VNC as the first graphics device for now, since the iGPU may not have a working driver yet and you'll need VNC to install the OS. Once the OS is installed, the driver will be installed automatically; then you can set the iGPU as the first graphics card and drop VNC. The graphics ROM BIOS needs to be downloaded from https://github.com/my33love/gk41-pve-ovmf - put it somewhere you'll remember and enter its path in that option. Without a graphics ROM BIOS, you will most likely get no video output. This ROM seems to work for all Intel iGPUs, so there is no need to look up your exact model; if it doesn't work for you, please reply in this thread. Everything else can be set to taste or left at the defaults.
     (5) Start the VM, install the OS, and let Windows install the driver automatically. On Windows 8.1 and below you will most likely have to download the drivers manually (and drivers compatible with older OS versions may not even exist), so I recommend going straight to Windows 10 or Windows 11.
     That's it for the iGPU; next, the comparatively easy dGPU part.
     2. Passing through the dGPU
     This part is not very picky, and you don't have to follow my steps exactly, but to make first-time success as likely as possible, do follow them.
     (1) Go to TOOLS → System Devices and tick the box for the graphics card. There may be many sub-devices, but generally you don't need to configure them; let them be ticked automatically. (If you need audio output, select it under the sound card entry.)
     (2) Create a VM with any configuration you like, but preferably choose OVMF/OVMF TPM as the BIOS. GPU selection works much like the iGPU's, except the graphics ROM BIOS can usually be left empty. Some cards may fail to start, or the driver may report error 43; in that case you need a vBIOS (which requires a physical machine with the driver installed, dumping the vBIOS with GPU-Z, and perhaps a small edit). That felt like too much trouble, so I gave up on some cards. My GTX 1660 SUPER drives fine without one, so no vBIOS needed there. (The GT 740 wouldn't drive; I didn't bother.)
     (3) Start the VM, install the OS, install the dGPU driver, and check Device Manager for an exclamation mark on the display adapter. If there is none, you're done! (If there is, good luck with the tinkering; I can't help you there - especially with the dreaded error 43.)
     That's all for iGPU and dGPU passthrough; I hope it helps those new to UNRAID. It took me about three days to work through these problems, switching approaches along the way, before finally settling on the iGPU + dGPU combination (mainly with future PCIe allocation in mind). I asked in the forum but nobody replied (sob), so in the end I followed outside tutorials step by step, trial and error, and distilled these steps. There will surely be people for whom my steps don't work; I hope we can discuss it rather than someone just dropping a "useless" and walking away. If that's really all you want to say, I suggest you simply close this thread; such comments are not welcome here. References: https://www.right.com.cn/forum/thread-6006395-1-1.html https://github.com/my33love/gk41-pve-ovmf https://post.smzdm.com/p/ag8l254m/
    2 points
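     A quick way to look up the vendor:device IDs for the vfio-pci.ids line from the Unraid console (the grep pattern is just one way to narrow the output):

        # The bracketed pair at the end of each line (e.g. [8086:3185])
        # is what vfio-pci.ids expects
        lspci -nn | grep -Ei 'vga|display|audio'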
  33. There might be some suggestions in the attached that help. For me it manifests as shfs @ 100% when the array is running, but USB kworkers are the culprit when it is stopped. Possibly moving USB connections around to different ports/controllers helped.
    2 points
  34. Will you continue to service license key replacements by email if needed? Since a server running a full (not trial) license doesn't need internet and can be fully air gapped, it would be nice to have the assurance that when a USB stick fails we can still email support with the current GUID and desired new GUID and get our license reissued to the new stick without the server ever having an internet connection on the server itself. I'm perfectly willing to give up the automatic instant key replacement function in exchange for an old fashioned email chain that could take a couple days if the server doesn't have to have an internet connection.
    2 points
  35. Hi, that log shows bad news about the state of the Chia database within. The log message about `corrupted` means you are best to: 1) Stop Machinaris 2) Delete /mnt/user/appdata/machinaris/mainnet 3) Start Machinaris up and wait for a re-sync of the blockchain and wallet (see the sketch below). I would also recommend checking the SMART stats on your cache drive to ensure it's still good. Finally, to speed up #3 you can either import a backup or use the new blockchain db download feature. Your choice of course.
    2 points
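     The same three steps from the Unraid console might look like this (the container name machinaris is an assumption - check your Docker tab):

        docker stop machinaris                       # 1) stop Machinaris
        rm -rf /mnt/user/appdata/machinaris/mainnet  # 2) delete the corrupted DBs
        docker start machinaris                      # 3) start and wait for re-sync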
  36. 2021.11.13 Added support for screenshots, videos and more. Maintainers: see these posts: https://forums.unraid.net/topic/38619-docker-template-xml-schema/page/4/?tab=comments#comment-1051850 https://forums.unraid.net/topic/115672-ca-developer-profiles/?tab=comments#comment-950547 These can be added to plugins, docker containers, and CA profiles. For an example of what you can do, look at the MyServers plugin in CA. Template errors will now display on every app's information sidebar. On every app information display you may see something akin to the example shown: this tells the user exactly what CA is modifying from the maintainer-submitted template (so the user has an easier time installing/using the app), or, if CA was unable to automatically fix it, what to look for. Hopefully, by making these errors more visible (rather than burying them in the Statistics section), maintainers will be more likely to fix them. Various other fixes and tweaks.
    2 points
  37. You need to edit the container and add further "path" entries - for example /mnt/disk1.
    2 points
  38. I was in the same boat - I just needed a low power, low cost card with modern support for HDMI 2.0, DP 1.4, 4K@60Hz, H264/H265 hardware encode/decode, HDR, PCIe 3.0, etc., all in 30 watts... I ended up going for an AMD RX550. The trick is that there are two versions sold as the RX550, so you need to get the right one. The way to work out the difference is down to memory speeds and stream processors (SPs):
     Old Lexa core - Polaris 12 (incompatible): Stream Processors 512 (CUs 8 - 512SP), Memory Speed 1750MHz (7000MHz effective), Reference Clock 1183MHz
     Newer Baffin core - Polaris 11 & 21 (compatible): Stream Processors 640 (CUs 10 - 640SP), Memory Speed 1500MHz (6000MHz effective), Reference Clock 1071MHz
     If the card has the device ID of 0x67ff then it will work OTB. https://devicehunt.com/view/type/pci/vendor/1002/device/67FF
     Device ID 0x699f = RX 550 512SP ✗
     Device ID 0x67ff Rev FF = RX 550 640SP ✓
     Device ID 0x67ff Rev CF = RX 560 ✓
     Here are some models which are Baffin and work OTB:
     Yeston RX550-4G LP D5
     Gigabyte RX 550 D5 2G (rev. 2.0)
     Sapphire PULSE RX 550 4G G5 640SP
     Sapphire PULSE RX 550 2G G5 640SP
     Asus AREZ-PH-RX550-2G
     Asus PH-RX550-4G-M7
     There may be others, but this is what I am aware of. I ended up with the Yeston for about $99 AUD, but I got it before the market went stupid. It was all I could find in Australia, as Sapphire and Asus were not bringing their low-end cards into the country.
    2 points
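     To verify which variant a card actually is from a running Linux system, the device IDs quoted above can be checked directly (vendor 1002 is AMD):

        lspci -nn -d 1002:67ff   # RX 550 640SP / RX 560 - compatible
        lspci -nn -d 1002:699f   # RX 550 512SP (Lexa) - incompatible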
  39. I've opened up my Bitbucket: https://kameleon83@bitbucket.org/kameleon83/unraid-docker-webui.git
    2 points
  40. Back up! https://unraid.net/download Sorry for the inconvenience.
    2 points
  41. Only when something is written to the data disk.
    2 points
  42. You still need to check whether your motherboard supports this at all. There should be an "allow RTC by OS" setting, though it may be called something else on your board. The Sleep plugin itself has no wake-up function. For that you'd need a script, which you could run with the User Scripts plugin, or perhaps via "custom commands before sleep" in the Sleep plugin. I unfortunately can't tell you exactly what that script should look like for your board - maybe someone else knows more - but see the sketch below for the general idea.
    2 points
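     A hedged sketch of what such a wake-up script could look like, using rtcwake from util-linux (the wake time is an example; the board must support RTC wake as noted above):

        #!/bin/bash
        # Program the RTC alarm for 07:00 tomorrow without suspending here
        # (-m no); the Sleep plugin then performs the actual suspend.
        rtcwake -m no -t "$(date -d 'tomorrow 07:00' +%s)"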
  43. Hey, I tried doing the same process and it's working fine now. It was definitely a cover issue, as the web browser version showed an invalid cover. The issue was that the android app would crash instantly, not allowing me to change the cover in the app. Once it was changed in the browser version, the android app worked fine again. I'll be using the app extensively over the next few days, so I'll report back anything. Oh, and thank you for your commitment to this project - I had been looking for a dedicated audiobook self-hosting option for ages. I've tried several other options in the past; this is by far the best 👍 Maybe one of unraid's amazing YouTubers could do a short guide on the setup and feature set; it would help give the project a wider audience.
    2 points
  44. To be honest, I doubt anyone is looking for the answer they need in the 200+ pages of some threads. But will people be served better by a multitude of individual threads? I am not sure; the experience from Unraid's General sub-forum shows a lot of the same questions being asked again and again, sometimes within a short time. That said, maybe people do find the answer they were looking for and thus never ask a question, and we are not aware of it. I think it is just a question of being able to search for what you need. Is a single thread better than multiple threads in a subforum for the forum search function or regular search engines? In the end, though, we should ask the people behind those large threads. What do they think about it? Can it help them, or would it require more time and resources to assist?
    2 points
  45. Unbelievable! I never knew until now that you could click on that. Thanks.
    2 points
  46. Something like this perhaps:
     NOTE: For the container to have access to your optical drive(s), you need to add them to your container's *Extra Parameters* line. An optical drive is represented by two Linux device files: /dev/srX and /dev/sgY. For optimal performance, the container needs both of them. Click the button on the top right labeled Basic View to switch to Advanced View; this will expose the *Extra Parameters* line. To determine the right devices to use, start the container and look at its log, then add the devices listed to the Extra Parameters line. Example log output:
        [cont-init.d] 95-check-optical-drive.sh: executing...
        [cont-init.d] 95-check-optical-drive.sh: looking for usable optical drives...
        [cont-init.d] 95-check-optical-drive.sh: found optical drive [/dev/sr0, /dev/sg1], group 19.
        [cont-init.d] 95-check-optical-drive.sh: exited 0.
     Then add as follows:
        --device /dev/sr0 --device /dev/sg1
    2 points
  47. SSH into the server or use the console and type: mover stop
    2 points