Leaderboard

Popular Content

Showing content with the highest reputation since 09/23/21 in all areas

  1. Hello everyone, I know this may not be the right place for it, but I'd like to thank all the active members of this forum. You've all made my start into the unRAID world much easier. I'd been a silent reader for a few weeks and have now built my own system. Whether it was the hardware advice or the solutions to small and large problems: as a newcomer, I've so far been able to find an answer to every question just by reading this forum. Excellent! My system is running and I'm already starting to want to do more with it 🙈 Thank you very much! 👏 Wishing you all only the best.
    9 points
  2. No cryptocurrency? That one will be a global payment system.
    7 points
  3. I'm willing to do so, but let's wait a little longer; otherwise this could quickly become a pretty big mess in terms of configuration.
    7 points
  4. Currently working on a TPM emulator plugin for unRAID.
    7 points
  5. Compose Manager Beta Release! This plugin installs docker compose 2.0.1 and compose switch 1.0.2. Use "docker compose" or "docker-compose" from the command line. See https://docs.docker.com/compose/cli-command/ for additional details. Install via Community Applications. Future Work: a simple unRAID web-gui integration is in the works.
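     A quick sketch of what that enables (the compose file below is a hypothetical example, not from the plugin's docs): drop a docker-compose.yml somewhere on the server, then bring the project up with either syntax.

       # docker-compose.yml (hypothetical example service)
       services:
         whoami:
           image: traefik/whoami
           ports:
             - "8081:80"

       # both invocations work once the plugin is installed:
       docker compose up -d       # compose v2
       docker-compose up -d       # legacy form, redirected via compose switch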
    6 points
  6. I can say I've run a lot of tests with @ich777 and it all looks good to be coming (very) soon. It's now a matter of making it as easy as possible via the web GUI, which is also pretty far along, so no manual steps would be needed. Here is one of 3 VMs running simultaneously, incl. vTPM etc. ... so more than one VM at a time also works flawlessly. Tested on a rBAR RTX 3070 gaming VM, a GT 1030 desktop VM and a GVT-g VM (home office), all good, so just be a little patient.
    6 points
  7. Prometheus Fritzbox Exporter
     1. Download and install the Prometheus Fritzbox Exporter plugin from the CA App.
     2. Log in to your Fritzbox and go to "System -> FRITZ!Box-Benutzer".
     3. Click on "Benutzer hinzufügen" and create a new user with a password for the exporter (in this example "grafana"), ticking the following check boxes: "FRITZ!Box Einstellungen", "Sprachnachrichten, Faxnachrichten, FRITZ!App Fon und Anrufliste" & "Smart Home".
     4. Go to "Heimnetz -> Netzwerk -> Netzwerkeinstellungen" and select "Statusinformationen über UPnP übertragen".
     5. Go to the plugin settings by clicking on "Settings -> Fritzbox Exporter" (at the bottom of the Settings page).
     6. Enter the username and password that you created in step 3 for the exporter in your Fritzbox and click on "Confirm & Start". After that you should see in the top right corner that the exporter is running, along with details about it.
     7. Open up prometheus.yml (steps 4 + 5 from the first post), add a line with '- targets: ["YOURSERVERIP:9042"]' (replace "YOURSERVERIP" with your server IP), then save and close the file.
     8. Go to the Docker page and restart Prometheus.
     9. Open up the Grafana WebUI and click on "+ -> Import".
     10. Now we import a preconfigured dashboard for the Fritzbox Exporter from Grafana.com (Source): simply enter the dashboard ID (12579) and click "Load".
     11. In the next screen rename the dashboard to your liking, select "Prometheus" as the datasource and click on "Import".
     Now you should be greeted with something like this (keep in mind that the dashboard can display N/A for some values, especially the gauges, while there is not yet enough data; wait a few minutes and the values will fill in).
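     For reference, the '- targets:' line from step 7 sits inside a scrape job in prometheus.yml; a minimal sketch, assuming a job name of "fritzbox" (the name is illustrative, port 9042 is the exporter's):

       scrape_configs:
         - job_name: 'fritzbox'
           static_configs:
             - targets: ["YOURSERVERIP:9042"]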
    6 points
  8. In this episode, hear an update on Windows 11 support for VMs on Unraid, the process for updating, and potential pitfalls. In addition, @jonp goes deep on VM gaming and how anti-cheat developers are wrongfully targeting VM users for bans. Helpful Links: Expanding a vdisk; Expanding Windows VM vdisk partitions; Converting from SeaBIOS to OVMF.
    5 points
  9. A little bit more progress...
    5 points
  10. TPM is required by more and more things nowadays, including installing Windows 11 in a VM. It seems that KVM on unRAID can be set up to support a virtual TPM, according to this guide: https://www.linkedin.com/pulse/swtpm-unraid-zoltan-repasi/ Can this be implemented natively in unRAID, maybe for 6.10?
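     For context, the swtpm approach generally looks like this (a generic sketch, not taken from the linked guide; paths are illustrative): run a software TPM 2.0 emulator, then point QEMU at its control socket.

       # start a software TPM 2.0 emulator for one VM
       mkdir -p /var/lib/swtpm/win11
       swtpm socket --tpm2 \
         --tpmstate dir=/var/lib/swtpm/win11 \
         --ctrl type=unixio,path=/var/lib/swtpm/win11/swtpm.sock &

       # matching QEMU options to expose it to the guest:
       #   -chardev socket,id=chrtpm,path=/var/lib/swtpm/win11/swtpm.sock
       #   -tpmdev emulator,id=tpm0,chardev=chrtpm
       #   -device tpm-tis,tpmdev=tpm0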
    4 points
  11. I had the opportunity to test the "real world" bandwidth of some commonly used controllers in the community, so I'm posting my results in the hope that they may help some users choose a controller and help others understand what may be limiting their parity check/sync speed. Note that these tests are only relevant for those operations; normal reads/writes to the array are usually limited by hard disk or network speed. Next to each controller is its maximum theoretical throughput, followed by my results depending on the number of disks connected; each result is the observed parity/read check speed using a fast SSD-only array with Unraid v6. The wattage values (e.g., 6w) are the measured controller power consumption with all ports in use.
     2 Port Controllers
     - SIL 3132 PCIe gen1 x1 (250MB/s): 1 x 125MB/s; 2 x 80MB/s
     - Asmedia ASM1061 PCIe gen2 x1 (500MB/s) - e.g., SYBA SY-PEX40039 and other similar cards: 1 x 375MB/s; 2 x 206MB/s
     - JMicron JMB582 PCIe gen3 x1 (985MB/s) - e.g., SYBA SI-PEX40148 and other similar cards: 1 x 570MB/s; 2 x 450MB/s
     4 Port Controllers
     - SIL 3114 PCI (133MB/s): 1 x 105MB/s; 2 x 63.5MB/s; 3 x 42.5MB/s; 4 x 32MB/s
     - Adaptec AAR-1430SA PCIe gen1 x4 (1000MB/s): 4 x 210MB/s
     - Marvell 9215 PCIe gen2 x1 (500MB/s) - 2w - e.g., SYBA SI-PEX40064 and other similar cards (possible issues with virtualization): 2 x 200MB/s; 3 x 140MB/s; 4 x 100MB/s
     - Marvell 9230 PCIe gen2 x2 (1000MB/s) - 2w - e.g., SYBA SI-PEX40057 and other similar cards (possible issues with virtualization): 2 x 375MB/s; 3 x 255MB/s; 4 x 204MB/s
     - IBM H1110 PCIe gen2 x4 (2000MB/s) - LSI 2004 chipset; results should be the same as for an LSI 9211-4i and other similar controllers: 2 x 570MB/s; 3 x 500MB/s; 4 x 375MB/s
     - Asmedia ASM1064 PCIe gen3 x1 (985MB/s) - e.g., SYBA SI-PEX40156 and other similar cards: 2 x 450MB/s; 3 x 300MB/s; 4 x 225MB/s
     - Asmedia ASM1164 PCIe gen3 x2 (1970MB/s) - NOTE: not actually tested; performance inferred from the ASM1166 with up to 4 devices: 2 x 565MB/s; 3 x 565MB/s; 4 x 445MB/s
     5 and 6 Port Controllers
     - JMicron JMB585 PCIe gen3 x2 (1970MB/s) - e.g., SYBA SI-PEX40139 and other similar cards: 2 x 570MB/s; 3 x 565MB/s; 4 x 440MB/s; 5 x 350MB/s
     - Asmedia ASM1166 PCIe gen3 x2 (1970MB/s): 2 x 565MB/s; 3 x 565MB/s; 4 x 445MB/s; 5 x 355MB/s; 6 x 300MB/s
     8 Port Controllers
     - Supermicro AOC-SAT2-MV8 PCI-X (1067MB/s): 4 x 220MB/s (167MB/s*); 5 x 177.5MB/s (135MB/s*); 6 x 147.5MB/s (115MB/s*); 7 x 127MB/s (97MB/s*); 8 x 112MB/s (84MB/s*)
       * PCI-X 100MHz slot (800MB/s)
     - Supermicro AOC-SASLP-MV8 PCIe gen1 x4 (1000MB/s) - 6w: 4 x 140MB/s; 5 x 117MB/s; 6 x 105MB/s; 7 x 90MB/s; 8 x 80MB/s
     - Supermicro AOC-SAS2LP-MV8 PCIe gen2 x8 (4000MB/s) - 6w: 4 x 340MB/s; 6 x 345MB/s; 8 x 320MB/s (205MB/s*, 200MB/s**)
       * PCIe gen2 x4 (2000MB/s) ** PCIe gen1 x8 (2000MB/s)
     - LSI 9211-8i PCIe gen2 x8 (4000MB/s) - 6w - LSI 2008 chipset: 4 x 565MB/s; 6 x 465MB/s; 8 x 330MB/s (190MB/s*, 185MB/s**)
       * PCIe gen2 x4 (2000MB/s) ** PCIe gen1 x8 (2000MB/s)
     - LSI 9207-8i PCIe gen3 x8 (4800MB/s) - 9w - LSI 2308 chipset: 8 x 565MB/s
     - LSI 9300-8i PCIe gen3 x8 (4800MB/s with the SATA3 devices used for this test) - LSI 3008 chipset: 8 x 565MB/s (425MB/s*, 380MB/s**)
       * PCIe gen3 x4 (3940MB/s) ** PCIe gen2 x8 (4000MB/s)
     SAS Expanders
     - HP 6Gb (3Gb SATA) SAS Expander - 11w
       Single Link with LSI 9211-8i (1200MB/s*): 8 x 137.5MB/s; 12 x 92.5MB/s; 16 x 70MB/s; 20 x 55MB/s; 24 x 47.5MB/s
       Dual Link with LSI 9211-8i (2400MB/s*): 12 x 182.5MB/s; 16 x 140MB/s; 20 x 110MB/s; 24 x 95MB/s
       * Half the 6Gb bandwidth, because it only links at 3Gb with SATA disks
     - Intel SAS2 Expander RES2SV240 - 10w
       Single Link with LSI 9211-8i (2400MB/s): 8 x 275MB/s; 12 x 185MB/s; 16 x 140MB/s (112MB/s*); 20 x 110MB/s (92MB/s*)
       * Avoid mixing in disks with slower link speeds when using expanders, as they bring the total speed down; in this example 4 of the SSDs were SATA2 instead of all SATA3.
       Dual Link with LSI 9211-8i (4000MB/s): 12 x 235MB/s; 16 x 185MB/s
       Dual Link with LSI 9207-8i (4800MB/s): 16 x 275MB/s
     - LSI SAS3 expander (included on a Supermicro BPN-SAS3-826EL1 backplane)
       Single Link with LSI 9300-8i (tested with SATA3 devices; max usable bandwidth would be 2200MB/s, but with LSI's Databolt technology we can get almost SAS3 speeds): 8 x 500MB/s; 12 x 340MB/s
       Dual Link with LSI 9300-8i (*): 10 x 510MB/s; 12 x 460MB/s
       * Tested with SATA3 devices; max usable bandwidth would be 4400MB/s, but with LSI's Databolt technology we can get closer to SAS3 speeds. With SAS3 devices the limit here would be the PCIe link, which should be around 6600-7000MB/s usable.
     - HP 12G SAS3 EXPANDER (761879-001)
       Single Link with LSI 9300-8i (2400MB/s*): 8 x 270MB/s; 12 x 180MB/s; 16 x 135MB/s; 20 x 110MB/s; 24 x 90MB/s
       Dual Link with LSI 9300-8i (4800MB/s*): 10 x 420MB/s; 12 x 360MB/s; 16 x 270MB/s; 20 x 220MB/s; 24 x 180MB/s
       * Tested with SATA3 devices; no Databolt or equivalent technology, at least not with an LSI HBA. With SAS3 devices the limit would be around 4400MB/s with single link, and the PCIe slot with dual link, which should be around 6600-7000MB/s usable.
     - Intel SAS3 Expander RES3TV360
       Single Link with LSI 9308-8i (*): 8 x 490MB/s; 12 x 330MB/s; 16 x 245MB/s; 20 x 170MB/s; 24 x 130MB/s; 28 x 105MB/s
       Dual Link with LSI 9308-8i (*): 12 x 505MB/s; 16 x 380MB/s; 20 x 300MB/s; 24 x 230MB/s; 28 x 195MB/s
       * Tested with SATA3 devices; the PMC expander chip includes functionality similar to LSI's Databolt. With SAS3 devices the limit would be around 4400MB/s with single link, and the PCIe slot with dual link, which should be around 6600-7000MB/s usable.
       Note: these results were obtained after updating the expander firmware to the latest available at this time (B057); it was noticeably slower with the older firmware it shipped with.
     SATA2 vs SATA3
     I often see users on the forum asking if changing to SATA3 controllers or disks would improve their speed. SATA2 has enough bandwidth (between 265 and 275MB/s according to my tests) for the fastest disks currently on the market. If buying a new board or controller you should get SATA3 for the future, but except for SSD use there's no gain in changing your SATA2 setup to SATA3.
     Single vs. Dual Channel RAM
     In arrays with many disks, and especially with low-"horsepower" CPUs, memory bandwidth can also have a big effect on parity check speed. Obviously this will only make a difference if you're not hitting a controller bottleneck. Two examples with 24-drive arrays:
     - Asus A88X-M PLUS with AMD A4-6300 dual core @ 3.7GHz: Single Channel - 99.1MB/s; Dual Channel - 132.9MB/s
     - Supermicro X9SCL-F with Intel G1620 dual core @ 2.7GHz: Single Channel - 131.8MB/s; Dual Channel - 184.0MB/s
     DMI
     There is another bus that can be a bottleneck on Intel-based boards, much more so than SATA2: the DMI that connects the south bridge or PCH to the CPU. Sockets 775, 1156 and 1366 use DMI 1.0; sockets 1155, 1150 and 2011 use DMI 2.0; socket 1151 uses DMI 3.0.
     - DMI 1.0 (1000MB/s): 4 x 180MB/s; 5 x 140MB/s; 6 x 120MB/s; 8 x 100MB/s; 10 x 85MB/s
     - DMI 2.0 (2000MB/s): 4 x 270MB/s (SATA2 limit); 6 x 240MB/s; 8 x 195MB/s; 9 x 170MB/s; 10 x 145MB/s; 12 x 115MB/s; 14 x 110MB/s
     - DMI 3.0 (3940MB/s): 6 x 330MB/s (onboard SATA only*); 10 x 297.5MB/s; 12 x 250MB/s; 16 x 185MB/s
       * Despite being DMI 3.0**, Skylake, Kaby Lake, Coffee Lake and Cannon Lake chipsets have a max combined bandwidth of approximately 2GB/s for the onboard SATA ports.
       ** Except the low-end H110 and H310 chipsets, which are only DMI 2.0.
     DMI 1.0 can be a bottleneck using only the onboard SATA ports; DMI 2.0 can limit users with all onboard ports used plus an additional controller, onboard or in a PCIe slot that shares the DMI bus. On most home-market boards only the graphics slot connects directly to the CPU and all other slots go through the DMI (more top-of-the-line boards, usually with SLI support, have at least 2 such slots); server boards usually have 2 or 3 slots connected directly to the CPU, and you should always use these slots first. You can see below the diagram for my X9SCL-F test server board; for the DMI 2.0 tests I used the 6 onboard ports plus one Adaptec 1430SA in PCIe slot 4.
     - UMI (2000MB/s) - used on most AMD APUs, equivalent to Intel DMI 2.0: 6 x 203MB/s; 7 x 173MB/s; 8 x 152MB/s
     - Ryzen link - PCIe 3.0 x4 (3940MB/s): 6 x 467MB/s (onboard SATA only)
     I think there are no big surprises; most results make sense and are in line with what I expected, except maybe the SASLP, which should have the same bandwidth as the Adaptec 1430SA but is clearly slower and can limit a parity check with only 4 disks. I expect some variation in the results from other users due to different hardware and/or tunable settings, but I would be surprised by big differences; reply here if you can get a significantly better speed with a specific controller.
     How to check and improve your parity check speed
     System Stats from the Dynamix V6 plugins is usually an easy way to find out if a parity check is bus-limited. After the check finishes, look at the storage graph: on an unlimited system it should start at a higher speed and gradually slow down as it reaches the disks' slower inner tracks; on a limited system the graph will be flat at the beginning, or totally flat in the worst case. See the screenshots below for examples (arrays with mixed disk sizes will show speed jumps as each size finishes, but the principle is the same). If you are not bus-limited but still find your speed low, there are a couple of things worth trying (see also the quick check below):
     - Diskspeed - your parity check speed can't be faster than your slowest disk. A big advantage of Unraid is the ability to mix disks of different sizes, but this can leave you with an assortment of disk models and sizes; use this to find your slowest disks, and when it's time to upgrade, replace those first.
     - Tunables Tester - on some systems it can increase the average speed by 10 to 20MB/s or more; on others it makes little or no difference.
     That's all I can think of; all suggestions welcome.
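     As a rough way to spot-check for the slow disks mentioned above (a generic sketch, not the Diskspeed tool itself; device names vary per system), you can time raw reads per drive with the array idle:

       # sequential read test on each drive; hdparm -t reads from the
       # (fastest) outer tracks, so compare drives against each other
       for d in /dev/sd[b-z]; do
         echo "== $d =="
         hdparm -t "$d"
       done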
    4 points
  12. OK. Update just released which allows descriptions to be on the cards (defaults to no). Enable it in CA Settings. Also: a minor performance increase in certain cases, and rearranged debugging. If you've got issues with CA not loading / the spinner never disappearing, then: go to Settings (the CA settings page is now in there too, in the User Utilities section), enable debugging and apply; go to the Apps tab and wait at least 120 seconds; go back to Settings -> CA Settings and hit Download Log; upload the file here. (Also re-added 6.8.0+ compatibility - NOT TESTED)
    4 points
  13. Hey, I got good advice here in the forum and, above all, this is my first Unraid installation, so I'd like to present my setup.
     Configuration: Mainboard ASUS Pro WS X570-Ace - CPU AMD Ryzen 9 5950X 16C/32T @ ECO mode - Cooler be quiet! Dark Rock Pro 3 - RAM Kingston Server Premier 2x 16GB DDR4-3200 ECC - PSU be quiet! Straight Power 11 Platinum 850W - GPU PNY Quadro P2000 - Case Fractal Design Meshify 2 XL
     Drives: Parity 1x Toshiba N300 8TB - Disks 5x Toshiba N300 8TB - Cache 1x M.2 Samsung SSD 980 PRO 2TB
     UPS: APC Back-UPS 2200VA
     My firewall (already in operation for a while): OPNsense firewall software - HP EliteDesk with Pentium Gold G4400 - 9x Intel NIC - Zyxel GS1900-24 switch
     Currently my data is being transferred from the old (Ubuntu) server to the new Unraid one. Then the 2x 8TB drives will be emptied and installed in the new server (which is why only 4 drives are visible in the picture). So far I'm very happy with this setup and Unraid, apart from a few small things: The chipset cooler on the mainboard is awkwardly placed, right at the height of PCIe slot 1; even the "small" P2000 covers part of it, and a gaming GPU would cover the cooler completely... Not too bad, I'll leave the P2000 in slot 2. The ACC Express remote management system from Asus is rubbish; I've switched it off. It's a pity that I can't get Unraid to display the mainboard/CPU temperatures; the "Dynamix System Temp" app shows something other than the real temperature. I'm also disappointed with GPU passthrough: I specifically chose a 5950X to run a gaming VM with my ASUS Strix 5700 XT, but unfortunately that doesn't work at all. It's a lot of messing around, and when it does work you can't even restart the VM, you have to restart the whole Unraid system, so a Ryzen 5800X would have been enough for me. Well, maybe there will be an upgrade to a new Nvidia GPU at some point; I hope it will then even be possible to run my P2000 with drivers and pass the other Nvidia card through at the same time. Otherwise I'm busy setting things up. I still need to read up on networking & Unraid: in the end there will be a few Docker services reachable from outside, which will be separated onto their own NIC in their own network, and my VMs will also get their own NIC. Documentation and tutorials/videos on Unraid's network settings are unfortunately very scarce; everything else, from installation to Docker/VMs, is available in abundance. I think the Fractal case is really sexy and has tons of space for drives. The glass window is a bit unnecessary, but oh well 😊 I hope the system will serve me faithfully for many years. It will be interesting to see how hardware prices develop... They certainly won't improve any time soon.
    4 points
  14. I have been working on a ZFS plugin for the Unraid Main tab; considering that I am not an experienced PHP developer, this will probably take a while. However, I want to ask you ZFS users: which information do you want to see in the Main tab, considering that as of today we don't have any information about the pools? Any comment or feedback appreciated; this is the current look:
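     For reference, the kind of pool information such a plugin could surface is what the standard ZFS commands already report (the pool name "tank" is only an example):

       # one-line capacity/health summary per pool
       zpool list -o name,size,alloc,free,frag,cap,health

       # per-dataset usage for an example pool named "tank"
       zfs list -r -o name,used,avail,mountpoint tank

       # detailed device status and error counters
       zpool status -v tank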
    4 points
  15. Update... A fresh install from the latest Windows 11 Insider Preview also works just fine (please ignore that it says not compatible; that's because I've only assigned 50GB to the vdisk instead of the required 64GB *doh*):
    4 points
  16. I wanted to thank the team for how seamless they've made the backup and restore process! My USB drive failed last night, but luckily the latest configuration was stored safely in the cloud. Restoring the configuration was extremely easy and got the server up and running in just a few minutes. A+ work, everyone!
    4 points
  17. You call it a "patch", but it's not a patch! If you are worried about reverting changes after following the tutorial to enable TPM and use an OVMF-compatible Secure Boot UEFI BIOS, why don't you install Win11 with the registry hacks to bypass those checks? A friend of mine told me he was able to receive the Patch Tuesday update too. Once Unraid is upgraded you can add TPM and change to the OVMF UEFI BIOS type.
    3 points
  18. I'll contact Steef about it to make sure he's planning to update the core image. If not, I'll roll my own and update here. I'll report back once I know more. Edit: the necessary libraries are already installed in this image, so no changes are necessary. This has already been tested by someone running the beta update. Please see this GitHub issue for more info. Cheers!
    3 points
  19. I would really like to see the new/trending/top new installs links back on the sidebar. I use them for discovery a lot of the time, and right now you have to click Show More, do your browsing, click on the Apps link again, then Show More on the next category. That's my only real complaint with the redesign; I'm happy to have the option to toggle descriptions as well. Thanks for all the work, Squid!
    3 points
  20. AutoAdd Issue in Deluge
     First, let me say that I've benefited greatly from this container and the help in this thread, so thank you all. And although I'm running the container on a Synology unit, I thought I'd finally give something back here for anyone who may be having a similar issue.
     Background: The container was running great for me up to binhex/arch-delugevpn:2.0.3-2-01, but any time I upgraded past that I had problems with Deluge's AutoAdd functionality (basically its "watch" directory capability, formerly an external plugin, that is now baked into Deluge itself). Certain elements worked, like Radarr/Sonarr integration, but other elements broke, like when I used manual search inside Jackett, which relies on a blackhole watch folder. I ended up just rolling back the container; it worked fine again, and I stopped worrying about it for a while. However, with the new (rare) potential for IP leakage, it's been nagging at me to move to the new container versions. Initially, I wasn't sure if it was the container, the VPN, or Deluge itself, but it always felt like Deluge, given that the VPN was up, I could download torrents, and Radarr/Sonarr integration worked -- it was only AutoAdd turning itself off and misbehaving when using Jackett manual searches. I'm actually surprised I haven't seen more comments about this here, because using Jackett this way is AWESOME! (Y'all are missing out, LOL.)
     The Fix: I finally put my mind to figuring this out once and for all yesterday, and I believe I tracked it down. It turns out the baked-into-Deluge AutoAdd is currently broken for certain applications (like watching for magnets), and that version is in the current binhex containers. Even though the fix hasn't been promoted into Deluge yet (so of course it's not in the container yet either), there is a manual fix available, and it's easy: just drop an updated AutoAdd egg into the Deluge plugins folder and it will take precedence over the baked-in version (see the sketch below). I will say that I literally just implemented and tested it, so it's possible I'll still run into problems, but it's looking promising at the moment. Thanks again for this container and this thread, enjoy! The temporary fix can be found here
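     A minimal sketch of that manual fix, assuming the container's /config is mapped to an appdata folder (the host path, container name and egg filename below are placeholders; use the egg from the linked fix):

       # drop the patched egg into Deluge's plugins folder inside /config
       cp AutoAdd-1.8-py3.9.egg /path/to/appdata/binhex-delugevpn/plugins/
       # restart the container so Deluge loads the newer egg
       docker restart binhex-delugevpn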
    3 points
  21. I have started to create a plugin for snapshots. Currently this is for BTRFS-based snapshots only. It is still very early in the development process, so I would still class it as alpha, and not all functions are in place. I have been working with JorgeB to work through initial issues with formatting etc., so thanks for the feedback and help, but there may be use cases outside of our current environments. If you use Docker folder support instead of an image, there could be a lot of subvols displayed. If people want to look at the current development, you can download the plugin at your own risk; any suggestions are welcome, but I will not be able to provide support on this at present, until it is released as a beta. The schedule function is not developed yet; it's just the edit page at present. If {YMD} is used in the path prefix, it will be replaced with the date and time. Send does not support remote systems or incremental transfers at this time. Plugin download location: https://github.com/SimonFair/Snapshots/raw/main/snapshots.plg - access is on the Tools page under System Information.
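     For context, the underlying BTRFS operations the plugin wraps look roughly like this (a sketch with illustrative paths, not taken from the plugin's code):

       # read-only snapshot of a subvolume, with a date/time suffix
       # (the plugin's {YMD} token expands to something similar)
       btrfs subvolume snapshot -r /mnt/cache/domains /mnt/cache/snaps/domains_20210923-1200

       # send a snapshot to another local BTRFS filesystem
       # (remote and incremental send are not supported by the plugin yet)
       btrfs send /mnt/cache/snaps/domains_20210923-1200 | btrfs receive /mnt/disk1/backups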
    3 points
  22. Wait until it's publicly available.
    3 points
  23. The only thing that you may have to do is add a "new" VM with the same settings and the same vdisk as your current VM, but with the OVMF TPM BIOS, so that the TPM emulation kicks in, like: When you have done this, delete the "old" VM but keep the vdisks. Please also keep in mind that you may have to reactivate Windows if you do it like that, but that would be the easiest way to do it, and if you have linked your VM to your Microsoft account you should be able to recover the activation/key. Hope I explained that in an understandable way... This has changed a little to make the switching experience easier. What is necessary is that your current Windows 10/11 installation is OVMF-based, and that you've linked your Windows 10 VM to your Microsoft account so you can reactivate Windows after you've switched from "OVMF" to "OVMF-TPM". If you have a SeaBIOS VM you can convert it to OVMF, with a few more steps involved.
    3 points
  24. @milfer322 @thebaconboss @pieman16 @dest0 I have created a new release on the development branch! The most important fixes: added 2FA support and fixed the "new device spam". https://github.com/PlexRipper/PlexRipper/releases/tag/v0.8.7 Next high-priority fixes are these: - Downloaded media files are being wrongly named due to a parsing error and given incorrect file permissions. - Download speed limit configurable per server, so as not to fully take up all available bandwidth. - Slow download speeds with certain servers. - Disabling the Download page commands for TvShow and Season download tasks; these are currently not working and might confuse users. I've also added the feedback I received here to the GitHub issues; please subscribe to those for updates!
    3 points
  25. In episode 9 of the Uncast, @jonp is joined by @Sycotix, @DiscDuck, and @Hawks from the Ibracorp community. They have been producing great content for the Unraid community since the beginning of 2021, showing users how to get the most out of their servers. We talk about how they got started with Unraid, the value of owning your own data, some of their personal interests, and much more!
    3 points
  26. bloody foreigners, staying over there - not driving our trucks.... 🤣
    3 points
  27. Has the plan for VM snapshots gone away?
    3 points
  28. Please open a terminal window and type this:
     unraid-api restart
     When the API restarts it will hopefully make a connection, and then from the My Servers Dashboard you should have options for "Local access" or "Remote access" instead of "Access unavailable".
    3 points
  29. Per @Jaster's suggestion, how many individual servers are you running?
    2 points
  30. Just remove. Just delete - script changes are reset at boot. Yes.
    2 points
  31. Just got an email from Roon about some major updates to Linux cores. I guess they're switching from Mono to .NET, which requires libicu to be installed ahead of the update. Any thoughts as to whether anything needs to be done on our end for this to work properly? https://help.roonlabs.com/portal/en/kb/articles/linux-performance-improvements
    2 points
  32. I really wish you hadn't removed the Docker Hub integration. I realize the templates were pretty bare-bones, but at least it filled out the name, description, repository, etc., making it a lot faster than going to the Docker page, manually adding a container and starting with a completely blank template.
    2 points
  33. Pay no attention to this thread. I've already found the zip with the USB backup, and it couldn't have a more descriptive name... Regards
    2 points
  34. To be honest, it wasn't a particularly rigorous test... I turned it on and set a Linux ISO to download over Usenet. It was about half the usual speed (~20MB/s vs ~40MB/s). By the time I'd turned it back off, that download had finished. So I grabbed a different ISO and tried that, and it was fast again. I took it that, as I'd previously had it disabled, this must be why. HOWEVER, sometimes different downloads connect to different (slower) Usenet servers, so it's not unusual for my downloads to occasionally be significantly slower. I shall try it again today to see if it really does impact me as much as I thought. To answer your Qs: QoS is off. Threat management/IDS/IPS is off. Hardware offloading is enabled (as is 'Hardware offload scheduler', whatever that is). UPDATE: I've enabled Traffic Identification and this time it didn't negatively affect my downloads. Last time must have been a coincidence. Thanks for the heads-up!
    2 points
  35. Yeah! I was trying to figure out how to filter such logs and this works perfectly! Thank you so much SimonF and ich777!
    2 points
  36. This is exactly it! To be clear for anyone with similar issues, I simply did the following in the unRAID terminal:
     mkdir /boot/config/modprobe.d/
     echo -e 'blacklist btusb\nblacklist bluetooth' > /boot/config/modprobe.d/bluetooth.conf
     This creates a folder called "modprobe.d" inside your /boot/config/ folder, then creates a config file inside "modprobe.d" that blacklists your Bluetooth device at boot and therefore stops it from causing any more problems. If you want to reactivate your Bluetooth temporarily, you can simply type the following in your unRAID terminal:
     modprobe btusb
     modprobe bluetooth
     However, if you'd like to reactivate your Bluetooth permanently (persistent after reboot), simply remove the file that you just created. Big thanks to @SimonF for pointing me in the right direction! Also, thank you to @JorgeB, @ChatNoir and @trurl. Have a great weekend!
    2 points
  37. Hi all, I just wanted to report back that I was able to solve this issue! After updating the BIOS, the card and drive showed up, no issues! Thanks again for the support.
    2 points
  38. I'm using an origin cert on NPM, not a Let's Encrypt one, so renew the origin cert? - I renewed the Cloudflare origin cert and updated NPM and all subdomains to the newly created cert; still getting spammed with errors? - I was able to resolve this spamming error by removing any down or unused CNAMEs from CF.
    2 points
  39. For now it is emulated; I haven't had time yet to look into how to pass through a real TPM device, but when the new unRAID version drops, passthrough should also be possible.
    2 points
  40. Well, I'm hoping I have finally sorted it! I've never had that long an uptime since it was built. I found out that the 128GB of Corsair LPX 16GB DIMMs in the server have different version numbers, which relate to different chipsets! Luckily I had more DIMMs in another machine, so I managed to sort out a 128GB set with the same chips, and it looks like that has got me sorted at long last. Link to the quote below from Reddit about version numbers.
    2 points
  41. No big deal; with the next Unraid update it'll work again (also for the Insider dev channel users here), and it's also easy and nice to handle with existing VMs, like @ich777 already pointed out. Give it a few days... Windows won't stop working now.
    2 points
  42. I got rid of this message with the following command in the terminal:
     docker exec -u 0 Nextcloud /bin/bash -c "apt update && apt install -y libmagickcore-6.q16-6-extra"
     But you have to run it again after every Nextcloud update.
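     Since it has to be re-run after every update, one option (a sketch, assuming the container really is named Nextcloud as above) is to keep it as a small script, e.g. for the User Scripts plugin:

       #!/bin/bash
       # re-install the ImageMagick extras inside the Nextcloud container;
       # run again after each Nextcloud image update
       docker exec -u 0 Nextcloud /bin/bash -c \
         "apt update && apt install -y libmagickcore-6.q16-6-extra"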
    2 points
  43. I've pushed an update to the "Remux Video Files" plugin - it is now improved to handle transcoding streams that are not compatible with the destination container. For example, if you have a wmv and you want it converted to an mp4, it will need to be transcoded; it will default to h264/aac. If a stream type is not compatible, it will simply be removed. This will only affect people coming from unusual video containers: if you have an mkv file in h265/ac3 and you run it with this plugin to remux to mp4, it will not transcode the video stream, as the plugin knows that mp4 files can handle h265, so that stream will simply be copied (see the sketch below). For people running into issues with a codec not being compatible with the container they want to move to: can you add this plugin to your plugin flow BEFORE any other FFmpeg tasks - e.g., before audio sorting, h265 encoding, etc. cc: @Aerodb @EdwinZelf
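     To illustrate the two behaviours in plain ffmpeg terms (an analogy only; the plugin builds its own command lines):

       # incompatible container: wmv -> mp4 needs a transcode (h264/aac default)
       ffmpeg -i input.wmv -c:v libx264 -c:a aac output.mp4

       # compatible streams: mkv (h265) -> mp4 is a pure remux, streams are copied
       ffmpeg -i input.mkv -c copy output.mp4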
    2 points
  44. Go to the Plugins tab and check for updates. You'll want to make sure you are running the latest version of the My Servers plugin, which is currently 2021.09.15.1853. If you are still having issues, open a web terminal and type:
     unraid-api restart
    2 points
  45. This is already in the works by @limetech itself, but it will need an additional kernel module and will most likely be included in the next RC or one of the next RCs. But keep in mind that if you pass through the TPM from the host, you can only use it for one VM at a time.
    2 points
  46. It isn't on the list for this poll for reasons that might not be so obvious. As it stands, there are really 3 ways to do snapshots on Unraid today (maybe more ;-). One is using btrfs snapshots at the filesystem layer. Another is using simple reflink copies, which also rely upon btrfs. Another still is using the tools built into QEMU to do this. Each method has pros and cons. The QEMU method is universal, as it works on every filesystem we support because it isn't filesystem-dependent; unfortunately it also performs incredibly slowly. Btrfs snapshots are really great, but you have to first define subvolumes to use them, and they rely on the underlying storage being formatted with btrfs. Reflink copies are really easy because they are essentially a smart copy command (just add --reflink to any cp command); they still require the source/destination to be on btrfs, but they're super fast, storage-efficient, and don't even require you to have subvolumes defined (see the example below). And with the potential for ZFS, we have yet another option, as it too supports snapshots! There are other challenges with snapshots as well, so it's a tougher nut to crack than some other features. Doesn't mean it's not on the roadmap
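     For example, a reflink copy of a vdisk on a btrfs pool (paths are illustrative) completes almost instantly and shares extents with the original until either file is modified:

       # instant, space-efficient copy-on-write clone of a VM disk image
       cp --reflink=always /mnt/cache/domains/win11/vdisk1.img /mnt/cache/domains/win11/vdisk1.snap.img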
    2 points
  47. It would be very nice if Unraid supported snapshots for VMs. I would prefer this feature above all others.
    2 points
  48. Here is a list of the servers presented by users of the community (please only link projects here):
    2 points
  49. Prometheus PiHole Exporter
     Note: You can connect to any PiHole on your local network, and of course to one running on unRAID in a Docker container or VM.
     1. Download and install the Prometheus PiHole Exporter plugin from the CA App.
     2. Go to the plugin settings by clicking on "Settings -> Pi-Hole Exporter" (at the bottom of the Settings page).
     3. Enter the IP of your PiHole and your API token and click on "Confirm & Start". (Please note that if you run your PiHole in a Docker container in a custom network like br0, you have to enable the option "Enable host access" in your Docker settings; otherwise the plugin can't connect to your PiHole instance.) To get your API token, go to your PiHole instance, log in and click on "Settings -> API / Web interface -> Show API Token -> Yes, show API Token". After that you should see in the top right corner that the exporter is running, along with details about it.
     4. Open up prometheus.yml (steps 4 + 5 from the first post), add a line with '- targets: ["YOURSERVERIP:9617"]' (replace "YOURSERVERIP" with your server IP), then save and close the file.
     5. Go to the Docker page and restart Prometheus.
     6. Open up the Grafana WebUI and click on "+ -> Import".
     7. Now we import a preconfigured dashboard for the PiHole Exporter from Grafana.com (Source): simply enter the dashboard ID (10176) and click "Load".
     8. In the next screen rename the dashboard to your liking and click on "Import". Now you should be greeted with something like this (keep in mind that the dashboard can display N/A for some values while there is not yet enough data; wait a few minutes and the values will fill in).
     9. You will notice the warning "Panel plugin not found: grafana-piechart-panel" on the dashboard; to fix this, follow the next steps: go to your Docker page, click on Grafana and select "Console". In the console window enter 'grafana-cli plugins install grafana-piechart-panel' and press RETURN. Then close the console window and restart the Grafana Docker container. Now go back to your PiHole dashboard within Grafana and you should see that the dashboard is fully loaded.
     ATTENTION: Please note that if you restart your PiHole container, the exporter will stop and you have to start it manually from the plugin configuration page with the "START" button. This also applies if you have CA Backup installed and the container is being backed up. To avoid having to restart it manually after each CA Backup, do the following: go to Settings and click on "Backup/Restore Appdata" at the bottom; confirm the warning that pops up, scroll all the way down and click on "Show Advanced Settings"; for Pi-Hole, make sure you click the switch so that it shows "Don't Stop"; scroll down to the bottom and click "Apply".
     NO DATA SHOWING UP IN THE PIHOLE DASHBOARD: If no data is showing up in the PiHole dashboard, it is most likely the case that you have configured another datasource like Telegraf in Grafana; to solve this issue go to this post:
    2 points