Leaderboard

Popular Content

Showing content with the highest reputation on 09/07/21 in all areas

  1. For Episode 7 of the Uncast, @jonp breaks down what's coming in Unraid 6.10 Stable and beyond: In version 6.10 of Unraid OS, we've added a new element to the webGui known as the User Profile Component (UPC). This element helps simplify key management but, due to some confusion in the community, we wanted to record an episode to further explain this new component and how it works. This dovetails into a further discussion of My Servers, your privacy, and our promise to existing customers. After that, @jonp spends some time talking about ZFS and why so many users are requesting official support for it in our feature request thread. If you have any questions about the topics discussed in this pod, please post them here!
    2 points
  2. While I would really like to see ZFS implemented natively, I also want it to be available as a second pool/array. With 6.9.x allowing up to 35 pools in the Pool Devices section, that's likely where I would be happy configuring a ZFS pool. Of course, if it were available as a second array beside the main parity-protected array, that would work too. Use the slower unRAID pool for storage, but use the ZFS pool for disk-I/O-intensive tasks like video editing/scrubbing.
    2 points
  3. I had the opportunity to test the "real world" bandwidth of some commonly used controllers in the community, so I'm posting my results in the hope that they may help some users choose a controller and help others understand what may be limiting their parity check/sync speed. Note that these tests are only relevant for those operations; normal reads/writes to the array are usually limited by hard disk or network speed. Next to each controller is its maximum theoretical throughput and my results depending on the number of disks connected; the result is the observed parity/read check speed using a fast SSD-only array with Unraid V6. Power figures (e.g., 2w) are the measured controller power consumption with all ports in use.

2 Port Controllers
SIL 3132 PCIe gen1 x1 (250MB/s): 1 x 125MB/s, 2 x 80MB/s
Asmedia ASM1061 PCIe gen2 x1 (500MB/s) - e.g., SYBA SY-PEX40039 and other similar cards: 1 x 375MB/s, 2 x 206MB/s
JMicron JMB582 PCIe gen3 x1 (985MB/s) - e.g., SYBA SI-PEX40148 and other similar cards: 1 x 570MB/s, 2 x 450MB/s

4 Port Controllers
SIL 3114 PCI (133MB/s): 1 x 105MB/s, 2 x 63.5MB/s, 3 x 42.5MB/s, 4 x 32MB/s
Adaptec AAR-1430SA PCIe gen1 x4 (1000MB/s): 4 x 210MB/s
Marvell 9215 PCIe gen2 x1 (500MB/s) - 2w - e.g., SYBA SI-PEX40064 and other similar cards (possible issues with virtualization): 2 x 200MB/s, 3 x 140MB/s, 4 x 100MB/s
Marvell 9230 PCIe gen2 x2 (1000MB/s) - 2w - e.g., SYBA SI-PEX40057 and other similar cards (possible issues with virtualization): 2 x 375MB/s, 3 x 255MB/s, 4 x 204MB/s
IBM H1110 PCIe gen2 x4 (2000MB/s) - LSI 2004 chipset, results should be the same as for an LSI 9211-4i and other similar controllers: 2 x 570MB/s, 3 x 500MB/s, 4 x 375MB/s
Asmedia ASM1064 PCIe gen3 x1 (985MB/s) - e.g., SYBA SI-PEX40156 and other similar cards: 2 x 450MB/s, 3 x 300MB/s, 4 x 225MB/s
Asmedia ASM1164 PCIe gen3 x2 (1970MB/s) - NOTE: not actually tested, performance inferred from the ASM1166 with up to 4 devices: 2 x 565MB/s, 3 x 565MB/s, 4 x 445MB/s

5 and 6 Port Controllers
JMicron JMB585 PCIe gen3 x2 (1970MB/s) - 2w - e.g., SYBA SI-PEX40139 and other similar cards: 2 x 570MB/s, 3 x 565MB/s, 4 x 440MB/s, 5 x 350MB/s
Asmedia ASM1166 PCIe gen3 x2 (1970MB/s) - 2w: 2 x 565MB/s, 3 x 565MB/s, 4 x 445MB/s, 5 x 355MB/s, 6 x 300MB/s

8 Port Controllers
Supermicro AOC-SAT2-MV8 PCI-X (1067MB/s): 4 x 220MB/s (167MB/s*), 5 x 177.5MB/s (135MB/s*), 6 x 147.5MB/s (115MB/s*), 7 x 127MB/s (97MB/s*), 8 x 112MB/s (84MB/s*)
* PCI-X 100MHz slot (800MB/s)
Supermicro AOC-SASLP-MV8 PCIe gen1 x4 (1000MB/s) - 6w: 4 x 140MB/s, 5 x 117MB/s, 6 x 105MB/s, 7 x 90MB/s, 8 x 80MB/s
Supermicro AOC-SAS2LP-MV8 PCIe gen2 x8 (4000MB/s) - 6w: 4 x 340MB/s, 6 x 345MB/s, 8 x 320MB/s (205MB/s*, 200MB/s**)
* PCIe gen2 x4 (2000MB/s)  ** PCIe gen1 x8 (2000MB/s)
LSI 9211-8i PCIe gen2 x8 (4000MB/s) - 6w - LSI 2008 chipset: 4 x 565MB/s, 6 x 465MB/s, 8 x 330MB/s (190MB/s*, 185MB/s**)
* PCIe gen2 x4 (2000MB/s)  ** PCIe gen1 x8 (2000MB/s)
LSI 9207-8i PCIe gen3 x8 (4800MB/s) - 9w - LSI 2308 chipset: 8 x 565MB/s
LSI 9300-8i PCIe gen3 x8 (4800MB/s with the SATA3 devices used for this test) - LSI 3008 chipset: 8 x 565MB/s (425MB/s*, 380MB/s**)
* PCIe gen3 x4 (3940MB/s)  ** PCIe gen2 x8 (4000MB/s)

SAS Expanders
HP 6Gb (3Gb SATA) SAS Expander - 11w
Single Link with LSI 9211-8i (1200MB/s*): 8 x 137.5MB/s, 12 x 92.5MB/s, 16 x 70MB/s, 20 x 55MB/s, 24 x 47.5MB/s
Dual Link with LSI 9211-8i (2400MB/s*): 12 x 182.5MB/s, 16 x 140MB/s, 20 x 110MB/s, 24 x 95MB/s
* Half the 6Gb bandwidth because it only links at 3Gb with SATA disks

Intel® SAS2 Expander RES2SV240 - 10w
Single Link with LSI 9211-8i (2400MB/s): 8 x 275MB/s, 12 x 185MB/s, 16 x 140MB/s (112MB/s*), 20 x 110MB/s (92MB/s*)
* Avoid using disks with slower link speeds on expanders, as it will bring total speed down; in this example 4 of the SSDs were SATA2 instead of all SATA3.
Dual Link with LSI 9211-8i (4000MB/s): 12 x 235MB/s, 16 x 185MB/s
Dual Link with LSI 9207-8i (4800MB/s): 16 x 275MB/s

LSI SAS3 expander (included on a Supermicro BPN-SAS3-826EL1 backplane)
Single Link with LSI 9300-8i (tested with SATA3 devices; max usable bandwidth would be 2200MB/s, but with LSI's Databolt technology we can get almost SAS3 speeds): 8 x 500MB/s, 12 x 340MB/s
Dual Link with LSI 9300-8i (*): 10 x 510MB/s, 12 x 460MB/s
* Tested with SATA3 devices; max usable bandwidth would be 4400MB/s, but with LSI's Databolt technology we can get closer to SAS3 speeds. With SAS3 devices the limit here would be the PCIe link, which should be around 6600-7000MB/s usable.

HP 12G SAS3 Expander (761879-001)
Single Link with LSI 9300-8i (2400MB/s*): 8 x 270MB/s, 12 x 180MB/s, 16 x 135MB/s, 20 x 110MB/s, 24 x 90MB/s
Dual Link with LSI 9300-8i (4800MB/s*): 10 x 420MB/s, 12 x 360MB/s, 16 x 270MB/s, 20 x 220MB/s, 24 x 180MB/s
* Tested with SATA3 devices; no Databolt or equivalent technology, at least not with an LSI HBA. With SAS3 devices the limit here would be around 4400MB/s with single link, and the PCIe slot with dual link, which should be around 6600-7000MB/s usable.

Intel® SAS3 Expander RES3TV360
Single Link with LSI 9308-8i (*): 8 x 490MB/s, 12 x 330MB/s, 16 x 245MB/s, 20 x 170MB/s, 24 x 130MB/s, 28 x 105MB/s
Dual Link with LSI 9308-8i (*): 12 x 505MB/s, 16 x 380MB/s, 20 x 300MB/s, 24 x 230MB/s, 28 x 195MB/s
* Tested with SATA3 devices; the PMC expander chip includes functionality similar to LSI's Databolt. With SAS3 devices the limit here would be around 4400MB/s with single link, and the PCIe slot with dual link, which should be around 6600-7000MB/s usable.
Note: these results were taken after updating the expander firmware to the latest available at this time (B057); it was noticeably slower with the older firmware it shipped with.

SATA 2 vs SATA 3
I often see users on the forum asking if changing to SATA 3 controllers or disks would improve their speed. SATA 2 has enough bandwidth (between 265 and 275MB/s according to my tests) for the fastest disks currently on the market. If buying a new board or controller you should buy SATA 3 for the future, but except for SSD use there's no gain in changing your SATA 2 setup to SATA 3.

Single vs. Dual Channel RAM
In arrays with many disks, and especially with low "horsepower" CPUs, memory bandwidth can also have a big effect on parity check speed. Obviously this only makes a difference if you're not hitting a controller bottleneck. Two examples with 24-drive arrays:
Asus A88X-M PLUS with AMD A4-6300 dual core @ 3.7GHz: Single Channel – 99.1MB/s, Dual Channel – 132.9MB/s
Supermicro X9SCL-F with Intel G1620 dual core @ 2.7GHz: Single Channel – 131.8MB/s, Dual Channel – 184.0MB/s

DMI
There is another bus that can be a bottleneck on Intel-based boards, much more so than SATA 2: the DMI that connects the south bridge or PCH to the CPU. Sockets 775, 1156 and 1366 use DMI 1.0; sockets 1155, 1150 and 2011 use DMI 2.0; socket 1151 uses DMI 3.0.
DMI 1.0 (1000MB/s): 4 x 180MB/s, 5 x 140MB/s, 6 x 120MB/s, 8 x 100MB/s, 10 x 85MB/s
DMI 2.0 (2000MB/s): 4 x 270MB/s (SATA2 limit), 6 x 240MB/s, 8 x 195MB/s, 9 x 170MB/s, 10 x 145MB/s, 12 x 115MB/s, 14 x 110MB/s
DMI 3.0 (3940MB/s): 6 x 330MB/s (onboard SATA only*), 10 x 297.5MB/s, 12 x 250MB/s, 16 x 185MB/s
* Despite being DMI 3.0**, Skylake, Kaby Lake, Coffee Lake, Comet Lake and Alder Lake chipsets have a max combined bandwidth of approximately 2GB/s for the onboard SATA ports.
** Except the low-end H110 and H310 chipsets, which are only DMI 2.0. Z690 is DMI 4.0 and not yet tested by me, but I expect the same result as the other Alder Lake chipsets.
DMI 1.0 can be a bottleneck using only the onboard SATA ports. DMI 2.0 can limit users with all onboard ports in use plus an additional controller, either onboard or on a PCIe slot that shares the DMI bus. On most home-market boards only the graphics slot connects directly to the CPU and all other slots go through the DMI (higher-end boards, usually with SLI support, have at least 2 CPU-connected slots); server boards usually have 2 or 3 slots connected directly to the CPU, and you should always use these slots first. You can see below the diagram for my X9SCL-F test server board; for the DMI 2.0 tests I used the 6 onboard ports plus one Adaptec 1430SA on PCIe slot 4.
UMI (2000MB/s) - used on most AMD APUs, equivalent to Intel DMI 2.0: 6 x 203MB/s, 7 x 173MB/s, 8 x 152MB/s
Ryzen link - PCIe 3.0 x4 (3940MB/s): 6 x 467MB/s (onboard SATA only)

I think there are no big surprises; most results make sense and are in line with what I expected, except maybe the SASLP, which should have the same bandwidth as the Adaptec 1430SA but is clearly slower and can limit a parity check with only 4 disks. I expect some variation in results from other users due to different hardware and/or tunable settings, but I would be surprised by big differences; reply here if you can get a significantly better speed with a specific controller.

How to check and improve your parity check speed
System Stats from the Dynamix V6 Plugins is usually an easy way to find out if a parity check is bus limited. After the check finishes, look at the storage graph: on an unlimited system it should start at a higher speed and gradually slow down as it reaches the disks' slower inner tracks; on a limited system the graph will be flat at the beginning, or totally flat in a worst-case scenario. See the screenshots below for examples (arrays with mixed disk sizes will have speed jumps at the end of each one, but the principle is the same).
If you are not bus limited but still find your speed low, there are a couple of things worth trying:
Diskspeed - your parity check speed can't be faster than your slowest disk. A big advantage of Unraid is the possibility to mix different size disks, but this can lead to an assortment of disk models and sizes; use this tool to find your slowest disks and, when it's time to upgrade, replace these first.
Tunables Tester - on some systems it can increase the average speed by 10 to 20MB/s or more; on others it makes little or no difference.
That's all I can think of, all suggestions welcome.
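If you want to confirm what your own controller and disks negotiated, a quick check from the Unraid console can help; a minimal sketch, where the PCI address 01:00.0 and /dev/sdb are placeholders for your own devices:
  # List SATA/SAS controllers and note their PCI addresses
  lspci | grep -iE 'sata|sas|raid'
  # Compare the negotiated PCIe link (LnkSta) against the card's maximum (LnkCap)
  lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'
  # Check a disk's negotiated SATA link speed (current vs. maximum)
  smartctl -a /dev/sdb | grep -i 'SATA Version'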
    1 point
  4. According to the announcement, this is fixed in 6.10 RC1.
    1 point
  5. 1 point
  6. The sign-in screen has a "Troubleshoot" link; please press that and let us know which account you are trying to sign in with and what error message you get.
    1 point
  7. For the most part, parity is the same. But the big difference is, as you explained, that you can read data off the remaining drives even if all parity drives plus one more failed. It comes at the cost of write performance when Turbo Write is off, but this ability (to read the other drives) is so great. And there are multiple options to run ZFS in Unraid: FreeNAS/TrueNAS in a VM is one option, and the ZFS plugin is another, for example. That's why I prefer multiple arrays instead of ZFS.
    1 point
  8. I used OpenELEC or LibreELEC as a VM a few years ago and passed through a GPU and a USB device with success.
    1 point
  9. You can use RCON to send messages to the users on the servers. Not strictly necessary... ARK does not continuously write to the .ark files, so you can just copy them out of the data directory and they will be current as of the last server tick, or the last "save world" command. See https://github.com/CydFSA/A3C for some examples.
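A minimal sketch of that kind of live backup, assuming the server data lives under /mnt/user/appdata/ark-se and that mcrcon is your RCON client (both are assumptions; adjust the paths, port and password to your container):
  # Optionally warn players and force a save first via RCON
  mcrcon -H 127.0.0.1 -P 27020 -p "adminpass" "broadcast Backup starting" "saveworld"
  # Copy the .ark save files out of the data directory while the server keeps running
  mkdir -p /mnt/user/backups/ark
  cp /mnt/user/appdata/ark-se/ShooterGame/Saved/SavedArks/*.ark /mnt/user/backups/ark/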
    1 point
  10. If you are familiar with the container from @binhex then I recommend sticking with it; they do basically the same thing. The @binhex containers are based on Arch, mine are based on Debian, and I also include different languages with the container; I don't know if you can change the language in the containers from @binhex. I made the container multilingual because someone on the German sub-forums asked if it would be possible to make it German.
    1 point
  11. You are a guru and a wizard all in one. You are very, very intelligent @ghost82, thank you for all of your great advice!!! Unreal. Much appreciated.
    1 point
  12. omfg sorry for wasting your time... the error was caused by my switch, which was stuck in an update and was doing shit lol
    1 point
  13. You need a VM running on the server with the video card passed through to it, and then run Kodi there.
    1 point
  14. Thanks! Turns out things got messed up further with all the back and forth, and now I can't get the HD channels on either the PCI or the USB tuner. I'm beginning to suspect something is going on with the DVB-T2 provider(!). They have a reputation for malfunctions... will keep working on that.
    1 point
  15. Somewhat detracting from the conversation, guys, but what is meant by that statement is that while @nuhll was posting about one issue, I saw entries in the logs similar to the ones I had. Unrelated conversations.
    1 point
  16. Noticed the lights didn't come on after plugging in the ethernet cable, so I replaced the motherboard and now it works properly. It was a defective motherboard. Thanks!
    1 point
  17. You only need to map '/dev/bus/usb' as a device and Frigate will detect your TPU and only use that device. There is no need to change anything on that line in the template.
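For reference, outside of the Unraid template that mapping is just a --device flag on the container. A minimal docker run sketch, where the image tag, port and appdata path are assumptions to adjust for your own setup:
  # Map the whole USB bus as a device so Frigate can detect the Coral USB TPU on its own
  docker run -d --name frigate \
    --device /dev/bus/usb:/dev/bus/usb \
    -v /mnt/user/appdata/frigate:/config \
    -p 5000:5000 \
    ghcr.io/blakeblackshear/frigate:stable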
    1 point
  18. Tick the box saying "Yes I want to do this" and hit the Format button.
    1 point
  19. Good to have those explanations. Hopefully it will dispel some of the concerns about the UPC, or at least allow the discussion to continue on a better basis.
    1 point
  20. Yes. I did not even notice until you mentioned it, but my order record shows it is from HyperHawk. I would be upset about the switch (if it was a switch, or hidden), except I got the 5-yr factory warranty, so it is OK. Ordered Aug. 30, received Sept. 6, warranty to Sept. 30, 2026. I think they go by manufacture date plus 1 month to sell, plus 5 yr, or something similar. It is difficult to know if you are getting a factory warranty or a dealer warranty, which is more risky. In this case, if the Seagate factory warranty did not show for the serial number I received, I would have returned it. But it is scary since the listing does not say "factory" warranty. Is the Exos an SMR drive? If it is not SMR, then that's one more reason it is a sweet spot for me.
    1 point
  21. Again unfortunately there isn't enough here for us to go on. It's complaining about the OS.dmg.root_hash which makes me think that there is an issue with the image that was downloaded... Given that you are working with a VM, most of the normal hardware compatibility issues should not exist because it's all virtualized. Hence there is most likely something about your VM config that is incorrect and causing an issue. Start again from scratch with a fresh docker and OS download. Don't try to pass any hardware through at first, just get the system booting and then worry about USB/GPU/etc. Apple have a tendency to keep changing the product IDs of their OS images and of course new versions come out all the time. It is a lot of work to keep the Macinabox code up to date and probably more than Spacinvaderone was counting on. @ghost82 has been hard at work to fix a lot of these issues; https://github.com/SpaceinvaderOne/Macinabox/commit/2aba67bc2738d3ecc7a156a1a9b897665d6982ff https://forums.unraid.net/topic/84601-support-spaceinvaderone-macinabox/page/84/?tab=comments#comment-1010581 This was for OpenCore 0.7.0 and Mac OS 11.4 I'm not sure on the state of this work now with OpenCore 0.7.2 and Mac OS 11.5 being out. Take a look at the dortania debug guide. You may need to install the debug version of OpenCore to get more information. https://dortania.github.io/OpenCore-Install-Guide/troubleshooting/extended/kernel-issues.html#stuck-on-eb-log-exitbs-start You can also just try building your own setup from scratch. It isn't that bad if you don't mind investing some time to tinker. https://dortania.github.io/OpenCore-Install-Guide/ It's pretty rewarding.
    1 point
  22. Hey, no, it's under "Bot". You create the "app" like you did, and then under "Bot" -> "Add Bot" you create the actual bot. That's where you will find the token.
    1 point
  23. Can you try to do a new scan with the "new" card? Possibly something changed from the old one and it does not work properly now...
    1 point
  24. Hey everyone, this issue with repeated lockouts should now be resolved; you should no longer have to manually delete any cookies.
    1 point
  25. I've created a fork of the PIA scripts to simplify the install process on unRaid. It's still not as simple as importing a configuration, but the scripts now generate a file following the "wg#.conf" convention, which gets picked up by the Dynamix WireGuard plugin; they also fill in the public key and VPN type fields correctly (which live in "wg#.cfg"). I also added a user script, to be used with the User Scripts plugin, to make configuration changes (like re-selecting a server) easy; all you really need to fill in to be up and running are the PIA account credentials. You can find my fork at https://github.com/DorCoMaNdO/pia-wireguard-unraid; the user script is part of the repo at unraid_userscript.sh
    1 point
  26. It seems like the error is related to the MariaDB container being upgraded and, as a result, MariaDB being upgraded from 10.1 to 10.2. To get rid of this, ssh into the MariaDB container and delete the InnoDB log files in /config/databases (these files are called "ib_logfile0", "ib_logfile1", etc.). After deleting them, restart the MariaDB container. https://mariadb.com/kb/en/upgrading-from-mariadb-101-to-mariadb-102/+comments/2903
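From the Unraid terminal that procedure might look roughly like this (the container name mariadb is an assumption; use whatever your template calls it):
  # Open a shell inside the MariaDB container
  docker exec -it mariadb bash
  # Inside the container: remove the old InnoDB log files left over from 10.1
  rm /config/databases/ib_logfile*
  exit
  # Restart the container so MariaDB 10.2 recreates the log files
  docker restart mariadb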
    1 point
  27. @Abigel My guess is that this is the problem. The LTS version of the controller does not support the WiFi 6 access points. I have a U6-Lite as well as a few AC APs and am running controller version 5.14.23 with no issues. You will need to update your controller to a version that supports the U6-Lite.
    1 point
  28. Yes this is the intent. You would go to Main -> Pool Devices and create a ZFS pool similar to how you can add a btrfs pool today. In general, ZFS and btrfs pools are for speed whereas the Unraid array would be used when you value flexibility.
    1 point
  29. Which hardware is optimal for Unraid? The official minimum requirements for Unraid:
A 64-bit capable processor at 1GHz or better
At least 2GB of RAM
Linux hardware drivers for the storage, Ethernet and USB controllers
Two hard drives - to protect your files with a parity disk
Purely in terms of features, we recommend:
a CPU with an iGPU
a CPU with good single-thread performance (Passmark list); I recommend 1400 points as the minimum
a motherboard with two M.2 slots for a fault-tolerant SSD cache (RAID1)
as many SATA ports as possible, so you can do without expansion cards long term
no RAID cards/controllers (they are not supported)
The best efficiency, convenience and feature set comes from an Intel system with an iGPU up to the 10th generation:
Xeon workstation CPUs of the E-21xxG (8th), E-22xxG (9th) and W-12xx (10th) series usually come with an iGPU
up to the 10th generation the Intel GVT-g plugin works, which allows the iGPU to be used in several VMs; Core i 11xxx or Xeon W-13xx - or newer - cannot do this!
from the 10th generation Intel's power consumption has risen slightly, and from the 11th it is clearly higher; the 8th and 9th generations are the most frugal
from the 11th generation Intel no longer supports legacy boot, which can make passing hardware through to VMs harder (old cards don't support UEFI, and some only get their VMs running if the Unraid server was booted in legacy mode)
from the 11th generation there is a completely new iGPU generation whose Linux drivers are not yet mature (Plex SDR tone mapping does not work)
Intel systems sometimes consume considerably less power at idle than AMD systems
from the 13th generation Intel may have become more efficient again; the Kontron boards for Intel's 12th generation are said to be particularly frugal (the 13th generation should also run on them, please check yourself!)
Why I advise against AMD:
an AMD system only has an iGPU with the expensive and rare Ryzen 4xxxG or 5xxxG
an AMD iGPU has considerably less video transcoding performance than an Intel iGPU
old AMD Ryzen 1xxx CPUs do not run stably under Linux
AMD setups react very sensitively to RAM that is "too fast", so avoid RAM above 3200 MHz (with DDR4)
there is nothing like Intel GVT-g on AMD
there is no frugal AMD motherboard with 8x SATA and 2x M.2
the iGPU cannot be passed through to a VM
stay away from Threadripper: there are latency problems in VMs
some boards have a bug when PCIe 4.0 hardware is used or the power-saving C-states are enabled, and eject the unRAID USB stick
When would I consider an AMD system:
it is already on hand
power consumption doesn't matter
you need the high core count and performance of, say, a 5900X for as little money as possible, and may also want to use ECC RAM
What options do I have if I want ECC RAM:
most AMD systems support ECC RAM (read the motherboard's technical specs!)
on Intel, all Xeon CPUs support ECC RAM, and up to the 9th generation the Pentium Gold and i3 CPUs do as well
Note: consumer/workstation systems usually only support ECC, not registered (Reg) ECC RAM!
For finding suitable hardware I recommend Geizhals, since it has very good filters. That way you can, for example, easily find all DDR4 non-Reg ECC modules.
Particularly frugal power supplies at low load (<25W):
- PicoPSU
- Corsair RM550x (2021), uses 1W more than the PicoPSU
- Be Quiet Pure Power FM 11 550W, uses 1.6W more than the PicoPSU
- all other power supplies use 3 to 4W more than a PicoPSU
- check TweakPC for the newest power supplies; thanks to the new ATX12VO standard, more and more frugal units are coming out
Here are a few build suggestions:
https://geizhals.de/?cat=WL-3054899 (ITX, mini case, non-ECC)
https://geizhals.de/?cat=WL-2107598 (Intel, mATX, ECC, 10th generation)
https://geizhals.de/?cat=WL-2161844 (Intel, mATX, non-ECC, 8th and 9th generation)
https://geizhals.de/?cat=WL-2107596 (Intel, mATX, ECC, 8th and 9th generation)
https://geizhals.de/?cat=WL-1881432 (Intel, ITX, ECC, 1x M.2, 8x SATA, 8th and 9th generation)
https://geizhals.de/?cat=WL-1881408 (Intel, ITX, 1x M.2, 8th and 9th generation)
https://geizhals.de/?cat=WL-2166906 (AMD, mATX, ECC, 2x M.2, 8x SATA, IPMI, 10G, 4xxxG/5xxxG)
All 8th and 9th generation XEON processors on eBay may also be interesting.
    1 point
  30. Hello I'm looking to get this docker running: Portfolio Performance Docker I was wondering if anybody already has it running?
    1 point
  31. Please open a terminal window and type this: unraid-api restart When the API restarts it will hopefully make a connection and then from the My Servers Dashboard you should have options for "Local access" or "Remote access" instead of "Access unavailable"
    1 point
  32. Currently at 114 TB. Starting to replace my 8 TB with 10's. Once complete 150 TB, but that will take the rest of the year I believe. Update 12-16-2019 Finished upgrading hard drives to 10 TB's. With my old 8 TB drives created a second server. 230 total TB's between the 2. I now am done. Specs are in signature. added picture of my 150 server..AKA Landfill Edit : This is my current storage and as you can see , I have upgraded to the 10 TB drives. If I do go larger next step would be to the 16TB drives.
    1 point
  33. I believe more is wrong than just file permissions; however, to fix your appdata permissions for Let's Encrypt and Nextcloud, run the following:
chown -cR nobody:users /mnt/user/appdata/letsencrypt /mnt/user/appdata/nextcloud
chmod -cR ug+rw,ug+X,o-rwx /mnt/user/appdata/letsencrypt /mnt/user/appdata/nextcloud
fixAppdataPerms.sh
    1 point
  34. Redownload the unRAID version you want, copy any bz* files from the download into the root of the flash drive, and run make_bootable again. If that doesn't help, try a different USB port.
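On a Linux machine that recovery might look roughly like this (the zip name and flash mount point are assumptions for whichever release and drive you are using; on Windows, run make_bootable.bat instead):
  # Extract the downloaded release and copy the bz* files onto the root of the flash drive
  unzip unRAIDServer-6.9.2-x86_64.zip -d unraid-tmp
  cp unraid-tmp/bz* /mnt/usb/
  # Re-run the make_bootable script included in the release (needs root)
  cd /mnt/usb && sudo bash make_bootable_linux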
    1 point