Leaderboard

Popular Content

Showing content with the highest reputation on 11/02/20 in Posts

  1. As you might know, DockerHub will be making HUGE changes that affect users, read more HERE. The TL;DR is that each IP is only allowed 100 unauthenticated pulls in a span of 6 hours. FYI, checking for updates counts as a pull. Logging in to a DockerHub account only doubles that. So, the request: currently dockerMan only works well with images from DockerHub. Once you use another registry, it loses the ability to stay "up to date". This is troublesome for people and organizations that have migrated, are migrating, or will migrate to other registries (personally I now push to GitHub's Container Registry). I hereby ask for dockerMan to have the same integration for other registries (or a documented selection of them), to satisfy users. Such a migration will also mean "new images" in CA. (pings @Squid)
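For anyone who wants to see where they stand against that limit, Docker documents a way to read the rate-limit headers using a token for its ratelimitpreview/test repository; a minimal sketch from a shell on the host (curl and jq assumed available):
    # request an anonymous pull token for Docker's rate-limit test repository
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    # a HEAD request against the manifest returns ratelimit-limit and ratelimit-remaining headers
    curl -s --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit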
    7 points
  2. I am a Chinese user. Thank you for the script, I learned a lot from it.
    2 points
  3. I had the opportunity to test the "real world" bandwidth of some commonly used controllers in the community, so I'm posting my results in the hope that they may help some users choose a controller and help others understand what may be limiting their parity check/sync speed. Note that these tests are only relevant for those operations; normal reads/writes to the array are usually limited by hard disk or network speed. Next to each controller is its maximum theoretical throughput, and my results depending on the number of disks connected; the result is the observed parity/read check speed using a fast SSD-only array with Unraid v6. Values in green are the measured controller power consumption with all ports in use.
2 Port Controllers
SIL 3132 PCIe gen1 x1 (250MB/s)
1 x 125MB/s
2 x 80MB/s
Asmedia ASM1061 PCIe gen2 x1 (500MB/s) - e.g., SYBA SY-PEX40039 and other similar cards
1 x 375MB/s
2 x 206MB/s
JMicron JMB582 PCIe gen3 x1 (985MB/s) - e.g., SYBA SI-PEX40148 and other similar cards
1 x 570MB/s
2 x 450MB/s
4 Port Controllers
SIL 3114 PCI (133MB/s)
1 x 105MB/s
2 x 63.5MB/s
3 x 42.5MB/s
4 x 32MB/s
Adaptec AAR-1430SA PCIe gen1 x4 (1000MB/s)
4 x 210MB/s
Marvell 9215 PCIe gen2 x1 (500MB/s) - 2w - e.g., SYBA SI-PEX40064 and other similar cards (possible issues with virtualization)
2 x 200MB/s
3 x 140MB/s
4 x 100MB/s
Marvell 9230 PCIe gen2 x2 (1000MB/s) - 2w - e.g., SYBA SI-PEX40057 and other similar cards (possible issues with virtualization)
2 x 375MB/s
3 x 255MB/s
4 x 204MB/s
IBM H1110 PCIe gen2 x4 (2000MB/s) - LSI 2004 chipset, results should be the same as for an LSI 9211-4i and other similar controllers
2 x 570MB/s
3 x 500MB/s
4 x 375MB/s
Asmedia ASM1064 PCIe gen3 x1 (985MB/s) - e.g., SYBA SI-PEX40156 and other similar cards
2 x 450MB/s
3 x 300MB/s
4 x 225MB/s
Asmedia ASM1164 PCIe gen3 x2 (1970MB/s) - NOTE - not actually tested, performance inferred from the ASM1166 with up to 4 devices
2 x 565MB/s
3 x 565MB/s
4 x 445MB/s
5 and 6 Port Controllers
JMicron JMB585 PCIe gen3 x2 (1970MB/s) - 2w - e.g., SYBA SI-PEX40139 and other similar cards
2 x 570MB/s
3 x 565MB/s
4 x 440MB/s
5 x 350MB/s
Asmedia ASM1166 PCIe gen3 x2 (1970MB/s) - 2w
2 x 565MB/s
3 x 565MB/s
4 x 445MB/s
5 x 355MB/s
6 x 300MB/s
8 Port Controllers
Supermicro AOC-SAT2-MV8 PCI-X (1067MB/s)
4 x 220MB/s (167MB/s*)
5 x 177.5MB/s (135MB/s*)
6 x 147.5MB/s (115MB/s*)
7 x 127MB/s (97MB/s*)
8 x 112MB/s (84MB/s*)
* PCI-X 100MHz slot (800MB/s)
Supermicro AOC-SASLP-MV8 PCIe gen1 x4 (1000MB/s) - 6w
4 x 140MB/s
5 x 117MB/s
6 x 105MB/s
7 x 90MB/s
8 x 80MB/s
Supermicro AOC-SAS2LP-MV8 PCIe gen2 x8 (4000MB/s) - 6w
4 x 340MB/s
6 x 345MB/s
8 x 320MB/s (205MB/s*, 200MB/s**)
* PCIe gen2 x4 (2000MB/s)
** PCIe gen1 x8 (2000MB/s)
LSI 9211-8i PCIe gen2 x8 (4000MB/s) - 6w - LSI 2008 chipset
4 x 565MB/s
6 x 465MB/s
8 x 330MB/s (190MB/s*, 185MB/s**)
* PCIe gen2 x4 (2000MB/s)
** PCIe gen1 x8 (2000MB/s)
LSI 9207-8i PCIe gen3 x8 (4800MB/s) - 9w - LSI 2308 chipset
8 x 565MB/s
LSI 9300-8i PCIe gen3 x8 (4800MB/s with the SATA3 devices used for this test) - LSI 3008 chipset
8 x 565MB/s (425MB/s*, 380MB/s**)
* PCIe gen3 x4 (3940MB/s)
** PCIe gen2 x8 (4000MB/s)
SAS Expanders
HP 6Gb (3Gb SATA) SAS Expander - 11w
Single Link with LSI 9211-8i (1200MB/s*)
8 x 137.5MB/s
12 x 92.5MB/s
16 x 70MB/s
20 x 55MB/s
24 x 47.5MB/s
Dual Link with LSI 9211-8i (2400MB/s*)
12 x 182.5MB/s
16 x 140MB/s
20 x 110MB/s
24 x 95MB/s
* Half the 6Gb bandwidth because it only links @ 3Gb with SATA disks
Intel® SAS2 Expander RES2SV240 - 10w
Single Link with LSI 9211-8i (2400MB/s)
8 x 275MB/s
12 x 185MB/s
16 x 140MB/s (112MB/s*)
20 x 110MB/s (92MB/s*)
* Avoid using slower linking speed disks with expanders, as it will bring the total speed down; in this example 4 of the SSDs were SATA2 instead of all SATA3.
Dual Link with LSI 9211-8i (4000MB/s)
12 x 235MB/s
16 x 185MB/s
Dual Link with LSI 9207-8i (4800MB/s)
16 x 275MB/s
LSI SAS3 expander (included on a Supermicro BPN-SAS3-826EL1 backplane)
Single Link with LSI 9300-8i (tested with SATA3 devices; max usable bandwidth would be 2200MB/s, but with LSI's Databolt technology we can get almost SAS3 speeds)
8 x 500MB/s
12 x 340MB/s
Dual Link with LSI 9300-8i (*)
10 x 510MB/s
12 x 460MB/s
* Tested with SATA3 devices; max usable bandwidth would be 4400MB/s, but with LSI's Databolt technology we can get closer to SAS3 speeds. With SAS3 devices the limit here would be the PCIe link, which should be around 6600-7000MB/s usable.
HP 12G SAS3 Expander (761879-001)
Single Link with LSI 9300-8i (2400MB/s*)
8 x 270MB/s
12 x 180MB/s
16 x 135MB/s
20 x 110MB/s
24 x 90MB/s
Dual Link with LSI 9300-8i (4800MB/s*)
10 x 420MB/s
12 x 360MB/s
16 x 270MB/s
20 x 220MB/s
24 x 180MB/s
* Tested with SATA3 devices, no Databolt or equivalent technology, at least not with an LSI HBA. With SAS3 devices the limit here would be around 4400MB/s with single link, and the PCIe slot with dual link, which should be around 6600-7000MB/s usable.
Intel® SAS3 Expander RES3TV360
Single Link with LSI 9308-8i (*)
8 x 490MB/s
12 x 330MB/s
16 x 245MB/s
20 x 170MB/s
24 x 130MB/s
28 x 105MB/s
Dual Link with LSI 9308-8i (*)
12 x 505MB/s
16 x 380MB/s
20 x 300MB/s
24 x 230MB/s
28 x 195MB/s
* Tested with SATA3 devices; the PMC expander chip includes functionality similar to LSI's Databolt. With SAS3 devices the limit here would be around 4400MB/s with single link, and the PCIe slot with dual link, which should be around 6600-7000MB/s usable.
Note: these results were obtained after updating the expander firmware to the latest available at this time (B057); it was noticeably slower with the older firmware it came with.
SATA2 vs SATA3
I often see users on the forum asking if changing to SATA3 controllers or disks would improve their speed. SATA2 has enough bandwidth (between 265 and 275MB/s according to my tests) for the fastest disks currently on the market. If buying a new board or controller you should buy SATA3 for the future, but except for SSD use there's no gain in changing your SATA2 setup to SATA3.
Single vs. Dual Channel RAM
In arrays with many disks, and especially with low "horsepower" CPUs, memory bandwidth can also have a big effect on parity check speed. Obviously this will only make a difference if you're not hitting a controller bottleneck. Two examples with 24-drive arrays:
Asus A88X-M PLUS with AMD A4-6300 dual core @ 3.7GHz
Single Channel - 99.1MB/s
Dual Channel - 132.9MB/s
Supermicro X9SCL-F with Intel G1620 dual core @ 2.7GHz
Single Channel - 131.8MB/s
Dual Channel - 184.0MB/s
DMI
There is another bus that can be a bottleneck for Intel based boards, much more so than SATA2: the DMI that connects the south bridge or PCH to the CPU. Sockets 775, 1156 and 1366 use DMI 1.0; sockets 1155, 1150 and 2011 use DMI 2.0; socket 1151 uses DMI 3.0.
DMI 1.0 (1000MB/s)
4 x 180MB/s
5 x 140MB/s
6 x 120MB/s
8 x 100MB/s
10 x 85MB/s
DMI 2.0 (2000MB/s)
4 x 270MB/s (SATA2 limit)
6 x 240MB/s
8 x 195MB/s
9 x 170MB/s
10 x 145MB/s
12 x 115MB/s
14 x 110MB/s
DMI 3.0 (3940MB/s)
6 x 330MB/s (Onboard SATA only*)
10 x 297.5MB/s
12 x 250MB/s
16 x 185MB/s
* Despite being DMI 3.0**, Skylake, Kaby Lake, Coffee Lake, Comet Lake and Alder Lake chipsets have a max combined bandwidth of approximately 2GB/s for the onboard SATA ports.
** Except the low end H110 and H310 chipsets, which are only DMI 2.0. Z690 is DMI 4.0 and not yet tested by me, but I expect the same result as the other Alder Lake chipsets.
DMI 1.0 can be a bottleneck using only the onboard SATA ports. DMI 2.0 can limit users with all onboard ports used plus an additional controller, onboard or on a PCIe slot that shares the DMI bus. In most home market boards only the graphics slot connects directly to the CPU and all other slots go through the DMI (more top of the line boards, usually with SLI support, have at least 2 CPU-connected slots); server boards usually have 2 or 3 slots connected directly to the CPU, and you should always use these slots first. You can see below the diagram for my X9SCL-F test server board; for the DMI 2.0 tests I used the 6 onboard ports plus one Adaptec 1430SA on PCIe slot 4.
UMI (2000MB/s) - used on most AMD APUs, equivalent to Intel DMI 2.0
6 x 203MB/s
7 x 173MB/s
8 x 152MB/s
Ryzen link - PCIe 3.0 x4 (3940MB/s)
6 x 467MB/s (Onboard SATA only)
I think there are no big surprises; most results make sense and are in line with what I expected, except maybe for the SASLP, which should have the same bandwidth as the Adaptec 1430SA but is clearly slower and can limit a parity check with only 4 disks. I expect some variation in the results from other users due to different hardware and/or tunable settings, but I would be surprised if there are big differences; reply here if you can get a significantly better speed with a specific controller.
How to check and improve your parity check speed
System Stats from the Dynamix V6 plugins is usually an easy way to find out if a parity check is bus limited. After the check finishes, look at the storage graph: on an unlimited system it should start at a higher speed and gradually slow down as it moves to the disks' slower inner tracks; on a limited system the graph will be flat at the beginning, or totally flat in the worst case. See the screenshots below for examples (arrays with mixed disk sizes will have speed jumps at the end of each one, but the principle is the same). If you are not bus limited but still find your speed low, there are a couple of things worth trying:
Diskspeed - your parity check speed can't be faster than your slowest disk. A big advantage of Unraid is the ability to mix different size disks, but this can lead to an assortment of disk models and sizes; use this to find your slowest disks and, when it's time to upgrade, replace those first.
Tunables Tester - on some systems it can increase the average speed by 10 to 20MB/s or more, on others it makes little or no difference.
That's all I can think of; all suggestions welcome.
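A quick way to sanity-check which PCIe link a controller actually negotiated (a common reason a faster HBA underperforms in an older slot) is lspci on the Unraid console; a minimal sketch, where 01:00.0 is only a placeholder for your controller's own address:
    # list SATA/SAS/RAID controllers with their bus addresses
    lspci -nn | grep -Ei 'sata|sas|raid'
    # compare the slot's maximum (LnkCap) to the currently negotiated link (LnkSta) for one controller
    lspci -s 01:00.0 -vv | grep -E 'LnkCap:|LnkSta:'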
    1 point
  4. I've spent a few weeks getting my X570 AORUS Elite WiFi + 3900X + GTX 1070 running to my liking, so I thought I would share. These settings are also confirmed working on the AORUS Pro WiFi and AORUS Ultra, and will probably be similar for all the X570 AORUS boards. Here are the settings for USB passthrough, single Nvidia GPU passthrough, and more. I am using BIOS F10. This is important, as your IOMMU groupings can/will change with AGESA updates.
UEFI / BIOS Settings:
Tweaker -> Advanced CPU Settings -> SVM Mode -> Enable
Settings -> Miscellaneous -> IOMMU -> Enable
Settings -> AMD CBS -> ACS Enable -> Enable
Settings -> AMD CBS -> Enable AER Cap -> Enable
USB Passthrough:
Leaving PCIe ACS override disabled, you should have ~34 IOMMU groups (give or take, depending on how many PCIe devices you have connected) if you look in Tools > System Devices. There should be 3 USB controllers with the same vendor/device ID (1022:149c). Two of them will be lumped together with a PCI bridge and "Non-Essential Instrumentation". Those are the two we want to pass! The more logical option would be the controller isolated in its own group, but I could NOT get that one to pass. The trick is to run your Unraid USB stick off that third controller, so we can pass the other two controllers together. Run your Unraid USB stick out of the rear white USB port labeled BIOS. That white USB 3.0 port plus the neighboring 3 blue USB 3.0 ports share a controller; use those other ports for your keyboard and mouse (to be passed through as devices) and your UPS or whatever else you want Unraid to access. Note the addresses of the two USB controllers AND the "Non-Essential Instrumentation" in that IOMMU group. In my case they are 07:00.0, 07:00.1 and 07:00.3. Create the file /boot/config/vfio-pci.cfg with the corresponding contents (see the sketch at the end of this post). When you reboot, these devices will be available in the VM XML GUI to pass through under Other PCI Devices. Pass all 3 of them together! If you do not pass the "Non-Essential Instrumentation", Unraid will throw a warning in the logs that the .1 controller is dependent on it and unavailable to reset. When you pass through all three together you will get no errors/warnings and everything works. Bonus: Bluetooth on this board is a USB device tied to the .3 controller and is passed through along with the controller! Note: when you add or remove PCIe devices, these addresses can/will change, so check Tools > System Devices to see if the USB addresses have changed and update vfio-pci.cfg accordingly.
Single (NVIDIA) GPU Passthrough:
For single GPU passthrough, you need to disable graphical output in Unraid. From the Main menu, click the name of your boot device (flash). Under Syslinux Config -> Unraid OS, add "video=efifb:off" after "append initrd=/bzroot". The line should now read "append initrd=/bzroot video=efifb:off". When you reboot you will notice there is no video output when Unraid boots (you will be left with a freeze frame of the boot menu). Your solo GPU is now ready to pass. For Nvidia you will need the vbios for your card; I dumped my own following this tutorial using a second GPU. If you can't dump your own, try following this tutorial to download/modify a working vbios. Now simply pass your GPU, vbios, and the sound card that goes with your GPU from the VM XML GUI.
Fan Speed Sensors and PWM Controllers:
See the warning below! You can already see your CPU temp (Tctl) using the k10temp driver with Dynamix System Temperature. If you want to see fan speeds on your dashboard, or use the Dynamix Auto Fan Control plugin, we can force the it87 driver to load for the it8628 on this board. To force this we need to set another boot flag, "acpi_enforce_resources=lax". Add this the same way as above, after "video=efifb:off"; that line in your syslinux.cfg should now read "append initrd=/bzroot video=efifb:off acpi_enforce_resources=lax". Next, add the module load line to /boot/config/go (see the sketch at the end of this post). The it87 driver will now load on boot, your fan speeds will be displayed on the Unraid dashboard, and the fan controllers will be available in Dynamix Auto Fan Control.
Warning: Setting acpi_enforce_resources to lax is considered risky for reasons explained here.
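Since the two file snippets referenced above are short, here is a minimal sketch of what they could look like; the BIND line assumes the 07:00.x addresses from this post and the older space-separated vfio-pci.cfg syntax, and the it87 line uses the commonly suggested force_id value for the IT8628, so verify both against your own hardware:
    # /boot/config/vfio-pci.cfg - bind all three devices from that IOMMU group to vfio-pci at boot
    BIND=07:00.0 07:00.1 07:00.3
    # /boot/config/go - force the it87 driver to load for the IT8628 Super I/O chip
    modprobe it87 force_id=0x8628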
    1 point
  5. I want to thank @superboki for all these Unraid tutorials. Please follow!
    1 point
  6. Hi, is it possible for me to change my username on the forum, please? I would like to change it to Juzzotec. Thanks
    1 point
  7. Interesting. So I just tried it with Romania, but had no luck. But then I went in and added the line from the FAQ's A22 to the Romania file, and now it works! I'm not sure if certain locations are just not working, or if it had to do with where I put the cipher-fallback line, but it's working for me now! Very curious that you didn't need to add the line, but oh well! Appreciate your help @binhex and @diditstart!
    1 point
  8. Q22:- https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
    1 point
  9. This is what I am using now: https://hub.docker.com/r/linuxserver/sabnzbd/
    1 point
  10. For your information: here are some Unraid video tutorials 🙂
    1 point
  11. Thanks for this information and the details on the NAS (it will help me replace my old Synology DS413j). If you can make more tutorials, they are very welcome, especially in French, as they are rare. Regards, Phil
    1 point
  12. Yep, you'll need to verify everything when importing anyway, as the import-from-URL parser doesn't always get everything perfect, but for now you'll need to convert fractions manually. This is an enhancement/feature request to support fractions; I'd suggest logging a comment to show your support for wanting fractions as well. https://github.com/vabene1111/recipes/issues/142
    1 point
  13. You DO NOT need to purchase a new license. Upgrades are FREE.
    1 point
  14. Just the bz files that you replaced. Or you can upgrade to a newer Unraid version via the normal method and it will also work without the patch.
    1 point
  15. Yes, you can, but then you have to do that every ~3 months, and that does not seem like a solution to me:
docker exec -it NginxProxyManager sh
and then
certbot renew
A quick Google shows that in Cloudflare you should be able to exclude a URL (yoursite.com/.well-known/*) from being cached, and then it should work perfectly.
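The every-~3-months chore can also be scheduled rather than typed in by hand; a minimal sketch, assuming the same container name and something like cron or the User Scripts plugin to run it monthly (certbot only renews certificates that are close to expiry, so extra runs are harmless):
    # non-interactive renewal check inside the NginxProxyManager container
    docker exec NginxProxyManager certbot renew --quiet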
    1 point
  16. Cache device is dropping offline:
Nov 2 09:36:58 Tower kernel: pcieport 0000:00:06.0: AER: Uncorrected (Fatal) error received: 0000:00:00.0
Nov 2 09:36:58 Tower kernel: nvme 0000:05:00.0: PCIe Bus Error: severity=Uncorrected (Fatal), type=Inaccessible, (Unregistered Agent ID)
Nov 2 09:37:29 Tower kernel: nvme nvme0: I/O 710 QID 14 timeout, aborting
Nov 2 09:37:29 Tower kernel: nvme nvme0: I/O 711 QID 14 timeout, aborting
Nov 2 09:37:29 Tower kernel: nvme nvme0: I/O 712 QID 14 timeout, aborting
Nov 2 09:37:29 Tower kernel: nvme nvme0: I/O 713 QID 14 timeout, aborting
Nov 2 09:37:29 Tower kernel: nvme nvme0: I/O 714 QID 14 timeout, aborting
Nov 2 09:37:59 Tower kernel: nvme nvme0: I/O 710 QID 14 timeout, reset controller
Nov 2 09:38:29 Tower kernel: nvme nvme0: I/O 0 QID 0 timeout, reset controller
Nov 2 09:38:34 Tower kernel: nvme nvme0: Device shutdown incomplete; abort shutdown
There have been previous reports of issues with Adata devices; if you can, try a different brand.
    1 point
  17. See here for some benchmarks on the possible performance with various controllers, of course board/CPU are also a factor, i.e. a PCIe 3.0 HBA will only perform optimally with a PCIe 3.0 board/CPU.
    1 point
  18. Just keep in mind that if your certificate needs to be renewed it will most likely fail due to the same issue!
    1 point
  19. Good morning, about nine months ago I also moved from a Synology (415+) to Unraid. I also use Plex and do hardware transcoding with the iGPU of my Intel Xeon. Regarding the Plex database migration, there is a video on YouTube from SpaceInvader in which he shows how to move the Plex database from one Docker container to another; maybe that will help you (move your database off the Synology and then mount it in the Plex Docker). Encryption of your data can be configured in Unraid under Settings. I think most of the better CPUs handle the encryption in hardware, so it does not cause high system load. Regarding cameras, I also have 3 cameras on my house. I played with several programs and ended up with the following solution: I created a VM (whole disk) with Xpenology (the Synology system), where I continue to use Surveillance Station (I have licenses). The VM and the recordings work without problems and use practically no system resources! Greetings from Bavaria
    1 point
  20. Hi JorgeB: Indeed, there was some issue with the PCIe slot I was using. I'm not sure what the issue is, but I switched my graphics card (P200) and the NIC, and now it shows up. I will set up pfSense now. Thank you all, you guys are the best.
    1 point
  21. The maximum check frequency is 4 times per day, so you'd have to have 50 containers installed and all 50 showing an update to get nailed by it (Settings - Notification Settings). Yup. Because the system works by the repository entry. It all works out in the end and will adjust itself over time. But I do have a couple of pet peeves with how some apps are handled by the maintainers, and modifying how this works will fix my OCD.
    1 point
  22. I believe that dockerMan already does support other registries on 6.9 (https://github.com/limetech/webgui/pull/650, unless I'm misreading what this is for). Tomorrow, once the change actually takes effect, I will be testing whether it's a container pull or an image pull (multiple images in a container) that counts towards the quota. Realistically, it shouldn't affect many users around here (uber power users and some developers being the exceptions). On a quick look at the code, it's not that hard to add DockerHub credentials to the system to allow you 200+ pulls every 6 hours. But yeah, if you're checking for updates to your containers every hour, then you may run into it. CA itself doesn't particularly care which registry is used; there are a few apps currently which do not utilize DockerHub.
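For reference, authenticating the Docker engine's own pulls is just a docker login from the Unraid terminal; a minimal sketch (the username is a placeholder, and whether dockerMan's update checks honour the stored credentials is a separate question):
    # store Docker Hub credentials for this host's pulls (prompts for the password or an access token)
    docker login -u your_dockerhub_username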
    1 point
  23. Look at /mnt/user/appdata/NginxProxyManager/log/letsencrypt.log; you should have more details about the failure there.
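For example, something like this from the Unraid terminal shows the tail of that log, where the most recent failure is normally recorded (path taken from the line above):
    tail -n 50 /mnt/user/appdata/NginxProxyManager/log/letsencrypt.log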
    1 point
  24. Done. Process worked great, thank you!
    1 point
  25. Two modes:
balance-tlb: outgoing traffic is load balanced, but incoming traffic only uses a single interface. The driver will change the MAC address on the NIC when sending, but the incoming MAC always remains the same.
balance-alb: both sending and receiving frames are load balanced using the change-MAC-address trick.
http://www.enterprisenetworkingplanet.com/linux_unix/article.php/3850636/Understanding-NIC-Bonding-with-Linux.htm
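To check which mode a bond is actually running, or to change it outside the GUI, the standard Linux bonding interface can be queried directly; a minimal sketch, assuming the bond is named bond0:
    # show the current bonding mode, slaves and their link status
    cat /proc/net/bonding/bond0
    # switch the mode (only possible while bond0 is down and has no slaves attached)
    echo balance-alb > /sys/class/net/bond0/bonding/mode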
    1 point
  26. Lol, I just looked at your parts list; we have a lot in common. Here are some thoughts:
- Up the DRAM speed. The AMD CPU likes 3200MHz or even 3600MHz; I have 4x16GB of G.Skill Ripjaws 3200.
- Do use multiple HDDs for parity support.
- Consider an SSD cache in front of your main HDD array. I initially didn't have any cache but ended up putting in a pair of (mirrored) 250GB SATA SSDs for better performance.
- I also have additional NVMe drives, one dedicated to each of the primary OSs (Mac and Win10).
- Change your video card to something current unless there is a specific need for a Quadro.
- Consider multiple GPUs and USB cards if you want to offer dedicated hardware to your VMs.
Also keep in mind the Define 7 XL is huge; I got away with just the non-XL. Just my 2c
    1 point
  27. I'm on an AsRock X570 board with a 3950X and my system is rock solid. But in general I would recommend staying away from AMD video cards. I had endless issues with a 5700XT, which would cause random reboots when shutting down a VM with the card passed through. I also had issues getting the card into a low power mode (it tended to pull >100W even when the VM was shut down). I believe this all comes down to the long-standing AMD VFIO reset bug. There are patches for Linux that work around this issue, but last time I checked they are not in Unraid because they caused their own stability issues. Interestingly, I have an RX550 passed through to a VM which gets shut down/rebooted regularly and has been flawless, so go figure.
    1 point
  28. Anyone else having trouble with the server list? I've been running mine daily with no issues for a while, but when I went to do a manual test recently it returned nothing. So I popped into settings, where I had manually selected a nearby Spectrum server, and the list that comes up now has very few servers and none anywhere near my state (Florida). If I go to the Speedtest website it auto-selects the nearby server, so I don't know what's going on. To make sure it was still at least working, I changed it to Auto and it selected something in Nebraska or thereabouts. It worked, though, and didn't give any errors.
    1 point
  29. v0.3 released:
- Changed the undervolt installation method. undervolt is no longer installed from a URL; instead it is installed through the pip/setuptools package management, which is the recommended way to install Python packages.
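For anyone installing it by hand, the pip route is the usual one-liner; a minimal sketch, assuming the undervolt package from PyPI (--read reports the currently applied offsets):
    python3 -m pip install undervolt
    undervolt --read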
    1 point
  30. I used the Krusader Docker. I connected to the first server and did the same on the second. It works a bit like FTP transfer software such as Filezilla, or Transmit on Mac.
    1 point
  31. I have the ASUS TUF Gaming X570-Plus (non-WiFi) and I would not recommend buying this board. I have two GPUs but can only use one of them, as the system hogs the other and stubbing does not make any difference. I have a multi-controller USB PCIe x4 card which I can't connect, so I had to use a single-controller USB x1 card to get some more usable USB ports for VMs. Adding a third PCIe x1 GPU to use as a dummy disables the network.
    1 point
  32. I just edited the repo to be portainer-ce and it upgraded just fine.
    1 point
  33. A quick look at the platforms and motherboards: the X299 provides 5GbE/1GbE vs 2.5GbE/1GbE on the X570, as well as more M.2 and SATA ports. All things being close (in price apparently; for the speed I don't know), the Intel looks more interesting. (I swear I am not an Intel fanboy; I would love for AMD to be more competitive in this segment as well, but Threadripper is pretty pricey.) Let's see what the guys that do use VMs think (I don't).
    1 point
  34. I don't know much about those processors' relative performance, but I can still make a few remarks:
- Forget 3600MHz RAM; the Intel is only rated for 2933 and the AMD for 3200.
- The Intel provides far more PCIe lanes than the AMD part (48 vs 20), which means you can actually have 2x PCIe x16 on the Intel (and it leaves you room for expansion like an HBA card), but not on the AMD.
https://www.amd.com/en/products/cpu/amd-ryzen-9-5950x
https://ark.intel.com/content/www/us/en/ark/products/198017/intel-core-i9-10980xe-extreme-edition-processor-24-75m-cache-3-00-ghz.html
This certainly does not cover everything needed to make your choice, but it should be considered.
    1 point
  35. Thanks to your comment I found out the user and password as well, lol. I cannot find it on the front page or the installation page, and I'm not sure how to edit it either.
    1 point
  36. B550 motherboards are not good for IOMMU groups, and I don't know if a BIOS update corrects this. Choose an X570, it's better.
    1 point
  37. I asked this question myself and found out that this message is created by the underlying Slackware: https://www.linuxquestions.org/questions/slackware-14/locking-all-cpu's-to-their-maximum-frequency-4175607506/
So I checked my file as follows:
cat /etc/rc.d/rc.cpufreq
#!/bin/sh
#
# rc.cpufreq: Settings for CPU frequency and voltage scaling in the kernel.
# For more information, see the kernel documentation in
# /usr/src/linux/Documentation/cpu-freq/
# Default CPU scaling governor to try. Some possible choices are:
# performance:  The CPUfreq governor "performance" sets the CPU statically
#               to the highest frequency within the borders of scaling_min_freq
#               and scaling_max_freq.
# powersave:    The CPUfreq governor "powersave" sets the CPU statically to the
#               lowest frequency within the borders of scaling_min_freq and
#               scaling_max_freq.
# userspace:    The CPUfreq governor "userspace" allows the user, or any
#               userspace program running with UID "root", to set the CPU to a
#               specific frequency by making a sysfs file "scaling_setspeed"
#               available in the CPU-device directory.
# ondemand:     The CPUfreq governor "ondemand" sets the CPU depending on the
#               current usage.
# conservative: The CPUfreq governor "conservative", much like the "ondemand"
#               governor, sets the CPU depending on the current usage. It
#               differs in behaviour in that it gracefully increases and
#               decreases the CPU speed rather than jumping to max speed the
#               moment there is any load on the CPU.
# schedutil:    The CPUfreq governor "schedutil" aims at better integration with
#               the Linux kernel scheduler. Load estimation is achieved through
#               the scheduler's Per-Entity Load Tracking (PELT) mechanism, which
#               also provides information about the recent load.
SCALING_GOVERNOR=ondemand
# For CPUs using intel_pstate, always use the performance governor. This also
# provides power savings on Intel processors while avoiding the ramp-up lag
# present when using the powersave governor (which is the default if ondemand
# is requested on these machines):
if [ "$(cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver 2> /dev/null)" = "intel_pstate" ]; then
  SCALING_GOVERNOR="performance"
fi
# If rc.cpufreq is given an option, use it for the CPU scaling governor instead:
if [ ! -z "$1" -a "$1" != "start" ]; then
  SCALING_GOVERNOR=$1
fi
# To force a particular option without having to edit this file, uncomment the
# line in /etc/default/cpufreq and edit it to select the desired option:
if [ -r /etc/default/cpufreq ]; then
  . /etc/default/cpufreq
fi
# If you need to load a specific CPUFreq driver, load it here. Most likely you don't.
#/sbin/modprobe acpi-cpufreq
# Attempt to apply the CPU scaling governor setting. This may or may not
# actually override the default value depending on if the choice is supported
# by the architecture, processor, or underlying CPUFreq driver. For example,
# processors that use the Intel P-state driver will only be able to set
# performance or powersave here.
echo $SCALING_GOVERNOR | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor 1> /dev/null 2> /dev/null
# Report what CPU scaling governor is in use after applying the setting:
if [ -r /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor ]; then
  echo "Enabled CPU frequency scaling governor: $(cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor)"
fi
As you can see the default is "ondemand", but then follows this condition, which sets "performance" as the default:
# For CPUs using intel_pstate, always use the performance governor. This also
# provides power savings on Intel processors while avoiding the ramp-up lag
# present when using the powersave governor
By that explanation it's not recommended to set anything other than "performance" for an Intel CPU. I had problems with "powersave" in the past, but that was with an Intel Atom CPU (now I'm using an i3): https://forums.plex.tv/t/cpu-scaling-governor-powersave-causes-massive-buffering/604018
I never experienced similar problems with "ondemand", so I wanted that governor for the i3, too. Nevertheless I checked the active governor as follows:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
performance
OK, "performance" has been set as expected. Let's try to overwrite it:
/etc/rc.d/rc.cpufreq ondemand
Enabled CPU frequency scaling governor: performance
Hmm, that does not work. Seems to be this condition:
# ... For example,
# processors that use the Intel P-state driver will only be able to set
# performance or powersave here.
Let's try it out:
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
performance
performance
performance
performance
This means our CPU cores support only "performance"?!
EDIT: Yes, recent Intel CPUs only support performance or powersave (with massive lags): https://wiki.archlinux.org/index.php/CPU_frequency_scaling#Scaling_governors
And the most important part: the performance governor should give better power saving functionality than the old ondemand governor.
For me it does not really look like proper p-state handling, as my CPU's maximum is 3.6GHz and with really low load it never reduces the frequency. So let's try to find out what goes wrong here. First, the p-state values:
ls /sys/devices/system/cpu/intel_pstate/*
/sys/devices/system/cpu/intel_pstate/hwp_dynamic_boost
/sys/devices/system/cpu/intel_pstate/num_pstates
/sys/devices/system/cpu/intel_pstate/max_perf_pct
/sys/devices/system/cpu/intel_pstate/status
/sys/devices/system/cpu/intel_pstate/min_perf_pct
/sys/devices/system/cpu/intel_pstate/turbo_pct
/sys/devices/system/cpu/intel_pstate/no_turbo
root@Thoth:~# cat /sys/devices/system/cpu/intel_pstate/hwp_dynamic_boost
0
root@Thoth:~# cat /sys/devices/system/cpu/intel_pstate/max_perf_pct
100
root@Thoth:~# cat /sys/devices/system/cpu/intel_pstate/min_perf_pct
22
root@Thoth:~# cat /sys/devices/system/cpu/intel_pstate/no_turbo
1
root@Thoth:~# cat /sys/devices/system/cpu/intel_pstate/num_pstates
29
root@Thoth:~# cat /sys/devices/system/cpu/intel_pstate/status
active
root@Thoth:~# cat /sys/devices/system/cpu/intel_pstate/turbo_pct
0
Explanations can be found here: https://www.kernel.org/doc/html/v4.12/admin-guide/pm/intel_pstate.html#user-space-interface-in-sysfs
For example, num_pstates returns the number of p-states supported by the CPU; as we can see, there are 29 for mine. And we know that the status is "active", which means changing the p-states should work, but we do not know how: https://www.kernel.org/doc/html/v4.12/admin-guide/pm/intel_pstate.html#active-mode
Do we have Active with HWP or not? I found this sentence: https://01.org/linuxgraphics/gfx-docs/drm/admin-guide/pm/intel_pstate.html#user-space-interface-in-sysfs
Our value is zero. What could that mean? Another hint that we are using Active with HWP is this explanation: https://www.kernel.org/doc/html/v4.12/admin-guide/pm/intel_pstate.html#energy-vs-performance-hints
Let's check if they are present: both are present, so we can be sure. We use Active Mode with HWP: https://www.kernel.org/doc/html/v4.12/admin-guide/pm/intel_pstate.html#active-mode-with-hwp
I disabled all my writes to the Unraid server and stopped all disks. In this state the server consumes 24W. Performance still does not downclock:
watch -n1 "cat /proc/cpuinfo | grep \"^[c]pu MHz\""
cpu MHz : 3600.114
cpu MHz : 3600.910
cpu MHz : 3601.040
cpu MHz : 3600.269
Does HWP + performance mean it never changes the p-state? Which algorithm is used, and where can I find it or influence it? I tried it with powersave:
/etc/rc.d/rc.cpufreq powersave
Enabled CPU frequency scaling governor: powersave
I don't know why, but all disks started up with several seconds' delay and very small writes (1.4 kB/s) were done. I waited one minute and spun them down again. The power consumption stayed at 24W. The frequency is only a little bit lower:
watch -n10 "cat /proc/cpuinfo | grep \"^[c]pu MHz\""
cpu MHz : 3263.100
cpu MHz : 3021.631
cpu MHz : 3252.913
cpu MHz : 2819.033
I checked the available HWP profiles and which one is used:
cat /sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference
balance_performance
cat /sys/devices/system/cpu/cpu0/cpufreq/energy_performance_available_preferences
default performance balance_performance balance_power power
I did not find any documentation about these variables, only this answer to the same question: https://superuser.com/a/1449813/129262
So let's try them out:
echo "power" > /sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference
echo "power" > /sys/devices/system/cpu/cpu1/cpufreq/energy_performance_preference
echo "power" > /sys/devices/system/cpu/cpu2/cpufreq/energy_performance_preference
echo "power" > /sys/devices/system/cpu/cpu3/cpufreq/energy_performance_preference
cat /sys/devices/system/cpu/cpu*/cpufreq/energy_performance_preference
power
power
power
power
I tested "power" and "balance_power"; no difference in power consumption. If I set "/etc/rc.d/rc.cpufreq" to "performance", the "/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference" becomes "performance". If I set it to "powersave", the preference becomes "balance_performance".
Conclusion: although "performance" is set, there is no room to save more energy in an idle state. "powersave" seems to only influence how long a core stays in a slower state, which could cause latency issues, but the lowest p-state is the same for all profiles. This is completely different from my Atom CPU, which directly showed a lower energy consumption after changing the profile to "ondemand". So it depends on the CPU used. But in the end "ondemand" is set if it is available, so no further optimization seems to be needed. The next thing we could check are the c-states; for this I used "powertop" from the Nerd Pack. It seems we already have the best result, as c-state C10 is used most of the time:
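Since the same echo has to be repeated for every core, a loop over the sysfs glob does the job in one go; a minimal sketch equivalent to the four echo lines above:
    # apply the "power" energy/performance preference to every core that exposes the knob
    for f in /sys/devices/system/cpu/cpu*/cpufreq/energy_performance_preference; do
        echo power > "$f"
    done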
    1 point
  38. I know that the ASRock Taichi X570 has a pretty good rep: https://www.asrock.com/mb/amd/x570 taichi/ I'm running the Phantom Gaming X with the additional 2.5Gb Ethernet, but it's basically the same board. I run a bunch of virtual machines, Win/Mac/Linux. This board is great because it has buckets of onboard hardware:
- 8 x SATA ports
- 3 x NVMe
- 3 x PCIe x16
If you plan on maxing it out with add-on cards and NVMe drives then you will need to pay attention to how the PCIe lanes are cut up. From memory, the 3rd PCIe slot shares an x4 lane with the 3rd NVMe slot. This is a limitation of the chipset, not the motherboard; still, it gives you the most flexibility of any motherboard on the market. I needed this amount of capacity because I run a primary Mac and a Win VM, each with a dedicated NVMe drive and GPU (5700XT and RX550), plus the Mac needs a dedicated WiFi/Bluetooth PCIe card and an additional USB PCIe card (the onboard USB/WiFi/Bluetooth is passed to the Win VM). So I need all the PCIe slots. One issue I hit was that my AMD 5700XT GPU does not play nice with the PCIe bus, and on release (shutting down the Win VM) it would often hardlock the machine. It's well known that AMD GPUs and virtualisation don't get on. I have taken it out for now and plan on trying a 2070 Super when I have the time and money. Here is a thread on how to set it up.
    1 point
  39. Here they are with F10 bios and above settings. The Fresco Logic FL1100 is a PCIe card.
    1 point
  40. Your BIOS is a few versions older than what is currently available on the MSI website. If you update, load defaults and adjust for what you need/had, then verify Cool'n'Quiet is enabled, HPET is enabled and ACPI is enabled. If you overclock or adjust voltages, go back to stock until you verify frequency scaling is working, or you give up. Boot into unRAID and grab another diagnostics.
    1 point