
HumanTechDesign

Everything posted by HumanTechDesign

  1. Can you please check what your BIOS GPU configuration is set to now? I have the mATX version of that board (W680M-ACE SE, which does not have an external IPMI card, but I assume internally they are wired up the same). What bothers me the most: I had it working the other day, but I had to reset the CMOS because of a soft brick (caused by trying to start the "secure erase" functionality built into the board). Since then, I have been trying a lot of different configurations, but somehow it always comes down to the same iKVM-vs-iGPU situation. This BIOS has so many bugs. Sometimes my board restarts three times on its own until it finally decides it is okay to boot. Random "Fb" boot codes (apparently an M.2 SSD error message), only for everything to be fine on the next boot. I'm a private user, but if I were a sysadmin who had to use this in a professional context(!), I would return it to the distributor.
  2. Perfect! That's great to hear. Thank you very much (also for the quick response). Looking forward to full support!
  3. Sorry to bother again, but I wanted to bump this: @sir_storealot appears to have the same issue. Any workarounds or suggestions apart from excluding the containers in question?
  4. Thank you so much! I was trying to find out why my userscript was failing with the exact same error. I thought the occ command had moved again, but I just could not pinpoint it. Without your post I wouldn't even have noticed that the whole cloud was essentially down. Downgrading fixed it for me. For everyone wondering how to downgrade MariaDB: just change the Repository in the MariaDB container to something like mariadb:11.2.3 instead of plain mariadb or mariadb:latest (see the sketch below for the command-line equivalent).
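For anyone who prefers to test this outside the Unraid template, here is a minimal command-line sketch of the same idea (the container name, password and appdata path are placeholders, not taken from the post):

    # Pull and run a pinned MariaDB release instead of the floating "latest" tag
    docker pull mariadb:11.2.3
    docker run -d --name mariadb \
      -e MARIADB_ROOT_PASSWORD=changeme \
      -v /mnt/user/appdata/mariadb:/var/lib/mysql \
      mariadb:11.2.3

Pinning the tag simply prevents the container manager from silently pulling a newer major version on the next update.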
  5. Thank you for your report! My board is on its way. After reading that boards such as the currently unbelievably cheap MC12-LE0 had exactly this problem with enabling the iGPU, because everything gets redirected to the IPMI/BMC, I got a bit of a knot in my stomach when I read your first post. Are you still on the 4.24 BIOS? There is also a 4.26 BETA BIOS on the website. Have you tested that yet, or can you say anything about it?
  6. Thank you for your input. As you can see in the docker compose mappings, these volumes do in fact live on the exact same /mnt/user path, just like any "vanilla" Unraid docker template. The only difference is that docker compose allows reusing the mapping by declaring named volumes. You are technically correct that it is not strictly necessary to do it this way (I could mount the paths in the different containers separately), but this way it is just cleaner and, apart from the appdata backup plugin issue @sir_storealot described, it has never caused any problems.
  7. Thank you for the rework of the appdata plugin. I seem to have problems with named bind volumes from the docker compose plugin. First of all, this is the error from the debug log:

[17.02.2024 03:06:28][❌][paperless_ngx_private-paperless-ngx-broker-1] 'paperless_ngx_private_redisdata' does NOT exist! Please check your mappings! Skipping it for now.
[17.02.2024 03:06:39][ℹ️][paperless_ngx_private-paperless-ngx-broker-1] Should NOT backup external volumes, sanitizing them...
[17.02.2024 03:06:39][⚠️][paperless_ngx_private-paperless-ngx-broker-1] paperless_ngx_private-paperless-ngx-broker-1 does not have any volume to back up! Skipping. Please consider ignoring this container.
[17.02.2024 03:06:39][❌][paperless_ngx_private-paperless-ngx-db-1] 'paperless_ngx_private_pgdata' does NOT exist! Please check your mappings! Skipping it for now.
[17.02.2024 03:06:50][ℹ️][paperless_ngx_private-paperless-ngx-db-1] Should NOT backup external volumes, sanitizing them...
[17.02.2024 03:06:50][⚠️][paperless_ngx_private-paperless-ngx-db-1] paperless_ngx_private-paperless-ngx-db-1 does not have any volume to back up! Skipping. Please consider ignoring this container.
[17.02.2024 03:06:50][ℹ️][paperless_ngx_private-paperless-ngx-gotenberg-1] Should NOT backup external volumes, sanitizing them...
[17.02.2024 03:06:50][⚠️][paperless_ngx_private-paperless-ngx-gotenberg-1] paperless_ngx_private-paperless-ngx-gotenberg-1 does not have any volume to back up! Skipping. Please consider ignoring this container.
[17.02.2024 03:06:50][ℹ️][paperless_ngx_private-paperless-ngx-tika-1] Should NOT backup external volumes, sanitizing them...
[17.02.2024 03:06:50][⚠️][paperless_ngx_private-paperless-ngx-tika-1] paperless_ngx_private-paperless-ngx-tika-1 does not have any volume to back up! Skipping. Please consider ignoring this container.
[17.02.2024 03:06:50][❌][paperless_ngx_private-paperless-ngx-webserver-1] 'paperless_ngx_private_data' does NOT exist! Please check your mappings! Skipping it for now.
[17.02.2024 03:07:00][❌][paperless_ngx_private-paperless-ngx-webserver-1] 'paperless_ngx_private_media' does NOT exist! Please check your mappings! Skipping it for now.
[17.02.2024 03:07:11][ℹ️][paperless_ngx_private-paperless-ngx-webserver-1] Should NOT backup external volumes, sanitizing them...
[17.02.2024 03:07:11][⚠️][paperless_ngx_private-paperless-ngx-webserver-1] paperless_ngx_private-paperless-ngx-webserver-1 does not have any volume to back up! Skipping. Please consider ignoring this container.

The volumes mentioned in the error log are named volumes in the docker compose template file. This is necessary because they are used by different containers in the template:

volumes:
  data:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /mnt/user/appdata/paperless-ngx-private/data/
  media:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /mnt/user/share/userdata/paperlessdata/media/
  pgdata:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /mnt/user/appdata/paperless-ngx-private/pgdata/
  redisdata:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /mnt/user/appdata/paperless-ngx-private/redisdata/

These are referenced by the containers in the template like so (only posting one example, the rest is the same):

paperless-ngx-broker:
  image: docker.io/library/redis:7
  restart: unless-stopped
  volumes:
    - redisdata:/data
  networks:
    - backend

When running the docker compose template, the Unraid docker overview shows them like this: I therefore assume the appdata plugin tries to pick up the mapping that Unraid provides in the overview instead of the mapping defined in docker compose (see also the inspect sketch below). To avoid the errors, I will exclude the containers from backups for now. I was wondering: is there a stable and better way in the plugin to deal with docker compose named volumes? Thank you in advance!
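As a side note, one way to see what Docker itself records for such a named bind volume (volume name taken from the log above; this is only a debugging sketch, not a statement about how the backup plugin works):

    # The "device" option holds the real /mnt/user path, while the volume's
    # Mountpoint lives under /var/lib/docker/volumes/
    docker volume inspect paperless_ngx_private_redisdata
    docker volume inspect -f '{{ index .Options "device" }}' paperless_ngx_private_redisdata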
  8. Interesting topic! With the current setup, one can only pass the whole iGPU to one VM, correct? With the SR-IOV/GVT-g plugins for Intel on Unraid, one can still use the iGPU for containers (jellyfin etc.) as well as (multiple) VMs. That would not be possible for this AMD setup, correct? If possible: Could someone with a working setup test this?
  9. I'm totally in line with your arguments! What I actually meant by my suggestion is to highlight the fact that even with a (generally probably supported) E-2XXX platform, you might run into problems with the specific E-23XX line. I did not mean that all E-21XX and E-22XX chips will be running fine (for the reasons you pointed out), but rather that users of the E-23XX line should be especially cautious. Otherwise, they might check this thread and the first post and think they are fine because they have an E-2XXX chip, unaware that E-2XXX can mean Coffee Lake (Ref.) or Rocket Lake. In any case, thank you for the insights. I have not completely decided yet on buying the MB or not. If I do (with an E-2XXX chip), I will report back.
  10. I found that page as well and had the screenshot from it in the section that is now edited. However, even though gvt-g is listed there for 11th Gen/Rocket Lake, I'm pretty sure by now (unless someone else in this thread has it running successfully) that E-23XX will not have support for gvt-g or sr-iov (like @alturismo said). I understand that. However (maybe to avoid future confusion), an explicit note that E-23XX is probably unsupported (while E-21XX and E-22XX should be fine) could help. What do you think? (Here is the official gvt Rocket Lake discussion - the issue remains open and there does not seem to be any plan to actually enable gvt-g OR sr-iov: https://github.com/intel/gvt-linux/issues/190 )
  11. Thank you for this plugin. Could it be that there is a minor error in the support list on the first page? It states that E-2XXX platforms are supported. However, it seems that the E-23XX line of CPUs is based on Rocket Lake-E (while the E-21XX and E-22XX lines seem to be based on Coffee Lake (Ref.)). [EDIT]: The UHD 750 in E-23XX chips is probably Intel Xe AND Rocket Lake, so I assume compatibility is missing. Nevertheless, I was wondering if there are any reports on the use of gvt-g (and specifically this plugin) or sr-iov for the Xeon E-23XX chips on a C256 chipset. I am considering pulling the trigger on a sweet deal for a mainboard, but only if there is multi-VM support (either via gvt-g or sr-iov).
  12. This issue (disks not spinning up/down) has already been discussed earlier this year even without the plugin (even pre-6.12.x); see for example this earlier thread. I am still on 6.11.5 without the plugin installed and also experience the issue that the disks spin up automatically if I spin them down manually and don't spin down at all based on time. Unfortunately, in April we did not find the culprit (it was NOT the Main tab issue), so it is probably related to the ZFS implementation in Unraid itself or a specific hardware configuration.
  13. Then it's really strange - as I said, I currently don't have the Master plugin installed and still see the same behavior. Also, this thread from March 2021 (German only, unfortunately) describes the exact same behavior (unfortunately with no solution):
  14. To be honest, I actually don't think it's the Master plugin (but rather the current ZFS implementation in Unraid). It could also be the ZFS companion plugin, if you have that installed. I had ZFS Master installed but uninstalled it due to the snapshot loading times (see the earlier discussion in the thread). If I try to spin down ZFS disks in the UD section of Main, I ALWAYS see the spin-up with the SMART message. Therefore, I really would not assume that it's only ZFS Master.
  15. Thank you @Iker once more for this plugin! I have a small issue to report when it comes to the loading times of the Main page: I recently attached a ZFS-formatted external USB HDD as a backup pool. Because this is a backup device, quite a lot more snapshots are kept there than in my main pool(s) (~12.5k on the HDD vs. ~1.2k on the largest main pool). Since I added this backup pool, loading times of the Main page have gone up dramatically (up to around a minute or even two). Not surprisingly, the Snapshot Admin UI also takes ages to load. The `system` dataset (with all of the docker stuff) is already excluded via pattern, and I noticed this change after hooking up the backup HDD and sending my snapshots over. I was therefore wondering if there is any way for you to optimize (e.g. maybe "lazy load") the snapshots when loading the UI, comparable to the improvements you made to the general Main UI. Thanks already for your hard work!
  16. I have a problem setting up the following: Instead of GUAs (globally routable addresses), I want to use ULAs (local addresses from the fc00::/7 range as per RFC4193). I use an OPNsense with the docker containers on a separate VLAN (which creates a custom br0 in the format br0.<VLAN>). For this VLAN, the ULA prefix has been set up in the firewall, advertising a custom IPv6 prefix (from the ULA range). I now want to use that prefix as the base for the IPv6 addresses generated for the containers using SLAAC. I have therefore set up the VLAN to use v4+v6, with v6 mode set to automatic. The network settings and docker manager correctly pick up the ULA range and gateway, and the docker settings offer this as an IPv6 configuration for that VLAN (so far so good). However, when I now go to edit the container, the IPv6 range (which is correctly recognized by the settings page) is not shown next to the "Fixed IP address" entry field (different from the screenshot in the first post, for example). The line --sysctl net.ipv6.conf.all.disable_ipv6=0 is added as an additional parameter. When starting the container (it is the pihole container with IPv6 activated in the environment variables), my router picks it up correctly but only reports the GUA (2000:... range). Apparently, no ULA but only a GUA was generated using SLAAC. Is this a container issue or an Unraid issue? Has anybody successfully implemented ULAs for use by the docker service on Unraid? EDIT: Some screenshots as clarification. EDIT: Nevermind - it actually DOES work with this configuration! Somehow, the ULA was just not showing up in my firewall's NDP/MAC table. After pinging the container once more, it now picks it up correctly. For everyone looking for a solution to assign static local addresses via SLAAC to their containers on a separate VLAN, here is a quick walkthrough (see also the shell sketch below for verifying the result):
     • Generate a ULA prefix (preferably actually random, e.g. via one of the online tools).
     • Enter the prefix as an advertised route for the interface in your router (in OPNsense - and probably pfSense - this is done via a "Virtual IP").
     • Configure Unraid's networking with automatic IPv6 on the VLAN interface with the docker containers (see above) - preferably turn off privacy extensions to generate "truly stable" addresses - however, I believe this only affects the IPv6 address of the interface in Unraid, not of the underlying docker containers.
     • Activate IPv6 networking in the docker settings for that interface (this should already show the new ULA prefix from your router prefilled!).
     • Add the line --sysctl net.ipv6.conf.all.disable_ipv6=0 as an additional parameter to the container.
     • Add the line --mac-address xx:xx:xx:xx:xx:xx (with a MAC address of your choosing) as an additional parameter to the container (not sure if that is necessary, but I wasn't sure whether an update/restart of a container might generate a new MAC address).
     • Start the container.
     • Ping the container (or run something like ifconfig in the container) to find out the ULA of the container (it should start with something like fc... or fd... - NOT the fe... address).
     • Use that ULA as a stable IPv6 address for that container.
     • Depending on the container, you might need more configuration (e.g. environment variables or commands via the container console) to get full IPv6 support (e.g. in pihole you would need to activate it via environment variables), but this depends on the container.
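To verify which addresses a container actually received (instead of waiting for the router's NDP table to update), something along these lines can be run from the Unraid shell; the container name is just the example from above, and it assumes iproute2 is available inside the image:

    # ULAs start with fc.. or fd.., link-local addresses with fe80::
    docker exec pihole ip -6 addr show
    # Or ask Docker directly for the IPv6 address it assigned on that network
    docker inspect -f '{{ range .NetworkSettings.Networks }}{{ .GlobalIPv6Address }} {{ end }}' pihole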
  17. As I have never seen an answer to that, but I'm still looking into this issue: Bump.
  18. Great new updates, working flawlessly! Thank you once more! When it comes to encryption, I might have another feature request (although I'm not sure if that is something for your plugin or for the ZFS for Unraid plugin side): I have two encrypted datasets (encrypted via the CLI long before your plugin was around). In order to ensure actual at-rest protection for these datasets, they need to be unmounted AND the key unloaded (see here for example). Unfortunately, Unraid does not do this by default when shutting down (based on this comment but also general first-hand experience). Therefore I have a User Script that runs at array stop (containing the commands `zfs unmount tank/dataset` and `zfs unload-key -r -a`, plus the respective opposite commands for loading the key and mounting on start of the array; see the sketch below). However, I sometimes had the problem that the User Script did not execute completely (sometimes because the device was still busy) before the server shut down, resulting in a kind of unclean shutdown (auto-starting a scrub on the next boot). My workaround so far has been stopping the array manually (and waiting for the datasets to lock correctly) before shutting down. I was wondering if there is a way for your plugin to communicate with Unraid so that unmounting and locking at shutdown happens with a higher priority than the User Scripts plugin.
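A rough sketch of what such an "at array stop" User Script could look like, with a simple retry loop for the "device busy" case (the dataset name, retry count and sleep interval are placeholders):

    #!/bin/bash
    # Try a few times in case something still holds the dataset open
    for attempt in 1 2 3 4 5; do
        if zfs unmount tank/dataset; then
            zfs unload-key -r -a   # drop the keys so the data is protected at rest
            exit 0
        fi
        sleep 10                   # give open file handles time to close
    done
    echo "Could not unmount tank/dataset, key left loaded" >&2
    exit 1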
  19. Awesome, yes, this one worked. I was apparently overcomplicating things! Also, just a heads up: I was going to report the issue that datasets with spaces in them (e.g. when there is a VM called Windows 10) weren't picked up correctly. This now also seems to be solved, as it now correctly shows all datasets. So thank you very much once more! EDIT: Now that I'm here anyway, could I make two wishes? (I'm really not in a position to ask an open source developer, but maybe you could think about them.) 1.) Could you think about a feature to delete multiple snapshots at once? Maybe just by having multiple checkboxes or by being able to select multiple rows in the Snapshot Admin? 2.) Maybe add another confirmation dialog when deleting a snapshot? Right now you really need to make sure that you clicked the button on the right row.
  20. Hi, first of all thank you for this great plugin! Really useful to manage snapshots etc. Concerning the new change in pattern construction: I'm not that strong in pattern matching and therefore have problems adapting this to my setup. Could you maybe provide some basic examples for really generic use cases (e.g. excluding all datasets below a certain folder)? In my case, I have my docker system share on a dedicated SSD ZFS pool. Due to the docker-ZFS implementation, it generates a dataset for every layer of every image (see e.g. this explanation). With the previous regex, I just excluded all sub-datasets below the "system" share. I have now tried the basic variants (SSDPool is my mountpoint) /SSDPool/system/ , /SSDPool/system/%w+ and /SSDPool/system/(%w+) , but none of them matched. Is there a way to match the whole content of the system subdirectory?
  21. Going for a long shot here, but did you ever figure out what caused this? Running into the exact same issue here!
  22. Hey there, new Unraid user here. I'm currently planning out the further VM use of my server, so I had a look at the IOMMU groups of my board. It's an ASRock B550M Steel Legend (BIOS P2.30 (February 2022)) with a Ryzen 5 Pro 4650G (Renoir). When researching the board's capabilities, I stumbled upon this post (referencing this pastebin with IOMMU groups of this board). Unfortunately, I don't know which BIOS that listing is from, but generally the groups seem quite well separated:

IOMMU Group 0:
  00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
  00:01.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
  01:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Device [1022:43ee]
  01:00.1 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] Device [1022:43eb]
  01:00.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43e9]
  02:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43ea]
  02:06.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43ea]
  02:07.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43ea]
  02:08.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43ea]
  02:09.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43ea]
  03:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Baffin [Radeon RX 550 640SP / RX 560/560X] [1002:67ff] (rev ff)
  03:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Baffin HDMI/DP Audio [Radeon RX 550 640SP / RX 560/560X] [1002:aae0]
  06:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller [10ec:8125] (rev 05)
  07:00.0 Network controller [0280]: Intel Corporation Wi-Fi 6 AX200 [8086:2723] (rev 1a)
IOMMU Group 1:
  00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 10:
  00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0 [1022:1440]
  00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1 [1022:1441]
  00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2 [1022:1442]
  00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3 [1022:1443]
  00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4 [1022:1444]
  00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5 [1022:1445]
  00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6 [1022:1446]
  00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7 [1022:1447]
IOMMU Group 11:
  09:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function [1022:148a]
IOMMU Group 12:
  0a:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP [1022:1485]
IOMMU Group 13:
  0a:00.1 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP [1022:1486]
IOMMU Group 14:
  0a:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller [1022:149c]
IOMMU Group 15:
  0a:00.4 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller [1022:1487]
IOMMU Group 2:
  00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
  00:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
  08:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] [1002:67df] (rev e7)
  08:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere HDMI Audio [Radeon RX 470/480 / 570/580/590] [1002:aaf0]
IOMMU Group 3:
  00:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 4:
  00:05.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 5:
  00:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 6:
  00:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
IOMMU Group 7:
  00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 8:
  00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
IOMMU Group 9:
  00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 61)
  00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)

I now have my board here, and in the process of preparing it I updated the BIOS to Version P2.30 (February 2022). I forgot to have a look at the IOMMU groups before flashing, but right now Unraid lists the following devices for me (which looks way worse to me):

IOMMU group 0:
  [1022:1632] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Renoir PCIe Dummy Host Bridge
IOMMU group 1:
  [1022:1632] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Renoir PCIe Dummy Host Bridge
  [1022:1634] 00:02.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Renoir PCIe GPP Bridge
  [1022:1634] 00:02.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Renoir PCIe GPP Bridge
  [1022:43ee] 01:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] Device 43ee
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 001 Device 002: ID 046d:c52b Logitech, Inc. Unifying Receiver
    Bus 001 Device 003: ID 05e3:0610 Genesys Logic, Inc. 4-port hub
    Bus 001 Device 004: ID 26ce:01a2 ASRock LED Controller
    Bus 001 Device 005: ID 0781:556b SanDisk Corp. Cruzer Edge
    Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
  [1022:43eb] 01:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] Device 43eb
    [1:0:0:0] disk ATA Samsung SSD 870 1B6Q /dev/sdb 500GB
    [2:0:0:0] disk ATA Samsung SSD 870 1B6Q /dev/sdc 500GB
    [3:0:0:0] disk ATA ST18000NM000J-2T SN02 /dev/sdd 18.0TB
    [4:0:0:0] disk ATA ST18000NM000J-2T SN02 /dev/sde 18.0TB
  [1022:43e9] 01:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43e9
  [1022:43ea] 02:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea
  [10ec:8125] 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 05)
  [144d:a808] 04:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983
    [N:0:4:1] disk Samsung SSD 970 EVO Plus 1TB__1 /dev/nvme0n1 1.00TB
IOMMU group 2:
  [1022:1632] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Renoir PCIe Dummy Host Bridge
  [1022:1635] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Renoir Internal PCIe GPP Bridge to Bus
  [1002:1636] 05:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Renoir (rev d9)
  [1002:1637] 05:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Device 1637
  [1022:15df] 05:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 10h-1fh) Platform Security Processor
  [1022:1639] 05:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Renoir USB 3.1
    Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 003 Device 002: ID 174c:2074 ASMedia Technology Inc. ASM1074 High-Speed hub
    Bus 003 Device 003: ID 0463:ffff MGE UPS Systems UPS
    Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 004 Device 002: ID 174c:3074 ASMedia Technology Inc. ASM1074 SuperSpeed hub
  [1022:1639] 05:00.4 USB controller: Advanced Micro Devices, Inc. [AMD] Renoir USB 3.1
    Bus 005 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 006 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 006 Device 002: ID 090c:1000 Silicon Motion, Inc. - Taiwan (formerly Feiya Technology Corp.) Flash Drive
  [1022:15e3] 05:00.6 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 10h-1fh) HD Audio Controller
IOMMU group 3:
  [1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 51)
  [1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
IOMMU group 4:
  [1022:1448] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 0
  [1022:1449] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 1
  [1022:144a] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 2
  [1022:144b] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 3
  [1022:144c] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 4
  [1022:144d] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 5
  [1022:144e] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 6
  [1022:144f] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 7

I currently have no dGPU installed, but I was thinking about passing through the front USB IO hub (and possibly other things). I am of course aware of the possibilities to break up the groups via the different methods (the classic https://www.youtube.com/watch?v=qQiMMeVNw-o), but I'm just a little surprised to see such strong differences in grouping. As I'm rather new to the topic, I was wondering which factors actually influence the groupings. CPU? Kernel? Or basically only the chipset and the corresponding BIOS? Would you expect the BIOS update (as I said, unfortunately I did not check prior to flashing) to make the groupings so much worse, or did I just miss something in the configuration? (A way to dump the groups from the shell is sketched below.) Thanks in advance!
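For completeness, the grouping can also be dumped from any Linux shell (e.g. to compare before and after a BIOS flash) with the usual sysfs loop; nothing Unraid-specific here:

    #!/bin/bash
    # Print every IOMMU group and the PCI devices it contains
    for group in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${group##*/}:"
        for device in "$group"/devices/*; do
            echo -n "  "
            lspci -nns "${device##*/}"
        done
    done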
  23. Hey there, Unraid newcomer here. I'm currently in the process of migrating from a Syno over to Unraid. One of the reasons was the plan to share (and receive) large files with users outside my home network. With the Syno box I did not expose any services directly to the outside but rather had a VPN running on my (edge, bare-metal) pfSense, but I was thinking that I might now find a solution with the shiny new server. With Unraid I get the feeling that it is almost "common practice" to make containers (such as Nextcloud, Plex etc.) available through a proxy. However, in my eyes, this does not solve the problem of eventually being subject to brute-force attempts (or worse). I know of the approach of using an Argo/CF tunnel to avoid having open ports to the outside. Nevertheless, the actual line of defense is still the HTTP auth in the proxy and/or the login screen of the application. In my eyes, this still seems like a problem if an exploit within those applications is discovered. Therefore I was thinking about segregating private and public services by actually duplicating the dockers (i.e. having a "public" Nextcloud exposed to the Internet and a "private" Nextcloud only available locally) and also having separate shares that they can access (see the sketch below for the rough idea). However, as I'm not that deep into the docker space, I don't know if this is best practice, if it creates a big (management and processing) overhead to have multiple instances of the same container, and if it actually brings any security advantage. Therefore, I'm looking for answers to the following questions:
1.) Is there any security benefit to running the same service in two different container instances and subnets? I especially want to prevent "container breakout" (between containers, but also to the host). (I already know of this thread, but I just want to know if there are other vectors such as Dirty Pipe and its possible threats, or the general "plugins as root" problem.)
2.) Is there a better way to make certain services available to the public while at the same time making sure that the "most precious" private data stays safe from things like ransomware, hackers gaining root within a container, etc.?
If there is no good answer to these questions I might stick with my stance of just running everything locally behind the firewall, but maybe I'm just missing something, or you have good arguments why the points I listed are not actually problems. Thanks in advance!!!
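Purely as an illustration of the segregation idea (names, subnets and share paths are made up, and this is not a claim about Unraid best practice):

    # Two isolated user-defined bridge networks; containers on different
    # bridges cannot reach each other directly
    docker network create --subnet 172.30.10.0/24 public_net
    docker network create --subnet 172.30.20.0/24 private_net

    # "Public" instance, mounting only the share meant for external users
    docker run -d --name nextcloud-public --network public_net \
      -v /mnt/user/public_share:/var/www/html/data nextcloud

    # "Private" instance with the sensitive data, never exposed via the proxy
    docker run -d --name nextcloud-private --network private_net \
      -v /mnt/user/private_share:/var/www/html/data nextcloud

Whether that actually reduces the attack surface (versus just adding management overhead) is exactly the question above.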