Everything posted by ZooMass

  1. Hi, my syslog gets spammed and is 99% filled within minutes of booting, with millions of lines like this:

Jan 18 23:54:53 server kernel: vfio-pci 0000:09:00.0: BAR 3: can't reserve [mem 0xf0000000-0xf1ffffff 64bit pref]
Jan 18 23:54:53 server kernel: vfio-pci 0000:09:00.0: BAR 3: can't reserve [mem 0xf0000000-0xf1ffffff 64bit pref]
Jan 18 23:54:53 server kernel: vfio-pci 0000:09:00.0: BAR 3: can't reserve [mem 0xf0000000-0xf1ffffff 64bit pref]

     I am stubbing my graphics card with this plugin on unRAID 6.8.3. The address 09:00.0 is the device "VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1)". HVM and IOMMU are enabled, and the PCIe ACS override is disabled. The graphics card passthrough (with a dumped vbios ROM) works in a VM, but only at a fixed 800x600 resolution (Nvidia drivers are installed; the Windows VM reports driver error code 43), and the VM logs say:

2021-01-19T21:57:24.002296Z qemu-system-x86_64: -device vfio-pci,host=0000:09:00.0,id=hostdev0,bus=pci.0,addr=0x5,romfile=/mnt/disk5/isos/vbios/EVGA_GeForce_GTX_1070.vbios: Failed to mmap 0000:09:00.0 BAR 3. Performance may be slow

     Has anybody seen this before? I can't find anything like it on the forum.

     EDIT: Found some more info in another post: booting the server without the HDMI cable plugged in removed the spamming line.
     However, after plugging the HDMI back in and booting the VM, the VM logs repeat lines like:

2021-01-19T22:17:27.637837Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x101afe, 0x0,1) failed: Device or resource busy
2021-01-19T22:17:27.637849Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x101aff, 0x0,1) failed: Device or resource busy
2021-01-19T22:17:27.648663Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x4810, 0x1fef8c01,8) failed: Device or resource busy
2021-01-19T22:17:27.648690Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x4810, 0x1fef8c01,8) failed: Device or resource busy
2021-01-19T22:17:27.648784Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x102000, 0xabcdabcd,4) failed: Device or resource busy
2021-01-19T22:17:27.648798Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x102004, 0xabcdabcd,4) failed: Device or resource busy

     Windows Device Manager still reports driver errors, and there are console-like artifacts horizontally across the screen, including a blinking cursor, on top of Windows. It seems like the unRAID console and the Windows VM (or is it the VFIO stubbing?) are fighting over the GPU. I have yet to try the recommendation in the above linked post to unbind the console at boot with the go script.
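     For reference, the usual shape of that fix is to release the host console's hold on the GPU before the VM starts, by unbinding the virtual terminal consoles and (on UEFI boots) the EFI framebuffer. This is a sketch of the common VFIO recipe, not something specific to this plugin; which vtcon is the framebuffer console varies per machine, so check /sys/class/vtconsole/vtcon*/name on your own box first:

```shell
#!/bin/bash
# Sketch: release the host console's hold on the passthrough GPU.
# Assumed to run as root at boot, e.g. appended to /boot/config/go on unRAID.

# Unbind the kernel's virtual terminal consoles from the framebuffer
# (vtcon numbering is machine-specific; verify with
#  cat /sys/class/vtconsole/vtcon*/name before relying on this).
echo 0 > /sys/class/vtconsole/vtcon0/bind
echo 0 > /sys/class/vtconsole/vtcon1/bind 2>/dev/null

# If the server booted via UEFI, also detach the EFI framebuffer driver.
echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind 2>/dev/null
```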
  2. Bumping this post because I am dealing with the same issue. I have the same four containers with these missing-template warnings, pointing to the same A75G templates, for which Apply Fix shows the same error. I have the following templates on my USB:

$ ls -lhAF /boot/config/plugins/dockerMan/templates/templates/ | grep jitsi
-rw------- 1 root root 4.3K Apr 25  2020 jitsi-jicofo.xml
-rw------- 1 root root 4.0K Apr 25  2020 jitsi-jvb.xml
-rw------- 1 root root  13K Apr 25  2020 jitsi-prosody.xml
-rw------- 1 root root 7.2K Apr 25  2020 jitsi-web.xml
$ ls -lhAF /boot/config/plugins/dockerMan/templates-user/ | grep jitsi
-rw------- 1 root root  4066 Nov 10 10:36 my-jitsi_bridge.xml
-rw------- 1 root root  4336 Nov 10 10:37 my-jitsi_focus.xml
-rw------- 1 root root  7276 Nov 10 10:09 my-jitsi_web.xml
-rw------- 1 root root 12837 Nov 10 10:35 my-jitsi_xmpp.xml

     I renamed my containers according to the filenames in the templates-user folder.
  3. My question is very similar to this one. I have an arch-rtorrentvpn container (VPN disabled) using the network stack of a dedicated arch-privoxyvpn container via the --net=container:vpn parameter. I am trying to set up port forwarding on the vpn container for rtorrent. The arch-rtorrentvpn container automatically acquires the forwarded port when the PIA endpoint being used supports it. I am aware of PIA's next-gen upgrades disabling port forwarding, and I am primarily using their Israel, Romania, and CA Montreal servers. The arch-privoxyvpn container connects to those endpoints successfully, but it doesn't do the same automatic port forwarding that the arch-rtorrentvpn and arch-delugevpn containers do. Is there a setting to force this? I assume the container supports it, since the binhex containers share the same startup procedure. Manually creating a STRICT_PORT_FORWARD variable in arch-privoxyvpn (like in the other two containers) has no effect. Even though I am using PIA, there is a log line that says:

2020-09-16 15:23:54,195 DEBG 'start-script' stdout output: [info] Application does not require port forwarding or VPN provider is != pia, skipping incoming port assignment

     Is using the ADDITIONAL_PORTS variable equivalent to just adding a new port to the template? Is the vpn_options variable just extra parameters for the /usr/bin/openvpn command?
  4. +1 demand unit. Help us SpaceInvaderOne, you are our only hope (not tagging you because I'm sure you're already annoyed with the last three @'s in this topic).
  5. Having the same problem accessing the web UI. I am using the manually created Docker network ("docker network create", then joining with network container:vpn) rather than the "--net=container:vpn" extra parameter, on unRAID 6.8.3 with Docker version 19.03.5.
  6. I have been experiencing the same issue with Jackett and LazyLibrarian. There has been some discussion of this web UI issue over in the binhex-privoxyvpn thread (I lay out my details there). For the record, a lot of people are using that container instead of binhex-delugevpn as a dedicated VPN container. Any ideas or advice would be useful!
  7. Thank you for the quick response! My setup looks essentially the same as yours, with the VPN container named simply vpn, and unfortunately I still cannot access the web UI, just a 404. One thing I tried was changing the network from a custom public Docker network I use (to isolate from non-public-facing containers) to simply the bridge network like yours. The client container still receives the VPN IP, but I still can't access the web UI. I also tried disabling my adblocker; it should have no effect, and in fact it does not. The container is named jackettvpn because I modified my existing container, but that container's VPN is disabled.
  8. Thank you for these very clear instructions! I was just looking for something like this after hitting my VPN device license limit, and SpaceInvader One released this timely video. Like a lot of you, I wanted to use a dedicated container instead of binhex-delugevpn, and binhex-privoxyvpn is perfect for the job. However, I'm unable to access the client container's web UI. I've now tested with linuxserver/lazylibrarian (to hide libgen direct downloads) and linuxserver/jackett (migrating from dyonr/jackettvpn, but I also tried with a clean image). I'm on unRAID 6.8.3 and I've tried both the "docker network create" approach and the "--net=container:vpn" extra parameter. (Also, for the record, "docker run" complains when you set a custom container network in the dropdown and still have translated ports, so be sure to remove the ports at the same time you change the network.) I've added the client containers' ports (in my two test containers, 5299 and 9117 respectively) to the binhex-privoxyvpn container named vpn, restarted vpn, and rebuilt and restarted the client containers. I still can't reach the container web UIs on [host IP]:5299 or [host IP]:9117. In the client containers I can curl ifconfig.io and I receive my VPN IP, so the container networking seems to work fine; the client web UI seems to be the only issue. I've seen a couple of people in the comments on SpaceInvader One's video report the same issue. Has anyone else experienced this or fixed it? Would love to have this setup work out!
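     For anyone comparing notes, the overall shape of the setup I'm describing looks like this (a sketch only; container names and ports are from my test setup, and the VPN credential variables are omitted):

```shell
# Sketch of the dedicated-VPN-container pattern (names/ports are examples).

# 1) The VPN container owns the network stack and publishes the CLIENT apps'
#    web UI ports on their behalf (9117 = jackett, 5299 = lazylibrarian).
#    VPN credentials and related -e variables are omitted here.
docker run -d --name=vpn -p 9117:9117 -p 5299:5299 binhex/arch-privoxyvpn

# 2) Each client container joins the VPN container's network stack instead of
#    getting its own. It must NOT publish ports of its own; docker run rejects
#    -p mappings when the network mode is container:<name>.
docker run -d --name=jackett --net=container:vpn linuxserver/jackett
docker run -d --name=lazylibrarian --net=container:vpn linuxserver/lazylibrarian
```

     With that wiring, traffic to [host IP]:9117 lands on the vpn container, which shares its stack with jackett, so the client app answers on its usual port.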
  9. I'm having trouble using the "Tiered FFMPEG NVENC settings depending on resolution" plugin with ID Tdarr_Plugin_d5d3_iiDrakeii_FFMPEG_NVENC_Tiered_MKV. It says it can't find my GPU.

     Command:

/home/Tdarr/Tdarr/bundle/programs/server/assets/app/ffmpeg/ffmpeg42/ffmpeg -c:v h264_cuvid -i '/home/Tdarr/Media/Television/Stranger Things/Season 03/Stranger Things - S03E01 - Chapter One- Suzie, Do You Copy [HDTV-1080p].mkv' -map 0 -dn -c:v hevc_nvenc -pix_fmt p010le -rc:v vbr_hq -qmin 0 -cq:V 31 -b:v 2500k -maxrate:v 5000k -preset slow -rc-lookahead 32 -spatial_aq:v 1 -aq-strength:v 8 -a53cc 0 -c:a copy -c:s copy '/home/Tdarr/cache/Stranger Things - S03E01 - Chapter One- Suzie, Do You Copy [HDTV-1080p]-TdarrCacheFile-p1cwX-Dg.mkv'

ffmpeg version N-95955-g12bbfc4 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 7 (Ubuntu 7.4.0-1ubuntu1~18.04.1)
  configuration: --prefix=/home/z/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/home/z/ffmpeg_build/include --extra-ldflags=-L/home/z/ffmpeg_build/lib --extra-libs='-lpthread -lm' --bindir=/home/z/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree
  libavutil      56. 36.101 / 56. 36.101
  libavcodec     58. 64.101 / 58. 64.101
  libavformat    58. 35.101 / 58. 35.101
  libavdevice    58.  9.101 / 58.  9.101
  libavfilter     7. 67.100 /  7. 67.100
  libswscale      5.  6.100 /  5.  6.100
  libswresample   3.  6.100 /  3.  6.100
  libpostproc    55.  6.100 / 55.  6.100
Guessed Channel Layout for Input Stream #0.1 : 5.1
Input #0, matroska,webm, from '/home/Tdarr/Media/Television/Stranger Things/Season 03/Stranger Things - S03E01 - Chapter One- Suzie, Do You Copy [HDTV-1080p].mkv':
  Metadata:
    encoder         : libebml v1.3.5 + libmatroska v1.4.8
    creation_time   : 2019-07-04T07:03:27.000000Z
  Duration: 00:50:33.63, start: 0.000000, bitrate: 7850 kb/s
  Chapter #0:0: start 306.015000, end 354.521000
    Metadata:
      title           : Intro start
  Chapter #0:1: start 354.521000, end 3033.632000
    Metadata:
      title           : Intro end
  Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 23.98 fps, 23.98 tbr, 1k tbn, 47.95 tbc (default)
    Metadata:
      BPS-eng         : 7205368
      DURATION-eng    : 00:50:33.573000000
      NUMBER_OF_FRAMES-eng: 72733
      NUMBER_OF_BYTES-eng: 2732251549
      _STATISTICS_WRITING_APP-eng: mkvmerge v21.0.0 ('Tardigrades Will Inherit The Earth') 64-bit
      _STATISTICS_WRITING_DATE_UTC-eng: 2019-07-04 07:03:27
      _STATISTICS_TAGS-eng: BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES
  Stream #0:1(eng): Audio: eac3, 48000 Hz, 5.1, fltp (default)
...
  Stream #0:29 -> #0:29 (copy)
  Stream #0:30 -> #0:30 (copy)
  Stream #0:31 -> #0:31 (copy)
  Stream #0:32 -> #0:32 (copy)
Press [q] to stop, [?] for help
[hevc_nvenc @ 0x55aaaad84e40] Codec not supported
[hevc_nvenc @ 0x55aaaad84e40] No capable devices found
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Conversion failed!

     I have an EVGA GeForce GTX 760, obviously an older card; nvidia-smi doesn't fully support it:

Tue Mar 10 13:54:11 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.59       Driver Version: 440.59       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 760     Off  | 00000000:08:00.0 N/A |                  N/A |
|  0%   35C    P0    N/A /  N/A |      0MiB /  1997MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0                    Not Supported                                       |
+-----------------------------------------------------------------------------+

     However, my linuxserver/plex and linuxserver/emby containers do manage to use it for hardware transcoding. I made sure to set all the correct Docker template variables, including --runtime=nvidia, NVIDIA_DRIVER_CAPABILITIES=all, and NVIDIA_VISIBLE_DEVICES=<GPU ID>, and I have Linuxserver Unraid Nvidia 6.8.3 installed. Any tips? I would really like to be able to transcode on the GPU; I've been brutally punishing my CPU for days slowly transcoding on Unmanic.
  10. Unmanic is another good container. It's dead simple: you just point it at a directory and it converts x264 video files to HEVC.
  11. Super happy to see that we have resolved the issue! Thank you @Rich Minear @limetech and everyone else! I look forward to finally confidently upgrading from 6.6.7!
  12. That was it! Yes, I am running 6.6.7 because on 6.7+ my machine experienced the SQLite data corruption bug being investigated and 6.6.7 did not. Thanks for your help!
  13. Just ran the container, tried to start the VM, and got this error:

internal error: process exited while connecting to monitor: 2019-10-27T19:31:11.597434Z qemu-system-x86_64: -machine pc-q35-3.1,accel=kvm,usb=off,dump-guest-core=off,mem-merge=off: unsupported machine type
Use -machine help to list supported machines

     I am using the --catalina OS flag. The only change I made to the container is that I changed the VM images location to /mnt/cache/domains, but that should be the same as /mnt/user/domains anyway. Has anybody seen anything like this? My XML:

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>MacinaboxCatalina</name>
  <uuid>51e88f1c-a2ab-43af-981f-80483d6600c8</uuid>
  <description>MacOS Catalina</description>
  <metadata>
    <vmtemplate xmlns="unraid" name="MacOS" icon="/mnt/user/domains/MacinaboxCatalina/icon/catalina.png" os="Catalina"/>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_CODE.fd</loader>
    <nvram>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_VARS.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='2' threads='1'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <source file='/mnt/user/domains/MacinaboxCatalina/Clover.qcow2'/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/MacinaboxCatalina/Catalina-install.img'/>
      <target dev='hdd' bus='sata'/>
      <address type='drive' controller='0' bus='0' target='0' unit='3'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/MacinaboxCatalina/macos_disk.img'/>
      <target dev='hde' bus='sata'/>
      <address type='drive' controller='0' bus='0' target='0' unit='4'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:71:f5:95'/>
      <source bridge='br0'/>
      <model type='vmxnet3'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='' keymap='en-us'>
      <listen type='address' address=''/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-usb'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='usb-kbd,bus=usb-bus.0'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='isa-applesmc,osk=IAMZOOMASSANDIREMOVEDTHEMAGICWORDS'/>
    <qemu:arg value='-smbios'/>
    <qemu:arg value='type=2'/>
    <qemu:arg value='-cpu'/>
    <qemu:arg value='Penryn,kvm=on,vendor=GenuineIntel,+invtsc,vmware-cpuid-freq=on,+pcid,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,check'/>
  </qemu:commandline>
</domain>
  14. Just updated my Asus Prime B450-A to the 7/29/2019 BIOS version 1607, which says it updates AGESA AB. I then tried to pass through a GeForce 760 to a Windows 10 VM, but I still get the same D3 error. EDIT: My CPU is an AMD Ryzen 2700.
  15. I'm trying to access the web UI through a reverse proxy, but when I go to https://mydomain.com/thelounge it just responds with a page that says "Cannot GET /thelounge/". I'm using update 15.05.19, the latest image as of posting this. I made no changes to the default template except setting the network to my public Docker network (where containers can resolve each other's hostnames), and no changes to /config/config.js except setting "reverseProxy: true,". I use the jselage/nginxproxymanager container; here is my location block:

location /thelounge {
  proxy_set_header Host $host;
  proxy_set_header X-Forwarded-Scheme $scheme;
  proxy_set_header X-Forwarded-Proto $scheme;
  proxy_set_header X-Forwarded-For $remote_addr;
  proxy_pass http://thelounge:9000;
  auth_request /auth-0;
}

     It's a bit different from the block in https://thelounge.chat/docs/guides/reverse-proxies#nginx but I also tried adding the suggested lines and they made no difference. I also tried this block from the linuxserver config, even though I don't use the linuxserver/letsencrypt container, with the same result:

# thelounge does not require a base url setting
location /thelounge {
  return 301 $scheme://$host/thelounge/;
}

location ^~ /thelounge/ {
  # enable the next two lines for http auth
  #auth_basic "Restricted";
  #auth_basic_user_file /config/nginx/.htpasswd;
  # enable the next two lines for ldap auth, also customize and enable ldap.conf in the default conf
  #auth_request /auth;
  #error_page 401 =200 /login;
  include /config/nginx/proxy.conf;
  resolver valid=30s;
  set $upstream_thelounge thelounge;
  rewrite /thelounge(.*) $1 break;
  proxy_pass http://$upstream_thelounge:9000;
}

     Any other ideas?
  16. Also can't pass through a GPU to a Windows 10 VM.

     Ryzen 7 2700
     Asus PRIME B450M-A/CSM, BIOS version 1201 (can't downgrade; there was an update 1607 on 6/25/2019 that I haven't tried yet)
     unRAID, both 6.7.0 and 6.6.7
     EVGA GeForce GTX 760
  17. This may be specific to Code Server itself and not this container, but how do we tell VSCode the subdirectory path it's located on in a reverse proxy?
  18. I'm with rm and runraid on this one. I'm still experiencing database corruptions in Plex and restoring from backups daily, with /config on /mnt/disk1/appdata/plex as has been suggested, on unRAID 6.7.0. This is a difficult problem to isolate; some people on the Plex forums have been discussing it with little progress, but I agree with runraid that it feels like the devs aren't giving this issue the attention it deserves. It's still happening to many people. I only started using unRAID in the past month and I'm still on the trial key. I've been meaning to finally purchase a key, but I can't justify it until we figure out what's causing these database corruptions.
  19. Hey, that worked, thanks so much! Silly of me not to try the line from the article itself.
  20. I keep getting this warning no matter how I set my ddclient.conf:

Setting up watches.
Watches established.
/config/ddclient.conf MODIFY
ddclient has been restarted
Setting up watches.
Watches established.
WARNING: found neither ipv4 nor ipv6 address

     I'm using Namecheap. I've tried configuring it the following ways by uncommenting the Namecheap section (I made no other changes to the default ddclient.conf):

##
## NameCheap (namecheap.com)
##
protocol=namecheap
server=dynamicdns.park-your-domain.com
login=**********.***
password=**********
@

     and

##
## NameCheap (namecheap.com)
##
protocol=namecheap,                     \
server=dynamicdns.park-your-domain.com, \
login=**********.***,                   \
password=**********                     \
@

     I made no changes to the default linuxserver/ddclient Docker template. I read this article for reference on how to configure ddclient for Namecheap: https://www.namecheap.com/support/knowledgebase/article.aspx/583/11/how-do-i-configure-ddclient
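     For what it's worth, that particular warning generally means ddclient never determined the machine's current public IP, rather than that the Namecheap update failed. Neither snippet above configures an IP-detection method, so a sketch of a ddclient.conf that adds one (the login, password, and host values are placeholders, and the use=web line follows the Namecheap KB article linked above):

```
# ddclient.conf sketch for Namecheap dynamic DNS -- values are placeholders.
# Detect the public IP over HTTP so ddclient has an address to report:
use=web, web=dynamicdns.park-your-domain.com/getip
protocol=namecheap
server=dynamicdns.park-your-domain.com
# login is the bare domain registered at Namecheap
login=yourdomain.com
# password is the Dynamic DNS password from the Namecheap dashboard,
# not the account password
password=your-dynamic-dns-password
# host to update; "@" means the bare domain itself
@
```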
  21. Oh, thanks, my bad. I'm still pretty new to unRAID. GUI support would be nice, though!
  22. +1 I have a Nextcloud setup where I isolate the web app, database, and document server as separate containers/services for security and logical isolation. I just started using unRAID but I didn't realize that docker-compose isn't supported. I think it would make sense for an app Docker template to support several containers (though I understand this means changing the XML schema and further abstracting templates from containers). I also would appreciate greater support for Docker networking, like creating an independent NAT per each compose file, where you can give each container a network-specific hostname, and attach to several NATs. For example, in my Nextcloud compose file I only connect the web app container to the virtual NAT that is exposed publicly.
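     Until something compose-like is supported, part of what I'm describing can be approximated from the CLI with user-defined networks, since containers on the same user-defined bridge resolve each other by container name or by an explicit alias. A sketch under those assumptions (network names, container names, and images here are illustrative, not from any unRAID template):

```shell
# Sketch: isolate a multi-container app on its own user-defined bridge.
# Only containers attached to "nextcloud-net" can reach the database;
# container names double as DNS hostnames on that network.
docker network create nextcloud-net

docker run -d --name nextcloud-db  --network nextcloud-net postgres:11
docker run -d --name nextcloud-app --network nextcloud-net \
  --network-alias cloud nextcloud

# Attach only the web app to a second, publicly exposed network as well,
# mirroring "connect only the web app to the public NAT":
docker network connect public-net nextcloud-app
```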
  23. Hi everyone, I'm currently picking parts for my first unRAID server build. The only parts I have in hand are an AMD Ryzen 7 2700 CPU, an ASUS B450M-A/CSM MicroATX motherboard, three 3.5" HDDs (specifically a 2TB, a 3TB, and a shucked 10TB Easystore for parity), an EVGA GeForce GTX 1070 (which may or may not end up in the build, depending on whether it fits; I don't plan to game on this machine), and an old Corsair 500W Bronze PSU (though I'm strongly considering replacing it with an EVGA 750W Gold PSU or similar). I'm trying to build a compact and visually appealing mATX server that will sit in a visible place. For drives, I expect to have 3-4x 3.5" HDDs (the ones mentioned above plus one more for expansion), 1-2x 2.5" SSDs, and/or 1x NVMe SSD (for the cache pool, depending on available deals). I've been eyeing the Bitfenix Prodigy M for a couple of months, even though it's hard to find nowadays. It's currently on Amazon for $102.07 (including tax and shipping). I really like the older-Mac-Pro-like appearance, and it seems to support 3-4x 3.5" drives and up to 5x 2.5" drives. I've heard that cable management can be difficult in this case due to the placement of the PSU, and that it's hard to ventilate properly because of the tight space. Is anyone here familiar with this case? I would appreciate any feedback or advice, including other case suggestions for this usage.