doesntaffect

Members
  • Posts: 188
  • Days Won: 1

Everything posted by doesntaffect

  1. Thanks @cmprvan! Did you try to point the Docker container to the zip?
  2. @cmprvan do you know what customers will get once they purchase a license for their domain? Will FileRun provide the zip file after purchase? I wanted to set up an instance on my Unraid and also ran into the issue posted above. I am considering buying the smallest tier, but I am afraid the setup on Unraid no longer seems to be straightforward.
  3. Is there a guide on how to set up the Vaultwarden container with a secure (non-plain-text) admin token? I downloaded the container through Apps, started it, and generated a token in the container CLI with ./vaultwarden hash. Then I pasted the token into the container config in Unraid, but I still cannot log in to the admin panel and get "Error: Invalid admin token, please try again." Any advice?
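     For reference, a minimal sketch of the setup I'm attempting (assuming the official vaultwarden/server image; the variable name and the compose note are from its docs as far as I recall, so please correct me if I'm wrong):

     # inside the container console
     ./vaultwarden hash
     # prints an Argon2 PHC string starting with $argon2id$...
     # Paste that whole string, without surrounding quotes, into the ADMIN_TOKEN
     # variable of the Unraid template. If the same value is ever used in a
     # docker-compose file, every $ has to be doubled ($$), otherwise compose
     # expands it and the admin login can fail with the same "Invalid admin token" error.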
  4. Can you share statistics on CPU load, disk and NVMe temps? How many / which Docker containers? Power consumption?
  5. 12500 / 13500T - shouldn't make a difference, however the BIOS has to be up to date to support 13th Gen. I bought the 12500 used on eBay. I am also not using Plex, but I can see that transcoding performance is much higher than with my prior AMD system - tested with PhotoPrism.
  6. A 20 or 30cm SATA cable will be too short; my 50cm one is probably 10cm too long but still doesn't create a mess. I linked the cable in an earlier post. The two Molex cables are custom made and I ordered them through Etsy. I couldn't find any shop that sells 6-pin to female Molex in Germany. The cables have to be 20cm minimum, better 22cm for the bottom Molex connector, to allow decent bending angles for the cables.
  7. Yes, I was looking at different options since I wanted to use all drives plus 2x M.2 in RAID 1, because my Dockers sit on the cache. I also wanted to have the PiKVM internally, so I had to spend the PCIe slot on that. In the end I couldn't find a suitable solution that allowed me to keep both M.2 drives and 5 instead of 4 HDDs. As of now the build is close to perfect for me and runs like a charm. Super stable, draws just 26W with several Dockers running. I will use the empty HDD slot for a cold-standby disk in case one drive fails. Then I only need to swap them and add the new drive to the array.
  8. I am using this one, 50cm long: https://a.aliexpress.com/_EuC3Yt7
  9. I bought the case, the KVM card and the super-thin 4x SATA cable through AliExpress; shipping to Germany took 3 weeks, no import taxes on all 3 items. I couldn't find any shop for the case & BliKVM in the EU. The SF450 PSU isn't sold anymore (the SF550 is the smallest nowadays), so I went for a used unit. The PSU does not come with a compatible Molex cable, so I had to get 2x custom cables through Etsy.
  10. The idle draw is often the same or close across the different CPU versions. The TDP limit kicks in when the CPU is under load. I recently built a 12500T system, which draws 30W or less when idling with approx. 15 Dockers running from NVMes. Under load the system hovers at 58-59W max, on average close to 40W with VMs running etc. With a non-T / K you'll see the same idle draw and >100W under load.
  11. What will this device serve / act as? A plain file server? A few Dockers? ... ?
  12. I ditched my EPYC build and switched to Intel. The transition went smooth as hell - moved the disks & USB stick, done. I only had to reconfigure the temperature sensing plugin for the new sensors. That was all, aside from the hardware build process.

      Background for the swap: For years I had been looking for a NAS-like case, and last year Jonsbo finally released the N2 - a 5x 3.5" cube-like case, 222.5mm(W) x 222.5mm(D) x 224mm(H), close to the size of a usual NAS. It takes an ITX board plus a low-profile cooler and is SFX powered. I also wanted to try the 12/13th Gen Intel CPUs.

      Hardware-related notes: I went for an Intel i5 12500T (T because I wanted low-noise cooling in an often too warm cabinet). Everything else is standard. The same 4x 4TB WD Reds plus 2x 1TB M.2 in RAID 1 for cache (65% used for Docker, mostly PhotoPrism). I might swap the 12500 for a 13400T later in spring. For remote access I purchased a BliKVM CM4 PCIe card, which puts the RPi CM4 based PiKVM board into the case (yes, that sacrificed the one and only PCIe slot). But with that config I have the same level of remote access down to BIOS level as with the IPMI BMC on an AsrockRack board, and with Tailscale enabled I can remote into the BIOS from anywhere. I removed the front USB cables from the board since I don't use them; they would only limit ventilation and look messy. To power the HDD backplane I ordered 2 custom 6-pin to female Molex cables. They fit well into the case and the 20/22cm length is just right for this config.

      Hardware list:
      Case: Jonsbo N2
      PSU: Corsair SF-450
      CPU: i5 12500T
      Cooler: NH-L9i-17xx chromax.black w/ NF-A9x14 HS-PWM chromax.black + 1x Noctua NF-A12x15 PWM 120mm on the back
      Memory: 2x 8GB DDR5 4800 AD5U48008G-DT
      Board: ASUS ROG Strix B660-I Gaming (WiFi & BT disabled)
      M.2: 2x 1TB WDS100T1X0E
      HDD: 4x WD RED attached w/ 4x slim SATA cables, 1 SATA port on the backplane unused
      Remote access: BliKVM CM4 PCIe (ATX power on/off, BIOS access, console access), powered through a USB-C PSU until I get a decent PoE switch

      The system draws 26 - 31W with 3 disks parked, 38 - 40W with all disks running. Noise level is very low; only the disks are noticeable. Temps (degrees Celsius): CPU 34 - 36, board 28, M.2s 42 for the one on top and 48 for the one mounted at the bottom of the board, HDDs 33 - 37, ambient temp in the cabinet 23 - 26. What I noticed is that the Intel-based system is more responsive; especially videos archived in PhotoPrism now play super fluently/fast without any load time. On the AMD system that didn't work at all - not sure if that's related to transcoding capabilities (I am more the hardware than the Plex/encoding guy).

      Todos:
      Swap the HDDs for 4x 2TB SSDs
      Swap the CPU for a 13400T
      Slightly improve the cabling, although it's already quite decent and nothing gets obstructed.

      A few photos:
  13. Going to sell this config, except disks and add-on cards.
      Case: Fractal Design Define 7
      PSU: Be Quiet! Straight Power 550W
      Board: AsRockRack ROMED8-2T, full ATX, w/ BIOS 3.20 (latest version as of 2021/8)
      CPU: AMD Epyc 7232P (8C/16T, 120W, Zen 2)
      Case cooling: 2x Arctic P14 PWM fans in front to cool the HDDs, 1x at the bottom of the case; the fans are ultra silent
      CPU cooling: BeQuiet! Silent Loop 2 360 w/ 2x Noctua NF-S12B redux-1200 PWM, 1x NF-S12B redux-700
      Memory: 64 GB registered ECC RDIMMs (2x 32 GB) Kingston KSM32RD4/32MEI @ 3200MHz (per memory QVL)
      Anyone from Germany/EU who is interested, please let me know your realistic pricing proposals. The system is still running 24/7, super stable, so mileage may vary until it gets sold.
  14. That would be amazing. Thanks much @KluthR! What would also be nice is a scheduler, so that the backup of setups with several (large) containers can be spread across time. In my case I have a PhotoPrism container with 50K photos plus several other containers, and the backup runs from 03:00am to 4pm. While the backup is running my network-related containers are still down, and I wonder if I can start them while the backup is still running?
  15. Is there a possibility to see the progress (and potentially time left) of the backup process in the Unraid GUI?
  16. Well, that's a nice rig! Any special apps you run on it? Where do you see the most benefit from all the flash drives?
  17. Diagnostics attached. Even after a few days the status has not changed. ryzen-diagnostics-20220515-1246.zip
  18. Upgraded from rc4, did two reboots, and all services seem to start. The array seems to start automatically and I can access Dockers and manually start a VM; however, on the Main tab at the bottom of the browser page it says "Array Stopped stale configuration". When I enter the VMs tab, I get "Array must be started to view Virtual Machines.", even though the VMs are running. Any advice?
  19. I documented my build here. It still works like on day 1.
  20. @raptwa did you open a support ticket with ASRock yet, to check whether they have any limitations in their BIOS regarding PCIe 1?
  21. I messed around with the PCIe cards in different slots, and apparently the current config seems to be the only one that works for my VM setup (GPU in PCIe 5). I could also add a second GPU in another slot, but I think PCIe 1 also didn't work for me. I am using the riser card for thermal reasons.
  22. I cannot recall where that feature was hidden, since I already sold that board. My current config is in the EPYC watercooled thread. Regarding your system resets - did you reset the BIOS and run a memtest to see if there are any configuration issues or a faulty memory module?
  23. I removed Pi-hole (container, image, template) since it seemed to be related to DNS issues in my network. Then I created an AdGuard container, which now cannot start since port 53 is still occupied. Question - by what / which service? The Docker engine throws this error when I create the container:

      docker: Error response from daemon: driver failed programming external connectivity on endpoint AdGuard-Home (b2fa2aa7baac63aa19aa0c4fbd4b9555fd2f034bac110b646683a171bb6bea29): Error starting userland proxy: listen tcp4 0.0.0.0:53: bind: address already in use.

      Any advice how this can be fixed? A netstat on the host shows the following, and I don't see any container listening on 53.

      netstat -pna | grep 53
      tcp   0  0 0.0.0.0:5355         0.0.0.0:*               LISTEN       27171/wsdd2
      tcp   0  0 0.0.0.0:3443         0.0.0.0:*               LISTEN       30538/docker-proxy
      tcp   0  0 0.0.0.0:4533         0.0.0.0:*               LISTEN       30022/docker-proxy
      tcp   0  0 192.168.122.1:53     0.0.0.0:*               LISTEN       10590/dnsmasq
      tcp   0  0 192.168.178.249:80   192.168.178.54:53460    TIME_WAIT    -
      tcp   0  0 192.168.178.249:80   192.168.178.54:53550    ESTABLISHED  7399/nginx: worker
      tcp   0  0 192.168.178.249:80   192.168.178.54:53389    ESTABLISHED  7399/nginx: worker
      tcp   0  0 192.168.178.249:80   192.168.178.54:53497    TIME_WAIT    -
      tcp   0  0 192.168.178.249:80   192.168.178.54:53555    ESTABLISHED  7399/nginx: worker
      tcp   0  0 192.168.178.249:80   192.168.178.54:53556    ESTABLISHED  7399/nginx: worker
      tcp   0  0 192.168.178.249:80   192.168.178.54:53390    ESTABLISHED  7399/nginx: worker
      tcp   0  0 192.168.178.249:80   192.168.178.54:53557    ESTABLISHED  7399/nginx: worker
      tcp   0  0 192.168.178.249:80   192.168.178.54:53553    ESTABLISHED  7399/nginx: worker
      tcp   0  0 192.168.178.249:80   192.168.178.54:53554    ESTABLISHED  7399/nginx: worker
      tcp   0  0 192.168.178.249:80   192.168.178.54:53388    ESTABLISHED  7399/nginx: worker
      tcp   0  0 192.168.178.249:80   192.168.178.54:53387    ESTABLISHED  7399/nginx: worker
      tcp   0  0 192.168.178.249:80   192.168.178.54:53499    ESTABLISHED  7399/nginx: worker
      tcp   0  0 192.168.178.249:80   192.168.178.54:53558    ESTABLISHED  7399/nginx: worker
      tcp   0 14 192.168.178.249:80   192.168.178.54:53561    ESTABLISHED  7399/nginx: worker
      tcp   0  0 192.168.178.249:80   192.168.178.54:53552    ESTABLISHED  7399/nginx: worker
      tcp   0  0 192.168.178.249:80   192.168.178.54:53551    ESTABLISHED  7399/nginx: worker
      tcp6  0  0 :::4533              :::*                    LISTEN       30029/docker-proxy
      udp   0  0 0.0.0.0:5353         0.0.0.0:*                            27191/avahi-daemon:
      udp   0  0 0.0.0.0:5355         0.0.0.0:*                            27171/wsdd2
      udp   0  0 192.168.122.1:53     0.0.0.0:*                            10590/dnsmasq
      udp6  0  0 :::5353              :::*                                 27191/avahi-daemon:
      unix  2  [ ACC ]  STREAM  LISTENING  153014  30047/containerd-sh  /run/containerd/s/28ec7daac2528544a3768842fc90f52ffe93983dd533109f5b0b26ef918f62c5
      unix  2  [ ACC ]  STREAM  LISTENING  148264  30559/containerd-sh  /run/containerd/s/b7888a0669bb979537068e146779ff32858ff0a3e20c60dd4bda6eea39620139
      unix  2  [ ACC ]  STREAM  LISTENING  149187  31565/containerd-sh  /run/containerd/s/68f150ffeb953dca02559017754040e3b6b3bfe77f7cee2f65636bc58a86e4da
      unix  3  [ ]      STREAM  CONNECTED  159931  30559/containerd-sh  /run/containerd/s/b7888a0669bb979537068e146779ff32858ff0a3e20c60dd4bda6eea39620139
      unix  3  [ ]      STREAM  CONNECTED  153184  27298/containerd
      unix  3  [ ]      STREAM  CONNECTED  159753  28168/containerd-sh  /run/containerd/s/5ec104dec6088fbefa7b61744247a6dd13c9bf6d90a40e3e7555bd447c1b5588
      unix  3  [ ]      STREAM  CONNECTED  159911  30047/containerd-sh  /run/containerd/s/28ec7daac2528544a3768842fc90f52ffe93983dd533109f5b0b26ef918f62c5
      unix  3  [ ]      STREAM  CONNECTED  148368  31565/containerd-sh  /run/containerd/s/68f150ffeb953dca02559017754040e3b6b3bfe77f7cee2f65636bc58a86e4da
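      The workaround I'm considering, since the netstat above points at the libvirt dnsmasq on 192.168.122.1, is to bind AdGuard's DNS ports to the host LAN IP only instead of 0.0.0.0. A sketch only, untested; the image name and appdata paths are my assumptions, the host IP is the one from the output above:

      docker run -d --name AdGuard-Home \
        -p 192.168.178.249:53:53/tcp \
        -p 192.168.178.249:53:53/udp \
        -p 3000:3000/tcp \
        -v /mnt/user/appdata/adguard/work:/opt/adguardhome/work \
        -v /mnt/user/appdata/adguard/conf:/opt/adguardhome/conf \
        adguard/adguardhome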
  24. I am using a USB CD-RW drive and want to rip audio CDs. When I put a CD in the drive, the log shows the following after a few moments:

      eject: tried to use `devtmpfs' as device name but it is no block device
      /config/ripper.sh: line 128: /usr/bin/ripit: No such file or directory

      The container log shows the following:

      06.01.2022 17:04:57 : Starting Ripper. Optical Discs will be detected and ripped within 60 seconds.
      *** Booting runit daemon...
      *** Runit started as PID 40
      Jan 6 17:04:57 Ryzen cron[44]: (CRON) INFO (pidfile fd = 3)
      Jan 6 17:04:57 Ryzen cron[44]: (CRON) INFO (Running @reboot jobs)
      06.01.2022 17:04:58 : Disk tray open
      06.01.2022 17:05:58 : Disc still loading
      06.01.2022 17:06:59 : CD detected: Saving MP3 and FLAC
      06.01.2022 17:06:59 : Done! Ejecting Disk
      chown: cannot access '/out/Ripper/CD': No such file or directory
      chown: cannot access '/out/Ripper/CD': No such file or directory
      06.01.2022 17:08:02 : CD detected: Saving MP3 and FLAC
      06.01.2022 17:08:02 : Done! Ejecting Disk
      chown: cannot access '/out/Ripper/CD': No such file or directory
      chown: cannot access '/out/Ripper/CD': No such file or directory

      I changed the out folders a few times and created /Ripper/CD manually in /out - no luck. The drive also ejects the CD. I read the GitHub page and still I am not 100% clear how to use this container. Any advice?
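      For reference, this is roughly the mapping I'd expect to be needed - a sketch only; the image name, device node and host paths are assumptions on my part, not taken from my current template:

      docker run -d --name Ripper \
        --device /dev/sr0:/dev/sr0 \
        -v /mnt/user/appdata/ripper:/config \
        -v /mnt/user/media/rips:/out \
        rix1337/docker-ripper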
  25. I can report that Apple's AirPods Max work like a charm with a macOS VM. Probably no surprise, but I wanted to share this. I also came across this system tool (GER and ENG), which gives access to GPU power consumption & temperatures, system load, etc. without adding any additional kexts.