Leaderboard

Popular Content

Showing content with the highest reputation on 11/17/22 in all areas

1. Never thought I would really care, but lately I've been dolling up the server. The original build started in a different case around 14 years ago, but the server has resided in this Xigmatek Elysium for ~10 years. Hardware changes all the time it seems. My current hardware is below but is usually up to date in my signature. Server Hardware: SuperMicro X11SCZ-F • Intel® Xeon® E-2288G • Micron 128GB ECC • Mellanox MCX311A • Seasonic SSR-1000TR • APC BR1350MS • Xigmatek Elysium • 4x Norco SS-500 Hard Drive PODs • 11x Noctua Fans. Array Hardware: LSI 9305-24i • 190TB Dual Parity on WD Helium REDs • 5x 14TB, 9x 12TB, 5x 8TB • Cache: 1x 10TB • Pool1: 2x Samsung 1TB 850 PROs RAID1 • Pool2: 2x Samsung 4TB 860 EVOs RAID0. Dedicated VM Hardware: Samsung 970 PRO 1TB NVMe • Inno3D Nvidia GTX 1650 Single Slot. Forgot to add that I custom built all the power cables for all the hard drives and bays. I use 16 AWG pure copper primary wire. It's fun to keep all the black wires in the correct order and not fry anything, but I proved it doable 🙂. Having a modular power supply is nice. I was able to split 10x "spinners" per power cable in order to stay within the amperage limits of the modular power connectors and maintain wire runs without significant voltage drop. Here are some pics... 10Gb fiber goes from the Mellanox MCX311A to a Brocade 6450 switch (finally ran the fiber after much procrastination). That feeds the house and branches off. 5Gb comes into the 6450 switch from my modem and I typically get around 3Gb (+/-0.5Gb) from my ISP. It's a main line directly to my server, from which I run VMs and which I use as my primary workstation. I really love unRAID and how easy virtualization is. craigr
    2 points
2. Execute this script every 5 minutes (custom cron */5 * * * *) through the User Scripts plugin. It opens the DDNS URL only if the IPv4 or IPv6 has changed. By default the IPv4 is obtained through a web service like icanhazip.com and the IPv6 from the server's Ethernet interface. Optionally you can obtain the IPv6 from a specific container:
ipv6=$(docker inspect "Nginx-Proxy-Manager-Official" --format='{{range .NetworkSettings.Networks}}{{.GlobalIPv6Address}}{{end}}')
Donate? 🤗
    1 point
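The "only call the DDNS URL when the IP changed" logic from the script above can be sketched roughly like this. This is a minimal illustration, not the actual plugin script: the state-file path is an assumption, the DDNS update call is commented out as a placeholder, and the public IP would normally come from `curl -4 -s https://icanhazip.com` rather than a function argument.

```shell
#!/bin/bash
# Sketch of the change-detection idea: remember the last seen IPv4 and only
# fire the DDNS update when it differs. STATE_FILE is an assumed location.
STATE_FILE="${STATE_FILE:-/tmp/last_ipv4}"

update_if_changed() {
    local current_ip="$1"   # normally fetched via: curl -4 -s https://icanhazip.com
    local last_ip
    last_ip=$(cat "$STATE_FILE" 2>/dev/null)
    if [ "$current_ip" != "$last_ip" ]; then
        # curl -s "$DDNS_URL" > /dev/null   # hypothetical DDNS update call
        echo "$current_ip" > "$STATE_FILE"
        echo "updated"
    else
        echo "unchanged"
    fi
}

rm -f "$STATE_FILE"
update_if_changed "203.0.113.7"   # first run: no stored IP, triggers an update
update_if_changed "203.0.113.7"   # second run: same IP, nothing to do
```

Run every 5 minutes via the User Scripts plugin's custom cron, this keeps the DDNS provider from being hammered with redundant updates.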
3. Hi, I have an issue with Unraid where I think the FUSE filesystem dies when a specific docker container receives a specific interaction. I know it sounds vague at first, but please read on; it's always reproducible and rather easy to do. I'm using the latest Unraid (6.11.1 - basic license) and just upgraded my setup to have a parity drive as well as a cache drive (did a clean install, not a migration). My setup is: 2x 4TB HDD (1 parity, 1 xfs) + 1x 500GB SSD as cache (tried with btrfs as well as xfs here). What I did: Installed all prerequisites to have Apps as well as docker-compose; enabled docker, etc. - everything needed to add the MariaDB-Official app as well as to run docker-compose. Within Apps, added MariaDB Official, made its database persistent inside /mnt/user/appdata/... + created its own network ("mariadb"). I don't think the external MariaDB or the custom network is relevant here, but I didn't want to modify the example; I just wanted to use one single MariaDB for other use-cases too, hence the initial setup looked like this. Brought up Owncloud from their official docker-compose file (modified file attached to allow recreating the issue - modifications include the use of the MariaDB Official docker as well as persistent storage for both the Owncloud docker container and Redis - for both, I've added the volume mounts to /mnt/user/appdata/owncloud/...). Enabled remote syslog to gather some logs, as after the issue happens there's no way to get any diagnostics...
What is observed: After doing the Owncloud initial setup via web, you can check that everything is working fine in the web browser as well as using the PC client. HOWEVER: if by any chance you try to access the server via the iOS Owncloud application, everything dies instantly - and I mean everything, instantly (you don't even really have to do anything, just have the iOS client request a login, or similar):
You cannot access the owncloud web interface, it's dead.
You cannot get into the owncloud container in any way, it's dead (docker writes no logs for the container about any kind of error).
You cannot stop/remove the container:
Error response from daemon: cannot stop container: owncloud_server: tried to kill container, but did not receive an exit event
You cannot stop the docker service:
stopping dockerd... waiting for docker to die... (repeated ~15 times)
umount: /var/lib/docker: target is busy.
You cannot stop the storage array anymore; you get into an infinite loop of the following log entries (it goes on forever and ignores any timeout set in the array settings):
Unmounting disks...
shcmd (72301): umount /mnt/disk1
umount: /mnt/disk1: target is busy.
shcmd (72301): exit status: 32
shcmd (72302): umount /mnt/cache
umount: /mnt/cache: target is busy.
shcmd (72302): exit status: 32
Retry unmounting disk share(s)...
Unmounting disks...
You cannot create diagnostics; it also hangs indefinitely at "Starting diagnostics collection..." and does not generate any kind of log.
powerdown -r also does not work the first time you issue it; the second invocation of powerdown -r reboots the server with the following logs (these are the last logs I receive via syslog, nothing after this until the new logs from the rebooted system):
md: md_notify_reboot
md: stopping all md devices
md: 1 devices still in use.
sd 4:0:0:0: [sdd] Synchronizing SCSI cache
sd 2:0:0:0: [sdc] Synchronizing SCSI cache
sd 1:0:0:0: [sdb] Synchronizing SCSI cache
A bonus event here: before trying to do anything with the system (after owncloud dies), if you try to access the FTP settings (Settings / FTP Server) in Unraid, the Unraid web interface also dies completely, and nothing seems to bring it back until you reboot the system. I didn't find any logs related to this either. Of course, as the array did not stop properly, the whole parity check needs to be re-done, which is 8+ hours with the 4TB disk, so it's not nice. After reading about a lot of similar issues (albeit very outdated ones, from 2015-2016) it seemed to me that the issue could be caused by the internal FUSE filesystem, which tries its best to use the cache drive transparently. To put this theory to the test, I've recreated the owncloud container with the volume shares mounted at /mnt/cache/appdata/owncloud as well as /mnt/disk1/appdata/owncloud - bypassing the /mnt/user/appdata/... construct. In both cases the application worked perfectly, there weren't any crashes, and most importantly, no Unraid OS level crashes (or rather, no storage array hanging issues). The problem with this solution is that I cannot use the cache feature, which would be very beneficial so the HDDs don't have to spin up every time something needs to be written to them; so I'd really like to use the /mnt/user/appdata/... path (as far as I've understood, if you directly point a volume share to /mnt/cache/appdata, then you are indeed using the cache drive, but it also means that the data is never flushed to the HDDs, where parity would protect it, and that the amount of space you can use is limited to the cache drive, as everything is stored there completely). What I've attached to this post: The remote syslog capture - it's rather long, but you can skip to Oct 8 11:14:37 in the log to see the important stuff onwards.
The owncloud docker-compose.yml file, which can be used to reproduce the issue (warning - you WILL face a parity check at the end...) - I've modified the users/passwords inside the file to be generic, as well as removed the need for the external .env file for easier testing. If you like, you can get the unmodified versions from their site: https://doc.owncloud.com/server/10.11/admin_manual/installation/docker/#docker-compose - scroll down and you'll find the .env as well as the docker-compose.yml - just make sure you modify it so it doesn't use a docker volume, but an external share. A diagnostics zip file which I created AFTER the system rebooted (I've noticed in some posts a diagnostics file was also requested after reboot, so here it is in advance). Any help would be appreciated in solving this while still maintaining the ability to use the cache feature. tower-diagnostics-20221008-2003.zip syslog docker-compose.yml
    1 point
4. Yeah, AMD moved to PCIe 4.0 on the chipset much earlier than Intel and could therefore do 8 GB/s sooner. Besides 2x x8, the very latest CPUs now also offer 1x x4 for an NVMe, so effectively 4 extra lanes. That's why many current boards have 3x M.2 (two of them routed through the chipset). Yes, many ATX boards even have shared slots, e.g. a PCIe x4 slot becomes unusable once a particular M.2 slot is populated. That used to be completely "normal" and hardly anyone noticed, because few people installed more than one dGPU. I found it pretty cheeky back then, though. As said, fortunately no longer an issue today. The chipset now apparently also allows a much more flexible lane split, and the brand-new AMD dual chipsets even more so. That doesn't help us with home server builds right now, of course. I'm also not yet sure how PCIe 5.0 plus a dual chipset will turn out in terms of power consumption. Here is the Z690 block diagram: https://forums.unraid.net/topic/130299-intel-kann-wieder-effizient-i5-13400-i5-13600-i5-13600k/?do=findComment&comment=1186003 And here is a good example of why, even on the X670E, not even 8x SATA is readily possible thanks to too many USB 3 ports:
    1 point
5. It's only beta because of the GUI. You can use it perfectly normally; as a backend it uses the standard targetcli-fb etc. and the normal Linux kernel modules.
    1 point
6. 48 hours of continuous RW and still not triggering the error. 24 more and I'm going to call it a pass. Still looking for insight on network stress testing.
    1 point
7. @zAch523 you could use "--verbose" as a SteamPrefill parameter for a more detailed log, and then contact tpill90 on GitHub by creating an issue here: https://github.com/tpill90/steam-lancache-prefill
    1 point
8. CA Backup should automatically stop your dockers. I'd be careful backing up your Plex cache: it will be really slow and take up tons of space. Plex can re-generate thumbnails in the background, and it seems to do so in real time if users are browsing (if that's even a feature that matters to you).
    1 point
9. The VM paths had reverted to the wrong ones; I've changed them back and the VMs are now loaded. I seem to have lost the config on one of the docker containers as well, so I'm in the process of grabbing a backup. It's been stable now for 4+ hrs. Still none the wiser as to why it sat there constantly power cycling last night, though, or why the VM paths had changed.
    1 point
  10. Thank you! That got me the local console login prompt via IPMI back. Not quite sure why I need to blacklist something that I do not have installed.
    1 point
11. @caynam You're very welcome. Sorry it didn't work the first time. I got a little sloppy with my cutting and pasting while trying to dress it up. Lol
    1 point
12. Hey, final post. After a lot of tests and rebuilds, I've nailed down the broken part. It wasn't the motherboard, it was... the CPU (Ryzen 2700X). Apparently (and I have no idea how), the power grid managed to fry part of the CPU. Meanwhile, with all the parts I've replaced, I've got a new computer. Even the motherboard is working normally now, with another CPU. Again, thanks for the help, and I hope all of this might help someone out.
    1 point
  13. Blacklist the radeon GPU from loading, instructions here: https://wiki.unraid.net/Manual/Release_Notes/Unraid_OS_6.10.0#Linux_Kernel
    1 point
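Per the linked release notes, the blacklist method boils down to dropping a modprobe config file onto the flash drive. Here is a minimal sketch; on a real Unraid box `CONF_DIR` would be `/boot/config/modprobe.d` (the location documented in the release notes), and the overridable temp-dir default exists only so the snippet can run anywhere:

```shell
# Blacklist the in-kernel radeon driver so it never binds the GPU at boot.
# CONF_DIR is /boot/config/modprobe.d on Unraid; /tmp fallback is for
# illustration only.
CONF_DIR="${CONF_DIR:-/tmp/modprobe.d}"
mkdir -p "$CONF_DIR"
echo "blacklist radeon" > "$CONF_DIR/radeon.conf"
cat "$CONF_DIR/radeon.conf"   # verify the entry; it takes effect after a reboot
```

The file survives reboots because it lives on the flash drive, which is the usual Unraid way of persisting configuration.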
14. For anyone who has this issue: I spent ages trying loads of things to fix it, and eventually found this post: https://forums.unraid.net/topic/114864-how-i-lost-my-mental-integrity-about-code-43-gpu-passthrough/?do=findComment&comment=1127751 Removing the following from the VM XML file and boom - the VM boots absolutely fine and I can see my GPU in Device Manager: <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vendor_id state='on' value='none'/> </hyperv>
    1 point
  15. Update: All cables checked and rebuild of disk2 completed with no errors from any other disks. Currently running parity check (with correction on).
    1 point
16. Just to say: for someone with almost zero knowledge of Linux-based systems, building a PC/server for the very first time is a very scary thing. What actually motivated me was this community right here; I saw so many people seeking answers, and everyone gets the help they need. Thank you very much for sharing your knowledge and time.
    1 point
  17. That's a new one for me. I haven't really messed with the video stuff but I can take a look today after work or it might be better to post in the official discord support channel.
    1 point
18. I issued the following commands a few minutes apart. In both cases the target file's timestamp was offset by 60 minutes. In my view that means it is the date stamp of the source file, so I think the problem lies at the source. My guess: they generate an image at minute 8 of every hour, and it is distributed with that creation time. That also matches your 09:08 above:
root@Tower:~# cd /tmp/
root@Tower:/tmp# date
Thu Nov 17 13:14:00 CET 2022
root@Tower:/tmp# wget https://pss.wsv.de/wsahmue/WebCam-Edersee-02-Aktuell/current.jpg
--2022-11-17 13:14:05-- https://pss.wsv.de/wsahmue/WebCam-Edersee-02-Aktuell/current.jpg
Resolving pss.wsv.de (pss.wsv.de)... 141.17.30.41
Connecting to pss.wsv.de (pss.wsv.de)|141.17.30.41|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 387194 (378K) [image/jpeg]
Saving to: ‘current.jpg.1’
current.jpg.1 100%[====================================================================================================>] 378.12K --.-KB/s in 0.06s
2022-11-17 13:14:05 (6.13 MB/s) - ‘current.jpg.1’ saved [387194/387194]
root@Tower:/tmp# ls -la current*
-rw-rw-rw- 1 root root 402888 Nov 17 12:08 current.jpg
-rw-rw-rw- 1 root root 387194 Nov 17 13:08 current.jpg.1
    1 point
  19. No, what you need to do is edit the container settings, activate the Advanced View, edit the "Storage" setting and change the Access Mode to Read/Write.
    1 point
20. I do not, as my cache is large enough to hold another day's worth (or whatever the difference may be) without adding that flag.
    1 point
21. ELF headers belong on Linux executables and linkable libraries, though - and they're not going to be human readable; it's binary data. If you open it in a text editor it will look like gibberish, because that's not how it's meant to be processed.
    1 point
22. @caynam Here you go. I messed up - I forgot to re-add the done to complete the loop when I was trying to make the folders generic enough. I should have just left well alone and let you edit/figure it out vs trying to be all slick. lol
#!/bin/bash
#noParity=true
#arrayStarted=true
#Remove spaces - Linux and spaces are no good together
cd /mnt/cache/Media/in/
for f in *\ *; do mv "$f" "${f// /.}"; done
#Create folder from name and move to final location. File types editable below
for FILE in `ls /mnt/cache/Media/in | egrep "mkv|mp4|avi|vob|iso|MKV|MP4|AVI|VOB|ISO"`
do
  DIR=`echo $FILE | rev | cut -f 2- -d '.' | rev`
  mkdir /mnt/user/Media/in/$DIR
  mv /mnt/cache/Media/in/$FILE /mnt/user/Media/in/$DIR
done
I just tested this in my /mnt/Media/in directory. I used Transformers the Movie.mkv, which was actually a dummy file I created to make sure I'm not deleting the real thing. It renamed the file from Transformers the Movie.mkv to Transformers.the.Movie.mkv and then put it in a folder named exactly the same, minus the .mkv at the end. So Transformers the Movie.mkv becomes /Transformers.the.Movie/Transformers.the.Movie.mkv. It even worked when I did Transformers (2007) (720p).mkv -> Transformers.(2007).(720p).mkv
    1 point
23. Me too (see my 2nd system "Shipon" in my signature). Yes, see my post on that: there are a total of 3 NVMe SSDs in the top PCIe slot (in the 4-way bifurcation card) and 1 NVMe SSD in the onboard M.2 slot. Usual case: a 2-way bifurcation card is usually built so that the first 8 lanes of the PCIe slot are used. (It may come with a PCIe x16 connector, but as a rule only the first (frontmost) 8 lanes are actually used.) The mainboard, however, cannot split those as 4x4. So with such a 2-way bifurcation card you can only run one NVMe SSD on the first 4 lanes; the rest of the slot lies fallow. That makes no sense - you might as well install a simple NVMe-SSD-to-PCIe-x4 adapter there, which is cheaper. Special case: if you really find a (to me so far unknown) 2-way bifurcation adapter that puts the first NVMe SSD on the first 4 lanes and the second NVMe not directly behind it on the following 4 lanes, but on the 4 lanes after those, then it can work. But I have never seen such a bifurcation adapter. It would also still waste 8 lanes of the topmost PCIe x16 slot on the board, and this special case would presumably cost more to buy than a 4-way NVMe SSD adapter like the one I use. If instead you take a 4-way bifurcation NVMe card (see my post about my 2nd server "Shipon"), you can at least use 12 of the 16 lanes for a total of 3 NVMe SSDs. If you only want to put 2 such SSDs there, you can do that and still have a spare: in the 4th, unused M.2 slot you could run something else. Perhaps another M.2 SATA adapter with a JMB585 or even ASM1166 controller, for an additional 5 or 6 SATA ports?
    1 point
24. If you have your own domain and Cloudflare is your registrar, then I'd recommend setting up an Argo tunnel and using the cloudflared docker. Otherwise use the SWAG docker. SpaceInvaderOne has a video for SWAG. For cloudflared there's a site with good instructions.
    1 point
25. Both power wires for the PODs are in one PET expandable braid sleeving finished off with shrink tube on both ends. They break out and split between the PODs and go up and down (look between PODs two and three in the pics and you can see). They also break out and split to two ports on the power supply. The SATA power wires I braided for fun. First time I ever did a four-wire braid.
    1 point
26. So, I enabled hibernate in Windows again as a test scenario, and I can say it's looking fine so far. I'll test this over some period of time and see if it stays stable. As a note: after re-enabling the hibernate button in Windows, triggering it from inside the VM also works and results in a "shutdown" for QEMU, so it's running as it should; virsh start then just wakes the VM up with all apps still open. Very nice, thanks again @ghost82 and @Darkman13. I had already dropped this completely, as my experience with it was really bad.
    1 point
  27. This is just a general observation of the bug-reports forum and not specific to this issue. As a newbie to Unraid, and reading through comments above and in other topics, it would appear that as the software has been maturing, more and more issues have been creeping in. This is not unexpected as the user base grows and the software becomes more sophisticated. I am not familiar (nor should I be) with the internal workings of product development and testing within LT. Though I am surprised that they are not using say Jira for bug/tracking and having it open to users. A forum like this in respect of "bugs" can drift and distract from the original topic. When I first joined Unraid/forums, I was under the impression that LT was across and responding to virtually all posts (mainly bugs) accordingly. It would appear that the majority of support comes from community members. Which is a good thing...but.. When a customer pays for a product, there is a certain unrealistic expectation that the software is always fit-for-purpose and bug-free :-). And when that doesn't happen we respond accordingly: I don't think that anyone here means to vent or come across aggressively or with any kind of malice. It is simple frustration, and that we feel that our voices are not making any traction sometimes. We all love and respect you and appreciate your efforts in resolving the many challenges in resolving the bugs. PS: I'm in software (web) development too and yes not having reproducible bugs is dang near impossible to fix. Perhaps just don't say this out loud. 😜
    1 point
  28. To be honest I'm really getting discouraged around trusting my data to Unraid. I updated to 6.11.1 because the SMB performance in 6.10.x was absolutely horrible compared to earlier versions. Then 6.11.2 has a severe bug that seems inexcusable and may cause data loss. And, after 4 years of many reporting the issue and even YouTube videos made on the subject, the USB installer still hangs for a great many of us at "syncing file system." And the list goes on with lots of issues that should have never been released or been fixed long ago. Unraid isn't open source and we're at the mercy of the devs. We pay for one or more licenses yet the overall experience is we're all on our own for support as if it's open source. I haven't seen anyone from Unraid participating in 99% of the threads here. Given the many recent issues I have to wonder if Unraid shouldn't just be moved to open source? We're not getting the sort of support, or quality, one can usually expect from paid software.
    1 point
29. For anyone still struggling with this: I found that the container isn't set up to work in the typical "appdata" way, and permission issues prevent warrior from saving the json config file. My solution was to accept all the defaults of the container and just pass "Extra Parameters" to configure warrior. This will preserve your settings, but any in-progress work gets lost when the container is re-created. My settings just use the preferred project and max out the concurrent items. I hope this helps someone. Set "Extra Parameters" to: -e DOWNLOADER='YourUsername' -e SELECTED_PROJECT='auto' -e CONCURRENT_ITEMS='6'
    1 point
30. There's so much misinformation/disinformation in your post that it's hard to pick a place to start, and I'd almost think it was satire. For example, chmod, chroot, and b2sum are all part of the stock unraid release; nothing additional was downloaded for those. Tmux was the first and only tool you listed that isn't part of stock unraid, and it could easily have been downloaded while following tutorials that used it. Additionally, hugepages are simply either supported by the kernel or not; they don't really get "enabled or disabled" per se, but they can be adjusted. A 2048kB Hugepagesize is the default, and the current release of unraid uses a kernel that supports hugepages. "/" is the root directory. "/root" is the root user's home directory. People mess this up all the time because we use the term root and the username root a lot. To separate them: root is the superuser ("su"), and the root directory is the base directory of the filesystem ("/"). /bin and /sbin are unique directories; unless one was linked on top of the other, there probably wasn't much cause for concern. Certain objects in /sbin will just be symlinks to /bin/executablename, and this is by design. Without detailed information on what the symptoms were, what changes were being made, and log output, it's kind of difficult to say whether you actually had a compromised system or just had something configured incorrectly.
    1 point
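The hugepage point above is easy to verify on any running Linux box, since the kernel exposes its support (and the default 2048 kB page size) through /proc/meminfo:

```shell
# If the kernel was built with hugepage support, these lines exist;
# Hugepagesize is typically 2048 kB, as noted above.
grep -i hugepagesize /proc/meminfo
```

Adjusting the number of reserved pages (not "enabling" the feature) is done via sysctl, e.g. `sysctl vm.nr_hugepages`.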
31. After starting to play around with UnRaid a couple of weeks ago I decided to build a proper system. I want to share build progress and key learnings here.
Key requirements have been: AMD system • Plenty of CPU cores • Low wattage • ECC memory • IPMI • Good cooling, since the system sits in a warm closet • Prosumer build quality
Config (runs 24/7 and has been rock stable since day 1):
UnRaid OS: 6.10 RC1
Case: Fractal Design Define 7
PSU: Be Quiet! Straight Power 550W
Board: AsRockRack X570D4U w/ BIOS 1.20; latest version as of 2021/10
CPU: Ryzen 9 3900 (65W, PN: 100-000000070) locked to 35W TDP through a BIOS setting; the CPU was difficult to source since it is meant for OEMs only.
Cooler: Noctua NH-L12S
Case fans: 5x Arctic P14 PWM - noise level is close to zero / not noticeable
Memory: 64 GB ECC (2x32 GB) Kingston KSM32ED8/32ME @ 3200MHz (per memory QVL)
Data disks: 3x 4TB WD40EFRX + 1x 4TB WD40EFRX for parity (all the same disks, same size)
Cache 0: 2x 512GB Transcend MTE220S NVMe SSDs, RAID 1
Cache 1: 4x 960GB Corsair MP510 NVMe SSDs, RAID 10. Set up with an ASUS Hyper M.2 in the PCIe x16 slot (BIOS PCIe bifurcation config: 4x4x4x4)
Todos:
Replace the 4 SATA cables with Corsair Premium Sleeved 30cm SATA cables
Eventually install an AIO water cooler
Figure out the dual channel memory setting (atm. single channel config). That's done.
Eventually configure memory for 3200MHz. Done.
Eventually install a 40mm PWM cooler for the X570. Update: After a few weeks of 24/7 uptime this seems to be unnecessary, since the temps of the X570 settled at 68 - 70°
Get the IPMI fan control plugin working
Temperatures (in degrees Celsius) / throughput:
CPU @ 35W: 38° - 41° basic usage (Docker / VMs) / 51° - 60° load
CPU @ 65W: 78° - 80° load (this pushes the fans to 1300 - 1500 RPM, which lowers the X570 temps to 65°)
Disks: 28° - 34° load
SSDs: 33° - 38° load
Mainboard: 50° on average
X570: 67° - 72° during normal operations, 76° during parity check
Fan config: 2x front (air intake), 1x bottom (air intake), 1x rear & 1x top (air out); 800 - 1000 RPM
Network throughput: 1 Gbit LAN - read speed: 1 Gbit / write speed 550 - 600 Mbit max. (limited by the UnRaid SMB implementation?). Write tests done directly to shares. So far meeting expectations. Final config: 2x 1 Gbit bond attached to a TP-Link TL-SG108E.
Learnings from the build process:
Finding the 65W version of the Ryzen 9 3900 CPU was difficult; I finally found a shop in Latvia where I ordered it. Some shops in Japan sell these too.
The case / board combination requires an ATX cable with min. 600mm length
IPMI takes up to 3 mins after power disconnect to become available
The BIOS does not show more than 2 of the M.2 SSDs connected to the ASUS M.2 card in the x16 slot. However, unRaid has no problem seeing them all.
Mounting the CPU before mounting the board was a good decision; I should have also installed the ATX and 8-pin cables on the board before mounting it, since installing the two cables on the mounted board was a bit tricky
Decided to go with the Noctua top blower to allow airflow for the components around the CPU socket; seems to work well so far
Picked the case primarily because it allows great airflow for the HDDs and a clean cable setup
The front fans may require PWM extension cables for a proper cable setup, depending on where the fan connectors are located on the board
The X570 runs hot, however with a closed case airflow seems to be decent (vs. an open case) and temps settled at 67° - 68°
Removed the fan from the ASUS M.2 card; learned later that it has a fan switch too. Passive cooling seems to work for the 4 SSDs
PCIe bifurcation works well for the x16 slot, so far no trouble with the 4x SSD config
Slotting (& testing) the two RAM modules should be done before the board is mounted, since any changes to the RAM slots are a true hassle: the slots can only be opened on one side (looking down at the board, the left side, towards the external connectors) and the modules have to be pushed rather hard to click in.
IPMI works well, but still misses some data in the system inventory. The password can only have a max. length of 16 bytes; I used an online generator to meet that. I used a 32-char password at first and locked the account; I had to unlock it with the second default IPMI user (superuser). ASRock confirmed the missing data in the IPMI system inventory and suggested refreshing the BMC, which I haven't done yet.
Performance:
With the CPU @ 35W the system performs well for day-to-day tasks, however it feels like it could be a bit faster here and there. Nothing serious. VMs are not as fluent as expected. The system is ultra silent.
With the CPU @ 65W the system, especially VMs and docker tasks such as media encoding, is blazing fast. VM performance is awesome, and a Win10 VM through RDP on a MacBook feels 99% like a native desktop. The app performance in the VM is superior to typical laptops from my view, given the speed of the cache drive where the VM sits and the 12-core CPU. Fans are noticeable but not noisy.
45W Eco Mode seems to be the sweet spot, comparing performance vs. wattage vs. costs.
Transcoding of a 1.7GB 4K .mov file using a Handbrake container:
65W config - 28 FPS / 3 mins 30 sec - 188W max.
45W (called ECO Mode in BIOS) - 25 FPS / 3 min 45 sec - 125W max.
35W config - 4 FPS / 25 mins - 79W max.
Power consumption:
Off (IPMI on) - 4W
Boot - 88W
BIOS - 77 - 87W
Unraid running & ECO Mode (can be set in BIOS) - 48W
Unraid running & TDP limited to 35W - 47W
Parity check with CPU locked to 35W - 78W
Without any power-related adjustments and the CPU running at stock 65W, the system consumes: 80W during boot; 50 - 60W during normal operations, e.g. docker starts / restarts; 84 - 88W during parity check and array start-up (with all services starting up too); 184 - 188W during full load when transcoding a 4K video. CPU temps at full load went up to 86° C.
Costs: If I did the math right, the 35W config has lower peak power consumption, however since calculations take longer, the costs (€/$) are higher compared to the 65W config - in this case 0.3 (188W over 3.5 minutes) vs. 2.3 (78W over 25 minutes) euro cents. So one might look for the sweet spot in the middle.
January 2021 - Update after roughly a month of runtime: No issues, freezes etc. so far. The system is rock stable and just does its job. Details regarding IOMMU groupings further below. I will revisit and edit the post while I am progressing with the build.
    1 point
32. Thanks for the correction Jorge; in the meantime I've also come to this conclusion. In my head the cache drive behaved differently (I thought it worked similarly to AutoTier). Continuing my initial post, I decided to try yet another approach, but sadly this also failed: created a new share, named it appdata-persistent, and set it to use cache pool: no; created the owncloud container in /mnt/user/appdata-persistent/owncloud. Initially it seemed this method worked and I could use it this way; however, after some time fiddling with owncloud via the iPhone application (still minutes, not hours), the app hung again, and the exact same scenario happened as previously. Again no other logs were visible, certainly no error logs.
    1 point
  33. Talked to Alexis (Staff) on discord. This is an issue on Let's encrypt's end.
    1 point
34. /mnt/user permissions are wrong; type: chmod 777 /mnt/user then post the output of: ls -ail /mnt
    1 point
35. Since this was the first result in a Google search when I was looking for a solution, I will post my solution here. I created an mDNS reflector with a Docker container to reflect between my secure LAN, IOT (VLAN2), and Guest (VLAN3) networks. Change the specifics to fit your situation.
1- Create docker networks for the VLANs (one-time setup in the Unraid terminal):
docker network create --driver macvlan --subnet 192.168.2.0/24 --gateway 192.168.2.1 --opt parent=eth0.2 vlan2
docker network create --driver macvlan --subnet 192.168.3.0/24 --gateway 192.168.3.1 --opt parent=eth0.3 vlan3
2- Create the docker container:
Name: Avahi (the name is used in Post Arguments later)
Repository: flungo/avahi
Network Type: Custom:br0
Fixed IP address: 192.168.1.20
Docker Variables: Key: REFLECTOR_ENABLE_REFLECTOR Value: yes
Docker Post Arguments (ADVANCED VIEW):
; docker network connect vlan2 Avahi --ip 192.168.2.20; docker network connect vlan3 Avahi --ip 192.168.3.20
    1 point
  36. Set a specific day of the week in the scheduler. That results in the schedule running on the first Sunday of every second month.
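For anyone doing this without the scheduler GUI, a "first Sunday of every second month" schedule can be approximated in plain cron. A sketch only; the 04:00 run time and the script path are placeholders, not from the post:

```
# cron fires on days 1-7 of every 2nd month; the date test keeps only Sunday
# (%u must be escaped as \% inside a crontab entry)
0 4 1-7 */2 * [ "$(date +\%u)" = 7 ] && /boot/custom/task.sh
```

This works because only one day in any 1-7 window is a Sunday, and cron's day-of-week field is left at `*` so it doesn't OR with the day-of-month restriction.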
    1 point
  37. Bit late to the party, but I am using the librenms/librenms version of the container from CA. Before starting the LibreNMS container, create a new MariaDB container (I am using the linuxserver/mariadb one). Once that is loaded, use the CLI for that container to build your database and allow access from another host:
CREATE DATABASE librenms CHARACTER SET utf8 COLLATE utf8_unicode_ci;
CREATE USER 'librenms'@'localhost' IDENTIFIED BY 'librenms';
CREATE USER 'librenms'@'%' IDENTIFIED BY 'librenms';
GRANT ALL PRIVILEGES ON librenms.* TO 'librenms'@'localhost';
GRANT ALL PRIVILEGES ON librenms.* TO 'librenms'@'%';
FLUSH PRIVILEGES;
exit
Now, when you configure the LibreNMS container, point the DB host to the IP of the MariaDB container (in my case 192.168.0.196, the same IP as the unRAID box itself, as I'm using bridged networks for the containers). After a couple of minutes you can open the LibreNMS GUI and get cracking.
    1 point
  38. For those reading this now who might be new to the forum (such as myself): if you are using Linux, you can set the port type on Mellanox InfiniBand cards using the following procedure:
1. su to root (if you're running Ubuntu etc. and the root account is disabled by default, you can either enable the root account or use sudo -s).
2. Download and install the Mellanox Firmware Tools (MFT).
3. Find the PCI device name:
# mlxfwmanager --query
That will query all devices and output the PCI device name, which you will need to set the port link types.
4. Set the port link types:
# mlxconfig -d /dev/mst/mt4115_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
Replace the argument after the '-d' flag with your PCI device name obtained in step 3. The example above is what I have: a dual-port card, so I can set the link type for both ports (here 2 = Ethernet, 1 = InfiniBand). Alternatively, if you have a dual-port card and you actually USE InfiniBand (because you aren't only doing a NIC-to-NIC direct attached link, but are plugged into a switch), then you might set one port to run IB and the other port to run ETH. Perhaps this will be useful for other people in the future who are using something like this. (P.S. The Mellanox 100 GbE switches are more expensive per port than their IB switches.)
    1 point
  39. Dockers available so far:
SteamCMD DedicatedServers:
CounterStrike: Source
CounterStrike 2
TeamFortress 2
ArmA3 - requested by @MrSage
Deathmatch Classic
Ark: Survival Evolved
CounterStrike 1.6 - requested by @BlueLight
CounterStrike Condition Zero - requested by @BlueLight
Left4Dead - requested by @Remy Lind
Left4Dead 2 - requested by @Remy Lind
Killing Floor 1 - requested by @Remy Lind
Killing Floor 2 - requested by @Remy Lind
Assetto Corsa - requested by @HaZe
Insurgency - requested by @Remy Lind
Don't Starve Together - requested by @Remy Lind
Day of Infamy - requested by @bblair321
7 Days to Die - requested by @Benjamin Picard @jordanmw @Kirito
Insurgency: Sandstorm - requested by @koalacommando
HalfLife 2 Deathmatch
Day of Defeat: Source
Wurm Unlimited - requested by @Kirito
ArmA3 ExileMod - requested by @MrSage
The Forest - requested by @jordanmw
Mordhau - requested by @dave234ee
Garry's Mod - requested by @GRiiM
RUST - requested by @killasniff
Barotrauma - requested by @Freyer
Quake Live - requested by @Cozmo85
Hurtworld - requested by @Newtious
ATLAS - requested by @Masterism
Unturned - requested by @Newtious
SCP - Secret Laboratory
Stationeers - requested by @Morthan
TeamFortress Classic - requested by @coolasice1999
Starbound - requested by @veizour
Squad - requested by @dave234ee & @Pede
Conan Exiles - requested by @rooster7734 & @Boose77 & @CodeS1ave & @Spectral Force & @jordanmw
Alien Swarm - requested by @veizour
Alien Swarm: Reactive Drop
Project Zomboid - requested by @thenoots
Sven CO-OP - requested by @Cornflake
Pavlov VR - requested by @InvicTech
Days of War - requested by @INsane
Day of Defeat Classic - requested by @INsane
Avorion - requested by @stryder
ECO - requested by @Newtious
Pirates, Vikings & Knights 2 - requested by @PrisonMike
Post Scriptum - requested by @Nuke
Fistful of Frags - requested by @squelch
NEOTOKYO - requested by @squelch & @dustin44444
Memories of Mars - requested by @Fdirckx
PIXARK - requested by @wyattbest
HalfLife Deathmatch - requested by @MrLinford
America's Army - Proving Grounds - requested by @ph0b0s101
Valheim - requested by @MadeOfCard
Chivalry: Medieval Warfare - requested by @Nesquik
Zombie Panic! Source - requested by @galluno
Satisfactory - requested by @DazedAndConfused
Wreckfest - requested by @xanderflix
Craftopia - requested by @Patrick_W
No More Room In Hell - requested by @bearcat2004
LambdaWars - requested by @spacecops
Longvinter - requested by @Rokanza
Necesse - requested by @NihilSustinet
CoreKeeper - requested by @tiphae
Last Oasis - requested by @Bryo
V-Rising - requested by @darkslyde
DayZ - requested by @Wadzwigidy
Frozen Flame - requested by @Mefesto
Creativerse - requested by @RazorX
Euro Truck Simulator 2 - requested by @OPK-Desperado
American Truck Simulator
Operation: Harsh Doorstop
Astroneer (Experimental)
Sons Of The Forest
Subsistence - requested by @Maverick38344
Icarus - requested by @Patrick_W
Stormworks - requested by GitHub user blackwellj
Life is Feudal: Your Own - requested by @Atreja
The Front - requested by @domrockt
ARK Survival Ascended - requested by @nicknick923
Palworld - requested by @evakq8r
Other DedicatedServers:
FiveM - requested by @unstatic (GTA V Modifications Server)
Teeworlds (2D Shooter)
Factorio (construction and management simulation)
Terraria - requested by @TeeNoodle (Pixel Survival)
Terraria & TShock MOD - requested by @TeeNoodle (Pixel Survival with security built in)
OpenTTD - requested by @mikeydk (Business Simulation)
CounterStrike 2D (2D Top Down Shooter)
Minecraft Basic Server (runs nearly every edition of Minecraft without GUI)
Planetary Annihilation - requested by @InventedStic (RTS)
Minecraft Bedrock Edition Server - requested by @kronflux (Crafting/Survival/Building)
Xonotic - requested by @CaffeinatedTech (Free Arena Shooter)
OpenRCT2 (Theme Park building and managing simulation)
CSMM for 7DtD (a powerful server managing/monitoring tool for 7 Days to Die) - requested by @Spectral Force
Altitude (2D Airplane combat) - requested by @Kudjo
Mindustry (Hybrid Tower Defense) - requested by @Mantiphex
Neverwinter Nights: Enhanced Edition - requested by @HellraiserOSU
Quake III Arena - requested by @Enigmatical
StarMade - requested by @Kirito
Assetto Corsa Competizione - requested by @Florian_GER
Vintage Story - requested by @JackPS9
Zandronum (Doom with multiplayer) - requested by @squelch
Windward - requested by @XFL
Urban Terror - requested by @Cessquill
OpenMW-TES3MP - Open Morrowind Multiplayer, a free and open-source engine recreation of the popular Bethesda Softworks game "The Elder Scrolls III: Morrowind" - requested by @OdinEidolon
RedM - requested by @Izuzf (Red Dead Redemption 2 Modifications Server)
DDNet - requested by @dyspandemic4832 (DDNet is an actively maintained version of DDRace, a Teeworlds modification with a unique cooperative gameplay)
ioquake3 - requested by @OctopusVPS
Luna Multiplayer - requested by @Mew (Kerbal Space Program multiplayer mod)
VEIN - requested by @Shadowcrit (Zombie Survival)
BeamNG MP Server (Multiplayer Server for BeamNG.drive)
Unreal Tournament 99 - requested by @StalkS (Multiplayer Server for Unreal Tournament 99)
    1 point
  40. SOLVED!!! So first I want to say I am a little disappointed in the forum. I've had one reply to my problem, and while I do want to thank @Squid for attempting to provide help, I'm a little disappointed that only one person was willing to provide assistance. However, thanks to Google, I found the problem. Bottom line up front: close all your GUIs... then play with VMs. So as a new user of unRAID, I have been very excited to get things set up. What that means is, I have a mini-PC at home (it was in charge of the NAS that I left to try unRAID), a MacBook Pro (still haven't successfully attempted to install macOS, though what I discovered might be one of the reasons), a Surface Pro, and an iPhone. When I was at home I would work on the MacBook and the Surface to get the server set up, but when I was at work I would bounce between my Surface and my iPhone whenever I had time. Fast forward a couple of days and you have my previous posts. Late last night I was looking through all my XML files, watching spaceinvader's videos for the 1000th time, trying to figure out what I had done wrong and why my Windows VM kept giving me the BSOD, all the while cussing at @alkiax for talking me into switching to unRAID. During all of my research I started copying logs like it was my job. I was monitoring this post waiting for someone to rescue me, and I wanted to be capable of providing as much research documentation as possible so they could tell me where I went wrong. So at one point I started looking at the logs for the server itself. It was glowing red with errors. So with all my unRAID expertise I went to work... you should start laughing at me now. I am not a programmer... not even a little. I've had no training other than my own fooling around.
However, if your screen is glowing red, you notice the word "preclear" (not available in Community Applications) everywhere, and you happen to know that you recently told the preclear plugin to update, well then the obvious conclusion is that there is a problem with preclear. Turns out, not so much! Once I removed the plugin the entire log was still red. In addition to seeing preclear a ton, I noticed another IP address. In a panic I didn't pay much attention to the address (if I had, I would have noticed quickly that it started with 192.168, which would have told me that it was on my side of the modem). So I stopped all my dockers... screen still red. So I stopped the array... screen still red. Finally I stopped all the plugins; at this point I was sure that I had downloaded something and was being hacked, because the logs were still glowing red and continuing to grow. Once again I called up my buddy @alkiax, and at this point I was ready to kill him; what had he gotten me into? So I did what every reasonable person would do: I grabbed a beer, opened Google, and started searching. My VM troubles were a thing of the past; at this point I wanted to save the server. Then I stumbled across a post where @gridrunner asked a question about a "wrong csrf token" message, which happened to be part of the glowing red error messages going across my log screen. Thankfully @itimpi had responded to gridrunner and told him that "The wrong csrf token message occurs when you have a browser window still open to the unRAID GUI from before the last boot of unRAID." I had two instances up and running! One on my Mac, the one I was troubleshooting with, and one on my Surface, the one I was watching the logs with. That got me thinking; I checked my phone, and sure enough it was on the GUI, and my other PC, the one I remoted into while at work, was on the GUI too. With fingers crossed I closed all the GUIs, save one. The one remaining GUI I used to tell the server to reboot, and then I closed it as well.
Once the server restarted I logged back into the GUI (on just one computer this time) and sure enough the logs were clean. This got me thinking: I wonder if this was the problem with my VMs. One GUI shows they aren't started, one shows they are started, and the other shows that I just shut one down. I'm not sure how all the 1's and 0's talk to each other, but it was all I could hope for. So today after work I sat down and built another VM (I had deleted the others at some point last night while the logs were glowing red, because one of the scripts glowing red referred to the "VM"). I got ridiculous and made three copies: one of the VM the moment I built it, another after I started to update the drivers from the built-in Windows ones to the VirtIO drivers, and another after I had installed Splashtop desktop. I was a little worried I'd have another crash, but I am proud to report I have a VM that is up and running! I even managed to play a game on it for a little while. Tomorrow I will take one of the copies and pass through a USB controller, and if that works I'll pass through my GPU.
    1 point