Leaderboard

Popular Content

Showing content with the highest reputation on 07/02/22 in Posts

  1. Hello, I already presented the hardware of my two servers in this thread: Klick. A few weeks ago I happened to see a 3D print model for a so-called "Mac Trashcan" case; the name comes from a Mac Pro that had this design. I spent a long time wondering whether I could put it to use. A small Hackintosh would have made the most sense, but first, I have enough of those, and second, ITX boards are currently expensive even on the used market. Then it occurred to me that my "J4105 always-on server" has the ITX form factor, and that the Fractal Node 304 is awfully big when you only run 3-4 SSDs in the server. On top of that, the server doesn't get hot enough to deform PLA.

So I decided on the spot... then noticed that my 3D printer's build plate was too small, and bought an Ender 3 Pro for 159 euros (220x220x250). I then had two weeks of fun improving it: Raspberry Pi + cam running Mainsail, Klipper firmware flashed onto the printer, standard parameters, extruder, pressure advance and input shaper tuned, and off the 3D print went at 80-100 mm/s.

I ordered a few more small parts:
- M3 screws and nuts, 5 euros
- a few connectors, cables and push buttons (5 euros)
- a 120 mm fan, 10 euros
The filament cost about 7-10 euros.

And this is what came out of it. I hope some of you like it (dimensions: 190mm x 190mm x 260mm, and I see I still have quite a bit of deburring to do on the lid and the backplate shell part; I assembled it quickly last night to keep the server's downtime short). I printed the shell and the lid at 0.2 mm resolution and 100 mm/s, and for that speed they are very clean. You can see a few rough spots on the inside of the lid, though; they come from printing without support material, but they can still be deburred.
Underneath sits a 120 mm/25 mm fan, but I will probably disconnect it again once I see that the temperatures don't reach 50 °C, which is very likely. At the moment I control it via Dynamix Fan Control and @ich777's Nuvoton driver package. The pretty red button next to the PicoPSU connector is the power switch: Regards, Joerg. PS: You can find the model and the *.stl files on Thingiverse.
    3 points
  2. That's settled; I've now solved it differently. I use a template from theme.park and adapted everything the way I wanted. For anyone interested, here is the CSS for Theme Engine: https://pastebin.com/gE2Mh0ir I put the theme.park CSS together so that it runs completely offline, without a server. Just copy and paste it into the Theme Engine. If you would like the green balls next to the drives in blue instead of green, you can change that yourself via the Theme Engine; the hex code for it is #0062aa (dark blue). A few pictures attached. Many thanks to theme.park for the good foundation; without it I would not have managed this theme. Best regards, Patrick
    2 points
  3. In this blog series, we want to put a spotlight on key community members to get to know them a little better and recognize them for all that they've done for the Unraid community over the years. This next blog features two outstanding community members who have helped out countless new Unraid users over the years: @JorgeB and @ich777 https://unraid.net/blog/rockstars-jorgeb-ich777 If either have helped you out over the years here, please consider buying them a beer or lunch! JorgeB ich777
    1 point
  4. So, possibly. They both could try to access and encode the same video at the same time but the worst that would happen is one would fail with a read error. I have 4 instances on my unraid box. 2 for HD and 2 for 4K and split the folders like so: /mnt/user/MKHB/watch_hd_1 /mnt/user/MKHB/watch_hd_2 /mnt/user/MKHB/watch_uhd_1 /mnt/user/MKHB/watch_uhd_2 When you go to do the test if you stop the primary docker and only have the test one running you should be fine. P.S. I only run two of those at a time and each is pinned to half the CPU so they won't ever step on each other.
    1 point
  5. I've been agonizing over this choice for weeks, unable to pick between the two file systems for my main array (I'm already decided on going dual raid 1e for the SSD cache/light VMs). I have found general answers to many of the questions I have about the choice, but there are still some that just don't seem to be answered and that I'd like answered, hopefully definitively.

What questions do I have?

1. When a file is corrupted on a BTRFS file system, is it unable to be transferred? I feel like I have read something like this a few times, that there is an error, but also that there might be a way around it if, for instance, you don't have another good copy of the file, like if it is unsafe or between backups. Is anyone familiar with this, perhaps with a link to a relevant resource on the topic specifically?

2. I've read multiple times that BTRFS doesn't have a working FSCK. Apparently, though, this isn't actually a problem because it has a replacement in btrfs check. Is my understanding there more or less correct?

3. While BTRFS can detect errors thanks to checksumming, it has no way of fixing those errors in Unraid, specifically because Unraid uses it on single disks inside the Unraid array, so the self-healing features are not feasible. Luckily, from what I can tell, BTRFS makes it relatively easy to figure out which files had errors so a user can potentially restore them from backups. Is there any problem with the assertions I've just made?

4. The write-hole issues with BTRFS mostly come into play with sudden ungraceful shutdowns, due to a lack of proper atomicity. Also, it's now really just a concern for the RAID 5/6 implementations, which Unraid does not use in favour of its own proprietary solution. Is this correct (this one seems really important and related to my first question)? I'm sure that having a UPS (which I do have) helps, but even so, ungraceful shutdowns can happen for many ungraceful reasons.

Why should/shouldn't I use BTRFS (something I'll answer a bit myself)? OK, so why do I care about BTRFS, and what benefits could it bring me to warrant all the fuss and agonizing? The main things are:
Data integrity: I at least know when a file is bad, even though I may not be able to fix it and will have to get a copy.
Snapshots: something to protect me from user error once set up, which feels a bit lacking in general.

Both of these points come with large asterisks, of course. For data integrity, or at least more data integrity rather than less: I'm not even running ECC, and quite frankly, if the data integrity of a regular PC has been good enough for me so far, I probably don't have data so susceptible to corruption that I would notice a flipped bit or two. The data I'm storing is mostly media and some backups of other systems anyway. On top of that, as I've read (I think on this forum), a likely source of data corruption on a modern system is RAM, so while this data integrity feature is a nice thing to have, it may not be all that I hope it is, at least unless I decide to upgrade to ECC in the future.

For snapshots, as far as I can tell, figuring out how to set them up correctly will take a lot of time and effort (I'm no fan of running someone else's long script unless I've read through it and fully understand what it's doing, and even then I'd prefer knowing how to make my own rather than simply trusting my assumptions). This is why, if I'm honest with myself, this feature would sit unused until, hopefully, at some point in the future, Unraid natively supports snapshots.

OK, that was a lot, and I know posting too much leads to people losing interest in answering, so I'll stop right there. Hopefully I've included enough detail, and enough evidence that I've put in the legwork, that my questions will get answered and I can stop worrying. Oh, and also, here is a previous post with my rough specs, if anyone feels they are relevant to my post.

TL;DR: I basically have four BTRFS-related questions that have been holding me back from configuring my new server. Thanks in advance.
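On the error-detection question: on a single-device btrfs disk you can surface checksum errors from the command line. A minimal sketch, assuming an array disk mounted at /mnt/disk1 (the path is hypothetical; adjust to your disk):

```shell
# Read every block and verify checksums; on a single-device filesystem
# errors are detected and reported, not repaired (-B = run in foreground)
btrfs scrub start -B /mnt/disk1

# Show the per-device error counters (read/write/corruption) accumulated so far
btrfs device stats /mnt/disk1
```

The names of affected files are logged to the syslog, which is how you would know what to restore from backup.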
    1 point
  6. With VPN off there are no iptables rules in place, but with VPN enabled there are very strict iptables rules in place to prevent leaking.
    1 point
  7. You got it wrong this time too, it should be 192.168.178.0
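For reference, the network address of a /24 subnet is the IP with the host bits zeroed, i.e. the last octet set to 0. A quick sketch (the host address is made up):

```shell
ip="192.168.178.42"     # any host on the 192.168.178.x /24 network
network="${ip%.*}.0"    # zero the last octet to get the network address
echo "$network"         # prints 192.168.178.0
```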
    1 point
  8. If you have an active internet connection on boot, no.
    1 point
  9. A bit too blue-heavy for my taste, but pretty, and it looks like quite a bit of effort went into it! Well done! I'm planning something like that too at some point, but for now I'm still working on the basic build-out of my systems. Teletha Testarossa (1st system) and Shima Katase (2nd system) send their regards to your Raphtalia.
    1 point
  10. How did you manage to adapt everything locally? I still want to do that on my setup; then I'd also have a locally hosted themed login screen independent of the Unraid themes, like you.
    1 point
  11. Hi, yep, I've got it working now. I had to use PHY_TYPE_P1/P2 to set the physical type first, then reboot; after that you can set the link type via LINK_TYPE_P1/P2 and reboot again. If anyone comes across this: setting PHY_TYPE_P1/P2 to SGMII(3) seems to result in the best performance, though I still have some testing and tuning to do. You can use mstconfig -d 04:00.0 query to show the current settings and options as below. You may need to change "04:00.0" depending on what your Mellanox card reports via lspci.
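The sequence described in the post, sketched as commands. The PCI address 04:00.0 and the SGMII value 3 come from the post itself; the LINK_TYPE value below is an assumption, so confirm the valid values for your card with the query first:

```shell
# Show current settings and the allowed values for each parameter
mstconfig -d 04:00.0 query

# Step 1: set the physical type on both ports (3 = SGMII), then reboot
mstconfig -d 04:00.0 set PHY_TYPE_P1=3 PHY_TYPE_P2=3

# Step 2: after the reboot, set the link type on both ports, then reboot
# again (2 is assumed here to mean Ethernet; verify with `query`)
mstconfig -d 04:00.0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
```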
    1 point
  12. No, but Google did change the procedure: you must set up 2FA and provision a new app password. Previous passwords probably won't work. Already discussed here.
    1 point
  13. If anyone is interested, I found the solution here: https://hub.docker.com/r/zabbix/zabbix-server-mysql/ I was trying to add the variable AllowUnsupportedDbVersions=1 as per the documentation, but the actual variable to add for Docker is: ZBX_ALLOWUNSUPPORTEDDBVERSIONS=1 That solved the issue, and Zabbix is now working. I know I should not use an unsupported MariaDB version (or only at my own risk, as they say), but I thought updating my own post could help someone. Thanks!
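For anyone running the image directly rather than through an Unraid template, a minimal sketch of passing that variable on the command line (the DB host address and port mapping are illustrative, not from the post):

```shell
# DB_SERVER_HOST points at the MariaDB instance (address is hypothetical);
# ZBX_ALLOWUNSUPPORTEDDBVERSIONS=1 lets the server start against an
# unsupported MariaDB version, at your own risk
docker run -d --name zabbix-server-mysql \
  -e DB_SERVER_HOST=192.168.1.50 \
  -e ZBX_ALLOWUNSUPPORTEDDBVERSIONS=1 \
  -p 10051:10051 \
  zabbix/zabbix-server-mysql
```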
    1 point
  14. In the link you gave me, it says that overclocks are not recommended, and indeed my GPU was overclocked via an app on Windows. I deleted the app, and it doesn't seem to crash any more. I'll wait a bit to be sure, but if it solves the problem, I'll mark the topic as resolved. Thanks for your help.
    1 point
  15. @ich777 Thanks - I will try that on the next reboot - Currently running with a more up-to-date TBS card: TBS 6905 + TBS 6205 = no issues via TBS-OpenSource, Unraid 6.10.3.
    1 point
  16. I was able to figure out the issue, and I'm going to post it in both places in case anyone else runs into the same problem. For some reason the permissions for that share were messed up. I just had to go into the Unraid file manager, check the box next to my gameclips share, then hit the Permissions button at the bottom and change them all to read/write. The group and other permissions were both set to no access for some reason. https://imgur.com/a/knBhLY5
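The same fix can be applied from a terminal with chmod. A sketch against a throwaway directory standing in for the share (the real path would be something like /mnt/user/gameclips, which is not assumed here):

```shell
share=$(mktemp -d)                 # stand-in for the actual share path
touch "$share/clip.mp4"            # example file with default permissions
chmod -R u+rw,g+rw,o+rw "$share"   # grant read/write to owner, group, others
stat -c '%a' "$share/clip.mp4"     # file now shows 666 (rw for everyone)
```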
    1 point
  17. In your case this is really bad, because with this configuration the Mover will move the files from the Cache to the Array. You have two options: change the path in the template from /mnt/cache/... to /mnt/user/..., or set the share's Cache setting to Prefer or Only. I would also recommend that you remove the container, along with the minecraftbasicserver folder from your appdata directory, and then pull a fresh copy from the CA App with the changes applied from above, but please apply only one of the two.
    1 point
  18. The 6.11 builds of Unraid will have the new kernel natively. By then (based on what little I know), I would expect for HW transcoding to be totally working at that point.
    1 point
  19. As he said, no hardware transcoding *for now*. Given that working versions exist for Plex, the kernel, and ICR, it's just a matter of waiting until Unraid packages up and tests everything for the next release of Unraid. Obviously I can't guess when that will be, but I'd hazard a guess that they'll take this seriously and get the update ready ASAP.
    1 point
  20. Only if there is a subfolder for each container holding the various threads. I really can't imagine trying to sort out the chaos that would be created if the threads for Unraid's entire list of containers were randomly mixed together. As you said, it's bad enough having one thread per container; imagine possibly hundreds of threads referencing hundreds of different containers, all mixed together. It is definitely confusing for a newbie to be told to create a new thread for their general Unraid issue, and then be told in the same breath that they CAN'T create a new thread for their container issue and must use the existing thread. Fixing that would mean creating a new subfolder in the forum for each new container as it's created, and considering the bar for container creation is so low, it would mean creating a new subfolder in the forum every few days, most of which would only ever have one thread.
    1 point
  21. The case is high quality, but access to the HDDs is not great, and the airflow is said to be not so good for the HDDs either. Most people therefore prefer a normal tower like the Define R5. I think that's a good choice. The newer B365 is said to use a worse node for the chipset, so it would be less efficient. It would be interesting to know how good the B360M-A's power consumption is. I think that also explains why the Asrock B365 Pro4 doesn't achieve best-in-class consumption figures. That would also make the Asrock B360 Pro4 an option: besides 2x M.2 it also has 1x WiFi slot with PCIe, meaning you could later install a dual SATA card and have 8x SATA without wasting a PCIe slot. Do you absolutely want to buy the CPU new? At that age the used market is the obvious choice, and as DataCollector already said, you might even snag an i5 cheaply. You don't need the performance for what you have planned, but better to have it than to need it ^^ I would simply take whatever you can get cheap at the moment. Keep in mind that the 8100, 8300, 8350K, 9100 and 9350K also exist; at idle there's no difference between any of them.
    1 point
  22. This turned out to be my oversight. I was not exporting the share, which is effectively the on/off switch.
    1 point
  23. Fix for "ip via shim-br0": in my case br0 had previously been created manually with a docker command, which was probably the cause. My solution:
- delete the manually created bridge with `docker network rm br0`
- download the Unraid Server OS package from the official website, rename your current `/boot/config/network.cfg` to `/boot/config/network.cfg.bak`, and extract the package's `/boot/config/network.cfg` onto your Unraid to restore the default network configuration
- rename `/var/lib/docker/network/files/local-kv.db` to `/var/lib/docker/network/files/local-kv.db.bak`
- restart Docker; br0 is recreated automatically and everything works normally again
Tip: rename the files with a .bak suffix rather than deleting them outright.
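Run from an Unraid terminal, those steps look roughly like this. A sketch only: the extracted-package path below is a placeholder, and Unraid's rc.docker script is assumed for the restart:

```shell
# Delete the manually created custom bridge
docker network rm br0

# Keep the old config, then restore the stock network.cfg taken from the
# extracted official install package (path below is a placeholder)
mv /boot/config/network.cfg /boot/config/network.cfg.bak
cp /path/to/extracted-package/config/network.cfg /boot/config/network.cfg

# Move Docker's network database aside instead of deleting it
mv /var/lib/docker/network/files/local-kv.db \
   /var/lib/docker/network/files/local-kv.db.bak

# Restart Docker; br0 is recreated automatically with defaults
/etc/rc.d/rc.docker restart
```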
    1 point
  24. Multiple NICs on the same IPv4 network. In most cases, two network interfaces should not be assigned to the same network; TCP/IP networking isn't meant to work that way. Even if things seem to work now, you could end up with problems down the road. Go to Settings -> Network Settings and review each interface. Reconfigure as needed to ensure only one NIC is connected to a given network. If the NICs are set up with DHCP, a simpler solution would be to ensure that there is only one network cable plugged into your server, regardless of how many NICs there are. If you have any questions, or if the webgui doesn't show one of the NICs mentioned in the FCP warning, post the full FCP warning and your diagnostics (from Tools -> Diagnostics) to this thread https://forums.unraid.net/topic/47266-plugin-ca-fix-common-problems
    1 point
  25. OK, this was fun to work out: the setting is now disabled by default, as per the changelog below. Changelogs are our saviour! I'm sure there was good reasoning behind the change, but it has made a less than intuitive process even more difficult. Could we look at improving the process for creating a "duplicate" docker, @Squid? For those who stumble over the problem, this is how to re-enable the ability to create a duplicate docker container: the setting can be found on the settings of the CA page; change the Enable Reinstall Default setting from No (default) to Yes.
    1 point
  26. The drop-downs for changing boot devices are not actually working at this time. To do this manually, with the VM stopped, click the VM icon and select "Edit XML" from the drop-down. From there, locate the <disk> tags for your ISO and for your virtual disk:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source file='/mnt/cache/domains/RC2 Test 1/vdisk1.img'/>
  <target dev='hdc' bus='virtio'/>
  <boot order='1'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/mnt/user/isos/win10.iso'/>
  <target dev='hda' bus='ide'/>
  <readonly/>
  <boot order='2'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

Swap the "boot order" values: set the cdrom to 1 and the vdisk to 2. This will force the CDROM to be the bootable device.
    1 point