Leaderboard

Popular Content

Showing content with the highest reputation on 06/26/22 in all areas

  1. I've been waiting for things to settle down, and it looks as though the dust finally has. I've never been one to trust any software release that ends in ".0", especially on a system I rely on daily. I have a few days off, so I decided it was finally time to update from 6.9.1 to 6.10.3. With recent flash and appdata backups, and all plugins and dockers updated, I updated from the remote GUI. And just like that, Malta-Tower was updated and back up and running. I did a restart (from the GUI), and my heart started racing when the system reported several of my drives missing. Then I noted they all shared the same HBA. Powered down and up, and all is happy; guess my HBA wanted that hard start to reinitialize. After a day of poking around, all is working great. Thanks to all at LT and the folks who worked through the issues with 6.10. Just dumb bad luck that the kernel bug was discovered at the same time this was rolled out. Stuff happens. It is how you deal with the stuff once you find you've stepped in it that matters. Some dev teams will insist nothing is wrong; you folks took the flak, stayed in communication with the community, and found the needed workarounds. Kudos to all.
    3 points
  2. I've already had a look at it, and what you're doing with it is really impressive 😅 All the automation for VMs etc., I'm still far away from that 😅 In any case, thank you all for all the helpful replies. Unraid really has a great community 😉
    2 points
  3. Just did the update and downloaded the recommended driver and it works great! Thanks!
    2 points
  4. Summary: Support for Goobaroo game server docker containers, primarily modded Minecraft servers. DockerHub: https://hub.docker.com/repositories/goobaroo
I wanted to produce server docker containers that are upgradable and self-installing directly from CurseForge and FTB. Modded Minecraft is for the Java Edition only; there are no modded servers for the Bedrock/Windows 10 version of Minecraft. I'm just getting started, but wanted to share what I have so far.
Current Available Servers:
All The Mods 7 - https://www.curseforge.com/minecraft/modpacks/all-the-mods-7
All the Mods 7 To the Sky - https://www.curseforge.com/minecraft/modpacks/all-the-mods-7-to-the-sky
All the Mods 8 - https://www.curseforge.com/minecraft/modpacks/all-the-mods-8
Create: Above and Beyond 1.3 - https://www.curseforge.com/minecraft/modpacks/create-above-and-beyond
Enigmatica 6 v0.5.21 - https://www.curseforge.com/minecraft/modpacks/enigmatica6
FTB Inferno - https://www.feed-the-beast.com/modpacks/99-ftb-inferno
FTB Infinity Evolved 1.7 v3.1.0 - https://feed-the-beast.com/modpack/23_ftb_infinity_evolved_1_7/versions
FTB OceanBlock v1.12.0 - https://www.feed-the-beast.com/modpack/ftb_oceanblock
FTB Presents Direwolf20 1.18 v1.4.1 - https://feed-the-beast.com/modpack/ftb_presents_direwolf20_1_18
FTB Skyfactory 2.5 v2.5.8 - https://feed-the-beast.com/modpack/ftb_presents_skyfactory_2_5
FTB Skyfactory 3 v3.0.21 - https://feed-the-beast.com/modpack/ftb_presents_skyfactory_3
FTB Stoneblock 2 v1.21.1 - https://www.feed-the-beast.com/modpack/4_ftb_presents_stoneblock_2/server-files
Note that I had to install Garden of Glass manually when installing the client through ATLauncher. I also had to enable InfiniteInvo-1.0.52, which was listed under Optional Mods.
Pixelmon v9.0.2 - https://reforged.gg
SevTech Ages - https://www.curseforge.com/minecraft/modpacks/sevtech-ages
Sky Factory 4 v4.2.4 - https://www.curseforge.com/minecraft/modpacks/skyfactory-4
Sky Factory One v1.0.4 - https://www.curseforge.com/minecraft/modpacks/skyfactory-one
StoneBlock 3 v1.0.0 - https://feed-the-beast.com/modpacks/100-ftb-stoneblock-3
Vault Hunters 1.12.4 - https://vaulthunters.gg
Vault Hunters 3rd Edition 1.18.2 - https://vaulthunters.gg
RLCraft 1.12.2 - Release v2.9.1c - https://www.curseforge.com/minecraft/modpacks/rlcraft
I'm trying to keep the servers as consistent as possible. Common options include:
Runs on port 25565 (see below for changing it)
OPS: lets you set the list of users with Operator privileges
JVM_OPTS: tweak memory to suit your needs; the defaults are the recommended values
EULA: needs to be set to true (defaults to false). This is for the Mojang EULA available at https://account.mojang.com/documents/minecraft_eula - the server will not start without accepting it.
If you are having trouble installing, please make sure that the permissions for the /mnt/appdata directory are correct.
Important: Please update your containers with the latest version from today, Dec 13, 2021. These containers include the fix for the log4j exploit CVE-2021-44228, a remote code execution exploit. https://www.minecraft.net/en-us/article/important-message--security-vulnerability-java-edition
Changing the port: In order to run multiple servers you need to work within the Docker settings. Do not change server.properties at all; changing it will only break networking for the container, because it expects the default port of 25565. To use a different port, just change the PORT value in the Docker config in the Unraid UI and use the bridge network. It will then map the port you defined to your Unraid server IP, and you can port forward that publicly if you like.
Here are two servers running on my server. The column on the right shows the mapping of the port and IP inside the Docker network on 172.17.0.0 to the IP of my Unraid server and the mapped port. Alternatively, you can use the br0 network, give each Docker container its own IP on your internal network, and port forward to that instead. But changing server.properties will only break it.
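As a sketch, a second server on a different host port with bridge networking could be started like this (the image name, container paths, and exact variable spellings are assumptions; the PORT and EULA variables are the ones described above, so check the actual template on DockerHub before copying):

```shell
# Hypothetical second instance on host port 25566. server.properties stays
# at its default 25565; only the PORT value and the published port change.
docker run -d \
  --name atm7-second \
  --net=bridge \
  -e PORT=25566 \
  -e EULA=true \
  -p 25566:25566 \
  -v /mnt/user/appdata/atm7-second:/serverdata \
  goobaroo/allthemods7
```

In the Unraid UI the same effect is achieved by editing the PORT value and port mapping in the container template rather than running docker directly.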
    1 point
  5. A plugin designed to keep all (or selected) plugins / dockers up to date, with options for delayed upgrades. Formerly part of Community Applications, this module is now packaged separately. To install this plugin, go to the Apps tab and search for Auto Update. Note that if you are utilizing CA's Appdata Backup / Restore module, then with this module installed you can also tell the backup procedure to update the Docker apps at the same time as the backup occurs.
    1 point
  6. You’ll be fine. The polling for availability is over a long period. Server updates won’t be noticed.
    1 point
  7. There's no auto-update built into Crafty 4 as far as I can tell; you would have to upgrade manually. Sent from my CLT-L09 using Tapatalk
    1 point
  8. No idea if it's normal for Crafty 4. Keep in mind Crafty 4 is in a beta state right now; it could be a bug. Sent from my CLT-L09 using Tapatalk
    1 point
  9. On mobile now so haven't looked at diagnostics yet. Some things to consider. Nothing can move open files. Mover won't move duplicates. User shares won't show duplicates. Your screenshots are showing user share contents. You can also display disk contents in a similar manner. What do you see if you look there on cache?
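The disk-level check suggested above can be scripted: duplicates that the mover skips show up as the same relative path under both the cache and an array disk. A minimal sketch (the helper name is made up; pass the share roots, e.g. /mnt/cache/MyShare and /mnt/disk1/MyShare):

```shell
#!/bin/sh
# Print relative paths that exist under BOTH roots. The mover won't move
# such duplicates, and user shares hide them, so they must be resolved
# by hand at the disk level.
find_dupes() {
    src="$1"; dst="$2"
    find "$src" -type f | while read -r f; do
        rel="${f#"$src"/}"
        # Report the file if the same relative path also exists under dst.
        [ -f "$dst/$rel" ] && echo "$rel"
    done
}
```

Usage: `find_dupes /mnt/cache/MyShare /mnt/disk1/MyShare` for each array disk in turn.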
    1 point
  10. Only with the iGPU and 12th gen. Just sell it again... ^^ Well, I haven't assigned any cores or pinned anything on my system, and the default Linux scheduler handles it quite well. I'd rather advise you to wait another one or two generations; it's like with AMD and Ryzen, where the first two generations also had their "difficulties" and kept getting better. big.LITTLE has only just arrived on the desktop, and in most cases the software still has to be adapted to it... Phew, that puts you right at the limit, and in my opinion the drives definitely have to be idle... I'd definitely advise against bleeding-edge hardware on Linux...
    1 point
  11. @itimpi For shares we have the option to export especially for Time Machine: I prefer to use this option instead of an extra Docker if it isn't needed. So something is wrong in 6.10.X. Reinstalling macOS is not an option, as it works like a charm in 6.9.2.
    1 point
  12. I'm going to give this a shot, since there's nothing in 6.10.x that is beneficial to me. Will report back. Yep this worked 100%. All Macs, M1/Intel, are backing up successfully to preexisting and new backups on a single Time Machine Unraid share now that I've downgraded to 6.9.2 from 6.10.3. Unfortunate that this is necessary, hopefully @limetech will reopen these bugs and investigate further. So just to summarize: Unraid 6.10.3 uses smbd version 4.15.7, while Unraid 6.9.2 uses smbd version 4.12.14 (checked with "smbd --version") All other SMB-related configs seem identical, such as /etc/samba/smb-shares.conf, /etc/samba/smb-names.conf, /etc/samba/smb.conf, and *a lack of* /boot/config/smb-extra.conf (meaning no SMB Extras defined) Time Machine backups proceed normally on Unraid 6.9.2 from both Intel and M1 Macs running MacOS 12.4, while the same incremental backups fail on Unraid 6.10.3 with a Console log Mac-side along the lines of "Operation not supported by device" UserInfo={DIErrorVerboseInfo=Failed to initialize IO manager: Failed opening folder for entries reading} The below SMB Extras config added to 6.10.3 allowed some but not all Macs to back up. Removing it caused then-working Macs to start failing again. 
[Global]
vfs objects = catia fruit streams_xattr
fruit:nfs_aces = no
fruit:zero_file_id = yes
fruit:metadata = stream
fruit:encoding = native
spotlight backend = tracker
The below SMB share config is the Time Machine destination on 6.9.2 and 6.9.3, which houses ALL Macs' backups (despite the instruction in https://wiki.unraid.net/UnRAID_6/Configuring_Apple_Time_Machine to have one backup per share, which seems to be bad advice):
[TimeMachine]
path = /mnt/user/TimeMachine
comment =
browseable = yes
# Private
writeable = no
read list =
write list = backup
valid users = backup
vfs objects = catia fruit streams_xattr
fruit:time machine = yes
fruit:time machine max size = 1200000M
case sensitive = auto
preserve case = yes
short preserve case = yes
I also keep my Time Machine share constrained to a single disk, as I've had cross-disk issues in the distant past.
    1 point
  13. Ah great. No, I had indeed kept AppData Config Path, but it's true that I then had two "path" entries pointing to config; I didn't pay attention... So, as you indicated, I added a Path (or "chemin d'accès" in the French interface). And after restarting the container, the path shows up in Plex! Thank you very much for your help. Have a good rest of the weekend. 👍
    1 point
  14. save my sunday and give me the driver pleeeeeeeease ... just kidding 😂
    1 point
  15. Currently on sale (€80 for the 2 GB version): https://www.lenovo.com/de/de/accessories-and-monitors/graphics-cards/graphics-cards/GRAPHIC-BO-NV-T400-HP-Graphics-Card/p/4X61F99432?orgRef=https%3A%2F%2Fwww.mydealz.de%2F&clickid=QVWTy1SSexyIUZq0CzSaoUE-UkDz5bQJSSC8RM0&Program=3786&pid=121977&acid=ww%3Aaffiliate%3A74clty&cid=de%3Aaffiliate%3Axg02ds
    1 point
  16. Exactly, that's my test server, and the feedback is more for Limetech and thus for improving Unraid than for me personally. That's why I opened a bug report right away. Plenty of people use "powertop --auto-tune" or my command list in the go file, and they then simply have no idea why their server crashes. At least now I know why my production server crashed so often back when I had the 10G card installed and wanted to change the order of the cards etc. via the GUI. 🤪
    1 point
  17. The main issue was that the Valheim developers first messed up pretty badly: worlds in the worlds folder got deleted, and a new world was created within worlds_local. After they found out about that, they released an update that no longer deletes the old worlds but instead creates a new folder within the worlds_local folder. So for someone like me, who hadn't had the container running and then updated it, everything works, since Docker will move the world and the game will pick up the new location without creating a new worlds_local folder. Hope that explains it a little bit more. In your case, you are a user whose worlds folder doesn't get deleted, because you've pulled the newer update from Valheim, and a new world gets created instead. I have to say I won't change that again, since this is a one-time thing and users can move the files manually...
    1 point
  18. @kubed_zero All my hopes were pinned on the SMB Extras settings, but in my case it didn't work.
==== The Time Machine log after the last (working) backup under 6.9.2 (no special SMB Extras set) ====
2022-06-25 20:44:13rogress] .•••• .
2022-06-25 20:45:05redSizing] Skipping further sizing for finished volume xxxxxxxx
2022-06-25 20:45:05rogress] Finished copying from volume "xxxxxxxx - Daten"
2022-06-25 20:45:05Collection] Found 0 perfect clone families, 1 partial clone families. Shared space 9,1 MB (9080412 bytes).
2022-06-25 20:45:05al] Saved clone family cache for 'xxxxxxxx - Daten' at /Volumes/Time Machine-Backups/Backups.backupdb/xxxxxxxx/2022-06-25-203850.inProgress/xxxxxxxx/.xxxxxxxx.clonedb
2022-06-25 20:45:08ntDoneAnalysis] .•++**__-•••.
2022-06-25 20:45:08rogress] Time Estimates Evaluation
2022-06-25 20:45:08mages] Found disk2s2 xxxxxxxx
2022-06-25 20:45:09mages] Found disk2s2 xxxxxxxx
2022-06-25 20:45:09al] Unmounted '/Volumes/com.apple.TimeMachine.localsnapshots/Backups.backupdb/xxxxxxxx/2022-06-25-203848/xxxxxxxx - Daten'
2022-06-25 20:45:09al] Unmounted local snapshot: com.apple.TimeMachine.2022-06-25-203848.local at path: /Volumes/com.apple.TimeMachine.localsnapshots/Backups.backupdb/xxxxxxxx/2022-06-25-203848/xxxxxxxx - Daten source: xxxxxxxx - Daten
2022-06-25 20:45:09mages] Found disk2s2 xxxxxxxx
2022-06-25 20:45:09al] Unmounted '/Volumes/com.apple.TimeMachine.localsnapshots/Backups.backupdb/xxxxxxxx/2022-06-25-185038/xxxxxxxx - Daten'
2022-06-25 20:45:09mages] Found disk2s2 xxxxxxxx
2022-06-25 20:45:09al] Unmounted local snapshot: com.apple.TimeMachine.2022-06-25-185038.local at path: /Volumes/com.apple.TimeMachine.localsnapshots/Backups.backupdb/xxxxxxxx/2022-06-25-185038/xxxxxxxx - Daten source: xxxxxxxx - Daten
2022-06-25 20:45:13al] Marked as reference snapshot: com.apple.TimeMachine.2022-06-25-203848.local
2022-06-25 20:45:13al] Backup result: {
2022-06-25 20:45:13al] Completed backup: 2022-06-25-204509
2022-06-25 20:45:13al] Mountpoint '/Volumes/Time Machine-Backups' is still valid
2022-06-25 20:45:14al] Failed to mount 'disk1s3', dissenter {
2022-06-25 20:45:15al] Copying recovery system
2022-06-25 20:45:15al] Failed to copy recovery set, error: No such file or directory
2022-06-25 20:45:15pThinning] Thinning 1 backups using age-based thinning, expected free space: 1 TB actual free space: 1 TB trigger 50 GB thin 83,33 GB dates: (
2022-06-25 20:47:10al] Mountpoint '/Volumes/Time Machine-Backups' is still valid
2022-06-25 20:47:10mages] Found disk2s2 xxxxxxxx
2022-06-25 20:47:10mages] Found disk2s2 xxxxxxxx
2022-06-25 20:47:40ight] Spotlight finished indexing for '/Volumes/Time Machine-Backups'
==== The Time Machine log after the first (not working) backup attempt under 6.10.3 (SMB Extras set) ====
2022-06-25 21:06:05pScheduling] Not prioritizing backups with priority errors. lockState=0
2022-06-25 21:06:05al] Starting manual backup
2022-06-25 21:06:05ing] Attempting to mount 'smb://yyyyyyyy@unBackup._smb._tcp.local/unMachine'
2022-06-25 21:06:06ing] Mounted 'smb://yyyyyyyy@unBackup._smb._tcp.local/unMachine' at '/Volumes/.timemachine/unBackup._smb._tcp.local/yyyyyyyy/unMachine' (1 TB of 4 TB available)
2022-06-25 21:06:06al] Initial network volume parameters for 'unMachine' {disablePrimaryReconnect: 0, disableSecondaryReconnect: 0, reconnectTimeOut: 60, QoS: 0x0, attributes: 0x1C}
2022-06-25 21:06:06al] Configured network volume parameters for 'unMachine' {disablePrimaryReconnect: 1, disableSecondaryReconnect: 0, reconnectTimeOut: 30, QoS: 0x20, attributes: 0x1C}
2022-06-25 21:06:07Management] TMPowerState: 2
2022-06-25 21:06:07al] Skipping periodic backup verification due to power conditions: (null)
2022-06-25 21:06:07al] 'yyyyyyyy.backupbundle' does not need resizing - current logical size is 11,4 TB (11.397.019.360.768 bytes), size limit is 3,8 TB (3.798.891.797.913 bytes)
2022-06-25 21:06:07al] Mountpoint '/Volumes/.timemachine/unBackup._smb._tcp.local/yyyyyyyy/unMachine' is still valid
2022-06-25 21:06:07al] Checking for runtime corruption on '/Volumes/.timemachine/unBackup._smb._tcp.local/yyyyyyyy/unMachine/yyyyyyyy.backupbundle'
2022-06-25 21:06:45mages] Failed to attach using DiskImages2 to url '/Volumes/.timemachine/unBackup._smb._tcp.local/yyyyyyyy/unMachine/yyyyyyyy.backupbundle', error: Error Domain=NSPOSIXErrorDomain Code=19 "Operation not supported by device" UserInfo={DIErrorVerboseInfo=Failed to initialize IO manager: Failed opening folder for entries reading}
2022-06-25 21:06:45al] Failed to unmount '/Volumes/.timemachine/unBackup._smb._tcp.local/yyyyyyyy/unMachine', Disk Management error: {
2022-06-25 21:06:45al] Failed to unmount '/Volumes/.timemachine/unBackup._smb._tcp.local/yyyyyyyy/unMachine', error: Error Domain=com.apple.diskmanagement Code=0 "No error" UserInfo={NSDebugDescription=No error, NSLocalizedDescription=Kein Fehler.}
2022-06-25 21:06:45al] Waiting 60 seconds and trying again.
2022-06-25 21:06:45llation] Cancelling backup because volume '/Volumes/.timemachine/unBackup._smb._tcp.local/yyyyyyyy/unMachine' was unmounted.
2022-06-25 21:06:45llation] Requested backup cancellation or termination
2022-06-25 21:06:46llation] Backup cancelled (22: BACKUP_CANCELED)
2022-06-25 21:06:46al] Failed to unmount '/Volumes/.timemachine/unBackup._smb._tcp.local/yyyyyyyy/unMachine', Disk Management error: {
2022-06-25 21:06:46al] Failed to unmount '/Volumes/.timemachine/unBackup._smb._tcp.local/yyyyyyyy/unMachine', error: Error Domain=com.apple.diskmanagement Code=0 "No error" UserInfo={NSDebugDescription=No error, NSLocalizedDescription=Kein Fehler.}
2022-06-25 21:06:46llation] Cleared pending cancellation request
==== The Time Machine log after the first (working) backup after downgrading back to 6.9.2 (SMB Extras not set) ====
2022-06-26 08:40:52rogress] .•••••• .
2022-06-26 08:41:01ight] Spotlight indexing queue is full (256 depth, 0 operations overflowed)
2022-06-26 08:41:27ight] Spotlight indexing queue is empty.
2022-06-26 08:41:56rogress] Finished copying from volume "zzzzzzzzz - Daten"
2022-06-26 08:41:56Collection] Found 6 perfect clone families, 1 partial clone families. Shared space 11,9 MB (11882900 bytes).
2022-06-26 08:41:59al] Saved clone family cache for 'zzzzzzzzz - Daten' at /Volumes/Time Machine-Backups/Backups.backupdb/zzzzzzzzz/2022-06-26-080502.inProgress/zzzzzzzzz/.zzzzzzzzz
2022-06-26 08:42:19ntDoneAnalysis] .•+______-••.
2022-06-26 08:42:19rogress] Time Estimates Evaluation
2022-06-26 08:42:19mages] Found disk2s2 zzzzzzzzz
2022-06-26 08:42:20al] Unmounted '/Volumes/com.apple.TimeMachine.localsnapshots/Backups.backupdb/SaberBookPro/2022-06-26-080500/zzzzzzzzz - Daten'
2022-06-26 08:42:20mages] Found disk2s2 zzzzzzzzz
2022-06-26 08:42:20al] Unmounted local snapshot: com.apple.TimeMachine.2022-06-26-080500.local at path: /Volumes/com.apple.TimeMachine.localsnapshots/Backups.backupdb/zzzzzzzzz/2022-06-26-080500/SaberBookPro - Daten source: zzzzzzzzz - Daten
2022-06-26 08:42:20mages] Found disk2s2 zzzzzzzzz
2022-06-26 08:42:20al] Unmounted '/Volumes/com.apple.TimeMachine.localsnapshots/Backups.backupdb/zzzzzzzzz/2022-06-25-203848/zzzzzzzzz - Daten'
2022-06-26 08:42:20mages] Found disk2s2 zzzzzzzzz
2022-06-26 08:42:20al] Unmounted local snapshot: com.apple.TimeMachine.2022-06-25-203848.local at path: /Volumes/com.apple.TimeMachine.localsnapshots/Backups.backupdb/zzzzzzzzz/2022-06-25-203848/zzzzzzzzz - Daten source: zzzzzzzzz - Daten
2022-06-26 08:42:31al] Marked as reference snapshot: com.apple.TimeMachine.2022-06-26-080500.local
2022-06-26 08:42:31al] Backup result: {
2022-06-26 08:42:31al] Completed backup: 2022-06-26-084227
2022-06-26 08:42:31al] Mountpoint '/Volumes/Time Machine-Backups' is still valid
2022-06-26 08:42:33al] Copying recovery system
2022-06-26 08:42:33al] Failed to copy recovery set, error: No such file or directory
2022-06-26 08:42:33pThinning] Thinning 2 backups using age-based thinning, expected free space: 993,42 GB actual free space: 994,17 GB trigger 50 GB thin 83,33 GB dates: (
2022-06-26 08:53:01al] Mountpoint '/Volumes/Time Machine-Backups' is still valid
2022-06-26 08:53:02mages] Found disk2s2 zzzzzzzzz
2022-06-26 08:53:02mages] Found disk2s2 zzzzzzzzz
2022-06-26 08:53:32ight] Spotlight finished indexing for '/Volumes/Time Machine-Backups'
2022-06-26 08:53:42al] Unmounted '/Volumes/Time Machine-Backups'
2022-06-26 08:53:45al] Unmounted '/Volumes/.timemachine/unBackup._smb._tcp.local/zzzzzzzzz/unMachine'
Under 6.9.2 everything is still fine. Right after the upgrade to 6.10.X the backup jobs stop working; directly after the downgrade to 6.9.2 the job starts without issues.
    1 point
  19. Hi, I just got Valheim+ running again. I was still affected by the old world not being picked up, and a new world had been generated since. In start-server.sh I noticed the following block, which causes the old "worlds" folder to be moved inside worlds_local as a subfolder "worlds_local/worlds/". So to fix it, you would want to move the contents of the original worlds folder rather than the directory itself, and if the server was already started once with that shell script: the worlds folders should then be empty and can be deleted. All contents should sit directly inside worlds_local, without subdirectories. Thanks for your work, and I hope you keep supporting it; it's the first time I had to do something manually!
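The manual fix described above can be sketched as a small shell snippet (the server data root is an assumption; adjust it to wherever your container maps the Valheim save directory):

```shell
#!/bin/sh
# Hypothetical server data root - adjust to your container's mapping.
SAVE_DIR="${SAVE_DIR:-/serverdata/serverfiles/.config/unity3d/IronGate/Valheim}"

# Move world files out of the accidentally nested worlds_local/worlds/
# directory so they sit directly in worlds_local/, then drop the empty dir.
flatten_worlds() {
    nested="$1/worlds_local/worlds"
    target="$1/worlds_local"
    if [ -d "$nested" ]; then
        # Move every file (the .db/.fwl world pairs) up one level.
        mv "$nested"/* "$target"/ 2>/dev/null
        # Remove the now-empty nested directory.
        rmdir "$nested"
    fi
}

flatten_worlds "$SAVE_DIR"
```

Stop the server before moving files, and keep a copy of the world files until it has successfully loaded the old world.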
    1 point
  20. Now I think I understand your problem. Krusader in Docker doesn't use SMB but the direct mapping of Docker volumes. You simply hadn't added the path, as @ich777 showed above, because "normally" all "shares" live under /mnt/user... but your ZFS volume doesn't. That has nothing to do with SMB. If another client on the network is supposed to see the ZFS volume as an SMB share, then the adjustment in smb.conf is needed. So these are two things you may be mixing up in your head...
    1 point
  21. Solved: Cables checked and all working fine now (parity build at 20% and going strong with no errors)
    1 point
  22. Hi @ich777 First of all, big thanks for your version of Jellyfin. I use it on my new small Unraid media server built on a MinisForum HM90 mini PC. It uses a Renoir Zen 2 based Ryzen 9 4900H with Radeon RX Vega 8 Graphics. After some issues with your default port mappings 8096 > 8096 and 8920 > 8920, I now have access to the WebUI; I had to adjust the ports to 18096 and 18920 to get access. Not sure what's wrong with the default mappings and my network config. I tried bridge, br0 and a custom network, same problem on all of them with the default ports. I installed the "Radeon TOP" and "GPU statistics" plugins to check the stats on the dashboard and with "radeontop" via the terminal. GPU statistics 4K source: radeontop idle: radeontop transcoding 4K source file: It looks like GPU transcoding is working even without the adjustment of the go file. 👍 Jellyfin statistics shows the transcode running at 65 fps. I have also installed the "Dynamix System Temperature" and your "Nuvoton NCT6687 Driver" plugin. Unfortunately the temperature plugin doesn't find any drivers, so I have no fan or CPU temp monitoring available. Any ideas what I'm missing? If I should test some stuff for you, please let me know. Maybe you can add the 4900H to the list of supported hardware. 😉
    1 point
  23. I've disabled and re-enabled bridging and now it works
    1 point
  24. It seems to have been a temperature issue. The 11600K (60-70°C) runs a lot hotter than the previous 1900X (50-60°C) while consuming the same amount of power. I had left the RPM of my Noctua NF-A14s at the same 600RPM I used in the old Threadripper system and had removed the second NH-D15 fan. Adding the second fan back in and increasing the fan speed to 700RPM seems to have fixed the issue.
    1 point
  25. On the Dashboard there should be a yellow thumbs-down behind the disk. Click on it and then on "acknowledge" (no idea how that was translated into German). The current state is then saved as OK, and you'll only get another warning when something changes again.
    1 point
  26. I don't think anyone has managed to pass through an iGPU yet, so it's unlikely that will work. How is it transcoding at the moment? Are the streams hardware-accelerated? Alturismo gets to about 60 W at idle with two GPUs, a 10th-gen Intel and a few tweaks: I doubt you can achieve that with Alder Lake. At least the measurements mentioned here aren't really convincing:
    1 point
  27. I know. That's why I wrote, implicitly: with Alder Lake, either the P2000 with 8-20 W of additional consumption, or an i5-10600 whose iGPU can be used for transcoding. On top of that, if he ever really wants to pass a GPU through to a VM, then with Alder Lake (if he uses the P2000) it's either a VM with the P2000 OR transcoding with the P2000. With the i5-10600, transcoding via the iGPU AND a VM with the P2000 both work. Regards, Joerg
    1 point
  28. So Alder Lake is not yet fully supported on Unraid 6.10.3, or rather the iGPU isn't, since Intel was really late this time and really messed up the Linux drivers... Also keep in mind that in the case of Plex, since they use a heavily customized version of FFmpeg, the server will most likely crash in combination with Alder Lake. It has already been announced that this is fixed for the bare-metal installation, but not yet for the Docker container. If you use Emby or Jellyfin, you shouldn't have any problems. May I ask why you want a Nvidia P2000? How many simultaneous streams do you expect at most? I'd rather recommend a Nvidia T400 or similar: you can get one new for around €130, it's Turing-based, needs no extra power connector, draws at most 35 W, and should drop to 1-2 W at idle once it's in P8. Passing an iGPU through to a VM has always been problematic, at least from my point of view... The efficiency and performance cores work flawlessly under Linux, which would also open up very interesting use cases, for example assigning all efficiency cores to Unraid and the performance cores to the VM(s); but that would only pay off if you really have something bigger planned. In my case I have an i5-10600 and am really very happy with it; it has handled everything so far without problems:
    1 point
  29. I'd prefer a 10600 (or even a 10700) if you don't also want to pass the graphics card through to VMs. By the way, if you use the graphics card for a VM and pass it through, it is no longer available for HW transcoding. The graphics card will consume quite a few watts at idle. I looked it up: they talk about an additional 8-20 W at idle for the P2000. It depends on whether you want to allow yourself that extra power draw; that's as much as the rest of the server hardware (assuming good components) at idle. My ASRock B460 board needs about 13 W at idle with 6 drives, just as an example; the W480 board with 8 drives about 16 W (and it ran at about 40 W with an AMD RX580). In case you do take the 12600K, please also check again whether Unraid really supports the efficiency cores that the K models have. I'm not up to date on that; it may work from 6.10 on, but I'm not sure.
    1 point
  30. Did something change with OpenRCT2? I'm trying to load a new save file for my server and it's telling me it's unable to open the save files. I noticed the save game files are now .park instead of .sv6. Would that have anything to do with it?
    1 point
  31. Not really. Pretty sure it’s all automated though. Once the arch build is released the docker image will grab it.
    1 point
  32. If I am looking at this right, SSH access is turned off in Settings -> Management Access
    1 point
  33. It stops working at exactly the point where I did the upgrade from 6.9.2 to 6.10.0. Reinstalling macOS can't be the solution. The issue is not on the Mac side; look at @mok's comment.
    1 point
  34. Unraid 6.10 is out, could you please update the hardening for samba 4.15? Thanks
    1 point
  35. For anyone stumbling onto this now or in the future: the --shrink argument was added to the command. Go read this comment:
    1 point
  36. Yes, the "Proceed" button is still visible but disabled; nothing will happen when you press it again. Clicking 'X' closes the dialog and lets the user navigate the GUI as normal, while the running operation continues in the background (even when the browser is closed). When the user revisits the File Manager, it re-opens the dialog with the current status; this even works when the File Manager is opened on a different PC/browser. Clicking 'Cancel' terminates the running operation; keep in mind that this may leave an incomplete operation, e.g. canceling a move operation halfway will result in some files being moved but not all.
    1 point
  37. My Servers has several features, one of which is Remote Access. You can read about this and other features here: https://wiki.unraid.net/My_Servers Regarding Remote Access and DNS rebinding... DNS rebinding protection prevents your system from resolving a real domain name to a private IP. If you want to use full and proper SSL on a private IP, it has to be disabled. On high-end routers you can disable it for specific domains like unraid.net or plex.direct; on other routers it is an all-or-nothing switch. Our Remote Access solution requires an unraid.net certificate from Let's Encrypt. In Unraid 6.9.2 this means you also have to use an unraid.net certificate for local access, and thus you have to disable DNS rebinding protection. In Unraid 6.10 this is not a requirement, and you can use an unraid.net certificate for Remote Access while using http or a self-signed certificate for Local Access (so no need to disable DNS rebinding protection). Please see this wiki page for more information: https://wiki.unraid.net/My_Servers#Configuring_Remote_Access_.28optional.29 As mentioned on that wiki, you need to have a complex root password. Unraid does have protections built in to guard against brute force attacks, but they won't help if your password is "password". Also from the wiki: Remote Access gives you access to the Unraid webgui. If you want access to docker containers or other devices on the network, then you want to look at setting up a WireGuard VPN instead:
    1 point
  38. Both settings use the same "virtio-net-pci" device, as "virtio-net" is only an alias:
qemu -device help
...
name "virtio-net-pci", bus PCI, alias "virtio-net"
The only difference is that the slower "virtio-net" setting removes the "vhost=on" flag (open the VM logs to see this setting):
virtio-net:
-netdev tap,fd=33,id=hostnet0 \
-device virtio-net,netdev=hostnet0,id=net0,mac=52:54:00:99:b8:93,bus=pci.0,addr=0x3 \
virtio:
-netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:99:b8:93,bus=pci.0,addr=0x3 \
And it's absolutely logical that this causes bad performance for the "virtio-net" setting, as QEMU then creates an additional "virtio-net device": https://www.usenix.org/sites/default/files/conference/protected-files/srecon20americas_slides_krosnov.pdf Instead of sharing the memory with the host: A good write-up can be found here: https://insujang.github.io/2021-03-15/virtio-and-vhost-architecture-part-2/ And now we understand the help text as well: Not sure about the stability claim, but if the guest supports it, I would use "virtio", which enables vhost. As I think the names of the adapters are confusing, I opened a bug report:
    1 point
  39. OK, so I figured it out thanks to this post on the forum: what I needed to do was add --shrink between "qemu-img resize" and the .img you wish to change. So I needed to run "qemu-img resize --shrink Server_2016.img 30G". Of course, the .img name will be different from what I have, as that is the name of my VM.
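A hedged sketch of the full sequence (the image name is the one from the post; shut the VM down first, shrink the guest's partitions/filesystem beforehand, and keep a backup, because resizing below the data actually used destroys it):

```shell
# Inspect the current virtual size before touching anything.
qemu-img info Server_2016.img

# Shrink the image. The --shrink flag is required whenever qemu-img resize
# would reduce (rather than grow) a disk image.
qemu-img resize --shrink Server_2016.img 30G
```

Growing an image needs no flag; only shrinking requires the explicit --shrink confirmation.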
    1 point
  40. Now I did find the C-states settings in the BIOS after all; they were somewhat hidden in a submenu. With these settings and Ubuntu it already looks much better. A short summary of my BIOS changes: With Ubuntu 20.10 Live I get down to only 7-8 watts at idle. Amazing!!! C3_ACPI is used very frequently by Ubuntu, which is presumably the reason for the very low power draw. Powertop under Ubuntu also shows the GPU, which is not the case in Unraid with the same BIOS settings. When the screensaver kicks in under Ubuntu, power consumption drops from about 8.5 W to 7.5 W (-1 W). With the same BIOS settings, Unraid draws 4-5 W more at idle than Ubuntu, and the GPU doesn't show up in powertop under Unraid either. So somehow it must be down to Unraid (kernel? drivers?); otherwise I can't explain the difference. How can I reproduce this result on Unraid?
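To compare C-state usage between Ubuntu and Unraid without powertop, the standard Linux cpuidle sysfs interface can be read directly; a minimal sketch (the helper name is made up, and the path argument exists mainly so it can be pointed at a different CPU):

```shell
#!/bin/sh
# Print each available C-state of a CPU together with its cumulative
# residency time in microseconds, as exposed by the cpuidle driver under
# /sys/devices/system/cpu/cpuN/cpuidle/state*/.
list_cstates() {
    base="${1:-/sys/devices/system/cpu/cpu0/cpuidle}"
    for d in "$base"/state*; do
        [ -d "$d" ] || continue
        printf '%s: %s us\n' "$(cat "$d/name")" "$(cat "$d/time")"
    done
}
```

Running `list_cstates` on both systems shows whether deep states like C3_ACPI are actually being entered on Unraid, or whether the CPU is stuck in shallow states.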
    1 point
  41. I was able to resolve this issue by re-installing macOS 12.4, deleting the existing TM backup and creating a new one. It appears that the problem wasn't related to unRAID.
    0 points