Leaderboard

Popular Content

Showing content with the highest reputation on 03/13/24 in all areas

  1. Thank you very much! That helped me a lot. I'm a newbie with Unraid and Docker etc., but with your tips I managed to get the thing running. Best regards, Dietmar
    3 points
  2. I went through the kernel modules built into the latest Unraid on my machine, and here are the relevant ones. You can see the Thunderbolt network service is there, and the thunderbolt-net .ko module is compiled in as well. In the case of this TB3 network adapter I don't know if it needs to load the net module or not, but it is there; you may have to load it manually (if it is necessary). It would be interesting to see whether it is actually plug and play. I have no USB4 ports to test.
alias tbsvc:knetworkp00000001v*r* thunderbolt_net
alias wmi:86CCFD48-205E-4A77-9C48-2021CBEDE341 intel_wmi_thunderbolt
alias pci:v*d*sv*sd*bc0Csc03i40* thunderbolt
In-kernel driver modules: thunderbolt-net.ko.xz, thunderbolt.ko.xz
I am not sure USB4 support is compiled into the kernel yet, so modprobe needs to know about the port and how to attach the module to it. I am unsure it will work with USB4, but it probably will with a traditional TB3 port. That is conjecture, though, because I didn't look at the kernel symbols and have no idea what the motherboard would report on plug-in. With all of that said, just because you CAN do it: if it's not supported by Limetech, you will be on your own.
    2 points
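    A minimal shell sketch of the manual module check and load described in the post above (assumes the module names shown there; output will vary by hardware):
        # See whether the Thunderbolt modules are already loaded
        lsmod | grep -i thunderbolt
        # Manually load the networking module if it is not (dashes and
        # underscores are interchangeable in module names)
        modprobe thunderbolt-net
        # Watch the kernel log for the adapter being picked up
        dmesg | grep -i thunderbolt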
  3. You are of course correct that SMB1 is problematic from a security standpoint at this time, but I don't think you are correct in stating this is something that needs to be fixed on the Android side. The most probable explanation so far in this thread is that this is a mis-implementation of Samba on Unraid, somehow. Android already has clients that can handle SMB 1, 2, and 3, and things were perfectly fine with SMB2+ on Unraid 6.9.2 but broke on 6.11.0+.
    2 points
  4. Most of this was already answered by @EliteGroup; my opinion: both ... The Win VM is quite easy to "scale", since the Win Server is "managed": check what is actually needed and assign resources accordingly. If only the DB services run there, the OS needs no more than the applications, and with the volume you have you don't need to be "frugal". Unraid and RAM, or rather Linux host and RAM: Unraid "caches" quite a lot and benefits accordingly; it really only becomes noticeable when the NAS side is actually used, which in your case is only a "side" effect. Read up on vm_dirty pages and co for that ... The OS itself is modest; after that it comes down to the applications: Docker, running plugins, LXC containers, ... VMs!!! and the rest is partly used for file caching. There is plenty of "info" about this on the net. That setup is basically fine; there is a reason SSDs are not recommended. TRIM is disabled, since it destroys the "parity" construct; that's the case whether you use parity or not, it just is ... Otherwise, many people run SSDs in the array these days, since the data there is mostly "static" anyway and the TRIM effect is not that relevant. You just have to know it and live with the fact that the SSDs could get somewhat slower over the years ... Whether I would put myself through a ZFS pool for that ... absolutely not, but it would again benefit from the RAM, since ZFS actively uses RAM. Just really read up properly on everything that comes with it, what ZFS can do and when it makes sense to use it ... Let me ask bluntly: how do you back up the VMs and databases (separately?), externally on a second device? Since this system runs "in the company" ... in production ... I would figure out how to get it back into production quickly in case of ...
    2 points
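    For the vm_dirty page tuning mentioned in the post above, a minimal sketch of how those knobs can be inspected and adjusted on a Linux host (the value shown is illustrative, not a recommendation):
        # Show the current dirty-page writeback thresholds
        sysctl vm.dirty_ratio vm.dirty_background_ratio
        # Example: start background writeback earlier (illustrative value)
        sysctl -w vm.dirty_background_ratio=5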
  5. After some discussion in the ASUS W680 board-specific thread, I want to find out whether other boards have the same problem with the current Unraid version (or rather the specific Linux kernel). What is the matter? If you have an IPMI/BMC that uses the ast driver in Unraid (the problem turned up with the ASPEED AST2600, but this could be universal to other BMCs as well) and want to use the iGPU from the CPU (e.g. for a VM or Docker), you run into the following situation:
- If no device (or dummy plug) is inserted, you can see the complete boot process of Unraid via the hardware VGA or via the KVM screen in your IPMI GUI, but you lose the iGPU in Unraid (e.g. when you want to use the Intel SR-IOV plugin). For me, it even crashes the boot process.
- If a device (or dummy plug) is inserted, you keep the iGPU in Unraid, but the VGA/KVM stops updating after the blue boot-loader screen of Unraid.
This seems to be an issue that has popped up (at least with the board from the thread) only AFTER Unraid 6.12.4 (so 6.12.5 and beyond). I have found this kernel commit, which seems to describe exactly this behavior and could correlate with the kernel updates in the relevant Unraid releases. To find out if this is a general issue or specific to this board, I would now like to find people with the following configuration:
- Updated Unraid to a release >6.12.4
- Use of an ASPEED BMC (or another BMC using the ast driver)
- Use of BMC VGA/KVM (with full output until the CLI login) AND a detected iGPU in Unraid (with or without dummy plug)
For reference: my setup is an ASUS Pro WS W680M-ACE SE with a 12600K. Multi-monitor (and the iGPU) is activated in the BIOS; BIOS settings don't seem to matter. I am currently on Unraid 6.12.8 (Linux kernel 6.1.74) and can use SR-IOV with a dummy plug, but KVM drops out after the bootloader. Other users with the exact same board report that they have KVM + iGPU/SR-IOV; however, they are still on 6.12.4 (Linux kernel 6.1.49). If you want to find out whether your BMC in Unraid is addressed with ast, you can run
lspci -v
Then look for your BMC; the last lines should tell you the required part. For me, the (relevant) output looks like this:
06:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 52) (prog-if 00 [VGA controller])
    Subsystem: ASPEED Technology, Inc. ASPEED Graphics Family
    Flags: medium devsel, IRQ 19, IOMMU group 18
    Memory at 84000000 (32-bit, non-prefetchable) [size=64M]
    Memory at 88000000 (32-bit, non-prefetchable) [size=256K]
    I/O ports at 4000 [size=128]
    Capabilities: [40] Power Management version 3
    Capabilities: [50] MSI: Enable- Count=1/4 Maskable- 64bit+
    Kernel driver in use: ast
    Kernel modules: ast
    1 point
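    The lspci check in the post above can be narrowed down; a small sketch assuming the BMC identifies itself as ASPEED:
        # Show devices with their kernel drivers, filtered to the BMC
        lspci -k | grep -iA3 aspeed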
  6. Ah, I had time to do some more digging. It looks like there are two versions of ffmpeg included, emby-ffmpeg and ffmpeg, and it's the former that needs to be updated. I have contacted the upstream dev about the issue; hopefully he will pick up my message and, if he agrees, bump the package version, and I can then rebuild and we should be good.
    1 point
  7. Frank was referring to the credential box that pops up when you connect to the share in Windows.
    1 point
  8. That warning is standard. The only requirement for port forwarding with this Docker is to use a server that supports port forwarding. If you provide some details, someone may be able to help.
    1 point
  9. If you're using an external instance of MariaDB or MySQL, you will need to manually create the database and tables using the provided schema SQL scripts. Automatic database creation and updating are currently only supported for the container that includes MariaDB.
    1 point
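    A minimal sketch of that manual setup for an external database; the database name, user, password, and schema file path are placeholders, not names from the project:
        # Create the database and a user for the app (placeholder names)
        mysql -u root -p -e "CREATE DATABASE appdb; \
          CREATE USER 'appuser'@'%' IDENTIFIED BY 'changeme'; \
          GRANT ALL PRIVILEGES ON appdb.* TO 'appuser'@'%'; FLUSH PRIVILEGES;"
        # Load the provided schema into the new database (placeholder path)
        mysql -u root -p appdb < /path/to/schema.sql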
  10. Greetings, my USB flash drive died last night shortly after backing up. I took the existing flash drive and attempted to check its integrity on another computer. Unfortunately, it has a data error (cyclic redundancy check) in diskpart and seems unrecoverable. I restored the backup to a new USB flash drive and renamed all previous trial, business, and pro .key files. When logging in I am prompted with "Missing Key File", as is to be expected. I've attempted to "Recover Key", and that option provides another trial key that does not work. Is there a way to transfer the license to a new USB/GUID without contacting support? Or am I just to wait for their response? Any responses are appreciated.
    1 point
  11. Thank you. Worked like a charm.
    1 point
  12. I think so too. The disks are simply filled in order:
Disk 1 = full > Disk 2
Disk 2 = full > Disk 3
If new data arrives and in the meantime there is space again on Disk 1, even though Disk 3 would technically be next, Disk 1 is used. @chilloutandi With "Fill-Up", Unraid will simply always take the disk in the sequence that has the lowest number and still has free space. That is how I understand "Fill-Up".
    1 point
  13. Did you try that too: https://grafana.com/grafana/dashboards/12357-nvidia-smi-graphs/
    1 point
  14. This response goes right along with the rants others have pointed out. Feel free to move on to something else, since you no longer want to support the work done on Unraid.
    1 point
  15. I recommend reading the first steps in the Unraid manual: https://docs.unraid.net/unraid-os/manual/what-is-unraid/ There you will find, among other things, the chapter on user shares: https://docs.unraid.net/unraid-os/manual/what-is-unraid/#user-shares In addition, every page in the GUI has matching help texts; just press '?'. A user share has a list of participating disks (in your case all of them), consists of the identically named root folders on the participating disks, has a fill method (how new data should be distributed across the participating disks), an optional write cache, and two access methods (via disk shares or user shares). Beginners should definitely start and work with user shares. To be able to help you with your problem, you should post screenshots of the user share in question showing the list of disks, the chosen fill method, and how you are writing/reading. Anything else would amount to reading tea leaves. And yes, all of this has worked absolutely stably and maturely for decades. So it can only be a question of understanding.
    1 point
  16. My strategy: active shares are on SSD RAID 10; speed is needed and this is still the bottleneck at the moment, but good enough for now. I'm limited by PCIe lanes on the current server hardware. This backs up on a regular cadence to a traditional spinning-disk Unraid array with dual parity on the same machine, scheduled so each backup goes to a different disk. This is an attempt to double down on the ability to lose multiple disks but still get files quickly in a crisis (the most recent known-good target disk can be spun up and read on its own if needed, without an array rebuild first). This Unraid machine syncs to another Unraid machine on a slightly different schedule, still the same site but a different location. Off-site backup at the moment is manual (sync to a physical hot-swap disk stored in a secure off-site location), but Backblaze is in the plan if I can do so within the limits of currently available internet speeds... May or may not be a good strategy, but it's been easy to adapt and change as learning happens.
    1 point
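    A minimal sketch of the machine-to-machine sync step described above, assuming rsync over SSH; the share and host names are placeholders:
        # Mirror a share to the second Unraid machine (placeholder names);
        # --delete makes the destination track removals on the source
        rsync -avh --delete /mnt/user/myshare/ backupserver:/mnt/user/myshare/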
  17. I interpret this statement as "Fill-Up" (allocation method). @chilloutandi Here is the official documentation on the methods: https://docs.unraid.net/unraid-os/manual/shares/user-shares/ However, I have to say that it does not really explain what happens with new data when Unraid has, say, already reached Disk 4, but there is enough space again on Disk 2. @alturismo Can you say something about this?
    1 point
  18. That depends on the configured behavior for how data should be distributed, e.g. High Water.
    1 point
  19. The PCIe slot should be enough; they are usually good for 75 W.
    1 point
  20. Thank you. I will give the disk a second chance and will keep an eye on it.
    1 point
  21. Not true. Arch tends to include bleeding-edge stable releases, and this image is no different; it includes ffmpeg 6.1.1. From the container:
ffmpeg version n6.1.1 Copyright (c) 2000-2023 the FFmpeg developers
What is causing your issue I'm not sure, but it's definitely not due to out-of-date ffmpeg.
    1 point
  22. No need to apologise, thanks for the speedy responses, will keep an eye on it.
    1 point
  23. As said, it's a dedicated plugin and not part of the Nvidia Driver plugin: Install it, configure it on the Settings page and after that you will see it on the Dashboard.
    1 point
  24. Ah that's a shame. Thanks for the quick response though.
    1 point
  25. The only way I know is to isolate the services and watch the symptoms: disable the Docker engine and watch for spin-ups, re-enable Docker and disable VMs and watch again, shut down all client machines on the LAN that have access to the shares, etc.
    1 point
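    While isolating services as described above, a small sketch for polling the drives' power state so a spin-up shows immediately (device glob is an example):
        # Print each drive's power state (active/idle vs. standby) every minute
        while true; do
          date
          for d in /dev/sd[a-z]; do
            hdparm -C "$d"
          done
          sleep 60
        done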
  26. I think I have a fix. UD was doing a zpool operation when it was not necessary and that will spin up a disk. I'll do some testing and if it works out, the fix will be in the next release of UD.
    1 point
  27. Hmm, I'll look into that, thanks. For now, thank you @JorgeB! I was able to preclear one of my disks as well to confirm it is good. My final disk seems to be dead; when I tried to preclear it, there were a ton of errors in the logs, and SMART shows the following:
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAGS   VALUE WORST THRESH FAIL RAW_VALUE
  1 Raw_Read_Error_Rate     POSR--  069   064   006    -    163877417
  3 Spin_Up_Time            PO----  097   096   000    -    0
  4 Start_Stop_Count        -O--CK  084   084   020    -    16546
  5 Reallocated_Sector_Ct   PO--CK  100   100   010    -    0
  7 Seek_Error_Rate         POSR--  083   060   045    -    213584792
  9 Power_On_Hours          -O--CK  082   082   000    -    15876 (164 223 0)
 10 Spin_Retry_Count        PO--C-  100   100   097    -    0
 12 Power_Cycle_Count       -O--CK  092   092   020    -    8295
183 Runtime_Bad_Block       -O--CK  100   100   000    -    0
184 End-to-End_Error        -O--CK  100   100   099    -    0
187 Reported_Uncorrect      -O--CK  081   081   000    -    19
188 Command_Timeout         -O--CK  100   100   000    -    0 0 8
189 High_Fly_Writes         -O-RCK  100   100   000    -    0
190 Airflow_Temperature_Cel -O---K  073   063   040    -    27 (Min/Max 20/28)
191 G-Sense_Error_Rate      -O--CK  100   100   000    -    0
192 Power-Off_Retract_Count -O--CK  096   096   000    -    8234
193 Load_Cycle_Count        -O--CK  089   089   000    -    23081
194 Temperature_Celsius     -O---K  027   040   000    -    27 (0 17 0 0 0)
195 Hardware_ECC_Recovered  -O-RC-  082   064   000    -    163877417
197 Current_Pending_Sector  -O--C-  100   100   000    -    40
198 Offline_Uncorrectable   ----C-  100   100   000    -    40
199 UDMA_CRC_Error_Count    -OSRCK  200   200   000    -    0
240 Head_Flying_Hours       ------  100   253   000    -    3027h+32m+37.751s
241 Total_LBAs_Written      ------  100   253   000    -    55058677784
242 Total_LBAs_Read         ------  100   253   000    -    277759799893
Thanks everyone for helping!
    1 point
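    For anyone wanting to pull the same attributes, a minimal sketch focusing on the failure indicators visible above (replace the device name):
        # Full attribute table for the suspect disk
        smartctl -A /dev/sdX
        # Just the sectors the drive could not read or remap
        smartctl -A /dev/sdX | grep -E 'Pending|Uncorrectable|Reallocated'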
  28. Recreating brought the Dockers back to life. Thanks.
    1 point
  29. Currently testing your approach, even though it's annoying of course. I decided to shut down "half" of my containers to start with, and it has been running fine for almost 4 days now. I'll wait another week and then turn things back on. Thanks for the help so far!
    1 point
  30. I might have tracked down the issue in the Linux kernel. I am trying to gather more input on this issue in this thread: This should also be relevant to @mrhanderson.
    1 point
  31. Install the plugin and configure it. It's a dedicated plugin.
    1 point
  32. You can, and performance will be similar, but the 8i model still needs some cooling, though it uses about half the power. Should not be needed; the PCIe slot is good for 75 W.
    1 point
  33. Check the settings. What's inside "allowed appdata source paths"? Is the volume in question maybe "external" and therefore excluded? Are some of your containers excluding /mnt, which leads to empty zips? Point to the backup and restore all XMLs and appdata folders, then use Previous Apps to install all containers. Currently you need some manual work. Another check here: I bet this container is inside of a group?
    1 point
  34. Here is my repo if you want to use it for examples: https://github.com/SimonFair/unraid-lcd The plan is to create a client to show Unraid-specific info, but it's not built yet. client.php is my starter for ten, based on a pfSense plugin.
    1 point
  35. Thank you for all of these details above @CyrIng. Running:
AMD 3900X
MSI MEG Ace X570
GTX 1660 Super (Docker only: Plex/Frigate/Tdarr)
Mellanox ConnectX-3, 2 ports @ 10 Gb
Onboard 2.5 Gb Realtek
KB/M
LSI 9211-8i
9x HDDs, 2x NVMe, 1x SSD
Sonoff Zigbee 3.0 USB module
Running 20 Docker containers with multiple ARRs, 2 Windows 10/11 VMs, and 1 Home Assistant VM.
raid:~# corefreq-cli -s -n -m -n -M -n
Processor [AMD Ryzen 9 3900X 12-Core Processor]
|- Architecture  [Zen2/Matisse]
|- Vendor ID     [AuthenticAMD]
|- Firmware      [ 46.73.0-2]
|- Microcode     [0x08701030]
|- Signature     [ 8F_71]
|- Stepping      [ 0]
|- Online CPU    [ 24/ 24]
|- Base Clock    [ 99.999]
|- Frequency (MHz) Ratio: Min 2199.98 < 22 >, Max 3799.97 < 38 >
|- Factory       [100.000] 3800 [ 38 ]
[full corefreq-cli report continues: instruction set extensions, features, mitigation mechanisms, security features, technologies, performance monitoring, power/current/thermal limits, per-core cache topology, and Zen UMC memory timings]
I now have everything registered properly per your instructions, including the blacklist. My power consumption was 175-180 W with most HDDs spun down and 220-230 W with them spun up; it is now 110-120 W with most HDDs spun down and 170 W with them all spun up. No delays, no errors.
    1 point
  36. I'm going to give you some general advice because it sounds like that's what you are asking for. If you're feeling overwhelmed, I recommend that you hire a consulting engineer near you who has done this sort of thing before. When dealing with business-critical data, you don't want to experiment and try to learn on your own. Protection against drive failures is something best accomplished through RAID, not backups. For example, on our primary NAS, up to two of the six disks can fail without the NAS going offline or losing any data. On our secondary NAS, which has less valuable data, one of the five drives can fail without data loss. There are various forms of RAID, and they can be configured to survive multiple drive failures. To use an extreme example, you could buy four 18TB drives and configure RAID so that you would lose no data even if only one of the four remained functional. Backups are for dealing with a catastrophic data loss, whether through flood, fire, malware, hackers, or some other act of God, Satan, or the Flying Spaghetti Monster. For that reason, you should follow some sort of backup strategy that puts backups offsite, possibly in cloud storage or possibly on physical media. Based on what you've shared, my first thoughts would be an Unraid NAS that supports the SMB protocol and automatic snapshots (snapshots let you roll back disks and directories to their state at some prior time). Unraid with a single ZFS pool consisting of four physical disks configured as RAID-Z2 would give you the ability to have half of the four disks fail simultaneously without any data loss, while supporting the aforementioned snapshots. My reason for suggesting Unraid is not because this is an Unraid forum; I'm on the Unraid forum because I've tried multiple commercial and open-source NAS solutions and I think that Unraid is the best NAS OS, especially from a user-interface perspective. I hope that I have left you with fewer questions rather than more. But remember that free advice is often worth what you pay for it, so don't trust me, and especially don't trust anyone who disagrees with me!
    1 point
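    A minimal sketch of the four-disk RAID-Z2 pool suggested above (pool and device names are examples; on Unraid the pool would normally be created through the GUI rather than by hand):
        # Four-disk RAID-Z2: any two disks can fail without data loss
        zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
        # Snapshot a dataset, then roll it back later if needed
        zfs snapshot tank/data@before-change
        zfs rollback tank/data@before-change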
  37. Get used to it. To this day there is no Nextcloud installation without error messages / bugs. In my view that is down to the script-based programming with PHP. It dates back to a time when it was popular to rent a web host for a few euros to host PHP pages and WordPress. That was migrated into Docker as-is, which I personally consider a mistake. It is also why every page load in Nextcloud takes annoyingly long, even on a high-end server. A Nextcloud alternative in .NET would be my dream 😁
    1 point
  38. I had to chime in; this hit a nerve. I agree with 1812: they want everything for free, even downloading movies. They will spend the money on hardware, but software they want for free or next to nothing. Take a look at the users that rant the most: they are fairly new members. I doubt they have experienced a hard disk failure, which is where Unraid shines. If they complain about pricing, I doubt they use parity drive(s). I say let them leave and go to an alternative. I chose Unraid 14 years ago. Back then the biggest concern was LT's risk management, since Tom was a one-man show. I wanted a system that was expandable, I wanted to use my various-sized hard disks, I wanted the disks to spin down, and I liked the idea that you could still access a disk by itself. It had to be an unconventional server; Unraid fit the bill. I went with the Pro license at that time since it was the only one that covered my hard disk count. I just checked my email invoice from "Tom" and it was on sale for $109 ($10 discount) at that time. I spent more on a UPS. Soon I was maxed out and bought two 1TB drives, then larger drives, and survived the 2TB limit! I have experienced the introduction of Joe L.'s creations: cache_dirs, unMENU, and preclear. We endured a number of LT relocations. Unraid has come a long way. Thanks Tom! Sorry I haven't been active on this forum lately; I've been busy doing other things, and frankly, Unraid just works. I have recovered through a number of hard disk failures, parity swaps, array upsizing and array downsizing, all painlessly. BTW, I still have the original flash drive; I didn't cheap out on that. I've recommended and helped set up Unraid using the Pro license for lots of people, and not one complained about the cost. When my kids finally move out, we will happily pay for the "Lifetime" license no matter what the cost.
    1 point
  39. 3.7.24 Update: Caveat emptor: multiple users have run into GUID conflicts with these devices. I will attempt again to contact Eluteng and ask about this to see if this was a recent manufacturing change or a one-off "bad batch". Another option/alternative to USB sticks that we've been internally testing and vetting is mSATA adapters/drives:
This USB mSATA adapter (~$10) appears to provide unique GUIDs for Unraid: https://www.amazon.com/gp/product/B07VP2WH73/
This 32GB mSATA drive works with the above adapter (~$15): https://www.amazon.com/gp/product/B07543SDVX/
To avoid having the adapter hang off the back of the machine, these can allow you to mount it inside your case (depending on your system): https://www.amazon.com/gp/product/B000IV6S9S/ https://www.amazon.com/gp/product/B07BRVBQVW/
Important notes/caveats: The mSATA drive does not come pre-partitioned, so you have to create a partition yourself. Windows sees it as a hard disk, not a removable drive, so the USB Creator might not write to it and you may need to use the manual method; the USB Creator worked fine from a Mac for me. While we're not officially recommending these just yet, multiple members of the Unraid team are running OS instances off of this exact setup. We hope to have a full blog/video on this alternative soon. You can use them for Unraid VMs too; just be sure to configure "USB 3.0 (qemu XHCI)" in the VM template (even if the host only has USB 2 hardware!). As always, if you are running the same make/model of drive for both the host and the guest, you will need the "USB Manager" plugin to pass the drive to the VM. Major props to @AgentXXL over in our Discord server for doing much of the early testing on this.
    1 point
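    Since the mSATA drive ships unpartitioned, a hedged sketch of preparing it by hand from Linux before the manual flash procedure (device name is an example; Unraid expects a FAT32 volume labeled UNRAID):
        # Create a single partition spanning the drive (example device)
        parted /dev/sdX --script mklabel msdos mkpart primary fat32 1MiB 100%
        # Format it FAT32 with the volume label Unraid looks for
        mkfs.vfat -n UNRAID /dev/sdX1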
  40. Snapshots are not "based on a previous snap"; they are a copy-on-write copy of a subvolume. For the purpose of restoration there are no dependencies between them (all of the data-sharing stuff is handled by the CoW nature of the filesystem), so you can delete any of them without affecting the others. The only time the relationship of one snapshot to another really matters is when sending them between filesystems using btrfs send: if the snapshot to be transferred has an ancestor snapshot at both the source and destination, then the amount of data to transfer is reduced (highly simplified explanation). There is not really a simple GUI way to handle rolling back. Snapshots appear as just folders on the filesystem. The simplest way of restoring is to delete the live file or folder and then copy it from a snapshot directory back into place. If you are restoring entire subvolumes (the whole snapshot), there are fancier ways of doing it involving deleting the subvolume and then creating a writable snapshot of the snapshot you want to restore, but copying is the easiest to understand. Since snapshotting only involves data disks and not the OS, there is no need to bring the server down when restoring something. At most you might have to stop some VMs or Docker containers that are using data from the subvolume to be restored.
    1 point
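    A minimal sketch of the "fancier" whole-subvolume rollback described above (paths are examples; stop anything using the subvolume first):
        # Drop the damaged live subvolume (example paths)
        btrfs subvolume delete /mnt/disk1/data
        # Recreate it as a writable snapshot of the known-good snapshot
        btrfs subvolume snapshot /mnt/disk1/.snapshots/data-20240310 /mnt/disk1/data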
  41. Welcome Adam. I am sure you already know this but this forum has a "top-10 list" of Unraid users who are doing great work supporting other users. I often wonder if they have any other "job." 😀 You at least ought to get them an "official Unraid uniform" shirt although @Squid already has one. Just to be clear, I am not on this "top-10" list (top 500, maybe) so this is not a self-serving suggestion.
    1 point
  42. And he's even in the official LT uniform!
    1 point
  43. Hi @JorgeB, I appreciate the explanations! So far everything is working normally. I recreated all of my Dockers using the Previous Apps feature, selecting all at once. I had a backup of the libvirt.img file. I put in your recommendations and all is good; syslog looks OK. btrfs dev stats -c /mnt/cache reports no errors:
[/dev/nvme1n1p1].write_io_errs 0
[/dev/nvme1n1p1].read_io_errs 0
[/dev/nvme1n1p1].flush_io_errs 0
[/dev/nvme1n1p1].corruption_errs 0
[/dev/nvme1n1p1].generation_errs 0
[/dev/nvme0n1p1].write_io_errs 0
[/dev/nvme0n1p1].read_io_errs 0
[/dev/nvme0n1p1].flush_io_errs 0
[/dev/nvme0n1p1].corruption_errs 0
[/dev/nvme0n1p1].generation_errs 0
    1 point
  44. EDIT: After hunting and searching I was able to successfully figure out how to: connect to a Docker container; use sudo and nano within Docker; copy files from a Docker container to Unraid; copy files from Unraid to Docker container(s).
How to connect to a Docker container:
docker exec -it containername /bin/bash
ls -l (to find the files or paths that need to be accessed for the copy command)
exit
Add sudo and nano within a Docker container (Alpine-based):
root@dockercontainerID:/# apk add sudo
(1/1) Installing sudo (1.9.10-r0)
Executing busybox-1.35.0-r17.trigger
OK: 279 MiB in 80 packages
root@dockercontainerID:/# apk add nano
fetch http://dl-cdn.alpinelinux.org/alpine/v3.16/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.16/community/x86_64/APKINDEX.tar.gz
(1/1) Installing nano (6.3-r0)
Executing busybox-1.35.0-r17.trigger
OK: 277 MiB in 79 packages
How to copy files from a Docker container to the Unraid server:
root@servername:~# docker cp containername:/filename.xxx /file/path/on/unraid
EXAMPLE: root@servername:~# docker cp mariadb:/filename.txt /mnt/cache/
How to copy files from the Unraid server to a Docker container:
root@servername:~# docker cp /unraid/location/123.txt containername:/
EXAMPLE: root@servername:~# docker cp /mnt/cache/123.txt mariadb:/
Hope that helps someone going down the same path as me tonight. Cheers!
    1 point
  45. How to ignore a SINGLE file:
1.) Find the path of the file you wish to ignore.
ls -ltr /mnt/cache/Download/complete/test.txt
root@Tower:/# ls -ltr /mnt/cache/Download/complete/test.txt
-rwxrwxrwx 1 root root 14 Oct 27 11:32 /mnt/cache/Download/complete/test.txt*
2.) Copy the complete path used: /mnt/cache/Download/complete/test.txt
3.) Create a text file to hold the ignore list.
vi /mnt/user/appdata/mover-ignore/mover_ignore.txt
4.) Add the file path to the mover_ignore.txt file. While still in vi, press the i key on your keyboard, then right-click the mouse to paste.
5.) Exit and save: ESC key, then : key, then w key, then q key.
6.) Verify the file was saved.
cat /mnt/user/appdata/mover-ignore/mover_ignore.txt
This should print the 1 line you just entered into mover_ignore.txt.
7.) Verify the find command results do not contain the ignored file.
find "/mnt/cache/Download" -depth | grep -vFf '/mnt/user/appdata/mover-ignore/mover_ignore.txt'
/mnt/cache/Download is the share name on the cache. (Note: the cache name could be different if you have multiple caches or changed the default name.)
How to ignore MULTIPLE files:
1.) Find the paths of the files you wish to ignore.
ls -ltr /mnt/cache/Download/complete/test.txt
ls -ltr /mnt/cache/Download/complete/Second_File.txt
2.) Copy the complete paths used to a separate notepad or text file:
/mnt/cache/Download/complete/test.txt
/mnt/cache/Download/complete/Second_File.txt
3.) Copy the paths in your notepad to the clipboard (select, then right-click copy).
4.) Create a text file to hold the ignore list.
vi /mnt/user/appdata/mover-ignore/mover_ignore.txt
5.) Add the file paths to the mover_ignore.txt file. While still in vi, press the i key on your keyboard, then right-click the mouse to paste. You should now have two file paths in the file.
6.) Exit and save: ESC key, then : key, then w key, then q key.
7.) Verify the file was saved.
cat /mnt/user/appdata/mover-ignore/mover_ignore.txt
This should print the 2 lines you just entered into mover_ignore.txt.
8.) Verify the find command results do not contain the ignored files.
find "/mnt/cache/Download" -depth | grep -vFf '/mnt/user/appdata/mover-ignore/mover_ignore.txt'
How to ignore a directory: instead of a file path, use a directory path, with no * or / at the end. This may cause issues if you have other files or directories named similarly but with extra text.
/mnt/cache/Download/complete
*Note: /mnt/cache/Download/complete will also ignore /mnt/cache/Download/complete-old
*I use vi in this example instead of creating the file in Windows, as Windows can add ^M characters to the end of each line, causing issues in Linux. This would not be an issue if dos2unix were included in unRAID.
**Basic vi commands: https://www.cs.colostate.edu/helpdocs/vi.html
    1 point
  46. It would be a lot more helpful if you pointed to the actual links or the post that has them. But thanks!
    1 point
  47. You can use dd, but this will create a flat image: if your physical Windows disk is 1 TB, the dd command will create a clone image of 1 TB. Better to use the qemu-img command to create a sparse image. Assuming your Windows 10 disk is /dev/sda:
qemu-img convert -p -S 512 -O raw /dev/sda /path/to/image/destination/win10image.img
Make sure that the disk at /path/to/image/destination has enough space! I never tried it, but the command should create a sparse image of, let's say, 1 TB (based on the example above), where with the -S option the command will write zeroes for the unused space. --> With the -S argument I don't know if the command creates an image of a size corresponding to that effectively used by the VM, or if it creates an image of the size of the disk with unused space filled with zeroes. If it creates a file of a size corresponding to that of the disk, you can run qemu-img again to deduplicate the image:
qemu-img convert -p -O raw win10image.img dedup-win10image.img
This will create a deduplicated image without the zeroes, so it will shrink its size. Important: always take into account the needed space on the destination disk when converting images.
By the way, you can also convert the vhd image with qemu-img:
qemu-img convert -f vpc -O raw /path/to/vhd/file/image.vhd /path/to/destination/image.img
Or you can directly use the vhd file in qemu/libvirt; this example attaches the disk to the virtio bus (however, my choice would be to convert it):
<disk type='file' device='disk'>
  <driver name='qemu' type='vpc' cache='none' io='native' discard='unmap'/>
  <source file='/path/to/file.vhd'/>
  <target dev='vda' bus='virtio'/>
</disk>
    1 point
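    A quick way to sanity-check the result of the conversions above; qemu-img info reports both the virtual size and the space the sparse file actually occupies on disk:
        # Compare virtual size vs. actual disk usage of the image
        qemu-img info /path/to/image/destination/win10image.img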
  48. I think it already supports SMBv1. See here: https://www.cyberciti.biz/faq/how-to-configure-samba-to-use-smbv2-and-disable-smbv1-on-linux-or-unix/ Most people are actually trying to turn it off. MS is a major leader in this effort, and most of the problems with SMB have arisen out of their changes to implement it! If you find that SMBv1 is not working on a Win10 computer, I would suggest that you start there. I am not sure what the status on earlier versions of Windows is. I looked at the smb.conf (found in /etc/samba) and there was no global reference to the Samba version. If you want to change any Samba parameter, you can easily do it via Settings >>> SMB >>> SMB Extras. Unraid provides this to allow the user to easily add an smb-extra.conf, which is automatically included in the SMB configuration as Samba starts. (Look toward the end of smb.conf for the hook.) If you find that it is SMBv1 not working in Samba, you can look up the Samba documentation by googling "Samba documentation"; here is a link directly to the smb.conf configuration man page: https://www.samba.org/samba/docs/current/man-html/smb.conf.5.html
    1 point
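    A hedged sketch of lowering the minimum protocol via the SMB Extras mechanism described above; "server min protocol = NT1" is standard Samba syntax, while the file path is an assumption about where Unraid keeps the extras file:
        # Allow SMBv1 clients by lowering the minimum protocol (security risk!)
        cat >> /boot/config/smb-extra.conf << 'EOF'
        [global]
        server min protocol = NT1
        EOF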
  49. [REQUEST] It would be nice if you could add fatrace :) Ubuntu Manpage - Fatrace
    1 point