NathanR

Everything posted by NathanR

  1. Been running 6.11.5 for a while now. Current uptime is 2 months, 22 days. The last thing that required a reboot was adding a USB 3.0 card to pass through to my W10 VM so I could run LOR for my Christmas lights. No crashes; been very happy.
Future wishes: VM cloning, backup, snapshots, etc. Rebuild the server with a 13900K, X13SAE-F, and 128GB ECC; Quick Sync (iGPU) encoding for Plex; more USB.
Plugins: Community Applications, CA Backup / Restore Appdata, Dynamix Auto Fan Control, Dynamix File Manager, Dynamix System Statistics, Dynamix System Temperature, Network UPS Tools (NUT), Unassigned Devices, Unassigned Devices Plus, Unassigned Devices Preclear, Unraid Connect, User Scripts
Docker: blueiris (not on), DiskSpeed (not on), docker-diag-tools (not on), ESPHome, glances, Grafana-Unraid-Stack (not on), luckyBackup (not on), MongoDB (not on), OpenProject, Phoronix-Test-Suite (not on), plex, qbittorrent (not on), syslog-ng (not on), unifi-controller, UniFi-Protect-Backup, UptimeKuma
VMs: Dev-Home Assistant, Home Assistant, W10-SVR, Win10 (not on)
Decided to make a development Home Assistant VM to play around with things and not break my house. Process still works in Feb of 2024:
1. Download the Home Assistant .qcow2 file
2. Copy it to the array somehow (shared folder, or upload via the Dynamix File Manager app)
3. Create a VM (Linux) with a 32GB vdisk
4. Delete the vdisk at /mnt/user/domains/Dev-Home Assistant/vdisk1.img
5. Run the convert command: qemu-img convert -p -f qcow2 -O raw -o preallocation=off "/mnt/user/domains/Dev-Home Assistant/haos_ova-11.4.qcow2" "/mnt/user/domains/Dev-Home Assistant/vdisk1.img"
6. Boot the VM, launch VNC, get the IP, log in. Woot/FIN
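If you want to sanity-check the result before booting, qemu-img can report on the converted image (same path as in step 4 above):

    # Confirm the converted vdisk is raw and the expected size
    qemu-img info "/mnt/user/domains/Dev-Home Assistant/vdisk1.img"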
  2. So I don't know why people do what they do. What I can say is that it's unlikely to matter what the mobo says it can support, TDP-wise. Rationale for the above statement:
1) 125W TDP is just a classification; real draw changes with workload.
2) Average power draw will be ~114W. Sauce: https://www.techpowerup.com/review/intel-core-i9-13900k/22.html
3) The 8-pin CPU connector can deliver 235W, so the mobo isn't power limited. Sauce: https://www.overclock.net/threads/gpu-and-cpu-power-connections.1773088/
4) Power components will work until their thermal limits are exceeded, meaning the MOSFETs will deliver more current (power) than they are rated for until they hit thermal limiting. Sauce: https://www.infineon.com/dgdl/ir3550.pdf?fileId=5546d462533600a4015355cd7c831761 (note: this is a generic VRM; I do not know what VRM the X13 uses)
4A) Servers usually have forced-air cooling, so the thermal headroom on the heatsink is very small; the design assumes hundreds of CFM moving across the mobo. A home server will hit thermal bottlenecks first.
5) Bench/stress peak power ratings are often meaningless (albeit fun, interesting, and useful for finding limits) for what we would typically use the 13900K for (server, docker, VM, storage).
X13SAE-F mini-review/writeup: https://www.servethehome.com/supermicro-x13sae-f-intel-w680-motherboard-mini-review/3/
Thank you! This is awesome. You probably saved me 20+ hrs of work. I am considering moving my 5900X build to Intel for two reasons:
1) iGPU for Plex transcoding (my parents, wife's parents, sister, etc. all need things transcoded; the only direct play is in-house to my NVIDIA Shield)
2) Lower power consumption (more efficient processors for dockers & server VMs)
Build I'm considering:
Mobo: MBD-X13SAE-F-O
64GB: MEM-DR532MD-EU48
13700K or 13900K
Re-use everything else from my 5900X build
  3. Thank you for this! For some reason a recent Uptime Kuma or UniFi Controller update broke them talking to each other for webpage status. Found out the docker host connection had been added. Woot!
  4. Is it safe to delete the old 2FA setup (including backups)?
  5. That makes sense. I was trying so hard to figure out how to do it; this was the key. Perfect linked article! Let me try it when I get home tonight and see how I fare. Yes, I think this will be more intuitive (to my brain).
  6. 6.11.5. I want this type of file structure:
Media - SMB share [one mapped drive for all users]
Media\TV - Disk1
Media\Movies - Disk2
Media\Music - Disk3
Media\Photos - Disk3
Media\Projects - cache >[mover]> Disk3
How do I get there? Or is this even possible? I already have a root share set up; I thought that would solve my problem, but not really. Resources: similar thread:
  7. Creating an empty folder takes forever (I stopped waiting after 5 min). Clicking Create and then clicking Cancel immediately is my current workaround. What do? 6.11.5, 2023.03.01
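In the meantime the terminal creates folders instantly, so that's another workaround (the share and folder names here are just examples):

    mkdir -p "/mnt/user/Media/New Folder"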
  8. +1 - L2ARC. I am new to Unraid and am surprised that 'cache' (now pools) aren't really caches; they are more like user-defined write pools with user-defined copy/move rules [hence the move from cache terminology to pool terminology]. ZFS will change much of this and enable us to have L2ARC, amongst other things.
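For reference, attaching an L2ARC device in stock ZFS is a one-liner; a sketch assuming a pool named 'tank' and a spare NVMe device (both names hypothetical):

    # Add an L2ARC (read cache) device to an existing pool
    zpool add tank cache /dev/nvme1n1
    # The device then shows up under a 'cache' section
    zpool status tank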
  9. +1. I am constantly modifying my Unraid setup as I learn new things. I rarely need to turn off my HA VM or my Win 10 VM, yet taking down the array does that. Worse, turning on the array doesn't auto-start the VMs, only the Dockers.
---
I read numerous times about how to implement this when skimming through this thread. I also see questions about licensing, what to do when a mapped share is down, how to handle different locations for dockers/VMs, etc. The devs would really be the experts on how, as there are numerous obstacles to overcome: array parity behavior, array licensing behavior, access to drives if the array is down, etc. I get it, there's lots to think through.
---
Here's my take.
Requirements - a working cache pool. This forces users to not use the array, limiting the feature to more experienced users.
Licensing - the Pro version enables support for multiple offline pools. Solves all licensing issues. All the licensing talk is too complicated; why make code changes when you can make policy changes that make sense.
Warning - any mapped share that a docker/VM uses will not work, and can crash your docker/VM if the internal software cannot handle it. Why make this a Limetech issue? Push the responsibility onto the user & container developer.
Ex. 1) Plex - libraries shared to spinning rust on the array - okay, who cares. Plex handles missing library locations gracefully.
Ex. 2) Windows VM - can't find a share. Windows will handle that gracefully (or not), lol.
Ex. 3) HA, FW, UtKuma, Unifi, etc. - these reside entirely inside their docker/VM container. They only read or write outside it if they have a mapped syslog folder or something. Too complicated to check if it's on an array, etc. Who cares; responsibility is on the user.
New buttons & GUI changes:
Array Start/Stop button (1) - main array
Array Start/Stop button (2) - cache pool 1
Array Start/Stop button (n) - cache pool n [drop-down menu button]
---
Implementation 0.1, 'All or nothing.'
Shares cannot point to the 'DockVM' pool [they can, but if they do, then you can't keep the DockVM pool online].
All online Dockers' volume mappings point to the DockVM pool.
All online VMs' disk mappings point to the DockVM pool.
[This simplifies the programming required. Sure, there are dedicated disks, USB disks, etc. Who cares; start simple.]
Offline Dockers/VMs are not considered [if they're offline and you're turning off the array but not the DockVM pool... who cares].
(3 pools) [You could forgo the cache pool of course, but most pro users are going to have a setup like this:]
Array [unraid] [hdd]
Cache [whatever; raid, probably 1 or 0+1] [probably ssd or nvme]
DockVM [whatever; raid, probably 1 or 0+1] [probably ssd or nvme]
Clicking stop on the array - "error: docker.list & VM.list are online & use array volume mappings" (see the sketch after this post for what that check could look like).
I would use this right now. I'd make my current cache drive only for docker & VM use until implementation 1.0 can roll out. I don't need to copy files to my cache and then use the mover to move them to spinning rust; I'd rather have my dockers & VMs stay up right now. Plex docker running? I'd have to shut it down. Who cares; if the array is offline, no one is watching movies anyway. But my HA & W10 VM (with internal database on vdisk) stay up; woot.
---
Implementation 1.0, 'Almost all or nothing.'
All online Dockers' volume mappings point to cache.
All online VMs' disk mappings do not point to the array.
All offline Dockers cannot start without both array & cache online.
All offline VMs cannot start without both array & cache online.
The only reason this is separate from 0.1 is that the shares are mapped to the same cache pool as the dockers & VMs, so I'm assuming there will be some code changes & checks to ensure compatibility.
Plex docker - must be shut down to shut down the array [but keep cache online].
---
Implementation 2.0, 'Granular approach.'
All of the above, but: checks whether the primary storage for a docker or VM is on the cache pool, and ignores mappings to the array (but reports a warning).
Plex docker - stays running; volume mappings to the array are offline/not working. Hopefully the container doesn't crash.
---
Implementation 3.0, 'Granular auto approach.'
All of 2.0, plus the ability to auto-start/stop dockers & VMs based upon which pool is going up/down. Global setting for whether an Unraid OS restart, or starting/stopping array pool(s), auto-starts Dockers/VMs.
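For the "clicking stop on the array" error in 0.1 above, a minimal sketch of what that check could look like from the shell, assuming stock Unraid paths (this is my illustration, not anything Limetech has built; it conservatively flags any /mnt/user or /mnt/disk* mapping as array-backed, even though /mnt/user paths can resolve to cache only):

    # List running containers with volume mappings that touch the array
    for c in $(docker ps --format '{{.Names}}'); do
      docker inspect -f '{{range .Mounts}}{{.Source}}{{"\n"}}{{end}}' "$c" \
        | grep -Eq '^/mnt/(disk[0-9]+|user)' \
        && echo "would block array stop: $c"
    done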
  10. This is the answer to my question. My expectation was that VM behavior would be identical to Docker behavior.
---
I should have been much clearer with my post, and my image is confusing as it shows nothing wrong. I apologize.
Expected behavior:
1. Set [certain] VMs & Dockers to auto-start
2. Stop array
3. Start array
4. [certain] VMs & Dockers previously set to auto-start... do just that >> auto-start
5. Restart Unraid
6. [certain] VMs & Dockers previously set to auto-start... do just that >> auto-start
Current behavior:
1. Set [certain] VMs & Dockers to auto-start
2. Stop array
3. Start array
4. No VMs auto-start; [certain] Dockers previously set to auto-start... do just that >> auto-start
5. Restart Unraid
6. [certain] VMs & Dockers previously set to auto-start... do just that >> auto-start
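In the meantime, a workaround sketch: a User Scripts entry set to run at array start that brings the VMs up via virsh (the VM names are from my setup; substitute your own):

    #!/bin/bash
    # Start the VMs that should come up whenever the array starts.
    # virsh just reports an error if a VM is already running, which is harmless.
    for vm in "Home Assistant" "W10-SVR"; do
      virsh start "$vm"
    done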
  11. Negative. Edit: Enable SMB Multi Channel: Yes; Enhanced macOS interoperability: No. Seems to work.
  12. So I have a similar issue. Any file transfer causes tons of errors in the logs; I have a bug report post. What should we do in the meantime until a fix can be completed?
  13. "VMS" = plural. VMs 1 & 2 should auto-start; they do not. The 3rd VM isn't supposed to start.
  14. Ahh, Samba error. Hopefully the fix gets merged soon!
  15. Still happening. Very weird as transfers still occur and the files are still there.
  16. OP updated. Sorry, 1D10T error. I thought I was in the general support forum. I thought it was a bit weird to see the severity dropdown.
  17. Changed Priority to Annoyance. As far as I can tell, Windows is still transferring the file... so 'annoyance' seemed more appropriate.
  18. Version 6.11.5 2022-11-20
Model: Custom
M/B: ASRockRack X570D4U-2L2T
BIOS: American Megatrends International, LLC. Version P1.40, dated 05/19/2021
CPU: AMD Ryzen 9 5900X 12-Core @ 3700 MHz
HVM: Enabled
IOMMU: Enabled
Cache: 768 KiB, 6 MB, 64 MB
Memory: 64 GiB DDR4 Multi-bit ECC (max. installable capacity 128 GiB)
Network: bond0: fault-tolerance (active-backup), mtu 1500
Kernel: Linux 5.19.17-Unraid x86_64
OpenSSL: 1.1.1s
Uptime: 23 hours, 33 minutes
Not sure what to make of this; I am fairly new to Unraid. I was messing around with my server (added an Emby docker) while copying a large folder (~200G) to the server from my desktop. Saw high (20%) CPU even though docker/VM don't show any usage (3-5%). Randomly clicked the syslog button... and saw this. What do?
Jan 31 19:32:18 Storinator smbd[20034]: synthetic_pathref: opening [From F/Wondershare/DrFone/Backup/00008101-001E183C1430001E/68/68a018d92ba81499554e4850dea1357e4e8639e9] failed
Jan 31 19:32:18 Storinator smbd[20034]: [2023/01/31 19:32:18.755826, 0] ../../source3/smbd/files.c:1199(synthetic_pathref)
Jan 31 19:32:18 Storinator smbd[20034]: synthetic_pathref: opening [From F/Wondershare/DrFone/Backup/00008101-001E183C1430001E/68/689df748d1cd62661a202359400ce7fbbaed9971] failed
Jan 31 19:32:18 Storinator smbd[20034]: [2023/01/31 19:32:18.628410, 0] ../../source3/smbd/files.c:1199(synthetic_pathref)
Jan 31 19:32:18 Storinator smbd[20034]: synthetic_pathref: opening [From F/Wondershare/DrFone/Backup/00008101-001E183C1430001E/68/68995c8b0a3098c2a2ab67f3217939164ad372ac] failed
Jan 31 19:32:18 Storinator smbd[20034]: [2023/01/31 19:32:18.502410, 0] ../../source3/smbd/files.c:1199(synthetic_pathref)
[...the same two-line synthetic_pathref pattern repeats roughly every 120 ms, once per file in the transfer...]
storinator-diagnostics-20230131-1953.zip
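If anyone else wants to gauge how noisy these are during a transfer, a quick count from the console (stock syslog path):

    grep -c synthetic_pathref /var/log/syslog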
  19. I'm in the same boat. I have my main X570/5900X Unraid server at home, but I want an off-site backup at my parents' (plus running a few dockers for them: Home Assistant, cameras, etc.). I'm looking into building/buying hardware to accommodate that. I'm heavily considering something like the TS-473A, but maybe just an HDD stack + RPi would work too. Idk, just started looking around.
  20. 95 days, 1 hour, 58 minutes uptime on 6.11.0. Running: 1x W10 VM, a handful of dockers, some apps. Just updated to 6.11.5; needed a few reboots to get stats working properly. Hopefully stable for a long time again.
Found this useful command for USB flash drive (OS) backups in this reddit post: https://www.reddit.com/r/unRAID/comments/wp44pp/reminder_backup_your_unraid_usb_drive/
zip -r /mnt/user/backups/unraidbackup/flash`date +%d-%m-%Y`.zip /boot
Fan 1 - CPU, 80mm Noctua
Fan 2 - X570 chipset, 40mm Noctua
Fan 3 - chassis fan(s), 120mm Arctic P12 PWM PST CO [25% = 500rpm, 50% = 900rpm, 75% = 1200rpm, 100% = 1800rpm]
Fan 4 - HDD chassis fan, 80mm Arctic P8 PWM PST CO [25% = 900rpm, 50% = 1500rpm, 75% = 2300rpm, 80% = 2500rpm, 100% = 2900rpm]
Fan 5 - future 120mm
Fan 6 - future 120mm
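Building on that one-liner, a sketch with simple retention that could run on a User Scripts schedule (the destination share and the 30-day window are my choices; adjust to taste):

    #!/bin/bash
    # Back up the Unraid USB flash drive (/boot) and prune old archives
    DEST="/mnt/user/backups/unraidbackup"
    zip -r "$DEST/flash$(date +%d-%m-%Y).zip" /boot
    # Keep roughly a month of backups
    find "$DEST" -name 'flash*.zip' -mtime +30 -delete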
  21. Plex server crashed again. 'Tis why I'm hesitant to move Plex/Unifi/Syncthing/etc. over. I set up syslog/Kiwi; hopefully I can catch the error now. https://documentation.solarwinds.com/archive/pdf/kss/kss_getting_started_guide.pdf
  22. The VM is now running d-tools server on Win10 x64 Pro. The server has been up for 8 days since the upgrade to 6.11. Hoping for lots of stability now that I have something production-esque running on the server. Next up: Plex server, video card passthrough, and adding HDDs.