Posts posted by NathanR

  1. Been running 6.11.5 for a while now.

    Current uptime is 2 months 22 days.

     

    Can't remember the last thing I did that required me to reboot it.

    I remember now: I added a USB 3.0 card to pass through to my W10 VM so I could run LOR for my Christmas lights.

     

    No crashes.

    Been very happy.

     

    Future Wishes:

    • VM cloning, backup, snapshot, etc.
    • Rebuild server with 13900K, X13SAE-F, 128GB ECC
      • quicksync (GPU) encoding for plex
      • more USB

     

     

    Plugins:

    • Community Applications
    • CA Backup / Restore Appdata
    • Dynamix Auto Fan Control
    • Dynamix File Manager
    • Dynamix System Stats
    • Dynamix System Temperature
    • Network UPS Tools (NUT)
    • Unassigned Devices
    • Unassigned Devices Plus
    • Unassigned Devices Preclear
    • Unraid Connect
    • User Scripts

    Docker:

    • blueiris (not on)
    • DiskSpeed (not on)
    • docker-diag-tools (not on)
    • ESPHome
    • glances
    • Grafana-Unraid-Stack (not on)
    • luckyBackup (not on)
    • MongoDB (not on)
    • OpenProject
    • Phoronix-Test-Suite (not on)
    • plex
    • qbittorrent (not on)
    • syslog-ng (not on)
    • unifi-controller
    • UniFi-Protect-Backup
    • UptimeKuma

    VMs

    • Dev-Home Assistant
    • Home Assistant
    • W10-SVR
    • Win10 (not on)

     

    Decided to make a development home assistant VM to play around with things and not break my house.

    Process still works in Feb of 2024

    1. Download Home Assistant .qcow2 file
    2. Copy to Array somehow
      • Shared folder
      • Upload via Dynamix File Manager app
    3. Create VM (linux)
      • 32GB HDD
    4. Delete the vdisk
      • vdisk1.img
      • /mnt/user/domains/Dev-Home Assistant/vdisk1.img
    5. Run the convert command (see the sketch after this list)
      • qemu-img convert -p -f qcow2 -O raw -o preallocation=off "/mnt/user/domains/Dev-Home Assistant/haos_ova-11.4.qcow2" "/mnt/user/domains/Dev-Home Assistant/vdisk1.img"
    6. Boot the VM
    7. Launch VNC
    8. Get IP
    9. Login
    10. Woot/FIN
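
    For reference, the convert step from a terminal looks roughly like this (just a sketch; 11.4 happens to be the version I grabbed):

    cd "/mnt/user/domains/Dev-Home Assistant"
    # recreate vdisk1.img (deleted in step 4) from the downloaded HAOS image
    qemu-img convert -p -f qcow2 -O raw -o preallocation=off haos_ova-11.4.qcow2 vdisk1.img
    # sanity check: should report "file format: raw"
    qemu-img info vdisk1.img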

     

  2. On 5/11/2023 at 6:28 AM, blacklight said:

    Anyway, if someone could explain to me how there are so many people putting the 200watt+ K cpus into the X13SAE that would be nice.

     

    Thanks in advance

     

    So I don't know why people do what they do.

    What I can say is that the TDP the mobo claims to support is unlikely to matter.

     

    Rationale for the above statement:

    1) 125W TDP is just a classification; real power draw will change depending on workload

    2) Average power draw will be ~114w. Sauce: https://www.techpowerup.com/review/intel-core-i9-13900k/22.html

    3) 8-pin CPU connector can deliver 235w so the mobo isn't power limited. Sauce: https://www.overclock.net/threads/gpu-and-cpu-power-connections.1773088/

    4) Power components will work until their thermal limits are exceeded, meaning the MOSFETs will deliver more current (power) than they are rated for until they hit thermal limiting. Sauce: https://www.infineon.com/dgdl/ir3550.pdf?fileId=5546d462533600a4015355cd7c831761 (note: this is a generic VRM, I do not know which VRM the X13 uses)

    4A) Servers usually have forced-air cooling, so the heatsinks are sized with very little thermal headroom on the assumption that hundreds of CFM are moving across the mobo. A home server with far less airflow can therefore hit thermal bottlenecks.

    5) Bench/stress peak power ratings are often meaningless (albeit fun, interesting, and useful for finding the limits) for what we would typically use the 13900K for (server, docker, VM, storage).

     

    X13SAE-F mini-review/writeup

    https://www.servethehome.com/supermicro-x13sae-f-intel-w680-motherboard-mini-review/3/

     

    On 2/21/2023 at 1:03 PM, spamalam said:

    Finally got round to buying this a few weeks back and just assembled it:

    • i9 13900k
    • 128GB ECC memory - 4x KSM48E40BD8KM-32HM (gets clocked down)
    • Supermicro X13SAE-F , Version 1.02
    • 3x WD 850X with heatsinks
    • I used my old SAS card: Broadcom / LSI SAS3224 PCI-Express Fusion-MPT SAS-3 - this only worked in Slot 5
    • Quad PCIe 3.0 NVMe card with 4x WD 850X - this worked fine in Slot 7 - no bifurcation on this board, so I needed this card to use more M.2s; note it's limited to 3500MB/s per drive.

    The motherboard shipped with version 2.0 of the bios, so no need to flash or mess around:

    Supermicro X13SAE-F , Version 1.02
    
    American Megatrends International, LLC., Version 2.0
    BIOS dated: Mon 17 Oct 2022 12:00:00 AM CEST

     

    Everything worked out of the box. I enabled modprobe i915 in /boot/config/go and passed --device=/dev/dri through to Plex, and can see it using hw transcoding now. Not much config needed:
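
    In concrete terms that's roughly the following (a sketch based on the description above; the Plex flag goes in the container's Extra Parameters field):

    # /boot/config/go - load the iGPU driver at boot
    modprobe i915

    # Plex docker template, Extra Parameters:
    # --device=/dev/dri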

     

    Thank you! This is awesome. You probably saved me 20+hrs of work.

     

    I am considering moving my 5900X build to intel for two reasons.

    1) iGPU for Plex transcoding (my parents, wife's parents, sister, etc. all need things transcoded; the only direct play is in-house to my NVIDIA Shield)

    2) lower power consumption (more efficient processors for dockers & server VMs)

     

    Build I'm considering:

    • Mobo: MBD-X13SAE-F-O
    • 64GB: MEM-DR532MD-EU48
    • 13700K or 13900K
    • Re-use everything else from my 5900X build
  3. On 3/9/2023 at 7:45 AM, CorneliousJD said:

     

    Sure, just pushed an update to the template, should be live in a few hours for new installs from CA.

    If you already have it installed you can just add the path you have from your screenshot for the same effect.

    Thank you for this!

     

    For some reason a recent Uptime Kuma or Unifi Controller update broke them talking to each other for webpage status.

     

    Found out the docker host connection has now been added. Woot!

     


  4. That makes sense. I was trying so hard to figure out how to do it.

     

    Quote

    The user will go to each disk and create the folders shown in red to create the storage as listed above.

    This was the key. Perfect link'd article!
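
    In practice that should just be something like this from the console (disk assignments per the Media layout I'm after):

    # create the Media tree only on the disk(s) each category should live on
    mkdir -p /mnt/disk1/Media/TV
    mkdir -p /mnt/disk2/Media/Movies
    mkdir -p /mnt/disk3/Media/Music /mnt/disk3/Media/Photos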

     

    Let me try it when I get home tonight and see how I fare :)

     

    31 minutes ago, itimpi said:

    You could also use level 0 for Split Level (manual management) and it would achieve a similar result.   This may be more intuitive?

    Yes, I think this will be more intuitive (to my brain).

  5. 6.11.5

     

    I want this type of file structure:

     

    Media - SMB Share [one mapped drive for all users]

    Media\TV - Disk1

    Media\Movies - Disk2

    Media\Music - Disk3

    Media\Photos - Disk3

    Media\Projects - cache >[mover]> Disk3

     

    How do I get there?

    Or is this even possible?

     

    I already have root share setup, I thought this would solve my problem; but not really.

     

    Resources:

     

     

    Similar thread:

     

  6. +1

    I am constantly modifying my Unraid setup as I learn new things.
     

    I rarely need to turn off my HA VM or my Win 10 VM; yet taking down the array does that.

     

    Worse - turning on the array doesn't auto-start the VMs, only the Dockers :(

     

    ---

     

    I read numerous times about how to implement this when skimming through this thread.

    I also see questions about licensing, what to do when a mapped share is down, how to handle different locations for dockers/vms, etc.

     

    The devs really would be the experts on the 'how', as there are numerous obstacles to overcome:

    Array parity behavior, array licensing behavior, access to drives if the array is down, etc.

    I get it, there's lots to think through.

     

    ---

     

    Here's my take.
    Requirements - a working cache pool

    This forces users to keep Docker/VM data off the array, which limits the feature to more experienced users.

     

    Licensing - Pro version enables support for multiple offline pools.

    Solves all licensing issues. All the licensing talk is too complicated. Why make code changes when you can make policy changes that make sense?

     

    Warning - Any mapped shares that a docker/VM uses will not work and can crash your docker/VM if the internal software cannot handle it.

    Why make this a Limetech issue? Push the responsibility on the user & container developer.

     

    Ex. 1) Plex - libraries shared to spinning rust on array - okay who cares. Plex handles missing library locations gracefully.

    Ex. 2) Windows VM  - can't find a share. Windows will handle that gracefully (or not) lol.

    Ex. 3) HA, FW, UptimeKuma, Unifi, etc. - these reside entirely inside their docker/VM container. They only read or write outside it if they have a mapped syslog folder or something. Too complicated to check whether that's on the array, etc. Who cares; responsibility is on the user.

     

    New Buttons & GUI changes

    Array-Start/Stop button (1) - main array

    Array-Start/Stop button (2) - cache pool 1

    Array-Start/Stop button (n) - cache pool n [drop down menu button]

     

    [Screenshot: array multiple pool spindown]

     

    ---

     

    Implementation 0.1
    'All or nothing.'

    Shares cannot point to the 'DockVM' pool [they can, but if they do, then you can't keep DockVM pool online]

    All Online Dockers volume mappings point to DockVM pool

    All Online VM disk mappings point to DockVM pool [this simplifies the programming required. Sure, there are dedicated disks, USB disks, etc. Who cares; start simple]

     

    Offline Dockers/VMs are not considered [if they're offline and you're turning off the array but not the DockVM pool...who cares]

     

    (3 pools) [You could forgo the cache pool of course, but most pro users are going to have a setup like this]

    Array [unraid] [hdd]

    Cache [whatever; raid, probably 1 or 0+1] [probably ssd or nvme]

    DockVM [whatever; raid, probably 1 or 0+1] [probably ssd or nvme]

     

    Clicking stop on the array - "error: docker.list & VM.list are online & use array volume mappings"

    I would use this right now. I'd make my current cache drive only for docker & vm until implementation 1.0 can roll out.

    I don't need to copy files to my cache and then use the mover to move to spinning rust.

    I'd rather have my dockers & vms stay up right now.

    Plex docker running? - I'd have to shut it down. Who cares, if the array is offline no one is watching movies anyways.

    But my HA & W10 VM (with internal database on vdisk) stay up; woot.
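
    For what it's worth, here's a rough sketch of the kind of check I'm picturing behind that error (purely illustrative, not how Limetech would necessarily implement it):

    #!/bin/bash
    # flag running containers with bind mounts that touch the array (user shares or diskN)
    for c in $(docker ps --format '{{.Names}}'); do
      mounts=$(docker inspect --format '{{range .Mounts}}{{.Source}} {{end}}' "$c")
      if echo "$mounts" | grep -qE '/mnt/(user|user0|disk[0-9]+)/'; then
        echo "$c is online & uses array volume mappings: $mounts"
      fi
    done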

     

    ---

     

    Implementation 1.0
    'Almost All or nothing.'

     

    All Online Dockers volume mappings point to cache

    All Online VM disk mappings do not point to array

     

    All Offline Dockers cannot start without both array & cache 'online'

    All Offline VMs cannot start without both array & cache 'online'

     

    The only reason this is separate from 0.1 is that the shares are mapped to the same cache pool as the dockers & VMs, so I'm assuming there will be some code changes & checks to ensure compatibility.

     

    Plex docker - must be shut down to shut down the array [but keep the cache online]

     

    ---

     

    Implementation 2.0

    'Granular approach'

     

    All of the above but:

     

    Checks if the primary storage for docker or VM is on the cache pool.

    Ignores mappings to array (but reports warning)

     

    Plex docker - stays running; volume mappings to array are offline/not working.

    Hopefully container doesn't crash.

     

    ---

     

    Implementation 3.0

    'Granular Auto Approach'

     

    All of 2.0 plus the ability to auto-on/off dockers & VMs based upon which pool is going up/down.

    Global setting to control whether Docker/VM auto-start applies only to an Unraid OS restart or also to starting/stopping the array/pool(s).

  7. On 2/3/2023 at 8:19 AM, MAM59 said:

    Btw, it is perfectly normal that "Autostart" works after a reboot but not after a manual start of the array.

    This is the answer to my question.

     

    My expectation of VM behavior would be identical to docker behavior.

     

    ---

     

    I should have been much more clear with my post and my image is confusing as it shows nothing wrong. I apologize.

     

    Expected Behavior:

    1. Set [certain] VMs & Dockers to auto-start

    2. Stop array

    3. Start array

    4. [certain] VMs & Dockers previously set to auto-start...do just that >> auto-start

    5. Restart Unraid

    6. [certain] VMs & Dockers previously set to auto-start...do just that >> auto-start

     

    Current Behavior:

    1. Set [certain] VMs & Dockers to auto-start

    2. Stop array

    3. Start array

    4. No VMs auto-start & [certain] Dockers previously set to auto-start...do just that >> auto-start

    5. Restart Unraid

    6. [certain] VMs & Dockers previously set to auto-start...do just that >> auto-start

  8. I'm in the same boat.

    I have my main x570/5900X Unraid server at home, but I want an off-site backup at my parents' place (plus a few dockers running for them: Home Assistant, cameras, etc.).

    I'm looking into building/buying hardware to accommodate.

     

    I'm heavily considering something like the TS-473A.

     

    But maybe just a HDD stack + RPI would work too.

    Idk, just started looking around.

  9. 95 days, 1 hour, 58 minutes uptime on 6.11.0

    Running:
    1x W10 VM
    Handful of dockers
    Some apps

     

    Just updated to 6.11.5

    Needed a few reboots to get stats working properly.

    Hopefully stable for a long time again :)


    Found this useful command for USB Flash Drive (OS) backups from this reddit post: https://www.reddit.com/r/unRAID/comments/wp44pp/reminder_backup_your_unraid_usb_drive/

    zip -r  /mnt/user/backups/unraidbackup/flash`date +%d-%m-%Y`.zip /boot
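
    Something like this makes it schedulable via the User Scripts plugin (a sketch; path is the one from the command above, and the 90-day prune is optional):

    #!/bin/bash
    # back up the Unraid USB flash drive (/boot) to a dated zip on the array
    backup_dir=/mnt/user/backups/unraidbackup
    mkdir -p "$backup_dir"
    zip -r "$backup_dir/flash$(date +%d-%m-%Y).zip" /boot
    # prune backups older than ~90 days so the share doesn't fill up
    find "$backup_dir" -name 'flash*.zip' -mtime +90 -delete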

     

    Fan 1 - CPU 80mm Noctua
    Fan 2 - x570 Chipset 40mm Noctua
    Fan 3 - Chassis Fan(s) 120mm arctic p12 PWM PST CO  [25% = 500rpm, 50% = 900rpm, 75% = 1200rpm, 100% = 1800rpm]
    Fan 4 - HDD chassis fan 80mm arctic p8 PWM PST CO  [25% = 900rpm, 50% = 1500rpm, 75% = 2300rpm, 80% = 2500rpm, 100% = 2900rpm]
    Fan 5 - future 120mm
    Fan 6 - future 120mm

  10. VM is now running d-tools server on Win10x64 Pro.

    Server has been up for 8 days since upgrade to 6.11

     

    Hoping for lots of stability now that I have something production-esque running on the server.

    Next will be the Plex server, video-card passthrough, and adding HDDs.

  11. Brainstorming/thoughts post...

     

    Looking into converting my .xva (XCP-ng) VMs into an Unraid-usable format (qcow2?)

    Maybe this? - https://forums.unraid.net/topic/69244-convert-vmware-vm-to-kvm/

     

     

    Looking into converting my physical disk into an Unraid VM disk image.

    https://kmwoley.com/blog/convert-a-windows-installation-into-a-unraid-kvm-virtual-machine/

    https://kmwoley.com/blog/reduce-shrink-raw-image-img-size-of-a-windows-virtual-machine/

    A little worried I can't shrink my 500GB NVMe drive down to a manageable 50GB lol
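
    If I go that route, the imaging step itself should be something along these lines (a sketch; the source device and destination path are placeholders, not my actual setup):

    # clone the physical Windows disk into a raw vdisk for a VM
    # (shrinking it down afterwards is what the second link covers)
    qemu-img convert -p -O raw /dev/nvme0n1 "/mnt/user/domains/Win10/vdisk1.img"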

     

     

    Looking into creating new W10 VM

    I was looking here > https://wiki.unraid.net/Manual/VM_Management#Basic_VM_Creation

    But that is wrong; these are the correct steps to install the HDD driver > https://wiki.unraid.net/index.php/UnRAID_Manual_6#Installing_a_Windows_VM

    Latest virtio drivers: https://github.com/virtio-win/virtio-win-pkg-scripts/blob/master/README.md

  12. I have a NH-D9L on the CPU now

    NF-A4x10 PWM on the X570 chipset, which gets super hot. (ASRock really should have put a fan & larger cooler on the X570 chipset.)

     

    pwm     cpu-rpm    x570-rpm
    100%    1900       5000
    75%     1500       4000
    50%     1100       2700
    25%     600        1600

     


  13. Update.
    Now running Version 6.11.0 2022-09-23

     

    Perl & iperf3 are included now, so...yay.

    My 10G NICs are confirmed working; woot!

    I was having issues confirming speeds from within the W10 VM.
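
    For reference, the iperf3 run itself is simple (IP is a placeholder; server side on Unraid, client inside the VM, or vice versa):

    # on the Unraid box
    iperf3 -s
    # inside the W10 VM, 4 parallel streams against the server's IP
    iperf3 -c 192.168.1.10 -P 4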


     

    Also, I have my 1TB 980s in what I think is a RAID 1 pool. But I've read conflicting posts about whether this is operating properly or not.
    Pressing rebalance as RAID 1 does nothing.
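
    One way to double-check the profile from the terminal (assuming the default btrfs pool mounted at /mnt/cache):

    # "Data, RAID1" in the output means the pool really is mirrored
    btrfs filesystem df /mnt/cache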


     

    I also modified the temp warning/error thresholds to 85°C so I didn't get the annoying Samsung 980 bug. I really wish they would do a firmware release.
    BTW...how do I do an NVMe firmware update?

     

    Oh! And I'm super happy that the temp sensors work out of the box now (well, there is a tiny config to-do). Perl or the Linux kernel with AMD tunings, whatever; don't care, I'm happy :)


     

     

    Figured out exactly which sensor is which by looking at IPMI & running P95 in a VM.

     

    CPU @ 70°C and X570 @ ~50°C lol


  14. 20 hours ago, ConnerVT said:

     

    I only needed to set Typical Idle Current (and appropriate DRAM settings) for my 1st Gen Ryzen 1500X. The 1st gen were the most notorious for Linux issues, but it hasn't had one hang/freeze/crash for 1.5+ years since setting it correctly.

     

    As for Nerdtools, it will get caught up eventually.  This is the price one pays to be an early adopter of a new revision of software (especially a release candidate).  If a test bed system, no worries.  For one you rely on, well... 😞

    Thanks!

     

    I haven't had time to look into why it crashed the second time.

    Any pointers on where to look in the logs?

    I read that the logs don't include anything from before the reboot, IIRC.

    Idk, I'll investigate in the future.

     

    Re: RC - I completely agree, I usually don't run RC stuff, but I saw all the significant improvements (in-general & x570/kernel) and wanted to try it out. My SVR right now is just a test-bench and is running things that aren't critical (yet) just so I can get used to Unraid in a relaxed environment. That said, I am in the process of moving computer cases, installing 10G, and swapping drives so I'll be ready soon to move the server towards production :)

     

    Edit: Just read your story :D I think I know I am in good company xD

  15. Hi Spencer,

     

    Good copy, thank you.

     

    I am now the proud owner of Pro :)

     

    I am excited for what the future holds!

     

    1 hour ago, SpencerJ said:


    Hey there, 

    You would need to purchase and upgrade to Pro before the window closes. You can’t purchase an upgrade and activate later. 
     

    This promo was aimed at current users, hence the Pro upgrades only being a part of it but this does not preclude new users from taking part. 
     

    I can’t see any other downsides other than potential other sales down the road. 

     

    Thanks very much for trying us out and welcome to the community. 
     

     

  16. TL;DR

    1. Purchase license now, activate later (when ready?)
    2. Downside to upgrading Basic > Pro?

     

     

    Are there any downsides to purchasing a Basic license and then upgrading to Pro?

    It seems I can save $14.70 going from Basic to Pro.

    https://unraid.net/upgrade-sale

     

    I have no intention of purchasing any other license than Pro.

    I want to take advantage of the sale before it ends at midnight.

    Still, it seems silly not to offer the discount on a straight Pro purchase when upgrading (might, hence my question) accomplish the same thing.

     

    I presume the only downside is two license keys and two purchase receipt emails?

     

    ---

     

    I have 8 days left on my trial, and the server is running fairly well. I have plenty still to do. But I am very happy with the setup thus far.
    I really like the Unraid community, the plugins/apps/dockers/support, adding drives slowly, etc. I also love the cost model: one and done. My favorite. I'd do anything to avoid SaaS; I freaking hate that stuff. I know it's better for a company's cash flow, but I am so thankful for this model. Thank you, Lime Tech.

     

    I wasn't intending on licensing it yet because I wanted to have it setup the way I wanted and then transfer to the new USB drive. [As I feel like the licensing only works on one flash drive and I don't want to license this flash drive; I'd rather transfer it. But I also want to test out the new flash drive, etc. I don't know what to do.]

     

    Can I purchase the licenses now and 'activate' them later?