ConnerVT's post in My Dated Mobo will not boot the Unraid flash drive was marked as the answer
Early Ryzen processors (Zen/Zen+) were more problematic than the newer models. For the 5700G: don't overclock your processor, set your memory speed to match your DRAM's rating (3200 MT/s or less, and not an XMP profile), and if your BIOS has the setting, set Power Supply Idle Control = Typical Current Idle.
Things on the Internet are forever, and people repeat what they've seen/read/heard from long ago. Modern Ryzen processors are not as problematic as you may read. After all, the same Zen core chiplets (and I/O dies) are used in both Ryzen and EPYC, and EPYC is a mainstay of datacenters, all running Linux.
ConnerVT's post in If I’ve upgraded licenses from basic to plus to pro, am I able to use the previous 2 licenses on new hardware? was marked as the answer
No. Upgrades are one license.
ConnerVT's post in Migrating strategy? (from old box to new/bigger box) was marked as the answer
20TB isn't an extreme amount to move. Not the fastest to move across a 1Gbps LAN, but not totally unbearable either. I'll suggest two different options:
1. Treat the new server as a hardware upgrade -
Think of it as replacing the CPU/MB/power supply/case. Put the drives from your current server in the new system (including any cache/pool drives). Before shutting down your old server for the last time, be sure to update any plugins, and turn off Start Array Automatically as well as the VM and Docker services. Boot from your current licensed flash drive, and sanity check that everything runs as expected. Move on to starting the array; Unraid should remember your drive configuration and start normally. You can then start up Docker, and this should also start normally, as your appdata is still in the same place as it was on the old server.
Once the system has proven itself functional and stable, you can start swapping out the disks. Start with the parity drive, and let parity rebuild. Then a data drive, and let that rebuild. After this, you can move data from the old drives to the new ones (assuming there will be fewer drives in the new system than the old) or just swap in another new drive (see the Unraid Docs for details). Lather, rinse, repeat.
Some benefits of going this route: limited downtime (the array remains accessible through the emulated disk during a rebuild); it stays true to your current configuration (less chance of naming/configuring something wrong and messing things up); data stays 100% in sync between the old and new servers during the migration; and the old server's drives remain in hand as an effective backup in case you have any issues.
2. Create Trial USB for new server -
You can build the new server hardware with all the drives as you want in the final server. Create a Trial USB for the new server, then configure it all. Afterwards, transfer all of the data from the old server to the new server. You can do this via your LAN, or by using Unassigned Devices: pull a drive from your old server, put it in a USB enclosure (or attach it internally in the new server if there are available connections, IMHO a PITA), and copy the data. Continue until all is transferred. (Note: You will want to either enable Turbo Write or just not enable your parity drive until all is transferred.) After all is done, copy the contents of your new trial flash drive over those on the licensed one (SAVE your *.key file from the old USB's config folder first!), then copy the license key back to the original flash drive.
The benefit of doing it this way (at least when copying over the LAN) is zero downtime for the server. But it doesn't guarantee the data will be 100% in sync, as a file/folder you copied at the beginning of the process may have been updated/added/changed before everything finishes copying. Personally, I think this approach is more prone to error.
ConnerVT's post in Replace 2tb w/10tb Rebuild at 28mb/s? was marked as the answer
It is usually a crap shoot trying to diagnose why without a diagnostics file or much info to go on. Assuming there are no other issues (aside from the errors mentioned on the drive being replaced, which shouldn't have any effect), my first guess is that there was some read/write activity happening in the array during the rebuild. That's assuming the ~30MB/s was happening during the rebuild stage.
Speeds during parity check/repair and drive rebuild can be funny when there are different sized drives in an array. I'll use my array as an example, which has both 8TB and 16TB HGST drives. Drives access faster on the outer (first) tracks, slower on the inner (last) tracks. My read access is roughly 250MB/s at the start, 150MB/s at the end. Obviously writing is slower (and slower still when calculating and writing parity or data).
It takes a few moments for the reads of multiple disks to sync up, then the process starts off at the fastest speed (250MB/s). As I have both 8TB and 16TB drives, the speed tracks that of the smallest drives (8TB), so for the first 50% of the process, speeds ramp down to 150MB/s. Once the 8TB drives are no longer being accessed, the speed jumps back up to about 200MB/s, ramping back down to 150MB/s at the end. (Remember, these are parity check speeds, read only; a read/write parity repair or data rebuild is obviously much slower.)
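As a rough back-of-the-envelope, the ramp described above can be turned into a time estimate. The figures below use the approximate speeds from my array (250MB/s down to 150MB/s), and the per-span averages are simplifications for illustration, not measurements:

```shell
# Rough parity-check time estimate for a mixed 8TB/16TB array.
# First 8TB span: all drives read together, pace set by the 8TB drives,
# ~250 MB/s down to ~150 MB/s, call it ~200 MB/s average.
first_span_mb=8000000     # 8 TB expressed in MB
first_avg=200             # MB/s

# Second 8TB span: only the 16TB drives remain, ~200 MB/s down to
# ~150 MB/s, call it ~175 MB/s average.
second_span_mb=8000000
second_avg=175

t1=$(( first_span_mb / first_avg ))     # seconds for the first half
t2=$(( second_span_mb / second_avg ))   # seconds for the second half
total=$(( t1 + t2 ))

echo "Estimated parity check: ~$(( total / 3600 )) hours"
```

Any concurrent array activity breaks this model badly, since the seek thrashing pulls speeds far below the sequential figures used here.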
If there is any sort of array activity happening other than the rebuild operation, the synchronization of the reads gets messed up, so the drives thrash around a bit, really slowing down the drive rebuild.
ConnerVT's post in Do I use preclear when adding new parity drive was marked as the answer
There is no need to preclear a drive before using it for parity. The preclear application writes zeros to the entire drive to prepare it for use as a data drive in the Unraid array. Installing a new parity drive requires parity to be rebuilt, which writes to every location on the drive anyway.
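For intuition on why the parity build touches every block regardless: Unraid's single parity is the XOR of the data drives at each position, so building parity means computing and writing a value for every block on the parity drive. A toy sketch with one pretend block per drive (the values are arbitrary):

```shell
# Pretend one block read from each of three data drives (arbitrary values).
d1=170   # 0xAA
d2=51    # 0x33
d3=15    # 0x0F

# Building parity: XOR the data blocks, then write the result.
# This is done for every block position on the parity drive.
parity=$(( d1 ^ d2 ^ d3 ))
echo "parity block: $parity"

# If drive 1 later fails, its block is recovered from parity + survivors.
rebuilt=$(( parity ^ d2 ^ d3 ))
echo "rebuilt d1:   $rebuilt"
```

Since every parity block gets written during the build, any zeros preclear wrote beforehand are simply overwritten.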
Here is the relevant section from the Unraid manual for replacing a Parity drive:
ConnerVT's post in Fumbling thru the documentation was marked as the answer
Unraid cannot run on your Mac; it does not run on ARM processors, only x86 Intel/AMD processors.
Unraid is basically a NAS (Network Attached Storage) server with native resources to run both Docker containers and KVM virtual machines.
ConnerVT's post in Unraid GUI not on primary AMD iGPU display with nVidia in PCIe was marked as the answer
Following up to close this thread. I jumped over to another support thread, where ich777, our master of all things display related, helped me out.
I had tried this earlier, but it didn't seem to work. It turns out it *does* work perfectly from a power-off start. There is some other issue going on, where a reboot does display on the AMD iGPU, but in a strange 640x960 resolution (the output is long and squished). So at least a partial solution, and one I can work around now that I understand what's happening.