Everything posted by ConnerVT

  1. Two steps. First is to make sure the new system works as well as the old system. Then start swapping drives, starting with the Parity drive.

     As for two Parity drives, that is complete overkill. Dual parity allows two drives in the array to fail while keeping redundancy. Typically folks with 10+ data drives may need this. For a small number of 2TB drives? Not worth the bother and complexity, and it adds more opportunities to do something wrong. If you are concerned about your current drives failing, you would be advised to replace them before doing this upgrade.

     For replacing the cache (Pool) drive, Spaceinvader One's video is a good tutorial (I used it the first time I did this), and the procedure is in the manual HERE.
  2. If a drive fails in my backup server, the restore process is:
     - Replace drive in backup server.
     - Run backup script from main server.
     My butt feels pretty pain free.
  3. I am one who likes to do things in baby steps, rather than making a number of changes all at once. It makes it much easier to diagnose and correct any issues that come up. This is my suggestion.

     First, Unraid handles disk assignments by drive serial number. The CPU and motherboard do not matter, so you can move all of your drives to the new server hardware and all should work as it did on the old hardware. (Sometimes drive controllers don't behave in the same manner and don't pass the drive serial number info through identically.) Handling of Docker and VMs is a bit more complex, so do that after getting your array in order.

     Only start this if your current system has no issues - Parity is valid and there are no data drive errors. Personally, before shutting down the old server for the last time, I would set Docker, VM and Array not to auto-start, so you can start them when you are ready to do so. If you don't already have Unassigned Devices installed, do this before moving to the new hardware. Also make a list of which drives/serial numbers are assigned where (a print of the Main tab works, or see the command-line sketch after this post) and make a local backup of your flash (Main > flash > Flash Backup) and Diagnostics (lots of good info if needed).

     You should be able to install your old drives and new drives into the new server (assuming enough SATA ports; if not, install your current array/cache and as many other drives as possible). When it boots, check your disk assignments. All of your array/pool assignments should be the same as they were on your old server, and your new drives should show as unassigned devices. Start the array, and check all is working as expected.

     Your plan for upgrading the disks in your array is flawed. All parity drives need to be at least as large as the largest data drive in the array. You also made it a bit more complicated than necessary. The first thing to do is replace the Parity drive and let it rebuild. You can then reassign a new 4TB drive to rebuild a 2TB drive's data. Having the 2nd parity drive, as you suggested, is not much benefit - the old 2TB data drive will be sitting in Unassigned Devices and is a backup of the data being rebuilt. Repeat until you have rebuilt all of the 2TB drives onto the new drives.

     For the Cache drive, follow the standard cache drive replacement procedure in the Unraid manual. Typically you use the Mover to move everything onto the array, reassign to your new M.2 drive, and use the Mover to move it all back. Worst case, you may need to delete your docker image file, recreate it and reinstall your dockers (Docker > Add Container > Select Template).

     I won't get into VMs, as they are very sensitive to major hardware changes, and I've given you a lot to chew on for now. If coming from a healthy server, it really isn't all that scary or difficult. Even easier when doing things one step at a time.
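     On the serial-number list: a screenshot of Main works fine, but you can also capture the serial-to-device mapping from the command line. A minimal sketch using standard Linux tooling (nothing Unraid-specific):

         # List drives by their serial-number-based IDs, along with the
         # /dev/sdX device each currently maps to (partitions filtered out)
         ls -l /dev/disk/by-id/ | grep -v part

     Save that output alongside the flash backup and you can always reconstruct which physical drive was assigned where.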
  4. There is a 4GB version of the T400 as well.
  5. Thanks for the reply, as well as having me revisit my old thread. I will likely be updating the script in the next few weeks (when I find time - who has any to spare?). My backups have been failing at times, which I believe is due to some of my dockers not setting permissions correctly. As what I'm backing up is all media files, I will probably have the script set owner and permissions before the first rsync command. I would also like to add a pass/fail check to each sub-directory (Movie/TV/Music) to make the messages more informative - roughly along the lines of the sketch below. So if you are still working on this, you may want to check back on that thread, as I will likely post a revised version of the script there.
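     For anyone adapting things in the meantime, the changes described might look roughly like this (paths, hostname and share names are placeholders, not the actual script):

         #!/bin/bash
         # Sketch: normalize owner/permissions, then rsync each media
         # subdirectory separately so each gets its own pass/fail message.
         SRC=/mnt/user/media                        # hypothetical source share
         DEST=root@backupserver:/mnt/user/media     # hypothetical destination
         chown -R nobody:users "$SRC"               # Unraid's default share owner
         chmod -R u=rwX,g=rwX,o=rX "$SRC"
         for dir in Movies TV Music; do
             if rsync -avh --delete "$SRC/$dir/" "$DEST/$dir/"; then
                 echo "$dir: backup OK"
             else
                 echo "$dir: backup FAILED (rsync exit $?)"
             fi
         done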
  6. The title "Scatter" is a bit misleading. What you observed is normal unBALANCED behavior. I generally move smaller chunks of data when I want to spread things across multiple disks.
  7. Anything that takes 1 PCIe lane to 8 SATA ports must be using a port multiplier - the rough math below shows why.
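     Back-of-the-envelope numbers, assuming PCIe 3.0 (~985 MB/s per lane) and SATA III (600 MB/s per port):

         # 1 PCIe 3.0 lane vs. 8 SATA III ports, rough bandwidth budget
         echo $(( 1 * 985 ))   # host link: ~985 MB/s total
         echo $(( 8 * 600 ))   # what 8 ports could demand: 4800 MB/s

     The only way to hang eight ports off a link that small is a multiplier sharing the lane.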
  8. I also believe that for Cloudflare Tunnel you need a true domain (not a dynamic one). And only certain types of traffic can be sent through tunnels - http, ssh, rdp, and a couple(?) more.
  9. You should be able to address an Unassigned Devices mounted USB drive as /mnt/disks/DRIVE_MOUNT_POINT, as in: /mnt/disks/EXT_USB_FAN_1

     I do something similar, using rsync and ssh to back up my main server's media directory to a second Unraid server (a minimal sketch is below). Below is a link to the detailed part of the thread I started (worth reading the whole thread) and SpaceinvaderOne's video on setting up SSH keys:
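     A minimal sketch of using that mount point as an rsync destination (source path and hostname are made up; assumes SSH keys are already set up per the video):

         # Push media to a USB drive mounted by Unassigned Devices on the
         # remote server (mount point name from the example above)
         rsync -avh /mnt/user/media/ root@backupserver:/mnt/disks/EXT_USB_FAN_1/media/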
  10. The I226 didn't solve the issue either. It doesn't take much Googling to find many reports, even a few from Intel itself. The question is how the issue manifests itself in the real world. The vast majority of complaints come from home users running Windows machines, with drivers from who knows where and when.

      I have i225-B3 chips (x4) in my firewall running OPNsense. I would have gone with the i226, but at the time they weren't supported by pfsense (I was undecided which I would go with, pf or OPN). I tried a Realtek in my Unraid server, had nothing but problems, and spent a couple of weeks trying everything before switching to an i225-B3 PCIe nic. Immediately I had no issues - with the default driver and settings, Unraid performed reliably. My switch runs Realtek (works great) and a Windows desktop has a USB Realtek (never really sees full speed, but at least 2Gbe).

      In all, 2.5Gbe is a hack by design. If you really need 2.5Gbe throughput all of the time, you should be looking at 10Gb fiber. But for a home network, 2.5Gbe and a little attention to your setup fits the bill. One day this week I downloaded over 8TB of data at 800Mb/s (I throttled to 80% of my 1Gb fiber) to my Unraid box. No issue at all. Yeah, only 1/3 of the 2.5Gb/s speed, but a huge amount of continuous data.
  11. Both, I believe, use port multipliers, which are not recommended for Unraid. In most cases, anything that uses only 2 PCIe lanes will require a port multiplier for 6+ SATA devices.
  12. Neither of the two use cases you gave (using it as a display adapter for the Unraid GUI, and using it in a VM) uses this Nvidia driver in any manner.
  13. I did a bit of research on the Intel i225/i226 chips when planning out my 2.5Gbe update to my network. Sometimes searching for info on the Internet is like reading product reviews on Amazon - you need to sift through a lot of 'stuff' and cull out what is actually useful.

      The vast majority of people having issues with the i225/i226 were Windows users. While the root issue may be hardware (semiconductor) related, it seems the Windows drivers really have not done anything to help with it. Move on to the Linux world, and the user bases of both pfsense and OPNsense swear by the Intel nic, with no reports of issues (that I could find). These are networking-knowledgeable people, many of whom would happily show off their smarts by bashing Intel if given the chance.

      My own experience has my firewall device with i225-B3 chips working with no issues. I did start with a Realtek 2.5Gbe nic in my Unraid server, but never could get it to iperf (anywhere near) full speed, no matter what I tried (the test itself is sketched below). Swapped in an i225 nic, and it was basically plug-and-play.
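      For anyone repeating that kind of test, the usual iperf3 pattern is one end as server, one as client (the address is an example; assumes iperf3 is installed on both machines):

          iperf3 -s                       # on the server under test
          iperf3 -c 192.168.1.10 -t 30    # from another 2.5Gbe host, 30 second run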
  14. Obviously, you must have a 3-2-1 backup plan for all of the data on the backup server. 😝

      I don't run a parity drive on my backup server. While it is nice to have a way to rebuild a lost drive from parity, there is a real cost of implementing parity as well:
      - Write speed is much slower with parity.
      - The extra drive could be used for additional data storage.
  15. I'm willing to be a lab rat tester as well. I am a well-educated novice with both VMs and Macs, but I have recently upgraded my server enough that I've been setting up several VMs just for the learning experience. My fresh eyes may see things that other folks who have spent more time doing this would just accept (or had configured long in the past).
  16. Happy Halloween! Hope all has been good with you.

      I recently upgraded my ISP to 1G symmetrical fiber. Like a little kid waiting for Christmas, I could not wait to try out my new blazing speed. I spent the last week making the last configuration changes to the home network, and all is working as I wish - Unraid server speed tests all in the 1000/1000 Mb/s range, be it from a docker or a VM. Same for my daily driver desktop.

      But I noticed I really didn't see a speed increase from this SABnzbdVPN docker. VPN provider is PIA. The test from the app itself reports ~19 MB/s (152 Mbps), and downloads track at about the same speed. Speed tests, via VPN to the same server, are ~500 Mbps from my desktop and ~400 Mbps from the VM (Win11) on the server.

      I was looking through your VPN Docker FAQ and tried what you suggested in Q6. Either I don't follow directions well, or maybe the info in the FAQ is outdated? I added the following to the .ovpn file (replacing the existing cipher and auth entries):

          cipher aes-128-gcm
          auth SHA256
          ncp-disable

      (note: it doesn't format into separate lines in the FAQ) The docker started, but I did not see any change in speed. I did note this in my log, regarding cipher being deprecated (my own guess from it is sketched at the end of this post):

          2023-10-31 11:20:53 DEPRECATED OPTION: --cipher set to 'aes-128-gcm' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM:CHACHA20-POLY1305). OpenVPN ignores --cipher for cipher negotiations.

      Curious if I did something wrong here, and any other suggestions you may have. It would be great to get some more speed out of SABnzbd.

      Another quick question - to disable the VPN, shouldn't I just need to set VPN_ENABLED to no in the Docker template? I tried it once for troubleshooting, but had issues restarting.

      As always, thanks for your guidance and the great packages.
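      My guess, going by that log line and the OpenVPN 2.5+ docs, is that the non-deprecated way to pin the cipher is data-ciphers rather than cipher - untested on my end:

          # Replaces the deprecated 'cipher' line in the .ovpn file
          data-ciphers AES-128-GCM
          auth SHA256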
  17. Both drawings are essentially the same setup. Routers with WiFi have what is basically a 5-port switch: 1 port wired internally to the WiFi AP and 4 external ports.
  18. Good choice. In fact, I have that same TEG-25GECTX in a drawer under the desk where my server resides. I had all sorts of issues with that card, with iperf tests running from 1.6G all the way down to 400M (between the server and my 2.5G firewall). It was before the RealTek driver plugins hit Community Apps, so maybe it can be made to work now. But the Intel chips are plug and play, and the nics are the same price, so why bother? Someday I will pop the unused card into my backup server. Right now, I only have available jacks in my 1G switches; the 2.5G switch is full. As the server only wakes twice a month to rsync backup media from the main server, 1G is more than fast enough.
  19. For all of the devices on my network (including VMs), I just reserve an IP address on my router.
  20. Well, yes and no. While obviously the size of the largest drive in the array (and by design, the Parity drive) is a major factor, having smaller, slower drives mixed in can also play a significant role in how long the overall parity check/rebuild takes. Assuming, of course, that other drive activity isn't interfering with the drives 'syncing up' their access cycles to maximize the parity check speed.

      All spinning drives start the check at their fastest speed and end at their slowest speed at the top end of their capacity. As a parity check runs only as fast as the slowest drive, having mixed sizes in your array will increase the overall time it takes.

      Let's look at this example: 10 drives + 1 parity in the array - a 16TB parity drive, 9 16TB data drives, and 1 8TB data drive. All drives have a linear access speed curve: 250 MB/s at the first sector, 150 MB/s at the last sector. Once you start the parity check (and give the drives a little time to sync up), you are moving right along at around 250 MB/s. The 8TB drive starts slowing down faster than the others: 4TB along, you are down to 200 MB/s; at 7.9TB you are at 150 MB/s. Once you clear the 8TB point, the speed jumps from 150 MB/s back up to 200 MB/s, then ramps down to 150 MB/s for the rest of the check, but at half the rate at which the first 8TB slowed down (the worked numbers are below).

      This is exactly what I saw when I recently replaced the remaining 8TB drives in my array with ones that matched the 16TB drives. My average speed jumped from 166 MB/s up to 186 MB/s.
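      To make the numbers concrete, with the assumed linear 250-to-150 MB/s curve, the speed of a drive x TB into its capacity C is:

          v(x) = 250 - 100 * (x / C)  MB/s
          8TB drive at 4TB in:    250 - 100 * (4 / 8)   = 200 MB/s
          8TB drive at 7.9TB in:  250 - 100 * (7.9 / 8) ≈ 151 MB/s
          16TB drives at 8TB in:  250 - 100 * (8 / 16)  = 200 MB/s

      The 8TB drive loses 100 MB/s over its 8TB, while the 16TB drives lose only 50 MB/s over that same first 8TB - which is why the check speeds back up once the small drive is done.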
  21. Greetings Adam. Welcome! You may not have been told this during on-boarding, but it is tradition that you buy everyone on the forum a drink, as they are truly your most important support staff. 😄
  22. I can't/won't write up a complete guide (I don't have the time now), but here is what to have on hand that will help you recover from most issues:
      - Flash drive backup
      - Appdata folder backup (assuming all Dockers have their config files in this default location)
      - Print/Pic/PDF of the Main tab (shows drive assignments)
      - Diagnostics zip file

      The first two (Flash and Appdata) can be backed up manually (Flash has a backup button on the Main > flash tab), with the Appdata backup plugin, or from Unraid Connect. The flash drive holds all of the Unraid system configs, as well as your configured Docker templates that you can restore. The Appdata backup has all of the files for your dockers - how you configured the application, databases, metadata, etc. The Main tab showing drive assignments helps if you have major array issues and need to figure out which drive was assigned where. Diagnostics files are always good to have on hand, as they give you a human-readable version of many of the configurations, which may be helpful in a bad system failure. I usually grab a fresh copy of these before a major upgrade (hardware or OS). A manual version of the first two is sketched below.
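      A minimal sketch of doing the first two by hand (paths are Unraid defaults; the destination share is made up, and dockers should ideally be stopped for a consistent appdata copy):

          #!/bin/bash
          # Manual flash + appdata backup sketch
          BACKUP=/mnt/user/backups/$(date +%F)   # hypothetical destination share
          mkdir -p "$BACKUP"
          tar -czf "$BACKUP/flash.tar.gz" -C /boot .        # flash lives at /boot
          rsync -a /mnt/user/appdata/ "$BACKUP/appdata/"    # default appdata path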
  23. You should ask over on the pfsense forum. The typical pfsense/OPNsense configuration is as a router/firewall device. You can likely use it with only the intrusion protection filtering, but it will require more than the basic install and configuration (which you get from the many tutorials out there). I agree with JonathanM. I use one of the Chinese 4-port systems with OPNsense on my network. No complaints on my end, and the quality of the units seems to have improved some since they hit the market a couple of years ago. https://forum.pfsense.org/category/38/general-pfsense-questions
  24. Short answer: If you are just swapping the CPU/MB/DRAM, then it is no problem at all. Long(er) answer: If you have VMs, expect to do some reconfiguration (and possible reinstall) of them. As far as the array and cache, unless you have odd HBA controllers that are being changed, all of the drive configurations should just boot up configured as before.
  25. It looks as if the max height for a CPU cooler in the N3 is 130mm, so you will need to look at low-profile coolers. Nothing with a vertically oriented fan will fit.