TimTheSettler

Members
  • Posts: 138
  • Joined
  • Last visited
  • Location: Ontario, Canada

TimTheSettler's Achievements

Apprentice (3/14)

Reputation: 40

Community Answers

  1. First of all, I completely agree with @Miss_Sissy's post. She gives good advice. I would add that it might be a good idea to hire one of the paid support people to help you through the process (https://unraid.net/blog/unraid-paid-support). I was a Windows guy, but Windows was getting too expensive for me and all the software that ran on Windows was just as expensive. I needed a more reliable but cheaper option with a good GUI (I'm not a Linux guy). I tried TrueNAS and it's good and powerful, but complicated. I tried unRAID and I really like it. I think you will too.
  2. Sorry, a bit late to the game here. When I hear of HA I think of clustering/replication and load balancing. You should read the following articles:
     https://www.ituonline.com/blogs/achieving-high-availability/
     https://mariadb.com/kb/en/kubernetes-overview-for-mariadb-users/
     Proxmox, VMware, Hyper-V, and unRAID are all virtual machine environments (VMEs), but to different degrees. These alone don't give you High Availability; it's the apps inside that need to be designed for it. But if you have multiple VMEs and, of course, multiple VMs (or dockers) inside them, then you can set up the apps with the proper clustering (vertical/horizontal), replication, and load balancing, assuming the app supports those strategies.
     In your case, though, the goal is to minimize downtime when you're doing an upgrade, and you've hit upon the solution (there's a rough sketch of this swap after this list):
     • Copy/back up the original container (Container A) to another container (Container B), then point the app to that backup.
     • Upgrade the original container (Container A).
     • Copy/back up Container B (the backup) to the original container (Container A), then point the app to the original again.
  3. Good to know. I guess I've been lucky or prudent with my settings. Many people like to use the Mover but I actually turned it off. My array is used for all my data and my cache is used for appdata. The beauty of unRAID is that you can play around with stuff. Get started with some simple, basic settings and then tweak things as you go.
  4. ECC is quite expensive and you already have the hardware. It's also one of those one-in-a-million situations that isn't worth worrying about. Will you be using a parity HDD?
     For the file system, use XFS. Wait for ZFS to mature and maybe use it later (you can switch over at a later time). There's a whole list of benefits and disadvantages, but it's better to keep things simple for now and use XFS.
     It's a good idea to double up the NVMe and SSD so that they are mirrored: use two NVMe drives as Cache1 and two SSDs as Cache2, or use the NVMe and SSD you have today as Cache1. Use btrfs as the file system for the cache (keep it simple and avoid ZFS for now); all cache devices in the same btrfs pool are mirrored. Use one cache for appdata (docker), and be sure to use the Appdata Backup plugin to back that up. You don't need daily backups unless the apps you're using change every day. Use the other cache for the Mover.
     You don't need to do anything special for Plex. It should be fast enough reading from the HDD (I've never had problems streaming Plex from my server).
     Use the NVMe cache for the Mover and the SSD for the apps; that way file transfers are nice and fast, and the SSD will be fast enough for the apps. Not sure what issues you're talking about. I've been using this setup for a few years now.
  5. Just so that I've got it right: you back up the photos folder on the file server to the photos folder on the backup server using Duplicacy, and I would then copy that Duplicacy backup folder/file to the cloud. If the photos folder on the file server is lost, you restore from the backup server. If the photos folder on the backup server is lost, you can rebuild it from the photos folder on the file server or from the Duplicacy folder/file in the cloud. If both the file server and the backup server are lost, you restore from the cloud.
  6. I see what you're saying. I generally copy the data so that my two folders are identical, point syncthing at those folders, and from that point on let it manage the synchronization. You might want to give it another try because it's constantly updated, and maybe there was a flaw when you first used it. One last note: I've come across a couple of odd-ball cases where the folder/device gets confused. It's usually because I tried to do something I shouldn't have (as Rysz points out), like pausing the folder for too long. If something looks weird, you can recreate the folder (delete it and create it again) or reset the folder using the curl API call documented here (there's a Python version of the same call after this list): https://docs.syncthing.net/rest/system-reset-post.html
  7. Define "huge amount of data": a large number of files/folders, or large file sizes? I use syncthing to sync data across four of my servers at different locations. No problems in general, but I have noticed that large files (8GB+) tend to take a while to sync. For example, a single 8GB file will take much longer than 100 files totalling 8GB.
  8. Please don't surrender. I've used Firefox for about 16 years now. Back then it was far superior; now the major browsers seem to be roughly equal, although some have features that others don't. My favourite feature (which all browsers have now) is the password saver, but if you use it you should be using a master password. Firefox is open source (always has been) and not linked to those big tech companies. I also had some problems and switched to Edge for about 3 months (only for unRAID), but I haven't noticed anything lately so I switched back.
  9. I originally tried TrueNAS too. I found it confusing and had lots of trouble with the permissions/ACLs. I felt it was an advanced product requiring advanced knowledge; I needed something simple but powerful, and I think unRAID is it. Don't get me wrong, unRAID is advanced too, but it simplifies that complexity. I wish TrueNAS the best because you need competitive products in the marketplace to keep things interesting. You mentioned 3x4TB HDDs and then a single 8TB SSD. One of those 4TB HDDs failed, so you decided to scrap the other two and use a single 8TB? I assume you have a backup strategy in place in case that 8TB bites the dust?
  10. Yes, I can see how this part seems confusing. What you need to do is back up the docker config somewhere else. The appdata directory (where the docker config is stored) is on my cache. I then use the Appdata Backup plugin (from Robin Kluth) to back up the config to the array, and that backup is then synchronized to another server using syncthing. If I lose the cache, I have the backup on the array. If I lose the array, I still have the docker config on the cache. If I lose both the cache and the array (the server is fried, burnt, or stolen), I have the backup somewhere else. You don't need to use syncthing (or the plugin); you can back up the cache to a flash drive instead (there's a bare-bones sketch of that idea after this list). Did I miss anything?
  11. Are you running duplicati as a docker app? You should have all your docker apps backed up regularly. The server is gone, so what do you do? The following is a very basic process (maybe someone else here has a better list):
     1. Buy new hardware, including new hard drives.
     2. Restore the flash backup to a new USB (re-assign the license key to the new USB).
     3. Boot unRAID using the restored flash backup.
     4. Assign the new hard drives to the array.
     5. Install the docker apps you used to have, then replace the config folders with your backups.
     6. Using the restored duplicati docker app, retrieve your data.
  12. I'm not sure you need these unless you're running a lot of servers like Newtious does. I run a Minecraft server and a couple of other game servers for friends and family, and I don't need to manage them in any special way. This is what I do: I run DuckDNS so that my friends can find my server easily, and I forward the port for the game on my router. Here's some additional info about security (beyond the link that trurl has):
     • If you expose a port on your router pointing to your unRAID server, keep in mind that someone will find out that the port is open and "listening". This will happen whether or not you have a host name, since people port scan IP addresses all the time (there's a tiny example of such a probe after this list).
     • If someone finds an open port, their first assumption is that it's being used by whatever app typically uses that port, so one way to hide the purpose of a port is to use an unconventional port number for that app (or game). The drawback is that some services or games expect a specific port number.
     • Trust the app or game to have decent security to block bad requests to the port (this is debatable).
     • The docker app is self-contained: if someone gains control of the app, they can only see the data that the docker app has access to. Be aware of this and limit what the docker has access to (this is one reason why using the "privileged" flag is a bad idea).
     This is why I like dockers. They are like a VM, but for a very specific purpose and with a small footprint.
  13. That's cool, but you need the tower defence and puzzle games first. Too bad there isn't a Wordle docker.
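
For the container swap described in answer 2, here is a minimal sketch of what those three steps could look like when driven from Python through the Docker CLI. Everything named here is a placeholder I made up for illustration (container names app-a/app-b, the image tags, and the /mnt/user/appdata paths), and it assumes the app's data lives in a plain bind-mounted directory; how you "point the app" at the other container is app-specific (for a database it's usually just the connection string).

```
# Rough sketch of the swap in answer 2, assuming bind-mounted data folders.
# Container names, image tags, and paths are placeholders -- adapt to your setup.
import shutil
import subprocess

def docker(*args):
    """Run a docker CLI command and stop if it fails."""
    subprocess.run(["docker", *args], check=True)

A_DATA = "/mnt/user/appdata/app-a"   # Container A's data (placeholder path)
B_DATA = "/mnt/user/appdata/app-b"   # Container B's data (placeholder path)

# 1. Build Container B from a copy of A's data, then point the app at B.
docker("stop", "app-a")
shutil.copytree(A_DATA, B_DATA, dirs_exist_ok=True)
docker("run", "-d", "--name", "app-b", "-v", f"{B_DATA}:/data", "myimage:current")
# ...repoint the app to app-b here (app-specific step)...

# 2. Upgrade Container A while everything runs against B.
docker("rm", "app-a")
docker("pull", "myimage:new")
docker("create", "--name", "app-a", "-v", f"{A_DATA}:/data", "myimage:new")

# 3. Copy B's data back to A and point the app at the original again.
docker("stop", "app-b")
shutil.copytree(B_DATA, A_DATA, dirs_exist_ok=True)
docker("start", "app-a")
# ...repoint the app back to app-a, then remove app-b once you're happy...
```

On Unraid you would normally do the equivalent through the Docker tab rather than a script, but the order of operations is the same.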
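The "curl API call" mentioned in answer 6 is Syncthing's POST /rest/system/reset endpoint (documented at the link in that answer). Below is the same call made from Python instead of curl; the address, API key, and folder ID are placeholders, and the key is the one shown in the Syncthing GUI settings.

```
# Reset a single Syncthing folder's database via the REST API (answer 6).
# URL, API key, and folder ID below are placeholders.
import requests

SYNCTHING_URL = "http://127.0.0.1:8384"    # default GUI/API address
API_KEY = "your-api-key"                   # from the Syncthing GUI settings
FOLDER_ID = "your-folder-id"               # shown in the folder's details

resp = requests.post(
    f"{SYNCTHING_URL}/rest/system/reset",
    headers={"X-API-Key": API_KEY},
    params={"folder": FOLDER_ID},          # drop this to reset the whole database
    timeout=30,
)
resp.raise_for_status()
print("Reset requested:", resp.status_code)
```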
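This is not a replacement for the Appdata Backup plugin mentioned in answer 10, just a bare-bones sketch of the same idea: copy the docker config off the cache to somewhere else on a schedule. The /mnt/cache/appdata and /mnt/user/backups paths are assumptions based on common Unraid defaults; point them at your own shares (or a flash drive), and stop your containers first so the configs are consistent.

```
# Minimal stand-in for "back up appdata off the cache" (answer 10).
# Paths are assumptions; adjust them to your own shares or a flash drive.
import tarfile
from datetime import date
from pathlib import Path

SRC = Path("/mnt/cache/appdata")            # where the docker configs live (assumption)
DEST = Path("/mnt/user/backups/appdata")    # a backup share on the array (assumption)

DEST.mkdir(parents=True, exist_ok=True)
archive = DEST / f"appdata-{date.today().isoformat()}.tar.gz"

with tarfile.open(str(archive), "w:gz") as tar:
    tar.add(str(SRC), arcname="appdata")    # one dated archive per run

print("Wrote", archive)
```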
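To illustrate the point in answer 12 that an exposed port will be found whether or not you have a host name: the probe below is roughly all a port scan is. The host and port numbers are placeholders (25565 is just the Minecraft default), and you should only point it at an address you own.

```
# Tiny port probe showing why an open, forwarded port is never hidden (answer 12).
# Host and ports are placeholders; only scan your own address.
import socket

HOST = "your.public.address"        # placeholder: your DuckDNS name or public IP
PORTS = [25565, 8080, 51820]        # Minecraft default plus two other common ports

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        reachable = s.connect_ex((HOST, port)) == 0   # 0 means something answered
        print(f"{HOST}:{port} -> {'open' if reachable else 'closed/filtered'}")
```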