Everything posted by ConnerVT

  1. There are many threads, as well as Dockers/plugins in CA, devoted to backup. But as servers can range from just a few TB to several hundred TB, and people differ in how important they feel their data is (and the cost of protecting it), it is impossible to have a one-size-fits-all solution. My server currently has about 16TB stored in the array, and roughly two dozen Dockers. Most of the array data is media files, which I would hate to lose, but I value them less than the PC backups and photos also stored on the server. My backup plan was fairly inexpensive to implement, and has no recurring costs. I bought inexpensive USB 3.0 enclosures for two old 3TB drives I had on hand. I keep one connected to the server as an Unassigned Device, and weekly run a script to copy what I don't ever wish to lose. On the first of the month, I take this drive to my work, where it sits in my desk drawer. I bring home the other drive, plug it into the server, clear it, and start the cycle again. As it holds backups for the home client computers, it is also very convenient for restoring (Macrium Reflect), as I can just grab the USB drive and connect it to the client computer if needed. Cons to this method are that your backup isn't always fully up to date, and you need to remember to make swapping the drives a monthly routine.
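The weekly copy script described above could be sketched roughly as follows. This is a minimal illustration, not the author's actual script; the share and USB mount paths are hypothetical examples, and it simply mirrors files that are missing or newer on the destination:

```python
import os
import shutil

def mirror_newer(src_root, dst_root):
    """Copy files from src_root to dst_root when the destination
    copy is missing or older than the source (one-way mirror)."""
    copied = []
    for dirpath, _dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        target_dir = os.path.join(dst_root, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(target_dir, name)
            # Copy when missing, or when the source is newer (1s slack
            # for filesystems with coarse timestamp resolution).
            if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst) + 1:
                shutil.copy2(src, dst)  # copy2 preserves mtimes
                copied.append(dst)
    return copied

if __name__ == "__main__":
    # Paths are illustrative: an Unraid share and an Unassigned Devices mount.
    mirror_newer("/mnt/user/backups", "/mnt/disks/usb_backup/backups")
```

In practice most people use `rsync -a` for this; the point is only that a one-way, newest-wins mirror is all the rotation scheme needs.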
  2. Finally completed. Report is attached. You can disregard the CRC errors. They are all from when I initially built this server in January this year. The cheap SATA cables that came with the controller card caused CRC errors with several of the drives. All those cables have been replaced, and I haven't seen another CRC error since. Another interesting tidbit is that I only know about the Disk 1 read errors from the message sent by the Fix Common Problems plugin. I did not get any notification from Unraid, and the Dashboard shows a green thumbs-up "Healthy". I can see the 3 errors listed on the Main screen. So what do you suggest I do from here? As always, many thanks for the assistance!
  3. Re-read what you wrote. Disabled spin-down for Disk 1 and started the Extended SMART Test. Headed to bed and let it run while I slept. When I woke, it was 80% complete. Around 9:00 AM, it ticked over to 90%. It is now 5:30 PM, 8.5 hours later, and it is still sitting at 90%. Is this normal?
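Extended test progress can also be checked from the command line with smartmontools, whose `smartctl -a` output reports a "% of test remaining" line while a self-test is running. Below is a small sketch that parses that figure; the device name is an illustrative assumption, and the exact output wording is taken from typical smartctl reports:

```python
import re
import subprocess

def remaining_percent(smartctl_output):
    """Extract the '% of test remaining' figure from `smartctl -a` output.

    Returns None when no self-test is reported as in progress."""
    m = re.search(r"(\d+)%\s+of\s+test\s+remaining", smartctl_output)
    return int(m.group(1)) if m else None

def poll_drive(device="/dev/sdb"):
    # Device name is illustrative; run this as root on the server.
    out = subprocess.run(["smartctl", "-a", device],
                         capture_output=True, text=True).stdout
    return remaining_percent(out)
```

Note that SMART progress is reported in coarse 10% steps, so a long stall at one value (as described above) is common on large drives.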
  4. I had started one before leaving for work this morning, but it seemed not to complete. Do I need to shut down the array and/or stop all my dockers first?
  5. Over the past week, my Disk 1 has reported 3 read errors. It has been in service just over 8 months. Attached are diagnostics. Ignore the "Share cache full" syslog spam - I made a bonehead change (Min Share space) which I reversed, though the errors started right about the same time. Coincidence? What would be the recommended action to take at this time? As always, thank you very much for your support!
  6. Not strange - many Ryzen motherboards have Super I/O chips that aren't directly supported by Linux. There are many posts about it here on this forum. As for the Dynamix System Temp plugin - I have it installed, but have never seen Airflow on my Dashboard.
  7. Sometimes it helps to get everyone at the same baseline before moving forward. My practical experience with Unraid is with a much more modest system (5 HDs in my array), but I understand (in theory) what you are looking at with a case jammed full of drives. Lolight pointed you to the 100+ TB build thread, as that addresses the issues one comes across once you get up to a dozen or so drives. One needs to address:

Connections for all the drives: An LSI 9211-8i will let you connect 8 drives. A typical ATX motherboard will have 6 SATA connectors (some likely disabled if you use the board's M.2 slots for cache). Let's say that gets you to 12 drives for your array. If you want more drives, you're either going to need a second card (and use another PCIe slot on the motherboard), or, as they used in the above-mentioned thread, a SAS expander (basically a smart breakout board) that allows a single card to communicate with more than 8 drives. I've never put my hands on either of these, so I won't be much help beyond knowing they exist.

Power: A lot of drives need a lot of power. Buy a quality PSU. My HGST drives each add 8W when they spin up, so for napkin math, figure 10W per drive, or 2A for each drive. Don't just say "It's a 1000W PSU" - look at the spec sheet for the +5V rail's current. Quality manufacturers usually mean quality wires too. Most PSUs won't have enough SATA power connectors, so splitters will be needed; just never put too many on any single harness coming out of the PSU. Getting reviews and comparing builds from people who have stuffed a dozen drives in a case wouldn't hurt.

Cooling: Putting all of these drives in one place (and not socially distanced from each other) will generate some heat. Keep this in mind when spec'ing the system out. In your OP, you said this will be basically just a storage NAS for video files. If you aren't running any dockers or VMs, the rest of your needs are light.
My system is using a repurposed 4-core first-gen Ryzen processor (leftovers from my home PC upgrade). If I were building your system, I would consider any reasonably priced Intel CPU with integrated graphics, an appropriate motherboard, and 8GB of RAM more than adequate. Anyway, I have rambled on more than enough.
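The napkin math in the Power section above can be written down as a tiny calculator. This is only a sketch of the post's rule of thumb (10W per drive at spin-up); the rail rating used in the example is a made-up number, and you should read the real one off your PSU's spec sheet:

```python
def spinup_headroom(n_drives, watts_per_drive=10.0, rail_volts=5.0, rail_amps=25.0):
    """Napkin math from the post: ~10W per drive at spin-up.

    Returns (total watts, total amps on the rail, whether it fits the
    rail's rated current). Rail figures here are illustrative defaults."""
    total_watts = n_drives * watts_per_drive
    total_amps = total_watts / rail_volts
    return total_watts, total_amps, total_amps <= rail_amps

# A 12-drive build against a hypothetical 25A rail:
watts, amps, fits = spinup_headroom(12)
```

So 12 drives works out to about 120W and 24A by this rule of thumb, which is why the post says to check the rail's current rating rather than the PSU's headline wattage.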
  8. OK, let's get back to basics. Let's say you have 12 drives. Each needs to connect somewhere. Unraid doesn't care how/where they are connected - it references everything by the drive's ID/serial#. SATA/SAS/M.2/USB - does not matter. Even swap connections around and Unraid is unfazed. Unless you're using some high-end server motherboard, you have at most 6 or 8 SATA ports to connect your drives. So if you need to connect more drives than that, the typical solution is a PCIe card. Going back some years, the most affordable/available solution was to repurpose a used LSI card. As the RAID functionality was unneeded (and actually gets in the way), folks would flash them back to be a simple HBA. This is still an option today (and maybe best when you have many drives). But there are also other options available on the "consumer" market that work as well. (Stay away from multiplexed ports and Marvell chipsets, and all should be good.)
  9. uNrAId ? (That's probably the ugliest I got)
  10. I found this. Perhaps opening up GDM Network Discovery ports?
  11. No expert here. But do you have your Plex Docker's Network set to Host? If it is set to Bridge (or something else), I would kinda expect the setting in the Plex app not to work. With it set to Host, it might work, or it may need some extra Docker voodoo (a Variable?) to pass that setting to the outside world (or let it in). Just spitballing here...
  12. 0.9.7 from both Linuxserver and hotio launch the GUI for me just fine. Both dockers have been running for some time, if that means anything.
  13. Nobody here powers their server back on remotely after a power-loss shutdown? I'd really like to hear about your Wake On WAN solution if you do.
  14. My Servers. But the firewall at work blocks me from that. My main use case is to power the media server (Plex) back up if power goes out while I'm away for an extended period of time, such as being on vacation and wishing to watch a movie or show. I know that my question isn't directly related to Unraid itself. It's more of a general computer usage question (enable WOL in BIOS, set appropriate router port forwarding, a dynamic IP associated with a hostname, and some means to send a magic packet). But I figured this community more than most would have need for this use case, so I was looking for what has been proven by others as the BKM.
  15. As the thread title suggests, I'm looking for Wake On WAN recommendations. I have my Unraid server on a moderately sized UPS. A reasonable run time is set in apcupsd before it does a controlled shutdown. The UPS is configured to stay on after a shutdown, but the server stays off until I go downstairs and power it back on. This works fine when I'm home, but I'm looking for a solution to power the server up when I'm away. Ideally, something Wake-On-WAN that can be triggered from an iPhone. An iOS app would be perfect, but something web-page based could work as well. I have full access to configure my router, as well as an automatically updated dynamic IP. I've been searching and have found a number of options, but I would really like to hear from people who have working configurations for their Unraid servers. Thanks!
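For reference, the "magic packet" mentioned in these posts has a simple, standard format: 6 bytes of 0xFF followed by the target's MAC address repeated 16 times, usually sent as a UDP broadcast. A minimal sender could look like the sketch below; the MAC, hostname, and port are placeholders, and for Wake-on-WAN the router would need to forward the chosen UDP port to the LAN's broadcast address:

```python
import socket

def magic_packet(mac):
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by
    the target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac, host="255.255.255.255", port=9):
    """Send the packet via UDP. For Wake-on-WAN, host would be your
    dynamic-DNS hostname (router forwarding the port to the LAN
    broadcast address); port 9 (discard) is a common convention."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (host, port))
```

This is only a sketch of the packet format; most people use a ready-made phone app or router feature, and the harder part (as the posts note) is the BIOS, port-forwarding, and dynamic-DNS plumbing around it.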
  16. If it's only for transcoding, a Quadro P400 would be better. Single slot, less power consumption, and used ones are now back under $100 here in the USA.
  17. I believe they are dual rank, so you should set the memory speed in your BIOS to 2667 with four DIMMs for best stability.
  18. I've been using Handbrake from the djaydev repository (by dee31797), but he has stopped development and all of his dockers are now deprecated. As I have only a lowly 4-core CPU in the server, this docker was a lifesaver as it could make use of my nVidia Quadro. Do you have any plans to support nVidia transcoding in the future?
  19. Currently my 4K movies and my HD movies aren't synced in any automated manner. That's something I will get to when I finally figure out the best strategy for my situation. Basically, at home it is just me and my wife. Our son is here for a little while, while he works out where he will be living post college graduation. Anyone else using the Plex server is streaming via the internet, and I don't like 4K transcoding. The bulk of my library is HD, built up over the past 10 years. When I started adding 4K movies, I dropped them into their own 4K library (and folder). Radarr manages HD, Radarr4K manages 4K. As there are few movies I will rewatch again (and again), I typically watch a movie in 4K, then replace it with an HD version for others to potentially watch. If it is a big hype release, I may keep both the HD and 4K versions on the server. Currently I do this manually, but then, I don't really consume many movies in a month. It's manageable.
  20. I agree. Isn't that just so damn annoying? I mean, I regularly check my syslog for errors, and find nothing. Never is anything wrong, my dockers just keep running and doing their stuff. Not even a parity error reported. Geesh! Probably the most boring computer I've ever owned. /* end sarcasm */
  21. It just took some time to recharge the batteries.
  22. Uhhhh (again).... I think we're getting distracted again. Even if the data is on a mirrored RAID level on the TrueNAS system, the drives aren't going to be a "drop in and configure" affair on an UNRAID system. I seriously doubt the file systems (as in the UNRAID shares) are identical. So it would be a dangerous undertaking to do this with live data on the drives. As I see it, there are 2 possible paths for moving from TrueNAS to UNRAID:
-- Duplicate the files from the TrueNAS system to temporary remote storage, rebuild/reconfigure the current system as UNRAID, then move the files from remote storage to the desired shares on the UNRAID server.
-- Build new hardware to run UNRAID (with at least some array storage). Begin moving data from TrueNAS to UNRAID. If TrueNAS is mirrored RAID, possibly prune moved data from the TrueNAS server to enable removing cleared drives from TrueNAS and installing them in UNRAID, giving more space for storage. Continue moving data (and drives) until complete. Build parity.
If you end up buying some new hardware (CPU/MB) and wish to do hardware transcoding, I would go Intel. I'm mostly an AMD guy, but I believe you want 7th generation Kaby Lake (7xxx) or newer to handle x265 4K transcoding. If you go the GPU route, the nVidia Quadro P400 is likely the best value.
  23. Uhhh... I think it will be much more difficult than that to switch your array over from TrueNAS to UNRAID. Depending on the RAID level your current server is running, it will either be difficult (if mirroring) or impossible (if striped). UNRAID doesn't use conventional RAID levels, but rather (can) maintain a Parity drive, which, if present, is used to emulate or recreate a data drive that becomes inaccessible. I wouldn't want to trust the existing data on my drives to such an attempt succeeding on a new system.