ConnerVT's Achievements


Rookie (2/14)



  1. There are many threads, as well as Dockers/plugins in CA, devoted to backup. But since servers can range from just a few TB to several hundred TB, and people differ in how important they feel their data is (and what protecting it should cost), there is no one-size-fits-all solution. My server currently has about 16TB stored in the array, and roughly two dozen Dockers. Most of the array data is media files, which I would hate to lose, but which I value less than the PC backups and photos also stored on the server. My backup plan was fairly inexpensive to implement, and has no recurring costs. I bought inexpensive USB 3.0 enclosures for two old 3TB drives I had on hand. I keep one connected to the server as an Unassigned Device, and weekly I run a script to copy what I never wish to lose. On the first of the month, I take this drive to my work, where it sits in my desk drawer. I bring home the other drive, plug it into the server, clear it, and start the cycle again. As it holds backups for the home client computers, it is also very convenient for restoring (Macrium Reflect), as I can just grab the USB drive and connect it to the client computer if needed. Cons to this method are that your backup isn't always fully up to date, and you need to remember to make swapping the drives a monthly routine.
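The weekly "copy what I never wish to lose" job described above could be sketched as a thin wrapper around rsync. This is only a sketch: the share names and the Unassigned Devices mount point below are placeholders, not Unraid defaults.

```python
"""Weekly backup of irreplaceable shares to a rotating USB drive (sketch)."""
import subprocess
import sys
from pathlib import Path

# Hypothetical mount point of the USB enclosure (Unassigned Devices plugin)
DEST = Path("/mnt/disks/backup_usb")

# Hypothetical shares you would "hate to lose" -- adjust to taste
SOURCES = ["/mnt/user/photos", "/mnt/user/pc_backups"]

def backup(dest: Path = DEST) -> int:
    """Mirror each source share onto the USB drive; return 0 on success."""
    if not dest.is_dir():
        # Drive not plugged in / not mounted -- skip rather than fill the array
        print(f"{dest} is not mounted; skipping backup", file=sys.stderr)
        return 1
    for src in SOURCES:
        # --archive keeps permissions/timestamps; --delete mirrors removals
        result = subprocess.run(
            ["rsync", "--archive", "--delete", src, str(dest)],
            check=False,
        )
        if result.returncode != 0:
            return result.returncode
    return 0

# A scheduler such as the User Scripts plugin could call backup() weekly.
```

The mount check matters on Unraid: if the USB drive is absent, a blind copy to the mount path would land on the boot flash or array instead.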
  2. Finally completed. Report is attached. You can disregard the CRC errors - they are all from when I initially built this server in January of this year. The cheap SATA cables that came with the controller card caused CRC errors on several of the drives. All of these cables have been replaced, and I haven't seen another CRC error since. Another interesting tidbit: I only know about the Disk 1 read errors from the message sent by the Fix Common Problems plugin. I did not get any notification from Unraid, and the Dashboard shows a green thumbs-up Healthy. I can see the 3 errors listed on the Main screen. So what do you suggest I do from here? As always, many thanks for the assistance!
  3. Re-read what you wrote. Disabled spin down for Disk 1 and started the Extended SMART Test. Headed to bed and let it run while I slept. When I woke, it was 80% complete. Around 9:00 AM, it ticked over to 90%. It is now 5:30 PM, 8.5 hours later, and it is still sitting at 90%. Is this normal?
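For what it's worth, the drive itself only reports extended self-test progress in coarse 10% steps of "test remaining", which is one reason the figure can appear stuck for hours. You can poll it from the command line with `smartctl -a /dev/sdX` (device name is a placeholder) and pick the number out of the status line; a rough parser, assuming smartctl's usual "NN% of test remaining." wording:

```python
import re

def percent_remaining(smartctl_output: str):
    """Return the 'NN% of test remaining' figure, or None if no test is running."""
    m = re.search(r"(\d+)% of test remaining", smartctl_output)
    return int(m.group(1)) if m else None

# Trimmed sample of what `smartctl -a /dev/sdX` prints mid-test
SAMPLE = """Self-test execution status:      ( 249) Self-test routine in progress...
                                        90% of test remaining."""
```

Note that smartctl counts down remaining work, while the Unraid GUI counts up completed work, so "90% remaining" and "10% complete" are the same state.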
  4. I had started one before leaving for work this morning, but it seemed not to complete. Do I need to shut down the array and/or stop all my dockers first?
  5. Over the past week, my Disk 1 has reported 3 read errors. It has been in service just over 8 months. Attached are diagnostics. Ignore the "Share cache full" syslog spam - I made a bonehead change (Min Share space) which I reversed, though the errors started right about the same time. Coincidence? What would be the recommended action to take at this time? As always, thank you much for your support!
  6. Not strange - many Ryzen motherboards have Super I/O chips that aren't directly supported by Linux. There are many posts about it here on this forum. As for the Dynamix System Temp plugin - I have it installed, but have never seen Airflow on my Dashboard.
  7. Sometimes it helps to get everyone at the same baseline before moving forward. My practical experience with Unraid is with a much more modest system (5 HDs in my array), but I understand (in theory) what you are looking at with a case jammed full of drives. Lolight pointed you to the 100+ TB build thread, as that addresses the issues one comes across once you get up to a dozen or so drives. One needs to address:
Connections for all the drives: An LSI 9211-8i will let you connect 8 drives. A typical ATX motherboard will have 6 SATA connectors (some likely disabled if you use the board's M.2 slots for cache). Let's say that gets you to 12 drives for your array. If you want more drives, you're either going to need a second card (using another PCIe slot on the motherboard), or, as they did in the above-mentioned thread, a SAS expander (basically a smart breakout board) that lets a single card communicate with more than 8 drives. I've never put my hands on either of these, so I won't be much help beyond knowing they exist.
Power: A lot of drives need a lot of power. Buy a quality PSU. My HGST drives add 8W each when they spin up, so for napkin math figure 10W per drive, or 2A for each drive. So don't just say "It's a 1000W PSU" - look at the spec sheet for the +5V rail's current. Quality manufacturers usually mean quality wires too. Most PSUs won't have enough SATA power connectors, so splitters will be needed; just never put too many on any single harness coming out of the PSU. Getting reviews from (and comparing builds with) people who have stuffed a dozen drives into a case wouldn't hurt.
Cooling: Putting all of these drives in one place (and not socially distanced from each other) will generate some heat. Keep this in mind when spec'ing the system out.
In your OP, you said this will be basically just a storage NAS for video files. If you aren't running any dockers or VMs, the rest of your needs are light. My system uses a repurposed 4-core first-gen Ryzen processor (leftovers from my home PC upgrade). If I were building your system, I would consider whatever reasonably priced Intel CPU with graphics, an appropriate motherboard, and 8GB of RAM more than adequate. Anyway, I have rambled on more than enough.
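The napkin math in the power section (roughly 10W per spun-up drive, taken as 2A each on the +5V rail) is easy to sanity-check in a few lines. These are the post's rough planning figures, not measured values:

```python
# Rough planning numbers from the post -- not measurements.
WATTS_PER_DRIVE = 10.0
RAIL_VOLTS = 5.0

def rail_amps(num_drives: int) -> float:
    """Estimated rail current (A) with all drives spun up: P = V * I."""
    return num_drives * WATTS_PER_DRIVE / RAIL_VOLTS

for n in (5, 12, 24):
    print(f"{n:2d} drives -> ~{rail_amps(n):.0f} A")
```

So a 12-drive array pencils out to roughly 24A of headroom on that rail, which is why checking the PSU's spec sheet beats trusting the headline wattage.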
  8. OK, let's get back to basics. Let's say you have 12 drives. Each needs to connect somewhere. Unraid doesn't care how or where they are connected - it references everything by the drive's ID/serial number. SATA/SAS/M.2/USB - it does not matter. Even swap connections around and Unraid is unfazed. Unless you're using some high-end server motherboard, you have at most 6 or 8 SATA ports to connect your drives. So if you need to connect more drives than that, the typical solution is a PCIe card. Going back some years, the most affordable and available solution was to repurpose a used LSI RAID card. As the RAID functionality was unneeded (and actually gets in the way), folks would flash them back to being a simple HBA. This is still an option today (and maybe the best one when you have many drives). But there are also other options on the "consumer" market that work. (Stay away from multiplexed ports and Marvell chipsets, and all should be good.)
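The "referenced by serial, not by port" point above can be seen from the shell: `lsblk -d -o NAME,SERIAL -P` lists each disk's kernel name next to its serial, and the serial stays the same no matter which port the drive moves to. A small sketch that pairs the two, assuming lsblk's `-P` key="value" output format:

```python
def drives_by_serial(lsblk_output: str) -> dict:
    """Map device name -> serial from `lsblk -d -o NAME,SERIAL -P` output."""
    result = {}
    for line in lsblk_output.splitlines():
        # Each -P line looks like: NAME="sda" SERIAL="WD-WCC4N1234567"
        fields = dict(
            pair.split("=", 1) for pair in line.split() if "=" in pair
        )
        name = fields.get("NAME", "").strip('"')
        serial = fields.get("SERIAL", "").strip('"')
        if name and serial:
            result[name] = serial
    return result

# Made-up sample output -- the serials here are placeholders
SAMPLE = 'NAME="sda" SERIAL="WD-WCC4N1234567"\nNAME="sdb" SERIAL="ZA1ABC99"'
```

The /dev/sdX names can reshuffle on every boot; the serials are what Unraid pins its array slots to.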
  9. uNrAId ? (That's probably the ugliest I got)
  10. I found this. Perhaps try opening up the GDM Network Discovery ports?
  11. No expert here. But do you have your Plex Docker's Network set to Host? If it is set to Bridge (or something else), I would kinda expect the setting in the Plex app not to work. With it set to Host, it might work, or it may need some extra Docker voodoo (a variable?) to pass that setting to the outside world (or let it in). Just spitballing here...
  12. 0.9.7 from both Linuxserver and hotio launch the GUI for me just fine. Both dockers have been running for some time, if that means anything.
  13. Nobody here powers their server back on remotely after a power-loss shutdown? I'd really like to hear about your Wake-on-WAN solution if you do.
  14. My Servers. But the firewall at work blocks me from that. My main use case is to power the media server (Plex) back up if power goes out while I'm away for an extended period of time, such as being on vacation and wishing to watch a movie or show. I know that my question isn't directly related to Unraid itself; it's more of a general computer usage question (enable WOL in the BIOS, set appropriate router port forwarding, a dynamic DNS hostname, and some means to send a magic packet). But I figured this community more than most would have a need for this use case, so I was looking for what others have proven to be the BKM (best known method).
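For the "some means to send a magic packet" piece, the packet itself is simple: 6 bytes of 0xFF followed by the target's MAC address repeated 16 times, sent as UDP (commonly to port 9). A minimal sender sketch; the MAC and hostname in the usage comments are placeholders, and for Wake-on-WAN your router would need to forward that UDP port toward the LAN:

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet for the given MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC must be 6 bytes")
    # 6 x 0xFF sync bytes, then the MAC repeated 16 times = 102 bytes
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, host: str = "255.255.255.255", port: int = 9) -> None:
    """Send the magic packet via UDP broadcast (or to a forwarded WAN host)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (host, port))

# send_wol("00:11:22:33:44:55")                      # LAN broadcast
# send_wol("00:11:22:33:44:55", "myserver.example")  # hypothetical DDNS host
```

The catch with Wake-on-WAN is the last hop: the packet arrives at the router, but the sleeping machine has no ARP entry, so the router must either forward to the LAN broadcast address or have a static ARP entry for the target.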