Everything posted by ConnerVT

  1. I'll add to the above post. Around Thanksgiving, I was having issues with streaming live TV. It would stream for about 8 minutes, then I'd get a 'The Program Has Ended...' message (or something similar) and the program would stop. After a bit of Googling, I found Plex had broken TV streaming in one of their updates, and it was never resolved. (Me wonders just how they do their regression testing). Repository linuxserver/plex: is the version most people were suggesting to roll back to. I've been sitting at this one, at least until football season is over.
  2. I traveled down this rabbit hole last year, when I set up my server. The main issue (especially for Ryzen systems) is that there are a number of different models of Super I/O chips used by the various motherboard vendors. The documentation for these chips is either incomplete or non-existent, making it a frustrating headache for the folks who try to write Linux drivers for them. Enough so that the few who have tried eventually just abandoned their efforts. There are workaround solutions that have been offered, but the risk is that if they don't work correctly, it is possible for all of your fans to stop running. So for me, it was not worth the effort. The best solution is to allow your motherboard BIOS to control your fans. For drive (and general case) cooling, the basic rule of thumb is to blow air onto the drives where you can, and have at least one fan exhausting heat from the case (near the top of the case, as heat rises). I set these fans to run at a constant speed, where the drives stay at a comfortable temperature. I usually like to find the most elegant solution, but this is the most stable compromise I was willing to live with.
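If you want to check for yourself whether the kernel found a driver for your board's sensor chip, you can list what is registered under the standard sysfs hwmon path. This is just a generic read-only check (no driver names or boards assumed), not a fix:

```shell
# List the hardware monitoring devices the kernel has loaded drivers for.
# If your board's Super I/O chip has no Linux driver, no corresponding
# entry will appear here. Paths are the standard sysfs hwmon locations.
count=0
for hw in /sys/class/hwmon/hwmon*/name; do
    if [ -e "$hw" ]; then
        echo "$(dirname "$hw"): $(cat "$hw")"
        count=$((count + 1))
    fi
done
echo "Found $count hwmon device(s)"
```

On a board with an unsupported Super I/O chip you will typically still see entries like the CPU's own temperature sensor, just nothing for the motherboard fan controller.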
  3. All of the spinning drives in my cheapo case are HGST - the 8TB are He, the 6TB are not. The 8TB were used, the 6TB were new (now with about a year of service). They all have worked great. They run in the low/mid 30s to just 40°C during a 14+ hour parity check. 3 are in the bottom slots of a mid-tower case, 3 are in a 4x3 cage. They may not be the coolest, quietest, or lowest power consuming drives, but they were priced well and seem to be very reliable. I'm sold.
  4. New Netgear router? May be Netgear Armor scanning your network.
  5. I will support @trurl in his thoughts. USB 2.0 is generally more stable/reliable, especially in Linux environments. As there isn't much to gain from the greater throughput of 3.0 (no large amounts of data are being moved around), there is little to lose by sticking with a 2.0 drive. Another benefit of older, small USB 2.0 drives is that they generally use older-tech memory chips. I work in semiconductor manufacturing. Newer, smaller memory cells allow you to pack more of them on a die, but they are more prone to damage from heat, static charge, etc. The older technology will likely outlive a newer design. For under $8 USD, you can pick up a name brand 8 or 16 GB USB 2.0 flash drive. Folks really shouldn't overthink this one.
  6. Or, you just can't stand to see all of those empty drive slots in your case. 😝
  7. I have not heard of any instance where SMR drives have caused issues with Unraid. Maybe the slightest of performance issues, but something only seen if searching hard for it. Unraid is not designed for speed. There are other RAID setups which cater to those looking for the fastest transfer rate. Your customer support analogy is faulty. The issue isn't that there is a delay in response. Here there is a loss of communication between the system and the drive's controller. So it would be, in your analogy, the CS representative hearing a loud click and then a dial tone.
  8. As I answered above, yes. You can basically add anything that can be accessed as a drive (volume). You could have a dozen freebee flash drives as an array if you wish. The issue with USB is that devices tend to momentarily "drop out" and not respond when needed. This is mostly due to the several layers of hardware and code between the CPU and the drive - drivers, hubs, USB-to-SATA translation. In a Windows machine, this generally isn't an issue (unless it is your boot drive), as most of the time the device is "found" again when the system retries the request or the device finally sorts itself out. A SATA or SAS connected drive doesn't have nearly as much stuff between the CPU and the drive. In the Unraid array, these errors by the USB hardware are treated as a drive error. In my opinion justifiably so, as the reason for having a parity protected array is to trust your data's integrity. If a drive drops out, the disk will likely be displayed as a failed disk, and data from it will be emulated by parity until you take action to resolve it. USB drives can have a place in Unraid. I have two connected to my server as Unassigned Devices. One is an external USB enclosure I use for off-site backup (I have 2, and swap them each month, taking the most recent to my office for safekeeping). The other is a flash drive I keep plugged into the server, to quickly transfer some files then grab and go.
  9. Yeah, from 2019. Damn technology - Keeps on getting updated. Figured it would give you a place to start. Glad to help.
  10. Can you? Yes. Should you? No. Data is data, and USB hasn't changed much in the last few years. The biggest issue is that USB is much more prone to temporarily dropping out. When that happens to a drive in the array, it can cause all sorts of issues - the drive not being available to the array, data corruption (if it happens during a write), etc. It is just not worth the headache, and is counter to the reason you would build a parity protected server in the first place. Look at my sig. I'm not using any expensive, top-end server gear. Mostly recycled PC stuff and inexpensive hardware. I'd shuck a few of those drives, and install them in your case.
  11. I'm no expert, but I did watch some videos from SpaceinvaderOne that addressed all three of your questions. May be a good place to start.
  12. Port forwarding in your router set to map Port 8080 to Port 80?
  13. Well... Not as easily as just pulling out a 4TB and installing a 3TB. You'll need to move data off a 4TB drive, then you can remove the drive from the Unraid configuration. A parity rebuild will then be needed.
  14. Do you have Plex Pass? Plex only does HW transcoding when you have Plex Pass - the free/unsubscribed version doesn't. (You *did* say "something simple"...)
  15. I'll show my age here. First job out of college, went to work for a company creating the first high resolution graphics cards for the IBM PC. 640x480 and 256 colors. We were doing contract work for IBM, which gave us some of the first built systems of a new PC they were going to release. The IBM PC/AT. Several of us had these on our desks/benches. A 5.25" HD floppy and a 20MB hard drive. Our manager had one of the premium ones - it came with a 30MB drive. Ooooohhhhh.... One of our senior hardware designers said, "You could spend your entire life, and not fill a 30MB hard disk."
  16. Try this. Note that my container port is 444, not the default 443.
  17. Correct. It won't make the task any faster. If your 10TB takes ~20 hours, the 18TB will take about 36 hours. For me, Parity Check Tuning makes sense for scheduled checks, running the check in segments when the server isn't likely to see (much) traffic. But if I'm replacing a drive, I want it to fully complete the parity rebuild as soon as I can, to get the array back to a protected state.
  18. This is probably a simple question, but after an hour of searching, I have yet to find a definitive answer. I have a working script that weekly will rsync several folders from my array to an external USB drive mounted with Unassigned Devices. I have two drives (External_USB_1 and External_USB_2) which I swap at the beginning of the month, and I keep the unconnected drive off site. My current solution has me manually editing my script each time I swap drives, to specify the correct destination drive. I would much rather set and forget the script, and have it check for the drive to be present before executing rsync. What would be the best way to check if these drives are present and mounted? UPDATE: I have a solution that seems to work, but if someone has a better one, I'm open to your ideas. I ended up using mountpoint.

      #!/bin/bash
      #arrayStarted=true
      if mountpoint -q "/mnt/disks/External_3TB_1"
      then
          rsync -avh "/mnt/disks/Samsung_USB/UNRAID_Backups" "/mnt/user/Media/Music" "/mnt/user/Photos/ORIGINALS" "/mnt/user/Backup/appdata backup" "/mnt/disks/External_3TB_1"
      else
          echo "External_3TB_1 not mounted"
      fi
      if mountpoint -q "/mnt/disks/External_3TB_2"
      then
          rsync -avh "/mnt/disks/Samsung_USB/UNRAID_Backups" "/mnt/user/Media/Music" "/mnt/user/Photos/ORIGINALS" "/mnt/user/Backup/appdata backup" "/mnt/disks/External_3TB_2"
      else
          echo "External_3TB_2 not mounted"
      fi
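For what it's worth, the two nearly identical if blocks can be collapsed into a loop, so adding a third enclosure only means extending one list. This is just a sketch using the same mountpoint check and the same paths as my script (adjust the source and destination paths for your own shares):

```shell
#!/bin/bash
# Same backup logic as the script above, rewritten as a loop over the
# destination enclosures. mountpoint(1) is part of util-linux, so it
# is available on stock Unraid/Linux systems.

SOURCES=(
    "/mnt/disks/Samsung_USB/UNRAID_Backups"
    "/mnt/user/Media/Music"
    "/mnt/user/Photos/ORIGINALS"
    "/mnt/user/Backup/appdata backup"
)

for dest in "/mnt/disks/External_3TB_1" "/mnt/disks/External_3TB_2"; do
    if mountpoint -q "$dest"; then
        # Only the drive that is actually plugged in gets the backup.
        rsync -avh "${SOURCES[@]}" "$dest"
    else
        echo "$(basename "$dest") not mounted"
    fi
done
```

With this shape, swapping drives each month needs no script edits at all: whichever enclosure is mounted receives the rsync, and the other simply reports "not mounted".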
  19. Hello. My name is Conner. I have OSD – Obsessive Server Disorder. They say the first step is to admit you have a problem. Here is my story. It all started innocently enough. Last year, anticipating a $600 stimulus check, I decided I would build an Unraid server. I had a handful of unused components from a decommissioned PC – a 1st gen Ryzen, 8GB of DRAM, a motherboard, a small NVMe drive. I had packed too many 3TB drives into my small daily driver PC, and it would always be powered on, running my Plex server. Relocating those drives and off-loading that task to a small server seemed like a reasonable idea at the time. The build went mostly smoothly. I only overshot my budget by a small amount. An extra fan here, an internal USB header cable there. The extra money spent to make it clean was worth it to me. I loaded up the media server on the machine. Then I started thinking, “What else can it do?” This is where I went down a rabbit hole of trouble. Found a good deal on some 6TB drives. I bought 3 of them. Future proofing is good, I felt. It was nice to see that extra storage space. The 8GB of DRAM seemed inadequate as I started installing more Dockers, so I added 8GB more. I’m up to 28 Dockers installed, with 22 running all the time. At least another half dozen pinned in CA, to try out in the future. I started with an old GT760 to do some hardware transcoding, but felt it worth upgrading so I could handle NVENC H.265. A Quadro P400 only costs around $100. The power supply I had was very old and less than trustworthy, so a new one was ordered. I found a great deal on a UPS, to prevent those annoying unclean shutdowns from summer thunderstorms. Looking for an off-site backup solution, I again repurposed those 3TB drives: took them out of the server and put them in external USB enclosures, to swap and safely keep at work. I ended up buying 4 more drives (two 6TB and two 8TB).
The Intel NVMe is small and slow, so I now have a 500GB drive to install as cache in the upcoming weeks. I worry how I’m affecting my family. I have already corrupted my son. He really enjoys being able to request and add media through one of the Dockers, and stream to his (or his girlfriend’s) apartment. The domain name I purchased makes it easier for him, and also allows me to get around the DNS firewall at work to access the server. My wife rolls her eyes when another package arrives with more of my “toys”. But I feel she may be enabling me. I may need to add the Amazon driver to this year’s Christmas list. I was thinking that Limetech might consider creating a sub-forum, where folks like us can help each other through our OSD issues. But I decided that may not be the best idea – it would be like holding an AA meeting down at the local pub. Thank you for letting me share my story.
  20. There are many threads, as well as Dockers/plugins in CA, devoted to backup. But as servers can range from just a few TB to several hundred TB, and people differ in how important they feel their data is (and the cost of protecting it), it is impossible to have a one-size-fits-all solution. My server currently has about 16TB stored in the array, and roughly 2 dozen Dockers. Most of the array data is media files, which I would hate to lose, but which I value less than the PC backups and photos also stored on the server. My backup plan was fairly inexpensive to implement, and has no recurring costs. I bought inexpensive USB 3.0 enclosures for 2 old 3TB drives I had on hand. I keep one connected to the server as an Unassigned Device, and weekly run a script to copy what I don't ever wish to lose. On the first of the month, I take this drive to my work, where it sits in my desk drawer. I bring home the other drive, plug it in to the server and clear it, and start the cycle again. As it holds backups for the home client computers, it is also very convenient for restoring (Macrium Reflect), as I can just grab the USB drive and connect it to the client computer if needed. Cons to this method are that your backup isn't always fully up to date, and you need to remember to make swapping the drives a monthly routine.
  21. Finally completed. Report is attached. You can disregard the CRC errors. They are all from when I initially built this server in Jan this year. Cheap SATA cables that came with the controller card, and had CRC errors with several of the drives. All these cables have been replaced, and haven't seen another CRC since. Another interesting tidbit is I only know about the Disk 1 read errors from the message sent by the Fix Common Problems plugin. I did not get any notification from Unraid, and the Dashboard shows a green thumbs up Healthy. I can see the 3 errors listed in the Main screen. So what do you suggest I do from here? As always, many thanks for the assistance!
  22. Re-read what you wrote. Disabled spin down for Disk 1 and started the Extended SMART Test. Headed to bed and let it run while I sleep. When I woke, it was 80% complete. Around 9:00 AM, ticked over to 90%. Now is 5:30 PM, 8.5 hours later, and still sitting at 90%. Is this normal?
  23. I had started one before leaving for work this morning, but it seemed not to complete. Do I need to first shut down the array and/or stop all my dockers first?
  24. Over the past week, my Disk 1 has reported 3 Read errors. Been in service just over 8 months. Attached are diagnostics. Ignore the "Share cache full" syslog spam - I made a bonehead change (Min Share space) which I reversed, though the errors started right about the same time. Coincidence? What would be the recommended action to take at this time? As always, thank you much for your support!
  25. Not strange - many Ryzen motherboards have Super I/O chips that aren't directly supported by Linux. There are many posts about it here on this forum. As for the Dynamix System Temp plugin - I have it installed, but have never seen Airflow on my Dashboard.