Everything posted by Michael_P

  1. Also, from the SMART report, there doesn't appear to be anything wrong with the drive itself. CRC errors are almost always connection related.
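     If you want to keep an eye on that counter yourself, something like this works, assuming the drive shows up as /dev/sdb (adjust to your device):

        # Print the vendor SMART attributes and pick out the CRC counter (ID 199);
        # if it climbs while Reallocated_Sector_Ct stays flat, suspect the cable/connection
        smartctl -A /dev/sdb | grep -iE 'udma_crc|reallocated'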
  2. Looks right, except yours has 8GB instead of 8G; not sure it matters. Set the Plex scheduled task for the next closest hour, then watch the memory usage as it runs through it (that way at least you won't have to wait overnight)
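     If you'd rather watch it from a console than the dashboard, something like this works - assuming the container is named plex (check docker ps for yours):

        # Live per-container memory usage against its configured limit
        docker stats plex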
  3. Eliminate the splitters and you should be fine. The rule of thumb is no more than 4 drives per molex connector, spread across more than one run back to the PSU. Toshiba drives, in my experience, are especially finicky about power sag and will drop offline if you look at them funny
  4. Looks like a power issue, are you using splitters?
  5. Doesn't look like you're putting it in the right spot - undo what you did and do this:
     Toggle basic view to advanced view
     Add it to the Extra Parameters line
  6. Storage should never be in the DMZ. If your data is valuable enough to protect with that many layers, then the storage server should not be hosting WAN-facing applications
  7. Enable the advanced view in the docker container's settings and in the extra parameters line add this:
  8. Weird that there's a login right before the OOM, do you recall doing anything in particular?

        Nov 25 04:28:48 Tower2 webGUI: Successful login user root from 192.168.1.138
        Nov 25 04:28:49 Tower2 kernel: docker invoked oom-killer: gfp_mask=0x140dca(GFP_HIGHUSER_MOVABLE|__GFP_COMP|__GFP_ZERO), order=0, oom_score_adj=0
        Nov 26 06:18:07 Tower2 webGUI: Successful login user root from 192.168.1.138
        Nov 26 06:18:08 Tower2 kernel: docker invoked oom-killer: gfp_mask=0x140dca(GFP_HIGHUSER_MOVABLE|__GFP_COMP|__GFP_ZERO), order=0, oom_score_adj=0
        Nov 27 04:57:56 Tower2 webGUI: Successful login user root from 192.168.1.138
        Nov 27 04:57:57 Tower2 kernel: php-fpm invoked oom-killer: gfp_mask=0x140dca(GFP_HIGHUSER_MOVABLE|__GFP_COMP|__GFP_ZERO), order=0, oom_score_adj=0

     You can try limiting Plex's container to something reasonable like 8GB or so to see if it stops going OOM
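     The flag itself is standard docker run syntax - a minimal sketch, assuming you add it on the container's Extra Parameters line in advanced view:

        # Cap the container at 8 GiB; a runaway scan then kills the container
        # instead of taking the whole server OOM
        --memory=8G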
  9. If the parity disk is 20TB and the array disks are 4TB, when adding the parity drive to the array it will write parity info matching the 4TB of data and zero the rest of the drive (which will probably take a couple of days). From then on, parity checks will only take as long as it takes to read the largest disk in the array, 4TB in your case, and will finish when the 4TB is read (since it already knows the remaining 16TB are zeros).
     Edit: struck the incorrect info in the previous sentence - it will still read the zeroed portion of the parity drive
  10. Not to beat a dead horse, and I'm a firm believer in You Do You™️, but the backup script/program will know what's missing (snapshots aside) and back it up - no need for a full backup. As I've said above, if you have the spares to have redundancy on your backup server, then you might as well use it.
  11. Rule of thumb is 4 drives per molex connector spread across more than one run from the PSU
  12. Had the 2TB variant of this drive, lasted less than a year in my system. Samsung used to be my gold standard, but their quality lately appears to have fallen off a cliff (see the 980 issues, for example). I've replaced it with the Crucial P5 Plus; too early to give a long-term assessment, but its specs are on par.
      Added a heatsink to mine; both drives would shoot straight to 70C under any load without it, and now it sits between 45-55C under heavy I/O. A simple one is all that's needed, no need to go fancy unless your particular situation requires it.
      Depends on what you're using it for. I was easily able to saturate my 10Gb network with both my Gen 3 and Gen 4 drives (10GbE tops out around 1.25 GB/s, well under what either can sustain). You'd need a pretty specific use case and powerful kit to hit the limit of either
  13. Logs show them resetting, so if that doesn't work it's likely a power delivery issue, a loose data cable, the backplane, or your controller - listed in the order I'd check them
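      To keep an eye on it while you reseat things, something like this works (path per stock Unraid; adjust if yours differs):

        # Watch the syslog for further link resets as you test each fix
        grep -iE 'reset|link (up|down)' /var/log/syslog | tail -n 20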
  14. This discussion is about backup servers. Unless you're restoring backups of your backups to your backup server, I'm not sure what case you're trying to make.
  15. Not a good idea - most ISPs frown on it, and other mail servers will likely refuse mail from yours, especially if you're not on a dedicated IP
  16. "Ticket ID ##RE-36895## status updated: Closed - Reason: Is vampire"
  17. That's subjective in relation to your personal situation and needs - if you have an extra drive, and no need for it in your main server, then You Might As Well™️ use it. You can always remove it later if you need it somewhere else.
  18. First, you need to consider your data needs - how much you have now, and how much additional space you'll need in the near future.
      Second, you'll need to set your backup strategy to take the results of the first into account. Using single drives for storage and backup gets unwieldy very quickly, hence the need for pools/arrays of disks so you can write larger amounts of data without spanning individually separate file systems. Figure out what really needs to be backed up and plan accordingly.
      In your case, right now, with only 4 drives, Unraid is probably going to be the better solution: use two pools, one mirroring the other or one backing up the other (better). Optionally you can host the backup drives in your daily machine and set your backup scripts/program to copy the data to the appropriate drives (see the sketch below) - which, per the second point, gets more difficult the more data you accumulate.
      In its default setting, yes
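      A minimal example of that kind of script, assuming hypothetical share paths /mnt/user/data and /mnt/backup - adjust to your layout:

        # Mirror the share to the backup drives; --delete keeps the copy exact,
        # so pair it with snapshots if you need to survive accidental deletions
        rsync -a --delete /mnt/user/data/ /mnt/backup/data/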
  19. First, why is it in the attic lol. And to your question: you need to set up your router/firewall rules to bridge the subnets for whatever traffic you want between the two. The HDHR app likely won't work because the broadcast traffic wouldn't make it across, but Plex should be able to see it just fine as long as you have the correct forwarding in place in your rules (see the sketch below)
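      A rough sketch of that kind of rule, assuming hypothetical subnets 192.168.1.0/24 (house) and 192.168.2.0/24 (attic) and an iptables-based router - yours will differ:

        # Allow the house LAN to open connections into the attic subnet
        iptables -A FORWARD -s 192.168.1.0/24 -d 192.168.2.0/24 -j ACCEPT
        # Allow the replies back
        iptables -A FORWARD -s 192.168.2.0/24 -d 192.168.1.0/24 -m state --state ESTABLISHED,RELATED -j ACCEPT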
  20. For me, it was for consolidating hardware - instead of one machine running the security DVR, another for serving music and downloading, etc., the NAS was always on, so it made sense to move it all under one roof - not to mention Docker.
      A 'NAS' is just Network Attached Storage, so you're comparing apples to apples. If you can access your dedicated computer's storage over the network, congratulations - it's a NAS. All the other features (VMs, Docker, web services) are things most operating systems are capable of; whether you need them or not is up to you.
      Now if you're asking why Unraid over Windows/*whatever other Linux flavor* if there are issues with Unraid - well, good news - there are issues with all operating systems, so pick the one you're comfortable with until it no longer serves your purpose or starts to give you too many problems trying to do what you want it to do.