c3

Everything posted by c3

  1. Durp... I've been blindly updating the docker from time to time and never knew to do the upgrade from the GUI... Now I get "This version of Nextcloud is not compatible with PHP 7.2. You are currently running 7.2.14." My docker containers (mariadb-nextcloud and nextcloud) are up to date, but I guess that does not help. Without a GUI, how can I proceed?
  2. Oldest Drive

    I have hundreds, maybe thousands, of Seagate MOOS running; that's 7+ years of power-on, up to 9 years.
  3. Most controllers can do RAID 1 across n drives, n > 1. The odd blocks go to drive+1, the even blocks to drive-1. They do this because rebuild and performance are better, and any two non-adjacent drives can fail. Mirroring stripes is also a performance advantage; striping mirrors is the lowest-performance config. In short, it is striping optimized for rebuild and performance.
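
    A minimal sketch of that layout (Python, purely illustrative; the drive count, block count, and placement function are my own assumptions, not any controller's firmware), including a check that any two non-adjacent failures are survivable:

    ```python
    # Interleaved mirroring: each block lives on one drive and is mirrored onto the
    # next drive over, so any two NON-adjacent drives can fail without data loss.
    from itertools import combinations

    def place(block: int, n_drives: int) -> tuple[int, int]:
        """Return (primary, mirror) drive indices for a logical block."""
        primary = block % n_drives
        mirror = (primary + 1) % n_drives      # copy goes to the neighbouring drive
        return primary, mirror

    def survives(failed: set[int], n_drives: int, n_blocks: int = 1000) -> bool:
        """True if every block still has at least one copy on a healthy drive."""
        return all(
            {p, m} - failed                    # at least one copy outside the failed set
            for p, m in (place(b, n_drives) for b in range(n_blocks))
        )

    n = 6
    for a, b in combinations(range(n), 2):
        adjacent = (b - a) % n in (1, n - 1)
        ok = survives({a, b}, n)
        print(f"drives {a} and {b} fail: {'OK' if ok else 'DATA LOSS'} "
              f"({'adjacent' if adjacent else 'non-adjacent'})")
    ```
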
  4. I manage 5+PB of storage for media files (Quantum and Primestream) and unRAID is not in the game for read or write speed requirements.
  5. ZFS can use the SSD cache. ZFS is probably the speed king if you are looking for the fastest. There are plenty of other reasons why unRAID is popular, but speed is not one of them.
  6. You need to decide which factor is your primary concern: data durability (data loss) or data availability. As mentioned, backups dramatically improve data durability. But if you are after data availability, you'll need to handle all the hardware factors: power supplies (as mentioned), memory (ECC and DIMM fail/sparing), cooling, and probably networking (LACP, etc.). The sparing process can be scripted; as a subject matter expert with vast experience, you'll find this straightforward. Perl and Python are available in the Nerd Tools. This may allow you to worry less while working. However, I am not sure it would be "hot," as the array must shut down to reassign the drive. You could implement something like NetApp's maintenance garage function: test the drive, then resume or fail it.
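
    For what it's worth, a minimal Python sketch of what I mean by scripting the monitoring/sparing side (the device list, polling interval, and notify hook are placeholders I made up; this is not a built-in unRAID feature, and it assumes smartctl is installed, which it is on unRAID):

    ```python
    # Poll each array member's SMART health and flag failing drives for sparing,
    # rather than yanking them automatically -- the array has to come down to
    # reassign a drive anyway, so a human makes the final call ("maintenance
    # garage" style: long-test the drive, then resume it or spare it out).
    import subprocess
    import time

    DEVICES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]   # placeholder: your array members

    def smart_healthy(dev: str) -> bool:
        """True if 'smartctl -H' reports the overall health self-assessment as PASSED."""
        out = subprocess.run(["smartctl", "-H", dev],
                             capture_output=True, text=True).stdout
        return "PASSED" in out

    def notify(msg: str) -> None:
        """Placeholder: wire this to unRAID notifications, email, whatever you use."""
        print(msg)

    while True:
        for dev in DEVICES:
            if not smart_healthy(dev):
                notify(f"{dev} failed SMART health check; candidate for sparing")
        time.sleep(3600)                             # check hourly
    ```
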
  7. This might be overkill, but it is simple and does what you ask. https://oss.oetiker.ch/smokeping/
  8. The Command strips? Popsicle sticks down the sides? I think the velcro would work; those SATA connectors don't take that much force.
  9. 4224? All I can think of is to shut it down and completely unplug it, then carefully take it apart. The drives may come out of the caddy, and then you have to pull the drive itself.
  10. natex is real and they do have good deals, which often run out.
  11. What workload are you expecting to require such bandwidth, and thus invalidate using the expander chassis?
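
    For context, some back-of-the-envelope math (assuming the common Norco-style setup of a SAS2 expander hung off a single 4-lane SFF-8087/8088 uplink; the efficiency factor is a rough allowance for 8b/10b encoding and protocol overhead):

    ```python
    # Rough per-drive bandwidth through a single expander uplink.
    lanes = 4
    gbps_per_lane = 6.0                   # SAS2 line rate
    efficiency = 0.8                      # ~8b/10b encoding + protocol overhead

    uplink_MBps = lanes * gbps_per_lane * 1000 / 8 * efficiency
    for drives in (8, 12, 24):
        print(f"{drives} drives streaming at once: ~{uplink_MBps / drives:.0f} MB/s each "
              f"(uplink ~{uplink_MBps:.0f} MB/s total)")
    ```

    A single 7200 RPM drive streams very roughly 150-250 MB/s sequentially, so the uplink only becomes the bottleneck when most of the drives are busy at once (parity checks, rebuilds), which is why the workload matters.
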
  12. Managed Switches

    Hard to beat the used Dell market for home switches, but they can be loud. I ran one with the fans unplugged for 5+ years; it never died, I just upgraded to PoE.
    24-port, $52: https://www.ebay.com/itm/Dell-PowerConnect-5424-24-Port-Gigabit-Ethernet-Switch-4SFP-M023F/372102804664?epid=1808131533&hash=item56a30e34b8:g:WFUAAOSw1JVZ5j6Z
    24-port + PoE + 10G, $150: https://www.ebay.com/itm/DELL-PowerConnect-5524P-POE-24-port-Gigabit-Managed-Layer-3-Switch-2-SFP-ports/152999060231?epid=1430486060&hash=item239f746307:g:xFMAAOSwdIBa4lRQ
  13. That is a great idea! It would be a simple plugin to gather, anonymize, and ship the data. I already have a system which tracks millions of drives, so a baby clone of that could be used. It could be frontended with a page showing population statistics. I wonder how many would opt-in to share drive information from unRAID servers?
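
    A rough sketch of the gather/anonymize/ship idea (the device, salt, and the "ship" step are placeholders; no such plugin exists yet, and it assumes smartctl is available, which it is on unRAID):

    ```python
    # Read drive identity + SMART attributes with smartctl, replace the serial
    # number with a salted hash so a drive can be tracked across reports without
    # being identifiable, and emit JSON ready to ship to a collection endpoint.
    import hashlib
    import json
    import subprocess

    SALT = b"per-install-random-salt"               # placeholder: generate once per server

    def anon_report(dev: str) -> dict:
        info = subprocess.run(["smartctl", "-i", "-A", dev],
                              capture_output=True, text=True).stdout
        fields = {}
        for line in info.splitlines():
            if ":" in line:
                key, _, val = line.partition(":")
                fields[key.strip()] = val.strip()
        serial = fields.pop("Serial Number", "unknown")
        return {
            "drive_id": hashlib.sha256(SALT + serial.encode()).hexdigest()[:16],
            "model": fields.get("Device Model", fields.get("Model Number", "unknown")),
            "capacity": fields.get("User Capacity", "unknown"),
            "raw_smart": info,                      # or a parsed attribute table
        }

    print(json.dumps(anon_report("/dev/sdb"), indent=2))   # placeholder for shipping upstream
    ```
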
  14. Exactly, there is no performance play in the 8+TB drive business. Same reason all the 10k and 15k drives are gone.
  15. Helium does reduce the power (as covered in the WD blog), so much that it is well below a traditional drive, even with more platters. It's not an either/or. They do run hot, mostly because of the lack of airflow within the standard drive form factor. The 8- and 9-platter drives just don't leave any room for airflow.
  16. Would love to see the data on this as it would end the whole mess about "turbo" writes.
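
    A trivial way to produce that data, if anyone wants to: time a large sequential write to an array share, flip the turbo write setting in the GUI, and run it again. The path and size below are placeholders; this is just stdlib Python, nothing unRAID-specific.

    ```python
    # Time a large sequential write and report MB/s. Run once per write mode.
    import os
    import time

    PATH = "/mnt/user/test/turbo_test.bin"     # placeholder: any share on the array
    SIZE_MB = 4096
    CHUNK = b"\0" * (1024 * 1024)              # 1 MiB of zeros per write

    start = time.perf_counter()
    with open(PATH, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())                   # make sure it actually hit the disks
    elapsed = time.perf_counter() - start
    print(f"{SIZE_MB} MB in {elapsed:.1f} s -> {SIZE_MB / elapsed:.0f} MB/s")
    ```
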
  17. Yes, RPM has an impact on performance. But drives have been running at 7200 RPM (and much faster) without helium.
  18. Helium is being used to increase density. It has limited impact on performance, and thus on parity calculation, etc. https://blog.westerndigital.com/rise-helium-drives/ Any performance gain is related to the density, i.e. more heads due to more platters made possible by thinner platters.
  19. Both the Seagate and WD 8TB externals have been shown to work fine with unRAID. First it was the Seagate SMR, which was speculated to be a poor performer due to the newer SMR technology. However, broad use has shown this was just speculation, and these drives were especially inexpensive. Then Best Buy seemed to offer a seemingly endless series of good deals on WD 8TB externals, basically at the same price as the Seagate, which nullified the Seagate's price advantage. At first it was common to find Red-label drives in the WD externals, but recently these have shifted to white-label, with the 3.3V disable feature. This is easily dealt with. Currently, you can use either for good results. Both offer great value.
  20. Actually I do, but that test was against Pi-hole, not BIND.
  21. I wear a tinfoil hat (and thus use Pi-hole). Ping is not a test of DNS performance; check out DNS Benchmark.
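
    To illustrate the difference, here is a quick stdlib-Python timing of actual name resolution (what clients feel) rather than ICMP round-trips to the resolver. The hostname list is arbitrary, it uses whatever resolver the OS points at (Pi-hole, BIND, ...), and it is in no way a substitute for the DNS Benchmark tool mentioned above:

    ```python
    # Time full name resolution per hostname, including any resolver/OS caching.
    import socket
    import time

    NAMES = ["example.com", "wikipedia.org", "github.com", "unraid.net"]

    for name in NAMES:
        start = time.perf_counter()
        try:
            socket.getaddrinfo(name, 80)
            status = "ok"
        except socket.gaierror:
            status = "failed"
        ms = (time.perf_counter() - start) * 1000
        print(f"{name:20s} {ms:7.1f} ms  {status}")
    ```
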
  22. Since there are lots of Norco 4224/4220 being used by unRAID users, are there any alternatives?
  23. Yeah, I too have a bunch of these and do not have any bad reports. The fans fail, but I have had them a long time, so that is to be expected. I have never had a speed problem. I stopped using them because of cost. If you are putting together lots of drives, the 24/48/60 drive units are cheaper than bolting these in, and the backplane is nicer than lots of cables. If you just need 5-10 drives, these are great.
  24. Multi Hundred TBs

    Yes, there is a diminishing return for the increased cost of increasing durability. That is why I said to get a good understanding of the value of the data. Some data needs only two or three 9s, other data maybe four or even eleven 9s. And it is true that in some cases you need to plan for exceptionally large numbers of drive failures, due perhaps to drive firmware; MarFS is configured to survive 200+ drive failures.

    The architecture used is as close to shared-nothing as financially possible. In the extreme, this would be one drive per server per data center, which is obviously not financially possible at scale. So, yes, more than one server and more than one location. The 7+5 configuration allows for a data center to be offline/unreachable/on fire while the data is still both available and durably stored, by putting 3 or 4 storage servers in each location, for a cost below mirroring.

    Backup should always be considered; software issues can invalidate strategies relying on versioning and snapshots. Mirroring is just too expensive (as noted above), hence "crazy amounts" of parity drives are used. http://lambdastack.io/blog/2017/02/26/erasure-coding/ Not sure if Comcast, or AT&T, or Facebook qualify as "real data centers," but they all use "crazy amounts" of parity drives.
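
    To make the 7+5 arithmetic concrete, a tiny worked example (the shard-per-site split is my own illustration of that layout, not a description of any particular MarFS deployment):

    ```python
    # 7 data + 5 parity shards per object, spread over 3 sites; any 7 shards suffice.
    data_shards, parity_shards = 7, 5
    total = data_shards + parity_shards            # 12 shards per object
    sites = 3
    per_site = total // sites                      # 4 shards (servers) per site

    overhead = total / data_shards                 # raw bytes stored per user byte
    print(f"storage overhead: {overhead:.2f}x  (vs 2.00x for a single mirror)")

    # the failure cases the layout is meant to survive
    left_after_site = total - per_site             # a site offline/unreachable/on fire
    print(f"one site down: {left_after_site} shards left, need {data_shards} -> "
          f"{'OK' if left_after_site >= data_shards else 'LOST'}")

    left_after_more = total - (per_site + 1)       # a site down AND one more drive dies
    print(f"one site down plus one more failure: {left_after_more} shards left, need "
          f"{data_shards} -> {'OK' if left_after_more >= data_shards else 'LOST'}")
    ```
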