c3

Posts posted by c3

  1. Durp...

     

    I've been blindly updating the Docker container from time to time and never knew to do the upgrade from the GUI... Now I get:

    This version of Nextcloud is not compatible with PHP 7.2.
    You are currently running 7.2.14.

    My Docker containers (mariadb-nextcloud and nextcloud) are up to date, but I guess that does not help.

     

    Without a GUI, how can I proceed?

  2. 56 minutes ago, pwm said:

     

    The read requirements when consuming a media file are quite low.

     

     

    For media data, it's normally when doing batch processing (many media files processed at multiples of real-time speed) or when doing library indexing (a need to access a huge number of files) that read speed - or read operations per second - will matter.


    And since the data isn't striped, a large unRAID media server may often stream from multiple disks, ending up network-limited instead of disk-limited.

     

    In the end, it's normally for write operations or when running VM or Docker applications that sustained transfer rate or operations per second may end up too low with HDD accesses in the array.

    I manage 5+ PB of storage for media files (Quantum and Primestream), and unRAID is not in the game for those read or write speed requirements.
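
    To put rough numbers on this, here is a back-of-the-envelope Python sketch; the bitrate and throughput figures are assumptions for illustration, not measurements:

    # Rough arithmetic, not a benchmark: per-stream read demand versus what
    # one HDD and a gigabit link can deliver. All figures are assumptions.
    bluray_mbps = 40                  # high-bitrate Blu-ray remux, Mbit/s
    stream_mb_s = bluray_mbps / 8     # ~5 MB/s per stream
    hdd_mb_s = 150                    # typical sequential HDD read, MB/s
    gige_mb_s = 118                   # practical gigabit Ethernet payload, MB/s

    print(f"one stream needs ~{stream_mb_s:.0f} MB/s")
    print(f"one HDD could feed ~{hdd_mb_s / stream_mb_s:.0f} such streams")
    print(f"a gigabit link tops out at ~{gige_mb_s / stream_mb_s:.0f} streams")

    Under these assumptions even a single spinning disk outruns the network for plain streaming; it takes batch processing or library indexing to make the disks the bottleneck.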

  3. 8 hours ago, shteud said:

    A ZFS RAID-Z2 pool (from the same 8 disks) can have much faster read speeds? But write speeds in ZFS would be much slower (because I can't use an SSD as a cache buffer)?

    ZFS can use an SSD cache (see the sketch below). ZFS is probably the speed king if you are looking for the fastest option. There are plenty of other reasons why unRAID is popular, but speed is not one of them.
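
    For example, a minimal sketch of attaching SSDs to an existing pool; the pool name "tank" and the device paths are assumptions, and note that a log device only absorbs synchronous writes, so it is not a general write buffer:

    # Minimal sketch: attach SSDs to an existing ZFS pool named "tank".
    # Device paths are placeholders.
    import subprocess

    # L2ARC: extends the read cache onto an SSD
    # (equivalent to: zpool add tank cache /dev/sdx)
    subprocess.run(["zpool", "add", "tank", "cache", "/dev/sdx"], check=True)

    # SLOG: helps synchronous writes only, not a general write buffer
    # (equivalent to: zpool add tank log /dev/sdy)
    subprocess.run(["zpool", "add", "tank", "log", "/dev/sdy"], check=True)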

  4. You need to decide which factor is your primary concern: data durability (data loss) or data availability. As mentioned, backups dramatically improve data durability. But if you are after data availability, you'll need to handle all the hardware factors: power supplies (as mentioned), memory (ECC and DIMM fail/sparing), cooling, and probably networking (LACP, etc.).



    SME Storage


    Some posts expressed worry that, if unRAID encounters a bad SATA connection, a hot spare (when available) will kick in.
    Exactly what I would want! First priority is the health of the array.



    The sparing process can be scripted; as a subject matter expert with vast experience, you will find this straightforward. Perl and Python are available in the Nerd Tools. This may allow you to worry less while working. (A minimal monitoring sketch follows this post.)

    However, I am not sure it would be "hot", as the array must shut down to reassign the drive. You could implement something like NetApp's maintenance garage function: test the suspect drive, then resume or fail it.
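
    As a starting point, a minimal Python sketch of the monitoring half; the device list and notification script path are assumptions, and the actual drive reassignment is left out since the array must be stopped for that:

    # Minimal sketch: poll SMART health and raise an unRAID notification
    # when a drive fails. Device list and notify path are assumptions.
    import subprocess, time

    DEVICES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]
    NOTIFY = "/usr/local/emhttp/plugins/dynamix/scripts/notify"  # assumed path

    def healthy(dev):
        # smartctl -H prints an overall health self-assessment
        out = subprocess.run(["smartctl", "-H", dev],
                             capture_output=True, text=True)
        return "PASSED" in out.stdout

    while True:
        for dev in DEVICES:
            if not healthy(dev):
                subprocess.run([NOTIFY, "-s", f"{dev} failed SMART health check"])
        time.sleep(3600)  # hourly check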
  5. On 3/22/2018 at 9:14 AM, MvL said:

    If the time comes when 50TB SSDs are affordable, what speed do we get with these SSDs? Because SSDs are faster than HDDs. I assume that the SATA interface is also the limit. So that should again be 4x600MB/s = 2400MB/s (theoretical), 80-100MB/s per single link. That's a pity, because the SSDs are a lot faster. I'm not sure if an expander chassis is a good idea...

    What workload are you expecting to require such bandwidth, and thus invalidate using the expander chassis?
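
    The arithmetic behind those numbers, as a quick sketch; the 24-bay count is an assumption about the expander chassis:

    # Back-of-the-envelope: one 4-lane SAS-2 uplink shared by a shelf.
    lanes = 4
    lane_mb_s = 600              # SAS-2: 6 Gbit/s per lane ~ 600 MB/s
    bays = 24                    # assumed chassis size

    total = lanes * lane_mb_s    # 2400 MB/s for the whole shelf
    per_drive = total / bays     # 100 MB/s if every bay streams at once

    print(f"{total} MB/s total, ~{per_drive:.0f} MB/s per drive")

    A spinning disk rarely sustains much more than that anyway, so the shared uplink only becomes the bottleneck once the bays are full of SSDs all streaming at once.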

  6. Hard to beat the used Dell market for home switches, but they can be loud. I ran one with the fans unplugged for 5+ years; it never died, and I only retired it when I upgraded to PoE.

    24-port: $52
    https://www.ebay.com/itm/Dell-PowerConnect-5424-24-Port-Gigabit-Ethernet-Switch-4SFP-M023F/372102804664?epid=1808131533&hash=item56a30e34b8:g:WFUAAOSw1JVZ5j6Z

    24-port + PoE + 10G: $150
    https://www.ebay.com/itm/DELL-PowerConnect-5524P-POE-24-port-Gigabit-Managed-Layer-3-Switch-2-SFP-ports/152999060231?epid=1430486060&hash=item239f746307:g:xFMAAOSwdIBa4lRQ

  7. 17 hours ago, SSD said:

    But unfortunately we can't arrange such a study.

    That is a great idea!

     

    It would be a simple plugin to gather, anonymize, and ship the data. I already have a system which tracks millions of drives, so a baby clone of that could be used. It could be fronted with a page showing population statistics.

     

    I wonder how many would opt-in to share drive information from unRAID servers?
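
    A minimal sketch of the gathering side; the collection endpoint is hypothetical, smartctl's --json output needs smartmontools 7+, and the serial number is hashed so the report stays anonymous:

    # Minimal sketch: collect SMART data, anonymize the serial, ship JSON.
    # The endpoint URL is a placeholder, not a real service.
    import hashlib, json, subprocess, urllib.request

    def report(dev):
        out = subprocess.run(["smartctl", "-i", "-A", "--json", dev],
                             capture_output=True, text=True)
        data = json.loads(out.stdout)
        serial = data.get("serial_number", "")
        data["serial_number"] = hashlib.sha256(serial.encode()).hexdigest()
        req = urllib.request.Request(
            "https://example.com/drive-stats",   # hypothetical endpoint
            data=json.dumps(data).encode(),
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

    report("/dev/sdb")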

  8. Helium does reduce power draw (as covered in the WD blog), so much that it is well below a traditional drive's, even with more platters. It's not an either/or.

    They do run hot, mostly because of the lack of airflow in the traditional drive space. The 8- and 9-platter drives just don't leave any room for airflow.

  9. Helium is being used to increase density. It has limited impact on performance, and thus on parity calculation, etc. https://blog.westerndigital.com/rise-helium-drives/
    Any performance gain is related to the density, i.e., more heads, since thinner platters allow more platters per drive.

  10. 9 hours ago, FreeMan said:

    Somewhere here, I thought I'd seen a recommendation of a make/model of 8TB external drive that had a good drive in it that was working well for shucking and installing as a parity or data drive. Unfortunately, my search-fu is failing me and I don't seem to be able to find that thread any more.

     

    I was at Fry's yesterday, and they have a Seagate Expansion Desktop STEB8000100 model SRD0NF2 for the regular price of $150, which seems pretty reasonable. However, it seems to me that the recommendation was for a WD external.

     

    This will have to start life as my parity, as it would be my first 8TB drive. Does anyone have any experience with these Seagate drives? Is the drive in them a decent one for use in unRAID or should I continue my search?

    Both the Seagate and WD 8TB externals have been shown to work fine with unRAID.

     

    First it was the Seagate SMR, which was speculated to be a poor performer due to the newer SMR technology. However, broad use has shown this was just speculation. These drives were especially inexpensive.

    Then Best Buy offered a seemingly endless series of good deals on WD 8TB externals, basically at the same price as the Seagate, which nullified Seagate's price advantage. At first it was common to find RED-label drives in the WD externals, but recently these have shifted to white-label drives with the 3.3V-disable feature. This is easily dealt with.

     

    Currently, you can use either for good results. Both offer great value.

  11. Yeah, I too have a bunch of these and have no bad reports. The fans fail, but I have had them a long time, so that is to be expected. I have never had a speed problem. I stopped using them because of cost: if you are putting together lots of drives, the 24/48/60-drive units are cheaper than bolting these in, and the backplane is nicer than lots of cables. If you just need 5-10 drives, these are great.

  12. Yes, there are diminishing returns for the increased cost of increasing durability. That is why I said to get a good understanding of the value of the data. Some data needs only two or three 9s; other data needs four or even eleven 9s. And it is true that in some cases you need to plan for exceptionally large numbers of drive failures, due perhaps to drive firmware; MarFS is configured to survive 200+ drive failures.

     

    The architecture used is as close to shared-nothing as financially possible. In the extreme, this would be one drive per server per data center, which is obviously not financially possible at scale. So, yes, more than one server and more than one location. The 7+5 configuration allows a data center to be offline/unreachable/on fire while the data remains both available and durably stored, by putting 3 or 4 storage servers in each location, for a cost below mirroring.

     

    Backup should always be considered. Software issues can invalidate strategies relying on versioning and snapshots.

     

    Mirroring is just too expensive (as noted above), hence "crazy amounts" of parity drives are used. http://lambdastack.io/blog/2017/02/26/erasure-coding/
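
    For concreteness, the cost math for the 7+5 layout versus mirroring, as a sketch; the 4-fragments-per-site split follows the description above:

    # Sketch of the 7+5 layout: 12 fragments total, any 7 can rebuild.
    data_shards, parity_shards = 7, 5
    total = data_shards + parity_shards     # 12 fragments

    overhead = total / data_shards          # ~1.71x raw per usable byte
    print(f"7+5 erasure coding: {overhead:.2f}x raw storage")
    print("3-site mirroring:   3.00x raw storage")

    # With 4 fragments in each of 3 sites, losing a whole site costs 4
    # fragments -- within the 5-fragment loss budget, so the data stays
    # available and durable at well under the cost of mirroring.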

    Not sure if Comcast, AT&T, or Facebook qualify as "real data centers", but they all use "crazy amounts" of parity drives.