
Squid

Community Developer

Posts posted by Squid

  1. Depends upon your point of view and boils down to semantics.

     

    The patch doesn't fix any bug or anything like that.  It restores the behaviour that everyone is used to.  It "corrects" how the user set up their template, ignoring what docker now treats as a fatal error by skipping over the offending path.  From the other point of view, it's hiding the template error by skipping over that path.

     

    As @itimpi states, if you don't want this plugin, then don't install it.  But every future release of the OS includes this patch anyway, so it's basically irrelevant.

  2. 13 minutes ago, ElectroBlvd said:

     

    Why would you want to hide errors instead of fixing them?

    Because it's a very common error that users made in the past.  Everything would have kept working perfectly forever, but under 6.12.8 simply updating a container whose template had that error would result in the container disappearing altogether (not simply stopping or refusing to start).

     

    Far easier to simply have the patch (and all future versions of the OS) "correct" the template error automatically, so that 6.12.8 effectively operates identically to pre-6.12.8.

     

    This is identical to the philosophy behind the entire management of the App ecosystem on Unraid.  Certain errors that are commonly made by the maintainers of the templates themselves are automatically corrected (and flagged) so that the end-user has the fewest possible problems.  Not correcting what this patch handles (and the aforementioned maintainer template errors) would result in a support nightmare.
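
    For illustration only, here's roughly what "skipping over the offending path" amounts to, sketched as shell.  The mapping list and container name below are made up, and the actual patch does this inside Unraid's dockerMan code rather than in a script:

    # Hypothetical sketch: drop any volume mapping whose host side is empty
    # before building the docker run command, rather than letting docker
    # treat the blank path as a fatal error.
    mappings=( ":/config" "/mnt/user/appdata/example:/data" )
    args=()
    for m in "${mappings[@]}"; do
      host="${m%%:*}"
      [ -n "$host" ] && args+=( -v "$m" )   # keep only mappings that have a host path
    done
    docker run -d "${args[@]}" example/container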

  3. I leave everything spinning 24/7.  Very slight increase in power consumption vs spinning them down, but like everything else in the world, the path to longevity on anything is to never turn it off.  I've got 10+ year old 3TB consumer grade drives still running without any issues.

     

    Very frequently accessed files I leave on nvme cache-pools.  For backup / redundancy on them, it's all done to a cloud server.

  4. Feb 19 06:45:52 Saber kernel: mce: [Hardware Error]: Machine check events logged
    Feb 19 06:45:52 Saber kernel: [Hardware Error]: Deferred error, no action required.
    Feb 19 06:45:52 Saber kernel: [Hardware Error]: CPU:1 (19:21:0) MC17_STATUS[-|-|MiscV|-|PCC|-|CECC|Deferred|-|-]: 0x8b48504155000044
    Feb 19 06:45:52 Saber kernel: [Hardware Error]: IPID: 0x0000000000000000
    Feb 19 06:45:52 Saber kernel: [Hardware Error]: Bank 17 is reserved.
    Feb 19 06:45:52 Saber kernel: [Hardware Error]: cache level: RESV, tx: DATA

    It's being deferred right now (whatever that means).  Probably safe to ignore.  I'd reboot to see if it reappears before you hit Ignore in FCP.
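
    If you want to double-check after the reboot, a quick look through the syslog will show whether it's come back (exact wording can vary from system to system):

    # See whether any new machine check / hardware error lines appear after rebooting
    grep -iE "mce|hardware error" /var/log/syslog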

  5. 2 hours ago, bmartino1 said:

    I was getting is saying to use a different version of mcelog. the edmac mcelog amd..

    That message is informative and a tad misleading.  The module is being used.

  6. As mentioned above, Most Free is terrible for cache-enabled shares once multiple drives are basically equivalent in free space.

     

    This is mainly because of how Linux caches the writes.  Since RAM is used as the buffer and writes to drive 1 won't interfere with drive 2, you'll wind up with multiple drives being written to simultaneously, and with parity that takes a huge hit, whether or not reconstruct write is enabled in disk settings.

     

    High water is the ideal setting to use for all cache-enabled shares.
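
    If you want to confirm which allocation method each share is actually set to, the share settings live on the flash drive (I believe the key the GUI writes is shareAllocator, with values like "highwater" and "mostfree"):

    # Quick check of the allocation method per share
    grep -H shareAllocator /boot/config/shares/*.cfg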

  7.  

      5 Reallocated_Sector_Ct   PO--CK   098   098   010    -    2272

     

    2272 reallocated sectors is over my comfort level by a significant amount; the drive should be replaced.  Drill out the cover screws so you can get the cover off and you'll have some nice new wall art for your house.

  8. You need to make the "Cache" share the reverse: primary is the cache pool, secondary is the array, and the mover action is to move from pool to array.

     

    But if you have trouble (the system tells you it's an invalid share name when editing the share), then you need to manually rename the folder at the command prompt - as far as I know, a share named "Cache" is technically not allowed since there's a cache pool named "cache".
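
    A minimal example of the rename, assuming the new name is "CacheShare" (pick whatever you actually want to call it) and that the old folder may exist on more than one device:

    # Rename the old "Cache" share folder wherever it exists on the array and the cache pool
    for d in /mnt/disk[0-9]* /mnt/cache; do
      [ -d "$d/Cache" ] && mv "$d/Cache" "$d/CacheShare"
    done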

  9. 26 minutes ago, tophat17 said:

    The solution to increase the stability of my server was to disable c-states in the bios and to cap my Memory from 3600mhz to 3200mhz. 

    Technically, you should actually run the memory at 2133 MT/s, not 3200.  2133 is the actual rated speed of the memory (https://www.gskill.com/specification/165/326/1562840073/F4-3600C16D-16GTZNC-Specification); anything over that is an overclock, and all overclocks introduce instability.
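
    If you want to see what the DIMMs report as their rated speed versus what they're currently running at, this will show both (field names vary slightly between dmidecode versions):

    # "Speed" is the module's reported speed; "Configured Memory Speed" is what it's running at right now
    dmidecode -t memory | grep -i speed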

     

     

  10. appdata                           shareUseCache="only"    # Share exists on disk1, disk3, disk4, disk6, disk10, disk11
    C---e                             shareUseCache="only"    # Share exists on cache
    domains                           shareUseCache="no"      # Share exists on disk6
    isos                              shareUseCache="no"      # Share exists on disk6
    P--x                              shareUseCache="no"      # Share exists on disk1, disk3, disk4, disk5, disk6, disk7, disk8, disk9, disk10, disk11
    system                            shareUseCache="no"      # Share exists on disk6

     

    Because there's nothing for mover to actually do.

     

    Presumably you want appdata to get moved from the array to the cache drive, and in that case you want the primary storage on the share to be the cache pool, the secondary to be the array, and the mover action set to move from secondary to primary.
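
    If you'd rather sanity-check it from the command line, the share's settings live on the flash drive; with primary set to the cache pool, secondary to the array, and mover moving secondary to primary, the older-style value you'd expect to see there is "prefer" rather than the "only" shown in the dump above:

    # Show the current mover / cache setting for the appdata share
    grep shareUseCache /boot/config/shares/appdata.cfg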

  11. No.  This only changes how docker handles empty paths.  If you're seeing "Not Available", it effectively means that the system can't hit docker hub (or GHCR) to see if there's an update available, due to network issues, VPN, etc.
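
    A quick way to see whether the server can reach Docker Hub at all (any HTTP response, even a 401, means basic connectivity is there; no response at all points at DNS / VPN / routing):

    # Rough connectivity check against the Docker Hub registry endpoint
    curl -sI https://registry-1.docker.io/v2/ | head -n 1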
