Squid

Community Developer
  • Posts: 28689
  • Days Won: 314

Everything posted by Squid

  1. MCEs: you need to post your diagnostics. The EDAC message is informative and doesn't actually mean anything.
  2. Docker - Container Size will show you mostly where all the space is being taken up (a rough CLI equivalent is sketched below).
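     As a hedged aside (not from the original post): the standard Docker CLI can report similar per-container size information, assuming you have shell access to the server.

         # List containers with their writable-layer and virtual image sizes
         docker ps -s

         # Break down disk usage by images, containers, local volumes and build cache
         docker system df -v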
  3. You can also try running make_bootable on the flash drive via Windows (run as administrator). What you need to back up and move to the new flash, if necessary, is the contents of /config (a backup sketch follows below).
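     A minimal sketch of one way to grab that backup from a running server, assuming the flash is mounted at /boot and that a /mnt/user/backups share exists (both are assumptions, adjust to your setup):

         # Copy the flash configuration to a dated backup folder on the array
         cp -r /boot/config "/mnt/user/backups/flash-config-$(date +%Y%m%d)"

     You can then copy that config folder onto the freshly prepared new flash drive.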
  4. Assuming that the proxy.cfg file for CA is set up correctly, does changing the line instead to be $jsonPage = download_url("https://registry.hub.docker.com/v1/search?q=$filter&page=$pageNumber"); fix it?
  5. Depends upon your point of view and boils down to semantics. The patch doesn't fix any bug or anything like that. It's restoring the behaviour that everyone is used to: it "corrects" how the user set up their template, ignoring what docker now calls an error (fatal) by skipping over the offending path. From the other point of view, it's hiding the template error by skipping over the offending path. As @itimpi states, if you don't want this plugin, then don't install it. But every future release of the OS has this patch included anyway, so it's basically irrelevant.
  6. Because it's a very common error that users made in the past, and the result of that error is that everything would have been working perfectly forever, but under 6.12.8 simply updating a container whose template contained that error would result in the container disappearing altogether (not simply stopping or refusing to start). Far easier to simply have the patch (and all future versions of the OS) "correct" the template error automatically, so that the operation of 6.12.8 and pre-6.12.8 is effectively identical. This is the same philosophy behind the entire management of the App ecosystem on Unraid: certain errors that are commonly made by the maintainers of the templates themselves are automatically corrected (and flagged) so that the end-user has the least amount of problems. The alternative of not correcting what this patch addresses (and the aforementioned template errors by maintainers) would result in a support nightmare.
  7. I leave everything spinning 24/7. Very slight increase in power consumption vs spinning them down, but like everything else in the world, the path to longevity is to never turn them off. I've got 10+ year old 3TB consumer-grade drives still running without any issues. Very frequently accessed files I leave on NVMe cache pools. For backup / redundancy on them, it's all done to a cloud server.
  8. Feb 19 06:45:52 Saber kernel: mce: [Hardware Error]: Machine check events logged
     Feb 19 06:45:52 Saber kernel: [Hardware Error]: Deferred error, no action required.
     Feb 19 06:45:52 Saber kernel: [Hardware Error]: CPU:1 (19:21:0) MC17_STATUS[-|-|MiscV|-|PCC|-|CECC|Deferred|-|-]: 0x8b48504155000044
     Feb 19 06:45:52 Saber kernel: [Hardware Error]: IPID: 0x0000000000000000
     Feb 19 06:45:52 Saber kernel: [Hardware Error]: Bank 17 is reserved.
     Feb 19 06:45:52 Saber kernel: [Hardware Error]: cache level: RESV, tx: DATA

     It's being deferred right now (whatever that means). Probably safe to ignore. I'd reboot to see if it re-appears before you hit Ignore in FCP.
  9. That message is informative and a tad misleading. The module is being used.
  10. Are you talking about .DS_Store files or something like AppleDouble files?
  11. As mentioned above, Most Free is terrible for cache-enabled shares once multiple drives are basically equivalent in free space. This is mainly because of how Linux caches the writes. Since RAM is used as the buffer and writes to drive 1 won't interfere with drive 2, you'll wind up with multiple drives being written to simultaneously, and with parity that takes a huge hit, whether or not reconstruct write is enabled in Disk Settings. High water is the ideal setting to use for all cache-enabled shares.
  12. Does booting via safe mode in the bootmenu work for you?
  13. 5 Reallocated_Sector_Ct PO--CK 098 098 010 - 2272

      2272 reallocated sectors is over my comfort level by a significant amount, and the drive should be replaced. Drill out the cover screws so you can get the cover off and you'll have some nice new wall art for your house.
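      For reference (not part of the original post), the raw SMART attributes can be pulled from the command line with smartctl, assuming /dev/sdX is the drive in question:

          # Print the SMART attribute table and check the reallocated sector count
          smartctl -A /dev/sdX | grep -i reallocated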
  14. You need to make the "Cache" share the reverse: primary storage is the cache pool, secondary storage is the array, and the mover action is move from pool to array. But if you have trouble (the system tells you it's an invalid share name when editing the share), then you need to manually rename the folder at the command prompt (a sketch follows below). As far as I know, a share named "Cache" is technically not allowed since there's a cache pool named "cache".
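      A minimal sketch of that manual rename, assuming the pool is mounted at /mnt/cache and "CacheData" is just a hypothetical new name (both are assumptions, not from the original post):

          # Rename the top-level share folder on every disk/pool it exists on
          for d in /mnt/disk* /mnt/cache; do
              [ -d "$d/Cache" ] && mv "$d/Cache" "$d/CacheData"
          done

      After the rename, revisit the share's settings in the webGUI to set the primary/secondary storage and mover action.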
  15. Technically, you should actually run the memory at 2133 MT/s, not 3200. 2133 is the actual speed of the memory (https://www.gskill.com/specification/165/326/1562840073/F4-3600C16D-16GTZNC-Specification); anything over that is an overclock, and all overclocks introduce instability.
  16. Contact Support requesting a transfer of the registration. Include a copy & paste (no screenshots) of what appears within Tools - Registration when using the new drive. Are you able to access the original drive at all or is it really broken?
  17. appdata  shareUseCache="only"  # Share exists on disk1, disk3, disk4, disk6, disk10, disk11
      C---e    shareUseCache="only"  # Share exists on cache
      domains  shareUseCache="no"    # Share exists on disk6
      isos     shareUseCache="no"    # Share exists on disk6
      P--x     shareUseCache="no"    # Share exists on disk1, disk3, disk4, disk5, disk6, disk7, disk8, disk9, disk10, disk11
      system   shareUseCache="no"    # Share exists on disk6

      Because there's nothing for mover to actually do. Presumably you want appdata to get moved from the array to the cache drive, and in that case you want primary storage on the share to be the cache pool, secondary to be the array, and the mover action set to move from secondary to primary.
  18. Change the name of your script to something that doesn't contain "Out Of Memory". That's what FCP looks for.
  19. There's nothing to do or solve. It's simply docker giving a warning for everything. Limiting memory on an application does work, no problems.
  20. No. This only changes how docker handles empty paths. If you're seeing "Not Available", it effectively means that the system can't hit Docker Hub (or GHCR) to see if there's an update available, due to network issues, VPN, etc.
  21. Can you post the template you're using (ideally the xml from /config/plugins/dockerMan/templates-user on the flash drive)?
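      For reference (not part of the original post), the flash drive is normally mounted at /boot on a running server, so the user templates can be listed with something like:

          # List the user-defined docker templates stored on the flash drive
          ls /boot/config/plugins/dockerMan/templates-user/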
  22. For support with any lsio container, it's best to chat them up on their Discord server. Hit the icon, then select Discord.