Squid

Community Developer

Everything posted by Squid

  1. You need to set the "Cache" share up the reverse way: primary storage is the Cache pool, secondary is the array, and the mover action moves from pool to array. But if you have trouble (the system tells you it's an invalid share name when editing the share), then you need to manually rename the folder at the command prompt (a sketch is below). As far as I know, a share named "Cache" is technically not allowed, since there's already a cache pool named "cache".
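     A minimal sketch of the rename, assuming the folder lives on a pool named "cache" and the new share name is a hypothetical "CachedFiles" (substitute your own names):

         # Rename the top-level folder; the share takes its name from the folder.
         mv /mnt/cache/Cache /mnt/cache/CachedFiles
         # If the share also exists on array disks, rename it there too, e.g.:
         mv /mnt/disk1/Cache /mnt/disk1/CachedFiles

     Afterwards, re-check the share's settings under Shares, since the renamed folder shows up as a new share.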
  2. Technically, you should actually run the memory at 2133 MT/s, not 3200. 2133 is the actual speed of the memory (https://www.gskill.com/specification/165/326/1562840073/F4-3600C16D-16GTZNC-Specification); anything over that is an overclock, and all overclocks introduce instability.
  3. Contact Support requesting a transfer of the registration. Include a copy & paste (no screenshots) of what appears within Tools - Registration when using the new drive. Are you able to access the original drive at all or is it really broken?
  4. appdata shareUseCache="only"   # Share exists on disk1, disk3, disk4, disk6, disk10, disk11
     C---e   shareUseCache="only"   # Share exists on cache
     domains shareUseCache="no"     # Share exists on disk6
     isos    shareUseCache="no"     # Share exists on disk6
     P--x    shareUseCache="no"     # Share exists on disk1, disk3, disk4, disk5, disk6, disk7, disk8, disk9, disk10, disk11
     system  shareUseCache="no"     # Share exists on disk6

     Because there's nothing for mover to actually do. Presumably you want appdata to get moved from the array to the cache drive, and in that case you want primary storage on the share to be the cache pool, secondary to be the array, and the mover action set to move from secondary to primary (a sketch of the resulting settings is below).
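     A minimal sketch of what those settings look like in the share's config file on the flash drive, assuming the pool is named "cache" (normally you'd set this via Shares -> appdata in the GUI rather than editing the file by hand):

         # /boot/config/shares/appdata.cfg (excerpt)
         shareUseCache="prefer"   # primary: cache pool, secondary: array; mover moves array -> pool
         shareCachePool="cache"   # which pool acts as primary storage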
  5. Change the name of your script to something that doesn't contain "Out Of Memory". That's what FCP looks for.
  6. There's nothing to do or solve. It's simply docker giving a warning for everything. Limiting memory on an application works with no problems (see the sketch below).
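     A minimal sketch of capping a container's memory, assuming a hypothetical image named "myapp"; in the Unraid GUI the equivalent is adding the flag to the container's Extra Parameters:

         # Cap the container at 512 MiB of RAM.
         docker run -d --name myapp --memory=512m myapp

     The cap itself is enforced regardless of the warning; the warning is just noise.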
  7. No. This only changes how docker handles empty paths. If you're seeing "Not Available", it effectively means the system can't reach Docker Hub (or GHCR) to see whether an update is available, due to network issues, a VPN, etc. (a quick reachability check is sketched below).
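     A minimal sketch of checking registry reachability from the server's console (any HTTP response, even a 401, means the endpoint is reachable; no response points at DNS/VPN/network rather than the containers):

         # Docker Hub registry endpoint
         curl -sI https://registry-1.docker.io/v2/ | head -n1
         # GitHub Container Registry
         curl -sI https://ghcr.io/v2/ | head -n1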
  8. Can you post the template you're using? (Ideally the xml from /config/plugins/dockerMan/templates-user on the flash drive.)
  9. For support for all lsio containers, it's best to chat them up on their Discord. Hit the icon, then select Discord.
  10. Hit Check for Updates multiple times without ever navigating away from the page (or do pretty much anything on the page without navigating away), and when you finally hit Update All, an "update" gets issued for the same containers. The number of times the "update" happens equals the number of times you did something on that page. Kudos to @HastyBacon for figuring out the steps to replicate it so the one-line fix could be implemented.
  11. NFS not using cache

      If the file being written already exists and is on the array, then a re-write of the file is going to go onto the array.
  12. Just re-enable Docker via Settings - Docker, then go to Apps, Previous Apps. Check off everything you want and hit Reinstall X Apps.
  13. Docker containers will all be there and still running. For parity, you can check off "Parity is already valid" when starting the array. IE: no data loss.
  14. Try running the system with the memory at its actual speed, and don't overclock it. The actual (SPD) speed is 2133. All overclocks introduce instability.
  15. The flash drive either dropped offline, is corrupted, or is read-only. Diagnostics would help
  16. Update should be available for this in ~5 minutes.
  17. Probably you'd need to run the command after the array is up and running. Make a script via the User Scripts plugin set to run at first array start (a minimal sketch is below).
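      A minimal sketch of such a script, assuming a hypothetical placeholder your_command; in User Scripts, add a new script with this body and set its schedule to "At First Array Start Only":

          #!/bin/bash
          # Runs once the array is up, so anything depending on the
          # array/pool mounts being present is safe to run here.
          your_command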
  18. Try it in safe mode and without the dockerd command you're running in /config/go on the flash drive (a sketch of disabling it is below).
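      A minimal sketch of disabling that line for the test, assuming the stock go file plus your added command (comment yours out rather than deleting it):

          #!/bin/bash
          # /boot/config/go -- stock contents plus your custom line
          # Start the Management Utility
          /usr/local/sbin/emhttp &
          # dockerd ...   <- your custom line, commented out for the test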
  19. Safe assumption that its being effectively unmaintained is less than perfect? From the SnapRAID HISTORY:

      12.3 2024/01
      * Fix potential integer overflow when computing the completion percentage. No effect on the functionality.
      * Documentation improvements.

      12.2 2022/08
      * Fix build issue with GLIBC 2.36

      12.1 2022/01
      * Reduce stack usage to work in environments with limited stack size, like MUSL.
      * Increase the default disk cache from 8 MiB to 16 MiB.

      12.0 2021/12
      * Parallel disk scanning. It's always enabled but it doesn't cover the -m option that still process disks sequentially.

      11.6 2021/10

      Or that the GUI for it is completely unofficial and also suffers from effectively zero updates; the last commit was April 27, 2023.

      FWIW, for something that has effectively been vaporware since day 1, I did try it a number of years ago. The lack of real-time recovery / emulation was a major problem. Not to mention that since its snapshots are periodic, based upon the schedule you set, Murphy says that any random file you need to recover due to a failure is not going to be covered by the snapshot. I ran it for about a week and then decided to run for the hills. Trusting your data to something like it doesn't give me a warm and fuzzy feeling. But to each their own.
  20. Installing this is recommended in the announcement thread for 6.12.8, and if you have Fix Common Problems, it also flags it as an error if it's not installed.
  21. It's not stuck. DONE appears when everything is actually done. Prior to that, the button says CLOSE, which moves processing to the background.