
Squid

Community Developer
  • Posts: 28,733
  • Days Won: 314

Everything posted by Squid

  1. Fair enough. It was actually going to wind up being deprecated anyway, as VM Snapshots etc. are being integrated into 6.13
  2. It would be worthwhile to run memtest from the boot menu for a minimum of a couple of passes. If you boot via UEFI, you will need to temporarily switch to legacy boot to run memtest. You have a whack of segfaults, which are often caused by bad memory and could also result in what you are seeing.
  3. Unfortunately, you stopped the array, so it's very hard to say what's going on. It's probably related to "share cache full", but without the diagnostics taken after the array is started it's impossible to say anything
  4. Sep 22 15:56:35 oscnhome mcelog: failed to prefill DIMM database from DMI data
     Sep 22 15:56:35 oscnhome mcelog: Kernel does not support page offline interface
     Nothing to worry about, but a BIOS update might silence them. Also, FWIW, every 30 seconds a login via SSH from 192.168.1.43 gets logged. The constant logging makes it very annoying to try and isolate any issues, and it will ultimately fill /var/log and cause problems.
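     A quick way to see how much that SSH chatter is contributing, and how close the log is to filling, is a sketch like the one below (run from the console; the /var/log/syslog path and the 192.168.1.43 address come from your log, everything else is just an illustration):

         grep -c '192.168.1.43' /var/log/syslog   # count the log lines generated by that host
         df -h /var/log                           # /var/log lives in RAM and is small, so watch its usage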
  5. Not necessarily your problem, but corruption issues are generally related to bad RAM (as your searching has already indicated). FWIW, your particular RAM is not on the motherboard's memory QVL, and G.Skill doesn't list that motherboard on the memory's QVL. Personally, I only ever buy RAM from the MB QVL for the most trouble-free experience. But take it with a grain of salt: just because neither the memory nor the MB says they are compatible with each other doesn't mean that they are not.
  6. Settings page. It's always on, and more expanded if you install Dynamix File Manager
  7. They're safe to attach directly to your next post. That way other members can learn and/or comment also
  8. You need to post the diagnostics which will have the entry in it.
  9. Your docker folder is in the system share. That share is set to be moved to the array, and the files now exist split between the array (disk 1) and the cache pool. You really don't want this. Not only is there a huge performance hit in running docker from the array (if mover fully succeeds in moving everything), but when only some of the files can be moved (which is to be expected, since docker always has files open and mover can't move in-use files) it's probably going to play havoc, with weird things happening. What I'd do (a quick way to confirm the split is shown below):
     • Settings, Docker: stop the docker service
     • In advanced view, check off and delete the folder
     • Shares, system: set the Use Pool setting to move from the Array to the Cache Pool (ie: use cache: prefer)
     • Main, Array Operations: run Mover (Move Now)
     • Settings, Docker: re-enable the service
     • Apps, Previous Apps: check off what you want re-installed and hit "Install x Apps"
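     If you want to confirm where the files currently sit before doing any of the above, a rough check from the console looks something like this (the disk1/cache paths are assumptions based on your setup; adjust the names to match your disks and pool):

         ls /mnt/disk1/system/docker     # anything here is sitting on the array
         ls /mnt/cache/system/docker     # anything here is still on the cache pool
         du -sh /mnt/disk*/system /mnt/cache/system 2>/dev/null   # rough size of each portion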
  10. Which containers? When editing them, is the repository something akin to https://ghcr.io/ blah blah? There have been some reports of ghcr (GitHub Container Registry) reporting incorrect SHA values on containers, which does result in what you're seeing.
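     If you want to see what docker is actually getting, you can pull the image manually and inspect the digest it ends up with (the image name here is only a placeholder for whichever container is affected):

         docker pull ghcr.io/someuser/someimage:latest
         docker image inspect ghcr.io/someuser/someimage:latest --format '{{index .RepoDigests 0}}'   # digest of what was pulled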
  11. The entire GitHub repository has been deleted by SmartPhoneLover, so none of the templates are available and have been removed. Because of that, you're probably also not going to get any support / answers via this forum, especially from the maintainer
  12. You don't need the "-e " in any of those variables. It's implied already since you're creating a variable, so what you're doing is creating a variable named "-e variablename" instead of "variablename". Also, unless you're running the container on its own dedicated IP address (network: br0), the host port you've specified (443) is already in use
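     Under the hood the template builds a docker run command and adds the "-e" for you. A minimal illustration (MY_VAR and the alpine image are only placeholders):

         docker run --rm -e MY_VAR=foo alpine env | grep MY_VAR   # correct: the key is just the variable name
         # entering "-e MY_VAR" as the key instead asks docker for a variable literally named "-e MY_VAR"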
  13. Did you start the array with Parity 2 removed? If you did, then it's natural that it has to rebuild the contents of Parity 2 when you've re-added it.
  14. Wouldn't have been the issue
  15. To re-enable a drive that was disabled (eg: if the drive dropped offline and reseating the cables is the probable solution):
     • Stop the array.
     • Unassign the disabled drive.
     • Start the array.
     • Stop the array.
     • Reassign the drive.
     • Start the array.
     A rebuild will then start and reconstruct the contents of the drive from parity.
  16. What was the name of the share? (Diagnostics are anonymized, so we can't really tell.) But almost without fail, loss of data is the result of user error (inadvertently deleting the share contents or telling Sonarr to delete them).
  17. It should. The most common causes of this degradation are plugging the cable into a 100Mbit/s switch or a bad cable. Post your diagnostics if changing the cable doesn't help
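     An easy first check is what speed the NIC has actually negotiated (eth0 is an assumption; substitute your interface name):

         ethtool eth0 | grep -i speed   # should report 1000Mb/s (or better) on a gigabit link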
  18. Maybe the slots are disabled. Seems like the PCIe lanes are extremely limited: https://forums.servethehome.com/index.php?threads/topton-nas-motherboard.37979/#post-354365
  19. It should show up in Main as either disabled with the drive. Stopping the array should also show you the same things
  20. Settings - Network Settings. You will need to stop Docker and VMs (Settings - Docker and Settings - VM Manager) to make changes
  21. Is it disabled in the BIOS? You might need the dummy plug....
  22. Can you repost your diagnostics after you start the array, and ideally also after Plex crashes?