Squid

Community Developer

Everything posted by Squid

  1. Unfortunately, that "create_parent" doesn't exist in the diagnostics. Can you disable mover logging, reboot, and then run mover again? We won't see the errors, but we should be able to see what happens (appdata is being successfully moved to the cache drive in those diagnostics).
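     If it's easier to do from a console session, something along these lines should work (assuming the stock mover script that ships with Unraid; mover logging itself is toggled under Settings - Scheduler):
     # run mover manually and watch the syslog while it works
     mover
     tail -f /var/log/syslog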
  2. You've added, for one reason or another, max protocol = SMB2_02. That limits your SMB connections to the earliest version of SMB2, which is technically deprecated and insecure. You should remove that line so that everything can connect via SMB3, which is the current standard and a far more secure protocol. Not sure about the other errors, but maybe they're related. You can fix this via Settings - SMB Settings, SMB Extras. Reboot after making the change to ensure it takes effect.
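     If you want to confirm where that setting is coming from, something like this (assuming the usual flash layout) will show it:
     # find the offending line in the SMB Extras file on the flash drive
     grep -n "max protocol" /boot/config/smb-extra.conf
     Then remove it via Settings - SMB Settings, SMB Extras rather than editing the file by hand.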
  3. Where can I see what folders are taking up my RAM? If you think (or have been told on the forum) that something somewhere is filling up your RAM (rootfs etc), then this might help in diagnosing exactly where, to help you find out why.
     From the Plugins tab, select Install Plugin and enter this URL: https://raw.githubusercontent.com/Squidly271/misc-stuff/master/memorystorage.plg
     NOTE: this does not actually install anything; it is simply a useful way to run a script.
     You will see where all the memory in your RAM is being consumed. Pay particular attention to the last few lines (where it will detail /mnt). If you have anything listed under /mnt, that most likely means a docker app is directly referencing a disk or pool that doesn't actually exist (ie: any disks and pools that do exist will not be listed). Other common areas for trouble are /tmp and /var/log.
     This script (while hopefully useful) can take a number of minutes to run, especially if you have bypassed the OS or Unassigned Devices (eg: rclone) and are making your own mount points anywhere in the system. Because it's impossible for this script to know that you are creating your own mount points manually, outside of system control, it will think that this content is in RAM and calculate the space taken accordingly.
     Also, do not be deceived by some of the entries in this list. Many of the folders listed will consume a couple of hundred meg. It's the folders which take up gigabytes that you should be most concerned about.
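     If you just want a rough manual check without the script, something like this (standard tools, adjust the paths to taste) gives a quick overview:
     # how full the RAM-backed filesystems are
     df -h | grep -E 'rootfs|tmpfs'
     # every entry under /mnt should report as its own filesystem; anything reported as rootfs is living in RAM
     df -h /mnt/*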
  4. This command: du / -h -d2 --exclude="/proc" 2>/dev/null will actually do a much better job of showing where everything is stored. It might take a couple of minutes to complete, and needs to be cross-referenced against diagnostics (for what drives you have installed vs where the data actually is), but it will detail enough to show you (and us) the actual usage.
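     If the raw output is hard to scan, a sorted variant of the same command (largest entries last) is:
     # two levels deep, skip /proc, order by size
     du / -h -d2 --exclude="/proc" 2>/dev/null | sort -h | tail -n 40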
  5. I'm also transcoding to RAM, and if Plex isn't deleting the files then that's the issue (I've got no problems with it). It was simply one of a long list of possibilities. What's the output of du -h /tmp? Diagnostics might also help.
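     As a quick sanity check (assuming your transcode path is mapped somewhere under /tmp), something like this shows whether old transcode files are piling up:
     # size of everything left behind in /tmp, largest last
     du -sh /tmp/* 2>/dev/null | sort -h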
  6. Post your diagnostics and the docker run command for plex
  7. Can you post the entire diagnostics file
  8. Your RAM is full. A reboot is your easiest resolution, but you'll ultimately need to figure out why. The file it tried to write was only about 200K. There are various possibilities: you've applied the hack floating around here that moves the docker logs to RAM and something is doing massive logging; a docker container, due to a misconfiguration in its template paths, is writing to RAM instead of to appdata / downloads / the docker image; or you've got a very limited amount of RAM in the server (2GB isn't enough -> functional minimum of 4GB).
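     A rough way to narrow down which of those it is from the console (standard Linux tools, nothing Unraid-specific):
     # overall memory pressure
     free -h
     # the usual RAM-backed suspects
     du -sh /var/log /tmp 2>/dev/null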
  9. What does that mean? Those diagnostics appear to be from after a power-up, not after a WOL. Are you able to use the local keyboard / monitor at all after waking up? Off the top of my head: you've got a couple of VMs running and a couple of devices (sound etc) presumably passed through to them. Are you passing through a video card? Are you using the iGPU for passthrough? I don't see any video being "isolated" from the system to prevent the OS from using it.
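     For reference, this is the sort of thing I'd check to see which driver has claimed each GPU (a card meant for passthrough is normally bound to vfio-pci rather than the host driver):
     # list video devices and the kernel driver currently attached to each
     lspci -nnk | grep -iA3 vga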
  10. If you already have apps installed (Docker tab), then you didn't recreate the docker image. Settings - Docker - disable the service. Delete the image (there's a check box and button). Re-enable the service. Then Apps - Previous Apps, check off what you want and hit install.
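     If you're not sure whether the image really was recreated, its timestamp is a quick tell (the path below is the default - yours may differ, and is shown in Settings - Docker):
     # a freshly recreated image will show a current modification time
     ls -lh /mnt/user/system/docker/docker.img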
  11. So it looks like we cross-posted. You powered down after the diagnostics. I'd still check the cabling when you have a chance, but the panic time is over (but not the beer time -> now you have to celebrate that it's working with a beer)
  12. I don't see any mention of the SSD anywhere in the logs, so my best guess is that the cable(s) are completely unplugged or the drive is completely dead. Assuming that the SSD is plugged into the motherboard, is it recognized in the BIOS?
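     From a console you can also check whether Linux sees the drive at all (if nothing shows up here, it's cabling, power, or a dead drive - same conclusion as the BIOS check):
     # list every block device the kernel can see, with model and size
     lsblk -o NAME,MODEL,SIZE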
  13. If you have a locally attached keyboard and monitor, then from the command prompt run diagnostics - they'll get saved to the flash drive (logs folder). Then reboot with powerdown -r and post them here.
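     In other words, roughly this sequence (both commands are built into Unraid):
     # collect diagnostics to the logs folder on the flash drive
     diagnostics
     # then reboot
     powerdown -r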
  14. Cabling certainly appears to be the prime suspect (the drive isn't even showing a SMART report at all). Since it looks like you sync the backup from the plugin to Backblaze, it's probably not a major issue, but I don't recommend storing a backup of a drive on the drive you're backing up.
  15. dockerHub is having issues right now. Try again later (ie: pretty much anything you search for on dockerHub will now return a 404) https://status.docker.com/pages/history/533c6539221ae15e3f000031
  15. To actually wipe the data involves another step. Assuming the drives are all formatted XFS, to completely wipe them you would (prior to starting the array) switch the file system (click on each disk) from XFS (or auto) to BTRFS, start the array, and then format them. After that's done you would stop the array, switch them back to XFS, and format again. For safety reasons, there is no easy way to format the drives. As an alternative to formatting, you could also run, from the command prompt, rm -rf /mnt/user/shareName for each share you have in the system.
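     As a sketch of that last option (destructive - double-check the share names first; shareName is just a placeholder):
     # list the shares, then remove the contents of the one you intend to wipe
     ls /mnt/user
     rm -rf /mnt/user/shareName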