
Squid

Community Developer
  • Posts

    28,769
  • Joined

  • Last visited

  • Days Won

    314

Everything posted by Squid

  1. That's what it looks like. I don't have any other suggestions, though. I test my server with this stuff about every 6 months on average.
  2. Stop the VM service and the Docker service in Settings (VM Settings and Docker Settings). Then:
     mv /mnt/cache/system/libvirt.img /mnt/cache/system/libvirt.img.new
     mv /mnt/cache/system/docker.img /mnt/cache/system/docker.img.new
     Restart the VM service. Do the VMs show back up again? If they do (and in theory they should), then:
     rm /mnt/cache/system/libvirt.img.new
     rm /mnt/cache/system/docker.img.new
     followed by stopping each of the services again. Then change the system share's Use cache setting to Prefer, go to the Main tab and run Mover. After it's done, restart each of the services. Other comments: the usual Use cache setting on the appdata share is Prefer, for far better performance (and make a backup via the Appdata Backup plugin).
  3. Move it either via the command line (/mnt/diskX to /mnt/diskY) or via the unbalance plugin
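     If you go the command-line route, something like the following would do it (just a sketch; "MyShare" and the X / Y disk numbers are placeholders for your actual share and source / destination disks - verify the copy before deleting anything):
     rsync -av /mnt/diskX/MyShare/ /mnt/diskY/MyShare/
     rm -r /mnt/diskX/MyShare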
  4. It'll all stay the same. Not quite sure what's happening here, but that's my first option for a fix.
  5. The docker system is having a panic (not sure why). What I would try doing is:
     Run a memtest, if only to rule it out as a cause
     Settings - Docker - Disable the service
     Reboot (this may wind up being unclean)
     Cancel the parity check (shouldn't matter, because the reason for the parity check will be the inability to unmount the docker image, and nothing to do with the array itself)
     Settings - Docker - Delete the image and then re-enable the service (or alternatively just change the name of the image)
     Apps - Previous Apps - Check off everything you want.
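     If you'd rather rename the image than delete it through the GUI, something along these lines does the same job (a sketch, assuming the image is at the default /mnt/user/system/docker/docker.img and the Docker service is already stopped; adjust the path to wherever yours actually lives):
     mv /mnt/user/system/docker/docker.img /mnt/user/system/docker/docker.img.bad
     Re-enabling the Docker service then creates a fresh image, and Previous Apps reinstalls your containers against it.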
  6. Overhead? According to the screenshot, you're using ~15G of what is presumably a 20G docker image, with nothing out of the ordinary - nothing is "downloading" to the image, the logs are normal, etc. There are some auxiliary files stored in the image (notably icons) along with the last backup log for CA Appdata Backup/Restore (this can potentially be huge depending upon the number of files within appdata). In other words, you're worrying too much about working out where the 2G is coming from. The 75% warning (presumably you've been told about that via FCP) is hardcoded at that percentage, but I see nothing wrong. You can always increase the size of the image if you want.
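     If you do want to see where the space inside the image is going, Docker itself can break it down from the console (standard Docker commands, nothing Unraid-specific):
     docker system df        # totals for images, containers, volumes and build cache
     docker system df -v     # the same, broken down per image / per container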
  7. What I'd suggest is to install Netdata and keep it running. Then you'd be able to slowly work out what is happening at the point the slowdowns happen.
  8. If the "spinner" doesn't disappear after exactly 120 seconds, it never will. Post your diagnostics.
  9. There's no link being detected on any of your NICs. Are you sure the cable is plugged in (both ends)? Any lights on the NIC or the switch?
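     You can also check from the console; for example (eth0 here is just the usual first interface name - substitute whatever your NIC is actually called):
     ip link show                            # lists the NICs; LOWER_UP in the flags means a link is present
     ethtool eth0 | grep -i 'link detected'  # "Link detected: yes" means the cable / switch side is fine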
  10. Nothing obviously wrong... Have you increased the number of connections in NzbGet? I can easily get my 100MB/s no problems w/20 connections. Pretty much you want to always use the maximum number of connections your UseNet provider allows. (But you may get the odd "Maximum Connections Exceeded" error when the app switches from one file to another, which isn't a big deal)
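      The connection count is set per news-server in NzbGet (Settings - News-Servers in the web UI); in nzbget.conf it looks roughly like this (values here are placeholders - use your provider's details and allowed connection count):
      Server1.Host=news.yourprovider.example
      Server1.Port=563
      Server1.Connections=20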
  11. Jan 11 21:22:21 NAS kernel: BTRFS: error (device sdk1) in cleanup_transaction:1942: errno=-28 No space left
      Jan 11 21:22:21 NAS kernel: BTRFS info (device sdk1): forced readonly
      But the more interesting thing is that you don't actually have a cache pool named "cache". Rather, you have a cache pool named cache_1tb_ssd - probably a semi-recent change / addition - but at the time of doing that, you left your various shares all referencing "cache" in the cache pool selection. The share exists on "cache" and on various disks, but since there is no pool named cache, the files all wind up in RAM. (Also, you'll have to manually adjust the various docker templates you've got (Show more settings) and see if anything there is directly referencing /mnt/cache/...) After you've got all that fixed up, reboot, and we'll take it from there (with a new set of diagnostics).
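      To find templates with hard-coded cache paths, you can also grep them from the console (a sketch - /boot/config/plugins/dockerMan/templates-user is where Unraid normally keeps the saved user templates, but check the path on your system):
      grep -l '/mnt/cache' /boot/config/plugins/dockerMan/templates-user/*.xml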
  12. Then try again when the speed is down to the 12 MB/s
  13. Why would you want to make a backup of the flash drive to the same flash drive???? The plugin is open ended and allows you to select any destination you want for any of the options. It doesn't really help for disaster recovery to store the backup on the same device you're backing up. How would you change the naming of the various options?
  14. The options the plugin uses on a restore are -xvaf.
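      For reference, that corresponds to a tar invocation along these lines (a sketch - the archive name and destination here are placeholders, not the plugin's actual paths):
      tar -xvaf backup.tar.gz -C /path/to/restore/destination
      -x extract, -v verbose, -a pick the compression from the archive suffix, -f the archive file to read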
  15. Stop and start the array and see if there's any difference. Known issue with RC2 where it ignores use cache changes without a stop and start
  16. Probably best to include more detail and talk to @SpencerJ or email Limetech https://unraid.net/contact
  17. Appears to be a poor connection to the drive... Reseat the cabling... Also, if this drive is permanently attached (it doesn't appear to be USB), then you're better off creating a new cache pool (Security Drive) and letting Unraid manage it instead of Unassigned Devices
  18. Are you still using custom IPs? Upgrade to 6.10-rc2+ and switch the type of network to instead be ipvlan (from macvlan)
  19. Jan 11 15:41:35 UNRAID kernel: BTRFS critical (device dm-6): corrupt leaf: root=7 block=411762819072 slot=81, unexpected item end, have 3693198191 expect 8247
      Jan 11 15:41:35 UNRAID kernel: BTRFS info (device dm-6): leaf 411762819072 gen 9952518 total ptrs 112 free space 2227 owner 7
      Start by running a memtest, as your cache pool has detected corruption, which on the trickle-down is giving docker a heart attack.
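      Once the memtest comes back clean, you can also ask btrfs what it has recorded against the pool (a sketch, assuming the pool is mounted at /mnt/cache - adjust if yours is named differently):
      btrfs device stats /mnt/cache      # per-device error counters
      btrfs scrub start -B /mnt/cache    # re-reads everything and verifies checksums; -B waits for completion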
  20. Jan 11 07:36:21 unraid root: Creating new image file: /mnt/disks/Samsung_SSD_960_EVO_250GB_S3ESNX0JB32921B/system/docker/docker.img size: 60G
      Your Settings - Docker (and for VMs, Settings - VM Manager) are referencing the 960 EVO, which isn't mounted via Unassigned Devices. BTW, for permanently attached devices, you're better off creating a new cache pool and letting Unraid manage it instead of UD.
  21. The MCE, I believe, is nothing to worry about and is thrown on occasion by Ryzen CPUs (i.e., a bug in the CPU?). The error in CA would be that /tmp (or all of your RAM) was completely filled...
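      A quick way to check, since /tmp on Unraid lives in RAM:
      df -h /tmp    # size / used / available for the filesystem backing /tmp
      free -h       # overall RAM usage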