Squid

Community Developer
Everything posted by Squid

  1. No tag being present is the same as having ":latest"
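To illustrate the tag-defaulting behavior described above, here is a minimal Python sketch of how an image reference without a tag resolves to ":latest" (the function name and examples are mine, not from the thread; the edge case guards against a registry host:port being mistaken for a tag):

```python
def normalize_image_ref(ref: str) -> str:
    """Append ":latest" when no tag is given, mirroring Docker's default."""
    # Split on the last colon; a colon inside a registry host:port
    # (e.g. "myregistry:5000/app") is not a tag separator.
    name, sep, tag = ref.rpartition(":")
    if sep and "/" not in tag:
        return ref  # already has an explicit tag
    return ref + ":latest"

print(normalize_image_ref("nginx"))           # nginx:latest
print(normalize_image_ref("nginx:1.25"))      # nginx:1.25
print(normalize_image_ref("myreg:5000/app"))  # myreg:5000/app:latest
```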
  2. Here's a simple test: when the "flash drive offline or corrupted" banner appears, does it disappear if you navigate to another page (e.g. Settings)? If the banner does disappear, then what you've got is not a flash problem, and it's safe to ignore. Rather, the OS *thinks* the flash is corrupted because another process somewhere is changing a file on the flash at the exact same time as the OS is checking whether it's corrupted (i.e. a race condition that could be accounted for if we could replicate it).
  3. Clicking the "X" hides it until the next upgrade. Why wouldn't you want the version of UD that has the bugs fixed?
  4. If it doesn't boot at all, then it's most likely a flash problem that didn't manifest itself until the writes associated with the upgrade happened. If it boots and then there are issues, you'll have to be more specific about what problem you saw.
  5. It's only in GUI boot mode, and it's only there for management of the server (for those who run the OS with a VM as their daily driver and need a way to access the server with the VM stopped). It is not recommended to use it for general web surfing.
  6. IO Wait isn't an "issue" in the way that I *think* you're thinking it is. It's a metric reflecting that one or more processes are waiting for I/O from the drives / SSDs to complete. This could happen for multiple reasons: the drives simply can't keep up with the requests, too many simultaneous requests, failing drives, drives continually dropping off and reconnecting, exceeding what the PCIe bus can handle by a fair margin, etc. Unfortunately there is no one-size-fits-all solution to what boils down to a rather generic term. As netdata words its comment on the metric: "Keep an eye on iowait (0.17%). If it is constantly high, your disks are a bottleneck and they slow your system down."

     All that being said, from the screenshot you've included, the process Plex Transcoder is writing to disk3. Assuming you have a parity drive, that's going to be a major bottleneck, as writes to a parity-protected array are inherently slow, and the storage of the temp files for transcoding belongs on an SSD.

     Or, put another way: nothing is stopping you from using a DVD-RAM drive (I'm one of the few people who owned one) as a cache drive. Your I/O wait is going to be sky-high, but nothing is actually wrong; it's just a slow device.
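For context on where tools like netdata get that number: the kernel exposes cumulative per-CPU time counters in /proc/stat, and iowait is the fifth value on a "cpu" line. A minimal Python sketch (the sample line below is invented for illustration, not taken from this thread):

```python
def iowait_percent(stat_line: str) -> float:
    """Percentage of CPU time spent in iowait, from a /proc/stat 'cpu' line.

    Field order after the label: user nice system idle iowait irq softirq ...
    The counters are cumulative jiffies since boot, so a real monitor would
    sample twice and diff; this just computes the lifetime average.
    """
    fields = [int(v) for v in stat_line.split()[1:]]
    return 100.0 * fields[4] / sum(fields)

# Hypothetical sample: 500 of 10000 jiffies spent waiting on I/O.
sample = "cpu  1000 0 500 8000 500 0 0 0 0 0"
print(f"{iowait_percent(sample):.1f}%")  # 5.0%
```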
  7. This is the root of all your problems:

     Apr 8 18:23:04 JEFFUNRAID emhttpd: shcmd (187): mkdir -p /mnt/cache
     Apr 8 18:23:05 JEFFUNRAID emhttpd: shcmd (188): mount -t btrfs -o noatime,space_cache=v2 /dev/nvme1n1p1 /mnt/cache
     Apr 8 18:23:05 JEFFUNRAID kernel: BTRFS info (device nvme1n1p1): using free space tree
     Apr 8 18:23:05 JEFFUNRAID kernel: BTRFS info (device nvme1n1p1): has skinny extents
     Apr 8 18:23:05 JEFFUNRAID kernel: BTRFS info (device nvme1n1p1): bdev /dev/nvme1n1p1 errs: wr 0, rd 0, flush 0, corrupt 3760, gen 0
     Apr 8 18:23:05 JEFFUNRAID kernel: BTRFS info (device nvme1n1p1): enabling ssd optimizations
     Apr 8 18:23:05 JEFFUNRAID kernel: BTRFS info (device nvme1n1p1): start tree-log replay
     Apr 8 18:23:05 JEFFUNRAID emhttpd: shcmd (189): mkdir -p /mnt/plexvms
     Apr 8 18:23:05 JEFFUNRAID emhttpd: shcmd (190): mount -t btrfs -o noatime,space_cache=v2 /dev/nvme0n1p1 /mnt/plexvms
     Apr 8 18:23:05 JEFFUNRAID kernel: BTRFS info (device nvme0n1p1): using free space tree
     Apr 8 18:23:05 JEFFUNRAID kernel: BTRFS info (device nvme0n1p1): has skinny extents
     Apr 8 18:23:05 JEFFUNRAID kernel: BTRFS info (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 195, gen 0
     Apr 8 18:23:06 JEFFUNRAID kernel: BTRFS info (device nvme0n1p1): enabling ssd optimizations

     Each of the cache pools has corruption. In the case of BTRFS, this can usually be traced back to memory issues. In your case (though this is not necessarily your problem), you are running a significant overclock on your memory by running it at XMP speeds. G-Skill (along with most manufacturers) advertises and sells their memory kits as running at XMP speeds (3600), but they are actually selling you 2133 memory. Disable the XMP profile in the BIOS and instead run the memory at SPD speeds. All overclocks introduce instability.

     Completely up to you, but I'm also not a fan of using BTRFS for cache pools if you have no intention of making them multi-device pools (you have 2 pools, each with a single device). XFS is a far more forgiving filesystem and causes fewer problems if the system is less than 100% stable. This will require you to reformat those pools, however. @JorgeB will be able to help you recover the information on the pools properly.
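The tell-tale in those logs is the "bdev ... errs:" counters: nvme1n1p1 shows corrupt 3760 and nvme0n1p1 shows corrupt 195. A small Python sketch (helper name and structure are mine) of pulling those counters out of a syslog excerpt:

```python
import re

def btrfs_error_counts(log: str) -> dict:
    """Extract per-device error counters from BTRFS 'bdev ... errs:' log lines."""
    pat = re.compile(
        r"bdev (\S+) errs: wr (\d+), rd (\d+), flush (\d+), corrupt (\d+), gen (\d+)"
    )
    out = {}
    for dev, wr, rd, fl, cor, gen in pat.findall(log):
        out[dev] = {"wr": int(wr), "rd": int(rd), "flush": int(fl),
                    "corrupt": int(cor), "gen": int(gen)}
    return out

log = ("kernel: BTRFS info (device nvme1n1p1): bdev /dev/nvme1n1p1 errs: "
       "wr 0, rd 0, flush 0, corrupt 3760, gen 0")
print(btrfs_error_counts(log)["/dev/nvme1n1p1"]["corrupt"])  # 3760
```

Any non-zero corrupt count means the filesystem has seen checksum failures at some point; the counters persist until reset.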
  8. That sounds like your download client is sending sickchill the path /Media instead of /Downloads. Make sure that the paths on each container match each other.
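The reason matching paths matter: when one container hands another a path, that path is only meaningful if both containers map the same host folder to the same in-container location. A minimal sketch of the translation involved (the mapping values are hypothetical examples, not this user's actual config):

```python
def to_container_path(host_path: str, mappings: dict) -> str:
    """Translate a host path into a container path given volume mappings.

    mappings: {host_prefix: container_prefix}, e.g. the hypothetical
    {"/mnt/user/downloads": "/Downloads"}. If two containers use different
    container_prefix values for the same host folder, paths exchanged
    between them will not resolve.
    """
    for host_prefix, container_prefix in mappings.items():
        if host_path.startswith(host_prefix):
            return container_prefix + host_path[len(host_prefix):]
    raise ValueError(f"{host_path} is not inside any mapped volume")

# Hypothetical example mapping:
print(to_container_path("/mnt/user/downloads/show.mkv",
                        {"/mnt/user/downloads": "/Downloads"}))
# /Downloads/show.mkv
```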
  9. What size drive? Is it formatted as exFAT or FAT32? Drives formatted exFAT will have similar results, but that format tends to be used only on drives >32G.
  10. Are you auto updating the containers via the plugin? Are you "tagging" the containers with specific versions when installing?
  11. If I recall correctly, vsFTP had some challenges in the past. Most striking was that historically it always started up, even if it was set to be disabled. Since 99% of users don't use FTP at all, the decision was probably to have it disabled by default and force you to enable it. TBH, if you're actively using FTP, you're going to be better off installing ProFTPd instead of using the built-in.
  12. Nothing to do with 6.10. This is a change made in the container a while back.
  13. Because you're using an XFS image. Container Size on the docker tab will get you an answer that's close to the utilization.
  14. Your CPU is reporting that it only has a single core on 6.10:

      Vendor ID:           GenuineIntel
      Model name:          Intel(R) Core(TM) i5-8600K CPU @ 3.60GHz
      CPU family:          6
      Model:               158
      Thread(s) per core:  1
      Core(s) per socket:  1

      You're getting a ton of machine check errors, which is probably accounting for this. I'd start by looking for BIOS updates.
  15. Known issue. It's been present in the OS for quite a while now, but YMMV.
  16. That is the docker folders plugin, which has some issues. If you post in its support thread with exact instructions on how to replicate, I can do my best to try and keep the plugin alive and not mark it as being incompatible with 6.10 (since the author looks to be MIA).
  17. What version of the OS are you using? Try 6.10.0. Also, if the NIC isn't connected to a 2.5G switch, you're not going to see it listed as 2.5. Those numbers are the connected speed.
  18. You're not displaying in tabbed mode. Have you scrolled down?
  19. Have you tried a different browser? Tables (which are what the docker page uses) aren't super efficient depending upon the browser choice.
  20. First thing is to check the date & time on your server