Everything posted by Squid

  1. I checked my permissions on that file and they are proper:

        1564 -rw-r--r-- 1 root root 1875 Jan 25 15:02 logrotate.conf

     Which implies that something (some script, or a container to which you're passing /etc (or /)) is adjusting the permissions incorrectly. Basically, nothing in that folder should be nobody:users. It should be root:root (commands below).
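     If you want to put the ownership back and check how far it spread, a minimal sketch from the Unraid console (assuming only /etc was touched; adjust the paths to suit):

        # restore the expected ownership / permissions on the file
        chown root:root /etc/logrotate.conf
        chmod 644 /etc/logrotate.conf

        # list anything else under /etc not owned by root, to see the scope of the problem
        find /etc ! -user root -ls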
  2. There is the Dynamix Stop Shell plugin, which you can install; it should catch this stuff and terminate the shells accordingly.
  3. There is also the Dynamix Unlimited Width plugin, which completely removes the max width and uses the entire width of the browser window.
  4. Yeah, that's what I was looking for in the first place and it wasn't there. Usually though, when you run xfs_repair you just remove the -n, and then if it tells you to run with -L, you do it (sketch below).
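     A minimal sketch of that sequence, assuming the array is started in maintenance mode and the filesystem in question is on /dev/md1 (substitute the correct mdX device for your disk):

        # check only - reports problems, changes nothing
        xfs_repair -n /dev/md1

        # actual repair
        xfs_repair /dev/md1

        # only if xfs_repair refuses to run and says the log needs to be zeroed
        xfs_repair -L /dev/md1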
  5. You want to post in the applicable support thread for this (click on the icon and select Support) or hit LSIO up on their Discord (select Discord).
  6. First off, you should update to 6.9.2 - many fixes / improvements over 6.9.1.

     Does this set of diagnostics cover the time period in question when this happened? Nothing is strictly obvious.

     Other thoughts: I'm not a fan (although there's nothing wrong with it) of using BTRFS on a single-drive cache pool if you've got no plans to upgrade it to a multi-device pool. XFS on single-drive pools is anecdotally far more reliable.

     Your appdata share is set to Use Cache: No. Once again, nothing strictly wrong with doing that, but you'll get far faster responsiveness out of any container (Plex) by setting Use Cache: Prefer and running it off of the cache drive (use the Appdata Backup plugin to make periodic backups of that share). You'll have to stop the service under Settings - Docker and then run mover to get everything onto the cache drive once you change that setting (see below).
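     For reference, once Docker is stopped and the share is set to Prefer, mover can also be kicked off from a console session instead of the Move Now button; a minimal sketch (on stock Unraid the script lives at /usr/local/sbin/mover):

        # invoke mover manually and watch its progress in the syslog
        mover
        tail -f /var/log/syslog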
  7. The config files are pretty much everything you need to set things back up again (ie: if nothing has changed since your last backup, then simply copying the config from backup to the new drive (and subsequently transferring the key) will have things back the way they were in seconds). Disk_assignments.txt is a file created by the Appdata Backup plugin which identifies in human terms which disk was what (parity, data 1, etc.); if you need that information, it may be hard to discern otherwise at the time of generating a new flash drive. Also, the contents of the flash are not zipped at all, so not sure why you would have zipped it. (And don't forget to run make_bootable.bat as administrator to configure the flash as a bootable device.)
  8. The initial burst you're seeing at line speed is when the writes are being cached in RAM. Once that fills, the writes have to happen directly to the drive.

     Unraid has 2 write modes:

     Read/modify/write: This is the default, and allows drives not involved in the write to spin down, so that only the parity drive and the disk in question are written to. It is also the slowest, since it has to read the contents of parity and the data disk, recalculate what the contents will be, wait for the drives to spin back around to the applicable sectors, and then write the information. IE: it's 2-3x slower than simply writing the info to the drive.

     Reconstruct write: This basically writes the info directly to the data drive and the parity drive simultaneously, but has to read from every other drive at the same time in order to get the proper parity information. It will be the fastest (close to the write speed (or read speed) of the slowest drive), but the caveat is that every drive has to be spun up for any write to the array. (See below for switching modes.)

     Cache drives solve the problem because the writes can be cached to them, and they're not involved in the parity system, so writes will basically proceed at full line speed and then get moved to the parity-protected array, usually during off hours.

     The 40MB/s is at the low end of average, but not an unacceptable number.
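     For what it's worth, the mode is switched under Settings - Disk Settings (Tunable (md_write_method)), or from the console; a minimal sketch, assuming stock 6.9.x where mdcmd lives in /usr/local/sbin (the GUI setting takes over again on the next array start):

        # enable reconstruct write ("turbo write")
        mdcmd set md_write_method 1

        # back to the default read/modify/write
        mdcmd set md_write_method 0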
  9. Those diagnostics are from 6.9.2. To diagnose it, we would need you to upgrade again, wait for it to happen, and post a new set.
  10. You need to manually use an applicable tar command (rough sketch below). This long-standing feature request is bubbling upwards.
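     A rough, generic sketch only, since the exact source and destination depend on what you're archiving (every path and name here is purely illustrative):

        # archive a folder into a dated .tar.gz on the array
        tar -czf /mnt/user/backups/example-$(date +%Y%m%d).tar.gz -C /mnt/user/example .

        # restore it later into the same spot
        tar -xzf /mnt/user/backups/example-20220101.tar.gz -C /mnt/user/example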
  11. A feedback system of some sort has always been on the whiteboard for implementation. It is, however, a very complicated undertaking that requires a completely new infrastructure, security system, etc. If / when the feed becomes unworkable due to size, then those changes have to come into effect to continue this endeavor, but that is a ways off. (IE: I'd have to start hosting a publicly accessible SQL server and implement controls and security to prevent anyone from gaining control over the entire system, and by inference everyone's servers, by manipulating the repositories which are offered up to install.) It's coming, but it's not on any current priority list.
  12. Taking a guess, it looks like sdm dropped offline and became sdp. However, neither is showing up in the SMART reports, so it appears that it dropped for good. Reseating the cabling is your first step.
  13. You need to pass each of them another path to your media. EG: /media mapped to /mnt/user/Movies. Then, within Radarr, you would tell it that your media exists at /media. Additionally, on your download client (sab?) you should have a path to the downloads in the mappings. You need to add that exact same path (container and host) to Sonarr / Radarr (example below). See also
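     A minimal sketch of the idea in docker run terms (in the Unraid GUI these are simply the container's path mappings; the share and image names here are only examples):

        # Radarr sees the library at /media and the downloads at /downloads
        docker run -d --name=radarr \
          -v /mnt/user/appdata/radarr:/config \
          -v /mnt/user/Movies:/media \
          -v /mnt/user/downloads:/downloads \
          linuxserver/radarr

        # the download client must use the exact same /downloads mapping (container and host)
        docker run -d --name=sabnzbd \
          -v /mnt/user/appdata/sabnzbd:/config \
          -v /mnt/user/downloads:/downloads \
          linuxserver/sabnzbd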
  14. @JorgeB would be the man to ask, but it *appears* to me that your nvme is locked down in a security protection, and this happens right away during the detection phase of boot up:

        Feb 3 16:48:19 GRID-02 kernel: blk_update_request: protection error, dev nvme0n1, sector 264 op 0x0:(READ) flags 0x80700 phys_seg 27 prio class 0
        Feb 3 16:48:19 GRID-02 kernel: blk_update_request: protection error, dev nvme0n1, sector 520 op 0x0:(READ) flags 0x80700 phys_seg 58 prio class 0
        Feb 3 16:48:19 GRID-02 kernel: blk_update_request: protection error, dev nvme0n1, sector 2312 op 0x0:(READ) flags 0x80700 phys_seg 29 prio class 0
        Feb 3 16:48:19 GRID-02 kernel: blk_update_request: protection error, dev nvme0n1, sector 2568 op 0x0:(READ) flags 0x80700 phys_seg 56 prio class 0
        Feb 3 16:48:19 GRID-02 kernel: blk_update_request: protection error, dev nvme0n1, sector 2000408064 op 0x0:(READ) flags 0x80700 phys_seg 48 prio class 0
        Feb 3 16:48:19 GRID-02 kernel: blk_update_request: protection error, dev nvme0n1, sector 2000408576 op 0x0:(READ) flags 0x80700 phys_seg 19 prio class 0

        Optional Admin Commands (0x0017): Security Format Frmw_DL Self_Test
  15. Follow the link that @trurl posted. It'll tell us most of what you need to know
  16. To quickly deprecate any given template in your repository, there are two choices: either add <Deprecated>true</Deprecated> into the xml, or you can now also simply put those templates into a subdirectory named "deprecated" in your repository, and the feed will automatically apply that tag to them (sketch below).
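     A minimal sketch of the second option, assuming a template file named MyApp.xml sitting at the root of your template repository (the file name is only an example):

        # move the template into a "deprecated" subdirectory; the feed tags it automatically
        mkdir -p deprecated
        git mv MyApp.xml deprecated/
        git commit -m "Deprecate MyApp"
        git push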
  17. You should post your entire diagnostics
  18. If SWAG is listening on 80 then that's all good. I was worried you had forwarded the ports directly to the management ports instead of to SWAG.
  19. "There is" as a reply to FCP only gathers that warning from the logs. No where else.
  20. If you're forwarding the ports, you want them forwarded to SWAG, not to the server itself, so that there's at least another layer in between your server and the WAN. If you forwarded the ports directly to the server's 80/443, then yes, you did basically open it up, and only a password is standing between you and bad actors.
  21. Settings - Management Settings: set Use SSL to either Auto or Yes and get a certificate via the button at the bottom. Browsers automatically append :80, as it's the default if you don't specify a port. If you've changed it, append the port (example below).
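     For example (hypothetical address and port), if the HTTP port had been changed to 8080 you'd browse to http://192.168.1.10:8080 instead of just http://192.168.1.10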
  22. There is. FCP gathers that information only from the syslog.
  23. What is the source file? On the array? Reads are always limited by the speed of the device you're actually reading from. Writes are cached in memory first and then written (and tend to go to an SSD / nvme, which are much faster than hard drives).
  24. Sorry, I missed your reply. The hezb process is coming from your unifi docker app:

        root 11227 0.0 0.0 112036 6364 ? Sl 20:00 0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 759460c22cf3d809b0564a2a0c1cc5490b1d149cbe2763eb4d3ccf90750d7a22 -address /var/run/docker/containerd/containerd.sock
        root 11252 0.0 0.0 204 4 ? Ss 20:00 0:00 \_ s6-svscan -t0 /var/run/s6/services
        nobody 25600 197 14.7 2821392 2404568 ? Ssl 20:09 24:56 \_ hezb -o 142.93.8.2:80 -u 759460c22cf3 -k -B
        nobody 22263 9.6 0.0 4632 1748 ? S 20:08 1:13 \_ /bin/sh ./6beb05a
        root 11352 0.0 0.0 204 4 ? S 20:00 0:00 \_ s6-supervise s6-fdholderd
        root 11686 0.0 0.0 204 4 ? S 20:00 0:00 \_ s6-supervise unifi
        nobody 11689 4.4 4.2 4698128 685620 ? Ssl 20:00 0:55 \_ java -Xmx1024M -jar /usr/lib/unifi/lib/ace.jar start
        nobody 12542 0.5 0.6 959408 108160 ? Sl 20:00 0:06 \_ bin/mongod --dbpath /usr/lib/unifi/data/db --port 27117 --unixSocketPrefix /usr/lib/unifi/run --logRotate reopen --logappend --logpath /usr/lib/unifi/logs/mongod.log --pidfilepath /usr/lib/unifi/run/mongod.pid --bind_ip 127.0.0.1

     I would suggest you hit up linuxserver and @Roxedus on LSIO's Discord (hit the icon and then hit support or discord), and they can properly help you diagnose whether this should be here (the IP address implies that it shouldn't). Old versions of the unifi app were susceptible to Log4j, but whether that's the case here, they would know.
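     If you want to double-check which container a suspicious host PID belongs to, a small sketch (replace "unifi" with whatever you named the container):

        # map each running container to the host PID of its top-level process
        docker ps -q | xargs docker inspect --format '{{.State.Pid}} {{.Name}}'

        # then list what's actually running inside that container
        docker top unifi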