coreylane

  1. Hi Steve, you can fix this by adding a new environment variable to the Docker container configuration in Unraid (a minimal test config you can point it at is sketched below):

     Type: Variable
     Key: VECTOR_CONFIG
     Value: /etc/vector/vector.toml
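     If you want to confirm the variable is being picked up, you can point it at a minimal config first. This is only a sketch, using Vector's built-in demo_logs source and console sink; the component names test_logs and out are examples, not anything from the template:

       # Minimal /etc/vector/vector.toml to verify VECTOR_CONFIG is honored
       [sources.test_logs]
       type = "demo_logs"    # built-in generator of sample log events
       format = "syslog"     # emit syslog-style sample lines

       [sinks.out]
       type = "console"      # write events to the container's stdout
       inputs = ["test_logs"]
       encoding.codec = "text"

     If sample syslog lines show up in the container log, the variable is working and you can swap in your real sources and sinks.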
  2. Thank you for reporting this. I'm having the same issue. This behavior changed in Vector 0.34, which moved the default config location. I will work on a fix to the Unraid template now. https://vector.dev/highlights/2023-11-07-0-34-0-upgrade-guide/#default-config-location-change
  3. Overview: Support for Vector is now available in the coreylane Community Applications repo.

     Application: Vector - GitHub Project
     Docker Hub: https://hub.docker.com/r/timberio/vector/
     Template GitHub: https://github.com/coreylane/unraid-ca-templates

     How do I ingest/collect Unraid logs using Vector?

     Vector can be used with logs of any kind, but my original intention was to ingest Unraid's own logs. The configuration guides below cover how to do that.

     Unraid Docker logs
     To ingest Unraid's Docker logs using Vector, the container needs read-only access to the Unix socket Docker listens on. The default in Unraid is /var/run/docker.sock. Add that path mapping to the Vector container.

     Unraid syslog
     To ingest Unraid's system logs using Vector, the container needs read-only access to the syslog file. The default in Unraid is /var/log/syslog. Add that path mapping to the Vector container.

     Vector configuration
     Once the container paths are set up, the final step is to provide a vector.toml configuration defining the desired log sources and the destinations (sinks) the logs should be shipped to. Below is a very simple example that ingests Docker logs and Unraid syslog and sends them to the cloud-based log service Logtail.

     Example /mnt/user/appdata/vector.toml:

       [sources.docker_logs]
       type = "docker_logs"

       [sources.unraid_logs]
       type = "file"
       include = ["/var/log/syslog"]

       [sinks.logtail_http]
       type = "http"
       method = "post"
       uri = "https://in.logtail.com/"
       encoding.codec = "json"
       auth.strategy = "bearer"
       auth.token = "XXXXXXXXXXXXXXXXXXXXXX"
       inputs = ["docker_logs", "unraid_logs"]

     New Relic configuration
     The below vector.toml snippet will ship your logs to New Relic:

       [sinks.new_relic]
       type = "new_relic"
       inputs = ["docker_logs", "unraid_logs"]
       account_id = "123456"
       api = "logs"
       license_key = "XXXXXXXXXXXXX"
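     If you'd rather verify the pipeline locally before pointing it at a cloud service, Vector's console sink can be substituted for either sink above. A minimal sketch; the sink name debug_console is just an example:

       # Print ingested events to the container's stdout instead of shipping them
       [sinks.debug_console]
       type = "console"
       inputs = ["docker_logs", "unraid_logs"]
       encoding.codec = "json"   # one JSON object per event line

     You can then watch the events arrive through the container's log view in Unraid before switching back to the Logtail or New Relic sink.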
  4. Are your Crucial SSDs still working ok after the firmware update?
  5. Great idea about using two different makes/models of drive for the RAID1 cache pool. And you are probably right about me catastrophizing; we need more data points. Crucial's release notes are very opaque and give no details about what the actual "edge case" is, so customers have no idea whether they are potentially affected. Their firmware update process is also a complete joke, and their support all around seems lacking. 🤷‍♂️
  6. Read the errors in the logs I posted: this isn't simply an annoying SMART attribute discrepancy. The BTRFS filesystem will become completely read-only, the drive will (temporarily) stop being detected in the BIOS, and you will potentially lose data. The firmware release notes from Crucial admit this problem exists. They claim it doesn't affect Windows, which is why I specifically said "Unraid system" in my original post.
  7. Posting this here in case anyone else runs into these issues; hopefully it will save some time.

     TL;DR: Avoid using Crucial SSDs in your Unraid system. If you are using them, back up all the data immediately, consider replacing them, or at the very least check your firmware version and update to the latest (M3CR046) ASAP.

     I had a cache pool using 2x Crucial MX500 1TB SSDs. They worked fine for about a year, but this past week I suddenly started getting all kinds of BTRFS errors and other storage-related write error messages in the syslog (examples below). The only thing that ended up resolving this and stabilizing my cache pool was updating the SSDs' firmware to the latest available version, M3CR046 at the time of this post. The update is not available for direct download through the Crucial support site; you must use the Crucial Storage Executive software, which only runs on Windows. The firmware update also only works if you are actively writing to the disk (lol)... so this required mounting BTRFS in Windows using WinBtrfs and writing to the filesystem while executing the firmware update in the Crucial software. I will never buy Crucial SSDs again, and am looking to replace these with a more reliable brand.

       Feb 7 01:20:52 darktower kernel: I/O error, dev loop2, sector 887200 op 0x0:(READ) flags 0x1000 phys_seg 4 prio class 0
       Feb 7 01:21:10 darktower kernel: BTRFS error (device loop2: state EA): bdev /dev/loop2 errs: wr 13, rd 1644, flush 0, corrupt 0, gen 0
       Feb 7 01:21:10 darktower kernel: BTRFS warning (device sdc1: state EA): direct IO failed ino 109014 rw 0,0 sector 0x578abf30 len 0 err no 10
       Feb 7 01:21:10 darktower kernel: BTRFS warning (device sdc1: state EA): direct IO failed ino 109014 rw 0,0 sector 0x578abf38 len 0 err no 10
       Feb 7 04:40:04 darktower root: Fix Common Problems: Error: Unable to write to Docker Image
       Feb 7 08:39:38 darktower kernel: I/O error, dev sdc, sector 212606944 op 0x0:(READ) flags 0x1000 phys_seg 4 prio class 0
       Feb 7 08:39:38 darktower kernel: I/O error, dev loop3, sector 78080 op 0x0:(READ) flags 0x1000 phys_seg 4 prio class 0
  8. Have you investigated these messages?

       shfs: share cache full
  9. I created a GitHub issue regarding the missing libffi dependency: https://github.com/UnRAIDES/unRAID-NerdTools/issues/37
  10. What is the process to convert btrfs cache array disks to xfs?
  11. Gitlab is a complex and busy app for sure; mine is constantly writing logs as well. How long has your Gitlab instance been running, a week? Check how much space the log files in /mnt/cache/appdata/gitlab-ce/log are using. Are they being rotated?
  12. I ran "xfs_repair -v" on md1 (and all the other array disks), but these messages are still appearing in the logs. Is this the right command? The "fsck.xfs" command just redirects me to the man page for "xfs_repair".

       root@darktower:~# grep EXPERIMENTAL /var/log/syslog
       Dec 14 16:22:28 darktower kernel: XFS (md1): EXPERIMENTAL online shrink feature in use. Use at your own risk!
       root@darktower:~# grep fail /var/log/syslog
       Dec 14 16:22:28 darktower root: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device
       Dec 14 16:22:29 darktower root: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device
       Dec 14 16:22:29 darktower root: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device
       Dec 14 16:22:29 darktower root: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device
  13. I recently got a small 10" display to attach directly to my Unraid server, for the rare case something goes wrong and I need to log in locally; I want to just keep 'bashtop' running on it as a dashboard. I'm having issues with Unraid OS and HDMI on this new display: the BIOS logo and boot messages display perfectly fine, but once the system boots into Unraid OS, the HDMI signal dies and the display goes to sleep. Using a VGA cable seems to work, but that's not ideal for me. Where are display settings configured in Unraid? Maybe an unsupported resolution is set wrong in a file somewhere? Any tips or suggestions appreciated. Thanks!
  14. Is this normal behavior? Diagnostics attached.

       root@darktower:~# grep XFS_IOC_FSGROWFSDATA /var/log/syslog
       Dec 14 01:11:32 darktower root: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device
       Dec 14 01:11:33 darktower root: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device
       Dec 14 01:11:33 darktower root: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device
       Dec 14 01:11:34 darktower root: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: No space left on device

     darktower-diagnostics-20211214-0142.zip
  15. It would be nice if this plugin could add/remove entries in .ssh/authorized_keys