evakq8r

Members
  • Posts: 52
  • Joined
  • Last visited


evakq8r's Achievements

Rookie (2/14) • 15 Reputation • 2 Community Answers

  1. Happy to. Send some details across and I'll gift you a copy of the game.
  2. Any chance of creating a template for Tabletop Simulator? https://store.steampowered.com/app/286160/Tabletop_Simulator/ The only Docker container I can find is one from ~3 years ago; however, the Steam CLI on it appears to be broken, as it does not accept working Steam credentials. https://github.com/Benjamin-Dobell/docker-tts
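     (A quick way to check whether the bundled SteamCMD itself is rejecting the credentials, assuming shell access to whatever TTS container is in use; the container name and the steamcmd command path are placeholders:)

        # open a shell in the container and try an interactive login;
        # steamcmd prompts for the password and any Steam Guard code if it accepts the account name
        docker exec -it tabletop-simulator /bin/bash
        steamcmd +login your_steam_username +quit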
  3. Everything is set up again and the system is much more responsive. The Docker image is now 75GB, 68% used. I'll leave it as is for now, but all containers are working as expected. Thanks for your help @Squid.
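     (For anyone following along, a simple way to keep an eye on what is consuming the Docker image over time; docker system df is a standard Docker command:)

        # summary of space used by images, containers and volumes inside docker.img
        docker system df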
  4. This makes sense; however, it was actually disabled because a parity operation was in place (as advised on the GUI). I've stopped the Docker and VM services, resized the image to 30GB, deleted the existing image, and am now going through the (very painful) process of setting everything up again. Fewer than 20 Docker containers in, the 30GB image is already 70% full. The containers are all starting and none are in a restart loop, so I'm not sure why the image is already blown out. I did have about 75 containers running at once though, so I may fall into the 'not most people' category for Docker image sizes.
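     (A hedged sketch of how to see which containers are writing into the image itself rather than to mapped paths; the SIZE column shows each container's writable layer:)

        # list all containers with the size of their writable layers
        docker ps -as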
  5. Looks like there was a Docker container restarting (which I thought I'd resolved a month ago). I've removed the entire container and its image, so that should no longer be a problem. As for system, you're correct: it seems that when I built this system last week, I neglected to check how much space the cache files I already had were using, so I can only assume system can't move to cache because there isn't enough space. Mover is disabled during a data rebuild, so I guess I'll need to wait until it's finished? Or do you suggest cancelling the rebuild anyway, moving system to cache, then doing the rebuild again? The rebuild is currently at 77.8% and going at a reasonable speed:
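     (Separately, a minimal sketch of how to check where the system share currently lives and how big it is; this assumes the pool is named 'cache':)

        # show which disks hold a copy of the system share, and its size on each
        ls -d /mnt/disk*/system /mnt/cache/system 2>/dev/null
        du -sh /mnt/disk*/system /mnt/cache/system 2>/dev/null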
  6. I've recently rebuilt my server on new hardware (a custom server vs an old Dell PowerEdge) and had a disk go into a disabled state during a reboot. I've started the rebuild process and the speeds fluctuate wildly between 1MB/s and 125MB/s. Server specs are in my sig. The link speed of my LSI HBA is showing x8, which as far as I know should be fine for a decent rebuild speed:

        lspci -vv -s 31:00.0
        31:00.0 RAID bus controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
          Subsystem: Fujitsu Technology Solutions HBA Ctrl SAS 6G 0/1 [D2607]
          Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
          Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
          Latency: 0, Cache Line Size: 64 bytes
          Interrupt: pin A routed to IRQ 72
          IOMMU group: 35
          Region 0: I/O ports at e000 [size=256]
          Region 1: Memory at c04c0000 (64-bit, non-prefetchable) [size=16K]
          Region 3: Memory at c0080000 (64-bit, non-prefetchable) [size=256K]
          Expansion ROM at c0000000 [disabled] [size=512K]
          Capabilities: [50] Power Management version 3
            Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
            Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
          Capabilities: [68] Express (v2) Endpoint, MSI 00
            DevCap: MaxPayload 4096 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
              ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0W
            DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+ RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
              MaxPayload 512 bytes, MaxReadReq 512 bytes
            DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
            LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM L0s, Exit Latency L0s <64ns
              ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp-
            LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
              ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
            LnkSta: Speed 5GT/s, Width x8
              TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
            DevCap2: Completion Timeout: Range BC, TimeoutDis+ NROPrPrP- LTR-
              10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
              EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
              FRS- TPHComp- ExtTPHComp-
              AtomicOpsCap: 32bit- 64bit- 128bitCAS-
            DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- 10BitTagReq- OBFF Disabled,
              AtomicOpsCtl: ReqEn-
            LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
              Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
              Compliance Preset/De-emphasis: -6dB de-emphasis, 0dB preshoot
            LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete- EqualizationPhase1-
              EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest- Retimer- 2Retimers- CrosslinkRes: unsupported
          Capabilities: [d0] Vital Product Data
        pcilib: sysfs_read_vpd: read failed: No such device
            Not readable
          Capabilities: [a8] MSI: Enable- Count=1/1 Maskable- 64bit+
            Address: 0000000000000000  Data: 0000
          Capabilities: [c0] MSI-X: Enable+ Count=15 Masked-
            Vector table: BAR=1 offset=00002000
            PBA: BAR=1 offset=00003800
          Capabilities: [100 v1] Advanced Error Reporting
            UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
            UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
            UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
            CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
            CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
            AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
              MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
            HeaderLog: 00000000 00000000 00000000 00000000
          Capabilities: [138 v1] Power Budgeting <?>
          Capabilities: [150 v1] Single Root I/O Virtualization (SR-IOV)
            IOVCap: Migration- 10BitTagReq- Interrupt Message Number: 000
            IOVCtl: Enable- Migration- Interrupt- MSE- ARIHierarchy- 10BitTagReq-
            IOVSta: Migration-
            Initial VFs: 16, Total VFs: 16, Number of VFs: 0, Function Dependency Link: 00
            VF offset: 1, stride: 1, Device ID: 0072
            Supported Page Size: 00000553, System Page Size: 00000001
            Region 0: Memory at 00000000c04c4000 (64-bit, non-prefetchable)
            Region 2: Memory at 00000000c00c0000 (64-bit, non-prefetchable)
            VF Migration: offset: 00000000, BIR: 0
          Capabilities: [190 v1] Alternative Routing-ID Interpretation (ARI)
            ARICap: MFVC- ACS-, Next Function: 0
            ARICtl: MFVC- ACS-, Function Group: 0
          Kernel driver in use: mpt3sas
          Kernel modules: mpt3sas

     Attached are diagnostics. I'm not exactly sure why the speeds fluctuate so heavily, so any help would be appreciated. notyourunraid-diagnostics-20230218-1311.zip
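     (For anyone doing the same diagnosis, a quick way to see whether a single disk is the bottleneck during the rebuild; this assumes iostat from the sysstat package is installed, e.g. via NerdTools:)

        # extended per-device stats in MB/s, refreshed every 5 seconds; watch the throughput and %util columns
        iostat -xm 5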
  7. I've ended up having to reboot anyway, as I had other issues to deal with (space issues). The reboot has fixed the log file size (now 3%).
  8. Samba logs are definitely contributing, but not to the extent I was hoping for:
  9. Thanks for the suggestion. I will try that later today. I'm trying to avoid a full reboot, as the PowerEdge server I have Unraid on always fails to reboot successfully (some iDRAC error, I can't recall offhand). It needs to be powered off at the server itself, then turned back on, and then I have to wait 15 minutes.
  10. I had tried deleting a 10MB docker.log.1 file, but that didn't reduce the space at all. That said, I deleted 2 older syslog files (.1 and .2, at 3MB apiece), and that dropped the log usage a little. What I don't get is the disparity between the space in use and what space is free:

        df -h /var/log
        Filesystem      Size  Used Avail Use% Mounted on
        tmpfs           128M  118M   11M  92% /var/log

        ls -sh /var/log
        total 4.5M
         32K Xorg.0.log         0 cron            1.4M file.activity.log    0 nginx/             0 removed_scripts@             0 swtpm/
        4.0K apcupsd.events      0 debug           8.0K lastlog              0 packages@          0 removed_uninstall_scripts@   2.9M syslog
        4.0K apcupsd.events.1    4.0K dhcplog      0 libvirt/              0 pkgtools/          0 samba/                       0 vfio-pci
        4.0K apcupsd.events.2    4.0K diskinfo.log 172K maillog             0 plugins/           0 scripts@                     4.0K wg-quick.log
        4.0K apcupsd.events.3    92K dmesg         0 mcelog                0 preclear/          0 secure                       12K wtmp
        4.0K apcupsd.events.4    0 docker.log      0 messages              0 pwfail/            0 setup@
           0 btmp                4.0K faillog      0 nfsd/                 0 removed_packages@  0 spooler

     Whereabouts is the extra space being used?
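     (A common cause of df showing far more usage than ls can account for is files that were deleted while a process still holds them open; a hedged way to check, assuming lsof is available:)

        # list open-but-deleted files still pinning space on the /var/log tmpfs
        lsof +L1 /var/log
        # fallback if +L1 is not supported on this build
        lsof /var/log 2>/dev/null | grep -i deleted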
  11. Eureka! There were 2 containers spontaneously rebooting every minute: Authelia and Cachet-URL-Monitor (the former because of a failed migration (apparently it thinks my DB is corrupt, so that'll be fun to sort out) and the latter because the Cachet container was off and it couldn't communicate). Since stopping both containers, the promiscuous logs have finally stopped. Thank you for your (and @trurl's) help! Is there a way to drop the percentage of the log file without a full reboot?
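     (For later readers: one approach sometimes used to reclaim space on the /var/log tmpfs without a reboot is truncating the large files in place rather than deleting them; treat this as a sketch rather than advice from the thread, and the file name is only an example:)

        # empty a rotated log in place; the space is released immediately
        truncate -s 0 /var/log/syslog.1
        df -h /var/log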
  12. Unfortunately I don't know what it's about either. The logs are now indicating port 32, rather than 23:

        Jan 19 19:28:47 NotYourUnraid kernel: br-1a61be4b1d02: port 32(veth5105afc) entered blocking state
        Jan 19 19:28:47 NotYourUnraid kernel: br-1a61be4b1d02: port 32(veth5105afc) entered forwarding state
        Jan 19 19:28:48 NotYourUnraid kernel: br-1a61be4b1d02: port 32(veth5105afc) entered disabled state

     I've also just seen this error flash up a few times too:

        Jan 19 19:28:44 NotYourUnraid kernel: Page cache invalidation failure on direct I/O. Possible data corruption due to collision with buffered I/O!

     The adapter names suggest virtual ethernet adapters and/or bridge adapters. What would cause spontaneous connection attempts like this? I just don't know where to start...
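     (A hedged sketch of how the offending container could be tracked down from the Docker side, since each blocking/forwarding/disabled cycle corresponds to a veth being torn down and recreated:)

        # watch container start/stop events live to catch whatever keeps cycling
        docker events --filter event=start --filter event=die
        # or check current statuses for anything stuck in a restart loop
        docker ps -a --format '{{.Names}}\t{{.Status}}' | grep -i restart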
  13. Good question. I don't recall when I made that change but it's been like that for quite some time. If I were to change it down to 20G, would that impact the existing Docker containers I have? Or will it just rebuild? I see. May have been remnants of moving everything to array when I changed cache drives. I'll look at that later today. Where are you seeing this? The promiscuous/blocked logs? They only refer to port 2 as far as I can see. As for what is spamming port 23, I haven't the foggiest as I don't use telnet for anything, only SSH. Are you able to see any more info as to where the connection attempts are coming from?
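     (A minimal way to see exactly which bridge ports and veth interfaces the kernel messages reference, rather than eyeballing the log; the pattern matches the lines quoted earlier:)

        # pull out the most recent bridge port state changes from the syslog
        grep -E 'entered (blocking|forwarding|disabled) state' /var/log/syslog | tail -n 20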
  14. Hi, I have started receiving warnings that my log files are nearing 100% and have been trying to work out why. If I manually look at the docker.log.1 file, it shows a repeating 'starting signal loop' from moby, but I'm not sure why it started within the last couple of weeks or whether it's related.

        time="2023-01-19T01:11:08.810433865+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
        time="2023-01-19T01:11:08.810622094+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
        time="2023-01-19T01:11:08.810682107+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
        time="2023-01-19T01:11:08.811308407+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e4236caee8994d9f103782165ec3884e41400ce9094ae8aaa9917617ff313da pid=40475 runtime=io.containerd.runc.v2
        time="2023-01-19T01:11:08.885314367+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
        time="2023-01-19T01:11:08.885523569+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
        time="2023-01-19T01:11:08.885564943+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
        time="2023-01-19T01:11:08.886076138+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e390fd60ea6e65820c5e4e5ab570c6c9998db4997f39e95d55e98d319bd2119 pid=40521 runtime=io.containerd.runc.v2
        time="2023-01-19T01:12:11.044193548+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
        time="2023-01-19T01:12:11.044408590+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
        time="2023-01-19T01:12:11.044466804+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
        time="2023-01-19T01:12:11.044917385+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e4236caee8994d9f103782165ec3884e41400ce9094ae8aaa9917617ff313da pid=1512 runtime=io.containerd.runc.v2
        time="2023-01-19T01:12:11.550734620+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
        time="2023-01-19T01:12:11.550943282+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
        time="2023-01-19T01:12:11.550986842+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
        time="2023-01-19T01:12:11.551531510+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e390fd60ea6e65820c5e4e5ab570c6c9998db4997f39e95d55e98d319bd2119 pid=1727 runtime=io.containerd.runc.v2
        time="2023-01-19T01:13:13.201550504+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
        time="2023-01-19T01:13:13.201763393+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
        time="2023-01-19T01:13:13.201804197+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
        time="2023-01-19T01:13:13.202338359+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e4236caee8994d9f103782165ec3884e41400ce9094ae8aaa9917617ff313da pid=13786 runtime=io.containerd.runc.v2
        time="2023-01-19T01:13:14.762442280+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
        time="2023-01-19T01:13:14.762640405+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
        time="2023-01-19T01:13:14.762679552+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
        time="2023-01-19T01:13:14.763138656+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e390fd60ea6e65820c5e4e5ab570c6c9998db4997f39e95d55e98d319bd2119 pid=14205 runtime=io.containerd.runc.v2
        time="2023-01-19T01:14:15.347267221+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
        time="2023-01-19T01:14:15.347479993+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
        time="2023-01-19T01:14:15.347522144+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
        time="2023-01-19T01:14:15.348121927+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e4236caee8994d9f103782165ec3884e41400ce9094ae8aaa9917617ff313da pid=24400 runtime=io.containerd.runc.v2
        time="2023-01-19T01:14:16.900834894+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
        time="2023-01-19T01:14:16.900988734+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
        time="2023-01-19T01:14:16.901031182+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
        time="2023-01-19T01:14:16.901436282+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e390fd60ea6e65820c5e4e5ab570c6c9998db4997f39e95d55e98d319bd2119 pid=24747 runtime=io.containerd.runc.v2

     Diagnostics attached. Any help would be appreciated. Thank you.
notyourunraid-diagnostics-20230119-0126.zip
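     (The long hex IDs in those containerd paths are full Docker container IDs, so they can be mapped back to container names; the grep prefix below is taken from the log above:)

        # list untruncated container IDs alongside names and match the one from docker.log
        docker ps -a --no-trunc --format '{{.ID}}  {{.Names}}' | grep 8e4236caee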