evakq8r

Members
  • Posts: 52
  • Joined
  • Last visited

Posts posted by evakq8r

  1. 4 hours ago, itimpi said:

    It would be normal for mover not to move files in the 'system' share if you have either the VM or Docker services enabled as they keep files open and mover cannot move open files. 

    This makes sense; however, mover was actually disabled because a parity operation was in progress (as advised on the GUI).

     

    4 hours ago, itimpi said:

    As was mentioned you also seem to have the docker image file configured to be 100GB which is far more than you should need.  The default of 20GB is fine for most people.  If you find that you are filling it then that means you almost certainly have a docker container misconfigured so that it is writing files internal to the image when they should be mapped to an external location on the host.

    I've stopped the Docker and VM services, deleted the existing image, resized it to 30GB, and am now going through the (very painful) process of setting everything up again. Less than 20 docker containers in, and the 30GB image is already 70% full. The containers are all starting fine and none are in a restart loop, so I'm not sure why the image is already blowing out. I did have about 75 containers running at once though, so I may fall into the 'not most people' category for docker image sizes.
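
    For reference, a rough way to check which containers are actually eating the image (this assumes the Docker service is running; it's just what I've been poking at, not gospel):

    docker system df -v    # per-image and per-container disk usage inside docker.img
    docker ps -s           # running containers with the size of their writable layer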

  2. 1 hour ago, Squid said:

    You have what appears to be a docker container that is continually restarting.

     

    The docker image is in the system share which is currently on the cache drive and disk 3.  Impossible to tell from diagnostics where the docker.img file is on which of those two drives, but if its on disk 3 then what you're seeing would be expected.

     

    Go to Shares, next to system hit calculate.  If you see 100G (which may be overkill) on disk 3 then that's your issue.  Stop the entire docker service in settings - docker and run mover from the main tab

     

    Looks like there was a docker container restarting (one I thought I'd resolved a month ago). I've removed that container and its image entirely, so that should no longer be a problem.

     

    As for system, you're correct:

     

    [screenshot attached]

     

    It seems that when I built this system last week, I neglected to check how much space the files already on the cache were using, so I can only assume system can't move to the cache because there isn't enough free space.
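
    (Checking from the CLI for my own sanity; the paths below assume the stock Unraid mount points:)

    df -h /mnt/cache                                     # free space left on the cache pool
    ls -lh /mnt/cache/system/docker/docker.img /mnt/disk*/system/docker/docker.img 2>/dev/null   # which drive actually holds docker.img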

     

    Mover is disabled during a data rebuild, so I guess I'll need to wait until it's finished? Or do you suggest cancelling the rebuild anyway, moving system to the cache, and then running the rebuild again? The rebuild is currently at 77.8% and going at a reasonable speed:

     

    [screenshot attached]

    I've recently rebuilt my server on new hardware (a custom build versus an old Dell Poweredge) and had a disk go into a disabled state during a reboot. I've started the rebuild process and the speeds fluctuate wildly between 1MB/s and 125MB/s. Server specs are in my sig.

     

    The link speed of my LSI HBA is showing 8x, which as far as I know should be fine for a decent rebuild speed:

     

    lspci -vv -s 31:00.0
    31:00.0 RAID bus controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
            Subsystem: Fujitsu Technology Solutions HBA Ctrl SAS 6G 0/1 [D2607]
            Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
            Latency: 0, Cache Line Size: 64 bytes
            Interrupt: pin A routed to IRQ 72
            IOMMU group: 35
            Region 0: I/O ports at e000 [size=256]
            Region 1: Memory at c04c0000 (64-bit, non-prefetchable) [size=16K]
            Region 3: Memory at c0080000 (64-bit, non-prefetchable) [size=256K]
            Expansion ROM at c0000000 [disabled] [size=512K]
            Capabilities: [50] Power Management version 3
                    Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                    Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
            Capabilities: [68] Express (v2) Endpoint, MSI 00
                    DevCap: MaxPayload 4096 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
                            ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0W
                    DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
                            RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
                            MaxPayload 512 bytes, MaxReadReq 512 bytes
                    DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
                    LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM L0s, Exit Latency L0s <64ns
                            ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp-
                    LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                            ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                    LnkSta: Speed 5GT/s, Width x8
                            TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                    DevCap2: Completion Timeout: Range BC, TimeoutDis+ NROPrPrP- LTR-
                             10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
                             EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
                             FRS- TPHComp- ExtTPHComp-
                             AtomicOpsCap: 32bit- 64bit- 128bitCAS-
                    DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- 10BitTagReq- OBFF Disabled,
                             AtomicOpsCtl: ReqEn-
                    LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
                             Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                             Compliance Preset/De-emphasis: -6dB de-emphasis, 0dB preshoot
                    LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete- EqualizationPhase1-
                             EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
                             Retimer- 2Retimers- CrosslinkRes: unsupported
            Capabilities: [d0] Vital Product Data
    pcilib: sysfs_read_vpd: read failed: No such device
                    Not readable
            Capabilities: [a8] MSI: Enable- Count=1/1 Maskable- 64bit+
                    Address: 0000000000000000  Data: 0000
            Capabilities: [c0] MSI-X: Enable+ Count=15 Masked-
                    Vector table: BAR=1 offset=00002000
                    PBA: BAR=1 offset=00003800
            Capabilities: [100 v1] Advanced Error Reporting
                    UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                    UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                    UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                    CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
                    CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
                    AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
                            MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
                    HeaderLog: 00000000 00000000 00000000 00000000
            Capabilities: [138 v1] Power Budgeting <?>
            Capabilities: [150 v1] Single Root I/O Virtualization (SR-IOV)
                    IOVCap: Migration- 10BitTagReq- Interrupt Message Number: 000
                    IOVCtl: Enable- Migration- Interrupt- MSE- ARIHierarchy- 10BitTagReq-
                    IOVSta: Migration-
                    Initial VFs: 16, Total VFs: 16, Number of VFs: 0, Function Dependency Link: 00
                    VF offset: 1, stride: 1, Device ID: 0072
                    Supported Page Size: 00000553, System Page Size: 00000001
                    Region 0: Memory at 00000000c04c4000 (64-bit, non-prefetchable)
                    Region 2: Memory at 00000000c00c0000 (64-bit, non-prefetchable)
                    VF Migration: offset: 00000000, BIR: 0
            Capabilities: [190 v1] Alternative Routing-ID Interpretation (ARI)
                    ARICap: MFVC- ACS-, Next Function: 0
                    ARICtl: MFVC- ACS-, Function Group: 0
            Kernel driver in use: mpt3sas
            Kernel modules: mpt3sas
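
    For what it's worth, my back-of-envelope maths on that link (my own assumption, not from the diagnostics): PCIe 2.0 runs at 5GT/s per lane with 8b/10b encoding, i.e. roughly 500MB/s per lane, so:

    echo "$(( 500 * 8 )) MB/s theoretical for the x8 link"    # ~4000MB/s, far more than a handful of spinning disks can use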

     

    Attached are diagnostics. I'm not exactly sure why the speeds fluctuate so heavily, so any help would be appreciated.

    notyourunraid-diagnostics-20230218-1311.zip

  4. 7 hours ago, Kilrah said:

    Can install ncdu from nerdTools and see it in a more useful way. Samba logs can sometimes grow quite a bit. 

    If the files are in use they might not be cleared until closed hence the comment above...

     

    [screenshot attached]

     

    Samba logs are definitely contributing, but not to the extent I was hoping for:

     

    [screenshot attached]
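
    (Roughly what I used to check from the CLI, for anyone following along; stock /var/log paths assumed:)

    du -sh /var/log/* 2>/dev/null | sort -h | tail    # largest entries under /var/log, samba included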

  5. 3 minutes ago, Kilrah said:

    Can install ncdu from nerdTools and see it in a more useful way. Samba logs can sometimes grow quite a bit. 

    If the files are in use they might not be cleared until closed hence the comment above...

     

    [screenshot attached]

    Thanks for the suggestion. I will try that later today. 

     

    I'm trying to avoid a full reboot, as the Poweredge server I have Unraid on always fails to reboot successfully (some iDRAC error, I can't recall offhand). It needs to be powered off at the server itself, turned back on, and then left for 15 minutes.

  6. 11 minutes ago, trurl said:

    You could try deleting old logs in /var/log, but you might also need to restart logging. I don't remember how to do that.

     

    I had tried deleting a 10MB docker.log.1 file, but that didn't reduce the space at all. That said, I deleted two older syslog files (.1 and .2, at 3MB apiece), and that dropped the log usage a little.

     

    What I don't get is the disparity between the space reported as in use and what the files themselves add up to:

     

    df -h /var/log
    Filesystem      Size  Used Avail Use% Mounted on
    tmpfs           128M  118M   11M  92% /var/log

     

    ls -sh /var/log
    total 4.5M
     32K Xorg.0.log           0 cron          1.4M file.activity.log     0 nginx/                0 removed_scripts@               0 swtpm/
    4.0K apcupsd.events       0 debug         8.0K lastlog               0 packages@             0 removed_uninstall_scripts@  2.9M syslog
    4.0K apcupsd.events.1  4.0K dhcplog          0 libvirt/              0 pkgtools/             0 samba/                         0 vfio-pci
    4.0K apcupsd.events.2  4.0K diskinfo.log  172K maillog               0 plugins/              0 scripts@                    4.0K wg-quick.log
    4.0K apcupsd.events.3   92K dmesg            0 mcelog                0 preclear/             0 secure                       12K wtmp
    4.0K apcupsd.events.4     0 docker.log       0 messages              0 pwfail/               0 setup@
       0 btmp              4.0K faillog          0 nfsd/                 0 removed_packages@     0 spooler

     

    Whereabouts is the extra space being used?
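
    My only guesses so far (untested): ls -sh doesn't descend into the subdirectories, and anything deleted but still held open keeps its space on the tmpfs until the owning process lets go. If I'm right, something like this should show it:

    du -sh /var/log                # true total, including subdirectories
    lsof +L1 /var/log 2>/dev/null  # open files on the tmpfs that have already been deleted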

  7. 8 minutes ago, Squid said:

    Is there a container that's restarting itself every minute?  Looking at the container's uptime on the docker tab should show one that's very low uptime, even after double checking again in a half hour

    Eureka! There were 2 containers spontaneously restarting every minute: Authelia and Cachet-URL-Monitor (the former because of a failed migration, as it apparently thinks my DB is corrupt, so that'll be fun to sort out; the latter because the Cachet container was off and it couldn't communicate).

     

    Since stopping both containers, the promiscuous logs have finally stopped. Thank you for your (and @trurl's) help!

     

    Is there a way to drop the percentage of the log file without a full reboot?
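
    (If anyone can confirm whether simply truncating the live files in place is safe, that would be my preferred route, e.g.:)

    truncate -s 0 /var/log/syslog    # empties the file without deleting it, so the logger keeps a valid handle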

     

  8. 7 hours ago, trurl said:

    I don't know what this is about but it seems to be the main thing filling your syslog.

     

    Unfortunately I don't know what it's about either. The logs are now indicating port 32, rather than 23:

     

    Jan 19 19:28:47 NotYourUnraid kernel: br-1a61be4b1d02: port 32(veth5105afc) entered blocking state
    Jan 19 19:28:47 NotYourUnraid kernel: br-1a61be4b1d02: port 32(veth5105afc) entered forwarding state
    Jan 19 19:28:48 NotYourUnraid kernel: br-1a61be4b1d02: port 32(veth5105afc) entered disabled state

     

    Also just seen this error flash up a few times too:

     

    Jan 19 19:28:44 NotYourUnraid kernel: Page cache invalidation failure on direct I/O.  Possible data corruption due to collision with buffered I/O!

     

    The adapter names suggest virtual ethernet adapters and/or bridge adapters.  What would cause spontaneous connection attempts like this? I just don't know where to start...
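
    The only lead I can think of chasing (no idea yet whether it will show anything) is whether one of the containers keeps resetting, e.g.:

    docker ps --format '{{.Status}}\t{{.Names}}' | sort    # anything stuck at "Up X seconds" would keep tearing down and recreating its veth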

  9. 16 minutes ago, trurl said:

    Why do you have 200G docker.img? Have you had problems filling it? 20G is often more than enough. The usual cause of filling docker.img is an application writing to a path that isn't mapped.

     

    Good question. I don't recall when I made that change but it's been like that for quite some time. If I were to change it down to 20G, would that impact the existing Docker containers I have? Or will it just rebuild? 

     

    17 minutes ago, trurl said:

    Also, your appdata share has files on the array.

     

    I see. That may be a remnant of moving everything to the array when I changed cache drives. I'll look at that later today. 

     

    18 minutes ago, trurl said:

    Syslog is being spammed with connections to port 23. Any idea what that is about?

     

    Where are you seeing this? The promiscuous/blocked logs? They only refer to port 2 as far as I can see. As for what is spamming port 23, I haven't the foggiest as I don't use telnet for anything, only SSH. Are you able to see any more info as to where the connection attempts are coming from? 

  10. Hi,

     

    I've started receiving warnings that my log files are nearing 100% and have been trying to work out why. If I look at the docker.log.1 file manually, it shows a repeating 'starting signal loop' from moby, but I'm not sure why it started within the last couple of weeks or whether it's related.

     

    time="2023-01-19T01:11:08.810433865+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
    time="2023-01-19T01:11:08.810622094+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
    time="2023-01-19T01:11:08.810682107+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
    time="2023-01-19T01:11:08.811308407+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e4236caee8994d9f103782165ec3884e41400ce9094ae8aaa9917617ff313da pid=40475 runtime=io.containerd.runc.v2
    time="2023-01-19T01:11:08.885314367+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
    time="2023-01-19T01:11:08.885523569+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
    time="2023-01-19T01:11:08.885564943+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
    time="2023-01-19T01:11:08.886076138+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e390fd60ea6e65820c5e4e5ab570c6c9998db4997f39e95d55e98d319bd2119 pid=40521 runtime=io.containerd.runc.v2
    time="2023-01-19T01:12:11.044193548+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
    time="2023-01-19T01:12:11.044408590+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
    time="2023-01-19T01:12:11.044466804+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
    time="2023-01-19T01:12:11.044917385+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e4236caee8994d9f103782165ec3884e41400ce9094ae8aaa9917617ff313da pid=1512 runtime=io.containerd.runc.v2
    time="2023-01-19T01:12:11.550734620+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
    time="2023-01-19T01:12:11.550943282+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
    time="2023-01-19T01:12:11.550986842+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
    time="2023-01-19T01:12:11.551531510+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e390fd60ea6e65820c5e4e5ab570c6c9998db4997f39e95d55e98d319bd2119 pid=1727 runtime=io.containerd.runc.v2
    time="2023-01-19T01:13:13.201550504+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
    time="2023-01-19T01:13:13.201763393+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
    time="2023-01-19T01:13:13.201804197+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
    time="2023-01-19T01:13:13.202338359+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e4236caee8994d9f103782165ec3884e41400ce9094ae8aaa9917617ff313da pid=13786 runtime=io.containerd.runc.v2
    time="2023-01-19T01:13:14.762442280+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
    time="2023-01-19T01:13:14.762640405+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
    time="2023-01-19T01:13:14.762679552+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
    time="2023-01-19T01:13:14.763138656+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e390fd60ea6e65820c5e4e5ab570c6c9998db4997f39e95d55e98d319bd2119 pid=14205 runtime=io.containerd.runc.v2
    time="2023-01-19T01:14:15.347267221+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
    time="2023-01-19T01:14:15.347479993+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
    time="2023-01-19T01:14:15.347522144+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
    time="2023-01-19T01:14:15.348121927+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e4236caee8994d9f103782165ec3884e41400ce9094ae8aaa9917617ff313da pid=24400 runtime=io.containerd.runc.v2
    time="2023-01-19T01:14:16.900834894+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
    time="2023-01-19T01:14:16.900988734+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
    time="2023-01-19T01:14:16.901031182+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
    time="2023-01-19T01:14:16.901436282+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e390fd60ea6e65820c5e4e5ab570c6c9998db4997f39e95d55e98d319bd2119 pid=24747 runtime=io.containerd.runc.v2

     

    Diagnostics attached.

     

    Any help would be appreciated. Thank you.

    notyourunraid-diagnostics-20230119-0126.zip

    Welp, managed to get the prefill container to work with the lancache-bundle container (changed the DNS back to what the bundle was using and it just worked, which it didn't before).

     

    Sounds like the standalone lancache containers are a bit broken (for my use case), and I do hope @Josh.5 doesn't deprecate his container, as it works wonders.

     

    Either way, crisis averted. Thanks for responding.

  12. 3 minutes ago, ich777 said:

    Prefill doesn't need its own IP, strictly speaking.

     

    Did you set the "--dns=" parameter like mentioned in the description from the container?

     

    True; however, without a dedicated IP it defaults to DHCP, which handed out an address a lot lower in the IP range I have available, so I kept them together.

     

    Yes, I did:

     

    [screenshot attached]

     

    5 minutes ago, ich777 said:

    Do you need the lancache-dns at all or better speaking have you something like AdGuard or PiHole somewhere in your network? You can also use AdGuard, PiHole, Unbound or whatever as a DNS for lancache.

     

    Based on Spaceinvader's tutorial video I linked earlier, yes it does seem to be needed. Happy to try an alternative method if you think one fits better. 

     

    As for DNS, I use NextDNS (a variation of PiHole/Adguard etc - https://nextdns.io/)

     

    6 minutes ago, ich777 said:

    May I ask where you are located in the world?

     

    Australia

     

    6 minutes ago, ich777 said:

    This could also be caused because of an unstable connection to the Steam CDN itself which maybe not be obvious while downloading through the Steam Client itself.

    AFAIK many users get the message that it retries and it works after a short while again but that could be caused of many things.

    Even I get the message sometimes but in my case that happens really occasionally.

     

    Unfortunately for me it's quite often. Downloads direct from the Steam CDNs outside of Lancache are always stable, so I find it odd that it becomes unstable while using Lancache and stays stable outside of it.

     

    7 minutes ago, ich777 said:

    Do you have the data for your lancache on your Array? If yes, I would not recommend doing it that way; rather, I would recommend that you use a disk outside of your Array, maybe a spinning HDD or, even better, an SSD or NVMe mounted through Unassigned Devices.

    I could imagine that the prefill stresses your server too much and you maybe also get higher I/O Wait than usual when having the lancache on the Array.

     

    I do, yes, and what you say makes sense. Unfortunately the server I have has all HDD bays populated, and the NVMe adapter in use is on a PCIe bracket inside the server (as the Poweredge does not support NVMe or M.2 drives natively). 

     

    Interestingly enough, I was using Josh.5's Lancache-Bundle just fine, but the prefill container could never communicate with the bundle for whatever reason. The only reason I spun up the separate containers was that prefill did work with them, but then I had these issues instead. In addition, the lancache-bundle also uses an array share and hasn't crushed I/O at all.

    I've recently implemented the lancache-prefill container on my Unraid server and it's acting rather strangely. 

     

    I followed Spaceinvader's setup guide to set this all up.

     

    EDIT: Should probably add what my setup is:

     

    192.168.1.173 = lancache-prefill

    192.168.1.174 = lancache-dns

    192.168.1.175 = lancache

     

    Blizzard prefill will sometimes work and sometimes come up with 'connection refused'. Steam prefill will download games intermittently (sometimes an error while downloading, sometimes constant retries, sometimes nothing at all). Unfortunately I inadvertently set the log files to clear, so I will paste the error once it comes up again.

     

    If I tail -f the log file, I can see (when it works) that downloads to the lancache are working, as the logs are filling. Not exactly sure what the go is, but it's probably something simple I've done to break it.

     

    ie: when working:

     

    [10:39:45 AM] Retrieved product metadata                         36.4256
    [10:39:45 AM] Downloading 80 GiB
                                                                                    
    Downloading.. ━━━━━━━━━━━━━━━━━━━━━━━━━━   7% 00:59:16 5.9/80.0 GiB 178.9 Mbit/s

     

    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 95796 "-" "-" "HIT" "level3.blizzard.com" "bytes=134401104-134496899"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 35948 "-" "-" "HIT" "level3.blizzard.com" "bytes=139660436-139696383"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 35312 "-" "-" "HIT" "level3.blizzard.com" "bytes=142388632-142423943"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 37416 "-" "-" "HIT" "level3.blizzard.com" "bytes=147433099-147470514"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 139587 "-" "-" "HIT" "level3.blizzard.com" "bytes=142494331-142633917"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 104460 "-" "-" "HIT" "level3.blizzard.com" "bytes=146886623-146991082"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 279348 "-" "-" "HIT" "level3.blizzard.com" "bytes=142912763-143192110"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 219811 "-" "-" "HIT" "level3.blizzard.com" "bytes=130888351-131108161"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 106040 "-" "-" "HIT" "level3.blizzard.com" "bytes=136775659-136881698"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 200330 "-" "-" "HIT" "level3.blizzard.com" "bytes=144267481-144467810"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 105703 "-" "-" "HIT" "level3.blizzard.com" "bytes=149023459-149129161"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 262034 "-" "-" "HIT" "level3.blizzard.com" "bytes=145291719-145553752"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 91984 "-" "-" "HIT" "level3.blizzard.com" "bytes=146394919-146486902"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 372961 "-" "-" "HIT" "level3.blizzard.com" "bytes=145818916-146191876"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/34/48/3448e41e21d3724195efe6ef688e81d6 HTTP/1.1" 206 3615055 "-" "-" "HIT" "level3.blizzard.com" "bytes=188148148-191763202"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 170670 "-" "-" "HIT" "level3.blizzard.com" "bytes=144842503-145013172"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 142495 "-" "-" "HIT" "level3.blizzard.com" "bytes=150512294-150654788"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/34/48/3448e41e21d3724195efe6ef688e81d6 HTTP/1.1" 206 6169748 "-" "-" "HIT" "level3.blizzard.com" "bytes=148658598-154828345"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/34/48/3448e41e21d3724195efe6ef688e81d6 HTTP/1.1" 206 1585078 "-" "-" "HIT" "level3.blizzard.com" "bytes=198792252-200377329"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 32612 "-" "-" "HIT" "level3.blizzard.com" "bytes=150116412-150149023"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 38635 "-" "-" "HIT" "level3.blizzard.com" "bytes=144575843-144614477"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 475054 "-" "-" "HIT" "level3.blizzard.com" "bytes=133722331-134197384"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:25 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 36488 "-" "-" "HIT" "level3.blizzard.com" "bytes=155993088-156029575"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:26 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 64897 "-" "-" "HIT" "level3.blizzard.com" "bytes=153538616-153603512"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:26 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 69408 "-" "-" "HIT" "level3.blizzard.com" "bytes=155814401-155883808"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:26 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 92917 "-" "-" "HIT" "level3.blizzard.com" "bytes=152477644-152570560"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:26 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 36626 "-" "-" "HIT" "level3.blizzard.com" "bytes=156169945-156206570"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:26 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 35194 "-" "-" "HIT" "level3.blizzard.com" "bytes=156355825-156391018"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:26 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 125568 "-" "-" "HIT" "level3.blizzard.com" "bytes=154258141-154383708"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:26 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 75150 "-" "-" "HIT" "level3.blizzard.com" "bytes=156243988-156319137"
    [blizzard] 192.168.1.173 / - - - [22/Dec/2022:10:41:26 +1030] "GET /tpr/wow/data/4f/ab/4fab5ff2d92107a8be6ef03077d10a81 HTTP/1.1" 206 69465 "-" "-" "HIT" "level3.blizzard.com" "bytes=156065003-156134467"

     

    In addition, when the prefill is running as normal, accessing SMB file shares slows to an absolute crawl (some shares take over a minute to load). When the prefill container is not running, they take less than a second.

     

    Lastly, the game clients (i.e. Battle.net for Blizzard) won't actually download from the cache, even though my DNS is set up to do just that. I have DNS rewrites in place (using NextDNS) to point the CDNs to my lancache, but that doesn't seem to help. The URLs rewritten are from this list: https://github.com/uklans/cache-domains/blob/master/blizzard.txt, all pointing to my lancache instance (192.168.1.175).
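
    (My assumption of how it should behave: from a LAN client, those rewritten hostnames should resolve straight to the lancache IP, e.g.:)

    nslookup level3.blizzard.com    # expecting 192.168.1.175 back if the NextDNS rewrite is doing its job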

     

    Any help would be appreciated (sorry for the long post).

     

    EDIT2: Error log attached for failed downloads (for battlenet)

     

    battlenet_prefill.log

  14. 4 minutes ago, SimonF said:

    It will be because the packages cannot be downloaded, but the plugin should use the local cached packages. Suggest that you post on the nvida plugin support page.

     

    With diagnostics

     

    The machine has been restarted since the anomaly, so the current diagnostics are now meaningless, and I'm not rolling back and then upgrading again just to capture them, but I appreciate the input.

  15. Just adding my 2c/experience with the upgrade process.

     

    Went from 6.9.2 to 6.10.2 and had no LAN NICs on the next boot. The adapters in my Dell Poweredge (Broadcom NetXtreme II for reference, running in bond0) appear to be the affected ones mentioned here.

    Ended up adding the blank tg3.conf file to the USB boot drive, and the LAN adapters are now functional.
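
    (For anyone searching later: the file I created was, if memory serves, an empty one at roughly this path on the flash drive; I'm going from memory, so double-check against the 6.10 release notes.)

    touch /boot/config/modprobe.d/tg3.conf    # blank file so the tg3 module is no longer blacklisted at boot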

     

    One strange side effect, however: the upgrade appeared to completely remove my NVIDIA and DVB drivers. I only realised this when trying to run the Plex docker container and having it return 'Error: Unable to find runtime nvidia'.

     

    Is this expected behaviour, or just an unexpected anomaly? (Reinstalling the respective drivers from the CA App Store has fixed the issue, FWIW.)

  16. 1 hour ago, Taddeusz said:


    You’re using it with an external database? Did you import the database schema using the SQL file?

     

    Yes, I'm using an external DB and I also imported the schema. For whatever reason though, I just reinstalled it from scratch, made no changes to the config, and 'it just works'.

     

    NFI why, but I'm not questioning it. :)

  17. Have tried setting this up today, and can't get the web UI to load the login prompt without throwing this:

     

    [screenshot attached]

     

    There are no errors in the Apache Guacamole docker, nor in MariaDB. The user and password work for the database I set up for this container, so I'm at a loss.
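
    (For completeness, roughly how the credentials were verified; the host, database and user names below are placeholders, substitute your own:)

    mysql -h <mariadb-ip> -u guacamole -p guacamole -e 'SHOW TABLES;'    # should list the guacamole_* tables if the schema imported correctly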

     

    I've tried both Unraid templates (with and without MariaDB), Docker has 'host access to custom networks' enabled, and I've also tried different networks (br0 to give the container its own IP, as well as my reverse proxy network).

     

    Probably something simple, but any help would be appreciated. Thanks.

  18. Fixed the issue; ended up being an ID10T error.

     

    The Dell drive caddies have hole layouts for both SAS and SATA drives, and I figured 'hey, I should use the SATA holes!'... Nope: the drives were not connecting to the backplane because I hadn't screwed them into the correct holes in the caddy.

    Now using the SAS holes, and lo and behold, everything works!

  19. Hi all.

     

    I've been using Unraid for quite a while now and had the means to migrate my server from a makeshift server to a Poweredge unit I've acquired.

     

    It comes with a PERC H700 mini flashed to LSI 2308 in IT mode and came with 4x SAS drives. I installed a test environment of Unraid and it worked fine. 

     

    Started migrating my hard drives into it (7x Seagate Ironwolf NAS SATA 10TB drives), turned the server on and... nothing. No SATA drives. Turned off the server, plugged the SAS drives back in, nada. The only drives being detected are my NVMe cache drives. Tried plugging in a mix of SATA and SAS drives, same outcome. I have no idea what's caused Unraid to no longer find any SAS or SATA drives. To my knowledge, this controller should support both SATA and SAS drives.

     

    I tried checking whether the BIOS or iDRAC would show the drives, but neither did (even when the drives were working previously), as the LSI card has been flashed into IT mode.

     

    Diagnostics attached. The syslog shows the LSI SAS 2308 controller still showing up and enabled, so I'm at a loss.

     

    Any help would be appreciated. Thanks.

    tower-diagnostics-20220106-2316.zip

  20. 37 minutes ago, ich777 said:

    That's really strange.

    Please double check that also the client is on experimental and validate the files there please.

     

    Don't know what causes this now...

     

    Client is 100% on Experimental (game version confirms that).

     

    I'll rebuild the container at some point and see how we go. Thanks anyway @ich777 - great work as always :)
