[Support] binhex - DelugeVPN


Recommended Posts

22 minutes ago, mike_walker said:

PS If there are any other ideas as to why the Virgin download rate is so bad, I'd love to hear them.  I've not changed anything in the Deluge preferences, so I'm pretty sure that isn't the issue (and I followed the guidance on limiting the upload speeds etc.).

The number one reason for low speed is not having an open incoming port. Have you set that up with your VPN provider and put that port in the Deluge settings?
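For reference, once the forwarded port is known, the relevant keys in Deluge's core.conf end up looking like this (56423 is a placeholder; substitute whatever port your provider assigned):

```
"random_port": false,
"listen_ports": [
    56423,
    56423
],
```

With `random_port` disabled, Deluge will only listen on the pinned port, so the forwarded port and the listen port stay in sync.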

Link to comment

Started to get these messages in the container log.

The lines below keep repeating in the log, and the container's WebUI is inaccessible.

I think the cipher was there before, back when it used to work OK.


 

2023-05-31 21:14:10,641 DEBG 'start-script' stdout output:
2023-05-31 21:14:10 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts

2023-05-31 21:14:10,645 DEBG 'start-script' stdout output:
2023-05-31 21:14:10 TCP/UDP: Preserving recently used remote address: [AF_INET]185.2.30.216:80
2023-05-31 21:14:10 Attempting to establish TCP connection with [AF_INET]185.2.30.216:80

2023-05-31 21:14:10,748 DEBG 'start-script' stdout output:
2023-05-31 21:14:10 TCP connection established with [AF_INET]185.2.30.216:80
2023-05-31 21:14:10 TCPv4_CLIENT link local: (not bound)
2023-05-31 21:14:10 TCPv4_CLIENT link remote: [AF_INET]185.2.30.216:80

2023-05-31 21:14:10,851 DEBG 'start-script' stdout output:
2023-05-31 21:14:10 Connection reset, restarting [0]

2023-05-31 21:14:10,852 DEBG 'start-script' stdout output:
2023-05-31 21:14:10 SIGHUP[soft,connection-reset] received, process restarting

2023-05-31 21:14:10,853 DEBG 'start-script' stdout output:
2023-05-31 21:14:10 DEPRECATED OPTION: --cipher set to 'AES-256-CBC' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM:CHACHA20-POLY1305). OpenVPN ignores --cipher for cipher negotiations.

2023-05-31 21:14:10,856 DEBG 'start-script' stdout output:
2023-05-31 21:14:10 WARNING: file 'credentials.conf' is group or others accessible
2023-05-31 21:14:10 OpenVPN 2.6.3 [git:makepkg/94aad8c51043a805+] x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] [DCO] built on Apr 13 2023

2023-05-31 21:14:10,856 DEBG 'start-script' stdout output:
2023-05-31 21:14:10 library versions: OpenSSL 3.0.8 7 Feb 2023, LZO 2.10
2023-05-31 21:14:10 DCO version: N/A
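For what it's worth, the DEPRECATED OPTION line is only a warning and is not what's resetting the connection, but on OpenVPN 2.5+ it can be silenced by making the legacy cipher explicit in the .ovpn file (a sketch, assuming the server still negotiates AES-256-CBC):

```
# allow negotiation to fall back to the legacy CBC cipher on older servers
data-ciphers AES-256-GCM:AES-128-GCM:CHACHA20-POLY1305:AES-256-CBC
```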

 

Link to comment
On 5/31/2023 at 2:57 PM, strike said:

The number one reason for low speed is not having an open incoming port. Have you set that up with your VPN provider and put that port in the Deluge settings?

Hi - thanks, I'll try that (although, like I say, I haven't changed any settings in Deluge).

Link to comment

The privoxy in my delugevpn container has stopped working. All media collector docker containers (like sonarr, radarr) have connection errors when the proxy is enabled. Is there anything I can do about this?

 

deluge itself functions as usual just fine.

 

Thank you!

Edited by williamwallace
added more details
Link to comment

Not sure if I'm allowed to post this here (2nd day on Unraid). Wondering if there is an alternative for scheduling downloads in DelugeVPN? I know it's an open issue on GitHub, but I was curious whether anyone else has got it working or found a different method.

 

Would really like to throttle downloads during times when I need the bandwidth.

 

Any help is immensely appreciated, TIA :)

Link to comment
On 5/31/2023 at 2:57 PM, strike said:

The number one reason for low speed is not having an open incoming port. Have you set that up with your VPN provider and put that port in the Deluge settings?

Hi there. So I've been on with Surfshark (my VPN provider), and the chap told me my incoming port was 1196.  I've completely re-installed the container (my settings had been changed so much that I wanted a fresh install).  Anyhow, it didn't work - nothing better than 1.2MB/s.

BTW I did a test without the VPN and got an amazing 50.2MB/s

I'm not convinced the incoming-port information I was given was right (he said it was because I was using UDP).

 

Any other suggestions - or advice on how to get WireGuard working?

Deluge Settings.png

Link to comment
40 minutes ago, mike_walker said:

Faceplant for leaving TalkTalk and moving to Virgin "for better speeds".

As much as I would love to blame your ISP here (TalkTalk are bloody awful!), moving ISP should not affect your download speed when torrenting over a VPN connection UNLESS the ISP you moved to is bandwidth-throttling VPN traffic, which is a pretty crappy thing to do. 

Having said all that, having a working incoming port for torrenting is HIGHLY recommended and will ensure you can connect to more peers, so it's definitely worth moving to a VPN provider that offers port forwarding.

 

44 minutes ago, mike_walker said:

If I buy either PrivateVPN or PIA both'll work?

As long as the VPN provider allows connection via WireGuard or OpenVPN AND allows P2P traffic, with a mechanism to assign a port forward, then you should be good. If you want an easy life, go with PIA, as I have coded this image to use it out of the box with minimal effort for the end user.

Link to comment

Ok, really sorry about this but I'm crying into my computer.

I've bought PIA and installed Deluge fresh with the following:

docker run -d \
    --cap-add=NET_ADMIN \
    -p 8112:8112 \
    -p 8118:8118 \
    -p 58846:58846 \
    -p 58946:58946 \
    --name=arch-delugevpn \
    -v /share/CACHEDEV1_DATA/Data/Multimedia/:/data \
    -v /share/CACHEDEV1_DATA/Data/Deluge_Config/:/config \
    -v /etc/localtime:/etc/localtime:ro \
    -e VPN_ENABLED=yes \
    -e VPN_USER=HIDDEN \
    -e VPN_PASS=HIDDEN \
    -e VPN_PROV=pia \
    -e VPN_CLIENT=openvpn \
    -e STRICT_PORT_FORWARD=yes \
    -e ENABLE_PRIVOXY=no \
    -e LAN_NETWORK=192.168.1.0/24 \
    -e NAME_SERVERS=84.200.69.80,37.235.1.174,1.1.1.1,37.235.1.177,84.200.70.40,1.0.0.1 \
    -e DELUGE_DAEMON_LOG_LEVEL=info \
    -e DELUGE_WEB_LOG_LEVEL=info \
    -e DELUGE_ENABLE_WEBUI_PASSWORD=no \
    -e VPN_INPUT_PORTS=1234 \
    -e VPN_OUTPUT_PORTS=5678 \
    -e DEBUG=false \
    -e UMASK=000 \
    -e PUID=0 \
    -e PGID=0 \
binhex/arch-delugevpn

 

It connects and everything, but I'm still only getting c. 0.6-1.1MB/s.  I checked the log file, and Manchester in the UK accepts port forwarding. I've attached the Deluge config file.  Am I still supposed to do something in Preferences with the incoming ports (I thought it was handled automatically with PIA - and it seems to be, from the .conf file)?

 

BTW oddly, at one point the download speed did shoot up to 18MB/s but then quickly went down to 1.1MB/s.

 

Thanks (again) in advance and apologies for taking up your time. M

supervisord.log core.conf

Link to comment
2 minutes ago, mike_walker said:

Ok, really sorry about this but I'm crying into my computer.

I've bought PIA and installed Deluge fresh with the following:

docker run -d \
    --cap-add=NET_ADMIN \
    -p 8112:8112 \
    -p 8118:8118 \
    -p 58846:58846 \
    -p 58946:58946 \
    --name=arch-delugevpn \
    -v /share/CACHEDEV1_DATA/Data/Multimedia/:/data \
    -v /share/CACHEDEV1_DATA/Data/Deluge_Config/:/config \
    -v /etc/localtime:/etc/localtime:ro \
    -e VPN_ENABLED=yes \
    -e VPN_USER=HIDDEN \
    -e VPN_PASS=HIDDEN \
    -e VPN_PROV=pia \
    -e VPN_CLIENT=openvpn \
    -e STRICT_PORT_FORWARD=yes \
    -e ENABLE_PRIVOXY=no \
    -e LAN_NETWORK=192.168.1.0/24 \
    -e NAME_SERVERS=84.200.69.80,37.235.1.174,1.1.1.1,37.235.1.177,84.200.70.40,1.0.0.1 \
    -e DELUGE_DAEMON_LOG_LEVEL=info \
    -e DELUGE_WEB_LOG_LEVEL=info \
    -e DELUGE_ENABLE_WEBUI_PASSWORD=no \
    -e VPN_INPUT_PORTS=1234 \
    -e VPN_OUTPUT_PORTS=5678 \
    -e DEBUG=false \
    -e UMASK=000 \
    -e PUID=0 \
    -e PGID=0 \
binhex/arch-delugevpn

 

It connects and everything, but I'm still only getting c. 0.6-1.1MB/s.  I checked the log file, and Manchester in the UK accepts port forwarding. I've attached the Deluge config file.  Am I still supposed to do something in Preferences with the incoming ports (I thought it was handled automatically with PIA - and it seems to be, from the .conf file)?

 

BTW oddly, at one point the download speed did shoot up to 18MB/s but then quickly went down to 1.1MB/s.

 

Thanks (again) in advance and apologies for taking up your time. M

supervisord.log core.conf

Yep, that's a successful start and configuration of the incoming port. Have you gone through the checklist in Q6 here: https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md 

Link to comment
On 6/5/2023 at 4:22 PM, mike_walker said:

Hi there. So I've been on with Surfshark (my VPN provider), and the chap told me my incoming port was 1196.  I've completely re-installed the container (my settings had been changed so much that I wanted a fresh install).  Anyhow, it didn't work - nothing better than 1.2MB/s.

BTW I did a test without the VPN and got an amazing 50.2MB/s

I'm not convinced the incoming-port information I was given was right (he said it was because I was using UDP).

 

Any other suggestions - or advice on how to get WireGuard working?

Deluge Settings.png

Use a TCP VPN; UDP VPN traffic has been getting rate-limited by some ISPs.
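Switching transport is a small change in the provider's .ovpn file; the hostname and port below are placeholders - download your provider's TCP config to get the real values:

```
# replace "proto udp" and the UDP remote line with the TCP equivalents
proto tcp
remote uk-manchester.example-vpn.com 443
```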

Link to comment
52 minutes ago, binhex said:

yep that's a successful start and configuration of the incoming port, have you gone through the checklist here in Q6:- https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md 

Have now - same result; it just sits between 0.6 and 1.3MB/s.  This is what I've done:

  • Incoming port not defined correctly - it is (as above)
  • Upload rate set too high/unlimited - My upload speed is 48Mb/s so I've set this in Deluge to 20000Kb/s
  • Use GCM cipher instead of CBC - added "cipher aes-128-gcm auth SHA256 ncp-disable"
  • Disable in/out uTP - disabled both via the ltConfig plugin
  • Rate limit overhead enabled - Was unticked anyhow
  • VPN endpoint has low bandwidth - Getting 548Mb/s via browser with vpn connected
  • Highly fragmented disk - Doubt my disk would have that much effect on the download speed - especially as with the VPN off I get 40-50Mb/s
  • Name Resolution not working - Am using NAME_SERVERS=84.200.69.80,37.235.1.174,1.1.1.1,37.235.1.177,84.200.70.40,1.0.0.1 - are these ok?
  • unRAID - I'm not using unRAID
  • Router/Realtek NIC - again, works without the VPN

Again, thanks for your time on this. M

 

PS And again it briefly went up to 10MB/s, I thought "it's working!", and then it went right down to 200kB/s before settling at 1MB/s.

core.conf uk_manchester.ovpn supervisord.log

Edited by mike_walker
Quick update
Link to comment
On 6/4/2023 at 9:54 PM, williamwallace said:

The privoxy in my delugevpn container has stopped working. All media collector docker containers (like sonarr, radarr) have connection errors when the proxy is enabled. Is there anything I can do about this?

 

deluge itself functions as usual just fine.

 

Thank you!

This magically resolved itself. Unfortunately I don't know what the cause of the privoxy issue was.

Link to comment

Is there a way to set up automatic "cross-seeding" (I believe that's the term) for when DelugeVPN receives a torrent from Radarr/Sonarr that is already in session at that time?

 

For example, if I manually search in Radarr, it may show the same torrent from multiple indexers. If I click the first torrent, it will be sent to Deluge. If I click the second, identical torrent from a different indexer, it gives me the error "torrent already in session". Is there a way to get Deluge to accept that second torrent and add any additional trackers to the torrent already in session?

I find this useful sometimes for torrents with a low number of seeds, where occasionally one indexer's torrent may not be able to get started, but adding the trackers from a second or third torrent will sometimes get it to download. So far I have to do that manually.

Thanks!

Edited by vfm
Link to comment

Hi all, I have made some nice changes to the core code used for all the VPN Docker images I produce. Details as follows:-

  • Randomly rotate between multiple remote endpoints (OpenVPN only) on disconnection - less possibility of getting stuck on a defunct endpoint.
  • Manual round-robin implementation of IP addresses for endpoints - on disconnection, all endpoint IPs are rotated in /etc/hosts, reducing the possibility of getting stuck on a defunct server behind an endpoint.

I also have a final piece of work around this (not done yet), which is to refresh the IP addresses for endpoints on each disconnect/reconnect cycle, further reducing the possibility of getting stuck on defunct servers.
 

In short, the work above should help keep the connection maintained for longer periods of time (hopefully months!) without the need to restart the container.
 

The work was non-trivial and, although extensively tested, it is possible I have introduced some bugs, so please keep an eye out for unexpected issues as I roll out this change (currently rolled out to SABnzbdVPN and PrivoxyVPN). If you see a new image released, it will include the new functionality.
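For the curious, the round-robin idea can be sketched like this (a minimal illustration only, NOT the actual image code):

```shell
#!/bin/bash
# Minimal sketch of a round-robin over endpoint IPs: move the IP that was just
# tried to the back of a comma-separated list, so the next reconnect attempt
# starts with a different server.
rotate_ips() {
  local ips="$1"
  local first="${ips%%,*}"   # IP that was just tried
  local rest="${ips#*,}"     # remaining IPs
  if [ "$first" = "$ips" ]; then
    echo "$ips"              # only one IP, nothing to rotate
  else
    echo "${rest},${first}"
  fi
}

rotate_ips "1.1.1.1,2.2.2.2,3.3.3.3"   # prints 2.2.2.2,3.3.3.3,1.1.1.1
```

In the real image the rotated list would then be written back into /etc/hosts against the endpoint hostname; the sketch only shows the rotation itself.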

Link to comment

Hi @binhex,

 

I believe I've broken something with my instance, and it is causing the Unraid web UI to become unstable. After a short time of running, the Deluge WebUI no longer responds, and the container fails to exit even with a `docker kill` command.

 

I did recently change my downloads folder from a 1TB HDD to an 8TB HDD. I ended up using the same disk-share name as the previous drive. That's the only change I've made recently. All my files loaded up fine and started seeding, so I don't think that is it?

 

Anyways, here are some logs from my system and the container itself.

 

System Log

Quote

Jun  9 10:30:50 NAS kernel: microcode: microcode updated early to revision 0xf0, date = 2021-11-12
Jun  9 10:30:50 NAS kernel: Linux version 5.19.17-Unraid (root@Develop) (gcc (GCC) 12.2.0, GNU ld version 2.39-slack151) #2 SMP PREEMPT_DYNAMIC Wed Nov 2 11:54:15 PDT 2022
Jun  9 10:30:50 NAS kernel: Command line: BOOT_IMAGE=/bzimage initrd=/bzroot
Jun  9 10:30:50 NAS kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun  9 10:30:50 NAS kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun  9 10:30:50 NAS kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun  9 10:30:50 NAS kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jun  9 10:30:50 NAS kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jun  9 10:30:50 NAS kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jun  9 10:30:50 NAS kernel: x86/fpu: xstate_offset[3]:  832, xstate_sizes[3]:   64
Jun  9 10:30:50 NAS kernel: x86/fpu: xstate_offset[4]:  896, xstate_sizes[4]:   64
Jun  9 10:30:50 NAS kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Jun  9 10:30:50 NAS kernel: signal: max sigframe size: 2032
Jun  9 10:30:50 NAS kernel: BIOS-provided physical RAM map:
Jun  9 10:30:50 NAS kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000057fff] usable
Jun  9 10:30:50 NAS kernel: BIOS-e820: [mem 0x0000000000058000-0x0000000000058fff] reserved
Jun  9 10:30:50 NAS kernel: BIOS-e820: [mem 0x0000000000059000-0x000000000009efff] usable
Jun  9 10:30:50 NAS kernel: BIOS-e820: [mem 0x000000000009f000-0x00000000000fffff] reserved
Jun  9 10:30:50 NAS kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000b2f78fff] usable
Jun  9 10:30:50 NAS kernel: BIOS-e820: [mem 0x00000000b2f79000-0x00000000b2f79fff] ACPI NVS
Jun  9 10:30:50 NAS kernel: BIOS-e820: [mem 0x00000000b2f7a000-0x00000000b2f7afff] reserved
Jun  9 10:30:50 NAS kernel: BIOS-e820: [mem 0x00000000b2f7b000-0x00000000b9ae0fff] usable
Jun  9 10:30:50 NAS kernel: BIOS-e820: [mem 0x00000000b9ae1000-0x00000000b9e28fff] reserved
Jun  9 10:30:50 NAS kernel: BIOS-e820: [mem 0x00000000b9e29000-0x00000000b9f60fff] usable
Jun  9 10:30:50 NAS kernel: BIOS-e820: [mem 0x00000000b9f61000-0x00000000ba646fff] ACPI NVS
Jun  9 10:30:50 NAS kernel: BIOS-e820: [mem 0x00000000ba647000-0x00000000baefefff] reserved
Jun  9 10:30:50 NAS kernel: BIOS-e820: [mem 0x00000000baeff000-0x00000000baefffff] usable
Jun  9 10:30:50 NAS kernel: BIOS-e820: [mem 0x00000000baf00000-0x00000000bfffffff] reserved
Jun  9 10:30:50 NAS kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jun  9 10:30:50 NAS kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jun  9 10:30:50 NAS kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jun  9 10:30:50 NAS kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jun  9 10:30:50 NAS kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jun  9 10:30:50 NAS kernel: BIOS-e820: [mem 0x0000000100000000-0x000000063effffff] usable
Jun  9 10:30:50 NAS kernel: NX (Execute Disable) protection: active
Jun  9 10:30:50 NAS kernel: e820: update [mem 0xa6801018-0xa680d857] usable ==> usable
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:50 NAS kernel: e820: update [mem 0xa67f0018-0xa6800057] usable ==> usable
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:50 NAS kernel: e820: update [mem 0xa67df018-0xa67efe57] usable ==> usable
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:50 NAS kernel: extended physical RAM map:
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x0000000000000000-0x0000000000057fff] usable
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x0000000000058000-0x0000000000058fff] reserved
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x0000000000059000-0x000000000009efff] usable
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x000000000009f000-0x00000000000fffff] reserved
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000a67df017] usable
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x00000000a67df018-0x00000000a67efe57] usable
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x00000000a67efe58-0x00000000a67f0017] usable
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x00000000a67f0018-0x00000000a6800057] usable
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x00000000a6800058-0x00000000a6801017] usable
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x00000000a6801018-0x00000000a680d857] usable
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x00000000a680d858-0x00000000b2f78fff] usable
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x00000000b2f79000-0x00000000b2f79fff] ACPI NVS
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x00000000b2f7a000-0x00000000b2f7afff] reserved
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x00000000b2f7b000-0x00000000b9ae0fff] usable
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x00000000b9ae1000-0x00000000b9e28fff] reserved
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x00000000b9e29000-0x00000000b9f60fff] usable
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x00000000b9f61000-0x00000000ba646fff] ACPI NVS
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x00000000ba647000-0x00000000baefefff] reserved
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x00000000baeff000-0x00000000baefffff] usable
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x00000000baf00000-0x00000000bfffffff] reserved
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jun  9 10:30:50 NAS kernel: reserve setup_data: [mem 0x0000000100000000-0x000000063effffff] usable
Jun  9 10:30:50 NAS kernel: efi: EFI v2.50 by American Megatrends
Jun  9 10:30:50 NAS kernel: efi: ACPI 2.0=0xb9f61000 ACPI=0xb9f61000 SMBIOS=0xbadce000 SMBIOS 3.0=0xbadcd000 ESRT=0xb87c29d8
Jun  9 10:30:50 NAS kernel: SMBIOS 3.0.0 present.
Jun  9 10:30:50 NAS kernel: DMI: Gigabyte Technology Co., Ltd. Z170XP-SLI/Z170XP-SLI-CF, BIOS F22c 12/01/2017
Jun  9 10:30:50 NAS kernel: tsc: Detected 3500.000 MHz processor
Jun  9 10:30:50 NAS kernel: tsc: Detected 3499.912 MHz TSC
Jun  9 10:30:50 NAS kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun  9 10:30:50 NAS kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun  9 10:30:50 NAS kernel: last_pfn = 0x63f000 max_arch_pfn = 0x400000000
Jun  9 10:30:50 NAS kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jun  9 10:30:50 NAS kernel: last_pfn = 0xbaf00 max_arch_pfn = 0x400000000
Jun  9 10:30:50 NAS kernel: found SMP MP-table at [mem 0x000fccf0-0x000fccff]
Jun  9 10:30:50 NAS kernel: esrt: Reserving ESRT space from 0x00000000b87c29d8 to 0x00000000b87c2a10.
Jun  9 10:30:50 NAS kernel: e820: update [mem 0xb87c2000-0xb87c2fff] usable ==> reserved
Jun  9 10:30:50 NAS kernel: Using GB pages for direct mapping
Jun  9 10:30:50 NAS kernel: Secure boot disabled
Jun  9 10:30:50 NAS kernel: RAMDISK: [mem 0x76242000-0x7fffffff]
Jun  9 10:30:50 NAS kernel: ACPI: Early table checksum verification disabled
Jun  9 10:30:50 NAS kernel: ACPI: RSDP 0x00000000B9F61000 000024 (v02 ALASKA)
Jun  9 10:30:50 NAS kernel: ACPI: XSDT 0x00000000B9F610A8 0000CC (v01 ALASKA A M I    01072009 AMI  00010013)
Jun  9 10:30:50 NAS kernel: ACPI: FACP 0x00000000B9F896F8 000114 (v06 ALASKA A M I    01072009 AMI  00010013)
Jun  9 10:30:50 NAS kernel: ACPI: DSDT 0x00000000B9F61208 0284EA (v02 ALASKA A M I    01072009 INTL 20160422)
Jun  9 10:30:50 NAS kernel: ACPI: FACS 0x00000000BA646C40 000040
Jun  9 10:30:50 NAS kernel: ACPI: APIC 0x00000000B9F89810 000084 (v03 ALASKA A M I    01072009 AMI  00010013)
Jun  9 10:30:50 NAS kernel: ACPI: FPDT 0x00000000B9F89898 000044 (v01 ALASKA A M I    01072009 AMI  00010013)
Jun  9 10:30:50 NAS kernel: ACPI: MCFG 0x00000000B9F898E0 00003C (v01 ALASKA A M I    01072009 MSFT 00000097)
Jun  9 10:30:50 NAS kernel: ACPI: FIDT 0x00000000B9F89920 00009C (v01 ALASKA A M I    01072009 AMI  00010013)
Jun  9 10:30:50 NAS kernel: ACPI: SSDT 0x00000000B9F899C0 003154 (v02 SaSsdt SaSsdt   00003000 INTL 20160422)
Jun  9 10:30:50 NAS kernel: ACPI: SSDT 0x00000000B9F8CB18 002544 (v02 PegSsd PegSsdt  00001000 INTL 20160422)
Jun  9 10:30:50 NAS kernel: ACPI: HPET 0x00000000B9F8F060 000038 (v01 INTEL  SKL      00000001 MSFT 0000005F)
Jun  9 10:30:50 NAS kernel: ACPI: SSDT 0x00000000B9F8F098 000E3B (v02 INTEL  Ther_Rvp 00001000 INTL 20160422)
Jun  9 10:30:50 NAS kernel: ACPI: SSDT 0x00000000B9F8FED8 002AD7 (v02 INTEL  xh_rvp10 00000000 INTL 20160422)
Jun  9 10:30:50 NAS kernel: ACPI: UEFI 0x00000000B9F929B0 000042 (v01 INTEL  EDK2     00000002      01000013)
Jun  9 10:30:50 NAS kernel: ACPI: SSDT 0x00000000B9F929F8 000EDE (v02 CpuRef CpuSsdt  00003000 INTL 20160422)
Jun  9 10:30:50 NAS kernel: ACPI: LPIT 0x00000000B9F938D8 000094 (v01 INTEL  SKL      00000000 MSFT 0000005F)
Jun  9 10:30:50 NAS kernel: ACPI: WSMT 0x00000000B9F93970 000028 (v01 INTEL  SKL      00000000 MSFT 0000005F)
Jun  9 10:30:50 NAS kernel: ACPI: SSDT 0x00000000B9F93998 00029F (v02 INTEL  sensrhub 00000000 INTL 20160422)
Jun  9 10:30:50 NAS kernel: ACPI: SSDT 0x00000000B9F93C38 003002 (v02 INTEL  PtidDevc 00001000 INTL 20160422)
Jun  9 10:30:50 NAS kernel: ACPI: DBGP 0x00000000B9F96C40 000034 (v01 INTEL           00000002 MSFT 0000005F)
Jun  9 10:30:50 NAS kernel: ACPI: DBG2 0x00000000B9F96C78 000054 (v00 INTEL           00000002 MSFT 0000005F)
Jun  9 10:30:50 NAS kernel: ACPI: BGRT 0x00000000B9F96CD0 000038 (v01 ALASKA A M I    01072009 AMI  00010013)
Jun  9 10:30:50 NAS kernel: ACPI: DMAR 0x00000000B9F96D08 0000A8 (v01 INTEL  SKL      00000001 INTL 00000001)
Jun  9 10:30:50 NAS kernel: ACPI: BGRT 0x00000000B9F96DB0 000038 (v01 ALASKA A M I    01072009 AMI  00010013)
Jun  9 10:30:50 NAS kernel: ACPI: Reserving FACP table memory at [mem 0xb9f896f8-0xb9f8980b]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving DSDT table memory at [mem 0xb9f61208-0xb9f896f1]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving FACS table memory at [mem 0xba646c40-0xba646c7f]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving APIC table memory at [mem 0xb9f89810-0xb9f89893]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving FPDT table memory at [mem 0xb9f89898-0xb9f898db]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving MCFG table memory at [mem 0xb9f898e0-0xb9f8991b]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving FIDT table memory at [mem 0xb9f89920-0xb9f899bb]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving SSDT table memory at [mem 0xb9f899c0-0xb9f8cb13]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving SSDT table memory at [mem 0xb9f8cb18-0xb9f8f05b]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving HPET table memory at [mem 0xb9f8f060-0xb9f8f097]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving SSDT table memory at [mem 0xb9f8f098-0xb9f8fed2]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving SSDT table memory at [mem 0xb9f8fed8-0xb9f929ae]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving UEFI table memory at [mem 0xb9f929b0-0xb9f929f1]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving SSDT table memory at [mem 0xb9f929f8-0xb9f938d5]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving LPIT table memory at [mem 0xb9f938d8-0xb9f9396b]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving WSMT table memory at [mem 0xb9f93970-0xb9f93997]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving SSDT table memory at [mem 0xb9f93998-0xb9f93c36]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving SSDT table memory at [mem 0xb9f93c38-0xb9f96c39]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving DBGP table memory at [mem 0xb9f96c40-0xb9f96c73]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving DBG2 table memory at [mem 0xb9f96c78-0xb9f96ccb]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving BGRT table memory at [mem 0xb9f96cd0-0xb9f96d07]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving DMAR table memory at [mem 0xb9f96d08-0xb9f96daf]
Jun  9 10:30:50 NAS kernel: ACPI: Reserving BGRT table memory at [mem 0xb9f96db0-0xb9f96de7]
Jun  9 10:30:50 NAS kernel: No NUMA configuration found
Jun  9 10:30:50 NAS kernel: Faking a node at [mem 0x0000000000000000-0x000000063effffff]
Jun  9 10:30:50 NAS kernel: NODE_DATA(0) allocated [mem 0x63eff9000-0x63effcfff]
Jun  9 10:30:50 NAS kernel: Zone ranges:
Jun  9 10:30:50 NAS kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jun  9 10:30:50 NAS kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jun  9 10:30:50 NAS kernel:  Normal   [mem 0x0000000100000000-0x000000063effffff]
Jun  9 10:30:50 NAS kernel: Movable zone start for each node
Jun  9 10:30:50 NAS kernel: Early memory node ranges
Jun  9 10:30:50 NAS kernel:  node   0: [mem 0x0000000000001000-0x0000000000057fff]
Jun  9 10:30:50 NAS kernel:  node   0: [mem 0x0000000000059000-0x000000000009efff]
Jun  9 10:30:50 NAS kernel:  node   0: [mem 0x0000000000100000-0x00000000b2f78fff]
Jun  9 10:30:50 NAS kernel:  node   0: [mem 0x00000000b2f7b000-0x00000000b9ae0fff]
Jun  9 10:30:50 NAS kernel:  node   0: [mem 0x00000000b9e29000-0x00000000b9f60fff]
Jun  9 10:30:50 NAS kernel:  node   0: [mem 0x00000000baeff000-0x00000000baefffff]
Jun  9 10:30:50 NAS kernel:  node   0: [mem 0x0000000100000000-0x000000063effffff]
Jun  9 10:30:50 NAS kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000063effffff]
Jun  9 10:30:50 NAS kernel: On node 0, zone DMA: 1 pages in unavailable ranges
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:50 NAS kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jun  9 10:30:50 NAS kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jun  9 10:30:50 NAS kernel: On node 0, zone DMA32: 840 pages in unavailable ranges
Jun  9 10:30:50 NAS kernel: On node 0, zone DMA32: 3998 pages in unavailable ranges
Jun  9 10:30:50 NAS kernel: On node 0, zone Normal: 20736 pages in unavailable ranges
Jun  9 10:30:50 NAS kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Jun  9 10:30:50 NAS kernel: Reserving Intel graphics memory at [mem 0xbc000000-0xbfffffff]
Jun  9 10:30:50 NAS kernel: ACPI: PM-Timer IO Port: 0x1808
Jun  9 10:30:50 NAS kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jun  9 10:30:50 NAS kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jun  9 10:30:50 NAS kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jun  9 10:30:50 NAS kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jun  9 10:30:50 NAS kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jun  9 10:30:50 NAS kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jun  9 10:30:50 NAS kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun  9 10:30:50 NAS kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun  9 10:30:50 NAS kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jun  9 10:30:50 NAS kernel: e820: update [mem 0xb69f0000-0xb6a33fff] usable ==> reserved
Jun  9 10:30:50 NAS kernel: TSC deadline timer available
Jun  9 10:30:50 NAS kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jun  9 10:30:50 NAS kernel: [mem 0xc0000000-0xefffffff] available for PCI devices
Jun  9 10:30:50 NAS kernel: Booting paravirtualized kernel on bare hardware
Jun  9 10:30:50 NAS kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun  9 10:30:50 NAS kernel: setup_percpu: NR_CPUS:256 nr_cpumask_bits:256 nr_cpu_ids:4 nr_node_ids:1
Jun  9 10:30:50 NAS kernel: percpu: Embedded 55 pages/cpu s185960 r8192 d31128 u524288
Jun  9 10:30:50 NAS kernel: pcpu-alloc: s185960 r8192 d31128 u524288 alloc=1*2097152
Jun  9 10:30:50 NAS kernel: pcpu-alloc: [0] 0 1 2 3
Jun  9 10:30:50 NAS kernel: Fallback order for Node 0: 0
Jun  9 10:30:50 NAS kernel: Built 1 zonelists, mobility grouping on.  Total pages: 6163687
Jun  9 10:30:50 NAS kernel: Policy zone: Normal
Jun  9 10:30:50 NAS kernel: Kernel command line: BOOT_IMAGE=/bzimage initrd=/bzroot
Jun  9 10:30:50 NAS kernel: Unknown kernel command line parameters "BOOT_IMAGE=/bzimage", will be passed to user space.
Jun  9 10:30:50 NAS kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jun  9 10:30:50 NAS kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jun  9 10:30:50 NAS kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun  9 10:30:50 NAS kernel: Memory: 24217408K/25046740K available (12295K kernel code, 1681K rwdata, 3916K rodata, 1836K init, 1820K bss, 829076K reserved, 0K cma-reserved)
Jun  9 10:30:50 NAS kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jun  9 10:30:50 NAS kernel: Kernel/User page tables isolation: enabled
Jun  9 10:30:50 NAS kernel: ftrace: allocating 41888 entries in 164 pages
Jun  9 10:30:50 NAS kernel: ftrace: allocated 164 pages with 3 groups
Jun  9 10:30:50 NAS kernel: Dynamic Preempt: voluntary
Jun  9 10:30:50 NAS kernel: rcu: Preemptible hierarchical RCU implementation.
Jun  9 10:30:50 NAS kernel: rcu:     RCU event tracing is enabled.
Jun  9 10:30:50 NAS kernel: rcu:     RCU restricting CPUs from NR_CPUS=256 to nr_cpu_ids=4.
Jun  9 10:30:50 NAS kernel:     Trampoline variant of Tasks RCU enabled.
Jun  9 10:30:50 NAS kernel:     Rude variant of Tasks RCU enabled.
Jun  9 10:30:50 NAS kernel:     Tracing variant of Tasks RCU enabled.
Jun  9 10:30:50 NAS kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun  9 10:30:50 NAS kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jun  9 10:30:50 NAS kernel: NR_IRQS: 16640, nr_irqs: 1024, preallocated irqs: 16
Jun  9 10:30:50 NAS kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun  9 10:30:50 NAS kernel: Console: colour dummy device 80x25
Jun  9 10:30:50 NAS kernel: printk: console [tty0] enabled
Jun  9 10:30:50 NAS kernel: ACPI: Core revision 20220331
Jun  9 10:30:50 NAS kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Jun  9 10:30:50 NAS kernel: APIC: Switch to symmetric I/O mode setup
Jun  9 10:30:50 NAS kernel: DMAR: Host address width 39
Jun  9 10:30:50 NAS kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0
Jun  9 10:30:50 NAS kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 7e3ff0505e
Jun  9 10:30:50 NAS kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Jun  9 10:30:50 NAS kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Jun  9 10:30:50 NAS kernel: DMAR: RMRR base: 0x000000b9da3000 end: 0x000000b9dc2fff
Jun  9 10:30:50 NAS kernel: DMAR: RMRR base: 0x000000bb800000 end: 0x000000bfffffff
Jun  9 10:30:50 NAS kernel: DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
Jun  9 10:30:50 NAS kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Jun  9 10:30:50 NAS kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Jun  9 10:30:50 NAS kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Jun  9 10:30:50 NAS kernel: x2apic enabled
Jun  9 10:30:50 NAS kernel: Switched APIC routing to cluster x2apic.
Jun  9 10:30:50 NAS kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun  9 10:30:50 NAS kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3272fd97217, max_idle_ns: 440795241220 ns
Jun  9 10:30:50 NAS kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6999.82 BogoMIPS (lpj=3499912)
Jun  9 10:30:50 NAS kernel: pid_max: default: 32768 minimum: 301
Jun  9 10:30:50 NAS kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun  9 10:30:50 NAS kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun  9 10:30:50 NAS kernel: x86/cpu: SGX disabled by BIOS.
Jun  9 10:30:50 NAS kernel: CPU0: Thermal monitoring enabled (TM1)
Jun  9 10:30:50 NAS kernel: process: using mwait in idle threads
Jun  9 10:30:50 NAS kernel: Last level iTLB entries: 4KB 128, 2MB 8, 4MB 8
Jun  9 10:30:50 NAS kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jun  9 10:30:50 NAS kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun  9 10:30:50 NAS kernel: Spectre V2 : Mitigation: IBRS
Jun  9 10:30:50 NAS kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jun  9 10:30:50 NAS kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jun  9 10:30:50 NAS kernel: RETBleed: Mitigation: IBRS
Jun  9 10:30:50 NAS kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jun  9 10:30:50 NAS kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jun  9 10:30:50 NAS kernel: MDS: Mitigation: Clear CPU buffers
Jun  9 10:30:50 NAS kernel: TAA: Mitigation: TSX disabled
Jun  9 10:30:50 NAS kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jun  9 10:30:50 NAS kernel: SRBDS: Mitigation: Microcode
Jun  9 10:30:50 NAS kernel: Freeing SMP alternatives memory: 24K
Jun  9 10:30:50 NAS kernel: smpboot: CPU0: Intel(R) Core(TM) i5-6600K CPU @ 3.50GHz (family: 0x6, model: 0x5e, stepping: 0x3)
Jun  9 10:30:50 NAS kernel: cblist_init_generic: Setting adjustable number of callback queues.
Jun  9 10:30:50 NAS kernel: cblist_init_generic: Setting shift to 2 and lim to 1.
### [PREVIOUS LINE REPEATED 2 TIMES] ###
Jun  9 10:30:50 NAS kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Jun  9 10:30:50 NAS kernel: ... version:                4
Jun  9 10:30:50 NAS kernel: ... bit width:              48
Jun  9 10:30:50 NAS kernel: ... generic registers:      8
Jun  9 10:30:50 NAS kernel: ... value mask:             0000ffffffffffff
Jun  9 10:30:50 NAS kernel: ... max period:             00007fffffffffff
Jun  9 10:30:50 NAS kernel: ... fixed-purpose events:   3
Jun  9 10:30:50 NAS kernel: ... event mask:             00000007000000ff
Jun  9 10:30:50 NAS kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1053
Jun  9 10:30:50 NAS kernel: rcu: Hierarchical SRCU implementation.
Jun  9 10:30:50 NAS kernel: rcu:     Max phase no-delay instances is 400.
Jun  9 10:30:50 NAS kernel: smp: Bringing up secondary CPUs ...
Jun  9 10:30:50 NAS kernel: x86: Booting SMP configuration:
Jun  9 10:30:50 NAS kernel: .... node  #0, CPUs:      #1 #2 #3
Jun  9 10:30:50 NAS kernel: smp: Brought up 1 node, 4 CPUs
Jun  9 10:30:50 NAS kernel: smpboot: Max logical packages: 1
Jun  9 10:30:50 NAS kernel: smpboot: Total of 4 processors activated (27999.29 BogoMIPS)
Jun  9 10:30:50 NAS kernel: devtmpfs: initialized
Jun  9 10:30:50 NAS kernel: x86/mm: Memory block size: 128MB
Jun  9 10:30:50 NAS kernel: ACPI: PM: Registering ACPI NVS region [mem 0xb2f79000-0xb2f79fff] (4096 bytes)
Jun  9 10:30:50 NAS kernel: ACPI: PM: Registering ACPI NVS region [mem 0xb9f61000-0xba646fff] (7233536 bytes)
Jun  9 10:30:50 NAS kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun  9 10:30:50 NAS kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jun  9 10:30:50 NAS kernel: pinctrl core: initialized pinctrl subsystem
Jun  9 10:30:50 NAS kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun  9 10:30:50 NAS kernel: thermal_sys: Registered thermal governor 'fair_share'
Jun  9 10:30:50 NAS kernel: thermal_sys: Registered thermal governor 'bang_bang'
Jun  9 10:30:50 NAS kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun  9 10:30:50 NAS kernel: thermal_sys: Registered thermal governor 'user_space'
Jun  9 10:30:50 NAS kernel: cpuidle: using governor ladder
Jun  9 10:30:50 NAS kernel: cpuidle: using governor menu
Jun  9 10:30:50 NAS kernel: HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB
Jun  9 10:30:50 NAS kernel: ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
Jun  9 10:30:50 NAS kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
Jun  9 10:30:50 NAS kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820
Jun  9 10:30:50 NAS kernel: PCI: Using configuration type 1 for base access
Jun  9 10:30:50 NAS kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun  9 10:30:50 NAS kernel: HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB
Jun  9 10:30:50 NAS kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jun  9 10:30:50 NAS kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jun  9 10:30:50 NAS kernel: raid6: avx2x4   gen() 37432 MB/s
Jun  9 10:30:50 NAS kernel: raid6: avx2x2   gen() 31926 MB/s
Jun  9 10:30:50 NAS kernel: raid6: avx2x1   gen() 29627 MB/s
Jun  9 10:30:50 NAS kernel: raid6: using algorithm avx2x4 gen() 37432 MB/s
Jun  9 10:30:50 NAS kernel: raid6: .... xor() 18700 MB/s, rmw enabled
Jun  9 10:30:50 NAS kernel: raid6: using avx2x2 recovery algorithm
Jun  9 10:30:50 NAS kernel: ACPI: Added _OSI(Module Device)
Jun  9 10:30:50 NAS kernel: ACPI: Added _OSI(Processor Device)
Jun  9 10:30:50 NAS kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jun  9 10:30:50 NAS kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun  9 10:30:50 NAS kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jun  9 10:30:50 NAS kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jun  9 10:30:50 NAS kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jun  9 10:30:50 NAS kernel: ACPI: 8 ACPI AML tables successfully acquired and loaded
Jun  9 10:30:50 NAS kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Jun  9 10:30:50 NAS kernel: ACPI: Dynamic OEM Table Load:
Jun  9 10:30:50 NAS kernel: ACPI: SSDT 0xFFFF888100F42000 000738 (v02 PmRef  Cpu0Ist  00003000 INTL 20160422)
Jun  9 10:30:50 NAS kernel: ACPI: \_PR_.CPU0: _OSC native thermal LVT Acked
Jun  9 10:30:50 NAS kernel: ACPI: Dynamic OEM Table Load:
Jun  9 10:30:50 NAS kernel: ACPI: SSDT 0xFFFF888100F3D000 0003FF (v02 PmRef  Cpu0Cst  00003001 INTL 20160422)
Jun  9 10:30:50 NAS kernel: ACPI: Dynamic OEM Table Load:
Jun  9 10:30:50 NAS kernel: ACPI: SSDT 0xFFFF888100F43000 00065C (v02 PmRef  ApIst    00003000 INTL 20160422)
Jun  9 10:30:50 NAS kernel: ACPI: Dynamic OEM Table Load:
Jun  9 10:30:50 NAS kernel: ACPI: SSDT 0xFFFF888101071800 00018A (v02 PmRef  ApCst    00003000 INTL 20160422)
Jun  9 10:30:50 NAS kernel: ACPI: Interpreter enabled
Jun  9 10:30:50 NAS kernel: ACPI: PM: (supports S0 S3 S5)
Jun  9 10:30:50 NAS kernel: ACPI: Using IOAPIC for interrupt routing
Jun  9 10:30:50 NAS kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun  9 10:30:50 NAS kernel: PCI: Using E820 reservations for host bridge windows
Jun  9 10:30:50 NAS kernel: ACPI: Enabled 6 GPEs in block 00 to 7F
Jun  9 10:30:50 NAS kernel: ACPI: PM: Power Resource [PG00]
Jun  9 10:30:50 NAS kernel: ACPI: PM: Power Resource [PG01]
Jun  9 10:30:50 NAS kernel: ACPI: PM: Power Resource [PG02]
Jun  9 10:30:50 NAS kernel: ACPI: PM: Power Resource [WRST]
### [PREVIOUS LINE REPEATED 19 TIMES] ###
Jun  9 10:30:50 NAS kernel: ACPI: PM: Power Resource [FN00]
Jun  9 10:30:50 NAS kernel: ACPI: PM: Power Resource [FN01]
Jun  9 10:30:50 NAS kernel: ACPI: PM: Power Resource [FN02]
Jun  9 10:30:50 NAS kernel: ACPI: PM: Power Resource [FN03]
Jun  9 10:30:50 NAS kernel: ACPI: PM: Power Resource [FN04]
Jun  9 10:30:50 NAS kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7e])
Jun  9 10:30:50 NAS kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jun  9 10:30:50 NAS kernel: acpi PNP0A08:00: _OSC: OS requested [PME AER PCIeCapability LTR]
Jun  9 10:30:50 NAS kernel: acpi PNP0A08:00: _OSC: platform willing to grant [PME AER PCIeCapability LTR]
Jun  9 10:30:50 NAS kernel: acpi PNP0A08:00: _OSC: platform retains control of PCIe features (AE_ERROR)
Jun  9 10:30:50 NAS kernel: PCI host bridge to bus 0000:00
Jun  9 10:30:50 NAS kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jun  9 10:30:50 NAS kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jun  9 10:30:50 NAS kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun  9 10:30:50 NAS kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xefffffff window]
Jun  9 10:30:50 NAS kernel: pci_bus 0000:00: root bus resource [mem 0xfd000000-0xfe7fffff window]
Jun  9 10:30:50 NAS kernel: pci_bus 0000:00: root bus resource [bus 00-7e]
Jun  9 10:30:50 NAS kernel: pci 0000:00:00.0: [8086:191f] type 00 class 0x060000
Jun  9 10:30:50 NAS kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Jun  9 10:30:50 NAS kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Jun  9 10:30:50 NAS kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Jun  9 10:30:50 NAS kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Jun  9 10:30:50 NAS kernel: pci 0000:00:02.0: [8086:1912] type 00 class 0x030000
Jun  9 10:30:50 NAS kernel: pci 0000:00:02.0: reg 0x10: [mem 0xee000000-0xeeffffff 64bit]
Jun  9 10:30:50 NAS kernel: pci 0000:00:02.0: reg 0x18: [mem 0xd0000000-0xdfffffff 64bit pref]
Jun  9 10:30:50 NAS kernel: pci 0000:00:02.0: reg 0x20: [io  0xf000-0xf03f]
Jun  9 10:30:50 NAS kernel: pci 0000:00:02.0: BAR 2: assigned to efifb
Jun  9 10:30:50 NAS kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jun  9 10:30:50 NAS kernel: pci 0000:00:14.0: [8086:a12f] type 00 class 0x0c0330
Jun  9 10:30:50 NAS kernel: pci 0000:00:14.0: reg 0x10: [mem 0xef330000-0xef33ffff 64bit]
Jun  9 10:30:50 NAS kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Jun  9 10:30:50 NAS kernel: pci 0000:00:16.0: [8086:a13a] type 00 class 0x078000
Jun  9 10:30:50 NAS kernel: pci 0000:00:16.0: reg 0x10: [mem 0xef34d000-0xef34dfff 64bit]
Jun  9 10:30:50 NAS kernel: pci 0000:00:16.0: PME# supported from D3hot
Jun  9 10:30:50 NAS kernel: pci 0000:00:17.0: [8086:a102] type 00 class 0x010601
Jun  9 10:30:50 NAS kernel: pci 0000:00:17.0: reg 0x10: [mem 0xef348000-0xef349fff]
Jun  9 10:30:50 NAS kernel: pci 0000:00:17.0: reg 0x14: [mem 0xef34c000-0xef34c0ff]
Jun  9 10:30:50 NAS kernel: pci 0000:00:17.0: reg 0x18: [io  0xf090-0xf097]
Jun  9 10:30:50 NAS kernel: pci 0000:00:17.0: reg 0x1c: [io  0xf080-0xf083]
Jun  9 10:30:50 NAS kernel: pci 0000:00:17.0: reg 0x20: [io  0xf060-0xf07f]
Jun  9 10:30:50 NAS kernel: pci 0000:00:17.0: reg 0x24: [mem 0xef34b000-0xef34b7ff]
Jun  9 10:30:50 NAS kernel: pci 0000:00:17.0: PME# supported from D3hot
Jun  9 10:30:50 NAS kernel: pci 0000:00:1b.0: [8086:a167] type 01 class 0x060400
Jun  9 10:30:50 NAS kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Jun  9 10:30:50 NAS kernel: pci 0000:00:1b.2: [8086:a169] type 01 class 0x060400
Jun  9 10:30:50 NAS kernel: pci 0000:00:1b.2: PME# supported from D0 D3hot D3cold
Jun  9 10:30:50 NAS kernel: pci 0000:00:1b.2: Intel SPT PCH root port ACS workaround enabled
Jun  9 10:30:50 NAS kernel: pci 0000:00:1b.3: [8086:a16a] type 01 class 0x060400
Jun  9 10:30:50 NAS kernel: pci 0000:00:1b.3: PME# supported from D0 D3hot D3cold
Jun  9 10:30:50 NAS kernel: pci 0000:00:1b.3: Intel SPT PCH root port ACS workaround enabled
Jun  9 10:30:50 NAS kernel: pci 0000:00:1c.0: [8086:a110] type 01 class 0x060400
Jun  9 10:30:50 NAS kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Jun  9 10:30:50 NAS kernel: pci 0000:00:1c.0: Intel SPT PCH root port ACS workaround enabled
Jun  9 10:30:50 NAS kernel: pci 0000:00:1c.4: [8086:a114] type 01 class 0x060400
Jun  9 10:30:50 NAS kernel: pci 0000:00:1c.4: PME# supported from D0 D3hot D3cold
Jun  9 10:30:50 NAS kernel: pci 0000:00:1c.4: Intel SPT PCH root port ACS workaround enabled
Jun  9 10:30:50 NAS kernel: pci 0000:00:1d.0: [8086:a118] type 01 class 0x060400
Jun  9 10:30:50 NAS kernel: pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold
Jun  9 10:30:50 NAS kernel: pci 0000:00:1d.0: Intel SPT PCH root port ACS workaround enabled
Jun  9 10:30:50 NAS kernel: pci 0000:00:1f.0: [8086:a145] type 00 class 0x060100
Jun  9 10:30:50 NAS kernel: pci 0000:00:1f.2: [8086:a121] type 00 class 0x058000
Jun  9 10:30:50 NAS kernel: pci 0000:00:1f.2: reg 0x10: [mem 0xef344000-0xef347fff]
Jun  9 10:30:50 NAS kernel: pci 0000:00:1f.3: [8086:a170] type 00 class 0x040300
Jun  9 10:30:50 NAS kernel: pci 0000:00:1f.3: reg 0x10: [mem 0xef340000-0xef343fff 64bit]
Jun  9 10:30:50 NAS kernel: pci 0000:00:1f.3: reg 0x20: [mem 0xef320000-0xef32ffff 64bit]
Jun  9 10:30:50 NAS kernel: pci 0000:00:1f.3: PME# supported from D3hot D3cold
Jun  9 10:30:50 NAS kernel: pci 0000:00:1f.4: [8086:a123] type 00 class 0x0c0500
Jun  9 10:30:50 NAS kernel: pci 0000:00:1f.4: reg 0x10: [mem 0xef34a000-0xef34a0ff 64bit]
Jun  9 10:30:50 NAS kernel: pci 0000:00:1f.4: reg 0x20: [io  0xf040-0xf05f]
Jun  9 10:30:50 NAS kernel: pci 0000:00:1f.6: [8086:15b8] type 00 class 0x020000
Jun  9 10:30:50 NAS kernel: pci 0000:00:1f.6: reg 0x10: [mem 0xef300000-0xef31ffff]
Jun  9 10:30:50 NAS kernel: pci 0000:00:1f.6: PME# supported from D0 D3hot D3cold
Jun  9 10:30:50 NAS kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jun  9 10:30:50 NAS kernel: pci 0000:02:00.0: [1000:0087] type 00 class 0x010700
Jun  9 10:30:50 NAS kernel: pci 0000:02:00.0: reg 0x10: [io  0xe000-0xe0ff]
Jun  9 10:30:50 NAS kernel: pci 0000:02:00.0: reg 0x14: [mem 0xef140000-0xef14ffff 64bit]
Jun  9 10:30:50 NAS kernel: pci 0000:02:00.0: reg 0x1c: [mem 0xef100000-0xef13ffff 64bit]
Jun  9 10:30:50 NAS kernel: pci 0000:02:00.0: reg 0x30: [mem 0xef000000-0xef0fffff pref]
Jun  9 10:30:50 NAS kernel: pci 0000:02:00.0: supports D1 D2
Jun  9 10:30:50 NAS kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Jun  9 10:30:50 NAS kernel: pci 0000:00:01.1:   bridge window [io  0xe000-0xefff]
Jun  9 10:30:50 NAS kernel: pci 0000:00:01.1:   bridge window [mem 0xef000000-0xef1fffff]
Jun  9 10:30:50 NAS kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Jun  9 10:30:50 NAS kernel: pci 0000:04:00.0: [1b21:1080] type 01 class 0x060400
Jun  9 10:30:50 NAS kernel: pci 0000:04:00.0: supports D1 D2
Jun  9 10:30:50 NAS kernel: pci 0000:04:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Jun  9 10:30:50 NAS kernel: pci 0000:00:1b.2: PCI bridge to [bus 04-05]
Jun  9 10:30:50 NAS kernel: pci_bus 0000:05: extended config space not accessible
Jun  9 10:30:50 NAS kernel: pci 0000:04:00.0: PCI bridge to [bus 05]
Jun  9 10:30:50 NAS kernel: pci 0000:00:1b.3: PCI bridge to [bus 06]
Jun  9 10:30:50 NAS kernel: pci 0000:07:00.0: [1b21:1242] type 00 class 0x0c0330
Jun  9 10:30:50 NAS kernel: pci 0000:07:00.0: reg 0x10: [mem 0xef200000-0xef207fff 64bit]
Jun  9 10:30:50 NAS kernel: pci 0000:07:00.0: enabling Extended Tags
Jun  9 10:30:50 NAS kernel: pci 0000:07:00.0: PME# supported from D3hot D3cold
Jun  9 10:30:50 NAS kernel: pci 0000:00:1c.0: PCI bridge to [bus 07]
Jun  9 10:30:50 NAS kernel: pci 0000:00:1c.0:   bridge window [mem 0xef200000-0xef2fffff]
Jun  9 10:30:50 NAS kernel: pci 0000:00:1c.4: PCI bridge to [bus 08]
Jun  9 10:30:50 NAS kernel: pci 0000:00:1d.0: PCI bridge to [bus 09]
Jun  9 10:30:50 NAS kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 11
Jun  9 10:30:50 NAS kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jun  9 10:30:50 NAS kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun  9 10:30:50 NAS kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jun  9 10:30:50 NAS kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 11
Jun  9 10:30:50 NAS kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 11
Jun  9 10:30:50 NAS kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jun  9 10:30:50 NAS kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jun  9 10:30:50 NAS kernel: iommu: Default domain type: Passthrough
Jun  9 10:30:50 NAS kernel: SCSI subsystem initialized
Jun  9 10:30:50 NAS kernel: libata version 3.00 loaded.
Jun  9 10:30:50 NAS kernel: ACPI: bus type USB registered
Jun  9 10:30:50 NAS kernel: usbcore: registered new interface driver usbfs
Jun  9 10:30:50 NAS kernel: usbcore: registered new interface driver hub
Jun  9 10:30:50 NAS kernel: usbcore: registered new device driver usb
Jun  9 10:30:50 NAS kernel: pps_core: LinuxPPS API ver. 1 registered
Jun  9 10:30:50 NAS kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <[email protected]>
Jun  9 10:30:50 NAS kernel: PTP clock support registered
Jun  9 10:30:50 NAS kernel: Registered efivars operations
Jun  9 10:30:50 NAS kernel: PCI: Using ACPI for IRQ routing
Jun  9 10:30:50 NAS kernel: PCI: pci_cache_line_size set to 64 bytes
Jun  9 10:30:50 NAS kernel: e820: reserve RAM buffer [mem 0x00058000-0x0005ffff]
Jun  9 10:30:50 NAS kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Jun  9 10:30:50 NAS kernel: e820: reserve RAM buffer [mem 0xa67df018-0xa7ffffff]
Jun  9 10:30:50 NAS kernel: e820: reserve RAM buffer [mem 0xa67f0018-0xa7ffffff]
Jun  9 10:30:50 NAS kernel: e820: reserve RAM buffer [mem 0xa6801018-0xa7ffffff]
Jun  9 10:30:50 NAS kernel: e820: reserve RAM buffer [mem 0xb2f79000-0xb3ffffff]
Jun  9 10:30:50 NAS kernel: e820: reserve RAM buffer [mem 0xb69f0000-0xb7ffffff]
Jun  9 10:30:50 NAS kernel: e820: reserve RAM buffer [mem 0xb87c2000-0xbbffffff]
Jun  9 10:30:50 NAS kernel: e820: reserve RAM buffer [mem 0xb9ae1000-0xbbffffff]
Jun  9 10:30:50 NAS kernel: e820: reserve RAM buffer [mem 0xb9f61000-0xbbffffff]
Jun  9 10:30:50 NAS kernel: e820: reserve RAM buffer [mem 0xbaf00000-0xbbffffff]
Jun  9 10:30:50 NAS kernel: e820: reserve RAM buffer [mem 0x63f000000-0x63fffffff]
Jun  9 10:30:50 NAS kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jun  9 10:30:50 NAS kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jun  9 10:30:50 NAS kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jun  9 10:30:50 NAS kernel: vgaarb: loaded
Jun  9 10:30:50 NAS kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jun  9 10:30:50 NAS kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter
Jun  9 10:30:50 NAS kernel: clocksource: Switched to clocksource tsc-early
Jun  9 10:30:50 NAS kernel: FS-Cache: Loaded
Jun  9 10:30:50 NAS kernel: pnp: PnP ACPI init
Jun  9 10:30:50 NAS kernel: system 00:00: [io  0x0a00-0x0a2f] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:00: [io  0x0a30-0x0a3f] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:00: [io  0x0a40-0x0a4f] has been reserved
Jun  9 10:30:50 NAS kernel: pnp 00:01: [dma 0 disabled]
Jun  9 10:30:50 NAS kernel: pnp 00:02: [dma 0 disabled]
Jun  9 10:30:50 NAS kernel: system 00:03: [io  0x0680-0x069f] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:03: [io  0xffff] has been reserved
### [PREVIOUS LINE REPEATED 2 TIMES] ###
Jun  9 10:30:50 NAS kernel: system 00:03: [io  0x1800-0x18fe] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:03: [io  0x164e-0x164f] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:04: [io  0x0800-0x087f] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:06: [io  0x1854-0x1857] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:07: [mem 0xfed10000-0xfed17fff] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:07: [mem 0xfed18000-0xfed18fff] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:07: [mem 0xfed19000-0xfed19fff] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:07: [mem 0xfed20000-0xfed3ffff] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:07: [mem 0xfed90000-0xfed93fff] could not be reserved
Jun  9 10:30:50 NAS kernel: system 00:07: [mem 0xfed45000-0xfed8ffff] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:07: [mem 0xfee00000-0xfeefffff] could not be reserved
Jun  9 10:30:50 NAS kernel: system 00:07: [mem 0xeffe0000-0xefffffff] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:08: [mem 0xfd000000-0xfdabffff] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:08: [mem 0xfdad0000-0xfdadffff] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:08: [mem 0xfdb00000-0xfdffffff] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:08: [mem 0xfe000000-0xfe01ffff] could not be reserved
Jun  9 10:30:50 NAS kernel: system 00:08: [mem 0xfe036000-0xfe03bfff] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:08: [mem 0xfe03d000-0xfe3fffff] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:08: [mem 0xfe410000-0xfe7fffff] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:09: [io  0xff00-0xfffe] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:0a: [mem 0xfdaf0000-0xfdafffff] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:0a: [mem 0xfdae0000-0xfdaeffff] has been reserved
Jun  9 10:30:50 NAS kernel: system 00:0a: [mem 0xfdac0000-0xfdacffff] has been reserved
Jun  9 10:30:50 NAS kernel: pnp: PnP ACPI: found 11 devices
Jun  9 10:30:50 NAS kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun  9 10:30:50 NAS kernel: NET: Registered PF_INET protocol family
Jun  9 10:30:50 NAS kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun  9 10:30:50 NAS kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Jun  9 10:30:50 NAS kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun  9 10:30:50 NAS kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun  9 10:30:50 NAS kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jun  9 10:30:50 NAS kernel: TCP: Hash tables configured (established 262144 bind 65536)
Jun  9 10:30:50 NAS kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jun  9 10:30:50 NAS kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jun  9 10:30:50 NAS kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jun  9 10:30:50 NAS kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Jun  9 10:30:50 NAS kernel: pci 0000:00:01.1:   bridge window [io  0xe000-0xefff]
Jun  9 10:30:50 NAS kernel: pci 0000:00:01.1:   bridge window [mem 0xef000000-0xef1fffff]
Jun  9 10:30:50 NAS kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Jun  9 10:30:50 NAS kernel: pci 0000:04:00.0: PCI bridge to [bus 05]
Jun  9 10:30:50 NAS kernel: pci 0000:00:1b.2: PCI bridge to [bus 04-05]
Jun  9 10:30:50 NAS kernel: pci 0000:00:1b.3: PCI bridge to [bus 06]
Jun  9 10:30:50 NAS kernel: pci 0000:00:1c.0: PCI bridge to [bus 07]
Jun  9 10:30:50 NAS kernel: pci 0000:00:1c.0:   bridge window [mem 0xef200000-0xef2fffff]
Jun  9 10:30:50 NAS kernel: pci 0000:00:1c.4: PCI bridge to [bus 08]
Jun  9 10:30:50 NAS kernel: pci 0000:00:1d.0: PCI bridge to [bus 09]
Jun  9 10:30:50 NAS kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jun  9 10:30:50 NAS kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jun  9 10:30:50 NAS kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jun  9 10:30:50 NAS kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xefffffff window]
Jun  9 10:30:50 NAS kernel: pci_bus 0000:00: resource 8 [mem 0xfd000000-0xfe7fffff window]
Jun  9 10:30:50 NAS kernel: pci_bus 0000:02: resource 0 [io  0xe000-0xefff]
Jun  9 10:30:50 NAS kernel: pci_bus 0000:02: resource 1 [mem 0xef000000-0xef1fffff]
Jun  9 10:30:50 NAS kernel: pci_bus 0000:07: resource 1 [mem 0xef200000-0xef2fffff]
Jun  9 10:30:50 NAS kernel: pci 0000:04:00.0: Disabling ASPM L0s/L1
Jun  9 10:30:50 NAS kernel: pci 0000:04:00.0: can't disable ASPM; OS doesn't have ASPM control
Jun  9 10:30:50 NAS kernel: PCI: CLS 64 bytes, default 64
Jun  9 10:30:50 NAS kernel: DMAR: No ATSR found
Jun  9 10:30:50 NAS kernel: DMAR: No SATC found
Jun  9 10:30:50 NAS kernel: DMAR: IOMMU feature fl1gp_support inconsistent
Jun  9 10:30:50 NAS kernel: DMAR: IOMMU feature pgsel_inv inconsistent
Jun  9 10:30:50 NAS kernel: DMAR: IOMMU feature nwfs inconsistent
Jun  9 10:30:50 NAS kernel: DMAR: IOMMU feature eafs inconsistent
Jun  9 10:30:50 NAS kernel: DMAR: IOMMU feature prs inconsistent
Jun  9 10:30:50 NAS kernel: DMAR: IOMMU feature nest inconsistent
Jun  9 10:30:50 NAS kernel: DMAR: IOMMU feature mts inconsistent
Jun  9 10:30:50 NAS kernel: DMAR: IOMMU feature sc_support inconsistent
Jun  9 10:30:50 NAS kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent
Jun  9 10:30:50 NAS kernel: DMAR: dmar0: Using Queued invalidation
Jun  9 10:30:50 NAS kernel: DMAR: dmar1: Using Queued invalidation
Jun  9 10:30:50 NAS kernel: Unpacking initramfs...
Jun  9 10:30:50 NAS kernel: pci 0000:00:00.0: Adding to iommu group 0
Jun  9 10:30:50 NAS kernel: pci 0000:00:01.0: Adding to iommu group 1
Jun  9 10:30:50 NAS kernel: pci 0000:00:01.1: Adding to iommu group 1
Jun  9 10:30:50 NAS kernel: pci 0000:00:02.0: Adding to iommu group 2
Jun  9 10:30:50 NAS kernel: pci 0000:00:14.0: Adding to iommu group 3
Jun  9 10:30:50 NAS kernel: pci 0000:00:16.0: Adding to iommu group 4
Jun  9 10:30:50 NAS kernel: pci 0000:00:17.0: Adding to iommu group 5
Jun  9 10:30:50 NAS kernel: pci 0000:00:1b.0: Adding to iommu group 6
Jun  9 10:30:50 NAS kernel: pci 0000:00:1b.2: Adding to iommu group 7
Jun  9 10:30:50 NAS kernel: pci 0000:00:1b.3: Adding to iommu group 8
Jun  9 10:30:50 NAS kernel: pci 0000:00:1c.0: Adding to iommu group 9
Jun  9 10:30:50 NAS kernel: pci 0000:00:1c.4: Adding to iommu group 10
Jun  9 10:30:50 NAS kernel: pci 0000:00:1d.0: Adding to iommu group 11
Jun  9 10:30:50 NAS kernel: pci 0000:00:1f.0: Adding to iommu group 12
Jun  9 10:30:50 NAS kernel: pci 0000:00:1f.2: Adding to iommu group 12
Jun  9 10:30:50 NAS kernel: pci 0000:00:1f.3: Adding to iommu group 12
Jun  9 10:30:50 NAS kernel: pci 0000:00:1f.4: Adding to iommu group 12
Jun  9 10:30:50 NAS kernel: pci 0000:00:1f.6: Adding to iommu group 13
Jun  9 10:30:50 NAS kernel: pci 0000:02:00.0: Adding to iommu group 1
Jun  9 10:30:50 NAS kernel: pci 0000:04:00.0: Adding to iommu group 14
Jun  9 10:30:50 NAS kernel: pci 0000:07:00.0: Adding to iommu group 15
Jun  9 10:30:50 NAS kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Jun  9 10:30:50 NAS kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jun  9 10:30:50 NAS kernel: software IO TLB: mapped [mem 0x00000000ada51000-0x00000000b1a51000] (64MB)
Jun  9 10:30:50 NAS kernel: workingset: timestamp_bits=40 max_order=23 bucket_order=0
Jun  9 10:30:50 NAS kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun  9 10:30:50 NAS kernel: fuse: init (API version 7.36)
Jun  9 10:30:50 NAS kernel: xor: automatically using best checksumming function   avx
Jun  9 10:30:50 NAS kernel: Key type asymmetric registered
Jun  9 10:30:50 NAS kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jun  9 10:30:50 NAS kernel: io scheduler mq-deadline registered
Jun  9 10:30:50 NAS kernel: io scheduler kyber registered
Jun  9 10:30:50 NAS kernel: io scheduler bfq registered
Jun  9 10:30:50 NAS kernel: IPMI message handler: version 39.2
Jun  9 10:30:50 NAS kernel: Serial: 8250/16550 driver, 1 ports, IRQ sharing disabled
Jun  9 10:30:50 NAS kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun  9 10:30:50 NAS kernel: tsc: Refined TSC clocksource calibration: 3503.999 MHz
Jun  9 10:30:50 NAS kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3282124c47b, max_idle_ns: 440795239402 ns
Jun  9 10:30:50 NAS kernel: clocksource: Switched to clocksource tsc
Jun  9 10:30:50 NAS kernel: Freeing initrd memory: 161528K
Jun  9 10:30:50 NAS kernel: lp: driver loaded but no devices found
Jun  9 10:30:50 NAS kernel: Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
Jun  9 10:30:50 NAS kernel: AMD-Vi: AMD IOMMUv2 functionality not available on this system - This is not a bug.
Jun  9 10:30:50 NAS kernel: parport_pc 00:01: reported by Plug and Play ACPI
Jun  9 10:30:50 NAS kernel: parport0: PC-style at 0x378, irq 5 [PCSPP(,...)]
Jun  9 10:30:50 NAS kernel: lp0: using parport0 (interrupt-driven).
Jun  9 10:30:50 NAS kernel: loop: module loaded
Jun  9 10:30:50 NAS kernel: Rounding down aligned max_sectors from 4294967295 to 4294967288
Jun  9 10:30:50 NAS kernel: db_root: cannot open: /etc/target
Jun  9 10:30:50 NAS kernel: VFIO - User Level meta-driver version: 0.3
Jun  9 10:30:50 NAS kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Jun  9 10:30:50 NAS kernel: ehci-pci: EHCI PCI platform driver
Jun  9 10:30:50 NAS kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Jun  9 10:30:50 NAS kernel: ohci-pci: OHCI PCI platform driver
Jun  9 10:30:50 NAS kernel: uhci_hcd: USB Universal Host Controller Interface driver
Jun  9 10:30:50 NAS kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Jun  9 10:30:50 NAS kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Jun  9 10:30:50 NAS kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x100 quirks 0x0000000001109810
Jun  9 10:30:50 NAS kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Jun  9 10:30:50 NAS kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Jun  9 10:30:50 NAS kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.0 SuperSpeed
Jun  9 10:30:50 NAS kernel: hub 1-0:1.0: USB hub found
Jun  9 10:30:50 NAS kernel: hub 1-0:1.0: 16 ports detected
Jun  9 10:30:50 NAS kernel: hub 2-0:1.0: USB hub found
Jun  9 10:30:50 NAS kernel: hub 2-0:1.0: 10 ports detected
Jun  9 10:30:50 NAS kernel: xhci_hcd 0000:07:00.0: xHCI Host Controller
Jun  9 10:30:50 NAS kernel: xhci_hcd 0000:07:00.0: new USB bus registered, assigned bus number 3
Jun  9 10:30:50 NAS kernel: xhci_hcd 0000:07:00.0: hcc params 0x0200eec0 hci version 0x110 quirks 0x0000000000800010
Jun  9 10:30:50 NAS kernel: xhci_hcd 0000:07:00.0: xHCI Host Controller
Jun  9 10:30:50 NAS kernel: xhci_hcd 0000:07:00.0: new USB bus registered, assigned bus number 4
Jun  9 10:30:50 NAS kernel: xhci_hcd 0000:07:00.0: Host supports USB 3.1 Enhanced SuperSpeed
Jun  9 10:30:50 NAS kernel: hub 3-0:1.0: USB hub found
Jun  9 10:30:50 NAS kernel: hub 3-0:1.0: 2 ports detected
Jun  9 10:30:50 NAS kernel: usb usb4: We don't know the algorithms for LPM for this host, disabling LPM.
Jun  9 10:30:50 NAS kernel: hub 4-0:1.0: USB hub found
Jun  9 10:30:50 NAS kernel: hub 4-0:1.0: 2 ports detected
Jun  9 10:30:50 NAS kernel: usbcore: registered new interface driver usb-storage
Jun  9 10:30:50 NAS kernel: i8042: PNP: No PS/2 controller found.
Jun  9 10:30:50 NAS kernel: mousedev: PS/2 mouse device common for all mice
Jun  9 10:30:50 NAS kernel: usbcore: registered new interface driver synaptics_usb
Jun  9 10:30:50 NAS kernel: input: PC Speaker as /devices/platform/pcspkr/input/input0
Jun  9 10:30:50 NAS kernel: rtc_cmos 00:05: RTC can wake from S4
Jun  9 10:30:50 NAS kernel: rtc_cmos 00:05: registered as rtc0
Jun  9 10:30:50 NAS kernel: rtc_cmos 00:05: setting system clock to 2023-06-09T16:30:41 UTC (1686328241)
Jun  9 10:30:50 NAS kernel: rtc_cmos 00:05: alarms up to one month, y3k, 242 bytes nvram, hpet irqs
Jun  9 10:30:50 NAS kernel: intel_pstate: Intel P-state driver initializing
Jun  9 10:30:50 NAS kernel: intel_pstate: HWP enabled
Jun  9 10:30:50 NAS kernel: efifb: probing for efifb
Jun  9 10:30:50 NAS kernel: efifb: framebuffer at 0xd0000000, using 1876k, total 1875k
Jun  9 10:30:50 NAS kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jun  9 10:30:50 NAS kernel: efifb: scrolling: redraw
Jun  9 10:30:50 NAS kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jun  9 10:30:50 NAS kernel: Console: switching to colour frame buffer device 100x37
Jun  9 10:30:50 NAS kernel: fb0: EFI VGA frame buffer device
Jun  9 10:30:50 NAS kernel: pstore: Registered efi as persistent store backend
Jun  9 10:30:50 NAS kernel: hid: raw HID events driver (C) Jiri Kosina
Jun  9 10:30:50 NAS kernel: usbcore: registered new interface driver usbhid
Jun  9 10:30:50 NAS kernel: usbhid: USB HID core driver
Jun  9 10:30:50 NAS kernel: ipip: IPv4 and MPLS over IPv4 tunneling driver
Jun  9 10:30:50 NAS kernel: NET: Registered PF_INET6 protocol family
Jun  9 10:30:50 NAS kernel: Segment Routing with IPv6
Jun  9 10:30:50 NAS kernel: RPL Segment Routing with IPv6
Jun  9 10:30:50 NAS kernel: In-situ OAM (IOAM) with IPv6
Jun  9 10:30:50 NAS kernel: 9pnet: Installing 9P2000 support
Jun  9 10:30:50 NAS kernel: microcode: sig=0x506e3, pf=0x2, revision=0xf0
Jun  9 10:30:50 NAS kernel: microcode: Microcode Update Driver: v2.2.
Jun  9 10:30:50 NAS kernel: IPI shorthand broadcast: enabled
Jun  9 10:30:50 NAS kernel: sched_clock: Marking stable (12425950203, 3528501)->(12431259347, -1780643)
Jun  9 10:30:50 NAS kernel: registered taskstats version 1
Jun  9 10:30:50 NAS kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Jun  9 10:30:50 NAS kernel: pstore: Using crash dump compression: deflate
Jun  9 10:30:50 NAS kernel: usb 1-6: new full-speed USB device number 2 using xhci_hcd
Jun  9 10:30:50 NAS kernel: hid-generic 0003:0764:0501.0001: hiddev96,hidraw0: USB HID v1.10 Device [CPS CP1500PFCLCD] on usb-0000:00:14.0-6/input0
Jun  9 10:30:50 NAS kernel: usb 4-1: new SuperSpeed USB device number 2 using xhci_hcd
Jun  9 10:30:50 NAS kernel: usb-storage 4-1:1.0: USB Mass Storage device detected
Jun  9 10:30:50 NAS kernel: scsi host0: usb-storage 4-1:1.0
Jun  9 10:30:50 NAS kernel: scsi 0:0:0:0: Direct-Access     Samsung  Flash Drive      1100 PQ: 0 ANSI: 6
Jun  9 10:30:50 NAS kernel: sd 0:0:0:0: Attached scsi generic sg0 type 0
Jun  9 10:30:50 NAS kernel: sd 0:0:0:0: [sda] 125313283 512-byte logical blocks: (64.2 GB/59.8 GiB)
Jun  9 10:30:50 NAS kernel: sd 0:0:0:0: [sda] Write Protect is off
Jun  9 10:30:50 NAS kernel: sd 0:0:0:0: [sda] Mode Sense: 43 00 00 00
Jun  9 10:30:50 NAS kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun  9 10:30:50 NAS kernel: sda: sda1
Jun  9 10:30:50 NAS kernel: sd 0:0:0:0: [sda] Attached SCSI removable disk
Jun  9 10:30:50 NAS kernel: floppy0: no floppy controllers found
Jun  9 10:30:50 NAS kernel: Freeing unused kernel image (initmem) memory: 1836K
Jun  9 10:30:50 NAS kernel: Write protecting the kernel read-only data: 18432k
Jun  9 10:30:50 NAS kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jun  9 10:30:50 NAS kernel: Freeing unused kernel image (rodata/data gap) memory: 180K
Jun  9 10:30:50 NAS kernel: rodata_test: all tests were successful
Jun  9 10:30:50 NAS kernel: Run /init as init process
Jun  9 10:30:50 NAS kernel:  with arguments:
Jun  9 10:30:50 NAS kernel:    /init
Jun  9 10:30:50 NAS kernel:  with environment:
Jun  9 10:30:50 NAS kernel:    HOME=/
Jun  9 10:30:50 NAS kernel:    TERM=linux
Jun  9 10:30:50 NAS kernel:    BOOT_IMAGE=/bzimage
Jun  9 10:30:50 NAS kernel: random: crng init done
Jun  9 10:30:50 NAS kernel: loop0: detected capacity change from 0 to 241472
Jun  9 10:30:50 NAS kernel: loop1: detected capacity change from 0 to 39792
Jun  9 10:30:50 NAS kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun  9 10:30:50 NAS kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jun  9 10:30:50 NAS kernel: ACPI: button: Sleep Button [SLPB]
Jun  9 10:30:50 NAS kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input2
Jun  9 10:30:50 NAS kernel: ACPI: button: Power Button [PWRB]
Jun  9 10:30:50 NAS kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jun  9 10:30:50 NAS kernel: ACPI: button: Power Button [PWRF]
Jun  9 10:30:50 NAS kernel: thermal LNXTHERM:00: registered as thermal_zone0
Jun  9 10:30:50 NAS kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Jun  9 10:30:50 NAS kernel: thermal LNXTHERM:01: registered as thermal_zone1
Jun  9 10:30:50 NAS kernel: ACPI: thermal: Thermal Zone [TZ01] (30 C)
Jun  9 10:30:50 NAS kernel: ahci 0000:00:17.0: version 3.0
Jun  9 10:30:50 NAS kernel: Linux agpgart interface v0.103
Jun  9 10:30:50 NAS kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 6 ports 6 Gbps 0x3f impl SATA mode
Jun  9 10:30:50 NAS kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf pm led clo only pio slum part ems deso sadm sds apst
Jun  9 10:30:50 NAS kernel: scsi host1: ahci
Jun  9 10:30:50 NAS kernel: scsi host2: ahci
Jun  9 10:30:50 NAS kernel: scsi host3: ahci
Jun  9 10:30:50 NAS kernel: scsi host4: ahci
Jun  9 10:30:50 NAS kernel: scsi host5: ahci
Jun  9 10:30:50 NAS kernel: scsi host6: ahci
Jun  9 10:30:50 NAS kernel: ata1: SATA max UDMA/133 abar m2048@0xef34b000 port 0xef34b100 irq 136
Jun  9 10:30:50 NAS kernel: ata2: SATA max UDMA/133 abar m2048@0xef34b000 port 0xef34b180 irq 136
Jun  9 10:30:50 NAS kernel: ata3: SATA max UDMA/133 abar m2048@0xef34b000 port 0xef34b200 irq 136
Jun  9 10:30:50 NAS kernel: ata4: SATA max UDMA/133 abar m2048@0xef34b000 port 0xef34b280 irq 136
Jun  9 10:30:50 NAS kernel: ata5: SATA max UDMA/133 abar m2048@0xef34b000 port 0xef34b300 irq 136
Jun  9 10:30:50 NAS kernel: ata6: SATA max UDMA/133 abar m2048@0xef34b000 port 0xef34b380 irq 136
Jun  9 10:30:50 NAS kernel: i801_smbus 0000:00:1f.4: enabling device (0001 -> 0003)
Jun  9 10:30:50 NAS kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Jun  9 10:30:50 NAS kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Jun  9 10:30:50 NAS kernel: mpt3sas version 42.100.00.00 loaded
Jun  9 10:30:50 NAS kernel: mpt3sas 0000:02:00.0: can't disable ASPM; OS doesn't have ASPM control
Jun  9 10:30:50 NAS kernel: i2c i2c-0: 4/4 memory slots populated (from DMI)
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (24513432 kB)
Jun  9 10:30:50 NAS kernel: e1000e: Intel(R) PRO/1000 Network Driver
Jun  9 10:30:50 NAS kernel: i2c i2c-0: Successfully instantiated SPD at 0x50
Jun  9 10:30:50 NAS kernel: e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
Jun  9 10:30:50 NAS kernel: i2c i2c-0: Successfully instantiated SPD at 0x51
Jun  9 10:30:50 NAS kernel: e1000e 0000:00:1f.6: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
Jun  9 10:30:50 NAS kernel: i2c i2c-0: Successfully instantiated SPD at 0x52
Jun  9 10:30:50 NAS kernel: i2c i2c-0: Successfully instantiated SPD at 0x53
Jun  9 10:30:50 NAS kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer
Jun  9 10:30:50 NAS kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Jun  9 10:30:50 NAS kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Jun  9 10:30:50 NAS kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Jun  9 10:30:50 NAS kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules
Jun  9 10:30:50 NAS kernel: cryptd: max_cpu_qlen set to 1000
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: MSI-X vectors supported: 16
Jun  9 10:30:50 NAS kernel:      no of cores: 4, max_msix_vectors: -1
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0:  0 4 4
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: High IOPs queues : disabled
Jun  9 10:30:50 NAS kernel: mpt2sas0-msix0: PCI-MSI-X enabled: IRQ 138
Jun  9 10:30:50 NAS kernel: mpt2sas0-msix1: PCI-MSI-X enabled: IRQ 139
Jun  9 10:30:50 NAS kernel: mpt2sas0-msix2: PCI-MSI-X enabled: IRQ 140
Jun  9 10:30:50 NAS kernel: mpt2sas0-msix3: PCI-MSI-X enabled: IRQ 141
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: iomem(0x00000000ef140000), mapped(0x00000000e618d7ed), size(65536)
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: ioport(0x000000000000e000), size(256)
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: sending message unit reset !!
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: message unit reset: SUCCESS
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: scatter gather: sge_in_main_msg(1), sge_per_chain(9), sge_per_io(128), chains_per_io(15)
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: request pool(0x000000003d11bf0a) - dma(0x103e00000): depth(10261), frame_size(128), pool_size(1282 kB)
Jun  9 10:30:50 NAS kernel: e1000e 0000:00:1f.6 0000:00:1f.6 (uninitialized): registered PHC clock
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: sense pool(0x0000000095682a3a) - dma(0x105500000): depth(10000), element_size(96), pool_size (937 kB)
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: sense pool(0x0000000095682a3a)- dma(0x105500000): depth(10000),element_size(96), pool_size(0 kB)
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: reply pool(0x00000000e309a68e) - dma(0x104000000): depth(10325), frame_size(128), pool_size(1290 kB)
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: config page(0x0000000075673224) - dma(0x1054ef000): size(512)
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: Allocated physical memory: size(22945 kB)
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: Current Controller Queue Depth(9997),Max Controller Queue Depth(10240)
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: Scatter Gather Elements per IO(128)
Jun  9 10:30:50 NAS kernel: e1000e 0000:00:1f.6 eth0: (PCI Express:2.5GT/s:Width x1) 40:8d:5c:1c:6f:0d
Jun  9 10:30:50 NAS kernel: e1000e 0000:00:1f.6 eth0: Intel(R) PRO/1000 Network Connection
Jun  9 10:30:50 NAS kernel: e1000e 0000:00:1f.6 eth0: MAC: 12, PHY: 12, PBA No: FFFFFF-0FF
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: LSISAS2308: FWVersion(15.00.00.00), ChipRevision(0x05), BiosVersion(07.29.00.00)
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)
Jun  9 10:30:50 NAS kernel: scsi host7: Fusion MPT SAS Host
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: sending port enable !!
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: hba_port entry: 00000000903bed4b, port: 255 is added to hba_port list
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: host_add: handle(0x0001), sas_addr(0x500605b008c0c300), phys(8)
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: handle(0x9) sas_address(0x4433221100000000) port_type(0x1)
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: handle(0xa) sas_address(0x4433221101000000) port_type(0x1)
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: handle(0xd) sas_address(0x4433221102000000) port_type(0x1)
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: handle(0xb) sas_address(0x4433221103000000) port_type(0x1)
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: handle(0xc) sas_address(0x4433221104000000) port_type(0x1)
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: handle(0xe) sas_address(0x4433221105000000) port_type(0x1)
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: handle(0x10) sas_address(0x4433221106000000) port_type(0x1)
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: handle(0xf) sas_address(0x4433221107000000) port_type(0x1)
Jun  9 10:30:50 NAS kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jun  9 10:30:50 NAS kernel: ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jun  9 10:30:50 NAS kernel: ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jun  9 10:30:50 NAS kernel: ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jun  9 10:30:50 NAS kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jun  9 10:30:50 NAS kernel: ata6: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jun  9 10:30:50 NAS kernel: ata1.00: ATA-11: SATA SSD, SBFM11.2, max UDMA/133
Jun  9 10:30:50 NAS kernel: mpt2sas_cm0: port enable: SUCCESS
Jun  9 10:30:50 NAS kernel: ata3.00: ATA-9: WDC WD80EMAZ-00WJTA0, 83.H0A83, max UDMA/133
Jun  9 10:30:50 NAS kernel: ata4.00: ATA-9: WDC WD80EMAZ-00WJTA0, 83.H0A83, max UDMA/133
Jun  9 10:30:50 NAS kernel: ata1.00: 468862128 sectors, multi 16: LBA48 NCQ (depth 32), AA
Jun  9 10:30:50 NAS kernel: ata6.00: ATA-9: WDC WD80EMAZ-00WJTA0, 83.H0A83, max UDMA/133
Jun  9 10:30:50 NAS kernel: scsi 7:0:0:0: Direct-Access     ATA      WDC WD80EDAZ-11T 0A81 PQ: 0 ANSI: 6
Jun  9 10:30:50 NAS kernel: ata1.00: configured for UDMA/133
Jun  9 10:30:50 NAS kernel: scsi 7:0:0:0: SATA: handle(0x000b), sas_addr(0x4433221103000000), phy(3), device_name(0x5000cca0bed25ea1)
Jun  9 10:30:50 NAS kernel: scsi 1:0:0:0: Direct-Access     ATA      SATA SSD         11.2 PQ: 0 ANSI: 5
Jun  9 10:30:50 NAS kernel: scsi 7:0:0:0: enclosure logical id (0x500605b008c0c300), slot(0)
Jun  9 10:30:50 NAS kernel: scsi 7:0:0:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
Jun  9 10:30:50 NAS kernel: scsi 7:0:0:0: qdepth(32), tagged(1), scsi_level(7), cmd_que(1)
Jun  9 10:30:50 NAS kernel: sd 1:0:0:0: Attached scsi generic sg1 type 0
Jun  9 10:30:50 NAS kernel: sd 1:0:0:0: [sdb] 468862128 512-byte logical blocks: (240 GB/224 GiB)
Jun  9 10:30:50 NAS kernel: sd 1:0:0:0: [sdb] Write Protect is off
Jun  9 10:30:50 NAS kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Jun  9 10:30:50 NAS kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun  9 10:30:50 NAS kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 512 bytes
Jun  9 10:30:50 NAS kernel: sdb: sdb1
Jun  9 10:30:50 NAS kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
Jun  9 10:30:50 NAS kernel: sd 7:0:0:0: Attached scsi generic sg2 type 0
Jun  9 10:30:50 NAS kernel: end_device-7:0: add: handle(0x000b), sas_addr(0x4433221103000000)
Jun  9 10:30:50 NAS kernel: sd 7:0:0:0: [sdc] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
Jun  9 10:30:50 NAS kernel: sd 7:0:0:0: [sdc] 4096-byte physical blocks
Jun  9 10:30:50 NAS kernel: scsi 7:0:1:0: Direct-Access     ATA      WDC WD80EMAZ-00W 0A83 PQ: 0 ANSI: 6
Jun  9 10:30:50 NAS kernel: scsi 7:0:1:0: SATA: handle(0x0009), sas_addr(0x4433221100000000), phy(0), device_name(0x5000cca266ec2525)
Jun  9 10:30:50 NAS kernel: scsi 7:0:1:0: enclosure logical id (0x500605b008c0c300), slot(3)
Jun  9 10:30:50 NAS kernel: scsi 7:0:1:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
Jun  9 10:30:50 NAS kernel: scsi 7:0:1:0: qdepth(32), tagged(1), scsi_level(7), cmd_que(1)
Jun  9 10:30:50 NAS kernel: ata3.00: 15628053168 sectors, multi 16: LBA48 NCQ (depth 32), AA
Jun  9 10:30:50 NAS kernel: ata3.00: Features: NCQ-sndrcv NCQ-prio
Jun  9 10:30:50 NAS kernel: ata4.00: 15628053168 sectors, multi 16: LBA48 NCQ (depth 32), AA
Jun  9 10:30:50 NAS kernel: ata4.00: Features: NCQ-sndrcv NCQ-prio
Jun  9 10:30:50 NAS kernel: ata6.00: 15628053168 sectors, multi 16: LBA48 NCQ (depth 32), AA
Jun  9 10:30:50 NAS kernel: ata6.00: Features: NCQ-sndrcv NCQ-prio
Jun  9 10:30:50 NAS kernel: sd 7:0:1:0: Attached scsi generic sg3 type 0
Jun  9 10:30:50 NAS kernel: sd 7:0:0:0: [sdc] Write Protect is off
Jun  9 10:30:50 NAS kernel: end_device-7:1: add: handle(0x0009), sas_addr(0x4433221100000000)
Jun  9 10:30:50 NAS kernel: sd 7:0:1:0: [sdd] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
Jun  9 10:30:50 NAS kernel: sd 7:0:1:0: [sdd] 4096-byte physical blocks
Jun  9 10:30:50 NAS kernel: sd 7:0:0:0: [sdc] Mode Sense: 7f 00 10 08
Jun  9 10:30:50 NAS kernel: scsi 7:0:2:0: Direct-Access     ATA      WDC WD80EFBX-68A 0A85 PQ: 0 ANSI: 6
Jun  9 10:30:50 NAS kernel: sd 7:0:0:0: [sdc] Write cache: enabled, read cache: enabled, supports DPO and FUA
Jun  9 10:30:50 NAS kernel: scsi 7:0:2:0: SATA: handle(0x000a), sas_addr(0x4433221101000000), phy(1), device_name(0x5000cca0c3c48d88)
Jun  9 10:30:50 NAS kernel: scsi 7:0:2:0: enclosure logical id (0x500605b008c0c300), slot(2)
Jun  9 10:30:50 NAS kernel: scsi 7:0:2:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
Jun  9 10:30:50 NAS kernel: scsi 7:0:2:0: qdepth(32), tagged(1), scsi_level(7), cmd_que(1)
Jun  9 10:30:50 NAS kernel: sd 7:0:1:0: [sdd] Write Protect is off
Jun  9 10:30:50 NAS kernel: sd 7:0:1:0: [sdd] Mode Sense: 7f 00 10 08
Jun  9 10:30:50 NAS kernel: sd 7:0:1:0: [sdd] Write cache: enabled, read cache: enabled, supports DPO and FUA
Jun  9 10:30:50 NAS kernel: sd 7:0:2:0: Attached scsi generic sg4 type 0
Jun  9 10:30:50 NAS kernel: end_device-7:2: add: handle(0x000a), sas_addr(0x4433221101000000)
Jun  9 10:30:50 NAS kernel: sd 7:0:2:0: [sde] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
Jun  9 10:30:50 NAS kernel: sd 7:0:2:0: [sde] 4096-byte physical blocks
Jun  9 10:30:50 NAS kernel: scsi 7:0:3:0: Direct-Access     ATA      WDC WD80EFBX-68A 0A85 PQ: 0 ANSI: 6
Jun  9 10:30:50 NAS kernel: scsi 7:0:3:0: SATA: handle(0x000d), sas_addr(0x4433221102000000), phy(2), device_name(0x5000cca0c3c4ab5e)
Jun  9 10:30:50 NAS kernel: scsi 7:0:3:0: enclosure logical id (0x500605b008c0c300), slot(1)
Jun  9 10:30:50 NAS kernel: scsi 7:0:3:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
Jun  9 10:30:50 NAS kernel: scsi 7:0:3:0: qdepth(32), tagged(1), scsi_level(7), cmd_que(1)
Jun  9 10:30:50 NAS kernel: ata3.00: configured for UDMA/133
Jun  9 10:30:50 NAS kernel: sd 7:0:2:0: [sde] Write Protect is off
Jun  9 10:30:50 NAS kernel: sd 7:0:2:0: [sde] Mode Sense: 7f 00 10 08
Jun  9 10:30:50 NAS kernel: ata4.00: configured for UDMA/133
Jun  9 10:30:50 NAS kernel: sd 7:0:2:0: [sde] Write cache: enabled, read cache: enabled, supports DPO and FUA
Jun  9 10:30:50 NAS kernel: ata6.00: configured for UDMA/133
Jun  9 10:30:50 NAS kernel: sd 7:0:3:0: Attached scsi generic sg5 type 0
Jun  9 10:30:50 NAS kernel: sd 7:0:3:0: [sdf] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
Jun  9 10:30:50 NAS kernel: end_device-7:3: add: handle(0x000d), sas_addr(0x4433221102000000)
Jun  9 10:30:50 NAS kernel: sd 7:0:3:0: [sdf] 4096-byte physical blocks
Jun  9 10:30:50 NAS kernel: scsi 7:0:4:0: Direct-Access     ATA      WDC WD10EZEX-00B 1A01 PQ: 0 ANSI: 6
Jun  9 10:30:50 NAS kernel: scsi 7:0:4:0: SATA: handle(0x000c), sas_addr(0x4433221104000000), phy(4), device_name(0x50014ee2b6a4a13d)
Jun  9 10:30:50 NAS kernel: scsi 7:0:4:0: enclosure logical id (0x500605b008c0c300), slot(7)
Jun  9 10:30:50 NAS kernel: scsi 7:0:4:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
Jun  9 10:30:50 NAS kernel: scsi 7:0:4:0: qdepth(32), tagged(1), scsi_level(7), cmd_que(1)
Jun  9 10:30:50 NAS kernel: sd 7:0:3:0: [sdf] Write Protect is off
Jun  9 10:30:50 NAS kernel: sd 7:0:3:0: [sdf] Mode Sense: 7f 00 10 08
Jun  9 10:30:50 NAS kernel: sd 7:0:3:0: [sdf] Write cache: enabled, read cache: enabled, supports DPO and FUA
Jun  9 10:30:50 NAS kernel: ata2.00: ATA-11: WDC  WUH721816ALE6L4, PCGNW232, max UDMA/133
Jun  9 10:30:50 NAS kernel: scsi 7:0:4:0: Attached scsi generic sg6 type 0
Jun  9 10:30:50 NAS kernel: end_device-7:4: add: handle(0x000c), sas_addr(0x4433221104000000)
Jun  9 10:30:50 NAS kernel: sd 7:0:4:0: [sdg] 1953525168 512-byte logical blocks: (1.00 TB/932 GiB)
Jun  9 10:30:50 NAS kernel: sd 7:0:4:0: [sdg] 4096-byte physical blocks
Jun  9 10:30:50 NAS kernel: scsi 7:0:5:0: Direct-Access     ATA      WDC  WUH721816AL W232 PQ: 0 ANSI: 6
Jun  9 10:30:50 NAS kernel: scsi 7:0:5:0: SATA: handle(0x000e), sas_addr(0x4433221105000000), phy(5), device_name(0x5000cca2c1d0a19a)
Jun  9 10:30:50 NAS kernel: scsi 7:0:5:0: enclosure logical id (0x500605b008c0c300), slot(6)
Jun  9 10:30:50 NAS kernel: scsi 7:0:5:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
Jun  9 10:30:50 NAS kernel: scsi 7:0:5:0: qdepth(32), tagged(1), scsi_level(7), cmd_que(1)
Jun  9 10:30:50 NAS kernel: sde: sde1
Jun  9 10:30:50 NAS kernel: sd 7:0:2:0: [sde] Attached SCSI disk
Jun  9 10:30:50 NAS kernel: sd 7:0:4:0: [sdg] Write Protect is off
Jun  9 10:30:50 NAS kernel: sd 7:0:4:0: [sdg] Mode Sense: 7f 00 10 08
Jun  9 10:30:50 NAS kernel: sd 7:0:5:0: Attached scsi generic sg7 type 0
Jun  9 10:30:50 NAS kernel: sd 7:0:4:0: [sdg] Write cache: enabled, read cache: enabled, supports DPO and FUA
Jun  9 10:30:50 NAS kernel: sdf: sdf1
Jun  9 10:30:50 NAS kernel: sd 7:0:5:0: [sdh] 31251759104 512-byte logical blocks: (16.0 TB/14.6 TiB)
Jun  9 10:30:50 NAS kernel: sd 7:0:3:0: [sdf] Attached SCSI disk
Jun  9 10:30:50 NAS kernel: end_device-7:5: add: handle(0x000e), sas_addr(0x4433221105000000)
Jun  9 10:30:50 NAS kernel: scsi 7:0:6:0: Direct-Access     ATA      WDC  WUH721816AL W680 PQ: 0 ANSI: 6
Jun  9 10:30:50 NAS kernel: sd 7:0:5:0: [sdh] 4096-byte physical blocks
Jun  9 10:30:50 NAS kernel: scsi 7:0:6:0: SATA: handle(0x0010), sas_addr(0x4433221106000000), phy(6), device_name(0x5000cca295f2e2df)
Jun  9 10:30:50 NAS kernel: ata2.00: 31251759104 sectors, multi 16: LBA48 NCQ (depth 32), AA
Jun  9 10:30:50 NAS kernel: scsi 7:0:6:0: enclosure logical id (0x500605b008c0c300), slot(5)
Jun  9 10:30:50 NAS kernel: ata2.00: Features: NCQ-sndrcv NCQ-prio
Jun  9 10:30:50 NAS kernel: scsi 7:0:6:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
Jun  9 10:30:50 NAS kernel: sd 7:0:5:0: [sdh] Write Protect is off
Jun  9 10:30:50 NAS kernel: scsi 7:0:6:0: qdepth(32), tagged(1), scsi_level(7), cmd_que(1)
Jun  9 10:30:50 NAS kernel: sd 7:0:5:0: [sdh] Mode Sense: 7f 00 10 08
Jun  9 10:30:50 NAS kernel: sd 7:0:5:0: [sdh] Write cache: enabled, read cache: enabled, supports DPO and FUA
Jun  9 10:30:50 NAS kernel: sdd: sdd1
Jun  9 10:30:50 NAS kernel: sd 7:0:1:0: [sdd] Attached SCSI disk
Jun  9 10:30:50 NAS kernel: sd 7:0:6:0: Attached scsi generic sg8 type 0
Jun  9 10:30:50 NAS kernel: sd 7:0:6:0: [sdi] 31251759104 512-byte logical blocks: (16.0 TB/14.6 TiB)
Jun  9 10:30:50 NAS kernel: end_device-7:6: add: handle(0x0010), sas_addr(0x4433221106000000)
Jun  9 10:30:50 NAS kernel: sd 7:0:6:0: [sdi] 4096-byte physical blocks
Jun  9 10:30:50 NAS kernel: scsi 7:0:7:0: Direct-Access     ATA      WDC WD80EMAZ-00W 0A83 PQ: 0 ANSI: 6
Jun  9 10:30:50 NAS kernel: scsi 7:0:7:0: SATA: handle(0x000f), sas_addr(0x4433221107000000), phy(7), device_name(0x5000cca257f17d89)
Jun  9 10:30:50 NAS kernel: scsi 7:0:7:0: enclosure logical id (0x500605b008c0c300), slot(4)
Jun  9 10:30:50 NAS kernel: scsi 7:0:7:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
Jun  9 10:30:50 NAS kernel: scsi 7:0:7:0: qdepth(32), tagged(1), scsi_level(7), cmd_que(1)
Jun  9 10:30:50 NAS kernel: sd 7:0:6:0: [sdi] Write Protect is off
Jun  9 10:30:50 NAS kernel: sd 7:0:6:0: [sdi] Mode Sense: 7f 00 10 08
Jun  9 10:30:50 NAS kernel: sdc: sdc1
Jun  9 10:30:50 NAS kernel: sd 7:0:6:0: [sdi] Write cache: enabled, read cache: enabled, supports DPO and FUA
Jun  9 10:30:50 NAS kernel: sd 7:0:0:0: [sdc] Attached SCSI disk
Jun  9 10:30:50 NAS kernel: sd 7:0:7:0: Attached scsi generic sg9 type 0
Jun  9 10:30:50 NAS kernel: sd 7:0:7:0: [sdj] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
Jun  9 10:30:50 NAS kernel: end_device-7:7: add: handle(0x000f), sas_addr(0x4433221107000000)
Jun  9 10:30:50 NAS kernel: sd 7:0:7:0: [sdj] 4096-byte physical blocks
Jun  9 10:30:50 NAS kernel: ata2.00: configured for UDMA/133
Jun  9 10:30:50 NAS kernel: scsi 2:0:0:0: Direct-Access     ATA      WDC  WUH721816AL W232 PQ: 0 ANSI: 5
Jun  9 10:30:50 NAS kernel: sd 2:0:0:0: Attached scsi generic sg10 type 0
Jun  9 10:30:50 NAS kernel: sd 2:0:0:0: [sdk] 31251759104 512-byte logical blocks: (16.0 TB/14.6 TiB)
Jun  9 10:30:50 NAS kernel: scsi 3:0:0:0: Direct-Access     ATA      WDC WD80EMAZ-00W 0A83 PQ: 0 ANSI: 5
Jun  9 10:30:50 NAS kernel: sd 2:0:0:0: [sdk] 4096-byte physical blocks
Jun  9 10:30:50 NAS kernel: sd 7:0:7:0: [sdj] Write Protect is off
Jun  9 10:30:50 NAS kernel: sd 3:0:0:0: Attached scsi generic sg11 type 0
Jun  9 10:30:50 NAS kernel: sd 3:0:0:0: [sdl] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
Jun  9 10:30:50 NAS kernel: sd 3:0:0:0: [sdl] 4096-byte physical blocks
Jun  9 10:30:50 NAS kernel: sd 3:0:0:0: [sdl] Write Protect is off
Jun  9 10:30:50 NAS kernel: sd 3:0:0:0: [sdl] Mode Sense: 00 3a 00 00
Jun  9 10:30:50 NAS kernel: sd 3:0:0:0: [sdl] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun  9 10:30:50 NAS kernel: sd 3:0:0:0: [sdl] Preferred minimum I/O size 4096 bytes
Jun  9 10:30:50 NAS kernel: sd 7:0:7:0: [sdj] Mode Sense: 7f 00 10 08
Jun  9 10:30:50 NAS kernel: scsi 4:0:0:0: Direct-Access     ATA      WDC WD80EMAZ-00W 0A83 PQ: 0 ANSI: 5
Jun  9 10:30:50 NAS kernel: sd 2:0:0:0: [sdk] Write Protect is off
Jun  9 10:30:50 NAS kernel: sd 4:0:0:0: Attached scsi generic sg12 type 0
Jun  9 10:30:50 NAS kernel: sd 4:0:0:0: [sdm] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
Jun  9 10:30:50 NAS kernel: sd 4:0:0:0: [sdm] 4096-byte physical blocks
Jun  9 10:30:50 NAS kernel: sd 4:0:0:0: [sdm] Write Protect is off
Jun  9 10:30:50 NAS kernel: sd 4:0:0:0: [sdm] Mode Sense: 00 3a 00 00
Jun  9 10:30:50 NAS kernel: sd 4:0:0:0: [sdm] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun  9 10:30:50 NAS kernel: sd 4:0:0:0: [sdm] Preferred minimum I/O size 4096 bytes
Jun  9 10:30:50 NAS kernel: sd 2:0:0:0: [sdk] Mode Sense: 00 3a 00 00
Jun  9 10:30:50 NAS kernel: sd 2:0:0:0: [sdk] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun  9 10:30:50 NAS kernel: sd 2:0:0:0: [sdk] Preferred minimum I/O size 4096 bytes
Jun  9 10:30:50 NAS kernel: sd 7:0:7:0: [sdj] Write cache: enabled, read cache: enabled, supports DPO and FUA
Jun  9 10:30:50 NAS kernel: ata5.00: ATA-11: WDC  WUH721816ALE6L4, PCGNW232, max UDMA/133
Jun  9 10:30:50 NAS kernel: sdh: sdh1
Jun  9 10:30:50 NAS kernel: sd 7:0:5:0: [sdh] Attached SCSI disk
Jun  9 10:30:50 NAS kernel: sdi: sdi1
Jun  9 10:30:50 NAS kernel: sd 7:0:6:0: [sdi] Attached SCSI disk
Jun  9 10:30:50 NAS kernel: ata5.00: 31251759104 sectors, multi 16: LBA48 NCQ (depth 32), AA
Jun  9 10:30:50 NAS kernel: ata5.00: Features: NCQ-sndrcv NCQ-prio
Jun  9 10:30:50 NAS kernel: sdg: sdg1
Jun  9 10:30:50 NAS kernel: sd 7:0:4:0: [sdg] Attached SCSI disk
Jun  9 10:30:50 NAS kernel: sdj: sdj1
Jun  9 10:30:50 NAS kernel: sd 7:0:7:0: [sdj] Attached SCSI disk
Jun  9 10:30:50 NAS kernel: ata5.00: configured for UDMA/133
Jun  9 10:30:50 NAS kernel: scsi 5:0:0:0: Direct-Access     ATA      WDC  WUH721816AL W232 PQ: 0 ANSI: 5
Jun  9 10:30:50 NAS kernel: sd 5:0:0:0: Attached scsi generic sg13 type 0
Jun  9 10:30:50 NAS kernel: sd 5:0:0:0: [sdn] 31251759104 512-byte logical blocks: (16.0 TB/14.6 TiB)
Jun  9 10:30:50 NAS kernel: scsi 6:0:0:0: Direct-Access     ATA      WDC WD80EMAZ-00W 0A83 PQ: 0 ANSI: 5
Jun  9 10:30:50 NAS kernel: sd 5:0:0:0: [sdn] 4096-byte physical blocks
Jun  9 10:30:50 NAS kernel: sd 6:0:0:0: Attached scsi generic sg14 type 0
Jun  9 10:30:50 NAS kernel: sd 5:0:0:0: [sdn] Write Protect is off
Jun  9 10:30:50 NAS kernel: sd 6:0:0:0: [sdo] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
Jun  9 10:30:50 NAS kernel: sd 6:0:0:0: [sdo] 4096-byte physical blocks
Jun  9 10:30:50 NAS kernel: sd 6:0:0:0: [sdo] Write Protect is off
Jun  9 10:30:50 NAS kernel: sd 6:0:0:0: [sdo] Mode Sense: 00 3a 00 00
Jun  9 10:30:50 NAS kernel: sd 6:0:0:0: [sdo] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun  9 10:30:50 NAS kernel: sd 6:0:0:0: [sdo] Preferred minimum I/O size 4096 bytes
Jun  9 10:30:50 NAS kernel: sd 5:0:0:0: [sdn] Mode Sense: 00 3a 00 00
Jun  9 10:30:50 NAS kernel: sd 5:0:0:0: [sdn] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun  9 10:30:50 NAS kernel: sd 5:0:0:0: [sdn] Preferred minimum I/O size 4096 bytes
Jun  9 10:30:50 NAS kernel: sdk: sdk1
Jun  9 10:30:50 NAS kernel: sd 2:0:0:0: [sdk] Attached SCSI disk
Jun  9 10:30:50 NAS kernel: sdl: sdl1
Jun  9 10:30:50 NAS kernel: sdm: sdm1
Jun  9 10:30:50 NAS kernel: sd 3:0:0:0: [sdl] Attached SCSI disk
Jun  9 10:30:50 NAS kernel: sd 4:0:0:0: [sdm] Attached SCSI disk
Jun  9 10:30:50 NAS kernel: sdn: sdn1
Jun  9 10:30:50 NAS kernel: sd 5:0:0:0: [sdn] Attached SCSI disk
Jun  9 10:30:50 NAS kernel: sdo: sdo1
Jun  9 10:30:50 NAS kernel: sd 6:0:0:0: [sdo] Attached SCSI disk
Jun  9 10:30:50 NAS kernel: AVX2 version of gcm_enc/dec engaged.
Jun  9 10:30:50 NAS kernel: AES CTR mode by8 optimization enabled
Jun  9 10:30:50 NAS kernel: BTRFS: device fsid 8be132c0-9eb6-4a66-a93a-daafefc17c8f devid 1 transid 69257085 /dev/sdb1 scanned by udevd (630)
Jun  9 10:30:50 NAS kernel: ACPI: bus type drm_connector registered
Jun  9 10:30:50 NAS kernel: i915 0000:00:02.0: [drm] VT-d active for gfx access
Jun  9 10:30:50 NAS kernel: Console: switching to colour dummy device 80x25
Jun  9 10:30:50 NAS kernel: i915 0000:00:02.0: vgaarb: deactivate vga console
Jun  9 10:30:50 NAS kernel: i915 0000:00:02.0: [drm] Transparent Hugepage mode 'huge=within_size'
Jun  9 10:30:50 NAS kernel: i915 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
Jun  9 10:30:50 NAS kernel: i915 0000:00:02.0: [drm] Disabling framebuffer compression (FBC) to prevent screen flicker with VT-d enabled
Jun  9 10:30:50 NAS kernel: i915 0000:00:02.0: [drm] Finished loading DMC firmware i915/skl_dmc_ver1_27.bin (v1.27)
Jun  9 10:30:50 NAS kernel: [drm] Initialized i915 1.6.0 20201103 for 0000:00:02.0 on minor 0
Jun  9 10:30:50 NAS kernel: ACPI: video: Video Device [GFX0] (multi-head: yes  rom: no  post: no)
Jun  9 10:30:50 NAS kernel: input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/LNXVIDEO:00/input/input4
Jun  9 10:30:50 NAS kernel: i915 0000:00:02.0: [drm] Cannot find any crtc or sizes
### [PREVIOUS LINE REPEATED 2 TIMES] ###
Jun  9 10:30:50 NAS rc.inet1: ip -4 addr add 127.0.0.1/8 dev lo
Jun  9 10:30:50 NAS rc.inet1: ip -6 addr add ::1/128 dev lo
Jun  9 10:30:50 NAS rc.inet1: ip link set lo up
Jun  9 10:30:50 NAS kernel: MII link monitoring set to 100 ms
Jun  9 10:30:50 NAS rc.inet1: ip link add name bond0 type bond mode 1 miimon 100
Jun  9 10:30:50 NAS rc.inet1: ip link set bond0 up
Jun  9 10:30:50 NAS rc.inet1: ip link set eth0 master bond0 down
Jun  9 10:30:50 NAS kernel: bond0: (slave eth0): Enslaving as a backup interface with a down link
Jun  9 10:30:51 NAS  rsyslogd: [origin software="rsyslogd" swVersion="8.2102.0" x-pid="823" x-info="https://www.rsyslog.com"] start
Jun  9 10:30:53 NAS kernel: e1000e 0000:00:1f.6 eth0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Jun  9 10:30:53 NAS kernel: bond0: (slave eth0): link status definitely up, 1000 Mbps full duplex
Jun  9 10:30:53 NAS kernel: bond0: (slave eth0): making interface the new active one
Jun  9 10:30:53 NAS kernel: bond0: active interface up!
Jun  9 10:30:53 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready
Jun  9 10:30:53 NAS rc.inet1: ip link add name br0 type bridge stp_state 0 forward_delay 0
Jun  9 10:30:53 NAS kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun  9 10:30:53 NAS rc.inet1: ip link set br0 up
Jun  9 10:30:53 NAS rc.inet1: ip link set bond0 down
Jun  9 10:30:53 NAS rc.inet1: ip -4 addr flush dev bond0
Jun  9 10:30:53 NAS rc.inet1: ip link set bond0 promisc on master br0 up
Jun  9 10:30:53 NAS kernel: device bond0 entered promiscuous mode
Jun  9 10:30:53 NAS kernel: device eth0 entered promiscuous mode
Jun  9 10:30:53 NAS kernel: br0: port 1(bond0) entered blocking state
Jun  9 10:30:53 NAS kernel: br0: port 1(bond0) entered disabled state
Jun  9 10:30:53 NAS kernel: br0: port 1(bond0) entered blocking state
Jun  9 10:30:53 NAS kernel: br0: port 1(bond0) entered forwarding state
Jun  9 10:30:54 NAS rc.inet1: ip -4 addr add 192.168.1.8/255.255.255.0 dev br0
Jun  9 10:30:54 NAS rc.inet1: ip link set br0 up
Jun  9 10:30:54 NAS rc.inet1: ip -4 route add default via 192.168.1.1 dev br0
Jun  9 10:30:54 NAS kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jun  9 10:30:54 NAS kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <[email protected]>. All Rights Reserved.
Jun  9 10:30:54 NAS wireguard: Tunnel WireGuard-wg0 started
Jun  9 10:30:55 NAS  mcelog: failed to prefill DIMM database from DMI data
Jun  9 10:30:55 NAS  mcelog: Kernel does not support page offline interface
Jun  9 10:30:55 NAS elogind-daemon[1037]: New seat seat0.
Jun  9 10:30:55 NAS elogind-daemon[1037]: Watching system buttons on /dev/input/event3 (Power Button)
Jun  9 10:30:55 NAS elogind-daemon[1037]: Watching system buttons on /dev/input/event2 (Power Button)
Jun  9 10:30:55 NAS elogind-daemon[1037]: Watching system buttons on /dev/input/event1 (Sleep Button)
Jun  9 10:30:55 NAS  sshd[1060]: Server listening on 0.0.0.0 port 22.
Jun  9 10:30:55 NAS  sshd[1060]: Server listening on :: port 22.
Jun  9 10:30:55 NAS  ntpd[1069]: ntpd [email protected] Fri Jun  3 04:17:10 UTC 2022 (1): Starting
Jun  9 10:30:55 NAS  ntpd[1069]: Command line: /usr/sbin/ntpd -g -u ntp:ntp
Jun  9 10:30:55 NAS  ntpd[1069]: ----------------------------------------------------
Jun  9 10:30:55 NAS  ntpd[1069]: ntp-4 is maintained by Network Time Foundation,
Jun  9 10:30:55 NAS  ntpd[1069]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jun  9 10:30:55 NAS  ntpd[1069]: corporation.  Support and training for ntp-4 are
Jun  9 10:30:55 NAS  ntpd[1069]: available at https://www.nwtime.org/support
Jun  9 10:30:55 NAS  ntpd[1069]: ----------------------------------------------------
Jun  9 10:30:55 NAS  ntpd[1071]: proto: precision = 0.046 usec (-24)
Jun  9 10:30:55 NAS  ntpd[1071]: basedate set to 2022-05-22
Jun  9 10:30:55 NAS  ntpd[1071]: gps base set to 2022-05-22 (week 2211)
Jun  9 10:30:55 NAS  ntpd[1071]: Listen normally on 0 lo 127.0.0.1:123
Jun  9 10:30:55 NAS  ntpd[1071]: Listen normally on 1 br0 192.168.1.8:123
Jun  9 10:30:55 NAS  ntpd[1071]: Listen normally on 2 lo [::1]:123
Jun  9 10:30:55 NAS  ntpd[1071]: Listening on routing socket on fd #19 for interface updates
Jun  9 10:30:55 NAS  ntpd[1071]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:55 NAS  acpid: starting up with netlink and the input layer
Jun  9 10:30:55 NAS  acpid: 1 rule loaded
Jun  9 10:30:55 NAS  acpid: waiting for events: event logging is off
Jun  9 10:30:55 NAS  crond[1092]: /usr/sbin/crond 4.5 dillon's cron daemon, started with loglevel notice
Jun  9 10:30:55 NAS  smbd[1104]: [2023/06/09 10:30:55.214423,  0] ../../source3/smbd/server.c:1741(main)
Jun  9 10:30:55 NAS  smbd[1104]:   smbd version 4.17.3 started.
Jun  9 10:30:55 NAS  smbd[1104]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
Jun  9 10:30:55 NAS  nmbd[1107]: [2023/06/09 10:30:55.233755,  0] ../../source3/nmbd/nmbd.c:901(main)
Jun  9 10:30:55 NAS  nmbd[1107]:   nmbd version 4.17.3 started.
Jun  9 10:30:55 NAS  nmbd[1107]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
Jun  9 10:30:55 NAS  wsdd2[1120]: starting.
Jun  9 10:30:55 NAS  winbindd[1121]: [2023/06/09 10:30:55.320452,  0] ../../source3/winbindd/winbindd.c:1440(main)
Jun  9 10:30:55 NAS  winbindd[1121]:   winbindd version 4.17.3 started.
Jun  9 10:30:55 NAS  winbindd[1121]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
Jun  9 10:30:55 NAS  winbindd[1123]: [2023/06/09 10:30:55.324252,  0] ../../source3/winbindd/winbindd_cache.c:3116(initialize_winbindd_cache)
Jun  9 10:30:55 NAS  winbindd[1123]:   initialize_winbindd_cache: clearing cache and re-creating with version number 2
Jun  9 10:30:55 NAS root: Installing plugins
Jun  9 10:30:55 NAS root: plugin: installing: ca.backup2.plg
Jun  9 10:30:55 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:30:55 NAS root: plugin: running: anonymous
Jun  9 10:30:55 NAS root: plugin: checking: /boot/config/plugins/ca.backup2/ca.backup2-2023.03.28c-x86_64-1.txz - MD5
Jun  9 10:30:55 NAS root: plugin: skipping: /boot/config/plugins/ca.backup2/ca.backup2-2023.03.28c-x86_64-1.txz already exists
Jun  9 10:30:55 NAS root: plugin: running: /boot/config/plugins/ca.backup2/ca.backup2-2023.03.28c-x86_64-1.txz
Jun  9 10:30:55 NAS root:
Jun  9 10:30:55 NAS root: +==============================================================================
Jun  9 10:30:55 NAS root: | Installing new package /boot/config/plugins/ca.backup2/ca.backup2-2023.03.28c-x86_64-1.txz
Jun  9 10:30:55 NAS root: +==============================================================================
Jun  9 10:30:55 NAS root:
Jun  9 10:30:55 NAS root: Verifying package ca.backup2-2023.03.28c-x86_64-1.txz.
Jun  9 10:30:55 NAS root: Installing package ca.backup2-2023.03.28c-x86_64-1.txz:
Jun  9 10:30:55 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:30:55 NAS root: Package ca.backup2-2023.03.28c-x86_64-1.txz installed.
Jun  9 10:30:55 NAS root: plugin: running: anonymous
Jun  9 10:30:55 NAS root:
Jun  9 10:30:55 NAS root: ----------------------------------------------------
Jun  9 10:30:55 NAS root:  ca.backup2 has been installed.
Jun  9 10:30:55 NAS root:  2022-2023, Robin Kluth
Jun  9 10:30:55 NAS root:  2015-2022, Andrew Zawadzki
Jun  9 10:30:55 NAS root:  Version: 2023.03.28c
Jun  9 10:30:55 NAS root: ----------------------------------------------------
Jun  9 10:30:55 NAS root:
Jun  9 10:30:55 NAS root: plugin: running: anonymous
Jun  9 10:30:55 NAS root:
Jun  9 10:30:55 NAS root: plugin: ca.backup2.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:55 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:30:56 NAS root: plugin: installing: ca.mover.tuning.plg
Jun  9 10:30:56 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:30:56 NAS root: plugin: running: anonymous
Jun  9 10:30:56 NAS root:
Jun  9 10:30:56 NAS root: 6.11.5
Jun  9 10:30:56 NAS root: plugin: running: anonymous
Jun  9 10:30:56 NAS root: plugin: checking: /boot/config/plugins/ca.mover.tuning/ca.mover.tuning-2023.05.23-x86_64-1.txz - MD5
Jun  9 10:30:56 NAS root: plugin: skipping: /boot/config/plugins/ca.mover.tuning/ca.mover.tuning-2023.05.23-x86_64-1.txz already exists
Jun  9 10:30:56 NAS root: plugin: running: /boot/config/plugins/ca.mover.tuning/ca.mover.tuning-2023.05.23-x86_64-1.txz
Jun  9 10:30:56 NAS root:
Jun  9 10:30:56 NAS root: +==============================================================================
Jun  9 10:30:56 NAS root: | Installing new package /boot/config/plugins/ca.mover.tuning/ca.mover.tuning-2023.05.23-x86_64-1.txz
Jun  9 10:30:56 NAS root: +==============================================================================
Jun  9 10:30:56 NAS root:
Jun  9 10:30:56 NAS root: Verifying package ca.mover.tuning-2023.05.23-x86_64-1.txz.
Jun  9 10:30:56 NAS root: Installing package ca.mover.tuning-2023.05.23-x86_64-1.txz:
Jun  9 10:30:56 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:30:56 NAS root: Package ca.mover.tuning-2023.05.23-x86_64-1.txz installed.
Jun  9 10:30:56 NAS root: plugin: running: anonymous
Jun  9 10:30:56 NAS root:
Jun  9 10:30:56 NAS root: ----------------------------------------------------
Jun  9 10:30:56 NAS root:  ca.mover.tuning has been installed.
Jun  9 10:30:56 NAS root:  Copyright 2018, Andrew Zawadzki
Jun  9 10:30:56 NAS root:  Version: 2023.05.23
Jun  9 10:30:56 NAS root: ----------------------------------------------------
Jun  9 10:30:56 NAS root:
Jun  9 10:30:56 NAS root: plugin: ca.mover.tuning.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:56 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:30:56 NAS root: plugin: installing: ca.update.applications.plg
Jun  9 10:30:56 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:30:56 NAS root: plugin: running: anonymous
Jun  9 10:30:56 NAS root:
Jun  9 10:30:56 NAS root: plugin: running: anonymous
Jun  9 10:30:56 NAS root: plugin: checking: /boot/config/plugins/ca.update.applications/ca.update.applications-2023.02.20-x86_64-1.txz - MD5
Jun  9 10:30:56 NAS root: plugin: skipping: /boot/config/plugins/ca.update.applications/ca.update.applications-2023.02.20-x86_64-1.txz already exists
Jun  9 10:30:56 NAS root: plugin: running: /boot/config/plugins/ca.update.applications/ca.update.applications-2023.02.20-x86_64-1.txz
Jun  9 10:30:56 NAS root:
Jun  9 10:30:56 NAS root: +==============================================================================
Jun  9 10:30:56 NAS root: | Installing new package /boot/config/plugins/ca.update.applications/ca.update.applications-2023.02.20-x86_64-1.txz
Jun  9 10:30:56 NAS root: +==============================================================================
Jun  9 10:30:56 NAS root:
Jun  9 10:30:56 NAS root: Verifying package ca.update.applications-2023.02.20-x86_64-1.txz.
Jun  9 10:30:56 NAS root: Installing package ca.update.applications-2023.02.20-x86_64-1.txz:
Jun  9 10:30:56 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:30:56 NAS root: Package ca.update.applications-2023.02.20-x86_64-1.txz installed.
Jun  9 10:30:56 NAS root: plugin: running: anonymous
Jun  9 10:30:56 NAS root:
Jun  9 10:30:56 NAS root: ----------------------------------------------------
Jun  9 10:30:56 NAS root:  ca.update.applications has been installed.
Jun  9 10:30:56 NAS root:  Copyright 2015-2021, Andrew Zawadzki
Jun  9 10:30:56 NAS root:  Version: 2023.02.20
Jun  9 10:30:56 NAS root: ----------------------------------------------------
Jun  9 10:30:56 NAS root:
Jun  9 10:30:56 NAS root: plugin: ca.update.applications.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:56 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:30:56 NAS root: plugin: installing: community.applications.plg
Jun  9 10:30:56 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:30:56 NAS root: plugin: running: anonymous
Jun  9 10:30:56 NAS root:
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:56 NAS root: Cleaning Up Old Versions
Jun  9 10:30:56 NAS root: Fixing pinned apps
Jun  9 10:30:56 NAS root: Setting up cron for background notifications
Jun  9 10:30:56 NAS root:
Jun  9 10:30:56 NAS root: plugin: running: anonymous
Jun  9 10:30:56 NAS root: plugin: checking: /boot/config/plugins/community.applications/community.applications-2023.05.21-x86_64-1.txz - MD5
Jun  9 10:30:56 NAS root: plugin: skipping: /boot/config/plugins/community.applications/community.applications-2023.05.21-x86_64-1.txz already exists
Jun  9 10:30:56 NAS root: plugin: running: /boot/config/plugins/community.applications/community.applications-2023.05.21-x86_64-1.txz
Jun  9 10:30:56 NAS root:
Jun  9 10:30:56 NAS root: +==============================================================================
Jun  9 10:30:56 NAS root: | Installing new package /boot/config/plugins/community.applications/community.applications-2023.05.21-x86_64-1.txz
Jun  9 10:30:56 NAS root: +==============================================================================
Jun  9 10:30:56 NAS root:
Jun  9 10:30:56 NAS root: Verifying package community.applications-2023.05.21-x86_64-1.txz.
Jun  9 10:30:56 NAS root: Installing package community.applications-2023.05.21-x86_64-1.txz:
Jun  9 10:30:56 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:30:56 NAS root: Package community.applications-2023.05.21-x86_64-1.txz installed.
Jun  9 10:30:56 NAS root: plugin: running: anonymous
Jun  9 10:30:56 NAS root: Creating Directories
Jun  9 10:30:56 NAS root:
Jun  9 10:30:56 NAS root: Adjusting icon for Unraid version
Jun  9 10:30:56 NAS root:
Jun  9 10:30:56 NAS root: ----------------------------------------------------
Jun  9 10:30:56 NAS root:  community.applications has been installed.
Jun  9 10:30:56 NAS root:  Copyright 2015-2023, Andrew Zawadzki
Jun  9 10:30:56 NAS root:  Version: 2023.05.21
Jun  9 10:30:56 NAS root: ----------------------------------------------------
Jun  9 10:30:56 NAS root:
Jun  9 10:30:56 NAS root: plugin: community.applications.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:56 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:30:56 NAS root: plugin: installing: docker.patch.plg
Jun  9 10:30:56 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:30:56 NAS root: plugin: running: anonymous
Jun  9 10:30:56 NAS root: Patching Docker Client
Jun  9 10:30:56 NAS root: ----------------------------------------------------
Jun  9 10:30:56 NAS root:  docker.patch has been installed.
Jun  9 10:30:56 NAS root:  Copyright 2023, Andrew Zawadzki
Jun  9 10:30:56 NAS root:  Version: 2023.01.28
Jun  9 10:30:56 NAS root: ----------------------------------------------------
Jun  9 10:30:56 NAS root:
Jun  9 10:30:56 NAS root: plugin: docker.patch.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:56 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:30:56 NAS root: plugin: installing: dynamix.share.floor.plg
Jun  9 10:30:56 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:30:57 NAS root: plugin: running: anonymous
Jun  9 10:30:57 NAS root: plugin: checking: /boot/config/plugins/dynamix.share.floor/dynamix.share.floor.txz - MD5
Jun  9 10:30:57 NAS root: plugin: skipping: /boot/config/plugins/dynamix.share.floor/dynamix.share.floor.txz already exists
Jun  9 10:30:57 NAS root: plugin: running: /boot/config/plugins/dynamix.share.floor/dynamix.share.floor.txz
Jun  9 10:30:57 NAS root:
Jun  9 10:30:57 NAS root: +==============================================================================
Jun  9 10:30:57 NAS root: | Installing new package /boot/config/plugins/dynamix.share.floor/dynamix.share.floor.txz
Jun  9 10:30:57 NAS root: +==============================================================================
Jun  9 10:30:57 NAS root:
Jun  9 10:30:57 NAS root: Verifying package dynamix.share.floor.txz.
Jun  9 10:30:57 NAS root: Installing package dynamix.share.floor.txz:
Jun  9 10:30:57 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:30:57 NAS root: Package dynamix.share.floor.txz installed.
Jun  9 10:30:57 NAS root: plugin: running: anonymous
Jun  9 10:30:57 NAS root:
Jun  9 10:30:57 NAS root: -----------------------------------------------------------
Jun  9 10:30:57 NAS root:  Plugin dynamix.share.floor is installed.
Jun  9 10:30:57 NAS root:  This plugin requires Dynamix webGui to operate
Jun  9 10:30:57 NAS root:  Copyright 2023, Bergware International
Jun  9 10:30:57 NAS root:  Version: 2023.04.30a
Jun  9 10:30:57 NAS root: -----------------------------------------------------------
Jun  9 10:30:57 NAS root:
Jun  9 10:30:57 NAS root: plugin: dynamix.share.floor.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:57 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:30:57 NAS root: plugin: installing: dynamix.stop.shell.plg
Jun  9 10:30:57 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:30:57 NAS root: plugin: running: anonymous
Jun  9 10:30:57 NAS root: plugin: creating: /usr/local/emhttp/plugins/dynamix.stop.shell/README.md - from INLINE content
Jun  9 10:30:57 NAS root: plugin: setting: /usr/local/emhttp/plugins/dynamix.stop.shell/README.md - mode to 0644
Jun  9 10:30:57 NAS root: plugin: creating: /usr/local/emhttp/plugins/dynamix.stop.shell/images/dynamix.stop.shell.png - from INLINE content
Jun  9 10:30:57 NAS root: plugin: decoding: /usr/local/emhttp/plugins/dynamix.stop.shell/images/dynamix.stop.shell.png as base64
Jun  9 10:30:57 NAS root: plugin: setting: /usr/local/emhttp/plugins/dynamix.stop.shell/images/dynamix.stop.shell.png - mode to 0644
Jun  9 10:30:57 NAS root: plugin: creating: /usr/local/emhttp/webGui/event/unmounting_disks/stop_shell - from INLINE content
Jun  9 10:30:57 NAS root: plugin: setting: /usr/local/emhttp/webGui/event/unmounting_disks/stop_shell - mode to 0755
Jun  9 10:30:57 NAS root: plugin: running: anonymous
Jun  9 10:30:57 NAS root:
Jun  9 10:30:57 NAS root: -----------------------------------------------------------
Jun  9 10:30:57 NAS root:  Plugin dynamix.stop.shell is installed.
Jun  9 10:30:57 NAS root:  This plugin requires Dynamix webGui to operate
Jun  9 10:30:57 NAS root:  Copyright 2023, Bergware International
Jun  9 10:30:57 NAS root:  Version: 2023.02.05
Jun  9 10:30:57 NAS root: -----------------------------------------------------------
Jun  9 10:30:57 NAS root:
Jun  9 10:30:57 NAS root: plugin: dynamix.stop.shell.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:57 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:30:57 NAS root: plugin: installing: dynamix.system.autofan.plg
Jun  9 10:30:57 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:30:57 NAS root: plugin: running: anonymous
Jun  9 10:30:57 NAS root: plugin: checking: /boot/config/plugins/dynamix.system.autofan/dynamix.system.autofan.txz - MD5
Jun  9 10:30:57 NAS root: plugin: skipping: /boot/config/plugins/dynamix.system.autofan/dynamix.system.autofan.txz already exists
Jun  9 10:30:57 NAS root: plugin: running: /boot/config/plugins/dynamix.system.autofan/dynamix.system.autofan.txz
Jun  9 10:30:57 NAS root:
Jun  9 10:30:57 NAS root: +==============================================================================
Jun  9 10:30:57 NAS root: | Installing new package /boot/config/plugins/dynamix.system.autofan/dynamix.system.autofan.txz
Jun  9 10:30:57 NAS root: +==============================================================================
Jun  9 10:30:57 NAS root:
Jun  9 10:30:57 NAS root: Verifying package dynamix.system.autofan.txz.
Jun  9 10:30:57 NAS root: Installing package dynamix.system.autofan.txz:
Jun  9 10:30:57 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:30:57 NAS root: Package dynamix.system.autofan.txz installed.
Jun  9 10:30:57 NAS root: plugin: creating: /tmp/start_service - from INLINE content
Jun  9 10:30:57 NAS root: plugin: setting: /tmp/start_service - mode to 0770
Jun  9 10:30:57 NAS root: plugin: running: anonymous
Jun  9 10:30:57 NAS root:
Jun  9 10:30:57 NAS root: -----------------------------------------------------------
Jun  9 10:30:57 NAS root:  Plugin dynamix.system.autofan is installed.
Jun  9 10:30:57 NAS root:  This plugin requires Dynamix webGui to operate
Jun  9 10:30:57 NAS root:  Copyright 2023, Bergware International
Jun  9 10:30:57 NAS root:  Version: 2023.02.05a
Jun  9 10:30:57 NAS root: -----------------------------------------------------------
Jun  9 10:30:57 NAS root:
Jun  9 10:30:57 NAS root: plugin: dynamix.system.autofan.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:57 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:30:57 NAS autofan: autofan process ID 2155 started, To terminate it, type: autofan -q -c /sys/devices/platform/it87.2624/hwmon/hwmon2/pwm1 -f /sys/devices/platform/it87.2624/hwmon/hwmon2/fan1_input
Jun  9 10:30:57 NAS autofan: autofan process ID 2181 started, To terminate it, type: autofan -q -c /sys/devices/platform/it87.2624/hwmon/hwmon2/pwm3 -f /sys/devices/platform/it87.2624/hwmon/hwmon2/fan3_input
Jun  9 10:30:57 NAS autofan: autofan process ID 2209 started, To terminate it, type: autofan -q -c /sys/devices/platform/it87.2624/hwmon/hwmon2/pwm4 -f /sys/devices/platform/it87.2624/hwmon/hwmon2/fan1_input
Jun  9 10:30:57 NAS root: plugin: installing: dynamix.system.buttons.plg
Jun  9 10:30:57 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:30:57 NAS root: plugin: running: anonymous
Jun  9 10:30:57 NAS root: plugin: checking: /boot/config/plugins/dynamix.system.buttons/dynamix.system.buttons.txz - MD5
Jun  9 10:30:57 NAS root: plugin: skipping: /boot/config/plugins/dynamix.system.buttons/dynamix.system.buttons.txz already exists
Jun  9 10:30:57 NAS root: plugin: running: /boot/config/plugins/dynamix.system.buttons/dynamix.system.buttons.txz
Jun  9 10:30:57 NAS root:
Jun  9 10:30:57 NAS root: +==============================================================================
Jun  9 10:30:57 NAS root: | Installing new package /boot/config/plugins/dynamix.system.buttons/dynamix.system.buttons.txz
Jun  9 10:30:57 NAS root: +==============================================================================
Jun  9 10:30:57 NAS root:
Jun  9 10:30:57 NAS root: Verifying package dynamix.system.buttons.txz.
Jun  9 10:30:57 NAS root: Installing package dynamix.system.buttons.txz:
Jun  9 10:30:57 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:30:57 NAS root: Package dynamix.system.buttons.txz installed.
Jun  9 10:30:57 NAS root: plugin: running: anonymous
Jun  9 10:30:57 NAS root:
Jun  9 10:30:57 NAS root: -----------------------------------------------------------
Jun  9 10:30:57 NAS root:  Plugin dynamix.system.buttons is installed.
Jun  9 10:30:57 NAS root:  This plugin requires Dynamix webGui to operate
Jun  9 10:30:57 NAS root:  Copyright 2023, Bergware International
Jun  9 10:30:57 NAS root:  Version: 2023.02.11
Jun  9 10:30:57 NAS root: -----------------------------------------------------------
Jun  9 10:30:57 NAS root:
Jun  9 10:30:57 NAS root: plugin: dynamix.system.buttons.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:57 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:30:57 NAS root: plugin: installing: dynamix.system.info.plg
Jun  9 10:30:57 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:30:57 NAS root: plugin: running: anonymous
Jun  9 10:30:57 NAS root: plugin: checking: /boot/config/plugins/dynamix.system.info/dynamix.system.info.txz - MD5
Jun  9 10:30:57 NAS root: plugin: skipping: /boot/config/plugins/dynamix.system.info/dynamix.system.info.txz already exists
Jun  9 10:30:57 NAS root: plugin: running: /boot/config/plugins/dynamix.system.info/dynamix.system.info.txz
Jun  9 10:30:57 NAS root:
Jun  9 10:30:57 NAS root: +==============================================================================
Jun  9 10:30:57 NAS root: | Installing new package /boot/config/plugins/dynamix.system.info/dynamix.system.info.txz
Jun  9 10:30:57 NAS root: +==============================================================================
Jun  9 10:30:57 NAS root:
Jun  9 10:30:57 NAS root: Verifying package dynamix.system.info.txz.
Jun  9 10:30:57 NAS root: Installing package dynamix.system.info.txz:
Jun  9 10:30:57 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:30:57 NAS root: Package dynamix.system.info.txz installed.
Jun  9 10:30:57 NAS root: plugin: running: anonymous
Jun  9 10:30:57 NAS root:
Jun  9 10:30:57 NAS root: -----------------------------------------------------------
Jun  9 10:30:57 NAS root:  Plugin dynamix.system.info is installed.
Jun  9 10:30:57 NAS root:  This plugin requires Dynamix webGui to operate
Jun  9 10:30:57 NAS root:  Copyright 2023, Bergware International
Jun  9 10:30:57 NAS root:  Version: 2023.02.05
Jun  9 10:30:57 NAS root: -----------------------------------------------------------
Jun  9 10:30:57 NAS root:
Jun  9 10:30:57 NAS root: plugin: dynamix.system.info.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:57 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:30:57 NAS root: plugin: installing: dynamix.system.stats.plg
Jun  9 10:30:57 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:30:57 NAS root: plugin: running: anonymous
Jun  9 10:30:57 NAS root: plugin: checking: /boot/config/plugins/dynamix.system.stats/dynamix.system.stats.txz - MD5
Jun  9 10:30:57 NAS root: plugin: skipping: /boot/config/plugins/dynamix.system.stats/dynamix.system.stats.txz already exists
Jun  9 10:30:57 NAS root: plugin: running: /boot/config/plugins/dynamix.system.stats/dynamix.system.stats.txz
Jun  9 10:30:57 NAS root:
Jun  9 10:30:57 NAS root: +==============================================================================
Jun  9 10:30:57 NAS root: | Installing new package /boot/config/plugins/dynamix.system.stats/dynamix.system.stats.txz
Jun  9 10:30:57 NAS root: +==============================================================================
Jun  9 10:30:57 NAS root:
Jun  9 10:30:57 NAS root: Verifying package dynamix.system.stats.txz.
Jun  9 10:30:57 NAS root: Installing package dynamix.system.stats.txz:
Jun  9 10:30:57 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:30:57 NAS root: Package dynamix.system.stats.txz installed.
Jun  9 10:30:57 NAS root: plugin: skipping: /boot/config/plugins/dynamix.system.stats/sysstat-12.0.2.txz already exists
Jun  9 10:30:57 NAS root: plugin: running: /boot/config/plugins/dynamix.system.stats/sysstat-12.0.2.txz
Jun  9 10:30:57 NAS root:
Jun  9 10:30:57 NAS root: +==============================================================================
Jun  9 10:30:57 NAS root: | Installing new package /boot/config/plugins/dynamix.system.stats/sysstat-12.0.2.txz
Jun  9 10:30:57 NAS root: +==============================================================================
Jun  9 10:30:57 NAS root:
Jun  9 10:30:58 NAS root: Verifying package sysstat-12.0.2.txz.
Jun  9 10:30:58 NAS root: Installing package sysstat-12.0.2.txz:
Jun  9 10:30:58 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:30:58 NAS root: Package sysstat-12.0.2.txz installed.
Jun  9 10:30:58 NAS root: plugin: running: anonymous
Jun  9 10:30:58 NAS root:
Jun  9 10:30:58 NAS root: -----------------------------------------------------------
Jun  9 10:30:58 NAS root:  Plugin dynamix.system.stats is installed.
Jun  9 10:30:58 NAS root:  This plugin requires Dynamix webGui to operate
Jun  9 10:30:58 NAS root:  Copyright 2023, Bergware International
Jun  9 10:30:58 NAS root:  Version: 2023.02.14
Jun  9 10:30:58 NAS root: -----------------------------------------------------------
Jun  9 10:30:58 NAS root:
Jun  9 10:30:58 NAS root: plugin: dynamix.system.stats.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:58 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:30:58 NAS root: plugin: installing: dynamix.system.temp.plg
Jun  9 10:30:58 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:30:58 NAS root: plugin: running: anonymous
Jun  9 10:30:58 NAS root: plugin: checking: /boot/config/plugins/dynamix.system.temp/dynamix.system.temp.txz - MD5
Jun  9 10:30:58 NAS root: plugin: skipping: /boot/config/plugins/dynamix.system.temp/dynamix.system.temp.txz already exists
Jun  9 10:30:58 NAS root: plugin: running: /boot/config/plugins/dynamix.system.temp/dynamix.system.temp.txz
Jun  9 10:30:58 NAS root:
Jun  9 10:30:58 NAS root: +==============================================================================
Jun  9 10:30:58 NAS root: | Installing new package /boot/config/plugins/dynamix.system.temp/dynamix.system.temp.txz
Jun  9 10:30:58 NAS root: +==============================================================================
Jun  9 10:30:58 NAS root:
Jun  9 10:30:58 NAS root: Verifying package dynamix.system.temp.txz.
Jun  9 10:30:58 NAS root: Installing package dynamix.system.temp.txz:
Jun  9 10:30:58 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:30:58 NAS root: Package dynamix.system.temp.txz installed.
Jun  9 10:30:58 NAS root: plugin: skipping: sensors-detect - Unraid version too high, requires at most version 6.7.2
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:58 NAS root: plugin: running: anonymous
Jun  9 10:30:58 NAS kernel: it87: Found IT8628E chip at 0xa40, revision 2
Jun  9 10:30:58 NAS kernel: it87: Beeping is supported
Jun  9 10:30:58 NAS root:
Jun  9 10:30:58 NAS root: -----------------------------------------------------------
Jun  9 10:30:58 NAS root:  Plugin dynamix.system.temp is installed.
Jun  9 10:30:58 NAS root:  This plugin requires Dynamix webGui to operate
Jun  9 10:30:58 NAS root:  Copyright 2023, Bergware International
Jun  9 10:30:58 NAS root:  Version: 2023.02.04b
Jun  9 10:30:58 NAS root: -----------------------------------------------------------
Jun  9 10:30:58 NAS root:
Jun  9 10:30:58 NAS root: plugin: dynamix.system.temp.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:58 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:30:58 NAS root: plugin: installing: dynamix.unraid.net.plg
Jun  9 10:30:58 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:30:58 NAS root: plugin: running: anonymous
Jun  9 10:30:58 NAS root: Installing dynamix.unraid.net.plg 2023.05.03.1227 with Unraid API 3.1.1
Jun  9 10:30:58 NAS root: plugin: running: anonymous
Jun  9 10:30:58 NAS root: Checking DNS...
Jun  9 10:30:58 NAS root:
Jun  9 10:30:58 NAS root: ⚠️ Do not close this window yet
Jun  9 10:30:58 NAS root:
Jun  9 10:30:58 NAS root: Validating /boot/config/plugins/dynamix.my.servers/dynamix.unraid.net.txz...  ok.
Jun  9 10:30:58 NAS root: Validating /boot/config/plugins/dynamix.my.servers/unraid-api.tgz...  ok.
Jun  9 10:30:58 NAS root: plugin: checking: /boot/config/plugins/dynamix.my.servers/dynamix.unraid.net.txz - SHA256
Jun  9 10:30:58 NAS root: plugin: skipping: /boot/config/plugins/dynamix.my.servers/dynamix.unraid.net.txz already exists
Jun  9 10:30:58 NAS root: plugin: checking: /boot/config/plugins/dynamix.my.servers/unraid-api.tgz - SHA256
Jun  9 10:30:58 NAS root: plugin: skipping: /boot/config/plugins/dynamix.my.servers/unraid-api.tgz already exists
Jun  9 10:30:58 NAS root: plugin: running: anonymous
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:58 NAS root:
Jun  9 10:30:58 NAS root: ⚠️ Do not close this window yet
Jun  9 10:30:58 NAS root:
Jun  9 10:30:58 NAS root: 🕹️ Installing JavaScript web components
Jun  9 10:30:59 NAS root: Finished installing web components
Jun  9 10:30:59 NAS root:
Jun  9 10:30:59 NAS root: ⚠️ Do not close this window yet
Jun  9 10:30:59 NAS root:
Jun  9 10:30:59 NAS root: plugin: running: anonymous
Jun  9 10:30:59 NAS root:
Jun  9 10:30:59 NAS root: plugin: running: anonymous
Jun  9 10:30:59 NAS root:
Jun  9 10:30:59 NAS root: ⚠️ Do not close this window yet
Jun  9 10:30:59 NAS root:
Jun  9 10:30:59 NAS root: plugin: running: anonymous
Jun  9 10:30:59 NAS root:
Jun  9 10:30:59 NAS root: ⚠️ Do not close this window yet
Jun  9 10:30:59 NAS root:
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:59 NAS root: +==============================================================================
Jun  9 10:30:59 NAS root: | Installing new package /boot/config/plugins/dynamix.my.servers/dynamix.unraid.net.txz
Jun  9 10:30:59 NAS root: +==============================================================================
Jun  9 10:30:59 NAS root:
Jun  9 10:30:59 NAS root: Verifying package dynamix.unraid.net.txz.
Jun  9 10:30:59 NAS root: Installing package dynamix.unraid.net.txz:
Jun  9 10:30:59 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:30:59 NAS root: Package dynamix.unraid.net.txz installed.
Jun  9 10:30:59 NAS root:
Jun  9 10:30:59 NAS root: ⚠️ Do not close this window yet
Jun  9 10:30:59 NAS root:
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:59 NAS root: ⚠️ Do not close this window yet
Jun  9 10:30:59 NAS root:
### [PREVIOUS LINE REPEATED 2 TIMES] ###
Jun  9 10:30:59 NAS root: Installation is complete, it is safe to close this window
Jun  9 10:30:59 NAS root:
Jun  9 10:30:59 NAS root: plugin: dynamix.unraid.net.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:59 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:30:59 NAS root: plugin: installing: fix.common.problems.plg
Jun  9 10:30:59 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:30:59 NAS root: plugin: checking: /boot/config/plugins/fix.common.problems/fix.common.problems-2023.04.26-x86_64-1.txz - MD5
Jun  9 10:30:59 NAS root: plugin: skipping: /boot/config/plugins/fix.common.problems/fix.common.problems-2023.04.26-x86_64-1.txz already exists
Jun  9 10:30:59 NAS root: plugin: running: /boot/config/plugins/fix.common.problems/fix.common.problems-2023.04.26-x86_64-1.txz
Jun  9 10:30:59 NAS root:
Jun  9 10:30:59 NAS root: +==============================================================================
Jun  9 10:30:59 NAS root: | Installing new package /boot/config/plugins/fix.common.problems/fix.common.problems-2023.04.26-x86_64-1.txz
Jun  9 10:30:59 NAS root: +==============================================================================
Jun  9 10:30:59 NAS root:
Jun  9 10:30:59 NAS root: Verifying package fix.common.problems-2023.04.26-x86_64-1.txz.
Jun  9 10:30:59 NAS root: Installing package fix.common.problems-2023.04.26-x86_64-1.txz:
Jun  9 10:30:59 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:30:59 NAS root: Package fix.common.problems-2023.04.26-x86_64-1.txz installed.
Jun  9 10:30:59 NAS root: plugin: running: anonymous
Jun  9 10:30:59 NAS root:
Jun  9 10:30:59 NAS root: ----------------------------------------------------
Jun  9 10:30:59 NAS root:  fix.common.problems has been installed.
Jun  9 10:30:59 NAS root:  Copyright 2016-2022, Andrew Zawadzki
Jun  9 10:30:59 NAS root:  Version: 2023.04.26
Jun  9 10:30:59 NAS root: ----------------------------------------------------
Jun  9 10:30:59 NAS root:
Jun  9 10:30:59 NAS root: plugin: fix.common.problems.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:30:59 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:30:59 NAS root: plugin: installing: gpustat.plg
Jun  9 10:30:59 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:30:59 NAS root: plugin: running: anonymous
Jun  9 10:30:59 NAS root: ********************************************************************
Jun  9 10:30:59 NAS root:
Jun  9 10:30:59 NAS root: Intel vendor utility found. Continuing install.
Jun  9 10:30:59 NAS root:
Jun  9 10:30:59 NAS root: ********************************************************************
Jun  9 10:30:59 NAS root: plugin: skipping: /boot/config/plugins/gpustat/gpustat-2022.11.30a-x86_64.txz already exists
Jun  9 10:30:59 NAS root: plugin: running: /boot/config/plugins/gpustat/gpustat-2022.11.30a-x86_64.txz
Jun  9 10:30:59 NAS root:
Jun  9 10:30:59 NAS root: +==============================================================================
Jun  9 10:30:59 NAS root: | Installing new package /boot/config/plugins/gpustat/gpustat-2022.11.30a-x86_64.txz
Jun  9 10:30:59 NAS root: +==============================================================================
Jun  9 10:30:59 NAS root:
Jun  9 10:31:00 NAS root: Verifying package gpustat-2022.11.30a-x86_64.txz.
Jun  9 10:31:00 NAS root: Installing package gpustat-2022.11.30a-x86_64.txz:
Jun  9 10:31:00 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:31:00 NAS root: Package gpustat-2022.11.30a-x86_64.txz installed.
Jun  9 10:31:00 NAS root: plugin: skipping: /boot/config/plugins/gpustat/gpustat.cfg already exists
Jun  9 10:31:00 NAS root: plugin: gpustat.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:31:00 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:31:00 NAS root: plugin: installing: intel-gpu-top.plg
Jun  9 10:31:00 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:31:00 NAS root: plugin: running: anonymous
Jun  9 10:31:00 NAS root: plugin: checking: /boot/config/plugins/intel-gpu-top/intel.gpu.top-2023.02.15.txz - MD5
Jun  9 10:31:00 NAS root: plugin: skipping: /boot/config/plugins/intel-gpu-top/intel.gpu.top-2023.02.15.txz already exists
Jun  9 10:31:00 NAS root: plugin: running: /boot/config/plugins/intel-gpu-top/intel.gpu.top-2023.02.15.txz
Jun  9 10:31:00 NAS root:
Jun  9 10:31:00 NAS root: +==============================================================================
Jun  9 10:31:00 NAS root: | Installing new package /boot/config/plugins/intel-gpu-top/intel.gpu.top-2023.02.15.txz
Jun  9 10:31:00 NAS root: +==============================================================================
Jun  9 10:31:00 NAS root:
Jun  9 10:31:00 NAS root: Verifying package intel.gpu.top-2023.02.15.txz.
Jun  9 10:31:00 NAS root: Installing package intel.gpu.top-2023.02.15.txz:
Jun  9 10:31:00 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:31:00 NAS root: Package intel.gpu.top-2023.02.15.txz installed.
Jun  9 10:31:00 NAS root: plugin: creating: /usr/local/emhttp/plugins/intel-gpu-top/README.md - from INLINE content
Jun  9 10:31:00 NAS root: plugin: running: anonymous
Jun  9 10:31:00 NAS root:
Jun  9 10:31:00 NAS root: -----Intel Kernel Module already enabled!------
Jun  9 10:31:00 NAS root:
Jun  9 10:31:00 NAS root: ----Installation of Intel GPU TOP complete-----
Jun  9 10:31:00 NAS root: plugin: intel-gpu-top.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:31:00 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:31:00 NAS root: plugin: installing: recycle.bin.plg
Jun  9 10:31:00 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:31:00 NAS root: plugin: checking: /boot/config/plugins/recycle.bin/recycle.bin-2023.06.01.tgz - MD5
Jun  9 10:31:00 NAS root: plugin: skipping: /boot/config/plugins/recycle.bin/recycle.bin-2023.06.01.tgz already exists
Jun  9 10:31:00 NAS root: plugin: creating: /tmp/recycle.bin/wait_recycle.bin - from INLINE content
Jun  9 10:31:00 NAS root: plugin: setting: /tmp/recycle.bin/wait_recycle.bin - mode to 0770
Jun  9 10:31:00 NAS root: plugin: creating: /tmp/recycle.bin/remove-smb-extra - from INLINE content
Jun  9 10:31:00 NAS root: plugin: setting: /tmp/recycle.bin/remove-smb-extra - mode to 0770
Jun  9 10:31:00 NAS root: plugin: running: anonymous
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:31:00 NAS root: plugin: skipping: /boot/config/plugins/recycle.bin/recycle.bin.cfg already exists
Jun  9 10:31:00 NAS root: plugin: running: anonymous
Jun  9 10:31:00 NAS root:
### [PREVIOUS LINE REPEATED 2 TIMES] ###
Jun  9 10:31:00 NAS root: plugin: running: anonymous
Jun  9 10:31:00 NAS root:
Jun  9 10:31:00 NAS root: -----------------------------------------------------------
Jun  9 10:31:00 NAS root:  recycle.bin has been installed.
Jun  9 10:31:00 NAS root:  Copyright 2015-2023 dlandon
Jun  9 10:31:00 NAS root:  Version: 2023.06.01
Jun  9 10:31:00 NAS root: -----------------------------------------------------------
Jun  9 10:31:00 NAS root:
Jun  9 10:31:00 NAS root: plugin: recycle.bin.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:31:00 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:31:00 NAS root: plugin: installing: unassigned.devices-plus.plg
Jun  9 10:31:00 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:31:00 NAS root: plugin: running: anonymous
Jun  9 10:31:00 NAS root: plugin: checking: /boot/config/plugins/unassigned.devices-plus/packages/parted-3.3-x86_64-1.txz - MD5
Jun  9 10:31:00 NAS root: plugin: skipping: /boot/config/plugins/unassigned.devices-plus/packages/parted-3.3-x86_64-1.txz already exists
Jun  9 10:31:00 NAS root: plugin: running: /boot/config/plugins/unassigned.devices-plus/packages/parted-3.3-x86_64-1.txz
Jun  9 10:31:00 NAS root:
Jun  9 10:31:00 NAS root: +==============================================================================
Jun  9 10:31:00 NAS root: | Installing new package /boot/config/plugins/unassigned.devices-plus/packages/parted-3.3-x86_64-1.txz
Jun  9 10:31:00 NAS root: +==============================================================================
Jun  9 10:31:00 NAS root:
Jun  9 10:31:00 NAS root: Verifying package parted-3.3-x86_64-1.txz.
Jun  9 10:31:00 NAS root: Installing package parted-3.3-x86_64-1.txz:
Jun  9 10:31:00 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:31:00 NAS root: # parted (GNU disk partitioning tool)
Jun  9 10:31:00 NAS root: #
Jun  9 10:31:00 NAS root: # GNU Parted is a program for creating, destroying, resizing, checking
Jun  9 10:31:00 NAS root: # and copying partitions, and the filesystems on them. This is useful
Jun  9 10:31:00 NAS root: # for creating space for new operating systems, reorganizing disk
Jun  9 10:31:00 NAS root: # usage, copying data between hard disks, and disk imaging.
Jun  9 10:31:00 NAS root: #
Jun  9 10:31:00 NAS root: Executing install script for parted-3.3-x86_64-1.txz.
Jun  9 10:31:00 NAS root: Package parted-3.3-x86_64-1.txz installed.
Jun  9 10:31:00 NAS root: plugin: checking: /boot/config/plugins/unassigned.devices-plus/packages/parted-3.6-x86_64-1.txz - MD5
Jun  9 10:31:00 NAS root: plugin: skipping: /boot/config/plugins/unassigned.devices-plus/packages/parted-3.6-x86_64-1.txz already exists
Jun  9 10:31:00 NAS root: plugin: running: /boot/config/plugins/unassigned.devices-plus/packages/parted-3.6-x86_64-1.txz
Jun  9 10:31:00 NAS root:
Jun  9 10:31:00 NAS root: +==============================================================================
Jun  9 10:31:00 NAS root: | Upgrading parted-3.3-x86_64-1 package using /boot/config/plugins/unassigned.devices-plus/packages/parted-3.6-x86_64-1.txz
Jun  9 10:31:00 NAS root: +==============================================================================
Jun  9 10:31:00 NAS root: Pre-installing package parted-3.6-x86_64-1...
Jun  9 10:31:01 NAS root: Removing package: parted-3.3-x86_64-1-upgraded-2023-06-09,10:31:00
Jun  9 10:31:01 NAS root:   --> Deleting /usr/doc/parted-3.3/API
Jun  9 10:31:01 NAS root:   --> Deleting /usr/doc/parted-3.3/AUTHORS
Jun  9 10:31:01 NAS root:   --> Deleting /usr/doc/parted-3.3/BUGS
Jun  9 10:31:01 NAS root:   --> Deleting /usr/doc/parted-3.3/COPYING
Jun  9 10:31:01 NAS root:   --> Deleting /usr/doc/parted-3.3/ChangeLog
Jun  9 10:31:01 NAS root:   --> Deleting /usr/doc/parted-3.3/FAT
Jun  9 10:31:01 NAS root:   --> Deleting /usr/doc/parted-3.3/NEWS
Jun  9 10:31:01 NAS root:   --> Deleting /usr/doc/parted-3.3/README
Jun  9 10:31:01 NAS root:   --> Deleting /usr/doc/parted-3.3/THANKS
Jun  9 10:31:01 NAS root:   --> Deleting /usr/doc/parted-3.3/TODO
Jun  9 10:31:01 NAS root:   --> Deleting /usr/doc/parted-3.3/USER.jp
Jun  9 10:31:01 NAS root:   --> Deleting /usr/lib64/libparted-fs-resize.so.0.0.2
Jun  9 10:31:01 NAS root:   --> Deleting /usr/lib64/libparted.so.2.0.2
Jun  9 10:31:01 NAS root:   --> Deleting empty directory /usr/doc/parted-3.3/
Jun  9 10:31:01 NAS root: Verifying package parted-3.6-x86_64-1.txz.
Jun  9 10:31:01 NAS root: Installing package parted-3.6-x86_64-1.txz:
Jun  9 10:31:01 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:31:01 NAS root: # parted (GNU disk partitioning tool)
Jun  9 10:31:01 NAS root: #
Jun  9 10:31:01 NAS root: # GNU Parted is a program for creating, destroying, resizing, checking
Jun  9 10:31:01 NAS root: # and copying partitions, and the filesystems on them. This is useful
Jun  9 10:31:01 NAS root: # for creating space for new operating systems, reorganizing disk
Jun  9 10:31:01 NAS root: # usage, copying data between hard disks, and disk imaging.
Jun  9 10:31:01 NAS root: #
Jun  9 10:31:01 NAS root: Executing install script for parted-3.6-x86_64-1.txz.
Jun  9 10:31:01 NAS root: Package parted-3.6-x86_64-1.txz installed.
Jun  9 10:31:01 NAS root: Package parted-3.3-x86_64-1 upgraded with new package /boot/config/plugins/unassigned.devices-plus/packages/parted-3.6-x86_64-1.txz.
Jun  9 10:31:01 NAS root: plugin: checking: /boot/config/plugins/unassigned.devices-plus/packages/exfat-utils-1.3.0-x86_64-1_slonly.txz - MD5
Jun  9 10:31:01 NAS root: plugin: skipping: /boot/config/plugins/unassigned.devices-plus/packages/exfat-utils-1.3.0-x86_64-1_slonly.txz already exists
Jun  9 10:31:01 NAS root: plugin: running: /boot/config/plugins/unassigned.devices-plus/packages/exfat-utils-1.3.0-x86_64-1_slonly.txz
Jun  9 10:31:01 NAS root:
Jun  9 10:31:01 NAS root: +==============================================================================
Jun  9 10:31:01 NAS root: | Installing new package /boot/config/plugins/unassigned.devices-plus/packages/exfat-utils-1.3.0-x86_64-1_slonly.txz
Jun  9 10:31:01 NAS root: +==============================================================================
Jun  9 10:31:01 NAS root:
Jun  9 10:31:01 NAS root: Verifying package exfat-utils-1.3.0-x86_64-1_slonly.txz.
Jun  9 10:31:01 NAS root: Installing package exfat-utils-1.3.0-x86_64-1_slonly.txz:
Jun  9 10:31:01 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:31:01 NAS root: # exfat-utils (exFAT system utilities)
Jun  9 10:31:01 NAS root: #
Jun  9 10:31:01 NAS root: # This project aims to provide a full-featured exFAT file system
Jun  9 10:31:01 NAS root: # implementation for GNU/Linux and other Unix-like systems as a FUSE
Jun  9 10:31:01 NAS root: # module and a set of utilities.
Jun  9 10:31:01 NAS root: #
Jun  9 10:31:01 NAS root: # This package contains the utilities.
Jun  9 10:31:01 NAS root: #
Jun  9 10:31:01 NAS root: # Homepage: https://github.com/relan/exfat
Jun  9 10:31:01 NAS root: #
Jun  9 10:31:01 NAS root: Executing install script for exfat-utils-1.3.0-x86_64-1_slonly.txz.
Jun  9 10:31:01 NAS root: Package exfat-utils-1.3.0-x86_64-1_slonly.txz installed.
Jun  9 10:31:01 NAS root: plugin: checking: /boot/config/plugins/unassigned.devices-plus/packages/fuse-exfat-1.3.0-x86_64-1_slonly.txz - MD5
Jun  9 10:31:01 NAS root: plugin: skipping: /boot/config/plugins/unassigned.devices-plus/packages/fuse-exfat-1.3.0-x86_64-1_slonly.txz already exists
Jun  9 10:31:01 NAS root: plugin: running: /boot/config/plugins/unassigned.devices-plus/packages/fuse-exfat-1.3.0-x86_64-1_slonly.txz
Jun  9 10:31:01 NAS root:
Jun  9 10:31:01 NAS root: +==============================================================================
Jun  9 10:31:01 NAS root: | Installing new package /boot/config/plugins/unassigned.devices-plus/packages/fuse-exfat-1.3.0-x86_64-1_slonly.txz
Jun  9 10:31:01 NAS root: +==============================================================================
Jun  9 10:31:01 NAS root:
Jun  9 10:31:01 NAS root: Verifying package fuse-exfat-1.3.0-x86_64-1_slonly.txz.
Jun  9 10:31:01 NAS root: Installing package fuse-exfat-1.3.0-x86_64-1_slonly.txz:
Jun  9 10:31:01 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:31:01 NAS root: # fuse-exfat (exFAT FUSE module)
Jun  9 10:31:01 NAS root: #
Jun  9 10:31:01 NAS root: # This project aims to provide a full-featured exFAT file system
Jun  9 10:31:01 NAS root: # implementation for GNU/Linux and other Unix-like systems as a FUSE
Jun  9 10:31:01 NAS root: # module and a set of utilities.
Jun  9 10:31:01 NAS root: #
Jun  9 10:31:01 NAS root: # This package contains the FUSE module.
Jun  9 10:31:01 NAS root: #
Jun  9 10:31:01 NAS root: # Homepage: https://github.com/relan/exfat
Jun  9 10:31:01 NAS root: #
Jun  9 10:31:01 NAS root: Executing install script for fuse-exfat-1.3.0-x86_64-1_slonly.txz.
Jun  9 10:31:01 NAS root: Package fuse-exfat-1.3.0-x86_64-1_slonly.txz installed.
Jun  9 10:31:01 NAS root: plugin: checking: /boot/config/plugins/unassigned.devices-plus/packages/hfsprogs-332.25-x86_64-2sl.txz - MD5
Jun  9 10:31:01 NAS root: plugin: skipping: /boot/config/plugins/unassigned.devices-plus/packages/hfsprogs-332.25-x86_64-2sl.txz already exists
Jun  9 10:31:01 NAS root: plugin: running: /boot/config/plugins/unassigned.devices-plus/packages/hfsprogs-332.25-x86_64-2sl.txz
Jun  9 10:31:01 NAS root:
Jun  9 10:31:01 NAS root: +==============================================================================
Jun  9 10:31:01 NAS root: | Installing new package /boot/config/plugins/unassigned.devices-plus/packages/hfsprogs-332.25-x86_64-2sl.txz
Jun  9 10:31:01 NAS root: +==============================================================================
Jun  9 10:31:01 NAS root:
Jun  9 10:31:01 NAS root: Verifying package hfsprogs-332.25-x86_64-2sl.txz.
Jun  9 10:31:01 NAS root: Installing package hfsprogs-332.25-x86_64-2sl.txz:
Jun  9 10:31:01 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:31:01 NAS root: # hfsprogs - hfs+ user space utils
Jun  9 10:31:01 NAS root: #
Jun  9 10:31:01 NAS root: # The HFS+ file system used by Apple Computer for their Mac OS is
Jun  9 10:31:01 NAS root: # supported by the Linux kernel.  Apple provides mkfs and fsck for
Jun  9 10:31:01 NAS root: # HFS+ with the Unix core of their operating system, Darwin.
Jun  9 10:31:01 NAS root: #
Jun  9 10:31:01 NAS root: # This package is a port of Apple's tools for HFS+ filesystems.
Jun  9 10:31:01 NAS root: #
Jun  9 10:31:01 NAS root: # http://www.opensource.apple.com
Jun  9 10:31:01 NAS root: Package hfsprogs-332.25-x86_64-2sl.txz installed.
Jun  9 10:31:01 NAS root: plugin: checking: /boot/config/plugins/unassigned.devices-plus/packages/libbsd-0.11.7-x86_64-1cf.txz - MD5
Jun  9 10:31:01 NAS root: plugin: skipping: /boot/config/plugins/unassigned.devices-plus/packages/libbsd-0.11.7-x86_64-1cf.txz already exists
Jun  9 10:31:01 NAS root: plugin: running: /boot/config/plugins/unassigned.devices-plus/packages/libbsd-0.11.7-x86_64-1cf.txz
Jun  9 10:31:01 NAS root:
Jun  9 10:31:01 NAS root: +==============================================================================
Jun  9 10:31:01 NAS root: | Installing new package /boot/config/plugins/unassigned.devices-plus/packages/libbsd-0.11.7-x86_64-1cf.txz
Jun  9 10:31:01 NAS root: +==============================================================================
Jun  9 10:31:01 NAS root:
Jun  9 10:31:01 NAS root: Verifying package libbsd-0.11.7-x86_64-1cf.txz.
Jun  9 10:31:01 NAS root: Installing package libbsd-0.11.7-x86_64-1cf.txz:
Jun  9 10:31:01 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:31:01 NAS root: # libbsd (library of BSD functions)
Jun  9 10:31:01 NAS root: #
Jun  9 10:31:01 NAS root: # This library provides useful functions commonly found on BSD systems,
Jun  9 10:31:01 NAS root: # and lacking on others like GNU systems, thus making it easier to port
Jun  9 10:31:01 NAS root: # projects with strong BSD origins, without needing to embed the same
Jun  9 10:31:01 NAS root: # code over and over again on each project.
Jun  9 10:31:01 NAS root: #
Jun  9 10:31:01 NAS root: Executing install script for libbsd-0.11.7-x86_64-1cf.txz.
Jun  9 10:31:02 NAS root: Package libbsd-0.11.7-x86_64-1cf.txz installed.
Jun  9 10:31:02 NAS root: plugin: checking: /boot/config/plugins/unassigned.devices-plus/packages/apfsfuse-v20200708-x86_64-1.txz - MD5
Jun  9 10:31:02 NAS root: plugin: skipping: /boot/config/plugins/unassigned.devices-plus/packages/apfsfuse-v20200708-x86_64-1.txz already exists
Jun  9 10:31:02 NAS root: plugin: running: /boot/config/plugins/unassigned.devices-plus/packages/apfsfuse-v20200708-x86_64-1.txz
Jun  9 10:31:02 NAS root:
Jun  9 10:31:02 NAS root: +==============================================================================
Jun  9 10:31:02 NAS root: | Installing new package /boot/config/plugins/unassigned.devices-plus/packages/apfsfuse-v20200708-x86_64-1.txz
Jun  9 10:31:02 NAS root: +==============================================================================
Jun  9 10:31:02 NAS root:
Jun  9 10:31:02 NAS root: Verifying package apfsfuse-v20200708-x86_64-1.txz.
Jun  9 10:31:02 NAS root: Installing package apfsfuse-v20200708-x86_64-1.txz:
Jun  9 10:31:02 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:31:02 NAS root: # apfs-fuse v20200708
Jun  9 10:31:02 NAS root: #
Jun  9 10:31:02 NAS root: # Custom apfs-fuse v20200708 package for Unraid by ich777
Jun  9 10:31:02 NAS root: # https://github.com/sgan81/apfs-fuse
Jun  9 10:31:02 NAS root: Package apfsfuse-v20200708-x86_64-1.txz installed.
Jun  9 10:31:02 NAS root: plugin: creating: /usr/local/emhttp/plugins/unassigned.devices-plus/README.md - from INLINE content
Jun  9 10:31:02 NAS root: plugin: running: anonymous
Jun  9 10:31:02 NAS root:
Jun  9 10:31:02 NAS root: -----------------------------------------------------------
Jun  9 10:31:02 NAS root:  unassigned.devices-plus has been installed.
Jun  9 10:31:02 NAS root:  Copyright 2015, gfjardim
Jun  9 10:31:02 NAS root:  Copyright 2016-2023, dlandon
Jun  9 10:31:02 NAS root:  Version: 2023.04.15
Jun  9 10:31:02 NAS root: -----------------------------------------------------------
Jun  9 10:31:02 NAS root:
Jun  9 10:31:02 NAS root: plugin: unassigned.devices-plus.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:31:02 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:31:02 NAS root: plugin: installing: unassigned.devices.plg
Jun  9 10:31:02 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:31:02 NAS root: plugin: checking: /boot/config/plugins/unassigned.devices/unassigned.devices-2023.06.02a.tgz - MD5
Jun  9 10:31:02 NAS root: plugin: skipping: /boot/config/plugins/unassigned.devices/unassigned.devices-2023.06.02a.tgz already exists
Jun  9 10:31:02 NAS root: plugin: running: anonymous
Jun  9 10:31:02 NAS root: plugin: skipping: /boot/config/plugins/unassigned.devices/unassigned.devices.cfg already exists
Jun  9 10:31:02 NAS root: plugin: skipping: /boot/config/plugins/unassigned.devices/samba_mount.cfg already exists
Jun  9 10:31:02 NAS root: plugin: skipping: /boot/config/plugins/unassigned.devices/iso_mount.cfg already exists
Jun  9 10:31:02 NAS root: plugin: creating: /tmp/unassigned.devices/add-smb - from INLINE content
Jun  9 10:31:02 NAS root: plugin: setting: /tmp/unassigned.devices/add-smb - mode to 0770
Jun  9 10:31:02 NAS root: plugin: running: anonymous
Jun  9 10:31:02 NAS unassigned.devices: Updating share settings...
Jun  9 10:31:02 NAS unassigned.devices: Share settings updated.
Jun  9 10:31:02 NAS root:
Jun  9 10:31:02 NAS root: -----------------------------------------------------------
Jun  9 10:31:02 NAS root:  unassigned.devices has been installed.
Jun  9 10:31:02 NAS root:  Copyright 2015, gfjardim
Jun  9 10:31:02 NAS root:  Copyright 2016-2023, dlandon
Jun  9 10:31:02 NAS root:  Version: 2023.06.02a
Jun  9 10:31:02 NAS root:
Jun  9 10:31:02 NAS root:  Note:
Jun  9 10:31:02 NAS root:  Install the Unassigned Devices Plus plugin if you need to
Jun  9 10:31:02 NAS root:  mount exFAT, HFS+, apfs file formats or format disks.
Jun  9 10:31:02 NAS root: -----------------------------------------------------------
Jun  9 10:31:02 NAS root:
Jun  9 10:31:02 NAS root: plugin: unassigned.devices.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:31:02 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:31:02 NAS root: plugin: installing: unassigned.devices.preclear.plg
Jun  9 10:31:02 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:31:02 NAS root: plugin: running: anonymous
Jun  9 10:31:02 NAS root: plugin: checking: /boot/config/plugins/unassigned.devices.preclear/unassigned.devices.preclear-2023.05.20.tgz - MD5
Jun  9 10:31:02 NAS root: plugin: skipping: /boot/config/plugins/unassigned.devices.preclear/unassigned.devices.preclear-2023.05.20.tgz already exists
Jun  9 10:31:02 NAS root: plugin: checking: /boot/config/plugins/unassigned.devices.preclear/tmux-3.1b-x86_64-1.txz - MD5
Jun  9 10:31:02 NAS root: plugin: skipping: /boot/config/plugins/unassigned.devices.preclear/tmux-3.1b-x86_64-1.txz already exists
Jun  9 10:31:02 NAS root: plugin: running: /boot/config/plugins/unassigned.devices.preclear/tmux-3.1b-x86_64-1.txz
Jun  9 10:31:02 NAS root:
Jun  9 10:31:02 NAS root: +==============================================================================
Jun  9 10:31:02 NAS root: | Installing new package /boot/config/plugins/unassigned.devices.preclear/tmux-3.1b-x86_64-1.txz
Jun  9 10:31:02 NAS root: +==============================================================================
Jun  9 10:31:02 NAS root:
Jun  9 10:31:02 NAS root: Verifying package tmux-3.1b-x86_64-1.txz.
Jun  9 10:31:02 NAS root: Installing package tmux-3.1b-x86_64-1.txz:
Jun  9 10:31:02 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:31:02 NAS root: # tmux (terminal multiplexer)
Jun  9 10:31:02 NAS root: #
Jun  9 10:31:02 NAS root: # tmux is a terminal multiplexer. It enables a number of terminals
Jun  9 10:31:02 NAS root: # (or windows) to be accessed and controlled from a single terminal.
Jun  9 10:31:02 NAS root: # tmux is intended to be a simple, modern, BSD-licensed alternative to
Jun  9 10:31:02 NAS root: # programs such as GNU screen.
Jun  9 10:31:02 NAS root: #
Jun  9 10:31:02 NAS root: # Homepage: https://github.com/tmux/tmux/wiki
Jun  9 10:31:02 NAS root: #
Jun  9 10:31:02 NAS root: Executing install script for tmux-3.1b-x86_64-1.txz.
Jun  9 10:31:02 NAS root: Package tmux-3.1b-x86_64-1.txz installed.
Jun  9 10:31:02 NAS root: plugin: checking: /boot/config/plugins/unassigned.devices.preclear/tmux-3.3a-x86_64-1.txz - MD5
Jun  9 10:31:02 NAS root: plugin: skipping: /boot/config/plugins/unassigned.devices.preclear/tmux-3.3a-x86_64-1.txz already exists
Jun  9 10:31:02 NAS root: plugin: running: /boot/config/plugins/unassigned.devices.preclear/tmux-3.3a-x86_64-1.txz
Jun  9 10:31:02 NAS root:
Jun  9 10:31:02 NAS root: +==============================================================================
Jun  9 10:31:02 NAS root: | Upgrading tmux-3.1b-x86_64-1 package using /boot/config/plugins/unassigned.devices.preclear/tmux-3.3a-x86_64-1.txz
Jun  9 10:31:02 NAS root: +==============================================================================
Jun  9 10:31:02 NAS root: Pre-installing package tmux-3.3a-x86_64-1...
Jun  9 10:31:03 NAS root: Removing package: tmux-3.1b-x86_64-1-upgraded-2023-06-09,10:31:02
Jun  9 10:31:03 NAS root:   --> Deleting /usr/doc/tmux-3.1b/CHANGES
Jun  9 10:31:03 NAS root:   --> Deleting /usr/doc/tmux-3.1b/README
Jun  9 10:31:03 NAS root:   --> Deleting /usr/doc/tmux-3.1b/example_tmux.conf
Jun  9 10:31:03 NAS root:   --> Deleting empty directory /usr/doc/tmux-3.1b/
Jun  9 10:31:03 NAS root: Verifying package tmux-3.3a-x86_64-1.txz.
Jun  9 10:31:03 NAS root: Installing package tmux-3.3a-x86_64-1.txz:
Jun  9 10:31:03 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:31:03 NAS root: # tmux (terminal multiplexer)
Jun  9 10:31:03 NAS root: #
Jun  9 10:31:03 NAS root: # tmux is a terminal multiplexer. It enables a number of terminals
Jun  9 10:31:03 NAS root: # (or windows) to be accessed and controlled from a single terminal.
Jun  9 10:31:03 NAS root: # tmux is intended to be a simple, modern, BSD-licensed alternative to
Jun  9 10:31:03 NAS root: # programs such as GNU screen.
Jun  9 10:31:03 NAS root: #
Jun  9 10:31:03 NAS root: # Homepage: https://github.com/tmux/tmux/wiki
Jun  9 10:31:03 NAS root: #
Jun  9 10:31:03 NAS root: Executing install script for tmux-3.3a-x86_64-1.txz.
Jun  9 10:31:03 NAS root: Package tmux-3.3a-x86_64-1.txz installed.
Jun  9 10:31:03 NAS root: Package tmux-3.1b-x86_64-1 upgraded with new package /boot/config/plugins/unassigned.devices.preclear/tmux-3.3a-x86_64-1.txz.
Jun  9 10:31:03 NAS root: plugin: checking: /boot/config/plugins/unassigned.devices.preclear/utempter-1.2.0-x86_64-3.txz - MD5
Jun  9 10:31:03 NAS root: plugin: skipping: /boot/config/plugins/unassigned.devices.preclear/utempter-1.2.0-x86_64-3.txz already exists
Jun  9 10:31:03 NAS root: plugin: running: /boot/config/plugins/unassigned.devices.preclear/utempter-1.2.0-x86_64-3.txz
Jun  9 10:31:03 NAS root:
Jun  9 10:31:03 NAS root: +==============================================================================
Jun  9 10:31:03 NAS root: | Skipping package utempter-1.2.0-x86_64-3 (already installed)
Jun  9 10:31:03 NAS root: +==============================================================================
Jun  9 10:31:03 NAS root:
Jun  9 10:31:03 NAS root: plugin: running: anonymous
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:31:03 NAS root: Checking tmux operation...
Jun  9 10:31:05 NAS root:
Jun  9 10:31:05 NAS root: -----------------------------------------------------------
Jun  9 10:31:05 NAS root:  unassigned.devices.preclear has been installed.
Jun  9 10:31:05 NAS root:  Copyright 2015-2020, gfjardim
Jun  9 10:31:05 NAS root:  Copyright 2022-2023, dlandon
Jun  9 10:31:05 NAS root:  Version: 2023.05.20
Jun  9 10:31:05 NAS root: -----------------------------------------------------------
Jun  9 10:31:05 NAS root:
Jun  9 10:31:05 NAS root: plugin: unassigned.devices.preclear.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:31:05 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:31:05 NAS root: plugin: installing: unbalance.plg
Jun  9 10:31:05 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:31:05 NAS root: plugin: checking: /boot/config/plugins/unbalance/unbalance-5.6.4.tgz - MD5
Jun  9 10:31:05 NAS root: plugin: skipping: /boot/config/plugins/unbalance/unbalance-5.6.4.tgz already exists
Jun  9 10:31:05 NAS root: plugin: skipping: /boot/config/plugins/unbalance/unbalance.cfg already exists
Jun  9 10:31:05 NAS root: plugin: running: anonymous
Jun  9 10:31:05 NAS root:
Jun  9 10:31:05 NAS root: -----------------------------------------------------------
Jun  9 10:31:05 NAS root:  unBALANCE has been installed.
Jun  9 10:31:05 NAS root:  Copyright (c) Juan B. Rodriguez
Jun  9 10:31:05 NAS root:  Version: v2021.04.21
Jun  9 10:31:05 NAS root: -----------------------------------------------------------
Jun  9 10:31:05 NAS root:
Jun  9 10:31:05 NAS root: plugin: creating: /tmp/unbalance-chkconf - from INLINE content
Jun  9 10:31:05 NAS root: plugin: running: /tmp/unbalance-chkconf
Jun  9 10:31:05 NAS root: plugin: creating: /tmp/unbalance-chkconf2 - from INLINE content
Jun  9 10:31:05 NAS root: plugin: running: /tmp/unbalance-chkconf2
Jun  9 10:31:06 NAS root: plugin: unbalance.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:31:06 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:31:06 NAS root: plugin: installing: user.scripts.plg
Jun  9 10:31:06 NAS root: Executing hook script: pre_plugin_checks
Jun  9 10:31:06 NAS root: plugin: checking: /boot/config/plugins/user.scripts/user.scripts-2023.03.29-x86_64-1.txz - MD5
Jun  9 10:31:06 NAS root: plugin: skipping: /boot/config/plugins/user.scripts/user.scripts-2023.03.29-x86_64-1.txz already exists
Jun  9 10:31:06 NAS root: plugin: running: /boot/config/plugins/user.scripts/user.scripts-2023.03.29-x86_64-1.txz
Jun  9 10:31:06 NAS root:
Jun  9 10:31:06 NAS root: +==============================================================================
Jun  9 10:31:06 NAS root: | Installing new package /boot/config/plugins/user.scripts/user.scripts-2023.03.29-x86_64-1.txz
Jun  9 10:31:06 NAS root: +==============================================================================
Jun  9 10:31:06 NAS root:
Jun  9 10:31:06 NAS root: Verifying package user.scripts-2023.03.29-x86_64-1.txz.
Jun  9 10:31:06 NAS root: Installing package user.scripts-2023.03.29-x86_64-1.txz:
Jun  9 10:31:06 NAS root: PACKAGE DESCRIPTION:
Jun  9 10:31:06 NAS root: Package user.scripts-2023.03.29-x86_64-1.txz installed.
Jun  9 10:31:06 NAS root: plugin: running: anonymous
Jun  9 10:31:06 NAS root:
### [PREVIOUS LINE REPEATED 2 TIMES] ###
Jun  9 10:31:06 NAS root: plugin: running: anonymous
Jun  9 10:31:06 NAS root:
Jun  9 10:31:06 NAS root: ----------------------------------------------------
Jun  9 10:31:06 NAS root:  user.scripts has been installed.
Jun  9 10:31:06 NAS root:  Copyright 2016-2023, Andrew Zawadzki
Jun  9 10:31:06 NAS root:  Version: 2023.03.29
Jun  9 10:31:06 NAS root: ----------------------------------------------------
Jun  9 10:31:06 NAS root:
Jun  9 10:31:06 NAS root: plugin: user.scripts.plg installed
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:31:06 NAS root: Executing hook script: post_plugin_checks
Jun  9 10:31:06 NAS root: Installing language packs
Jun  9 10:31:06 NAS root: Starting go script
Jun  9 10:31:06 NAS root: Starting emhttpd
Jun  9 10:31:06 NAS  emhttpd: Unraid(tm) System Management Utility version 6.11.5
Jun  9 10:31:06 NAS  emhttpd: Copyright (C) 2005-2022, Lime Technology, Inc.
Jun  9 10:31:06 NAS  emhttpd: unclean shutdown detected
Jun  9 10:31:06 NAS  emhttpd: shcmd (3): /usr/local/emhttp/webGui/scripts/update_access
Jun  9 10:31:06 NAS  sshd[1060]: Received signal 15; terminating.
Jun  9 10:31:07 NAS  sshd[7148]: Server listening on 0.0.0.0 port 22.
Jun  9 10:31:07 NAS  sshd[7148]: Server listening on :: port 22.
Jun  9 10:31:08 NAS  emhttpd: shcmd (5): modprobe md-mod super=/boot/config/super.dat
Jun  9 10:31:08 NAS kernel: md: unRAID driver 2.9.25 installed
Jun  9 10:31:08 NAS  emhttpd: Pro key detected, GUID: 0..1 FILE: /boot/config/Pro.key
Jun  9 10:31:08 NAS  emhttpd: Device inventory:
Jun  9 10:31:08 NAS  emhttpd: WDC_WD80EMAZ-00WJTA0_E4G2DUNK (sdm) 512 15628053168
Jun  9 10:31:08 NAS  emhttpd: WDC_WD80EMAZ-00WJTA0_7HKHW0VJ (sdj) 512 15628053168
Jun  9 10:31:08 NAS  emhttpd: WDC_WUH721816ALE6L4_2PH6DDTJ (sdk) 512 31251759104
Jun  9 10:31:08 NAS  emhttpd: WDC_WUH721816ALE6L4_2PH5L5KT (sdh) 512 31251759104
Jun  9 10:31:08 NAS  emhttpd: WDC_WD10EZEX-00BN5A0_WD-WCC3F5DKV2P7 (sdg) 512 1953525168
Jun  9 10:31:08 NAS  emhttpd: WDC_WD80EMAZ-00WJTA0_7JK43HLC (sdd) 512 15628053168
Jun  9 10:31:08 NAS  emhttpd: WDC_WD80EFBX-68AZZN0_VR0A0H1K (sde) 512 15628053168
Jun  9 10:31:08 NAS  emhttpd: SATA_SSD_18050224003847 (sdb) 512 468862128
Jun  9 10:31:08 NAS  emhttpd: WDC_WD80EFBX-68AZZN0_VR0A8EEK (sdf) 512 15628053168
Jun  9 10:31:08 NAS  emhttpd: WDC_WD80EDAZ-11TA3A0_VGH9DRMG (sdc) 512 15628053168
Jun  9 10:31:08 NAS  emhttpd: WDC_WUH721816ALE6L4_2PH63PRT (sdn) 512 31251759104
Jun  9 10:31:08 NAS  emhttpd: WDC_WD80EMAZ-00WJTA0_7JKX855C (sdo) 512 15628053168
Jun  9 10:31:08 NAS  emhttpd: WDC_WD80EMAZ-00WJTA0_7JK07S9C (sdl) 512 15628053168
Jun  9 10:31:08 NAS  emhttpd: WDC_WUH721816ALE6L4_2BKLY6ST (sdi) 512 31251759104
Jun  9 10:31:08 NAS  emhttpd: Samsung_Flash_Drive_0374722070003841-0:0 (sda) 512 125313283
Jun  9 10:31:08 NAS kernel: mdcmd (1): import 0 sdh 64 15625879500 0 WDC_WUH721816ALE6L4_2PH5L5KT
Jun  9 10:31:08 NAS kernel: md: import disk0: (sdh) WDC_WUH721816ALE6L4_2PH5L5KT size: 15625879500
Jun  9 10:31:08 NAS kernel: mdcmd (2): import 1 sdl 64 7814026532 0 WDC_WD80EMAZ-00WJTA0_7JK07S9C
Jun  9 10:31:08 NAS kernel: md: import disk1: (sdl) WDC_WD80EMAZ-00WJTA0_7JK07S9C size: 7814026532
Jun  9 10:31:08 NAS kernel: mdcmd (3): import 2 sdk 64 15625879500 0 WDC_WUH721816ALE6L4_2PH6DDTJ
Jun  9 10:31:08 NAS kernel: md: import disk2: (sdk) WDC_WUH721816ALE6L4_2PH6DDTJ size: 15625879500
Jun  9 10:31:08 NAS kernel: mdcmd (4): import 3 sdo 64 7814026532 0 WDC_WD80EMAZ-00WJTA0_7JKX855C
Jun  9 10:31:08 NAS kernel: md: import disk3: (sdo) WDC_WD80EMAZ-00WJTA0_7JKX855C size: 7814026532
Jun  9 10:31:08 NAS kernel: mdcmd (5): import 4 sdd 64 7814026532 0 WDC_WD80EMAZ-00WJTA0_7JK43HLC
Jun  9 10:31:08 NAS kernel: md: import disk4: (sdd) WDC_WD80EMAZ-00WJTA0_7JK43HLC size: 7814026532
Jun  9 10:31:08 NAS kernel: mdcmd (6): import 5 sdj 64 7814026532 0 WDC_WD80EMAZ-00WJTA0_7HKHW0VJ
Jun  9 10:31:08 NAS kernel: md: import disk5: (sdj) WDC_WD80EMAZ-00WJTA0_7HKHW0VJ size: 7814026532
Jun  9 10:31:08 NAS kernel: mdcmd (7): import 6 sdc 64 7814026532 0 WDC_WD80EDAZ-11TA3A0_VGH9DRMG
Jun  9 10:31:08 NAS kernel: md: import disk6: (sdc) WDC_WD80EDAZ-11TA3A0_VGH9DRMG size: 7814026532
Jun  9 10:31:08 NAS kernel: mdcmd (8): import 7 sde 64 7814026532 0 WDC_WD80EFBX-68AZZN0_VR0A0H1K
Jun  9 10:31:08 NAS kernel: md: import disk7: (sde) WDC_WD80EFBX-68AZZN0_VR0A0H1K size: 7814026532
Jun  9 10:31:08 NAS kernel: mdcmd (9): import 8 sdf 64 7814026532 0 WDC_WD80EFBX-68AZZN0_VR0A8EEK
Jun  9 10:31:08 NAS kernel: md: import disk8: (sdf) WDC_WD80EFBX-68AZZN0_VR0A8EEK size: 7814026532
Jun  9 10:31:08 NAS kernel: mdcmd (10): import 9 sdn 64 15625879500 0 WDC_WUH721816ALE6L4_2PH63PRT
Jun  9 10:31:08 NAS kernel: md: import disk9: (sdn) WDC_WUH721816ALE6L4_2PH63PRT size: 15625879500
Jun  9 10:31:08 NAS kernel: mdcmd (11): import 10
Jun  9 10:31:08 NAS kernel: mdcmd (12): import 11
Jun  9 10:31:08 NAS kernel: mdcmd (13): import 12
Jun  9 10:31:08 NAS kernel: mdcmd (14): import 13
Jun  9 10:31:08 NAS kernel: mdcmd (15): import 14
Jun  9 10:31:08 NAS kernel: mdcmd (16): import 15
Jun  9 10:31:08 NAS kernel: mdcmd (17): import 16
Jun  9 10:31:08 NAS kernel: mdcmd (18): import 17
Jun  9 10:31:08 NAS kernel: mdcmd (19): import 18
Jun  9 10:31:08 NAS kernel: mdcmd (20): import 19
Jun  9 10:31:08 NAS kernel: mdcmd (21): import 20
Jun  9 10:31:08 NAS kernel: mdcmd (22): import 21
Jun  9 10:31:08 NAS kernel: mdcmd (23): import 22
Jun  9 10:31:08 NAS kernel: mdcmd (24): import 23
Jun  9 10:31:08 NAS kernel: mdcmd (25): import 24
Jun  9 10:31:08 NAS kernel: mdcmd (26): import 25
Jun  9 10:31:08 NAS kernel: mdcmd (27): import 26
Jun  9 10:31:08 NAS kernel: mdcmd (28): import 27
Jun  9 10:31:08 NAS kernel: mdcmd (29): import 28
Jun  9 10:31:08 NAS kernel: mdcmd (30): import 29 sdi 64 15625879500 0 WDC_WUH721816ALE6L4_2BKLY6ST
Jun  9 10:31:08 NAS kernel: md: import disk29: (sdi) WDC_WUH721816ALE6L4_2BKLY6ST size: 15625879500
Jun  9 10:31:08 NAS  emhttpd: import 30 cache device: (sdb) SATA_SSD_18050224003847
Jun  9 10:31:08 NAS  emhttpd: import flash device: sda
Jun  9 10:31:08 NAS root: Starting apcupsd power management:  /sbin/apcupsd
Jun  9 10:31:09 NAS  emhttpd: read SMART /dev/sdm
Jun  9 10:31:09 NAS  emhttpd: read SMART /dev/sdj
Jun  9 10:31:09 NAS  emhttpd: read SMART /dev/sdk
Jun  9 10:31:09 NAS  emhttpd: read SMART /dev/sdh
Jun  9 10:31:09 NAS  emhttpd: read SMART /dev/sdg
Jun  9 10:31:09 NAS  emhttpd: read SMART /dev/sdd
Jun  9 10:31:09 NAS  emhttpd: read SMART /dev/sde
Jun  9 10:31:09 NAS  apcupsd[7184]: apcupsd 3.14.14 (31 May 2016) slackware startup succeeded
Jun  9 10:31:09 NAS  emhttpd: read SMART /dev/sdb
Jun  9 10:31:09 NAS  emhttpd: read SMART /dev/sdf
Jun  9 10:31:09 NAS  emhttpd: read SMART /dev/sdc
Jun  9 10:31:09 NAS  emhttpd: read SMART /dev/sdn
Jun  9 10:31:09 NAS  emhttpd: read SMART /dev/sdo
Jun  9 10:31:09 NAS  emhttpd: read SMART /dev/sdl
Jun  9 10:31:09 NAS  emhttpd: read SMART /dev/sdi
Jun  9 10:31:09 NAS  emhttpd: read SMART /dev/sda
Jun  9 10:31:09 NAS  apcupsd[7184]: NIS server startup succeeded
Jun  9 10:31:09 NAS  emhttpd: Starting services...
Jun  9 10:31:09 NAS  emhttpd: shcmd (19): /etc/rc.d/rc.samba restart
Jun  9 10:31:09 NAS  wsdd2[1120]: 'Terminated' signal received.
Jun  9 10:31:09 NAS  nmbd[1110]: [2023/06/09 10:31:09.251074,  0] ../../source3/nmbd/nmbd.c:59(terminate)
Jun  9 10:31:09 NAS  nmbd[1110]:   Got SIGTERM: going down...
Jun  9 10:31:09 NAS  winbindd[1123]: [2023/06/09 10:31:09.251110,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
Jun  9 10:31:09 NAS  winbindd[1123]:   Got sig[15] terminate (is_parent=1)
Jun  9 10:31:09 NAS  wsdd2[1120]: terminating.
Jun  9 10:31:09 NAS  winbindd[1125]: [2023/06/09 10:31:09.252144,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
Jun  9 10:31:09 NAS  winbindd[1125]:   Got sig[15] terminate (is_parent=0)
Jun  9 10:31:11 NAS root: Starting Samba:  /usr/sbin/smbd -D
Jun  9 10:31:11 NAS  smbd[7286]: [2023/06/09 10:31:11.420484,  0] ../../source3/smbd/server.c:1741(main)
Jun  9 10:31:11 NAS  smbd[7286]:   smbd version 4.17.3 started.
Jun  9 10:31:11 NAS  smbd[7286]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
Jun  9 10:31:11 NAS root:                  /usr/sbin/nmbd -D
Jun  9 10:31:11 NAS  nmbd[7289]: [2023/06/09 10:31:11.439535,  0] ../../source3/nmbd/nmbd.c:901(main)
Jun  9 10:31:11 NAS  nmbd[7289]:   nmbd version 4.17.3 started.
Jun  9 10:31:11 NAS  nmbd[7289]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
Jun  9 10:31:11 NAS root:                  /usr/sbin/wsdd2 -d
Jun  9 10:31:11 NAS root:                  /usr/sbin/winbindd -D
Jun  9 10:31:11 NAS  wsdd2[7302]: starting.
Jun  9 10:31:11 NAS  winbindd[7303]: [2023/06/09 10:31:11.524599,  0] ../../source3/winbindd/winbindd.c:1440(main)
Jun  9 10:31:11 NAS  winbindd[7303]:   winbindd version 4.17.3 started.
Jun  9 10:31:11 NAS  winbindd[7303]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
Jun  9 10:31:11 NAS  winbindd[7305]: [2023/06/09 10:31:11.528457,  0] ../../source3/winbindd/winbindd_cache.c:3116(initialize_winbindd_cache)
Jun  9 10:31:11 NAS  winbindd[7305]:   initialize_winbindd_cache: clearing cache and re-creating with version number 2
Jun  9 10:31:11 NAS  emhttpd: shcmd (23): /etc/rc.d/rc.avahidaemon start
Jun  9 10:31:11 NAS root: Starting Avahi mDNS/DNS-SD Daemon:  /usr/sbin/avahi-daemon -D
Jun  9 10:31:11 NAS  avahi-daemon[7322]: Found user 'avahi' (UID 61) and group 'avahi' (GID 214).
Jun  9 10:31:11 NAS  avahi-daemon[7322]: Successfully dropped root privileges.
Jun  9 10:31:11 NAS  avahi-daemon[7322]: avahi-daemon 0.8 starting up.
Jun  9 10:31:11 NAS  avahi-daemon[7322]: Successfully called chroot().
Jun  9 10:31:11 NAS  avahi-daemon[7322]: Successfully dropped remaining capabilities.
Jun  9 10:31:11 NAS  avahi-daemon[7322]: Loading service file /services/sftp-ssh.service.
Jun  9 10:31:11 NAS  avahi-daemon[7322]: Loading service file /services/smb.service.
Jun  9 10:31:11 NAS  avahi-daemon[7322]: Loading service file /services/ssh.service.
Jun  9 10:31:11 NAS  avahi-daemon[7322]: Joining mDNS multicast group on interface br0.IPv4 with address 192.168.1.8.
Jun  9 10:31:11 NAS  avahi-daemon[7322]: New relevant interface br0.IPv4 for mDNS.
Jun  9 10:31:11 NAS  avahi-daemon[7322]: Joining mDNS multicast group on interface lo.IPv6 with address ::1.
Jun  9 10:31:11 NAS  avahi-daemon[7322]: New relevant interface lo.IPv6 for mDNS.
Jun  9 10:31:11 NAS  avahi-daemon[7322]: Joining mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
Jun  9 10:31:11 NAS  avahi-daemon[7322]: New relevant interface lo.IPv4 for mDNS.
Jun  9 10:31:11 NAS  avahi-daemon[7322]: Network interface enumeration completed.
Jun  9 10:31:11 NAS  avahi-daemon[7322]: Registering new address record for 192.168.1.8 on br0.IPv4.
Jun  9 10:31:11 NAS  avahi-daemon[7322]: Registering new address record for ::1 on lo.*.
Jun  9 10:31:11 NAS  avahi-daemon[7322]: Registering new address record for 127.0.0.1 on lo.IPv4.
Jun  9 10:31:11 NAS  emhttpd: shcmd (24): /etc/rc.d/rc.avahidnsconfd start
Jun  9 10:31:11 NAS root: Starting Avahi mDNS/DNS-SD DNS Server Configuration Daemon:  /usr/sbin/avahi-dnsconfd -D
Jun  9 10:31:11 NAS  avahi-dnsconfd[7331]: Successfully connected to Avahi daemon.
Jun  9 10:31:11 NAS  emhttpd: Autostart enabled
Jun  9 10:31:12 NAS  avahi-daemon[7322]: Server startup complete. Host name is NAS.local. Local service cookie is 2432492006.
Jun  9 10:31:13 NAS  avahi-daemon[7322]: Service "NAS" (/services/ssh.service) successfully established.
Jun  9 10:31:13 NAS  avahi-daemon[7322]: Service "NAS" (/services/smb.service) successfully established.
Jun  9 10:31:13 NAS  avahi-daemon[7322]: Service "NAS" (/services/sftp-ssh.service) successfully established.
Jun  9 10:31:13 NAS  emhttpd: shcmd (29): /etc/rc.d/rc.php-fpm start
Jun  9 10:31:13 NAS root: Starting php-fpm  done
Jun  9 10:31:13 NAS  emhttpd: shcmd (30): /etc/rc.d/rc.unraid-api install
Jun  9 10:31:14 NAS root: unraid-api installed
Jun  9 10:31:14 NAS  emhttpd: shcmd (31): /etc/rc.d/rc.nginx start
Jun  9 10:31:14 NAS root: Starting Nginx server daemon...
Jun  9 10:31:15 NAS root: Starting [email protected]
Jun  9 10:31:15 NAS root: Watch Setup on Config Path: /boot/config/plugins/dynamix.my.servers/myservers.cfg
Jun  9 10:31:15 NAS  emhttpd: shcmd (32): /etc/rc.d/rc.flash_backup start
Jun  9 10:31:15 NAS flash_backup: flush: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
Jun  9 10:31:15 NAS flash_backup: start watching for file changes
Jun  9 10:31:16 NAS kernel: mdcmd (31): set md_num_stripes 1280
Jun  9 10:31:16 NAS kernel: mdcmd (32): set md_queue_limit 80
Jun  9 10:31:16 NAS kernel: mdcmd (33): set md_sync_limit 5
Jun  9 10:31:16 NAS kernel: mdcmd (34): set md_write_method
Jun  9 10:31:16 NAS kernel: mdcmd (35): start STOPPED
Jun  9 10:31:16 NAS kernel: unraid: allocating 56710K for 1280 stripes (11 disks)
Jun  9 10:31:16 NAS kernel: md1: running, size: 7814026532 blocks
Jun  9 10:31:16 NAS kernel: md2: running, size: 15625879500 blocks
Jun  9 10:31:16 NAS kernel: md3: running, size: 7814026532 blocks
Jun  9 10:31:16 NAS kernel: md4: running, size: 7814026532 blocks
Jun  9 10:31:16 NAS kernel: md5: running, size: 7814026532 blocks
Jun  9 10:31:16 NAS kernel: md6: running, size: 7814026532 blocks
Jun  9 10:31:16 NAS kernel: md7: running, size: 7814026532 blocks
Jun  9 10:31:16 NAS kernel: md8: running, size: 7814026532 blocks
Jun  9 10:31:16 NAS kernel: md9: running, size: 15625879500 blocks
Jun  9 10:31:16 NAS  emhttpd: shcmd (34): udevadm settle
Jun  9 10:31:16 NAS  emhttpd: Opening encrypted volumes...
Jun  9 10:31:16 NAS  emhttpd: shcmd (35): touch /boot/config/forcesync
Jun  9 10:31:16 NAS  emhttpd: Mounting disks...
Jun  9 10:31:17 NAS  emhttpd: shcmd (37): mkdir -p /mnt/disk1
Jun  9 10:31:17 NAS  emhttpd: shcmd (38): mount -t xfs -o noatime,nouuid /dev/md1 /mnt/disk1
Jun  9 10:31:17 NAS kernel: SGI XFS with ACLs, security attributes, no debug enabled
Jun  9 10:31:17 NAS kernel: XFS (md1): Mounting V5 Filesystem
Jun  9 10:31:17 NAS kernel: XFS (md1): Ending clean mount
Jun  9 10:31:17 NAS  emhttpd: shcmd (39): xfs_growfs /mnt/disk1
Jun  9 10:31:17 NAS kernel: xfs filesystem being mounted at /mnt/disk1 supports timestamps until 2038 (0x7fffffff)
Jun  9 10:31:17 NAS root: meta-data=/dev/md1               isize=512    agcount=8, agsize=268435455 blks
Jun  9 10:31:17 NAS root:          =                       sectsz=512   attr=2, projid32bit=1
Jun  9 10:31:17 NAS root:          =                       crc=1        finobt=1, sparse=0, rmapbt=0
Jun  9 10:31:17 NAS root:          =                       reflink=0    bigtime=0 inobtcount=0
Jun  9 10:31:17 NAS root: data     =                       bsize=4096   blocks=1953506633, imaxpct=5
Jun  9 10:31:17 NAS root:          =                       sunit=0      swidth=0 blks
Jun  9 10:31:17 NAS root: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
Jun  9 10:31:17 NAS root: log      =internal log           bsize=4096   blocks=521728, version=2
Jun  9 10:31:17 NAS root:          =                       sectsz=512   sunit=0 blks, lazy-count=1
Jun  9 10:31:17 NAS root: realtime =none                   extsz=4096   blocks=0, rtextents=0
Jun  9 10:31:17 NAS  emhttpd: shcmd (40): mkdir -p /mnt/disk2
Jun  9 10:31:17 NAS  emhttpd: shcmd (41): mount -t xfs -o noatime,nouuid /dev/md2 /mnt/disk2
Jun  9 10:31:17 NAS kernel: XFS (md2): Mounting V5 Filesystem
Jun  9 10:31:18 NAS kernel: XFS (md2): Ending clean mount
Jun  9 10:31:18 NAS  emhttpd: shcmd (42): xfs_growfs /mnt/disk2
Jun  9 10:31:18 NAS root: meta-data=/dev/md2               isize=512    agcount=15, agsize=268435455 blks
Jun  9 10:31:18 NAS root:          =                       sectsz=512   attr=2, projid32bit=1
Jun  9 10:31:18 NAS root:          =                       crc=1        finobt=1, sparse=1, rmapbt=0
Jun  9 10:31:18 NAS root:          =                       reflink=1    bigtime=1 inobtcount=1
Jun  9 10:31:18 NAS root: data     =                       bsize=4096   blocks=3906469875, imaxpct=5
Jun  9 10:31:18 NAS root:          =                       sunit=0      swidth=0 blks
Jun  9 10:31:18 NAS root: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
Jun  9 10:31:18 NAS root: log      =internal log           bsize=4096   blocks=521728, version=2
Jun  9 10:31:18 NAS root:          =                       sectsz=512   sunit=0 blks, lazy-count=1
Jun  9 10:31:18 NAS root: realtime =none                   extsz=4096   blocks=0, rtextents=0
Jun  9 10:31:18 NAS  emhttpd: shcmd (43): mkdir -p /mnt/disk3
Jun  9 10:31:18 NAS  emhttpd: shcmd (44): mount -t xfs -o noatime,nouuid /dev/md3 /mnt/disk3
Jun  9 10:31:18 NAS kernel: XFS (md3): Mounting V5 Filesystem
Jun  9 10:31:18 NAS unraid-api[7965]: ✔️ UNRAID API started successfully!
Jun  9 10:31:18 NAS kernel: XFS (md3): Ending clean mount
Jun  9 10:31:18 NAS  emhttpd: shcmd (45): xfs_growfs /mnt/disk3
Jun  9 10:31:18 NAS kernel: xfs filesystem being mounted at /mnt/disk3 supports timestamps until 2038 (0x7fffffff)
Jun  9 10:31:18 NAS root: meta-data=/dev/md3               isize=512    agcount=8, agsize=268435455 blks
Jun  9 10:31:18 NAS root:          =                       sectsz=512   attr=2, projid32bit=1
Jun  9 10:31:18 NAS root:          =                       crc=1        finobt=1, sparse=0, rmapbt=0
Jun  9 10:31:18 NAS root:          =                       reflink=0    bigtime=0 inobtcount=0
Jun  9 10:31:18 NAS root: data     =                       bsize=4096   blocks=1953506633, imaxpct=5
Jun  9 10:31:18 NAS root:          =                       sunit=0      swidth=0 blks
Jun  9 10:31:18 NAS root: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
Jun  9 10:31:18 NAS root: log      =internal log           bsize=4096   blocks=521728, version=2
Jun  9 10:31:18 NAS root:          =                       sectsz=512   sunit=0 blks, lazy-count=1
Jun  9 10:31:18 NAS root: realtime =none                   extsz=4096   blocks=0, rtextents=0
Jun  9 10:31:18 NAS  emhttpd: shcmd (46): mkdir -p /mnt/disk4
Jun  9 10:31:18 NAS  emhttpd: shcmd (47): mount -t xfs -o noatime,nouuid /dev/md4 /mnt/disk4
Jun  9 10:31:18 NAS kernel: XFS (md4): Mounting V5 Filesystem
Jun  9 10:31:18 NAS kernel: XFS (md4): Ending clean mount
Jun  9 10:31:18 NAS kernel: xfs filesystem being mounted at /mnt/disk4 supports timestamps until 2038 (0x7fffffff)
Jun  9 10:31:18 NAS  emhttpd: shcmd (48): xfs_growfs /mnt/disk4
Jun  9 10:31:18 NAS root: meta-data=/dev/md4               isize=512    agcount=8, agsize=268435455 blks
Jun  9 10:31:18 NAS root:          =                       sectsz=512   attr=2, projid32bit=1
Jun  9 10:31:18 NAS root:          =                       crc=1        finobt=1, sparse=0, rmapbt=0
Jun  9 10:31:18 NAS root:          =                       reflink=0    bigtime=0 inobtcount=0
Jun  9 10:31:18 NAS root: data     =                       bsize=4096   blocks=1953506633, imaxpct=5
Jun  9 10:31:18 NAS root:          =                       sunit=0      swidth=0 blks
Jun  9 10:31:18 NAS root: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
Jun  9 10:31:18 NAS root: log      =internal log           bsize=4096   blocks=521728, version=2
Jun  9 10:31:18 NAS root:          =                       sectsz=512   sunit=0 blks, lazy-count=1
Jun  9 10:31:18 NAS root: realtime =none                   extsz=4096   blocks=0, rtextents=0
Jun  9 10:31:18 NAS  emhttpd: shcmd (49): mkdir -p /mnt/disk5
Jun  9 10:31:18 NAS  emhttpd: shcmd (50): mount -t xfs -o noatime,nouuid /dev/md5 /mnt/disk5
Jun  9 10:31:18 NAS kernel: XFS (md5): Mounting V5 Filesystem
Jun  9 10:31:18 NAS kernel: XFS (md5): Ending clean mount
Jun  9 10:31:18 NAS kernel: xfs filesystem being mounted at /mnt/disk5 supports timestamps until 2038 (0x7fffffff)
Jun  9 10:31:18 NAS  emhttpd: shcmd (51): xfs_growfs /mnt/disk5
Jun  9 10:31:18 NAS root: meta-data=/dev/md5               isize=512    agcount=8, agsize=268435455 blks
Jun  9 10:31:18 NAS root:          =                       sectsz=512   attr=2, projid32bit=1
Jun  9 10:31:18 NAS root:          =                       crc=1        finobt=1, sparse=1, rmapbt=0
Jun  9 10:31:18 NAS root:          =                       reflink=0    bigtime=0 inobtcount=0
Jun  9 10:31:18 NAS root: data     =                       bsize=4096   blocks=1953506633, imaxpct=5
Jun  9 10:31:18 NAS root:          =                       sunit=0      swidth=0 blks
Jun  9 10:31:18 NAS root: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
Jun  9 10:31:18 NAS root: log      =internal log           bsize=4096   blocks=521728, version=2
Jun  9 10:31:18 NAS root:          =                       sectsz=512   sunit=0 blks, lazy-count=1
Jun  9 10:31:18 NAS root: realtime =none                   extsz=4096   blocks=0, rtextents=0
Jun  9 10:31:18 NAS  emhttpd: shcmd (52): mkdir -p /mnt/disk6
Jun  9 10:31:18 NAS  emhttpd: shcmd (53): mount -t xfs -o noatime,nouuid /dev/md6 /mnt/disk6
Jun  9 10:31:18 NAS kernel: XFS (md6): Mounting V5 Filesystem
Jun  9 10:31:19 NAS kernel: XFS (md6): Ending clean mount
Jun  9 10:31:19 NAS  emhttpd: shcmd (54): xfs_growfs /mnt/disk6
Jun  9 10:31:19 NAS kernel: xfs filesystem being mounted at /mnt/disk6 supports timestamps until 2038 (0x7fffffff)
Jun  9 10:31:19 NAS root: meta-data=/dev/md6               isize=512    agcount=8, agsize=268435455 blks
Jun  9 10:31:19 NAS root:          =                       sectsz=512   attr=2, projid32bit=1
Jun  9 10:31:19 NAS root:          =                       crc=1        finobt=1, sparse=1, rmapbt=0
Jun  9 10:31:19 NAS root:          =                       reflink=1    bigtime=0 inobtcount=0
Jun  9 10:31:19 NAS root: data     =                       bsize=4096   blocks=1953506633, imaxpct=5
Jun  9 10:31:19 NAS root:          =                       sunit=0      swidth=0 blks
Jun  9 10:31:19 NAS root: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
Jun  9 10:31:19 NAS root: log      =internal log           bsize=4096   blocks=521728, version=2
Jun  9 10:31:19 NAS root:          =                       sectsz=512   sunit=0 blks, lazy-count=1
Jun  9 10:31:19 NAS root: realtime =none                   extsz=4096   blocks=0, rtextents=0
Jun  9 10:31:19 NAS  emhttpd: shcmd (55): mkdir -p /mnt/disk7
Jun  9 10:31:19 NAS  emhttpd: shcmd (56): mount -t xfs -o noatime,nouuid /dev/md7 /mnt/disk7
Jun  9 10:31:19 NAS kernel: XFS (md7): Mounting V5 Filesystem
Jun  9 10:31:19 NAS kernel: XFS (md7): Ending clean mount
Jun  9 10:31:19 NAS kernel: xfs filesystem being mounted at /mnt/disk7 supports timestamps until 2038 (0x7fffffff)
Jun  9 10:31:19 NAS  emhttpd: shcmd (57): xfs_growfs /mnt/disk7
Jun  9 10:31:19 NAS root: meta-data=/dev/md7               isize=512    agcount=8, agsize=268435455 blks
Jun  9 10:31:19 NAS root:          =                       sectsz=512   attr=2, projid32bit=1
Jun  9 10:31:19 NAS root:          =                       crc=1        finobt=1, sparse=1, rmapbt=0
Jun  9 10:31:19 NAS root:          =                       reflink=1    bigtime=0 inobtcount=0
Jun  9 10:31:19 NAS root: data     =                       bsize=4096   blocks=1953506633, imaxpct=5
Jun  9 10:31:19 NAS root:          =                       sunit=0      swidth=0 blks
Jun  9 10:31:19 NAS root: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
Jun  9 10:31:19 NAS root: log      =internal log           bsize=4096   blocks=521728, version=2
Jun  9 10:31:19 NAS root:          =                       sectsz=512   sunit=0 blks, lazy-count=1
Jun  9 10:31:19 NAS root: realtime =none                   extsz=4096   blocks=0, rtextents=0
Jun  9 10:31:19 NAS  emhttpd: shcmd (58): mkdir -p /mnt/disk8
Jun  9 10:31:19 NAS  emhttpd: shcmd (59): mount -t xfs -o noatime,nouuid /dev/md8 /mnt/disk8
Jun  9 10:31:19 NAS kernel: XFS (md8): Mounting V5 Filesystem
Jun  9 10:31:19 NAS kernel: XFS (md8): Ending clean mount
Jun  9 10:31:19 NAS kernel: xfs filesystem being mounted at /mnt/disk8 supports timestamps until 2038 (0x7fffffff)
Jun  9 10:31:19 NAS  emhttpd: shcmd (60): xfs_growfs /mnt/disk8
Jun  9 10:31:19 NAS root: meta-data=/dev/md8               isize=512    agcount=8, agsize=268435455 blks
Jun  9 10:31:19 NAS root:          =                       sectsz=512   attr=2, projid32bit=1
Jun  9 10:31:19 NAS root:          =                       crc=1        finobt=1, sparse=1, rmapbt=0
Jun  9 10:31:19 NAS root:          =                       reflink=1    bigtime=0 inobtcount=0
Jun  9 10:31:19 NAS root: data     =                       bsize=4096   blocks=1953506633, imaxpct=5
Jun  9 10:31:19 NAS root:          =                       sunit=0      swidth=0 blks
Jun  9 10:31:19 NAS root: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
Jun  9 10:31:19 NAS root: log      =internal log           bsize=4096   blocks=521728, version=2
Jun  9 10:31:19 NAS root:          =                       sectsz=512   sunit=0 blks, lazy-count=1
Jun  9 10:31:19 NAS root: realtime =none                   extsz=4096   blocks=0, rtextents=0
Jun  9 10:31:19 NAS  emhttpd: shcmd (61): mkdir -p /mnt/disk9
Jun  9 10:31:19 NAS  emhttpd: shcmd (62): mount -t xfs -o noatime,nouuid /dev/md9 /mnt/disk9
Jun  9 10:31:19 NAS kernel: XFS (md9): Mounting V5 Filesystem
Jun  9 10:31:19 NAS kernel: XFS (md9): Ending clean mount
Jun  9 10:31:19 NAS  emhttpd: shcmd (63): xfs_growfs /mnt/disk9
Jun  9 10:31:19 NAS root: meta-data=/dev/md9               isize=512    agcount=15, agsize=268435455 blks
Jun  9 10:31:19 NAS root:          =                       sectsz=512   attr=2, projid32bit=1
Jun  9 10:31:19 NAS root:          =                       crc=1        finobt=1, sparse=1, rmapbt=0
Jun  9 10:31:19 NAS root:          =                       reflink=1    bigtime=1 inobtcount=1
Jun  9 10:31:19 NAS root: data     =                       bsize=4096   blocks=3906469875, imaxpct=5
Jun  9 10:31:19 NAS root:          =                       sunit=0      swidth=0 blks
Jun  9 10:31:19 NAS root: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
Jun  9 10:31:19 NAS root: log      =internal log           bsize=4096   blocks=521728, version=2
Jun  9 10:31:19 NAS root:          =                       sectsz=512   sunit=0 blks, lazy-count=1
Jun  9 10:31:19 NAS root: realtime =none                   extsz=4096   blocks=0, rtextents=0
Jun  9 10:31:19 NAS  emhttpd: shcmd (64): mkdir -p /mnt/cache
Jun  9 10:31:20 NAS  emhttpd: shcmd (65): mount -t btrfs -o noatime,space_cache=v2 /dev/sdb1 /mnt/cache
Jun  9 10:31:20 NAS kernel: BTRFS info (device sdb1): using free space tree
Jun  9 10:31:20 NAS kernel: BTRFS info (device sdb1): has skinny extents
Jun  9 10:31:20 NAS kernel: BTRFS info (device sdb1): enabling ssd optimizations
Jun  9 10:31:20 NAS  emhttpd: shcmd (66): sync
Jun  9 10:31:20 NAS  emhttpd: shcmd (67): mkdir /mnt/user0
Jun  9 10:31:20 NAS  emhttpd: shcmd (68): /usr/local/sbin/shfs /mnt/user0 -disks 1022 -o default_permissions,allow_other,noatime  |& logger
Jun  9 10:31:20 NAS  emhttpd: shcmd (69): mkdir /mnt/user
Jun  9 10:31:20 NAS  emhttpd: shcmd (70): /usr/local/sbin/shfs /mnt/user -disks 1023 -o default_permissions,allow_other,noatime -o remember=0  |& logger
Jun  9 10:31:20 NAS  emhttpd: shcmd (72): /usr/local/sbin/update_cron
Jun  9 10:31:20 NAS root: Delaying execution of fix common problems scan for 10 minutes
Jun  9 10:31:20 NAS Recycle Bin: Starting Recycle Bin
Jun  9 10:31:20 NAS  emhttpd: Starting Recycle Bin...
Jun  9 10:31:21 NAS unassigned.devices: Mounting 'Auto Mount' Devices...
Jun  9 10:31:21 NAS unassigned.devices: Mounting partition 'sdg1' at mountpoint '/mnt/disks/SeedBox Old'...
Jun  9 10:31:21 NAS unassigned.devices: Mount cmd: /sbin/mount -t 'ext4' -o rw,noatime,nodiratime,nodev,nosuid '/dev/sdg1' '/mnt/disks/SeedBox Old'
Jun  9 10:31:21 NAS kernel: EXT4-fs (sdg1): mounted filesystem with ordered data mode. Quota mode: disabled.
Jun  9 10:31:21 NAS unassigned.devices: Successfully mounted 'sdg1' on '/mnt/disks/SeedBox Old'.
Jun  9 10:31:21 NAS unassigned.devices: Adding SMB share 'SeedBox Old'.
Jun  9 10:31:21 NAS unassigned.devices: Warning: Unassigned Devices are not set to be shared with NFS.
Jun  9 10:31:21 NAS unassigned.devices: Disk with ID 'WDC_WD80EMAZ-00WJTA0_E4G2DUNK (sdm)' is not set to auto mount.
Jun  9 10:31:21 NAS  emhttpd: Starting services...
Jun  9 10:31:21 NAS  emhttpd: shcmd (74): /etc/rc.d/rc.samba restart
Jun  9 10:31:21 NAS  wsdd2[7302]: 'Terminated' signal received.
Jun  9 10:31:21 NAS  nmbd[7292]: [2023/06/09 10:31:21.412879,  0] ../../source3/nmbd/nmbd.c:59(terminate)
Jun  9 10:31:21 NAS  nmbd[7292]:   Got SIGTERM: going down...
Jun  9 10:31:21 NAS  wsdd2[7302]: terminating.
Jun  9 10:31:21 NAS  winbindd[7307]: [2023/06/09 10:31:21.413116,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
Jun  9 10:31:21 NAS  winbindd[7307]:   Got sig[15] terminate (is_parent=0)
Jun  9 10:31:21 NAS  winbindd[7305]: [2023/06/09 10:31:21.414084,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
Jun  9 10:31:21 NAS  winbindd[7305]:   Got sig[15] terminate (is_parent=1)
Jun  9 10:31:21 NAS  winbindd[7402]: [2023/06/09 10:31:21.415287,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
Jun  9 10:31:21 NAS  winbindd[7402]:   Got sig[15] terminate (is_parent=0)
Jun  9 10:31:23 NAS root: Starting Samba:  /usr/sbin/smbd -D
Jun  9 10:31:23 NAS root:                  /usr/sbin/nmbd -D
Jun  9 10:31:23 NAS root:                  /usr/sbin/wsdd2 -d
Jun  9 10:31:23 NAS  wsdd2[8565]: starting.
Jun  9 10:31:23 NAS root:                  /usr/sbin/winbindd -D
Jun  9 10:31:23 NAS  emhttpd: shcmd (78): /etc/rc.d/rc.avahidaemon restart
Jun  9 10:31:23 NAS root: Stopping Avahi mDNS/DNS-SD Daemon: stopped
Jun  9 10:31:23 NAS  avahi-daemon[7322]: Got SIGTERM, quitting.
Jun  9 10:31:23 NAS  avahi-daemon[7322]: Leaving mDNS multicast group on interface br0.IPv4 with address 192.168.1.8.
Jun  9 10:31:23 NAS  avahi-daemon[7322]: Leaving mDNS multicast group on interface lo.IPv6 with address ::1.
Jun  9 10:31:23 NAS  avahi-daemon[7322]: Leaving mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
Jun  9 10:31:23 NAS  avahi-dnsconfd[7331]: read(): EOF
Jun  9 10:31:23 NAS  avahi-daemon[7322]: avahi-daemon 0.8 exiting.
Jun  9 10:31:23 NAS root: Starting Avahi mDNS/DNS-SD Daemon:  /usr/sbin/avahi-daemon -D
Jun  9 10:31:23 NAS  avahi-daemon[8585]: Found user 'avahi' (UID 61) and group 'avahi' (GID 214).
Jun  9 10:31:23 NAS  avahi-daemon[8585]: Successfully dropped root privileges.
Jun  9 10:31:23 NAS  avahi-daemon[8585]: avahi-daemon 0.8 starting up.
Jun  9 10:31:23 NAS  avahi-daemon[8585]: Successfully called chroot().
Jun  9 10:31:23 NAS  avahi-daemon[8585]: Successfully dropped remaining capabilities.
Jun  9 10:31:23 NAS  avahi-daemon[8585]: Loading service file /services/sftp-ssh.service.
Jun  9 10:31:23 NAS  avahi-daemon[8585]: Loading service file /services/smb.service.
Jun  9 10:31:23 NAS  avahi-daemon[8585]: Loading service file /services/ssh.service.
Jun  9 10:31:23 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface br0.IPv4 with address 192.168.1.8.
Jun  9 10:31:23 NAS  avahi-daemon[8585]: New relevant interface br0.IPv4 for mDNS.
Jun  9 10:31:23 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface lo.IPv6 with address ::1.
Jun  9 10:31:23 NAS  avahi-daemon[8585]: New relevant interface lo.IPv6 for mDNS.
Jun  9 10:31:23 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
Jun  9 10:31:23 NAS  avahi-daemon[8585]: New relevant interface lo.IPv4 for mDNS.
Jun  9 10:31:23 NAS  avahi-daemon[8585]: Network interface enumeration completed.
Jun  9 10:31:23 NAS  avahi-daemon[8585]: Registering new address record for 192.168.1.8 on br0.IPv4.
Jun  9 10:31:23 NAS  avahi-daemon[8585]: Registering new address record for ::1 on lo.*.
Jun  9 10:31:23 NAS  avahi-daemon[8585]: Registering new address record for 127.0.0.1 on lo.IPv4.
Jun  9 10:31:23 NAS  emhttpd: shcmd (79): /etc/rc.d/rc.avahidnsconfd restart
Jun  9 10:31:23 NAS root: Stopping Avahi mDNS/DNS-SD DNS Server Configuration Daemon: stopped
Jun  9 10:31:23 NAS root: Starting Avahi mDNS/DNS-SD DNS Server Configuration Daemon:  /usr/sbin/avahi-dnsconfd -D
Jun  9 10:31:23 NAS  avahi-dnsconfd[8594]: Successfully connected to Avahi daemon.
Jun  9 10:31:23 NAS  emhttpd: shcmd (89): /usr/local/sbin/mount_image '/mnt/user/system/docker/docker.img' /var/lib/docker 50
Jun  9 10:31:24 NAS  avahi-daemon[8585]: Server startup complete. Host name is NAS.local. Local service cookie is 2009835645.
Jun  9 10:31:25 NAS  avahi-daemon[8585]: Service "NAS" (/services/ssh.service) successfully established.
Jun  9 10:31:25 NAS  avahi-daemon[8585]: Service "NAS" (/services/smb.service) successfully established.
Jun  9 10:31:25 NAS  avahi-daemon[8585]: Service "NAS" (/services/sftp-ssh.service) successfully established.
Jun  9 10:31:26 NAS kernel: loop2: detected capacity change from 0 to 104857600
Jun  9 10:31:26 NAS kernel: BTRFS: device fsid 20f23682-6a6e-4c05-8a09-02c79498b226 devid 1 transid 2840047 /dev/loop2 scanned by mount (8628)
Jun  9 10:31:26 NAS kernel: BTRFS info (device loop2): using free space tree
Jun  9 10:31:26 NAS kernel: BTRFS info (device loop2): has skinny extents
Jun  9 10:31:26 NAS kernel: BTRFS info (device loop2): enabling ssd optimizations
Jun  9 10:31:26 NAS root: Resize device id 1 (/dev/loop2) from 50.00GiB to max
Jun  9 10:31:26 NAS  emhttpd: shcmd (91): /etc/rc.d/rc.docker start
Jun  9 10:31:26 NAS root: starting dockerd ...
Jun  9 10:31:26 NAS kernel: Bridge firewalling registered
Jun  9 10:31:26 NAS kernel: Initializing XFRM netlink socket
Jun  9 10:31:26 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface br-63825e4fef5c.IPv4 with address 172.18.0.1.
Jun  9 10:31:26 NAS  avahi-daemon[8585]: New relevant interface br-63825e4fef5c.IPv4 for mDNS.
Jun  9 10:31:26 NAS  avahi-daemon[8585]: Registering new address record for 172.18.0.1 on br-63825e4fef5c.IPv4.
Jun  9 10:31:26 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
Jun  9 10:31:26 NAS  avahi-daemon[8585]: New relevant interface docker0.IPv4 for mDNS.
Jun  9 10:31:26 NAS  avahi-daemon[8585]: Registering new address record for 172.17.0.1 on docker0.IPv4.
Jun  9 10:31:27 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface br-f5d50c56c1b9.IPv4 with address 172.31.200.1.
Jun  9 10:31:27 NAS  avahi-daemon[8585]: New relevant interface br-f5d50c56c1b9.IPv4 for mDNS.
Jun  9 10:31:27 NAS  avahi-daemon[8585]: Registering new address record for 172.31.200.1 on br-f5d50c56c1b9.IPv4.
Jun  9 10:31:28 NAS rc.docker: created network br0 with subnets: 192.168.1.0/24;
Jun  9 10:31:28 NAS kernel: mdcmd (36): check correct
Jun  9 10:31:28 NAS kernel: md: recovery thread: check P Q ...
Jun  9 10:31:28 NAS kernel: br-63825e4fef5c: port 1(veth8f06a0e) entered blocking state
Jun  9 10:31:28 NAS kernel: br-63825e4fef5c: port 1(veth8f06a0e) entered disabled state
Jun  9 10:31:28 NAS kernel: device veth8f06a0e entered promiscuous mode
Jun  9 10:31:28 NAS kernel: br-63825e4fef5c: port 1(veth8f06a0e) entered blocking state
Jun  9 10:31:28 NAS kernel: br-63825e4fef5c: port 1(veth8f06a0e) entered forwarding state
Jun  9 10:31:28 NAS kernel: br-63825e4fef5c: port 1(veth8f06a0e) entered disabled state
Jun  9 10:31:28 NAS kernel: eth0: renamed from veth3d6562b
Jun  9 10:31:28 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth8f06a0e: link becomes ready
Jun  9 10:31:28 NAS kernel: br-63825e4fef5c: port 1(veth8f06a0e) entered blocking state
Jun  9 10:31:28 NAS kernel: br-63825e4fef5c: port 1(veth8f06a0e) entered forwarding state
Jun  9 10:31:28 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): br-63825e4fef5c: link becomes ready
Jun  9 10:31:28 NAS rc.docker: Dozzle: started succesfully!
Jun  9 10:31:29 NAS kernel: br-63825e4fef5c: port 2(vethd2fdef1) entered blocking state
Jun  9 10:31:29 NAS kernel: br-63825e4fef5c: port 2(vethd2fdef1) entered disabled state
Jun  9 10:31:29 NAS kernel: device vethd2fdef1 entered promiscuous mode
Jun  9 10:31:29 NAS kernel: eth0: renamed from veth808f9ac
Jun  9 10:31:29 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethd2fdef1: link becomes ready
Jun  9 10:31:29 NAS kernel: br-63825e4fef5c: port 2(vethd2fdef1) entered blocking state
Jun  9 10:31:29 NAS kernel: br-63825e4fef5c: port 2(vethd2fdef1) entered forwarding state
Jun  9 10:31:29 NAS rc.docker: mariadb: started succesfully!
Jun  9 10:31:29 NAS kernel: br-63825e4fef5c: port 3(veth7e35482) entered blocking state
Jun  9 10:31:29 NAS kernel: br-63825e4fef5c: port 3(veth7e35482) entered disabled state
Jun  9 10:31:29 NAS kernel: device veth7e35482 entered promiscuous mode
Jun  9 10:31:29 NAS kernel: br-63825e4fef5c: port 3(veth7e35482) entered blocking state
Jun  9 10:31:29 NAS kernel: br-63825e4fef5c: port 3(veth7e35482) entered forwarding state
Jun  9 10:31:29 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface veth8f06a0e.IPv6 with address fe80::c868:3eff:fe1d:6da6.
Jun  9 10:31:29 NAS  avahi-daemon[8585]: New relevant interface veth8f06a0e.IPv6 for mDNS.
Jun  9 10:31:29 NAS  avahi-daemon[8585]: Registering new address record for fe80::c868:3eff:fe1d:6da6 on veth8f06a0e.*.
Jun  9 10:31:30 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface br-63825e4fef5c.IPv6 with address fe80::42:ceff:fe00:1314.
Jun  9 10:31:30 NAS  avahi-daemon[8585]: New relevant interface br-63825e4fef5c.IPv6 for mDNS.
Jun  9 10:31:30 NAS  avahi-daemon[8585]: Registering new address record for fe80::42:ceff:fe00:1314 on br-63825e4fef5c.*.
Jun  9 10:31:30 NAS kernel: br-63825e4fef5c: port 3(veth7e35482) entered disabled state
Jun  9 10:31:30 NAS kernel: eth0: renamed from vethea4b0cf
Jun  9 10:31:30 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth7e35482: link becomes ready
Jun  9 10:31:30 NAS kernel: br-63825e4fef5c: port 3(veth7e35482) entered blocking state
Jun  9 10:31:30 NAS kernel: br-63825e4fef5c: port 3(veth7e35482) entered forwarding state
Jun  9 10:31:30 NAS rc.docker: binhex-delugevpn: started succesfully!
Jun  9 10:31:30 NAS unassigned.devices: Mounting 'Auto Mount' Remote Shares...
Jun  9 10:31:30 NAS  sudo:     root : PWD=/ ; USER=root ; COMMAND=/bin/bash -c /usr/local/emhttp/plugins/unbalance/unbalance -port 6237
Jun  9 10:31:30 NAS  sudo: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jun  9 10:31:30 NAS unassigned.devices: Using Gateway '192.168.1.1' to Ping Remote Shares.
Jun  9 10:31:30 NAS unassigned.devices: Waiting 5 secs before mounting Remote Shares...
Jun  9 10:31:30 NAS kernel: br-63825e4fef5c: port 4(veth530c71d) entered blocking state
Jun  9 10:31:30 NAS kernel: br-63825e4fef5c: port 4(veth530c71d) entered disabled state
Jun  9 10:31:30 NAS kernel: device veth530c71d entered promiscuous mode
Jun  9 10:31:30 NAS kernel: br-63825e4fef5c: port 4(veth530c71d) entered blocking state
Jun  9 10:31:30 NAS kernel: br-63825e4fef5c: port 4(veth530c71d) entered forwarding state
Jun  9 10:31:31 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface vethd2fdef1.IPv6 with address fe80::f8b5:1aff:fedb:782a.
Jun  9 10:31:31 NAS  avahi-daemon[8585]: New relevant interface vethd2fdef1.IPv6 for mDNS.
Jun  9 10:31:31 NAS  avahi-daemon[8585]: Registering new address record for fe80::f8b5:1aff:fedb:782a on vethd2fdef1.*.
Jun  9 10:31:31 NAS kernel: br-63825e4fef5c: port 4(veth530c71d) entered disabled state
Jun  9 10:31:31 NAS kernel: eth0: renamed from vethb727dbf
Jun  9 10:31:31 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth530c71d: link becomes ready
Jun  9 10:31:31 NAS kernel: br-63825e4fef5c: port 4(veth530c71d) entered blocking state
Jun  9 10:31:31 NAS kernel: br-63825e4fef5c: port 4(veth530c71d) entered forwarding state
Jun  9 10:31:31 NAS rc.docker: binhex-prowlarr: started succesfully!
Jun  9 10:31:31 NAS kernel: br-63825e4fef5c: port 5(veth38bd0d7) entered blocking state
Jun  9 10:31:31 NAS kernel: br-63825e4fef5c: port 5(veth38bd0d7) entered disabled state
Jun  9 10:31:31 NAS kernel: device veth38bd0d7 entered promiscuous mode
Jun  9 10:31:31 NAS kernel: br-63825e4fef5c: port 5(veth38bd0d7) entered blocking state
Jun  9 10:31:31 NAS kernel: br-63825e4fef5c: port 5(veth38bd0d7) entered forwarding state
Jun  9 10:31:31 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface veth7e35482.IPv6 with address fe80::9833:71ff:feb7:8156.
Jun  9 10:31:31 NAS  avahi-daemon[8585]: New relevant interface veth7e35482.IPv6 for mDNS.
Jun  9 10:31:31 NAS  avahi-daemon[8585]: Registering new address record for fe80::9833:71ff:feb7:8156 on veth7e35482.*.
Jun  9 10:31:32 NAS kernel: br-63825e4fef5c: port 5(veth38bd0d7) entered disabled state
Jun  9 10:31:32 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface veth530c71d.IPv6 with address fe80::ac7b:35ff:fe85:5947.
Jun  9 10:31:32 NAS  avahi-daemon[8585]: New relevant interface veth530c71d.IPv6 for mDNS.
Jun  9 10:31:32 NAS  avahi-daemon[8585]: Registering new address record for fe80::ac7b:35ff:fe85:5947 on veth530c71d.*.
Jun  9 10:31:32 NAS kernel: eth0: renamed from vethac8405e
Jun  9 10:31:32 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth38bd0d7: link becomes ready
Jun  9 10:31:32 NAS kernel: br-63825e4fef5c: port 5(veth38bd0d7) entered blocking state
Jun  9 10:31:32 NAS kernel: br-63825e4fef5c: port 5(veth38bd0d7) entered forwarding state
Jun  9 10:31:33 NAS rc.docker: Cloudflare-DDNS: started succesfully!
Jun  9 10:31:33 NAS kernel: br-63825e4fef5c: port 6(vetha0840bf) entered blocking state
Jun  9 10:31:33 NAS kernel: br-63825e4fef5c: port 6(vetha0840bf) entered disabled state
Jun  9 10:31:33 NAS kernel: device vetha0840bf entered promiscuous mode
Jun  9 10:31:34 NAS kernel: eth0: renamed from veth1b96d32
Jun  9 10:31:34 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vetha0840bf: link becomes ready
Jun  9 10:31:34 NAS kernel: br-63825e4fef5c: port 6(vetha0840bf) entered blocking state
Jun  9 10:31:34 NAS kernel: br-63825e4fef5c: port 6(vetha0840bf) entered forwarding state
Jun  9 10:31:34 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface veth38bd0d7.IPv6 with address fe80::f02d:80ff:fedb:9d35.
Jun  9 10:31:34 NAS  avahi-daemon[8585]: New relevant interface veth38bd0d7.IPv6 for mDNS.
Jun  9 10:31:34 NAS  avahi-daemon[8585]: Registering new address record for fe80::f02d:80ff:fedb:9d35 on veth38bd0d7.*.
Jun  9 10:31:34 NAS rc.docker: cloudflarezerotrust: started succesfully!
Jun  9 10:31:34 NAS kernel: br-63825e4fef5c: port 7(vethb9ac8b9) entered blocking state
Jun  9 10:31:34 NAS kernel: br-63825e4fef5c: port 7(vethb9ac8b9) entered disabled state
Jun  9 10:31:34 NAS kernel: device vethb9ac8b9 entered promiscuous mode
Jun  9 10:31:35 NAS kernel: eth0: renamed from veth44adcc5
Jun  9 10:31:35 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethb9ac8b9: link becomes ready
Jun  9 10:31:35 NAS kernel: br-63825e4fef5c: port 7(vethb9ac8b9) entered blocking state
Jun  9 10:31:35 NAS kernel: br-63825e4fef5c: port 7(vethb9ac8b9) entered forwarding state
Jun  9 10:31:35 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface vetha0840bf.IPv6 with address fe80::64a1:dff:feb9:6383.
Jun  9 10:31:35 NAS  avahi-daemon[8585]: New relevant interface vetha0840bf.IPv6 for mDNS.
Jun  9 10:31:35 NAS  avahi-daemon[8585]: Registering new address record for fe80::64a1:dff:feb9:6383 on vetha0840bf.*.
Jun  9 10:31:35 NAS rc.docker: heimdall: started succesfully!
Jun  9 10:31:35 NAS kernel: br-63825e4fef5c: port 8(vethaaa583c) entered blocking state
Jun  9 10:31:35 NAS kernel: br-63825e4fef5c: port 8(vethaaa583c) entered disabled state
Jun  9 10:31:35 NAS kernel: device vethaaa583c entered promiscuous mode
Jun  9 10:31:36 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface vethb9ac8b9.IPv6 with address fe80::c4df:b1ff:fe8a:12ea.
Jun  9 10:31:36 NAS  avahi-daemon[8585]: New relevant interface vethb9ac8b9.IPv6 for mDNS.
Jun  9 10:31:36 NAS  avahi-daemon[8585]: Registering new address record for fe80::c4df:b1ff:fe8a:12ea on vethb9ac8b9.*.
Jun  9 10:31:37 NAS kernel: eth0: renamed from vethf97060b
Jun  9 10:31:37 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethaaa583c: link becomes ready
Jun  9 10:31:37 NAS kernel: br-63825e4fef5c: port 8(vethaaa583c) entered blocking state
Jun  9 10:31:37 NAS kernel: br-63825e4fef5c: port 8(vethaaa583c) entered forwarding state
Jun  9 10:31:37 NAS rc.docker: nextcloud: started succesfully!
Jun  9 10:31:37 NAS kernel: br-63825e4fef5c: port 9(veth7d055ef) entered blocking state
Jun  9 10:31:37 NAS kernel: br-63825e4fef5c: port 9(veth7d055ef) entered disabled state
Jun  9 10:31:37 NAS kernel: device veth7d055ef entered promiscuous mode
Jun  9 10:31:37 NAS kernel: br-63825e4fef5c: port 9(veth7d055ef) entered blocking state
Jun  9 10:31:37 NAS kernel: br-63825e4fef5c: port 9(veth7d055ef) entered forwarding state
Jun  9 10:31:38 NAS kernel: br-63825e4fef5c: port 9(veth7d055ef) entered disabled state
Jun  9 10:31:38 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface vethaaa583c.IPv6 with address fe80::8c03:c0ff:fea4:afca.
Jun  9 10:31:38 NAS  avahi-daemon[8585]: New relevant interface vethaaa583c.IPv6 for mDNS.
Jun  9 10:31:38 NAS  avahi-daemon[8585]: Registering new address record for fe80::8c03:c0ff:fea4:afca on vethaaa583c.*.
Jun  9 10:31:39 NAS kernel: eth0: renamed from vethd3c52c2
Jun  9 10:31:39 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth7d055ef: link becomes ready
Jun  9 10:31:39 NAS kernel: br-63825e4fef5c: port 9(veth7d055ef) entered blocking state
Jun  9 10:31:39 NAS kernel: br-63825e4fef5c: port 9(veth7d055ef) entered forwarding state
Jun  9 10:31:39 NAS rc.docker: overseerr: started succesfully!
Jun  9 10:31:39 NAS kernel: br-63825e4fef5c: port 10(vethb668373) entered blocking state
Jun  9 10:31:39 NAS kernel: br-63825e4fef5c: port 10(vethb668373) entered disabled state
Jun  9 10:31:39 NAS kernel: device vethb668373 entered promiscuous mode
Jun  9 10:31:39 NAS kernel: br-63825e4fef5c: port 10(vethb668373) entered blocking state
Jun  9 10:31:39 NAS kernel: br-63825e4fef5c: port 10(vethb668373) entered forwarding state
Jun  9 10:31:40 NAS kernel: br-63825e4fef5c: port 10(vethb668373) entered disabled state
Jun  9 10:31:40 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface veth7d055ef.IPv6 with address fe80::e8c1:b4ff:fedd:9642.
Jun  9 10:31:40 NAS  avahi-daemon[8585]: New relevant interface veth7d055ef.IPv6 for mDNS.
Jun  9 10:31:40 NAS  avahi-daemon[8585]: Registering new address record for fe80::e8c1:b4ff:fedd:9642 on veth7d055ef.*.
Jun  9 10:31:41 NAS kernel: eth0: renamed from vethe7b0fff
Jun  9 10:31:41 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethb668373: link becomes ready
Jun  9 10:31:41 NAS kernel: br-63825e4fef5c: port 10(vethb668373) entered blocking state
Jun  9 10:31:41 NAS kernel: br-63825e4fef5c: port 10(vethb668373) entered forwarding state
Jun  9 10:31:41 NAS rc.docker: PhotoPrism: started succesfully!
Jun  9 10:31:42 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface vethb668373.IPv6 with address fe80::b476:e9ff:fe18:5ea.
Jun  9 10:31:42 NAS  avahi-daemon[8585]: New relevant interface vethb668373.IPv6 for mDNS.
Jun  9 10:31:42 NAS  avahi-daemon[8585]: Registering new address record for fe80::b476:e9ff:fe18:5ea on vethb668373.*.
Jun  9 10:31:43 NAS rc.docker: PlexMediaServer: started succesfully!
Jun  9 10:31:43 NAS kernel: br-63825e4fef5c: port 11(veth724fad4) entered blocking state
Jun  9 10:31:43 NAS kernel: br-63825e4fef5c: port 11(veth724fad4) entered disabled state
Jun  9 10:31:43 NAS kernel: device veth724fad4 entered promiscuous mode
Jun  9 10:31:45 NAS kernel: eth0: renamed from vethf711ec2
Jun  9 10:31:45 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth724fad4: link becomes ready
Jun  9 10:31:45 NAS kernel: br-63825e4fef5c: port 11(veth724fad4) entered blocking state
Jun  9 10:31:45 NAS kernel: br-63825e4fef5c: port 11(veth724fad4) entered forwarding state
Jun  9 10:31:45 NAS rc.docker: pyload-ng: started succesfully!
Jun  9 10:31:45 NAS kernel: br-63825e4fef5c: port 12(veth8a45ed4) entered blocking state
Jun  9 10:31:45 NAS kernel: br-63825e4fef5c: port 12(veth8a45ed4) entered disabled state
Jun  9 10:31:45 NAS kernel: device veth8a45ed4 entered promiscuous mode
Jun  9 10:31:45 NAS kernel: br-63825e4fef5c: port 12(veth8a45ed4) entered blocking state
Jun  9 10:31:45 NAS kernel: br-63825e4fef5c: port 12(veth8a45ed4) entered forwarding state
Jun  9 10:31:46 NAS kernel: br-63825e4fef5c: port 12(veth8a45ed4) entered disabled state
Jun  9 10:31:47 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface veth724fad4.IPv6 with address fe80::5c86:aaff:fe93:b7bb.
Jun  9 10:31:47 NAS  avahi-daemon[8585]: New relevant interface veth724fad4.IPv6 for mDNS.
Jun  9 10:31:47 NAS  avahi-daemon[8585]: Registering new address record for fe80::5c86:aaff:fe93:b7bb on veth724fad4.*.
Jun  9 10:31:47 NAS kernel: eth0: renamed from vethbdf2193
Jun  9 10:31:47 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth8a45ed4: link becomes ready
Jun  9 10:31:47 NAS kernel: br-63825e4fef5c: port 12(veth8a45ed4) entered blocking state
Jun  9 10:31:47 NAS kernel: br-63825e4fef5c: port 12(veth8a45ed4) entered forwarding state
Jun  9 10:31:48 NAS rc.docker: radarr: started succesfully!
Jun  9 10:31:48 NAS kernel: br-63825e4fef5c: port 13(veth7c46055) entered blocking state
Jun  9 10:31:48 NAS kernel: br-63825e4fef5c: port 13(veth7c46055) entered disabled state
Jun  9 10:31:48 NAS kernel: device veth7c46055 entered promiscuous mode
Jun  9 10:31:48 NAS kernel: br-63825e4fef5c: port 13(veth7c46055) entered blocking state
Jun  9 10:31:48 NAS kernel: br-63825e4fef5c: port 13(veth7c46055) entered forwarding state
Jun  9 10:31:48 NAS kernel: br-63825e4fef5c: port 13(veth7c46055) entered disabled state
Jun  9 10:31:49 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface veth8a45ed4.IPv6 with address fe80::3c05:1ff:fe7f:21a.
Jun  9 10:31:49 NAS  avahi-daemon[8585]: New relevant interface veth8a45ed4.IPv6 for mDNS.
Jun  9 10:31:49 NAS  avahi-daemon[8585]: Registering new address record for fe80::3c05:1ff:fe7f:21a on veth8a45ed4.*.
Jun  9 10:31:50 NAS kernel: eth0: renamed from vetha080f1c
Jun  9 10:31:50 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth7c46055: link becomes ready
Jun  9 10:31:50 NAS kernel: br-63825e4fef5c: port 13(veth7c46055) entered blocking state
Jun  9 10:31:50 NAS kernel: br-63825e4fef5c: port 13(veth7c46055) entered forwarding state
Jun  9 10:31:51 NAS rc.docker: sonarr: started succesfully!
Jun  9 10:31:52 NAS kernel: br-63825e4fef5c: port 14(vethcfb7e53) entered blocking state
Jun  9 10:31:52 NAS kernel: br-63825e4fef5c: port 14(vethcfb7e53) entered disabled state
Jun  9 10:31:52 NAS kernel: device vethcfb7e53 entered promiscuous mode
Jun  9 10:31:52 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface veth7c46055.IPv6 with address fe80::d8e3:76ff:fe13:4660.
Jun  9 10:31:52 NAS  avahi-daemon[8585]: New relevant interface veth7c46055.IPv6 for mDNS.
Jun  9 10:31:52 NAS  avahi-daemon[8585]: Registering new address record for fe80::d8e3:76ff:fe13:4660 on veth7c46055.*.
Jun  9 10:31:54 NAS kernel: eth0: renamed from veth3fb60ed
Jun  9 10:31:54 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethcfb7e53: link becomes ready
Jun  9 10:31:54 NAS kernel: br-63825e4fef5c: port 14(vethcfb7e53) entered blocking state
Jun  9 10:31:54 NAS kernel: br-63825e4fef5c: port 14(vethcfb7e53) entered forwarding state
Jun  9 10:31:54 NAS rc.docker: tautulli: started succesfully!
Jun  9 10:31:55 NAS kernel: br-63825e4fef5c: port 15(vethf20fc8c) entered blocking state
Jun  9 10:31:55 NAS kernel: br-63825e4fef5c: port 15(vethf20fc8c) entered disabled state
Jun  9 10:31:55 NAS kernel: device vethf20fc8c entered promiscuous mode
Jun  9 10:31:55 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface vethcfb7e53.IPv6 with address fe80::a8b4:48ff:fe46:ffe6.
Jun  9 10:31:55 NAS  avahi-daemon[8585]: New relevant interface vethcfb7e53.IPv6 for mDNS.
Jun  9 10:31:55 NAS  avahi-daemon[8585]: Registering new address record for fe80::a8b4:48ff:fe46:ffe6 on vethcfb7e53.*.
Jun  9 10:31:58 NAS kernel: eth0: renamed from veth8ec32ac
Jun  9 10:31:58 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethf20fc8c: link becomes ready
Jun  9 10:31:58 NAS kernel: br-63825e4fef5c: port 15(vethf20fc8c) entered blocking state
Jun  9 10:31:58 NAS kernel: br-63825e4fef5c: port 15(vethf20fc8c) entered forwarding state
Jun  9 10:31:59 NAS rc.docker: unifi-controller: started succesfully!
Jun  9 10:32:00 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface vethf20fc8c.IPv6 with address fe80::945f:c0ff:fe48:5bd.
Jun  9 10:32:00 NAS  avahi-daemon[8585]: New relevant interface vethf20fc8c.IPv6 for mDNS.
Jun  9 10:32:00 NAS  avahi-daemon[8585]: Registering new address record for fe80::945f:c0ff:fe48:5bd on vethf20fc8c.*.
Jun  9 10:32:00 NAS kernel: br-63825e4fef5c: port 16(vethaa64582) entered blocking state
Jun  9 10:32:00 NAS kernel: br-63825e4fef5c: port 16(vethaa64582) entered disabled state
Jun  9 10:32:00 NAS kernel: device vethaa64582 entered promiscuous mode
Jun  9 10:32:00 NAS kernel: br-63825e4fef5c: port 16(vethaa64582) entered blocking state
Jun  9 10:32:00 NAS kernel: br-63825e4fef5c: port 16(vethaa64582) entered forwarding state
Jun  9 10:32:00 NAS kernel: br-63825e4fef5c: port 16(vethaa64582) entered disabled state
Jun  9 10:32:02 NAS kernel: eth0: renamed from veth818133e
Jun  9 10:32:02 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethaa64582: link becomes ready
Jun  9 10:32:02 NAS kernel: br-63825e4fef5c: port 16(vethaa64582) entered blocking state
Jun  9 10:32:02 NAS kernel: br-63825e4fef5c: port 16(vethaa64582) entered forwarding state
Jun  9 10:32:03 NAS rc.docker: Flood-UI: started succesfully!
Jun  9 10:32:03 NAS rc.docker: Flood-UI: wait 180 seconds
Jun  9 10:32:03 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface vethaa64582.IPv6 with address fe80::e860:96ff:fe7d:f44f.
Jun  9 10:32:03 NAS  avahi-daemon[8585]: New relevant interface vethaa64582.IPv6 for mDNS.
Jun  9 10:32:03 NAS  avahi-daemon[8585]: Registering new address record for fe80::e860:96ff:fe7d:f44f on vethaa64582.*.
Jun  9 10:32:11 NAS kernel: veth818133e: renamed from eth0
Jun  9 10:32:11 NAS kernel: br-63825e4fef5c: port 16(vethaa64582) entered disabled state
Jun  9 10:32:11 NAS  avahi-daemon[8585]: Interface vethaa64582.IPv6 no longer relevant for mDNS.
Jun  9 10:32:11 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface vethaa64582.IPv6 with address fe80::e860:96ff:fe7d:f44f.
Jun  9 10:32:11 NAS kernel: br-63825e4fef5c: port 16(vethaa64582) entered disabled state
Jun  9 10:32:11 NAS kernel: device vethaa64582 left promiscuous mode
Jun  9 10:32:11 NAS kernel: br-63825e4fef5c: port 16(vethaa64582) entered disabled state
Jun  9 10:32:11 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::e860:96ff:fe7d:f44f on vethaa64582.
Jun  9 10:32:12 NAS kernel: br-63825e4fef5c: port 16(veth74b8c62) entered blocking state
Jun  9 10:32:12 NAS kernel: br-63825e4fef5c: port 16(veth74b8c62) entered disabled state
Jun  9 10:32:12 NAS kernel: device veth74b8c62 entered promiscuous mode
Jun  9 10:32:12 NAS kernel: br-63825e4fef5c: port 16(veth74b8c62) entered blocking state
Jun  9 10:32:12 NAS kernel: br-63825e4fef5c: port 16(veth74b8c62) entered forwarding state
Jun  9 10:32:12 NAS kernel: br-63825e4fef5c: port 16(veth74b8c62) entered disabled state
Jun  9 10:32:14 NAS kernel: eth0: renamed from vethc68178a
Jun  9 10:32:14 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth74b8c62: link becomes ready
Jun  9 10:32:14 NAS kernel: br-63825e4fef5c: port 16(veth74b8c62) entered blocking state
Jun  9 10:32:14 NAS kernel: br-63825e4fef5c: port 16(veth74b8c62) entered forwarding state
Jun  9 10:32:16 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface veth74b8c62.IPv6 with address fe80::b05b:afff:fe3a:9492.
Jun  9 10:32:16 NAS  avahi-daemon[8585]: New relevant interface veth74b8c62.IPv6 for mDNS.
Jun  9 10:32:16 NAS  avahi-daemon[8585]: Registering new address record for fe80::b05b:afff:fe3a:9492 on veth74b8c62.*.
Jun  9 10:32:23 NAS kernel: vethc68178a: renamed from eth0
Jun  9 10:32:23 NAS kernel: br-63825e4fef5c: port 16(veth74b8c62) entered disabled state
Jun  9 10:32:23 NAS  avahi-daemon[8585]: Interface veth74b8c62.IPv6 no longer relevant for mDNS.
Jun  9 10:32:23 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface veth74b8c62.IPv6 with address fe80::b05b:afff:fe3a:9492.
Jun  9 10:32:23 NAS kernel: br-63825e4fef5c: port 16(veth74b8c62) entered disabled state
Jun  9 10:32:23 NAS kernel: device veth74b8c62 left promiscuous mode
Jun  9 10:32:23 NAS kernel: br-63825e4fef5c: port 16(veth74b8c62) entered disabled state
Jun  9 10:32:23 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::b05b:afff:fe3a:9492 on veth74b8c62.
Jun  9 10:32:24 NAS kernel: br-63825e4fef5c: port 16(veth7a0fa73) entered blocking state
Jun  9 10:32:24 NAS kernel: br-63825e4fef5c: port 16(veth7a0fa73) entered disabled state
Jun  9 10:32:24 NAS kernel: device veth7a0fa73 entered promiscuous mode
Jun  9 10:32:24 NAS kernel: br-63825e4fef5c: port 16(veth7a0fa73) entered blocking state
Jun  9 10:32:24 NAS kernel: br-63825e4fef5c: port 16(veth7a0fa73) entered forwarding state
Jun  9 10:32:24 NAS kernel: br-63825e4fef5c: port 16(veth7a0fa73) entered disabled state
Jun  9 10:32:26 NAS kernel: eth0: renamed from veth7adc26f
Jun  9 10:32:26 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth7a0fa73: link becomes ready
Jun  9 10:32:26 NAS kernel: br-63825e4fef5c: port 16(veth7a0fa73) entered blocking state
Jun  9 10:32:26 NAS kernel: br-63825e4fef5c: port 16(veth7a0fa73) entered forwarding state
Jun  9 10:32:27 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface veth7a0fa73.IPv6 with address fe80::2079:ebff:fef1:bdf8.
Jun  9 10:32:27 NAS  avahi-daemon[8585]: New relevant interface veth7a0fa73.IPv6 for mDNS.
Jun  9 10:32:27 NAS  avahi-daemon[8585]: Registering new address record for fe80::2079:ebff:fef1:bdf8 on veth7a0fa73.*.
Jun  9 10:32:36 NAS kernel: br-63825e4fef5c: port 16(veth7a0fa73) entered disabled state
Jun  9 10:32:36 NAS kernel: veth7adc26f: renamed from eth0
Jun  9 10:32:36 NAS  avahi-daemon[8585]: Interface veth7a0fa73.IPv6 no longer relevant for mDNS.
Jun  9 10:32:36 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface veth7a0fa73.IPv6 with address fe80::2079:ebff:fef1:bdf8.
Jun  9 10:32:36 NAS kernel: br-63825e4fef5c: port 16(veth7a0fa73) entered disabled state
Jun  9 10:32:36 NAS kernel: device veth7a0fa73 left promiscuous mode
Jun  9 10:32:36 NAS kernel: br-63825e4fef5c: port 16(veth7a0fa73) entered disabled state
Jun  9 10:32:36 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::2079:ebff:fef1:bdf8 on veth7a0fa73.
Jun  9 10:32:36 NAS kernel: br-63825e4fef5c: port 16(vethde65372) entered blocking state
Jun  9 10:32:36 NAS kernel: br-63825e4fef5c: port 16(vethde65372) entered disabled state
Jun  9 10:32:36 NAS kernel: device vethde65372 entered promiscuous mode
Jun  9 10:32:36 NAS kernel: br-63825e4fef5c: port 16(vethde65372) entered blocking state
Jun  9 10:32:36 NAS kernel: br-63825e4fef5c: port 16(vethde65372) entered forwarding state
Jun  9 10:32:37 NAS kernel: br-63825e4fef5c: port 16(vethde65372) entered disabled state
Jun  9 10:32:38 NAS kernel: eth0: renamed from veth67f156d
Jun  9 10:32:38 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethde65372: link becomes ready
Jun  9 10:32:38 NAS kernel: br-63825e4fef5c: port 16(vethde65372) entered blocking state
Jun  9 10:32:38 NAS kernel: br-63825e4fef5c: port 16(vethde65372) entered forwarding state
Jun  9 10:32:40 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface vethde65372.IPv6 with address fe80::945d:46ff:feae:6f15.
Jun  9 10:32:40 NAS  avahi-daemon[8585]: New relevant interface vethde65372.IPv6 for mDNS.
Jun  9 10:32:40 NAS  avahi-daemon[8585]: Registering new address record for fe80::945d:46ff:feae:6f15 on vethde65372.*.
Jun  9 10:32:46 NAS kernel: veth67f156d: renamed from eth0
Jun  9 10:32:46 NAS kernel: br-63825e4fef5c: port 16(vethde65372) entered disabled state
Jun  9 10:32:46 NAS  avahi-daemon[8585]: Interface vethde65372.IPv6 no longer relevant for mDNS.
Jun  9 10:32:46 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface vethde65372.IPv6 with address fe80::945d:46ff:feae:6f15.
Jun  9 10:32:46 NAS kernel: br-63825e4fef5c: port 16(vethde65372) entered disabled state
Jun  9 10:32:46 NAS kernel: device vethde65372 left promiscuous mode
Jun  9 10:32:46 NAS kernel: br-63825e4fef5c: port 16(vethde65372) entered disabled state
Jun  9 10:32:46 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::945d:46ff:feae:6f15 on vethde65372.
Jun  9 10:32:47 NAS kernel: br-63825e4fef5c: port 16(veth9bc2f13) entered blocking state
Jun  9 10:32:47 NAS kernel: br-63825e4fef5c: port 16(veth9bc2f13) entered disabled state
Jun  9 10:32:47 NAS kernel: device veth9bc2f13 entered promiscuous mode
Jun  9 10:32:49 NAS kernel: eth0: renamed from veth00e4353
Jun  9 10:32:50 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth9bc2f13: link becomes ready
Jun  9 10:32:50 NAS kernel: br-63825e4fef5c: port 16(veth9bc2f13) entered blocking state
Jun  9 10:32:50 NAS kernel: br-63825e4fef5c: port 16(veth9bc2f13) entered forwarding state
Jun  9 10:32:51 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface veth9bc2f13.IPv6 with address fe80::a0bc:c6ff:fe57:c240.
Jun  9 10:32:51 NAS  avahi-daemon[8585]: New relevant interface veth9bc2f13.IPv6 for mDNS.
Jun  9 10:32:51 NAS  avahi-daemon[8585]: Registering new address record for fe80::a0bc:c6ff:fe57:c240 on veth9bc2f13.*.
Jun  9 10:32:57 NAS kernel: veth00e4353: renamed from eth0
Jun  9 10:32:57 NAS kernel: br-63825e4fef5c: port 16(veth9bc2f13) entered disabled state
Jun  9 10:32:57 NAS  avahi-daemon[8585]: Interface veth9bc2f13.IPv6 no longer relevant for mDNS.
Jun  9 10:32:57 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface veth9bc2f13.IPv6 with address fe80::a0bc:c6ff:fe57:c240.
Jun  9 10:32:57 NAS kernel: br-63825e4fef5c: port 16(veth9bc2f13) entered disabled state
Jun  9 10:32:57 NAS kernel: device veth9bc2f13 left promiscuous mode
Jun  9 10:32:57 NAS kernel: br-63825e4fef5c: port 16(veth9bc2f13) entered disabled state
Jun  9 10:32:57 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::a0bc:c6ff:fe57:c240 on veth9bc2f13.
Jun  9 10:32:58 NAS kernel: br-63825e4fef5c: port 16(veth685a3d4) entered blocking state
Jun  9 10:32:58 NAS kernel: br-63825e4fef5c: port 16(veth685a3d4) entered disabled state
Jun  9 10:32:58 NAS kernel: device veth685a3d4 entered promiscuous mode
Jun  9 10:33:00 NAS kernel: eth0: renamed from veth6627ad5
Jun  9 10:33:00 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth685a3d4: link becomes ready
Jun  9 10:33:00 NAS kernel: br-63825e4fef5c: port 16(veth685a3d4) entered blocking state
Jun  9 10:33:00 NAS kernel: br-63825e4fef5c: port 16(veth685a3d4) entered forwarding state
Jun  9 10:33:02 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface veth685a3d4.IPv6 with address fe80::bc9e:fdff:fe34:10c2.
Jun  9 10:33:02 NAS  avahi-daemon[8585]: New relevant interface veth685a3d4.IPv6 for mDNS.
Jun  9 10:33:02 NAS  avahi-daemon[8585]: Registering new address record for fe80::bc9e:fdff:fe34:10c2 on veth685a3d4.*.
Jun  9 10:33:08 NAS kernel: veth6627ad5: renamed from eth0
Jun  9 10:33:08 NAS kernel: br-63825e4fef5c: port 16(veth685a3d4) entered disabled state
Jun  9 10:33:08 NAS  avahi-daemon[8585]: Interface veth685a3d4.IPv6 no longer relevant for mDNS.
Jun  9 10:33:08 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface veth685a3d4.IPv6 with address fe80::bc9e:fdff:fe34:10c2.
Jun  9 10:33:08 NAS kernel: br-63825e4fef5c: port 16(veth685a3d4) entered disabled state
Jun  9 10:33:08 NAS kernel: device veth685a3d4 left promiscuous mode
Jun  9 10:33:08 NAS kernel: br-63825e4fef5c: port 16(veth685a3d4) entered disabled state
Jun  9 10:33:08 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::bc9e:fdff:fe34:10c2 on veth685a3d4.
Jun  9 10:33:11 NAS kernel: br-63825e4fef5c: port 16(veth6beaf9b) entered blocking state
Jun  9 10:33:11 NAS kernel: br-63825e4fef5c: port 16(veth6beaf9b) entered disabled state
Jun  9 10:33:11 NAS kernel: device veth6beaf9b entered promiscuous mode
Jun  9 10:33:13 NAS kernel: eth0: renamed from veth5203f8f
Jun  9 10:33:13 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth6beaf9b: link becomes ready
Jun  9 10:33:13 NAS kernel: br-63825e4fef5c: port 16(veth6beaf9b) entered blocking state
Jun  9 10:33:13 NAS kernel: br-63825e4fef5c: port 16(veth6beaf9b) entered forwarding state
Jun  9 10:33:14 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface veth6beaf9b.IPv6 with address fe80::f4bf:3bff:fef9:803c.
Jun  9 10:33:14 NAS  avahi-daemon[8585]: New relevant interface veth6beaf9b.IPv6 for mDNS.
Jun  9 10:33:14 NAS  avahi-daemon[8585]: Registering new address record for fe80::f4bf:3bff:fef9:803c on veth6beaf9b.*.
Jun  9 10:36:03 NAS autofan: Highest disk temp is 44C, adjusting fan speed from: 70 (27% @ 924rpm) to: 268 (105% @ 650rpm)
Jun  9 10:36:03 NAS autofan: Highest disk temp is 44C, adjusting fan speed from: 70 (27% @ 924rpm) to: 258 (101% @ 650rpm)
Jun  9 10:36:03 NAS autofan: Highest disk temp is 44C, adjusting fan speed from: 70 (27% @ 806rpm) to: 269 (105% @ 518rpm)
Jun  9 10:36:36 NAS  ntpd[1071]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jun  9 10:41:00 NAS root: Fix Common Problems Version 2023.04.26
Jun  9 10:41:01 NAS root: Fix Common Problems: Warning: Docker Application Kavita has an update available for it ** Ignored
Jun  9 10:41:08 NAS autofan: Highest disk temp is 45C, adjusting fan speed from: 70 (27% @ 649rpm) to: FULL (100% @ 1687rpm)
### [PREVIOUS LINE REPEATED 1 TIMES] ###
Jun  9 10:41:09 NAS autofan: Highest disk temp is 45C, adjusting fan speed from: 70 (27% @ 514rpm) to: FULL (100% @ 1112rpm)
Jun  9 10:59:25 NAS kernel: TCP: request_sock_TCP: Possible SYN flooding on port 37076. Sending cookies.  Check SNMP counters.
Jun  9 11:44:58 NAS webGUI: Successful login user root from 192.168.1.99
Jun  9 11:46:17 NAS kernel: veth5203f8f: renamed from eth0
Jun  9 11:46:17 NAS kernel: br-63825e4fef5c: port 16(veth6beaf9b) entered disabled state
Jun  9 11:46:17 NAS  avahi-daemon[8585]: Interface veth6beaf9b.IPv6 no longer relevant for mDNS.
Jun  9 11:46:17 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface veth6beaf9b.IPv6 with address fe80::f4bf:3bff:fef9:803c.
Jun  9 11:46:17 NAS kernel: br-63825e4fef5c: port 16(veth6beaf9b) entered disabled state
Jun  9 11:46:17 NAS kernel: device veth6beaf9b left promiscuous mode
Jun  9 11:46:17 NAS kernel: br-63825e4fef5c: port 16(veth6beaf9b) entered disabled state
Jun  9 11:46:17 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::f4bf:3bff:fef9:803c on veth6beaf9b.
Jun  9 11:46:17 NAS kernel: vethea4b0cf: renamed from eth0
Jun  9 11:46:17 NAS kernel: br-63825e4fef5c: port 3(veth7e35482) entered disabled state
Jun  9 11:46:17 NAS  avahi-daemon[8585]: Interface veth7e35482.IPv6 no longer relevant for mDNS.
Jun  9 11:46:17 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface veth7e35482.IPv6 with address fe80::9833:71ff:feb7:8156.
Jun  9 11:46:17 NAS kernel: br-63825e4fef5c: port 3(veth7e35482) entered disabled state
Jun  9 11:46:17 NAS kernel: device veth7e35482 left promiscuous mode
Jun  9 11:46:17 NAS kernel: br-63825e4fef5c: port 3(veth7e35482) entered disabled state
Jun  9 11:46:17 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::9833:71ff:feb7:8156 on veth7e35482.
Jun  9 11:46:17 NAS kernel: br-63825e4fef5c: port 3(veth4c1445d) entered blocking state
Jun  9 11:46:17 NAS kernel: br-63825e4fef5c: port 3(veth4c1445d) entered disabled state
Jun  9 11:46:17 NAS kernel: device veth4c1445d entered promiscuous mode
Jun  9 11:46:17 NAS kernel: br-63825e4fef5c: port 3(veth4c1445d) entered blocking state
Jun  9 11:46:17 NAS kernel: br-63825e4fef5c: port 3(veth4c1445d) entered forwarding state
Jun  9 11:46:18 NAS kernel: eth0: renamed from veth2175b23
Jun  9 11:46:18 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth4c1445d: link becomes ready
Jun  9 11:46:18 NAS kernel: br-63825e4fef5c: port 16(vethf25df75) entered blocking state
Jun  9 11:46:18 NAS kernel: br-63825e4fef5c: port 16(vethf25df75) entered disabled state
Jun  9 11:46:18 NAS kernel: device vethf25df75 entered promiscuous mode
Jun  9 11:46:19 NAS kernel: eth0: renamed from veth51655c5
Jun  9 11:46:19 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethf25df75: link becomes ready
Jun  9 11:46:19 NAS kernel: br-63825e4fef5c: port 16(vethf25df75) entered blocking state
Jun  9 11:46:19 NAS kernel: br-63825e4fef5c: port 16(vethf25df75) entered forwarding state
Jun  9 11:46:19 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface veth4c1445d.IPv6 with address fe80::bccd:b4ff:fe04:ec3c.
Jun  9 11:46:19 NAS  avahi-daemon[8585]: New relevant interface veth4c1445d.IPv6 for mDNS.
Jun  9 11:46:19 NAS  avahi-daemon[8585]: Registering new address record for fe80::bccd:b4ff:fe04:ec3c on veth4c1445d.*.
Jun  9 11:46:21 NAS kernel: veth2175b23: renamed from eth0
Jun  9 11:46:21 NAS kernel: br-63825e4fef5c: port 3(veth4c1445d) entered disabled state
Jun  9 11:46:21 NAS  avahi-daemon[8585]: Interface veth4c1445d.IPv6 no longer relevant for mDNS.
Jun  9 11:46:21 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface veth4c1445d.IPv6 with address fe80::bccd:b4ff:fe04:ec3c.
Jun  9 11:46:21 NAS kernel: br-63825e4fef5c: port 3(veth4c1445d) entered disabled state
Jun  9 11:46:21 NAS kernel: device veth4c1445d left promiscuous mode
Jun  9 11:46:21 NAS kernel: br-63825e4fef5c: port 3(veth4c1445d) entered disabled state
Jun  9 11:46:21 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::bccd:b4ff:fe04:ec3c on veth4c1445d.
Jun  9 11:46:21 NAS kernel: br-63825e4fef5c: port 3(veth1640daa) entered blocking state
Jun  9 11:46:21 NAS kernel: br-63825e4fef5c: port 3(veth1640daa) entered disabled state
Jun  9 11:46:21 NAS kernel: device veth1640daa entered promiscuous mode
Jun  9 11:46:21 NAS kernel: br-63825e4fef5c: port 3(veth1640daa) entered blocking state
Jun  9 11:46:21 NAS kernel: br-63825e4fef5c: port 3(veth1640daa) entered forwarding state
Jun  9 11:46:21 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface vethf25df75.IPv6 with address fe80::1cc6:d1ff:fe00:e77.
Jun  9 11:46:21 NAS  avahi-daemon[8585]: New relevant interface vethf25df75.IPv6 for mDNS.
Jun  9 11:46:21 NAS  avahi-daemon[8585]: Registering new address record for fe80::1cc6:d1ff:fe00:e77 on vethf25df75.*.
Jun  9 11:46:22 NAS kernel: eth0: renamed from veth53562d0
Jun  9 11:46:22 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth1640daa: link becomes ready
Jun  9 11:46:23 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface veth1640daa.IPv6 with address fe80::2c35:62ff:fe9d:fcac.
Jun  9 11:46:23 NAS  avahi-daemon[8585]: New relevant interface veth1640daa.IPv6 for mDNS.
Jun  9 11:46:23 NAS  avahi-daemon[8585]: Registering new address record for fe80::2c35:62ff:fe9d:fcac on veth1640daa.*.
Jun  9 11:46:28 NAS flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
Jun  9 11:46:29 NAS kernel: br-63825e4fef5c: port 3(veth1640daa) entered disabled state
Jun  9 11:46:29 NAS kernel: veth53562d0: renamed from eth0
Jun  9 11:46:29 NAS  avahi-daemon[8585]: Interface veth1640daa.IPv6 no longer relevant for mDNS.
Jun  9 11:46:29 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface veth1640daa.IPv6 with address fe80::2c35:62ff:fe9d:fcac.
Jun  9 11:46:29 NAS kernel: br-63825e4fef5c: port 3(veth1640daa) entered disabled state
Jun  9 11:46:29 NAS kernel: device veth1640daa left promiscuous mode
Jun  9 11:46:29 NAS kernel: br-63825e4fef5c: port 3(veth1640daa) entered disabled state
Jun  9 11:46:29 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::2c35:62ff:fe9d:fcac on veth1640daa.
Jun  9 11:46:29 NAS kernel: br-63825e4fef5c: port 3(veth9631315) entered blocking state
Jun  9 11:46:29 NAS kernel: br-63825e4fef5c: port 3(veth9631315) entered disabled state
Jun  9 11:46:29 NAS kernel: device veth9631315 entered promiscuous mode
Jun  9 11:46:29 NAS kernel: br-63825e4fef5c: port 3(veth9631315) entered blocking state
Jun  9 11:46:29 NAS kernel: br-63825e4fef5c: port 3(veth9631315) entered forwarding state
Jun  9 11:46:30 NAS kernel: br-63825e4fef5c: port 3(veth9631315) entered disabled state
Jun  9 11:46:31 NAS kernel: eth0: renamed from veth862d86e
Jun  9 11:46:31 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth9631315: link becomes ready
Jun  9 11:46:31 NAS kernel: br-63825e4fef5c: port 3(veth9631315) entered blocking state
Jun  9 11:46:31 NAS kernel: br-63825e4fef5c: port 3(veth9631315) entered forwarding state
Jun  9 11:46:32 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface veth9631315.IPv6 with address fe80::685f:57ff:fe24:9779.
Jun  9 11:46:32 NAS  avahi-daemon[8585]: New relevant interface veth9631315.IPv6 for mDNS.
Jun  9 11:46:32 NAS  avahi-daemon[8585]: Registering new address record for fe80::685f:57ff:fe24:9779 on veth9631315.*.
Jun  9 11:46:36 NAS kernel: veth862d86e: renamed from eth0
Jun  9 11:46:36 NAS kernel: br-63825e4fef5c: port 3(veth9631315) entered disabled state
Jun  9 11:46:36 NAS  avahi-daemon[8585]: Interface veth9631315.IPv6 no longer relevant for mDNS.
Jun  9 11:46:36 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface veth9631315.IPv6 with address fe80::685f:57ff:fe24:9779.
Jun  9 11:46:36 NAS kernel: br-63825e4fef5c: port 3(veth9631315) entered disabled state
Jun  9 11:46:36 NAS kernel: device veth9631315 left promiscuous mode
Jun  9 11:46:36 NAS kernel: br-63825e4fef5c: port 3(veth9631315) entered disabled state
Jun  9 11:46:36 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::685f:57ff:fe24:9779 on veth9631315.
Jun  9 11:46:37 NAS kernel: br-63825e4fef5c: port 3(veth08c4358) entered blocking state
Jun  9 11:46:37 NAS kernel: br-63825e4fef5c: port 3(veth08c4358) entered disabled state
Jun  9 11:46:37 NAS kernel: device veth08c4358 entered promiscuous mode
Jun  9 11:46:37 NAS kernel: br-63825e4fef5c: port 3(veth08c4358) entered blocking state
Jun  9 11:46:37 NAS kernel: br-63825e4fef5c: port 3(veth08c4358) entered forwarding state
Jun  9 11:46:37 NAS kernel: br-63825e4fef5c: port 3(veth08c4358) entered disabled state
Jun  9 11:46:38 NAS kernel: eth0: renamed from vethf01d8c6
Jun  9 11:46:38 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth08c4358: link becomes ready
Jun  9 11:46:38 NAS kernel: br-63825e4fef5c: port 3(veth08c4358) entered blocking state
Jun  9 11:46:38 NAS kernel: br-63825e4fef5c: port 3(veth08c4358) entered forwarding state
Jun  9 11:46:39 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface veth08c4358.IPv6 with address fe80::4ca9:d4ff:fe2d:e7e8.
Jun  9 11:46:39 NAS  avahi-daemon[8585]: New relevant interface veth08c4358.IPv6 for mDNS.
Jun  9 11:46:39 NAS  avahi-daemon[8585]: Registering new address record for fe80::4ca9:d4ff:fe2d:e7e8 on veth08c4358.*.
Jun  9 11:46:41 NAS kernel: vethf01d8c6: renamed from eth0
Jun  9 11:46:41 NAS kernel: br-63825e4fef5c: port 3(veth08c4358) entered disabled state
Jun  9 11:46:41 NAS  avahi-daemon[8585]: Interface veth08c4358.IPv6 no longer relevant for mDNS.
Jun  9 11:46:41 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface veth08c4358.IPv6 with address fe80::4ca9:d4ff:fe2d:e7e8.
Jun  9 11:46:41 NAS kernel: br-63825e4fef5c: port 3(veth08c4358) entered disabled state
Jun  9 11:46:41 NAS kernel: device veth08c4358 left promiscuous mode
Jun  9 11:46:41 NAS kernel: br-63825e4fef5c: port 3(veth08c4358) entered disabled state
Jun  9 11:46:41 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::4ca9:d4ff:fe2d:e7e8 on veth08c4358.
Jun  9 11:46:43 NAS kernel: br-63825e4fef5c: port 3(veth798e368) entered blocking state
Jun  9 11:46:43 NAS kernel: br-63825e4fef5c: port 3(veth798e368) entered disabled state
Jun  9 11:46:43 NAS kernel: device veth798e368 entered promiscuous mode
Jun  9 11:46:44 NAS kernel: eth0: renamed from veth53769b3
Jun  9 11:46:44 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth798e368: link becomes ready
Jun  9 11:46:44 NAS kernel: br-63825e4fef5c: port 3(veth798e368) entered blocking state
Jun  9 11:46:44 NAS kernel: br-63825e4fef5c: port 3(veth798e368) entered forwarding state
Jun  9 11:46:45 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface veth798e368.IPv6 with address fe80::480e:28ff:fe7c:b60c.
Jun  9 11:46:45 NAS  avahi-daemon[8585]: New relevant interface veth798e368.IPv6 for mDNS.
Jun  9 11:46:45 NAS  avahi-daemon[8585]: Registering new address record for fe80::480e:28ff:fe7c:b60c on veth798e368.*.
Jun  9 11:46:49 NAS kernel: veth53769b3: renamed from eth0
Jun  9 11:46:49 NAS kernel: br-63825e4fef5c: port 3(veth798e368) entered disabled state
Jun  9 11:46:49 NAS  avahi-daemon[8585]: Interface veth798e368.IPv6 no longer relevant for mDNS.
Jun  9 11:46:49 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface veth798e368.IPv6 with address fe80::480e:28ff:fe7c:b60c.
Jun  9 11:46:49 NAS kernel: br-63825e4fef5c: port 3(veth798e368) entered disabled state
Jun  9 11:46:49 NAS kernel: device veth798e368 left promiscuous mode
Jun  9 11:46:49 NAS kernel: br-63825e4fef5c: port 3(veth798e368) entered disabled state
Jun  9 11:46:49 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::480e:28ff:fe7c:b60c on veth798e368.
Jun  9 11:46:52 NAS kernel: br-63825e4fef5c: port 3(veth09a9061) entered blocking state
Jun  9 11:46:52 NAS kernel: br-63825e4fef5c: port 3(veth09a9061) entered disabled state
Jun  9 11:46:52 NAS kernel: device veth09a9061 entered promiscuous mode
Jun  9 11:46:53 NAS kernel: eth0: renamed from veth07a70eb
Jun  9 11:46:53 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth09a9061: link becomes ready
Jun  9 11:46:53 NAS kernel: br-63825e4fef5c: port 3(veth09a9061) entered blocking state
Jun  9 11:46:53 NAS kernel: br-63825e4fef5c: port 3(veth09a9061) entered forwarding state
Jun  9 11:46:55 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface veth09a9061.IPv6 with address fe80::9cea:95ff:fef2:8402.
Jun  9 11:46:55 NAS  avahi-daemon[8585]: New relevant interface veth09a9061.IPv6 for mDNS.
Jun  9 11:46:55 NAS  avahi-daemon[8585]: Registering new address record for fe80::9cea:95ff:fef2:8402 on veth09a9061.*.
Jun  9 11:46:59 NAS kernel: br-63825e4fef5c: port 3(veth09a9061) entered disabled state
Jun  9 11:46:59 NAS kernel: veth07a70eb: renamed from eth0
Jun  9 11:46:59 NAS  avahi-daemon[8585]: Interface veth09a9061.IPv6 no longer relevant for mDNS.
Jun  9 11:46:59 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface veth09a9061.IPv6 with address fe80::9cea:95ff:fef2:8402.
Jun  9 11:46:59 NAS kernel: br-63825e4fef5c: port 3(veth09a9061) entered disabled state
Jun  9 11:46:59 NAS kernel: device veth09a9061 left promiscuous mode
Jun  9 11:46:59 NAS kernel: br-63825e4fef5c: port 3(veth09a9061) entered disabled state
Jun  9 11:46:59 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::9cea:95ff:fef2:8402 on veth09a9061.
Jun  9 11:47:05 NAS kernel: br-63825e4fef5c: port 3(veth5021377) entered blocking state
Jun  9 11:47:05 NAS kernel: br-63825e4fef5c: port 3(veth5021377) entered disabled state
Jun  9 11:47:05 NAS kernel: device veth5021377 entered promiscuous mode
Jun  9 11:47:06 NAS kernel: eth0: renamed from veth11852e7
Jun  9 11:47:06 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth5021377: link becomes ready
Jun  9 11:47:06 NAS kernel: br-63825e4fef5c: port 3(veth5021377) entered blocking state
Jun  9 11:47:06 NAS kernel: br-63825e4fef5c: port 3(veth5021377) entered forwarding state
Jun  9 11:47:08 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface veth5021377.IPv6 with address fe80::dce8:28ff:fe6f:d9dc.
Jun  9 11:47:08 NAS  avahi-daemon[8585]: New relevant interface veth5021377.IPv6 for mDNS.
Jun  9 11:47:08 NAS  avahi-daemon[8585]: Registering new address record for fe80::dce8:28ff:fe6f:d9dc on veth5021377.*.
Jun  9 11:47:11 NAS kernel: veth11852e7: renamed from eth0
Jun  9 11:47:11 NAS kernel: br-63825e4fef5c: port 3(veth5021377) entered disabled state
Jun  9 11:47:11 NAS  avahi-daemon[8585]: Interface veth5021377.IPv6 no longer relevant for mDNS.
Jun  9 11:47:11 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface veth5021377.IPv6 with address fe80::dce8:28ff:fe6f:d9dc.
Jun  9 11:47:11 NAS kernel: br-63825e4fef5c: port 3(veth5021377) entered disabled state
Jun  9 11:47:11 NAS kernel: device veth5021377 left promiscuous mode
Jun  9 11:47:11 NAS kernel: br-63825e4fef5c: port 3(veth5021377) entered disabled state
Jun  9 11:47:11 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::dce8:28ff:fe6f:d9dc on veth5021377.
Jun  9 11:47:24 NAS kernel: br-63825e4fef5c: port 3(veth0b1fe68) entered blocking state
Jun  9 11:47:24 NAS kernel: br-63825e4fef5c: port 3(veth0b1fe68) entered disabled state
Jun  9 11:47:24 NAS kernel: device veth0b1fe68 entered promiscuous mode
Jun  9 11:47:24 NAS kernel: eth0: renamed from veth32fc872
Jun  9 11:47:24 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth0b1fe68: link becomes ready
Jun  9 11:47:24 NAS kernel: br-63825e4fef5c: port 3(veth0b1fe68) entered blocking state
Jun  9 11:47:24 NAS kernel: br-63825e4fef5c: port 3(veth0b1fe68) entered forwarding state
Jun  9 11:47:26 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface veth0b1fe68.IPv6 with address fe80::b492:e5ff:fe98:b15c.
Jun  9 11:47:26 NAS  avahi-daemon[8585]: New relevant interface veth0b1fe68.IPv6 for mDNS.
Jun  9 11:47:26 NAS  avahi-daemon[8585]: Registering new address record for fe80::b492:e5ff:fe98:b15c on veth0b1fe68.*.
Jun  9 11:51:55 NAS unassigned.devices: Mounting partition 'sdm1' at mountpoint '/mnt/disks/SeedBox'...
Jun  9 11:51:55 NAS unassigned.devices: Mount cmd: /sbin/mount -t 'xfs' -o rw,noatime,nodiratime '/dev/sdm1' '/mnt/disks/SeedBox'
Jun  9 11:51:55 NAS kernel: XFS (sdm1): Mounting V5 Filesystem
Jun  9 11:51:55 NAS kernel: XFS (sdm1): Starting recovery (logdev: internal)
Jun  9 11:51:56 NAS kernel: XFS (sdm1): Ending recovery (logdev: internal)
Jun  9 11:51:56 NAS unassigned.devices: Successfully mounted 'sdm1' on '/mnt/disks/SeedBox'.
Jun  9 11:51:56 NAS unassigned.devices: Device '/dev/sdm1' is not set to be shared.
Jun  9 11:52:57 NAS unassigned.devices: Adding SMB share 'SeedBox'.
Jun  9 11:52:57 NAS unassigned.devices: Warning: Unassigned Devices are not set to be shared with NFS.
Jun  9 11:53:28 NAS flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
Jun  9 11:58:46 NAS kernel: TCP: request_sock_TCP: Possible SYN flooding on port 57964. Sending cookies.  Check SNMP counters.
Jun  9 20:17:36 NAS autofan: Highest disk temp is 44C, adjusting fan speed from: FULL (100% @ 1695rpm) to: 228 (89% @ 1610rpm)
Jun  9 20:17:36 NAS autofan: Highest disk temp is 44C, adjusting fan speed from: FULL (100% @ 1695rpm) to: 238 (93% @ 1610rpm)
Jun  9 20:17:36 NAS autofan: Highest disk temp is 44C, adjusting fan speed from: FULL (100% @ 1064rpm) to: 239 (93% @ 1066rpm)
Jun  9 20:27:43 NAS autofan: Highest disk temp is 45C, adjusting fan speed from: 228 (89% @ 1618rpm) to: FULL (100% @ 1691rpm)
Jun  9 20:27:43 NAS autofan: Highest disk temp is 45C, adjusting fan speed from: 238 (93% @ 1618rpm) to: FULL (100% @ 1691rpm)
Jun  9 20:27:43 NAS autofan: Highest disk temp is 45C, adjusting fan speed from: 239 (93% @ 1062rpm) to: FULL (100% @ 1062rpm)
Jun  9 23:23:15 NAS autofan: Highest disk temp is 44C, adjusting fan speed from: FULL (100% @ 1691rpm) to: 228 (89% @ 1610rpm)
Jun  9 23:23:15 NAS autofan: Highest disk temp is 44C, adjusting fan speed from: FULL (100% @ 1691rpm) to: 238 (93% @ 1610rpm)
Jun  9 23:23:15 NAS autofan: Highest disk temp is 44C, adjusting fan speed from: FULL (100% @ 1066rpm) to: 239 (93% @ 1064rpm)
Jun 10 00:00:17 NAS root: /var/lib/docker: 37.2 GiB (39966359552 bytes) trimmed on /dev/loop2
Jun 10 00:00:17 NAS root: /mnt/cache: 97.3 GiB (104459161600 bytes) trimmed on /dev/sdb1
Jun 10 00:01:57 NAS flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
Jun 10 00:08:30 NAS autofan: Highest disk temp is 45C, adjusting fan speed from: 238 (93% @ 1610rpm) to: FULL (100% @ 1704rpm)
Jun 10 00:08:30 NAS autofan: Highest disk temp is 45C, adjusting fan speed from: 228 (89% @ 1610rpm) to: FULL (100% @ 1704rpm)
Jun 10 00:08:30 NAS autofan: Highest disk temp is 45C, adjusting fan speed from: 239 (93% @ 1066rpm) to: FULL (100% @ 1061rpm)
Jun 10 00:38:40 NAS autofan: Highest disk temp is 44C, adjusting fan speed from: FULL (100% @ 1704rpm) to: 238 (93% @ 1607rpm)
Jun 10 00:38:40 NAS autofan: Highest disk temp is 44C, adjusting fan speed from: FULL (100% @ 1704rpm) to: 228 (89% @ 1607rpm)
Jun 10 00:38:41 NAS autofan: Highest disk temp is 44C, adjusting fan speed from: FULL (100% @ 1062rpm) to: 239 (93% @ 1066rpm)
Jun 10 00:48:46 NAS autofan: Highest disk temp is 45C, adjusting fan speed from: 238 (93% @ 1622rpm) to: FULL (100% @ 1700rpm)
Jun 10 00:48:47 NAS autofan: Highest disk temp is 45C, adjusting fan speed from: 228 (89% @ 1666rpm) to: FULL (100% @ 1700rpm)
Jun 10 00:48:48 NAS autofan: Highest disk temp is 45C, adjusting fan speed from: 239 (93% @ 1064rpm) to: FULL (100% @ 1059rpm)
Jun 10 01:00:45 NAS kernel: BUG: kernel NULL pointer dereference, address: 0000000000000076
Jun 10 01:00:45 NAS kernel: #PF: supervisor read access in kernel mode
Jun 10 01:00:45 NAS kernel: #PF: error_code(0x0000) - not-present page
Jun 10 01:00:45 NAS kernel: PGD 0 P4D 0
Jun 10 01:00:45 NAS kernel: Oops: 0000 [#1] PREEMPT SMP PTI
Jun 10 01:00:45 NAS kernel: CPU: 2 PID: 32712 Comm: deluged Not tainted 5.19.17-Unraid #2
Jun 10 01:00:45 NAS kernel: Hardware name: Gigabyte Technology Co., Ltd. Z170XP-SLI/Z170XP-SLI-CF, BIOS F22c 12/01/2017
Jun 10 01:00:45 NAS kernel: RIP: 0010:folio_try_get_rcu+0x0/0x21
Jun 10 01:00:45 NAS kernel: Code: e8 9d fd 67 00 48 8b 84 24 80 00 00 00 65 48 2b 04 25 28 00 00 00 74 05 e8 c1 35 69 00 48 81 c4 88 00 00 00 5b c3 cc cc cc cc <8b> 57 34 85 d2 74 10 8d 4a 01 89 d0 f0 0f b1 4f 34 74 04 89 c2 eb
Jun 10 01:00:45 NAS kernel: RSP: 0000:ffffc90003917cc0 EFLAGS: 00010246
Jun 10 01:00:45 NAS kernel: RAX: 0000000000000042 RBX: 0000000000000042 RCX: 0000000000000042
Jun 10 01:00:45 NAS kernel: RDX: 0000000000000001 RSI: ffff888435be2b58 RDI: 0000000000000042
Jun 10 01:00:45 NAS kernel: RBP: 0000000000000000 R08: 0000000000000018 R09: ffffc90003917cd0
Jun 10 01:00:45 NAS kernel: R10: ffffc90003917cd0 R11: ffffc90003917d48 R12: 0000000000000000
Jun 10 01:00:45 NAS kernel: R13: ffff88809c2a09f8 R14: 00000000000384d9 R15: ffff88809c2a0a00
Jun 10 01:00:45 NAS kernel: FS:  0000149a44fe56c0(0000) GS:ffff888626d00000(0000) knlGS:0000000000000000
Jun 10 01:00:45 NAS kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jun 10 01:00:45 NAS kernel: CR2: 0000000000000076 CR3: 0000000573660002 CR4: 00000000003706e0
Jun 10 01:00:45 NAS kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Jun 10 01:00:45 NAS kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Jun 10 01:00:45 NAS kernel: Call Trace:
Jun 10 01:00:45 NAS kernel: <TASK>
Jun 10 01:00:45 NAS kernel: __filemap_get_folio+0x98/0x1ff
Jun 10 01:00:45 NAS kernel: ? _raw_spin_unlock+0x14/0x29
Jun 10 01:00:45 NAS kernel: filemap_fault+0x6e/0x524
Jun 10 01:00:45 NAS kernel: __do_fault+0x2d/0x6e
Jun 10 01:00:45 NAS kernel: __handle_mm_fault+0x9a5/0xc7d
Jun 10 01:00:45 NAS kernel: handle_mm_fault+0x113/0x1d7
Jun 10 01:00:45 NAS kernel: do_user_addr_fault+0x36a/0x514
Jun 10 01:00:45 NAS kernel: exc_page_fault+0xfc/0x11e
Jun 10 01:00:45 NAS kernel: asm_exc_page_fault+0x22/0x30
Jun 10 01:00:45 NAS kernel: RIP: 0033:0x149a4976c60d
Jun 10 01:00:45 NAS kernel: Code: 00 00 00 00 00 66 66 2e 0f 1f 84 00 00 00 00 00 66 66 2e 0f 1f 84 00 00 00 00 00 66 90 f3 0f 1e fa 48 89 f8 48 83 fa 20 72 23 <c5> fe 6f 06 48 83 fa 40 0f 87 a5 00 00 00 c5 fe 6f 4c 16 e0 c5 fe
Jun 10 01:00:45 NAS kernel: RSP: 002b:0000149a44fe4888 EFLAGS: 00010202
Jun 10 01:00:45 NAS kernel: RAX: 0000149a2400b390 RBX: 0000149a24005fb8 RCX: 0000149a44fe4ac0
Jun 10 01:00:45 NAS kernel: RDX: 0000000000004000 RSI: 000014986f4d972b RDI: 0000149a2400b390
Jun 10 01:00:45 NAS kernel: RBP: 0000000000000000 R08: 0000000000000003 R09: 0000000000000000
Jun 10 01:00:45 NAS kernel: R10: 0000000000000008 R11: 0000000000000246 R12: 0000000000000000
Jun 10 01:00:45 NAS kernel: R13: 0000149a300039c0 R14: 0000000000000003 R15: 0000149a402cb480
Jun 10 01:00:45 NAS kernel: </TASK>
Jun 10 01:00:45 NAS kernel: Modules linked in: xt_connmark xt_mark iptable_mangle xt_comment iptable_raw xt_nat xt_tcpudp veth xt_conntrack nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype br_netfilter ext4 mbcache jbd2 xfs md_mod tcp_diag inet_diag it87 hwmon_vid efivarfs iptable_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables bridge stp llc bonding tls i915 x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel iosf_mbi kvm drm_buddy i2c_algo_bit ttm drm_display_helper drm_kms_helper crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel drm aesni_intel crypto_simd intel_wmi_thunderbolt mxm_wmi cryptd rapl intel_cstate i2c_i801 intel_gtt mpt3sas intel_uncore e1000e agpgart i2c_smbus raid_class ahci i2c_core syscopyarea sysfillrect sysimgblt libahci
Jun 10 01:00:45 NAS kernel: scsi_transport_sas fb_sys_fops thermal fan video wmi backlight button acpi_pad unix
Jun 10 01:00:45 NAS kernel: CR2: 0000000000000076
Jun 10 01:00:45 NAS kernel: ---[ end trace 0000000000000000 ]---
Jun 10 01:00:45 NAS kernel: RIP: 0010:folio_try_get_rcu+0x0/0x21
Jun 10 01:00:45 NAS kernel: Code: e8 9d fd 67 00 48 8b 84 24 80 00 00 00 65 48 2b 04 25 28 00 00 00 74 05 e8 c1 35 69 00 48 81 c4 88 00 00 00 5b c3 cc cc cc cc <8b> 57 34 85 d2 74 10 8d 4a 01 89 d0 f0 0f b1 4f 34 74 04 89 c2 eb
Jun 10 01:00:45 NAS kernel: RSP: 0000:ffffc90003917cc0 EFLAGS: 00010246
Jun 10 01:00:45 NAS kernel: RAX: 0000000000000042 RBX: 0000000000000042 RCX: 0000000000000042
Jun 10 01:00:45 NAS kernel: RDX: 0000000000000001 RSI: ffff888435be2b58 RDI: 0000000000000042
Jun 10 01:00:45 NAS kernel: RBP: 0000000000000000 R08: 0000000000000018 R09: ffffc90003917cd0
Jun 10 01:00:45 NAS kernel: R10: ffffc90003917cd0 R11: ffffc90003917d48 R12: 0000000000000000
Jun 10 01:00:45 NAS kernel: R13: ffff88809c2a09f8 R14: 00000000000384d9 R15: ffff88809c2a0a00
Jun 10 01:00:45 NAS kernel: FS:  0000149a44fe56c0(0000) GS:ffff888626d00000(0000) knlGS:0000000000000000
Jun 10 01:00:45 NAS kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jun 10 01:00:45 NAS kernel: CR2: 0000000000000076 CR3: 0000000573660002 CR4: 00000000003706e0
Jun 10 01:00:45 NAS kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Jun 10 01:00:45 NAS kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Jun 10 01:16:23 NAS webGUI: Successful login user root from 192.168.1.160
Jun 10 01:19:16 NAS kernel: TCP: request_sock_TCP: Possible SYN flooding on port 8181. Sending cookies.  Check SNMP counters.
Jun 10 01:27:24 NAS  sshd[11081]: Connection from 192.168.1.160 port 56600 on 192.168.1.8 port 22 rdomain ""
Jun 10 01:27:28 NAS  sshd[11081]: Postponed keyboard-interactive for root from 192.168.1.160 port 56600 ssh2 [preauth]
Jun 10 01:27:38 NAS  sshd[11081]: Postponed keyboard-interactive/pam for root from 192.168.1.160 port 56600 ssh2 [preauth]
Jun 10 01:27:38 NAS  sshd[11081]: Accepted keyboard-interactive/pam for root from 192.168.1.160 port 56600 ssh2
Jun 10 01:27:38 NAS  sshd[11081]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Jun 10 01:27:38 NAS elogind-daemon[1037]: New session c1 of user root.
Jun 10 01:27:38 NAS  sshd[11081]: Starting session: shell on pts/0 for root from 192.168.1.160 port 56600 id 0
Jun 10 01:34:01 NAS root: /usr/local/sbin/powerdown has been deprecated
Jun 10 01:34:01 NAS  shutdown[15422]: shutting down for system reboot
Jun 10 01:34:01 NAS  init: Switching to runlevel: 6
Jun 10 01:34:39 NAS webGUI: Successful login user root from 192.168.1.160
Jun 10 01:37:27 NAS  sshd[11081]: Connection closed by 192.168.1.160 port 56600
Jun 10 01:37:27 NAS  sshd[11081]: Close session: user root from 192.168.1.160 port 56600 id 0
Jun 10 01:37:27 NAS  sshd[11081]: pam_unix(sshd:session): session closed for user root
Jun 10 01:37:27 NAS  sshd[11081]: Transferred: sent 8648, received 4320 bytes
Jun 10 01:37:27 NAS  sshd[11081]: Closing connection to 192.168.1.160 port 56600
Jun 10 01:37:27 NAS elogind-daemon[1037]: Removed session c1.
Jun 10 01:37:33 NAS  sshd[16785]: Connection from 192.168.1.160 port 57391 on 192.168.1.8 port 22 rdomain ""
Jun 10 01:37:35 NAS  sshd[16785]: Postponed keyboard-interactive for root from 192.168.1.160 port 57391 ssh2 [preauth]
Jun 10 01:37:39 NAS  sshd[16785]: Postponed keyboard-interactive/pam for root from 192.168.1.160 port 57391 ssh2 [preauth]
Jun 10 01:37:39 NAS  sshd[16785]: Accepted keyboard-interactive/pam for root from 192.168.1.160 port 57391 ssh2
Jun 10 01:37:39 NAS  sshd[16785]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Jun 10 01:37:39 NAS elogind-daemon[1037]: New session c2 of user root.
Jun 10 01:37:39 NAS  sshd[16785]: Starting session: shell on pts/1 for root from 192.168.1.160 port 57391 id 0
Jun 10 01:39:19 NAS root: /usr/local/sbin/powerdown has been deprecated
Jun 10 01:39:19 NAS  init: Switching to runlevel: 0
Jun 10 01:41:01 NAS flash_backup: stop watching for file changes
Jun 10 01:41:01 NAS  init: Trying to re-exec init
Jun 10 01:41:01 NAS kernel: veth32fc872: renamed from eth0
Jun 10 01:41:01 NAS kernel: br-63825e4fef5c: port 3(veth0b1fe68) entered disabled state
Jun 10 01:41:01 NAS  avahi-daemon[8585]: Interface veth0b1fe68.IPv6 no longer relevant for mDNS.
Jun 10 01:41:01 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface veth0b1fe68.IPv6 with address fe80::b492:e5ff:fe98:b15c.
Jun 10 01:41:01 NAS kernel: br-63825e4fef5c: port 3(veth0b1fe68) entered disabled state
Jun 10 01:41:01 NAS kernel: device veth0b1fe68 left promiscuous mode
Jun 10 01:41:01 NAS kernel: br-63825e4fef5c: port 3(veth0b1fe68) entered disabled state
Jun 10 01:41:01 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::b492:e5ff:fe98:b15c on veth0b1fe68.
Jun 10 01:41:01 NAS kernel: br-63825e4fef5c: port 3(vethb4ff53d) entered blocking state
Jun 10 01:41:01 NAS kernel: br-63825e4fef5c: port 3(vethb4ff53d) entered disabled state
Jun 10 01:41:01 NAS kernel: device vethb4ff53d entered promiscuous mode
Jun 10 01:41:01 NAS kernel: br-63825e4fef5c: port 3(vethb4ff53d) entered blocking state
Jun 10 01:41:01 NAS kernel: br-63825e4fef5c: port 3(vethb4ff53d) entered forwarding state
Jun 10 01:41:01 NAS kernel: veth51655c5: renamed from eth0
Jun 10 01:41:01 NAS kernel: br-63825e4fef5c: port 3(vethb4ff53d) entered disabled state
Jun 10 01:41:01 NAS kernel: br-63825e4fef5c: port 16(vethf25df75) entered disabled state
Jun 10 01:41:01 NAS  avahi-daemon[8585]: Interface vethf25df75.IPv6 no longer relevant for mDNS.
Jun 10 01:41:01 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface vethf25df75.IPv6 with address fe80::1cc6:d1ff:fe00:e77.
Jun 10 01:41:01 NAS kernel: br-63825e4fef5c: port 16(vethf25df75) entered disabled state
Jun 10 01:41:01 NAS kernel: device vethf25df75 left promiscuous mode
Jun 10 01:41:01 NAS kernel: br-63825e4fef5c: port 16(vethf25df75) entered disabled state
Jun 10 01:41:01 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::1cc6:d1ff:fe00:e77 on vethf25df75.
Jun 10 01:41:01 NAS kernel: eth0: renamed from veth82d5122
Jun 10 01:41:01 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethb4ff53d: link becomes ready
Jun 10 01:41:01 NAS kernel: br-63825e4fef5c: port 3(vethb4ff53d) entered blocking state
Jun 10 01:41:01 NAS kernel: br-63825e4fef5c: port 3(vethb4ff53d) entered forwarding state
Jun 10 01:41:02 NAS kernel: mdcmd (37): nocheck cancel
Jun 10 01:41:02 NAS kernel: md: recovery thread: exit status: -4
Jun 10 01:41:02 NAS  avahi-daemon[8585]: Joining mDNS multicast group on interface vethb4ff53d.IPv6 with address fe80::7030:6ff:fe0c:4b39.
Jun 10 01:41:02 NAS  avahi-daemon[8585]: New relevant interface vethb4ff53d.IPv6 for mDNS.
Jun 10 01:41:02 NAS  avahi-daemon[8585]: Registering new address record for fe80::7030:6ff:fe0c:4b39 on vethb4ff53d.*.
Jun 10 01:41:03 NAS  emhttpd: Spinning up all drives...
Jun 10 01:41:03 NAS  emhttpd: read SMART /dev/sdm
Jun 10 01:41:03 NAS  emhttpd: read SMART /dev/sdj
Jun 10 01:41:03 NAS  emhttpd: read SMART /dev/sdk
Jun 10 01:41:03 NAS  emhttpd: read SMART /dev/sdh
Jun 10 01:41:03 NAS  emhttpd: read SMART /dev/sdg
Jun 10 01:41:03 NAS  emhttpd: read SMART /dev/sdd
Jun 10 01:41:03 NAS  emhttpd: read SMART /dev/sde
Jun 10 01:41:03 NAS  emhttpd: read SMART /dev/sdb
Jun 10 01:41:03 NAS  emhttpd: read SMART /dev/sdf
Jun 10 01:41:03 NAS  emhttpd: read SMART /dev/sdc
Jun 10 01:41:03 NAS  emhttpd: read SMART /dev/sdn
Jun 10 01:41:03 NAS  emhttpd: read SMART /dev/sdo
Jun 10 01:41:03 NAS  emhttpd: read SMART /dev/sdl
Jun 10 01:41:03 NAS  emhttpd: read SMART /dev/sdi
Jun 10 01:41:03 NAS  emhttpd: read SMART /dev/sda
Jun 10 01:41:03 NAS  emhttpd: Stopping services...
Jun 10 01:41:03 NAS  emhttpd: shcmd (186): /etc/rc.d/rc.docker stop
Jun 10 01:41:04 NAS kernel: veth3d6562b: renamed from eth0
Jun 10 01:41:04 NAS kernel: br-63825e4fef5c: port 1(veth8f06a0e) entered disabled state
Jun 10 01:41:04 NAS  avahi-daemon[8585]: Interface veth8f06a0e.IPv6 no longer relevant for mDNS.
Jun 10 01:41:04 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface veth8f06a0e.IPv6 with address fe80::c868:3eff:fe1d:6da6.
Jun 10 01:41:04 NAS kernel: br-63825e4fef5c: port 1(veth8f06a0e) entered disabled state
Jun 10 01:41:04 NAS kernel: device veth8f06a0e left promiscuous mode
Jun 10 01:41:04 NAS kernel: br-63825e4fef5c: port 1(veth8f06a0e) entered disabled state
Jun 10 01:41:04 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::c868:3eff:fe1d:6da6 on veth8f06a0e.
Jun 10 01:41:04 NAS kernel: vethe7b0fff: renamed from eth0
Jun 10 01:41:04 NAS kernel: br-63825e4fef5c: port 10(vethb668373) entered disabled state
Jun 10 01:41:04 NAS kernel: veth1b96d32: renamed from eth0
Jun 10 01:41:04 NAS kernel: br-63825e4fef5c: port 6(vetha0840bf) entered disabled state
Jun 10 01:41:04 NAS kernel: veth3fb60ed: renamed from eth0
Jun 10 01:41:04 NAS kernel: br-63825e4fef5c: port 14(vethcfb7e53) entered disabled state
Jun 10 01:41:04 NAS kernel: veth82d5122: renamed from eth0
Jun 10 01:41:04 NAS kernel: br-63825e4fef5c: port 3(vethb4ff53d) entered disabled state
Jun 10 01:41:04 NAS kernel: br-63825e4fef5c: port 10(vethb668373) entered disabled state
Jun 10 01:41:04 NAS  avahi-daemon[8585]: Interface vethb668373.IPv6 no longer relevant for mDNS.
Jun 10 01:41:04 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface vethb668373.IPv6 with address fe80::b476:e9ff:fe18:5ea.
Jun 10 01:41:04 NAS kernel: device vethb668373 left promiscuous mode
Jun 10 01:41:04 NAS kernel: br-63825e4fef5c: port 10(vethb668373) entered disabled state
Jun 10 01:41:04 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::b476:e9ff:fe18:5ea on vethb668373.
Jun 10 01:41:04 NAS  avahi-daemon[8585]: Interface vethcfb7e53.IPv6 no longer relevant for mDNS.
Jun 10 01:41:04 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface vethcfb7e53.IPv6 with address fe80::a8b4:48ff:fe46:ffe6.
Jun 10 01:41:04 NAS kernel: br-63825e4fef5c: port 14(vethcfb7e53) entered disabled state
Jun 10 01:41:04 NAS kernel: device vethcfb7e53 left promiscuous mode
Jun 10 01:41:04 NAS kernel: br-63825e4fef5c: port 14(vethcfb7e53) entered disabled state
Jun 10 01:41:05 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::a8b4:48ff:fe46:ffe6 on vethcfb7e53.
Jun 10 01:41:05 NAS  avahi-daemon[8585]: Interface vethb4ff53d.IPv6 no longer relevant for mDNS.
Jun 10 01:41:05 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface vethb4ff53d.IPv6 with address fe80::7030:6ff:fe0c:4b39.
Jun 10 01:41:05 NAS kernel: br-63825e4fef5c: port 3(vethb4ff53d) entered disabled state
Jun 10 01:41:05 NAS kernel: device vethb4ff53d left promiscuous mode
Jun 10 01:41:05 NAS kernel: br-63825e4fef5c: port 3(vethb4ff53d) entered disabled state
Jun 10 01:41:05 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::7030:6ff:fe0c:4b39 on vethb4ff53d.
Jun 10 01:41:05 NAS  avahi-daemon[8585]: Interface vetha0840bf.IPv6 no longer relevant for mDNS.
Jun 10 01:41:05 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface vetha0840bf.IPv6 with address fe80::64a1:dff:feb9:6383.
Jun 10 01:41:05 NAS kernel: br-63825e4fef5c: port 6(vetha0840bf) entered disabled state
Jun 10 01:41:05 NAS kernel: device vetha0840bf left promiscuous mode
Jun 10 01:41:05 NAS kernel: br-63825e4fef5c: port 6(vetha0840bf) entered disabled state
Jun 10 01:41:05 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::64a1:dff:feb9:6383 on vetha0840bf.
Jun 10 01:41:05 NAS kernel: vethb727dbf: renamed from eth0
Jun 10 01:41:05 NAS kernel: br-63825e4fef5c: port 4(veth530c71d) entered disabled state
Jun 10 01:41:05 NAS  avahi-daemon[8585]: Interface veth530c71d.IPv6 no longer relevant for mDNS.
Jun 10 01:41:05 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface veth530c71d.IPv6 with address fe80::ac7b:35ff:fe85:5947.
Jun 10 01:41:05 NAS kernel: br-63825e4fef5c: port 4(veth530c71d) entered disabled state
Jun 10 01:41:05 NAS kernel: device veth530c71d left promiscuous mode
Jun 10 01:41:05 NAS kernel: br-63825e4fef5c: port 4(veth530c71d) entered disabled state
Jun 10 01:41:05 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::ac7b:35ff:fe85:5947 on veth530c71d.
Jun 10 01:41:07 NAS kernel: br-63825e4fef5c: port 5(veth38bd0d7) entered disabled state
Jun 10 01:41:07 NAS kernel: vethac8405e: renamed from eth0
Jun 10 01:41:07 NAS  avahi-daemon[8585]: Interface veth38bd0d7.IPv6 no longer relevant for mDNS.
Jun 10 01:41:07 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface veth38bd0d7.IPv6 with address fe80::f02d:80ff:fedb:9d35.
Jun 10 01:41:07 NAS kernel: br-63825e4fef5c: port 5(veth38bd0d7) entered disabled state
Jun 10 01:41:07 NAS kernel: device veth38bd0d7 left promiscuous mode
Jun 10 01:41:07 NAS kernel: br-63825e4fef5c: port 5(veth38bd0d7) entered disabled state
Jun 10 01:41:07 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::f02d:80ff:fedb:9d35 on veth38bd0d7.
Jun 10 01:41:08 NAS kernel: vetha080f1c: renamed from eth0
Jun 10 01:41:08 NAS kernel: br-63825e4fef5c: port 13(veth7c46055) entered disabled state
Jun 10 01:41:08 NAS kernel: br-63825e4fef5c: port 8(vethaaa583c) entered disabled state
Jun 10 01:41:08 NAS kernel: vethf97060b: renamed from eth0
Jun 10 01:41:08 NAS  avahi-daemon[8585]: Interface veth7c46055.IPv6 no longer relevant for mDNS.
Jun 10 01:41:08 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface veth7c46055.IPv6 with address fe80::d8e3:76ff:fe13:4660.
Jun 10 01:41:08 NAS kernel: br-63825e4fef5c: port 13(veth7c46055) entered disabled state
Jun 10 01:41:08 NAS kernel: device veth7c46055 left promiscuous mode
Jun 10 01:41:08 NAS kernel: br-63825e4fef5c: port 13(veth7c46055) entered disabled state
Jun 10 01:41:08 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::d8e3:76ff:fe13:4660 on veth7c46055.
Jun 10 01:41:08 NAS  avahi-daemon[8585]: Interface vethaaa583c.IPv6 no longer relevant for mDNS.
Jun 10 01:41:08 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface vethaaa583c.IPv6 with address fe80::8c03:c0ff:fea4:afca.
Jun 10 01:41:08 NAS kernel: br-63825e4fef5c: port 8(vethaaa583c) entered disabled state
Jun 10 01:41:08 NAS kernel: device vethaaa583c left promiscuous mode
Jun 10 01:41:08 NAS kernel: br-63825e4fef5c: port 8(vethaaa583c) entered disabled state
Jun 10 01:41:08 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::8c03:c0ff:fea4:afca on vethaaa583c.
Jun 10 01:41:08 NAS kernel: veth808f9ac: renamed from eth0
Jun 10 01:41:08 NAS kernel: br-63825e4fef5c: port 2(vethd2fdef1) entered disabled state
Jun 10 01:41:08 NAS  avahi-daemon[8585]: Interface vethd2fdef1.IPv6 no longer relevant for mDNS.
Jun 10 01:41:08 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface vethd2fdef1.IPv6 with address fe80::f8b5:1aff:fedb:782a.
Jun 10 01:41:08 NAS kernel: br-63825e4fef5c: port 2(vethd2fdef1) entered disabled state
Jun 10 01:41:08 NAS kernel: device vethd2fdef1 left promiscuous mode
Jun 10 01:41:08 NAS kernel: br-63825e4fef5c: port 2(vethd2fdef1) entered disabled state
Jun 10 01:41:08 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::f8b5:1aff:fedb:782a on vethd2fdef1.
Jun 10 01:41:08 NAS kernel: vethd3c52c2: renamed from eth0
Jun 10 01:41:08 NAS kernel: br-63825e4fef5c: port 9(veth7d055ef) entered disabled state
Jun 10 01:41:08 NAS kernel: veth44adcc5: renamed from eth0
Jun 10 01:41:08 NAS kernel: br-63825e4fef5c: port 7(vethb9ac8b9) entered disabled state
Jun 10 01:41:09 NAS  avahi-daemon[8585]: Interface vethb9ac8b9.IPv6 no longer relevant for mDNS.
Jun 10 01:41:09 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface vethb9ac8b9.IPv6 with address fe80::c4df:b1ff:fe8a:12ea.
Jun 10 01:41:09 NAS kernel: br-63825e4fef5c: port 7(vethb9ac8b9) entered disabled state
Jun 10 01:41:09 NAS kernel: device vethb9ac8b9 left promiscuous mode
Jun 10 01:41:09 NAS kernel: br-63825e4fef5c: port 7(vethb9ac8b9) entered disabled state
Jun 10 01:41:09 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::c4df:b1ff:fe8a:12ea on vethb9ac8b9.
Jun 10 01:41:09 NAS  avahi-daemon[8585]: Interface veth7d055ef.IPv6 no longer relevant for mDNS.
Jun 10 01:41:09 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface veth7d055ef.IPv6 with address fe80::e8c1:b4ff:fedd:9642.
Jun 10 01:41:09 NAS kernel: br-63825e4fef5c: port 9(veth7d055ef) entered disabled state
Jun 10 01:41:09 NAS kernel: device veth7d055ef left promiscuous mode
Jun 10 01:41:09 NAS kernel: br-63825e4fef5c: port 9(veth7d055ef) entered disabled state
Jun 10 01:41:09 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::e8c1:b4ff:fedd:9642 on veth7d055ef.
Jun 10 01:41:09 NAS kernel: vethbdf2193: renamed from eth0
Jun 10 01:41:09 NAS kernel: br-63825e4fef5c: port 12(veth8a45ed4) entered disabled state
Jun 10 01:41:09 NAS kernel: vethf711ec2: renamed from eth0
Jun 10 01:41:09 NAS kernel: br-63825e4fef5c: port 11(veth724fad4) entered disabled state
Jun 10 01:41:09 NAS  avahi-daemon[8585]: Interface veth724fad4.IPv6 no longer relevant for mDNS.
Jun 10 01:41:09 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface veth724fad4.IPv6 with address fe80::5c86:aaff:fe93:b7bb.
Jun 10 01:41:09 NAS kernel: br-63825e4fef5c: port 11(veth724fad4) entered disabled state
Jun 10 01:41:09 NAS kernel: device veth724fad4 left promiscuous mode
Jun 10 01:41:09 NAS kernel: br-63825e4fef5c: port 11(veth724fad4) entered disabled state
Jun 10 01:41:09 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::5c86:aaff:fe93:b7bb on veth724fad4.
Jun 10 01:41:09 NAS  avahi-daemon[8585]: Interface veth8a45ed4.IPv6 no longer relevant for mDNS.
Jun 10 01:41:09 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface veth8a45ed4.IPv6 with address fe80::3c05:1ff:fe7f:21a.
Jun 10 01:41:09 NAS kernel: br-63825e4fef5c: port 12(veth8a45ed4) entered disabled state
Jun 10 01:41:09 NAS kernel: device veth8a45ed4 left promiscuous mode
Jun 10 01:41:09 NAS kernel: br-63825e4fef5c: port 12(veth8a45ed4) entered disabled state
Jun 10 01:41:09 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::3c05:1ff:fe7f:21a on veth8a45ed4.
Jun 10 01:41:12 NAS kernel: br-63825e4fef5c: port 15(vethf20fc8c) entered disabled state
Jun 10 01:41:12 NAS kernel: veth8ec32ac: renamed from eth0
Jun 10 01:41:12 NAS  avahi-daemon[8585]: Interface vethf20fc8c.IPv6 no longer relevant for mDNS.
Jun 10 01:41:12 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface vethf20fc8c.IPv6 with address fe80::945f:c0ff:fe48:5bd.
Jun 10 01:41:12 NAS kernel: br-63825e4fef5c: port 15(vethf20fc8c) entered disabled state
Jun 10 01:41:12 NAS kernel: device vethf20fc8c left promiscuous mode
Jun 10 01:41:12 NAS kernel: br-63825e4fef5c: port 15(vethf20fc8c) entered disabled state
Jun 10 01:41:12 NAS  avahi-daemon[8585]: Withdrawing address record for fe80::945f:c0ff:fe48:5bd on vethf20fc8c.
Jun 10 01:41:13 NAS root: stopping dockerd ...
Jun 10 01:41:14 NAS root: waiting for docker to die ...
Jun 10 01:41:15 NAS  avahi-daemon[8585]: Interface docker0.IPv4 no longer relevant for mDNS.
Jun 10 01:41:15 NAS  avahi-daemon[8585]: Leaving mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
Jun 10 01:41:15 NAS  avahi-daemon[8585]: Withdrawing address record for 172.17.0.1 on docker0.
Jun 10 01:41:15 NAS  emhttpd: shcmd (187): umount /var/lib/docker
Jun 10 01:41:15 NAS Recycle Bin: Stopping Recycle Bin
Jun 10 01:41:15 NAS  emhttpd: Stopping Recycle Bin...
Jun 10 01:41:15 NAS unassigned.devices: Unmounting All Devices...
Jun 10 01:41:15 NAS unassigned.devices: Unmounting partition 'sdg1' at mountpoint '/mnt/disks/SeedBox Old'...
Jun 10 01:41:15 NAS unassigned.devices: Removing SMB share 'SeedBox Old'.
Jun 10 01:41:15 NAS unassigned.devices: Unmount cmd: /sbin/umount -fl '/mnt/disks/SeedBox Old' 2>&1
Jun 10 01:41:15 NAS kernel: EXT4-fs (sdg1): unmounting filesystem.
Jun 10 01:41:15 NAS unassigned.devices: Successfully unmounted 'sdg1'
Jun 10 01:41:15 NAS unassigned.devices: Unmounting partition 'sdm1' at mountpoint '/mnt/disks/SeedBox'...
Jun 10 01:41:15 NAS unassigned.devices: Removing SMB share 'SeedBox'.
Jun 10 01:41:15 NAS unassigned.devices: Unmount cmd: /sbin/umount -fl '/mnt/disks/SeedBox' 2>&1
Jun 10 01:41:15 NAS unassigned.devices: Successfully unmounted 'sdm1'
Jun 10 01:41:15 NAS  sudo: pam_unix(sudo:session): session closed for user root
Jun 10 01:41:16 NAS  emhttpd: shcmd (188): /etc/rc.d/rc.samba stop
Jun 10 01:41:16 NAS  wsdd2[8565]: 'Terminated' signal received.
Jun 10 01:41:16 NAS  wsdd2[8565]: terminating.
Jun 10 01:41:16 NAS  emhttpd: shcmd (189): rm -f /etc/avahi/services/smb.service
Jun 10 01:41:16 NAS  avahi-daemon[8585]: Files changed, reloading.
Jun 10 01:41:16 NAS  avahi-daemon[8585]: Service group file /services/smb.service vanished, removing services.
Jun 10 01:41:16 NAS  emhttpd: Stopping mover...
Jun 10 01:41:16 NAS  emhttpd: shcmd (191): /usr/local/sbin/mover stop
Jun 10 01:41:16 NAS root: mover: not running
Jun 10 01:41:16 NAS  emhttpd: Sync filesystems...
Jun 10 01:41:16 NAS  emhttpd: shcmd (192): sync
Jun 10 01:41:16 NAS  emhttpd: shcmd (193): umount /mnt/user0
Jun 10 01:41:16 NAS  emhttpd: shcmd (194): rmdir /mnt/user0
Jun 10 01:41:16 NAS  emhttpd: shcmd (195): umount /mnt/user
Jun 10 01:41:17 NAS  emhttpd: shcmd (196): rmdir /mnt/user
Jun 10 01:41:17 NAS  emhttpd: shcmd (198): /usr/local/sbin/update_cron
Jun 10 01:41:17 NAS  emhttpd: Unmounting disks...
Jun 10 01:41:17 NAS  emhttpd: shcmd (199): umount /mnt/disk1
Jun 10 01:41:17 NAS kernel: XFS (md1): Unmounting Filesystem
Jun 10 01:41:17 NAS  emhttpd: shcmd (200): rmdir /mnt/disk1
Jun 10 01:41:17 NAS  emhttpd: shcmd (201): umount /mnt/disk2
Jun 10 01:41:18 NAS kernel: XFS (md2): Unmounting Filesystem
Jun 10 01:41:18 NAS  emhttpd: shcmd (202): rmdir /mnt/disk2
Jun 10 01:41:18 NAS  emhttpd: shcmd (203): umount /mnt/disk3
Jun 10 01:41:18 NAS kernel: XFS (md3): Unmounting Filesystem
Jun 10 01:41:18 NAS  emhttpd: shcmd (204): rmdir /mnt/disk3
Jun 10 01:41:18 NAS  emhttpd: shcmd (205): umount /mnt/disk4
Jun 10 01:41:18 NAS kernel: XFS (md4): Unmounting Filesystem
Jun 10 01:41:18 NAS  emhttpd: shcmd (206): rmdir /mnt/disk4
Jun 10 01:41:18 NAS  emhttpd: shcmd (207): umount /mnt/disk5
Jun 10 01:41:18 NAS kernel: XFS (md5): Unmounting Filesystem
Jun 10 01:41:18 NAS  emhttpd: shcmd (208): rmdir /mnt/disk5
Jun 10 01:41:18 NAS  emhttpd: shcmd (209): umount /mnt/disk6
Jun 10 01:41:18 NAS kernel: XFS (md6): Unmounting Filesystem
Jun 10 01:41:18 NAS  emhttpd: shcmd (210): rmdir /mnt/disk6
Jun 10 01:41:18 NAS  emhttpd: shcmd (211): umount /mnt/disk7
Jun 10 01:41:18 NAS kernel: XFS (md7): Unmounting Filesystem
Jun 10 01:41:19 NAS  emhttpd: shcmd (212): rmdir /mnt/disk7
Jun 10 01:41:19 NAS  emhttpd: shcmd (213): umount /mnt/disk8
Jun 10 01:41:19 NAS kernel: XFS (md8): Unmounting Filesystem
Jun 10 01:41:19 NAS  emhttpd: shcmd (214): rmdir /mnt/disk8
Jun 10 01:41:19 NAS  emhttpd: shcmd (215): umount /mnt/disk9
Jun 10 01:41:19 NAS kernel: XFS (md9): Unmounting Filesystem
Jun 10 01:41:19 NAS  emhttpd: shcmd (216): rmdir /mnt/disk9
Jun 10 01:41:19 NAS  emhttpd: shcmd (217): umount /mnt/cache
Jun 10 01:41:19 NAS root: umount: /mnt/cache: target is busy.
Jun 10 01:41:19 NAS  emhttpd: shcmd (217): exit status: 32
Jun 10 01:41:19 NAS  emhttpd: Retry unmounting disk share(s)...
Jun 10 01:41:24 NAS  emhttpd: Unmounting disks...
Jun 10 01:41:24 NAS  emhttpd: shcmd (218): umount /mnt/cache
Jun 10 01:41:24 NAS root: umount: /mnt/cache: target is busy.
Jun 10 01:41:24 NAS  emhttpd: shcmd (218): exit status: 32
[... the same "Unmounting disks" / "umount /mnt/cache: target is busy" / "Retry unmounting disk share(s)" cycle repeats every 5 seconds (shcmd 219 through 228), from 01:41:24 through 01:42:14 ...]
Jun 10 01:42:14 NAS root: /usr/local/sbin/powerdown has been deprecated
[... three more identical retry cycles (shcmd 229 through 231) at 01:42:19, 01:42:24 and 01:42:29 ...]
Jun 10 01:42:32 NAS root: Status of all loop devices
Jun 10 01:42:32 NAS root: /dev/loop1: [2049]:11 (/boot/bzmodules)
Jun 10 01:42:32 NAS root: /dev/loop2: [0054]:22117639 (/mnt/cache/system/docker/docker.img)
Jun 10 01:42:32 NAS root: /dev/loop0: [2049]:9 (/boot/bzfirmware)
Jun 10 01:42:32 NAS root: Active pids left on /mnt/*
Jun 10 01:42:32 NAS root:                      USER        PID ACCESS COMMAND
Jun 10 01:42:32 NAS root: /mnt/addons:         root     kernel mount /mnt/addons
Jun 10 01:42:32 NAS root: /mnt/cache:          root     kernel mount /mnt/cache
Jun 10 01:42:32 NAS root: /mnt/disks:          root     kernel mount /mnt/disks
Jun 10 01:42:32 NAS root: /mnt/remotes:        root     kernel mount /mnt/remotes
Jun 10 01:42:32 NAS root: /mnt/rootshare:      root     kernel mount /mnt/rootshare
Jun 10 01:42:32 NAS root: Active pids left on /dev/md*
Jun 10 01:42:32 NAS root: Generating diagnostics...
Jun 10 01:42:34 NAS  emhttpd: Unmounting disks...
Jun 10 01:42:34 NAS  emhttpd: shcmd (232): umount /mnt/cache
Jun 10 01:42:34 NAS root: umount: /mnt/cache: target is busy.
Jun 10 01:42:34 NAS  emhttpd: shcmd (232): exit status: 32
Jun 10 01:42:34 NAS  emhttpd: Retry unmounting disk share(s)...

 

Docker Log:
time="2023-06-09T10:31:26-06:00" level=warning msg="containerd config version `1` has been deprecated and will be removed in containerd v2.0, please switch to version `2`, see https://github.com/containerd/containerd/blob/main/docs/PLUGINS.md#version-header"
time="2023-06-09T10:31:28.626715831-06:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-06-09T10:31:28.626765649-06:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-06-09T10:31:28.626775447-06:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-06-09T10:31:28.626929201-06:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2752497c7ea9d23789e1d62b5ec94b73d309ac873ac2efbd110f2c8576d200fd pid=9482 runtime=io.containerd.runc.v2
[... the same four "loading plugin" / "starting signal loop" lines then repeat for each remaining container (pids 9654 through 14204), from 10:31:29 until the log cuts off mid-block at 10:31:53 ...]
time="2023-06-09T10:31:53.158192715-06:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c1a3c857c0261c712a7d2e3c7b7f4680807e278aa3b8a386d103c851fea92466 pid=14782 runtime=io.containerd.runc.v2
time="2023-06-09T10:31:57.593649117-06:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-06-09T10:31:57.593694122-06:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-06-09T10:31:57.593703858-06:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-06-09T10:31:57.593880158-06:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8dad0b917e73785c1c2eb16867eb7e56a2994d1c356f044561579b816a254d32 pid=15123 runtime=io.containerd.runc.v2
time="2023-06-09T10:32:01.554077085-06:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-06-09T10:32:01.556446210-06:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-06-09T10:32:01.556498820-06:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-06-09T10:32:01.556648499-06:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e89c7aeeaf336ba6e1e530ea603ed5a222ca77144aa64190a2bc126838e8e1ea pid=15509 runtime=io.containerd.runc.v2
time="2023-06-09T10:32:13.529022438-06:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-06-09T10:32:13.529084422-06:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-06-09T10:32:13.529094920-06:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-06-09T10:32:13.529253900-06:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e89c7aeeaf336ba6e1e530ea603ed5a222ca77144aa64190a2bc126838e8e1ea pid=16044 runtime=io.containerd.runc.v2
time="2023-06-09T10:32:25.417406596-06:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-06-09T10:32:25.418417274-06:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-06-09T10:32:25.418491579-06:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-06-09T10:32:25.420906697-06:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e89c7aeeaf336ba6e1e530ea603ed5a222ca77144aa64190a2bc126838e8e1ea pid=16549 runtime=io.containerd.runc.v2
time="2023-06-09T10:32:37.984123270-06:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-06-09T10:32:37.984220287-06:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-06-09T10:32:37.984245389-06:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-06-09T10:32:37.984381201-06:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e89c7aeeaf336ba6e1e530ea603ed5a222ca77144aa64190a2bc126838e8e1ea pid=16990 runtime=io.containerd.runc.v2
time="2023-06-09T10:32:49.380873733-06:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-06-09T10:32:49.380944291-06:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-06-09T10:32:49.380955253-06:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-06-09T10:32:49.382155591-06:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e89c7aeeaf336ba6e1e530ea603ed5a222ca77144aa64190a2bc126838e8e1ea pid=17250 runtime=io.containerd.runc.v2
time="2023-06-09T10:33:00.205957592-06:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-06-09T10:33:00.206051708-06:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-06-09T10:33:00.206077813-06:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-06-09T10:33:00.209881033-06:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e89c7aeeaf336ba6e1e530ea603ed5a222ca77144aa64190a2bc126838e8e1ea pid=17500 runtime=io.containerd.runc.v2
time="2023-06-09T10:33:12.605392754-06:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-06-09T10:33:12.606291269-06:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-06-09T10:33:12.606309310-06:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-06-09T10:33:12.606890881-06:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e89c7aeeaf336ba6e1e530ea603ed5a222ca77144aa64190a2bc126838e8e1ea pid=17911 runtime=io.containerd.runc.v2
time="2023-06-09T11:46:17.843335194-06:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-06-09T11:46:17.843433687-06:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-06-09T11:46:17.843458800-06:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-06-09T11:46:17.843610715-06:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e89c7aeeaf336ba6e1e530ea603ed5a222ca77144aa64190a2bc126838e8e1ea pid=25954 runtime=io.containerd.runc.v2
time="2023-06-09T11:46:19.346997748-06:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-06-09T11:46:19.347065511-06:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-06-09T11:46:19.347077551-06:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-06-09T11:46:19.347272644-06:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b8284f85b1d1b645a705a5ca906847e5cb83456816889c91f6d200c95477a07b pid=26324 runtime=io.containerd.runc.v2
time="2023-06-09T11:46:21.554268613-06:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-06-09T11:46:21.554322180-06:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-06-09T11:46:21.554332764-06:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-06-09T11:46:21.559047646-06:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e89c7aeeaf336ba6e1e530ea603ed5a222ca77144aa64190a2bc126838e8e1ea pid=26777 runtime=io.containerd.runc.v2
time="2023-06-09T11:46:30.596339030-06:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-06-09T11:46:30.596427701-06:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-06-09T11:46:30.596453627-06:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-06-09T11:46:30.602890047-06:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e89c7aeeaf336ba6e1e530ea603ed5a222ca77144aa64190a2bc126838e8e1ea pid=27702 runtime=io.containerd.runc.v2
time="2023-06-09T11:46:37.741102474-06:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-06-09T11:46:37.741153663-06:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-06-09T11:46:37.741163804-06:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-06-09T11:46:37.742651253-06:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e89c7aeeaf336ba6e1e530ea603ed5a222ca77144aa64190a2bc126838e8e1ea pid=28365 runtime=io.containerd.runc.v2
time="2023-06-09T11:46:43.693239459-06:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-06-09T11:46:43.693293644-06:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-06-09T11:46:43.693304540-06:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-06-09T11:46:43.693451882-06:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e89c7aeeaf336ba6e1e530ea603ed5a222ca77144aa64190a2bc126838e8e1ea pid=28683 runtime=io.containerd.runc.v2
time="2023-06-09T11:46:52.680960946-06:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-06-09T11:46:52.681012938-06:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-06-09T11:46:52.681023039-06:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-06-09T11:46:52.681140165-06:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e89c7aeeaf336ba6e1e530ea603ed5a222ca77144aa64190a2bc126838e8e1ea pid=29099 runtime=io.containerd.runc.v2
time="2023-06-09T11:47:06.480032332-06:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-06-09T11:47:06.480081539-06:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-06-09T11:47:06.480092801-06:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-06-09T11:47:06.483903214-06:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e89c7aeeaf336ba6e1e530ea603ed5a222ca77144aa64190a2bc126838e8e1ea pid=29784 runtime=io.containerd.runc.v2
time="2023-06-09T11:47:24.278186238-06:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-06-09T11:47:24.278281246-06:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-06-09T11:47:24.278311820-06:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-06-09T11:47:24.278462934-06:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e89c7aeeaf336ba6e1e530ea603ed5a222ca77144aa64190a2bc126838e8e1ea pid=30548 runtime=io.containerd.runc.v2
time="2023-06-10T01:41:01.529726837-06:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-06-10T01:41:01.529777853-06:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-06-10T01:41:01.529788169-06:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-06-10T01:41:01.529920097-06:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e89c7aeeaf336ba6e1e530ea603ed5a222ca77144aa64190a2bc126838e8e1ea pid=18475 runtime=io.containerd.runc.v2

 

DelugeVPN log

2023-06-09T17:46:19.814040737Z Created by...
2023-06-09T17:46:19.814062852Z ___.   .__       .__
2023-06-09T17:46:19.814071660Z \_ |__ |__| ____ |  |__   ____ ___  ___
2023-06-09T17:46:19.814077289Z  | __ \|  |/    \|  |  \_/ __ \\  \/  /
2023-06-09T17:46:19.814082529Z  | \_\ \  |   |  \   Y  \  ___/ >    <
2023-06-09T17:46:19.814087851Z  |___  /__|___|  /___|  /\___  >__/\_ \
2023-06-09T17:46:19.814092906Z      \/        \/     \/     \/      \/
2023-06-09T17:46:19.814097972Z    https://hub.docker.com/u/binhex/
2023-06-09T17:46:19.814102942Z
2023-06-09T17:46:19.935864126Z 2023-06-09 11:46:19.927264 [info] Host is running unRAID
2023-06-09T17:46:19.966039690Z 2023-06-09 11:46:19.965804 [info] System information Linux b8284f85b1d1 5.19.17-Unraid #2 SMP PREEMPT_DYNAMIC Wed Nov 2 11:54:15 PDT 2022 x86_64 GNU/Linux
2023-06-09T17:46:20.047758375Z 2023-06-09 11:46:20.047564 [info] OS_ARCH defined as 'x86-64'
2023-06-09T17:46:20.115688334Z 2023-06-09 11:46:20.113632 [info] PUID defined as '99'
2023-06-09T17:46:20.388864896Z 2023-06-09 11:46:20.376929 [info] PGID defined as '100'
2023-06-09T17:46:20.572768163Z 2023-06-09 11:46:20.561622 [info] UMASK defined as '000'
2023-06-09T17:46:20.639011761Z 2023-06-09 11:46:20.632601 [info] Permissions already set for '/config'
2023-06-09T17:46:20.807268737Z 2023-06-09 11:46:20.799403 [info] Deleting files in /tmp (non recursive)...
2023-06-09T17:46:20.852874192Z 2023-06-09 11:46:20.849951 [info] VPN_ENABLED defined as 'yes'
2023-06-09T17:46:20.879632302Z 2023-06-09 11:46:20.879403 [info] VPN_CLIENT defined as 'wireguard'
2023-06-09T17:46:20.930863622Z 2023-06-09 11:46:20.909632 [info] VPN_PROV defined as 'pia'
2023-06-09T17:46:20.990219258Z 2023-06-09 11:46:20.989941 [info] WireGuard config file (conf extension) is located at /config/wireguard/wg0.conf
2023-06-09T17:46:21.129795603Z 2023-06-09 11:46:21.129452 [info] VPN_REMOTE_SERVER defined as 'nl-amsterdam.privacy.network'
2023-06-09T17:46:21.231734793Z 2023-06-09 11:46:21.209609 [info] VPN_REMOTE_PORT defined as '1337'
2023-06-09T17:46:21.282556672Z 2023-06-09 11:46:21.282106 [info] VPN_DEVICE_TYPE defined as 'wg0'
2023-06-09T17:46:21.369794737Z 2023-06-09 11:46:21.369540 [info] VPN_REMOTE_PROTOCOL defined as 'udp'
2023-06-09T17:46:22.378765775Z 2023-06-09 11:46:22.378504 [info] LAN_NETWORK defined as '192.168.1.0/24'
2023-06-09T17:46:22.521217647Z 2023-06-09 11:46:22.514068 [info] NAME_SERVERS defined as '209.222.18.222,84.200.69.80,37.235.1.174,1.1.1.1,209.222.18.218,37.235.1.177,84.200.70.40,1.0.0.1'
2023-06-09T17:46:22.625859719Z 2023-06-09 11:46:22.617420 [info] VPN_USER defined as '<redacted>'
2023-06-09T17:46:22.706377098Z 2023-06-09 11:46:22.700456 [info] VPN_PASS defined as '<redacted>'
2023-06-09T17:46:22.829859405Z 2023-06-09 11:46:22.772122 [info] STRICT_PORT_FORWARD defined as 'yes'
2023-06-09T17:46:22.985306315Z 2023-06-09 11:46:22.971830 [info] ENABLE_PRIVOXY defined as 'no'
2023-06-09T17:46:23.255844714Z 2023-06-09 11:46:23.243084 [info] VPN_INPUT_PORTS not defined (via -e VPN_INPUT_PORTS), skipping allow for custom incoming ports
2023-06-09T17:46:23.412083231Z 2023-06-09 11:46:23.388233 [info] VPN_OUTPUT_PORTS not defined (via -e VPN_OUTPUT_PORTS), skipping allow for custom outgoing ports
2023-06-09T17:46:23.460965447Z 2023-06-09 11:46:23.459723 [info] DELUGE_DAEMON_LOG_LEVEL defined as 'info'
2023-06-09T17:46:23.526071697Z 2023-06-09 11:46:23.511797 [info] DELUGE_WEB_LOG_LEVEL defined as 'info'
2023-06-09T17:46:23.681957368Z 2023-06-09 11:46:23.677564 [info] DELUGE_ENABLE_WEBUI_PASSWORD not defined,(via -e DELUGE_ENABLE_WEBUI_PASSWORD), defaulting to 'yes'
2023-06-09T17:46:23.805750070Z 2023-06-09 11:46:23.800256 [info] Starting Supervisor...
2023-06-09T17:46:24.584449551Z 2023-06-09 11:46:24,581 INFO Included extra file "/etc/supervisor/conf.d/delugevpn.conf" during parsing
2023-06-09T17:46:24.584467929Z 2023-06-09 11:46:24,581 INFO Set uid to user 0 succeeded
2023-06-09T17:46:24.601844180Z 2023-06-09 11:46:24,586 INFO supervisord started with pid 7
2023-06-09T17:46:25.593519449Z 2023-06-09 11:46:25,591 INFO spawned: 'start-script' with pid 286
2023-06-09T17:46:25.605919075Z 2023-06-09 11:46:25,604 INFO spawned: 'watchdog-script' with pid 287
2023-06-09T17:46:25.625849158Z 2023-06-09 11:46:25,612 INFO reaped unknown pid 8 (exit status 0)
2023-06-09T17:46:25.663873227Z 2023-06-09 11:46:25,639 DEBG 'start-script' stdout output:
2023-06-09T17:46:25.663893756Z [info] VPN is enabled, beginning configuration of VPN
2023-06-09T17:46:25.663899898Z
2023-06-09T17:46:25.663905106Z 2023-06-09 11:46:25,647 INFO success: start-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2023-06-09T17:46:25.663910713Z 2023-06-09 11:46:25,648 INFO success: watchdog-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2023-06-09T17:46:25.687938708Z 2023-06-09 11:46:25,679 DEBG 'start-script' stdout output:
2023-06-09T17:46:25.687959263Z [info] Adding 209.222.18.222 to /etc/resolv.conf
2023-06-09T17:46:25.687965445Z
2023-06-09T17:46:25.692041946Z 2023-06-09 11:46:25,691 DEBG 'start-script' stdout output:
2023-06-09T17:46:25.692058535Z [info] Adding 84.200.69.80 to /etc/resolv.conf
2023-06-09T17:46:25.692064824Z
2023-06-09T17:46:25.704814301Z 2023-06-09 11:46:25,704 DEBG 'start-script' stdout output:
2023-06-09T17:46:25.704836758Z [info] Adding 37.235.1.174 to /etc/resolv.conf
2023-06-09T17:46:25.704842876Z
2023-06-09T17:46:25.740248033Z 2023-06-09 11:46:25,740 DEBG 'start-script' stdout output:
2023-06-09T17:46:25.740266440Z [info] Adding 1.1.1.1 to /etc/resolv.conf
2023-06-09T17:46:25.740272649Z
2023-06-09T17:46:25.765410049Z 2023-06-09 11:46:25,762 DEBG 'start-script' stdout output:
2023-06-09T17:46:25.765429204Z [info] Adding 209.222.18.218 to /etc/resolv.conf
2023-06-09T17:46:25.765446824Z
2023-06-09T17:46:25.814969816Z 2023-06-09 11:46:25,797 DEBG 'start-script' stdout output:
2023-06-09T17:46:25.814988032Z [info] Adding 37.235.1.177 to /etc/resolv.conf
2023-06-09T17:46:25.814994225Z
2023-06-09T17:46:25.814999503Z 2023-06-09 11:46:25,810 DEBG 'start-script' stdout output:
2023-06-09T17:46:25.815004739Z [info] Adding 84.200.70.40 to /etc/resolv.conf
2023-06-09T17:46:25.815009930Z
2023-06-09T17:46:25.855935829Z 2023-06-09 11:46:25,827 DEBG 'start-script' stdout output:
2023-06-09T17:46:25.855960284Z [info] Adding 1.0.0.1 to /etc/resolv.conf
2023-06-09T17:46:25.855967016Z
2023-06-09T17:46:26.221925397Z 2023-06-09 11:46:26,206 DEBG 'start-script' stdout output:
2023-06-09T17:46:26.221947143Z [info] Token generated for PIA wireguard authentication
2023-06-09T17:46:26.221953405Z
2023-06-09T17:46:26.360611251Z 2023-06-09 11:46:26,334 DEBG 'start-script' stdout output:
2023-06-09T17:46:26.360629277Z [info] Trying to connect to the PIA WireGuard API on 'nl-amsterdam.privacy.network'...
2023-06-09T17:46:26.360635450Z
2023-06-09T17:46:26.986634706Z 2023-06-09 11:46:26,975 DEBG 'start-script' stdout output:
2023-06-09T17:46:26.986659145Z [info] Default route for container is 172.18.0.1
2023-06-09T17:46:26.986665712Z
2023-06-09T17:46:27.260340856Z 2023-06-09 11:46:27,259 DEBG 'start-script' stdout output:
2023-06-09T17:46:27.260432960Z [info] Docker network defined as    172.18.0.0/16
2023-06-09T17:46:27.260439793Z
2023-06-09T17:46:27.264325469Z 2023-06-09 11:46:27,264 DEBG 'start-script' stdout output:
2023-06-09T17:46:27.264341343Z [info] Adding 192.168.1.0/24 as route via docker eth0
2023-06-09T17:46:27.264347261Z
2023-06-09T17:46:27.281984153Z 2023-06-09 11:46:27,277 DEBG 'start-script' stdout output:
2023-06-09T17:46:27.282006911Z [info] ip route defined as follows...
2023-06-09T17:46:27.282013095Z --------------------
2023-06-09T17:46:27.282018326Z default via 172.18.0.1 dev eth0
2023-06-09T17:46:27.282023417Z 172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.17
2023-06-09T17:46:27.282028544Z 192.168.1.0/24 via 172.18.0.1 dev eth0
2023-06-09T17:46:27.282033625Z local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1
2023-06-09T17:46:27.282038744Z local 127.0.0.1 dev lo table local proto kernel scope host src 127.0.0.1
2023-06-09T17:46:27.282043795Z broadcast 127.255.255.255 dev lo table local proto kernel scope link src 127.0.0.1
2023-06-09T17:46:27.282048885Z local 172.18.0.17 dev eth0 table local proto kernel scope host src 172.18.0.17
2023-06-09T17:46:27.282053978Z broadcast 172.18.255.255 dev eth0 table local proto kernel scope link src 172.18.0.17
2023-06-09T17:46:27.282072549Z --------------------
2023-06-09T17:46:27.282078539Z
2023-06-09T17:46:27.300375844Z 2023-06-09 11:46:27,298 DEBG 'start-script' stdout output:
2023-06-09T17:46:27.300393684Z iptable_mangle         16384  1
2023-06-09T17:46:27.300399859Z ip_tables              28672  6 iptable_filter,iptable_raw,iptable_nat,iptable_mangle
2023-06-09T17:46:27.300405100Z x_tables               45056  15 ip6table_filter,xt_conntrack,iptable_filter,xt_tcpudp,xt_addrtype,xt_nat,xt_comment,ip6_tables,xt_connmark,iptable_raw,ip_tables,iptable_nat,xt_MASQUERADE,iptable_mangle,xt_mark
2023-06-09T17:46:27.300410598Z
2023-06-09T17:46:27.323296964Z 2023-06-09 11:46:27,319 DEBG 'start-script' stdout output:
2023-06-09T17:46:27.323315631Z [info] iptable_mangle support detected, adding fwmark for tables
2023-06-09T17:46:27.323321457Z
2023-06-09T17:46:27.363829827Z 2023-06-09 11:46:27,354 DEBG 'start-script' stdout output:
2023-06-09T17:46:27.363851514Z [info] iptables defined as follows...
2023-06-09T17:46:27.363857637Z --------------------
2023-06-09T17:46:27.363862873Z
2023-06-09T17:46:27.363873387Z 2023-06-09 11:46:27,358 DEBG 'start-script' stdout output:
2023-06-09T17:46:27.363878655Z -P INPUT DROP
2023-06-09T17:46:27.363883713Z -P FORWARD DROP
2023-06-09T17:46:27.363888769Z -P OUTPUT DROP
2023-06-09T17:46:27.363893829Z -A INPUT -s 181.214.206.236/32 -i eth0 -j ACCEPT
2023-06-09T17:46:27.363904965Z -A INPUT -s 212.102.35.25/32 -i eth0 -j ACCEPT
2023-06-09T17:46:27.363910573Z -A INPUT -s 212.102.35.26/32 -i eth0 -j ACCEPT
2023-06-09T17:46:27.363915667Z -A INPUT -s 104.18.14.49/32 -i eth0 -j ACCEPT
2023-06-09T17:46:27.363920743Z -A INPUT -s 104.18.15.49/32 -i eth0 -j ACCEPT
2023-06-09T17:46:27.363925810Z -A INPUT -s 104.17.108.63/32 -i eth0 -j ACCEPT
2023-06-09T17:46:27.363930988Z -A INPUT -s 104.17.107.63/32 -i eth0 -j ACCEPT
2023-06-09T17:46:27.363936111Z -A INPUT -s 172.18.0.0/16 -d 172.18.0.0/16 -j ACCEPT
2023-06-09T17:46:27.363941310Z -A INPUT -i eth0 -p tcp -m tcp --dport 8112 -j ACCEPT
2023-06-09T17:46:27.363946462Z -A INPUT -i eth0 -p udp -m udp --dport 8112 -j ACCEPT
2023-06-09T17:46:27.363951652Z -A INPUT -s 192.168.1.0/24 -d 172.18.0.0/16 -i eth0 -p tcp -m tcp --dport 58846 -j ACCEPT
2023-06-09T17:46:27.363956898Z -A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
2023-06-09T17:46:27.363962050Z -A INPUT -i lo -j ACCEPT
2023-06-09T17:46:27.363967155Z -A INPUT -i wg0 -j ACCEPT
2023-06-09T17:46:27.363977418Z -A OUTPUT -d 181.214.206.236/32 -o eth0 -j ACCEPT
2023-06-09T17:46:27.363983017Z -A OUTPUT -d 212.102.35.25/32 -o eth0 -j ACCEPT
2023-06-09T17:46:27.363989576Z -A OUTPUT -d 212.102.35.26/32 -o eth0 -j ACCEPT
2023-06-09T17:46:27.364007057Z -A OUTPUT -d 104.18.14.49/32 -o eth0 -j ACCEPT
2023-06-09T17:46:27.364013198Z -A OUTPUT -d 104.18.15.49/32 -o eth0 -j ACCEPT
2023-06-09T17:46:27.364018396Z -A OUTPUT -d 104.17.108.63/32 -o eth0 -j ACCEPT
2023-06-09T17:46:27.364023637Z -A OUTPUT -d 104.17.107.63/32 -o eth0 -j ACCEPT
2023-06-09T17:46:27.364028886Z -A OUTPUT -s 172.18.0.0/16 -d 172.18.0.0/16 -j ACCEPT
2023-06-09T17:46:27.364035289Z -A OUTPUT -o eth0 -p tcp -m tcp --sport 8112 -j ACCEPT
2023-06-09T17:46:27.364040679Z -A OUTPUT -o eth0 -p udp -m udp --sport 8112 -j ACCEPT
2023-06-09T17:46:27.364051499Z -A OUTPUT -s 172.18.0.0/16 -d 192.168.1.0/24 -o eth0 -p tcp -m tcp --sport 58846 -j ACCEPT
2023-06-09T17:46:27.364057292Z -A OUTPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
2023-06-09T17:46:27.364062481Z -A OUTPUT -o lo -j ACCEPT
2023-06-09T17:46:27.364067554Z -A OUTPUT -o wg0 -j ACCEPT
2023-06-09T17:46:27.364072822Z --------------------
2023-06-09T17:46:27.364077941Z
2023-06-09T17:46:27.372854076Z 2023-06-09 11:46:27,364 DEBG 'start-script' stdout output:
2023-06-09T17:46:27.372874794Z [info] Attempting to bring WireGuard interface 'up'...
2023-06-09T17:46:27.372881264Z
2023-06-09T17:46:27.393080310Z 2023-06-09 11:46:27,388 DEBG 'start-script' stderr output:
2023-06-09T17:46:27.393104288Z Warning: `/config/wireguard/wg0.conf' is world accessible
2023-06-09T17:46:27.393110253Z
2023-06-09T17:46:27.446099737Z 2023-06-09 11:46:27,445 DEBG 'start-script' stderr output:
2023-06-09T17:46:27.446116871Z [#] ip link add wg0 type wireguard
2023-06-09T17:46:27.446126122Z
2023-06-09T17:46:27.453905614Z 2023-06-09 11:46:27,451 DEBG 'start-script' stderr output:
2023-06-09T17:46:27.453924075Z [#] wg setconf wg0 /dev/fd/63
2023-06-09T17:46:27.453930258Z
2023-06-09T17:46:27.453935495Z 2023-06-09 11:46:27,453 DEBG 'start-script' stderr output:
2023-06-09T17:46:27.453940732Z [#] ip -4 address add 10.25.221.154 dev wg0
2023-06-09T17:46:27.453945869Z
2023-06-09T17:46:27.477621758Z 2023-06-09 11:46:27,465 DEBG 'start-script' stderr output:
2023-06-09T17:46:27.477640872Z [#] ip link set mtu 1420 up dev wg0
2023-06-09T17:46:27.477646886Z
2023-06-09T17:46:27.510233779Z 2023-06-09 11:46:27,500 DEBG 'start-script' stderr output:
2023-06-09T17:46:27.510248802Z [#] wg set wg0 fwmark 51820
2023-06-09T17:46:27.510254556Z
2023-06-09T17:46:27.510259637Z 2023-06-09 11:46:27,501 DEBG 'start-script' stderr output:
2023-06-09T17:46:27.510264851Z [#] ip -4 route add 0.0.0.0/0 dev wg0 table 51820
2023-06-09T17:46:27.510269968Z
2023-06-09T17:46:27.510274903Z 2023-06-09 11:46:27,505 DEBG 'start-script' stderr output:
2023-06-09T17:46:27.510293676Z [#] ip -4 rule add not fwmark 51820 table 51820
2023-06-09T17:46:27.510299975Z
2023-06-09T17:46:27.538507424Z 2023-06-09 11:46:27,510 DEBG 'start-script' stderr output:
2023-06-09T17:46:27.538526853Z [#] ip -4 rule add table main suppress_prefixlength 0
2023-06-09T17:46:27.538532997Z
2023-06-09T17:46:27.538538214Z 2023-06-09 11:46:27,528 DEBG 'start-script' stderr output:
2023-06-09T17:46:27.538589657Z [#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
2023-06-09T17:46:27.538595432Z
2023-06-09T17:46:27.538600486Z 2023-06-09 11:46:27,534 DEBG 'start-script' stderr output:
2023-06-09T17:46:27.538605565Z [#] iptables-restore -n
2023-06-09T17:46:27.538610605Z
2023-06-09T17:46:27.551011445Z 2023-06-09 11:46:27,549 DEBG 'start-script' stderr output:
2023-06-09T17:46:27.551028578Z [#] '/root/wireguardup.sh'
2023-06-09T17:46:27.551034420Z
2023-06-09T17:46:28.903554859Z 2023-06-09 11:46:28,902 DEBG 'start-script' stdout output:
2023-06-09T17:46:28.903574118Z [info] Attempting to get external IP using 'http://checkip.amazonaws.com'...
2023-06-09T17:46:28.903580573Z
2023-06-09T17:46:34.490854442Z 2023-06-09 11:46:34,464 DEBG 'start-script' stdout output:
2023-06-09T17:46:34.490870892Z [info] Successfully retrieved external IP address 181.214.206.236
2023-06-09T17:46:34.490876939Z
2023-06-09T17:46:34.490883265Z 2023-06-09 11:46:34,476 DEBG 'start-script' stdout output:
2023-06-09T17:46:34.490888585Z [info] WireGuard interface 'up'
2023-06-09T17:46:34.490893714Z [info] Script started to assign incoming port
2023-06-09T17:46:34.490898818Z [info] Port forwarding is enabled
2023-06-09T17:46:34.490903938Z [info] Checking endpoint 'nl-amsterdam.privacy.network' is port forward enabled...
2023-06-09T17:46:34.490909267Z
2023-06-09T17:46:35.141998216Z 2023-06-09 11:46:35,129 DEBG 'start-script' stdout output:
2023-06-09T17:46:35.142016862Z [info] PIA endpoint 'nl-amsterdam.privacy.network' is in the list of endpoints that support port forwarding
2023-06-09T17:46:35.142023325Z [info] List of PIA endpoints that support port forwarding:-
2023-06-09T17:46:35.142028780Z [info] nl-amsterdam.privacy.network
2023-06-09T17:46:35.142034009Z [info] philippines.privacy.network
2023-06-09T17:46:35.142039135Z [info] ua.privacy.network
2023-06-09T17:46:35.142044278Z [info] bahamas.privacy.network
2023-06-09T17:46:35.142049308Z [info] ad.privacy.network
2023-06-09T17:46:35.142054319Z [info] man.privacy.network
2023-06-09T17:46:35.142059376Z [info] bangladesh.privacy.network
2023-06-09T17:46:35.142064448Z [info] italy.privacy.network
2023-06-09T17:46:35.142069518Z [info] au-sydney.privacy.network
2023-06-09T17:46:35.142085652Z [info] panama.privacy.network
2023-06-09T17:46:35.142091696Z [info] mexico.privacy.network
2023-06-09T17:46:35.142096846Z [info] au-brisbane-pf.privacy.network
2023-06-09T17:46:35.142101978Z [info] ae.privacy.network
2023-06-09T17:46:35.142107048Z [info] in.privacy.network
2023-06-09T17:46:35.142112108Z [info] georgia.privacy.network
2023-06-09T17:46:35.142120606Z [info] tr.privacy.network
2023-06-09T17:46:35.142125719Z [info] is.privacy.network
2023-06-09T17:46:35.142130765Z [info] egypt.privacy.network
2023-06-09T17:46:35.142135847Z [info] uk-2.privacy.network
2023-06-09T17:46:35.142140943Z [info] cambodia.privacy.network
2023-06-09T17:46:35.142146013Z [info] austria.privacy.network
2023-06-09T17:46:35.142151080Z [info] venezuela.privacy.network
2023-06-09T17:46:35.142156163Z [info] vietnam.privacy.network
2023-06-09T17:46:35.142161226Z [info] denmark-2.privacy.network
2023-06-09T17:46:35.142166290Z [info] qatar.privacy.network
2023-06-09T17:46:35.142171373Z [info] de-frankfurt.privacy.network
2023-06-09T17:46:35.142176450Z [info] santiago.privacy.network
2023-06-09T17:46:35.142181579Z [info] lv.privacy.network
2023-06-09T17:46:35.142186641Z [info] taiwan.privacy.network
2023-06-09T17:46:35.142191723Z [info] ca-toronto.privacy.network
2023-06-09T17:46:35.142196788Z [info] slovenia.privacy.network
2023-06-09T17:46:35.142201854Z [info] fi.privacy.network
2023-06-09T17:46:35.142206915Z [info] mk.privacy.network
2023-06-09T17:46:35.142211964Z [info] aus-melbourne.privacy.network
2023-06-09T17:46:35.142217122Z [info] italy-2.privacy.network
2023-06-09T17:46:35.142222230Z [info] au-australia-so.privacy.network
2023-06-09T17:46:35.142227328Z [info] brussels.privacy.network
2023-06-09T17:46:35.142232396Z [info] hungary.privacy.network
2023-06-09T17:46:35.142237482Z [info] kualalumpur.privacy.network
2023-06-09T17:46:35.142242564Z [info] ar.privacy.network
2023-06-09T17:46:35.142248744Z [info] ca-ontario.privacy.network
2023-06-09T17:46:35.142253926Z
2023-06-09T17:46:35.142258985Z 2023-06-09 11:46:35,139 DEBG 'start-script' stdout output:
2023-06-09T17:46:35.142264124Z [info] israel.privacy.network
2023-06-09T17:46:35.142269158Z [info] za.privacy.network
2023-06-09T17:46:35.142274212Z [info] spain.privacy.network
2023-06-09T17:46:35.142279290Z [info] sanjose.privacy.network
2023-06-09T17:46:35.142284370Z [info] sweden-2.privacy.network
2023-06-09T17:46:35.142289405Z [info] japan.privacy.network
2023-06-09T17:46:35.142294436Z [info] sg.privacy.network
2023-06-09T17:46:35.142305263Z [info] denmark.privacy.network
2023-06-09T17:46:35.142310903Z [info] monaco.privacy.network
2023-06-09T17:46:35.142316050Z [info] jakarta.privacy.network
2023-06-09T17:46:35.142321196Z [info] es-valencia.privacy.network
2023-06-09T17:46:35.142326341Z [info] montenegro.privacy.network
2023-06-09T17:46:35.142331435Z [info] ireland.privacy.network
2023-06-09T17:46:35.142336508Z [info] macau.privacy.network
2023-06-09T17:46:35.142341583Z [info] sofia.privacy.network
2023-06-09T17:46:35.142346684Z [info] greenland.privacy.network
2023-06-09T17:46:35.142352638Z [info] ee.privacy.network
2023-06-09T17:46:35.142357694Z [info] pt.privacy.network
2023-06-09T17:46:35.142362703Z [info] china.privacy.network
2023-06-09T17:46:35.142367779Z [info] france.privacy.network
2023-06-09T17:46:35.142372826Z [info] yerevan.privacy.network
2023-06-09T17:46:35.142378012Z [info] morocco.privacy.network
2023-06-09T17:46:35.142383154Z [info] dz.privacy.network
2023-06-09T17:46:35.142389401Z [info] nz.privacy.network
2023-06-09T17:46:35.142394493Z [info] ro.privacy.network
2023-06-09T17:46:35.142399538Z [info] lu.privacy.network
2023-06-09T17:46:35.142404630Z [info] bogota.privacy.network
2023-06-09T17:46:35.142409684Z [info] rs.privacy.network
2023-06-09T17:46:35.142414807Z [info] liechtenstein.privacy.network
2023-06-09T17:46:35.142419911Z [info] zagreb.privacy.network
2023-06-09T17:46:35.142424999Z [info] mongolia.privacy.network
2023-06-09T17:46:35.142430062Z [info] sweden.privacy.network
2023-06-09T17:46:35.142435124Z [info] czech.privacy.network
2023-06-09T17:46:35.142440191Z [info] md.privacy.network
2023-06-09T17:46:35.142445239Z [info] malta.privacy.network
2023-06-09T17:46:35.142450304Z [info] uk-southampton.privacy.network
2023-06-09T17:46:35.142455374Z [info] ca-montreal.privacy.network
2023-06-09T17:46:35.142460443Z [info] ca-vancouver.privacy.network
2023-06-09T17:46:35.142465524Z [info] uk-london.privacy.network
2023-06-09T17:46:35.142470593Z [info] japan-2.privacy.network
2023-06-09T17:46:35.142475665Z [info] kazakhstan.privacy.network
2023-06-09T17:46:35.142480713Z [info] gr.privacy.network
2023-06-09T17:46:35.142485760Z [info] saudiarabia.privacy.network
2023-06-09T17:46:35.142490823Z [info] srilanka.privacy.network
2023-06-09T17:46:35.142495878Z [info] nigeria.privacy.network
2023-06-09T17:46:35.142500932Z [info] sk.privacy.network
2023-06-09T17:46:35.142505972Z [info] swiss.privacy.network
2023-06-09T17:46:35.142511031Z [info] hk.privacy.network
2023-06-09T17:46:35.142522147Z [info] al.privacy.network
2023-06-09T17:46:35.142527797Z [info] no.privacy.network
2023-06-09T17:46:35.142532895Z [info] cyprus.privacy.network
2023-06-09T17:46:35.142537975Z [info] poland.privacy.network
2023-06-09T17:46:35.142543068Z [info] uk-manchester.privacy.network
2023-06-09T17:46:35.142548145Z [info] fi-2.privacy.network
2023-06-09T17:46:35.142553225Z [info] de-berlin.privacy.network
2023-06-09T17:46:35.142558285Z [info] lt.privacy.network
2023-06-09T17:46:35.142563360Z [info] au-adelaide-pf.privacy.network
2023-06-09T17:46:35.142568422Z [info] br.privacy.network
2023-06-09T17:46:35.142573484Z [info] aus-perth.privacy.network
2023-06-09T17:46:35.142578629Z
2023-06-09T17:46:38.140934827Z 2023-06-09 11:46:38,130 DEBG 'start-script' stdout output:
2023-06-09T17:46:38.140960138Z [info] Successfully assigned and bound incoming port '57964'
2023-06-09T17:46:38.140966596Z
2023-06-09T17:47:07.881922648Z 2023-06-09 11:47:07,876 DEBG 'watchdog-script' stdout output:
2023-06-09T17:47:07.881939469Z [info] Deluge listening interface IP 0.0.0.0 and VPN provider IP 10.25.221.154 different, marking for reconfigure
2023-06-09T17:47:07.881945778Z
2023-06-09T17:47:07.935784138Z 2023-06-09 11:47:07,912 DEBG 'watchdog-script' stdout output:
2023-06-09T17:47:07.935807031Z [info] Deluge not running
2023-06-09T17:47:07.935812631Z
2023-06-09T17:47:07.935817508Z 2023-06-09 11:47:07,918 DEBG 'watchdog-script' stdout output:
2023-06-09T17:47:07.935826007Z [info] Deluge Web UI not running
2023-06-09T17:47:07.935830999Z [info] Deluge incoming port 6890 and VPN incoming port 57964 different, marking for reconfigure
2023-06-09T17:47:07.935835862Z
2023-06-09T17:47:07.935840506Z 2023-06-09 11:47:07,918 DEBG 'watchdog-script' stdout output:
2023-06-09T17:47:07.935845271Z [info] Attempting to start Deluge...
2023-06-09T17:47:07.935849952Z [info] Removing deluge pid file (if it exists)...
2023-06-09T17:47:07.935854694Z
2023-06-09T17:47:09.070018007Z 2023-06-09 11:47:09,059 DEBG 'watchdog-script' stdout output:
2023-06-09T17:47:09.070034728Z [info] Deluge key 'listen_interface' currently has a value of '10.25.151.127'
2023-06-09T17:47:09.070040384Z [info] Deluge key 'listen_interface' will have a new value '10.25.221.154'
2023-06-09T17:47:09.070045343Z [info] Writing changes to Deluge config file '/config/core.conf'...
2023-06-09T17:47:09.070050302Z
2023-06-09T17:47:09.765440805Z 2023-06-09 11:47:09,757 DEBG 'watchdog-script' stdout output:
2023-06-09T17:47:09.765459280Z [info] Deluge key 'outgoing_interface' currently has a value of 'wg0'
2023-06-09T17:47:09.765465299Z [info] Deluge key 'outgoing_interface' will have a new value 'wg0'
2023-06-09T17:47:09.765480850Z [info] Writing changes to Deluge config file '/config/core.conf'...
2023-06-09T17:47:09.765486795Z
2023-06-09T17:47:15.150731456Z 2023-06-09 11:47:15,141 DEBG 'watchdog-script' stdout output:
2023-06-09T17:47:15.150756396Z [info] Deluge key 'default_daemon' currently has a value of 'c083e0a1c2df423eb9d2042a0811036d'
2023-06-09T17:47:15.150762861Z [info] Deluge key 'default_daemon' will have a new value 'c083e0a1c2df423eb9d2042a0811036d'
2023-06-09T17:47:15.150768114Z [info] Writing changes to Deluge config file '/config/web.conf'...
2023-06-09T17:47:15.150773227Z
2023-06-09T17:47:15.778936187Z 2023-06-09 11:47:15,776 DEBG 'watchdog-script' stdout output:
2023-06-09T17:47:15.778959797Z [info] Deluge process started
2023-06-09T17:47:15.778965712Z [info] Waiting for Deluge process to start listening on port 58846...
2023-06-09T17:47:15.778970984Z
2023-06-09T17:47:16.618092500Z 2023-06-09 11:47:16,601 DEBG 'watchdog-script' stdout output:
2023-06-09T17:47:16.618112100Z [info] Deluge process listening on port 58846
2023-06-09T17:47:16.618118221Z
2023-06-09T17:47:21.013191890Z 2023-06-09 11:47:21,012 DEBG 'watchdog-script' stdout output:
2023-06-09T17:47:21.013211142Z Setting "random_port" to: False
2023-06-09T17:47:21.013217475Z Configuration value successfully updated.
2023-06-09T17:47:21.013222571Z
2023-06-09T17:47:21.018932116Z 2023-06-09 11:47:21,018 DEBG 'watchdog-script' stderr output:
2023-06-09T17:47:21.019002816Z <Deferred at 0x14f7946cb150 current result: None>
2023-06-09T17:47:21.019008500Z
2023-06-09T17:47:24.014668791Z 2023-06-09 11:47:24,014 DEBG 'watchdog-script' stdout output:
2023-06-09T17:47:24.014684096Z Setting "listen_ports" to: (57964, 57964)
2023-06-09T17:47:24.014690025Z Configuration value successfully updated.
2023-06-09T17:47:24.014695133Z
2023-06-09T17:47:24.014887933Z 2023-06-09 11:47:24,014 DEBG 'watchdog-script' stderr output:
2023-06-09T17:47:24.014897743Z <Deferred at 0x14aa5518b710 current result: None>
2023-06-09T17:47:24.014904623Z
2023-06-09T17:47:26.739626440Z 2023-06-09 11:47:26,739 DEBG 'watchdog-script' stderr output:
2023-06-09T17:47:26.739643008Z <Deferred at 0x146cefdbb1d0 current result: None>
2023-06-09T17:47:26.739649222Z
2023-06-09T17:47:26.895582088Z 2023-06-09 11:47:26,895 DEBG 'watchdog-script' stdout output:
2023-06-09T17:47:26.895600698Z [info] No torrents with state 'Error' found
2023-06-09T17:47:26.895606858Z
2023-06-09T17:47:26.900165956Z 2023-06-09 11:47:26,900 DEBG 'watchdog-script' stdout output:
2023-06-09T17:47:26.900180070Z [info] Starting Deluge Web UI...
2023-06-09T17:47:26.900197720Z [info] Deluge Web UI started
2023-06-09T17:47:26.900204357Z
[... log entry "[info] Successfully assigned and bound incoming port '57964'" repeats every 15 minutes through 2023-06-10 01:32:04 ...]
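Side note for anyone comparing this against their own container: the recurring "Successfully assigned and bound incoming port" message is the value that should match Preferences -> Network in the Deluge web UI. A quick sanity check is to pull the port out of the log text and compare it yourself. This is just a sketch against a sample line copied from the log above; in practice you would pipe in `docker logs <container-name>` (the container name varies per install):

```shell
# Extract the assigned incoming port from a log line so it can be compared
# with Preferences -> Network in the Deluge web UI.
# The sample line below is copied from the log above; normally you would
# feed in `docker logs <container-name>` output instead.
line="[info] Successfully assigned and bound incoming port '57964'"
port=$(printf '%s\n' "$line" | grep -oE '[0-9]+' | tail -n1)
echo "$port"   # prints 57964
```

If the number printed here and the port shown in the Deluge UI disagree, the watchdog should reconfigure it on its next pass, as the "marking for reconfigure" entries above show.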
2023-06-10T07:54:13.456344554Z Created by...
2023-06-10T07:54:13.457073506Z ___.   .__       .__
2023-06-10T07:54:13.457084258Z \_ |__ |__| ____ |  |__   ____ ___  ___
2023-06-10T07:54:13.457089422Z  | __ \|  |/    \|  |  \_/ __ \\  \/  /
2023-06-10T07:54:13.457094449Z  | \_\ \  |   |  \   Y  \  ___/ >    <
2023-06-10T07:54:13.457099442Z  |___  /__|___|  /___|  /\___  >__/\_ \
2023-06-10T07:54:13.457104352Z      \/        \/     \/     \/      \/
2023-06-10T07:54:13.457109115Z    https://hub.docker.com/u/binhex/
2023-06-10T07:54:13.457113850Z
2023-06-10T07:54:13.632721528Z 2023-06-10 01:54:13.629817 [info] Host is running unRAID
2023-06-10T07:54:13.665067404Z 2023-06-10 01:54:13.664818 [info] System information Linux b8284f85b1d1 5.19.17-Unraid #2 SMP PREEMPT_DYNAMIC Wed Nov 2 11:54:15 PDT 2022 x86_64 GNU/Linux
2023-06-10T07:54:13.721624082Z 2023-06-10 01:54:13.719831 [info] OS_ARCH defined as 'x86-64'
2023-06-10T07:54:13.769542323Z 2023-06-10 01:54:13.765372 [info] PUID defined as '99'
2023-06-10T07:54:13.832605346Z 2023-06-10 01:54:13.830161 [info] PGID defined as '100'
2023-06-10T07:54:13.966073456Z 2023-06-10 01:54:13.962662 [info] UMASK defined as '000'
2023-06-10T07:54:14.024478065Z 2023-06-10 01:54:14.022350 [info] Permissions already set for '/config'
2023-06-10T07:54:14.109936795Z 2023-06-10 01:54:14.094516 [info] Deleting files in /tmp (non recursive)...
2023-06-10T07:54:14.232728316Z 2023-06-10 01:54:14.231734 [info] VPN_ENABLED defined as 'yes'
2023-06-10T07:54:14.261582652Z 2023-06-10 01:54:14.261014 [info] VPN_CLIENT defined as 'wireguard'
2023-06-10T07:54:14.301393289Z 2023-06-10 01:54:14.294235 [info] VPN_PROV defined as 'pia'
2023-06-10T07:54:14.407996890Z 2023-06-10 01:54:14.404278 [info] WireGuard config file (conf extension) is located at /config/wireguard/wg0.conf
2023-06-10T07:54:14.492223127Z 2023-06-10 01:54:14.489894 [info] VPN_REMOTE_SERVER defined as 'nl-amsterdam.privacy.network'
2023-06-10T07:54:14.578599102Z 2023-06-10 01:54:14.578421 [info] VPN_REMOTE_PORT defined as '1337'
2023-06-10T07:54:14.642234306Z 2023-06-10 01:54:14.637673 [info] VPN_DEVICE_TYPE defined as 'wg0'
2023-06-10T07:54:14.682086934Z 2023-06-10 01:54:14.677448 [info] VPN_REMOTE_PROTOCOL defined as 'udp'
2023-06-10T07:54:15.207258933Z 2023-06-10 01:54:15.206980 [info] LAN_NETWORK defined as '192.168.1.0/24'
2023-06-10T07:54:15.251413310Z 2023-06-10 01:54:15.246746 [info] NAME_SERVERS defined as '209.222.18.222,84.200.69.80,37.235.1.174,1.1.1.1,209.222.18.218,37.235.1.177,84.200.70.40,1.0.0.1'
2023-06-10T07:54:15.315908383Z 2023-06-10 01:54:15.298308 [info] VPN_USER defined as 'p5676653'
2023-06-10T07:54:15.346145533Z 2023-06-10 01:54:15.331895 [info] VPN_PASS defined as '<redacted>'
2023-06-10T07:54:15.391183313Z 2023-06-10 01:54:15.390494 [info] STRICT_PORT_FORWARD defined as 'yes'
2023-06-10T07:54:15.431212845Z 2023-06-10 01:54:15.425335 [info] ENABLE_PRIVOXY defined as 'no'
2023-06-10T07:54:15.491627770Z 2023-06-10 01:54:15.484435 [info] VPN_INPUT_PORTS not defined (via -e VPN_INPUT_PORTS), skipping allow for custom incoming ports
2023-06-10T07:54:15.567903961Z 2023-06-10 01:54:15.556905 [info] VPN_OUTPUT_PORTS not defined (via -e VPN_OUTPUT_PORTS), skipping allow for custom outgoing ports
2023-06-10T07:54:15.671573967Z 2023-06-10 01:54:15.656552 [info] DELUGE_DAEMON_LOG_LEVEL defined as 'info'
2023-06-10T07:54:15.704901648Z 2023-06-10 01:54:15.697591 [info] DELUGE_WEB_LOG_LEVEL defined as 'info'
2023-06-10T07:54:15.786848904Z 2023-06-10 01:54:15.785027 [info] DELUGE_ENABLE_WEBUI_PASSWORD not defined,(via -e DELUGE_ENABLE_WEBUI_PASSWORD), defaulting to 'yes'
2023-06-10T07:54:15.879121706Z 2023-06-10 01:54:15.878787 [info] Starting Supervisor...
2023-06-10T07:54:17.335615340Z 2023-06-10 01:54:17,328 INFO Included extra file "/etc/supervisor/conf.d/delugevpn.conf" during parsing
2023-06-10T07:54:17.335633081Z 2023-06-10 01:54:17,328 INFO Set uid to user 0 succeeded
2023-06-10T07:54:17.338965295Z 2023-06-10 01:54:17,336 INFO supervisord started with pid 6
2023-06-10T07:54:18.346937941Z 2023-06-10 01:54:18,341 INFO spawned: 'start-script' with pid 278
2023-06-10T07:54:18.346955769Z 2023-06-10 01:54:18,343 INFO spawned: 'watchdog-script' with pid 279
2023-06-10T07:54:18.357587633Z 2023-06-10 01:54:18,356 INFO reaped unknown pid 7 (exit status 0)
2023-06-10T07:54:18.392879966Z 2023-06-10 01:54:18,392 DEBG 'start-script' stdout output:
2023-06-10T07:54:18.392897937Z [info] VPN is enabled, beginning configuration of VPN
2023-06-10T07:54:18.392913778Z
2023-06-10T07:54:18.395528780Z 2023-06-10 01:54:18,393 INFO success: start-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2023-06-10T07:54:18.395544551Z 2023-06-10 01:54:18,394 INFO success: watchdog-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2023-06-10T07:54:18.407397001Z 2023-06-10 01:54:18,406 DEBG 'start-script' stdout output:
2023-06-10T07:54:18.407412609Z [info] Adding 209.222.18.222 to /etc/resolv.conf
2023-06-10T07:54:18.407418513Z
2023-06-10T07:54:18.450972246Z 2023-06-10 01:54:18,440 DEBG 'start-script' stdout output:
2023-06-10T07:54:18.450992040Z [info] Adding 84.200.69.80 to /etc/resolv.conf
2023-06-10T07:54:18.450998136Z
2023-06-10T07:54:18.484609577Z 2023-06-10 01:54:18,469 DEBG 'start-script' stdout output:
2023-06-10T07:54:18.484633493Z [info] Adding 37.235.1.174 to /etc/resolv.conf
2023-06-10T07:54:18.484639902Z
2023-06-10T07:54:18.484645049Z 2023-06-10 01:54:18,478 DEBG 'start-script' stdout output:
2023-06-10T07:54:18.484650134Z [info] Adding 1.1.1.1 to /etc/resolv.conf
2023-06-10T07:54:18.484655191Z
2023-06-10T07:54:18.497640999Z 2023-06-10 01:54:18,495 DEBG 'start-script' stdout output:
2023-06-10T07:54:18.497659621Z [info] Adding 209.222.18.218 to /etc/resolv.conf
2023-06-10T07:54:18.497665374Z
2023-06-10T07:54:18.554363944Z 2023-06-10 01:54:18,554 DEBG 'start-script' stdout output:
2023-06-10T07:54:18.554378026Z [info] Adding 37.235.1.177 to /etc/resolv.conf
2023-06-10T07:54:18.554383714Z
2023-06-10T07:54:18.572285154Z 2023-06-10 01:54:18,568 DEBG 'start-script' stdout output:
2023-06-10T07:54:18.572301466Z [info] Adding 84.200.70.40 to /etc/resolv.conf
2023-06-10T07:54:18.572307242Z
2023-06-10T07:54:18.577249095Z 2023-06-10 01:54:18,573 DEBG 'start-script' stdout output:
2023-06-10T07:54:18.577269044Z [info] Adding 1.0.0.1 to /etc/resolv.conf
2023-06-10T07:54:18.577275091Z
2023-06-10T07:54:18.637821865Z 2023-06-10 01:54:18,627 DEBG 'start-script' stdout output:
2023-06-10T07:54:18.637838971Z [info] Attempting to load iptable_mangle module...
2023-06-10T07:54:18.637845151Z
2023-06-10T07:54:18.637850251Z 2023-06-10 01:54:18,634 DEBG 'start-script' stderr output:
2023-06-10T07:54:18.637855321Z modprobe: FATAL: Module iptable_mangle not found in directory /lib/modules/5.19.17-Unraid
2023-06-10T07:54:18.637860432Z
2023-06-10T07:54:18.640258853Z 2023-06-10 01:54:18,637 DEBG 'start-script' stdout output:
2023-06-10T07:54:18.640269616Z [warn] Unable to load iptable_mangle module using modprobe, trying insmod...
2023-06-10T07:54:18.640275161Z
2023-06-10T07:54:18.662438079Z 2023-06-10 01:54:18,661 DEBG 'start-script' stderr output:
2023-06-10T07:54:18.662453882Z insmod: ERROR: could not load module /lib/modules/iptable_mangle.ko: No such file or directory
2023-06-10T07:54:18.662459973Z
2023-06-10T07:54:18.662465144Z 2023-06-10 01:54:18,662 DEBG 'start-script' stdout output:
2023-06-10T07:54:18.662473937Z [warn] Unable to load iptable_mangle module, you will not be able to connect to the applications Web UI or Privoxy outside of your LAN
2023-06-10T07:54:18.662479445Z [info] unRAID/Ubuntu users: Please attempt to load the module by executing the following on your host: '/sbin/modprobe iptable_mangle'
2023-06-10T07:54:18.662484543Z [info] Synology users: Please attempt to load the module by executing the following on your host: 'insmod /lib/modules/iptable_mangle.ko'
2023-06-10T07:54:18.662489703Z
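On the iptable_mangle warning just above: the `modprobe` the log suggests only lasts until the host reboots. A common way to make it persistent on unRAID (assuming the stock `/boot/config/go` boot-script location; verify the path on your system) is:

```shell
# Load the module now on the unRAID host (takes effect immediately):
/sbin/modprobe iptable_mangle

# Append the same call to the go script so it loads on every boot
# (/boot/config/go is the standard unRAID boot script location):
echo '/sbin/modprobe iptable_mangle' >> /boot/config/go
```

Without the module loaded, the container still works on the LAN; the warning only affects reaching the Web UI or Privoxy from outside your LAN.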
2023-06-10T07:54:19.247440184Z 2023-06-10 01:54:19,237 DEBG 'start-script' stdout output:
2023-06-10T07:54:19.247460563Z [info] Token generated for PIA wireguard authentication
2023-06-10T07:54:19.247466669Z
2023-06-10T07:54:19.300595300Z 2023-06-10 01:54:19,300 DEBG 'start-script' stdout output:
2023-06-10T07:54:19.300612054Z [info] Trying to connect to the PIA WireGuard API on 'nl-amsterdam.privacy.network'...
2023-06-10T07:54:19.300618212Z
2023-06-10T07:54:20.122217447Z 2023-06-10 01:54:20,116 DEBG 'start-script' stdout output:
2023-06-10T07:54:20.122236157Z [info] Default route for container is 172.18.0.1
2023-06-10T07:54:20.122242076Z
2023-06-10T07:54:20.778664528Z 2023-06-10 01:54:20,768 DEBG 'start-script' stdout output:
2023-06-10T07:54:20.778683608Z [info] Docker network defined as    172.18.0.0/16
2023-06-10T07:54:20.778689770Z
2023-06-10T07:54:20.829093327Z 2023-06-10 01:54:20,808 DEBG 'start-script' stdout output:
2023-06-10T07:54:20.829107553Z [info] Adding 192.168.1.0/24 as route via docker eth0
2023-06-10T07:54:20.829116185Z
2023-06-10T07:54:20.829122311Z 2023-06-10 01:54:20,813 DEBG 'start-script' stdout output:
2023-06-10T07:54:20.829130044Z [info] ip route defined as follows...
2023-06-10T07:54:20.829135955Z --------------------
2023-06-10T07:54:20.829141915Z
2023-06-10T07:54:20.829147745Z 2023-06-10 01:54:20,815 DEBG 'start-script' stdout output:
2023-06-10T07:54:20.829154703Z default via 172.18.0.1 dev eth0
2023-06-10T07:54:20.829160536Z
2023-06-10T07:54:20.829166261Z 2023-06-10 01:54:20,815 DEBG 'start-script' stdout output:
2023-06-10T07:54:20.829173896Z 172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.4
2023-06-10T07:54:20.829179779Z 192.168.1.0/24 via 172.18.0.1 dev eth0
2023-06-10T07:54:20.829190583Z local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1
2023-06-10T07:54:20.829198317Z local 127.0.0.1 dev lo table local proto kernel scope host src 127.0.0.1
2023-06-10T07:54:20.829219684Z broadcast 127.255.255.255 dev lo table local proto kernel scope link src 127.0.0.1
2023-06-10T07:54:20.829226480Z local 172.18.0.4 dev eth0 table local proto kernel scope host src 172.18.0.4
2023-06-10T07:54:20.829232517Z
2023-06-10T07:54:20.829239002Z 2023-06-10 01:54:20,815 DEBG 'start-script' stdout output:
2023-06-10T07:54:20.829245663Z broadcast 172.18.255.255 dev eth0 table local proto kernel scope link src 172.18.0.4
2023-06-10T07:54:20.829251651Z
2023-06-10T07:54:20.835380618Z 2023-06-10 01:54:20,835 DEBG 'start-script' stdout output:
2023-06-10T07:54:20.835397989Z --------------------
2023-06-10T07:54:20.835404120Z
2023-06-10T07:54:21.118108213Z 2023-06-10 01:54:21,097 DEBG 'start-script' stdout output:
2023-06-10T07:54:21.118129357Z [info] iptables defined as follows...
2023-06-10T07:54:21.118135374Z --------------------
2023-06-10T07:54:21.118140519Z
2023-06-10T07:54:21.151308622Z 2023-06-10 01:54:21,141 DEBG 'start-script' stdout output:
2023-06-10T07:54:21.151331770Z -P INPUT DROP
2023-06-10T07:54:21.151337822Z -P FORWARD DROP
2023-06-10T07:54:21.151342946Z -P OUTPUT DROP
2023-06-10T07:54:21.151347962Z -A INPUT -s 143.244.41.164/32 -i eth0 -j ACCEPT
2023-06-10T07:54:21.151360353Z -A INPUT -s 181.214.206.135/32 -i eth0 -j ACCEPT
2023-06-10T07:54:21.151366009Z -A INPUT -s 191.96.168.217/32 -i eth0 -j ACCEPT
2023-06-10T07:54:21.151370946Z -A INPUT -s 104.18.15.49/32 -i eth0 -j ACCEPT
2023-06-10T07:54:21.151375824Z -A INPUT -s 104.18.14.49/32 -i eth0 -j ACCEPT
2023-06-10T07:54:21.151380733Z -A INPUT -s 104.17.107.63/32 -i eth0 -j ACCEPT
2023-06-10T07:54:21.151385603Z -A INPUT -s 104.17.108.63/32 -i eth0 -j ACCEPT
2023-06-10T07:54:21.151390503Z -A INPUT -s 172.18.0.0/16 -d 172.18.0.0/16 -j ACCEPT
2023-06-10T07:54:21.151395411Z -A INPUT -i eth0 -p tcp -m tcp --dport 8112 -j ACCEPT
2023-06-10T07:54:21.151400284Z -A INPUT -i eth0 -p udp -m udp --dport 8112 -j ACCEPT
2023-06-10T07:54:21.151405202Z -A INPUT -s 192.168.1.0/24 -d 172.18.0.0/16 -i eth0 -p tcp -m tcp --dport 58846 -j ACCEPT
2023-06-10T07:54:21.151410204Z -A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
2023-06-10T07:54:21.151415103Z -A INPUT -i lo -j ACCEPT
2023-06-10T07:54:21.151419941Z -A INPUT -i wg0 -j ACCEPT
2023-06-10T07:54:21.151424798Z -A OUTPUT -d 143.244.41.164/32 -o eth0 -j ACCEPT
2023-06-10T07:54:21.151439604Z -A OUTPUT -d 181.214.206.135/32 -o eth0 -j ACCEPT
2023-06-10T07:54:21.151445337Z -A OUTPUT -d 191.96.168.217/32 -o eth0 -j ACCEPT
2023-06-10T07:54:21.151450277Z -A OUTPUT -d 104.18.15.49/32 -o eth0 -j ACCEPT
2023-06-10T07:54:21.151455651Z -A OUTPUT -d 104.18.14.49/32 -o eth0 -j ACCEPT
2023-06-10T07:54:21.151472133Z -A OUTPUT -d 104.17.107.63/32 -o eth0 -j ACCEPT
2023-06-10T07:54:21.151477939Z -A OUTPUT -d 104.17.108.63/32 -o eth0 -j ACCEPT
2023-06-10T07:54:21.151482994Z -A OUTPUT -s 172.18.0.0/16 -d 172.18.0.0/16 -j ACCEPT
2023-06-10T07:54:21.151487957Z -A OUTPUT -o eth0 -p tcp -m tcp --sport 8112 -j ACCEPT
2023-06-10T07:54:21.151492952Z -A OUTPUT -o eth0 -p udp -m udp --sport 8112 -j ACCEPT
2023-06-10T07:54:21.151497949Z -A OUTPUT -s 172.18.0.0/16 -d 192.168.1.0/24 -o eth0 -p tcp -m tcp --sport 58846 -j ACCEPT
2023-06-10T07:54:21.151503073Z -A OUTPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
2023-06-10T07:54:21.151514474Z -A OUTPUT -o lo -j ACCEPT
2023-06-10T07:54:21.151520078Z -A OUTPUT -o wg0 -j ACCEPT
2023-06-10T07:54:21.151525025Z
2023-06-10T07:54:21.194165534Z 2023-06-10 01:54:21,192 DEBG 'start-script' stdout output:
2023-06-10T07:54:21.194188580Z --------------------
2023-06-10T07:54:21.194194727Z
2023-06-10T07:54:21.254169503Z 2023-06-10 01:54:21,254 DEBG 'start-script' stdout output:
2023-06-10T07:54:21.254185216Z [info] Attempting to bring WireGuard interface 'up'...
2023-06-10T07:54:21.254191131Z
2023-06-10T07:54:21.312798435Z 2023-06-10 01:54:21,312 DEBG 'start-script' stderr output:
2023-06-10T07:54:21.312819235Z Warning: `/config/wireguard/wg0.conf' is world accessible
2023-06-10T07:54:21.312836298Z
2023-06-10T07:54:21.369114535Z 2023-06-10 01:54:21,355 DEBG 'start-script' stderr output:
2023-06-10T07:54:21.369137626Z [#] ip link add wg0 type wireguard
2023-06-10T07:54:21.369143712Z
2023-06-10T07:54:21.380341099Z 2023-06-10 01:54:21,370 DEBG 'start-script' stderr output:
2023-06-10T07:54:21.380361633Z [#] wg setconf wg0 /dev/fd/63
2023-06-10T07:54:21.380367712Z
2023-06-10T07:54:21.405091502Z 2023-06-10 01:54:21,386 DEBG 'start-script' stderr output:
2023-06-10T07:54:21.405098695Z [#] ip -4 address add 10.8.158.118 dev wg0
2023-06-10T07:54:21.405103796Z
2023-06-10T07:54:21.410457689Z 2023-06-10 01:54:21,409 DEBG 'start-script' stderr output:
2023-06-10T07:54:21.410473749Z [#] ip link set mtu 1420 up dev wg0
2023-06-10T07:54:21.410479890Z
2023-06-10T07:54:21.596721327Z 2023-06-10 01:54:21,588 DEBG 'start-script' stderr output:
2023-06-10T07:54:21.596736935Z [#] wg set wg0 fwmark 51820
2023-06-10T07:54:21.596743075Z
2023-06-10T07:54:21.617795056Z 2023-06-10 01:54:21,599 DEBG 'start-script' stderr output:
2023-06-10T07:54:21.617814089Z [#] ip -4 route add 0.0.0.0/0 dev wg0 table 51820
2023-06-10T07:54:21.617819971Z
2023-06-10T07:54:21.631922809Z 2023-06-10 01:54:21,628 DEBG 'start-script' stderr output:
2023-06-10T07:54:21.631951836Z [#] ip -4 rule add not fwmark 51820 table 51820
2023-06-10T07:54:21.631958793Z
2023-06-10T07:54:21.671644693Z 2023-06-10 01:54:21,660 DEBG 'start-script' stderr output:
2023-06-10T07:54:21.671665947Z [#] ip -4 rule add table main suppress_prefixlength 0
2023-06-10T07:54:21.671671891Z
2023-06-10T07:54:21.681269773Z 2023-06-10 01:54:21,674 DEBG 'start-script' stderr output:
2023-06-10T07:54:21.681286274Z [#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
2023-06-10T07:54:21.681292297Z
2023-06-10T07:54:21.681297382Z 2023-06-10 01:54:21,679 DEBG 'start-script' stderr output:
2023-06-10T07:54:21.681302421Z [#] iptables-restore -n
2023-06-10T07:54:21.681307467Z
2023-06-10T07:54:21.797968247Z 2023-06-10 01:54:21,783 DEBG 'start-script' stderr output:
2023-06-10T07:54:21.797987266Z [#] '/root/wireguardup.sh'
2023-06-10T07:54:21.797993401Z
2023-06-10T07:54:38.156339834Z 2023-06-10 01:54:38,141 DEBG 'start-script' stdout output:
2023-06-10T07:54:38.156359685Z [info] Attempting to get external IP using 'http://checkip.amazonaws.com'...
2023-06-10T07:54:38.156366019Z
2023-06-10T07:54:43.794316830Z 2023-06-10 01:54:43,760 DEBG 'start-script' stdout output:
2023-06-10T07:54:43.794333910Z [info] Successfully retrieved external IP address 143.244.41.164
2023-06-10T07:54:43.794339749Z
2023-06-10T07:54:43.794344866Z 2023-06-10 01:54:43,781 DEBG 'start-script' stdout output:
2023-06-10T07:54:43.794349887Z [info] Script started to assign incoming port
2023-06-10T07:54:43.794354857Z
2023-06-10T07:54:43.815609747Z 2023-06-10 01:54:43,792 DEBG 'start-script' stdout output:
2023-06-10T07:54:43.815648339Z [info] Port forwarding is enabled
2023-06-10T07:54:43.815661158Z [info] Checking endpoint 'nl-amsterdam.privacy.network' is port forward enabled...
2023-06-10T07:54:43.815669399Z [info] WireGuard interface 'up'
2023-06-10T07:54:43.815680849Z
2023-06-10T07:54:44.941895816Z 2023-06-10 01:54:44,882 DEBG 'start-script' stdout output:
2023-06-10T07:54:44.941919914Z [info] PIA endpoint 'nl-amsterdam.privacy.network' is in the list of endpoints that support port forwarding
2023-06-10T07:54:44.941926085Z
2023-06-10T07:54:44.941931121Z 2023-06-10 01:54:44,890 DEBG 'start-script' stdout output:
2023-06-10T07:54:44.941936303Z [info] List of PIA endpoints that support port forwarding:-
2023-06-10T07:54:44.941941374Z [info] mexico.privacy.network
2023-06-10T07:54:44.941946356Z [info] de-berlin.privacy.network
2023-06-10T07:54:44.941951306Z [info] qatar.privacy.network
2023-06-10T07:54:44.941967121Z [info] srilanka.privacy.network
2023-06-10T07:54:44.941973143Z [info] mongolia.privacy.network
2023-06-10T07:54:44.941991449Z [info] ar.privacy.network
2023-06-10T07:54:44.941997533Z [info] macau.privacy.network
2023-06-10T07:54:44.942002513Z [info] dz.privacy.network
2023-06-10T07:54:44.942007658Z [info] ro.privacy.network
2023-06-10T07:54:44.942012621Z [info] za.privacy.network
2023-06-10T07:54:44.942017551Z [info] sweden.privacy.network
2023-06-10T07:54:44.942022494Z [info] ee.privacy.network
2023-06-10T07:54:44.942032727Z [info] ad.privacy.network
2023-06-10T07:54:44.942048388Z [info] cambodia.privacy.network
2023-06-10T07:54:44.942053490Z [info] santiago.privacy.network
2023-06-10T07:54:44.942058595Z [info] mk.privacy.network
2023-06-10T07:54:44.942063792Z [info] ca-toronto.privacy.network
2023-06-10T07:54:44.942068743Z [info] sweden-2.privacy.network
2023-06-10T07:54:44.942073686Z [info] china.privacy.network
2023-06-10T07:54:44.942078650Z [info] br.privacy.network
2023-06-10T07:54:44.942083637Z [info] zagreb.privacy.network
2023-06-10T07:54:44.942088574Z [info] slovenia.privacy.network
2023-06-10T07:54:44.942093664Z [info] taiwan.privacy.network
2023-06-10T07:54:44.942098589Z [info] uk-london.privacy.network
2023-06-10T07:54:44.942103520Z [info] is.privacy.network
2023-06-10T07:54:44.942108465Z [info] nl-amsterdam.privacy.network
2023-06-10T07:54:44.942121290Z [info] austria.privacy.network
2023-06-10T07:54:44.942126806Z [info] pt.privacy.network
2023-06-10T07:54:44.942131836Z [info] brussels.privacy.network
2023-06-10T07:54:44.942136799Z [info] montenegro.privacy.network
2023-06-10T07:54:44.942141730Z [info] sk.privacy.network
2023-06-10T07:54:44.942146673Z [info] czech.privacy.network
2023-06-10T07:54:44.942151634Z [info] kualalumpur.privacy.network
2023-06-10T07:54:44.942156588Z [info] denmark.privacy.network
2023-06-10T07:54:44.942161533Z [info] greenland.privacy.network
2023-06-10T07:54:44.942166508Z [info] saudiarabia.privacy.network
2023-06-10T07:54:44.942171471Z [info] al.privacy.network
2023-06-10T07:54:44.942176407Z [info] fi-2.privacy.network
2023-06-10T07:54:44.942182694Z [info] uk-manchester.privacy.network
2023-06-10T07:54:44.942195218Z [info] japan-2.privacy.network
2023-06-10T07:54:44.942200931Z [info] panama.privacy.network
2023-06-10T07:54:44.942205964Z [info] nigeria.privacy.network
2023-06-10T07:54:44.942210928Z [info] nz.privacy.network
2023-06-10T07:54:44.942215917Z [info] aus-melbourne.privacy.network
2023-06-10T07:54:44.942220976Z [info] jakarta.privacy.network
2023-06-10T07:54:44.942225989Z [info] hungary.privacy.network
2023-06-10T07:54:44.942238582Z [info] bangladesh.privacy.network
2023-06-10T07:54:44.942244066Z [info] bogota.privacy.network
2023-06-10T07:54:44.942249020Z [info] md.privacy.network
2023-06-10T07:54:44.942253947Z [info] au-adelaide-pf.privacy.network
2023-06-10T07:54:44.942258884Z [info] sofia.privacy.network
2023-06-10T07:54:44.963672799Z [info] cyprus.privacy.network
2023-06-10T07:54:44.963849895Z [info] swiss.privacy.network
2023-06-10T07:54:44.963857703Z [info] ca-ontario.privacy.network
2023-06-10T07:54:44.963863121Z [info] italy.privacy.network
2023-06-10T07:54:44.963868097Z [info] denmark-2.privacy.network
2023-06-10T07:54:44.963873079Z [info] rs.privacy.network
2023-06-10T07:54:44.963894843Z [info] philippines.privacy.network
2023-06-10T07:54:44.963900228Z [info] es-valencia.privacy.network
2023-06-10T07:54:44.963905383Z [info] poland.privacy.network
2023-06-10T07:54:44.963910371Z [info] morocco.privacy.network
2023-06-10T07:54:44.963915301Z [info] egypt.privacy.network
2023-06-10T07:54:44.963920199Z [info] tr.privacy.network
2023-06-10T07:54:44.963925148Z [info] man.privacy.network
2023-06-10T07:54:44.963930036Z [info] georgia.privacy.network
2023-06-10T07:54:44.963934934Z [info] monaco.privacy.network
2023-06-10T07:54:44.963939856Z [info] de-frankfurt.privacy.network
2023-06-10T07:54:44.963944857Z [info] no.privacy.network
2023-06-10T07:54:44.963949771Z [info] ca-montreal.privacy.network
2023-06-10T07:54:44.963954708Z [info] au-australia-so.privacy.network
2023-06-10T07:54:44.963969042Z [info] venezuela.privacy.network
2023-06-10T07:54:44.963978017Z [info] fi.privacy.network
2023-06-10T07:54:44.963983004Z [info] ca-vancouver.privacy.network
2023-06-10T07:54:44.963988026Z [info] au-brisbane-pf.privacy.network
2023-06-10T07:54:44.963993069Z [info] ireland.privacy.network
2023-06-10T07:54:44.963997974Z [info] israel.privacy.network
2023-06-10T07:54:44.964003156Z [info] lt.privacy.network
2023-06-10T07:54:44.964008064Z [info] italy-2.privacy.network
2023-06-10T07:54:44.964012967Z [info] uk-2.privacy.network
2023-06-10T07:54:44.964017881Z
2023-06-10T07:54:44.964022759Z 2023-06-10 01:54:44,894 DEBG 'start-script' stdout output:
2023-06-10T07:54:44.964027815Z [info] gr.privacy.network
2023-06-10T07:54:44.964040408Z [info] au-sydney.privacy.network
2023-06-10T07:54:44.964046184Z [info] lu.privacy.network
2023-06-10T07:54:44.964051219Z [info] malta.privacy.network
2023-06-10T07:54:44.964056126Z [info] japan.privacy.network
2023-06-10T07:54:44.964072351Z [info] uk-southampton.privacy.network
2023-06-10T07:54:44.964087604Z [info] ua.privacy.network
2023-06-10T07:54:44.964093509Z [info] kazakhstan.privacy.network
2023-06-10T07:54:44.964098501Z [info] spain.privacy.network
2023-06-10T07:54:44.964103412Z [info] in.privacy.network
2023-06-10T07:54:44.964108296Z [info] sg.privacy.network
2023-06-10T07:54:44.964113257Z [info] france.privacy.network
2023-06-10T07:54:44.964118162Z [info] vietnam.privacy.network
2023-06-10T07:54:44.964123083Z [info] liechtenstein.privacy.network
2023-06-10T07:54:44.964128099Z [info] lv.privacy.network
2023-06-10T07:54:44.964134135Z [info] aus-perth.privacy.network
2023-06-10T07:54:44.964139187Z [info] sanjose.privacy.network
2023-06-10T07:54:44.964144116Z [info] ae.privacy.network
2023-06-10T07:54:44.964149105Z [info] bahamas.privacy.network
2023-06-10T07:54:44.964154095Z [info] hk.privacy.network
2023-06-10T07:54:44.964167001Z [info] yerevan.privacy.network
2023-06-10T07:54:44.964172417Z
2023-06-10T07:54:51.335265560Z 2023-06-10 01:54:51,272 DEBG 'start-script' stdout output:
2023-06-10T07:54:51.335283402Z [info] Successfully assigned and bound incoming port '44326'
2023-06-10T07:54:51.335289514Z
2023-06-10T07:54:51.656718443Z 2023-06-10 01:54:51,597 DEBG 'watchdog-script' stdout output:
2023-06-10T07:54:51.656737240Z [info] Deluge listening interface IP 0.0.0.0 and VPN provider IP 10.8.158.118 different, marking for reconfigure
2023-06-10T07:54:51.656743014Z
2023-06-10T07:54:51.780027289Z 2023-06-10 01:54:51,756 DEBG 'watchdog-script' stdout output:
2023-06-10T07:54:51.780044794Z [info] Deluge not running
2023-06-10T07:54:51.780050550Z
2023-06-10T07:54:51.808549003Z 2023-06-10 01:54:51,793 DEBG 'watchdog-script' stdout output:
2023-06-10T07:54:51.808566354Z [info] Deluge Web UI not running
2023-06-10T07:54:51.808571922Z
2023-06-10T07:54:51.808576674Z 2023-06-10 01:54:51,805 DEBG 'watchdog-script' stdout output:
2023-06-10T07:54:51.808581406Z [info] Deluge incoming port 6890 and VPN incoming port 44326 different, marking for reconfigure
2023-06-10T07:54:51.808586134Z
2023-06-10T07:54:51.884166636Z 2023-06-10 01:54:51,864 DEBG 'watchdog-script' stdout output:
2023-06-10T07:54:51.884187119Z [info] Attempting to start Deluge...
2023-06-10T07:54:51.884192780Z [info] Removing deluge pid file (if it exists)...
2023-06-10T07:54:51.884197628Z
2023-06-10T07:54:59.876612808Z 2023-06-10 01:54:59,830 DEBG 'watchdog-script' stdout output:
2023-06-10T07:54:59.876637061Z [info] Deluge key 'listen_interface' currently has a value of '10.25.221.154'
2023-06-10T07:54:59.876654852Z [info] Deluge key 'listen_interface' will have a new value '10.8.158.118'
2023-06-10T07:54:59.876661096Z [info] Writing changes to Deluge config file '/config/core.conf'...
2023-06-10T07:54:59.876666147Z
2023-06-10T07:55:03.897441439Z 2023-06-10 01:55:03,872 DEBG 'watchdog-script' stdout output:
2023-06-10T07:55:03.897462636Z [info] Deluge key 'outgoing_interface' currently has a value of 'wg0'
2023-06-10T07:55:03.897468599Z [info] Deluge key 'outgoing_interface' will have a new value 'wg0'
2023-06-10T07:55:03.897473619Z [info] Writing changes to Deluge config file '/config/core.conf'...
2023-06-10T07:55:03.897478645Z
2023-06-10T07:55:24.475064183Z 2023-06-10 01:55:24,450 DEBG 'watchdog-script' stdout output:
2023-06-10T07:55:24.475082617Z [info] Deluge key 'default_daemon' currently has a value of 'c083e0a1c2df423eb9d2042a0811036d'
2023-06-10T07:55:24.475088688Z [info] Deluge key 'default_daemon' will have a new value 'c083e0a1c2df423eb9d2042a0811036d'
2023-06-10T07:55:24.475093845Z [info] Writing changes to Deluge config file '/config/web.conf'...
2023-06-10T07:55:24.475099197Z
2023-06-10T07:55:27.648173442Z 2023-06-10 01:55:27,585 DEBG 'watchdog-script' stdout output:
2023-06-10T07:55:27.648198317Z [info] Deluge process started
2023-06-10T07:55:27.648204722Z [info] Waiting for Deluge process to start listening on port 58846...
2023-06-10T07:55:27.648209719Z
2023-06-10T07:55:30.973144206Z 2023-06-10 01:55:30,933 DEBG 'watchdog-script' stdout output:
2023-06-10T07:55:30.973164635Z [info] Deluge process listening on port 58846
2023-06-10T07:55:30.973170439Z
2023-06-10T07:55:50.747825724Z 2023-06-10 01:55:50,722 DEBG 'watchdog-script' stdout output:
2023-06-10T07:55:50.747841537Z Setting "random_port" to: False
2023-06-10T07:55:50.747847188Z Configuration value successfully updated.
2023-06-10T07:55:50.747851939Z
2023-06-10T07:55:50.781442507Z 2023-06-10 01:55:50,747 DEBG 'watchdog-script' stderr output:
2023-06-10T07:55:50.781461217Z <Deferred at 0x15199d1c6e50 current result: None>
2023-06-10T07:55:50.781466780Z
2023-06-10T07:56:06.910032260Z 2023-06-10 01:56:06,909 DEBG 'watchdog-script' stdout output:
2023-06-10T07:56:06.910051336Z Setting "listen_ports" to: (44326, 44326)
2023-06-10T07:56:06.910057810Z Configuration value successfully updated.
2023-06-10T07:56:06.910062704Z
2023-06-10T07:56:06.917095398Z 2023-06-10 01:56:06,917 DEBG 'watchdog-script' stderr output:
2023-06-10T07:56:06.917111914Z <Deferred at 0x14d5a5ac80d0 current result: None>
2023-06-10T07:56:06.917117789Z
2023-06-10T07:56:12.058669459Z 2023-06-10 01:56:12,058 DEBG 'watchdog-script' stderr output:
2023-06-10T07:56:12.058686472Z <Deferred at 0x147e6ec42a50 current result: None>
2023-06-10T07:56:12.058704026Z
2023-06-10T07:56:12.213938454Z 2023-06-10 01:56:12,213 DEBG 'watchdog-script' stdout output:
2023-06-10T07:56:12.213955197Z [info] No torrents with state 'Error' found
2023-06-10T07:56:12.213960970Z
2023-06-10T07:56:12.217938948Z 2023-06-10 01:56:12,214 DEBG 'watchdog-script' stdout output:
2023-06-10T07:56:12.217958350Z [info] Starting Deluge Web UI...
2023-06-10T07:56:12.217964306Z
2023-06-10T07:56:12.217969234Z 2023-06-10 01:56:12,214 DEBG 'watchdog-script' stdout output:
2023-06-10T07:56:12.217976450Z [info] Deluge Web UI started
2023-06-10T07:56:12.217981473Z
2023-06-10T08:09:51.866923381Z 2023-06-10 02:09:51,859 DEBG 'start-script' stdout output:
2023-06-10T08:09:51.866946309Z [info] Successfully assigned and bound incoming port '44326'
2023-06-10T08:09:51.866954395Z
2023-06-10T08:24:52.360184252Z 2023-06-10 02:24:52,360 DEBG 'start-script' stdout output:
2023-06-10T08:24:52.360206827Z [info] Successfully assigned and bound incoming port '44326'
2023-06-10T08:24:52.360214409Z

 

Link to comment

I am struggling here. I am trying to switch from transmissionvpn to delugevpn. With transmission I can reach all of my ports when the other containers use --net=container:transmissionvpn. But when I change them to route through the deluge container and restart the stack, the containers still have internet access, yet I can no longer reach their management ports.
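For anyone hitting the same thing: when a container shares the VPN container's network namespace, its management port has to be published on the delugevpn container itself and allowed through the delugevpn iptables rules. A rough compose sketch of that layout follows; the second service, its port number, and the env var usage are illustrative examples for a typical binhex setup, not taken from the poster's stack:

```yaml
services:
  delugevpn:
    image: binhex/arch-delugevpn
    ports:
      - "8112:8112"   # deluge web ui
      - "8989:8989"   # web ui of the attached container, published here instead
    environment:
      - VPN_INPUT_PORTS=8989   # let the extra port through the container's firewall

  sonarr:
    image: linuxserver/sonarr
    network_mode: "service:delugevpn"   # share delugevpn's network stack; no 'ports:' here
```

The key point is that any `ports:` mappings on the attached container are ignored once it uses the VPN container's namespace, so they must move to the delugevpn service.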

Link to comment
On 6/10/2023 at 2:47 AM, lrx345 said:

Hi @binhex,

 

I believe I've broken something with my instance and it is causing the unRAID web UI to become unstable. After a short time the Deluge web UI no longer responds, and the container fails to exit even with a `docker kill` command.

 

I did recently move my downloads folder from a 1TB HDD to an 8TB HDD, reusing the same disk share name as the previous drive. That's the only change I've made recently. All my files loaded up fine and started seeding, so I don't think that is it?

 

Anyway, here are some logs from my system and the container itself.

 

System Log

 

Docker Log

 

Delugevpn Log

 

Unfortunately still struggling with this.

I've tried

  1. Changing the mount point for the new HDD just in case
  2. Deleting the delugevpn image and re-adding it
  3. Clearing app data and starting fresh

Unfortunately, in all cases the delugevpn web UI stops responding after a few hours and the container crashes. To shut down the container I have to use

 

ps auxw | grep [containerID]

 

followed by a kill -9 [processID]

 

Any help is greatly appreciated

 

 

-----

 

Edit:

I've found more logs related to deluge that reference kernel call traces:

 

Jun 13 03:24:04 NAS kernel: Call Trace:
Jun 13 03:24:04 NAS kernel: <TASK>
Jun 13 03:24:04 NAS kernel: __schedule+0x596/0x5f6
Jun 13 03:24:04 NAS kernel: ? get_futex_key+0x281/0x2ad
Jun 13 03:24:04 NAS kernel: schedule+0x8e/0xc3
Jun 13 03:24:04 NAS kernel: __down_read_common+0x241/0x295
Jun 13 03:24:04 NAS kernel: do_exit+0x279/0x8e5
Jun 13 03:24:04 NAS kernel: make_task_dead+0xba/0xba
Jun 13 03:24:04 NAS kernel: rewind_stack_and_make_dead+0x17/0x17
Jun 13 03:24:04 NAS kernel: RIP: 0033:0x154c05f6c60d
Jun 13 03:24:04 NAS kernel: RSP: 002b:0000154c01be6888 EFLAGS: 00010202
Jun 13 03:24:04 NAS kernel: RAX: 0000154be001ee90 RBX: 0000154be0000dd8 RCX: 0000154c01be6ac0
Jun 13 03:24:04 NAS kernel: RDX: 0000000000004000 RSI: 000015355e67fceb RDI: 0000154be001ee90
Jun 13 03:24:04 NAS kernel: RBP: 0000000000000000 R08: 0000000000000002 R09: 0000000000000000
Jun 13 03:24:04 NAS kernel: R10: 0000000000000008 R11: 0000000000000246 R12: 0000000000000000
Jun 13 03:24:04 NAS kernel: R13: 0000154be00272f0 R14: 0000000000000002 R15: 0000154bfc428540
Jun 13 03:24:04 NAS kernel: </TASK>
Jun 13 03:24:32 NAS kernel: rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { P22284 } 2052231 jiffies s: 47953 root: 0x0/T
Jun 13 03:24:32 NAS kernel: rcu: blocking rcu_node structures (internal RCU debug):
Jun 13 03:25:38 NAS kernel: rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { P22284 } 2117767 jiffies s: 47953 root: 0x0/T
Jun 13 03:25:38 NAS kernel: rcu: blocking rcu_node structures (internal RCU debug):
Jun 13 03:26:43 NAS kernel: rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { P22284 } 2183303 jiffies s: 47953 root: 0x0/T
Jun 13 03:26:43 NAS kernel: rcu: blocking rcu_node structures (internal RCU debug):
Jun 13 03:27:04 NAS kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
Jun 13 03:27:04 NAS kernel: rcu: 	Tasks blocked on level-0 rcu_node (CPUs 0-3): P22284/1:b..l
Jun 13 03:27:04 NAS kernel: 	(detected by 0, t=2220062 jiffies, g=86521621, q=1614141 ncpus=4)
Jun 13 03:27:04 NAS kernel: task:deluged         state:D stack:    0 pid:22284 ppid: 21270 flags:0x00004002
Jun 13 03:27:04 NAS kernel: Call Trace:
Jun 13 03:27:04 NAS kernel: <TASK>
Jun 13 03:27:04 NAS kernel: __schedule+0x596/0x5f6
Jun 13 03:27:04 NAS kernel: ? get_futex_key+0x281/0x2ad
Jun 13 03:27:04 NAS kernel: schedule+0x8e/0xc3
Jun 13 03:27:04 NAS kernel: __down_read_common+0x241/0x295
Jun 13 03:27:04 NAS kernel: do_exit+0x279/0x8e5
Jun 13 03:27:04 NAS kernel: make_task_dead+0xba/0xba
Jun 13 03:27:04 NAS kernel: rewind_stack_and_make_dead+0x17/0x17
Jun 13 03:27:04 NAS kernel: RIP: 0033:0x154c05f6c60d
Jun 13 03:27:04 NAS kernel: RSP: 002b:0000154c01be6888 EFLAGS: 00010202
Jun 13 03:27:04 NAS kernel: RAX: 0000154be001ee90 RBX: 0000154be0000dd8 RCX: 0000154c01be6ac0
Jun 13 03:27:04 NAS kernel: RDX: 0000000000004000 RSI: 000015355e67fceb RDI: 0000154be001ee90
Jun 13 03:27:04 NAS kernel: RBP: 0000000000000000 R08: 0000000000000002 R09: 0000000000000000
Jun 13 03:27:04 NAS kernel: R10: 0000000000000008 R11: 0000000000000246 R12: 0000000000000000
Jun 13 03:27:04 NAS kernel: R13: 0000154be00272f0 R14: 0000000000000002 R15: 0000154bfc428540
Jun 13 03:27:04 NAS kernel: </TASK>

 

Any thoughts?

Edited by lrx345
Link to comment
14 hours ago, lrx345 said:

Unfortunately in all cases, the delugevpn webui stops responding after a few hours and the container crashes. To shut down the container I am having to use 

This sounds to me like it may be the libtorrentv2 issue. Try appending the tag 'libtorrentv1' to your repository name; if you don't know how to do this then see Q5: https://github.com/binhex/documentation/blob/master/docker/faq/unraid.md
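For reference, pinning the tag looks like the snippet below. On Unraid you would set the 'Repository' field in the container template to the tagged name instead; the image name shown is the standard one for this container, but treat the exact tag as something to confirm against the FAQ linked above:

```
# pull the libtorrent v1 build instead of the default 'latest' tag
docker pull binhex/arch-delugevpn:libtorrentv1
```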

Link to comment
