Has my Unraid OS instance been hacked?



Hi everyone,

 

I'm a little nervous given the forum topics I've found from searching, especially the one starting with "TL;DR  If you're seeing constant logs from avahi-daemon, beware, you probably got hacked."

 

What has happened recently:

  • For the past few months I've had random crashes where Unraid would freeze up. I thought it was my build, my Docker containers, etc., but none of that has stopped it. I've disabled C-states, pinned Docker containers to specific cores, and altered the power idling, and I still have random crashes anywhere from 1 day to 3 months apart (more frequent recently)
  • Starting today I've had logs spamming my syslog from avahi-daemon, and that led me on a search to the linked forum posts
  • I checked my ifconfig and found a bunch of network interfaces I've never seen before (I'm used to eth0, lo, wg0, and br0). I had a bond0 and multiple br-XXXXXXX interfaces
  • I've had issues connecting to my server via WireGuard and Tailscale. I was able to a few weeks ago, but I tried this morning and got no connection
  • I was unable to ping anything, either by hostname (google.com) or by IP (142.250.68.110)
  • Docker Community Apps throws an error saying it can't retrieve a feed
  • Fix Common Problems reports that it can't connect to github.com
  • I recently got alerts from my "Deco" app, which notifies me whenever a new device joins the network, and I've been getting a few "UNKNOWN DEVICE HAS JOINED THE NETWORK" alerts. This pops up from time to time, so I never thought about it till now
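Symptoms like these (no ping by hostname or by IP) can be triaged from the Unraid terminal before assuming compromise. A minimal sketch, assuming a stock Unraid shell where `ip`, `ping`, and `nslookup` are available; the Google IP is just the example from the list above:

```shell
#!/bin/sh
# Triage: is it DNS, routing, or total loss of connectivity?

# 1. Find and ping the default gateway (rules out a local link problem)
gw=$(ip route | awk '/^default/ {print $3; exit}')
echo "default gateway: ${gw:-none found}"
[ -n "$gw" ] && ping -c 1 -W 2 "$gw" >/dev/null 2>&1 \
  && echo "gateway: reachable" || echo "gateway: unreachable"

# 2. Ping a public IP directly (skips DNS entirely)
ping -c 1 -W 2 142.250.68.110 >/dev/null 2>&1 \
  && echo "raw IP: OK" || echo "raw IP: FAIL"

# 3. Resolve a name (tests DNS on its own)
nslookup google.com >/dev/null 2>&1 \
  && echo "DNS: OK" || echo "DNS: FAIL"
```

If the raw IP works but DNS fails, the problem is name resolution (check the DNS servers under Settings → Network Settings). If the gateway responds but nothing beyond it does, the problem is upstream routing, which fits a leftover bond0/bridge/VLAN misconfiguration more than a hack.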

Exposure of UnraidOS server (192.168.68.114):

  • Port Forwardings
    • 192.168.68.114 (UnraidOS Server) Internal: 6881 External: 6881 (nothing running on that port)
    • 192.168.68.114 (UnraidOS Server) Internal: 51820 External: 51820 (WireGuard VPN service)
    • 192.168.68.48 (Nginx) Internal: 8080 External: 80 (Nginx Proxy Manager)
    • 192.168.68.48 (Nginx) Internal: 4443 External: 443 (Nginx Proxy Manager)
  • Nginx Proxy Manager
    • All of my Docker containers are on br0 with static IPs assigned, and I route the services I want to expose outside my network (like Jellyfin, Gitea, etc.) through it. All have SSL certs
  • I have numerous shares on my network and they all require a username and password to access
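One concrete check worth making on that first forward: confirm nothing has quietly started listening on 6881 (the classic BitTorrent port). A sketch using tools present on Unraid; `6881` comes from the port-forward list above:

```shell
#!/bin/sh
# Check whether any process is listening on port 6881.
# A forward to a closed port is mostly harmless, but if something *is*
# listening that you didn't start, that is worth investigating.
PORT=6881
if netstat -tulpn 2>/dev/null | grep -q ":${PORT} "; then
  echo "something is listening on ${PORT}:"
  netstat -tulpn 2>/dev/null | grep ":${PORT} "
else
  echo "nothing listening on ${PORT}; consider deleting the forward"
fi
```

Either way, a forward with nothing behind it is dead weight and safe to remove at the router.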

 

I only recently started working on my Unraid instance again in the last couple of days. For the most part I leave it alone, but one thing I wanted to do was create VLANs at the router level, Unraid level, or software level, so I enabled virtual machines and installed the stock CentOS ISO to play around with. I've made some VLANs via Unraid and also tried some via my router, but none of it has worked, so I've reversed whatever I did.

 

Any advice would be appreciated. My server crashed at 3 AM this morning, so I had to do an unclean shutdown, and my parity is currently rebuilding.

 



 

Edited by 97WaterPolo
Clarifications

My logs are having this same problem. I found my memory use at 70% (it usually sits around 20%).

 

Also, today I watched the webGUI fly through screens on Main and Dashboard: it showed the array, then flipped to zero drives, then some drives unmounted, then no drives, then the network name changed, and the system log just kept filling up with avahi-daemon entries.

 

Ended up unplugging the network cable while I plan my next steps. 

 

Reading the system log shows the avahi-daemon starting SSH, FTP, and Samba, putting Fix Common Problems on hold for 10 minutes, and then setting up its eth0 bridges across Docker before going to work silently doing whatever it wants.

 

It creates a user0 share with access to all my internal shares. 

 

What have I done lately? I installed Plex Media Server; it scanned all my media and no problems there.

 

Installed tdarr and its node, and no issues there.

 

Installed Radarr but didn't complete setup. Since these issues started I have deleted Radarr.

 

Installed SABnzbd and it ran fine, but it needed its Docker permissions to be reset, which I did.

 

I noticed today I could only get 5 Mb/s down instead of the usual 200 Mb/s.

 

That's when I saw the memory usage high and the syslog issues. 

 

I went into global settings and disabled SSH and FTP, and that stopped the script momentarily. There was a temp account in my user accounts that I deleted, but after a reboot it was back.

 

I've got full system logs and screen caps of the weird browser functionality. 

 

As of right now the machine is off; I pulled the network cable and unplugged the USB boot drive. I looked in diagnostic logs from Feb 20 and the avahi-daemon is there too, but with nowhere near as much activity as today. 🤔

Edited by xjumper84
Typos
Apr 11 15:05:35 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth918626d: link becomes ready
Apr 11 15:05:35 NAS kernel: docker0: port 1(veth918626d) entered blocking state
Apr 11 15:05:35 NAS kernel: docker0: port 1(veth918626d) entered forwarding state
Apr 11 15:05:35 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes ready
Apr 11 15:05:35 NAS rc.docker: sabnzbd: started succesfully!
Apr 11 15:05:35 NAS kernel: docker0: port 2(vethc3e9ea1) entered blocking state
Apr 11 15:05:35 NAS kernel: docker0: port 2(vethc3e9ea1) entered disabled state
Apr 11 15:05:35 NAS kernel: device vethc3e9ea1 entered promiscuous mode
Apr 11 15:05:35 NAS kernel: docker0: port 2(vethc3e9ea1) entered blocking state
Apr 11 15:05:35 NAS kernel: docker0: port 2(vethc3e9ea1) entered forwarding state
Apr 11 15:05:36 NAS kernel: docker0: port 2(vethc3e9ea1) entered disabled state
Apr 11 15:05:36 NAS kernel: eth0: renamed from veth0cea1bc
Apr 11 15:05:36 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethc3e9ea1: link becomes ready
Apr 11 15:05:36 NAS kernel: docker0: port 2(vethc3e9ea1) entered blocking state
Apr 11 15:05:36 NAS kernel: docker0: port 2(vethc3e9ea1) entered forwarding state
Apr 11 15:05:36 NAS rc.docker: tdarr: started succesfully!
Apr 11 15:05:36 NAS kernel: docker0: port 3(veth1883d53) entered blocking state
Apr 11 15:05:36 NAS kernel: docker0: port 3(veth1883d53) entered disabled state
Apr 11 15:05:36 NAS kernel: device veth1883d53 entered promiscuous mode
Apr 11 15:05:36 NAS kernel: docker0: port 3(veth1883d53) entered blocking state
Apr 11 15:05:36 NAS kernel: docker0: port 3(veth1883d53) entered forwarding state
Apr 11 15:05:36 NAS kernel: eth0: renamed from veth0c057d1
Apr 11 15:05:36 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth1883d53: link becomes ready
Apr 11 15:05:37 NAS  avahi-daemon[2822]: Joining mDNS multicast group on interface veth918626d.IPv6 with address fe80::c054:e2ff:fe55:e4a.
Apr 11 15:05:37 NAS  avahi-daemon[2822]: New relevant interface veth918626d.IPv6 for mDNS.
Apr 11 15:05:37 NAS  avahi-daemon[2822]: Registering new address record for fe80::c054:e2ff:fe55:e4a on veth918626d.*.
Apr 11 15:05:37 NAS  avahi-daemon[2822]: Joining mDNS multicast group on interface docker0.IPv6 with address fe80::42:dbff:fe85:788e.
Apr 11 15:05:37 NAS  avahi-daemon[2822]: New relevant interface docker0.IPv6 for mDNS.
Apr 11 15:05:37 NAS  avahi-daemon[2822]: Registering new address record for fe80::42:dbff:fe85:788e on docker0.*.
Apr 11 15:05:38 NAS rc.docker: tdarr_node: started succesfully!
Apr 11 15:05:38 NAS  avahi-daemon[2822]: Joining mDNS multicast group on interface vethc3e9ea1.IPv6 with address fe80::e8f5:14ff:fe6c:a779.
Apr 11 15:05:38 NAS  avahi-daemon[2822]: New relevant interface vethc3e9ea1.IPv6 for mDNS.
Apr 11 15:05:38 NAS  avahi-daemon[2822]: Registering new address record for fe80::e8f5:14ff:fe6c:a779 on vethc3e9ea1.*.
Apr 11 15:05:38 NAS  avahi-daemon[2822]: Joining mDNS multicast group on interface veth1883d53.IPv6 with address fe80::f0e7:9aff:feb5:6d16.
Apr 11 15:05:38 NAS  avahi-daemon[2822]: New relevant interface veth1883d53.IPv6 for mDNS.
Apr 11 15:05:38 NAS  avahi-daemon[2822]: Registering new address record for fe80::f0e7:9aff:feb5:6d16 on veth1883d53.*.
Apr 11 15:05:50 NAS  nmbd[2792]: [2023/04/11 15:05:50.603129,  0] ../../source3/nmbd/nmbd_become_lmb.c:398(become_local_master_stage2)
Apr 11 15:05:50 NAS  nmbd[2792]:   *****
Apr 11 15:05:50 NAS  nmbd[2792]:   
Apr 11 15:05:50 NAS  nmbd[2792]:   Samba name server NAS is now a local master browser for workgroup WORKGROUP on subnet 192.168.0.80
Apr 11 15:05:50 NAS  nmbd[2792]:   
Apr 11 15:05:50 NAS  nmbd[2792]:   *****
Apr 11 15:07:28 NAS kernel: docker0: port 1(veth918626d) entered disabled state
Apr 11 15:07:28 NAS kernel: veth0651c7f: renamed from eth0
Apr 11 15:07:28 NAS  avahi-daemon[2822]: Interface veth918626d.IPv6 no longer relevant for mDNS.
Apr 11 15:07:28 NAS  avahi-daemon[2822]: Leaving mDNS multicast group on interface veth918626d.IPv6 with address fe80::c054:e2ff:fe55:e4a.
Apr 11 15:07:28 NAS kernel: docker0: port 1(veth918626d) entered disabled state
Apr 11 15:07:28 NAS kernel: device veth918626d left promiscuous mode
Apr 11 15:07:28 NAS kernel: docker0: port 1(veth918626d) entered disabled state
Apr 11 15:07:28 NAS  avahi-daemon[2822]: Withdrawing address record for fe80::c054:e2ff:fe55:e4a on veth918626d.
Apr 11 15:07:30 NAS kernel: docker0: port 2(vethc3e9ea1) entered disabled state
Apr 11 15:07:30 NAS kernel: veth0cea1bc: renamed from eth0
Apr 11 15:07:30 NAS  avahi-daemon[2822]: Interface vethc3e9ea1.IPv6 no longer relevant for mDNS.
Apr 11 15:07:30 NAS  avahi-daemon[2822]: Leaving mDNS multicast group on interface vethc3e9ea1.IPv6 with address fe80::e8f5:14ff:fe6c:a779.
Apr 11 15:07:30 NAS kernel: docker0: port 2(vethc3e9ea1) entered disabled state
Apr 11 15:07:30 NAS kernel: device vethc3e9ea1 left promiscuous mode
Apr 11 15:07:30 NAS kernel: docker0: port 2(vethc3e9ea1) entered disabled state
Apr 11 15:07:30 NAS  avahi-daemon[2822]: Withdrawing address record for fe80::e8f5:14ff:fe6c:a779 on vethc3e9ea1.
Apr 11 15:07:31 NAS kernel: docker0: port 3(veth1883d53) entered disabled state
Apr 11 15:07:31 NAS kernel: veth0c057d1: renamed from eth0
Apr 11 15:07:31 NAS  avahi-daemon[2822]: Interface veth1883d53.IPv6 no longer relevant for mDNS.
Apr 11 15:07:31 NAS  avahi-daemon[2822]: Leaving mDNS multicast group on interface veth1883d53.IPv6 with address fe80::f0e7:9aff:feb5:6d16.
Apr 11 15:07:31 NAS kernel: docker0: port 3(veth1883d53) entered disabled state
Apr 11 15:07:31 NAS kernel: device veth1883d53 left promiscuous mode
Apr 11 15:07:31 NAS kernel: docker0: port 3(veth1883d53) entered disabled state
Apr 11 15:07:31 NAS  avahi-daemon[2822]: Withdrawing address record for fe80::f0e7:9aff:feb5:6d16 on veth1883d53.
Apr 11 15:10:46 NAS  ntpd[1328]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 11 15:11:03 NAS  nmbd[2792]: [2023/04/11 15:11:03.573821,  0] ../../source3/nmbd/nmbd_become_lmb.c:398(become_local_master_stage2)
Apr 11 15:11:03 NAS  nmbd[2792]:   *****
Apr 11 15:11:03 NAS  nmbd[2792]:   
Apr 11 15:11:03 NAS  nmbd[2792]:   Samba name server NAS is now a local master browser for workgroup WORKGROUP on subnet 172.17.0.1
Apr 11 15:11:03 NAS  nmbd[2792]:   
Apr 11 15:11:03 NAS  nmbd[2792]:   *****
Apr 11 15:11:59 NAS root: ACPI action kpenter is not defined
Apr 11 15:12:03 NAS root: ACPI action up is not defined
Apr 11 15:12:04 NAS root: ACPI action down is not defined
Apr 11 15:12:04 NAS root: ACPI action right is not defined
Apr 11 15:12:04 NAS root: ACPI action up is not defined
Apr 11 15:14:58 NAS  emhttpd: shcmd (101): /usr/local/emhttp/webGui/scripts/update_access
Apr 11 15:14:58 NAS  sshd[2328]: Received signal 15; terminating.
Apr 11 15:14:59 NAS  emhttpd: shcmd (102): /etc/rc.d/rc.nginx reload
Apr 11 15:14:59 NAS root: Checking configuration for correct syntax and
Apr 11 15:14:59 NAS root: then trying to open files referenced in configuration...
Apr 11 15:14:59 NAS root: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Apr 11 15:14:59 NAS root: nginx: configuration file /etc/nginx/nginx.conf test is successful
Apr 11 15:14:59 NAS root: Reloading Nginx configuration...
Apr 11 15:15:00 NAS root: Fix Common Problems Version 2023.03.22
Apr 11 15:15:07 NAS webGUI: Successful login user root from 192.168.0.57
Apr 11 15:15:12 NAS root: Fix Common Problems: Warning: Docker Application Plex-Media-Server has an update available for it
Apr 11 15:15:12 NAS root: Fix Common Problems: Warning: Docker Application sabnzbd has an update available for it
Apr 11 15:32:34 NAS root: ACPI action kpenter is not defined
Apr 11 15:38:29 NAS  emhttpd: shcmd (151): smbpasswd -x temp
Apr 11 15:38:29 NAS root: Failed to find entry for user temp.
Apr 11 15:38:29 NAS  emhttpd: shcmd (151): exit status: 1
Apr 11 15:38:29 NAS  emhttpd: shcmd (152): userdel temp
Apr 11 15:38:29 NAS  userdel[21604]: delete user 'temp'
Apr 11 15:38:29 NAS  emhttpd: Starting services...
Apr 11 15:38:29 NAS  emhttpd: shcmd (154): /etc/rc.d/rc.samba restart
Apr 11 15:38:29 NAS  wsdd2[2802]: 'Terminated' signal received.
Apr 11 15:38:29 NAS  nmbd[2792]: [2023/04/11 15:38:29.578783,  0] ../../source3/nmbd/nmbd.c:59(terminate)
Apr 11 15:38:29 NAS  nmbd[2792]:   Got SIGTERM: going down...
Apr 11 15:38:29 NAS  winbindd[2805]: [2023/04/11 15:38:29.578812,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
Apr 11 15:38:29 NAS  winbindd[2808]: [2023/04/11 15:38:29.578829,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
Apr 11 15:38:29 NAS  winbindd[2805]:   Got sig[15] terminate (is_parent=1)
Apr 11 15:38:29 NAS  winbindd[2808]:   Got sig[15] terminate (is_parent=0)
Apr 11 15:38:29 NAS  wsdd2[2802]: terminating.
Apr 11 15:38:29 NAS  winbindd[6121]: [2023/04/11 15:38:29.579012,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
Apr 11 15:38:29 NAS  winbindd[6121]:   Got sig[15] terminate (is_parent=0)
Apr 11 15:38:31 NAS root: Starting Samba:  /usr/sbin/smbd -D
Apr 11 15:38:31 NAS  smbd[21713]: [2023/04/11 15:38:31.794522,  0] ../../source3/smbd/server.c:1741(main)
Apr 11 15:38:31 NAS  smbd[21713]:   smbd version 4.17.3 started.
Apr 11 15:38:31 NAS  smbd[21713]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
Apr 11 15:38:31 NAS root:                  /usr/sbin/nmbd -D
Apr 11 15:38:31 NAS  nmbd[21715]: [2023/04/11 15:38:31.811631,  0] ../../source3/nmbd/nmbd.c:901(main)
Apr 11 15:38:31 NAS  nmbd[21715]:   nmbd version 4.17.3 started.
Apr 11 15:38:31 NAS  nmbd[21715]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
Apr 11 15:38:31 NAS root:                  /usr/sbin/wsdd2 -d 
Apr 11 15:38:31 NAS  wsdd2[21729]: starting.
Apr 11 15:38:31 NAS root:                  /usr/sbin/winbindd -D
Apr 11 15:38:31 NAS  winbindd[21730]: [2023/04/11 15:38:31.885357,  0] ../../source3/winbindd/winbindd.c:1440(main)
Apr 11 15:38:31 NAS  winbindd[21730]:   winbindd version 4.17.3 started.
Apr 11 15:38:31 NAS  winbindd[21730]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
Apr 11 15:38:31 NAS  winbindd[21732]: [2023/04/11 15:38:31.890641,  0] ../../source3/winbindd/winbindd_cache.c:3116(initialize_winbindd_cache)
Apr 11 15:38:31 NAS  winbindd[21732]:   initialize_winbindd_cache: clearing cache and re-creating with version number 2
Apr 11 15:38:31 NAS  emhttpd: shcmd (158): /etc/rc.d/rc.avahidaemon restart
Apr 11 15:38:31 NAS root: Stopping Avahi mDNS/DNS-SD Daemon: stopped
Apr 11 15:38:31 NAS  avahi-daemon[2822]: Got SIGTERM, quitting.
Apr 11 15:38:31 NAS  avahi-dnsconfd[2831]: read(): EOF
Apr 11 15:38:31 NAS  avahi-daemon[2822]: Leaving mDNS multicast group on interface docker0.IPv6 with address fe80::42:dbff:fe85:788e.
Apr 11 15:38:31 NAS  avahi-daemon[2822]: Leaving mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
Apr 11 15:38:31 NAS  avahi-daemon[2822]: Leaving mDNS multicast group on interface br0.IPv4 with address 192.168.0.80.
Apr 11 15:38:31 NAS  avahi-daemon[2822]: Leaving mDNS multicast group on interface lo.IPv6 with address ::1.
Apr 11 15:38:31 NAS  avahi-daemon[2822]: Leaving mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
Apr 11 15:38:31 NAS  avahi-daemon[2822]: avahi-daemon 0.8 exiting.
Apr 11 15:38:32 NAS root: Starting Avahi mDNS/DNS-SD Daemon:  /usr/sbin/avahi-daemon -D
Apr 11 15:38:32 NAS  avahi-daemon[21751]: Found user 'avahi' (UID 61) and group 'avahi' (GID 214).
Apr 11 15:38:32 NAS  avahi-daemon[21751]: Successfully dropped root privileges.
Apr 11 15:38:32 NAS  avahi-daemon[21751]: avahi-daemon 0.8 starting up.
Apr 11 15:38:32 NAS  avahi-daemon[21751]: Successfully called chroot().
Apr 11 15:38:32 NAS  avahi-daemon[21751]: Successfully dropped remaining capabilities.
Apr 11 15:38:32 NAS  avahi-daemon[21751]: Loading service file /services/sftp-ssh.service.
Apr 11 15:38:32 NAS  avahi-daemon[21751]: Loading service file /services/smb.service.
Apr 11 15:38:32 NAS  avahi-daemon[21751]: Loading service file /services/ssh.service.
Apr 11 15:38:32 NAS  avahi-daemon[21751]: Joining mDNS multicast group on interface docker0.IPv6 with address fe80::42:dbff:fe85:788e.
Apr 11 15:38:32 NAS  avahi-daemon[21751]: New relevant interface docker0.IPv6 for mDNS.
Apr 11 15:38:32 NAS  avahi-daemon[21751]: Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
Apr 11 15:38:32 NAS  avahi-daemon[21751]: New relevant interface docker0.IPv4 for mDNS.
Apr 11 15:38:32 NAS  avahi-daemon[21751]: Joining mDNS multicast group on interface br0.IPv4 with address 192.168.0.80.
Apr 11 15:38:32 NAS  avahi-daemon[21751]: New relevant interface br0.IPv4 for mDNS.
Apr 11 15:38:32 NAS  avahi-daemon[21751]: Joining mDNS multicast group on interface lo.IPv6 with address ::1.
Apr 11 15:38:32 NAS  avahi-daemon[21751]: New relevant interface lo.IPv6 for mDNS.
Apr 11 15:38:32 NAS  avahi-daemon[21751]: Joining mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
Apr 11 15:38:32 NAS  avahi-daemon[21751]: New relevant interface lo.IPv4 for mDNS.
Apr 11 15:38:32 NAS  avahi-daemon[21751]: Network interface enumeration completed.
Apr 11 15:38:32 NAS  avahi-daemon[21751]: Registering new address record for fe80::42:dbff:fe85:788e on docker0.*.
Apr 11 15:38:32 NAS  avahi-daemon[21751]: Registering new address record for 172.17.0.1 on docker0.IPv4.
Apr 11 15:38:32 NAS  avahi-daemon[21751]: Registering new address record for 192.168.0.80 on br0.IPv4.
Apr 11 15:38:32 NAS  avahi-daemon[21751]: Registering new address record for ::1 on lo.*.
Apr 11 15:38:32 NAS  avahi-daemon[21751]: Registering new address record for 127.0.0.1 on lo.IPv4.
Apr 11 15:38:32 NAS  emhttpd: shcmd (159): /etc/rc.d/rc.avahidnsconfd restart
Apr 11 15:38:32 NAS root: Stopping Avahi mDNS/DNS-SD DNS Server Configuration Daemon: stopped
Apr 11 15:38:32 NAS root: Starting Avahi mDNS/DNS-SD DNS Server Configuration Daemon:  /usr/sbin/avahi-dnsconfd -D
Apr 11 15:38:32 NAS  avahi-dnsconfd[21762]: Successfully connected to Avahi daemon.
Apr 11 15:38:32 NAS  emhttpd: shcmd (164): cp /etc/passwd /etc/shadow /var/lib/samba/private/smbpasswd /boot/config
Apr 11 15:38:32 NAS  avahi-daemon[21751]: Server startup complete. Host name is NAS.local. Local service cookie is 1342057464.
Apr 11 15:38:33 NAS  avahi-daemon[21751]: Service "NAS" (/services/ssh.service) successfully established.
Apr 11 15:38:33 NAS  avahi-daemon[21751]: Service "NAS" (/services/smb.service) successfully established.
Apr 11 15:38:33 NAS  avahi-daemon[21751]: Service "NAS" (/services/sftp-ssh.service) successfully established.
Apr 11 15:38:54 NAS  nmbd[21719]: [2023/04/11 15:38:54.853843,  0] ../../source3/nmbd/nmbd_become_lmb.c:398(become_local_master_stage2)
Apr 11 15:38:54 NAS  nmbd[21719]:   *****
Apr 11 15:38:54 NAS  nmbd[21719]:   
Apr 11 15:38:54 NAS  nmbd[21719]:   Samba name server NAS is now a local master browser for workgroup WORKGROUP on subnet 172.17.0.1
Apr 11 15:38:54 NAS  nmbd[21719]:   
Apr 11 15:38:54 NAS  nmbd[21719]:   *****
Apr 11 15:38:54 NAS  nmbd[21719]: [2023/04/11 15:38:54.853947,  0] ../../source3/nmbd/nmbd_become_lmb.c:398(become_local_master_stage2)
Apr 11 15:38:54 NAS  nmbd[21719]:   *****
Apr 11 15:38:54 NAS  nmbd[21719]:   
Apr 11 15:38:54 NAS  nmbd[21719]:   Samba name server NAS is now a local master browser for workgroup WORKGROUP on subnet 192.168.0.80
Apr 11 15:38:54 NAS  nmbd[21719]:   
Apr 11 15:38:54 NAS  nmbd[21719]:   *****

 

 

Here is one of the log dumps where it kept changing the eth0 adapter to IPv6, then hitting mDNS and learning it needed a new name. Then it would restart Samba and reload the SSH, SMB, and SFTP services.
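For what it's worth, the avahi entries in a dump like this track Docker's veth interfaces coming and going, which happens every time a container starts or stops. A quick way to check that correlation in a saved copy of the syslog (the `LOG` path below is a placeholder; point it at wherever you copied the log):

```shell
#!/bin/sh
# Count avahi-daemon chatter vs. Docker veth events in a saved syslog.
LOG=${1:-/boot/syslog-saved.txt}   # placeholder path: pass your own
avahi=$(grep -c 'avahi-daemon' "$LOG")
veth=$(grep -c 'veth' "$LOG")
docker=$(grep -c 'rc.docker' "$LOG")
echo "avahi-daemon lines: $avahi"
echo "veth events:        $veth"
echo "docker starts:      $docker"
# If the avahi lines cluster around the veth/docker events (same
# timestamps), it is mDNS reacting to container churn, not
# independent activity.
```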


I was one of those posts, and after learning way more about everything involving cybersecurity, malware, etc., I've essentially decided to change my career completely into cybersec, because there is NO ONE that's gonna help me with this, so I'm going to learn it my damn self, especially because everyone assumes I'm exaggerating. Anyways..

So far, in my situation anyway, I believe it for sure started at my Unraid server, but I have no logs or proof of entry points yet. I went blasting through a ton of files for things I KNOW shouldn't be on my Windows PCs, and also installed Linux on a few of my PCs while I was learning all this stuff. AND got a friend's old MacBook Pro. They hacked into every single one.


Wiping drives, secure erase, nothing removes this shit. I'm able to see this shit in memory, yet I still don't know enough to do anything about it. I'm pretty sure they got into firmware; they for SURE did on the MacBook, by somehow installing 4 virtual drives into the BASE SYSTEM's tiny little firmware chip, like holy shit (they managed that while I was unfortunately in Recovery Mode, thanks Apple!).
On Windows, everything has been hidden and turned into DLLs using .NET.

The only shitty thing is I have no real way of keeping track of it all, but I will gladly prove it to ANYONE who thinks I'm exaggerating.
Whoever this is, they are everywhere on my network. I used to have a Netgear XR1000, but I got so nervous they got into it that I went and bought TP-Link Omada equipment, hoping to god I can shut some of this down with firewalls and VLANs.

        

I have seen proof of them having virtual drives that somehow always start ahead of everything else. Any time I make a new Linux installation, for instance, they immediately flood my /tmp with bullshit. Seems the first thing is always using CUPS, and if the internet goes down, they switch to using my mobile phone data over Bluetooth.
But I can't see any of it happening in real time. The evidence is elsewhere when digging through all the shit I can.
I used their own fucked-up version of BusyBox to show me stuff using something called DMAdecoder, I think. I don't really know what half the shit is they keep installing on my Linux installations, but it's always the same. If I add something, it gets added to their list and becomes part of the whole pwned-PC distro lol. Like when I switched to Linux Mint, it came with BlueZ (or Blueman, I think it's called, IDK), a much nicer Bluetooth manager; when I went back to Ubuntu, guess what started installing every single time from then on... BlueZ.

I even found the log files for their own shit, on some magical tiny drive that mounted out of NOWHERE.

 


This shit has just straight pissed me off, like I WANT TO USE MY HARDWARE. I have gigabit internet; they're taking half.

I want to run VirtualBox and test out CTFs and just utilize this beautiful hardware I put together, but I can't even run VirtualBox or VMware or MS Hyper-V. Why? Because they take it over every f*cking time and make a ton of virtual networks and interfaces. I can't even put containers or Docker on my PCs; they always use these things and dig in deeper.
I can see all the lowercase file names. Why even attempt to hide? Just gtfo of my network; I'm broke, I have zero use aside from the hardware, and clearly they have that on lock.
I built an 18TB Unraid server and I can't even fucking use it.

I don't have a big enough imagination to come up with this shit in my head, and I've never been paranoid in my whole life.
I was a bit paranoid and nervous at first, but I seriously just said fuck it. I hope they use my network and PCs to destroy some giant antivirus company.
At this point, I think they're using our goddamn iPhones or nearby Androids as a C2 server.
They already cloned most of my usual programs, but mostly just stuck to cloning Windows system files.
They 100% used .NET to hide their exploits and malware, which lets nothing pick up on it, and at this point I have no way of getting rid of anything.

I'm pretty savvy at computers, but I started to believe these know-it-alls: thinking it's all in my head, that this is just how modern PCs are now.
I know how PCs feel when someone else isn't on them. I haven't felt that way in at least a year or two, maybe more, but I'll be damned if I let it get to me any more. I guess me, my wife, and my kids just get to put up with this until I learn how to end it myself.
I have way too many systems to just throw away and replace with fresh stuff, and frankly I no longer have the money to replace all of it. Some, yeah, but not as much as we have now.

Where's Mr. Robot when you need him, he'd make them regret ever crossing my WAN port.
 


Oh yeah, and I forgot: the way they have it set up somehow uses ChromeOS, at least in my case. Maybe that's how they keep their OS small and in firmware, IDK.
They spoof all my stuff anyway, so why not. Ugh, I wish this on no man. Especially if I find out it's someone local hacking from down the fucking street. Oh, better hope that's not it.

This whole Avahi thing doesn't stop at Linux. They're using ICMP or whatever it is, and IPv6 UDP connections, to do whatever... probably to stream your desktops.
I have no clue. They make you think you're updating shit, but you're probably not. Any time I restart, they just revert it back anyway. God, I hate this shit.
I don't trust anything on my PCs. I fucking hate it cuz I love computers. It's been my passion forever; I just wish I would've stuck with it instead of getting an Xbox 360 lol.

If anyone wants proof, let me know. I'll find as much as I can. My next step is learning Volatility, dnSpy... memory-dump reading and .NET program shit, IDK. Blue team, go, I guess..


Apr 11 15:10:46 NAS ntpd[1328]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized

That's what jumped out at me..
I had the same issue, and I think it's so they can do stuff that would normally slow down the network, CPU, etc.; but since they de-synced everything, you hardly even notice.


I was able to boot into safe mode with GUI, no plugins, on a router with no internet access. I copied the data I really cared about and then watched the system log.

 

It then loads a ton of users into passwd, shadow, and smbpasswd. Then it starts SSH, SFTP, and Samba and routes everything through Docker. I've disabled Docker, and the system goes crazy with errors.

 

I'm at the point of erasing my USB drive (saving my key) and trying to reload Unraid fresh from there.

 

Maybe I'll have time to do that on Thursday. I feel the best thing that's happened this week was turning it all off.

 

On 4/17/2023 at 10:14 PM, xjumper84 said:

I was able to boot into safe mode with GUI, no plugins, on a router with no internet access. I copied the data I really cared about and then watched the system log.

 

It then loads a ton of users into passwd, shadow, and smbpasswd. Then it starts SSH, SFTP, and Samba and routes everything through Docker. I've disabled Docker, and the system goes crazy with errors.

 

If you would like help from the community we will need to see logs and settings. The next time you boot the system, wait for the problems to happen then generate a diagnostics.zip file (from Tools -> Diagnostics or by typing "diagnostics" via SSH) and upload the zip file here.

 

The diagnostics does not include your list of users, so go ahead and post /etc/passwd as well so we can look for anything out of the ordinary.
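Alongside posting /etc/passwd, two quick self-checks are easy to run from the terminal. A sketch, assuming a typical Unraid layout where root should be the only UID-0 account:

```shell
#!/bin/sh
# 1. Any account with UID 0 other than root is a red flag.
awk -F: '$3 == 0 && $1 != "root" {print "extra UID-0 account:", $1}' /etc/passwd

# 2. Accounts with a real login shell can log in over SSH; on Unraid,
#    ordinary share users normally cannot. Review anything listed here.
awk -F: '$7 ~ /(ba)?sh$/ {print "login shell:", $1, "->", $7}' /etc/passwd
```

No output from the first command is the expected, healthy result.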

On 4/19/2023 at 7:57 AM, ljm42 said:

 

If you would like help from the community we will need to see logs and settings. The next time you boot the system, wait for the problems to happen then generate a diagnostics.zip file (from Tools -> Diagnostics or by typing "diagnostics" via SSH) and upload the zip file here.

 

The diagnostics does not include your list of users, so go ahead and post /etc/passwd as well so we can look for anything out of the ordinary.

When it last happened, I couldn't create a diagnostics.zip file; the system wouldn't let me. It all wigged out, with the webpage being unavailable. Examples below. At the same time, the syslog filled up with errors, also attached below. The copy of the system log that the pictures show is included as a text file called "nas hacked.txt".

 

Eventually, after the system became stable again, I was able to pull the diagnostic file. 

 

I only saw the added logins/passwords after the fact, once I had pulled internet from the Unraid box. I'll have to copy those off another computer, as they're not on here.

 

 

Interested in your thoughts about what I experienced. Thank you for the help.

 

Attachments: Untitled-1.jpg through Untitled-8.jpg (screen caps), nas-diagnostics-20230411-1513.zip, nas hacked.txt

Edited by xjumper84
added one more screen cap

I don't see any evidence of a hack in these files or screenshots. Let's see if someone else does.

 

One thing to note - the diagnostics were generated before Fix Common Problems had a chance to run, so we can't see what it might have detected:

Apr 11 15:05:25 NAS root: Delaying execution of fix common problems scan for 10 minutes

 

If the OS was hacked, the hacker's changes were probably wiped out when you rebooted the server, since Unraid is loaded fresh each time it is rebooted.

 

It is possible that a Docker container was hacked; that would not show up here, and containers are not wiped after a reboot. If you have reason to believe a Docker container was hacked, you should delete and recreate it.


What do you make of the drives/array disappearing? I wasn't doing anything with the machine to warrant such a display of activity.

 

Or samba going up and down with network configs changing constantly. 

 

It all seems weird and not normal. I've been an Unraid user for more than a decade and have NEVER seen Unraid act this way. Just because I don't post in the forums doesn't make me a noob to the software.


The screenshots show a line that says:

nmbd: messaging_reinit() failed: NT_STATUS_DISK_FULL

 

A full root filesystem can mess up all kinds of things. I would guess that whatever caused that is the source of the weirdness you were seeing. Unfortunately I don't see anything here that can tell us why it got full.
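A sketch for finding what filled it, assuming a typical Unraid layout where / is a RAM-backed filesystem; the paths below are common culprits, not an exhaustive list:

```shell
#!/bin/sh
# Show fullness of the RAM-backed filesystems, then size the usual
# suspects, largest first.
df -h / /var/log /tmp
du -shx /var/log /tmp /run 2>/dev/null | sort -rh
# The top line of the du output is the directory eating the most space;
# a runaway log or a download accidentally written to RAM is typical.
```

Run this while the weirdness is happening; after a reboot the RAM filesystems start fresh and the evidence is gone.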

 

One thing you can do is mirror the syslog to the flash drive:
  https://forums.unraid.net/topic/46802-faq-for-unraid-v6/page/2/?tab=comments#comment-781601 
so if this happens again there should be some clues as to what happened. This does add wear and tear on the flash drive so you wouldn't want to keep it running for months on end.

 

The other thing is, make sure you have the ability to SSH to the server. Then even if the webgui doesn't work you can still type 'diagnostics'. Or if that fails you can type 'cp /var/log/syslog /boot' to copy the syslog to the flash drive so you can access it from another computer after shutting down.


I'll get the most recent diagnostic of the server in the next boot and write another post there. 

 

I was going to erase the drive running the docker and recreate the thumbstick with my key for that drive. 

 

🤞 That should stop these problems, if as you say, they're docker related. 

 

Thank you for your insight.

Link to comment

I formatted my Unraid USB, reinstalled a fresh copy, and booted up.

 

I noticed a few new shares had appeared (domain, iso, system). I never created these.

 

I had Docker and VMs disabled, but I found a file on my cache drive that I never put there: libvirt.img, in the system folder.

 

Once I removed the cache drive, all the issues I'd been having went away. The latest diagnostics file is here:

nas-diagnostics-20230425-1111.zip

Link to comment
3 minutes ago, xjumper84 said:

I noticed a few new shares appeared - domain, iso, system - i never created these. 

 

I had docker disabled and vm's disabled, but found a file created on my cache drive that I never put there, it was in the system folder called libvirt.img

Those are all expected parts of the system; they exist to support Docker and VM functionality.

Link to comment

So the weirdness continues... SSH and SFTP are disabled, as are Docker and VMs.

 

For the life of me I can't get Community Apps to load; it says there is no internet on the machine. But when I go into the terminal I can ping github.com and google.com, and everything responds as normal.

 

While the dashboard shows a consistent 17.4 Mbps of network traffic to the machine, netstat doesn't provide any helpful data (it only lists my desktop as being connected), and the network traffic on my Windows 10 desktop is nothing; it's definitely not sending any data.

 

I added Unassigned Devices earlier to clear the NVMe drive. Now it says it's mounting remote shares, but in the webgui I see nothing.

 

Lastly, the time on the machine is fine; it shows correctly in the BIOS. Any help would be good.

 

This is the latest system log:

Apr 25 16:18:56 NAS  avahi-dnsconfd[4059]: Successfully connected to Avahi daemon.
Apr 25 16:18:56 NAS  emhttpd: nothing to sync
Apr 25 16:18:56 NAS unassigned.devices: Mounting 'Auto Mount' Remote Shares...
Apr 25 16:18:57 NAS  avahi-daemon[4050]: Server startup complete. Host name is NAS.local. Local service cookie is 2735995368.
Apr 25 16:18:58 NAS  avahi-daemon[4050]: Service "NAS" (/services/ssh.service) successfully established.
Apr 25 16:18:58 NAS  avahi-daemon[4050]: Service "NAS" (/services/smb.service) successfully established.
Apr 25 16:18:58 NAS  avahi-daemon[4050]: Service "NAS" (/services/sftp-ssh.service) successfully established.
Apr 25 16:19:01 NAS unassigned.devices: Using Gateway '192.168.0.1' to Ping Remote Shares.
Apr 25 16:21:48 NAS nginx: 2023/04/25 16:21:48 [error] 3540#3540: nchan: A message from the past has just been published. Unless the system time has been adjusted, this should never happen.
Apr 25 16:24:18 NAS kernel: kvm_arch_init: 2 callbacks suppressed
Apr 25 16:24:18 NAS kernel: kvm: support for 'kvm_amd' disabled by bios
Apr 25 16:27:42 NAS  ntpd[1396]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 25 16:28:00 NAS root: Fix Common Problems Version 2023.03.22
Apr 25 16:30:00 NAS root: Fix Common Problems: Error: Unable to communicate with GitHub.com
Apr 25 16:30:00 NAS root: Fix Common Problems: Other Warning: Could not check for blacklisted plugins
Apr 25 16:30:02 NAS root: Fix Common Problems: Other Warning: Could not perform unknown plugins installed checks
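
For anyone skimming a log this noisy: filtering out the avahi chatter makes the real oddities (ntpd, nginx, Fix Common Problems) stand out. A minimal sketch using a few sample lines from the excerpt above; on the server you would grep /var/log/syslog directly:

```shell
# Write a few sample syslog lines to /tmp, then filter out the avahi noise.
# On the real server: grep -v 'avahi' /var/log/syslog
cat <<'EOF' > /tmp/syslog-sample
Apr 25 16:18:56 NAS  avahi-dnsconfd[4059]: Successfully connected to Avahi daemon.
Apr 25 16:18:57 NAS  avahi-daemon[4050]: Server startup complete. Host name is NAS.local.
Apr 25 16:27:42 NAS  ntpd[1396]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 25 16:30:00 NAS root: Fix Common Problems: Error: Unable to communicate with GitHub.com
EOF
grep -v 'avahi' /tmp/syslog-sample
```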

 

nas-weird-network-traffic.jpg

nas-diagnostics-20230425-1626.zip

Link to comment
  • 1 month later...

A follow-up for others who come across the same issues.

 

Here was my fix:

First: unplug the Unraid box from the internet and set it up again on another router with ZERO internet access. Whatever this machine was trying to phone home to, it couldn't reach it without an internet connection. This part was KEY.

 

Copy all the data you want to keep to a drive that isn't in the Unraid machine. For me that meant buying an HDD large enough to back up the entire array. Copying all the data took about a week.

 

Then back up the USB drive and re-format it.

 

Download the new Unraid server bits and copy over the key file. Reboot the Unraid server and preclear all drives. Basically: dump the array, force a format on all drives, clear out all plugins/apps/Docker containers/etc., and re-create a new server on your original hardware. Then set the machine up again.

 

While doing this, run a virus scan on all the data you copied over.  

This took a few days. 

 

In the meantime I noticed the firmware on my router was 5 years out of date. I updated it and set the router up again. Further investigation warranted buying a new router and using the old one as a WiFi extender on the other side of the property. This helped greatly with network security, and with ease of use around the house in general.

 

Move the machine back to the internet-connected network and re-create your shares (this became a great time to redo all the shares I had set up, since 12 years of data was a mess).

 

Reinstall vital plugins and monitor the logs for strange activity. I added ClamAV and let it run for about a week. The weirdness is gone; the server acts like new.

 

I don't know what caused it, but it was a pricey fix. Also, it's worth having your key file backed up.

 

Things I learned:

You can wipe your USB drive with an intact array, and the rebuilt USB stick will still read all your shares.

Reddit.com/unraid is your friend

Use a Google search for your error messages or anything that looks out of place.

Keep the firmware on your most cared-about devices up to date.

Edited by xjumper84
typo
Link to comment
  • 5 months later...

I came here to see what the hell those Avahi entries in the log were... I had to bring popcorn to finish this thread.

 

A quick Google was enough to stop wondering...

Dude, that's just a normal part of the Unraid bootup; nobody answered you with authority... and the other guy is straight outta long jacket.

Link to comment