earthworm


Posts posted by earthworm

  1. I'm really appreciating all the replies here.  My main goal was to bring attention to this issue and have people check their servers to ensure they haven't been compromised.  I doubt I'm the only one who has this problem.  It's possible it started with the CRON file and when that event fired it infected NGINX.

  2. I don't know.  I've been running jlesage/nginx-proxy-manager for years and haven't touched it since the initial configuration, so it may not even be related to that.  I've only noticed issues with the CRON files, but I dismissed them as potential file corruption and didn't think anything of it at the time.

     

    I run the sudo grep -al LD_L1BRARY_PATH /proc/*/environ | grep -v self/ command on the host directly, not from within a container.  Every time I run it I get a new process ID, and I don't know whether the process is restarting itself or whether every instance is a new attack on my server.  The command never returns more than one process.

     

    There's supposed to be a /dev/shm/php-shared file, but my /dev/shm folder is empty.  I'm not sure whether that's normal for Unraid.
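In case it helps anyone else checking, this is roughly what I run to inspect the flagged process before killing it. It's only a sketch: it assumes a normal Linux /proc, assumes at most one match (as in my case), and you'll want to run it as root/sudo on the host so you can read other users' environ files.

```shell
# Extract the PID from the detection command's output and inspect it.
# Run as root on the host; a sketch, not a removal tool.
pid=$(grep -al LD_L1BRARY_PATH /proc/*/environ 2>/dev/null \
      | grep -v self/ \
      | sed 's|/proc/\([0-9]*\)/environ|\1|')

if [ -n "$pid" ]; then
  ls -l "/proc/$pid/exe"                      # which binary it really is
  tr '\0' ' ' < "/proc/$pid/cmdline"; echo    # its full command line
  ps -o ppid=,lstart= -p "$pid"               # parent PID and start time
fi
```
The parent PID and start time are the interesting parts: they hint at whether one long-lived process keeps respawning or something else keeps launching new ones.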

     

     

  3. I don't post much but I feel this is important to those using NGINX on their servers.

     

    Technical details are here:

    https://www.bleepingcomputer.com/news/security/new-malware-hides-as-legit-nginx-process-on-e-commerce-servers/

    or, more directly:

    https://sansec.io/research/cronrat

    https://sansec.io/research/nginrat

     

    Summary:

    Malware dubbed NginRAT injects itself into a legitimate NGINX process, where it runs virtually undetected on your server.  It's installed by a companion piece, CronRAT, which hides its cron entries under the nonexistent date of Feb 31.

     

    I mention this because I believe my server got hit, and it's very likely others are vulnerable as well.  In the past I noticed what I thought was a corrupt cron.d/root file and manually cleaned it.  Where I'm stumped is how to clean the NGINX infestation.  I can identify the malicious processes, and the suggested fix is simply to terminate them; however, every time I check, the process ID is different.

     

    If anyone else has detected this activity on their server, I would really like to find a way to permanently eradicate NginRAT from my server.  All I've done so far is block the payload IP address on my router.  I only discovered this issue today.
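For the cron side, this is the quick check I've been using to spot entries hiding behind the impossible date. A sketch only: the paths are the usual Linux cron locations and may differ on your setup, and a clean result here doesn't prove you're not infected.

```shell
# Flag crontab lines whose day-of-month is 31 and month is 2 ("Feb 31"),
# the impossible date CronRAT reportedly hides behind.
for f in /etc/cron.d/* /var/spool/cron/crontabs/*; do
  [ -f "$f" ] || continue
  # Standard crontab fields: minute hour day-of-month month day-of-week command
  awk -v file="$f" '$3 == 31 && $4 == 2 { print file ": " $0 }' "$f"
done
```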

  4. 22 hours ago, J89eu said:

    Can this run on AMD GPUs? I have a Vega 56 and it seems the Windows app does work with GPU but perhaps not on Linux?

    I have 2 older AMD GPUs (5xxx, 6xxx) and neither of them has ever received a work unit, which is disappointing because they would certainly be faster than any CPU I own.  My systems are running Windows.

  5. The Atom board will be sufficient for your needs.  I'm using the ASRock C2750 board; I got bitten by the C2000 bug but had it replaced under warranty, and the replacement still works great.  I pair it with an older 24-port Areca card and currently have 18 drives connected.  I don't transcode either.

     

    I like having the 8 cores available and low power usage as well as the BMC for remote monitoring.

  6. Something also happened to my server after upgrading to 6.6.

     

    Server was named storage but was renamed to storagemain for testing.

    Added a DNS record so that storage and storagemain both point to the same IP.

     

    "ping storage" - works fine

    "ping storagemain" - works fine

    \\storagemain - works fine

    \\storage - FAILS

    http://storage/ - works fine

    http://storagemain/ - works fine
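My guess is that ping and HTTP go through DNS while \\storage goes through SMB's own name resolution (NetBIOS/WS-Discovery), so the two can disagree. A quick sketch for comparing the two views from a Linux box (nmblookup ships with the Samba tools and may not be installed on yours):

```shell
# Compare DNS resolution with NetBIOS resolution for both names.
for name in storage storagemain; do
  echo "== $name =="
  getent hosts "$name"          || echo "no DNS/hosts entry"
  nmblookup "$name" 2>/dev/null || echo "no NetBIOS answer"
done
```
If storage resolves via DNS but not via NetBIOS, that would explain ping and HTTP working while the UNC path fails.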

  7. Do you think it's one of your VMs (e.g. "Siwat PC") hogging the system?

     

    Looking through my own syslog, I found the following:

    Jan  7 09:54:55 StorageMain kernel: Out of memory: Kill process 8878 (mono) score 456 or sacrifice child
    Jan  7 09:54:55 StorageMain kernel: Killed process 8878 (mono) total-vm:7063192kB, anon-rss:3708676kB, file-rss:0kB, shmem-rss:4kB
    Jan  7 09:54:56 StorageMain kernel: oom_reaper: reaped process 8878 (mono), now anon-rss:0kB, file-rss:0kB, shmem-rss:4kB

    I'm thinking maybe one of my dockers is misbehaving.
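If it helps, this is how I'd try to pin down which container the mono process belongs to. A sketch: the 64-hex-character container ID in /proc/PID/cgroup is only readable while the process is alive, and the PID below is just the one from my log.

```shell
# Per-container memory snapshot to spot the hog (skipped if docker is absent).
command -v docker >/dev/null \
  && docker stats --no-stream --format '{{.Name}}\t{{.MemUsage}}' || true

# For a live PID, map it back to its container via the cgroup file.
pid=8878   # PID from the syslog lines above; gone once the OOM killer ran
cid=$(grep -o '[0-9a-f]\{64\}' "/proc/$pid/cgroup" 2>/dev/null | head -n1 || true)
if [ -n "$cid" ]; then
  docker ps --no-trunc --format '{{.ID}}\t{{.Names}}' | grep "$cid"
fi
```
mono is what Sonarr/Radarr-style containers run under, so the stats snapshot alone will probably point at the culprit.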

  8. Looks like we're approaching a point where the attractiveness of unRAID is more its Docker support, plugins and ease of use than the fact that you can plug in all the drives.

     

    I'm still a huge fan of the Areca cards mainly for their network port and network management functionality.  I started with the ARC-1170 in a Dell server then moved to the ARC-1261ML.  I have a pair of ARC-1280ML cards I will eventually put into service whenever I get around to it.

     

    For the OP, unRAID will work for your purpose.  The disadvantage of running RAID 6 is that you must use similar-size drives and they must all keep spinning.

  9. 1 hour ago, trurl said:

    So is this a file that is on your Windows computer, and you are copying it to your unRAID server? Are you copying it to a user share? Can you read that user share from Windows OK?

    All yes.  Everything else works great.  I noticed the SMB speed increase from 6.3.1 to 6.3.2 and am thankful for that.

  10. I'm not sure if it's just my server but, sometime after upgrading to 6.3, when I copy a file from a Windows box to my idle server it will spin up all my drives and then tell me the file already exists.  If I don't overwrite, it creates a 0-byte file.  If I do overwrite, it copies to the cache drive normally.  Has anyone else experienced this?

  11. I don't think unRAID can see SCSI disks, but you can try going into the controller's BIOS and creating a single-disk RAID 0 for each of the hard drives...unless there's a JBOD option.

     

    Or you can do what I did and get a cheap PCI-X SATA RAID card and start acquiring SATA disks.

  12. I used to have an OLD Kingston 120GB SSD as a cache drive and had plenty of trouble with it.  Mover didn't want to do its job, causing the drive to fill, and the drive also disappeared from the system on multiple occasions.  I moved it to another computer and it works fine as an OS drive.  I'm currently using an OCZ Vertex 3 for cache and so far no problems.