Ransomware resistance


RobJ


Isn't drwxrwxrwx on all shares a security risk?

That is true if you apply no restrictions at the Samba level. However, note that the permissions you list are not the permissions on the shares, but the Linux permissions on the folders. If you apply restrictions to the shares at the Samba level, those are applied on top of the Linux permissions, so in practice that is the way to mitigate the risk and still keep unRAID functioning happily.
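As a sketch of what such a Samba-level restriction can look like (the share name, path, and "maint" account are hypothetical examples, not unRAID defaults), a stanza like this keeps a world-writable folder read-only over SMB for everyone except one maintenance user:

```ini
; Hypothetical smb.conf stanza: the folder may be drwxrwxrwx on disk,
; but over SMB only the "maint" account may write to it.
[media]
   path = /mnt/user/media
   read only = yes
   write list = maint
```

Samba applies the more restrictive of the share definition and the filesystem permissions, so this blocks network writes even though the underlying mode bits are wide open.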

OK, so keeping disk and user shares set to Secure should be tight enough security?

 

Which leads to another question. I have a VM that runs MCEBuddy to take TV shows from my HTPC, cut commercials, convert them, and write them to an unRAID share.

 

Is there a way, say from midnight to 6 am, to turn the share back to read/write so that MCEBuddy can write files to it, via a batch file, cron job, etc.?
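One hedged way to script such a window (the include file, share layout, and script location are assumptions, not unRAID specifics) is a small toggle driven by two cron entries:

```shell
# Hypothetical toggle: flip the "read only" setting for a share stanza
# in a Samba include file, then reload Samba. All names are examples.
set_share_mode() {            # usage: set_share_mode <conf-file> rw|ro
  conf=$1; mode=$2
  case "$mode" in
    rw) sed -i 's/^\([[:space:]]*read only[[:space:]]*=\).*/\1 no/'  "$conf" ;;
    ro) sed -i 's/^\([[:space:]]*read only[[:space:]]*=\).*/\1 yes/' "$conf" ;;
    *)  echo "usage: set_share_mode <conf-file> rw|ro" >&2; return 1 ;;
  esac
  # On the live server, follow up with:  smbcontrol all reload-config
}
```

Two cron entries would then open and close the window, e.g. `0 0 * * * /boot/scripts/share_toggle rw` and `0 6 * * * /boot/scripts/share_toggle ro` (kept on the flash drive so they survive reboots).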


OK, so keeping disk and user shares set to Secure should be tight enough security?

 

I can't help you with your other question, but I can answer your first one.

 

It depends on what your goals and concerns are. The major concern in this thread is that a Windows (or, less likely, a Mac) machine gets infected with ransomware that traverses the shares it has access to and encrypts the data on your Linux machine. If you don't give that Samba share write permission, ransomware running on the Windows machine can't modify the files on your Linux machine.

 

This does not protect against your Linux machine itself getting infected with ransomware. Whether that should be a concern depends on a lot of factors, but for most users it's far less of a concern than an external machine with access causing problems.

 

 

 


Question:

 

Does the samba recycle bin plugin give any protection against ransomware? Or would that be bypassed?

I would be extremely surprised if the recycle bin is not bypassed. Ransomware rewrites files; it does not delete them.

Question:

 

Does the samba recycle bin plugin give any protection against ransomware? Or would that be bypassed?

I would be extremely surprised if the recycle bin is not bypassed. Ransomware rewrites files; it does not delete them.

 

Yeah that is what I suspect as well.

 

A couple of random thoughts.

 

A lot of users have a set up where they have a cache drive and cache enabled shares.

 

Which in effect results in reading from one directory, say /mnt/user/movies, and writing to another, /mnt/cache/movies. I believe this is a product of our FUSE setup.

 

My question is this: if one modifies a file in /mnt/user/movies, is it modified in place, or is the modified file placed in /mnt/cache/movies? I suspect the former, but it would be great if it were the latter.

 

If it works such that a file that is opened over SMB and modified is moved to /mnt/cache/movies then using rsync with the --ignore-existing option would allow all modified files to remain on the cache and not be updated over existing files. Then you could use a second rsync command to park those files in some sort of quarantine where they await user action (confirmation) before they are moved to the array.

 

This will of course only work if there is a way to disable modify-in-place and force the creation of a working file in an alternative directory.


Another idea would be a new security permission level: Read & Append

Users could read files, and add uniquely named files, but not overwrite or delete existing files.

This would be good for our media shares, which are mostly cold storage, as well as being a good defense against malware.

One would stay logged in at this level for day to day activity, then switch to a higher level login for cleaning and file maintenance.

 


Another idea would be a new security permission level: Read & Append

Users could read files, and add uniquely named files, but not overwrite or delete existing files.

This would be good for our media shares, which are mostly cold storage, as well as being a good defense against malware.

One would stay logged in at this level for day to day activity, then switch to a higher level login for cleaning and file maintenance.

 

Yes that would be really great!

I saw this morning that over the past weekend more than a thousand people who visited thepiratebay got ransomware on their computers.

And I suddenly wondered whether there's a way to protect my unRAID server against ransomware/viruses.

I am at my gaming PC right now, which only has games and nonessential stuff installed, so basically I want to protect my unRAID server...


Another idea would be a new security permission level: Read & Append

Users could read files, and add uniquely named files, but not overwrite or delete existing files.

This would be good for our media shares, which are mostly cold storage, as well as being a good defense against malware.

One would stay logged in at this level for day to day activity, then switch to a higher level login for cleaning and file maintenance.

Awesome idea!

 

I think that right now unRAID can be made resistant to a ransomware attack, but only in a quite roundabout way (e.g. having an isolated "high-risk" VM with only some shares accessible). The above would make it even better.

 

Read & Append - Users could read files, and add uniquely named files, but not overwrite or delete existing files.

 

That's a good idea.... not just for ransomware resistance.

 

All but one of my shares are read-only or simply not accessible, except by one maintenance account (not root).

 

The one writable share ("incoming") is used for depositing files on the server, and after files are uploaded to "incoming" I log in and move them to a final destination from the command line.  It's a PITA, but secure.

 

With a "Read & Append" share, 99% of it could be done (safely) directly to the destination share.


With a "Read & Append" share, 99% of it could be done (safely) directly to the destination share.

 

I think the major hurdle here is that "Read & Append" or "Read and New Writes Only" isn't a permission set that currently exists in Linux... so it would take significant effort to make that happen.

 

Also, BubbaQ, I'm doing the same thing as you, except for a few shares like iTunes which other computers need to log into...

 

A question about securing root, though: by default unRAID ships with root having no password. If you add a password, does it prompt you every time you bring up the webgui? (It would be sweet if there were a way to avoid that but still secure root.) Also, how did you set up your non-root maintenance account?


There are the append-only and immutable extended attributes, but they aren't quite what we need.

 

What I'm thinking of is something similar to the SnapLock system by NetApp... essentially a software-enforced WORM. It uses a commit architecture, so you can create and modify files inside a commit window, and they then transition to immutable once "committed."

 

So a "commit" job (either scheduled or on-demand) can flag them immutable as needed.
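A minimal sketch of such a commit job, under stated assumptions: an ext-family filesystem that supports chattr, root privileges, and a hypothetical archive path. This is a chattr-based analogue of the idea, not actual SnapLock:

```shell
# Hypothetical "commit" job: any file untouched for the commit window
# becomes immutable, so it can no longer be rewritten (even over SMB).
# Needs root and chattr support; set CHATTR=echo for a dry run.
CHATTR=${CHATTR:-chattr}

commit_immutable() {          # usage: commit_immutable <dir> <days>
  find "$1" -type f -mtime +"$2" -exec "$CHATTR" +i {} +
}

# Scheduled nightly from cron, e.g.:
#   commit_immutable /mnt/disk1/archive 7
# Deliberately release a file for maintenance with:  chattr -i <file>
```

The commit window plays the role of SnapLock's unlocked state; once a file is flagged, even root has to clear the attribute before a write can succeed.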

 

Similarly, you could make the share directory on cache as writable to defined users (or public), but the share directory on the array disks is read-only except to the mover (and of course root).


We also have to take into account how SMB handles the extended attributes. Plus, IIRC, SMB sees a directory where the sticky bit is set as an append-only directory (often used for tmp files).

 

We'll have to do some experimenting to confirm that.


What about this:

 

1) A share is set read only

- This already exists

 

2) ALL writes go to the cache, regardless of the file's current presence on the array.

- There will be some logistics to deconflict the duplicate-named file (maybe just make it a hidden file?).

- The hope here is that a change to the FUSE and/or md driver would allow the system to refuse the duplicate-named file outright and push a write-permission error back over SMB to the source client. Worst case, silently reject it, but that could really suck, so it might be better handled in 3.

3) The mover, which is just a script, is modified to only copy over new files (the assumption here being we couldn't reject the write in 2 above); then the mover deletes the duplicate from the cache and is done.

- The problem with this, of course, is that the user has no idea they just lost their legitimately modified file. So, again assuming we can't deal with it in 2 above:

      a) rename the original with "-COPY_[datetimestamp]" appended to the end and then copy the new file over, or

      b) copy the new file over with "-COPY_[datetimestamp]" appended to the end. This one means the user will have to search for a minute for their new file, but the file is there, and they should well know this behavior is what they asked for when they turned on the feature.

- Basically a rudimentary COW protocol.
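A minimal sketch of option (b), with hypothetical paths and a hypothetical `safe_move` helper (this is not the real mover script):

```shell
# Sketch of option (b): when moving a cache file whose name already
# exists at the destination, land it under a "-COPY_<timestamp>" name
# instead of overwriting. The original is never touched.
safe_move() {                 # usage: safe_move <src-file> <dest-dir>
  src=$1; dest=$2/$(basename "$1")
  if [ -e "$dest" ]; then
    stamp=$(date +%Y%m%d-%H%M%S)
    dest="$dest-COPY_$stamp"  # name collision: divert, don't overwrite
  fi
  mv "$src" "$dest"
}

# Demo on scratch directories:
tmp=$(mktemp -d)
mkdir -p "$tmp/cache" "$tmp/array"
echo original > "$tmp/array/report.doc"
echo modified > "$tmp/cache/report.doc"
safe_move "$tmp/cache/report.doc" "$tmp/array"
ls "$tmp/array"
rm -rf "$tmp"
```

A real mover would also need to notify the user that a diverted copy exists, per the concern raised above.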


With a "Read & Append" share, 99% of it could be done (safely) directly to the destination share.

 

I think the major hurdle here is that "Read & Append" or "Read and New Writes Only" isn't a permission set that currently exists in Linux... so it would take significant effort to make that happen.

 

Also, BubbaQ, I'm doing the same thing as you, except for a few shares like iTunes which other computers need to log into...

 

A question about securing root, though: by default unRAID ships with root having no password. If you add a password, does it prompt you every time you bring up the webgui? (It would be sweet if there were a way to avoid that but still secure root.) Also, how did you set up your non-root maintenance account?

 

Not necessarily, if you combine it with the cache drive... Enable cache on every malware-protected share; writes get sent to the cache drive, and the mover script sends files to the array that are NEW, but files that would replace existing files are placed in a separate "locker" (a separate share within the array) and need explicit permission before being moved over... You could possibly exclude files like .png/.txt because they might be changed by media indexing... same for subtitles...

 

Sounds like a not-too-difficult plan, really... Could even be plugin-based... changing the default mover behaviour to this...

 

 


Question 1: What would unRAID do if you write to a file that is already on the array? Does it write straight to the array or does it make a copy in cache first?

 

Question 2: What if the cache is full? As far as I know, if cache is full, unRAID will write straight to array - negating any defense.

 

Perhaps a "plug-in-able" thing to do is to have cache-only shares linked to array-only shares and have the plugin do the moving (since, as far as I know, plugins have full access to all shares). The complication is how to see all the content in Cache + Array.

 

Or things would be a lot easier done at the LT code level. Perhaps have a "protected" flag for shares which turns the cache on and disables writing when the cache is full, plus a "copy-on-write" kind of flag. The complication is how to deal with free space and such.

 

Nothing is easy. *sigh*  :-\

 

 

 


Question 1: What would unRAID do if you write to a file that is already on the array? Does it write straight to the array or does it make a copy in cache first?

 

Question 2: What if the cache is full? As far as I know, if cache is full, unRAID will write straight to array - negating any defense.

 

Perhaps a "plug-in-able" thing to do is to have cache-only shares linked to array-only shares and have the plugin do the moving (since, as far as I know, plugins have full access to all shares). The complication is how to see all the content in Cache + Array.

 

Or things would be a lot easier done at the LT code level. Perhaps have a "protected" flag for shares which turns the cache on and disables writing when the cache is full, plus a "copy-on-write" kind of flag. The complication is how to deal with free space and such.

 

Nothing is easy. *sigh*  :-\

 

1) It will write to the cache drive, as far as I understand.

2) If the cache is full it starts writing straight to the array, but I am guessing that could easily be made configurable...

So I think it -can- be easy, or it can be made difficult (and possibly a bit better).


Question 1: What would unRAID do if you write to a file that is already on the array? Does it write straight to the array or does it make a copy in cache first?

 

Question 2: What if the cache is full? As far as I know, if cache is full, unRAID will write straight to array - negating any defense.

 

Perhaps a "plug-in-able" thing to do is to have cache-only shares linked to array-only shares and have the plugin do the moving (since, as far as I know, plugins have full access to all shares). The complication is how to see all the content in Cache + Array.

 

Or things would be a lot easier done at the LT code level. Perhaps have a "protected" flag for shares which turns the cache on and disables writing when the cache is full, plus a "copy-on-write" kind of flag. The complication is how to deal with free space and such.

 

Nothing is easy. *sigh*  :-\

 

1) It will write to the cache drive, as far as I understand.

2) If the cache is full it starts writing straight to the array, but I am guessing that could easily be made configurable...

So I think it -can- be easy, or it can be made difficult (and possibly a bit better).

Your answer to 1) is incorrect. If a file already exists, then unRAID writes directly to it, bypassing the cache.

Jumperalex and Helmonder, that was actually very similar to the idea I had in mind.

 

Like Helmonder said, it would be easy to include the --ignore-existing flag in the mover, then have a second move step that moves the remaining files to a "jail" of sorts awaiting positive confirmation (or some variation with incremental backups or something; lots of options here).

 

The problem, as far as I can tell, is actually getting a share to be read & write but forcing ALL writes to the cache. I'm not sure that's something that can be done, and even if it could, I'm not sure it solves the problem of how ransomware actually works.

 

 


The problem, as far as I can tell, is actually getting a share to be read & write but forcing ALL writes to the cache. I'm not sure that's something that can be done, and even if it could, I'm not sure it solves the problem of how ransomware actually works.

 

One way is to link a cache-only share to the main user share.  So the user share "stuff" is linked to a cache-only share called "stuff_in" and a scheduled/manual process will commit the changes in "stuff_in" to "stuff."

 

You should be able to "merge" the two shares in FUSE for live reading.

 

Files with the same name have to be rejected... otherwise you will be letting potentially corrupted/malware files overwrite a good file on the R/O share. I'm not sure if you can detect the name collision when copying to the cache-only share, or if it will have to be caught in the commit process later.
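A minimal sketch of that commit step, catching collisions at commit time (the "stuff_in"/"stuff" style paths and the `commit_share` helper are hypothetical): colliding names are reported and left in the incoming share for review, everything else is moved through.

```shell
# Move files from a cache-only incoming share into the protected share,
# refusing any name collision so an infected client can never cause a
# good file to be overwritten. Paths are illustrative.
commit_share() {              # usage: commit_share <incoming> <protected>
  ( cd "$1" && find . -type f ) | while IFS= read -r rel; do
    if [ -e "$2/$rel" ]; then
      echo "COLLISION (left for review): $rel" >&2
    else
      mkdir -p "$2/$(dirname "$rel")"
      mv "$1/$rel" "$2/$rel"
    fi
  done
}
```

Run on a schedule, e.g. `commit_share /mnt/cache/stuff_in /mnt/user/stuff`, with the collision report mailed to the admin.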


I'm really happy to see the ideas flowing here, as I think this is very important.  I'm hoping LimeTech will put a high priority on this for 6.2, or as soon as they can fit it in.

 

I'd like to suggest another approach, not instead of the above, but as an alternative way to handle the problem, especially for those who *have* to provide read/write access. For want of a better name, I'll call it "ransomware-resistant mirroring". A background process (could be a plugin-controlled cron statement) periodically monitors a given list of paths and mirrors them to safe destination paths that ARE NOT AVAILABLE EXTERNALLY. It's not normal mirroring, as it does not copy changed files, nor does it immediately remove deleted files. The original folder trees are exactly copied, and all new files are copied, but deletions must wait for a configured delay (default 30 days?) before removal, OR a manual function that requires user review of the deletions before removing them. Changed files simply raise notifications of the changes, but nothing more. The user must then review the list of changed files before OK'ing their mirroring. It should be immediately clear, crystal clear at a glance, whether the changes are legitimate or malware-caused. It would be nice if the change and delete notifications reported a summary of how many files were changed/deleted. That might be all we need, to know we've been attacked.

 

This could be a plugin, written now. Could be a Dynamix plugin. But it would be nice to see it built in. The first version could be rather simple: request a series of source paths, and specify the mirror path for each. One of them could be the flash drive /boot. Destination paths could be on the same drive (data and mirror on the same drive), but the disk share must be off; we don't want any external access at all to the RR mirror. Or the destinations could be on specially assigned drives for the purpose, or on a second machine. The actual operations could then be built on cron and rsync statements, with appropriate options that are selective in which files they copy.

 

A future enhancement would be to add restore functions.  But we don't need them initially, we can do it manually.  The critical thing now is to make the data safe.

 

Each user will have to do some planning, as to how to rearrange drive space for the mirrors, but it should be a straightforward process.  AND, it forces backup procedures on everyone!  Those who already have backups may still find advantages in this approach, and revise their procedures to suit.


For backups I use Syncrify, which lets me retain multiple versions of modified files and protects against deletions. The backup server has no writable shares... it all goes through rsync or Syncrify. Plus it has deleted-file retention, so files deleted from the source will be preserved in the backup for XX days before being removed from the backup.

 

 


I have a similar (and inferior) arrangement to RobJ's and BubbaQ's, but with Crashplan.

 

I set up a backup share which is private and not exported, and set Crashplan to back up there every 3 days. Basically my rationale is that if I get hit by a ransomware attack, I should be able to detect it within that period and switch off Crashplan.
