(SOLVED) Defend against crypto virus on auto backup job?


je82

Recommended Posts

Hi,

 

My nightmare scenario is that somehow the entire data pool gets encrypted and then my nightly backup job runs and overwrites all the good data with encrypted data.

 

What different techniques do you deploy to avoid this? Is there a way, in bash perhaps, to verify the integrity of a couple of files that you know will never change before running the backup script?

 

Ideas welcome.

Edited by je82
Link to comment

My 2 cents: online services keep several backup versions, so you can restore an older backup if the newest one has been overwritten with encrypted data. The drawback is that you need a lot more space for backups, and you need to manually check whether you are infected; this depends on how frequently you back up and on how many backups you keep.

Note that if you store backup locally the ransomware can infect all backups, so ideally a backup should be stored offline or on a cloud which is disconnected after a backup.

As you pointed out, you can check the hashes of a few files that never change (you can create some files with different extensions for this; different extensions because some ransomware may not encrypt all file types). I think you can write a script that checks the hashes and, if they match, runs the backup.

 

However, if you get infected with ransomware I'm not sure you will even be able to run the backup job... so your backup could be safe.
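The setup step described above could be sketched roughly like this; note that all the paths, filenames, and the manifest location here are hypothetical examples, not a tested product. The idea is to plant canary files with several different extensions and record their hashes in a manifest kept somewhere the data share can't reach:

```shell
#!/bin/sh
# Sketch only: paths are example placeholders, adjust to your pool.
# Plant canary files with several extensions (some ransomware skips
# certain extensions), then record their hashes in a manifest that
# is stored OFF the data pool.
CANARY_DIR="/tmp/pool/canaries"   # e.g. a hidden dir on the array in practice
MANIFEST="/tmp/canaries.md5"      # keep this somewhere the share can't write

mkdir -p "$CANARY_DIR"
for ext in txt doc exe dll; do
    printf 'canary file, do not modify\n' > "$CANARY_DIR/canary.$ext"
done

# md5sum's default output format is what "md5sum -c" expects later
md5sum "$CANARY_DIR"/canary.* > "$MANIFEST"
echo "recorded $(grep -c . "$MANIFEST") canary hashes"
```

The backup script can then run `md5sum -c` against that manifest before touching anything.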

Edited by ghost82
Link to comment
1 minute ago, testdasi said:

I go for a simpler approach: my backup trigger is manual. :D

I trust myself more than any script, and automatic backup, in my opinion, defeats the purpose even without the cryptovirus scenario: e.g. maybe I accidentally delete or wrongly edit something -> automatic backup -> no recovery.

 

 

The risk with that is that you eventually forget about it, then disaster happens and you're months behind on your backups.

 

2 minutes ago, ghost82 said:

My 2 cents: online services keep several backup versions, so you can restore an older backup if the newest one has been overwritten with encrypted data. The drawback is that you need a lot more space for backups, and you need to manually check whether you are infected; this depends on how frequently you back up and on how many backups you keep.

Note that if you store backup locally the ransomware can infect all backups, so ideally a backup should be stored offline or on a cloud which is disconnected after a backup.

As you pointed out, you can check the hashes of a few files that never change (you can create some files with different extensions for this; different extensions because some ransomware may not encrypt all file types). I think you can write a script that checks the hashes and, if they match, runs the backup.

Yeah, I'd want some way to verify the hashes of a couple of files that I can plant around the data pool; if they exist and the hashes match, the backup continues. Not sure how to do this, though, but it would be a decent extra defense against these types of attacks.

Link to comment

Found a solution: md5sum can be used to do this.

 

example:


if [ "$(md5sum < /mnt/disk1/whatever/file1.txt)" = "1053fc6716c3a86911e1ebdaabe30814  -" ]
then
    echo "run backup job"
else
    ....
fi

Just add a bunch of files that never change unless something fishy is going on; if something encrypts or alters them, the md5sum will no longer match.
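Extending that one-file check into a guard over a whole list of canaries could look something like this; it's a minimal sketch, and the paths and filenames are hypothetical (the demo setup lines would normally be done once, in advance, not inside the backup script):

```shell
#!/bin/sh
# Pre-backup guard sketch; example paths only, adjust to your setup.
# Demo setup (one-time, in advance): plant a canary and record its hash.
CANARY="/tmp/demo-canary.txt"      # e.g. a file planted on the data pool
MANIFEST="/tmp/demo-canaries.md5"  # keep the manifest off the data pool
printf 'canary file, do not modify\n' > "$CANARY"
md5sum "$CANARY" > "$MANIFEST"

# At backup time: verify every hash in the manifest; a mismatch or a
# missing canary aborts the job instead of overwriting good backups.
if md5sum -c --status "$MANIFEST"; then
    echo "canaries intact, running backup"
    # your real backup command goes here, e.g. an rsync job
else
    echo "canary check FAILED, backup aborted" >&2
    exit 1
fi
```

`md5sum -c --status` exits non-zero if any listed file is missing or its hash has changed, which is exactly the signal you want before letting an automated job run.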

Link to comment
2 hours ago, je82 said:

The risk with that is that you eventually forget about it, then disaster happens and you're months behind on your backups.

 

Yeah, I'd want some way to verify the hashes of a couple of files that I can plant around the data pool; if they exist and the hashes match, the backup continues. Not sure how to do this, though, but it would be a decent extra defense against these types of attacks.

There's a product out there called Syncrify that does what you talk about. It leaves a "canary" file in the root of the shared data folders, and if it sees that it's changed it cancels the backup/sync immediately.

 

This is one of the reasons why *nix-based systems probably won't be my choice for a future project, even though unRAID would be a great storage target; the inability to do native snapshot backups (à la Windows Server Backup in 2008 and up) is a critical function for my work. It's already proven its merit at least once.

Link to comment
38 minutes ago, je82 said:

Found a solution: md5sum can be used to do this.

 

example:

Just add a bunch of files that never change unless something fishy is going on; if something encrypts or alters them, the md5sum will no longer match.

Sorry if I'm repeating myself, but in my opinion it would be better to check files with different extensions, let's say txt, exe, dll and doc.

Edited by ghost82
Link to comment
13 minutes ago, ghost82 said:

Sorry if I'm repeating myself, but in my opinion it would be better to check files with different extensions, let's say txt, exe, dll and doc.

Not sure; can't some lockers encrypt files but keep the extension? Also, I guess this may be a way to avoid backing up corrupted data too, since a corrupted file should return a different hash than it did before it was corrupted (only speculating, not sure).

Link to comment
7 minutes ago, je82 said:

can't some lockers encrypt files but keep the extension?

Yes, it's possible, CrypMIC is an example.

 

7 minutes ago, je82 said:

Also, I guess this may be a way to avoid backing up corrupted data too, since a corrupted file should return a different hash

Theoretically true, but bear in mind that a bunch of files is not representative of the whole space of your hard drive: corruption can happen in different sectors, and a parity check is the only way to catch that, as it checks the whole space.

Edited by ghost82
Link to comment
9 minutes ago, ghost82 said:

Yes, it's possible, CrypMIC is an example.

 

Theoretically true, but bear in mind that a bunch of files is not representative of the whole space of your hard drive: corruption can happen in different sectors, and a parity check is the only way to catch that, as it checks the whole space.

If the data on disk is actually corrupt, yeah, but what if your HBA is faulty and just reads the data back corrupted? I wonder if that too would change the hash. In that case it would cover all data read through the faulty HBA, I guess?

Link to comment
