Question about existing feature or plugin?



Hi,
I hope I'm placing this right, but General Support seems like the right place.
I'm just days from starting to use unRaid for the first time, and I've read up on much of what I need to think about.
I'm building a machine with 8x 8TB WD Red drives (1 parity, 7 data) and a 500 GB SSD as cache. But let's say that in the future I lose two drives at the same time. From what I understand, I'm going to lose the data on those two drives, because a single parity disk only keeps me "safe" from one data disk failure.

So here's my question: is there a feature or plugin that can back up the file structure to a .txt file each night? That is, a list of which files are on which drive? Then in the case of a two-disk failure I could reacquire the files that were on just those two drives. The data stored on this server is going to be stored on another server as well.

English is not my native language, so I hope you understand what I mean.


It may be possible to do what you suggest. But you already mention that you will have a backup server, so this is what I do...

 

On the main server I have the user shares, and I put them on different disks as needed. Some shares are on one disk only; some big shares are on more than one disk.

 

On my backup server I have a different share structure. I have one share for each disk on the main server, so I created shares called "disk1backup", "disk2backup", and so on. These can be on single disks or multiple disks; there is no requirement to match the disk layout of the main server on the backup server. Then I have some simple rsync scripts to back up each disk on the main server to the corresponding disk backup share on the backup server. I can run them once a week, once a month, or whenever I need.

 

If I lose a disk in the main server due to a hardware failure, and if parity is not enough to recover for some reason, then the disk backup shares on the backup server allow me to get back all the data from the lost disk(s) on the main server, on a disk-by-disk basis.

 

Here is an example of the script command that I use to create disk copies on the backup server. In this example my backup server is named "BackupServer", and the share for the backup of disk 1 on the main server is called "Disk1-backup".

rsync -av -W --xattrs --delete --timeout=3600 --progress /mnt/disk1/* BackupServer::mnt/user/Disk1-backup

Note that the rsync daemon must be set up on the backup server for this to work in this way. Some users instead run a script on the backup server to pull the files from the main server. It is probably safer that way around, but I haven't set mine up that way.
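For reference, the daemon side might look something like the following minimal /etc/rsyncd.conf sketch on the backup server. This is an assumption on my part, not a tested configuration; the single "mnt" module is what makes a destination of the BackupServer::mnt/... form resolve.

```
# /etc/rsyncd.conf on the backup server (hypothetical sketch)
uid = root
gid = root
use chroot = no

# A "mnt" module with path /mnt makes destinations such as
# BackupServer::mnt/user/Disk1-backup resolve to /mnt/user/Disk1-backup.
[mnt]
    path = /mnt
    read only = no
```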

Edited by S80_UK

You can run something like the following, assuming you have mounted in some remote disk to send the information to.

#!/bin/bash
# {1..3} brace expansion is a bash-ism, hence #!/bin/bash rather than #!/bin/sh.

date=$(date +%Y%m%d)

echo "Hello [$date]"
# Full listing of /etc, then one listing file per data disk.
ls -lR /etc > <some-foreign-mount>/lslR-n54l-$date-etc
for i in {1..3}; do
    ls -lR /mnt/disk$i > <some-foreign-mount>/lslR-n54l-$date-disk$i
done

If you want a list with just names instead of having size and timestamps for everything, then you could use "find" instead of "ls -lR".
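As a concrete sketch of that find variant, the following lists names only. Here /etc stands in for a data disk and /tmp for the remote mount; both paths are assumptions for demonstration, and on the server you would use /mnt/disk1 and so on instead.

```shell
#!/bin/bash
# Names-only variant of the listing script above (a sketch, not from the thread).
# /etc stands in for a data disk and /tmp for the remote mount point.
date=$(date +%Y%m%d)
find /etc -maxdepth 1 > /tmp/filelist-$date-etc
```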

15 minutes ago, pwm said:

So this will print something like this? And the file will have the name of the current date?

 


#Disk01
	Folder01
		File01
		File02
		File03
	Folder02
		File04
		File05
		File06

#Disk02
	Folder03
		File07
		File08
		File09
	Folder04
		File10
		File11
		File12
1 hour ago, S80_UK said:

Then just make a script with one line for each disk?

Edited by Luke_Starkiller

ls -lR (notice the letter ell) gives an output that looks like:

disk1:
total 64
-rwxrwxrwx 1 root root     1 Jun  9 14:32 benny*
-rwxrwxrwx 1 root root     1 Jun  9 14:10 charlie*
drwxrwxrwx 2 root root 16384 Jun  9 14:33 sub1/
drwxrwxrwx 2 root root 16384 Jun  9 14:33 sub2/

disk1/sub1:
total 32
-rwxrwxrwx 1 root root 1 Jun  9 14:33 sub1.bin*
-rwxrwxrwx 1 root root 1 Jun  9 14:33 sub1.txt*

disk1/sub2:
total 16
-rwxrwxrwx 1 root root 1 Jun  9 14:33 sub2.txt*

 

ls -1R (notice the digit one; output in a single column) gives an output that looks like:

disk1:
benny*
charlie*
sub1/
sub2/

disk1/sub1:
sub1.bin*
sub1.txt*

disk1/sub2:
sub2.txt*

 

find gives an output that looks like:

disk1
disk1/charlie
disk1/benny
disk1/sub1
disk1/sub1/sub1.txt
disk1/sub1/sub1.bin
disk1/sub2
disk1/sub2/sub2.txt

 

22 minutes ago, pwm said:

Oh, I see. And will the script you wrote above do all the data disks, or do you have to run it for each disk? :)

Edited by Luke_Starkiller
On 6/9/2018 at 1:30 PM, Luke_Starkiller said:


Then just make a script with one line for each disk?

 

Basically, yes. I put them in pairs, so I do 1 and 2, then 3 and 4, etc. But you can do more. Note that this is very simplistic: there is no logging or error checking, so if it goes wrong somewhere you won't know unless you watch it. So I run each script twice. The first time through it does the sync, with files being deleted / copied / updated as needed. The second time it should complete quickly because there is nothing more to do.
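One way to sketch such a script (this is my own hypothetical example, not from the thread) is to generate the rsync line for each disk into a small script you can review before running. The disk count of 4 and the BackupServer / Disk<N>-backup names are assumptions carried over from the example above.

```shell
#!/bin/bash
# Hypothetical sketch: write one rsync command per data disk into a
# reviewable script. Adjust the disk numbers and names for your servers.
out=/tmp/disk-backup.sh
echo '#!/bin/bash' > "$out"
for i in 1 2 3 4; do
    echo "rsync -av -W --xattrs --delete --timeout=3600 /mnt/disk$i/ BackupServer::mnt/user/Disk$i-backup" >> "$out"
done
chmod +x "$out"
```

One point worth knowing: /mnt/disk$i/ with a trailing slash also picks up hidden files at the top level of the disk, which the /mnt/disk1/* glob in the example above would miss.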


I ended up with the following script. I have another computer on the network which fetches the data from the /boot/backup directory on the flash drive and dumps it into folders on my OneDrive and Dropbox as backups.

 

Now I just need to figure out how to sort the output; I've tried piping into sort -n but it won't work. And I'm still not sure what the "%h/%f\r\n" format string does, hehe.


Thanks for your help guys ^^
 

#!/bin/bash
find /mnt/disk* -type d -fprintf /boot/backup/dirlist_$(date +"%Y%m%d".txt) "%h/%f\r\n"
find /mnt/disk* -type f -fprintf /boot/backup/filelist_$(date +"%Y%m%d".txt) "%h/%f\r\n"

 

Edited by Luke_Starkiller

The -fprintf option prints directly to the file specified by /boot/backup/dirlist_$(date +"%Y%m%d".txt), so piping the standard output into sort will not do anything.

The "%h/%f\r\n" is the second argument to -fprintf and controls what is printed:

  • %h is the containing directory of the item being output.
  • %f is the final filename (or directory if printing directories) of the item being output.
  • \r is the <carriage-return> character.
  • \n is the <new-line> character.
     

The default output of find comes from the -print action, which effectively does the 1st, 2nd and 4th of those ("%h/%f\n").


I assume that the output file is intended to be read on Windows, where text lines are expected to end with <carriage-return><new-line> characters, while on Linux they end with just <new-line>; hence the need to include the <carriage-return> character in the output.
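A quick way to see what %h and %f expand to is a throwaway tree; the paths below are illustrative only.

```shell
#!/bin/bash
# Demo of find's %h (containing directory) and %f (basename) directives,
# using a throwaway tree under /tmp.
mkdir -p /tmp/printfdemo/sub
touch /tmp/printfdemo/sub/file.txt
find /tmp/printfdemo -type f -printf '%h/%f\n'
# prints: /tmp/printfdemo/sub/file.txt
```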
 

I think the form of command you are looking for is:

 

find /mnt/disk* -type d -printf "%h/%f\r\n" | sort -o /boot/backup/dirlist_$(date +"%Y%m%d".txt)


 

Edited by remotevisitor
5 hours ago, remotevisitor said:

Ah! Thank you so much for clearing that up for me and it worked like a charm!
