Dynamix File Integrity plugin


bonienl


After installing this plugin, I received the following email overnight. What does it mean?

 

From: [my email address]
To: "root" <root>
Bcc: [my email address]

Subject: cron for user root /usr/bin/run-parts /etc/cron.daily 1> /dev/null

Body:

sed: can't read /boot/config/plugins/dynamix.file.integrity/disks.ini: No such file or directory
sed: can't read /boot/config/plugins/dynamix.file.integrity/disks.ini: No such file or directory



 

The message itself is harmless: it means the file disks.ini isn't present because no Build and/or Export has been done yet.

 

I have made an update to suppress these cron messages, and to exclude the docker image when that file is located on a data disk.

 

New version 2016.01.13a is available.

 


I was running a check on all disks when the following happened; the same thing happened last night with the scheduled check as well. The files are still getting checked, but should I be concerned?

 

Jan 16 10:58:15 Pithos kernel: ------------[ cut here ]------------
Jan 16 10:58:15 Pithos kernel: WARNING: CPU: 7 PID: 23760 at arch/x86/kernel/cpu/perf_event_intel_ds.c:315 reserve_ds_buffers+0x10e/0x347()
Jan 16 10:58:15 Pithos kernel: alloc_bts_buffer: BTS buffer allocation failure
Jan 16 10:58:15 Pithos kernel: Modules linked in: xt_CHECKSUM iptable_mangle ipt_REJECT nf_reject_ipv4 ebtable_filter ebtables kvm_intel kvm vhost_net vhost macvtap macvlan tun xt_nat veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_nat_ipv4 iptable_filter ip_tables nf_nat md_mod ahci i2c_i801 libahci r8169 mii acpi_cpufreq
Jan 16 10:58:15 Pithos kernel: CPU: 7 PID: 23760 Comm: qemu-system-x86 Not tainted 4.1.15-unRAID #1
Jan 16 10:58:15 Pithos kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./H77 Pro4-M, BIOS P2.00 08/06/2013
Jan 16 10:58:15 Pithos kernel: 0000000000000009 ffff88040f60f9a8 ffffffff815f1ad0 0000000000000000
Jan 16 10:58:15 Pithos kernel: ffff88040f60f9f8 ffff88040f60f9e8 ffffffff8104775b ffff88044f255ec0
Jan 16 10:58:15 Pithos kernel: ffffffff8101fd97 0000000000000000 0000000000000000 0000000000010e10
Jan 16 10:58:15 Pithos kernel: Call Trace:
Jan 16 10:58:15 Pithos kernel: [<ffffffff815f1ad0>] dump_stack+0x4c/0x6e
Jan 16 10:58:15 Pithos kernel: [<ffffffff8104775b>] warn_slowpath_common+0x97/0xb1
Jan 16 10:58:15 Pithos kernel: [<ffffffff8101fd97>] ? reserve_ds_buffers+0x10e/0x347
Jan 16 10:58:15 Pithos kernel: [<ffffffff810477b6>] warn_slowpath_fmt+0x41/0x43
Jan 16 10:58:15 Pithos kernel: [<ffffffff8101fd97>] reserve_ds_buffers+0x10e/0x347
Jan 16 10:58:15 Pithos kernel: [<ffffffff8101ab58>] x86_reserve_hardware+0x141/0x153
Jan 16 10:58:15 Pithos kernel: [<ffffffff8101abae>] x86_pmu_event_init+0x44/0x240
Jan 16 10:58:15 Pithos kernel: [<ffffffff810a7b77>] perf_try_init_event+0x42/0x74
Jan 16 10:58:15 Pithos kernel: [<ffffffff810ad260>] perf_init_event+0x9d/0xd4
Jan 16 10:58:15 Pithos kernel: [<ffffffff810ad61c>] perf_event_alloc+0x385/0x4f7
Jan 16 10:58:15 Pithos kernel: [<ffffffffa015755b>] ? stop_counter+0x2f/0x2f [kvm]
Jan 16 10:58:15 Pithos kernel: [<ffffffff810ad7bc>] perf_event_create_kernel_counter+0x2e/0x12c
Jan 16 10:58:15 Pithos kernel: [<ffffffffa0157676>] reprogram_counter+0xc0/0x109 [kvm]
Jan 16 10:58:15 Pithos kernel: [<ffffffffa0157741>] reprogram_fixed_counter+0x82/0x8d [kvm]
Jan 16 10:58:15 Pithos kernel: [<ffffffffa0157929>] reprogram_idx+0x4a/0x4f [kvm]
Jan 16 10:58:15 Pithos kernel: [<ffffffffa015818b>] kvm_handle_pmu_event+0x66/0x87 [kvm]
Jan 16 10:58:15 Pithos kernel: [<ffffffffa01842dc>] ? vmx_invpcid_supported+0x1b/0x1b [kvm_intel]
Jan 16 10:58:15 Pithos kernel: [<ffffffffa01412dc>] kvm_arch_vcpu_ioctl_run+0x4f9/0xeb0 [kvm]
Jan 16 10:58:15 Pithos kernel: [<ffffffff815f7d59>] ? retint_kernel+0x1b/0x1d
Jan 16 10:58:15 Pithos kernel: [<ffffffffa014c49f>] ? em_cli+0x2d/0x2d [kvm]
Jan 16 10:58:15 Pithos kernel: [<ffffffffa0186773>] ? __vmx_load_host_state.part.53+0x125/0x12c [kvm_intel]
Jan 16 10:58:15 Pithos kernel: [<ffffffffa013c65b>] ? kvm_arch_vcpu_load+0x139/0x143 [kvm]
Jan 16 10:58:15 Pithos kernel: [<ffffffffa0133ff1>] kvm_vcpu_ioctl+0x169/0x48f [kvm]
Jan 16 10:58:15 Pithos kernel: [<ffffffff810a5b3e>] ? perf_ctx_unlock+0x20/0x24
Jan 16 10:58:15 Pithos kernel: [<ffffffffa014c49f>] ? em_cli+0x2d/0x2d [kvm]
Jan 16 10:58:15 Pithos kernel: [<ffffffff8110c316>] do_vfs_ioctl+0x367/0x421
Jan 16 10:58:15 Pithos kernel: [<ffffffff81114033>] ? __fget+0x6c/0x78
Jan 16 10:58:15 Pithos kernel: [<ffffffff8110c409>] SyS_ioctl+0x39/0x64
Jan 16 10:58:15 Pithos kernel: [<ffffffff815f71ee>] system_call_fastpath+0x12/0x71
Jan 16 10:58:15 Pithos kernel: ---[ end trace 3d8a9b1ae359c9a2 ]---


Perhaps you need to select fewer disks and lower the concurrency rate. Every selected disk is processed in parallel, and checksumming is quite processor intensive.

 

I suppose I should give that a try. I figured that with an i7 and only 2 disks I was OK, but maybe with the 3 VMs and multiple dockers it is just too much. I'll report back (probably in a few days) when I have a chance to do this.


I am new to this plugin and have a question: is it necessary to select the option "Save new hashing results to flash"? If I don't select it, where are the results saved, if at all?

 

Also, how much CPU horsepower does a file integrity check use? I realize it depends on the number of disks selected, the hashing algorithm, and the type of CPU. My system has a 3GHz Core 2 Duo with 8GB of RAM, and I have selected 4 groups of 6 disks each to run on a weekly schedule. I use my server as a backup with a few dockers. Can you speculate on how taxed my CPU would be doing an integrity check on 6 disks?



Hashing results are always stored in the extended attributes of the file. Optionally, new hashing results can be copied to flash too; this creates a daily file which can be viewed/examined, but it isn't necessary for the operation of the plugin.
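
To inspect those stored hashes from the console, something along these lines should work. The attribute name user.md5 and the file path are assumptions for illustration; the actual attribute depends on the hashing method selected in the plugin settings:

# Dump all user extended attributes of a file, including any stored checksum:
getfattr -d /mnt/disk1/Movies/example.mkv

# Or read one named attribute directly (user.md5 is a hypothetical name):
getfattr -n user.md5 --only-values /mnt/disk1/Movies/example.mkv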

 


I have an Intel Core i3 at 2.9GHz and doing 3 disks simultaneously is about the max my processor can handle. Doing 6 disks simultaneously on a Core 2 Duo is likely going to kill the processor.

 



The process seems to be CPU bound, so you probably do not want to run more parallel tasks than you have cores in your machine. Certainly I see that as soon as I increase the number of parallel tasks above the number of cores, the ETA for all the tasks starts getting longer.
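
A minimal sketch of that sizing rule at the command line; the file paths are placeholders, not plugin commands:

# Check how many cores are available:
nproc

# Then run at most that many hash jobs in parallel, e.g. two disks at once:
md5sum /mnt/disk1/some/file.mkv &
md5sum /mnt/disk2/some/file.mkv &
wait   # block until both background jobs have finished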


Install the Dynamix System Stats plugin, start building/checking one disk and look at the CPU load; if it is low, start another one, and so on.

 

With MD5 I can do four at a time on my fastest servers (2.5 to 2.7GHz dual-core Sandy Bridge and Ivy Bridge CPUs); my HP N54L microservers, on the other hand, can only handle two at a time.
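
For the "watch the load, then start another disk" approach, plain vmstat is a command-line alternative to the System Stats plugin:

# Sample CPU usage every 5 seconds, 3 times; 'us' is hashing work, 'wa' is I/O wait,
# and 'id' is the idle headroom left for starting another check:
vmstat 5 3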


 

I have an Intel Core i3 at 2.9GHz and doing 3 disks simultaneously is about the max my processor can handle. Doing 6 disks simultaneously on a Core 2 Duo is likely going to kill the processor.

 

OK, thanks for that. I have broken them down into groups of two disks now; we'll see how that goes.

 

Thanks.


I've made a new version, 2016.01.17, available, which allows setting the priority of the background processes.

 

Choices are "normal" and "low". Both nice and ionice are used to put the process in low CPU and I/O priority mode.

If I select the low option but nothing else of mine is running, will I still get the performance of the normal option?



 

I ran a quick and simple test doing md5sum on a Blu-ray rip (not using the plugin):

 

Two simultaneous md5sum runs, one with nice and ionice and the other at normal priority, using the following commands in two screen sessions:

 

Screen 1:

root@Tower:~# /usr/bin/nice -n 10 /usr/bin/ionice -c 2 -n 4 /usr/bin/md5sum /mnt/cache/8GBtestfile.mkv

 

Screen 2:

root@Tower:~# /usr/bin/md5sum /mnt/user/Movies/9GBtestfile.mkv

 

[Image: two_with_without_1.png, top output showing the two md5sum processes]

Notice the "NI" column (nice), third from left: one md5sum is 0, the other is 10. Running both without nice/ionice, they run at 100% CPU and %wa is quite high (>30%).
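
The same information is available without a screenshot; this is a generic command, not plugin output:

# Show the nice value (NI) and CPU usage of any running md5sum processes:
ps -C md5sum -o pid,ni,pcpu,cmd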

 

For time to complete with nice/ionice:

 

root@Tower:~# time /usr/bin/nice -n 10 /usr/bin/ionice -c 2 -n 4 /usr/bin/md5sum /mnt/cache/8GBtestfile.mkv
real     0m41.074s
...

 

For time to complete without nice/ionice (after dropping the buffer cache):

 

root@Tower:~# time /usr/bin/md5sum /mnt/cache/8GBtestfile.mkv
real     0m40.808s
...

 

Note: both drives used are Toshiba SATA III, with a ~160MB/s transfer of the test file from one to the other.
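
The buffer drop between runs is presumably the standard Linux page-cache flush; if so, it would look like this:

# Flush dirty pages to disk, then drop the page cache so the next run reads from disk again:
sync
echo 3 > /proc/sys/vm/drop_caches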


I have started receiving every day the emails I wrote about in this thread:

http://lime-technology.com/forum/index.php?topic=45609.0

Are they from this plugin? If they are, any idea how to stop them?

 

Can you upgrade to the latest version, 2016.01.13a? See the Plugins page -> Check for updates.

 

I'm already there, but I am not sure when I upgraded (today?).

Wait until tomorrow and I will post back.

 

I am back again,

 

I got another email today, and I was using yesterday's latest version.

I have already upgraded to the 17/01 version.

 

cron for user root /usr/bin/run-parts /etc/cron.daily 1> /dev/null


 

Console and webGui login account <**********@gmail.com>

04:52 (10 hours ago)

 

to root, bcc: me

sed: -e expression #1, char 4: unknown command: `

'

sed: -e expression #1, char 4: unknown command: `



 

OK, I may be on to something and think I solved the issue.

 

There is a new version, 2016.01.18, available; can you try this and report back?

 

OK, I upgraded already and will report back tomorrow morning, when I usually get those emails.

 

Thanks!


I just installed this plugin. I think I've got everything set correctly, but when I try to "Build" I get the following error:

Jan 18 18:08:24 Brunnhilde bunker: error: no export of file: /mnt/disk1/TM-Mini/HT-Mini.sparsebundle/com.apple.TimeMachine.SnapshotHistory.plist

Jan 18 18:08:24 Brunnhilde bunker: exported 3 files from /mnt/disk1 with mask *. Duration: 00:00:00

Jan 18 18:15:59 Brunnhilde kernel: ata2.00: exception Emask 0x10 SAct 0x7fffffff SErr 0x280100 action 0x6 frozen

Jan 18 18:15:59 Brunnhilde kernel: ata2.00: irq_stat 0x08000000, interface fatal error

 



 

Only the "no export of file" line indicates an exception: the file wasn't exported because no checksum value is present in its extended attributes. Rerun Build for disk 1 to solve this.
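
To list which files on a disk still lack a stored checksum, a sketch along these lines should work, again assuming a user.md5 attribute name (adjust it to the hashing method you selected):

# Print every file on disk1 that has no user.md5 extended attribute yet:
find /mnt/disk1 -type f -exec sh -c 'getfattr -n user.md5 "$1" >/dev/null 2>&1 || echo "$1"' _ {} \;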

 

The kernel messages are not related.

 

