Everything posted by ljm42

  1. Hi dmacias, I'm running the 2016.10.07a version of the IPMI plugin on a 6.3.0-rc1 VM, and I get this error on the console when booting the VM:

modprobe: ERROR: could not insert 'ipmi_si': No such device

Everything works fine since it uses a network connection to access IPMI, but it would be nice to suppress the console error if possible. Not urgent at all. I've got a current diagnostics over here if it helps: https://lime-technology.com/forum/index.php?topic=52462.msg506075#msg506075
  2. I've upgraded to 2016.10.09. I really like the automatic popup when you're on the RW page. The new version is definitely faster! My Windows box loses write access in about 1/10 of a second now:

9:13:57.59 - About to delete bait
9:13:57.60 - 0.txt created
9:13:57.60 - 1.txt created
9:13:57.60 - 2.txt created
9:13:57.61 - 3.txt created
9:13:57.61 - 4.txt created
9:13:57.61 - 5.txt created
9:13:57.62 - 6.txt created
9:13:57.62 - 7.txt created
9:13:57.63 - 8.txt created
9:13:57.63 - 9.txt created
9:13:57.64 - 10.txt created
9:13:57.65 - 11.txt created
9:13:57.65 - 12.txt created
9:13:57.66 - 13.txt created
9:13:57.66 - 14.txt created
9:13:57.67 - 15.txt created
9:13:57.68 - 16.txt created
9:13:57.69 - 17.txt created
9:13:57.69 - 18.txt created
9:13:57.70 - 19.txt NOT created
Access is denied.

One minor thing: it adds some lines to the syslog that don't have timestamps. I'm thinking that might confuse other tools that expect each line to start with a timestamp?

Oct 9 15:32:24 TowerVM root: ransomware protection:Starting Background Monitoring Of Bait Files
Setting up watches.
Watches established.
Oct 9 15:43:16 TowerVM emhttp: cmd: /usr/local/emhttp/plugins/dynamix/scripts/tail_log syslog

Also... without this plugin, my VM boots in 25 seconds. With the plugin installed, it is almost 3 minutes before there is a login prompt on the console or the GUI is available. Diagnostics are attached. It seems to spend a bit of time with vsftpd? Or is it trying to place the bait files before the array is online?
Oct 9 15:29:49 TowerVM root: plugin: installing: /boot/config/plugins/ransomware.bait.plg
Oct 9 15:29:49 TowerVM root: plugin: skipping: /boot/config/plugins/ransomware.bait/ransomware.bait-2016.10.09-x86_64-1.txz already exists
Oct 9 15:29:49 TowerVM root: plugin: running: /boot/config/plugins/ransomware.bait/ransomware.bait-2016.10.09-x86_64-1.txz
Oct 9 15:29:49 TowerVM root:
Oct 9 15:29:49 TowerVM root: +==============================================================================
Oct 9 15:29:49 TowerVM root: | Installing new package /boot/config/plugins/ransomware.bait/ransomware.bait-2016.10.09-x86_64-1.txz
Oct 9 15:29:49 TowerVM root: +==============================================================================
Oct 9 15:29:49 TowerVM root:
Oct 9 15:29:49 TowerVM root: Verifying package ransomware.bait-2016.10.09-x86_64-1.txz.
Oct 9 15:29:49 TowerVM root: Installing package ransomware.bait-2016.10.09-x86_64-1.txz:
Oct 9 15:29:49 TowerVM root: PACKAGE DESCRIPTION:
Oct 9 15:29:49 TowerVM root: Package ransomware.bait-2016.10.09-x86_64-1.txz installed.
Oct 9 15:29:49 TowerVM root:
Oct 9 15:29:49 TowerVM root:
Oct 9 15:29:49 TowerVM root: plugin: running: anonymous
Oct 9 15:29:49 TowerVM root: Stopping the service and deleting pre-existing bait files. This may take a bit
Oct 9 15:29:49 TowerVM vsftpd[2814]: connect from 127.0.0.1 (127.0.0.1)
Oct 9 15:29:50 TowerVM vsftpd[2820]: connect from 127.0.0.1 (127.0.0.1)
Oct 9 15:29:52 TowerVM vsftpd[2830]: connect from 127.0.0.1 (127.0.0.1)
Oct 9 15:29:55 TowerVM vsftpd[2844]: connect from 127.0.0.1 (127.0.0.1)
Oct 9 15:29:59 TowerVM vsftpd[2862]: connect from 127.0.0.1 (127.0.0.1)
Oct 9 15:30:04 TowerVM vsftpd[2884]: connect from 127.0.0.1 (127.0.0.1)
Oct 9 15:30:10 TowerVM vsftpd[2910]: connect from 127.0.0.1 (127.0.0.1)
Oct 9 15:30:17 TowerVM vsftpd[2940]: connect from 127.0.0.1 (127.0.0.1)
Oct 9 15:30:25 TowerVM vsftpd[2974]: connect from 127.0.0.1 (127.0.0.1)
Oct 9 15:30:34 TowerVM vsftpd[3012]: connect from 127.0.0.1 (127.0.0.1)
Oct 9 15:30:44 TowerVM vsftpd[3054]: connect from 127.0.0.1 (127.0.0.1)
Oct 9 15:30:54 TowerVM vsftpd[3096]: connect from 127.0.0.1 (127.0.0.1)
Oct 9 15:31:04 TowerVM vsftpd[3138]: connect from 127.0.0.1 (127.0.0.1)
Oct 9 15:31:14 TowerVM vsftpd[3180]: connect from 127.0.0.1 (127.0.0.1)
Oct 9 15:31:24 TowerVM vsftpd[3222]: connect from 127.0.0.1 (127.0.0.1)
Oct 9 15:31:34 TowerVM vsftpd[3264]: connect from 127.0.0.1 (127.0.0.1)
Oct 9 15:31:44 TowerVM vsftpd[3306]: connect from 127.0.0.1 (127.0.0.1)
Oct 9 15:31:54 TowerVM vsftpd[3348]: connect from 127.0.0.1 (127.0.0.1)
Oct 9 15:32:04 TowerVM vsftpd[3390]: connect from 127.0.0.1 (127.0.0.1)
Oct 9 15:32:14 TowerVM vsftpd[3432]: connect from 127.0.0.1 (127.0.0.1)
Oct 9 15:32:14 TowerVM root: ransomware protection:Ransomware protection service not running
Oct 9 15:32:14 TowerVM root: Restarting the background service
Oct 9 15:32:14 TowerVM root: --------------------------------
Oct 9 15:32:14 TowerVM root: Ransomware Protection Installed
Oct 9 15:32:14 TowerVM root: This plugin requires inotify-tools (available within the NerdPack plugin) to operate
Oct 9 15:32:14 TowerVM root: Copyright 2016, Andrew Zawadzki
Oct 9 15:32:14 TowerVM root: Version: 2016.10.09
Oct 9 15:32:14 TowerVM root: --------------------------------
Oct 9 15:32:14 TowerVM root: plugin: installed
Oct 9 15:32:14 TowerVM root: Starting go script

towervm-diagnostics-20161009-1544.zip
  3. LOL You rock. I was wondering about that too. If we can assume the clients will auto-reconnect, then it might work to call smbstatus after restarting smb? Not sure. I wonder why CrashPlan didn't think of that? They make the user update it manually: https://support.code42.com/CrashPlan/4/Troubleshooting/Linux_Real-Time_File_Watching_Errors I'd much rather do it your way. Would it make sense to use cron to restart the service every night? I really like the idea to exclude folders too. Thanks again for all of the thought you are putting into this!
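A minimal sketch of the smbstatus idea above (the script name, log path, and cron schedule are hypothetical; it assumes Samba's smbstatus is on the PATH, as it is on unRAID):

```shell
#!/bin/sh
# Hypothetical helper: before restarting smb, record which clients are
# connected so a triggered bait can be traced back to a machine.
# LOG is illustrative; override it via the environment if desired.
LOG="${LOG:-/tmp/smb-clients.log}"
echo "=== SMB clients at $(date) ===" >> "$LOG"
if command -v smbstatus >/dev/null 2>&1; then
    smbstatus -b >> "$LOG" 2>&1   # brief listing: PID, username, group, machine
else
    echo "smbstatus not available" >> "$LOG"
fi
# The nightly-restart idea could be a cron entry along these lines:
#   0 3 * * * /etc/rc.d/rc.samba restart >/dev/null 2>&1
```

This only captures a snapshot, so it would need to run right before the service is stopped to be useful for blame assignment.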
  4. Hi Squid, I installed this on a VM and then deleted one of the bait files. Everything went read-only pretty quickly, very nice! Restoring to normal mode was easy too. I decided to see just *how quickly* the server was able to switch to read-only mode. Here is a batch file to run from Windows. It deletes a bait file from a test directory and then tries creating a bunch of test files:

@echo off
REM for simplicity, don't use spaces in TESTDIR
REM TESTDIR should be empty, other than the bait files
SET TESTDIR=\\TOWERVM\DataVM
SET MAXFILES=75
ECHO %TESTDIR%

REM clean up previous run
SET /a "x = 0"
:clean1
if %x% leq %MAXFILES% (
  IF EXIST %TESTDIR%\%x%.txt ( DEL %TESTDIR%\%x%.txt )
  SET /a "x = x + 1"
  GOTO :clean1
)
DIR /O:N %TESTDIR%

REM if "clean" was passed in, exit before beginning next run
IF %1.==clean. ( GOTO end )

REM delete the bait
ECHO %TIME% - About to delete bait
DEL %TESTDIR%\SquidBait-DO_NOT_DELETE.jpg

REM loop through and create as many files as possible
SET /a "x = 0"
:while1
if %x% leq %MAXFILES% (
  ECHO hi > %TESTDIR%\%x%.txt
  IF EXIST %TESTDIR%\%x%.txt (
    ECHO %TIME% - %x%.txt created
  ) ELSE (
    ECHO %TIME% - %x%.txt NOT created
  )
  SET /a "x = x + 1"
  GOTO :while1
)
DIR /O:N %TESTDIR%
:end

I ran it multiple times, and on my system, it is able to write around 50 files after deleting the bait:

C:\stuff\ransomewaredetect>test_rw_prot.bat
\\TOWERVM\DataVM
 Volume in drive \\TOWERVM\DataVM is DataVM
 Volume Serial Number is 6EE2-FEAD
 Directory of \\TOWERVM\DataVM
10/07/2016 12:33 PM <DIR> .
10/07/2016 09:23 AM <DIR> ..
10/07/2016 10:53 AM 10,807 SquidBait-DO_NOT_DELETE.docx
10/07/2016 10:53 AM 6,685 SquidBait-DO_NOT_DELETE.jpg
10/07/2016 10:53 AM 216,611 SquidBait-DO_NOT_DELETE.pdf
10/07/2016 10:53 AM 8,947 SquidBanking-DO_NOT_DELETE.xlsx
10/07/2016 10:53 AM <DIR> top level
 4 File(s) 243,050 bytes
 3 Dir(s) 4,133,871,616 bytes free
12:34:32.49 - About to delete bait
12:34:32.49 - 0.txt created
12:34:32.49 - 1.txt created
12:34:32.50 - 2.txt created
12:34:32.50 - 3.txt created
12:34:32.50 - 4.txt created
12:34:32.51 - 5.txt created
12:34:32.51 - 6.txt created
12:34:32.51 - 7.txt created
12:34:32.52 - 8.txt created
12:34:32.52 - 9.txt created
12:34:32.52 - 10.txt created
12:34:32.53 - 11.txt created
12:34:32.53 - 12.txt created
12:34:32.54 - 13.txt created
12:34:32.54 - 14.txt created
12:34:32.54 - 15.txt created
12:34:32.55 - 16.txt created
12:34:32.55 - 17.txt created
12:34:32.55 - 18.txt created
12:34:32.56 - 19.txt created
12:34:32.56 - 20.txt created
12:34:32.57 - 21.txt created
12:34:32.57 - 22.txt created
12:34:32.57 - 23.txt created
12:34:32.58 - 24.txt created
12:34:32.58 - 25.txt created
12:34:32.59 - 26.txt created
12:34:32.59 - 27.txt created
12:34:32.59 - 28.txt created
12:34:32.60 - 29.txt created
12:34:32.60 - 30.txt created
12:34:32.60 - 31.txt created
12:34:32.61 - 32.txt created
12:34:32.61 - 33.txt created
12:34:32.61 - 34.txt created
12:34:32.62 - 35.txt created
12:34:32.62 - 36.txt created
12:34:32.63 - 37.txt created
12:34:32.63 - 38.txt created
12:34:32.63 - 39.txt created
12:34:32.64 - 40.txt created
12:34:32.64 - 41.txt created
12:34:32.65 - 42.txt created
12:34:32.65 - 43.txt created
12:34:32.65 - 44.txt created
12:34:32.66 - 45.txt created
12:34:32.66 - 46.txt created
12:34:32.67 - 47.txt created
12:34:32.67 - 48.txt created
12:34:32.67 - 49.txt created
12:34:32.68 - 50.txt created
12:34:32.68 - 51.txt created
12:34:32.69 - 52.txt created
12:34:32.69 - 53.txt created
12:34:32.70 - 54.txt created
An unexpected network error occurred.
12:34:32.70 - 55.txt NOT created
Access is denied.
12:34:36.36 - 56.txt NOT created
Access is denied.
12:34:37.20 - 57.txt NOT created
Access is denied.
12:34:37.21 - 58.txt NOT created
Access is denied.
12:34:37.22 - 59.txt NOT created
Access is denied.
12:34:37.23 - 60.txt NOT created
Access is denied.
12:34:37.24 - 61.txt NOT created
Access is denied.
12:34:37.25 - 62.txt NOT created
Access is denied.
12:34:37.26 - 63.txt NOT created
Access is denied.
12:34:37.26 - 64.txt NOT created
Access is denied.
12:34:37.27 - 65.txt NOT created
Access is denied.
12:34:37.28 - 66.txt NOT created
Access is denied.
12:34:37.28 - 67.txt NOT created
Access is denied.
12:34:37.29 - 68.txt NOT created
Access is denied.
12:34:37.29 - 69.txt NOT created
Access is denied.
12:34:37.30 - 70.txt NOT created
Access is denied.
12:34:37.30 - 71.txt NOT created
Access is denied.
12:34:37.30 - 72.txt NOT created
Access is denied.
12:34:37.31 - 73.txt NOT created
Access is denied.
12:34:37.31 - 74.txt NOT created
Access is denied.
12:34:37.31 - 75.txt NOT created
 Volume in drive \\TOWERVM\DataVM is DataVM
 Volume Serial Number is 6EE2-FEAD
 Directory of \\TOWERVM\DataVM
10/07/2016 12:34 PM <DIR> .
10/07/2016 09:23 AM <DIR> ..
10/07/2016 12:34 PM 5 0.txt
10/07/2016 12:34 PM 5 1.txt
10/07/2016 12:34 PM 5 10.txt
10/07/2016 12:34 PM 5 11.txt
10/07/2016 12:34 PM 5 12.txt
10/07/2016 12:34 PM 5 13.txt
10/07/2016 12:34 PM 5 14.txt
10/07/2016 12:34 PM 5 15.txt
10/07/2016 12:34 PM 5 16.txt
10/07/2016 12:34 PM 5 17.txt
10/07/2016 12:34 PM 5 18.txt
10/07/2016 12:34 PM 5 19.txt
10/07/2016 12:34 PM 5 2.txt
10/07/2016 12:34 PM 5 20.txt
10/07/2016 12:34 PM 5 21.txt
10/07/2016 12:34 PM 5 22.txt
10/07/2016 12:34 PM 5 23.txt
10/07/2016 12:34 PM 5 24.txt
10/07/2016 12:34 PM 5 25.txt
10/07/2016 12:34 PM 5 26.txt
10/07/2016 12:34 PM 5 27.txt
10/07/2016 12:34 PM 5 28.txt
10/07/2016 12:34 PM 5 29.txt
10/07/2016 12:34 PM 5 3.txt
10/07/2016 12:34 PM 5 30.txt
10/07/2016 12:34 PM 5 31.txt
10/07/2016 12:34 PM 5 32.txt
10/07/2016 12:34 PM 5 33.txt
10/07/2016 12:34 PM 5 34.txt
10/07/2016 12:34 PM 5 35.txt
10/07/2016 12:34 PM 5 36.txt
10/07/2016 12:34 PM 5 37.txt
10/07/2016 12:34 PM 5 38.txt
10/07/2016 12:34 PM 5 39.txt
10/07/2016 12:34 PM 5 4.txt
10/07/2016 12:34 PM 5 40.txt
10/07/2016 12:34 PM 5 41.txt
10/07/2016 12:34 PM 5 42.txt
10/07/2016 12:34 PM 5 43.txt
10/07/2016 12:34 PM 5 44.txt
10/07/2016 12:34 PM 5 45.txt
10/07/2016 12:34 PM 5 46.txt
10/07/2016 12:34 PM 5 47.txt
10/07/2016 12:34 PM 5 48.txt
10/07/2016 12:34 PM 5 49.txt
10/07/2016 12:34 PM 5 5.txt
10/07/2016 12:34 PM 5 50.txt
10/07/2016 12:34 PM 5 51.txt
10/07/2016 12:34 PM 5 52.txt
10/07/2016 12:34 PM 5 53.txt
10/07/2016 12:34 PM 5 54.txt
10/07/2016 12:34 PM 0 55.txt
10/07/2016 12:34 PM 5 6.txt
10/07/2016 12:34 PM 5 7.txt
10/07/2016 12:34 PM 5 8.txt
10/07/2016 12:34 PM 5 9.txt
10/07/2016 10:53 AM 10,807 SquidBait-DO_NOT_DELETE.docx
10/07/2016 10:53 AM 216,611 SquidBait-DO_NOT_DELETE.pdf
10/07/2016 10:53 AM 8,947 SquidBanking-DO_NOT_DELETE.xlsx
10/07/2016 10:53 AM <DIR> top level
 59 File(s) 236,640 bytes
 3 Dir(s) 4,132,155,392 bytes free

C:\stuff\ransomewaredetect>

Note that it only takes 2 tenths of a second to cut write access!
Since any ransomware would need time to encrypt the individual files, it is very unlikely that much damage could be done after the bait is triggered. Thanks for building this! On the placement of bait files - right now there are options for "root only" and "all folders"; could we have some options in between? I'm thinking "top level" and "second level folders" (i.e. one level down from root and two levels down from root). For similar reasons as excluding appdata, I don't want to cause problems for inotify, considering I also have Dynamix File Integrity, CrashPlan, and Plex all looking for changes in files. Well, plus I don't know how Plex, MediaMonkey, and other apps will react to all these bait files. And they look messy. (As an aside, is there anything FCP can do to help make sure /proc/sys/fs/inotify/max_user_watches is set high enough for everything that is going on?)
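On the max_user_watches question, a rough sketch of the kind of check FCP-style tooling could run (the counting method via /proc fdinfo is a best-effort assumption, not anyone's actual code):

```shell
#!/bin/sh
# Compare the configured inotify watch limit against the watches currently
# in use, and warn when headroom is low. Counting watches by scanning
# /proc/*/fdinfo is an approximation and needs permission to read other
# processes' fdinfo entries.
LIMIT=$(cat /proc/sys/fs/inotify/max_user_watches)
USED=$(find /proc/[0-9]*/fdinfo -type f 2>/dev/null \
        | xargs -r grep -c '^inotify' 2>/dev/null \
        | awk -F: '{s += $NF} END {print s + 0}')
echo "inotify watches: $USED used of $LIMIT"
if [ "$USED" -gt $((LIMIT * 8 / 10)) ]; then
    echo "WARNING: over 80% of max_user_watches in use; consider raising it, e.g.:"
    echo "  sysctl fs.inotify.max_user_watches=524288"
fi
```

Each inotify file descriptor's fdinfo file lists one "inotify wd:" line per watch, which is what the grep counts.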
  5. If you can access the CP GUI, then by all means use the GUI to change the memory usage. But some of us were stuck in a crash loop, where the app GUI wouldn't stay up long enough to make changes.
  6. I figured out why CrashPlan keeps restarting - it needs more memory. I think the 4.8 upgrade wiped out our previous adjustments to run.conf. This article explains the problem; unfortunately, it no longer contains the details on how to solve it, it just says to contact support. So the fix is:

- Shut down the CP docker
- SSH to your server
- Type: cd /mnt/cache/appdata/CrashPlan/bin (or wherever your CP appdata files are)
- Type: nano run.conf
- On the line that starts with SRV_JAVA_OPTS, change "-Xmx1024m" to "-Xmx2048m"
- Press CTRL-O, CTRL-X
- Type: exit
- Start the CP docker

This doubles the amount of RAM available to CP from 1 GB to 2 GB. The article above explains what CP recommends; you may need a different number. Note: one thing I do is have multiple backup sets; I *think* (not positive) that this reduces the memory needs of CP. Unfortunately, it looks like the run.conf file can revert to defaults when CP upgrades (and maybe at other times too). gfjardim - would you consider adding a new env variable for "ram" which automatically updates the run.conf file when the container starts?
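The env-variable idea could work along these lines (a sketch only: CP_RAM and RUN_CONF are made-up names, not real variables of the gfjardim container; the SRV_JAVA_OPTS/-Xmx pattern matches the stock run.conf described above):

```shell
#!/bin/sh
# Hypothetical container-start hook: rewrite the -Xmx value on the
# SRV_JAVA_OPTS line of run.conf so a CrashPlan upgrade can't silently
# revert the memory setting.
CP_RAM="${CP_RAM:-2048}"   # desired heap size in MB
RUN_CONF="${RUN_CONF:-/mnt/cache/appdata/CrashPlan/bin/run.conf}"
if [ -f "$RUN_CONF" ]; then
    # Replace whatever -Xmx value is currently on the SRV_JAVA_OPTS line
    sed -i "/^SRV_JAVA_OPTS/s/-Xmx[0-9]*m/-Xmx${CP_RAM}m/" "$RUN_CONF"
fi
```

Running it at every container start would make the setting self-healing rather than a one-time manual edit.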
  7. See bonienl's message two comments ago, should help with exclusions.
  8. The plugin file doesn't install freeipmi because the logic was for 6.2. You can manually installpkg freeipmi-1.5.1 from the plugin directory. I have a new version, but I was just double-checking before I released it. I added some new features for mapping the fans to the correct ipmi-raw positions and changed some underlying code. I'm looking through it now. I'll probably go ahead and release it.

Thanks! installpkg was all it needed.

Edit - the updated plugin works great too, thanks!
  9. Hey dmacias, I saw you tweaked NerdPack for 6.3.0-rc1 - does the IPMI plugin work for you on that version of unRAID? IPMI isn't returning any data for me, either for the sensors or the event log. I see this error on the console:

sh: ipmi-fru: command not found

but I'm not sure why it wouldn't be installed? I don't see anything obvious in the syslog, but I've attached my diagnostics in case anything stands out.

tower-diagnostics-20161005-2015.zip
  10. Hey gfjardim, In another thread Tom wrote: Perhaps the preclear script could detect SSDs and do this rather than the standard preclear?
  11. This sounds incredible Squid! Before stopping SMB/AFS/NFS, can the plugin capture anything about who is connected and what IP they are connected from? That would help diagnose which client computer is responsible for triggering the bait. Along these lines, is there a way to log when any file is added/changed/deleted via SMB/AFS/NFS? Once the bait is triggered, a log like this would be invaluable in tracking down what was affected. I really like the idea of restarting network services in read-only mode, particularly due to the risk of users deleting a bait file because they don't know what it is. A DOS due to a false positive is going to be pretty disruptive for a small business; read-only mode would allow them to limp along until they get help.
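The add/change/delete logging suggested above could be sketched with inotifywait from inotify-tools (which this plugin already requires); the share path, log file, and event list here are illustrative assumptions, not the plugin's actual code:

```shell
#!/bin/sh
# Sketch: background watcher that logs every create/modify/delete/move
# under the shares, giving a timeline of what was touched after the bait
# was triggered.
SHARE="${SHARE:-/mnt/user}"
LOG="${LOG:-/tmp/share-file-activity.log}"
if command -v inotifywait >/dev/null 2>&1; then
    # -m: monitor forever; -r: recurse; one timestamped line per event
    inotifywait -m -r \
        -e create -e modify -e delete -e moved_to -e moved_from \
        --timefmt '%F %T' --format '%T %w%f %e' \
        "$SHARE" >> "$LOG" 2>&1 &
    WATCH_PID=$!
    echo "watching $SHARE, logging to $LOG (pid $WATCH_PID)"
else
    echo "inotify-tools not installed; cannot watch $SHARE"
fi
```

Note that recursive watches on a large array consume one inotify watch per directory, which feeds back into the max_user_watches concern raised elsewhere in this thread.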
  12. Thanks dmacias! No more error messages on bootup
  13. Thanks! I'll be able to do that tonight Edit: yep that took care of it. Thank you very much!
  14. WOW that was fast! I can confirm the error is gone, thanks bonienl!
  15. Hi bonienl, When my system boots it throws this error on the console:

cpuload started
grep: /proc/mdcmd: No such file or directory   <-- this is the one I'm asking about
ipmifan[4092]: Your motherboard is not supported yet

The "grep: /proc/mdcmd" line is the one I'm asking about; I included the others for context. A current diagnostics is here if it helps: https://lime-technology.com/forum/index.php?topic=52052.msg499727#msg499727 I am on 6.2 final. Not sure when this started; it might have been in the 6.2 betas. I assume this is due to a plugin, and the only plugins I have that reference mdcmd are dynamix-related:

root@Tower:/usr/local/emhttp/plugins# grep -ri "mdcmd" *
dynamix/include/ColorCoding.php: 'text' => [': (mdcmd|md|super\.dat|unraid system|unregistered|running, size)\b','key detected, registered']
dynamix/include/update.parity.php: $cron = "# Generated parity check schedule:\n$time $dotm $month $day $term/usr/local/sbin/mdcmd check $write &> /dev/null\n\n";
dynamix.cache.dirs/scripts/cache_dirs:# Version 2.0.5 - Updated for unRaid 6.1.2 new location of mdcmd (used to find idle-times of disks)
dynamix.cache.dirs/scripts/cache_dirs: mdcmd_cmd=/usr/local/sbin/mdcmd
dynamix.cache.dirs/scripts/cache_dirs: last=$($mdcmd_cmd status | grep -a rdevLastIO | grep -v '=0')
dynamix.file.integrity/include/update.watcher.php: if ($new['parity']) $text[] = "[[ \$(grep -Po '^mdResync=\K\S+' /proc/mdcmd) -ne 0 ]] && exit 0";
unRAIDServer/unRAIDServer.plg: sed -i 's|/root/mdcmd|/usr/local/sbin/mdcmd|g' /boot/config/plugins/dynamix/parity-check.cron &> /dev/null

root@Tower:/boot/config# grep -ri "mdcmd" *
plugins/dynamix/parity-check.cron:0 22 1 * * /usr/local/sbin/mdcmd check &> /dev/null
plugins/dynamix.cache.dirs.plg: if [[ -e /proc/mdcmd ]]; then
plugins/dynamix.cache.dirs.plg: grep -Po "^$key=\K.*" /proc/mdcmd
plugins/dynamix.file.integrity/integrity-check.sh:[[ $(grep -Po '^mdResync=\K\S+' /proc/mdcmd) -ne 0 ]] && exit 0
plugins/dynamix.file.integrity.plg:[[ ! -f $conf || $(grep -Po '^mdState=\K.*' /proc/mdcmd) != STARTED || ! -f $path/disks.ini ]] && exit 0
plugins/dynamix.file.integrity.plg:array=$(grep -Po '^mdState=\K.*' /proc/mdcmd)

Of those, the only thing that stands out is that some of them access /proc/mdcmd directly instead of using /usr/local/sbin/mdcmd. Not sure that would change anything though. Do you see the same error on your system? Any idea how to track it down?
  16. Hey dmacias, I have the 2016.09.16 version of IPMI support on unRAID 6.2 final. When I boot my system I get these error messages on the console:

ipmifan[4092]: Your motherboard is not supported yet
ipmifan[4099]: Your motherboard is not supported yet

Is there any way to disable these error messages on the console? Since I have fan control disabled, I don't really care that it isn't supported. BTW, the web interface to fan control sure makes it look like my board is supported: it enumerates my fans and provides settings for all of them, and it looks like I could enable Fan Control if I wanted to. I have an ASRock E3C226D2I. I posted a recent diagnostics here if you need one: https://lime-technology.com/forum/index.php?topic=52052.msg499727#msg499727
  17. Hey dlandon, I have the 2016.09.16 version of Tips and Tweaks on 6.2 final. When I start the array I get this error on the console:

/usr/local/emhttp/plugins/tips.and.tweaks/scripts/rc.tweaks: line 69: [: =: unary operator expected

Line 69 is:

if [ $POWERDOWN = "yes" ]; then

but I don't see any other instances of POWERDOWN in the script?
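For what it's worth, that error is the classic symptom of an unset variable in an unquoted test: with POWERDOWN empty, line 69 expands to [ = "yes" ], which is malformed. A quick illustration of the failure mode and the usual fixes (this is not the actual rc.tweaks code):

```shell
#!/bin/sh
# With POWERDOWN unset or empty, [ $POWERDOWN = "yes" ] becomes
# [ = "yes" ], producing "unary operator expected". Quoting the
# expansion (or supplying a default) keeps the test well-formed.
POWERDOWN=""   # simulate the setting never being written to the config

if [ "$POWERDOWN" = "yes" ]; then          # quoted: safe when empty
    echo "powerdown enabled"
else
    echo "powerdown not configured"
fi

if [ "${POWERDOWN:-no}" = "yes" ]; then    # alternative: default value
    echo "powerdown enabled"
fi
```

So the likely fix in the script is simply quoting "$POWERDOWN" (or initializing it) rather than hunting for other references.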
  18. I'm not sure what that error code means, but once your docker.img has filled up, nothing works right. I'd delete your CP docker at a minimum, and consider deleting/recreating the docker.img again. I don't know of any reason why backing up unRAID to the cloud would fill up your docker.img, but incoming backups definitely will if CP hasn't been configured. It sounds like the incoming backups are starting automatically? In that case you should go to those other computers and shut them off or disable backups until you can get CP on unRAID configured.

For me, it is easier when I can reference /mnt/user/[whatever] inside the container, the same as I would outside the container, so I would map /mnt/user/Backup to /mnt/user/Backup (it doesn't matter whether you use a capital or lowercase 'b', with or without the 's', as long as you are consistent).

Once you're happy with the mapping and the container starts again, and you are sure there are no incoming backups, try backing up unRAID to the cloud. You should see various files added to /mnt/cache/appdata/CrashPlan. Keep an eye on your docker.img and be sure it doesn't fill up. Then look for the "Default backup archive location" in the CP settings and set it to /mnt/user/Backup (or whatever directory you mapped) as described in the first post. Also set the default location for each friend as described in the first post.

At this point it should be safe to turn on the other computers and have them start backing up. Confirm that files are going to /mnt/user/Backup and your docker.img is not filling up.
  19. What kind of backup are you doing, unraid to the cloud, or another computer to unraid? If it is another computer to unraid, have you had a chance to read the first post? You need to configure CrashPlan to store the backups on the array and not in your docker.img. Setting the drive mapping is only part of it, you have to actually load the CrashPlan interface and tell it where to store the backup.
  20. Yeah, read the Q&A in the first post
  21. Random idea for this script... since you pretty much have to run it in screen (or on the console) maybe you could detect when it is not in screen and display a warning or even force it to load in screen (assuming screen is installed)? I found some potentially useful code here: https://unix.stackexchange.com/questions/162133/run-script-in-a-screen
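A rough sketch of that screen check, based on the approach in the linked Stack Exchange thread (the "preclear" session name is made up). screen sets the STY variable in its child processes, so an empty STY means the script is running outside screen:

```shell
#!/bin/sh
# Sketch: warn (or optionally re-exec) when the script is not running
# under screen. The re-exec line is left commented out here.
if [ -z "${STY:-}" ]; then
    if command -v screen >/dev/null 2>&1; then
        echo "WARNING: not running inside screen; a dropped SSH session will kill this script."
        # exec screen -S preclear -m "$0" "$@"   # optional: relaunch inside screen
    else
        echo "WARNING: screen is not installed; consider installing it before long runs."
    fi
else
    echo "running inside screen session $STY"
fi
```

The exec variant is safe against looping because the relaunched copy sees STY set and takes the other branch.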
  22. Maybe there's a bug. What's the SERVER in speedtest.cfg in the plugin config folder on the flash drive?

I actually don't think it is a bug; I think the server that I was using just isn't available any more. Unfortunately, I overwrote the "bad" value by choosing a new one from the dropdown, so I can't really tell you what it was before. This is my current value, which works fine:

SERVER="9383"
  23. Interesting. I just checked my history and I have no results for the last few days. I ran a manual check and saw this error:

Internet bandwidth test started
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from XFINITY...
Invalid server ID

Any chance you can detect when an "Invalid server ID" has been specified and switch it back to automatic mode?
  24. LOL I guess not. Well, I started a parity check and was seeing numbers hover around 157 MB/s, so it doesn't seem like a standard parity check started off slow like we anticipated. I then changed my settings as you suggested:

nr_requests 128 -> 8
md_sync_window 512 -> 1536
md_sync_thresh 192 -> 1535
md_num_stripes 1408 -> 3072

And started a parity check and was seeing numbers a bit higher, about 160 MB/s. I cancelled that and ran the short test using v4.0b3. The baseline with the new numbers was still a bit lower than I would have expected, but not as bad as before. Actually, *all* of the numbers were slightly lower than before (none of them beat 148), but perhaps that is the difference between a 30 second test and a 5 minute test. Or maybe I should have rebooted between tests?

unRAID Tunables Tester v4.0b3 by Pauven (for unRAID v6.2)
Tunables Report produced Thu Aug 25 19:20:21 PDT 2016
Run on server: Tower
Short Automatic Parity Sync Test

Current Values: md_num_stripes=3072, md_sync_window=1536, md_sync_thresh=1535
Global nr_requests=8
sdl nr_requests=8
sdi nr_requests=8
sdj nr_requests=8
sdk nr_requests=8

--- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 30sec Duration)---
Test | RAM | stripes | window | reqs | thresh | MB/s
-------------------------------------------------------
   1 |  67 |    3072 |   1536 |    8 |   1535 | 121.1

--- FULLY AUTOMATIC nr_requests TEST 1 (4 Sample Points @ 60sec Duration)---
Test | num_stripes | sync_window | nr_requests | sync_thresh | Speed
---------------------------------------------------------------------------
   1 |        1536 |         768 |         128 |         767 | 115.7 MB/s
   2 |        1536 |         768 |         128 |         384 | 144.8 MB/s
   3 |        1536 |         768 |           8 |         767 | 145.6 MB/s
   4 |        1536 |         768 |           8 |         384 | 143.5 MB/s
Fastest vals were nr_reqs=8 and sync_thresh=99% of sync_window at 145.6 MB/s
This nr_requests value will be used for the next test.

--- FULLY AUTOMATIC TEST PASS 1a (Rough - 4 Sample Points @ 30sec Duration)---
Test | RAM | stripes | window | reqs | thresh | MB/s  | thresh | MB/s
------------------------------------------------------------------------
   1 |  16 |     768 |    384 |    8 |    383 | 143.9 |    192 | 143.3
   2 |  28 |    1280 |    640 |    8 |    639 | 145.9 |    320 | 143.9
   3 |  39 |    1792 |    896 |    8 |    895 | 146.6 |    448 | 144.0
   4 |  50 |    2304 |   1152 |    8 |   1151 | 147.9 |    576 | 145.4

--- FULLY AUTOMATIC TEST PASS 1c (Rough - 5 Sample Points @ 30sec Duration)---
Test | RAM | stripes | window | reqs | thresh | MB/s  | thresh | MB/s
------------------------------------------------------------------------
   1 |  53 |    2432 |   1216 |    8 |   1215 | 148.4 |    608 | 144.3
   2 |  64 |    2944 |   1472 |    8 |   1471 | 148.1 |    736 | 147.1
   3 |  76 |    3456 |   1728 |    8 |   1727 | 148.2 |    864 | 148.4
   4 |  87 |    3968 |   1984 |    8 |   1983 | 148.7 |    992 | 147.4
   5 |  98 |    4480 |   2240 |    8 |   2239 | 148.9 |   1120 | 148.5

--- FULLY AUTOMATIC TEST PASS 1d (Rough - 5 Sample Points @ 30sec Duration)---
Test | RAM | stripes | window | reqs | thresh | MB/s  | thresh | MB/s
------------------------------------------------------------------------
   1 | 101 |    4608 |   2304 |    8 |   2303 | 149.0 |   1152 | 148.3
   2 | 112 |    5120 |   2560 |    8 |   2559 | 148.7 |   1280 | 148.8
   3 | 124 |    5632 |   2816 |    8 |   2815 | 147.7 |   1408 | 148.3
   4 | 135 |    6144 |   3072 |    8 |   3071 | 148.0 |   1536 | 146.1
   5 | 146 |    6656 |   3328 |    8 |   3327 | 148.5 |   1664 | 148.4

--- END OF SHORT AUTO TEST FOR DETERMINING IF YOU SHOULD RUN THE NORMAL AUTO ---
If the speeds changed with different values you should run a NORMAL AUTO test.
Completed: 0 Hrs 19 Min 5 Sec.

NOTE: Use the smallest set of values that produce good results. Larger values
increase server memory use, and may cause stability issues with unRAID,
especially if you have any add-ons or plug-ins installed.

System Info: Tower
unRAID version 6.2.0-rc4
md_num_stripes=3072
md_sync_window=1536
md_sync_thresh=1535
nr_requests=8 (Global Setting)
sbNumDisks=5
CPU: Intel(R) Xeon(R) CPU E3-1240 v3 @ 3.40GHz
RAM: 16GiB System Memory

Outputting lshw information for Drives and Controllers:
H/W path        Device    Class    Description
======================================================
/0/100/1f.2               storage  8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode]
/0/1            scsi0     storage
/0/1/0.0.0      /dev/sda  disk     16GB Cruzer Fit
/0/2            scsi1     storage
/0/2/0.0.0      /dev/sdb  disk     USB3.0 CRW-CF/MD
/0/2/0.0.0/0    /dev/sdb  disk
/0/2/0.0.1      /dev/sdc  disk     USB3.0 CRW-SM/xD
/0/2/0.0.1/0    /dev/sdc  disk
/0/2/0.0.2      /dev/sdd  disk     USB3.0 CRW-SD
/0/2/0.0.2/0    /dev/sdd  disk
/0/2/0.0.3      /dev/sde  disk     USB3.0 CRW-MS
/0/2/0.0.3/0    /dev/sde  disk
/0/2/0.0.4      /dev/sdf  disk     USB3.0 CRW-SD/MS
/0/2/0.0.4/0    /dev/sdf  disk
/0/3            scsi6     storage
/0/3/0.0.0      /dev/sdk  disk     4TB ST4000VN000-1H41
/0/4            scsi8     storage
/0/4/0.0.0      /dev/sdl  disk     4TB ST4000VN000-1H41
/0/5            scsi2     storage
/0/5/0.0.0      /dev/sdg  disk     15GB Patriot Memory
/0/5/0.0.0/0    /dev/sdg  disk     15GB
/0/6            scsi3     storage
/0/6/0.0.0      /dev/sdh  disk     512GB Samsung SSD 850
/0/9            scsi4     storage
/0/9/0.0.0      /dev/sdi  disk     4TB ST4000VN000-1H41
/0/a            scsi5     storage
/0/a/0.0.0      /dev/sdj  disk     4TB ST4000VN000-1H41

Array Devices:
Disk0 sdl is a Parity drive named parity
Disk1 sdi is a Data drive named disk1
Disk2 sdj is a Data drive named disk2
Disk3 sdk is a Data drive named disk3

Outputting free low memory information...
          total      used      free    shared  buff/cache  available
Mem:   16464376    178796  15181456    484652     1104124   15374096
Low:   16464376   1282920  15181456
High:         0         0         0
Swap:          0         0         0

*** END OF REPORT ***

I rebooted and started a full parity check. We'll see how that compares.

Update... Wow, it started off around 160 MB/s, but I'm 20 minutes in and it has consistently been 170-175! Can't wait to see the final numbers.

Update 2... it must have really slowed down at the end. The final was an improvement, but it only knocked about 20 minutes off my previous best:

2016-08-26, 04:06:57   8 hr, 12 min, 31 sec   135.4 MB/s   OK
2016-08-01, 08:43:21   8 hr, 43 min, 20 sec   127.4 MB/s   OK
2016-07-01, 08:32:08   8 hr, 32 min, 6 sec    130.2 MB/s   OK
2016-06-01, 08:58:34   8 hr, 58 min, 32 sec   123.8 MB/s   OK
  25. Here are the results on my boring server.

unRAID Tunables Tester v4.0b2 by Pauven (for unRAID v6.2)
Tunables Report produced Wed Aug 24 19:38:43 PDT 2016
Run on server: Tower
Normal Automatic Parity Sync Test

NOTE: Use the smallest set of values that produce good results. Larger values
increase server memory use, and may cause stability issues with unRAID,
especially if you have any add-ons or plug-ins installed.

Current Values: md_num_stripes=1408, md_sync_window=512, md_sync_thresh=192
Global nr_requests=128
sdl nr_requests=128
sdi nr_requests=128
sdj nr_requests=128
sdk nr_requests=128

--- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 5min Duration)---
Test | RAM | num_stripes | sync_window | nr_reqs | sync_thresh | Speed
-----------------------------------------------------------------------------
   1 |  31 |        1408 |         512 |     128 |         192 | 63.7 MB/s

--- FULLY AUTOMATIC nr_requests TEST 1 (4 Sample Points @ 10min Duration)---
Test | num_stripes | sync_window | nr_requests | sync_thresh | Speed
---------------------------------------------------------------------------
   1 |        1536 |         768 |         128 |         767 | 87.5 MB/s
   2 |        1536 |         768 |         128 |         384 | 135.5 MB/s
   3 |        1536 |         768 |           8 |         767 | 155.0 MB/s
   4 |        1536 |         768 |           8 |         384 | 153.4 MB/s
Fastest vals were nr_reqs=8 and sync_thresh=99% of sync_window at 155.0 MB/s
This nr_requests value will be used for the next test.

--- FULLY AUTOMATIC TEST PASS 1a (Rough - 13 Sample Points @ 5min Duration)---
Test | RAM | num_stripes | sync_window | nr_reqs | sync_thresh | Speed
-----------------------------------------------------------------------------
  1a |  16 |         768 |         384 |       8 |         383 | 149.3 MB/s
  1b |  16 |         768 |         384 |       8 |         192 | 148.2 MB/s
  2a |  19 |         896 |         448 |       8 |         447 | 149.8 MB/s
  2b |  19 |         896 |         448 |       8 |         224 | 148.9 MB/s
  3a |  22 |        1024 |         512 |       8 |         511 | 150.2 MB/s
  3b |  22 |        1024 |         512 |       8 |         256 | 148.7 MB/s
  4a |  25 |        1152 |         576 |       8 |         575 | 150.6 MB/s
  4b |  25 |        1152 |         576 |       8 |         288 | 148.9 MB/s
  5a |  28 |        1280 |         640 |       8 |         639 | 151.2 MB/s
  5b |  28 |        1280 |         640 |       8 |         320 | 149.8 MB/s
  6a |  31 |        1408 |         704 |       8 |         703 | 151.7 MB/s
  6b |  31 |        1408 |         704 |       8 |         352 | 150.0 MB/s
  7a |  33 |        1536 |         768 |       8 |         767 | 152.1 MB/s
  7b |  33 |        1536 |         768 |       8 |         384 | 149.7 MB/s
  8a |  36 |        1664 |         832 |       8 |         831 | 152.5 MB/s
  8b |  36 |        1664 |         832 |       8 |         416 | 150.6 MB/s
  9a |  39 |        1792 |         896 |       8 |         895 | 152.9 MB/s
  9b |  39 |        1792 |         896 |       8 |         448 | 150.9 MB/s
 10a |  42 |        1920 |         960 |       8 |         959 | 153.1 MB/s
 10b |  42 |        1920 |         960 |       8 |         480 | 151.0 MB/s
 11a |  45 |        2048 |        1024 |       8 |        1023 | 153.8 MB/s
 11b |  45 |        2048 |        1024 |       8 |         512 | 150.8 MB/s
 12a |  47 |        2176 |        1088 |       8 |        1087 | 154.3 MB/s
 12b |  47 |        2176 |        1088 |       8 |         544 | 151.2 MB/s
 13a |  50 |        2304 |        1152 |       8 |        1151 | 154.4 MB/s
 13b |  50 |        2304 |        1152 |       8 |         576 | 151.1 MB/s

--- FULLY AUTOMATIC TEST PASS 1c (Rough - 18 Sample Points @ 5min Duration)---
Test | RAM | num_stripes | sync_window | nr_reqs | sync_thresh | Speed
-----------------------------------------------------------------------------
  1a |  53 |        2432 |        1216 |       8 |        1215 | 154.9 MB/s
  1b |  53 |        2432 |        1216 |       8 |         608 | 152.1 MB/s
  2a |  56 |        2560 |        1280 |       8 |        1279 | 155.0 MB/s
  2b |  56 |        2560 |        1280 |       8 |         640 | 152.4 MB/s
  3a |  59 |        2688 |        1344 |       8 |        1343 | 155.0 MB/s
  3b |  59 |        2688 |        1344 |       8 |         672 | 152.4 MB/s
  4a |  62 |        2816 |        1408 |       8 |        1407 | 155.4 MB/s
  4b |  62 |        2816 |        1408 |       8 |         704 | 152.2 MB/s
  5a |  64 |        2944 |        1472 |       8 |        1471 | 155.4 MB/s
  5b |  64 |        2944 |        1472 |       8 |         736 | 155.4 MB/s
  6a |  67 |        3072 |        1536 |       8 |        1535 | 155.3 MB/s
  6b |  67 |        3072 |        1536 |       8 |         768 | 155.2 MB/s
  7a |  70 |        3200 |        1600 |       8 |        1599 | 155.4 MB/s
  7b |  70 |        3200 |        1600 |       8 |         800 | 155.6 MB/s
  8a |  73 |        3328 |        1664 |       8 |        1663 | 155.6 MB/s
  8b |  73 |        3328 |        1664 |       8 |         832 | 155.5 MB/s
  9a |  76 |        3456 |        1728 |       8 |        1727 | 154.9 MB/s
  9b |  76 |        3456 |        1728 |       8 |         864 | 155.2 MB/s
 10a |  79 |        3584 |        1792 |       8 |        1791 | 155.4 MB/s
 10b |  79 |        3584 |        1792 |       8 |         896 | 155.4 MB/s
 11a |  81 |        3712 |        1856 |       8 |        1855 | 155.5 MB/s
 11b |  81 |        3712 |        1856 |       8 |         928 | 155.3 MB/s
 12a |  84 |        3840 |        1920 |       8 |        1919 | 154.9 MB/s
 12b |  84 |        3840 |        1920 |       8 |         960 | 155.5 MB/s
 13a |  87 |        3968 |        1984 |       8 |        1983 | 155.3 MB/s
 13b |  87 |        3968 |        1984 |       8 |         992 | 155.4 MB/s
 14a |  90 |        4096 |        2048 |       8 |        2047 | 155.5 MB/s
 14b |  90 |        4096 |        2048 |       8 |        1024 | 155.2 MB/s
 15a |  93 |        4224 |        2112 |       8 |        2111 | 155.2 MB/s
 15b |  93 |        4224 |        2112 |       8 |        1056 | 155.4 MB/s
 16a |  95 |        4352 |        2176 |       8 |        2175 | 155.1 MB/s
 16b |  95 |        4352 |        2176 |       8 |        1088 | 155.6 MB/s
 17a |  98 |        4480 |        2240 |       8 |        2239 | 155.4 MB/s
 17b |  98 |        4480 |        2240 |       8 |        1120 | 155.4 MB/s
 18a | 101 |        4608 |        2304 |       8 |        2303 | 155.0 MB/s
 18b | 101 |        4608 |        2304 |       8 |        1152 | 155.5 MB/s

--- FULLY AUTOMATIC TEST PASS 1d (Rough - 18 Sample Points @ 5min Duration)---
Test | RAM | num_stripes | sync_window | nr_reqs | sync_thresh | Speed
-----------------------------------------------------------------------------
  1a | 104 |        4736 |        2368 |       8 |        2367 | 155.5 MB/s
  1b | 104 |        4736 |        2368 |       8 |        1184 | 155.5 MB/s
  2a | 107 |        4864 |        2432 |       8 |        2431 | 155.5 MB/s
  2b | 107 |        4864 |        2432 |       8 |        1216 | 155.2 MB/s
  3a | 110 |        4992 |        2496 |       8 |        2495 | 155.3 MB/s
  3b | 110 |        4992 |        2496 |       8 |        1248 | 155.3 MB/s
  4a | 112 |        5120 |        2560 |       8 |        2559 | 155.3 MB/s
  4b | 112 |        5120 |        2560 |       8 |        1280 | 155.3 MB/s
  5a | 115 |        5248 |        2624 |       8 |        2623 | 155.4 MB/s
  5b | 115 |        5248 |        2624 |       8 |        1312 | 155.1 MB/s
  6a | 118 |        5376 |        2688 |
8 | 2687 | 155.3 MB/s 6b | 118 | 5376 | 2688 | 8 | 1344 | 155.6 MB/s 7a | 121 | 5504 | 2752 | 8 | 2751 | 155.6 MB/s 7b | 121 | 5504 | 2752 | 8 | 1376 | 155.5 MB/s 8a | 124 | 5632 | 2816 | 8 | 2815 | 155.4 MB/s 8b | 124 | 5632 | 2816 | 8 | 1408 | 155.1 MB/s 9a | 127 | 5760 | 2880 | 8 | 2879 | 155.4 MB/s 9b | 127 | 5760 | 2880 | 8 | 1440 | 155.6 MB/s 10a | 129 | 5888 | 2944 | 8 | 2943 | 155.4 MB/s 10b | 129 | 5888 | 2944 | 8 | 1472 | 155.2 MB/s 11a | 132 | 6016 | 3008 | 8 | 3007 | 155.5 MB/s 11b | 132 | 6016 | 3008 | 8 | 1504 | 155.2 MB/s 12a | 135 | 6144 | 3072 | 8 | 3071 | 155.0 MB/s 12b | 135 | 6144 | 3072 | 8 | 1536 | 155.4 MB/s 13a | 138 | 6272 | 3136 | 8 | 3135 | 155.4 MB/s 13b | 138 | 6272 | 3136 | 8 | 1568 | 155.4 MB/s 14a | 141 | 6400 | 3200 | 8 | 3199 | 155.2 MB/s 14b | 141 | 6400 | 3200 | 8 | 1600 | 155.2 MB/s 15a | 143 | 6528 | 3264 | 8 | 3263 | 155.3 MB/s 15b | 143 | 6528 | 3264 | 8 | 1632 | 155.5 MB/s 16a | 146 | 6656 | 3328 | 8 | 3327 | 155.5 MB/s 16b | 146 | 6656 | 3328 | 8 | 1664 | 154.8 MB/s 17a | 149 | 6784 | 3392 | 8 | 3391 | 131.7 MB/s 17b | 149 | 6784 | 3392 | 8 | 1696 | 123.8 MB/s 18a | 152 | 6912 | 3456 | 8 | 3455 | 123.1 MB/s 18b | 152 | 6912 | 3456 | 8 | 1728 | 118.1 MB/s --- Targeting Fastest Result of md_sync_window 1600 bytes for Final Pass --- --- FULLY AUTOMATIC nr_requests TEST 2 (4 Sample Points @ 10min Duration)--- Test | num_stripes | sync_window | nr_requests | sync_thresh | Speed --------------------------------------------------------------------------- 1 | 3200 | 1600 | 128 | 1599 | 119.9 MB/s 2 | 3200 | 1600 | 128 | 800 | 128.0 MB/s 3 | 3200 | 1600 | 8 | 1599 | 130.6 MB/s 4 | 3200 | 1600 | 8 | 800 | 123.3 MB/s Fastest vals were nr_reqs=8 and sync_thresh=99% of sync_window at 130.6 MB/s This nr_requests value will be used for the next test. 
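For each md_sync_window value, the script tries two md_sync_thresh candidates, visible as the "a"/"b" row pairs above: one just below the window, and one at half the window. A minimal sketch of that pairing (the function name is mine, not from the script):

```python
def thresh_candidates(sync_window: int) -> tuple[int, int]:
    """Return the two md_sync_thresh values tested per window size:
    (a) sync_window - 1, and (b) sync_window / 2."""
    return sync_window - 1, sync_window // 2

# e.g. the rows above for md_sync_window=1600 test thresholds 1599 and 800
print(thresh_candidates(1600))  # → (1599, 800)
```

This matches the nr_requests test as well, where the winning combination is described as "sync_thresh=99% of sync_window".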
--- FULLY AUTOMATIC TEST PASS 2 (Fine - 33 Sample Points @ 5min Duration) ---
Test | RAM | num_stripes | sync_window | nr_reqs | sync_thresh | Speed
-----------------------------------------------------------------------------
 1a  |  64 | 2944 | 1472 | 8 | 1471 | 119.3 MB/s
 1b  |  64 | 2944 | 1472 | 8 |  736 | 110.9 MB/s
 2a  |  65 | 2960 | 1480 | 8 | 1479 | 122.9 MB/s
 2b  |  65 | 2960 | 1480 | 8 |  740 | 120.2 MB/s
 3a  |  65 | 2976 | 1488 | 8 | 1487 | 111.7 MB/s
 3b  |  65 | 2976 | 1488 | 8 |  744 | 133.5 MB/s
 4a  |  65 | 2992 | 1496 | 8 | 1495 | 124.7 MB/s
 4b  |  65 | 2992 | 1496 | 8 |  748 | 129.5 MB/s
 5a  |  66 | 3008 | 1504 | 8 | 1503 | 129.2 MB/s
 5b  |  66 | 3008 | 1504 | 8 |  752 | 126.1 MB/s
 6a  |  66 | 3024 | 1512 | 8 | 1511 | 114.0 MB/s
 6b  |  66 | 3024 | 1512 | 8 |  756 | 120.7 MB/s
 7a  |  67 | 3040 | 1520 | 8 | 1519 | 120.0 MB/s
 7b  |  67 | 3040 | 1520 | 8 |  760 | 121.8 MB/s
 8a  |  67 | 3056 | 1528 | 8 | 1527 | 143.8 MB/s
 8b  |  67 | 3056 | 1528 | 8 |  764 | 155.3 MB/s
 9a  |  67 | 3072 | 1536 | 8 | 1535 | 155.5 MB/s
 9b  |  67 | 3072 | 1536 | 8 |  768 | 155.4 MB/s
10a  |  68 | 3088 | 1544 | 8 | 1543 | 155.4 MB/s
10b  |  68 | 3088 | 1544 | 8 |  772 | 155.2 MB/s
11a  |  68 | 3104 | 1552 | 8 | 1551 | 155.3 MB/s
11b  |  68 | 3104 | 1552 | 8 |  776 | 155.4 MB/s
12a  |  68 | 3120 | 1560 | 8 | 1559 | 155.5 MB/s
12b  |  68 | 3120 | 1560 | 8 |  780 | 155.4 MB/s
13a  |  69 | 3136 | 1568 | 8 | 1567 | 155.2 MB/s
13b  |  69 | 3136 | 1568 | 8 |  784 | 155.5 MB/s
14a  |  69 | 3152 | 1576 | 8 | 1575 | 155.6 MB/s
14b  |  69 | 3152 | 1576 | 8 |  788 | 155.4 MB/s
15a  |  69 | 3168 | 1584 | 8 | 1583 | 155.3 MB/s
15b  |  69 | 3168 | 1584 | 8 |  792 | 155.4 MB/s
16a  |  70 | 3184 | 1592 | 8 | 1591 | 155.2 MB/s
16b  |  70 | 3184 | 1592 | 8 |  796 | 155.4 MB/s
17a  |  70 | 3200 | 1600 | 8 | 1599 | 155.4 MB/s
17b  |  70 | 3200 | 1600 | 8 |  800 | 155.3 MB/s
18a  |  70 | 3216 | 1608 | 8 | 1607 | 155.4 MB/s
18b  |  70 | 3216 | 1608 | 8 |  804 | 155.3 MB/s
19a  |  71 | 3232 | 1616 | 8 | 1615 | 155.2 MB/s
19b  |  71 | 3232 | 1616 | 8 |  808 | 155.3 MB/s
20a  |  71 | 3248 | 1624 | 8 | 1623 | 155.5 MB/s
20b  |  71 | 3248 | 1624 | 8 |  812 | 155.4 MB/s
21a  |  71 | 3264 | 1632 | 8 | 1631 | 155.4 MB/s
21b  |  71 | 3264 | 1632 | 8 |  816 | 155.3 MB/s
22a  |  72 | 3280 | 1640 | 8 | 1639 | 155.2 MB/s
22b  |  72 | 3280 | 1640 | 8 |  820 | 155.4 MB/s
23a  |  72 | 3296 | 1648 | 8 | 1647 | 155.5 MB/s
23b  |  72 | 3296 | 1648 | 8 |  824 | 155.4 MB/s
24a  |  73 | 3312 | 1656 | 8 | 1655 | 155.3 MB/s
24b  |  73 | 3312 | 1656 | 8 |  828 | 155.3 MB/s
25a  |  73 | 3328 | 1664 | 8 | 1663 | 155.3 MB/s
25b  |  73 | 3328 | 1664 | 8 |  832 | 155.3 MB/s
26a  |  73 | 3344 | 1672 | 8 | 1671 | 155.3 MB/s
26b  |  73 | 3344 | 1672 | 8 |  836 | 155.5 MB/s
27a  |  74 | 3360 | 1680 | 8 | 1679 | 155.4 MB/s
27b  |  74 | 3360 | 1680 | 8 |  840 | 155.4 MB/s
28a  |  74 | 3376 | 1688 | 8 | 1687 | 155.4 MB/s
28b  |  74 | 3376 | 1688 | 8 |  844 | 155.5 MB/s
29a  |  74 | 3392 | 1696 | 8 | 1695 | 155.6 MB/s
29b  |  74 | 3392 | 1696 | 8 |  848 | 155.4 MB/s
30a  |  75 | 3408 | 1704 | 8 | 1703 | 155.4 MB/s
30b  |  75 | 3408 | 1704 | 8 |  852 | 155.4 MB/s
31a  |  75 | 3424 | 1712 | 8 | 1711 | 155.3 MB/s
31b  |  75 | 3424 | 1712 | 8 |  856 | 155.4 MB/s
32a  |  75 | 3440 | 1720 | 8 | 1719 | 155.2 MB/s
32b  |  75 | 3440 | 1720 | 8 |  860 | 155.4 MB/s
33a  |  76 | 3456 | 1728 | 8 | 1727 | 155.3 MB/s
33b  |  76 | 3456 | 1728 | 8 |  864 | 155.5 MB/s

The results below do NOT include the Baseline test of current values.

The Fastest Sync Speed tested was md_sync_window=1576 at 155.6 MB/s
Tunable (md_num_stripes): 3152
Tunable (md_sync_window): 1576
Tunable (md_sync_thresh): 1575
Tunable (nr_requests): 8
This will consume 69 MB with md_num_stripes=3152, 2x md_sync_window.
This is 38 MB more than your current utilization of 31 MB.

The Thriftiest Sync Speed tested was md_sync_window=384 at 149.3 MB/s
Tunable (md_num_stripes): 768
Tunable (md_sync_window): 384
Tunable (md_sync_thresh): 383
Tunable (nr_requests): 8
This will consume 16 MB with md_num_stripes=768, 2x md_sync_window.
This is 15 MB less than your current utilization of 31 MB.
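The memory figures in the report scale close to linearly with md_num_stripes, so intermediate settings can be estimated by interpolating between two of the reported points. A quick sketch (the two anchor points are taken from the report's own output; the helper name is mine):

```python
def estimate_mb(num_stripes: int) -> float:
    """Linearly interpolate RAM use from two points in the report:
    768 stripes -> 16 MB and 3152 stripes -> 69 MB (this 5-device array)."""
    x0, y0 = 768, 16.0
    x1, y1 = 3152, 69.0
    slope = (y1 - y0) / (x1 - x0)  # ~22 KB per stripe on this array
    return y0 + (num_stripes - x0) * slope

# The recommended md_num_stripes=2432 lands on the report's 53 MB figure
print(round(estimate_mb(2432)))  # → 53
```

Per the report's own NOTE, the per-stripe cost grows with the number of array devices, so these anchor points only hold for this particular array.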
The Recommended Sync Speed is md_sync_window=1216 at 154.9 MB/s
Tunable (md_num_stripes): 2432
Tunable (md_sync_window): 1216
Tunable (md_sync_thresh): 1215
Tunable (nr_requests): 8
This will consume 53 MB with md_num_stripes=2432, 2x md_sync_window.
This is 22 MB more than your current utilization of 31 MB.

NOTE: Adding additional drives will increase memory consumption.

In unRAID, go to Settings > Disk Settings to set your chosen parameter values.

Completed: 15 Hrs 51 Min 46 Sec.

System Info: Tower
unRAID version 6.2.0-rc4
md_num_stripes=1408
md_sync_window=512
md_sync_thresh=192
nr_requests=128 (Global Setting)
sbNumDisks=5
CPU: Intel(R) Xeon(R) CPU E3-1240 v3 @ 3.40GHz
RAM: 16GiB System Memory

Outputting lshw information for Drives and Controllers:
<snip> because this is too long to submit!</snip>

Array Devices:
Disk0 sdl is a Parity drive named parity
Disk1 sdi is a Data drive named disk1
Disk2 sdj is a Data drive named disk2
Disk3 sdk is a Data drive named disk3

Outputting free low memory information...
           total       used       free     shared  buff/cache  available
Mem:    16464376     175904   15185584     484584     1102888   15376812
Low:    16464376    1278792   15185584
High:          0          0          0
Swap:          0          0          0

*** END OF REPORT ***

I find the "baseline" test confusing, because it says my current values should give a speed of 63.7 MB/s, yet my actual parity check history shows much better speeds:

2016-08-01, 08:43:21    8 hr, 43 min, 20 sec    127.4 MB/s    OK
2016-07-01, 08:32:08    8 hr, 32 min, 6 sec     130.2 MB/s    OK
2016-06-01, 08:58:34    8 hr, 58 min, 32 sec    123.8 MB/s    OK

This script also added a few "Canceled" entries to the parity check history; it would be nice if there were a way to prevent that:

2016-08-25, 11:30:29    5 min, 14 sec    Unavailable    Canceled
2016-08-24, 19:32:34    31 sec           Unavailable    Canceled

Any other thoughts on the results?
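In case it helps anyone digging into these numbers themselves, the result rows are easy to parse programmatically, e.g. to cross-check which md_sync_window the script picked as fastest. A rough sketch (the sample rows are copied from Pass 2 above; the helper name is mine):

```python
def fastest(rows: list[str]) -> tuple[int, float]:
    """From pipe-delimited result rows (Test | RAM | num_stripes |
    sync_window | nr_reqs | sync_thresh | Speed), return the
    (md_sync_window, speed) pair with the highest speed."""
    best = (0, 0.0)
    for row in rows:
        cols = [c.strip() for c in row.split("|")]
        window = int(cols[3])                  # sync_window column
        speed = float(cols[6].split()[0])      # strip the "MB/s" unit
        if speed > best[1]:
            best = (window, speed)
    return best

rows = [
    "14a |  69 | 3152 | 1576 | 8 | 1575 | 155.6 MB/s",
    "17a |  70 | 3200 | 1600 | 8 | 1599 | 155.4 MB/s",
    " 1a |  64 | 2944 | 1472 | 8 | 1471 | 119.3 MB/s",
]
print(fastest(rows))  # → (1576, 155.6)
```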